OpenAI takes down Iranian cluster using ChatGPT to craft fake news https://dataconomy.ru/2024/08/19/openai-takes-down-iranian-fake-news/ Mon, 19 Aug 2024 07:45:23 +0000

OpenAI has identified and deactivated a cluster of ChatGPT accounts being used by an Iranian group to create fake news articles and social media comments aimed at influencing the 2024 U.S. elections.

This marks the first time OpenAI has detected and removed an operation specifically targeting the U.S. elections, underscoring the growing concern among experts that AI tools like ChatGPT could accelerate the spread of disinformation campaigns by nation-state adversaries.

The discovered operation

As explained in a blog post, OpenAI identified the accounts in question as part of a group known as Storm-2035, which has been linked to creating fake news websites and spreading their content on social media platforms to sway electoral outcomes. The Iranian operators used ChatGPT both to craft long-form fake news stories and to write comments for social media posts.

The topics covered by the fabricated content ranged from the Israel-Hamas war and Israel’s presence at the Olympic Games to the U.S. presidential election. OpenAI linked the accounts to a dozen X (formerly Twitter) accounts and one Instagram account, which have since been deactivated. Meta has also taken down the identified Instagram account, which was reportedly part of a 2021 Iranian campaign targeting users in Scotland.

The Iranian group, known as Storm-2035, aimed to influence the 2024 U.S. elections by generating fake news (Image credit)

In addition to social media activity, the operators created five websites posing as both progressive and conservative news outlets, sharing information about the elections. One example of AI-generated content spotted by OpenAI featured a headline that read, “Why Kamala Harris Picked Tim Walz as Her Running Mate: A Calculated Choice for Unity”.

The impact and future concerns

While most of the social media accounts sharing this AI-fueled disinformation did not gain significant traction, experts warn that the threat is far from over. Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, emphasized the importance of remaining vigilant but calm.


As the 2024 U.S. elections approach, it remains to be seen whether foreign influence operations will intensify their efforts online. In response to this development, OpenAI has stressed the need for continued innovation in detecting and counteracting disinformation campaigns.

The role of AI in disinformation

The use of AI-assisted tools like ChatGPT to create and disseminate disinformation raises significant concerns about the potential scale and impact of future influence operations. By leveraging advanced algorithms, nation-state adversaries may be able to generate more convincing content at an unprecedented pace, making it challenging for both platforms and users to identify and counteract these efforts.

As AI technology continues to evolve, it is essential for developers, policymakers, and social media platforms to work together to address the risks associated with disinformation campaigns and safeguard the integrity of democratic processes.


Featured image credit: Freepik

Truth Serum for the AI Age: Factiverse Nabs Funds to Fight Fake News and Hallucinations https://dataconomy.ru/2024/06/21/ai-factiverse-funds-fake-news-hallucinations/ Fri, 21 Jun 2024 13:00:01 +0000

Norwegian startup Factiverse, a self-proclaimed antidote to fake news and the growing epidemic of “AI hallucinations,” announced today it has secured €1 million in funding to expand its AI-powered fact-checking platform. The investment comes as the world grapples with the double-edged sword of artificial intelligence: immense potential marred by an occasional tendency to fabricate information with the casual confidence of a seasoned con artist.

Unlike many AI startups riding the ChatGPT wave, Factiverse has been quietly refining its patented technology since 2016. Its platform isn’t just another chatbot; it is designed to be a discerning judge, scrutinizing news headlines, verbal claims, articles, and AI-generated content (the latter via its API and other integrations) for accuracy, sniffing out those pesky “hallucinations” before they cause too much trouble.

AI hallucinations and errors have already proven costly. During a promotional video, Google Bard (now Gemini) made a factual error about the James Webb Space Telescope, leading to a significant drop in the stock value of Alphabet, Google’s parent company. CNET faced scrutiny for publishing articles written by an AI tool that plagiarized content en masse and contained multiple factual errors, raising questions about the reliability and ethical implications of AI-generated content. And ChatGPT keeps landing in hot water with the EU due to its inaccurate responses.

The company claims its technology outperforms even the latest AI models, such as ChatGPT and Mistral-7b, in detecting factual errors and sourcing reliable information, and does so across 100 languages. While these models have captured the public’s imagination with their impressive language capabilities, Factiverse argues that accuracy and trustworthiness should be paramount, especially in a world where misinformation can have dire consequences.

The company’s peer-reviewed research paper, penned by cofounder and CTO Vinay Setty, boldly asserts its superiority in both claim detection and sourcing credible information across multiple languages.

The funding round attracted a diverse group of investors, including Murshid Ali, founder of Huddlestock and Skyfri, Johann Olav Koss, investor and four-time Olympic gold medalist, Yasmin Namini, former C-level executive at the New York Times, Herfo, a privately owned investment firm in Norway and China, and Valide Invest, a Norwegian pre-seed investment house, as well as other angel investors from the USA and UK. 

The funding comes after Factiverse’s early traction and impressive results, which include winning both the Best AI Startup award by Nora AI and the Digital Trust Challenge by KI Park Germany in 2023. Both the funding and the awards underscore the growing urgency to address AI’s credibility problem.

“We are thrilled to announce this new round of funding,” exclaimed Maria Amelie, CEO and co-founder at Factiverse. “This investment will allow us to accelerate the development of our now crucial fact-checking solutions and empower even more businesses to leverage the power of AI with confidence.”

Amelie herself is no stranger to accolades. She was recently named one of the “50 Women in Tech” by the prestigious networks Abelia and Oda. The honor, for which she was chosen from 700 nominees after a rigorous selection process, is a clear testament to her leadership in the AI field.

With fresh funding in hand, Factiverse plans to add new features to its platform, including a political bias filter for sources and an “explainable AI” function that will break down the fact-checking process in plain language. It’s a move aimed at transparency and empowering users to make informed decisions in an increasingly AI-saturated world.

As the AI arms race heats up, Factiverse positions itself as the much-needed referee, ensuring that the quest for artificial intelligence doesn’t sacrifice accuracy on the altar of innovation.

In the words of Espen Egil Hansen, Chairman at Factiverse, “instances of companies facing legal repercussions for factual errors in their chatbots are already a reality. As AI becomes more deeply ingrained in our daily lives, ensuring the accuracy of AI models isn’t just about good business; it’s the bedrock upon which democratic values are built.” 

Factiverse’s mission, it seems, is nothing short of upholding the truth in the age of AI.
