The Facebook Exodus: Why I’m Leaving and Why Expert Verification Matters More Than Ever
https://dataconomy.ru/2025/01/09/facebook-exodus-expert-verification-matters/ (January 9, 2025)

Mark Zuckerberg just dropped a bombshell. Meta, the parent company of Facebook and Instagram, is abandoning its professional fact-checking program. Instead, they’re moving to a “community-driven” system, putting the onus on users to determine what’s true and what’s not.

Zuckerberg says it’s about fostering “free speech,” but it feels a lot more like abdicating responsibility, saving money, and bowing to political pressure.

Frankly, it’s the last straw. I’m done with Facebook.

I’ve been wrestling with this for a while now. The endless scroll, the monetization of my life, the performative outrage, the nagging feeling that I’m being manipulated by algorithms, the blatant and widely covered manipulation… it’s exhausting. But this latest move? It’s a dealbreaker.

Look, I get the appeal of crowdsourcing. The wisdom of the crowd, right? But when it comes to complex issues, “common sense” isn’t always enough. We need experts. We need evidence. We need nuanced analysis, not just knee-jerk reactions and confirmation bias.

Zuckerberg, in his infinite wisdom (read: with a healthy dose of self-preservation), has decided to throw his fact-checking partners under the bus. Perhaps those annoying truth-tellers were simply too good at their jobs, exposing uncomfortable truths and generally making life difficult for the Facebook overlords.

According to Zuck, these fact-checkers were “too politically biased” and, get this, “destroyed more trust than they created.” It’s a classic case of blaming the Messenger, wouldn’t you say?

Of course, the fact-checking organizations themselves aren’t taking this lying down. They’ve fired back, pointing out the obvious: they simply flagged potentially false content. What Facebook chose to do with that information was entirely up to them.

It’s a bit like a chef blaming the health inspector for a dirty kitchen. “Oh, those inspectors are just too picky! They’re ruining my reputation!” Never mind the fact that the kitchen’s a mess and the menu is probably giving people food poisoning.

Take climate change, for example. The science is clear, yet misinformation runs rampant on social media. Do we really want the veracity of climate data determined by a popularity contest? Or how about public health? Anti-vaccine sentiment is already a serious problem, fueled by conspiracy theories and misleading claims. Letting those narratives go unchecked, or letting them be voted “true” by coordinated groups of community members with an agenda, could have devastating consequences.

This isn’t about censorship. It’s about accountability. Social media platforms have a responsibility to ensure the information they disseminate is accurate and trustworthy. They’ve become our primary source of news and information, and with that power comes a responsibility to combat the spread of harmful falsehoods.

So where do we go from here? I, for one, am turning to platforms and tools that prioritize expert verification and rigorous fact-checking. Factiverse, for example, leverages a network of more than 350,000 human-performed fact-checks from over 100 trusted outlets worldwide to analyze information and provide context.

Factiverse’s approach gives me hope, and it gives me the tools to see which sources support a statement and which contest it, so I can stay informed and balanced. It’s a reminder that truth still matters, and that there are people out there dedicated to upholding it. In a world where facts are increasingly contested, we need reliable sources of information more than ever.

Maybe Zuckerberg’s gamble will pay off. Maybe the “wisdom of the crowd” will prevail. But I’m not sticking around to find out. I’m logging off Facebook and investing my time in platforms that value truth and accuracy. Because in the end, facts matter. And we all deserve better than to be drowning in a sea of misinformation.

This article was originally published on Hackernoon and is republished with permission.

Truth Serum for the AI Age: Factiverse Nabs Funds to Fight Fake News and Hallucinations
https://dataconomy.ru/2024/06/21/ai-factiverse-funds-fake-news-hallucinations/ (June 21, 2024)

Norwegian startup Factiverse, a self-proclaimed antidote to fake news and to the growing epidemic of “AI hallucinations,” announced today that it has secured €1 million in funding to expand its AI-powered fact-checking platform. The investment comes as the world grapples with the double-edged sword of artificial intelligence – immense potential marred by the occasional tendency to fabricate information with the casual confidence of a seasoned con artist.

Unlike many AI startups riding the ChatGPT wave, Factiverse has been quietly refining its patented technology since 2016. Its platform isn’t just another chatbot; it’s designed to be a discerning judge, scrutinizing news headlines, verbal claims, articles, and AI-generated content for accuracy. The latter is routed through its API and other integrations, sniffing out those pesky “hallucinations” before they cause too much trouble.
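To make that workflow concrete, here is a minimal, purely hypothetical sketch of routing AI-generated text through a fact-checking API before it is published. The endpoint URL, request fields, and response shape below are illustrative assumptions, not Factiverse’s actual API.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint and schema for illustration only; not a real fact-checking API.
FACT_CHECK_URL = "https://api.example-factchecker.com/v1/claims"

def review_generated_text(text: str, api_key: str) -> list:
    """Send AI-generated text to a fact-checking service and return the claims it flags,
    each (assumed) to carry a verdict plus supporting and contesting sources."""
    response = requests.post(
        FACT_CHECK_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"text": text, "language": "en"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("claims", [])

if __name__ == "__main__":
    draft = "The James Webb Space Telescope took the very first picture of an exoplanet."
    for claim in review_generated_text(draft, api_key="YOUR_KEY"):
        print(claim.get("verdict"), claim.get("sources"))
```

The point of such an integration is that flagged claims arrive with sources attached, so an editor or an automated pipeline can decide what to publish instead of trusting the model’s confident prose.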

AI hallucinations and errors have already proven costly. In a promotional video, Google Bard (now Gemini) made a factual error about the James Webb Space Telescope, leading to a significant drop in the stock value of Alphabet, Google’s parent company. CNET faced scrutiny for publishing articles written by an AI tool that plagiarized content en masse and contained multiple factual errors, raising questions about the reliability and ethical implications of AI-generated content. And ChatGPT keeps landing in hot water with the EU over its inaccurate responses.

The company claims its technology outperforms even the latest AI models, such as ChatGPT and Mistral-7B, at detecting factual errors and sourcing reliable information, and that it does so in 100 languages. While these models have captured the public’s imagination with their impressive language capabilities, Factiverse argues that accuracy and trustworthiness should be paramount, especially in a world where misinformation can have dire consequences.

The company’s peer-reviewed research paper, penned by co-founder and CTO Vinay Setty, asserts the technology’s superiority in both claim detection and the sourcing of credible information across multiple languages.

The funding round attracted a diverse group of investors, including Murshid Ali, founder of Huddlestock and Skyfri, Johann Olav Koss, investor and four-time Olympic gold medalist, Yasmin Namini, former C-level executive at the New York Times, Herfo, a privately owned investment firm in Norway and China, and Valide Invest, a Norwegian pre-seed investment house, as well as other angel investors from the USA and UK. 

The funding follows Factiverse’s early traction and impressive results, which include winning both the Best AI Startup award from Nora AI and the Digital Trust Challenge from KI Park Germany in 2023. Both the funding and the awards underscore the growing urgency of addressing AI’s credibility problem.

“We are thrilled to announce this new round of funding,” exclaimed Maria Amelie, CEO and co-founder at Factiverse. “This investment will allow us to accelerate the development of our now crucial fact-checking solutions and empower even more businesses to leverage the power of AI with confidence.”

Amelie herself is no stranger to accolades. She was recently named one of the “50 Women in Tech” by the prestigious network Abelia and Oda, selected from 700 nominees through a rigorous award process. The honor is a clear testament to her leadership in the AI field.

With fresh funding in hand, Factiverse plans to add new features to its platform, including a political bias filter for sources and an “explainable AI” function that will break down the fact-checking process in plain language. It’s a move aimed at transparency and empowering users to make informed decisions in an increasingly AI-saturated world.

As the AI arms race heats up, Factiverse positions itself as the much-needed referee, ensuring that the quest for artificial intelligence doesn’t sacrifice accuracy on the altar of innovation. 

In the words of Espen Egil Hansen, Chairman at Factiverse: “Instances of companies facing legal repercussions for factual errors in their chatbots are already a reality. As AI becomes more deeply ingrained in our daily lives, ensuring the accuracy of AI models isn’t just about good business; it’s the bedrock upon which democratic values are built.”

Factiverse’s mission, it seems, is nothing short of upholding the truth in the age of AI.
