
Can AI Win the War Against Fake News?

It may have been the first bit of fake news in the history of the Internet: in 1984, someone posted on Usenet that the Soviet Union was joining the network.

Clients see a report for each piece the system considered, with scores assessing the likelihood that an item is fake news, carries malware, or contains anything else they’ve asked the system to look out for, such as nudity.

Breitbart stories were classified as “unreliable, right, political, bias,” while Cosmopolitan was considered “left.” The system could also tell when a Twitter account was using a brand’s logo but linking to sites that had no association with that brand.

AdVerif.ai not only found that a story on Natural News with the headline “Evidence points to Bitcoin being an NSA-engineered psyop to roll out one-world digital currency” was from a blacklisted site, but identified it as a fake news story popping up on other blacklisted sites without any references in legitimate news organizations.
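As a rough sketch of the kind of check described here (this is an illustration, not AdVerif.ai’s actual method, and the domain lists below are purely hypothetical): flag a story when its source is blacklisted and its headline circulates only on other blacklisted sites, with no corroboration from legitimate news organizations.

```python
# Illustrative sketch: blacklist check plus cross-referencing.
# Domain lists are hypothetical examples, not real classifications.

BLACKLIST = {"fakesite-one.example", "fakesite-two.example"}
REPUTABLE = {"wire-service.example", "broadsheet.example"}

def assess_story(source_domain, domains_carrying_headline):
    """Return a simple verdict based on where the story's headline appears."""
    source_blacklisted = source_domain in BLACKLIST
    corroborated = any(d in REPUTABLE for d in domains_carrying_headline)
    spread_on_blacklist = any(
        d in BLACKLIST and d != source_domain for d in domains_carrying_headline
    )
    if source_blacklisted and spread_on_blacklist and not corroborated:
        return "likely fake"
    if source_blacklisted and not corroborated:
        return "suspect"
    return "no flag"
```

A real system would weigh many more signals, but the core logic — a blacklisted source plus uncorroborated spread across other blacklisted sites — maps to the Natural News example above.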

Delip Rao, one of its organizers and the founder of Joostware, a company that creates machine-learning systems, said spotting fake news has so many facets that the challenge will be run in multiple stages.

The next challenge might take on images with overlay text (think memes, but with fake news), a format often promoted on social media precisely because it is harder for algorithms to break down and understand.
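Once the overlay text has been extracted (the OCR step itself is assumed here), one simple approach is to fuzzy-match it against a list of already-debunked claims. This is a minimal sketch, not any competitor’s actual pipeline, and the debunked-claims list is hypothetical:

```python
# Sketch: match OCR'd meme text against known debunked claims
# using fuzzy string similarity. Claims and threshold are illustrative.
import difflib

DEBUNKED_CLAIMS = [
    "bitcoin is an nsa-engineered psyop",
    "the soviet union is joining usenet",
]

def matches_debunked(overlay_text, threshold=0.6):
    """Return True if the overlay text closely resembles a debunked claim."""
    text = overlay_text.lower()
    scores = [
        difflib.SequenceMatcher(None, text, claim).ratio()
        for claim in DEBUNKED_CLAIMS
    ]
    return max(scores) >= threshold
```

Production systems would use semantic embeddings rather than character-level similarity, since memes paraphrase claims rather than copy them verbatim.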

“Like fact checkers on steroids.” Even if a system is developed that is effective in beating back the tide of fake content, though, it’s unlikely to be the end of the story.

What AdVerif.ai and others represent, then, looks less like the final word in the war on fake content than the opening round of an arms race, in which fake content creators get their own AI that can outmaneuver the “good” AIs (see “AI Could Set Us Back 100 Years When It Comes to How We Consume News”).

Fake news is still a problem. Is AI the solution?

Fake news is fueled in part by advances in technology — from bots that automatically fabricate headlines and entire stories to computer software that synthesizes Donald Trump’s voice and makes him read tweets to a new video editing app that makes it possible to create authentic-looking videos in which one person’s face is stitched onto another person’s body.

But technology, in the form of artificial intelligence, may also be the key to solving the fake news problem — which has rocked the American political system and led some to doubt the veracity even of reports from long-trusted media outlets.

These systems could also work with various fake news alert plugins available from Google’s web store, such as the browser extension This is Fake, which uses a red banner to flag debunked news stories on your Facebook newsfeed.

“All of the current systems for tracking fake news are manual, and this is something we need to change as the earlier you can highlight that a story is fake, the easier it is to prevent it going viral,” says Delip Rao, founder of the San Francisco-based AI research company Joostware and organizer of the Fake News Challenge, a competition set up within the AI community to foster development of tools that can reliably spot fake content.

“But because a lot of this content is recycled and repeated in different ways, we believe we can use AI to pinpoint trends which detect it as being fake.” In November, AdVerif.ai launched an AI-based algorithm that the company claims can identify fraudulent stories with an accuracy approaching 90 percent.
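Because recycled content reappears with small edits, one standard way to catch it — offered here as an illustration of the idea, not AdVerif.ai’s disclosed algorithm — is to compare word n-gram “shingles” between stories and measure their Jaccard similarity:

```python
# Sketch: near-duplicate detection via word 3-gram shingles
# and Jaccard similarity. The shingle size is illustrative.

def shingles(text, n=3):
    """Break text into a set of overlapping n-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of the shingle sets of two texts (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

A lightly edited repost of a story scores near 1.0, while unrelated text scores near 0.0, which lets a system cluster recycled variants of a known fake story and flag them early.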

“Statements like ‘Trump is the best U.S. president’ can’t easily be measured, so it’s very hard for AI to compute whether they’re true or false.” The latest breed of image and video manipulation tools further complicates the task facing AI researchers.

The Fake News Arms Race

Since the 2016 US presidential election, the term “fake news” has become part of our daily vernacular.

It has reached such epidemic proportions that French President Macron recently commented, “we must fight against the virus of fake news.” In fact, the World Economic Forum ranks this massive digital spread of misinformation among the top future global risks, along with failure to adapt to climate change, organized crime, and the food shortage crisis.

In my opinion, improving the ability of platforms and media to address the fake news phenomenon requires taking a more holistic approach, identifying areas in every part of the value chain — from creation all the way to circulation and discovery — that need to change.

In order to pinpoint where these opportunities and challenges lie, I’ve broken down fake news into four distinct stages: 1) creation, 2) publication, 3) circulation, and 4) discovery.

Although this is technologically a hard problem to solve given the nuances of human understanding, I view the startups operating solely in this stage as projects rather than ventures, as there is no clear business model and it’s very much a game of whack-a-mole that requires automation at scale.

As such, startups like Israel-based AdVerif.ai are looking to help ad agencies not only identify fake stories to ensure their brand doesn’t appear alongside them, but also defund the bad actors who are producing low-quality content.

In order to prevent the discovery of such content, some startups are taking a decentralized approach whereby an editorial board no longer determines what’s worth reading: instead, users, developers, and publishers run custom rankings to produce search results.

A decentralized platform gives publishers and application developers new business models beyond advertising and subscriptions, and potentially improves “organic” content discovery for audiences without relying on easily abused social media signals such as links, likes, and votes.
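The “custom rankings” idea above can be sketched simply: rather than one editorial ranking, each user or developer supplies their own weights over content signals. The signal names and weights below are purely illustrative:

```python
# Sketch of user-configurable ranking: each user supplies weights
# over content signals instead of accepting one editorial ordering.

def rank(items, weights):
    """Sort items by a weighted sum of their signals, highest first."""
    def score(item):
        return sum(weights.get(k, 0.0) * v for k, v in item["signals"].items())
    return sorted(items, key=score, reverse=True)
```

A user who distrusts engagement metrics could set the weight on likes to zero and rank purely on, say, source reputation — which is precisely the kind of signal diversity that makes a decentralized ranking harder to game than a single global feed.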

The companies I highlighted above were startups operating in single stages, but if you look at the market map, you’ll notice three companies operating more as full-stack solutions, whereby the output of one stage is fed as input into the next, creating a data feedback loop that serves as a competitive moat.

It’s clear that, as attackers continue to evolve their techniques and innovate, so too will defenders. As I think through why these companies are the most likely to succeed, a few core themes emerge. Similar to the way a computer virus spreads from host to host, misinformation spreads like wildfire from person to person.

By providing a free consumer service, a company that roots out and identifies fake news can gather data at little to no cost and build up a base of knowledge about where all harmful content resides.

In my view, companies that aim to create a more holistic, trustworthy news platform have a higher chance of succeeding than those looking to build the next crowdsourced fact-checking service or bias-detection algorithm.

Just as the availability of food metrics like calories and nutrients helps drive better decision-making and improves public health at a meta level, metrics that elevate the authority of accurate news at scale will drive clearer thinking about the many information sources we consume.

Companies race to build robots to tell humans what "truth" is

In 2017, misleading and maliciously false online content is so prolific that we humans have little hope of digging ourselves out of the mire. Instead, it looks ...