‘Algorithmic warnings’: Study finds AI is a better lie detector than humans. It could be huge for Facebook, Google, TikTok misinfo

‘It’s impossible to do it with everything at a human level.’

Claire Goforth

An influencer flips their camera around and spouts off for a few minutes. Seconds later, they upload the video to the internet, where it swiftly starts racking up views. Reactions, reposts, and comments accelerate its spread. Within hours, it has crossed platforms, become a trending topic, and gotten the meme treatment. By tomorrow, millions will have seen it and accepted it as true.

There’s just one problem: Everything the influencer said is a lie.

The history of the internet is rich with stories like this. Post goes viral, people believe it, then after the fact it turns out to be mis- or disinformation. (The difference lies in the intent. Disinformation is intentionally misleading; misinformation may simply be a mistake.) There’s no way to stop people from lying online, nor an automated process to detect these lies—at least not yet.

A new study out of the University of California San Diego has found that artificial intelligence (AI) could provide a solution. Researchers there found that machine learning was nearly 50% better than humans at detecting lies.

To create the AI, the researchers had the machine consume and analyze episodes of the British game show Golden Balls. On the show, contestants receive four balls containing cash values ranging from £10 to £75,000. They open two in view of the other players and keep two hidden, then tell the group how much the hidden balls are worth. They can choose to lie. The group then debates whether anyone is lying.

Researchers fed the machine a set of episodes, then had it analyze other episodes to determine whether contestants were lying. They also had 600 people watch the show and judge whether they thought contestants were lying. The AI predicted lies correctly 74% of the time; humans did so only 51% to 53% of the time.

Methodology

Marta Serra-Garcia, the study’s lead author and associate professor of behavioral economics at UC San Diego’s Rady School of Management, told the Daily Dot on Tuesday that the AI drew on a variety of signals, including output from facial emotion recognition software, the text of contestants’ statements, and voice analysis, to make its determinations.

“The statistical software is going to use all of that information to make predictions, or to kind of learn and make predictions,” Serra-Garcia said.
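To make that concrete, the following is a minimal, hypothetical sketch of such a multimodal pipeline: per-statement feature vectors from facial emotion recognition, text, and voice analysis are concatenated and fed to a simple classifier. The feature dimensions, the synthetic data, and the choice of logistic regression are illustrative assumptions, not the study’s actual implementation.

```python
# A minimal, hypothetical sketch (not the study's code): combine per-statement
# features from three modalities and train a simple lie/no-lie classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clips = 200  # one row per contestant statement (synthetic stand-in data)

face_feats = rng.normal(size=(n_clips, 7))    # e.g., per-clip emotion scores (assumed dimension)
text_feats = rng.normal(size=(n_clips, 50))   # e.g., an embedding of what the contestant said
voice_feats = rng.normal(size=(n_clips, 12))  # e.g., pitch and intensity statistics
is_lie = rng.integers(0, 2, size=n_clips)     # label from the episode's eventual reveal

# Concatenate all modalities into one feature vector per statement.
X = np.hstack([face_feats, text_feats, voice_feats])

X_train, X_test, y_train, y_test = train_test_split(X, is_lie, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In the researchers’ setting, ground-truth labels are available because each episode eventually reveals whether a contestant told the truth; a production system for social media posts would likely need labeled examples from fact-checkers instead.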

She said this type of AI has potential as a content moderation tool for social media platforms. TikTok, YouTube parent company Google, and Meta all use AI to moderate content, but mostly to determine whether content violates the platform’s policies, such as those prohibiting nudity or violence, not whether it’s true.

The problem with the social media landscape

Serra-Garcia characterized platforms’ current moderation of false information as largely reactive, in that platforms rely heavily on users to flag potential falsehoods. This was less a criticism than a recognition of the herculean task of analyzing the millions of posts uploaded every day.

“They have content checkers, but it’s impossible to do it with everything at a human level,” she said.

A key problem with this approach is that by the time something gets flagged as untrue, the post may have already gone viral. Then it’s extremely difficult to effectively correct the record, as those who have already decided something is true are arguably less likely to change their minds.

The study reinforced this. Researchers had two groups of people watch the same episodes of Golden Balls. One group was told before watching that the AI had flagged potential lies; the other was told only after watching.

They found that people were less likely to believe a Golden Balls contestant was lying if they received the warning after watching the episode rather than before.

So if platforms used AI to flag potential falsehoods before users engage with them, they could slow the spread of misinformation.

Applying the findings

Co-author Uri Gneezy, professor of behavioral economics at the Rady School, said in a release, “Our study suggests that these online platforms could improve the effectiveness of their flagging systems by presenting algorithmic warnings before users engage with the content, rather than after, which could lead to misinformation spreading less rapidly.”
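
As a purely illustrative example of what a pre-engagement warning could look like in code, the sketch below scores a post and attaches a warning label before it is served to viewers. The Post structure, the lie_probability placeholder, and the 0.7 threshold are hypothetical and not drawn from the study or any platform’s systems.

```python
# Purely illustrative: attach an "algorithmic warning" to a post *before*
# viewers engage with it, mirroring the study's pre-exposure condition.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    text: str
    warning: Optional[str] = None

def lie_probability(post: Post) -> float:
    """Stand-in for a trained classifier's score; returns a dummy value here."""
    return 0.9 if "miracle cure" in post.text.lower() else 0.1

def serve(post: Post, threshold: float = 0.7) -> Post:
    # Label the post up front, so the warning is visible before engagement
    # rather than added after the post has already spread.
    if lie_probability(post) >= threshold:
        post.warning = "Automated systems flagged this post as potentially false."
    return post

print(serve(Post(post_id="1", text="This miracle cure works overnight!")).warning)
```

The design choice the study points to is timing: the same warning appears to be more effective when it arrives before exposure than after.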

The paper will be published in Management Science.

Serra-Garcia is now researching how incentives to generate content change what people post.

“I’m looking at freelancers generating content, varying their incentives, and seeing how that affects people who are on the viewer side, how they click, what they pay attention to, how much they know at the end of the day,” she said.

“In the future, we need to think of ways we make the truth travel faster,” Serra-Garcia added.

