By Dana Yamashita

Fake news, defined as false or misleading information, has been around for centuries. Rameses the Great, Mark Antony, and Cleopatra were all surrounded by fake news, spread by word of mouth. Once the printing press was invented, fake news could circulate far more widely and quickly. Benjamin Franklin crafted fake news of his own, and Mark Twain’s famous quip, “The report of my death was an exaggeration,” was a direct response to false information.

Today, we are bombarded with news in print, on social media, and over the radio airwaves, both real and fake. Such news often comes with warnings about misinformation, but the cautions are not all created equal. New research from Rensselaer Polytechnic Institute indicates that artificial intelligence (AI) can help readers form accurate assessments of the news, but only when a story is first emerging.

The interdisciplinary research team found that AI-driven interventions are generally ineffective when used to flag issues with stories on frequently covered topics about which people have already established beliefs, such as climate change and vaccinations.

However, when a topic is so new that people have not yet had time to form an opinion, tailored AI-generated advice can lead readers to make better judgments about the legitimacy of news articles. The guidance is most effective when it aligns with a person’s natural thought process, such as evaluating the accuracy of the facts presented or the reliability of the news source.

“It’s not enough to build a good tool that will accurately determine if a news story is fake,” said Dorit Nevo, associate professor in the Lally School of Management and one of the lead authors of the paper “Tailoring Heuristics and Timing AI Interventions for Supporting News Veracity Assessments,” which was recently published in Computers in Human Behavior Reports.

“People actually have to believe the explanation and advice the AI gives them, which is why we are looking at tailoring the advice to specific heuristics. If we can get to people early on when the story breaks and use specific rationales to explain why the AI is making the judgment, they’re more likely to accept the advice,” she said. “Our work with coronavirus news shows that these findings have real-life implications for practitioners. If you want to stop fake news, start right away with messaging that is reasoned and direct. Don’t wait for opinions to form.”

In addition to Nevo, the team at Rensselaer included Lydia Manikonda, assistant professor in the Lally School of Management; Sibel Adali, professor and associate dean in the School of Science; and Clare Arrington, a doctoral student in computer science. The other lead author was Benjamin D. Horne, a Rensselaer alumnus and assistant professor at the University of Tennessee.