Warnings about misinformation are now regularly posted on Twitter, Facebook, and other social media platforms, but not all of these cautions are created equal. Research from Rensselaer Polytechnic Institute shows that artificial intelligence can help people form accurate assessments of news – but only when a news story is first emerging.
Ineffective with stories on frequently covered topics
Researchers found that AI-driven interventions are generally ineffective when used to flag issues with stories on frequently covered topics about which people have established beliefs, such as climate change and vaccinations.
However, when a topic is so new that people have not had time to form an opinion, tailored AI-generated advice can lead readers to make better judgments regarding the legitimacy of news articles. The guidance is most effective when it provides reasoning that aligns with a person’s natural thought process, such as an evaluation of the accuracy of facts provided or the reliability of the news source.
“It’s not enough to build a good tool that will accurately determine if a news story is fake,” said Dorit Nevo, an associate professor in the Lally School of Management at Rensselaer and one of the lead authors of this paper.
“People actually have to believe the explanation and advice the AI gives them, which is why we are looking at tailoring the advice to specific heuristics. If we can get to people early on when the story breaks and use specific rationales to explain why the AI is making the judgment, they’re more likely to accept the advice.”
To stop the spread of misinformation, you need to start right away
The onset of the COVID-19 pandemic, which nearly coincided with the study, offered the researchers an opportunity to collect real-time data on a major emerging news event.
“Our work with coronavirus news shows that these findings have real-life implications for practitioners,” Nevo said. “If you want to stop fake news, start right away with messaging that is reasoned and direct. Don’t wait for opinions to form.”
from Help Net Security https://ift.tt/3cdaqsS