The Challenges of Misinformation in the Digital Age
In recent years, Western democracies have grappled with a concerning mix of cyberattacks, information operations, political and social subversion, exploitation of societal tensions, and malign financial influence. At the heart of these challenges lies the proliferation of misinformation and disinformation: false, inaccurate, or misleading information that causes public harm, whether spread unwittingly or deliberately crafted for political or financial gain.
The rapid digital transformation of our society has fundamentally altered how information, and misinformation, can be produced and disseminated. The emergence and accelerated adoption of new technologies, such as the Internet of Things (IoT), robotics, artificial intelligence (AI), 5G, and augmented/virtual reality, have empowered malicious actors to infiltrate government and corporate networks, steal sensitive data, compromise individual privacy, and distort democratic processes on an unprecedented scale.
The 2016 US presidential election served as a stark wake-up call, demonstrating how the strategic use of algorithms, automation, and AI can boost the efficiency and scope of disinformation campaigns, impacting the opinion formation and voting decisions of citizens. As the role of AI in our daily lives grows, algorithms will hold increasing sway, enabling adversaries to execute their nefarious objectives with greater sophistication and reach.
Automated Fact-Checking and the Limitations of AI-Powered Solutions
In response to the misinformation crisis, a growing number of fact-checking initiatives have emerged worldwide. According to the Duke Reporters’ Lab, there are now 194 active fact-checking projects in over 60 countries – a fourfold increase in the past five years.
Traditionally, fact-checking has relied on manual human verification of the accuracy of information. However, as the volume of misinformation continues to escalate, purely manual review cannot keep pace. To address this challenge, the first proposals for automating online fact-checking appeared a decade ago, and the 2016 US election further fueled research interest in AI-assisted fact-checking (AFC) tools.
AI-powered systems hold significant promise in tackling misinformation. Operating at machine speed and scale, AI can detect and remove illegal, dubious, and undesirable content far faster than human moderators. It has also proven effective in screening for and identifying fake bot accounts through techniques such as “bot-spotting” and “bot-labeling.”
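To make “bot-spotting” concrete, here is a minimal sketch of a feature-based bot classifier. The behavioral features (posting rate, follower-to-following ratio, account age) and the toy data are our own illustrative assumptions, not a description of any platform’s actual system.

```python
# Minimal sketch of feature-based "bot-spotting": classify accounts as bot or
# human from simple behavioral signals. Feature values and labels are invented
# toy data, not drawn from any real platform or study.
from sklearn.ensemble import RandomForestClassifier

# Each row: [posts_per_day, followers_to_following_ratio, account_age_days]
X = [
    [450, 0.01, 12],    # extreme posting rate, almost no followers, brand-new account
    [300, 0.05, 30],
    [520, 0.02, 7],
    [5,   1.20, 2100],  # typical human-like activity
    [12,  0.80, 900],
    [3,   2.50, 3200],
]
y = [1, 1, 1, 0, 0, 0]  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score a previously unseen account; with these toy features it resembles the
# bot examples, so the model is likely (though not guaranteed) to flag it.
print(clf.predict([[400, 0.03, 15]]))
```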
However, AI solutions come with their own limitations and unintended consequences. The risk of over-blocking lawful and accurate content, sometimes described as the “overinclusiveness” of AI, is a significant shortcoming. AI models remain prone to both false negatives, which let misinformation slip through, and false positives, which suppress legitimate, reliable content that is incorrectly labeled as disinformation.
This is because automated technologies remain limited in their ability to assess the nuanced accuracy of individual statements. Current AI systems can reliably identify only simple declarative claims, missing more complex expressions that require contextual or cultural cues, and they struggle with sarcasm, irony, and other subtle forms of misinformation.
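As an illustration of this claim-detection bottleneck, the sketch below trains a simple text classifier to separate checkable declarative claims from opinion and sarcasm. The training sentences and labels are invented for illustration; real pipelines use far larger corpora and still misfire on irony and context-dependent statements.

```python
# Minimal sketch of a claim-detection classifier: label sentences as checkable
# factual claims (1) or not (0). Training examples are invented for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

sentences = [
    "The unemployment rate fell to 3.5 percent last quarter.",   # checkable claim
    "Vaccines contain microchips that track your location.",     # checkable (false) claim
    "The new policy takes effect on January 1st.",               # checkable claim
    "I can't believe how great this weather is!",                # opinion
    "Sure, because politicians never exaggerate, right?",        # sarcasm
    "What a wonderful day to discuss the economy.",              # chit-chat
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# A sarcastic statement phrased like a factual claim: surface features alone
# give the model little to go on, which is exactly where such systems misfire.
print(model.predict(["Oh yes, the economy grew by a totally believable 900 percent."]))
```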
Linguistic barriers and country-specific cultural and political environments further compound the challenge of developing AI systems capable of making nuanced judgments about the veracity of online content. Moreover, some automated algorithms risk replicating and automating human biases and personality traits, producing outcomes that are less favorable to certain groups.
The complexity and opacity of AI systems pose additional limitations. The inherently “black box” nature of machine learning models, whose learned internal representations can exceed even their developers’ understanding, makes it difficult to explain how they arrived at a particular recommendation. While efforts are underway to develop more explainable AI systems, that work remains largely academic and cannot yet be widely deployed.
The Dual-Edged Sword of AI: Empowering Defenders and Adversaries
While advances in machine learning technologies will undoubtedly benefit those defending against malign information operations, they are also likely to allow adversaries to magnify the scale and effectiveness of their disinformation campaigns in the short term.
State actors, such as Russia and China, invest considerable resources in new technologies, and the proliferation of AI among authoritarian regimes poses a long-term risk to democratic principles. Non-state actors, like the Islamic State (ISIS), have also demonstrated the ability to use disinformation effectively for recruitment purposes, although they lack the resources to scale up their operations as significantly as state-backed actors.
Four threats stand out in particular:
- User Profiling and Segmentation: With advances in machine learning, adversaries will increasingly be able to identify individuals’ unique characteristics, beliefs, needs, and vulnerabilities, and deliver highly personalized content to target those who are most susceptible to influence.
- Hyper-Personalized Targeting: Combining user profiling with automated content generation tools, malicious actors can create and disseminate disinformation tailored to the personality traits and preferences of individual users, potentially swaying their political opinions and voting decisions.
- Deep Fakes: The proliferation of AI-powered tools for generating realistic but fabricated audio, video, and text content poses a growing threat, as it becomes increasingly difficult to distinguish between genuine and manipulated information.
- Humans “Out of the Loop”: As AI systems improve in their ability to understand human language, context, and reasoning, there is a risk of humans becoming increasingly “out of the loop” in the content moderation process, with AI-enabled bots potentially generating, persuading, and tailoring content for different audiences autonomously.
Empowering Users to Assess Content Accuracy
Rather than leaving content moderation solely in the hands of social media platforms or governments, a growing body of research has explored ways to empower individual users to determine the credibility of online content for themselves. This approach is rooted in the observation that users already engage in informal fact-checking within their social circles, seeking and sharing information about the reliability of the content they encounter.
One promising approach is to enable users to assess the accuracy of content they come across, which has been shown to reduce the likelihood of sharing misinformation. By priming users to keep accuracy in mind, this intervention encourages more critical thinking and better discernment of content veracity.
However, a key challenge with this approach is scalability. While it is feasible to ask users to assess the accuracy of content they are about to share, it is not practical to expect them to evaluate every piece of information they encounter in their social media feeds. Additionally, the assessments from a user’s limited social circle are unlikely to match the sheer volume of content they are exposed to.
Personalized AI: Amplifying Democratized Assessments
To address the scalability issue, this article explores the potential of a Personalized AI system that can learn from a user’s own assessments of content and predict how the user is likely to evaluate other information they encounter.
The vision for this Personalized AI is that it can serve as an aide, directing the user’s attention to content they are likely to find credible or flagging items they may assess as inaccurate. When the user is about to share a post, the AI can also provide a nudge, reminding the user to consider the accuracy of the information before disseminating it further.
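As a rough sketch of how such a Personalized AI could be built (an assumption-laden illustration, not the exact system evaluated below), the following code incrementally trains a per-user text classifier on the posts the user has already rated and then flags new posts the user would likely judge inaccurate. The vectorizer, model, and threshold are choices made purely for simplicity.

```python
# Sketch of a per-user "Personalized AI": learn online from the posts a user has
# already rated (accurate = 1, inaccurate = 0) and predict how they would rate
# new posts. Model choice, features, and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = [0, 1]

def learn_from_user(posts, user_labels):
    """Update the user's personal model each time they rate more posts."""
    model.partial_fit(vectorizer.transform(posts), user_labels, classes=classes)

def flag_for_user(posts, threshold=0.5):
    """Return the posts this user would likely rate as inaccurate."""
    probs = model.predict_proba(vectorizer.transform(posts))[:, 1]  # P(user rates accurate)
    return [post for post, p in zip(posts, probs) if p < threshold]

# Example: the user has rated a handful of posts so far (toy data).
learn_from_user(
    ["Officials confirm the bridge will reopen next week.",
     "Drinking bleach cures the virus, doctors admit."],
    [1, 0],
)
print(flag_for_user(["Miracle cure doctors don't want you to know about."]))
```

Because the model is updated incrementally, it can keep learning as the user continues to rate content, which is what allows a small set of personal assessments to scale across a large feed.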
Importantly, the Personalized AI’s predictions could also be displayed publicly, expanding the reach of the user’s limited set of assessments. If their warnings about misinformation are shown on similar content across the social network, it could help their social circle better navigate the online information landscape.
However, a key challenge with this approach is the potential for the Personalized AI’s predictions to influence the user’s own judgments, creating a self-fulfilling prophecy. If the AI mispredicts the user’s assessments, it could lead the user to believe content they would otherwise find inaccurate, or to disregard credible information.
Exploring the Influence of Personalized AI on User Judgments
To better understand the potential benefits and risks of a Personalized AI for identifying misinformation, we conducted a user study in which participants interacted with such a system in a simulated social media environment.
In the study, participants assessed the accuracy of a feed of COVID-19-related tweets. For each participant, we trained two separate Personalized AI models – one whose predictions were displayed to the user (Visible), and one whose predictions were kept hidden (Hidden). This allowed us to compare how often participants agreed with the AI’s predictions when they were shown versus when they were withheld.
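The agreement analysis can be illustrated with a hedged sketch (the data structures and numbers below are hypothetical, not our actual analysis code): for each tweet, compare the participant’s accuracy judgment with the prediction of the Visible and Hidden models and compute how often they match.

```python
# Sketch of the agreement analysis: for each tweet, compare the participant's
# judgment (1 = accurate, 0 = inaccurate) with each model's prediction and
# report the fraction of matches. The data below is hypothetical.
def agreement_rate(user_judgments, model_predictions):
    matches = sum(u == m for u, m in zip(user_judgments, model_predictions))
    return matches / len(user_judgments)

user    = [1, 0, 0, 1, 1, 0, 1, 0]   # participant's assessments, in feed order
visible = [1, 0, 1, 1, 1, 0, 1, 0]   # predictions the participant saw
hidden  = [1, 1, 0, 1, 0, 0, 1, 1]   # predictions withheld from the participant

print(f"Agreement with Visible model: {agreement_rate(user, visible):.2f}")
print(f"Agreement with Hidden model:  {agreement_rate(user, hidden):.2f}")
# Consistently higher agreement with the Visible model than with the Hidden one
# is the signature of the AI influencing the participant's judgments.
```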
Our results suggest that participants’ judgments on the accuracy of tweets were indeed influenced by seeing the predictions of their Personalized AI. This influence grew stronger over time as the AI learned more about the user’s assessment patterns. However, we found that this effect disappeared when participants were required to provide justifications for their assessments, suggesting that encouraging users to think critically about the content can mitigate the influence of the AI.
Interestingly, we did not find that participants’ confidence in their assessments was affected by their agreement with the AI’s predictions. This indicates that the Personalized AI may not lead to the kind of “overreliance” or “automation bias” that has been observed in other decision-support systems.
Design Implications and Future Directions
The findings from our user study point to several important design considerations and future research directions for the use of Personalized AI in the context of misinformation detection:
- Transparency and Explainability: To build trust and reduce the potential for undue influence, it is crucial to provide users with transparency about how the Personalized AI makes its predictions, as well as the ability to understand and scrutinize its reasoning.
- Encouraging Critical Thinking: Interventions that prompt users to justify their assessments, as observed in our study, can help mitigate the AI’s influence and encourage more active engagement in evaluating content accuracy.
- Scalable and Decentralized Approaches: Personalized AI can help scale the reach of user-generated assessments, but should be considered alongside other decentralized approaches that empower users to collectively identify and counter misinformation.
- Ethical Considerations: The use of Personalized AI for content moderation raises important questions about the appropriate roles and responsibilities of platforms, governments, and users in determining the “truth.” Careful consideration of privacy, freedom of expression, and the potential for abuse is essential.
- Continuous Improvement and Evaluation: As Personalized AI systems for misinformation detection continue to evolve, ongoing research and user testing will be crucial to understand their long-term impacts and ensure they serve the public interest.
Conclusion: Towards a Resilient Information Ecosystem
The fight against misinformation requires a multifaceted approach that leverages the strengths of both human and artificial intelligence. While AI-powered solutions hold significant promise in automating the detection and removal of false content, they must be deployed thoughtfully and with appropriate safeguards to avoid unintended consequences.
Personalized AI offers a novel way to amplify democratized assessments of online information, but its implementation requires careful consideration of the potential for user influence, as well as broader questions of content moderation, digital rights, and the preservation of a healthy information ecosystem.
Ultimately, the next wave of misinformation calls for a greater societal resilience – one that empowers individuals to think critically, encourages collaborative fact-checking, and fosters a shared responsibility between platforms, governments, and citizens to uphold the integrity of our digital public sphere. By working together to develop ethical and effective solutions, we can build a future where the truth prevails over the tide of disinformation.