AI and misinformation
Feature

Professor Harith Alani, director of the Knowledge Management Institute at the Open University, explains how AI can be used for both good and bad.

Social media still runs on a fuel of controversy. That means people are actively rewarded for sharing engaging content regardless of the facts, contaminating beliefs and attitudes. Work at the Open University during the Covid-19 period, for example, showed that false information about the disease reached three times more people than the corrected facts.

It might be casual, unintentional posting or it might be intentional harm. Either way, when misinformation or disinformation relates to issues of health, politics, the environment or the economy, it is a kind of pollution that threatens the functioning of governments and societies themselves. A shared belief in the existence of a common good and common truths, after all, has been the basis of democracy and its freedoms.
    
Obvious breaches of the law, such as posts involving hate crime or child pornography, now fall under the Online Safety Act. But misinformation can be subjective, subtle and complex. People share what they want to share, what they find eye-catching and alarming, and are rewarded with shares and attention for being controversial, no matter how inaccurate or harmful the claim: an unstoppable flood of information across networks, into people's homes, conversations and thinking around the world.
    
Here is a hugely important and specific way that AI can be used for social good: taking on the vast job of protecting media content of all kinds from obvious types of pollution and restoring trust. AI is going to be an increasingly important tool for analysing what is happening around misinformation, and for experimenting with ways of preventing its spread and repairing its damage.
    
There have been many lab studies into the workings of misinformation, involving simulations with controlled groups and their responses. The problem is that in the real world the dynamics are very different, especially when misinformation is being shared deliberately. There is also the need to look at the actual impact of corrections: it cannot be assumed that sharing accurate information will resolve anything in itself.
    
The OU’s Knowledge Management Institute is currently looking into the mechanics and impact of corrections. Research into Covid-19 and other misinformation spread via Twitter/X examined 5,000 posts that had been corrected. Of those, only around 10 per cent of posters reacted to the correction in any way; the majority appeared to ignore it, and only around 10 per cent of those who did react did so positively, roughly 50 out of the 5,000.
    
Given the nature of digital media and how it is used, misinformation cannot be eliminated, but AI and machine learning can be used to build a new environment, improving awareness and responses in a more timely and effective way. In this way the system can be turned on its head, so that truth matters and is recognised positively, creating a new kind of fuel for social media and Internet content generally and pushing engagement in the right direction.
    
AI can work with the mass of historic data to help identify what is likely to constitute misinformation, picking up on previously debunked claims and recurring templates. The technology can automatically assess and monitor the credibility of online accounts. It can also be used to predict the use of misinformation before it happens, based on past events that show when and why trends for misinformation occur, such as a pandemic or a conflict, allowing more advanced algorithms and counter-strategies to be prepared. AI is also important for tracking the spread and effectiveness of fact-checking and corrections. Timing is critical: evidence suggests that corrections need to be circulating before a tipping point of false claims has taken hold.
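As a rough illustration of one of these techniques, the sketch below flags a post when it closely resembles a previously debunked claim, using simple text similarity. The claims, threshold and matching method are placeholders chosen for illustration, not the approach of any particular platform or of the OU's own tools.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative placeholders: a tiny store of previously debunked claims.
debunked_claims = [
    "5G towers spread the virus",
    "Drinking hot water cures the infection",
    "The vaccine alters human DNA",
]

def flag_if_previously_debunked(post: str, threshold: float = 0.6) -> bool:
    """Return True when the post is lexically close to a known debunked claim."""
    vectorizer = TfidfVectorizer().fit(debunked_claims + [post])
    claim_vectors = vectorizer.transform(debunked_claims)
    post_vector = vectorizer.transform([post])
    highest_similarity = cosine_similarity(post_vector, claim_vectors).max()
    return bool(highest_similarity >= threshold)

# Example: checking a reworded version of a claim already in the store.
print(flag_if_previously_debunked("Hot water is a cure for the infection"))
```

A production system would of course use far richer semantic matching and a much larger claims database, but the underlying idea of comparing new content against what has already been debunked is the same.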
    
Generic fact-checked responses can be more or less effective depending on the audience. More needs to be done, using AI, to identify the nature of the recipients of corrective messages and personalise material. Are they influencers, conspiracy theorists, extremists or just accidental misinformers?
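A hedged sketch of what that audience identification might look like in its simplest form follows. The categories echo those above, but the behavioural features and thresholds are hypothetical stand-ins for what a real system would learn from data.

```python
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    share_of_posts_flagged: float  # fraction of past posts matching debunked claims
    posts_per_day: float

def audience_segment(account: Account) -> str:
    """Crude labels used only to choose the tone of a corrective message."""
    if account.share_of_posts_flagged > 0.5 and account.posts_per_day > 20:
        return "persistent misinformer"   # likely deliberate; corrections rarely help
    if account.followers > 100_000:
        return "influencer"               # prioritise fast, well-sourced corrections
    if account.share_of_posts_flagged < 0.05:
        return "accidental misinformer"   # a polite, factual correction is usually enough
    return "unclassified"

# Example: a low-activity account that shared one false claim.
print(audience_segment(Account(followers=250, share_of_posts_flagged=0.02, posts_per_day=1.5)))
```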

Bot-like programs can be used to trial different approaches and monitor their impact, tracking audience reactions to corrections and automatically tuning and personalising interventions to maximise visibility and effect as they learn more about people's characters and behaviours.
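One way such a bot could tune its interventions is with a simple explore-and-exploit loop, as in the sketch below. The message styles and the reward signal are assumptions made for illustration; they are not drawn from the research described here.

```python
import random

# Hypothetical correction styles a bot could trial.
styles = ["link to fact-check", "polite question", "expert quote"]
counts = {s: 0 for s in styles}    # how often each style has been tried
rewards = {s: 0.0 for s in styles} # how often it led to a positive reaction

def choose_style(epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice: mostly exploit the best style, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(styles)
    # Untried styles get priority; otherwise pick the highest average reward.
    return max(styles, key=lambda s: rewards[s] / counts[s] if counts[s] else float("inf"))

def record_outcome(style: str, positive_reaction: bool) -> None:
    """Update running statistics after observing the recipient's reaction."""
    counts[style] += 1
    rewards[style] += 1.0 if positive_reaction else 0.0
```

Over many interventions the loop gradually favours whichever style of correction audiences respond to best, which is the essence of the tuning the article describes.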

Certainly the major social media platforms are doing more to verify and report misinformation.
    
During the pandemic, Meta worked with fact-checkers from more than 80 organisations, claiming to have removed more than 3,000 accounts, pages and groups and 20 million pieces of content. Twitter/X published policies setting out its approach to reducing misinformation.

But there are still questions over whether policies are actually being enforced and to what extent. Businesses want to protect their operations from criticism and restrictions while minimising the costs involved. Twitter/X has employed ‘curators’ to provide contextual notes on trending topics that might be controversial, around the war in Ukraine for example, when curators are believed to have removed 100,000 accounts for breaking rules. There is evidence this has had a positive effect in limiting the spread of false claims. The purchase of Twitter by Elon Musk, however, is understood to have meant a reduction in the use of moderation.
    
There can be an element of self-regulation. When generative AI tools such as ChatGPT and Bard first came out, they would generate endless streams of false claims if prompted to do so, but more recent updates have improved the situation. Systems now refuse to generate what they detect to be potentially harmful or misinforming content. However, it is unclear what steps have been taken and to what extent they cover different topics and claims.
    
Ultimately, though, self-regulation by social media platforms needs to be combined with legal frameworks. And those need to cover every social media player, not just the obvious targets but also fringe platforms as they emerge: blocking illegal content, demoting false information and promoting fact-checked and known truths.

Monitoring and managing a healthy global communications space is a mind-boggling task, but positive use of AI makes it workable.