Twitter experiment finds correcting misinformation makes problem worse

28 May 2021, 11:54

Technology Stock – Social Media. Picture: PA


Correcting misinformation on Twitter may only make the problem worse, according to a study.

Researchers offered polite corrections complete with links to solid evidence, in replies to flagrantly false tweets about politics.

But they found this had negative consequences, leading to even less accurate tweets and greater toxicity from those being corrected.

Lead author Dr Mohsen Mosleh, from the University of Exeter, said the findings were “not encouraging”.

“After a user was corrected they retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language,” he said.

The study found that being corrected by another user could bring social factors into play, like embarrassment (Dominic Lipinski/PA)

To conduct the experiment, the researchers identified 2,000 Twitter users, with a mix of political persuasions, who had tweeted out any one of 11 frequently repeated false news articles.

All of those articles had been debunked by the fact-checking website snopes.com.

Examples included the incorrect assertion that Ukraine donated more money than any other nation to the Clinton Foundation, and the false claim that Donald Trump, as a landlord, once evicted a disabled combat veteran for owning a therapy dog.

The research team then created a series of Twitter bot accounts, each of which had existed for at least three months, had gained at least 1,000 followers, and appeared to be a genuine human account.

Upon finding any of the 11 false claims being tweeted out, the bots would then send a reply along the lines of: “I’m uncertain about this article – it might not be true. I found a link on Snopes that says this headline is false.”

The study looked at people from a variety of political persuasions (Chris Ison/PA)

The reply would also link to the correct information.
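The bots' behaviour described above amounts to a simple matching step: check whether a tweeted link is one of the 11 debunked stories and, if so, compose the polite correction. A minimal sketch of that logic, assuming a hypothetical list of debunked URLs and a reply template based on the wording quoted in the article (the study's actual bot code is not reproduced here):

```python
# Hypothetical illustration of the correction bots' reply logic.
# The DEBUNKED set and REPLY_TEMPLATE are illustrative assumptions,
# not the researchers' actual data or code.

from typing import Optional

DEBUNKED = {
    "example.com/clinton-foundation-ukraine",
    "example.com/trump-evicts-veteran",
}

REPLY_TEMPLATE = (
    "I'm uncertain about this article - it might not be true. "
    "I found a link on Snopes that says this headline is false. {snopes_url}"
)


def correction_reply(tweeted_url: str, snopes_url: str) -> Optional[str]:
    """Return a polite correction if the tweeted link is a known false story,
    otherwise None (the bot stays silent)."""
    if tweeted_url in DEBUNKED:
        return REPLY_TEMPLATE.format(snopes_url=snopes_url)
    return None
```

In the actual experiment this check and reply would have been driven by Twitter's API; the sketch only shows the decision the bots made for each tweet they encountered.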

The researchers observed that the accuracy of the news sources users retweeted declined by roughly 1% in the 24 hours after being corrected.

Similarly, when the researchers evaluated more than 7,000 retweets containing links to political content made by those accounts in the same 24-hour window, they found an increase in the partisan lean of the content and in the “toxicity” of the language being used.

However, in all these areas – accuracy, partisan lean, and the language being used – there was a distinction between retweets and the primary tweets written by the Twitter users themselves: retweets degraded in quality, while original tweets from the accounts being studied did not.

One of the false claims examined in the study involved the former US president Donald Trump (Niall Carson/PA)

“Our observation that the effect only happens to retweets suggests that the effect is operating through the channel of attention,” said co-author Professor David Rand, from the Massachusetts Institute of Technology.

“We might have expected that being corrected would shift one’s attention to accuracy.

“But instead, it seems that getting publicly corrected by another user shifted people’s attention away from accuracy – perhaps to other social factors such as embarrassment.”

The effects were slightly larger when being corrected by an account that identified with the same political party as the user, suggesting that the negative response was not driven by animosity towards counter-partisans.

– The study, Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment, is published online in CHI ’21: Proceedings of the 2021 Conference on Human Factors in Computing Systems.

By Press Association
