Yves here. This piece is intriguing. Using Twitter as the basis for its investigation, it finds that “conservatives” were more persistent in news-sharing, countering the platform’s attempts to dampen propagation. If you read the techniques Twitter deployed to try to prevent the spread of “disinformation,” you'll see they rely on nudge theory. A definition from Wikipedia:
Nudge theory is a concept in behavioral economics, decision making, behavioral policy, social psychology, consumer behavior, and related behavioral sciences that proposes adaptive designs of the decision environment (choice architecture) as ways to influence the behavior and decision-making of groups or individuals. Nudging contrasts with other ways to achieve compliance, such as education, legislation or enforcement.
Nowhere does the article consider that these measures amount to a soft form of censorship. All is apparently fair in trying to counter “misinformation”.
By Daniel Ershov, Assistant Professor at the UCL School of Management, University College London, and Associate Researcher at the University of Toulouse; and Juan S. Morales, Associate Professor of Economics, Wilfrid Laurier University. Originally published at VoxEU
Prior to the 2020 US presidential election, Twitter modified its user interface for sharing social media posts, hoping to slow the spread of misinformation. Using extensive data on tweets by US media outlets, this column explores how the change to its platform affected the diffusion of news on Twitter. Though the policy significantly reduced news sharing overall, the reductions varied by ideology: sharing of content fell considerably more for left-wing outlets than for right-wing outlets, as conservatives proved less responsive to the intervention.
Social media provides a crucial access point to information on a variety of important topics, including politics and health (Aridor et al. 2024). While it reduces the cost of consumer information searches, social media’s potential for amplification and dissemination can also contribute to the spread of misinformation and disinformation, hate speech, and out-group animosity (Giaccherini et al. 2024, Vosoughi et al. 2018, Muller and Schwartz 2023, Allcott and Gentzkow 2017); increase political polarisation (Levy 2021); and promote the rise of extreme politics (Zhuravskaya et al. 2020). Reducing the diffusion and influence of harmful content is a crucial policy concern for governments around the world and a key aspect of platform governance. Since at least the 2016 presidential election, the US government has tasked platforms with reducing the spread of false or misleading information ahead of elections (Ortutay and Klepper 2020).
Top-Down Versus Bottom-Up Regulation
Important questions about how to achieve these goals remain unanswered. Broadly speaking, platforms can take one of two approaches to this issue: (1) they can pursue ‘top-down’ regulation by manipulating user access to or visibility of different types of information; or (2) they can pursue ‘bottom-up’, user-centric regulation by modifying features of the user interface to incentivise users to stop sharing harmful content.
The benefit of a top-down approach is that it gives platforms more control. Ahead of the 2020 elections, Meta started changing user feeds so that users see less of certain types of extreme political content (Bell 2020). Before the 2022 midterm US elections, Meta fully implemented new default settings for user newsfeeds that include less political content (Stepanov 2021). While effective, these policy approaches raise concerns about the extent to which platforms have the power to directly manipulate information flows and potentially bias users for or against certain political viewpoints. Furthermore, top-down interventions that lack transparency risk instigating user backlash and a loss of trust in the platforms.
As an alternative, a bottom-up approach to reducing the spread of misinformation involves giving up some control in favour of encouraging users to change their own behaviour (Guriev et al. 2023). For example, platforms can provide fact-checking services to political posts, or warning labels for sensitive or controversial content (Ortutay 2021). In a series of experiments online, Guriev et al. (2023) show that warning labels and fact checking on platforms reduce misinformation sharing by users. However, the effectiveness of this approach can be limited, and it requires substantial platform investments in fact-checking capabilities.
Twitter’s User Interface Change in 2020
Another frequently proposed bottom-up approach is for platforms to slow the flow of information, and especially misinformation, by encouraging users to carefully consider the content they are sharing. In October 2020, a few weeks before the US presidential election, Twitter changed the functionality of its ‘retweet’ button (Hatmaker 2020). The modified button prompted users to use a ‘quote tweet’ instead when sharing posts. The hope was that this change would encourage users to reflect on the content they were sharing and slow the spread of misinformation.
In a recent paper (Ershov and Morales 2024), we investigate how Twitter’s change to its user interface affected the diffusion of news on the platform. Many news outlets and political organisations use Twitter to promote and publicise their content, so this change was particularly salient for potentially reducing consumer access to misinformation. We collected Twitter data for popular US news outlets and examined what happened to their retweets just after the change was implemented. Our study reveals that this simple tweak to the retweet button had significant effects on news diffusion: on average, retweets for news media outlets fell by over 15% (see Figure 1).
Figure 1 News sharing and Twitter’s user-interface change
Perhaps more interestingly, we then investigate whether the change affected all news media outlets to the same extent. In particular, we first examine whether ‘low-factualness’ media outlets (as classified by third-party organisations), where misinformation is more common, were affected more by the change as intended by Twitter. Our analysis reveals that this was not the case: the effect on these outlets was not larger than for outlets of better journalistic quality; if anything, the effects were smaller. Furthermore, a similar comparison reveals that left-wing news outlets (again, classified by a third party) were affected significantly more than right-wing outlets. The average drop in retweets for liberal outlets was around 20%, whereas the drop for conservative outlets was only 5% (Figure 2). These results suggest that Twitter’s policy failed, not only because it did not reduce the spread of misinformation relative to factual news, but also because it slowed the spread of political news of one ideology relative to another, which may amplify political divisions.
Figure 2 Heterogeneity by outlet factualness and slant
We investigate the mechanism behind these effects and discount a battery of potential alternative explanations, including various media outlet characteristics, criticism of ‘big tech’ by the outlets, the heterogeneous presence of bots, and variation in tweet content such as its sentiment or predicted virality. We conclude that the likely reason for the biased impact of the policy was simply that conservative news-sharing users were less responsive to Twitter’s nudge. Using an additional dataset for news-sharing individual users on Twitter, we observe that following the change, conservative users altered their behaviour significantly less than liberal users – that is, conservatives appeared more likely to ignore Twitter’s prompt and continue to share content as before. As additional evidence for this mechanism, we show similar results in an apolitical setting: tweets by NCAA football teams for colleges located in predominantly Republican counties were affected less by the user interface change relative to tweets by teams from Democratic counties.
Finally, using web traffic data, we find that Twitter’s policy affected visits to the websites of these news outlets. After the retweet button change, traffic from Twitter to the media outlets’ own websites fell, and it did so disproportionately for liberal news media outlets. These off-platform spillover effects confirm the importance of social media platforms to overall information diffusion, and highlight the potential risks that platform policies pose to news consumption and public opinion.
Conclusion
Bottom-up policy changes to social media platforms must take into account the fact that the effects of new platform designs may be very different across different types of users, and that this may lead to unintended consequences. Social scientists, social media platforms, and policymakers should collaborate in dissecting and understanding these nuanced effects, with the goal of improving platform design to foster well-informed and balanced conversations conducive to healthy democracies.
See original post for references
“…we first examine whether ‘low-factualness’ media outlets (as classified by third-party organisations)”
On exactly which event – RussiaGate, WMD, BLM, Syria, SARS-CoV-2/PASC, Ukraine, Gaza – have US/G7 media and blog aggregators not lied, cherry-picked, and obfuscated, almost verbatim across sources (with third-party fact-checkers among the most egregious, blatant, and obsequious corporate-controlled offenders)?
The elites of course dream of a kind of “soft totalitarianism” where they can control the herd via mind control. Huxley, perhaps more realistically than Orwell, thought this would only work via lots of drugs. In the beginning the web arose precisely to counter the “misinformation” coming out of big media with its self-interested ownership. In earlier periods alternative newspapers served this need, and back at the US founding, pamphlets. Sophie Scholl used a mimeograph machine, for which she was guillotined by the Nazis.
Keeping the populace illiterate and passive may have once worked, more or less, but it is much more difficult in a sophisticated technological age where the lowers need enough education to deal with all that technology. Perhaps our would-be aristocrats need to go back to jousting and eating with their fingers, since this is a time more fit for their attitude. As for the new priesthood of academia, they are pricing themselves out of reach of their would-be followers.
Who determines what is “false or misleading information ahead of elections” (Ortutay and Klepper 2020)?
May be as important to make the “right choice” as who runs the count!
What is “false and misleading” and “right choice” is “conditional”!
Back in 2020 the authorities said that Hunter Biden’s laptop story was ‘false or misleading information’ and the story was almost dropped down a memory hole. About 50 spooks wrote a letter saying that it had all the hallmarks of ‘Russian misinformation’. And now? Oh yeah, everybody knows that Hunter Biden’s laptop was actually real but get over it and stop living in the past. And those spooks? They are now saying that they never said that it was Russian misinformation but only that it had the hallmarks of Russian misinformation. And then you remember how spooks actually do a training course on how to lie.
Humans don’t need a training course on how to lie. It’s instinctive.
Humans need a training course on how NOT to lie, even to themselves.
“Humans need a training course on how NOT to lie, even to themselves.”
100% Truth!!
Richard Feynman
Is a “Bachelor of communication” a modern version of rhetoric and sophistry?
Perhaps the spooks do a training course on how to lie effectively, convincingly, and believably without quite telling outright courtroom-quality lies.
James Clapper lied outright to Congress. As in perjury. Of course he was not prosecuted.
Who determines? The MiniTrue Media Monopoly and the “intelligence community” that is behind the scenes working with them. Thanks to folks like Ed Snowden and Julian Assange, we have documentation on it. The electromagnetic spectrum is auctioned off to the oligarchy to monopolize and “nudge” us into believing nonsense. We hear and obey the Masters of the Universe
I’m impressed when people remain calm discussing this “misinformation” idiocy.
Because I myself cannot remain calm. I can only yell at people by now.
What is the worst:
Regardless of the fact that making “misinformation” illegal is a bullshit proto-fascist discussion, it is those very parties demanding this who would (or should) end up in prison first for spreading “misinformation,” because that’s what they are doing 24/7.
This is true for almost every MSM legacy outlet.
And what’s furthermore maddening:
The sane minority, instead of attacking the MSM with this well-founded argument about their 1000% hypocrisy, are in a defensive posture, trying to prove that their own position is not “misinformation”.
But when you start adopting your opponent’s talking points, you have lost already.
They should at least have learned THAT lesson from TRUMP:
Because that’s his simple recipe – never engage the opponent on his/her terms, but counter on a totally different level. That’s how you win a popular argument.
https://www.youtube.com/watch?v=licwOgzBqo0
In addition to ignoring the problems of oligopoly/monopoly market power and the privatization of the “public” electromagnetic spectrum, the study does not address the limited scope of acceptable discourse, which has narrowed over time.
I was pleased to see Yves put conservative in quotes. I would argue that the study is flawed from the outset and has limited value. We would need to fully define political terms with working definitions first, otherwise the “liberal” and “conservative” binary is largely meaningless. The terms are loaded, and tied to the US experience. (Plus: The binary, one-dimensional political spectrum is outdated and over-simplifies complex political positions on a range of issues)
The term “liberal”, for example, means something different in the UK and in other places outside the US. Also these meanings have shifted over time. Nowadays a “librul” is an elitist, economically right wing, pro-oligarchy warmonger who differentiates themselves from their country-club colleagues by virtue-signalling with faux concern for people of color, LGBTQ, migrants etc. The callous disregard for the welfare of the vast majority of the domestic population and victims of US foreign policy abroad should be glaringly obvious. But the branding, “nudging”, “gaslighting” distractions, emotional manipulation, perception management and outright falsehood and lies repeated in the echo-chamber of the mass media obscures that. (Saddam has WMDs, Putin shot down the plane, Assad gassed his own people, Gaddafi gave Viagra to his troops etc. etc.)
The most glaring example is Palestine: the superficial identity politics of the Obama and now Harris regime. It is an uncomfortable topic, but we may have the first “woman of color” to be POTUS, who will continue to fund the genocide in Palestine, and continue proxy wars that mass murder many thousands of innocent people, many of them “people of color”. Yet few notice the gorilla in the room. (See “the parable of the house negro and the field negro” by Malcolm X)
The article makes huge assumptions: we have meaningful choice in our so-called democracy, we have a functioning, free and fair media environment, there are no oligopolies, no monopolies and no “intelligence community” interference, we have a very wide and comprehensive space for political discourse. Many would say that these assumptions are false.
Kinda reminds me of Econ 101, assume a can opener
What methods or frames of mind did conservatives use and cultivate to resist Twitter’s mind-guidance that liberals did not?
If conservatives are better information-insurgency war-fighters than non-conservatives are, perhaps non-conservatives should learn the conservative methods and how to apply them against the mind-guides.