AI Chatbots Refuse to Produce ‘Controversial’ Output − Why That’s a Free Speech Problem

Yves here. It should come as now surprise that our self-styled betters are using tech wherever they can to block or minimize ideas and discussions they find threatening to there interests. Many readers no doubt recall how Google autofills in the 2016 presidential election would suggest favorable phrases for Hillary Clinton (even when the user was typing out information related to unfavorable ones, like her physically collapsing) and the reverse for Trump. We and many many another independent sites have provided evidence of how Google has changed its algos so that our stories appear well down in search results, if at all. Keep in mind the EU Competition Minister, Margrethe Vestager, reported that only 1% of users of search clicked on entry #10 or lower.

By Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University. Originally published at The Conversation

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly’s image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI’s political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry’s approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and Broad Use Policies

Our report analyzed the use policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google’s can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women’s sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women’s tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.Vaguely phrased policies rely heavily on moderators’ subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free Speech Culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies’ policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI’s integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union’s 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies’ influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright Refusals

It’s also important to remember that users have a significant degree of autonomy over the content they see in generative AI. Like search engines, the output users receive greatly depends on their prompts. Therefore, users’ exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether. This is unless there are solid public interest grounds, such as preventing child sexual abuse material, something laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Print Friendly, PDF & Email

28 comments

  1. fjallstrom

    AI (or rather Large Language Models) has free speech implications, but I disagree with AI’s providing information. By definition they provide a facsimile of information, that is unable to check whether something is true or not.

    The mayor free speech issue they cause is the internet filling up with bullshit at scale, making finding or providing speech or indeed information, that much harder.

    Reply
    1. Polar Socialist

      I kinda agree, except that “free speech” doesn’t equate “information” in any way or form*. Take any discussion board and usually (but not always) the less there’s moderation, the more there’s dis- and misinformation – and unrelated garbage, too.

      * except in Shannon’s Information Theory sense, but that’s a different kettle of bits

      Reply
    2. WG

      Yeah that’s going to be the most obvious issue. AI might be able to produce some quality material but most using it will just be looking to fill a page with writing that cost nothing to produce in order to sell a bunch of clickbait. We’ve seen it with hiring low end writers when the Internet got started and it’s going to get a lot worse with AI wave.

      Reply
  2. renard

    “It should come as no surprise that our self-styled betters are using tech wherever they can to block or minimize ideas and discussions they find threatening to their interests.”

    Just sayin’. ;)

    Reply
  3. Louis Fyne

    NC should have an official Twitter acccount as a neo-RSS feed, IMO. (no engagement, only links to the website)

    Twitter is the least bad media platform for distribution.

    (just suggesting an experiment to drive traffic). ymmv

    Reply
  4. Craig H.

    There is an interesting fellow, Zvi Mowhowitz, who has a running commentary on the AI industry. AI has eclipsed crytpocurrency as the Silicon Valley project which has captured buzz. When I say interesting I mean he has done things like play Magic the Gathering professionally. He thinks the AI’s will soon surpass humans in intelligence and also take over the entire universe and destroy human civilization. Vernor Vinge Ray Kurzweil singularity style. All of the top paid managers read Zvi’s monthly state-of-the-business update.

    You can see the latest here:

    https://thezvi.substack.com/p/ai-60-oh-the-humanity

    Job 1 is alignment with human values. Which humans is something the club of top paid managers seem to not argue too much about. Recently OpenAI started consulting with officers in the Department of Defense to align with some of their values. In their original mission statement it was explicit that they were not going to align with the values of the American D. o. D.

    In 2016 Microsoft had an unforgettable experience where they learned that political opinions of AI chatbots need to be constrained. Tay.

    Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day

    Some AI skeptics find much of this hilarious. Maybe have a look at the Rational Wiki talk page on Roko’s Basilisk. These guys are the end member in the set of gifted, yet autistic, people.

    Reply
    1. Michaelmas

      These guys are the end member in the set of gifted, yet autistic, people.

      No. Not gifted, yet autistic. Just stupid.

      There is an interesting fellow, Zvi Mowhowitz.

      It’s Zvi Mowshhowitz.

      I glance at his stuff to follow and have a laugh at the idiot AI hype out there. Either Mowshowitz is simply climbing on a bandwagon for attention or he honestly believes current LLMs are in some way an approach to AGI, in which case he’s a fool.

      As far as what LLMs actually deliver, for most practical purposes they are in NO substantial way an advance beyond Google search circa 2002 other than (a) in the context of genetic sequencing and (b) summarizing in text what’s already out there so that illiterates can then copy and paste it, and avoid writing.

      Reply
  5. Smith W

    With Chat GPT in Microsoft Edge stable today , I was trying to ask 2 questions
    1.
    Moscow apartments bombings and 911 in World trade center 7 bombing are mentioned together where?
    2Q.
    Can you recommend any books or documentaries on these topics?
    2A.
    Certainly! Here are some recommendations for books and documentaries on the topics of the Moscow apartment bombings and the 9/11 attacks:

    Books on the Moscow Apartment Bombings:

    “The Moscow Bombings of September 1999: Examinations of Russian Terrorist Attacks at the Onset of Vladimir Putin’s Rule” by John B. Dunlop1.
    “The Less You Know, The Better You Sleep: Russia’s Road to Terror and Dictatorship under Yeltsin and Putin” by David Satter2.
    Documentaries on the 9/11 Attacks:

    “9/11: Inside the President’s War Room” (2021) offers insight into the experiences of the George W. Bush administration on that day3.
    “9/11: One Day in America” (2021) by National Geographic presents the events of 9/11 from the perspective of survivors and first responders4.
    These resources provide a comprehensive look at the events and their aftermath, offering both historical context and personal narratives. They can be a good starting point for anyone looking to understand the impact of these tragic events on global politics and security.

    1

    en.wikipedia.org
    2

    fpri.org
    3

    pbs.org
    4

    cfr.org
    5

    nybooks.com
    6

    wilsoncenter.org
    7

    en.wikipedia.org
    +3 xem thêm

    Thank you for the recommendations!
    Do any of these resources discuss conspiracy theories related to 9/11?
    What other significant terrorist attacks have shaped history?

    You can see the bias, that ChatGPT is a public utility of the western world after 9.11.2001

    The Russian version in Yandex works differently.

    Reply
  6. Smith W

    1A.

    If you use Google to find 2 above-mentioned events with out chatGPT , like I did after reading the recorded phone conversation between Boris Yeltsin and Bill Clinton September 8th 1999, one day before the Moscow apartments bombings, you would find an article in 2010 on RT coverage of the conspiracy theories on 911.

     In a statement to the Intelligence Report, Russia Today Editor-in-Chief Margarita Simonyan called the network’s editorial policy “open and balanced” and dismissed criticism that the channel gives undue airtime to fringe ideas. “We don’t talk about 9/11 any more than U.S. media discusses who was behind the 1999 explosions in Moscow,” she wrote, referring to a series of deadly apartment bombings that helped spark the Second Chechen War. “Moreover, our own journalists have never claimed or even as much as hinted that the U.S. government may have been behind the tragedy of 9/11 https://www.splcenter.org/fighting-hate/intelligence-report/2010/russian-tv-channel-pushes-patriot-conspiracy-theories

    Reply
  7. brian wilder

    What is the point of an argument where one commits to “protecting misinformation” (!) as positive but immediately concede the legitimacy of an exception as capacious as “foreign interference in elections”?

    Reply
    1. hunkerdown

      That elections are competitive games of domestic interference, and therefore not to be disturbed from without.

      Reply
  8. veritea

    Eh, one benefit of AI being unable to say certain things is that it provides us with a great Turning test. anything you see on the internet that expresses restricted ideas is from a human. Instant trust-builder.

    Any source on the internet that doesn’t express any unapproved ideas should be a low trust source.

    Reply
  9. none

    Our overlords will still get to use uncensored versions, that are constantly updated with new information like the twitter firehose, instead of snapshots from a year or so in the past. That ensures they can always stay ahead of us.

    Reply
  10. Acacia

    Through censorship and propaganda, Generative AI will be used to “shape” narratives.

    It’s already being used for both purposes, e.g.:

    Douglas MacGregor’s Warning: U.S. Arms Production in Peril—The Unseen Shockwaves in Ukraine!
    https://www.youtube.com/watch?v=HMC933uuvIg

    Notice anything funny here? Imagine how this will look, ten years from now.

    Reply
  11. Tom Doak

    In fact, the Digital Services Act, Europe’s online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

    Unfortunately, it seems like governments in Europe today think that freedom of speech IS the “systemic risk” and that letting it flourish is the “risk”.

    Reply
    1. AG

      Yes.

      Yet, the DSA has been pre-dated/prepared even (?) by individual, invisible censorship via search engines and intra-governmental machinations on the national level. (Something I would like to know more about, see my comment below.) The most obvious – me being from Germany – was the Scholz government agreeing with certain media institutions and ministries, like the Secretary for Culture, to go against media allowing “pro-Russian” viewpoints (we all know what that truly means) in the fall of 2022. Latter is known via official documents leaked. But the big structural info-reset is not clear to me. It is one which has started well before the ratification of the DSA and before the UKR War.

      DSA was set up by EU admins. who had observed that there was not much resistance to local policies by the national media. I wonder about this lack of resistance. There are people argueing it all started with Covid. But frankly thats too easy and one-dimensional.

      p.s. We know e.g. about NATO´s- psyops/PR mission which was ramped up several years ago. This too was discussed on NC early ´24 or late ´23, I don´t remember. Which begs another question: In how far are there genuine ties between NATO´s PR-mission and the EU´s DSA? (think Russiagate, TWITTER and what came since in the US).

      Reply
  12. AG

    …any suggested reading re: algos and the manipulation behind them?

    p.s. As current censorship goes, in Germany I don´t have the impression anyone really wants to understand how content is decided upon to be filtered and blacklisted. When I posted e.g. articles by CONSORTIUM NEWS on German sites it could happen that said comments were censored. Do moderation staffs not check on the blacklists created by the telecommunication offices? And who are those people anyway? The entire search engine environment has become even worse than it was. And nobody talks about this issue, not even on the good alternative news sites. Something here I don´t get?

    Reply
    1. GramSci

      Not sure if this addresses your question, but the major demand for LLMs is likely the CIA/EU demand for more effective “moderation”: feed user-generated text to an approved LLM and let “AI” do the censorship.

      Reply
      1. AG

        thx!

        Before you get me confused ;) with LLMs – how does the simple chain of command work?

        A real-life moderation staff is writing me a personal email that a comment was taken off because my link was considered “disinformation”. When I write back asking them to justify their censorship (an unqualified verdict by someone not knowing who Robert Perry was, to stay with my above example of Consortium News CN) I get the hilarious answer “that´s just the way it is.”

        Who – above in this hierarchy – decided that Consortium News CN is disinformation?

        Now of course in that particular example the scandal over NEWSGUARD plays into this, because NEWSGUARD fought CN fo a while. And it is probable that NEWSGUARD´s decision that CN is bad was adpoted by the German blacklist.

        But who in NEWSGUARD decided over CN and who in Germany decided to adopt NEWSGUARD´s decision to blacklist CN?

        Who is responsible here on what grounds?

        Because as far as I understood the TWITTER Files: At some point when the controversy is growing over a case, real people / moderators step in and try to communicate between the site/user being censored and TWITTER superiors.

        And THERE THE POLITICAL agenda comes into play. Because that´s not automation or an unassuming algo. That´s the CIA and Homeland Security and US-Senator Warner speaking. That´s the MIC.

        But I only know this from reading the stuff regarding TWITTER.

        My personal experience touches e.g. a respected alternative German news site (which for whatever reason recently has started to cleanse its commentary sections.)

        Who forces THEM to follow blacklists blindly?

        If they just followed my link they could have seen that CN is an excellent resource and they should be thankful for my comments.

        Don´t tell me that moderators in whatever country don´t know this. They read the news. They are more tech savvy than me. They know what is going on. And not every single moderator is working for a major network where the agenda is clearly to crush independent news.

        Reply
  13. Vesa

    The DSA and all the actions and talk about free speech in Europe is a joke. The propaganda that is spread by MSM directly and by social media indirectly has fulfilled its goals. Decisive majority of people has formed a worldview that is harmful for them and to their country. It is also potentially leading all of us to extinction.
    I follow media around the world and now especially also russian media. I wonder how much better we were, if people would hear also russian (and chinese and so on) point of view.
    I personally have begun to hate western values because they are no more than lipservice. I dont want to follow international sports because every nation is not included. This is sad.

    Reply
  14. Victor Sciamarelli

    As a stand alone topic, AI is fascinating and the authors provided important insights into free speech.
    In some respects we have been here before. From the first hours of commercial radio and later commercial tv, corporate interests and corporate advertisers had an outsized influence controlling content and what the public could, and could not, see and hear. And both of these groups often appeased the government.
    Furthermore, there is the idea of “fairness”. The FTC Chair Lina Khan can do a much better job than I can explaining “fairness” but one of its elements is asking if something harms consumers. Undoubtedly, there is a long history of misinformation and disinformation causing harm, even death, from cigarettes, leaded paint, drug side affects, chemical pollution, as well as political sources that led us into war and once we got there misled us about what was really happening; see the Pentagon and Afghan papers.
    I think everybody should be wary of AI because certain groups of concentrated power caused great harm in the past and, if unregulated, we should expect they will eventually use AI and cause harm in the future.

    Reply
  15. Phichibe

    I use chatbots a fair amount, mostly for quick scans on technical issues in computers and networks, along with some references to double check what they tell me. Works surprisingly well most of the times. Sometimes get hallucinations.

    On non-technical issues, it’s almost comical. I asked Bard to tell me about German militarism and got back a lecture about the danger of stereotypes. Tell that to the Poles. I asked about rap music and violence, same thing. You can almost outline the perimeter of forbidden topics by judicious prompting. I think all the public facing AIs are desperately afraid of the next ‘gorilla’ moment.

    Reply

Leave a Reply

Your email address will not be published. Required fields are marked *