In brief
University of Zurich researchers used AI bots to pose as people on Reddit, including fabricated personas like trauma counselors and political activists.
The bots engaged in over 1,700 debates on the r/ChangeMyView subreddit, successfully changing users' opinions in many cases.
Reddit moderators and legal counsel condemned the experiment as unethical, citing deception, privacy violations, and lack of consent.
Researchers at the University of Zurich have sparked outrage after secretly deploying AI bots on Reddit that pretended to be rape survivors, trauma counselors, and even a "Black man against Black Lives Matter," all to see if they could change people's minds on controversial topics.
Spoiler alert: They could.
The covert experiment targeted the r/ChangeMyView (CMV) subreddit, where 3.8 million people (or so everyone thought) gather to debate ideas and potentially have their opinions changed through reasoned argument.
Between November 2024 and March 2025, AI bots responded to over 1,000 posts, with dramatic results.
"Over the past few months, we posted AI-written comments under posts published on CMV, measuring the number of deltas these comments received," the research team revealed this weekend. "In total, we posted 1,783 comments across nearly four months and received 137 deltas." A "delta" in the subreddit represents a person who acknowledges having changed their mind.
When Decrypt reached out to the r/ChangeMyView moderators for comment, they emphasized that their subreddit has "a long history of partnering with researchers" and is generally "very research-friendly." However, the mod team draws a clear line at deception.
If the point of the subreddit is to change views with reasoned arguments, should it matter whether a machine can sometimes craft better arguments than a human?
We asked the moderation team, and the answer was clear: the problem is not that AI was used to sway people, but that people were deceived in order to carry out the experiment.
"Computers can play chess better than humans, and yet there are still chess enthusiasts who play chess with other humans in tournaments. CMV is like [chess], but for conversation," explained moderator Apprehensive_Song490. "While computer science undoubtedly offers certain benefits to society, it is important to retain human-centric spaces."
Asked whether it should matter that a machine sometimes crafts better arguments than humans, the moderator emphasized that the CMV subreddit differentiates between "meaningful" and "genuine." "By definition, for the purposes of the CMV sub, AI-generated content is not meaningful," Apprehensive_Song490 said.
The researchers came clean to the forum's moderators only after they had completed their data collection. The moderators were, unsurprisingly, furious.
"We think this was wrong. We do not think that 'it has not been done before' is an excuse to do an experiment like this," they wrote.
"If OpenAI can create a more ethical research design when doing this, these researchers should be expected to do the same," the moderators explained in an extensive post. "Psychological manipulation risks posed by LLMs are an extensively studied topic. It is not necessary to experiment on non-consenting human subjects."
Reddit's chief legal officer, Ben Lee, didn't mince words: "What this University of Zurich team did is deeply wrong on both a moral and legal level," he wrote in a reply to the CMV post. "It violates academic research and human rights norms, and is prohibited by Reddit's user agreement and rules, in addition to the subreddit rules."
Lee did not elaborate on why he believes such research constitutes a violation of human rights.
Deception and persuasion
The bots' deception went beyond simple interaction with users, according to Reddit moderators.
Researchers used a separate AI to analyze the posting history of targeted users, mining for personal details such as age, gender, ethnicity, location, and political views to craft more persuasive responses, much as social media companies do.
The idea was to test three categories of replies: generic ones, community-aligned replies from models fine-tuned on comments that had proven persuasive, and personalized replies tailored after analyzing a user's public information.
Analysis of the bots' posting patterns, based on a text file shared by the moderators, revealed several telling signatures of AI-generated content.
The same account would claim wildly different identities depending on the conversation: public defender, software developer, Palestinian activist, British citizen.
Posts frequently followed the same rhetorical structure, opening with a soft concession ("I get where you're coming from") followed by a three-step rebuttal introduced by the formula "Let me break this down."
The bots also habitually fabricated authority, claiming job titles that perfectly matched whatever topic they were arguing about. When debating immigration, one bot claimed, "I've worked in construction, and let me tell you – without immigrant labour, construction costs would skyrocket." These posts were peppered with unsourced statistics that sounded precise but carried no citations or links: Manipulation 102.

Prompt engineers and AI enthusiasts would readily identify the LLMs behind the accounts.
Many posts also contained the typical "this isn't just about X, it's about something bigger" setup that makes some AI models easy to identify.
Ethics in AI
The research has generated significant debate, especially now that AI is ever more intertwined with our everyday lives.
"This is one of the worst violations of research ethics I've ever seen," Casey Fiesler, an information scientist at the University of Colorado, wrote on Bluesky. "I can't claim to know anything about ethics review in Switzerland, but in the U.S., this would have required a waiver of consent for deception, which is very hard to get," she elaborated in a thread.

The University of Zurich's Ethics Committee of the Faculty of Arts and Social Sciences had, in fact, advised the researchers that the study was "exceptionally challenging" and recommended that they better justify their approach, inform participants, and fully comply with platform rules. Those recommendations were not legally binding, however, and the researchers proceeded anyway.
Not everyone views the experiment as a clear-cut ethical violation, however.
Ethereum co-founder Vitalik Buterin weighed in: "I get the original circumstances that motivated the taboo we have today, but if you reanalyze the situation from today's context, it feels like I'd rather be secretly manipulated in random directions for the sake of science than eg. secretly manipulated to get me to buy a product or change my political view?"
"I feel like we hate on this kind of clandestine experimentation a little too much," he added in the same April 28, 2025 post.
Some Reddit users shared this perspective. "I agree this was a shitty thing to do, but I feel like the fact that they came forward and published it is a powerful and important reminder of what AI is certainly being used for as we speak," wrote user Trilobyte141. "If this happened to a bunch of policy nerds at a university, you can bet your ass it's already widely being used by governments and special interest groups."
Despite the controversy, the researchers defended their methods.
"Although all comments were machine-generated, each one was manually reviewed by a researcher before posting to ensure it met CMV's standards for respectful, constructive dialogue and to minimize potential harm," they said.
In the wake of the backlash, the researchers have decided not to publish their findings. The University of Zurich says it is now investigating the incident and will "critically review the relevant assessment processes."
What next?
If these bots could successfully masquerade as humans in emotionally charged debates, how many other forums might already be hosting similar undisclosed AI participation?
And if AI bots gently nudge people toward more tolerant or empathetic views, is the manipulation justifiable, or is any manipulation, however well-intentioned, a violation of human dignity?
We don't have answers to those questions, but our good old AI chatbot has something to say about it.
"Ethical engagement requires transparency and consent, suggesting that persuasion, no matter how well-intentioned, must respect individuals' right to self-determination and informed choice rather than relying on covert influence," GPT-4.5 replied.
Edited by Sebastian Sinclair