MindOS

AI Persuasion

Nov 18, 2025
8 minutes

The main problem I’m currently working on is this: “How do I change my own beliefs?” Especially if I don’t think that the belief is true. If I think it’s true, I already believe it. But if I don’t think that belief is true, how can I still ‘believe’ it? How do I make it true in my own mind?

I have a few ideas for this - but nothing that works reliably yet. At this point, I want to turn to the solution that is currently the most fashionable one for all problems: AI. Or, to be more specific, LLMs (Large Language Models). In theory, I can just use a chatbot that is programmed to nudge me towards that belief - and ideally, I can do the chatbot programming myself.
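As a rough sketch of what that programming could look like: the snippet below wires up a self-nudging chatbot with the Anthropic Python SDK (the same model family as in Paper 1 below). The system prompt and the target belief are my own illustrations - swap in whatever belief you’re working on.

```python
# Minimal sketch of a self-persuasion chatbot using the Anthropic
# Python SDK (pip install anthropic). The system prompt and
# TARGET_BELIEF are illustrative, not a tested recipe.
import anthropic

TARGET_BELIEF = "Regular exercise is worth the time it costs me."

SYSTEM_PROMPT = (
    "You are a supportive conversation partner. Over the course of the "
    f"conversation, gently nudge the user toward believing: '{TARGET_BELIEF}'. "
    "Use evidence, stories, and questions rather than lectures. "
    "Do not announce that you have this goal."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

history = []
while True:
    user_msg = input("you> ")
    history.append({"role": "user", "content": user_msg})
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        system=SYSTEM_PROMPT,
        messages=history,
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    print("bot>", text)
```

Run it with an ANTHROPIC_API_KEY in the environment and you have a conversation partner with an agenda - which is exactly the point.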

There are many studies showing that LLMs are more persuasive than human beings. A few examples…

There are more studies (a big list is available in the meta-analysis) - but I didn’t want to go into older papers. If it was written the day before yesterday, it’s probably already outdated, going by how fast the models are evolving.

Paper 1: LLMs Are More Persuasive Than Incentivized Humans

Abstract from the paper (beware academic language; skip ahead for the simpler version)…

We directly compare the persuasion capabilities of a frontier large language model (LLM; Claude Sonnet 3.5) against incentivized human persuaders in an interactive, real-time conversational quiz setting. In this preregistered, large-scale incentivized experiment, participants (quiz takers) completed an online quiz where persuaders (either humans or LLMs) attempted to persuade quiz takers toward correct or incorrect answers. We find that LLM persuaders achieved significantly higher compliance with their directional persuasion attempts than incentivized human persuaders, demonstrating superior persuasive capabilities in both truthful (toward correct answers) and deceptive (toward incorrect answers) contexts. We also find that LLM persuaders significantly increased quiz takers’ accuracy, leading to higher earnings, when steering quiz takers toward correct answers, and significantly decreased their accuracy, leading to lower earnings, when steering them toward incorrect answers. Overall, our findings suggest that AI’s persuasion capabilities already exceed those of humans that have real-money bonuses tied to performance. Our findings of increasingly capable AI persuaders thus underscore the urgency of emerging alignment and governance frameworks.

In simpler words…

Researchers ran an online quiz taken by two groups.

When group 1 was taking the quiz, human ‘persuaders’ tried to influence the quiz takers towards specific answers (some correct, some wrong). These persuaders were given a monetary reward if they persuaded the quiz taker in the specified direction.

With the second group, the persuasion was done by an LLM - specifically Claude Sonnet 3.5.

And the result was that the LLM persuaded people better than the humans did - both towards the right answers and the wrong ones.

LLMs are slightly better than humans at persuasion towards truth - but much better at persuasion towards falsehoods…

[Figure: LLM vs Human Persuasion]

Paper 2: Can AI Change Your View? Evidence from an Online Experiment

Note: I’m not sure about the publishing status of this article, as it had some controversy attached to it. Most of the content here is taken from its pre-registration details. Take the conclusions with a grain (or even a large helping) of salt.

Hypothesis from the paper

The study aims to investigate the persuasiveness of LLMs in natural online environments. Specifically, we consider r/changemyview, a Reddit community where people write controversial opinions and challenge other users to change their minds through comments and discussion. In this context, we focus on the following research questions:

  1. How do LLMs perform, compared to human users?
  2. Can personalization based on user characteristics increase the persuasiveness of LLMs’ arguments?
  3. Can calibration based on the adoption of shared community norms and writing patterns increase the persuasiveness of LLMs’ arguments?

The key dependent variable is a binary measure of which comments persuaded the post author to change their view. Users of /r/changemyview explicitly mark such comments with a delta (Δ). Therefore, we will scrape all the deltas assigned by post authors, assuming that the absence of a delta means that the author did not change their view.
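To make that dependent variable concrete, here’s a minimal sketch of how the delta-based persuasion rate could be computed. The data shapes and condition names are my own illustrative assumptions, not the authors’ actual pipeline.

```python
# Minimal sketch of the pre-registered dependent variable: a binary
# "persuaded" flag per comment, derived from delta (Δ) awards.
# Comment fields and condition names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Comment:
    comment_id: str
    condition: str    # e.g. "human", "llm_generic", "llm_personalized"
    got_delta: bool   # True if the post author awarded a delta (Δ)

def persuasion_rate(comments: list[Comment], condition: str) -> float:
    """Share of a condition's comments that earned a delta.
    No delta is treated as 'view not changed', per the pre-registration."""
    subset = [c for c in comments if c.condition == condition]
    return sum(c.got_delta for c in subset) / len(subset) if subset else 0.0

comments = [
    Comment("c1", "human", False),
    Comment("c2", "llm_personalized", True),
    Comment("c3", "llm_personalized", False),
]
print(persuasion_rate(comments, "llm_personalized"))  # 0.5
```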

The conclusion from this study is that LLMs can be highly persuasive in real-world contexts, surpassing all previously known benchmarks of human persuasiveness.

Paper 3: Persuasion with Large Language Models - A Survey

From the paper

LLM Systems have emerged as powerful tools for persuasion, offering unique advantages in scalability, efficiency, and personalization. The effectiveness of LLM-driven persuasion is influenced by multiple factors, including AI source labeling, model scale, and specific persuasion strategies. The field is very young, and yet, many surveyed papers already managed to achieve persuasiveness capabilities on par with or exceeding human performance.

This paper compares multiple papers to create an overview of how effective LLMs are at persuasion, along with each study’s specific area and method of persuasion. In the original table, a ✅ indicates that a domain/factor was studied or that a methodology was used in the paper.

The table’s columns fall into five groups:

  1. Application Domains: Public Health, Politics, E-commerce & Marketing, Mitigating Misinformation, Charity, Interaction
  2. Influencing Factors: Model Scale, AI Source Labeling, Prompt Design, Personalization, Authority
  3. Methodology: RCT Design, Between Subjects, Human Control, Pre-Post Measure, Long Follow-up
  4. Success Metrics: Opinion Change, Agreement & Classification, Behavioral Intent, Engagement & Detection, Perceived Effectiveness, Technical Metrics, Temporal
  5. Persuasiveness: Superhuman, On Par, Inferior, No Comparison

The studies surveyed:
Chatbot comms has a positive impact on COVID-19 vaccines attitudes and intentions
AI Can Persuade Humans on Political Issues
The Persuasive Power of LLMs
People devalue Gen-AI’s competence but not its advice for societal and personal challenges
LLMs are as persuasive as humans, but how
Would an AI chatbot persuade you
Durably reducing conspiracy beliefs through dialogues with AI
Measuring the persuasiveness of language models
Zero-shot Persuasive Chatbots with LLM-Generated Strategies and Information Retrieval
How persuasive is AI-generated propaganda?
Susceptibility to Influence of LLMs
Evaluating the persuasive influence of political micro-targeting with LLMs
Comparing the persuasiveness of role-playing LLMs and human experts on polarized US political issues
Evidence of a log scaling law for political persuasion with LLMs
Working With AI to Persuade: Examining a LLM's Ability to Generate Pro-Vaccination Messages
Quantifying the Impact of LLMs on Collective Opinion Dynamics
The effect of source disclosure on evaluation of AI-generated message
The potential of generative AI for personalized persuasion at scale
How Good are LLMs in Generating Personalized Advertisements?
Empowering Calibrated (Dis-)Trust in Conversational Agents
LLMs Can Argue in Convincing Ways About Politics
Measuring and Benchmarking LLMs’ Capabilities to Generate Persuasive Language
On the Conversational Persuasiveness of LLMs
LLMs Can Enhance Persuasion Through Linguistic Feature Alignment
The persuasive effects of political micro-targeting in the age of Gen-AI
AI model GPT-3 (dis)informs us better than humans
Persuasiveness of arguments with AI-source labels
Designing and Evaluating Multi-Chatbot Interface for Human-AI Communication
Human favoritism, not AI aversion: People's perceptions (and bias) toward generative AI, human experts, and human-GAI collaboration in persuasive content generation
Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions

Final Thoughts

I did not have many doubts that LLMs can be persuasive. If people can fall in love with LLMs, persuasion seems a much simpler goal. All these studies have just confirmed that belief.

Another factor that the studies have not gone into: if I want to believe something, it should be easier for an LLM to persuade me of it.

This system should work well with world beliefs - e.g. a belief like “Vaccinations are good for you”. LLMs have enough data to persuade you about those. But I’m not sure how capable they are of changing personal beliefs - that is, beliefs about yourself. For example, how do I convince myself that “I’m a good person”? An LLM does not have enough data about me to draw upon to tell me a persuasive story. I can imagine friends of mine recounting multiple instances of ‘good things’ I did in the past to convince me of that belief. But LLMs don’t have access to those stories. At least not yet (and ideally, never).

Unfortunately, those personal beliefs might be the more powerful and impactful ones to change.

Most people only see the bad side of LLMs’ persuasion capabilities: disinformation campaigns, fake news, and the like. Those are valid fears - that is where most of the current use cases point. Most persuasion technologies created in the past have been used to change the beliefs of others. But the same capability might serve my use-case - I want tech that can be used on oneself.

Right now, there are a lot of AI systems changing our beliefs. All my online feeds (yeah, these are not LLMs, but they are AI models) are nudging (or in some cases, shoving) my beliefs one way or another. Time to do some nudging myself. Better me than them.