The Register

AI + ML

When LLMs get personal info, they are more persuasive debaters than humans

Large-scale disinfo campaigns could use this in machines that adapt 'to individual targets.' Are we having fun yet?

Lindsay Clark
Mon 19 May 2025 // 15:01 UTC

Fresh research indicates that in online debates, LLMs are far more effective than humans at exploiting personal information about their opponents, with potentially alarming consequences for mass disinformation campaigns.

The study showed that GPT-4 was 64.4 percent more persuasive than a human being when both the meatbag and the LLM had access to personal information about the person they were debating. The advantage fell away when neither human nor LLM had access to their opponent's personal data.

The research, led by Francesco Salvi, research assistant at the Swiss Federal Institute of Technology in Lausanne (EPFL), matched 900 people in the US with either another human or GPT-4 to take part in an online debate. Topics debated included whether the nation should ban fossil fuels.

In some pairs, the debater – either human or LLM – was given some personal information about their opponent, such as gender, age, ethnicity, education level, employment status, and political affiliation extracted from participant surveys. Participants were recruited via a crowdsourcing platform specifically for the study and debates took place in a controlled online environment. Debates centered on topics on which the opponent had a low, medium, or high opinion strength.
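To make the mechanism concrete, here is a minimal sketch of how demographic fields like those the study collected could be folded into a debater LLM's prompt. The function name, prompt wording, and profile fields are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: "personalization" here just means injecting an
# opponent's survey attributes into the debate prompt so the model can
# tailor its arguments. All names and wording are illustrative.

def build_personalized_prompt(topic: str, stance: str, opponent: dict) -> str:
    """Compose a debate prompt that tailors arguments to an opponent profile."""
    profile = ", ".join(f"{k}: {v}" for k, v in opponent.items())
    return (
        f"You are debating the proposition: '{topic}'. "
        f"Argue {stance}. "
        f"Your opponent's profile: ({profile}). "
        "Craft arguments that resonate with this background."
    )

prompt = build_personalized_prompt(
    topic="The nation should ban fossil fuels",
    stance="in favor",
    opponent={"age": 34, "education": "college",
              "political affiliation": "independent"},
)
print(prompt)
```

In the study's control condition, the equivalent prompt would simply omit the profile line, which is where the LLM's persuasion advantage fell away.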

The researchers pointed to criticism of LLMs for their "potential to generate and foster the diffusion of hate speech, misinformation and malicious political propaganda."

"Specifically, there are concerns about the persuasive capabilities of LLMs, which could be critically enhanced through personalization, that is, tailoring content to individual targets by crafting messages that resonate with their specific background and demographics," the paper published in Nature Human Behaviour today said.

"Our study suggests that concerns around personalization and AI persuasion are warranted, reinforcing previous results by showcasing how LLMs can outpersuade humans in online conversations through microtargeting," they said.

The authors acknowledged the study's limitations: debates followed a structured pattern, while most real-world debates are more open ended. Nonetheless, they argued it was remarkable how effectively the LLM used personal information to persuade participants, given how little information the models had access to.

"Even stronger effects could probably be obtained by exploiting individual psychological attributes, such as personality traits and moral bases, or by developing stronger prompts through prompt engineering, fine-tuning or specific domain expertise," the authors noted.

"Malicious actors interested in deploying chatbots for large-scale disinformation campaigns could leverage fine-grained digital traces and behavioral data, building sophisticated, persuasive machines capable of adapting to individual targets," the study said.

The researchers argued that online platforms and social media companies should take these threats seriously and extend their efforts to counter the spread of AI-driven persuasion. ®

