When LLMs get personal info, they are more persuasive debaters than humans

Fresh research indicates that in online debates, LLMs are far more effective than humans at exploiting personal information about their opponents, with potentially alarming implications for mass disinformation campaigns.

The study showed that GPT-4 was 64.4 percent more persuasive than a human when both the meatbag and the LLM had access to personal information about the person they were debating. That advantage vanished when neither human nor LLM had access to their opponent's personal data.

The research, led by Francesco Salvi, a research assistant at the Swiss Federal Institute of Technology in Lausanne (EPFL), matched 900 people in the US with either another human or GPT-4 to take part in an online debate. Topics debated included whether the nation should ban fossil fuels.

In some pairs, the debater - either human or LLM - was given some personal information about their opponent, such as gender, age, ethnicity, education level, employment status, and political affiliation extracted from participant surveys. Participants were recruited via a crowdsourcing platform specifically for the study and debates took place in a controlled online environment. Debates centered on topics on which the opponent had a low, medium, or high opinion strength.

The researchers pointed to criticism of LLMs for their "potential to generate and foster the diffusion of hate speech, misinformation and malicious political propaganda."

"Specifically, there are concerns about the persuasive capabilities of LLMs, which could be critically enhanced through personalization, that is, tailoring content to individual targets by crafting messages that resonate with their specific background and demographics," the paper published in Nature Human Behaviour today said.

"Our study suggests that concerns around personalization and AI persuasion are warranted, reinforcing previous results by showcasing how LLMs can outpersuade humans in online conversations through microtargeting," they said.

The authors acknowledged the study's limitations: debates followed a structured pattern, while most real-world debates are more open ended. Nonetheless, they argued it was remarkable how effectively the LLM used personal information to persuade participants, given how little of that information the models had access to.

"Even stronger effects could probably be obtained by exploiting individual psychological attributes, such as personality traits and moral bases, or by developing stronger prompts through prompt engineering, fine-tuning or specific domain expertise," the authors noted.

"Malicious actors interested in deploying chatbots for large-scale disinformation campaigns could leverage fine-grained digital traces and behavioral data, building sophisticated, persuasive machines capable of adapting to individual targets," the study said.

The researchers argued that online platforms and social media companies should take these threats seriously and extend their efforts to implement measures countering the spread of AI-driven persuasion. ®
