Human hubris may prevent people from accepting AI help

Human psychology may prevent people from realizing the benefits of artificial intelligence, according to a trio of boffins based in the Netherlands.

But with training, we can learn to overcome our biases and trust our automated advisors.

In a preprint paper titled "Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems," Gaole He, Lucie Kuiper, and Ujwal Gadiraju, from Delft University of Technology, examine whether the Dunning-Kruger effect hinders people from relying on recommendations from AI systems.

The Dunning-Kruger effect (DKE) dates back to research from 1999 by psychologists David Dunning and Justin Kruger, "Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments."

Dunning and Kruger posit that incompetent people lack the capacity to recognize their incompetence and thus tend to overestimate their abilities.

Assuming DKE exists - something not everyone agrees on - the Delft researchers suggest this cognitive condition means AI guidance may be lost on us. That's not ideal since AI systems presently tend to be pitched as assistive systems that augment human decision-making rather than autonomous systems that operate without oversight. Robo help doesn't mean much if we don't accept it.

"This a particularly important metacognitive bias to understand in the context of human-AI decision making, since one can intuitively understand how inflated self-assessments and illusory superiority over an AI system can result in overly relying on oneself or exhibiting under-reliance on AI advice," state He, Kuiper, and Gadiraju in their paper, which has been conditionally accepted to CHI 2023. "This can cloud human behavior in their interaction with AI systems."

To test this, the researchers asked 249 people to answer a series of multiple-choice reasoning questions, first by themselves and then with the help of an AI assistant.

The questions, available in the research project's GitHub repository, included, for example, one that presented a physician's argument and then asked: "Which one of the following, if true, most strengthens the physician's argument?"

After respondents answered, they were shown the same questions along with an AI system's recommended answer (D, for the physician question), and given the opportunity to change their initial answer. This approach, the researchers say, has been validated by past research [PDF].
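To make that two-stage protocol concrete, here's a minimal Python sketch of how reliance on AI advice could be quantified from (initial answer, AI advice, final answer) records; the Trial layout and metric names are illustrative assumptions for this article, not the paper's exact measures.

from dataclasses import dataclass

@dataclass
class Trial:
    initial: str  # participant's first answer
    advice: str   # AI system's recommended answer
    final: str    # answer after seeing the AI advice
    correct: str  # ground-truth answer

def reliance_summary(trials):
    """Among questions where the participant and AI initially disagreed,
    measure how often the participant switched to the AI's answer, and
    how often switching was the right call (a rough reliance measure)."""
    disagreements = [t for t in trials if t.initial != t.advice]
    if not disagreements:
        return {"switch_rate": 0.0, "appropriate_switch_rate": 0.0}
    switched = [t for t in disagreements if t.final == t.advice]
    appropriate = [t for t in switched if t.advice == t.correct]
    return {
        "switch_rate": len(switched) / len(disagreements),
        "appropriate_switch_rate": len(appropriate) / len(disagreements),
    }

# Under-reliance shows up as refusing to switch even when the AI was right.
demo = [
    Trial("B", "D", "D", "D"),  # accepted good advice
    Trial("B", "D", "B", "D"),  # ignored good advice: under-reliance
    Trial("A", "C", "C", "A"),  # accepted bad advice: over-reliance
]
print(reliance_summary(demo))  # {'switch_rate': 0.666..., 'appropriate_switch_rate': 0.333...}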

Based on the answers they received, the three computer scientists conclude "that DKE can have a negative impact on user reliance on the AI system..."

But the good news, if that's the right term, is that DKE is not destiny. Our mistrust of AI can be trained away.

"To mitigate such cognitive bias, we introduced a tutorial intervention including performance feedback on tasks, alongside manually crafted explanations to contrast the correct answer with the users' mistakes," the researchers explain. "Experimental results indicate that such an intervention is highly effective in calibrating self-assessment (significant improvement), and has some positive effect on mitigating under-reliance and promoting appropriate reliance (non-significant results)."

Yet while the tutorial helped those exhibiting overconfidence (DKE), corrective re-education had the opposite effect on those who initially underestimated their capabilities: it made them either overconfident or possibly algorithm averse - a known consequence [PDF] of seeing machines make mistakes.

In all, the researchers conclude that more work needs to be done to understand how human trust in AI systems can be shaped.

We'd do well to recall the words of HAL, from 2001: A Space Odyssey: "It can only be attributable to human error."

®
