Study employs large language models to sniff out their own bloopers

Researchers in computing and linguistics have devised a new way to detect errors in large language models, which relies on employing more LLMs.

Applying statistical machine learning to language at an ever-increasing scale has come into vogue with tech vendors and investors alike, but it is well known that such language models are prone to errors. In the much-hyped world of LLMs, shortcomings that might be deemed malfunctions in other contexts are euphemistically called "hallucinations."

The generation of inaccurate or nonsensical content nonetheless troubles the industry, so the race is on to detect such hallucinations more accurately and, presumably, to try to eliminate them.

This week, in the UK science journal Nature, Oxford University and DeepMind researcher Sebastian Farquhar and colleagues proposed a method for quantifying the degree to which an LLM is hallucinating, while also indicating how likely the generated content is to be correct.

The study sought to address a subset of hallucinations known as confabulations, output that can be inaccurate and arbitrary, which the researchers say is down to a "lack of knowledge."

The researchers suggest their approach can find confabulations in LLM-generated biographies and in answers to questions on trivia, general knowledge, and life sciences.

In an accompanying article, Karin Verspoor, computing technologies professor at Australia's RMIT University, said: "There is much debate about whether the models actually capture meaning or understand language in any epistemological or cognitive sense given that they lack any awareness of communicative intent or connection to real world objects and impacts.

"However, it is clear that these models perform well in a range of complex language-processing tasks that involve some comprehension."

The research team sought to exploit an LLM's performance on one of these tasks to detect hallucinations in another LLM. Textual entailment is a way of saying one statement can be inferred from another. So, saying "Pat purchased a car" also means "Pat owns a car" but not necessarily that "Pat rode in a car." The Oxford team's approach used LLMs' ability to recognize entailment as a way of spotting confabulations in another LLM.
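The idea of using mutual entailment to group answers can be sketched in a few lines of Python. This is an illustrative toy, not the researchers' code: the `entails` function here stands in for an LLM-based entailment judge (replaced by trivial string normalization so the sketch is runnable), and the entropy over the resulting semantic clusters serves as the confabulation signal — consistent answers yield low entropy, scattered ones high entropy.

```python
import math

def entails(a: str, b: str) -> bool:
    # Stand-in for an LLM entailment judge. In the real method a second
    # LLM decides whether answer a can be inferred from answer b; here a
    # toy rule treats answers as equivalent after normalization.
    return a.strip().lower() == b.strip().lower()

def semantic_clusters(answers):
    """Group sampled answers into clusters whose members mutually entail
    one another, i.e. are semantically equivalent."""
    clusters = []
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]
            if entails(ans, rep) and entails(rep, ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters

def semantic_entropy(answers):
    """Entropy over semantic clusters: high entropy suggests the model's
    answers disagree in meaning, a sign of confabulation."""
    clusters = semantic_clusters(answers)
    n = len(answers)
    return 0.0 - sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Sample the same question several times, then compare cluster entropies.
consistent = ["Paris", "paris", "Paris", "Paris"]   # agree in meaning
scattered = ["Paris", "Lyon", "Marseille", "Nice"]  # disagree in meaning
print(semantic_entropy(consistent), semantic_entropy(scattered))
```

In practice the equality check would be replaced by two calls to an entailment-capable model (does A entail B, and does B entail A?), with answers merged into one cluster only when both hold.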

But they didn't stop there. They also employed a third LLM to validate the findings of the second.

"Our probabilistic approach, accounting for semantic equivalence, detects an important class of hallucinations: those that are caused by a lack of LLM knowledge," the paper says. "These are a substantial portion of the failures at present and will continue even as models grow in capabilities because situations and cases that humans cannot reliably supervise will persist. Confabulations are a particularly noteworthy failure mode for question answering but appear in other domains too."

The researchers suggest that the findings might help people improve LLM performance by tailoring prompts, the supposedly "natural" way users query or instruct an LLM.

"By detecting when a prompt is likely to produce a confabulation, our method helps users understand when they must take extra care with LLMs and opens up new possibilities for using LLMs that are otherwise prevented by their unreliability," the paper adds.

Verspoor agreed the approach might be useful for detecting hallucinations in LLMs and other undesirable output, such as misinformation or plagiarism. But she warned that overreliance on LLMs might lead to further challenges.

"Researchers will need to grapple with the issue of whether this approach is truly controlling the output of LLMs, or inadvertently fueling the fire by layering multiple systems that are prone to hallucinations and unpredictable error," she said. ®
