Cursor AI's own support bot hallucinated its usage policy

In a fitting bit of irony, users of Cursor AI experienced the limitations of AI firsthand when the programming tool's own AI support bot hallucinated a policy limitation that doesn't actually exist.

Users of the Cursor editor, designed to generate and fix source code in response to user prompts, have sometimes been booted from the software when trying to use the app in multiple sessions on different machines.

Some folks who inquired about the inability to maintain multiple logins for the subscription service across different machines received a reply from the company's support email indicating this was expected behavior.

But the person on the other end of that email wasn't a person at all, but an AI support bot. And it evidently made that policy up.

In an effort to placate annoyed users this week, Cursor co-founder Michael Truell published a note to Reddit to apologize for the snafu.

"Hey! We have no such policy," he wrote. "You're of course free to use Cursor on multiple machines.

"Unfortunately, this is an incorrect response from a front-line AI support bot. We did roll out a change to improve the security of sessions, and we're investigating to see if it caused any problems with session invalidation."

Truell added that Cursor provides an interface for viewing active sessions in its settings and apologized for the confusion.

In a post to the Hacker News discussion of the snafu, Truell again apologized and acknowledged that something had gone wrong.

"We've already begun investigating, and some very early results: Any AI responses used for email support are now clearly labeled as such. We use AI-assisted responses as the first filter for email support."

He said the developer who raised this issue had been refunded. The session logout issue, now fixed, appears to have been the result of a race condition that arises on slow connections and spawns unwanted sessions.
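A check-then-act race of this kind is easy to reproduce. The sketch below is a hypothetical illustration (not Cursor's actual code, whose details weren't disclosed): a login handler checks for an existing session and then creates one, and a slow connection widens the gap between the check and the create, so two concurrent logins can both pass the check and each spawn a session.

```python
import threading
import time

sessions = []               # shared session store
lock = threading.Lock()

def login_unsafe(user, delay):
    # Check-then-act without a lock: the classic race window.
    if user not in sessions:
        time.sleep(delay)       # simulate a slow network round-trip
        sessions.append(user)   # both threads can reach this line

def login_safe(user, delay):
    # Holding a lock across the check and the create closes the window.
    with lock:
        if user not in sessions:
            time.sleep(delay)
            sessions.append(user)

# Two concurrent logins for the same user on a slow connection:
threads = [threading.Thread(target=login_unsafe, args=("alice", 0.1)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(len(sessions))  # 2 -- the race spawned a duplicate session

sessions.clear()
threads = [threading.Thread(target=login_safe, args=("alice", 0.1)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(len(sessions))  # 1 -- the lock serializes check-and-create
```

The fix reported by Cursor presumably amounts to the same idea: making the session check and session creation atomic so concurrent logins can't both slip through.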

Truell did not immediately respond to our requests for comment.

AI models are well known to hallucinate, generating inaccurate or low-quality responses to input prompts; to users, it appears the software just invents stuff out of thin air.

As noted in Nature earlier this year, hallucinations cannot be stopped, though they can be managed. AI model repository Hugging Face documents the phenomenon in its Hallucination Leaderboard, which compares how different AI models perform on various benchmark tests.

Marcus Merrell, principal technical advisor for Sauce Labs, an application testing biz, said more thorough testing of the support bot could have mitigated the risk of misstatements.

"This support bot fell victim to two problems here: Hallucinations, and non-deterministic results," Merrell told The Register.

"We all know about hallucinations, but the non-deterministic piece was at play here, too: if multiple people ask the same question, they're likely to get different results. So some users saw the message about the new policy change, and others didn't. This led to confusion within the company and online, as customers saw inconsistent messaging."

Merrell added, "For a support bot, this is unacceptable. Humans doing support usually have a script and a process. It's possible that the LLM can be refined in a way that mitigates these problems, but companies are racing to roll them out at scale - choosing to save staffing costs - and putting their brand at risk in the process. Letting users know 'this response was generated by AI' is likely to be an inadequate measure to recover user loyalty." ®
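The non-determinism Merrell describes comes from how most chatbots decode their answers: sampling from a probability distribution rather than always picking the top-scoring option. The toy sketch below (an illustration of the general mechanism, not of Cursor's actual bot; the candidate answers and scores are invented) shows the same question producing different answers across calls when sampling with temperature, versus one consistent answer with greedy decoding.

```python
import math
import random

# Hypothetical candidate answers and model scores for one support question.
ANSWERS = ["one device per subscription", "multiple machines are fine", "please contact support"]
SCORES = [1.2, 1.0, 0.3]

def decode(scores, temperature, rng):
    # Greedy decoding (temperature 0) always returns the argmax: deterministic.
    if temperature == 0:
        return max(range(len(scores)), key=lambda i: scores[i])
    # Otherwise sample from a softmax over temperature-scaled scores.
    logits = [s / temperature for s in scores]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    return rng.choices(range(len(scores)), weights=weights)[0]

# Ask the "same question" 20 times each way, with different random seeds.
greedy = {ANSWERS[decode(SCORES, 0, random.Random(seed))] for seed in range(20)}
sampled = {ANSWERS[decode(SCORES, 1.0, random.Random(seed))] for seed in range(20)}
print(len(greedy))   # 1 -- every user gets the same answer
print(len(sampled))  # more than one distinct answer across users
```

With sampling enabled, some users get one answer and others another, which is exactly how inconsistent messaging about a nonexistent policy can spread.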
