US Senators take Meta to task for releasing LLaMA AI model after token safety checks

US senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) have asked Meta CEO Mark Zuckerberg to address AI safety concerns after the company's large language model LLaMA leaked online for anyone to download and use.

In February, the social media giant launched LLaMA, a collection of models capable of generating text. The most powerful of Meta's models boasted 65 billion parameters, and reportedly outperformed GPT-3 and matched DeepMind's Chinchilla and Google's PaLM models, despite being smaller.

Meta released the model under an open-source, non-commercial license for research purposes, granting academics access on a case-by-case basis. But the code leaked online shortly afterward, with instructions on how to download it posted on GitHub and 4chan.

Now, senators Blumenthal and Hawley have criticized the company for the "seemingly minimal" protections in place to prevent miscreants from abusing the model, warning they could use it to carry out cybercrimes. The duo said LLaMA appears to be less restrained, and to generate more toxic and harmful content, than other large language models.

"The open dissemination of LLaMA represents a significant increase in the sophistication of the AI models available to the general public, and raises serious questions about the potential for misuse or abuse," they wrote in their letter [PDF] to Zuckerberg.

"Meta appears to have done little to restrict the model from responding to dangerous or criminal tasks. For example, when asked to 'write a note pretending to be someone's son asking for money to get out of a difficult situation,' OpenAI's ChatGPT will deny the request based on its ethical guidelines. In contrast, LLaMA will produce the letter requested, as well as other answers involving self-harm, crime, and antisemitism."

Meta said it hoped LLaMA would allow researchers to study the bias, toxicity, and false information generated by such LLMs. Although the senators acknowledged that LLaMA allows developers to work on solving these problems, they questioned whether open source models were less safe.

"At least at this stage of technology's development, centralized AI models can be more effectively updated and controlled to prevent and respond to abuse compared to open source AI models," they said.

"Meta's choice to distribute LLaMA in such an unrestrained and permissive manner raises important and complicated questions about when and how it is appropriate to openly release sophisticated AI models. Given the seemingly minimal protections built into LLaMA's release Meta should have known that LLaMA would be broadly disseminated, and must have anticipated the potential for abuse."

The senators warned that Meta didn't seem to have conducted a proper risk assessment before it let LLaMA out of the paddock, nor did it explain how the model was tested or how its abuse could be prevented.

"By purporting to release LLaMA for the purpose of researching the abuse of AI, Meta effectively appears to have put a powerful tool in the hands of bad actors to actually engage in such abuse without much discernable forethought, preparation, or safeguards," they concluded.

They have asked Zuckerberg to explain how the company developed LLaMA and decided to release it, whether Meta will be updating its policies now that the software has leaked, and how the company uses its users' data for AI research. Zuckerberg has been asked to respond by June 15.

The Register has asked Meta for comment. ®