Meta has historically barred its LLMs from being used in ways that could cause harm - but that has apparently changed. The Facebook giant has announced it will allow the US government to use its Llama model family for, among other things, defense and national security applications.
Nick Clegg, Meta's president of global affairs, wrote yesterday that Llama, already available to the public under various conditions, was now available to US government agencies - as well as a number of commercial partners including Anduril, Lockheed Martin, and Palantir. Meta told The Register all of its Llama models have been made available to the US government and its contractors.
Llama - which is described by Meta as open source though it really isn't - is already being used by Uncle Sam's partners such as Oracle to improve aircraft maintenance, and by Scale AI "to support specific national security team missions." IBM, through watsonx, is bringing Llama to national security agencies' self-managed datacenters and clouds, according to Clegg.
"These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish US open source standards in the global race for AI leadership," Clegg asserted.
The new permission for the federal government and its contractors to use Llama for national security purposes conflicts with the model's general-public acceptable use policy, which specifically prohibits use in "military, warfare, nuclear industries or applications, espionage" or "operation of critical infrastructure, transportation technologies, or heavy machinery."
Even so, we're told nothing's changing - outside of the deal Clegg announced.
"Our Acceptable Use Policy remains in place," a Meta spokesperson told us. "However, we are allowing the [US government] and companies that support its work to use Llama, including for national security and other related efforts in compliance with relevant provisions of international humanitarian law."
Clegg waxed philosophical throughout his blog post about how the success of Llama's ostensibly open design was fundamental to American economic and national security needs.
"In a world where national security is inextricably linked with economic output, innovation and job growth, widespread adoption of American open source AI models serves both economic and security interests," Clegg wrote. "We believe it is in both America and the wider democratic world's interest for American open source models to excel and succeed over models from China and elsewhere."
Clegg went on to argue that open standards for AI will increase transparency and accountability - which is why the US has to get serious about making sure its vision for the future of the tech becomes the world standard.
"The goal should be to create a virtuous circle, helping the United States retain its technological edge while spreading access to AI globally and ensuring the resulting innovations are responsible and ethical, and support the strategic and geopolitical interests of the United States and its closest allies," Clegg explained.
To that end, Meta told Bloomberg, similar offers for the use of Llama by government entities were extended to the US's "Five Eyes" intelligence partners: Canada, the UK, Australia, and New Zealand.
But let's not forget the self-serving aspect of this deal.
It was just days ago, during Meta's Q3 earnings call, that Mark Zuckerberg asserted that opening up Llama would benefit his company, too - by ensuring its AI designs become a sort of de facto standard.
"As Llama gets adopted more, you're seeing folks like Nvidia and AMD optimize their chips more to run Llama specifically well, which clearly benefits us," Zuckerberg told investors listening to the earnings call. "So it benefits everyone who's using Llama, but it makes our products better rather than if we were just on an island building a model that no one was kind of standardizing around in the industry."
The announcement is perfectly timed to give Llama a patriotic paint job after news broke last week that researchers in China reportedly had built Llama-based AI models for military applications.
Meta maintained that China's use of Llama was unauthorized and contrary to its acceptable use policy. And that's inviolable - except for the US government and its allies, apparently. ®