UK's new thinking on AI: Unless it's causing serious bother, you can crack on

Comment The UK government on Friday said its AI Safety Institute will henceforth be known as the AI Security Institute, a rebranding that attests to a shift in regulatory ambition: from ensuring AI models are made with wholesome content to punishing AI-abetted crime.

"This new name will reflect its focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse," the government said in a statement of the retitled public body.

AI safety - "research, strategies, and policies aimed at ensuring these systems are reliable, aligned with human values, and not causing serious harm," as defined by The Brookings Institution - has seen better days.

Between Meta's dissolution of its Responsible AI Team in late 2023, the refusal of Apple and Meta to sign the EU's AI Pact last year, the Trump administration ripping up Biden-era AI safety rules, and concern about AI competition from China, there appears to be less appetite for preventive regulation - like what the US Food and Drug Administration tries to do with the food supply - and more interest in proscriptive regulation: enjoy your biased, racist AI, but don't use it to commit acts of terror or sex crimes.

"[The AI Security Institute] will not focus on bias or freedom of speech, but on advancing our understanding of the most serious risks posed by the technology to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops," the UK government said, championing unfettered discourse in a way not evident in its reported stance on encryption.

Put more bluntly, the UK is determined not to regulate the country out of the economic benefits of AI investment and the labor consequences that come with it - AI jobs and AI job replacement.

Peter Kyle, Secretary of State for Science, Innovation, and Technology, said as much in a statement: "The changes I'm announcing today represent the logical next step in how we approach responsible AI development - helping us to unleash AI and grow the economy as part of our Plan for Change." That plan being the Labour government's blueprint of priorities.

A key partner in that plan now is Anthropic, which has distinguished itself from rival OpenAI by staking out the moral high ground among commercial AI firms. Built by ex-OpenAI staff and others, it identifies itself as "a safety-first company," though whether that matters much anymore remains to be seen.

Anthropic and the UK's Department for Science, Innovation and Technology (DSIT) have signed a Memorandum of Understanding to make AI tools that can be integrated into UK government services for citizens.

"AI has the potential to transform how governments serve their citizens," said Dario Amodei, CEO and co-founder of Anthropic, in a statement. "We look forward to exploring how Anthropic's AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents."

Allowing AI to deliver government services has gone swimmingly in New York City, where the MyCity Chatbot, which relies on Microsoft's Azure AI, last year gave business owners advice that violated the law. The Big Apple addressed this not by demanding an AI model that gets things right, but by adding a disclaimer in a popup window.

The disclaimer dialogue window also comes with a you're-to-blame-if-you-use-this checkbox, "I agree to the MyCity Chatbot's beta limitations." Problem solved.

Anthropic appears to be more optimistic about its technology and cites several government agencies that have already befriended its Claude family of LLMs. The San Francisco upstart notes that the Washington, DC Department of Health has partnered with Accenture to build a Claude-based bilingual chatbot to make its services more accessible and to provide health information on demand. Then there's the European Parliament, which uses Claude for document search and analysis - so far without the pangs of regret evident among those using AI for legal support.

In England, Swindon Borough Council offers a Claude-based tool called "Simply Readable," hosted on Amazon Bedrock, that makes documents more accessible for people with disabilities by reformatting them with larger font, increased spacing, and additional images.
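
The plumbing for such a tool is not hard to picture. What follows is a minimal sketch, not Swindon's actual code: it assumes Bedrock's Converse API called from Python via boto3, a generic Claude model ID, and an easy-read prompt of our own invention; the larger fonts and extra spacing would presumably be applied when the rewritten text is laid out for print or screen.

import boto3

# Bedrock Runtime client - assumes AWS credentials and a region where Claude is enabled
client = boto3.client("bedrock-runtime", region_name="eu-west-2")

def make_simply_readable(document_text: str) -> str:
    """Ask a Bedrock-hosted Claude model to rewrite a document in an easy-read style."""
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID, not necessarily Swindon's
        system=[{
            "text": "Rewrite documents in an easy-read format: short sentences, "
                    "plain words, one idea per paragraph, and a suggested "
                    "illustration for each section."
        }],
        messages=[{
            "role": "user",
            "content": [{"text": f"Make this document simply readable:\n\n{document_text}"}],
        }],
        inferenceConfig={"maxTokens": 2048, "temperature": 0.2},
    )
    # The Converse API returns the assistant's reply as a list of content blocks
    return response["output"]["message"]["content"][0]["text"]

print(make_simply_readable("Council tax is due in ten monthly instalments..."))

Running the model at a low temperature keeps the rewrite faithful to the source rather than creative - which is what you want when the document is, say, council tax guidance.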

The result has been significant financial savings, it's claimed. Where previously documents of 5-10 pages cost around £600 to convert, Simply Readable does the job for just 7-10 pence, freeing funds for other social services.

According to the UK's Local Government Association (LGA), the tool has delivered a 749,900 percent return on investment.
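
For what it's worth, that figure squares with the numbers above if you assume a per-document cost of 8p: replacing a £600 conversion with an 8p one saves £599.92, and £599.92 divided by the 8p outlay works out to roughly 7,499 times the spend - or 749,900 percent.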

"This staggering figure underscores the transformative potential of 'Simply Readable' and AI-powered solutions in promoting social inclusion while achieving significant cost savings and improved operational efficiency," the LGA said earlier this month.

No details are offered on whether those AI savings entailed a cost in jobs - or expenditure in the form of Jobseeker's Allowance.

But Anthropic in time may have some idea about that. The UK government deal involves using the AI firm's recently announced Economic Index, which uses anonymized Claude conversations to estimate AI's impact on labor markets. ®
