AI, extinction, nuclear war, pandemics ... That's expert open letter bingo

There's another doomsaying open letter about AI making the rounds. This time a wide swath of tech leaders, ML luminaries, and even a few celebrities have signed on to urge the world to take the alleged extinction-level threats posed by artificial intelligence more seriously.

More a statement than a letter, the message from the Center for AI Safety (CAIS), signed by individuals including AI pioneer Geoffrey Hinton, OpenAI CEO Sam Altman, encryption guru Martin Hellman, Microsoft CTO Kevin Scott, and others, is a single declarative sentence predicting apocalypse if it goes unheeded:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Why so brief? The goal was "to demonstrate the broad and growing coalition of AI scientists, tech leaders, and professors that are concerned by AI extinction risks. We need widespread acknowledgement of the stakes so we can have useful policy discussions," CAIS director Dan Hendrycks told The Register.

CAIS makes no mention of artificial general intelligence (AGI) in its list of AI risks, we note. And current-generation models, such as ChatGPT, are not an apocalyptic threat to humanity, Hendrycks told us. The warning this week is about what may come next.

"The types of catastrophic threats this statement refers to are associated with future advanced AI systems," Hendrycks opined. He added that the advances needed to reach the level of "apocalyptic threat" may be as little as two to 10 years away, not several decades. "We need to prepare now. However, AI systems that could cause catastrophic outcomes do not need to be AGIs," he said.

Because humans are perfectly peaceful anyway

One such threat is weaponization, or the idea that someone could repurpose benevolent AI to be highly destructive, such as using a drug discovery bot to develop chemical or biological weapons, or using reinforcement learning for machine-based combat. Humans are already quite capable of manufacturing weapons that can take out a person, neighborhood, city, or country, mind you.

AI could also be trained to pursue its goals without regard for individual or societal values, we're warned. It could "enfeeble" humans who end up ceding skills and abilities to automated machines, causing a power imbalance between AI's controllers and those displaced by automation, or be used to spread disinformation, intentionally or otherwise.

Again, none of the AI involved in that needs to be general, and it's not too much of a stretch to see the potential for current-generation AI to evolve to pose the sorts of risks CAIS is worried about. You may have your own opinions on how truly destructive or capable the software could be, and what it could really achieve.

It's crucial, CAIS' argument goes, to examine and address the negative impacts of AI that are already being felt, and to turn those extant impacts into foresight. "As we grapple with immediate AI risks ... the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence," Hendrycks said in a statement.

"The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems," Hendrycks urged, with a list of corporate, academic and thought leaders backing him up.

Musk's not on board

Other signatories include Google DeepMind principal scientist Ian Goodfellow, philosophers David Chalmers and Daniel Dennett, author and blogger Sam Harris and musician/Elon Musk's ex, Grimes. Speaking of the man himself, Musk's signature is absent.

The Twitter CEO was among those who signed an open letter published by the Future of Life Institute this past March urging a six-month pause on the training of AI systems "more powerful than GPT-4." Unsurprisingly, OpenAI CEO Altman's signature was absent from that particular letter, ostensibly because it called his company out directly.

OpenAI has since issued its own warnings about the threats posed by advanced AI and called for the establishment of a global watchdog akin to the International Atomic Energy Agency to regulate use of AI.

That warning and regulatory call, in a case of historically poor timing, came the same day Altman threatened to pull OpenAI, and ChatGPT with it, from the EU over the bloc's AI Act. Rules of his own devising are one thing, but Altman told Brussels its idea of AI restriction was a regulatory bridge too far, thank you very much.

EU parliamentarians responded by saying they wouldn't be dictated to by OpenAI, and that if the company can't comply with basic governance and transparency rules, "their systems aren't fit for the European market," asserted Dutch MEP Kim van Sparrentak.

We've asked OpenAI for clarification on Altman's position(s) and will update this story if we hear back. ®