Misconfigured AI could trigger the next national infrastructure meltdown

The next blackout to plunge a G20 nation into chaos might not come courtesy of cybercriminals or bad weather, but from an AI system tripping over its own shoelaces.

Analyst firm Gartner warned this week that misconfigured artificial intelligence embedded in national infrastructure could shut down critical services in a major economy as soon as 2028, delivering the kind of disruption usually blamed on hostile governments or catastrophic natural events. The prediction centers on the rapid adoption of AI in cyber-physical systems, which Gartner defines as "systems that orchestrate sensing, computation, control, networking, and analytics to interact with the physical world (including humans)."

Gartner's warning isn't really about attackers taking over AI tools - it's about what happens when everything is working as intended... until it isn't. More operators are allowing machine learning systems to make real-time decisions, and those systems can respond unpredictably if a setting is changed, an update is pushed, or flawed data is entered.

Unlike traditional software bugs that might crash a server or scramble a database, errors in AI-driven control systems can spill into the physical world, triggering equipment failures, forcing shutdowns, or destabilizing entire supply chains, Gartner warns.

"The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal," cautioned Wam Voster, VP Analyst at Gartner.

Power grids are an obvious stress test. Energy firms now rely heavily on AI to monitor supply, demand, and renewable generation. If the software malfunctions or misreads data, sections of the network could go dark, and repairing damaged grid hardware is rarely a quick process. The same creeping automation is turning up in factories, transport systems, and robotics, where AI is slowly taking over decisions that used to involve a human looking mildly concerned at a dashboard.

Gartner's bigger worry is how quickly AI is being deployed where mistakes don't just crash software; they break real things. AI is turning up in systems where failures can shut down physical infrastructure, yet the models themselves aren't always fully understood, even by the teams building them. That makes it difficult to predict how they'll react when something unexpected happens or when routine updates are released.

"Modern AI models are so complex they often resemble black boxes," said Voster. "Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed."

While regulators have spent years focusing on cybersecurity threats to operational technology, Gartner's forecast suggests the next wave of infrastructure risk could be self-inflicted rather than adversary-driven. ®
