Bad trip coming for AI hype as humanity tools up to fight back

Opinion 6:56 PM. April 11, 2025. Write it down. That's the precise moment the tech-bro-niverse imploded due to the gravitational force of irony at its core. That was the moment Jack Dorsey posted "Delete all IP law" on X. A little later, Elon Musk added his approval with "I agree."

Is this a considered intellectual stance born from a closely argued radical reassessment of the legal, economic, and cultural framework of modern times, or a petulant outburst from an entitled billionaire? We report, you decide. Fortunately for the world beyond Jack's egosphere, deleting IP law is still outside presidential powers, for now, at least. It's also a good thing for Jack and Elon themselves, whose entire empires are built on IP law.

Although there's much that would bear reform, the system of patents, copyright, and trademarks underpins all innovation and commerce, nationally and globally. Who would invest in hardware companies if R&D had a payback time of however many weeks it took to be ripped off? Who would invest in software if any employee could post the source code publicly? Would Linux have evolved without the GPL?

Without IP law, the structure of the 21st century would degenerate into warlord-managed fiefdoms where the organizations with the most power to intimidate would take what they wanted and deny others. Jack and Elon may fondly imagine they'd be among those warlords.

Rather than demonstrating the risible braggadocio of teenage boys on their first beer binge, our dynamic duo should look to the inevitable consequences of acting as if IP law had indeed been deleted. The technology they wish to give free rein to feed on human brains is demonstrably flawed. Not only is no one able to stop it hallucinating on unsullied training data - careful with that coding supply chain, Eugene - but it can also be provoked into a full-blown bad trip by data that appears fine to humans yet contains carefully engineered digital LSD: adversarial noise.

Just like Dr Hofmann's original infamous elixir, adversarial noise is effective in very small amounts, adding tiny tweaks that look or sound the same to humans but are perceived very differently by certain kinds of generative AI. Slipped into training data, it can unhinge the resultant model; slipped into data an unsullied model is trying to analyze, it can embed hidden commands or perceptions that corrupt the AI's output.
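The core trick can be sketched in a few lines. What follows is a toy illustration of gradient-based adversarial noise (the classic fast-gradient-sign idea) against a stand-in logistic "model" - it is not the actual HarmonyCloak or Poisonify code, and every name and number in it is invented for the example. The point is the dosage: each input component is nudged by only about one percent of its typical scale, yet the model's answer collapses.

```python
import numpy as np

# Hypothetical stand-in "model": a logistic classifier with fixed
# random weights, purely to illustrate the adversarial-noise idea.
rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)                 # fixed model weights
x = rng.normal(size=d)                 # a unit-scale input
x += (3.0 - w @ x) / (w @ w) * w       # nudge x so the model scores it z = 3

def predict(v):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ v)))

# The gradient of the log-loss w.r.t. the input (true label = 1) points
# in the direction of change that most increases the model's error.
grad = (predict(x) - 1.0) * w

# The adversarial tweak: move each component by a fixed tiny step in
# the direction of the gradient's sign. Individually imperceptible,
# collectively devastating - the prediction flips.
epsilon = 0.01                         # 1% of the input's typical scale
x_adv = x + epsilon * np.sign(grad)

print(f"clean: {predict(x):.3f}  adversarial: {predict(x_adv):.3f}")
```

Taking the sign of the gradient, rather than the gradient itself, is what keeps every individual tweak uniformly tiny while the thousands of aligned nudges add up to a large shift in the model's internal score - which is why such noise can hide below human perception in dense data like audio and images.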

All this is possible because no matter how the hipsters gussy it up, AI isn't intelligent and works completely differently to our own wetware. Once you set out to deceive it, it goes off the rails. The CIA tested LSD as a mind-control drug last century, but it didn't work out. Adversarial noise does.

Take a look at the work of musician and IP activist Benn Jordan. As he puts it in his latest highly entertaining and informative video, he was trying to work with lawmakers to compel large AI companies to document their training data and have a licensing structure to pay creatives for the use of their work. As he says, the magic mute button for anyone pushing a new paid-for generative AI product is "What data did you use to train your base model?" Either they don't know, or admitting that they just scraped everything makes them liable for copyright claims. Big, big copyright claims. So let's get it out into the open.

All that came to an end on January 20, when the question became "how do we take this into our own hands?" One answer is adversarial noise. Researchers at the University of Tennessee, Knoxville had already created a technology called HarmonyCloak that sounds fine to human ears but completely breaks AI's ability to recognize harmony and rhythm. The results are comically horrific. Adding this to his own Poisonify system, which makes AI misidentify instruments, Jordan brews up a potion that makes an acid casualty of artificial intelligence. Degenerative AI.

These are early days, but this stuff works. Put out protected music and it will poison any AI that feeds on it. That immediately protects musicians and visual artists, as this works on diffusion models common to both audio and graphical content. Enough of it, and business models break down as well. That's before the potential for pranking and attacking voice recognition systems.

This may not appear to affect LLMs directly, since adversarial noise is much harder to disguise in plain text than in audio or images - ordinary users would spot the tampering.

The lesson's the same, Jack and Elon. If you don't demonstrably regularize your training data, your product is vulnerable, your business even more so. IP law provides the framework within which you can do that, protecting you from attack and formalizing your intellectual supply chain.

On which you utterly depend. You dolts. ®
