What's going on with AMD funding a CUDA translation layer, then nuking it?

Analysis AMD's legal team appears to have clawed back control of much of the ZLUDA project's code base. The open source project, for which the House of Zen pulled support earlier this year, enabled compiled CUDA code to run natively on non-Nvidia GPUs.

ZLUDA was introduced as a way to run unadulterated CUDA binaries on Intel GPUs. Project lead Andrzej Janik eventually signed a development contract with AMD to target its GPUs, too. CUDA is Nvidia's toolset that lets developers tap GPUs to run their code.

Public commits to ZLUDA ceased in early 2022, for reasons that only became clear earlier this year when AMD stopped financing its development and work resumed in the open, as Phoronix clocked in February.

ZLUDA is an interesting project: it's a translation layer that lets compiled CUDA programs run directly on AMD hardware - and for a while there, Intel too - without any need to port and recompile the source. For example, we previously looked at a way to use ZLUDA to run Stable Diffusion in Automatic1111 on AMD cards under Windows.
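To give a flavor of how such a layer works - what follows is a simplified sketch of the general technique, with function choices of our own, not ZLUDA's actual code - the trick is to build a drop-in replacement for Nvidia's driver library (libcuda.so on Linux, nvcuda.dll on Windows) that exports the CUDA Driver API's entry points and forwards them to another vendor's stack, such as AMD's HIP runtime:

```cpp
// Illustrative sketch of a CUDA-to-HIP translation shim (not ZLUDA's code).
// Built as a shared library that stands in for libcuda.so / nvcuda.dll, so an
// unmodified CUDA binary resolves its Driver API calls here instead of in
// Nvidia's driver.
#include <hip/hip_runtime_api.h>

// Simplified stand-ins for the CUDA Driver API types normally found in cuda.h.
using CUresult = int;          // 0 == CUDA_SUCCESS
using CUdevice = int;
static constexpr CUresult CUDA_SUCCESS = 0;
static constexpr CUresult CUDA_ERROR_UNKNOWN = 999;

// Map HIP error codes back onto the CUDA-style codes the caller expects.
static CUresult to_cu(hipError_t err) {
    return err == hipSuccess ? CUDA_SUCCESS : CUDA_ERROR_UNKNOWN;
}

extern "C" CUresult cuInit(unsigned int flags) {
    return to_cu(hipInit(flags));
}

extern "C" CUresult cuDeviceGetCount(int *count) {
    return to_cu(hipGetDeviceCount(count));
}

extern "C" CUresult cuDeviceGet(CUdevice *device, int ordinal) {
    return to_cu(hipDeviceGet(device, ordinal));
}

// A real translation layer implements hundreds more entry points, plus the
// hard part: handling the PTX/binary kernels embedded in the CUDA application
// so they can actually execute on the other vendor's GPU.
```

ZLUDA itself was originally built on Intel's oneAPI Level Zero and later retargeted at AMD's ROCm/HIP stack; the handful of calls above barely scratches the surface of what a complete implementation has to cover.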

Janik was able to release his work on the project earlier this year because a clause in his development contract was understood to allow him to resume work in the open if AMD lost interest. "One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it," he wrote in the README for version 3.

However, it seems AMD's legal team has come to a different conclusion. In a statement pinned atop the project's GitHub page this week, Janik wrote that after initially granting permission to release the code, AMD changed its mind.

"The code that was previously here has been taken down at AMD's request. The code was released with AMD's approval through an email. AMD's legal department now says it's not legally binding, hence the rollback," he wrote.

Janik was careful to note that the takedown request came from AMD and that he has "received no legal threats or any communication from Nvidia."

In a Reddit post earlier this week, Janik noted that he'd "consulted a lawyer: the legality of emails is unimportant. The choice is between a rewrite - cheaper, guaranteed result - and possibly fighting it in a court - more expensive, no guarantee of the result I want."

We asked AMD for its side of this story, and though its representatives acknowledged our request, we're still awaiting a response. Janik declined to comment.

This isn't the end for the ZLUDA project. Its contributors remain committed to rebuilding the project from a "pre-AMD" codebase and note funding for continued development is in prospect. What may change is the scope of the project - the devs note that "certain features will not come back," and teased support for Nvidia GameWorks - middleware the company offers to developers to speed jobs like creating visual effects.

Why bury ZLUDA?

While we don't know why AMD's attitude to ZLUDA changed, a few factors may have contributed to the decision.

The first and most obvious is that AMD simply wanted to distance itself from any legal exposure for supporting the development of a project that may violate the Nvidia CUDA terms of service.

Nvidia's terms of service have explicitly restricted the use of translation layers to run CUDA code on other hardware platforms since mid-2021. "You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-Nvidia platform," the EULA reads.

If developers became dependent on ZLUDA to write code for both Team Red and Team Green's GPUs, they could be forced into swift rewrites should Nvidia choose to enforce its fine print.

In a similar vein, the Radeon wrangler may have grown concerned that even without its support, the continued availability of ZLUDA could end up undermining AMD's own software efforts. Why optimize applications for ROCm or HIP if you can just run CUDA code on AMD GPUs instead? And, while AMD does have its own CUDA translation tool - which we'll discuss in a bit - it's largely aimed at porting and recompiling source code rather than simply running already-built CUDA programs on its accelerators.

Considering that the take-down notice came from AMD's legal team, we also can't rule out the possibility of a disagreement over what code created by the ZLUDA team could and couldn't be released.

It certainly wouldn't be the first time that overzealous lawyers tasked with protecting a client's intellectual property rights have made a mess of things. You may recall last year when lawyers contracted to enforce Arm's copyright and trademarks ended up getting an assembly language guru's domains pulled offline.

Not just ZLUDA

As we alluded to earlier, ZLUDA is far from the only project aimed at getting CUDA applications working on non-Nvidia GPUs. Last month, a compiler toolkit called SCALE appeared in beta.

In a blog post announcing the project, Michael Søndergaard - who according to LinkedIn is the CEO of Spectral Compute - described SCALE as "a GPGPU toolkit, similar to Nvidia's CUDA Toolkit, with the capability to produce binaries for non-Nvidia GPUs when compiling CUDA code."

For the moment, the project is targeting AMD GPUs, but Søndergaard wrote that support for additional vendors was in the works.

AMD itself offers HIPIFY, which provides tools for "automatically" translating CUDA source code into portable HIP C++ code. But as our sibling site The Next Platform pointed out late last year, HIPIFY's automation is weaker than AMD's description might have you believe.

One of the problems is that HIPIFY doesn't take into account device-side template arguments in texture memory or multiple CUDA header files, and thus requires manual intervention by developers.

We understand HIPIFY differs from efforts like ZLUDA in that it's designed for source-to-source translation - as opposed to running unmodified CUDA binaries natively.
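As a rough illustration of that source-to-source step - this is a hand-written before-and-after, not actual HIPIFY output - most CUDA runtime calls map almost one-to-one onto HIP equivalents, and that mechanical renaming is the part the tool automates:

```cpp
// scale.cu -- original CUDA source (simplified).
#include <cuda_runtime.h>

__global__ void scale(float *x, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= k;
}

void run(float *host, float k, int n) {
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, k, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

// scale.hip.cpp -- the HIP translation: same kernel body, header swapped,
// cuda* calls renamed to hip*. Compile with hipcc for AMD GPUs.
#include <hip/hip_runtime.h>

__global__ void scale(float *x, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= k;
}

void run(float *host, float k, int n) {
    float *dev;
    hipMalloc(&dev, n * sizeof(float));
    hipMemcpy(dev, host, n * sizeof(float), hipMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, k, n);
    hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dev);
}
```

The cases The Next Platform flagged - device-side templates, texture handling, code scattered across multiple headers - are where that mechanical mapping breaks down and a developer has to step in by hand.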

A similar project that we've looked at is the SYCL toolkit found in Intel's oneAPI framework. If you're not familiar, SYCL is similar to HIPIFY in that it handles most of the heavy lifting - purportedly up to 95 percent - of porting CUDA code to a format that can run on non-Nvidia accelerators. Unlike HIPIFY, SYCL is designed to be cross-platform - enabling code to run on AMD, Intel, and Nvidia GPUs.
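For comparison, here's the same operation written in SYCL - again a hand-rolled sketch rather than the output of Intel's migration tooling - the point being that a single SYCL source file can be compiled for Intel, AMD, or Nvidia back ends depending on the toolchain:

```cpp
// scale.sycl.cpp -- SYCL version of the same scaling operation (hand-written
// illustration, not tool output). Builds with a SYCL 2020 compiler such as
// Intel's DPC++; which GPU it runs on depends on the back end selected.
#include <sycl/sycl.hpp>

void run(float *host, float k, int n) {
    sycl::queue q;  // default selector picks an available device
    {
        sycl::buffer<float> buf(host, sycl::range<1>(n));
        q.submit([&](sycl::handler &h) {
            sycl::accessor x(buf, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                x[i] *= k;
            });
        });
    }  // the buffer's destructor waits and writes the results back to 'host'
}
```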

If you're curious about either, we recommend checking out The Next Platform for a deeper dive into porting CUDA to alternative accelerators. ®
