The open source library ZLUDA is back, and with a bang that echoes even through the thick concrete walls of NVIDIA's developer bunker. After AMD quietly buried the project out of legal caution, presumably following a few worried letters from lawyers, the developers have returned: independent, twice as strong (two developers instead of one!) and ready to break through the fortress walls of CUDA exclusivity.
A brief review: David versus Goliath 2.0
ZLUDA is, or rather was, a pretty clever idea. Take the dominant industry standard for GPU-accelerated computing, NVIDIA's CUDA, and build a layer in between that brings this API to other GPUs. Originally written for Intel GPUs, the project was snapped up by AMD, presumably with a mischievous grin and a calculator full of LLM dreams. Then came legal reality, and AMD dropped ZLUDA like a hot potato. But good ideas don't just die: they mature underground.
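To see why such an interposed layer is even conceivable: CUDA applications reach the GPU through a well-defined driver library (libcuda.so on Linux, nvcuda.dll on Windows) and simply load whatever binary answers to that name. The sketch below is a hypothetical Python/ctypes probe, assuming some CUDA driver (or a drop-in replacement such as ZLUDA's) is installed under the usual library name; it shows the kind of interface an alternative implementation has to satisfy.

```python
# Minimal sketch: applications talk to the CUDA driver API through a shared
# library and do not care who implements it. Assumes a library named
# "libcuda.so" (or "libcuda.so.1" / "nvcuda.dll") is on the loader path;
# that library could be NVIDIA's driver or a drop-in layer like ZLUDA's.
import ctypes

cuda = ctypes.CDLL("libcuda.so")  # on Windows: ctypes.CDLL("nvcuda.dll")

# CUDA driver API calls return 0 (CUDA_SUCCESS) on success.
assert cuda.cuInit(0) == 0

device = ctypes.c_int()  # CUdevice is an int handle
assert cuda.cuDeviceGet(ctypes.byref(device), 0) == 0

name = ctypes.create_string_buffer(256)
assert cuda.cuDeviceGetName(name, 256, device) == 0
print("Device answering the CUDA driver API:", name.value.decode())
```

Whichever library resolves to that name gets to answer the calls; that is the entire trick a compatibility layer exploits.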
Now it’s getting serious – or is it?
According to a recent report on Phoronix, ZLUDA is not only active again, but is also developing into a genuine multi-GPU solution. Running CUDA code on non-NVIDIA hardware? Sounds like blasphemy in Santa Clara, but that is exactly what is now being attempted again. Two developers are currently working on the project – a 100% increase in personnel. When you consider that NVIDIA probably has hundreds of engineers working on TensorRT alone, it’s an almost romantic David-versus-Goliath metaphor.
What has been achieved so far?
- Bit-precise execution: ZLUDA reproduces CUDA behavior so precisely that even individual arithmetic operations return bit-identical results to what a GeForce would produce. Not "almost the same", but really the same. This is not trivial.
- PhysX support: Those pronounced dead live longer; ZLUDA even breathes new life into NVIDIA's dusty physics engine. Will anyone actually want that? Doubtful. But it is feasible.
- LLMs and PyTorch on Radeon & Co.: The first steps have been taken. ZLUDA can already run parts of real CUDA workloads such as PyTorch. It's still a bit bumpy, but at least the train is rolling (a minimal sketch of what this looks like follows this list).
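Concretely, the promise is that code like the following, written against the standard PyTorch CUDA API and never mentioning AMD, would run unchanged. The PyTorch calls themselves are standard; that they actually execute on a Radeon through ZLUDA is the project's claim and depends on the specific build and setup, so treat this as an illustrative sketch rather than a verified recipe.

```python
# Unmodified PyTorch snippet of the kind ZLUDA aims to serve. The code asks
# for "cuda" and uses whatever device the runtime reports; running this on a
# Radeon via ZLUDA is the project's goal, not a guarantee.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Reported device:", torch.cuda.get_device_name(0))

    # A small matrix multiplication, executed wherever "cuda" points to.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b
    print("Result checksum:", c.sum().item())
else:
    print("No CUDA-compatible device/driver found.")
```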
The double-edged sword of openness
On the one hand, an open bridge between CUDA and other platforms would be a blessing for developers who no longer want to be locked in by a single vendor. On the other hand, NVIDIA has paid a great deal for its sovereignty over CUDA: years of work, billions of dollars and hard-won market dominance. And this is precisely why a successful ZLUDA is a ticking time bomb. Legally, the situation is nebulous. ZLUDA is now being developed completely independently of AMD, presumably for good reason. But if you rely on third-party header files and API structures, a shrewd lawyer could quickly suspect copyright infringement.
What does this mean for the market?
If ZLUDA works, it won't be so much a technical revolution as a political one. It shows that developers cannot be locked in the golden cage of the CUDA world forever. It could increase the pressure on NVIDIA to open up its software platform, or at least to document its interfaces better. For AMD and Intel, in turn, it could mean making their hardware more usable in CUDA-driven ecosystems. But the road ahead is rocky. Performance will be one issue, compatibility another. And without broader community participation, the project will remain as fragile as a house of cards on the Las Vegas strip.
A diplomatic capitulation – or just an interim step?
ZLUDA is a fascinating attempt to tear down a wall that has stood for over a decade. With CUDA, NVIDIA has built the Apple ecosystem of the GPU world: closed, high-performance, without alternative. ZLUDA is, with a wink, the Hackintosh of this world. Those who use it are dancing on thin legal ice, but perhaps also dancing their way into a freer future. Whether this ends up as a valid bridge-building project or just a PR flash in the pan depends on the usual three factors: time, money and will. But at least someone has rekindled the fire.
Source: Phoronix