From CNN to Transformer – NVIDIA is radically revamping its Deep Learning Super Sampling (DLSS). What sounds like a buzzword massacre turns out to be a serious technological leap on closer inspection. A commentary between technical fascination and strategic foresight.
Old school out, new class in – why CNNs are now obsolete
What do classic Convolutional Neural Networks (CNNs) and a brick have in common? Both are stable and reliable – but incredibly cumbersome. For DLSS, that meant acceptable image quality at the cost of an eternal balancing act between performance, ghosting and artifacts. NVIDIA is now pulling the ripcord and replacing the whole thing with a Transformer model. Yes, the same class of architecture that sits behind ChatGPT or Midjourney. The big difference? While CNNs think locally – working through the image pixel by pixel like an accountant grinding through a tax return – the Transformer acts like a literary editor: it understands context. It knows which pixel matters and which is redundant. And not just within one frame, but across several. Sounds like magic? It is – provided enough computing power is available.
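To make the contrast concrete, here is a minimal PyTorch sketch. It is purely illustrative and not NVIDIA's DLSS code: a convolution whose output pixel only ever sees a small neighborhood, next to a self-attention layer in which every pixel token can attend to every other one. The tensor sizes are toy values chosen for readability.

```python
import torch
import torch.nn as nn

frame = torch.randn(1, 64, 32, 32)  # one frame: (batch, channels, height, width)

# CNN view: each output pixel is computed from a 3x3 neighborhood only.
conv = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1)
local_out = conv(frame)  # (1, 64, 32, 32)

# Transformer view: flatten the 32x32 pixels into 1024 tokens; every token
# can attend to every other, so "context" spans the whole frame (and, with
# tokens from several frames concatenated, multiple frames).
tokens = frame.flatten(2).transpose(1, 2)  # (1, 1024, 64)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
global_out, attn_weights = attn(tokens, tokens, tokens)

print(local_out.shape)     # torch.Size([1, 64, 32, 32])
print(global_out.shape)    # torch.Size([1, 1024, 64])
print(attn_weights.shape)  # torch.Size([1, 1024, 1024]): every pixel vs. every pixel
```

That 1024-by-1024 attention matrix for a single 32x32 tile also hints at where the quadrupled compute discussed below comes from.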

The price of progress: quadruple the compute, double the parameters, big leaps in quality
NVIDIA speaks of four times the computing effort compared to the old DLSS approach. That sounds like madness at first, but it can be tamed: FP8 optimizations, Tensor Core acceleration and Blackwell-based architectural alchemy keep the cost in check. The Transformer version also uses twice as many parameters as its CNN predecessor – a massive increase in complexity, but one that evidently pays off: ray reconstruction is much cleaner, edges are razor-sharp, and ghosting is (almost) history. If it works as promised.
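As a rough plausibility check, here is a back-of-envelope sketch in Python. The absolute parameter count is a hypothetical placeholder, since NVIDIA has not published one; only the ratios (twice the parameters, four times the compute, FP8 instead of FP16 storage) come from the claims above.

```python
# Back-of-envelope sketch. cnn_params is a hypothetical baseline, NOT an
# official figure; only the 2x / 4x / FP8 ratios come from NVIDIA's claims.
cnn_params = 100e6                    # hypothetical CNN baseline
transformer_params = 2 * cnn_params   # "twice as many parameters"

bytes_fp16 = 2  # FP16: 2 bytes per weight
bytes_fp8 = 1   # FP8:  1 byte per weight

cnn_weights_mb = cnn_params * bytes_fp16 / 1e6
tf_weights_mb = transformer_params * bytes_fp8 / 1e6

# Doubling the parameters while halving the bytes per weight leaves the
# weight footprint roughly unchanged -- one reason FP8 makes the jump viable.
print(f"CNN @ FP16:        {cnn_weights_mb:.0f} MB")
print(f"Transformer @ FP8: {tf_weights_mb:.0f} MB")
print("Relative compute:  4x (per NVIDIA's own figure)")
```

The takeaway: FP8 largely neutralizes the memory side of the parameter doubling, while the fourfold compute is what the Tensor Cores have to absorb.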
Compatibility with an ulterior motive: the old RTX lives on
An almost diplomatic move: DLSS Transformer works from Turing upward, i.e. RTX 2000 and newer. Owners of older cards get an upgrade too, at least for Super Resolution and DLAA. Only with Multi Frame Generation does NVIDIA draw a clear line: RTX 5000 only. Think ill of it if you like – or call it the work of an analyst who understands the market.
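Expressed as code, the split looks roughly like the following Python sketch. The function name and the architecture enum are invented for illustration and are not any real NVIDIA API; only the gating itself (Super Resolution and DLAA from Turing upward, Multi Frame Generation reserved for RTX 5000) reflects what has been announced.

```python
# Hypothetical sketch of the feature gating described above. Names and
# values are illustrative only, not a real NVIDIA API.
from enum import IntEnum

class Arch(IntEnum):
    TURING = 20      # RTX 2000 series
    AMPERE = 30      # RTX 3000 series
    ADA = 40         # RTX 4000 series
    BLACKWELL = 50   # RTX 5000 series

def supported_transformer_features(arch: Arch) -> set[str]:
    """Return the DLSS Transformer features available on a given architecture,
    per the compatibility split described in the article."""
    features: set[str] = set()
    if arch >= Arch.TURING:
        features |= {"Super Resolution", "DLAA"}
    if arch >= Arch.BLACKWELL:
        features.add("Multi Frame Generation")
    return features

print(supported_transformer_features(Arch.TURING))     # {'Super Resolution', 'DLAA'}
print(supported_transformer_features(Arch.BLACKWELL))  # all three features
```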
Reality versus marketing – where the new DLSS stumbles
Reddit, the tech press and forum regulars agree: there has been progress, but there are also problems. Ghosting has been reduced, not eliminated. Artifacts? Still there. The so-called disocclusion effect remains an issue with fast-moving objects: areas newly revealed behind them have no temporal history to reconstruct from. And VRAM-intensive scenes show that the Transformer is smart, but also hungry. NVIDIA is holding back on hard figures – probably not without good reason.

NVIDIA’s long-term game: A strategic power maneuver
This move can't be viewed in isolation. DLSS Transformer is part of a larger strategy:
- Market protection: if you don't have Tensor Cores, you're out of the game.
- Ecosystem lock-in: DLSS is now in over 125 games – a monopoly with a comfort zone.
- Innovation with lock-in: Old GPUs remain relevant – but only to a limited extent. If you want more, you have to pay.
- Technological dominance: AMD's FSR 3.1? Technically solid, but next to Transformer DLSS it looks like a pocket calculator next to a pocket computer.
Transformer DLSS – a double-edged sword with a golden blade
What NVIDIA is delivering here is technically impressive and strategically clever – and at the same time a calculated move with side effects. The Transformer model takes DLSS to a new level, no question. But the price is high: computing load, complexity, and a certain lack of transparency when problems occur. The marketing department is jubilant, the tech crowd is still frowning – and the consumer? They can see that progress is being made. Just not entirely without hitches.
Source: Videocardz