
Muddy mush becomes crisp pixel fruit – NVIDIA’s DLSS 2.0 put to the test with a GeForce RTX 2060 Super

NVIDIA’s GeForce RTX 2060 Super is, if you leave the non-Super version aside, the entry point into NVIDIA’s new, colorful RTX world. Besides ray tracing, I was especially interested in DLSS and the visual and performance gains that are supposed to turn the “little” RTX from a slowpoke into a real contender. While the original version of DLSS was mocked as a “blurred smear” or even “Vaseline on the screen”, the second iteration, christened DLSS 2.0, has been showered with praise in recent reviews. Today’s hands-on test had to show whether that praise is justified.

The scorn was considerable when NVIDIA proudly introduced Deep Learning Super Sampling – DLSS for short – as the “killer feature” of the Turing architecture. The FPS gain from enabling DLSS was measurable, but it was out of proportion to the loss in picture quality. The immature technology caused severe blurring, artifacts and other lost image information. In many cases, the simple upscaling built into some games offered better image quality with similar performance gains.

The reviews were accordingly mixed and the new technology was quickly dismissed as a flop. But NVIDIA vowed to improve and continued to work on the promising technology. About a year and a half passed before NVIDIA’s deep learning team finally introduced DLSS 2.0 and promised improvements across the board. It seems that this time the original promises were actually kept, as the trade press reacted extremely positively to the new version. This clear change of heart finally made me curious enough to form my own opinion about the Turing feature.

What is DLSS and what does it do?

Since I don’t want to write a scientific article here, I will try to explain the function as simply as possible. DLSS is supposed to use artificial intelligence to deliver a higher frame rate in graphically demanding games without reducing picture quality. For this purpose, supercomputers were taught in a very elaborate process what a game would look like “perfect”, i.e. without visible jagged edges (the so-called aliasing). If the user now starts a game on his computer and activates DLSS, the game is rendered at a lower resolution than the monitor’s, which can significantly increase the frame rate.

However, due to the reduced number of pixels, a lot of image information is lost. This is where the Turing-exclusive Tensor cores come into play, which compare the rendered image with the “perfect” image stored in the neural network. Before output, image information in the form of pixels is added to the rendered frame to compensate for the loss of quality, and the frame is then output at the monitor’s native resolution.
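To make the trade-off tangible, here is a minimal, purely illustrative Python sketch of the render-scaling idea described above. It is not NVIDIA’s actual API; the per-axis scale factors are the commonly cited DLSS 2.0 presets and can vary per title, so treat them as assumptions.

# Illustrative sketch of DLSS-style render scaling (not NVIDIA's API).
# Scale factors are the commonly cited per-axis presets (assumption).
DLSS_MODES = {
    "Quality":     2 / 3,   # ~66.7% of native resolution per axis
    "Balanced":    0.58,
    "Performance": 0.50,
}

def internal_resolution(native_w, native_h, mode):
    """Resolution the game actually renders at before reconstruction."""
    scale = DLSS_MODES[mode]
    return round(native_w * scale), round(native_h * scale)

native_w, native_h = 2560, 1440
for mode in DLSS_MODES:
    w, h = internal_resolution(native_w, native_h, mode)
    pixel_ratio = (w * h) / (native_w * native_h)
    print(f"{mode:12s}: renders {w}x{h} "
          f"({pixel_ratio:.0%} of the native pixel count), "
          f"then reconstructs to {native_w}x{native_h}")

Because the shading work scales roughly with the pixel count, rendering at half the pixels is where most of the frame-rate gain comes from; the reconstruction step then has to fill the lost detail back in.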

Since the so-called “training” of the neural network was very complex and had to be carried out separately for each game and resolution, the technology was only used in a handful of games due to time constraints – and even then it often did not deliver the desired image quality. To reduce the effort, NVIDIA created a universal solution for DLSS 2.0 that eliminates the need to train the neural network separately for each game. In addition, the improved algorithm is supposed to offer significantly better image quality – especially with regard to the sharpness of the final image. If you want to know a little more about the technical background, you can have a look at the two articles by Igor:

 

DLSS 1.0

To see how DLSS has evolved, you first have to look at some examples of the original implementation, of course. The first one that came to mind was the inadequate implementation in Battlefield V, which was torn apart in the original reviews. When I then activated the option in the game, I noticed that the reduction in image quality wasn’t as drastic as the first tests in early 2019 had shown.

 

Apparently, NVIDIA and DICE have already made some improvements in this area. But in the end, the loss of quality is still noticeable; the blurred movement and slightly muddy textures in particular immediately caught my eye. In addition, of all the games tested, Battlefield V showed the smallest FPS increase from DLSS. Here are the full-size images again:

 

Somewhat more serious is the loss of quality in Monster Hunter World, where the focus seems to have been on the implementation of DX12 rather than on improvements to DLSS. Here, too, the blurriness of movement is apparent, and in addition the excessive sharpening leads to artifacts, as you can easily see on the protagonist’s face. The FPS increase of over 30% makes the game much smoother, though.

 

Here again are the full-size images to enlarge:

 

The prime example of a bad implementation was Anthem, where blur and artifacts are joined by an unbelievable glare caused by excessive sharpening. With the other two games, the slight blurring was already tiring for the eyes, but I couldn’t play Anthem with DLSS without getting a headache after a short time – and for once that wouldn’t even have been because of the game’s miserable content.

 

And here, of course, everything in full size again:

 

From here on, things can only get better! So please turn the page!
