
MSI RX 6800XT Gaming X Trio 16 GB Review – Sanity on silent soles, with decent reserves for the daring

After the already extensively tested reference cards of AMD’s RX 6000 series, the MSI RX 6800XT Gaming X Trio 16 GB has now found its way into my lab as another board partner card. The continued unavailability of all these graphics cards is frankly embarrassing, and all the more annoying because these are really good, competitive products that deserve to reach the many willing buyers. Unfortunately, real improvement is not yet in sight, and so today I am testing a card you could buy, if only it were actually in stock. But one thing at a time, because looking and working up an appetite is an option too. It is still a nasty situation, though, sorry.

With the (attention, officially prescribed name according to AMD’s nomenclature) “MSI Gaming X Trio AMD Radeon™ RX 6800 XT Gaming Graphics Card with 16GB GDDR6, AMD RDNA™ 2”, which I will simply call MSI RX 6800XT Gaming X Trio 16 GB in this article because otherwise the charts and legends would explode, we have a thoroughly interesting RDNA2 specimen that is so quiet you have to listen twice to hear it at all. MSI also didn’t set the factory clock too high, leaving plenty of room for your own OC experiments. I will show later that this is indeed possible.

Like all RX 6000 models, this graphics card supports decoding of the new AV1 video codec, and the series supports DirectX 12 Ultimate for the first time and thus also DirectX Raytracing (DXR). With AMD FidelityFX, it also offers a toolkit that gives developers more leeway in their choice of effects. Also included is Variable Rate Shading (VRS), which can save an immense amount of processing power by smartly reducing the shading quality of image areas that the player’s eye doesn’t focus on anyway. So much for the feature set of all new Radeon cards.

With a current street price of over 1,200 euros, German retailers have added a fat markup that should definitely be remembered later, once availability improves. It is probably best to avoid the particularly brash shops in the future. MSI’s MSRP of just over 800 USD is not as far above that of the reference cards as some competitors’ announcements, and it almost looks cheap next to the current scalper prices. Where street prices used to be balm for the RRP-stricken shopper’s soul, they (and some retailers) are now downright to be feared.

Look and feel

The MSI RX 6800XT Gaming X Trio 16 GB weighs 1552 grams and is thus naturally heavier than the reference card. It is also longer at a full 32 cm, a stately 13.5 cm high (installation height from the PEG slot) and 5.4 cm thick (2.5-slot design), to which the backplate and the PCB add a further five millimeters in total. The body is made of two-tone plastic; the MSI lettering and the light strip on the top are LED-illuminated.

 

The graphics brick, including the lighting, is powered by two standard 8-pin sockets, so everything here is familiar territory. We also see the vertical orientation of the cooling fins and the board reinforcement in the form of a backplate and a frame that extends about halfway along the card.

The slot bracket is closed and carries 1x HDMI 2.1 plus three current DisplayPort connectors. A USB Type-C port, on the other hand, is missing. More about the construction, the cooler and the assembly on the next page in the teardown.

Technology

With 72 compute units (CUs), the MSI RX 6800XT Gaming X Trio 16 GB has a total of 4608 shaders. The game clock is specified at 2045 MHz, while the boost clock of 2285 MHz is also actually reached. The card relies on 16 GB of GDDR6 at 16 Gbps, made up of eight modules of 2 GB each, attached to a 256-bit memory interface and backed by the 128 MB Infinity Cache, which is supposed to solve the bandwidth problem. The card does not have a switchable dual BIOS, which is actually a shame.
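For orientation, the headline figures can be recalculated quickly; the following back-of-the-envelope sketch (my own illustration, not vendor data) shows how the shader count and the raw memory bandwidth follow from the CU count and the memory configuration.

```python
# Back-of-envelope arithmetic for the RX 6800 XT configuration (illustrative only)
compute_units = 72
shaders_per_cu = 64                        # RDNA 2: 64 stream processors per CU
shaders = compute_units * shaders_per_cu   # 72 * 64 = 4608

bus_width_bits = 256
memory_speed_gbps = 16                     # 16 Gbps per pin (GDDR6)
bandwidth_gbs = bus_width_bits * memory_speed_gbps / 8   # raw figure, before Infinity Cache

print(f"Shaders: {shaders}, raw memory bandwidth: {bandwidth_gbs:.0f} GB/s")
```

That works out to 512 GB/s of raw bandwidth, which is exactly the gap the 128 MB Infinity Cache is meant to paper over.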

Raytracing / DXR

At the latest since the presentation of the new Radeon cards, it has been clear that AMD will also support ray tracing. Here, AMD takes a path that clearly deviates from NVIDIA’s and implements one so-called “Ray Accelerator” per compute unit (CU). Since the Radeon RX 6800XT has a total of 72 CUs, this also results in 72 such accelerators, while the smaller Radeon RX 6800 still has 60. A GeForce RTX 3080 comes with 68 RT cores, which is nominally less for now. Comparing the smaller cards, the score is 60 for the RX 6800 against 46 for the GeForce RTX 3070. However, the RT cores are organized differently, and we will have to wait and see what sheer quantity can do against specialization here. In the end, it is an apples-and-oranges comparison for now.

But what has AMD come up with here? Each of these accelerators is capable of computing up to 4 ray/box intersections or a single ray/triangle intersection per cycle. This way, the intersection points of the rays with the scene geometry are calculated (along the Bounding Volume Hierarchy), pre-sorted, and this information is then returned to the shaders for further processing within the scene, or the final shading result is output. NVIDIA’s RT cores, however, appear to take a much more complex approach, as I explained in detail at the Turing launch. What counts in the end is the result alone, and that is exactly what we have suitable benchmarks for.
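To make the figure of up to 4 ray/box tests per cycle a little more tangible, here is a minimal software sketch of the classic slab test that a BVH traversal performs for every ray/box pair. This is purely illustrative Python, not AMD’s actual hardware logic – the point is only to show what a single one of those tests looks like.

```python
# Minimal ray/AABB "slab test" as used during BVH traversal (illustrative, not AMD's implementation)
def ray_hits_box(origin, inv_dir, box_min, box_max):
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far   # True if the ray crosses the box

# A Ray Accelerator evaluates up to four such box tests (or one ray/triangle test)
# per clock and hands the sorted hits back to the shaders.
origin = (0.0, 0.0, 0.0)
inv_dir = (1.0, 1.0, 1.0)   # 1 / direction component, precomputed per ray
print(ray_hits_box(origin, inv_dir, (1.0, 1.0, 1.0), (2.0, 2.0, 2.0)))
```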

Smart Access Memory (SAM)

AMD already showed SAM, i.e. Smart Access Memory, at the presentation of the new Radeon cards – a feature I enabled today in addition to the normal benchmarks, which also allows a direct comparison. Actually, however, SAM is nothing new, just packaged more nicely in marketing terms. It is nothing other than the clever handling of the Base Address Register (BAR), and exactly this support must be activated in the underlying platform. With modern AMD graphics hardware, resizable PCI BARs (see also the PCI SIG specification from 4/24/2008) have played an important role for quite some time, since the actual PCI BARs are normally limited to 256 MB, while the new Radeon graphics cards now carry up to 16 GB of VRAM.

The result is that only a fraction of the VRAM is directly accessible to the CPU, which without SAM requires a whole series of workarounds in the so-called driver stack. Of course, this always costs performance and should therefore be avoided. This is exactly where AMD comes in with SAM. The technique is not new, but it must be implemented cleanly in the UEFI and then activated. This only works if the system is running in UEFI mode and CSM/Legacy support is disabled.
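Whether the large aperture is actually exposed can be checked quite easily, at least under Linux: the size of the GPU’s BAR 0 (usually the VRAM aperture on Radeon cards) can be read from sysfs. A minimal sketch follows; the PCI address is only a placeholder that you would have to replace with your own (lspci will tell you).

```python
# Rough check of the GPU's BAR 0 size via Linux sysfs (illustrative; PCI address is a placeholder)
from pathlib import Path

gpu = "0000:0c:00.0"   # placeholder - look up your card's address with lspci
resource = Path(f"/sys/bus/pci/devices/{gpu}/resource").read_text().splitlines()

# Each line holds "start end flags" in hex for one BAR; line 0 is BAR 0.
start, end, _flags = (int(x, 16) for x in resource[0].split())
size_mib = (end - start + 1) // (1024 * 1024)

# ~256 MiB means the classic small aperture; ~16384 MiB means the resizable BAR is active.
print(f"BAR 0 aperture: {size_mib} MiB")
```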

CSM stands for Compatibility Support Module. The Compatibility Support Module exists only in UEFI firmware and ensures that older hardware and software will still work with UEFI. The CSM is always helpful when not all hardware components are compatible with UEFI. Some older operating systems and the 32-bit versions of Windows also will not install on UEFI hardware. However, it is precisely this compatibility setting that often prevents the clean Windows installation required for the new AMD features.

First you have to check in the BIOS of the motherboard whether UEFI or CSM/Legacy is active and, if necessary, change it. Only then can you activate and use the resizable PCI BARs at all – but stop, will your Windows even boot afterwards? How to convert an (older) disk from MBR to GPT so that it is recognized cleanly under UEFI can be read in the forum, among other places; going into it here would lead too far.
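If you want to verify the boot mode before diving into the BIOS: under Windows, msinfo32 shows it in the “BIOS Mode” field, and on a Linux system a one-liner is enough. Here is a small sketch, assuming Linux:

```python
# Quick check whether the current session was booted via UEFI (Linux only; illustrative)
import os

if os.path.isdir("/sys/firmware/efi"):
    print("Booted in UEFI mode - resizable BAR / SAM can be enabled.")
else:
    print("Booted in legacy/CSM mode - convert the disk to GPT and switch the firmware to UEFI first.")
```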
 
The fact is that AMD sets the hurdles for the use of SAM quite high and has so far communicated this only sparsely. A current Zen3 CPU is required, as well as a B550 or X570 motherboard with an updated BIOS. And then there is the UEFI requirement, a small but incredibly important side note. It should also be noted that NVIDIA and Intel have already announced their own solutions or plan to use the feature in the future. One goes first, the others follow suit, although everyone could have done it long ago. But they didn’t, for whatever reason. Over 12 years in the drawer is plenty of wasted time. But better late than never.
 

Benchmarks, test system and evaluation software

For the benchmarks, I chose the same 10 games as in the launch article, weighted between old and new titles as well as AMD- and NVIDIA-leaning ones. Since everything is very similar to the launch article for the Radeon cards, this time there is only a cumulative summary of all games, with a detailed explanation for each resolution. The power consumption is also covered in great detail, as you are used to.

The benchmark system is new, and I am now also using PCIe 4.0, with a matching X570 motherboard in the form of the MSI MEG X570 Godlike and a hand-picked Ryzen 9 5950X that has been water-cooled and overclocked (PBO + 500 MHz). Add to that matching DDR4 4000 RAM from Corsair, as well as several fast NVMe SSDs. For direct logging during all games and applications, I use my own measurement station with shunts and a riser card, as well as NVIDIA’s PCAT in games, which adds immensely to the convenience.

The measurement of the detailed power consumption and other, more in-depth matters takes place here in the special laboratory on a redundant, identically configured test system, running in parallel, using high-resolution oscilloscope technology…

…and the self-built MCU-based measurement setup for motherboards and graphics cards (pictures below), where, at the end, the thermographic infrared images are also created in the air-conditioned room with a high-resolution industrial camera. The audio measurements are done outside in my chamber (a room within a room).

I have also summarized the individual components of the test system in a table:

Test System and Equipment
Hardware:
AMD Ryzen 9 5950X OC
MSI MEG X570 Godlike
2x 16 GB Corsair DDR4 4000 Vengeance RGB Pro
1x 2 TB Aorus (NVMe System SSD, PCIe Gen. 4)
1x 2 TB Corsair MP400 (Data)
1x Seagate FastSSD Portable USB-C
Be Quiet! Dark Power Pro 12 1200 Watt
Cooling:
Alphacool Eisblock XPX Pro
Alphacool Eiswolf (modified)
Thermal Grizzly Kryonaut
Case:
Raijintek Paean
Monitor: BenQ PD3220U
Power Consumption:
Oscilloscope-based system:
Non-contact direct current measurement on PCIe slot (riser card)
Non-contact direct current measurement at the external PCIe power supply
Direct voltage measurement at the respective connectors and at the power supply unit
2x Rohde & Schwarz HMO 3054, 500 MHz multichannel oscilloscope with memory function
4x Rohde & Schwarz HZO50, current clamp adapter (1 mA to 30 A, 100 kHz, DC)
4x Rohde & Schwarz HZ355, probe (10:1, 500 MHz)
1x Rohde & Schwarz HMC 8012, HiRes digital multimeter with memory function

MCU-based shunt measuring (own build, Powenetics software)
Up to 10 channels (max. 100 values per second)
Special riser card with shunts for the PCIe x16 Slot (PEG)

NVIDIA PCAT and FrameView 1.1

Thermal Imager:
1x Optris PI640 + 2x Xi400 Thermal Imagers
Pix Connect Software
Type K Class 1 thermal sensors (up to 4 channels)
Acoustics:
NTI Audio M2211 (with calibration file)
Steinberg UR12 (with phantom power for the microphones)
Creative X7, Smaart v.7
Own anechoic chamber, 3.5 x 1.8 x 2.2 m (L x D x H)
Axial measurements, perpendicular to the centre of the sound source(s), measuring distance 50 cm
Noise emission in dBA (slow) as RTA measurement
Frequency spectrum as graphic
OS: Windows 10 Pro (all updates, current certified or press drivers)


About the author

Igor Wallossek

Editor-in-chief and namesake of igor'sLAB, the content successor of Tom's Hardware Germany, whose license was returned in June 2019 in order to better meet the quality demands of web content and the challenges of new media such as YouTube with its own channel.

Computer nerd since 1983, audio freak since 1979 and pretty much open to anything with a plug or battery for over 50 years.

