
AMD Radeon RX 6700 XT review – Big Navi in a medium size, with plenty of bite but also a bit thirsty

With the Radeon RX 6700 XT presented today, AMD rounds off the Big Navi portfolio at the lower end for the time being. The board partner cards may follow from tomorrow, and products for them are already in the pipeline as well. But more on that at the appropriate time. The Navi 22 chip in the RX 6700 XT is new, and we'll take a closer look at it in a moment. In general, however, graphics cards are currently a sensitive topic, so one has to separate technical considerations from emotional ones very strictly. Let's therefore leave miners, scalpers and the partly already megalomaniac retail sector aside for now and dedicate ourselves purely to the technology. I'll get to the rest at the end; there's no way around it.

The Radeon RX 6700 XT as a reference design

With its 40 compute units (CUs), the RX 6700 XT has 2560 shaders, making it effectively half an RX 6900 XT. While the RX 6800's base clock is specified as 1815 MHz and its boost as 2105 MHz, the RX 6700 XT manages 1594 to 2689 MHz. The card relies on 12 GB of GDDR6 at 16 Gbps, made up of six modules of 2 GB each. Added to this are the 192-bit memory interface and the 96 MB of Infinity Cache, which is supposed to solve the bandwidth problem.
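Since the headline numbers follow directly from a handful of parameters, here is a quick back-of-the-envelope check in Python. A minimal sketch, assuming the usual RDNA2 rules of thumb (64 shaders per CU, 2 FLOPs per shader and clock via FMA, bandwidth = bus width times data rate); the 2.25 GHz sustained clock is my assumption, chosen because it reproduces the FP32 figure in the table below.

```python
# Back-of-the-envelope check of the RX 6700 XT paper specs (illustrative only).
# Assumptions: 64 shaders per CU (RDNA2), FMA = 2 FLOPs per shader and clock,
# GDDR6 bandwidth = bus width / 8 * data rate per pin.

cus            = 40      # compute units of Navi 22
bus_width_bits = 192     # memory interface
data_rate_gbps = 16      # GDDR6 speed per pin
clock_ghz      = 2.25    # assumed sustained clock, not an official figure

shaders       = cus * 64                              # -> 2560
bandwidth_gbs = bus_width_bits // 8 * data_rate_gbps  # -> 384 GB/s
tflops_fp32   = shaders * 2 * clock_ghz / 1000        # -> 11.52 TFLOPS

print(f"{shaders} shaders, {bandwidth_gbs} GB/s, {tflops_fp32:.2f} TFLOPS FP32")
```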

The RX 6700 XT weighs just over 900 grams, is 26.7 cm long, 12 cm high (11.5 cm installation height from the PEG slot) and 3.8 cm thick (2-slot design), with the backplate and the PCB adding a total of four more millimeters. The slot bracket is closed and carries 1x HDMI 2.1 plus three current DisplayPort connectors; a USB Type-C port is omitted. The body is made of light metal, the Radeon lettering is illuminated, and the whole thing is powered via one 8-pin and one 6-pin socket. More about this on the next page in the teardown.

 

The GPU-Z screenshot then fills in the rest of the card's data:

 

Once again, I have a table for all the statisticians among you before things really get going from the next page on.

| Name | RX 6900 XT | RX 6800 XT | RX 6800 | RX 6700 XT |
| --- | --- | --- | --- | --- |
| Chip | Navi 21 XTX | Navi 21 XT | Navi 21 XL | Navi 22 XT |
| Chip size | 520 mm² | 520 mm² | 520 mm² | n/a |
| Shader clusters (CUs) | 80 | 72 | 60 | 40 |
| Shaders/TMUs/ROPs | 5120/320/128 | 4608/288/128 | 3840/240/96 | 2560/160/64 |
| RT units | 80 | 72 | 60 | 40 |
| Infinity Cache (MB) | 128 | 128 | 128 | 96 |
| GPU game clock (MHz) | 2015 | 2015 | 1815 | 2424 |
| FP16 (TFLOPS) | 41.27 | 37.14 | 27.88 | 23.04 |
| FP32/FP64 (TFLOPS) | 20.63/1.29 | 18.57/1.16 | 13.94/0.87 | 11.52/0.72 |
| Fill rate (GTex/GPix per sec.) | 644.8/257.9 | 580.3/257.9 | 435.6/174.2 | 432/144 |
| L2 cache (MB) | 4 | 4 | 4 | 3 |
| Memory interface | 256-bit | 256-bit | 256-bit | 192-bit |
| Memory speed (Gbps) | 16 | 16 | 16 | 16 |
| Memory type | GDDR6 | GDDR6 | GDDR6 | GDDR6 |
| Bandwidth (GB/s) | 512 | 512 | 512 | 384 |
| Memory capacity (GB) | 16 | 16 | 16 | 12 |
| PCIe connectors | 2x 8-pin | 2x 8-pin | 2x 8-pin | 1x 8-pin + 1x 6-pin |
| TBP (watts) | 300 | 300 | 250 | 230 |
| Launch price (MSRP) | 999 Euro | 649 Euro | 579 Euro | n/a |

Raytracing / DXR

At the latest since the presentation of the new Radeon cards, it has been clear that AMD will also support ray tracing. Here AMD takes a path that clearly deviates from NVIDIA's and implements a so-called "Ray Accelerator" per compute unit (CU). Since the Radeon RX 6700 XT has a total of 40 CUs, this results in 40 such accelerators. A GeForce RTX 3070 comes with 46 RT cores. However, RT cores are organized differently, and we will have to wait and see what quantity can do against specialization here. So in the end, it's an apples-and-oranges comparison for now.

But what has AMD come up with here? Each of these accelerators is capable of simultaneously computing up to four ray/box intersections or a single ray/triangle intersection per cycle. This way, the intersection points of the rays with the scene geometry are calculated (by traversing the Bounding Volume Hierarchy), pre-sorted, and then this information is returned to the shaders for further processing within the scene, or the final shading result is output. NVIDIA's RT cores, however, seem to take a much more complex approach, as I explained in detail during the Turing launch. What counts is the result alone, and that's exactly what we have suitable benchmarks for.
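To make the ray/box part a little more tangible, here is a minimal software sketch of the classic "slab" test, the kind of intersection check such a unit evaluates in hardware (up to four per CU and cycle). This is the textbook algorithm, not AMD's actual hardware logic:

```python
# Textbook ray/AABB "slab" intersection test - a software stand-in for
# what a Ray Accelerator computes in hardware. Illustrative only.

def ray_box_intersect(origin, inv_dir, box_min, box_max):
    """True if the ray hits the axis-aligned box in front of its origin.
    inv_dir holds 1/d per axis, precomputed once per ray."""
    t_near, t_far = 0.0, float("inf")  # t_near = 0 ignores hits behind the origin
    for axis in range(3):
        t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        t_near = max(t_near, min(t1, t2))
        t_far  = min(t_far,  max(t1, t2))
    return t_near <= t_far

# Example: a ray along +X against a unit box sitting at x = 5..6 -> True
print(ray_box_intersect((0, 0, 0), (1.0, float("inf"), float("inf")),
                        (5, -0.5, -0.5), (6, 0.5, 0.5)))
```

During BVH traversal, exactly this test decides which child boxes a ray must descend into before the final ray/triangle test runs at the leaves.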

Smart Access Memory (SAM)

AMD already showed SAM, i.e. Smart Access Memory, at the presentation of the new Radeon cards – a feature I enabled today in addition to the normal benchmarks, which also allows a direct comparison. But SAM is actually nothing new, just packaged more nicely in marketing terms. It is nothing other than the clever handling of the Base Address Register (BAR), and exactly this support must be activated in the underlying platform. With modern AMD graphics hardware, resizable PCI BARs (see also the PCI-SIG specification from 4/24/2008) have played an important role for quite some time, since the actual PCI BARs are normally limited to 256 MB, while the new Radeon graphics cards carry up to 16 GB of VRAM (12 GB here).

The result is that only a fraction of the VRAM is directly accessible to the CPU, which without SAM requires a whole series of workarounds in the so-called driver stack. Of course, this always costs performance and should therefore be avoided. This is where AMD comes in with SAM. It is not new, but it must be implemented cleanly in the UEFI and then activated. This only works if the system is running in UEFI mode and CSM/Legacy is disabled.
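To illustrate what the BAR size means in practice, here is a small Linux-side sketch that reads a GPU's BAR sizes from sysfs. The PCI address is a hypothetical placeholder (look yours up with lspci first); the 256 MB vs. full-VRAM interpretation follows from the explanation above:

```python
# Minimal sketch (Linux): read a GPU's PCI BAR sizes from sysfs.
# Each line of the 'resource' file holds start, end and flags in hex.

DEV = "/sys/bus/pci/devices/0000:03:00.0/resource"  # hypothetical PCI address

with open(DEV) as f:
    for bar, line in enumerate(f):
        start, end, _flags = (int(v, 16) for v in line.split())
        if end:  # unused BARs are all zeros
            print(f"BAR{bar}: {(end - start + 1) / 2**20:.0f} MiB")

# Without resizable BAR the VRAM aperture appears as 256 MiB; with SAM
# active it should span the card's full 12 GiB.
```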

CSM stands for Compatibility Support Module. It is exclusive to UEFI and ensures that older hardware and software still work with UEFI. The CSM is helpful whenever not all hardware components are compatible with UEFI. Some older operating systems and the 32-bit versions of Windows also do not install on UEFI-only hardware. However, it is precisely this compatibility setting that often prevents the clean Windows installation required for the new AMD features.

First you have to check in the BIOS whether UEFI or CSM/Legacy is active and, if necessary, take care of this step. Only then can you activate and use the resizable PCI BARs at all – but wait, does your Windows still boot then? How to convert an (older) disk from MBR to GPT so that it is recognized cleanly under UEFI has been covered in the forum, among other places; going into detail here would lead too far.
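For those unsure about their own setup, the following sketch shows both checks on Windows 10: reading the firmware mode and dry-running Microsoft's mbr2gpt tool (with /validate nothing is written). Treat it as a starting point under these assumptions, run from an elevated prompt, not as a turnkey script:

```python
# Sketch (Windows 10): check the boot mode, then dry-run mbr2gpt.
import subprocess

# BiosFirmwareType reports 'Uefi' or 'Bios' (PowerShell 5.1 and later)
mode = subprocess.run(
    ["powershell", "-Command", "(Get-ComputerInfo).BiosFirmwareType"],
    capture_output=True, text=True).stdout.strip()
print("Firmware mode:", mode)

if mode != "Uefi":
    # /validate only checks whether disk 0 could be converted to GPT;
    # the actual conversion would use /convert instead.
    subprocess.run(["mbr2gpt", "/validate", "/disk:0", "/allowFullOS"])
```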
 
The fact is that AMD sets the hurdles for using SAM quite high and has communicated this only sparsely so far. A current Zen 3 CPU is required for the real, purely hardware-based solution, as well as a B550 or X570 motherboard with an updated BIOS. The UEFI requirement, again, is a small but incredibly important side note. It should also be noted that NVIDIA and Intel already use their own firmware-based solutions or plan to expand them in the future. One goes first, the others follow suit, although it could have been done long ago.

 

Test system and evaluation software

The benchmark system is new and no longer sits in the lab but back in the editorial office. I now also rely on PCIe 4.0, a matching X570 motherboard in the form of the MSI MEG X570 Godlike, and a select Ryzen 9 5950X, heavily overclocked and water-cooled. Along with that come fast RAM and multiple fast NVMe SSDs. For direct logging during all games and applications, I use NVIDIA's PCAT, which adds immensely to the convenience.

The measurement of power consumption and other values takes place in the special laboratory, in parallel, on a redundant and identically equipped test system, using high-resolution oscilloscope technology…

…and the self-built MCU-based measurement setup for motherboards and graphics cards (pictures below). In the air-conditioned room, the thermographic infrared images are also created there with a high-resolution industrial camera. The audio measurements are done outside in my anechoic chamber.
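Per channel, the shunt rig boils down to Ohm's law: rail power is the rail voltage multiplied by the current derived from the voltage drop across a known shunt resistor. A toy sketch with made-up example values:

```python
# Toy model of one channel of the MCU-based shunt measurement:
# current = shunt voltage drop / shunt resistance, power = rail voltage * current.

R_SHUNT = 0.005  # ohms, assumed shunt value for illustration

def sample_power(v_rail, v_shunt_drop):
    current = v_shunt_drop / R_SHUNT  # Ohm's law
    return v_rail * current           # watts

# One 12 V sample: 40 mV across the shunt -> 8 A -> 96 W
print(sample_power(12.0, 0.040))
```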

I have also summarized the individual components of the test system in a table:

Test System and Equipment
Hardware:
AMD Ryzen 9 5950X OC
MSI MEG X570 Godlike
2x 16 GB Corsair DDR4 4000 Vengeance RGB Pro
1x 2 TB Aorus (NVMe system SSD, PCIe Gen 4)
1x 2 TB Corsair MP400 (Data)
1x Seagate FastSSD Portable USB-C
Be Quiet! Dark Power Pro 12 1200 Watt
Cooling:
Alphacool Eisblock XPX Pro
Alphacool Eiswolf (modified)
Thermal Grizzly Kryonaut
Case:
Raijintek Paean
Monitor: BenQ PD3220U
Power Consumption:
Oscilloscope-based system:
Non-contact direct current measurement on PCIe slot (riser card)
Non-contact direct current measurement at the external PCIe power supply
Direct voltage measurement at the respective connectors and at the power supply unit
2x Rohde & Schwarz HMO 3054, 500 MHz multichannel oscilloscope with memory function
4x Rohde & Schwarz HZO50, current clamp adapter (1 mA to 30 A, 100 kHz, DC)
4x Rohde & Schwarz HZ355, probe (10:1, 500 MHz)
1x Rohde & Schwarz HMC 8012, HiRes digital multimeter with memory function

MCU-based shunt measuring (own build, Powenetics software)
Up to 10 channels (max. 100 values per second)
Special riser card with shunts for the PCIe x16 Slot (PEG)

NVIDIA PCAT and FrameView 1.1

Thermal Imager:
1x Optris PI640 + 2x Xi400 Thermal Imagers
Pix Connect Software
Type K Class 1 thermal sensors (up to 4 channels)
Acoustics:
NTI Audio M2211 (with calibration file)
Steinberg UR12 (with phantom power for the microphones)
Creative X7, Smaart v.7
Own anechoic chamber, 3.5 x 1.8 x 2.2 m (L x D x H)
Axial measurements, perpendicular to the centre of the sound source(s), measuring distance 50 cm
Noise emission in dBA (slow) as RTA measurement
Frequency spectrum as graphic
OS: Windows 10 Pro (all updates, current certified or press drivers)

137 replies


thoemmi:
GeForce Radeon RX 6700XT, very nice 😂

Blueline56:
I didn't even know there was an RTX 3070 Ti already, a typo must have crept in there. Otherwise a nice card, should it ever find its way to gamers, and at normal prices.

Igor Wallossek:
14 pages in 3 hours - that's no fun :D

Blueline56:
Understood, and it wasn't meant badly either, just intended as a hint....

Igor Wallossek:
Please send hints as a PM :)
They make it into the system faster that way.

Launebaer:
There will probably be another MorePowerTool review of the 6700XT with a sweet-spot analysis? Good test, good hardware, but due to the market, dirt-expensive like all cards at the moment :/

Edenjung:
Great card.

Now it just needs to be available in adequate quantities (ideally directly from AMD), and the people who want to upgrade from a 980 Ti or below have found their card.

What delights me about the whole 6000 series, though, is the reference design. No custom model is needed anymore.
Simply great.

Opa_Hoppenstedt:
Is there also a video coming, for those who can't read?

konkretor:
Nice card, but not available :-(

RX Vega_1975:
@Igor Wallossek
Shipping starts tomorrow then, around 2 to 3 p.m., via AMD directly?

Satruma:
Knowing a rough time really wouldn't be bad.

ChrischiHROHH:
Morning Igor. I have a technical question I've been asking myself for a while now.
All the graphics card releases of the last few months got me thinking that AMD ACTUALLY performs significantly better than Nvidia. Right? Because AMD relies on a 256-bit memory interface (the 6900 XT, for example), and it is practically just as fast (RTX aside) as an RTX 3090 with its 384-bit memory interface. On top of that, the RTX 3080/3090 uses faster GDDR6X memory, while AMD "only" uses GDDR6.

Am I right in thinking that a 6900 XT with, say, 16 GB of GDDR6X and a 384-bit memory interface + 128 MB Infinity Cache would be SIGNIFICANTLY faster than a 3090?

Denniss:
A wider memory interface = higher power draw; more memory chips + GDDR6X = higher power draw; plus a beefier PCB with more voltage regulators = more power draw. That probably eats a good 30 W that the chip then lacks in its power budget. The effect would probably be less of a performance drop at 4K, but a step backwards at lower resolutions.

ChrischiHROHH:
Ah, OK. I still suspect that AMD will hurt Nvidia considerably more in the next generation. :o

Igor Wallossek:
A memory interface also needs die area. AMD's Infinity Cache makes a smaller interface possible. But the poor hit rate at Ultra HD really breaks the 6900 XT's neck. With the smaller cards and resolutions, on the other hand, it fits.

XVI:
Thanks for the test (: Can you say anything about undervolting?

Igor Wallossek:
The cards have plenty of headroom downwards - if you take about 100 to 150 MHz off the boost. They are run right at the limit. I had almost 2.7 GHz in Full HD out of the box.

I showed the two BIOSes back in November. In the end, you can easily come down by 30 to 35 watts on the 12-volt rails.

Oese72:
Thanks for the test! Great card, but a shame that the PCB and the cooler have been slimmed down so noticeably. In my view, that makes the card a bit too expensive compared to the 6800. But right now you can't pick and choose anyway. In any case, I'll be there tomorrow :D.


About the author

Igor Wallossek

Editor-in-chief and namesake of igor'sLAB, the content successor of Tom's Hardware Germany, whose license was returned in June 2019 in order to better meet the quality demands of web content and the challenges of new media such as YouTube with its own channel.

Computer nerd since 1983, audio freak since 1979 and pretty much open to anything with a plug or battery for over 50 years.

