
AMD Radeon RX 9070 XT and RX 9070 review – They’re back and even available, thank goodness!

Total power consumption and compliance with standards in practice

The measured power consumption of the two graphics cards turned out to be higher than the specified values: instead of the expected 220 watts, the RX 9070 drew 245 watts, while the RX 9070 XT actually drew 345 watts instead of 304 watts. This is because both are so-called OC cards with significant factory overclocking. Higher clock rates, and often higher voltages as well, increase power consumption noticeably compared to the reference models, which is particularly apparent under load.

In addition, increased power consumption in idle was observed when using applications such as NVIDIA’s PCAD. Even when the program is merely running minimized in the taskbar, it keeps certain GPU components active, which can increase power consumption significantly. This happens, for example, through the forced activation of higher performance states, which prevents the card from switching into a true energy-saving mode. In such cases, consumption can easily rise to over 30 watts instead of leveling off at a much lower figure.

The mainboard slot, also known as the PCIe slot (PEG: PCI Express Graphics), is designed for a maximum current of 5.5 amps at a voltage of 12 volts in accordance with the PCI-SIG standard. This corresponds to a maximum power consumption of 66 watts, which can be supplied directly via the slot. The PCI-SIG standard serves as the basis for ensuring a uniform and reliable power supply via the mainboard slot while maintaining system stability. The specified limit value of 5.5 amps also takes into account short-term peak loads that can occur during abrupt load changes. However, these load peaks must not overload the system or affect other components due to voltage fluctuations.
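The arithmetic behind the slot limit is simple enough to sketch. The following minimal Python snippet (the helper function and the 0.6 A sample value are illustrative, not from the article's measurement setup) derives the 66-watt ceiling from the PCI-SIG figures quoted above:

```python
# PCI-SIG limit for the PEG slot: 5.5 A on the 12 V rail.
PEG_MAX_CURRENT_A = 5.5
PEG_VOLTAGE_V = 12.0
PEG_MAX_POWER_W = PEG_MAX_CURRENT_A * PEG_VOLTAGE_V  # 12 V * 5.5 A = 66 W

def slot_within_spec(measured_current_a: float) -> bool:
    """Return True if a measured slot current stays within the PCI-SIG limit."""
    return measured_current_a <= PEG_MAX_CURRENT_A

print(PEG_MAX_POWER_W)        # 66.0
print(slot_within_spec(0.6))  # True — the cards reviewed here draw only ~0.5-0.6 A
```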

A key advantage of this standard is that it ensures interoperability and compatibility between mainboards and graphics cards from different manufacturers. Clear specifications on the maximum load prevent potential damage to the mainboard’s conductors and connectors, which could be caused by excessive currents. The moderate load on the slot not only ensures system stability, but also increases the longevity of the hardware components concerned.

Another advantage of this limitation is the ability to provide additional power via external connectors to meet the requirements of high-performance graphics cards. The cards reviewed here, which do not push the limits of the power supply even when using the modern 12V-2x6 connector design, demonstrate particularly efficient load distribution. The PEG slot is only loaded with a maximum of 0.5 or 0.6 amps, which corresponds to less than 7 or 8 watts. This minimal load on the mainboard slot underlines the efficiency of the cards and significantly reduces potential thermal stress on, or damage to, the mainboard.

 

Detailed view of gaming in Ultra HD

In Cyberpunk 2077, the two graphics cards reach peak values of up to 245 and 345 watts respectively in UHD at maximum settings. This high load is caused by the immense computing requirements without AI-supported scaling and demands a stable power supply. Although the designs are not fully utilized, they still place high demands on the stability of the power supply. Power consumption and current are measured at 20 ms intervals in order to capture rapid load changes.

The first graph shows the real-time consumption as a product of current and voltage, which allows conclusions to be drawn about peak values and compliance with the PCIe specifications. The second graph focuses on the current distribution between the PEG slot and the external connections. It provides information on how heavily the mainboard slot is used and in which situations external connections have to deliver more power.
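The two graphs boil down to the same per-sample arithmetic: power as the product of current and voltage, split between the slot and the external connector. A minimal sketch, with entirely made-up 20 ms sample values for illustration:

```python
# Hypothetical 20 ms samples: (peg_current_a, ext_current_a, voltage_v).
samples = [
    (0.5, 18.0, 12.05),
    (0.6, 27.5, 11.98),
    (0.5, 20.1, 12.01),
]

for peg_a, ext_a, volts in samples:
    peg_w = peg_a * volts    # slot power — stays well under the 66 W limit
    ext_w = ext_a * volts    # power delivered via the external connector
    total_w = peg_w + ext_w  # what the first graph would plot
    print(f"PEG {peg_w:5.1f} W | external {ext_w:6.1f} W | total {total_w:6.1f} W")
```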

 

The next graphs analyze a single 20 ms interval with a resolution of 10 µs and show in detail the behavior of the power supply during short-term load changes. These are caused by sudden GPU requirements, such as render spikes or frame changes. The first graph visualizes the power consumption in this extremely short period of time and reveals short-term peaks of up to 500 watts, which place high demands on the stability and response speed of the power supply unit. The two graphs on the right show the current flow through the supply cables and reveal abrupt changes under dynamic loads. These measurements illustrate the importance of the ATX 3.1 standard, which requires a power reserve of 200% during short load peaks. As modern GPUs place extremely high demands in peak load situations, a sufficient power supply reserve is crucial in order to avoid voltage dips and ensure system stability.
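The ATX 3.1 headroom check described above can be expressed as a simple filter over the 10 µs power samples. This is a hedged sketch (the trace values and the helper function are invented, and the 200% budget is taken from the article's description of the standard):

```python
def excursion_violations(power_samples_w, psu_rated_w):
    """Return the samples that exceed the 200% short-term excursion budget."""
    limit_w = 2.0 * psu_rated_w
    return [p for p in power_samples_w if p > limit_w]

# Made-up 10 µs trace with a 500 W render spike, checked against a 750 W PSU:
trace = [310.0, 345.0, 500.0, 330.0]
print(excursion_violations(trace, psu_rated_w=750))  # [] — within the 1500 W budget
```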

Load behavior in the Torture test

Furmark is an extreme load test for graphics cards that generates an atypically constant maximum load, far above anything that occurs in real applications or games. Through intensive calculations, both the shader units and the memory controllers are fully utilized, which leads to extreme thermal and electrical stress. This worst-case test checks the stability of the GPU and the power supply, whereby the power consumption can also significantly exceed the specified TDP and reach sustained values of up to 245 or 345 watts. Since Furmark generates a permanent maximum load, the test is not representative of everyday use, but it is extremely useful for uncovering weak points in the cooling or power supply. The fact that the cards reach short-term peaks of up to 600 watts underlines the importance of a powerful power supply with sufficient reserves. Furmark thus serves as a stress test to ensure that the entire system remains stable even under extreme conditions.

 

The high-resolution measurements during a Furmark test provide precise insights into the behavior of the power supply and power consumption under extreme load. The continuous maximum load on the GPU results in constant thermal and electrical stress, which is analyzed at microsecond intervals. Particularly noticeable are short-term load peaks that far exceed the average power consumption and are caused by sudden changes in the load of individual GPU components. These measurements are particularly relevant with regard to the ATX 3.1 standard, which requires power supply units to compensate for short-term peaks of up to 200% of the nominal load for up to 1 millisecond. The data shows that such peaks are not only theoretically possible, but actually occur and can severely stress the limits of power supply designs.

Summary of the load peaks and a power supply recommendation

A power supply unit with a rated output of 700 to 850 watts (depending on the card) that meets the requirements of the ATX 3.1 standard is certainly a suitable choice for both cards in order to reliably cover the power consumption values and load scenarios described. The sustained loads of the graphics cards, which can reach up to 245 or 345 watts in extreme situations such as Furmark or very demanding games, make a generous power reserve necessary. Together with the load of the rest of the system, such as the CPU, RAM and other components, this results in a requirement that can reach around 450 or 550 watts during very short peaks.

 

A 700 or 850 watt power supply unit not only offers sufficient headroom, but also absorbs short-term load peaks, as required by the ATX 3.1 standard with up to 200% of the nominal load for one millisecond. This means that peaks of up to 1400 or 1700 watts can be handled without stability problems. The dimensioning also ensures that the power supply operates in an efficient load range between 50 and 70 %, which optimizes energy efficiency and longevity. An 80 PLUS Platinum or Titanium certification also ensures low heat generation and high efficiency. Thanks to support for modern standards such as ATX 3.x, the power supply is future-proof and offers long-term stability for upcoming high-performance graphics cards and hardware upgrades.
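The sizing logic above can be sketched as a small calculator. This is an illustrative approximation, not the article's method: the 150 W rest-of-system figure and the 60% target-load midpoint are assumptions chosen to land near the 50-70% efficiency band mentioned in the text:

```python
def recommend_psu_w(gpu_sustained_w: float, system_w: float,
                    target_load: float = 0.6) -> float:
    """Suggest a PSU rating so the typical load sits near `target_load`."""
    return (gpu_sustained_w + system_w) / target_load

# RX 9070 (245 W) and RX 9070 XT (345 W) with an assumed ~150 W rest-of-system:
for gpu_w in (245.0, 345.0):
    rating = recommend_psu_w(gpu_w, system_w=150.0)
    print(f"GPU {gpu_w:.0f} W -> suggest ~{rating:.0f} W PSU, "
          f"ATX 3.1 excursion budget ~{2 * rating:.0f} W")
```

The results land close to the 700 and 850 watt classes recommended above; a heavier CPU load would push the suggestion upward accordingly.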

303 replies

DMHas

Veteran

103 comments 55 likes

Nice article! AMD got everything right.
@Igor Wallossek: Are the temperature figures for the Sapphire correct?

(The difference or the hotspot temperature can’t be right:
... Sapphire RX 9070 XT Nitro+, at 57 °C for the GPU and a maximum of 85 °C for the hotspot, sit at a pleasantly low level, especially given the power consumption of 345 watts. The usual delta of 18 Kelvin between GPU and hotspot confirms)

Reply 1 Like

Igor Wallossek

12,408 comments 24,646 likes

Thanks, yes, it’s 28 °C. The rush... Fixed.

Reply 4 Likes

Cerebral_Amoebe

Veteran

143 comments 68 likes

Many thanks for the test!

I found something here:

Reply Like

SchmoWu

Veteran

113 comments 38 likes

Thanks for the test @Igor Wallossek

On page 10 it also says 18 Kelvin again.

Regards,
Schmo

:edit
On page 10 you have the 9070 Pulse with manual UC in the list; how does it do at factory settings against the 9070 Quicksilver? Probably nothing between them?

Reply Like

kodos

Member

21 comments 13 likes

The first 5070s are in stores and the cheapest is €790.
That makes the 9070, despite the same MSRP, without competition if it can actually be had for that tomorrow.

Reply 2 Likes

Gurdi

Old hand

1,546 comments 1,108 likes

Great cards. These are going to give the market a proper shake-up. If only it weren’t for the greedy retailers at the moment.

Reply 4 Likes

Widukind

Old hand

653 comments 310 likes

Very exciting! 🙂 (y) Many thanks!

We had suspected the strong RT performance. But the strong raster performance, roughly on par with the 7900 XTX, really does surprise me.

The 5070 could become the new 4060 Ti. :D

With this offering from AMD, potential 4070 Ti S buyers as well as price-conscious 5070 Ti prospects will now migrate to AMD’s 9070 XT.

Reply 5 Likes

Gurdi

Old hand

1,546 comments 1,108 likes

Told you so ^^

Reply 2 Likes

Widukind

Old hand

653 comments 310 likes

You must have had insider insight... ^^ Going by the raw technical data, it really did surprise me. But all the better! Finally some momentum in the market! :cool:🙏

Reply 1 Like

teenlaquifah

Newbie

1 comment 0 likes

Then let’s take a look at prices and availability tomorrow.

Reply Like

bitracer

Old hand

749 comments 336 likes

Necessary condition for hype:
performance of the previous flagship card "matched" (== drawn level!) // that’s how it’s supposed to go anyway!
Sufficient condition:
"sensible" (== quiet and durable) cards available at MSRP ? ? ?
...we’ll be able to see tomorrow.

If that works out, then "hats off, AMD!"
If not...
...wait for the clearance sales in 12..24 months :cool:

Reply Like

RX480

Old hand

2,138 comments 1,029 likes

Also works out better for the hotspot.
Given the small die area, the biiiig customs should perhaps be run with a Kryosheet when OCing at PL +15;
PTM9750 might not be enough.

How many PL +/- percent can actually be set?

Reply Like

Case39

Old hand

2,690 comments 1,057 likes

Thanks to Igor for the as-always detailed review, and THANKS to AMD for delivering these cards... I’m celebrating right now.
Please bring a high-end card next year... I’m fed up with NV and their "Lesswell" cards.

Reply 6 Likes

Mudsee

Veteran

101 comments 65 likes

What surprises me: are there no different BIOSes (settings) anymore? The old ones had 2/3: Silent, Normal, OC. You could set that either manually or via the software.

The connector on the Nitro is a bit of an issue, especially the missing feedback.
The XFX thing, hmm, two cards and both defective, I don’t find that great. It makes you wonder what the rest in retail is like.

Reply 1 Like

grimm

Old hand

3,572 comments 2,618 likes

Very nice. In principle you get a 4080 for around 800 euros. That’s comparatively fair.

Reply 5 Likes

Widukind

Old hand

653 comments 310 likes

In principle even a 4080S with 4070 Ti S RT performance. FSR instead of DLSS, but at that price you can live with it. With Nvidia you’re at €1,000–1,100 for that performance, and then you still don’t get a 3.5-slot design with a thick cooler.

Reply Like

carrera

Veteran

253 comments 163 likes

If I may quote from the article:

"And I naturally hope that the MSRP (Manufacturer’s Suggested Retail Price) doesn’t turn into a “Massively Surpassed Real Price” here, because the actual price so far has mostly been somewhere between “outrageous” and “astronomical”. Or we treat the current prices more as “Maybe Someday at a Reasonable Price” and simply wait a little longer. That’s hard for many, of course, but it can be done, just believe me."

@Igor Wallossek: that is simply brilliant - you made my day 😂

Reply 2 Likes

grimm

Old hand

3,572 comments 2,618 likes

The "S" is just another accident anyway. My GameRock OC is easily at "Super" level.

Reply 4 Likes

RX Vega_1975

Old hand

642 comments 108 likes

What will the 9070 XT Gigabyte OC manage,
and the ASRock Steel Legend 9070 XT,
and what will they cost, please...

@Igor Wallossek, do you know this already?

Reply Like


About the author

Igor Wallossek

Editor-in-chief and name-giver of igor'sLAB, the content successor of Tom's Hardware Germany, whose license was returned in June 2019 in order to better meet the quality demands of web content and the challenges of new media such as YouTube with its own channel.

Computer nerd since 1983, audio freak since 1979 and pretty much open to anything with a plug or battery for over 50 years.

