I have started testing with two reference RX 6800 cards, one in Windows and one in HiveOS, because people like to mine on them besides gaming, and I don't want them to destroy their new cards with high temps when I upgrade their GPUs (I make them a mining profile and a gaming profile, so it's fairly foolproof). This matters most if the desktop is mining 20 hours a day. In Windows with MPT I can get the card to consume a lot less power than in HiveOS, so I'm building a small three-card test rig in Windows to do further power-consumption testing. For now my main concern is VRAM temps.
Issue:
Both my cards (one in a gaming desktop on Windows, one on a test board in open air on HiveOS) show about the same VRAM temperature (in HWiNFO64 and HiveOS): 80°C, with a core temperature of around 47-50°C at 70% fan speed (about 60% effective in HiveOS). Room temperature is about 18°C. Both cards have thermal pads added between the backplate and the PCB.
A friend of mine with a 6900 XT in a gaming desktop got much lower VRAM temps when we tested his setup mining without an MPT voltage change (so his card had a software power reading of 150 W, yet still only 66°C on VRAM). Online I have seen others with the same reference cards and similar core temperatures report VRAM temps in the high 60s, which is roughly 20% lower than my VRAM temps, but I have also seen others with VRAM temps similar to mine. This makes me wonder if AMD has started using different thermal pads for the VRAM on the reference cards, especially since the VRAM is coupled directly to the cooler's vapor chamber via copper. That method of VRAM cooling should be very efficient.
I will use mining data since it's more widely available, but also my own data collected while gaming.
Case 1, mining in HiveOS.
Here is my card tested in HiveOS:
Here are two screenshots of other people's reference RX 6800s with about the same core temp (and software power reading) but a much lower VRAM temp. I think this delta between VRAM and core temp is important, since it more or less factors out room temperature.
Case 2, Gaming in Windows:
Igor's data is quite self-explanatory: room temp 22°C, GPU temp 69°C, hotspot 80°C, resulting in a memory temp of 65°C. I'm assuming The Witcher 3 at UHD draws about 190-200 W on this GPU, and I assume the fan speed is stock.
My own data:
Room temp was 20°C, case closed with good airflow. Compared to Igor's numbers, my GPU temp is 8°C (about 11%) higher, my hotspot is 17°C (about 21%) higher, and my VRAM is 17°C (about 26%) higher.
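To make the comparison explicit, here is a small sketch of how those deltas and percentages work out. The baseline values are Igor's readings quoted above; my values follow from adding the stated deltas (this is just the arithmetic, not a measurement tool):

```python
# Baseline: Igor's readings (degrees C). Mine = baseline + observed delta.
igor = {"gpu": 69, "hotspot": 80, "vram": 65}
mine = {"gpu": 77, "hotspot": 97, "vram": 82}

for sensor in igor:
    delta = mine[sensor] - igor[sensor]          # absolute difference in C
    pct = delta / igor[sensor] * 100             # relative to Igor's reading
    print(f"{sensor}: +{delta} C ({pct:.1f}% higher)")
```

The VRAM delta (17°C on a 65°C baseline) is the standout number; the GPU and hotspot deltas could plausibly come from case airflow alone, but a 26% higher memory temp at similar power points at the memory-to-cooler interface.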
Conclusion:
Maybe my assumption that Igor uses stock fan speed is wrong, but these temps look pretty bad. I'm hoping other people here with AMD RX 6800 (XT) reference cards, perhaps with different VRAM temperature readings, can compare their numbers to mine. I might get another card to test, but that's about it. Maybe it's just newer cards, and future cards I receive will have high VRAM temps too, so I'm not even sure I can test this myself.
So what are your experiences? Could this be down to a change in thermal pads?