AMD Vega Hardware Reviews

I believe you are referring to a test where they are undervolting the V56 and V64 by different amounts.
Hardwareluxx
Am I correct?

Yes, not to mention these are review units, where the Vega 56 cards may even be artificially salvaged chips (i.e. Vega 64 GPUs with a Vega 56 BIOS).
The Vega 56 cards are not only coming later but are also being packaged in a different facility.

Results for undervolt + overclock are a lottery anyway.
 
Yes, and I do have some doubts about the results, but still:

RX64 voltage: -7.5%
RX56 voltage: -11%

Performance similar, within 2-3%.

Power consumption (whole system): RX56 reduced by 25% compared to the optimized RX64 - imho it does not add up. If the measurement is not faulty and the numbers are correct, then, disregarding individual chip quality, this is strange.
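As a rough sanity check (my own back-of-the-envelope sketch, assuming GPU dynamic power scales roughly as f*V^2 at fixed clocks), the voltage difference alone is nowhere near enough to explain a 25% gap:

```python
# Back-of-the-envelope check, assuming dynamic power scales ~ f * V^2 at
# fixed clocks, and (crudely) that both cards start from a similar GPU draw.
# Leakage and the rest of the system are ignored, which would only dilute
# the difference further in a whole-system measurement.

def rel_power(undervolt_frac):
    """Relative dynamic power after a fractional voltage reduction."""
    return (1.0 - undervolt_frac) ** 2

p64 = rel_power(0.075)   # RX64 undervolted by 7.5%
p56 = rel_power(0.110)   # RX56 undervolted by 11%

print(f"RX64: {p64:.3f}x stock dynamic power")              # ~0.856x
print(f"RX56: {p56:.3f}x stock dynamic power")              # ~0.792x
print(f"RX56 vs RX64: {(1 - p56 / p64) * 100:.1f}% lower")  # ~7.4%, not 25%
```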

At the moment there are many unknowns. It seems there are at least 3 different versions of the GPU+HBM+interposer package around, all slightly different, and at least one has the HBM stacks sitting lower than the GPU die, leaving a 0.1 mm gap to the cooling solution. It also seems like some test RX56 cards had Samsung HBM2 while those in retail show SK Hynix. At the moment RX Vega seems like a lottery to me.
 
Power consumption (whole system): RX56 reduced by 25% compared to the optimized RX64 - imho it does not add up. If the measurement is not faulty and the numbers are correct, then, disregarding individual chip quality, this is strange.
And why would you disregard chip quality?
 
Well, knowing what your chip currently consumes would be all that's needed. There is on-chip logic that performs these measurements/estimates.

The throttling itself, when engaged, reduces the clocks and/or voltage.

The algorithms themselves (for throttling and consumption estimation) should be vendor- and GPU-specific.
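For illustration, a minimal sketch of what such a control loop could look like (my own sketch, not AMD's actual algorithm; the limit and step values are invented):

```python
# Minimal sketch of a power-throttling loop driven by the on-chip power
# estimate. Purely illustrative: the real algorithms are vendor- and
# GPU-specific firmware, and the constants here are invented.

POWER_LIMIT_W = 220.0     # hypothetical board power budget
CLOCK_STEP_MHZ = 27       # hypothetical DPM-state step size

def throttle_step(measured_power_w, clock_mhz, min_mhz=852, max_mhz=1630):
    """One loop iteration: adjust clocks based on the power estimate."""
    if measured_power_w > POWER_LIMIT_W:
        # Over budget: drop one clock step (voltage falls with the DPM state).
        return max(min_mhz, clock_mhz - CLOCK_STEP_MHZ)
    # Headroom available: ramp back toward the boost clock.
    return min(max_mhz, clock_mhz + CLOCK_STEP_MHZ)
```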
 
Power consumption (whole system): RX56 reduced by 25% compared to the optimized RX64 - imho it does not add up. If the measurement is not faulty and the numbers are correct, then, disregarding individual chip quality, this is strange.
I believe the explanation is that the RX64 is suffering much more severely from being front-end bottlenecked, with its polygon throughput still on the order of 4 triangles per clock since primitive shaders have not been enabled in drivers yet. As a result, the higher power target and extra CUs of the RX64 are simply being wasted. The RX56 is proportionally less affected by the front-end bottleneck, so it's showing much better perf/watt even without primitive shaders enabled.

And yet the power consumption is measured as going down, which also makes me wonder.
That appears to be a typical result when undervolting Vega with an increased power target. Gamers Nexus found similar behaviour when they undervolted a Vega FE. With current RX drivers, it should be possible to do even better than what GN achieved, as it should be possible to find a power offset below +50% that is still high enough to allow the card not to power throttle at stock clocks while also undervolting.
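A sketch of that tuning procedure (my own illustration; `runs_without_throttling` stands in for a real stress test plus telemetry check, e.g. watching clocks in WattMan, and is not an actual driver API):

```python
# Hypothetical sketch: step the power offset down from +50% and keep the
# lowest setting that still holds stock clocks without power throttling.
# `runs_without_throttling` is a stand-in for a stress test + telemetry
# check, not a real API.

def find_min_power_offset(runs_without_throttling, start=50, step=5):
    offset = start
    while offset - step >= 0 and runs_without_throttling(offset - step):
        offset -= step                    # still stable: keep lowering
    return offset                         # lowest offset with no throttling
```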
 
lol not this again. Where are you getting that now? The comment regarding the Linux drivers?
[attached image: ttzW26x.png, polygon throughput benchmark results]


The Vega whitepaper claims that primitive shaders can increase effective polygon throughput to up to 17 triangles per clock, but since 6,305/1,630 = ~3.87 triangles per clock, either primitive shaders are not enabled yet in drivers or RTG made the feature up out of thin air and it doesn't actually exist. You'll also note that Vega 56 has exactly the same ~3.87 triangles-per-clock rate as the RX64, which would be consistent with the performance testing done by many showing that the RX56 delivers identical performance to the RX64 at the same clocks.
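For reference, the arithmetic being done there (measured throughput in millions of triangles per second divided by the core clock in MHz):

```python
# Triangles-per-clock arithmetic from the post.
tri_rate_mtps = 6305              # measured polygon throughput, Mtris/s
clock_mhz = 1630                  # core clock, MHz
print(f"{tri_rate_mtps / clock_mhz:.2f} tris/clock")   # ~3.87, vs. "up to 17"
```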
 
So, primitive shaders have not been enabled in the drivers yet. Also, DSBR is not working for each and every application out there. Also, HBCC is disabled by default, even though it can potentially increase performance by double-digit percentages. Add all of that together (if those gains are cumulative) and you have your 1080 Ti competitor.
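Just to put numbers on the "if cumulative" caveat (the percentages below are invented, since nobody has per-feature figures): independent speedups compound multiplicatively, not additively.

```python
# Compounding illustration; the per-feature gains are made up.
gains = [0.10, 0.08, 0.12]   # hypothetical: primitive shaders, DSBR, HBCC
total = 1.0
for g in gains:
    total *= 1.0 + g         # independent speedups multiply
print(f"combined: +{(total - 1) * 100:.0f}%")   # ~33% from ~10% each
```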
 
That appears to be a typical result when undervolting Vega with an increased power target. Gamers Nexus found similar behaviour when they undervolted a Vega FE. With current RX drivers, it should be possible to do even better than what GN achieved, as it should be possible to find a power offset below +50% that is still high enough to allow the card not to power throttle at stock clocks while also undervolting.

Typical result? Do we have a reasonable sample of undervolted Vegas to even talk about what's typical?


Undervolted (1200 -> 1025 mV) and set to +50% power, Gamers Nexus' Vega 56 draws 30-70 W more than stock.

[chart: undervolt-v56-power_3mgs58.png, Vega 56 power draw at stock vs. undervolted with +50% power]

http://www.gamersnexus.net/hwreviews/3020-amd-rx-vega-56-review-undervoltage-hbm-vs-core
 
The Vega whitepaper claims that primitive shaders can increase effective polygon throughput to up to 17 triangles per clock, but since 6,305/1,630 = ~3.87 triangles per clock, either primitive shaders are not enabled yet in drivers or RTG made the feature up out of thin air and it doesn't actually exist.
I believe this has been discussed before, but the high polygon throughput is likely a potential peak based on culling primitives, not on magically generating more polygons than its 4 setup engines are capable of. From what I know, the test you linked isn't designed to measure any kind of discard.
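One way to square the two numbers (my interpretation, not something the whitepaper spells out): if setup still handles ~4 visible triangles per clock, the 17/clock peak only makes sense when most primitives are discarded before setup.

```python
# Reading the whitepaper peak as a culling figure: interpretation only.
setup_rate = 4                    # visible triangles set up per clock
claimed_peak = 17                 # whitepaper "up to" figure
cull_fraction = 1 - setup_rate / claimed_peak
print(f"implies ~{cull_fraction:.0%} of primitives culled")   # ~76%
```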
 
A Quick Look At The High Bandwidth Cache Controller On AMD’s Radeon RX Vega

With or without HBCC, Vega 64 peaked at 8GB used with anti-aliasing disabled. With it enabled, and set to 4xMSAA, we can see that HBCC does have to step in, with GPU-Z reporting close to 11GB of memory used even though the GPU really has only 8GB.

The story doesn't end there, though. If you'll notice, the system RAM increased 3GB in the HBCC On test, which makes sense since the reported VRAM increased 3GB, but what's peculiar is the DRAM result at 4xMSAA. Somehow, general system memory became flooded with extra data as a byproduct of being absolutely inundated with work.

And that’s the kind of oddity you can expect to see with such specific, bleeding-edge and hardware-punishing testing. 3DMark didn’t always fare well through these repeated benchmarks, which doesn’t really matter as this particular use case is non-existent.
https://techgage.com/article/a-look-at-amd-radeon-vega-hbcc/
 
I wonder if Raven Ridge will contain HBCC; there could potentially be some improvements to be had there.
 
HBCC is more important with less VRAM available, but we don't know if it's useful at all without any VRAM available... From what I understand, HBCC is just a way to allocate only the necessary data into VRAM, "cache" the rest in system RAM, and swap them in real time.
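A toy model of that idea (entirely illustrative; real HBCC works on hardware pages with driver/firmware policy, not a Python dict):

```python
# Toy model of VRAM-as-cache paging: a fixed-size "VRAM" holds hot pages,
# overflow lives in "system RAM", and pages swap on demand (LRU eviction).
# Illustrative only; this is not how HBCC is actually implemented.

from collections import OrderedDict

class ToyHBCC:
    def __init__(self, vram_pages):
        self.vram = OrderedDict()        # page -> data, in LRU order
        self.sysram = {}                 # overflow pages
        self.capacity = vram_pages

    def access(self, page):
        if page in self.vram:
            self.vram.move_to_end(page)  # hit: refresh LRU position
            return self.vram[page]
        data = self.sysram.pop(page, f"data:{page}")  # miss: fetch from RAM
        if len(self.vram) >= self.capacity:
            cold, cold_data = self.vram.popitem(last=False)  # evict coldest
            self.sysram[cold] = cold_data
        self.vram[page] = data
        return data
```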
 