AMD Radeon VII Announcement and Discussion

Believe it or not, I think what this graph shows is actually the saddest part of it all.
Completely agreed.

Most of AMD's new GPU launches from the past 5 years seem like the result of a monkey's paw wish.

Hawaii: good performance at launch, stable drivers, but the most terrible cooler in the world, which overshadows all the positives.
Fiji: good performance at launch, stable drivers, good cooling solutions, but too little VRAM, which hurt its long-term performance.
Polaris 10: good performance, stable drivers, better efficiency, an adequate amount of VRAM, but it pulled too much power from the PCIe slot.
Vega 20: good performance at a competitive price, a new node, finally a high-end solution again, buuut the SMU driver implementation is broken on Windows, so efficiency and stability go down the toilet.


AMD's official response to this seems to be terrible at the moment. Apparently they told the guys at Gamers Nexus that everything is final and working as it was supposed to be, so there's no improvement to expect.
 
Maybe that's just what the bin is: an MI50 that's not efficient? Efficiency matters for corporate customers; they're the ones most sensitive to electricity bills. AMD previously stated that most consumers don't seem to care, and I suppose they'd know best from their sales numbers. Bad efficiency bins are definitely a thing, but I'm just speculating.
 
I actually thought efficiency would matter for corporate customers too.
But clearly that's not the case either. If it actually mattered, AMD would be selling these chips in cards using ~230 W, roughly 10% slower but using 25% less electricity.
And yes, I'm somewhat surprised AMD isn't doing exactly that (though for consumer cards it's not surprising).
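For what it's worth, here's the back-of-the-envelope math behind that claim: a minimal sketch assuming dynamic power scales roughly as f*V^2, with an illustrative (not measured) voltage drop accompanying the 10% clock reduction.

```python
# Back-of-the-envelope: dynamic power scales roughly as P ~ f * V^2.
# The voltage ratio below is an illustrative assumption, not measured data.

def relative_power(f_ratio: float, v_ratio: float) -> float:
    """Power relative to stock for given frequency and voltage ratios."""
    return f_ratio * v_ratio ** 2

# ~10% lower clock, which on the steep top end of the V/F curve
# also allows roughly 9% lower voltage (assumed):
print(f"relative power: {relative_power(0.90, 0.91):.2f}")  # ~0.75 -> ~25% less
```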
 
Apparently the cards can use 230W while keeping performance intact.
Here's an example of a guy who's running his GPU at 960 mV vcore (down from the 1094 mV default):


I can't find anyone who hasn't been successful at reducing vcore to at least 1 V.
Driver auto-undervolting should have been working on day one. It was absolutely essential for this card.
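To put a rough number on what an undervolt like that is worth: a quick sketch assuming dynamic power scales with V^2 at a fixed clock (leakage ignored), using the voltages quoted above.

```python
# Rough estimate of the dynamic power saved by undervolting at a fixed
# clock, assuming P ~ V^2 when frequency is unchanged (leakage ignored).

default_mv = 1094    # default core voltage mentioned above
undervolt_mv = 960   # the stable undervolt from the example

saving = 1 - (undervolt_mv / default_mv) ** 2
print(f"dynamic power saved at the same clock: ~{saving:.0%}")  # ~23%
```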
 
Vega 10 had (and has) massive undervolting potential too. I guess they don't have the tools to do better per-chip voltage adjustment (is that the correct term? I mean testing GPUs to find the needed voltage case by case, like the Hovis method). Or Vega wasn't designed with that aspect in mind.
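For illustration, per-chip voltage tuning along those lines could look something like the sketch below; set_voltage_mv and run_stress_test are hypothetical stand-ins for vendor test tooling, not any real AMD interface.

```python
# A minimal sketch of per-chip voltage binning as described above: lower
# the core voltage at a fixed target clock until a stress test fails,
# then keep the last stable value plus a guard band. `set_voltage_mv`
# and `run_stress_test` are hypothetical stand-ins for vendor tooling.

def find_min_stable_voltage(set_voltage_mv, run_stress_test,
                            start_mv=1100, floor_mv=850,
                            step_mv=10, guard_band_mv=25):
    last_stable = start_mv
    for mv in range(start_mv, floor_mv - 1, -step_mv):
        set_voltage_mv(mv)
        if not run_stress_test():
            break  # unstable: stop at the previous (stable) voltage
        last_stable = mv
    return last_stable + guard_band_mv  # value to fuse/flash per chip
```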
 
Apparently the cards can use 230W while keeping performance intact.
Yes, but I'm not talking about undervolting. That might be an interesting discussion as well (could AMD ship these cards with lower voltages while still guaranteeing they work correctly in all conditions?), but it's really a separate one.
I'm only talking about the fact that AMD is operating these chips way beyond the point where they run efficiently, even on the datacenter-oriented cards. Even with the standard (non-undervolted) voltage/frequency curve, if you drop clocks by 10%, power drops by something like 25%.
(It wasn't a test of such a card but rather of the Radeon VII: computerbase.de tested with a -20% TDP limit (which is the minimum you can even set...), and it resulted in a 3% performance drop on average, go figure. Granted, that's biased by the fact that some apps don't actually reach the 300 W limit, but even the largest performance drop (a case that definitely went from 300 W to 240 W) was only 6% - https://www.computerbase.de/2019-02...ns-creed-origins-effizienzvergleich-bei-240-w. Now on desktop it sort of makes sense that AMD absolutely wanted to get as much out of the chip as they possibly could, hairdryer or not, but I'd have thought efficiency would matter a lot more in the datacenter - if you need more performance there, get more cards...)
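Translating those computerbase.de numbers into perf/W: a quick sketch of what the 300 W to 240 W power-limit drop implies, keeping in mind that the average case is biased by apps that never reached 300 W in the first place.

```python
# Perf/W implied by the computerbase.de numbers above: dropping the power
# limit from 300 W to 240 W cost only 3% performance on average (6% worst).

def perf_per_watt_gain(perf_ratio, power_ratio):
    return perf_ratio / power_ratio - 1

power_ratio = 240 / 300
print(f"average case: ~{perf_per_watt_gain(0.97, power_ratio):.0%}")  # ~21%
print(f"worst case:   ~{perf_per_watt_gain(0.94, power_ratio):.0%}")  # ~17%
```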
 
Maybe datacenter apps don't stress all parts of the GPU at the same time the way games do? Like, in my mind, they'll use the shaders/math a lot, but the TMUs/ROPs? Whereas in gaming everything will be active most of the time.
 
Yup. A GPU running compute tasks at "100% utilization" barely stresses the GPU compared to a game, which uses all parts of the chip.
 
AMD Radeon VII: Benchmarks with current games and (Async) Compute
February 11, 2019
The AMD Radeon VII is undoubtedly interesting as a 7 nm product with 16 GB of memory - despite its weaknesses. This second review includes new game benchmarks as well as DirectX 12, async, and compute measurements. The new games paint a slightly better picture but cannot change the conclusion.
https://www.computerbase.de/2019-02/amd-radeon-vii-sonder-test/
 
[Graph: Radeon VII undervolting results (uQlHLxZ.png)]

From here:
At 986 mV the core clock stays above 1790 MHz, so performance actually increases while the card is almost silent (the fans automatically run at 1550 rpm).
The overclock on his card was achieved with a slight undervolt to 1082 mV, which lets the core clock above 1920 MHz. He claims a 9-10% bump in performance after the GPU and HBM2 overclock (plus an increased power limit), which in a comparison against the RTX 2080 would be enough to tilt the balance in favor of the VII in most games.


AMD's "factory overvolts" seem super weird at this point. All cards I've seen so far can undervolt at least by 100mV.
The Radeon VII's reference core voltage at the moment hurts everything: temperatures, power consumption, core clocks and noise.
Why?!





Who's making the HBM2 stacks? They all seem to clock at 1200 MHz with no problems (aside from slightly increased temperatures). Maybe these are Samsung Aquabolt chips?
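As a side note, here's what that memory overclock is worth in raw bandwidth, a quick sketch assuming the Radeon VII's 4096-bit combined HBM2 bus:

```python
# What the 1000 MHz -> 1200 MHz HBM2 overclock means for bandwidth.
# Radeon VII: four HBM2 stacks, 4096-bit combined bus, double data rate.

BUS_WIDTH_BITS = 4096

def peak_bandwidth_gb_s(clock_mhz):
    """Peak bandwidth in GB/s for a double-data-rate memory bus."""
    return clock_mhz * 2 * BUS_WIDTH_BITS / 8 / 1000

print(f"stock, 1000 MHz: {peak_bandwidth_gb_s(1000):.0f} GB/s")  # 1024 GB/s
print(f"OC,    1200 MHz: {peak_bandwidth_gb_s(1200):.0f} GB/s")  # 1229 GB/s
```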
 
If someone is still wondering about multi-GPU/cfx support, it's there and seems to work fine at least in these test cases:
http://blog.livedoor.jp/wisteriear/archives/1073779496.html

It looks like it's not working, and the use cases where Crossfire "works" seem to be where DX12 explicit multiadapter is supported instead.

But damn, those scaling numbers on explicit multiadapter are ridiculous! Practically 100% scaling on any architecture.
Here's hoping explicit multiadapter will be broadly supported in the future. If it were right now, two $200 RX 580 8GB cards would deliver performance between the RTX 2080 and the 2080 Ti.
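A rough sanity check of that claim, with clearly made-up single-card ratios (roughly where the RX 580 and the 2080 Ti tend to land relative to the RTX 2080; illustrative assumptions, not benchmark data):

```python
# Sanity check of the two-RX-580 claim, assuming near-100% explicit
# multiadapter scaling. The single-card ratios are illustrative
# assumptions (not benchmark data): RX 580 at ~55% of an RTX 2080,
# RTX 2080 Ti at ~125% of an RTX 2080.

rx580_vs_2080 = 0.55
ti_vs_2080 = 1.25
scaling = 1.00  # "practically 100% scaling"

dual_rx580 = rx580_vs_2080 * (1 + scaling)
print(f"2x RX 580 relative to RTX 2080: ~{dual_rx580:.2f}x")
print(f"lands between 2080 and 2080 Ti: {1.0 <= dual_rx580 <= ti_vs_2080}")
```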
 
From the graph above, how much exactly is the Radeon VII (at 984 mV) consuming?
 