AMD Vega Hardware Reviews

Asked AMD that question.
If it's not too much trouble, could you ask AMD what is up with the apparent ~20% raw memory bandwidth regression in Vega vs. Fiji? And whether they plan to fix it or are able to fix it? In case you haven't seen the images going around on the subject:

[Image: kkNjv2u.png]

[Image: TvCu3r0.png]
 
Was about to post the same thing: the DICE example of a 30% improvement from FP16, which everyone seems happy to misquote, was for a specific workload, not an overall performance increase. It was mentioned only as accelerating the checkerboard resolve by 30%.
https://www.slideshare.net/DICEStudio/4k-checkerboard-in-battlefield-1-and-mass-effect-andromeda
(Slide 82)
So the "used throughout checkerboard resolve shader" part that they showed increasing overall performance 25-35% accounted for a 30% increase? Guess I could see how people mix that up with FP16 only half that. Didn't really go into the other areas it was used so impact may have been a bit larger. Overall not a bad improvement for only one part of a larger picture.

Is undervolting really what a power saving mode would do?
It lowers voltage and clocks, but in practice it could end up faster thanks to less throttling. GN at least said it was far more stable. I believe someone on Reddit tested with LN2 and was running 1800MHz using 100W LESS power than stock, so leakage and thermals are definitely a thing.
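A toy model of why that LN2 result isn't crazy (the coefficients are invented, nothing here is measured Vega data): dynamic power scales roughly with V^2 * f, while leakage climbs with voltage and temperature, so a colder, undervolted chip can hold higher clocks at noticeably lower power.

```cuda
#include <cmath>
#include <cstdio>

// Toy GPU power model, illustrative only. Coefficients are made up; the point
// is the shape: dynamic power ~ V^2 * f, leakage grows with voltage and
// (roughly exponentially) with temperature.
double gpu_power_w(double volts, double f_ghz, double temp_c)
{
    const double k_dyn  = 95.0;  // hypothetical switching constant
    const double k_leak = 30.0;  // hypothetical leakage at 25 C and 1.0 V
    double p_dyn  = k_dyn * volts * volts * f_ghz;
    double p_leak = k_leak * volts * std::exp(0.02 * (temp_c - 25.0));
    return p_dyn + p_leak;
}

int main()
{
    std::printf("stock-ish 1.20 V, 1.55 GHz,  85 C: %5.1f W\n", gpu_power_w(1.20, 1.55, 85.0));
    std::printf("cold UV   1.05 V, 1.80 GHz, -50 C: %5.1f W\n", gpu_power_w(1.05, 1.80, -50.0));
    return 0;
}
```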
 
Sure, we know these "Perf/W" claims are made up but AMD has done their work...
After reading a dozen reviews, I found this graph the most shocking:

[Image: getgraphimg.php (hardware.fr performance-per-watt chart)]


From hardware.fr, who measure the cards at the PCIe slot: two years later, Vega 64 shows worse perf/W than the already horrible Fury X, and that's with the help of a full node shrink. This is just UN-BE-LIE-VA-BLE :oops::oops::oops:
 
So the "used throughout checkerboard resolve shader" part that they showed increasing overall performance 25-35% accounted for a 30% increase?

Not exactly. The use of Packed-Math FP16 accounted for a 30% increase in the performance of the "checkerboard resolve shader" compared to the same shader without Packed-Math FP16.
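For anyone wondering what "packed FP16" buys at the instruction level, here's a minimal CUDA sketch (my own assumed example, not DICE's HLSL resolve shader): two half-precision values share one 32-bit register, so one packed instruction does two FP16 operations. Vega's Rapid Packed Math is the analogous double-rate FP16 path on AMD.

```cuda
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

// Two FP16 values packed into one 32-bit __half2: a single packed instruction
// does the work of two scalar FP16 ops (needs sm_53+ on NVIDIA hardware).
__global__ void blend_fp16x2(const __half2* a, const __half2* b,
                             __half2* out, int n, float weight)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        __half2 w   = __float2half2_rn(weight);
        __half2 one = __float2half2_rn(1.0f);
        // out = a*w + b*(1-w), evaluated for two FP16 values at once
        out[i] = __hadd2(__hmul2(a[i], w),
                         __hmul2(b[i], __hsub2(one, w)));
    }
}

int main()
{
    const int n = 1 << 20;                     // 1M half2 elements = 2M FP16 values
    __half2 *a, *b, *out;
    cudaMalloc(&a, n * sizeof(__half2));
    cudaMalloc(&b, n * sizeof(__half2));
    cudaMalloc(&out, n * sizeof(__half2));

    blend_fp16x2<<<(n + 255) / 256, 256>>>(a, b, out, n, 0.5f);
    cudaDeviceSynchronize();
    std::printf("launch status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```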
 
And I should note that it's not currently exposed to DXVA. I asked AMD about this, but I haven't had a response so far.

It's clearly not a priority for them, which is a shame. I hope they have a better solution by the time they launch Raven Ridge.

As an aside, this was the tipping point that made me pull the trigger today on buying a custom OC'd 1080 @ $499 over waiting and trying to land a Vega 56 reference card (miner demand putting the cheapest 1070 @ $429 took that out of consideration). There were lots of factors that ultimately led to that decision, but Vega having hardware acceleration for VP9 profile 2 @ 4K would have been enough for me to be willing to wait. But adding nothing over Polaris when Nvidia have added both VP9 up to 8K (1060) and further added VP9 profile 2 up to 8K (1050/1030) to products released since was just one fail too many.
 
It's clearly not a priority for them, which is a shame. I hope they have a better solution by the time they launch Raven Ridge.
It's also interesting to note that Vega only offers partial VP9 acceleration, which is simply unacceptable these days.
 
If it's not too much trouble, could you ask AMD what is up with the apparent ~20% raw memory bandwidth regression in Vega vs. Fiji? And whether they plan to fix it or are able to fix it? In case you haven't seen the images going around on the subject:
While it's not published (since that architecture section wasn't done in time), I did run that same benchmark. I'm getting much better numbers than that on Vega 64. About 20% better, to be precise.

http://images.anandtech.com/doci/11717/rx_v64_aida64eng_gpgpu.png

I've seen someone theorize that in some cases it's taking too long for Vega to clock up in these short benchmarks, and that may very well be the case.
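AIDA64's memory test isn't public, but a raw bandwidth microbenchmark is basically a timed on-device copy. Here's a rough CUDA sketch (my own, not AIDA's code) that includes a warm-up loop, which is exactly the step a very short benchmark effectively skips if the card is still ramping clocks.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Rough sketch of a raw memory-bandwidth test (device-to-device copy), with a
// warm-up phase so the GPU has a chance to ramp clocks before timing starts.
int main()
{
    const size_t bytes = size_t(1) << 30;   // 1 GiB per buffer
    void *src, *dst;
    cudaMalloc(&src, bytes);
    cudaMalloc(&dst, bytes);

    for (int i = 0; i < 10; ++i)            // warm-up: let clocks settle
        cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    const int iters = 50;
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // each copy reads and writes 'bytes', so count the traffic twice
    double gbps = 2.0 * bytes * iters / (ms / 1000.0) / 1e9;
    std::printf("device-to-device bandwidth: %.1f GB/s\n", gbps);

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(src); cudaFree(dst);
    return 0;
}
```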
 
Watching the DF reviews online, and the 56 looks amazing. So far it's beaten the 1070 in everything except AC Unity. In COD it was faster than the 1080. However, in Crysis 3 it's slower than a Fury X. Really don't know what's going on there.

However, with the solid showing, I think it will only improve with time. Hopefully I can snag one when it comes out.
 
Knowing AMD's driver development, I think there's a lot of low-hanging fruit to be picked with Vega. Heck, the Hawaii relaunch saw tessellation improvements that gave >20% in some games, and some noise was made when they weren't given to the 290 series before the 390 reviews were out.

edit: Forgot to add,

An issue that we weren’t expecting, is traditional Multi-Sample or Super Sample Anti-Aliasing performance.

Based on our testing there is indication that MSAA is detrimental to AMD Radeon RX Vega 64 performance in a big way. In three separate games, enabling MSAA drastically reduced performance on AMD Radeon RX Vega 64 and the GTX 1080 was faster with MSAA enabled. In Deus EX: Mankind Divided we enabled 2X MSAA at 1440p with the highest in-game settings. The GeForce GTX 1080 was faster with 2X MSAA enabled. However, without MSAA, the AMD Radeon RX Vega 64 was faster. It seems MSAA drastically affected performance on AMD Radeon RX Vega 64.

https://www.hardocp.com/article/2017/08/14/amd_radeon_rx_vega_64_video_card_review/17
 
From hardware.fr, who measure the cards at the PCIe slot: two years later, Vega 64 shows worse perf/W than the already horrible Fury X, and that's with the help of a full node shrink. This is just UN-BE-LIE-VA-BLE
Yeah, 28nm vs 14nm, plenty of "energy efficiency improvements" according to the marketing materials, and yet it manages worse perf/W than Fury X.

Too bad CF configs aren't viable solutions anymore. Two RX 580s would be faster and maybe have better perf/W...
 
What is the default power mode for the Vega cards?
Balanced mode.

Do any sites have benchmarks comparing power saving mode to balanced on vega 64?
TechReport did a small one using Hitman:

[Image: OfsJ5Gt.png]

[Image: Npr42CN.png]


Note: those are system power consumption numbers.


Because we only had 3 days.
Well the joke's on them, then.
That last-minute push for Vega 56 on reviewers who already had very scarce time to work on the Vega 64 review is just another drop in the ocean of terrible communication and marketing decisions that AMD has been making with Vega, IMO.
 
While it's not published (since that architecture section wasn't done in time), I did run that same benchmark. I'm getting much better numbers than that on Vega 64. About 20% better, to be precise.

http://images.anandtech.com/doci/11717/rx_v64_aida64eng_gpgpu.png

I've seen someone theorize that in some cases it's taking too long for Vega to clock up in these short benchmarks, and that may very well be the case.
Yep, that's been fixed. Same here on time constraints.

I would like to point out, though, that the AIDA GPU benchmarks shown below were part of (our, I guess?) Vega FE review and not originally present in the RX Vega article. I've added them since, with the new and improved numbers for RX Vega; I can't rebench the FE at the moment, though.
 
It's clearly not a priority for them, which is a shame. I hope they have a better solution by the time they launch Raven Ridge.

As an aside, this was the tipping point that made me pull the trigger today on buying a custom OC'd 1080 @ $499 over waiting and trying to land a Vega 56 reference card (miner demand putting the cheapest 1070 @ $429 took that out of consideration). There were lots of factors that ultimately led to that decision, but Vega having hardware acceleration for VP9 profile 2 @ 4K would have been enough for me to be willing to wait. But adding nothing over Polaris when Nvidia have added both VP9 up to 8K (1060) and further added VP9 profile 2 up to 8K (1050/1030) to products released since was just one fail too many.

It is a shame, but I wouldn't get too attached to VP9. Only YouTube uses it, and it's not scalable at all to the future; AV1 should be replacing it... (waits impatiently for the colossal slowness that is AV1...)
 
Balanced mode.


TechReport did a small one using Hitman:

[Image: OfsJ5Gt.png]

[Image: Npr42CN.png]


Note: those are system power consumption numbers.



....

I'd be curious to see more extensive benchmarks. My major concern with Vega is power consumption. That result puts it in a much better light. Still not as good as a GTX 1080, but much more reasonable, and the tradeoff is not big. I would like to see other games benchmarked to see if the performance holds up.
 
I'd be curious to see more extensive benchmarks. My major concern with Vega is power consumption. That result puts it in a much better light. Still not as good as a GTX 1080, but much more reasonable, and the tradeoff is not big. I would like to see other games benchmarked to see if the performance holds up.

Power saving mode makes the Vega 64 clock around 1500MHz, and the power consumption is close to a 1080 Ti or Pascal Titan.

The fact that Vega 10, like Polaris 10, needs to be clocked beyond its ideal power efficiency curve to properly compete is just another clue pointing to a substantial deficit in manufacturing process efficiency.

Globalfoundries had better be selling their 14LPP wafers to AMD for a lot less than TSMC would for 16FF+ wafers, otherwise this is just a lose-lose situation for AMD.
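A rough illustration of the "beyond the sweet spot" point, using hypothetical V/f pairs rather than real Vega telemetry: performance scales roughly with frequency, but dynamic power scales roughly with V^2 * f, and the required voltage climbs at the top of the curve, so perf/W falls off quickly.

```cuda
#include <cstdio>

// Hypothetical voltage/frequency points (not measured Vega data), just to show
// why chasing clocks past the sweet spot costs a disproportionate amount of power.
int main()
{
    const double f_ghz[] = {1.10, 1.30, 1.50, 1.65};
    const double volts[] = {0.90, 1.00, 1.10, 1.20};

    for (int i = 0; i < 4; ++i) {
        double perf  = f_ghz[i];                        // relative performance
        double power = volts[i] * volts[i] * f_ghz[i];  // relative dynamic power
        std::printf("%.2f GHz @ %.2f V -> relative perf/W = %.2f\n",
                    f_ghz[i], volts[i], perf / power);
    }
    return 0;
}
```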
 
Power saving mode makes the Vega 64 clock around 1500MHz, and the power consumption is close to a 1080 Ti or Pascal Titan.

The fact that Vega 10, like Polaris 10, needs to be clocked beyond its ideal power efficiency curve to properly compete is just another clue pointing to a substantial deficit in manufacturing process efficiency.

Globalfoundries had better be selling their 14LPP wafers to AMD for a lot less than TSMC would for 16FF+ wafers, otherwise this is just a lose-lose situation for AMD.
Yeah, TSMC has a lead in performance over GF, but there is a regression in perf/W and perf/area in Vega going from 28nm to 14nm FinFET. There is a problem in the hardware; don't blame the manufacturing process for this terrible product.
 