AMD Vega Hardware Reviews

After thinking about it and talking to some AMD people: the texture bandwidth issue could be related to the short duration of the test and Vega's gradual progression through clock states. Will try to investigate.

Thank you for talking to them about it and investigating. I've been puzzled by Vega 10's bandwidth issues since Vega FE got tested, and testing of Vega 64 and Vega 56 doesn't show the issue as fully fixed yet. It would also be nice if you could ask your contacts whether and to what extent DSBR, primitive culling and primitive shaders are working, as I've heard some reports that primitive shaders are *still* disabled in the drivers.

 
Best case scenario it's almost exactly 1 year away, not 1.5 (GloFo claims they're ready for mass production in the 2nd half of 2018 with the 7nm "Leading Performance" process)
That is truly a best case scenario. In some aspects GF are almost a year behind TSMC. On the bright side, though, they won't lead with a mobile-oriented process, so there is that. And the process looks good, from the little that is in the public domain. Execution remains to be seen. High-performance 7nm, whether from TSMC or GloFo, actually looks very good compared to 16/14nm. Whenever the 7nm processors and GPUs arrive, there is every reason to assume it will be a significant step up in performance, particularly for GPUs of course.
 
So after weeks of "illegitimate results" and "wait for Reviews!!!1" we are currently at a crossroads of "Now wait for....<>" and "If you only undervolt, overclock, and run non-existent software at non-existent settings.."

Instead of trying to predict the future or dealing in murky hypotheticals, maybe start with the notion that AMD, not being idiots, gave you the best product they could, and evaluate it on its merits. The theory that AMD was "sandbagging" FE was dubious enough; the notion that they are sabotaging their own launch by holding anything back is absurd.

It is what it is.
 
And yeah, async panned out and devs stumbled on it. It takes time to work around the performance hit, and it did improve performance, as demonstrated in every(?) title using it in DX12. DSBR is enabled everywhere and doesn't require specific dev intervention as some suggested; Rys confirmed that on Twitter. AMD likely needs to tune the binning though. It took Nvidia a while to get theirs right. FP16 was announced by AMD when they first showed Vega, so not sure why it would only "now" be becoming relevant. It's been there the entire time. All of those will make a difference; that part has never changed. All of those happen to be in your "magic" drivers.
They are magic when you start conjuring up numbers from your own imagination, claiming to surpass the 1080 Ti without a shred of substantial evidence, the same way you claimed a massive reduction in power consumption through driver power regulation, or claimed that disabled driver features will massively boost performance. Among them DSBR, which did nothing to improve performance in the end. As for async, it barely added 5% more performance in actual games, and the jury is still out on FP16. This constant overhyping to imaginary proportions is what ruined AMD's launch in the first place.
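For background on the packed-FP16 ("rapid packed math") point being argued above, here is a minimal sketch of the general idea, written in CUDA purely as an illustration (Vega exposes the same concept through its shader compilers, not this API): two half-precision values share one 32-bit register, so a single instruction performs two operations, doubling peak FP16 rate only where the math tolerates reduced precision.

Code:
#include <cuda_fp16.h>

// Illustrative only: two FP16 values are packed per 32-bit register,
// so one __hfma2 performs two fused multiply-adds in a single
// instruction on hardware with native packed-FP16 support.
__global__ void scale_bias_fp16(const __half2* in, __half2 scale,
                                __half2 bias, __half2* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hfma2(in[i], scale, bias);  // in*scale + bias on both packed halves at once
}

Whether a real game sees anything close to 2x depends on how much of its shader work can actually be demoted to half precision, which is exactly what is in dispute here.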
 
Has anyone covered Video Decode/Encode at all? Even just a DXVA checker screenshot showing the format support?

I'm in the position where upgrading to a 1070/1080(ti) from my 1060 would actually be a downgrade in some features that are important to me. It'd be nice if there was an option where I could get both a performance upgrade and upgraded functionality in this area.
 
That is truly a best case scenario. In some aspects GF are almost a year behind TSMC. On the bright side, though, they won't lead with a mobile-oriented process, so there is that. And the process looks good, from the little that is in the public domain. Execution remains to be seen. High-performance 7nm, whether from TSMC or GloFo, actually looks very good compared to 16/14nm. Whenever the 7nm processors and GPUs arrive, there is every reason to assume it will be a significant step up in performance, particularly for GPUs of course.
While GF can be seen as a year behind TSMC, they're not spending any R&D on a 10nm process at all, unlike TSMC, and their 7nm (at least the first 7nm node, anyway) is indeed performance oriented.
 
You are certainly entitled to your opinion, but your posts are misleading at best and it makes the whole site look bad IMO. I don't think anyone here is willing to argue with you any more since your position is so hysterically biased and unreasonable. I find myself almost liking your posts just so I can unlike them.
Keep on posting Anarchist. I don't agree with some of your posts that are heavily AMD slanted (just like mine are the opposite:) ), but the architectural ones require me to think, often for quite a while, and they give me greater insight, even if I sometimes eventually don't agree either.
 
You are certainly entitled to your opinion, but your posts are misleading at best and it makes the whole site look bad IMO. I don't think anyone here is willing to argue with you any more since your position is so hysterically biased and unreasonable. I find myself almost liking your posts just so I can unlike them.
Not opinion, empirical evidence in this case. We have hard numbers from these reviews. DICE (at GDC, I think), AMD, and common sense have presented examples of gains from FP16.

I'm not the one misleading anyone here; you are, with flagrantly biased posts and viral marketers backing your position. That's what makes the site look bad, when you come in here outright rejecting simple logic because it doesn't support your position. So feel free to take your hysterical doom and gloom elsewhere.

They are magic when you start conjuring up numbers from your own imagination, claiming to surpass the 1080 Ti without a shred of substantial evidence, the same way you claimed a massive reduction in power consumption through driver power regulation, or claimed that disabled driver features will massively boost performance. Among them DSBR, which did nothing to improve performance in the end. As for async, it barely added 5% more performance in actual games, and the jury is still out on FP16. This constant overhyping to imaginary proportions is what ruined AMD's launch in the first place.
Then stop conjuring numbers and use the evidence and a little common sense. I've presented you a link to the benchmark requested. Devs have published papers on gains from various tech. So go do your homework and quit making shit up. GN confirmed DSBR was disabled initially, so saying it was disabled initially seems straightforward enough. Even enabled it would need tuning, and AMD presented optimizations, with no figures attached, in the Mantor interview on discarding primitives. Saving bandwidth should at the very least have an impact. Maybe you know more about Vega than a corporate fellow though.

Async is still hard to judge, as it ultimately required paths Nvidia could run efficiently. That's part of what is holding back new engines, since multi-engine work was largely put on hold; async for VR techniques is in the same boat for a number of reasons. The gains are there, it's just a matter of getting them fully enabled. I'm not imagining anything, just putting forth points that AMD and devs have presented. I'd say the jury is still out on a "ruined" launch, because if they sell all their product at a higher MSRP than they anticipated, it's hardly a bad thing.

Regardless we still need to wait for more in depth reviews or details from AMD. Whole lot of unknowns with Vega before we can judge it.

Keep on posting Anarchist. I don't agree with some of your posts that are heavily AMD slanted (just like mine are the opposite:) ), but the architectural ones require me to think, often for quite a while, and they give me greater insight, even if I sometimes eventually don't agree either.
Don't worry I will. I often prefer playing devil's advocate just to hash out positions. Finding the limits of an architecture is the hard part, yet the most useful more often than not. Besides, anything well known isn't worth discussing.

I still genuinely believe the performance is there, just a matter of understanding what is going on prior to reaching a verdict. Taking results as fact without bothering to understand them is pointless.
 
Anarchist4000 said:
Again, get rid of the turbo and power isn't an issue. Then take a game with DX12/Vulkan and FP16 it probably beats 1080ti significantly at similar power.
Anarchist4000 said:
Just because you don't like the facts doesn't make them wrong.
Anarchist4000 said:
Then stop conjuring numbers and use the evidence and a little common sense.
Anarchist4000 said:
So go do your homework and quit making shit up.
Anarchist4000 said:
I still genuinely believe the performance is there, just a matter of understanding what is going on prior to reaching a verdict.
 
Not opinion, empirical evidence in this case. We have hard numbers from these reviews. DICE (at GDC, I think), AMD, and common sense have presented examples of gains from FP16.

I'm not the one misleading anyone here; you are, with flagrantly biased posts and viral marketers backing your position. That's what makes the site look bad, when you come in here outright rejecting simple logic because it doesn't support your position. So feel free to take your hysterical doom and gloom elsewhere.
I can understand if you think I'm biased based on what I've posted in this thread, but in reality I am super disappointed with AMD. Based on everything they've been saying the last few months + using common sense and extrapolating from Fiji performance I thought there was no way they could fail this hard.

BTW I only wanted you to cool it with the outrageous claims of dramatic future performance increases that may or may not appear. I could just as easily claim NVIDIA will release an uber driver in 6 months that will increase GP104 performance across the board by 30%, but I don't because A) it is bullshit and B) I would get laughed off the forum.
 
I can understand if you think I'm biased based on what I've posted in this thread, but in reality I am super disappointed with AMD. Based on everything they've been saying the last few months + using common sense and extrapolating from Fiji performance I thought there was no way they could fail this hard.

BTW I only wanted you to cool it with the outrageous claims of dramatic future performance increases that may or may not appear. I could just as easily claim NVIDIA will release an uber driver in 6 months that will increase GP104 performance across the board by 30%, but I don't because A) it is bullshit and B) I would get laughed off the forum.
Then again, we do have historical evidence of AMD cards gaining more performance over time compared to GeForces in every generation since GCN was first introduced. With Vega being the biggest change to the architecture, yet still essentially GCN, is there a reason not to expect at least a similar performance uplift as in previous generations, if not a bigger one due to the bigger-than-usual changes?
 
Then again, we do have historical evidence of AMD cards gaining more performance over time compared to GeForces in every generation since GCN was first introduced. With Vega being the biggest change to the architecture, yet still essentially GCN, is there a reason not to expect at least a similar performance uplift as in previous generations, if not a bigger one due to the bigger-than-usual changes?

There is, but when you're late like they are, by the time your performance starts looking favorable against the competition the competition will have introduced new products. They really aren't in a good place, ATM.
 
Then again, we do have historical evidence of AMD cards gaining more performance over time compared to GeForces in every generation since GCN was first introduced. With Vega being the biggest change to the architecture, yet still essentially GCN, is there a reason not to expect at least a similar performance uplift as in previous generations, if not a bigger one due to the bigger-than-usual changes?
You could just as easily make the argument that all the low-hanging fruit has been picked and future increases like the ones we saw with Tahiti and Hawaii are unlikely.

And even if Vega performance does increase over time (it probably will a bit), that is far less relevant than how it performs right now, for a multitude of reasons. The only logical conclusion as of 2017/08/14 is that Vega is a failure in comparison to a chip that is much smaller, much cheaper to produce, much less power hungry, and has been on the market for 15 months. Not to mention Vega should be competing against GP102 given its BOM.
 
Has anyone covered Video Decode/Encode at all? Even just a DXVA checker screenshot showing the format support?

I'm in the position where upgrading to a 1070/1080(ti) from my 1060 would actually be a downgrade in some features that are important to me. It'd be nice if there was an option where I could get both a performance upgrade and upgraded functionality in this area.

Got my answer from this whitepaper (pg 14).

Hybrid decoding for VP9....sigh.
 
I've presented you a link to the benchmark requested.
One benchmark! One against dozens showing subpar performance across 1080p, DX11, anti-aliasing, and even several DX12 titles (Hitman, Sniper Elite 4, Gears of War 4, etc.).
Devs have published papers on gains from various tech.
Yet you've presented none of that here.
and quit making shit up.
Great advice, because I am not the one claiming ridiculous gains out of thin air.
Even enabled it would need tuning
Saving bandwidth should at the very least have an impact.
Where is that impact, and how much of it is there?
Maybe you know more about Vega than a corporate fellow though.
Coincidentally, a corporate fellow never gave any concrete number, nor promised any massive performance improvement, something you do liberally in a heartbeat despite NOT being a corporate fellow, and despite being proven consistently wrong on every prediction you make.
Regardless we still need to wait for more in depth reviews or details from AMD. Whole lot of unknowns with Vega before we can judge it.
More great advice; you should start immediately and not give out imaginary "magic" numbers before the dust settles.
 
Not sure why people were expecting so much. The Vega launch has been both delayed and rushed. AMD has had working silicon for 8 months and still weren't able to get performance up much beyond FE level, with the RX launch coming less than 2 months after it. R600 comparisons are pretty justified right now. Vega 64 is quite terrible on all fronts right now, but Vega 56 delivers 90%+ of the performance with only ~80% of the FLOPS, so something is seriously bottlenecking. AMD probably knew this and couldn't produce drivers that are actually able to bypass all the bottlenecks. At this point, I doubt they'll be able to magically come out with drivers that improve performance by much; maybe a little eventually, but Vega 64 is terrible as a product right now, and unless prices drop it won't be worth it. AMD will probably have to forfeit the high end again until Navi, and then we get to have the same song and dance again where everyone can hope they catch up.
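For reference, the "~80% of the FLOPS" figure follows straight from the reference specs; the boost clocks below are the nominal paper values (assume real sustained clocks land somewhat lower on both cards):

Vega 64: 4096 SPs x 2 ops/clk x ~1.546 GHz ≈ 12.7 TFLOPS FP32
Vega 56: 3584 SPs x 2 ops/clk x ~1.471 GHz ≈ 10.5 TFLOPS FP32
10.5 / 12.7 ≈ 0.83

So Vega 56 has roughly 83% of Vega 64's peak shader throughput on paper, which is why near-parity in actual games points to something other than shader throughput being the limiter.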

Vega 56 does seem like a good card, but AMD has been delayed so long that this probably isn't good enough for them to compete, especially since they are currently being compared with year-old cards and Nvidia will have Volta early next year. Navi will be 7nm according to AMD, and I can't see it coming before the end of 2018. Hopefully Navi will bring the same kind of upgrade the R700 generation had over R600.
 
Why did people expect so much? Maybe because of "Poor Volta", "Join the Revolution", "the FE is not a benchmark for the gaming performance of RX Vega"... The product has some deficits, but the disappointment is 100% AMD-made; their marketing did everything to fuel over-the-top expectations.
 
Why did people expect so much? Maybe because of "Poor Volta", "Join the Revolution", "the FE is not a benchmark for the gaming performance of RX Vega"... The product has some deficits, but the disappointment is 100% AMD-made; their marketing did everything to fuel over-the-top expectations.
AMD barely made any of that hype. The revolution has been their marketing thing since Polaris, FE is not RX Vega and the performance is slightly different, and "Poor Volta" was a small Easter egg on a sign for half a second.

AMD has barely even talked about Vega in the past 6 months (compared to Ryzen), and if people are getting overhyped off the few little tidbits that get dug up, they have their own problems.
 
I'd say if AMD thinks "Poor Volta.." would just be seen as a funny Easter egg, they're the ones with the problem. And if they think the few percent between the FE and RX Vega 64 in gaming is what people understand when they say "FE is not representative of RX Vega performance", they missed some very basic communication rules.
Let me remind you of the wallpapers long before the launch of the product. IMHO they talked a lot about Vega.
 