AMD Vega Hardware Reviews

Gamer's Nexus was able to undervolt their GPU as well:

http://www.gamersnexus.net/guides/2...tion-undervolt-benchmarks-improve-performance

They found it improved performance: the card was able to boost to higher clocks for longer while using barely more power than stock.

I read a pretty detailed post on the Overclockers UK forum about extracting the most performance from AMD cards by undervolting and increasing the power limit.

The user said that even though he saw similar clocks in monitoring software, the card power-throttled enough to cause lower benchmark scores. The post is gone now, but IIRC he was using a 79xx card.
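The mechanism is simple enough to sketch: dynamic power scales roughly with V²·f, so shaving voltage at the same clock frees up power budget that the boost algorithm can spend on holding higher clocks instead of throttling. A toy illustration (the voltage and clock numbers below are made up for the example, not measured Vega figures):

```python
# Rough sketch of why undervolting frees up boost headroom.
# Dynamic power scales roughly as P ~ C * V^2 * f.
# All numbers are illustrative, not measured values.

def dynamic_power(voltage_v, clock_mhz, c=1.0):
    """Relative dynamic power at a given voltage and clock."""
    return c * voltage_v ** 2 * clock_mhz

stock = dynamic_power(1.20, 1600)        # hypothetical stock voltage/clock
undervolted = dynamic_power(1.05, 1600)  # same clock, lower voltage

savings = 1 - undervolted / stock
print(f"power saved at the same clock: {savings:.0%}")  # ~23%
```

That freed-up ~20%+ of power budget is what lets the card sit at its top boost state longer before hitting the power limit.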
 
Vega is huge and uses HBM... why does it suck so badly compared to GP104?

Perhaps GP102 exists because a gaming-oriented GP100 would also suck compared to a GP104.
The closest chip to Vega in terms of FP16 throughput is GP100 (also a 300W chip clocking below 1.5GHz), which actually has lower FP32 and FP16 throughput. 2*FP16 probably doesn't come for free in the power and area departments, and neither does HBCC. Vega uses HBM, but it's HBM2 at close to 2Gbps in 8-Hi stacks, both of which are unprecedented and could be pulling a lot of power. Compared to Fiji's, Vega FE's HBM stacks have 4x the memory density, twice the frequency, and twice the number of layers. It'll be interesting to compare power consumption and thermal performance between 8GB and 16GB Vega cards.
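For reference, the throughput comparison checks out on back-of-the-envelope math from public spec-sheet numbers (4096 ALUs at ~1.6GHz for Vega FE, 3584 at ~1.48GHz for Tesla P100; boost clocks approximate):

```python
# Theoretical throughput: FLOPS = ALUs * clock * 2 (one FMA per clock),
# and packed 2*FP16 doubles that on both Vega and GP100.
# Spec-sheet numbers; boost clocks are approximate.

def tflops(alus, clock_ghz, ops_per_clock=2):
    return alus * clock_ghz * ops_per_clock / 1000

vega_fe_fp32 = tflops(4096, 1.6)   # ~13.1 TFLOPS
vega_fe_fp16 = 2 * vega_fe_fp32    # ~26.2 TFLOPS
p100_fp32 = tflops(3584, 1.48)     # ~10.6 TFLOPS
p100_fp16 = 2 * p100_fp32          # ~21.2 TFLOPS

print(f"Vega FE: {vega_fe_fp32:.1f} / {vega_fe_fp16:.1f} TFLOPS (FP32/FP16)")
print(f"GP100:   {p100_fp32:.1f} / {p100_fp16:.1f} TFLOPS (FP32/FP16)")
```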

Current problems notwithstanding (much lower than expected texel fillrate, effective bandwidth, and geometry performance in games, any of which could still be solved through drivers), Vega's problem is that it's one chip competing with GP104+GP102+GP100.
It has to give AMD a foothold as a high-end videogame card, as a high-end content creation card (lots of VRAM), and as a high-end compute-oriented card (2*FP16, non-useless FP64). Fiji only addressed one of these markets.
On top of that, it's being built on GlobalFoundries' 14FF, which seems to be hopelessly behind TSMC's 16FF+ in power/performance. And judging from both foundries' 7nm expectations, it doesn't look like the tables will turn come 2018.

And yes, GV100 is less than half a year away, but given its ridiculous size, its use of TSMC's 12FF risk production, and a price to match, I don't think it'll be a replacement for GP100. It looks like an even higher-end offering for a more specific audience.

That said, despite AMD's bullish statements and rather poor marketing and communication, it would be a miracle if AMD's R&D had managed to develop a single GPU that could take on three different chips at once and beat them all at their own game.
And, of course, Vega is late. Vega is very late; I'd bet AMD's initial plan was to be selling Vega chips in late Q1/early Q2.
 
2*FP16 probably doesn't come for free in the power and area departments
The PS4 Pro APU's size (and surely its power) doesn't imply that.

It must be the HBC controller or the pixel engine. One of them (or a combination of them) is a disaster in performance/size.
 
Perhaps GP102 exists because a gaming-oriented GP100 would also suck compared to a GP104.
Maybe it exists to save Nvidia manufacturing costs and increase their margins. Also, to avoid cannibalizing the supply of high-margin compute cards, since the HBM2 memory would have to be split between both product lines? Just a thought.

The closest chip to Vega in terms of FP16 throughput is GP100 (also a 300W chip clocking below 1.5GHz), which actually has lower FP32 and FP16 throughput. 2*FP16 probably doesn't come for free in the power and area departments, and neither does HBCC.
HBCC is materially different in power consumption from a traditional memory controller? Or just "not for free" as in maybe 5%-ish difference?

Vega uses HBM, but it's HBM2 at close to 2Gbps in 8-Hi stacks, both of which are unprecedented and could be pulling a lot of power. Compared to Fiji's, Vega FE's HBM stacks have 4x the memory density, twice the frequency, and twice the number of layers. It'll be interesting to compare power consumption and thermal performance between 8GB and 16GB Vega cards.
Yes, that's going to be interesting.
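The bandwidth side is easy to sanity-check: peak HBM bandwidth is stacks × bus width per stack × data rate per pin. Rough figures below (Fiji at 1Gbps per pin, Vega FE at ~1.89Gbps; both approximate):

```python
# Peak memory bandwidth = stacks * bus bits per stack * Gbps per pin / 8.
# Each HBM/HBM2 stack has a 1024-bit interface; data rates approximate.

def hbm_bandwidth_gbs(stacks, bus_bits_per_stack, gbps_per_pin):
    return stacks * bus_bits_per_stack * gbps_per_pin / 8

fiji = hbm_bandwidth_gbs(4, 1024, 1.0)       # 4 stacks HBM1 -> 512 GB/s
vega_fe = hbm_bandwidth_gbs(2, 1024, 1.89)   # 2 stacks HBM2 -> ~484 GB/s

print(f"Fiji:    {fiji:.0f} GB/s")
print(f"Vega FE: {vega_fe:.0f} GB/s")
```

So Vega FE pushes each stack roughly twice as hard as Fiji did to get comparable total bandwidth out of half as many stacks.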

Current problems notwithstanding (much lower than expected texel fillrate, effective bandwidth, and geometry performance in games, any of which could still be solved through drivers), Vega's problem is that it's one chip competing with GP104+GP102+GP100. It has to give AMD a foothold as a high-end videogame card, as a high-end content creation card (lots of VRAM), and as a high-end compute-oriented card (2*FP16, non-useless FP64). Fiji only addressed one of these markets.
If true, the decision to try to compete against a much smaller GPU that uses much more traditional memory seems financially VERY risky at best.

If HBCC is as efficient as touted, the amount of on-package memory would be VERY generous.

And, of course, Vega is late. Vega is very late; I'd bet AMD's initial plan was to be selling Vega chips in late Q1/early Q2.
Given that nothing really spectacular has happened in the compute segment since early Q2 except the GV100 announcement (you compare Vega to last-gen GP100 yourself), I'm not seeing how this affects the card's competitive standing. Maybe towards the end of the year, but perception right now?
 
Perhaps GP102 exists because a gaming-oriented GP100 would also suck compared to a GP104.
The closest chip to Vega in terms of FP16 throughput is GP100 (also a 300W chip clocking below 1.5GHz), which actually has lower FP32 and FP16 throughput. 2*FP16 probably doesn't come for free in the power and area departments, and neither does HBCC. Vega uses HBM, but it's HBM2 at close to 2Gbps in 8-Hi stacks, both of which are unprecedented and could be pulling a lot of power. Compared to Fiji's, Vega FE's HBM stacks have 4x the memory density, twice the frequency, and twice the number of layers. It'll be interesting to compare power consumption and thermal performance between 8GB and 16GB Vega cards.

Current problems notwithstanding (much lower than expected texel fillrate, effective bandwidth, and geometry performance in games, any of which could still be solved through drivers), Vega's problem is that it's one chip competing with GP104+GP102+GP100.
It has to give AMD a foothold as a high-end videogame card, as a high-end content creation card (lots of VRAM), and as a high-end compute-oriented card (2*FP16, non-useless FP64). Fiji only addressed one of these markets.
On top of that, it's being built on GlobalFoundries' 14FF, which seems to be hopelessly behind TSMC's 16FF+ in power/performance. And judging from both foundries' 7nm expectations, it doesn't look like the tables will turn come 2018.

And yes, GV100 is less than half a year away, but given its ridiculous size, its use of TSMC's 12FF risk production, and a price to match, I don't think it'll be a replacement for GP100. It looks like an even higher-end offering for a more specific audience.

That said, despite AMD's bullish statements and rather poor marketing and communication, it would be a miracle if AMD's R&D had managed to develop a single GPU that could take on three different chips at once and beat them all at their own game.
And, of course, Vega is late. Vega is very late; I'd bet AMD's initial plan was to be selling Vega chips in late Q1/early Q2.
GP100 is plenty capable in terms of gaming. I'm fairly certain it was tested at some point; I'll try to find the link if I can. It performs exactly like a lower-clocked Titan X (or 1080 Ti), far more powerful than GP104, and it even has 128 ROPs compared to 96 on GP102.

What distinguishes GP100 and GV100 from the rest of the line-up is FP64, plus the introduction of tensor cores with GV100, though it remains to be seen whether other Volta dies will get similar functionality.
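Taking the ROP counts quoted above at face value, peak pixel fillrate (ROPs × clock, with approximate boost clocks) would indeed come out in GP100's favor despite the lower clock:

```python
# Peak pixel fillrate = ROPs * clock.
# ROP counts are those quoted in the post above (128 GP100, 96 GP102);
# boost clocks are approximate spec-sheet figures.

def pixel_fillrate_gpix(rops, clock_ghz):
    return rops * clock_ghz

gp100 = pixel_fillrate_gpix(128, 1.48)  # ~189 Gpix/s
gp102 = pixel_fillrate_gpix(96, 1.58)   # ~152 Gpix/s

print(f"GP100: {gp100:.0f} Gpix/s, GP102: {gp102:.0f} Gpix/s")
```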
 
GP100 is plenty capable in terms of gaming. I'm fairly certain it was tested at some point; I'll try to find the link if I can. It performs exactly like a lower-clocked Titan X (or 1080 Ti), far more powerful than GP104, and it even has 128 ROPs compared to 96 on GP102.

What distinguishes GP100 and GV100 from the rest of the line-up is FP64, plus the introduction of tensor cores with GV100, though it remains to be seen whether other Volta dies will get similar functionality.
As far as I'm aware, there are no gaming results for GP100. I'm not even sure the card would let you launch games, as the drivers are likely not at all made for it.
 
Current problems notwithstanding (much lower than expected texel fillrate, effective bandwidth and geometry performance in games, either of which could still be solved through drivers), Vega's problem is that it's one chip competing with GP104+GP102+GP100.
No. Vega's problem is that it doesn't have the efficiency (perf/power) to compete with any Pascal chip. Vega FE is already hitting the power limit hard; that's why you see the AiO card. And RX Vega will have the same problem if the rumors about an AiO version are true. It's basically Fiji 2.0.
 
On GP100?
GP102 and down have desktop equivalents made for gaming. GP100 was never intended to be used in gaming solutions.
No, Tesla drivers run games just fine on a GP104. You'd be hard pressed to find GP100 gaming benchmarks, as I doubt you'd find gaming software on those machines.
However, Quadro GP102 gaming benchmarks should give you an indication of how it would fare.
 
So Vega is basically a 1080 with worse power efficiency... a year later. Even at a slightly better price, why would someone choose this over a 1080, exactly?

If you already have a FreeSync display, I guess. It's the only reason I'm keeping my Fury X (which is still doing a good job, to be fair) and was waiting for Vega. Now I guess I'll wait for Navi (or a "fixed" Vega...).
 
At least they're not showing it against the Titan anymore; being more realistic? Apparently adaptive sync and G-Sync were also in use, and the monitor was covered, so there was no way to determine the refresh range. The resolution doesn't seem to have been mentioned either. If so much has to be hidden and you're locked to a likely 45-70ish fps, I'm not sure these tours are even worth doing.
 
No, Tesla drivers run games just fine on a GP104. You'd be hard pressed to find GP100 gaming benchmarks, as I doubt you'd find gaming software on those machines.
However, Quadro GP102 gaming benchmarks should give you an indication of how it would fare.
Sort of. GP100 has 64 CUDA cores per SM rather than the 128 of the other Pascal chips. The original reason I was looking for GP100 gaming results was to figure out what kind of impact that would have on gaming performance.

If you already have a Freesync display I guess. It's the only reason I keep my Fury X (which is still doing a good job to be fair) and was waiting for Vega. Now, I guess I'll wait for Navi (or a "fixed" Vega...)
Basically my situation. Got a FreeSync display with my 390. Waiting either for NVIDIA to provide VESA Adaptive-Sync support or for AMD to provide a proper upgrade for me.
 
so Vega is basically a 1080 with worse power efficiency... a year later. Even at a slightly better price why would someone choose this over a 1080 exactly?

If RX Vega performs like Vega FE (I still can't believe it), it's going to be an epic engineering failure on par with R600 or worse, the kind I didn't think was possible anymore with today's verification methods. (And this will be 100% on Raja; four years is plenty of time to take control.)

But there are enough AMD enthusiasts (with a FreeSync monitor) who'll only look at price and performance, not power, and definitely not the kind of nerd ratios like perf/mm² that I'm interested in.

And if RX Vega is priced correctly and plays games just fine, it's not an irrational choice.
 
(just some games running, no card, no fps counter) vs. the GTX 1080:
I just can't believe they'd resort to that same tactic again! Since when do NVIDIA or AMD show demos without fps?! This could be another telling sign (like the Vega FE vs. Titan Xp game demos).
 