AMD Vega Hardware Reviews

So what? Everyone knows where Totten's allegiances stand, so it's probably better to ignore those statements; otherwise we risk derailing the thread. Learned that long ago.
In terms of not derailing the thread, maybe we should focus on the actual reviews, seeing as this is the review thread? This was the first set of numbers, and I found the power and frequency graphs quite interesting. Remarkably small gain in performance going from "balanced" to "+50%" for instance, even though power draw increased significantly.
 

I think that further proves that something in the architecture is bottlenecking the chip. Which is curious given the large amount of cache it is supposed to have. The ALUs, or some other parts of the chip that should benefit from an OC, are probably idle waiting for something else to finish? (Uneducated guess; I have a very limited understanding of these things.)

EDIT - Oh wait! Where are you seeing a small gain in performance? We only have one data point about it, the first chart on that page, and the gains seem to be decent? In fact, in that data point, OC vs OC is faster than the 1080 FE!
 
My guess is it's bandwidth starved.
It would be interesting to see benchmarks with a 10% overclock on the HBM and compare.
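For reference, a quick back-of-the-envelope of what a 10% HBM overclock would buy in raw bandwidth, assuming Vega 64's published reference figures (945 MHz HBM2, 2048-bit bus) rather than anything measured in this review:

```python
# Rough HBM2 bandwidth arithmetic for Vega 64 (assumed reference figures:
# 945 MHz memory clock, double data rate, 2048-bit bus).
MEM_CLOCK_MHZ = 945        # assumed reference HBM2 clock
BUS_WIDTH_BITS = 2048      # two HBM2 stacks, 1024 bits each
DDR_FACTOR = 2             # data transferred on both clock edges

def bandwidth_gbps(clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for a given memory clock."""
    return clock_mhz * 1e6 * DDR_FACTOR * BUS_WIDTH_BITS / 8 / 1e9

stock = bandwidth_gbps(MEM_CLOCK_MHZ)
oc = bandwidth_gbps(MEM_CLOCK_MHZ * 1.10)   # the +10% suggested above
print(f"stock: {stock:.0f} GB/s, +10%: {oc:.0f} GB/s")
# -> roughly 484 GB/s stock vs ~532 GB/s overclocked
```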
 
I think that further proves that something in the architecture is bottlenecking the chip. Which is curious given the large amount of cache it is supposed to have. The ALUs, or some other parts of the chip that should benefit from an OC, are probably idle waiting for something else to finish? (Uneducated guess; I have a very limited understanding of these things.)

EDIT - Oh wait! Where are you seeing a small gain in performance? We only have one data point about it, the first chart on that page, and the gains seem to be decent? In fact, in that data point, OC vs OC is faster than the 1080 FE!
I'm looking at the diagram where they show power draw vs. setting (for liquid and air) and then cross-referencing to the line graphs of frequency over time running a benchmark at the different power settings. Both variants show (very) large increases in power draw in the diagram, but looking at the clock curves, the gain from increasing the allowed power draw is quite small. Dropping the power setting (-25%) lowers the frequency curve pretty much as you would expect.
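To put a rough number on why a large power-limit bump can buy so little clock: dynamic power scales roughly with frequency times voltage squared, and near the top of the voltage/frequency curve voltage also has to rise with frequency, so power grows closer to the cube of the clock. A purely illustrative sketch of that rule of thumb (not fitted to the review's data):

```python
# Illustrative only: dynamic power ~ f * V^2, and near the top of the
# voltage/frequency curve V rises roughly with f, so P ~ f^3.
def clock_gain_from_power_gain(power_multiplier: float) -> float:
    """Approximate clock multiplier for a given power multiplier, assuming P ~ f^3."""
    return power_multiplier ** (1.0 / 3.0)

for extra_power in (0.25, 0.50):
    f = clock_gain_from_power_gain(1.0 + extra_power)
    print(f"+{extra_power:.0%} power -> roughly +{f - 1:.1%} clock")
# +25% power -> roughly +7.7% clock
# +50% power -> roughly +14.5% clock
```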
 

True, but without actual performance numbers it is hard to judge whether it is having a big effect or not. Who knows, the power settings might have an effect beyond the main clock speed (we have had chips in the past with some obscure clocking of their own - the geometry clock delta on G70)?
 
Well, I would think the cache and the HBCC would help with that. If the HBCC is working as it should, that is.

HBCC is just a fancy name for the new memory (or "cache") controller, no? With better memory management and better "knowledge" of what is really needed in the VRAM. Is HBCC enabled by default? Or is it an on/off slider?
Even with this, I don't see how it can help in purely bandwidth-bound situations.
But I agree with you about the cache: having a lot of it, everywhere, should help. Unless it's not efficient. Or, again, if something is broken somewhere and trashes everything else.
 
If it is really a cache controller and uses the HBM as a cache, instead of having it managed by third-party software (the games/apps), it could optimize memory layout and access patterns.
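To illustrate the idea being discussed (and only the idea; AMD's actual HBCC implementation isn't public): a cache-style controller keeps a limited set of pages resident in HBM, pages the rest in from system memory on demand, and evicts what hasn't been touched recently, instead of leaving residency entirely to the game. A toy sketch:

```python
from collections import OrderedDict

# Toy model of the "HBM as a cache" idea: local memory holds a limited
# number of pages; everything else lives in system memory and is paged in
# on demand, with least-recently-used eviction. Purely illustrative --
# not AMD's actual HBCC implementation.
class ToyPageCache:
    def __init__(self, capacity_pages: int):
        self.capacity = capacity_pages
        self.resident = OrderedDict()   # page_id -> data, ordered by recency

    def access(self, page_id, load_from_system_memory):
        if page_id in self.resident:
            self.resident.move_to_end(page_id)      # cache hit: refresh recency
            return self.resident[page_id]
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)       # evict least recently used
        data = load_from_system_memory(page_id)     # cache miss: page it in
        self.resident[page_id] = data
        return data

# Usage: only the pages a frame actually touches need to stay resident.
cache = ToyPageCache(capacity_pages=4)
for page in [0, 1, 2, 0, 3, 4, 0, 5]:
    cache.access(page, load_from_system_memory=lambda p: f"page-{p}")
print(list(cache.resident))   # the four most recently used pages
```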
 
And AMD lied again!
Now the price is confirmed to be $599 for the reference Vega 64.
Yep! Looks like AMD raised the prices ... also $599 on Amazon.
https://www.amazon.com/dp/B074PKVSH9/ref=cm_sw_r_cp_apa_JTvKzb3J7428Y
 
Well, I would think the cache and the HBCC would help with that. If the HBCC is working as it should, that is.
That will only help if constrained by memory capacity, PCIe bandwidth, or the CPU. Need to see more games tested, but Vega did have better minimum fps, as AMD showed in their slides. And turbo seemed to add ~5% at the cost of ~100 W. Seems like we just need more context.

I honestly thought another 2900XT was extremely unlikely with modern simulation tools.

Yet here we are.
Again, get rid of the turbo and power isn't an issue. Then take a game with DX12/Vulkan and FP16, and it probably beats the 1080 Ti significantly at similar power. So the 2900XT doesn't seem a great comparison. No idea where the drivers stand either.
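For context on the FP16 point: Vega supports double-rate packed FP16 (rapid packed math), so its peak FP16 throughput is twice its FP32 rate, which consumer Pascal does not get. A back-of-the-envelope peak-throughput comparison, assuming published boost clocks rather than measured game clocks (and peak FLOPS obviously don't translate 1:1 into frame rate):

```python
# Back-of-the-envelope peak throughput, assuming published boost clocks
# (real game clocks differ, and peak FLOPS rarely map 1:1 to frame rate).
def tflops(alus: int, clock_ghz: float, ops_per_alu_per_clock: int = 2) -> float:
    """Peak TFLOPS: ALUs * clock * 2 ops per FMA."""
    return alus * clock_ghz * ops_per_alu_per_clock / 1000.0

vega64_fp32 = tflops(4096, 1.55)          # assumed ~1.55 GHz boost
vega64_fp16 = tflops(4096, 1.55) * 2      # double-rate packed FP16
gtx1080ti_fp32 = tflops(3584, 1.58)       # assumed ~1.58 GHz boost, no FP16 doubling

print(f"Vega 64  FP32 ~{vega64_fp32:.1f} TFLOPS, packed FP16 ~{vega64_fp16:.1f} TFLOPS")
print(f"1080 Ti  FP32 ~{gtx1080ti_fp32:.1f} TFLOPS")
```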

Is it AMD, or the vendors?
Rumors are AMD upped MSRP to make more from mining. No idea how bundles or rebates will affect prices, but supply and demand is a thing. If AMD was truthful about delaying to increase supply, demand must be rather high.
 
DX12 and FP16 will be enough to beat a 1080 Ti at similar power? Wait, what? Even RTG didn't dare to say something like that...
 
Just as the first-gen GCN Tahiti models crawled up from GTX 680 performance to GTX 780 after a couple of years, it's a relatively safe bet (not 100% sure, of course) that Vega's performance relative to GP104/GP102 will also change as time goes by. Reaching the (reference-clocked) 1080 Ti is too optimistic, sure, but truth be told the 1080 Ti's difference to the GTX 1080 isn't that great either; it's about 33% faster.

Although what we can take from this is that if Vega does turn out to be a good card within a year or so, it's definitely fair to point at AMD for, once again, not being able to extract all the performance they can out of a new architecture when it launches. It's just not a good tradition to have.
 