Another Vega vs Fury clock-for-clock comparison:
https://www.hardocp.com/article/2017/09/12/radeon_rx_vega_64_vs_r9_fury_x_clock_for/15 (conclusion page).
"When you look at all the above architecture changes and benefits over previous high-end AMD GPUs you have to ask yourself where are those and why aren’t they making a bigger difference? Either the features are broken, turned off, not working, or its advantages were simply over-marketed."
What's the difference in area between those? And tbh Nvidia could simply argue "it was enough to claim the performance crown".
In AMD's case, they marketed an "all new architecture" and all of its new features aimed at improving performance, and then clock for clock it was equal to the last (2-year-old) gen. Not the same thing.
What compute tasks were you thinking of? I've seen the CAD benchmarks but I don't think I've seen any compute benchmarks.
"Performs like GP102 in CAD tasks and compute."
Where has it been conclusively shown that Vega is competitive across the board with GP102 (running professional drivers), and not just in a few select benchmarks?
"If you mean professional drivers as in Quadro P6000, see above."
Yeah, you made your post just as I was making mine. I'll go have a look-see.
Isn't compute very specialized? I know of very few creative-professional companies using their graphics cards for 3D rendering while also running FP64 Monte Carlo simulations while out on their lunch break.
Crypto engines seem to be a very strong point of Vega, though.
FP64 compute is only a tiny fraction of all compute workloads. Even professional compute can often be FP32, or even FP16 (neural nets, etc.). All modern AAA games spend a significant portion of their frame time running compute shaders. Many games are 50% compute, 50% rasterization nowadays, and the share of compute versus rasterization is rising all the time.
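For reference, a quick NumPy check (nothing Vega-specific, just the standard IEEE formats) of what each precision actually buys you:

```python
import numpy as np

# FP16 is often enough for neural-net work, FP32 covers most graphics/compute,
# and FP64 is only needed for the comparatively small class of numerically
# sensitive HPC workloads.
for name, dtype in [("FP16", np.float16), ("FP32", np.float32), ("FP64", np.float64)]:
    info = np.finfo(dtype)
    print(f"{name}: {info.bits} bits, ~{info.precision} decimal digits, max ~{info.max:.3g}")
```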
I mentioned it because that's one of the areas where Vega stands out in that Techgage.com test that was linked. For FP32 compute, apart from crypto/hashing, Vega's performance did not seem particularly outstanding (again: in that linked test).
"You can't just blindly compare two chips with vastly different feature sets and market targets to make your point. Or you can, if you already made up your mind about the conclusions before looking at the facts, that is."
I don't see a vastly different feature set. It's a GPU. It has shader cores that are quite similar. It has texture units and ROPs that are fixed function. It has geometry processing units, timing, large caches, etc.
The differences are minor: HBCC, 2xFP16, ...
You can look at facts such as TFLOPS, texture ops, memory BW, ROPs, etc., and compare those against the competition. And for everything except ROPs, Vega is in the same ballpark as a 1080 Ti or higher.
And that's when you conclude that 1080-level performance is something that's worthy of heavy criticism.
There's nothing particularly special about Vega's compute performance: it's exactly where it should be given its raw specs.
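To make that concrete, a rough back-of-the-envelope comparison of the raw specs (reference numbers and boost clocks quoted from memory, so treat the exact figures as approximate):

```python
# FP32 FLOPS = 2 (an FMA counts as two ops) x shader count x clock.
specs = {
    #               shaders  boost MHz  mem GB/s  ROPs  TMUs
    "RX Vega 64":   (4096,   1546,      484,      64,   256),
    "GTX 1080 Ti":  (3584,   1582,      484,      88,   224),
}

for name, (alus, mhz, bw, rops, tmus) in specs.items():
    tflops = 2 * alus * mhz * 1e6 / 1e12
    tex_rate = tmus * mhz * 1e6 / 1e9      # Gtexels/s
    fill_rate = rops * mhz * 1e6 / 1e9     # Gpixels/s
    print(f"{name}: {tflops:.1f} TFLOPS FP32, {bw} GB/s, "
          f"{tex_rate:.0f} GT/s texture, {fill_rate:.0f} GP/s fill")
```

With those assumed clocks, Vega 64 comes out ahead or roughly equal in FP32 throughput, bandwidth and texture rate, and behind only in pixel fill rate (64 vs 88 ROPs), which is the point being made above.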
That's looking at it from a very high level and silo vision (the total is not equal to the sum of its parts). If it is so simple as you make it out to be, why did NVIDIA go to the trouble of branching out compute from graphics products?
If, like you say, there should be no difference in performance between a part oriented for gaming and one for compute, why did they bother? It's clearly not the case. Seriously aiming for compute with the same chip most likely leads to suboptimal performance in graphics.
On the other hand, AMD always had loads of TFlops of theoretical compute power, without the graphics performance to justify it. It's not anything new.
Edit - And for all those metrics that are similar to the GTX 1080 Ti... it does compete in compute, highlighting even more the fact that it trades pure gaming performance for compute.
Edit 2 - It would be nice to see Vega going head to head against GP100, does anyone know of any basis for comparison?
Was that a wise or a necessary decision?
"That's looking at it from a very high level and silo vision (the total is not equal to the sum of its parts)."
In the case of VEGA, it clearly isn't. And that's an anomaly.
"If it is so simple as you make it out to be, why did NVIDIA go to the trouble of branching out compute from graphics products?"
You ask very hard hitting questions and I don't have a few hours to research this, so you'll have to do with just the following bullet points:
- a gaming GPU has no business having a 1/2 FP64 ratio.
- a gaming GPU has no business having 4 or 6 high-BW inter-chip links.
- a gaming GPU has no business having ECC everywhere.
- a gaming GPU apparently doesn't need a silly amount of L2 cache, local memory or register files.
- a gaming GPU has no business having 4 HBM interfaces and everything that goes with it.
"If, like you say, there should be no difference in performance between a part oriented for gaming and one for compute, why did they bother?"
Another hard hitting question. Unfortunately, my only answer to this is production cost and thus profitability. I know that's not something AMD is terribly concerned with.
"It's clearly not the case. Seriously aiming for compute with the same chip most likely leads to suboptimal performance in graphics."
Clear as mud.
"On the other hand, AMD always had loads of TFlops of theoretical compute power, without the graphics performance to justify it. It's not anything new."
The disparity has never been as large as with VEGA. Not even close.
"Edit - And for all those metrics that are similar to the GTX 1080 Ti... it does compete in compute, highlighting even more the fact that it trades pure gaming performance for compute."
It highlights even more that VEGA has a serious issue with gaming performance. And it's not at all obvious why, because previous AMD GPUs had a much more reasonable compute vs gaming performance ratio.
"Edit 2 - It would be nice to see Vega going head to head against GP100, does anyone know of any basis for comparison?"
GP100 would destroy VEGA in FP64 and inter-chip workloads. You know the stuff that costs area.
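To put the area argument in rough numbers (commonly quoted die sizes, approximate and from memory):

```python
# GP100 and GP102 carry the same 3840 FP32 cores on the same process; the gap
# between them is roughly what 1/2-rate FP64, NVLink, ECC, larger register files
# and the HBM2 interface cost in silicon.
die_mm2 = {
    "GP100 (Tesla P100)":         610,
    "Vega 10 (RX Vega 64)":       486,
    "GP102 (1080 Ti / Titan Xp)": 471,
    "GP104 (GTX 1080)":           314,
}
for chip, area in die_mm2.items():
    print(f"{chip}: ~{area} mm^2")
```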
"Was that a wise or a necessary decision?"
I would say necessary. As necessary as Fermi was for NVIDIA. It would be unwise if AMD had the cash to make two chips, but they don't seem to, so there is that. Regardless, I prefer to see a company take a risk for the future than play it safe for the moment.
"Here for example:
https://techgage.com/article/a-look-at-amds-radeon-rx-vega-64-workstation-compute-performance/5/
Vega 64 is competing pretty well with Quadro P6000 not only in graphics but compute as well in Scientific, Financial and Cryptography."
Clearly Vega has a bigger FLOPS count than GP102; any application that depends purely on FLOPS count is going to favor Vega, no doubt about that. Cryptography is the same as well (for obvious reasons), no surprises there. However, in cases that use mixed workloads it's not going to be the same: from the very review you quoted, Vega trails the Titan Xp by a big margin in 3ds Max, AutoCAD, and the whole SPECviewperf/SPECapc lineup of tests. It's also worth pointing out that Vega is pushed beyond its optimal clock/efficiency curve, in other words beyond its limits, just to compete with GP104 in gaming. If Pascal were put under similar conditions it would pull further ahead.
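As a generic illustration of the efficiency-curve point (hypothetical numbers, not measured Vega data): dynamic power scales roughly with clock times voltage squared, and higher clocks need more voltage, so the last few hundred MHz cost disproportionately more power than they return in performance:

```python
# Rough first-order model: dynamic power ~ C * V^2 * f.
def relative_power(clock_ratio, voltage_ratio):
    """Dynamic power relative to baseline for a given clock and voltage scaling."""
    return clock_ratio * voltage_ratio ** 2

# Hypothetical example: buying +15% clock with +10% voltage.
print(f"{relative_power(1.15, 1.10):.2f}x power for 1.15x clock")  # ~1.39x
```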
AMD never promised that Vega would compete with the GTX 1080 Ti. Blame the rabid fans, who to this day still believe in a miraculous driver with a 30% boost in performance, lol.
Question: how does Vega compare to the Nvidia GeForce GTX 1080 Ti and the Nvidia Titan Xp?
DON WOLIGROSKI (AMD): It looks really nice.
link