AMD Vega Hardware Reviews

Honestly, the White Paper is not that different from the slides, with the exception of not mentioning IPC anywhere.
LOL, read it through more carefully then.
WP said:
In some units—for instance, in the texture decompression data path of the L1 cache—the teams added more stages to the pipeline, reducing the amount of work done in each clock cycle in order to meet “Vega’s” tighter timing targets.

Vega to me is just a NetBursted/Bulldozered GCN. They have apparently invested enormous effort/die space into pushing the frequency up. AMD carefully dances around the pipeline lengthening, which is a bit reminiscent of Bulldozer.

* Vega topped Polaris' frequency
* Vega lost IPC to the pipeline tweaks
* Vega partially compensated for that pipeline-induced IPC loss with huge caches, etc. [?]
* Vega wasted die space on those compensations
* Vega became a super-powerhog
 
If you want to do fluid dynamics, you want high FP64.
Our SPH fluid solver uses 16-bit (normalized) storage for position data and fp16 for velocities. We have a relatively small area, however, but many professional fluid dynamics use cases also use a small simulation area around a single car/bike/plane/rocket. Fluid solving itself considers only a small neighborhood (either neighboring voxels or particles fetched from a fixed-size grid). If you store the particles in a cache-locality-optimized grid, you can store just the fractional part of the position inside the grid cell. Similarly, the math inside the SPH kernel doesn't need high precision, since the particles that affect each other are close to each other. This avoids the biggest floating point problems such as catastrophic cancellation (which obviously are very relevant if you use a single global floating point coordinate space).

But doing fluid sim in fp16 versus fp64 requires much more thought about the math and data structures. Professional users tend to favor higher hardware cost over more programming hours.
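To make the grid-local trick above concrete, here is a minimal illustrative sketch (Python/NumPy, with a made-up cell size and hypothetical names, not the solver's actual code): the integer cell index carries the coarse position, a normalized 16-bit fraction carries the offset inside the cell, and neighbor math is done in cell-local space so you never subtract two large global coordinates.

Code:
import numpy as np

CELL_SIZE = 0.1  # hypothetical grid spacing (metres)

def encode_position(p, cell_size=CELL_SIZE):
    """Split a position into (integer cell index, normalized 16-bit fraction)."""
    cell = np.floor(p / cell_size).astype(np.int32)        # coarse part
    frac = p / cell_size - cell                            # offset in [0, 1)
    return cell, np.round(frac * 65535.0).astype(np.uint16)

def decode_position(cell, frac_u16, cell_size=CELL_SIZE):
    """Reconstruct an approximate global position (quantized to ~cell_size/65535)."""
    return (cell + frac_u16.astype(np.float64) / 65535.0) * cell_size

def local_delta(cell_a, frac_a, cell_b, frac_b, cell_size=CELL_SIZE):
    """Distance vector between two nearby particles, computed in cell-local space
    to avoid the cancellation of subtracting two large global coordinates."""
    return ((cell_a - cell_b)
            + (frac_a.astype(np.float64) - frac_b.astype(np.float64)) / 65535.0) * cell_size

# Quick check: two particles 1 mm apart, far from the origin.
p1 = np.array([123.4567, 89.0123, 4.5678])
p2 = p1 + 1e-3
c1, f1 = encode_position(p1)
c2, f2 = encode_position(p2)
print(local_delta(c2, f2, c1, f1))      # ~[0.001, 0.001, 0.001]
print(decode_position(c1, f1) - p1)     # round-trip error on the order of 1e-6

The fp16 velocities and the SPH kernel math sit on top of this; the point is simply that the precision is spent inside the cell rather than on a global coordinate.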
 
Question to owners of Vega:
- what vGPU can your core get down to at 1550-1600MHz before crashing?

My card (Vega 56, stock) can run 1550-1600MHz in Heaven 4.0 and DOOM Vulkan at 1070mV set in Wattman (1.05V in GPU-Z) happily for hours. I've tried 1000mV and it ran Heaven for a full benchmark pass; then I bumped the core clock to 1650MHz and it crashed after 30 seconds. I'm still looking for that sweet spot, but I have a feeling that 1050mV will be stable up to 1600MHz. For mining I've run 1600MHz at 1000mV for an hour before deciding it was stable and going back to proper mining clocks. Oh, and I verified that the voltage changes are real with a power meter at the wall.
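(As an aside for anyone doing this on Linux instead of through Wattman: the amdgpu driver exposes the same knobs through the pp_od_clk_voltage sysfs file once OverDrive is enabled via the amdgpu.ppfeaturemask boot parameter. The sketch below is a rough illustration only; the p-state index, clock and voltage are placeholder values, the exact syntax varies by GPU generation, so read the file first to see what your own card reports, and run it as root at your own risk.)

Code:
# Rough illustration: poke the amdgpu OverDrive interface from Python.
# Assumes a Vega card on Linux with OverDrive enabled (amdgpu.ppfeaturemask),
# run as root. Read the file first -- it lists the OD_SCLK/OD_MCLK states and OD_RANGE.
from pathlib import Path

OD = Path("/sys/class/drm/card0/device/pp_od_clk_voltage")  # adjust cardN as needed

def show_table():
    """Print the current OD_SCLK / OD_MCLK / OD_RANGE table."""
    print(OD.read_text())

def set_sclk_state(state: int, mhz: int, mv: int):
    """Stage a new clock/voltage for one SCLK p-state, then commit it."""
    OD.write_text(f"s {state} {mhz} {mv}\n")  # e.g. "s 7 1600 1050"
    OD.write_text("c\n")                      # commit staged changes

def reset_defaults():
    OD.write_text("r\n")                      # restore driver defaults

if __name__ == "__main__":
    show_table()
    set_sclk_state(7, 1600, 1050)  # placeholder values; then run Heaven/DOOM/miner and watch for crashes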

On another note, my HBM2 is clocking to 930MHz happily, but not much more, even with more vMEM.
An interesting fact about HBM2: increasing vMEM from the stock 950mV to 975mV at 930MHz increases power consumption at the wall by 10W during ETH mining!
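A quick back-of-the-envelope check on that 10W, assuming (big assumption) that the slider really raised some rail and that dynamic power simply scales with V² at a fixed clock:

Code:
# Crude model: dynamic power ~ V^2 at fixed clock. Ignores leakage, VRM/PSU
# losses, and whichever rail the vMEM slider actually moves.
v_old, v_new = 0.950, 0.975        # volts
delta_wall = 10.0                  # watts observed at the wall

scale = (v_new / v_old) ** 2       # ~1.053
implied = delta_wall / (scale - 1) # ~190 W of dynamic power on the affected rail
print(f"scale {scale:.3f}x -> implies ~{implied:.0f} W on whatever rail moved")

~190W is far more than the HBM2 stacks themselves draw, so under this simple model the 10W delta can't just be the memory rail scaling with voltage, which fits the next reply's point that the vMEM slider probably isn't controlling the HBM2 voltage at all.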
 
vMEM doesn't actually control the voltage to the HBM2. For Vega 56 it's about 1.2V, and 1.35V on the 64 and FE. And, AFAIK, no one has found a way to change it yet. You can flash a Vega 64 BIOS onto the 56 though, and get the voltage bump.

EDIT: sorry, this should go in the Vega discussion thread, I guess
 

I know about flashing the 64 BIOS, but I want to explore the stock card first before moving on. More fun for longer that way!
So what is controlling the Wattman voltage field in the memory section?
 
I have no idea what the point of this last post was. I hope it's not "AMD brought this on themselves so they deserve utter failurezz!!" because it adds zero contribution to the discussion.
Regardless, there's a mod post asking to stop with the offtopic so I won't feed it anymore. Feel free to continue the discussion in the proper thread, as I did already.



Today techspot did a Vega 64 LC vs. GTX 1080 FE comparison on a Ryzen 5 1600 @ 4GHz and a 7700K @ 4.9GHz. The only issue I see with this comparison is why they decided to test at 1080p instead of 4K or 3440x1440, which are becoming more popular for high-end cards. Who's buying a Vega 64 or GTX 1080 to play at 1080p? Maybe they were simply trying to test CPU overhead. (EDIT: they also tested 1440p, which is what I link below. The point was why test these cards at 1080p and not at 4K, for example.)

Here's the most interesting pair of results:

[image: xv1HqfV.png]
[image: axfSC2r.png]
 
I have no idea what the point of this last post was. I hope it's not "AMD brought this on themselves so they deserve utter failurezz!!" because it adds zero contribution to the discussion.
Regardless, there's a mod post asking to stop with the offtopic so I won't feed it anymore. Feel free to continue the discussion there, as I did already.
You are completely right, my bad.


Today techspot did a Vega 64 LC vs. GTX 1080 FE comparison on a Ryzen 5 1600 @4GHz and a 7700K @4.9GHz. Only problem I see with this comparison is why they decided to test on 1080p. Who's buying a Vega 64 or GTX1080 to play at 1080p? Maybe they were simply trying to test CPU overhead performance.

Here's the most interesting pair of results:

[image: xv1HqfV.png]
[image: axfSC2r.png]

Why do the graphs say 1440p and you claim 1080p? Or did they test both?
 
Today techspot did a Vega 64 LC vs. GTX 1080 FE comparison on a Ryzen 5 1600 @4GHz and a 7700K @4.9GHz. Only problem I see with this comparison is why they decided to test on 1080p. Who's buying a Vega 64 or GTX1080 to play at 1080p? Maybe they were simply trying to test CPU overhead performance.

I see 1440p?

EDIT - Yes, the review also has 1080p results. I guess that was indeed to test when exactly the CPU bottleneck shows up and whether it would differ between the CPUs.

EDIT 2 - They probably did not test 4K because that would be more of a GPU bottleneck than a CPU one? Granted, if they really wanted a CPU bottleneck they could just drop the resolution to 720p or lower, but they were probably trying to be realistic about it, since no one games at 720p with those GPUs.
 
Ah, I see. So no reason to complain about 1080p - they can do what they want with their time, right?

They can even test SLI Titan Xp at 640*480 in Doom 3 if they want to.
And I'm still allowed to have an opinion about it.
 
They can even test SLI Titan Xp at 640*480 in Doom 3 if they want to.
And I'm still allowed to have an opinion about it.
'Course you are, just like anyone else. :) I just don't see why an optional/additional data set could be categorized as a problem. Irrelevant, maybe, according to each one's expectations or motives. A waste of time, probably, also maybe according to one's preferred n... gaming resolution. But 1080p is still the most widely used display resolution, I guess. Heck, I only have a 1080p display (even though it's 120Hz at least), but then I only have a Vega 56, not a 64. So it completely eludes me why the inclusion of 1080p results should be a problem, and that's why I ask.
 
There, I updated the sentence in question with a follow-up explanation in bright red.
 
There, I updated the sentence in question with a follow-up explanation in bright red.
Makes sense, from a certain point of view. But for articles about CPU performance, it makes more sense not to go too far into GPU-limited territory. Maybe they'll do a follow-up?
 
To me, 1080p doesn't make much sense for testing CPU overhead. If we really wanted to test that in an unrealistic scenario, we could simply test at 720p with low details and get an even "better result", i.e. larger gaps between the CPUs tested. But as this test shows, having more overhead at low resolutions does not mean having higher performance at higher ones, as we can see with Ryzen losing at 1080p and winning at 1440p.
 
That's quite impressive for the $100-cheaper, 900MHz-slower 1600. If Battlefield 1 were actually fully tested in MP, the 1600 would come out on top overall. GTA V is missing though, which would skew it in favor of the 7700K; call it a wash.
 