AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

If the difference is real (I'm not convinced), I think it will come down to scheduling differences. I tried the Fall update and saw zero difference in Witcher 3 and Superposition. In Witcher 3 with crowd/NPC count set to ultra I'm CPU limited on a 3770K @ 4.3, and Superposition barely uses half a core.

That's fascinating. With all the changes they are making:
Game Monitor Update
Auto Game Mode in certain games
Addition of Xbox Networking settings (has anyone found a way to remove restricted NAT?)

I'm beginning to take them a bit more seriously this time than in the GWL era.
There are certainly a lot more tasks showing up in Task Manager, and people are still reporting better performance.
I would certainly like to know what else they did.
 
You can post the first one in the Volta thread. Too bad the cool stuff doesn't do much for either of the two things I'm looking at:

The great irony here is that one of the main selling points, machine learning with the tensor units, gains nothing from all this. It's the dumbest straight-line code you can imagine.

Absolutely none of this is meaningful for gaming performance in any modern engine, sorry.
 
I wouldn't say Doom is a great example, as the extensions used are more or less the console platform coming with the new shader model. All IHVs support the extensions, which are becoming standard. A true extension mess would be something only one IHV reasonably supports; not that there aren't places for those. Primitive shaders, for example, are likely an extension, however it's unclear if they could even be abstracted onto other hardware. It needs to happen, but the graphics pipeline is changing with it.
Doom is a great example as it is still one of the few games that is balanced for both AMD and Nvidia where AMD has a strong performance position, primarily because of the extensions and their accessibility to the low-level hardware functions; this provides more of a benefit on the PC than async compute does for this game (which from memory is at most 10% on PC when various reviewers tested it).
For context, Forza 7 is not a great example of DX12 because it is weighted towards AMD optimisation/focus, just like Quantum Break (an extreme example, admittedly, and Nvidia seemed to need to do a lot in the drivers to improve performance). I mention this because Doom is pretty well optimised for both GPUs; it's just that one side gains a greater advantage from the Vulkan extensions, although Nvidia has been adding extensions of its own more recently as well (albeit only usable in games developed with them, and still an unknown).
Cheers
 
The 470 isn't doing 58 FPS, it's doing 51 FPS, or ~15% lower than a 580, which is close to their clock difference. Here we have the RX 580 with more CUs than the 470, but their performance difference boils down to their clocks. And since both the 470 and the 580 have the same number of geometry engines, this further implies that Witcher 3 is geometry intensive for GCN cards (like pretty much all GameWorks titles).
Sorry, typo on my part. I was referring to the 480, which is effectively a lower-clocked 580. The 4SE limit, being an arrangement of CUs, should scale with clocks if it is the limit, but that doesn't appear to be the case. If bottlenecks shift, I'd expect a bit more variance in the results.
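For reference, this is roughly what pure clock scaling would predict (a back-of-envelope sketch; the reference boost clocks below are my assumption, and retail AIB cards ship higher):

```python
# Back-of-envelope: if the limiter scales purely with core clock,
# FPS should scale with the clock ratio. Reference boost clocks assumed;
# actual retail cards (especially AIB 480/580s) clock higher.
clocks_mhz = {"RX 470": 1206, "RX 480": 1266, "RX 580": 1340}
fps_470 = 51.0  # the figure quoted above

for card, mhz in clocks_mhz.items():
    expected = fps_470 * mhz / clocks_mhz["RX 470"]
    print(f"{card}: ~{expected:.1f} FPS if limited purely by clock")
```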

As for the RX580 vs Fury vs Fury X, it indicates that while Polaris cards have to spend a lot more time with the compute/pixel shaders in each frame, they spend a lot less time on geometry (because of higher clocks + primitive discard accelerator), hence the similar performance between Polaris 10 and Fiji.
True, however the odds of achieving identical performance seem rather low. Fiji also has far more bandwidth, which doesn't seem to have much effect.

What I believe is happening here is a memory latency limit of some sort along with a dependency: difficulty prefetching vertices or indexing geometry past a certain point, regardless of CUs, memory bandwidth, or clocks. One interesting difference with Vega would be the pseudo-channel memory to reduce latency, plus the larger parameter cache. The 480/580 are identical, and even Fiji could have similar latencies as they are from similar memory generations; the 390 is older and a bit slower. It still seems odd, as prefetching indexed geometry shouldn't be all that difficult to extend. It's also possible the GCP and front ends are in their own clock domain and simply can't dispatch work quickly enough.
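As a toy illustration of why a latency limit would be insensitive to CU count, bandwidth, or clocks (made-up numbers, just Little's Law applied to outstanding fetches):

```python
# Toy model: sustained fetch rate = requests in flight / effective latency.
# If the front end can only keep N index/vertex fetches outstanding, adding CUs
# or bandwidth doesn't raise the ceiling; only lower latency (e.g. pseudo-channel
# HBM2) or deeper prefetching would.
in_flight = 64          # assumed outstanding requests (made up)
latency_ns = 350.0      # assumed effective memory latency (made up)

fetch_rate = in_flight / (latency_ns * 1e-9)
print(f"Ceiling: ~{fetch_rate / 1e6:.0f} million fetches/s, regardless of CUs or bandwidth")
```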

Doom is a great example as it is still one of the few games that is balanced for both
Balanced, but not really using unique extensions. I'd describe it as a year before its time. All IHVs could effectively execute the instructions; it's just that they weren't readily included with the APIs or exposed at the time. Doom was well optimized, but not really using extensions specific to one IHV or implementing different extensions specific to each. I otherwise agree with your sentiments.
 
Balanced, but not really using unique extensions. I'd describe it as a year before its time. All IHVs could effectively execute the instructions; it's just that they weren't readily included with the APIs or exposed at the time. Doom was well optimized, but not really using extensions specific to one IHV or implementing different extensions specific to each. I otherwise agree with your sentiments.
You do realise just how much performance AMD has gained in Doom from the Vulkan extension available for their hardware?
A game well designed and balanced for both AMD and Nvidia, yet AMD gains a massive performance advantage from the low-level Vulkan extensions.
A well designed DX11 or DX12 game focused on both Nvidia and AMD does not show such notable gains for one manufacturer over the other, relative to what we see with Doom and the Vulkan extensions used.
Cheers
 
No, because DOOM was originally OGL and AMD's OGL implementation is horrific.
I think you missed the context and how this has to be compared to other well designed games and to comparable competitor hardware; the past and OGL are not really relevant to my point.
Well designed games for both AMD/Nvidia tend to show slight advantages of one over the other on comparable HW, but this game in Vulkan, which IS well designed for both AMD and Nvidia, shows a much greater advantage for AMD, primarily because of the Vulkan extensions rather than async compute.

Please note I keep reiterating only games that seem well designed for both manufacturers, without the engine/rendering weighted towards one only (such as Quantum Break or Fallout 4, maybe Shadow Warrior 2 but not sure if that is just driver or game engine optimisation, amongst others).
Cheers
 
You do realise just how much performance AMD has gained in Doom from the Vulkan extension available for their hardware?
No.

How much? :p

Btw! Anyone else besides me antsy over 3rd party Vega boards still being MIA? I've been expecting/waiting for weeks now to be able to buy mine, but still no news! #sadpanda :(
 
You do realise just how much performance AMD has gained in Doom from the Vulkan extension available for their hardware?
Significant, but Intel and Nvidia also support those extensions, just not when Doom first released. At least I'm not aware of any functionality used that didn't end up in the current shader model. AMD is the only one with full hardware support for them, but my point is that these weren't extensions that would only ever be available on one IHV. A "good" example in my mind would be a title that somehow managed both RPM and tensor core support. Not sure the RPM titles coming would be considered bad either, as an alternative wouldn't exist.
 
No.

How much? :p

Btw! Anyone else besides me antsy over 3rd party Vega boards still being MIA? I've been expecting/waiting for weeks now to be able to buy mine, but still no news! #sadpanda :(
By this much <----------> :)

More seriously though, there are quite a few decent reviews/analyses out there for Doom and Vulkan.
And yeah, I'm with you regarding 3rd party Vega boards; it sort of reminds me a bit of the delay and hurt (from a customer perspective, relative to Nvidia in terms of noise/cooling solutions) we saw with the 290/290X.
 
I haven't played through Doom yet, but doesn't it have levels with a lot of translucent surfaces, like fogged windows with decals on them?
 
Sometimes. Lots more fire, smoke, sparks and so on though.

I remember reading something about certain translucent surfaces in Doom being impressive, but I can't remember why. All I know is deferred renderers are not well known for their performance with semi-transparent surfaces, which makes it surprising they'd offer a deferred render option.
 
I remember reading something about certain translucent surfaces in Doom being impressive, but I can't remember why. All I know is deferred renderers are not well known for their performance with semi-transparent surfaces, which makes it surprising they'd offer a deferred render option.
With 80%+ async compute it may balance out better. It could be a reconstruction technique as well. There's also order-independent transparency to consider, which is implicitly deferred.
 
I remember reading something about certain translucent surfaces in Doom being impressive, but I can't remember why. All I know is deferred renderers are not well known for their performance with semi-transparent surfaces, which makes it surprising they'd offer a deferred render option.
Are you referring to the fact that you can't alpha blend a G-buffer directly?
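That is, the G-buffer only keeps one surface's attributes per pixel, so there's nothing sensible to blend a translucent layer against when lighting runs later. A minimal sketch of the classic deferred layout (not Doom's actual renderer):

```python
# Classic deferred layout: one record per pixel (the closest opaque surface).
# A translucent surface can't be stored "on top of" what's behind it, so it's
# typically drawn in a separate forward pass after deferred lighting (or via OIT).
gbuffer = {}  # pixel (x, y) -> {"albedo": ..., "normal": ..., "depth": ...}

def write_opaque(pixel, albedo, normal, depth):
    prev = gbuffer.get(pixel)
    if prev is None or depth < prev["depth"]:   # standard depth test
        gbuffer[pixel] = {"albedo": albedo, "normal": normal, "depth": depth}

# There is no meaningful write_translucent(): storing "70% glass over wall" would
# need two surfaces in one slot, which the single-layer G-buffer can't express.
```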
 