AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

And in the end, does it matter? You can tick all the checkboxes and have enormous GFLOPS, and it still means nothing if you cannot convert it into FPS. I am betting that the gap between the launch of Vega and the launch of GV104 will be shorter than the gap between the launch of GP104 and Vega. That is the elephant in the room: NV could launch a much faster card that also has all the features within 6-9 months.
 
Is this like the "Fermi is doing tessellation in software" discussion? Or does it come from the discussion about whether the roughly corresponding OpenCL limit is a hard limit or software-enforced?
We can benchmark it to find out whether there's a performance cliff when you increase the resource binding counts beyond what the HW can handle efficiently. Even if exceeding the limits incurs some cost, it is still good that everybody now supports resource binding tier 3 and programmers are able to run the same code on all major GPUs.

I would also argue that Vega having full feature level 12_1 support is good for PC gaming in general. Now all major PC IHVs support conservative rasterization, volume tiled resources and rasterizer ordered views, so rendering technology can move forward. Obviously this doesn't help Vega RX in benchmarks now, but it will be a big step forward for future games. The same goes for double rate fp16: current games gain nothing, but some PS4 Pro games are already showing nice gains. Developers will certainly start to migrate these optimizations to PC now that there's hardware support for double rate fp16. It will take time, but we can't simply ignore these new Vega features just because they don't bring performance improvements today. Developers are interested in these new features, and they will be used eventually.
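If someone wants to poke at this, here's a minimal sketch (assuming an already-created ID3D12Device*; the VegaEraCaps struct and QueryCaps function are just illustrative names, not from any SDK) of querying the caps in question, i.e. resource binding tier, conservative rasterization, ROVs and tiled resources, before building a micro-benchmark around them:

```cpp
#include <d3d12.h>

// Illustrative container for the caps discussed above.
struct VegaEraCaps {
    D3D12_RESOURCE_BINDING_TIER bindingTier;
    D3D12_CONSERVATIVE_RASTERIZATION_TIER conservativeTier;
    D3D12_TILED_RESOURCES_TIER tiledTier;   // tier 3 adds volume tiled resources
    BOOL rovSupported;
};

// Query the standard D3D12 options block and pull out the fields we care about.
bool QueryCaps(ID3D12Device* device, VegaEraCaps& out) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options)))) {
        return false;
    }
    out.bindingTier      = options.ResourceBindingTier;
    out.conservativeTier = options.ConservativeRasterizationTier;
    out.tiledTier        = options.TiledResourcesTier;
    out.rovSupported     = options.ROVsSupported;
    return true;
}
```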
 
What do you think about primitive shaders?
 
Are Vega's GCN improvements really just adding glass jaws? It seems you have to use primitive shaders, 16-bit math, DX12_1+ features, extract performance from the rasterizer, etc. to exploit the Vega architecture. No fun for legacy code?
 
This guy is simply scary
 
So, judging from our SPECviewperf results in energy-01, where we got a mean score of 20.4, DSBR indeed WAS enabled on Vega Frontier Edition, for this particular workload at least.
So it's possible that I was sort of wrong about being wrong about the DSBR being partly on?

Either way, I wonder what it would take to enable it for that workload, and what it is about that load versus others that prevents AMD from flipping the switch for everything else. DSBR would be a significantly less compelling feature if it needs special handling per application, and possibly the notoriety of an industry benchmark, to get the necessary attention.
 
Are Vega's GCN improvements really just adding glass jaws? It seems you have to use primitive shaders, 16-bit math, DX12_1+ features, extract performance from the rasterizer, etc. to exploit the Vega architecture. No fun for legacy code?
The tiled rasterizer is fully automatic; no game developer effort is needed. There's no API for primitive shaders, so either the driver uses them internally or they aren't used at all at the moment. Of course, both of these features might require extra work from the driver team.

FP16 code is going to boost Intel performance as well, since Intel GPUs also have double rate FP16. Nvidia likewise has double rate FP16 hardware for professional use (P100 and V100), and we can expect Nvidia to ship double rate FP16 consumer hardware at some point. So FP16 optimizations aren't there solely for AMD. It is good to see FP16 hardware getting more traction in the PC consumer market. More efficient execution will help all GPUs in the future, and Vega having double rate FP16 support will make it more future-proof.
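As a side note, on the D3D12 side the relevant cap exposed today is the min-precision flag; a hedged sketch of checking it might look like the following (the function name is made up, and reporting the cap only means the driver may use reduced precision for min16float, it does not by itself guarantee double rate execution):

```cpp
#include <d3d12.h>

// Sketch: check whether the driver reports 16-bit min-precision support,
// which is what min16float shader code can take advantage of. The flag does
// not guarantee double rate fp16 execution on the hardware.
bool SupportsMinPrecisionFp16(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return false;
    return (options.MinPrecisionSupport &
            D3D12_SHADER_MIN_PRECISION_SUPPORT_16_BIT) != 0;
}
```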

Feature level 12_1 features are going to enable new algorithms that are more efficient. Nvidia and Intel already have 12_1 hardware out. Now that all three major PC desktop players have 12_1 hardware, we can expect game developers to increase their focus on the new features. This also helps all 12_1 GPUs, not just AMD Vega. As a game developer I am happy about AMD's decision to spend some of their Vega transistor and R&D budget on supporting the 12_1 feature set instead of spending it on something else. It might not be the best choice for the Vega RX launch (benchmarking existing software), but it is definitely the best choice for game developers and gamers in the long run.
 
Of course it is good that we finally have more than Nvidia and Intel supporting newer DX12 features. I don't think anyone will argue with that. :)
 
Reminds me of the tessellation unit on R600. It was nice but never used, since it needed custom coding. Well, the whole thing smells like R600. But this time they don't have an RV770 in their pocket, I guess.
The problem with the R600/RV770 tessellation unit was that it wasn't included in DX10/10.1, and when Microsoft introduced tessellation in DX11 they deemed those units off-spec because they weren't flexible enough.

We'll see if the primitive shader suffers the same fate, but AMD is nowadays very influential in Vulkan's development, so that's probably one API where the feature is bound to appear. Besides, AMD's developer relations are substantially better today than they were 10 years ago (Tim Sweeney holding a Radeon and smiling for the camera!).
 
The "new" DX12 features Vega supposedly incorporates (i.e., CR and ROV): can we assume they are working in Vega FE and will work in Vega RX? Have any Vega FE reviews or developers confirmed this?
 
The 12_1 features are supported by the current Vega FE drivers. Someone should write a benchmark for conservative rasterization and rasterizer ordered views. It would be nice to see how the Nvidia, Intel and AMD implementations differ.
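For the conservative raster half, the toggle such a benchmark would flip is just a rasterizer-state field on the PSO. A minimal sketch (the helper name is made up, and desc is assumed to be an otherwise fully filled-in PSO description):

```cpp
#include <d3d12.h>

// Create the same pipeline state twice, once with conservative rasterization
// off and once with it on, then time identical draw workloads against each
// other. ROVs need no PSO state at all; they are declared in HLSL via
// RasterizerOrderedTexture2D and related types.
void SetConservativeRaster(D3D12_GRAPHICS_PIPELINE_STATE_DESC& desc, bool enable)
{
    desc.RasterizerState.ConservativeRaster =
        enable ? D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON
               : D3D12_CONSERVATIVE_RASTERIZATION_MODE_OFF;
}
```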
 
The tiled rasterizer is fully automatic; no game developer effort is needed. There's no API for primitive shaders, so either the driver uses them internally or they aren't used at all at the moment. Of course, both of these features might require extra work from the driver team.
I agree. However, isn't it doable to tune/profile your software to gain some performance from the rasterizer? The gains for the engines listed in AMD's slide vary from less than 5% to 30+%.
 
The tiled rasterizer's efficiency boost mostly depends on the amount of overdraw, but geometry submission order also plays a role. Sorting roughly by screen locality should improve performance on both AMD's and Nvidia's tiled rasterizers.
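If anyone wants to experiment with the submission-order part, here is a rough sketch of what sorting by screen locality could look like on the CPU side. Everything in it is illustrative: the DrawRecord struct and the normalized screen-space centers are assumptions, not taken from any particular engine.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical per-draw record: screen-space center of the draw's bounding
// box, normalized to [0,1], plus an opaque draw identifier.
struct DrawRecord {
    float screenX;
    float screenY;
    uint32_t drawId;
};

// Spread the low 16 bits of v into the even bit positions of a 32-bit value.
static uint32_t Part1By1(uint32_t v) {
    v &= 0x0000ffff;
    v = (v | (v << 8)) & 0x00ff00ff;
    v = (v | (v << 4)) & 0x0f0f0f0f;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

// Interleave the quantized x/y coordinates into a Morton code, so draws that
// are close on screen end up close in the sorted order.
static uint32_t MortonCode(float x, float y) {
    uint32_t xi = static_cast<uint32_t>(x * 65535.0f);
    uint32_t yi = static_cast<uint32_t>(y * 65535.0f);
    return Part1By1(xi) | (Part1By1(yi) << 1);
}

// Sort draws roughly by screen locality; submitting in this order should be
// friendlier to tile-based binning rasterizers (and to caches in general).
void SortDrawsByScreenLocality(std::vector<DrawRecord>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const DrawRecord& a, const DrawRecord& b) {
                  return MortonCode(a.screenX, a.screenY) <
                         MortonCode(b.screenX, b.screenY);
              });
}
```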
 
I wish there were some 120 Hz benchmarks. Right now, I'm really struggling to play anything competitive at less than 100 Hz; it gives you so much more control and precision. That's also the reason why I've been holding out on a new display - waiting for 4K + 120 Hz.

I was hoping for Vega to remedy that in an economically viable manner.
 
Here's a slide I haven't seen elsewhere:

Sneaky marketing, using an aftermarket 980 Ti to make the 1080 FE look bad by comparison, most probably against the watercooled Vega :smile:

The ideal choice for 4K is neither the 1080 nor the Vega RX; it's the 1080 Ti, and even then you'll struggle in some titles and have to play around with settings.
 