AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Is there really any doubt Vega will overtake the 1080 Ti in the future? The features to do it in the short term are there with packed math, primitive shaders, and DSBR. Longer term there are bindless resources, Tier 3 features, GPU-driven rendering, and SM6. Not to mention the historical trend of Nvidia cards aging rather badly to encourage sales. That seems readily evidenced by all the cache added to both Vega and Volta, and Volta's inclusion of hardware scheduling at various levels.
Is there any data or evidence to back up that claim? All you did was amass some technical names to try to prove a highly dubious, wishful theory. Even AMD never claimed any of the things you say. Anyone with common sense would think that AMD would delay the RX Vega launch if any of these features had a significant impact on performance, but I won't resort to that argument because it's common sense. AMD never gave any performance increases for their DSBR implementation; their projection for the feature was rather cautious, and the same goes for primitive shaders. As for Tier 3, SM6, bindless resources... we've had previous incarnations of some of them before, they hardly amounted to anything, and some of these features are about flexibility, not performance.
We know current performance and what boost certain features provide. It's not difficult.
Really? How much uplift did AMD give for primitive shaders? Or DSBR?
 
Has anyone demonstrated the performance benefit of tiled rasterisation in NVidia's GPUs?
For multiple GPU generations, Nvidia didn't officially admit that the feature existed. It was suspected and hinted at by a few, but the first concrete discussion here came after the RealWorldTech triangle-binning test.
I'm not clear on whether the option is given to turn it off for the sake of testing.

AMD have graphs and marketing slides! What else do we need? :rolleyes:
In fairness to AMD, they know one does not bring an abacus to a multi-dimensional optimization fight.
If they ever gave multipliers and indicated each was universal and exclusive of the others, that would be setting them up for serious backlash. To my knowledge, they've made sure to keep their "up-to" figures specific to limited subsets and without overall context.
 
Is there any data or evidence to back up that claim? All you did was amass some technical names to try to prove a highly dubious, wishful theory. Even AMD never claimed any of the things you say. Anyone with common sense would think that AMD would delay the RX Vega launch if any of these features had a significant impact on performance, but I won't resort to that argument because it's common sense. AMD never gave any performance increases for their DSBR implementation; their projection for the feature was rather cautious, and the same goes for primitive shaders. As for Tier 3, SM6, bindless resources... we've had previous incarnations of some of them before, they hardly amounted to anything, and some of these features are about flexibility, not performance.
Packed math we've seen ample examples from devs on console. AMD included examples as well for some lighting effects.
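
To be concrete about what "packed" means: a 32-bit lane holds two FP16 values and the ALU works on both at once, so FP16-friendly shader math can roughly double its rate. A toy numpy sketch of just the packing, nothing to do with AMD's actual intrinsics:

import numpy as np

# Four half-precision values occupy the same storage as two 32-bit words.
halfs = np.array([1.5, -2.25, 0.5, 4.0], dtype=np.float16)
packed = halfs.view(np.uint32)          # two fp16 operands per 32-bit "register"
print(packed.size)                      # 2 words carry 4 operands

# A packed-math ALU adds both halves of each word in one operation; numpy
# just does it element-wise here, but the operand count per word is the
# whole point: 2x the FP16 throughput of plain FP32 math.
other = np.array([0.5, 0.25, 1.0, -1.0], dtype=np.float16)
print(halfs + other)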

DSBR has bandwidth savings listed, but no fps numbers. If Vega was bandwidth starved as has been claimed, that should help significantly.
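
Back-of-envelope on what those savings could look like if bandwidth really is the limiter. Every number below is an assumption for illustration, not an AMD figure:

# Toy estimate of DSBR-style savings: binning lets the rasterizer reject
# hidden fragments per tile before shading and writing them out.
pixels          = 3840 * 2160   # 4K frame
overdraw        = 2.5           # assumed fragments shaded per pixel without binning
bytes_per_frag  = 16            # assumed color + depth traffic per fragment
culled_fraction = 0.4           # assumed share of hidden fragments a binner rejects
fps             = 60

baseline_gbs = pixels * overdraw * bytes_per_frag * fps / 1e9
saved_gbs    = baseline_gbs * culled_fraction
print(f"fragment traffic ~{baseline_gbs:.1f} GB/s, binning saves ~{saved_gbs:.1f} GB/s")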

As AMD is currently selling all the Vegas that hit the market and pro works fine, I'm unsure why anyone would expect a delay. Could always choose not to make a profit and go out of business I suppose.

The features will be mixed obviously, but bindless for example makes far more sense for GPU driven rendering which we haven't seen. The SM6 "intrinsics" seemed to help Doom enough and already exist in GCN hardware, as GCN2/console is the basis.

AMD hasn't presented much hard data, but depending on the status of certain features they may not be prepared to. Why release piecemeal performance improvements when they likely synergize?

Really? How much uplift did AMD give for primitive shaders? Or DSBR?
When Nvidia released similar features, they never acknowledged their existence. Just a node-jump level of performance increase from roughly the same hardware, with a combination of tiled rasterization and register caching providing the uplift.

These features will entail various degrees of tuning as they're part of a black box. If AMD hasn't finished that, then releasing numbers isn't warranted. They may also be intertwined, in which case they are even harder to pinpoint. We've seen the Energy benchmark with a 2x increase. Bandwidth savings from DSBR presented. A driver setting to force DSBR, even if it crashes, might be nice just for testing.

Has anyone demonstrated the performance benefit of tiled rasterisation in NVidia's GPUs?
As mentioned above, with no way to disable I'm not sure it can be tested. I thought someone tried the RWT test with really old drivers a while back, but that's a poor approximation of gaming.
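
For anyone who hasn't seen it, the idea behind that test is simply to watch the order in which pixels get filled by overlapping primitives. A tiny CPU-side mock-up of the concept (not the actual RWT tool, which runs on the GPU):

# Two "primitives" covering an 8x8 screen split into 4x4 tiles.
# An immediate-mode rasterizer finishes primitive 0 everywhere before
# starting primitive 1; a binning rasterizer finishes both primitives
# within a tile before moving on to the next tile.
W, H, T = 8, 8, 4
prims = [0, 1]

immediate = [(p, x, y) for p in prims for y in range(H) for x in range(W)]
binned = [(p, x, y)
          for ty in range(0, H, T) for tx in range(0, W, T)   # tile by tile
          for p in prims
          for y in range(ty, ty + T) for x in range(tx, tx + T)]

print(immediate[16:19])   # still primitive 0: it covers the whole screen first
print(binned[16:19])      # already primitive 1, inside tile (0,0)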
 
Packed math we've seen ample examples from devs on console. AMD included examples as well for some lighting effects.
Ample examples of limited performance increase.
DSBR has bandwidth savings listed, but no fps numbers. If Vega was bandwidth starved as has been claimed,
Which it obviously is not, otherwise AMD would have given specific fps numbers for those savings. In fact AMD told AnandTech to expect the gains to show up in resource-starved GPUs, which leaves out full-fledged Vega.

I'm unsure why anyone would expect a delay.
They already delayed RX 2 months beyond Frontier Edition. They would have delayed it more if they thought it was worth it, instead of half assing it through maybe the worst AMD launch since R600.
The features will be mixed obviously, but bindless for example makes far more sense for GPU driven rendering which we haven't seen.
Guess what? We've already had 2 tiers of bindless in GPUs for many years now, and we've yet to exploit them. So I don't think Tier 3 would make that big of a difference compared to what's already here.
The SM6 "intrinsics" seemed to help Doom enough and already exist in GCN hardware, as GCN2/console is the basis.
Yeah, still not enough for GP102.
in which case they are even harder to pinpoint. We've seen the Energy benchmark with a 2x increase.
How are they hard to pinpoint when they just pinpointed them in the Energy benchmark? The level of contradictions in that statement is high!
 
Delaying the launch even further to wait for driver improvements could have made a lot of sense had a massive improvement been possible in a short time, say, 10% in a month, but for some 3% in that same month, it would have made no sense to delay the launch.

That said, if that 3%/month rate could be sustained, it would be a very big deal after 6 months, and absolutely massive after a year.
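
The compounding is easy to check:

# 3% per month, compounded
for months in (1, 6, 12):
    print(months, round(1.03 ** months, 2))
# 1 -> 1.03, 6 -> 1.19, 12 -> 1.43: ~19% after half a year, ~43% after a year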
 
Ample examples of limited performance increase.
30% that is multiplicative with other boosts is bordering on a generational performance increase. So sure, limited to one generation of increases sounds about right.
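
Quick arithmetic on why "multiplicative" matters; the second factor is just an assumed example of an independent boost stacking on top:

packed_math = 1.30   # the ~30% figure above
other_boost = 1.25   # assumed, e.g. a DSBR-style gain on top
print(f"{packed_math * other_boost:.2f}x")   # roughly 1.6x combined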

Which it obviously is not, otherwise AMD would have given specific fps numbers for those savings. In fact AMD told AnandTech to expect the gains to show up in resource-starved GPUs, which leaves out full-fledged Vega.
Why provide fps figures if the number wasn't going to be representative of final performance? I'm not sure why anyone would even consider that. What figures have been provided represent measured gains with the effect isolated; fps numbers that may change daily don't make much sense.

They already delayed RX 2 months beyond Frontier Edition. They would have delayed it more if they thought it was worth it, instead of half assing it through maybe the worst AMD launch since R600.
Yet they still sell cards as fast as they make them. They're so bad that demand exceeded expectations despite retailers jacking up prices. Worst launch ever with higher than expected revenue! Many more launches like this and all AMD employees will be forced into early retirement, sipping cocktails on private tropical islands, and we'll be in real trouble.

Guess what? We've already had 2 tiers of bindless in GPUs for many years now, and we've yet to exploit them. So I don't think Tier 3 would make that big of a difference compared to what's already here.
Sounds like that final hurdle is a big one. What is the percent increase from a fixed number to unlimited anyways? One sounds infinitely better. I'm sure engines will be just fine with a handful of states. Besides, why have GPU driven rendering and more elegant deferred methods when you have a perfectly capable CPU to bottleneck everything?

How are they hard to pinpoint when they just pinpointed them in the Energy benchmark? The level of contradictions in that statement is high!
Hard to tell their thinking, but probably has to do with performance in a synthetic benchmark being somewhat easy to nail down. None of those pesky resource management issues, variable object counts, complex shapes, etc messing things up.
 
Delaying the launch even further to wait for driver improvements could have made a lot of sense had a massive improvement been possible in a short time, say, 10% in a month, but for some 3% in that same month, it would have made no sense to delay the launch.

That said, if that 3%/month rate could be sustained, it would be a very big deal after 6 months, and absolutely massive after a year.

But that is unrealistic, because either a performance enhancing feature works or it does not work.
 
But that is unrealistic, because either a performance enhancing feature works or it does not work.
Not exactly a shortage of them though, on top of the typical game and driver optimizations. Even enabled is no guarantee the feature is optimal. Just consider DSBR with the ability to change tile sizes dynamically. AMD could be tweaking those algorithms, likely with diminishing returns, for some time and other features may interact with the optimal settings.
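
As a sketch of why that kind of tuning drags on, here's a made-up toy cost model (purely illustrative, nothing from AMD's driver): the optimum tile size shifts as soon as something else changes the effective cache budget.

def frame_cost(tile, cache_budget=256):
    # Toy cost model: more/smaller tiles mean more binning overhead,
    # while tiles bigger than the cache budget pay a spill penalty.
    return 4096 / tile + max(0, tile - cache_budget) * 3

for cache in (256, 192):    # e.g. another feature eating into the cache
    best = min(range(16, 513, 16), key=lambda t: frame_cost(t, cache))
    print(f"cache budget {cache}: best tile {best}, cost {frame_cost(best, cache):.1f}")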
 
Not exactly a shortage of them though, on top of the typical game and driver optimizations. Even enabled is no guarantee the feature is optimal. Just consider DSBR with the ability to change tile sizes dynamically. AMD could be tweaking those algorithms, likely with diminishing returns, for some time and other features may interact with the optimal settings.

But still, you would get one big improvement when you turn the feature on and then some smaller follow-ups, not some consistent medium improvements spread over 12 drivers or so. But imho the big question is why the driver is so bad. Vega is late and still the driver is barely in alpha status. If I compare it to the Maxwell launch, this is the worst display of competence from AMD in ages.

I have my theory about the problem, and it has a lot to do with the primitive shaders and the work needed to transform vertex shaders into unified shaders in the driver, on the fly. But if this is true, the problem will stay with Vega for a long time and hit it again with most new games and surely most new game engines.

But whatever the reason is, the launch was awfully executed in every way possible, from pricing shenanigans to driver quality to availability. And imho so far the press has focused a lot on what Vega might turn out to be and not on the sad state of the Vega ecosystem as launched for the paying customer. $400-700 for a promise is a bad joke, and the press and forums would have gone berserk if NV had tried this.
 
How so? Last time I ran RotTR and AotS both worked.
When the general advice with NVidia is to run games using D3D12 instead of D3D11, then we'll know it's not broken. Since the current, opposite advice is so stark, it indicates a problem.

Same as when the general advice is to buy Vega 64, instead of 1080. Or at the very least, "choose either, they're about the same".

One could argue that D3D12 on NVidia is relatively undesirable, because D3D11 is very good. Though that doesn't provide an answer for why D3D12 is generally regarded as inferior on NVidia.
 
Packed math we've seen ample examples from devs on console. AMD included examples as well for some lighting effects.
We also know that async on consoles is being used quite a lot (I think it was 20-30% depending on the game?).
Truth is, devs on PC games won't go so far as to give AMD a clear advantage over Nvidia, purely because this will hurt their sales. The same goes for everything AMD is currently good at, be it compute shaders, async, or those primitive shaders.
I think this is one of the reasons why AMD is actually trying to automate the primitive shaders (can they even be vendor-agnostic, gaming-wise?). Can you imagine the %^$%&^&#^^%fest we would have seen if devs had access to them? (They will eventually, I guess.)
 
One could argue that D3D12 on NVidia is relatively undesirable, because D3D11 is very good. Though that doesn't provide an answer for why D3D12 is generally regarded as inferior on NVidia.
Maybe because Nvidia doesn't really sell cards but software? Isn't it dictated that full exposure of D3D12 needs a more low-level approach? Nvidia had a lot of trouble with async in AotS, and AotS wasn't even using much of that feature. Perhaps that's why they are trying to push people to use their abstraction layer on Vulkan as well (and since this is Nvidia, we all know what they mean by abstraction layer).
 