AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Discussion in 'Architecture and Products' started by Deleted member 13524, Sep 20, 2016.

  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,211
Is there any data or evidence to back up that claim? All you did was amass some technical names to try to prove a highly dubious, wishful theory. Even AMD never claimed any of the things you say. Anyone with common sense would think that AMD would have delayed the RX Vega launch if any of these features had a significant impact on performance, but I won't resort to that argument because it's common sense. AMD never gave any performance increases for their DSBR implementation; their projection for the feature was rather cautious, and the same goes for primitive shaders. As for Tier 3, SM6, bindless resources: we've had previous incarnations of some of them before, they hardly amounted to anything, and some features are about flexibility, not performance.
Really? How much uplift did AMD claim for primitive shaders? Or DSBR?
     
    DrYesterday and pharma like this.
  2. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    Has anyone demonstrated the performance benefit of tiled rasterisation in NVidia's GPUs?
     
  3. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,929
    Likes Received:
    5,528
    Location:
    Pennsylvania
    AMD have graphs and marketing slides! What else do we need? :roll:
     
  4. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,211
    None, NVIDIA kept it a secret for a long time, so to this day we don't know how much impact it has on actual performance.

Exactly. Despite those, they never gave a solid % fps uplift for any game.
     
  5. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
For multiple GPU generations, Nvidia didn't officially admit the feature existed. It was suspected and hinted at by a few, but the first concrete discussion here came after the RealWorldTech triangle-binning test.
I'm not clear on whether there's an option to turn it off for the sake of testing.

    In fairness to AMD, they know one does not bring an abacus to a multi-dimensional optimization fight.
    If they ever gave multipliers and indicated each was universal and exclusive of the others, that would be setting them up for serious backlash. To my knowledge, they've made sure to keep their "up-to" figures specific to limited subsets and without overall context.
     
    Alexko, pharma and DavidGraham like this.
  6. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
Packed math: we've seen ample examples from console devs. AMD included examples as well, for some lighting effects.
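For anyone unfamiliar with what packed math buys you, here's a minimal sketch in plain Python using the stdlib `struct` module: two FP16 values fit in the footprint of one 32-bit register slot, which is where the 2x rate comes from (the GPU of course does this in hardware; the function names here are made up for illustration).

```python
import struct

# Packed math: one 32-bit register holds two FP16 values, so an ALU
# lane can process both per cycle instead of one FP32 value.

def pack_half2(a, b):
    """Pack two FP16 values into 4 bytes, one 32-bit register's worth."""
    return struct.pack('<ee', a, b)   # 'e' = IEEE 754 half-precision

def unpack_half2(reg):
    return struct.unpack('<ee', reg)

reg = pack_half2(1.5, -2.25)          # both exactly representable in FP16
assert len(reg) == 4                  # same footprint as a single FP32
assert unpack_half2(reg) == (1.5, -2.25)
```

The catch, as the console examples show, is that only precision-tolerant work (lighting terms, post-processing) can safely drop to FP16.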

DSBR has bandwidth savings listed, but no fps numbers. If Vega is as bandwidth-starved as has been claimed, that should help significantly.
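For readers who haven't followed the DSBR discussion: a binning rasteriser sorts triangles into screen tiles before shading, so each tile's colour/depth traffic can stay on-chip, which is where the bandwidth savings come from. A toy sketch (tile size and data layout are invented for illustration; real hardware varies tile sizes dynamically):

```python
# Toy sketch of DSBR-style binning: assign each triangle, by its
# bounding box, to the screen tiles it may cover. Shading then walks
# one tile at a time with that tile's framebuffer kept on-chip.
TILE = 32  # hypothetical tile size in pixels

def bin_triangles(triangles):
    bins = {}
    for tri in triangles:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        for ty in range(min(ys) // TILE, max(ys) // TILE + 1):
            for tx in range(min(xs) // TILE, max(xs) // TILE + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

tri = [(5, 5), (40, 8), (20, 30)]   # straddles tiles (0,0) and (1,0)
bins = bin_triangles([tri])
assert sorted(bins) == [(0, 0), (1, 0)]
```

The real rasteriser also culls occluded fragments within each bin, which is the "deferred" half of the name; that part is omitted here.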

As AMD is currently selling every Vega that hits the market, and the pro side works fine, I'm not sure why anyone would expect a delay. They could always choose not to make a profit and go out of business, I suppose.

The features will obviously be a mixed bag, but bindless, for example, makes far more sense for GPU-driven rendering, which we haven't seen yet. The SM6 "intrinsics" seemed to help Doom enough, and they exist in hardware for GCN, since GCN2/console is the basis.

    AMD hasn't presented much hard data, but depending on the status of certain features they may not be prepared to. Why release piecemeal performance improvements when they likely synergize?

When Nvidia released similar features, they never acknowledged their existence. There was just a node-jump level of performance increase from roughly the same hardware, with a combination of tiled rasterisation and register caching providing the uplift.

These features will entail varying degrees of tuning, as they're part of a black box. If AMD hasn't finished that, releasing numbers isn't warranted. They may also be intertwined, in which case they're even harder to pinpoint. We've seen the Energy benchmark with a 2x increase, and bandwidth savings from DSBR presented. A driver setting to force DSBR, even if it crashes, would be nice just for testing.

As mentioned above, with no way to disable it, I'm not sure it can be tested. I thought someone tried the RWT test with really old drivers a while back, but that's a poor approximation of gaming.
     
  7. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Maxwell was quite different from Kepler.
     
  8. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,211
    Ample examples of limited performance increase.
Which it obviously is not; otherwise AMD would have given specific fps numbers for those savings. In fact, AMD told Anandtech to expect the gains to show up in resource-starved GPUs, which leaves out full-fledged Vega.

They already delayed RX Vega two months beyond the Frontier Edition. They would have delayed it more if they thought it was worth it, instead of half-assing their way through maybe the worst AMD launch since R600.
Guess what? We've had two tiers of bindless in GPUs for many years now, and we've yet to exploit them. So I don't think Tier 3 would make that big of a difference compared to what's already here.
    Yeah, still not enough for GP102.
How are they hard to pinpoint when they just pinpointed them in the Energy benchmark? The level of contradiction in that statement is high!
     
    pharma, xpea and ieldra like this.
  9. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
Delaying the launch even further to wait for driver improvements could have made a lot of sense had a massive improvement been possible in a short time (say, 10% in a month), but for some 3% in that same month, delaying would have made no sense.

    That said, if that 3%/month rate could be sustained, it would be a very big deal after 6 months, and absolutely massive after a year.
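The compounding claim is easy to check with back-of-the-envelope arithmetic (the 3%/month rate is hypothetical, as above):

```python
# Hypothetical sustained driver gains of 3% per month, compounded.
gain_6mo = 1.03 ** 6     # about +19% after six months
gain_12mo = 1.03 ** 12   # about +43% after a year

assert round(gain_6mo, 3) == 1.194
assert round(gain_12mo, 3) == 1.426
```

So a sustained 3%/month really would be transformative over a year, which is exactly why nobody expects such a rate to hold: driver optimisation gains usually front-load and then taper.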
     
  10. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
A 30% gain that is multiplicative with other boosts is bordering on a generational performance increase. So sure, "limited" to one generation's worth of increase sounds about right.

Why provide fps figures if the numbers weren't going to be representative of final performance? That seems rather pointless, and I'm not sure why anyone would expect it. The figures that have been provided represent measured gains with the effect isolated; fps figures that may change daily don't make much sense.

Yet they still sell cards as fast as they can make them. They're so "bad" that demand exceeded expectations despite retailers jacking up prices. Worst launch ever, with higher-than-expected revenue! A few more launches like this and all AMD employees will be forced into early retirement, sipping cocktails on private tropical islands, and we'll be in real trouble.

Sounds like that final hurdle is a big one. What's the percentage increase from a fixed number to unlimited, anyway? One sounds infinitely better. I'm sure engines will be just fine with a handful of states. Besides, why have GPU-driven rendering and more elegant deferred methods when you have a perfectly capable CPU to bottleneck everything?

Hard to tell their thinking, but it probably has to do with performance in a synthetic benchmark being somewhat easy to nail down: none of those pesky resource-management issues, variable object counts, complex shapes, etc. messing things up.
     
  11. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
But that is unrealistic, because a performance-enhancing feature either works or it doesn't.
     
  12. Anarchist4000

    Veteran

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
There's no shortage of them, though, on top of the typical game and driver optimisations. Even when a feature is enabled, there's no guarantee it's running optimally. Just consider DSBR with the ability to change tile sizes dynamically: AMD could be tweaking those algorithms for some time, likely with diminishing returns, and other features may interact with the optimal settings.
     
  13. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
But you would still get one big improvement when you turn the feature on and then some smaller follow-ups, not consistent medium-sized improvements over 12 drivers or so. But imho the big question is: why is the driver so bad? Vega is late, and the driver is still barely at alpha status. Compared to the Maxwell launch, this is the worst display of competence from AMD in ages.

I have my theory about the problem, and it has a lot to do with primitive shaders and the work needed to transform vertex shaders into unified primitive shaders in the driver, on the fly. But if this is true, the problem will stay with Vega for a long time and hit it again with most new games, and surely most new game engines.

But whatever the reason, the launch was awfully executed in every way possible, from pricing shenanigans to driver quality to availability. And imho the press has so far focused a lot on what Vega might turn out to be, and not on the sad state of the Vega ecosystem as launched for the paying customer. $400-700 for a promise is a bad joke, and the press and forums would have gone berserk if NV had tried this.
     
    Heinrich4 and pharma like this.
  14. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    NVidia's D3D12 still seems to be broken. How long has that been now?
     
    Heinrich4 likes this.
  15. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    How so? Last time I ran RotTR and AotS both worked.
     
    pharma likes this.
  16. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,451
    Likes Received:
    471
    Well, it's not $400-700 for a promise. Vega 64 and Vega 56 in current state offer price/performance ratio comparable to the competition. The "promise" is a free bonus.
     
  17. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
Did anybody ever say "wait until NV activates DX12 and we will see the real performance of the chip", or was the bad DX12 performance rightfully criticized?
     
    Heinrich4 and pharma like this.
  18. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
When the general advice with NVidia is to run games using D3D12 instead of D3D11, then we'll know it's not broken. The current advice is the opposite, and it's so stark that it indicates a problem.

Same as when the general advice is to buy a Vega 64 instead of a 1080. Or, at the very least, "choose either, they're about the same".

One could argue that D3D12 on NVidia is relatively undesirable because D3D11 is very good. Though that doesn't answer why D3D12 is generally regarded as inferior on NVidia.
     
  19. giannhs

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    44
    Likes Received:
    47
We also know that async compute on consoles is used quite a lot (I think it was 20-30%, depending on the game?).
The truth is, PC devs won't go so far as to give AMD a clear advantage over NVIDIA, purely because that would hurt their sales. The same goes for everything AMD is currently good at, be it compute shaders, async, or those primitive shaders.
I think this is one of the reasons AMD is actually trying to automate primitive shaders (can they even be agnostic, gaming-wise?). Can you imagine the %^$%&^&#^^%fest we would have seen if devs had direct access to them? (They will get it eventually, I guess.)
     
  20. giannhs

    Newcomer

    Joined:
    Sep 4, 2015
    Messages:
    44
    Likes Received:
    47
Maybe because NVIDIA don't really sell cards, but software? Isn't it a given that full exposure of D3D12 needs a more low-level approach? NVIDIA had a lot of trouble with async in AotS, and AotS wasn't even using much of the feature. Perhaps that's why they're also pushing people to use their abstraction layer on Vulkan (and since this is NVIDIA, we all know what they mean by "abstraction layer").
     