AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Discussion in 'Architecture and Products' started by ToTTenTranz, Sep 20, 2016.

  1. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
Likely increased, given all the new roles for the L2. The L1/register size, as I recall from the Linux drivers, actually shrank to 1k per wave from 16k. I need to double-check that. That would make it likely they're paging the register file from somewhere. They could be deliberately spilling inactive registers to RAM or L2 to increase occupancy with a smaller, highly ported RF. Obviously I'm speculating on that, but it would amplify the apparent VGPRs and simplify scheduling. It makes sense for long-running shaders at least.
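To make the occupancy argument concrete, here is a toy sketch (my own illustrative numbers, not actual Vega behavior): on a GCN-style SIMD, the number of resident waves is capped by how many per-thread VGPRs fit in the physical register file, which is why paging cold registers out to L2/RAM could recover occupancy with a smaller file.

```python
# Toy model of wave occupancy per SIMD (illustrative assumptions only):
# MAX_WAVES architectural wave slots, RF_REGS addressable VGPRs per lane.

MAX_WAVES = 10   # wave slots per SIMD (assumption)
RF_REGS = 256    # VGPRs per lane in the physical file (assumption)

def occupancy(vgprs_per_thread, physical_regs=RF_REGS):
    """Waves resident per SIMD, limited by the physical register file."""
    return min(MAX_WAVES, physical_regs // vgprs_per_thread)

for v in (24, 32, 64, 128):
    print(v, occupancy(v))
# A design that spills the cold registers of inactive waves could keep
# more waves resident than the raw physical_regs budget allows.
```

The point of the sketch is just that occupancy drops quickly as register pressure rises, so any mechanism that shrinks the *resident* footprint per wave pays off for long-running shaders.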

AMD does have color compression, and I'd expect diminishing returns on improving that feature, especially if AMD foresees the market moving towards compute, where it isn't used. Compression there is left to the programmer to implement.

Prefetch is consecutive data words. Most of that compression will come from working around masked lanes. Graphics bandwidth benefits from this because SIMDs usually read consecutive data, with zero compression being a significant component of overall compression. From there it's a matter of smaller block sizes mapping better to sparse data. Prefetching one- and zero-compressed data would be pointless, as you'd just skip it. Then add in the additional channels on separate tasks to make up the bandwidth.
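A minimal sketch of the masked-lane point (illustrative only, not AMD's actual DCC scheme): when only part of a wave's lanes are active, the inactive lanes leave runs of zeros in the written block, and a simple zero run-length pass recovers most of that bandwidth.

```python
# Toy zero compression over a SIMD-width block (illustrative scheme):
# runs of zero words collapse to a single ("Z", count) token,
# non-zero words pass through untouched.

def zero_compress(block):
    """Run-length encode zero words; everything else is kept verbatim."""
    out, i = [], 0
    while i < len(block):
        if block[i] == 0:
            j = i
            while j < len(block) and block[j] == 0:
                j += 1
            out.append(("Z", j - i))  # one token covers the whole run
            i = j
        else:
            out.append(block[i])
            i += 1
    return out

# A 64-lane write where only the first 16 lanes were active:
block = [0xDEAD] * 16 + [0] * 48
print(len(zero_compress(block)))  # 17 tokens instead of 64 words
```

Smaller block sizes help the same way: a sparse write is more likely to leave an entire small block all-zero, which then compresses to a single token.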

    Could be pipeline stalls which wouldn't be surprising. Or he had Chill running.
     
  2. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,720
    Likes Received:
    140
Looks like Vega is likely ~15+% behind Pascal in perf/watt. I can sort of see the thought process... You have a card with "uncapped" performance similar to, or maybe even slightly better than, the 1080 Ti FE/Titan X. You get 16GB of VRAM vs 11 or 12 GB, and you get 2x FP16... The 1080 Ti has an MSRP of $699; the Titan Xp sells for $1200 (direct from Nvidia)... The problem for AMD is that both of those cards are 250W TDP. I would guess Vega's theoretical performance is being limited by power and/or thermal considerations.
     
  3. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,720
    Likes Received:
    140
Yes, it just is not as good as Nvidia's. And yes, it matters today. Nvidia's various bandwidth-saving techniques were worth at least 25% of their perf/watt advantage last time I did the math...
     
  4. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,402
    Likes Received:
    4,111
    Location:
    Well within 3d
    Is this closer to R300 being the first DX9 card or closer to its continuation of TruForm?
By the admittedly meaningless metric of my personal boredom, that's not good enough--although if something truly substantial doesn't change with regard to what we're seeing in Vega versus the soon-to-be-replaced competition, it might not be the only metric.
    I hope something more interesting comes out about what has changed, and I feel as if AMD's GFX could stand for some updates on the same order as Vega in a more rapid cadence.
    While it looks to be a nice change, I'm not seeing this as being sufficient if they start the "leveraging" game that was done with GCN1-4.


    Any references on that?

    If it's that dependent on primitive shaders, why would something so specific to one chip in the entire market get that much support?

    It would run counter to AMD's attempt to reform its image on execution, which I suppose could have some indirect effects in terms of investors or creditors. Depending on how much the rumors about some of AMD's changes being driven by covenants in its financing, perhaps it does matter if RTG cannot turn things around. The bluster has gotten tiresome to me, at any rate.

    If there is a reference you can find, I would be interested to read it.

    That might be losing out. There are areas where in-memory compression is an option, such as for IBM's server chips. Some nice aspects of having it be implicit or physically positioned at an LLC or further out in the hierarchy is that it doesn't clutter up the instruction stream or the tightest portions of the execution loop.
    Gathering metadata about overall execution could be applied to configuring kernel launches or changing/eliding them for workloads that are drawing from sources that have stretches of highly correlated behavior.
     
    Geeforcer likes this.
  5. no-X

    Veteran

    Joined:
    May 28, 2005
    Messages:
    2,345
    Likes Received:
    313
Are there any publicly available reviews or tests that prove it?

Why doesn't it work for AMD? Tonga/Antigua brought color compression, but the resulting performance per watt was worse compared to 1st-gen GCN:
    http://www.hardware.fr/articles/945-6/consommation-efficacite-energetique.html
     
  6. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,720
    Likes Received:
    140
    Yes.
     
  7. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,402
    Likes Received:
    4,111
    Location:
    Well within 3d
    There were synthetics that showed that the ratio of bandwidth achieved with compressible versus non-compressible was larger with Nvidia, although how that is borne out in more complex scenarios is unclear.
Some of Nvidia's architectural changes in terms of tiling would have had synergistic effects, increasing how correlated accesses were and reducing thrashing of any compression metadata. AMD mentions metadata thrashing in more random access patterns as a potential problem in some cases, although the absence of a statement from Nvidia doesn't necessarily mean it's not an issue there.

    Tonga was an odd launch. While I think things did get better, it seemed to have some other problems going on that made up for any memory efficiencies.
     
  8. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,311
    Likes Received:
    503
I know R300 means different things to different people, but the reason it is on the Mt. Rushmore of GPUs is that it was not only a huge upgrade over the previous generation (both ATI's and Nvidia's) but, by many metrics, a superior design to the still-to-be-released same-gen product from the competitor, and it clearly far exceeded everyone's expectations, especially Nvidia's.

    With high-quality settings (1600x1200, 4AA+8AF) it was
    • 20-200% faster than 4600 (previous competitor generation)
    • 10-40% faster than 5800 in most games
    • Despite Nvidia's efforts, the CineFX architecture never truly caught up with Khan, ushering in an almost two-year stretch of ATI dominance.

So to me, when someone says "9700-like impact", I have a vision of a product/architecture that is dominant right off the bat, makes the competitor's yet-to-be-released next gen competitively stillborn, gets a stranglehold on the market, and reshapes the industry for years to come. To me, personally, everything about this release so far hearkens far more to R300's much-maligned counterpart (NV30).
     
    Lightman and DuckThor Evil like this.
  9. Heinrich4

    Regular

    Joined:
    Aug 11, 2005
    Messages:
    596
    Likes Received:
    9
    Location:
    Rio de Janeiro,Brazil
    #2269 Heinrich4, Jun 28, 2017
    Last edited: Jun 28, 2017
  10. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
The problem is that color compression alone won't save the day. Tiled rasterization and a large L2 caching the framebuffer tiles are needed to really leverage the framebuffer compression. In AMD's case (pre-Vega, that is), the tiny ROP caches need to be flushed way too often for real workloads (i.e., outside of fillrate tests with fullscreen quads), meaning the overhead for that can mask quite a bit of the advantage of compression. I remember a brief discussion of a paper from some AMD guys about the tradeoff between tile sizes, compression ratios and cache sizes. Only with larger caches can one efficiently use larger tiles (which tend to reach higher compression ratios) without excessive overhead for tile reloading, which may otherwise turn the compression into a net negative. With tiled rasterization and larger caches for the ROPs, the break-even for compression is simply lower and easier to reach (edit: and the advantage is larger even for the exact same compression ratios).
    The hope is that Vega is able to massively improve on that situation.
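The break-even argument above can be sketched numerically (my own toy model and made-up numbers, not the AMD paper's): count the compressed payload plus the full tile reloaded whenever the cache evicted it prematurely, and compare against just writing the tile uncompressed.

```python
# Toy model of the tile-size/cache-size tradeoff (assumed numbers):
# net bytes moved per tile-touch = compressed payload + reload cost,
# where flush_rate is the fraction of touches that find the tile evicted.

def net_traffic(tile_bytes, ratio, flush_rate):
    """Bytes moved per tile-touch under compression with reload overhead."""
    return tile_bytes / ratio + flush_rate * tile_bytes

def wins(tile_bytes, ratio, flush_rate):
    """Does compression beat writing the tile uncompressed?"""
    return net_traffic(tile_bytes, ratio, flush_rate) < tile_bytes

# Tiny ROP caches -> frequent flushes: 2:1 compression goes net negative.
print(wins(1024, ratio=2.0, flush_rate=0.7))  # False
# Large L2 keeping tiles resident -> rare flushes: compression pays off.
print(wins(1024, ratio=2.0, flush_rate=0.1))  # True
```

In this model the break-even flush rate is simply 1 - 1/ratio, which is why a larger cache (lower flush rate) makes even the same compression ratio worth more.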
     
    #2270 Gipsel, Jun 28, 2017
    Last edited: Jun 28, 2017
  11. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
So they advertise the Gaming Mode for the FE, but do not send cards to reviewers for gaming benchmarks? Has the PR department gone insane?

What do they think the early adopters will be doing for the next month - keeping their cards secret, or posting benchmarks like crazy?

     
  12. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,629
    Likes Received:
    1,001
    Location:
    France
    Seems like they know the product won't be good :/
     
  13. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
The only thing I have seen "advertised" is that you can use the switch in the driver to enable gaming mode (I don't know exactly how it works, or whether you need to reboot the card, if I follow the PCWorld article)... I can imagine it switches to a more gaming-friendly driver (containing some of the Catalyst driver game optimizations). I don't even know if it disables some professional hardware features (ECC?).

They implemented this mode to allow game developers to test and run their games instead of moving to other hardware, not for gamers. (And by game developers, I think this GPU is aimed more at indie VR developers than big studios.)

The only real advertising I have seen so far from AMD is them telling you that if it's for gaming, you should wait for the RX Vega...
     
    #2273 lanek, Jun 28, 2017
    Last edited: Jun 28, 2017
  14. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,720
    Likes Received:
    140
    I don't think I ever indicated otherwise? In my post from August of last year I believe I described it as a necessary but not sufficient condition...

    Clearly, it has improved... but I would stick with my previous guess of still ~15%+ behind Pascal in perf/watt.
     
  15. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    9,237
    Likes Received:
    3,182
    Location:
    Finland
The way AMD has commented on it, the "game mode" isn't supposed to be equal to having the Radeon RX version of the card.
     
  16. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer

    Joined:
    Jun 25, 2014
    Messages:
    4,515
    Likes Received:
    3,872
    Maybe, just maybe, they should have enabled the "gaming" mode in the drivers when RX Vega released to avoid such early benchmarks.
     
    chris1515 and DavidGraham like this.
  17. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland

And this brought ATI back into the market, or in the 9700's case, I should say made a powerful entry into the market... I don't think Carsten was thinking of "performance" and a superior architecture (as the 9700 was), but just that it would bring sales back to AMD and let them recapture some of their market share. I don't want to speak for him, of course, so this just reflects the way I understood his post.

The 9700 Pro (I have owned many of them, all Maya editions, and then 9800s) was incredible; it introduced new technology and new possibilities in graphics rendering (not only for gaming)... But it also had the effect of winning ATI a lot of market share at the time.
     
    #2277 lanek, Jun 28, 2017
    Last edited: Jun 28, 2017
    chris1515 and CarstenS like this.
  18. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
Everything about Vega looks programmable, in my view, with the Tier 3 features. So while the traditional pipeline still exists, they have the option of transparently using a primitive shader for everything. That may be required for the geometry improvements and still a WIP. Obviously no evidence of that, but it seems logical.

    May take some digging, but I'll see if I can find it.
     
  19. Geeforcer

    Geeforcer Harmlessly Evil
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,311
    Likes Received:
    503
    Story so far:

    - "You can't compare these cards to Quadro! I don't care that AMD themselves categorize this card as part of the Pro line and boast 'professional, but not certified drivers'. It's not a 'Pro card'!"
    - "OK then, if we look at some gaming benchmarks..."
    - "OMG, what kind of idiot would expect a PRO card to have decent gaming drivers?"

    You know how sometimes the studios refuse to screen a movie for the critics...?
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,359
    Likes Received:
    3,732
All the leaked CompuBench results, 3DMark results, rumors, early AMD demos, and even the latest PCWorld preview pointed to something between the 1080 and 1080 Ti, and this pretty much confirms it. During the preview, they removed fps counters from their gaming demos in the presence of the Titan Xp, despite showing them months ago with Doom, Battlefront and Sniper 4; now we know why.

    I expect Vega will be "very" close to the 1080Ti in DX12 and Vulkan games though.
     