AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Discussion in 'Architecture and Products' started by ToTTenTranz, Sep 20, 2016.

  1. Cat Merc

    Newcomer

    Joined:
    May 14, 2017
    Messages:
    124
    Likes Received:
    108
    I sort of doubt Vega can do 12 TFLOPS at 225W, considering that even at 300W it's power-throttling to roughly 1450 MHz.

    Someone should try lowering the power target to 225W and see what happens.
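    For reference, peak FP32 throughput is just shader count × 2 FLOPs per FMA × clock. A quick back-of-the-envelope sketch (assuming Vega 10's 4096 stream processors; the numbers are illustrative):

    ```python
    # Peak FP32 throughput = shaders * 2 FLOPs per FMA * clock (Hz).
    # Assumes Vega 10's 4096 stream processors.
    def peak_tflops(shaders, clock_mhz):
        return shaders * 2 * clock_mhz * 1e6 / 1e12

    print(peak_tflops(4096, 1450))  # ~11.9 TFLOPS at the observed throttle clock
    print(peak_tflops(4096, 1465))  # hitting 12 TFLOPS needs ~1465 MHz sustained
    ```

    So 12 TFLOPS at 225W would require sustaining a higher clock than the card manages at 300W.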
     
  2. Ryan Smith

    Regular

    Joined:
    Mar 26, 2010
    Messages:
    611
    Likes Received:
    1,052
    Location:
    PCIe x16_1
    Damien is at AMD now? While I'm happy for him, still, aww.:(

    Don't underestimate how much power rasterizers and ROPs can suck up. If you're just doing compute, the power profile of a GPU is very different.
     
    Kej, ImSpartacus, T1beriu and 4 others like this.
  3. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    939
    Likes Received:
    398
    Yes, the thing is: would AMD really launch a *professional* card aimed at professionals and scientists with older software when the final version was almost finished? That kind of software support isn't something you can wrap up in a month, even less in practice, because you have to ship the cards with that software, so it needs to be done before production starts.

    In a hypothetical world I could trust AMD's claim that "RX Vega will be much different", but in the real world I really fail to see how AMD could turn a product around so much in so little time.
     
  4. seahawk

    Regular

    Joined:
    May 18, 2004
    Messages:
    511
    Likes Received:
    141
    And most amazingly, when NV launched Maxwell the TBR just worked. NV didn't even talk much about it, and nobody gave a damn whether the GPU used a fallback mode or not.

    I see no good answer to the problem for AMD.

    The best case is:

    a) we sold you a $1,000 card and decided to withhold much of its power from you for a month, because our PR department said it makes sense

    The other options are:

    b) our driver team sucks
    c) our hardware sucks

    or a combination of b and c
     
  5. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,029
    Likes Received:
    3,101
    Location:
    Pennsylvania
    How is the infinity fabric integrated into Vega and what could it possibly mean as far as potential bottlenecks? Is it used to to integrate with new functions such as the cache controller for accessing additional working memory? Replaced crossbar for communication with memory controllers?

    This is the first GPU with it so I'm curious as to it's use.
     
  6. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    In the simplest case there will just be a new driver version for download in a few weeks or so. Imagine the driver support for all the enhanced GCN5 features simply wasn't ready for prime time yet. They still had issues with visual glitches or crashes, so it needs some more polish.
    Yes, you can argue AMD shouldn't have launched the FE in the first place if that is the case. But they did, and they may have preferred stable operation over a faster but crashing driver.
     
  7. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,983
    Likes Received:
    1,496
    How do you know Maxwell's TBR just worked? It received performance upgrades consistently for a year. All of AMD's GPUs have performed better as drivers improved, and most of them ended up besting cards that were originally faster.

    So your example in a) sounds silly because no one would pitch it that way. They would say: buy our $1k card today and look forward to constant performance improvements as software and drivers continue to take advantage of new features.


    I will hold off for gaming Vega before making my judgement on the card. I've had plenty of AMD cards that have aged extremely well compared to Nvidia cards.
     
    no-X likes this.
  8. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,983
    Likes Received:
    1,496
    I'd assume for the professional market stable drivers are the most important aspect. Someone using these to make money will be pissed if, halfway through a 12-hour render or what have you, the thing crashes and they lose all that work.
     
  9. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,183
    Likes Received:
    1,840
    Location:
    Finland


    AMD re-affirms RX Vega + others at SIGGRAPH (though technically "announce" could mean they'll just tell it's coming in month x)
     
    Lightman likes this.
  10. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,983
    Likes Received:
    1,496
    It will be interesting to see what it is. I want to get a 1080 Ti-performance part for $500 sometime this fall, and I'm hoping Vega is it. It will be a nice jump from my 290.
     
  11. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    I have my invitation, as it's a holiday... see you there!

    If you are bored, come to the Blender sessions, there's a lot of news... a lot.
     
    Lightman likes this.
  12. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    With FE marketed more as a development card, hardware capabilities should matter more than performance and, to some degree, stability. Stability is important, but applications in development are assumed to be buggy anyway.

    http://www.phoronix.com/scan.php?page=news_item&px=ROCm-1.6-Released

    ROCm 1.6 landed yesterday; no notes on what it adds beyond Vega support, so presumably there are deep learning devs more concerned with HBCC and packed math, which should still work for developers.

    It depends on which part of the "professional" market is being discussed. I'm not sure FEs were designed for 12-hour renders and production as much as for debugging software. Out on the "Frontier" isn't the best place for mission-critical stability.
     
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    The binning rasterizer in AMD's patent is fine with batching and binning primitives with transparency.
    The triangle bin test's having every pixel read and write a common value seems like it would be a dependence of some sort, and that would potentially meet a close batch condition.
    The triangle test may place a secondary emphasis on HSR, since it is counting on the opacity of the triangles to help demonstrate the behavior, and it would show a very stark difference in behavior if AMD's batch-and-bin process were taken to the most extreme outcome.

    However, I may have conflated that consideration which is specific to the test with the more general behavior of the batching step. Even if the triangles were transparent, the question remains how the batching would proceed, and the sequence of binning from the batch and pulling in the next one.
    The patent is written to indicate that the rasterizer iteratively processes through a batch across all bins, but whether that serializes the whole process is not clear. A non-contrived scene that didn't have perfectly overlapping triangles might include some other kind of distribution or give the tiling hardware a less rigid sequence of bins to work with.
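    A toy sketch of the batch-then-bin flow as I read the patent (the names and the batch-close condition here are my own invention, not AMD's):

    ```python
    # Toy model of a batching/binning rasterizer: accumulate primitives into a
    # batch until a close condition fires (here, a screen-space overlap, i.e. a
    # potential pixel dependence), then the batch would be walked across all the
    # bins it intersects before the next batch is pulled in.
    # All names and conditions are illustrative, not from the patent text.

    def overlaps(prim_a, prim_b):
        """Screen-space bounding-box overlap = a potential pixel dependence."""
        ax0, ay0, ax1, ay1 = prim_a
        bx0, by0, bx1, by1 = prim_b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    def batch_primitives(prims):
        """Close the current batch when a new primitive depends on one in it."""
        batch = []
        for p in prims:
            if any(overlaps(p, q) for q in batch):
                yield batch
                batch = []
            batch.append(p)
        if batch:
            yield batch

    # Perfectly overlapping triangles (as in the test) force one-primitive
    # batches, serializing the whole bin walk; disjoint ones share a batch.
    tris = [(0, 0, 10, 10)] * 3
    print([len(b) for b in batch_primitives(tris)])   # [1, 1, 1]
    tris = [(0, 0, 10, 10), (20, 0, 30, 10)]
    print([len(b) for b in batch_primitives(tris)])   # [2]
    ```

    Under this reading, the contrived fully-overlapping test is the worst case for the batcher, which is why a more natural scene might let the tiling hardware work with larger batches and a looser bin order.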

    edit: It slipped my mind earlier, but I was able to track this down again:
    http://www.google.com/patents/US20160371873
    This is a continuation of the prior patent, and has more details on how the batching and bin intercept processes work.
    There are more details as to why a batch can be closed as well.
    It's not clear where it has been inserted. At a minimum, it should be part of the interface touching the memory controllers. That may give another reason why the ROPs no longer touch the controllers directly. Their caches don't speak a superset of hypertransport, and there may have been less work done to their internals by putting them behind another cache.

    How it connects to the L2 or possibly the CUs is unclear. The numbers given for Vega's fabric bandwidth would make it worse for the CU-L2 interface, and I suspect a HyperTransport packet won't be as lightweight as the cache management is now.

    There's a non-data portion of the fabric related to control that might be plugged into various blocks. That's probably there, and it would be used to, among other things, carry data for DVFS to the hardware that manages it. What granularity it plugs into the hardware at may be interesting. The more integrated the fabric is, the more area goes to non-execution resources, and the more individual blocks might need design changes.
     
    #2613 3dilettante, Jul 1, 2017
    Last edited: Jul 1, 2017
    Malo likes this.
  14. Genotypical

    Newcomer

    Joined:
    Sep 25, 2015
    Messages:
    38
    Likes Received:
    11
    Raja called Vega an SoC. What does this mean for a GPU?

    Compute should be simpler for AMD to deal with than graphics. I don't think their "professional" driver can be called older software or not ready. Launch what you have working now and make money off it; it's not a gaming card anyway, so who cares. RX Vega would likely have launched alongside Vega FE if they were ready with the gaming side of things.
     
  15. McHuj

    Veteran Regular Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,432
    Likes Received:
    553
    Location:
    Texas


    Either the slide was fake, or it turned out that the design couldn't meet the original performance goals, so they had to crank up voltages and/or frequency to stay competitive.
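    For a rough sense of why chasing clocks hurts, CMOS dynamic power scales roughly as C·V²·f, and higher frequency usually needs higher voltage too. A back-of-the-envelope sketch (all numbers illustrative):

    ```python
    # Rough CMOS dynamic-power model: P ~ C * V^2 * f.
    # Pushing frequency usually also requires more voltage, so power grows
    # much faster than performance. Numbers below are illustrative only.
    def relative_power(v, f, v0=1.0, f0=1.0):
        """Dynamic power relative to a (v0, f0) baseline."""
        return (v / v0) ** 2 * (f / f0)

    # e.g. +10% clock needing +10% voltage costs ~33% more power for ~10% perf
    print(relative_power(1.10, 1.10))  # ~1.33
    ```

    That cube-ish scaling is consistent with a chip missing its perf/W target once the clocks get cranked past the design's sweet spot.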
     
  16. Genotypical

    Newcomer

    Joined:
    Sep 25, 2015
    Messages:
    38
    Likes Received:
    11
    PCPer reported the card was dropping to lower power tiers without lowering clock speed; it could be that the power management is not working well currently. The levels were 280W, 240W and 200W without changing frequency (as far as they could tell), and it happened within temperature limits, which is odd.
     
  17. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Again, pardon my ignorance, but are there any compute-only benchmarks (something like Luxmark maybe)? Have they been run on Vega yet? I'm wondering if the "NCUs" give better performance beyond clock-speed-based improvements.
     
  18. CaptainGinger

    Newcomer

    Joined:
    Feb 28, 2004
    Messages:
    92
    Likes Received:
    47
    PCPer ran Luxmark. https://www.pcper.com/reviews/Graph...B-Air-Cooled-Review/Professional-Testing-SPEC

     
  19. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
    Yeah, I saw that. I was looking for more data points, but I don't know much about compute shader benchmarks.
     
  20. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,183
    Likes Received:
    1,840
    Location:
    Finland
    Nothing, they've referred to GPUs as SoCs before too.
     