Forbes: AMD Created Navi for Sony PS5, Vega Suffered [2018-06] *spawn*

Discussion in 'Graphics and Semiconductor Industry' started by BRiT, Jun 12, 2018.

  1. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,032
    Likes Received:
    15,780
    Location:
    The North
    So deals of this nature need to be worked out way in advance, then. What happens if they lose out on a major client? Do they just not release a major architecture at all anymore?
     
  2. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,510
    Likes Received:
    4,128
    They will obviously still do it, at least for the other major client and for PC; remember they have three clients now (PlayStation, Xbox, PC).

    I guess something changed when AMD found themselves with a basically uncontested monopoly on the console market: they had to adapt their strategy to fit all the pieces together, consoles and PC. For example, it's clear the upgraded consoles further solidified the decision to extend the life of GCN. The PS4 Pro/One X needed backward compatibility with the base PS4/XO, so GCN had to live a lot longer than usual to accommodate the new consoles and the new unified console/PC strategy.
     
    pharma and egoless like this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,032
    Likes Received:
    15,780
    Location:
    The North
    And in turn Navi is somewhat an extension of what GCN was. Interesting.

    So if we assume this is all true, then what kind of evidence is there inside Navi that would support Navi having been created for Sony and not, in particular, for Microsoft? I suppose MS could theoretically use any GPU setup, even an entirely different architecture if they wanted to (with some pain for supporting BC of course, but doable as we see today); whereas Sony is a little more stuck with using something that is closer to GCN?
     
  4. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,576
    Likes Received:
    16,033
    Location:
    Under my bridge
    Circumstantial at this point. It could very well be that AMD didn't want to keep changing architectures and came up with a long-term vision, founded on compute because that's where they thought the money was. And they've stuck with it this long because designing RDNA has taken too long, notably because they haven't the same income for funding R&D as they had when they were chopping and changing, trying to find the best solution for these new compute workloads.

    How often does nVidia change their core shader architecture with a ground-up rewrite?
     
    pharma likes this.
  5. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,362
    Likes Received:
    3,101
    Location:
    Germany
    "GCN will be the basis of our chips for many years to come" – that's what someone higher up from AMD's engineering team said at the GCN launch.
     
  6. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    2,214
    Likes Received:
    1,617
    Location:
    msk.ru/spb.ru
    Tesla-Fermi - 2006-2012
    Kepler - 2012-2014
    Maxwell-Pascal - 2014-2018 (2017 if we count GV100 as the first member of Volta-Turing)
    Volta-Turing-(Ampere?) - 2018-?

    So about once per 4 years or so?
     
    pharma likes this.
  7. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,510
    Likes Received:
    4,128
    NVIDIA changes architecture every two generations or so. Since DX10 we had unified shaders with Tesla (8800 Ultra/GTX 285); then Fermi (GTX 480/GTX 580) came in with a heavy focus on compute and tessellation; then Kepler (GTX 680/GTX 780 Ti), which came with a big focus on power efficiency through a scheduling rework; then NVIDIA reworked scheduling again in Maxwell and Pascal (GTX 980 Ti/GTX 1080 Ti), both of which relied on massive increases in geometry output and memory compression to drive huge performance gains; then came Volta and Turing, which again introduced tons of new features (AI acceleration, ray tracing, separate INT/FP32 paths, mesh shaders, etc.).

    In the span of 12 years (2007 to 2018) they changed archs at least 5 major times, while AMD only made 2 major transitions in that period (VLIW5 to VLIW4 to GCN).
     
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,576
    Likes Received:
    16,033
    Location:
    Under my bridge
    I'm unconvinced. GCN has evolved but just hasn't had a name change. GCN stopped at GCN 5, which includes things like scheduling and tessellation changes. The core GCN architecture is the SIMD CUs and wavefronts, so we're counting two core architecture changes, VLIW and GCN. In that time, hasn't nVidia had effectively one arch, the CUDA core? So nVidia introduced CUDA with Tesla and have stuck with it, and AMD have used GCN. nVidia has named their different CUDA-based generations with different family names, whereas AMD has just named theirs GCN x.

    Is there really a difference in behaviour? Both have a long-term architectural DNA as the basis for their GPUs, with refinements happening in scheduling and features across the evolution of that core DNA.
     
    #128 Shifty Geezer, Jan 30, 2020
    Last edited: Jan 30, 2020
    techuse and iroboto like this.
  9. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,362
    Likes Received:
    3,101
    Location:
    Germany
    It's a moving target, as in "define what makes an architecture an architecture".

    Me, I'd consider Kepler Nvidia's last deep architecture overhaul, since it did away with the HW scoreboarding present in Fermi. Of course, there have been drastic changes since then, but things like the power efficiency focus or memory compression are details. Important details, yes, but they do not make up an architecture - in my personal book.
     
    Putas and entity279 like this.
  10. VitaminB6

    Regular Newcomer

    Joined:
    Mar 22, 2017
    Messages:
    269
    Likes Received:
    373
    So we're to believe that Sony spent millions in R&D to help develop an architecture that will also be used in the XBSX? I think people need to give a little more credit to the engineers at AMD and a lot less to Sony. I know in the XB1 MS created their own proprietary audio block in house, but outside of that everything used in both consoles was selected from existing tech already created by AMD. Sure, both MS and Sony chose slightly different configurations of that existing tech, but that's about it.
     
  11. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,032
    Likes Received:
    15,780
    Location:
    The North
    There have to be easier ways to poke holes in this theory than relying on data points we would never be able to obtain.
     
  12. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,576
    Likes Received:
    16,033
    Location:
    Under my bridge
    It was a customisation of AMD's TrueAudio - Tensilica DSPs with a few additions. TrueAudio as an architecture supports any number of Tensilica DSPs, going by AMD's docs.
     
    VitaminB6 likes this.
  13. VitaminB6

    Regular Newcomer

    Joined:
    Mar 22, 2017
    Messages:
    269
    Likes Received:
    373
    Well, there you go. So even that was mostly based on existing AMD tech.
     
  14. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,139
    Likes Received:
    1,291
    Yes. I noticed big differences between the NV GPUs I worked with (Fermi, Kepler, Pascal), e.g. differences in the performance of memory access patterns or atomic operations. Very noticeable, although I did not even use profiling tools back then.
    GCN in contrast always behaved the same. Perf was just a matter of frequency and CU count, not much else (7950, 280X, Fiji). I haven't yet looked closely at the Vega that I have now - it seems surprisingly fast somehow.
    I guess it's similar for people working more on rendering than on compute, but they will see different pros and cons.

    I need to add that every GCN worked so well for me, there seemed no need to improve anything at all. NV had bad performance in comparison, also depending a lot on the API... only Pascal finally was OK, but still way behind in perf per dollar.
    For me, all that ranting that AMD would be outdated and behind never made any sense... until very recently at least :)
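
    To make that concrete, here is a minimal CUDA sketch (hypothetical, not code from any of the projects mentioned) of the kind of pattern where NV generations diverged: the same 256-bin histogram built once with global atomics and once with per-block shared-memory atomics. Kepler sped up global atomics considerably over Fermi, and Maxwell added native shared-memory atomics, so the relative cost of the two variants shifts noticeably across those chips.

    Code:
    // Hypothetical sketch: two ways to build a 256-bin histogram.
    // Which one wins, and by how much, shifted across Fermi/Kepler/Maxwell/Pascal.
    __global__ void histGlobalAtomics(const unsigned char* data, int n, unsigned int* bins)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(&bins[data[i]], 1u);           // every thread hits global memory
    }

    __global__ void histSharedAtomics(const unsigned char* data, int n, unsigned int* bins)
    {
        __shared__ unsigned int local[256];          // per-block (LDS) copy of the bins
        for (int b = threadIdx.x; b < 256; b += blockDim.x)
            local[b] = 0;
        __syncthreads();

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(&local[data[i]], 1u);          // contention stays on-chip
        __syncthreads();

        for (int b = threadIdx.x; b < 256; b += blockDim.x)
            atomicAdd(&bins[b], local[b]);           // one global atomic per bin per block
    }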
     
  15. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,510
    Likes Received:
    4,128
    Nope, NVIDIA stuck with the name CUDA cores because it corresponds to cores that run their CUDA language; the underlying arch is different across generations. The memory hierarchy is often different, the arrangement of CUDA cores is widely different and thus scheduling becomes different, etc.

    The jump in performance is also huge at the same core count; for example, the 780 Ti (Kepler) has roughly the same number of cores as the 980 Ti (Maxwell), with comparable clocks, yet the 980 Ti is leaps and bounds faster.

    And no, AMD only added iterative changes to GCN; the arch was mostly the same. Hardware-savvy members here can back me up on this. @3dilettante
     
  16. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,139
    Likes Received:
    1,291
    Not sure, but I think on NV things like available registers and LDS memory have also varied across generations.
    This is quite remarkable because those numbers often affect which algorithm you choose, or at least the implementation details.
    As a programmer, I somehow hope things settle and changes become smaller with newer generations. Compared with the x86 situation, it's harder to keep GPU code up to date, and code has a shorter lifetime.
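
    As an illustration (a minimal sketch; the tile-size policy at the end is made up for the example), the CUDA runtime exposes those per-generation budgets, so an implementation can pick a variant at runtime instead of hard-coding one generation's limits:

    Code:
    // Hypothetical sketch: query the per-device limits that vary across generations
    // (shared memory, i.e. "LDS" in AMD terms, and the register budget) and pick a variant.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, 0);

        printf("compute capability   : %d.%d\n", p.major, p.minor);
        printf("shared mem per block : %zu bytes\n", p.sharedMemPerBlock);
        printf("shared mem per SM    : %zu bytes\n", p.sharedMemPerMultiprocessor);
        printf("registers per block  : %d\n", p.regsPerBlock);

        // Example policy (made up): only pick a big on-chip tile if the chip has room for it.
        bool useLargeTile = p.sharedMemPerMultiprocessor >= 96 * 1024;
        printf("large-tile variant   : %s\n", useLargeTile ? "yes" : "no");
        return 0;
    }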
     
  17. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,362
    Likes Received:
    3,101
    Location:
    Germany
    Even within one chip: Big Kepler, in its GK210 edition, had twice the register space of GK110.

    Nvidia, much more than AMD, tried to hide all this behind different compute capabilities, for which their compilers tried to optimize automagically.
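
    As a small, hypothetical illustration of what that hiding looks like from the programmer's side: nvcc can embed code for several compute capabilities in one binary (e.g. -gencode arch=compute_35,code=sm_35 for GK110 and -gencode arch=compute_37,code=sm_37 for GK210), and __CUDA_ARCH__ lets device code branch per target at compile time, so a chip-specific detail like GK210's larger register file can be exploited without touching the path everyone else runs:

    Code:
    // Hypothetical sketch: per-compute-capability code paths in one source file.
    // Both branches compute the same result; only the register pressure differs.
    __global__ void scale36(float* out, const float* in, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
    #if __CUDA_ARCH__ >= 370
        // sm_37 (GK210) has twice the register file of sm_35 (GK110),
        // so a more register-hungry unrolled variant could live here.
        float acc = 0.0f;
        #pragma unroll
        for (int k = 1; k <= 8; ++k)
            acc += in[i] * (float)k;             // 1+2+...+8 = 36
        out[i] = acc;
    #else
        out[i] = in[i] * 36.0f;                  // simpler path for smaller targets
    #endif
    }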
     
    JoeJ likes this.
  18. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,072
    Likes Received:
    7,034
    Which... they kind of haven't been?

    Other than Navi, which is using a more advanced process, when was the last time AMD was able to compete on performance-per-watt and performance-per-die-area?
     
  19. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,139
    Likes Received:
    1,291
    "Before that AMD had a lot more liberty to change architectures, they went from VLIW5 to VLIW4 to GCN in the span of just 4 years (from 2007 to 2011), but then they stuck with GCN from 2012 to 2019, which is a frigging 8 years period!

    It seems that at the very least, AMD stuck themselves into a lockstep rhythm with the console release cycle, only changing architecture when the console cycle is about to end, and seems RDNA is heading toward the same fate too."


    But it just works? :D
     
    Lightman and Globalisateur like this.
  20. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,576
    Likes Received:
    16,033
    Location:
    Under my bridge
    So why do it? Sticking with an architecture that means losing billions in sales in the PC space, for the sake of some semi-custom market that's not as big and could also use another architecture anyway, makes little sense. It makes more sense that AMD invested in a long-term vision for all markets, a scalable architecture that'd optimise their R&D investment, specifically for compute, but guessed wrong, and have been working on a new replacement long-term architecture. Semi-custom clients have all been using GCN because that's the arch AMD had available, rather than AMD only having that arch available because that's what the semi-custom people wanted.
     
    PSman1700 likes this.