Forbes: AMD Created Navi for Sony PS5, Vega Suffered [2018-06] *spawn*

The problem is: it kinda fits the timeline. AMD developed and then released GCN two years before the XO and PS4 release, then stuck with it for the whole PS4/XO life cycle, refusing to dramatically change it or break away from the formula, until almost two years before the PS5/XSX release, when they developed RDNA, which -coincidentally- is also the basis of PS5/XSX.

Before that AMD had a lot more liberty to change architectures: they went from VLIW5 to VLIW4 to GCN in the span of just 4 years (from 2007 to 2011), but then they stuck with GCN from 2012 to 2019, which is a frigging 8-year period!

It seems that at the very least, AMD stuck themselves into a lockstep rhythm with the console release cycle, only changing architecture when the console cycle is about to end, and it seems RDNA is heading toward the same fate too.
So deals of this nature need to be worked out way in advance, then. What happens if they lose out on a major client? Do they just not release a major architecture at all anymore?
 
So deals of this nature need to be worked out way in advance, then. What happens if they lose out on a major client? Do they just not release a major architecture at all anymore?
They will still obviously do it, at least for the other major client and PC; remember they have 3 clients now (PlayStation, Xbox, PC).

I guess something changed when AMD found themselves with basically an uncontested monopoly on the console market: they had to adapt their strategy to fit all the pieces together, consoles and PC. For example, it's clear the upgraded consoles further solidified the decision to extend the life of GCN; the PS4 Pro/One X needed backward compatibility with the base PS4/XO, so GCN needed to exist a lot longer than usual to accommodate the new consoles and the new unified console/PC strategy.
 
it's clear the upgraded consoles further solidified the decision to extend the life of GCN; the PS4 Pro/One X needed backward compatibility with the base PS4/XO
And in turn Navi is somewhat an extension of what GCN was. Interesting.

So if we assume this all to be true, then what kind of evidence is there inside Navi that would support the idea that Navi was created for Sony and not, in particular, for Microsoft? I suppose MS could theoretically use any GPU setup, an entirely different architecture if they wanted to (with some pain of course for supporting BC, but doable as we see today); whereas Sony is a little more stuck on using something that is closer to GCN?
 
Before that AMD had a lot more liberty to change architectures: they went from VLIW5 to VLIW4 to GCN in the span of just 4 years (from 2007 to 2011), but then they stuck with GCN from 2012 to 2019, which is a frigging 8-year period!

It seems that at the very least, AMD stuck themselves into a lockstep rhythm with the console release cycle, only changing architecture when the console cycle is about to end, and it seems RDNA is heading toward the same fate too.
Circumstantial at this point. It could very well be that AMD didn't want to keep changing architectures and came up with a long-term vision, founded on compute because that's where they thought the money was. And they've stuck with it this long because designing RDNA has taken too long, notably because they don't have the same income to fund R&D as they had when they were chopping and changing, trying to find the best solution for these new compute workloads.

How often does nVidia change their core shader architecture with a ground-up rewrite?
 
How often does nVidia change their core shader architecture with a ground-up rewrite?

NVIDIA changes architecture every two generations or so. Since DX10 we had unified shaders with Tesla (8800 Ultra/GTX 285), then Fermi (GTX 480/GTX 580) came in with a heavy focus on compute/tessellation, then Kepler (GTX 680/GTX 780Ti), which came with a big focus on power efficiency through a scheduling rework. Then NVIDIA reworked scheduling again in Maxwell and Pascal (GTX 980Ti/GTX 1080Ti), which both relied on massive increases in geometry output and memory compression to drive huge performance gains. Then came Volta and Turing, which introduced tons of new features again (AI acceleration, ray tracing, separate INT/FP32, mesh shaders, etc.).

In the span of 12 years (2007 to 2018) they changed archs at least 5 major times; AMD only made 2 major transitions in that period (VLIW5 to VLIW4, and VLIW4 to GCN).
 
I'm unconvinced. GCN has evolved but just hasn't got a name change. GCN stopped at GCN 5, which includes things like scheduling and tessellation changes. The core GCN architecture is the SIMD CUs and wavefronts, so we're counting two core architectures, VLIW and GCN. In that time, hasn't nVidia had effectively one arch, the CUDA core? So nVidia introduced CUDA with Tesla and have stuck with it, and AMD have used GCN. nVidia has named their different CUDA-based generations with different family names, whereas AMD has just named theirs GCN x.

Is there really a difference in behaviour? Both have a long-term architectural DNA as the basis for their GPUs, with refinements happening in scheduling and features across the evolution of that core DNA.
 
It's a moving target, as in "define what makes an architecture an architecture".

Me, I'd consider Kepler Nvidia's last deep architecture overhaul, since it did away with the HW scoreboarding of Fermi. Of course, there have been drastic changes since then, but things like a power-efficiency focus or memory compression are details. Important details, yes, but they do not make up an architecture - in my personal book.
 
So we're to believe that Sony spent millions in R&D to help develop an architecture that will also be used in the XBSX? I think people need to give a little more credit to the engineers at AMD and a lot less to Sony. I know in the XB1 MS created their own proprietary audio block in house, but outside of that, everything used in both consoles has been selected from existing tech already created by AMD. Sure, both MS and Sony chose slightly different configurations of that existing tech, but that's about it.
 
There have to be easier ways to poke holes in this theory without having to rely on data points we would never be able to obtain.
 
Is there really a difference in behaviour?
Yes. I noticed big differences between the NV GPUs I worked with (Fermi, Kepler, Pascal), e.g. differences in the performance of memory access patterns or atomic operations. Very noticeable, although I did not even use profiling tools back then (a rough sketch of the kind of test I mean is at the end of this post).
GCN, in contrast, always behaved the same: perf was just a matter of frequency and CU count, not much else (7950, 280X, Fiji). I have not yet looked closely at the Vega that I have now - it seems surprisingly fast somehow.
I guess it's similar for people working more on rendering than on compute, but they will see different pros and cons.

I need to add that every GCN card worked so well for me, there seemed to be no need to improve anything at all. NV had bad performance in comparison, also depending a lot on the API... only Pascal was finally OK, but still way behind in perf per dollar.
For me, all that ranting that AMD would be outdated and behind never made any sense... until very recently at least :)
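
To make "differences in performance of memory access patterns or atomic operations" concrete, here is a minimal micro-benchmark sketch (illustrative only, with made-up kernels rather than my actual test code): it times a coalesced copy, a strided copy and a contended global atomic, and the interesting part is how the ratios between the three shift from one GPU generation to the next.

Code:
#include <cstdio>
#include <cuda_runtime.h>

__global__ void coalesced_copy(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];                          // neighbouring threads hit neighbouring addresses
}

__global__ void strided_copy(const float* in, float* out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        int j = (int)(((long long)i * stride) % n);     // deliberately scattered addresses, poor coalescing
        out[j] = in[j];
    }
}

__global__ void atomic_histogram(const int* keys, int* bins, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(&bins[keys[i] & 255], 1);      // heavily contended global atomics
}

template <typename Launch>
float time_ms(Launch launch)
{
    cudaEvent_t beg, end;
    cudaEventCreate(&beg);
    cudaEventCreate(&end);
    launch();                                           // warm-up
    cudaEventRecord(beg);
    for (int r = 0; r < 10; ++r) launch();
    cudaEventRecord(end);
    cudaEventSynchronize(end);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, beg, end);
    return ms / 10.0f;
}

int main()
{
    const int n = 1 << 24;
    float *in, *out;
    int *keys, *bins;
    cudaMalloc((void**)&in, n * sizeof(float));
    cudaMalloc((void**)&out, n * sizeof(float));
    cudaMalloc((void**)&keys, n * sizeof(int));
    cudaMalloc((void**)&bins, 256 * sizeof(int));
    cudaMemset(in, 0, n * sizeof(float));
    cudaMemset(keys, 0, n * sizeof(int));               // all keys 0 -> worst-case atomic contention
    cudaMemset(bins, 0, 256 * sizeof(int));

    dim3 block(256), grid((n + 255) / 256);
    printf("coalesced: %.3f ms\n", time_ms([&] { coalesced_copy<<<grid, block>>>(in, out, n); }));
    printf("strided  : %.3f ms\n", time_ms([&] { strided_copy<<<grid, block>>>(in, out, n, 1025); }));
    printf("atomics  : %.3f ms\n", time_ms([&] { atomic_histogram<<<grid, block>>>(keys, bins, n); }));
    return 0;
}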
 
In that time, hasn't nVidia had effectively one arch, the CUDA core? So nVidia introduced CUDA with Tesla and have stuck with it, and AMD have used GCN. nVidia has named their different CUDA-based generations with different family names, whereas AMD has just named theirs GCN x.
Nope, NVIDIA stuck with the name "CUDA cores" because it corresponds to cores that run their CUDA language, but the underlying arch is different across generations. The memory hierarchy is often different, the arrangement of CUDA cores is widely different and thus scheduling becomes different, etc. (see the quick illustration at the end of this post).

The jump in performance is also huge at the same core count; for example, the 780Ti (Kepler) has roughly the same number of cores as the 980Ti (Maxwell), with comparable clocks, yet the 980Ti is leaps and bounds faster.

And no, AMD only added iterative changes to GCN; the arch was mostly the same. Hardware-savvy members here can back me up on this. @3dilettante
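
For a quick illustration of how much the surroundings of those "CUDA cores" change from one generation to the next, the runtime itself reports the numbers that keep moving (a trivial sketch, nothing exotic assumed):

Code:
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);

    // These are exactly the figures that shift between Kepler, Maxwell, Pascal, Turing...
    // even though the marketing name "CUDA core" stays the same.
    printf("%s (compute capability %d.%d)\n", p.name, p.major, p.minor);
    printf("  SMs                  : %d\n", p.multiProcessorCount);
    printf("  warp size            : %d\n", p.warpSize);
    printf("  32-bit regs per SM   : %d\n", p.regsPerMultiprocessor);
    printf("  shared mem per block : %zu bytes\n", p.sharedMemPerBlock);
    printf("  shared mem per SM    : %zu bytes\n", p.sharedMemPerMultiprocessor);
    printf("  L2 cache             : %d bytes\n", p.l2CacheSize);
    return 0;
}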
 
Not sure, but I think on NV things like available registers and LDS memory have also varied across generations.
This is quite remarkable, because those numbers often affect which algorithm you choose, or at least the implementation details.
As a programmer, I somehow hope things settle and changes become smaller with newer generations. Comparing the GPU situation with x86, it's harder to keep up to date, and code has a shorter lifetime.
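
A sketch of the kind of implementation detail this affects (hypothetical kernel and tile-picking logic, purely illustrative): choose the dynamic shared-memory (LDS) tile size at runtime so that the occupancy calculator still reports at least two blocks per SM on whatever generation the code lands on.

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel that stages data through a dynamically sized shared-memory tile.
__global__ void stage_and_copy(const float* in, float* out, int n)
{
    extern __shared__ float tile[];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    if (i < n) out[i] = tile[threadIdx.x];
}

int main()
{
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);

    const int block = 256;
    // Start from the per-block shared-memory limit and halve the requested tile
    // until at least two blocks fit per SM on this particular GPU.
    size_t smem = p.sharedMemPerBlock;
    int blocksPerSM = 0;
    for (;;) {
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, stage_and_copy, block, smem);
        if (blocksPerSM >= 2 || smem <= block * sizeof(float)) break;
        smem /= 2;
    }
    printf("%s: requesting %zu bytes of shared memory per block (%d blocks/SM)\n",
           p.name, smem, blocksPerSM);

    // The chosen size would then feed the actual launch:
    //   stage_and_copy<<<grid, block, smem>>>(in, out, n);
    return 0;
}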
 
Not sure, but I think on NV things like available registers and LDS memory have also varied across generations.
Even inside one chip: Big Kepler had, in its GK210 edition, twice the register space of GK110.

Nvidia, much more than AMD, tried to hide all this behind different compute capabilities, for which their compilers tried to optimize automagically.
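
A small sketch of what hiding things behind compute capabilities looks like from the source side (hypothetical kernel, illustrative only): the same file is compiled for several targets and the preprocessor picks a per-generation variant, e.g. one for GK210 (sm_37, with the doubled register file/shared memory) versus plain GK110 (sm_35).

Code:
#include <cuda_runtime.h>

// Built for both Big Kepler variants, e.g.:
//   nvcc -gencode arch=compute_35,code=sm_35 \
//        -gencode arch=compute_37,code=sm_37 kernel.cu
// __CUDA_ARCH__ is only defined in device code and is fixed per target, so each
// generation gets its own compile-time variant and the driver loads the match.
__global__ void tuned_kernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 370
    // Variant that could afford more live registers / a larger shared-memory tile
    // on GK210-class parts (placeholder arithmetic, for illustration only).
    data[i] = data[i] * 2.0f + 1.0f;
#else
    // Conservative variant for GK110 and older parts.
    data[i] = data[i] * 2.0f;
#endif
}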
 
I find that hard to swallow. ;) It's a silly way to be competitive with nVidia.
Which... they kind of haven't been?

Other than Navi, which is using a more advanced process, when was the last time that AMD was able to compete on performance-per-watt and performance-per-die-area?
 
"Before that AMD had a lot more liberty to change architectures, they went from VLIW5 to VLIW4 to GCN in the span of just 4 years (from 2007 to 2011), but then they stuck with GCN from 2012 to 2019, which is a frigging 8 years period!

It seems that at the very least, AMD stuck themselves into a lockstep rhythm with the console release cycle, only changing architecture when the console cycle is about to end, and seems RDNA is heading toward the same fate too."


But it just works? :D
 
Which... they kind of haven't been?

Other than Navi, which is using a more advanced process, when was the last time that AMD was able to compete on performance-per-watt and performance-per-die-area?
So why do it? Sticking with an architecture that means losing billions in sales in the PC space, for the sake of a semi-custom market that's not as big and could also use another architecture anyway, makes little sense. It makes more sense that AMD invested in a long-term vision for all markets, a scalable architecture that would optimise their R&D investment, specifically for compute, but guessed wrong, and have been working on a new replacement long-term architecture. Semi-custom designs have all been using GCN because that's the arch AMD had available, rather than AMD only having that arch available because that's what the semi-custom clients wanted.
 