Forbes: AMD Created Navi for Sony PS5, Vega Suffered [2018-06] *spawn*

Wonder what a further-developed EE/GS future would have looked like. Some said it wasn't forward-thinking.
Wouldn't that have been the original plan for the PS3? It was alleged that RSX was chosen over including a bespoke graphics companion to the host Cell processor.
Cell and its SPEs had some philosophical similarities, with a number of DSP-like side-processors, a scratchpad-enhanced memory hierarchy, and an aggressive on-die network.

The PS2's graphics subsystem stretched across two chips and had, among other things, a 128-bit on-die bus, a direct 64-bit link to the GS, and a more complex memory-mapping scheme, with the CPU and other units able to map memory in a heterogeneous way.
If Cell's SPEs were predominantly the VPU portion of an evolved EE, then whatever would have been the other PS3 chip could have been the next version of the GS.
I've only seen rumors about what the PS3 might have wanted to be, and I didn't start researching architectures significantly until the generation after the PS2--which limits my familiarity. The era of the Xbox 360, PS3, multicore x86, R600, G80, and POWER chips was a high-water mark in terms of ready accessibility and evangelism for hardware architectures.

As for whether the T&L path taken by PC architectures was a mistake versus the PS2's VPU arrangement, for whatever reason the latter was supplanted by the former.
I can see the conceptual appeal of the more flexible programmable path, although looking at the system diagram of the PS2 there are elements that would have likely been counterproductive in the realm of PC-industry graphics ASICs on PCI/AGP expansion buses.
The VPUs themselves plug directly into the main memory bus and, I believe, have some level of interaction with an MMU and its page-table contexts. For the more standardized PC industry, having that bespoke hardware on-die at the time would have been a general non-starter. The CPUs had more need of the precious die space, and the system architectures were not amenable to the more complicated view of memory that the EE had. Even now, generations hence, I'm not sure the level of interaction GPUs have in the x86 virtual memory system is as free, with the IOMMU enforcing some hefty isolation.
The bus between the EE and GS offered bandwidth that would take a number of generations of expansion-bus protocols to match, and the relationship with slave devices over PCI demanded a stronger separation between host and graphics domains that has not been fully erased even with current APUs.
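For a rough sense of scale, here's a back-of-envelope comparison (a sketch using commonly quoted peak figures from memory, so treat the numbers as approximate rather than authoritative):

```python
# Rough peak-bandwidth comparison: the dedicated EE->GS link vs. PC expansion buses.
# Figures are commonly quoted peaks from memory; treat them as approximate.

def parallel_bus_gb_s(width_bits, clock_mhz, transfers_per_clock=1):
    """Peak bandwidth of a simple parallel bus, in GB/s."""
    return width_bits / 8 * clock_mhz * transfers_per_clock / 1000

ee_to_gs  = parallel_bus_gb_s(64, 150)    # ~1.2 GB/s over a short on-board link
agp_2x    = 0.53                          # GB/s
agp_4x    = 1.07
agp_8x    = 2.13                          # ~2002 before the raw number is clearly passed
pcie1_x16 = 16 * 0.25                     # ~4.0 GB/s per direction, ~2004

for name, bw in [("EE->GS", ee_to_gs), ("AGP 2x", agp_2x), ("AGP 4x", agp_4x),
                 ("AGP 8x", agp_8x), ("PCIe 1.0 x16", pcie1_x16)]:
    print(f"{name:<12} ~{bw:.2f} GB/s")
```

Raw peaks aside, the latency and protocol overhead of going through a slave device on an expansion bus were arguably the bigger part of the separation.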

In that context, a more fixed-function front end could have made more sense. Interaction with a graphics slave chip would have been more costly in terms of latency and driver involvement, and bandwidth would lag given the needs of expansion slots and long board traces. Doing more with packaged geometry, processing it asynchronously on the other side of the divide, and obfuscating more of it out of a need to stay compatible with the existing ecosystem likely solved problems that the PS2's architecture did not help with.
By conforming to the needs of the multi-billion-dollar market still addressed by GPUs with that same architectural legacy, that approach was able to sustain multiple generations in a way the PS2's implementation did not.
Other elements, like eDRAM and the speed of inter-chip PCB connections, seemed to hit practical limits as far as how they could be scaled, or who was left to manufacture them. The more complex processor and memory hierarchy proved to have performance and developer problems that more directly threatened the architecture than the difficulties that arose with optional use of the later shader types, and in the end there was no vendor left to push it.
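As a purely conceptual sketch of the "packaged geometry processed asynchronously on the other side of the divide" idea mentioned above (hypothetical names, nothing platform-specific):

```python
from collections import deque

# Hypothetical sketch: the host packs each draw into a self-contained command
# packet; the graphics side drains the queue asynchronously, so the two sides
# only ever meet at the queue boundary instead of sharing a view of memory.
command_queue = deque()

def submit_draw(vertices, state):
    """Host side: copy in everything the draw needs, then move on."""
    command_queue.append({"verts": list(vertices), "state": dict(state)})

def pump_graphics(rasterize):
    """Device side: consume packets whenever it gets around to them."""
    while command_queue:
        cmd = command_queue.popleft()
        rasterize(cmd["verts"], cmd["state"])

# Example: the host never waits on the consumer.
submit_draw([(0, 0), (1, 0), (0, 1)], {"texture": None})
pump_graphics(lambda verts, state: print("drawing", len(verts), "verts"))
```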

While this approach seems to have had appeal for geometry, that piece alone was swamped by the silicon gains on the pixel-processor side, and by the rejection of the complexity and scaling costs specific to the Sony+Toshiba paradigm versus what the rest of the industry wanted.
From what rumors state, the supposedly mistaken architecture will be what the next generation uses. Mesh and task shaders also do not remove the hardware paths so much as they expose them and allow software to leverage those on-die paths more effectively.
 
@3dilettante

Wow, ok, thanks for the detailed description. The PS2 was my favorite console (and still is); its hardware, while not the fastest or best of the 6th gen, was very interesting to say the least, and interesting results were shown.


Thought that was very impressive for hardware from early 2000. Of course the system was pushed more than any system out there; even today, it is still an impressive feat.
Imagine if Transformers was a 30fps title!

On another note, do you think the OG Xbox had the best-performing hardware (in general) of the three 6th-gen consoles?
 
I was less aware of the original Xbox at the time than the PS2. I had a PS2, but my memory is fuzzy about that far back.
I don't recall seeing attempts at rigorous comparisons between platforms, and I think at the time the general trend was that the original Xbox could usually be counted on to give more stable performance on cross-platform titles.

I'm running from fuzzy memory here, and from the wikis for the tech specs of both.
The PS2's hardware, if well utilized by devs with the time/skill to massage it, could be pushed very far. Its peaks could be high, but there were corners in the architecture, and step functions based on the features used, that could bring it down to more modest levels pretty quickly.
The Xbox's hardware had lower peaks in a number of spots, but it seemed to have more generous resources for handling non-ideal coding. It had some bottlenecks relative to the PS2, like the split between CPU and GPU vertex capability that the PS2's EE did not have, but on the other hand those bottlenecks were familiar to many devs and the tools to deal with them were more robust.

In terms of the main CPU's general performance, without counting the VPUs or assuming they were heavily used for a graphics load, the Xbox's Pentium III appears to be substantially more capable, and this may have explained some of the performance inconsistencies with the PS2.
The VPUs would have done much better in terms of vector capability, and they contributed to some high peak vertex rates. The more complex arrangement of units and the reliance on optimal software tended to lead to significant underutilization.
VU0, for example, made up somewhat less than half of the peak FP performance of the vector units but in practice saw most of that peak unused. (Unfortunately, as with many attempts to get details that far back, the source link for the following thread is dead: https://forum.beyond3d.com/threads/ps2-performance-analyzer-statistics-from-sony.7901/)
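To put rough numbers on that "somewhat less than half" (commonly quoted peak figures from memory; the derivation assumes a 4-wide multiply-add per cycle, so treat it as a sketch):

```python
# Commonly quoted EE vector-unit peaks, in GFLOPS (from memory, approximate).
vu0_peak = 2.44
vu1_peak = 3.08

# Sanity check on VU0's figure: assuming a 4-wide FMAC doing a multiply-add
# (2 flops per lane) every cycle at the EE's ~294.9 MHz clock.
derived_vu0 = 4 * 2 * 0.2949             # ~2.36 GFLOPS, in the same ballpark

share = vu0_peak / (vu0_peak + vu1_peak)
print(f"VU0 share of combined VU peak: {share:.0%}")   # ~44%, i.e. somewhat under half
```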


The Xbox's GPU would have to take on much of the fight with the VPUs and the GS, which meant lower peak geometry and pixel rates. Complexity in leveraging the PS2's more involved eDRAM arrangement aside, there were some significant steps down in throughput based on how many features were employed at once. The pixel engine lost half its throughput if texturing was enabled, for example, and other features dropped the rates further as they were enabled. Geometry and pixel fillrate could be very high for the PS2 for simple output, although the single-texturing rate looks a bit modest compared to the other high peaks.
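As a quick illustration of that step function (clock and pipe count from memory, so approximate):

```python
# GS fill-rate step function: 16 pixel pipes at ~147.456 MHz (figures from memory).
gs_clock_hz = 147.456e6
pixel_pipes = 16

untextured_fill = pixel_pipes * gs_clock_hz       # ~2.36 Gpixels/s flat/untextured
textured_fill   = untextured_fill / 2             # ~1.18 Gpixels/s once texturing is on

print(f"untextured: {untextured_fill/1e9:.2f} Gpix/s, textured: {textured_fill/1e9:.2f} Gpix/s")
```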

The NV2A didn't have the same raw numbers, but it seems that it could sustain more performance with more pixel effects applied. The PS2's fast eDRAM was also more limited in size, and that could lead to reducing pixel/texture detail to avoid additional passes for tiling purposes.
I'm even hazier on this, but in terms of the big gap between the CPU and GPU in the PC architecture I mentioned earlier: I thought this was bolstered by discussion a while ago about how the PS2 could more readily create its desired output by burning more of its peak geometry and pixel throughput through multiple low-overhead submissions of the same geometry, versus the PC method of reducing the number of passes while cramming in more effects of higher complexity per pass.
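On the capacity point, a quick worked example of how tight 4 MB of eDRAM gets (one plausible buffer layout for illustration, not any particular game's):

```python
# One plausible way a PS2 frame could carve up the GS's 4 MB of eDRAM.
EDRAM_BYTES = 4 * 1024 * 1024

w, h, bytes_per_pixel = 640, 448, 4             # a common NTSC frame at 32-bit color
one_buffer   = w * h * bytes_per_pixel          # ~1.09 MB
front_back_z = 3 * one_buffer                   # front + back + 32-bit Z: ~3.28 MB

left_for_textures = EDRAM_BYTES - front_back_z  # ~0.72 MB
print(f"left for textures: {left_for_textures / 2**20:.2f} MB")
# Hence 16-bit buffers, aggressive texture streaming, or reduced detail.
```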

https://forum.beyond3d.com/threads/ps2-vs-ps3-vs-ps4-fillrate.55200/
https://forum.beyond3d.com/threads/questions-about-ps2.57768/

As time progressed, some of the assumptions baked into the PS2 became less tenable. eDRAM figured highly in the GS architecture, but the manufacturing base and physical scaling of eDRAM processes did not keep pace. The 360 used it, but it seems by that point the needs of the GPU silicon proper did not allow for it to be on-die. Nintendo kept using eDRAM, although it was increasingly limited in terms of what it could achieve and in terms of manufacturing (the last aging node for eDRAM at the last fab offering it). The aggressive connectivity allowed by the on-die arrangement helped give the PS2 its high fillrate ceiling, but also bound future scaling to how well connectivity and capacity could scale.
The PS2's image quality did suffer from the capacity limitations, and the 360's eDRAM capacity constraints could be felt as well. The Xbox One's ESRAM wasn't eDRAM, but its capacity limits were noticed as well.
The overall trend seems to be that demand for capacity outstripped what could be delivered on-die. Relying on the high bandwidth from multiple on-die connections also rode a weaker scaling curve, as interconnection proved more challenging to increase versus transistors.
The more complex programming model, high-peak finicky hardware, and many pools of different memory also became less tolerated as time went on.
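A quick worked example of capacity outstripping on-die memory, using the 360's 10 MB of eDRAM against a 720p target (straightforward arithmetic; MSAA storage treated naively):

```python
# Why the 360's 10 MB of eDRAM forced tiling at 720p (naive arithmetic).
EDRAM_360 = 10 * 1024 * 1024

w, h = 1280, 720
color_plus_z = w * h * (4 + 4)          # 32-bit color + 32-bit Z: ~7.0 MB -> fits
with_4x_msaa = color_plus_z * 4         # ~28.1 MB -> has to be split into tiles

print(f"no AA: {color_plus_z / 2**20:.1f} MB, 4x MSAA: {with_4x_msaa / 2**20:.1f} MB")
```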

(edit: fixed link)
 
@3dilettante

Thanks for the, as usual from you, detailed explanations of the architectures; it's in line with what I've heard before here on B3D, I think.
I understand from you too that, in general, the OG Xbox was quite a bit more capable, if pushed as much as the PS2, of course?

Was testing this title today, Unreal Championship 2; I feel it pushed the Xbox quite far.

 
I had less exposure to the Xbox, although there would be games that had an above-average optimization for the platform.
A game that pushed one console very hard would be different from a game targeting the other, enough so that making a direct comparison might be debatable.
The Xbox did have a fair number of extra features and storage that provided a value-add besides raw performance, and there's the benefit of hardware that made getting decent or consistent performance easier to achieve.
The market started to shift towards favoring pixel quality, which the Xbox favored.
I think there were PS2 games that were standouts in terms of what they delivered, and the Xbox had its own standouts and was able to provide a higher average baseline even for titles that didn't push it.
 
And it's the complete opposite of what the Forbes article actually says:



Created for, not created by.
And if Sony were designing Navi, then AMD wouldn't have needed to send 2/3rds of their engineers to work on it.
From https://www.anandtech.com/show/14579/all-ryzen-qa-with-amd-ceo-dr-lisa-su

David Wang, AMD: We started RDNA before the Sony engagement. I think RDNA is revolutionary, and it is very flexible in terms of being able to be customized for different types of workloads.

NAVI was designed before the Sony engagement,
 
This just crushes my soul. I was told that all of AMD's RTG was hired years ago by Sony to work exclusively on the PS5 GPU (with exclusive Sony secret sausage). Why does the internet lie to me! :no::cry: /S
 

RDNA is the umbrella term for an architecture, like GCN. Navi is a set of products based on RDNA, like Vega is a set of products based on GCN. There was more GCN prior to Vega, and there will be more RDNA post Navi.

The original Forbes article claims the bulk of RTG's resources were redirected to the semicustom division to focus on Sony's requests, hence "Navi (the subset of RDNA) was created for Sony". I don't interpret that as Sony calling AMD to say, "Hey, I want a GPU like this and you'll call it Navi." Rather, the bulk of Navi's final design was done while working for Sony and not for the discrete-graphics PC team.

2/3rds of resources going to work on semi-custom would explain Vega's less-than-expected performance, the enormous delay of Navi, Raja feeling helpless and leaving for Intel, etc.


David Wang's statement simply implies AMD started working on the foundation of GCN's successor before diverting RTG resources to the semicustom group, but that wasn't even the question he was answering.




I honestly doubt this much drama would have been created if Forbes' article had said "Navi was created for Microsoft".
The butthurt in this thread - starting with its sarcastic title - is so blatant it stinks.
Reviewing the first posts in this thread, a lot of it boils down to "Sony never did anything right PS2 bad Cell terrible Microsupersoft Hololens Roxxorzz!!!11 plus sarcasm it makes me look righter".


EDIT: Bah I don't care.
 
2/3rds of resources going to work on semi-custom would explain Vega's less-than-expected performance, the enormous delay of Navi, Raja feeling helpless and leaving for Intel, etc.
There are a lot of ways this could play out. This is just one scenario.

It could easily have been that Sony wanted their Navi ready for 2019 and requested a whack of engineers to help wrap it up, and Lisa agreed that the semi-custom business was more important than their own GPU one and thus approved the resource shift.

Lots of scenarios could play out for Raja's statement to be true. With RDNA already going before Sony reached out to them, "created for Sony and sold to MS" seems like the least probable stretch I can think of.

It's not that people are pro-MS in this thread; people are pro-Nvidia and pro-AMD. And MS has had a long history of working with them in creating consensus in the industry and moving graphics forward via DirectX. Microsoft does build a roadmap with the industry; they are quite involved in understanding where the future of graphics will be. These are natural positions one would take in this debate, especially with the console industry now predominantly based on PC-derived designs.

It's entirely possible there is a specific branch of Navi that was designed for Sony; let's not get crazy and call the whole trunk of Navi designed for Sony.
 
Except that semi-custom folks don't design architectures; they get ready-made IP blocks, which are used to build the chip for a customer's specific needs, with possible 3rd-party blocks included.
Surely there's some overlap, but the general concept still goes that way: you don't put 2/3 of RTG on semi-custom, ever. You might put 2/3 of RTG to work on RDNA instead of the last few pieces of the GCN puzzle, but that's not even remotely the same thing.
 
Not only that, but Forbes was the ONLY news outlet that made that claim; all the others basically linked to the Forbes article. The fact that no other reputable news outlet corroborated what was claimed in that article means it is likely factually inaccurate.

The most likely explanation isn't that AMD dedicated 2/3 of RTG resources to Sony's request, but that they dedicated 2/3 of RTG resources to the GPU design that would be easier to use for semi-custom designs, those designs being important for key partners such as Sony, MS, and anyone else.

It may be that whoever Forbes talked to only mentioned "Sony and other partners" in relation to key partners that would make use of RTG custom solutions, and the author then erroneously assumed Sony was the recipient of all those resources.

If Sony were to contract 2/3 of RTG for their purposes, that would cost a rather princely sum of money, something that would be glaringly obvious in Sony's financial reports. Yet there is nothing there that would account for anything like that, nor anything in any investor reports that leads one to think Sony invested that amount of resources to lock up that much of another company.

Regards,
SB
 
I don't know the makeup of RTG in terms of what fraction of its engineering goes into architecture, physical design, software, validation, etc., but the broadest reading of 2/3 of engineering going to Sony only may raise questions as to why we've not seen much resulting from it. The rumor was that this robbed Vega of resources and hobbled its implementation, although if true that would likely be robbing it of later stages of development, going by rumors of its prolonged gestation. Navi, or more specific to the rumor--a Sony-only Navi--wouldn't need many of those resources unless it was nearing the same stages. That would be a significantly long halt on the Sony product's development if it was siphoning resources from Vega in 2016-2017, even if there were a 2019 PS5 launch versus the presumed 2020 one.

Perhaps there were specific teams, or elements of some of them, that might have been devoted to semi-custom work, but why a GPU in late-stage development would be throttled by a design that in theory had not gotten to the same resource-intensive phases would need more clarification. That aside, the claim prior to the 2/3-engineering one was that Navi was made for Sony's PS5--which seems like an interpretation not well supported by AMD saying they were working on it prior to Sony's sign-on, by Navi chips now launched or hinted at in driver code, or by the Microsoft console announced to be using Navi.
 
Now I think it was something incorrectly assumed:

2/3 of AMD's R&D are working on Navi (RDNA) + the PS5 is going to use Navi = 2/3 of AMD's R&D are working on Sony's PS5.

I think the Forbes author did not do his job correctly. He shouldn't have just assumed; he should have double-checked his story.
 
Since this thread is alive, I bring a small stocking stuffer.

Once again, some whispers about some weird tools carelessness on MS's side. Metacore doesn't like this; Metacore would like to think of a console as a dedicated platform, exposing every hardware nut and bolt to brave developers ASAP and in the best, uncompromised way, because we can by the nature of things, unlike on generic platforms with many configs/things to consider.
 

I heard that since Sony bought SN Systems, the tools are very good.
 
This is in no way comparable to the early Xbox One SDK clusterfuck situation, where some hardware features were not supported at all at launch and the SDK shipped a dodgy version of DirectX 11.2 (11.X).

VS is tightly integrated into the Xbox SDK, which means it takes more time to add support for newer versions, unlike Sony's, where it's simply a question of making their SDK compatible with it (similarly to a plug-in). At the end of the day this is a nothing burger.
 

Yeah, I agree this particular thing is a nothingburger in the big picture, but it's a little symptomatic. Earlier I ranted about how, IMO, the Xbox is getting too much like a generic closed Windows PC instead of a dedicated platform (a console), and that was an allusion to it. A rhetorical question ;) in this case would be: if the Xbox is still such an MS "role model", shouldn't it be the other way around? All new features of VS2019 for devs integrated and presented here first, and then trickling down elsewhere? Instead, lately we are only getting the exclusive Gold requirement, but that's beyond the scope of even this lighthearted thread.
 
Nah, let all the other devs do the initial round of beta-testing and bug discovery before you release VS 2019 SP1 as the Xbox SDK without all those initial issues. :LOL:
 