Forbes: AMD Created Navi for Sony PS5, Vega Suffered [2018-06] *spawn*

BRiT

Article that spawned reactions:

Which brings us to, in my eyes, the more interesting revelation. According to my sources, Navi isn't just inside the Sony PS5; it was created for Sony. The vast majority of AMD and Sony's Navi collaboration took place while Raja Koduri -- Radeon Technologies Group boss and chief architect -- was at AMD.

But the collaboration came at the expense of Radeon RX Vega and other in-development projects. Allegedly, Koduri saw up to a massive 2/3 of his engineering team devoted exclusively to Navi against his wishes, which resulted in a final RX Vega product Koduri was displeased with as resources and engineering hours were much lower than anticipated.

https://www.forbes.com/sites/jasone...nys-playstation-5-vega-suffered/#148088ce24fd
 
If Navi is built for gaming, then perhaps its focus is on FP32 and actual geometry crunching, not a hand-me-down from the business sector. I am sure AMD's work with Sony gave them plenty of "re-taping" ideas to play with (better game compute ratios).
 
If Navi is built for gaming, then perhaps its focus is on FP32 and actual geometry crunching, not a hand-me-down from the business sector. I am sure AMD's work with Sony gave them plenty of "re-taping" ideas to play with (better game compute ratios).
Seriously wtf is up with all this "with Sony" nonsense? One damn rumor and people are still going around talking like Sony is designing the damn thing.
Sony is no different from Microsoft or any other semi-custom partner. Sure, they share ideas which may or may not affect an architecture in its development phase, but they really deal with the semi-custom department, and the semi-custom department gets its hands on the architectures when they're ready to be implemented in those semi-custom designs.
 
One beneficial aspect, for AMD, of working with Sony on consoles is, I guess, having more data on what devs want, how they want to do things, what the bottlenecks of their chips are, etc. Don't get me wrong, AMD already has that internally and on PC, but being in a console puts that on another level.
And if Sony and MS "pick" or "ask" for Y and Z instead of X and W, in a sense, it can help AMD focus on the "useful" stuff for gaming...
 
Seriously wtf is up with all this "with Sony" nonsense? One damn rumor and people are still going around talking like Sony is designing the damn thing.
Sony is no different from Microsoft or any other semi-custom partner. Sure, they share ideas which may or may not affect an architecture in its development phase, but they really deal with the semi-custom department, and the semi-custom department gets its hands on the architectures when they're ready to be implemented in those semi-custom designs.

Because nothing makes you an authority on contemporary GPU design quite like having the worst graphics subsystem among comparable consoles 20+ years ago, then following up 15 years ago by failing at GPU design so hard that you had to rush out and buy a practically off-the-shelf part for lack of time to even commission a true semi-custom one, and then not bothering to even try ever since.
 
Seriously wtf is up with all this "with Sony" nonsense? One damn rumor and people are still going around talking like Sony is designing the damn thing.
Sony is no different from Microsoft or any other semi-custom partner. Sure, they share ideas which may or may not affect an architecture in its development phase, but they really deal with the semi-custom department, and the semi-custom department gets its hands on the architectures when they're ready to be implemented in those semi-custom designs.

Hum... I'd say Sony is very different from Microsoft.
With the PS2 and PSP they developed the whole thing in-house, using MIPS IP cores and their own GPUs/Vector-FPUs. For the PS3 they co-developed Cell with Toshiba and IBM, which was supposed to power the whole thing.

Apart from that, Microsoft never developed processing hardware in-house, and AFAIK all Nintendo graphics solutions have been developed by Silicon Graphics / then-ArtX / then-ATi / then-AMD (and now Nvidia). I think the only in-house graphics solution from Nintendo is the one in the DS, but that's a very simple GPU.
Unless those guys at Sony have different jobs now, the resident talent for graphics processing architectures at Sony exists, while we can't really say the same about Nintendo or Microsoft.


Plus the "rumor" of Sony working on Navi with AMD came from a Forbes contributor with, AFAIR, a very decent track record.
 
Microsoft never developed processing hardware in-house
I beg to differ.

Microsoft Talisman was a project combining a tile-based rendering API with a reference hardware specification; vendors like Samsung/3DO, Cirrus Logic, Trident and Philips/NXP (TriMedia) already had working VLIW DSP implementations by 1997, but then the 3Dfx Voodoo Graphics PCI arrived with a mini-GL driver for GLQuake, and the whole Talisman effort was dead on arrival.

Then in 1998 they partnered with SGI on the Fahrenheit Scene Graph (OpenGL++) and Fahrenheit Low Level specifications, with the latter poised to succeed Direct3D 5; but then the Nvidia GeForce 256 arrived with hardware transform and lighting in August 1999, and the FLL effort had to be abandoned in favor of a refactored Direct3D 7.

So that probably taught Microsoft a lesson or two - and Sony arguably had to endure similar lessons with PlayStations 2 and 3.
 
I beg to differ.

Microsoft Talisman was a project combining a tile-based rendering API with a reference hardware specification; vendors like Samsung/3DO, Cirrus Logic, Trident and Philips/NXP (TriMedia) already had working VLIW DSP implementations by 1997, but then the 3Dfx Voodoo Graphics PCI arrived with a mini-GL driver for GLQuake, and the whole Talisman effort was dead on arrival.

How could miniGL kill Talisman?
 
Hum... I'd say Sony is very different from Microsoft.
With the PS2 and PSP they developed the whole thing in-house, using MIPS IP cores and their own GPUs/Vector-FPUs. For the PS3 they co-developed Cell with Toshiba and IBM, which was supposed to power the whole thing.
And we all know how well that turned out; see Geeforcer's post above.
Apart from that, Microsoft never developed processing hardware in-house,
Except that Microsoft has developed hardware in-house, including graphics and AI chips.
Unless those guys at Sony have different jobs now, the resident talent for graphics processing architectures at Sony exists, while we can't really say the same about Nintendo or Microsoft.
Again, see Geeforcer's post.
Plus the "rumor" of Sony working on Navi with AMD came from a Forbes contributor with, AFAIR, a very decent track record.
Care to point to his previous track record on leaks? Just because someone writes for Forbes doesn't mean everything he writes is true.
 
One beneficial aspect, for AMD, of working with Sony on consoles is, I guess, having more data on what devs want, how they want to do things, what the bottlenecks of their chips are, etc. Don't get me wrong, AMD already has that internally and on PC, but being in a console puts that on another level.
And if Sony and MS "pick" or "ask" for Y and Z instead of X and W, in a sense, it can help AMD focus on the "useful" stuff for gaming...

Wait, what? Versus MS, who has been listening to game developers for over two decades now and coordinating that with discussions with hardware manufacturers about what is possible? That was the whole reason DirectX existed: to bring game developers closer to hardware developers, to inform game developers of what is possibly coming in hardware over the next X years, and to give hardware developers a much better grasp of what game developers want, coordinating with them to offer universal hardware capabilities that make things easier for game developers. Prior to that, game development on PC was a nightmare of hardware incompatibilities, with software developers unable to rely on having X hardware feature available for their game depending on what hardware an end user had.
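(To make the "universal hardware capabilities" point concrete: for years Direct3D exposed hardware variance through capability bits that games queried at startup instead of just assuming a feature existed. A minimal sketch in the D3D9-era style; the specific caps checked here are illustrative, not a recommended set.)

```cpp
// Sketch: querying Direct3D 9 capability bits before relying on a feature.
// The specific caps checked below are illustrative only.
#include <d3d9.h>
#include <cstdio>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps = {};
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
        d3d->Release();
        return 1;
    }

    // Instead of assuming a feature exists, the app asks the driver.
    if (caps.TextureCaps & D3DPTEXTURECAPS_POW2)
        std::printf("Textures must be power-of-two sized on this hardware.\n");
    std::printf("Max texture size: %lux%lu\n",
                caps.MaxTextureWidth, caps.MaxTextureHeight);
    std::printf("Vertex shader version: 0x%08lx\n", caps.VertexShaderVersion);

    d3d->Release();
    return 0;
}
```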

Versus console manufacturers, who have traditionally just told game developers: this is what you get, either make a game for it or don't. Sure, it made game development a bit easier in that you had stable platforms with set hardware capabilities, but it also meant that game developers had almost no say in where hardware went. And game development was only easier WRT having a stable platform. Platform documentation could vary greatly in quality. The SDKs could often be a nightmare to work with (no input from game devs kind of does that). Etc.

It wasn't until PS4 that Sony actually started to listen to what game developers wanted, and they only started to do that because of how the X360/PS3 generation turned out.

OTOH, MS's consoles have always been praised for how developer-friendly they were and how easy it was to code games for them. Hence Sony finally relenting and listening to game developers when the X360 basically matched PS3 sales.

And a good thing that happened as well. The PS4 would likely be a much worse machine if Sony had continued to do their own thing and not listen to game developers. Ironically, it's also Microsoft not listening to game developers as much as they had in the past that led to the mistakes of the XBO.

Nintendo? I'm still not sure if Nintendo is listening to game developers or not. :p

Regards,
SB
 
How could miniGL kill Talisman?
The miniGL driver was a killer app for Voodoo Graphics PCI because it could run hardware-accelerated GLQuake at ~30 fps - making every hardcore gamer want a 3D accelerator card from 3Dfx, Nvidia, or Rendition.

On the other hand, Talisman was a radical departure from existing triangle-based rasterizer pipelines in Direct3D, Glide, and miniGL/OpenGL, and its hardware performance was far behind Voodoo Graphics.
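(For context on why a "mini" driver was viable at all: GLQuake leaned on a narrow slice of immediate-mode OpenGL 1.1, so a vendor only had to implement calls in roughly this style. A sketch of the usage pattern, not 3Dfx's actual driver surface:)

```cpp
// Sketch of the immediate-mode OpenGL 1.1 style GLQuake relied on.
// A miniGL driver only had to cover this narrow slice of the API,
// not the full OpenGL specification.
#include <GL/gl.h>

void draw_textured_polygon(GLuint texture,
                           const float (*xyz)[3],
                           const float (*st)[2],
                           int count) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);

    glBegin(GL_TRIANGLE_FAN);          // Quake surfaces drawn as fans
    for (int i = 0; i < count; ++i) {
        glTexCoord2f(st[i][0], st[i][1]);
        glVertex3f(xyz[i][0], xyz[i][1], xyz[i][2]);
    }
    glEnd();
}
```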
 
Sony might suck at engineering, but the PS2 was just perfect in the sense that it sold north of 155 million units, has the best library of all consoles, and had unique graphics at the time. The PS2 seems like somewhat rushed hardware, designed by someone who didn't have much knowledge of what developers wanted, but it turned out to be a great product anyway. It seems a one-time success after all, as the PS3 didn't really go like the PS2.
 
@DmitryKo

Yes, it was very basic but fast. I know devs didn't like the hardware, but I think it was able to display graphics unique compared to the competition (ZOE2 as an example).
Off-topic, but do you happen to play BF4 on PC?
 
I mentioned Sony because they worked with AMD engineers on their custom chip, to make it better suited to their library of games. Sony is not at all paying AMD for GPU resources better suited to datacenters... they are paying AMD for a GAMING chip.

And with that insight, AMD could have designed Navi with a better uArch meant for gaming.

This same concept (of learning from Sony) also applies to AMD working alongside Microsoft, and incorporating what they learned into Navi.
 
I mentioned Sony because they worked with AMD engineers on their custom chip, to make it better suited to their library of games.

What's your source?
Nvidia has the best gaming chip right now, and they didn't need Sony for it to be successful.
 
What's your source?
Nvidia has the best gaming chip right now, and they didn't need Sony for it to be successful.

Forbes: https://www.forbes.com/sites/jasone...nys-playstation-5-vega-suffered/#29b62e24fda8

What is the best gaming chip? Who doesn't need Sony? Perhaps you misread my post.
I was referring to my opening remarks (the thread spawn), not the post above mine: namely, that the "re-taping" of Navi is actually a good thing, because it allows AMD to sneak in a few more "adjustments" they have since nailed down (i.e. things they had wished they could get into Navi, and now can).

Dr. Su even mentioned this and seemed actually glad that "something popped up" and led them back to "re-taping" Navi, because "it allows them to not make mistakes like they did with Polaris" (*paraphrasing).
 
Dr. Su even mentioned this and seemed actually glad that "something popped up" and led them back to "re-taping" Navi, because "it allows them to not make mistakes like they did with Polaris" (*paraphrasing).
She did what, where? The only thing Su has said about Navi was in the CES keynote, and it was only a tidbit saying it's one of their future products.
 
Yet PS2 had no dedicated graphics hardware besides a custom VPU processor... in the year 2000, when the PC had hardware triangle setup/rasterization and transform & lighting (and soon programmable pixel/vertex shaders).

Yeah, well, in some ways the VUs were a lot more flexible than what existed in the PC space at the time. A bitch to dev for, because they were very unique and there were other bottlenecks, but pretty "smart" in the end, IMO...
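(To illustrate what "flexible" means here: the VUs ran whatever microcode the developer wrote, so the transform stage was programmable rather than fixed. A plain-C++ analogue of the kind of per-vertex work VU1 microcode typically did, a 4x4 transform plus perspective divide; an illustration of the workload, not actual VU code:)

```cpp
// Plain-C++ analogue of typical VU1 per-vertex work on PS2: a 4x4
// transform followed by perspective divide. On real hardware this was
// hand-written vector microcode, and the whole loop could be replaced
// wholesale (custom skinning, procedural geometry, etc.), unlike the
// fixed-function T&L pipelines on PC cards of the era.
#include <cstddef>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

Vec4 transform(const Mat4& M, const Vec4& v) {
    Vec4 r;
    r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
    r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
    r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
    r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
    return r;
}

void transform_batch(const Mat4& mvp, const Vec4* in, Vec4* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        Vec4 v = transform(mvp, in[i]);
        float inv_w = (v.w != 0.0f) ? 1.0f / v.w : 0.0f;  // perspective divide
        out[i] = { v.x * inv_w, v.y * inv_w, v.z * inv_w, 1.0f };
    }
}
```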
 