Cell's dead, baby. Cell's dead. Even in its final form. *spawn*

The GPU that Nvidia sold Sony was the best possible GPU that Sony could have put into their console at that time. The timeline, the die size, the power budget: it was all pretty tight. And Sony left it late.

MS got the banger GPU, because they helped shape it. MS and ATI were talking. Sony were coming down from a high dose of copium.

Switch got a fine APU, as fine as Nintendo deserved. The pre-launch denial machine against the hardware was incredible, even though B3D regulars could see what was happening.

And who's laughing now? Nintendo. And good on them. Banger of a handheld.
Hmm? At that time Nvidia already had the 8xxx series with unified shaders. Also, Tegra X1 was already an old SoC when the Switch premiered, and Nvidia already had a newer one.
 
People hating on Cell when it's the sole reason PS3 was able to compete with 360 graphically.

Can you imagine what PS3 could have done if it had a more capable GPU and Cell didn't have to spend most of its resources doing crap the GPU should be doing?
I am sure that was the original vision. From past interviews, I think it was with Ken, he said that as graphics evolve to being more realistic, we need to accommodate that with more physics, more elements that make the world behave like the real thing, otherwise the realistic worlds will feel off. So we need more processing for that. Cell was supposed to be that super powerful processing unit that was going to make game worlds not just look better, but also act more immersively, more realistically.
Beyond the visuals, that's what Sony was promising with Cell's processing power: not graphics, but physics and AI.
 
I am sure that was the original vision. From past interviews, I think it was with Ken, he said that as graphics evolve to being more realistic, we need to accommodate that with more physics, more elements that make the world behave like the real thing, otherwise the realistic worlds will feel off. So we need more processing for that. Cell was supposed to be that super powerful processing unit that was going to make game worlds not just look better, but also act more immersively, more realistically.
Beyond the visuals, that's what Sony was promising with Cell's processing power: not graphics, but physics and AI.

You've just touched upon my biggest concern with modern rendering: in the next few years we'll end up with game worlds that look like real life but don't act or react like a real world.

Physics needs a huge, huge push this generation...
 
Hmm? At that time Nvidia already had the 8xxx series with unified shaders. Also, Tegra X1 was already an old SoC when the Switch premiered, and Nvidia already had a newer one.

It takes time to integrate a GPU into a console. According to Wikipedia, G80 launched November 6th 2006, and PS3 was originally touted by Sony as launching Spring 2006 (though they could never have made that date). So firstly, there wasn't time to come up with a custom variant and integrate it. Secondly, G80 was absolutely massive (484 mm^2) and far too large to put in even a $599 console. Smaller variants came later on newer process nodes.

Sony needed time, reasonable die area, and the ability to mass produce millions of units in the run-up to launch and during the launch window. G80 or a derivative was never an option given their launch date. I'm not even sure that, given the die area Sony had to work with, G8x would have been a much better choice.

As for Nintendo, they went with mature hardware with no doubt mature tools, and they had ages to perfect their launch window games on final hardware. They would also have had ages to perfect cooling and form factor too. I'm sure they got a very good deal on the chips. Supply also seems to have been strong for the parts in question.

I can't see that a newer processor would have put them in a better position than they are now, in terms of sales and profit.
 
It takes time to integrate a GPU into a console. According to Wikipedia, G80 launched November 6th 2006, and PS3 was originally touted by Sony as launching Spring 2006 (though they could never have made that date). So firstly, there wasn't time to come up with a custom variant and integrate it. Secondly, G80 was absolutely massive (484 mm^2) and far too large to put in even a $599 console. Smaller variants came later on newer process nodes.

Sony needed time, reasonable die area, and the ability to mass produce millions of units in the run-up to launch and during the launch window. G80 or a derivative was never an option given their launch date. I'm not even sure that, given the die area Sony had to work with, G8x would have been a much better choice.

As for Nintendo, they went with mature hardware with no doubt mature tools, and they had ages to perfect their launch window games on final hardware. They would also have had ages to perfect cooling and form factor too. I'm sure they got a very good deal on the chips. Supply also seems to have been strong for the parts in question.

I can't see that a newer processor would have put them in a better position than they are now, in terms of sales and profit.
In the end, that's the reason 3rd party games usually ran better on X360: AMD was able to provide a much more capable GPU a year before Nvidia. (They also had no problem delivering RT features for the PS5 and XSX chips, even though the first RDNA 2 cards debuted at the same time as the consoles.)
 
What's more, the X360 GPU had unified shaders when ATI's PC GPUs only got them two years later... so Nvidia definitely could have done better with RSX and based it on the GF 8000 series.
 
You are missing the point that Sony came to NVIDIA very very late in the design cycle. PS3 was supposed to rely on a second Cell CPU for graphics tasks. They realised too late it wasn't going to work and got NVIDIA on board. By that time there wasn't enough time to fully design a system around the Nvidia GPU.
 
In the end, that's the reason 3rd party games usually ran better on X360: AMD was able to provide a much more capable GPU a year before Nvidia. (They also had no problem delivering RT features for the PS5 and XSX chips, even though the first RDNA 2 cards debuted at the same time as the consoles.)

With PS5 and Series X, Sony and MS had been working with AMD consistently for many years. They've had many years of engagement with AMD's technology roadmaps - two whole previous generations in MS's case. They've had many years to work on semi-custom branches of RDNA. Despite this, neither console has what you could consider a PC RDNA 2 part just dropped in.

PS5 is not actually "full" PC RDNA 2 - it's older - and has characteristics of RDNA 1 and RDNA 2. It's a mix of new and older parts of the roadmap. Sony and AMD had been working on this for years.

The Series X chip was apparently finished later than PS5, with both XBox and PC featuring full support for the new (at the time) DX12U feature set. Despite this, XSX is probably still a bit older than the first PC RDNA2 parts in terms of chip design, and carrying more RDNA 1 elements or layout or whatever. I think some of the die peeping experts said something like that at any rate.

All RDNA 2 types (PS5, XSX, PC) are ultimately based on the very similar RDNA 1, which predates the consoles by more than a year.

Sony's situation with PS3 was very different than what's happened this gen.

What's more, the X360 GPU had unified shaders when ATI's PC GPUs only got them two years later... so Nvidia definitely could have done better with RSX and based it on the GF 8000 series.

Not really.

The 360's GPU was based on a GPU that was already in design but didn't make it to the PC market for whatever reason. It was adapted and put into the 360 because MS could see it was the best available GPU for the 360.*

This isn't the same situation as G80. G80 was Nvidia's first unified shader architecture and its first DX10 product. It arrived along with DX10 towards the end of 2006. Customisation and system integration take time.

*What was right for MS to put into a console in 2005 might not necessarily be the right product for the PC in 2005. 360 didn't support DX10 either IIRC, unlike G80.


You are missing the point that Sony came to NVIDIA very very late in the design cycle. PS3 was supposed to rely on a second Cell CPU for graphics tasks. They realised too late it wasn't going to work and got NVIDIA on board. By that time there wasn't enough time to fully design a system around the Nvidia GPU.

The dual Cell idea was early on, IIRC, and I don't think it seriously went into development. Before RSX from Nvidia I think they were pursuing something like a Turbo Graphics Synthesiser from Toshiba.
 
You are missing the point that Sony came to NVIDIA very very late in the design cycle. PS3 was supposed to rely on a second Cell CPU for graphics tasks. They realised too late it wasn't going to work and got NVIDIA on board. By that time there wasn't enough time to fully design a system around the Nvidia GPU.

The weak performance isn't anyone's fault but Sony's. G80 in the PS3 would never fly; the G70 series was what could realistically be implemented (and that's what happened). It was behind other GPUs at the time (half the performance?), but that's always the case with consoles: you can't expect the latest and greatest to be implemented just like that, these things take time.

Cell wasn't all that much of a performance monster for gaming either. It was fast at certain tasks, especially GPU-related tasks; Cell was akin to a GPU/CPU hybrid, teamed with an (at the time) weak dedicated GPU.
 
It takes time to integrate a GPU into a console. According to Wikipedia, G80 launched November 6th 2006, and PS3 was originally touted by Sony as launching Spring 2006 (though they could never have made that date). So firstly, there wasn't time to come up with a custom variant and integrate it.

It didn't take Nvidia one month to make G80; it was years in the making, so there was time to offer Sony something based on it.
 
It didn't take Nvidia one month to make G80; it was years in the making, so there was time to offer Sony something based on it.
IMO Nvidia just likes to sell old crap ;) and nothing wrong with that, it's surely more money for them (though with the exception of the first Xbox, where they brought the newest thing they had, even before the desktop).
 
You are missing the point that Sony came to NVIDIA very very late in the design cycle. PS3 was supposed to rely on a second Cell CPU for graphics tasks. They realised too late it wasn't going to work and got NVIDIA on board. By that time there wasn't enough time to fully design a system around the Nvidia GPU.
Oh yes... I forgot about that. So the fault lies on both sides. I wonder how Sony imagined Cell would run modern graphics.
 
Oh yes... I forgot about that. So the fault lies on both sides. I wonder how Sony imagined Cell would run modern graphics.
I revisited some of the interviews and I am very surprised by what I am reading.
Sony really believed in Cell and what it could offer. It wasn't just marketing FUD. It kind of ended up being that of course, as the end product (PS3) couldn't live up to it.
Sony talked like they worked pretty closely with NVIDIA to make Cell and RSX work together efficiently and effectively. It's like Sony was baking the best cake ever, and when it finished baking and they tasted it, they were caught off guard themselves at how mediocre it was.
Also, a lot of the plans they had for Cell were a very clear glimpse into the future. Whatever they were planning for Cell, based on how they envisioned the future, came into reality, just not with Cell in it.

It is a very curious situation. What really went wrong with Cell? They were shooting for the stars, and spent so much money and so many years on it, yet the processor itself, along with all the plans they had, couldn't be materialized.
Either due to lack of coordination, a processor that wasn't well designed, or a combination of factors.
 
You are missing the point that Sony came to NVIDIA very very late in the design cycle. PS3 was supposed to rely on a second Cell CPU for graphics tasks. They realised too late it wasn't going to work and got NVIDIA on board. By that time there wasn't enough time to fully design a system around the Nvidia GPU.
I thought the 2 x Cells rumour was debunked, and that Sony always intended to go for a third party GPU?
 

From the mouth of someone in Naughty Dog.
It says for a "while" but we don't have a specific time frame. Like, when? Until 2002? 2003? 2004?

Edit: Also, it sounds like the console wasn't even running on 2 CPUs but rather one Cell. So it more or less sounds like very, very early prototype hardware that had just the Cell in it and was offloading graphics onto the SPUs at the beginning.
 
It says for a "while" but we don't have a specific time frame. Like, when? Until 2002? 2003? 2004?
It strongly implies that the PS3 got delayed because of not having a GPU, so it must have been pretty late, to the point that NVIDIA probably couldn't have done much better with such tight schedules. We are talking about a much smaller NVIDIA after all, which just a few years earlier had suffered the 5800 fiasco.
 
It strongly implies that the PS3 got delayed because of not having a GPU, so it must have been pretty late, to the point that NVIDIA probably couldn't have done much better with such tight schedules. We are talking about a much smaller NVIDIA after all, which just a few years earlier had suffered the 5800 fiasco.
Well, let's see the information. Logically, Sony of Japan would have got the advice as soon as developers started prototyping software.
The article mentions that the initiative started at the end of Jak II, which means they would have started around the end of 2003 or the beginning of 2004.
Cell was also probably early, and the guys in Japan were hoping that by 2005 Cell would be powerful enough. Best case scenario, they probably informed Sony Japan in late 2003 that they needed a GPU. Two years to finalize, so let's say end of 2005?
It's a shame, because on one hand Sony was planning for a 2005 release, but Blu-ray delayed it further. They could just as well have used the time to improve the GPU more.
 
Cell wasn't all that much of a performance monster for gaming either. It was fast at certain tasks, especially GPU-related tasks; Cell was akin to a GPU/CPU hybrid, teamed with an (at the time) weak dedicated GPU.
Hard disagree. It's fair to say Cell wasn't good for the gaming code and paradigms of the time. If you were to take modern GPGPU-focussed concepts, or even just data-oriented game design, and map them onto Cell, you might have a very different story. When it was able to stretch its legs, Cell got unparalleled throughput. The main problem was devs couldn't, and shouldn't have had to, rearchitect everything for Cell on top of developing for other platforms. Sony perhaps imagined a scenario like PS2, where the majority of code was focussed on it and then ported, but that didn't happen, and realistically couldn't as the cost of game development increased exponentially.

Sadly we'll never know. There isn't going to be a future retro demoscene pushing PS3s the way we've seen the C64, Spectrum, etc. pushed; there's no access to Cell hardware or interest in exploring it. I'm not going to argue it's a loss to the world and a travesty that other tech displaced it, but I'm not going to accept that it was a poor design for a gaming CPU. Modern game development is data focussed: working around the data access and memory limits caused by RAM buses that haven't kept up with the increase in maths power, and gaining massive improvements (10-100x) over object-orientated development concepts. That's entirely what Cell was about 15 years ago, when STI got together and said, "you know what the future limiting factor is going to be? Data throughput."
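To make that concrete, here's a minimal sketch (my own illustrative C++, not anything from a real engine or the PS3 SDK; all names are made up) of what "data-oriented" means in practice: the same particle update written object-style (array of structs) versus data-style (struct of arrays). The SoA version streams linearly through exactly the bytes it needs, which is the access pattern Cell's SPU DMA rewarded and modern GPGPU still rewards.

```cpp
// Illustrative sketch only: AoS vs SoA data layout for a hot update loop.
#include <cstddef>
#include <vector>

// Object-oriented style: each particle is one self-contained object.
// The hot loop only needs position and velocity, but every cache line
// it pulls in also drags along the cold fields (mass, age).
struct ParticleAoS {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
    float mass, age;    // cold data, unused by this loop
};

void update_aos(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) {            // strided access, wasted bandwidth
        p.px += p.vx * dt;
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}

// Data-oriented style: one contiguous array per field. The loop reads
// and writes unit-stride streams, which is trivial to prefetch, DMA in
// batches, or vectorise.
struct ParticlesSoA {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;
};

void update_soa(ParticlesSoA& ps, float dt) {
    const std::size_t n = ps.px.size();
    for (std::size_t i = 0; i < n; ++i) {
        ps.px[i] += ps.vx[i] * dt;
        ps.py[i] += ps.vy[i] * dt;
        ps.pz[i] += ps.vz[i] * dt;
    }
}

int main() {
    ParticlesSoA ps;
    ps.px = ps.py = ps.pz = std::vector<float>(1000, 0.0f);
    ps.vx = ps.vy = ps.vz = std::vector<float>(1000, 1.0f);
    update_soa(ps, 0.016f);  // one ~60 Hz tick
    return 0;
}
```

Same maths in both versions, but the second is the shape of work an SPU (or a GPU) could chew through without stalling on memory, which is the whole "data throughput" bet described above.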
 