Cell & RSX: what are Sony's plans?

Simon82

Newcomer
Hello,
I don't understand why Sony put so much effort into building such a complicated and unusual processor (Cell) with IBM, while on the other hand they went with a graphics processor that seems to be a fairly standard part, neither innovative nor exotic (read: customizable/optimized). What do you think Sony's plans are for these chips? Do you think RSX won't be at the same level as the CPU, if the CPU is really well used with all its features? And if that's true, does it mean that 3D rendering is no longer the priority, in terms of complexity, that it was in the PS1 era?

Bye
 

I know this is a much-discussed and oversimplified topic, but Sony has built so many good architectures, and RSX is the first conservative step that I can remember.

I don't mean to say this GPU isn't good, only that they could have customized an already-good architecture like this one.
 
Why would they? RSX gives developers a lot of power which is easily accessible for obvious reasons - it's a PC GPU and therefore very easy to develop for. They already have to "fight" to get great performance from Cell, would you prefer them to be also fighting to get great performance out of RSX?
 
I'm sure they will, but the Cell architecture, being new and (on paper) really powerful if code is optimized for it, may let them squeeze out a lot of processing power. RSX is a GPU that has already shown what it can render on screen, on the PC desktop platform. I know the console world is not the same thing, and that what an Xbox could do, a PC with the same specs couldn't; but if we can't do the kind of low-level optimization we could on older consoles, do you think this GPU can render all the stuff that Cell will feed it? Could RSX be the bottleneck?

A lot of game and developer plans make me think of a similar situation.
 
You need to look at the alternatives, and at the budget.

Alternatives: The mystical "Cell GPU" lots of people were talking about way too much. Or ATI. In the end, using a GPU from either ATI or NVIDIA would give so many more advantages that it's a good thing that Sony went for one of them.

Budget: Once they did decide to go for NVIDIA, the solution was simple: What could Sony get for the amount of money they were prepared to spend on PS3? RSX is the answer. If you want more power, you need to spend more. If you want more "customisation", you need to spend more.

Conclusion: RSX is a heavily PC-based GPU, easy to develop for and very powerful - whatever people say about it being "underpowered". Also, at the end of the day, NVIDIA and ATI are the best in the GPU business and Sony would have had to invest an insane amount of money to make a GPU that was as feature rich and fast as these two can make - if Sony could ever make one like that, that is!
 
Could RSX be the bottleneck?
You talk as though RSX isn't any good, like a lot of people. nVidia have been in the 3D biz for years, and are actually quite good. I don't know what magical replacement you think Sony could come up with that'll out-perform a 7800 type GPU (the best available at the time), let alone be cost effective and developer friendly.

The GPU has to do the same job no matter what architecture you use - tiled rendering, deferred rendering, straight-forward rasterizing, raytracing. If you had to pick one that renders well, G70 wouldn't be a bad choice by any stretch.

Unless you can describe a better solution, balancing performance with complexity and costs, I suggest you go away and think about what RSX does and how well it is/isn't suited to a console. IMO any GPU is suited to a console, as it can be maxed out in the closed environment. Exotic designs are nice from a technological POV, but there isn't much they can offer above the established methods, which are established because they work.
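
To make "the same job" concrete, here's a minimal software sketch of the core operation every one of those architectures ultimately performs: testing which pixels a triangle covers. The triangle coordinates and the tiny 16x16 "framebuffer" are made up purely for illustration.

```c
/* Minimal edge-function rasterizer: the job every GPU does, whether it
 * tiles, defers, or brute-forces. Illustrative only. */
#include <stdio.h>

/* Signed area test; > 0 means point (cx,cy) lies left of edge a->b. */
static int edge(int ax, int ay, int bx, int by, int cx, int cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

int main(void)
{
    /* One hard-coded triangle in a 16x16 "framebuffer". */
    int x0 = 2, y0 = 2, x1 = 13, y1 = 4, x2 = 6, y2 = 13;

    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++) {
            /* A pixel is inside when it is on the same side of all edges. */
            int w0 = edge(x1, y1, x2, y2, x, y);
            int w1 = edge(x2, y2, x0, y0, x, y);
            int w2 = edge(x0, y0, x1, y1, x, y);
            putchar((w0 >= 0 && w1 >= 0 && w2 >= 0) ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}
```

Tilers, deferred renderers and immediate-mode GPUs differ in when and where they run this loop, not in the result.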
 
I think PowerVR would've been a viable option. I'm talking about the technology, not what they had available for the desktop at the time. It would've been cool if Sony/Toshiba had licensed the PowerVR IP and built a monster 500MHz, 225mm^2 TBDR chip at 90nm. :devilish:
 
Answering both of you (thanks for the replies): I know that Sony could not build a GPU powerful enough to compete with high-end DX9-based GPUs. They could do it for the PSP because they had the know-how and a (similar) architecture from the PlayStation 2, a console they surely spent a lot of money on. But for PS3 they couldn't. Why then spend so much money on the Cell architecture? I agree that between the CPU and Blu-ray they surely ran out of money on this project, and maybe the GPU was the last choice to be made and suffered for it; but I think that nowadays, for a console whose first objective is photorealistic rendering, raw CPU power is largely wasted. Maybe they will improve other areas, like physics and AI, who knows, but from a marketing point of view those are not things that are easy to see.
 
Why then spend so much money on the Cell architecture?

What kind of a question is that? WHY spend so much money on Cell... Because they have a vision for the use of Cell not only in PS3 but also in other platforms - apparently even PS4 (a very powered-up version, of course). Also, IBM invested a lot in it because they also had a vision of where to use it.

And they obviously didn't feel that the alternatives were good enough.
 
I admit that maybe it's the way we think about console architecture that makes us believe PC-based hardware is not as good as customized hardware. Maybe we always see a "different architecture" as a secret black box ready to unleash all its power. Sometimes we might find that a lot of customized components don't perform as well as "standard" commercial ones. ;)
 
It's pretty simple and basically comes down to the old question of "Do we build it in-house or do we outsource?". You could outsource "custom" chip designs too; it doesn't have to be standard off-the-shelf. It was already mentioned that a custom-designed GPU from Nvidia wasn't an option for the launch window, since it was said that RSX was a last-minute realization.
 
Why then spend so much money on the Cell architecture?
The theory behind Cell was to overcome some severe bottlenecks that are holding back the peak performance of existing CPUs. That is, there was room for improving the CPU design, where there isn't so much room for improving the GPU design. The areas where the GPU can be improved, such as unified shaders, are in their infancy, and nVidia didn't have a solution. How much improvement those designs offer over conventional GPU designs in a closed-box environment is also unknown.

The investment in Cell as a scalable multimedia CPU, with the intention of extending it to many products, is very different from the investment in a GPU that is useful only for a console and that wouldn't outperform what the experts, who are already very good at GPUs, can build.
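
To illustrate the bottleneck point: a conventional CPU stalls on cache misses, while a Cell SPE pulls data into its local store explicitly and can overlap the next transfer with the current computation. Here's a rough double-buffering sketch along the lines of the Cell SDK's spu_mfcio.h interface; the chunk size and the trivial doubling "workload" are invented for illustration.

```c
/* Hedged sketch of SPE streaming: explicit DMA into local store,
 * double-buffered so transfer and compute overlap. */
#include <stdint.h>
#include <spu_mfcio.h>

#define CHUNK 4096  /* bytes per DMA transfer (illustrative size) */

static volatile float buf[2][CHUNK / sizeof(float)]
    __attribute__((aligned(128)));

static void process_chunk(volatile float *p, int n)
{
    for (int i = 0; i < n; i++)     /* stand-in for real work */
        p[i] = p[i] * 2.0f;
}

void stream(uint64_t ea, int nchunks)
{
    int cur = 0;
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);        /* prefetch first chunk */

    for (int i = 0; i < nchunks; i++) {
        int next = cur ^ 1;
        if (i + 1 < nchunks)                        /* kick off next DMA early */
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK,
                    CHUNK, next, 0, 0);

        mfc_write_tag_mask(1 << cur);               /* wait only on current tag */
        mfc_read_tag_status_all();

        process_chunk(buf[cur], CHUNK / sizeof(float));
        cur = next;
    }
}
```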
 
I understand your opinion perfectly and I agree with it. In actual fact, maybe Nvidia didn't want to share the G80 architecture the way ATI did with the Xbox 360 GPU, which was in some ways part of a future product before that product reached the PC market. Maybe the G80's features were too expensive for the budget that remained after the engineering problems and Cell development.

PlayStation 3 as a test-market board more than as "the most powerful console"... that could make sense.
 
RSX gives developers a lot of power which is easily accessible for obvious reasons - it's a PC GPU and therefore very easy to develop for.

Not only a PC GPU, but NV has a long history of excellent OpenGL support -- absolutely necessary, as Sony wasn't going to support DX, so OpenGL is next in line, or ahead, depending on who you ask -- and NV has its own shader language, Cg, to compete with HLSL, another MS product.

So they not only got a familiar, tried-and-true architecture that they could deploy cheaply (NV needed to get into the game, and with 100M potential PS3s plus PSP2 and PS4 contracts available - JUMP IN!) and quickly (G70 was nearing completion when the deal was made), they also got all the dev-friendly tools they would need to turn out next-generation graphics quickly.

Obviously a custom design tailored to Sony's special needs and architecture using some of the core technologies from NV's G80 design would have been nice. But that wasn't in the cards. And think of it this way: companies have limited focus. Sony focused on a next gen CPU architecture (Cell) and got a modern, if not typical flagship GPU; MS focused on a next gen GPU architecture (Xenos) and got a modern, if not typical multicore CPU. Which exactly represents where the memory controllers are.

The GPUs and CPUs just meet their design philosophies and focus, and in Sony's case RSX is one of the biggest short term wins Sony made. They opened up deep and wide for PC development in a way they never did with previous hardware and have hardware and tools that accelerate development for great looking titles.
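
For what it's worth, those dev-friendly tools look something like this on the host side: the Cg runtime compiles a shader from source at run time and binds it through OpenGL. This is just a sketch of the public Cg/CgGL API with error handling trimmed; the file name "phong.cg" and entry point "main_fp" are invented for the example.

```c
/* Hedged sketch: loading and binding a fragment shader via NV's Cg runtime. */
#include <Cg/cg.h>
#include <Cg/cgGL.h>

static CGcontext ctx;
static CGprogram prog;
static CGprofile profile;

void init_shader(void)
{
    ctx = cgCreateContext();
    profile = cgGLGetLatestProfile(CG_GL_FRAGMENT); /* best profile the GPU supports */
    cgGLSetOptimalOptions(profile);

    /* "phong.cg" / "main_fp" are illustrative names, not real assets. */
    prog = cgCreateProgramFromFile(ctx, CG_SOURCE, "phong.cg",
                                   profile, "main_fp", NULL);
    cgGLLoadProgram(prog);
}

void use_shader(void)
{
    cgGLEnableProfile(profile);
    cgGLBindProgram(prog);
    /* ... issue draw calls here ... */
    cgGLDisableProfile(profile);
}
```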
 
After Sony's effort with PS2's GS, I'm glad they went to someone who knew what they were doing :)
A friend gave me his old ATi 9600 Pro last week, which I replaced my old Ti4200 with. I went looking for some demos to try out the card, and I've just worked through the Dungeon Siege II demo. Comparing that, on a GPU vastly superior to GS, to the similarly styled CON on PS2, I know which I prefer!

It seems that unless you go in for uber-graphics, mostly in FPSes, all those lovely pixel shaders and vertex shaders that appeared a few years after PS2 are virtually worthless. No one uses them to good effect - at least not in the games I've tried - and framerate always suffers. For its time, I think GS wasn't such a bad design. It's not easy to use, but it's versatile and produces some impressive results. Compare that to what an nVidia GPU available to Sony during PS2's development could have done (without haemorrhaging cash!) and I think they probably made a fair choice.

This generation, things have moved on considerably - not only in standard hardware design, but also in tools like shader languages. Where GS was competitive with the products from nVidia and ATi for its time, with lots of scope for pushing effects, I don't think anything competitive could have been produced for PS3 in terms of performance and development. Though to be frank, given the poor use of shaders in PC games, I'm not sure easy development is all it's cracked up to be!
 
You make an extremely valid point, Shifty. One thing the GS didn't have, which IMO it was desperate for, was texture compression. TC would have made a nice difference :)
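
For a rough sense of what TC would have bought: S3TC/DXT1 packs each 4x4 texel block into 8 bytes (4 bits per texel) versus 32 bits per texel uncompressed. The 512x512 texture size below is just an example.

```c
/* Back-of-the-envelope: texture footprint with and without DXT1-style
 * compression, against GS's 4 MB of eDRAM. Illustrative numbers only. */
#include <stdio.h>

int main(void)
{
    int w = 512, h = 512;                        /* one example texture   */
    long raw  = (long)w * h * 4;                 /* 32-bit RGBA texels    */
    long dxt1 = (long)(w / 4) * (h / 4) * 8;     /* 8 bytes per 4x4 block */

    printf("uncompressed: %ld KB\n", raw / 1024);   /* 1024 KB */
    printf("DXT1:         %ld KB\n", dxt1 / 1024);  /*  128 KB */
    printf("4 MB of eDRAM holds %ld such textures compressed, %ld raw\n",
           (4L * 1024 * 1024) / dxt1,               /* 32 */
           (4L * 1024 * 1024) / raw);               /*  4 */
    return 0;
}
```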
 
I'm betting that Sony just wanted a simple, reasonably powerful GPU, but not a top-of-the-line one: good enough, with standard features and strong shader performance, but lacking the massive pixel fillrate and bandwidth that GS had for its time, in comparison.

I'd expect that for PS4, Sony and Nvidia will have had 5 to 6 years to work on a custom GPU that takes the strengths of Nvidia's know-how and combines them with a high-bandwidth EDRAM solution offering terabytes/sec of bandwidth.

In 2004, Tony from Nvidia said that high-end 3D games in the future would need something like 3 TB/sec of bandwidth. I expect that's where PS4 will go.

I'm hoping to see a PS1-to-PS2-like leap when going from PS3 to PS4. The PS3 has all the right graphical features and capabilities that GS lacked, just not enough performance. With a custom Nvidia-Sony GPU, and not a quick 2-year customization of an existing NV architecture, Sony will have the opportunity to build something truly high-performance for its time.

PS3's RSX has 4,000 Mpixels/s (aka 4 Gpixels/s) of fillrate and 22.4 GB/sec to GDDR3 memory, plus another bus to Cell which lets RSX tap into the XDR memory.

PS4 will need far, far more pixel fillrate and bandwidth - far more than the current state-of-the-art G80. Only EDRAM and a lot of rendering pipes will get them that. I don't think the days of pixel fillrate are over because of shaders; while shaders can help, they cannot replace pixels or polygons, unless a totally new way of rendering appears, which might not happen in 5-6 years.
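
As a back-of-the-envelope check on those numbers: dividing the quoted 22.4 GB/s by a 60 fps target gives the per-frame bandwidth budget, and dividing that by the cost of touching every 720p pixel once (color write plus Z read/write, ignoring texture reads) gives a rough full-screen-pass count. The frame rate, resolution and per-pixel byte counts are assumptions, not measurements.

```c
/* Rough arithmetic behind the fillrate/bandwidth worry, using the RSX
 * figures quoted above. */
#include <stdio.h>

int main(void)
{
    double bus      = 22.4e9;          /* GDDR3 bytes/sec (quoted)        */
    double fps      = 60.0;            /* assumed target frame rate       */
    double frame_px = 1280.0 * 720;    /* assumed 720p render target      */
    double bytes_op = 4 + 4 + 4;       /* color write + Z read + Z write  */

    double budget = bus / fps;                      /* bytes per frame */
    double passes = budget / (frame_px * bytes_op); /* ignores textures */

    printf("per-frame budget: %.0f MB\n", budget / 1e6);      /* ~373 MB */
    printf("full-screen passes at 60 fps: ~%.0f\n", passes);  /* ~34     */
    return 0;
}
```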
 