Why not let PowerVR build RSX?

As I ask of everyone who raises this (which no-one answers ;)), what would you expect a custom chip to look like? The GPU would want to have vertex units, pixel units, texture units, etc. If Sony ruled against eDRAM, and nVidia didn't have unified shaders working in a usable form, what would they do differently? The end result of a custom GPU would look a lot like G70, no? Vertex units + pixel units + texture units.

Not that I'm a genius in these matters, but I often hear of situations where Cell is helping the GPU with graphics work, in some cases redundantly. It would seem to me that a custom design would not have parts that are less efficient than the CPU the chip is paired with.

I could be way off though. :???:
 
As I ask of everyone who raises this (which no-one answers ;)), what would you expect a custom chip to look like? The GPU would want to have vertex units, pixel units, texture units, etc. If Sony ruled against eDRAM, and nVidia didn't have unified shaders working in a usable form, what would they do differently? The end result of a custom GPU would look a lot like G70, no? Vertex units + pixel units + texture units.
XDR RAM? It should provide a lot more bandwidth per pin than GDDR3; AFAIR Cell uses 4*16 pins for 25 GB/s. So either the PS3 could do away with the RAM packed on top of the GPU (which is done to save motherboard layers), or it could have bandwidth in excess of a 256-bit GDDR solution with only 128 pins. Maybe even both, as XDR compensates for different trace lengths.
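To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The figures are assumptions: the commonly cited 25.6 GB/s for Cell's 64 XDR data pins, and 700 MHz GDDR3 moving 1.4 Gbit/s per data pin:

```python
# Back-of-the-envelope bandwidth-per-pin comparison.
# Assumed figures: Cell's XDR interface is 4*16 = 64 data pins at
# 25.6 GB/s total; GDDR3 at 700 MHz moves 1.4 Gbit/s per data pin.

XDR_PINS = 64
XDR_BW_GBS = 25.6

xdr_gbit_per_pin = XDR_BW_GBS * 8 / XDR_PINS    # ~3.2 Gbit/s per pin
gddr3_gbit_per_pin = 1.4

# Hypothetical 128-pin XDR interface vs a 256-bit GDDR3 bus:
xdr_128_gbs = 128 * xdr_gbit_per_pin / 8        # ~51.2 GB/s
gddr3_256_gbs = 256 * gddr3_gbit_per_pin / 8    # ~44.8 GB/s

print(f"XDR:   {xdr_gbit_per_pin:.1f} Gbit/s/pin -> {xdr_128_gbs:.1f} GB/s on 128 pins")
print(f"GDDR3: {gddr3_gbit_per_pin:.1f} Gbit/s/pin -> {gddr3_256_gbs:.1f} GB/s on 256 pins")
```

So on these assumptions, even a 128-pin XDR interface would edge out a 256-bit GDDR3 bus.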
 
Not that I'm a genius in these matters, but I often hear of situations where Cell is helping the GPU with graphics work, in some cases redundantly. It would seem to me that a custom design would not have parts that are less efficient than the CPU the chip is paired with.

I could be way off though. :???:

How is Cell helping the GPU suddenly a bad thing? Extra power is always better, right (assuming cost is not an issue)? Even if I were early in the design cycle, I'm sure I'd ask the same question (whether I can throw in Cell power), since it is known to be a math powerhouse.
 
How is Cell helping the GPU suddenly a bad thing? Extra power is always better, right (assuming cost is not an issue)? Even if I were early in the design cycle, I'm sure I'd ask the same question (whether I can throw in Cell power), since it is known to be a math powerhouse.

http://www.beyond3d.com/forum/showthread.php?t=37033

If I understand correctly, optimally Cell helps cull polygons for RSX. Why would this ability not have been part of the GPU, since the functionality would be needed for almost any graphics-heavy app? It's great that Cell can help in this regard, but if Cell weren't tied down helping with things such as this, it could be doing other things such as improved physics calculations, collision detection, etc.

Just seems a bit of a waste to me is all.
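For illustration, here's a minimal sketch of the kind of CPU-side culling being discussed: rejecting back-facing triangles before they are ever submitted to the GPU. Plain Python standing in for SPU SIMD code; `triangles` and `view_dir` are hypothetical inputs:

```python
def cull_backfaces(triangles, view_dir):
    """Drop triangles whose face normal points away from the viewer,
    so the GPU never spends setup/raster time on them. Each triangle
    is three (x, y, z) vertex tuples; view_dir is the view direction
    in the same space. An SPU would do this with SIMD over batches."""
    visible = []
    for v0, v1, v2 in triangles:
        e1 = [v1[i] - v0[i] for i in range(3)]
        e2 = [v2[i] - v0[i] for i in range(3)]
        # Face normal = e1 x e2 (cross product)
        n = [e1[1] * e2[2] - e1[2] * e2[1],
             e1[2] * e2[0] - e1[0] * e2[2],
             e1[0] * e2[1] - e1[1] * e2[0]]
        # Keep only triangles whose normal opposes the view direction
        if sum(n[i] * view_dir[i] for i in range(3)) < 0:
            visible.append((v0, v1, v2))
    return visible
```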
 
How is Cell helping the GPU suddenly a bad thing? Extra power is always better, right (assuming cost is not an issue)? Even if I were early in the design cycle, I'm sure I'd ask the same question (whether I can throw in Cell power), since it is known to be a math powerhouse.

Because if the GPU could handle it all on its own, Cell could be used for other things? That's one way to look at it. There are two sides to everything. Personally I would wonder if bumping up that part of the GPU was considered in order to prevent this. I'd rather have Cell and the GPU each handle as much as they can on their own, separately. While one is doing one thing, the other can be doing another.
 
PowerVR probably doesn't have anything in the high end to offer. Sega was (apparently) supposed to use something from them for the Lindbergh arcade board, but went with pretty boring off-the-shelf PC parts (including Nvidia) instead. I think I remember reading something from a Sega rep saying that if they had kept waiting for PowerVR, it would have been a long time before anything got done. Can't find the link for that now though.
 
As I ask of everyone who raises this (which no-one answers ;)), what would you expect a custom chip to look like? The GPU would want to have vertex units, pixel units, texture units, etc. If Sony ruled against eDRAM, and nVidia didn't have unified shaders working in a usable form, what would they do differently? The end result of a custom GPU would look a lot like G70, no? Vertex units + pixel units + texture units.
I would expect several changes. For one, I'd really expect a blendable 32bpp HDR format (compatible with AA, of course). Whether it's similar to FP10 or uses a superior RGBE format, it should be there. I'm also unsure why Sony would rule out eDRAM. They seem to be in a better position to use it than Microsoft due to their manufacturing experience. I'd also expect higher triangle setup speed, because console games generally have workloads with a lower pixel:vertex ratio than PC games. (I guess this last point is a bit moot since Sony was targeting 1080p as the most common resolution.)
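As a reference point for the RGBE suggestion, here's a toy sketch of the shared-exponent idea: three 8-bit mantissas plus one shared, biased exponent in 32 bits. This is the classic Radiance scheme, not any particular hardware format:

```python
import math

def rgbe_encode(r, g, b):
    """Pack a linear HDR color into four bytes: 8-bit mantissas for
    R, G, B plus one shared, biased exponent."""
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(m)     # m == mantissa * 2**exponent
    scale = mantissa * 256.0 / m
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_decode(r8, g8, b8, e8):
    """Recover approximate linear HDR values."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    scale = math.ldexp(1.0, e8 - 128 - 8)  # 2**exponent / 256
    return (r8 * scale, g8 * scale, b8 * scale)
```

The win over plain 8:8:8:8 is dynamic range at the same 32-bit footprint; the catch (and why blendability matters) is that you can't just linearly blend the encoded bytes.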

My guess is NVidia dangled the option of a G80 based GPU for PS3, but was asking too much for it for Sony to stomach and the schedule would be too tight considering that Sony probably wanted to launch PS3 earlier. G7x had a nice compact architecture that for the most part does the job, and given how far behind R4xx was in technology and how successful the PS2 was, Sony probably wasn't expecting much competition from XB360.
 
JBenchmark shows that Toshiba, designer of the Sharp 903's GPU, designs graphics hardware competitive enough to compare favorably with nVidia, designer of the Sony Ericsson W900i's GPU, in the one market in which they both compete.

http://www.jbenchmark.com/result.jsp?benchmark=hd

As for the CELL-based GPU: while an alternate GPU design for PS3, the RS, did come from Toshiba, Ken Kutaragi revealed research Sony did on the possibility of a CELL-based graphics processor:
One of our ideas was to equip two Cell chips and to use one as a GPU, but we concluded that there were differences between the Cell to be used as a computer chip and as a shader, since a shader should be graphics-specific.

No system designer that competed against the Dreamcast, Kyro, or MBX devices wanted to end up with a less effective architecture, obviously, so ignorance of the benefits of PowerVR is the most likely reason for companies not to have selected them.
 
TheChefO said:
http://www.beyond3d.com/forum/showthread.php?t=37033

If I understand correctly, optimally Cell helps cull polygons for RSX. Why would this ability not have been part of the GPU, since the functionality would be needed for almost any graphics-heavy app? It's great that Cell can help in this regard, but if Cell weren't tied down helping with things such as this, it could be doing other things such as improved physics calculations, collision detection, etc.

Just seems a bit of a waste to me is all.

Sure, but it could also mean that nVidia refused to customize RSX too much (since they would want to focus on G80), or that it would cost Sony more to change RSX than to have Cell cover the duty. It does not necessarily mean that RSX was a last-minute decision.

Because if the GPU could handle it all on its own, Cell could be used for other things? That's one way to look at it. There are two sides to everything. Personally I would wonder if bumping up that part of the GPU was considered in order to prevent this. I'd rather have Cell and the GPU each handle as much as they can on their own, separately. While one is doing one thing, the other can be doing another.

See above. As a side note, I thought Cell could also be used for processing vertices, or certain/limited post-processing... not just culling them.
 
Sure, but it could also mean that nVidia refused to customize RSX too much (since they would want to focus on G80), or that it would cost Sony more to change RSX than to have Cell cover the duty. It does not necessarily mean that RSX was a last-minute decision.



See above. As a side note, I thought Cell could also be used for processing vertices, or certain/limited post-processing... not just culling them.

If you're designing a chip from the ground up for a platform as influential as "Playstation", wouldn't you implement every feature (within reason) that would help the PS3 be as competitive/dominant as possible? To me this sounds more like:

1) Sony plans the PS3 design with Cell = GPU
2) Cell = GPU = not so good
3) Sony calls up nVidia (late in the design process of the PS3)
4) nVidia says "we're busy, but we can let you hop on our latest chip that should meet your timeframe"
5) Sony looks around, realizes they have no other option, and says "ok"
6) Sony adds the (costly) tech to the designs
7) Tech demos from the early Cell = GPU work are finished or almost finished, so they use them at E3

Ideally it would have been:
1) Sony gets on the phone with nVidia to set up a meeting to discuss the next Playstation
2) design is implemented for use within a system considering cost/performance
3) overall designed system is either less costly or higher performance

Not saying the PS3 is inferior, etc., just that it seems the GPU was not on the drawing board from day one.
 
If you're designing a chip from the ground up for a platform as influential as "Playstation", wouldn't you implement every feature (within reason) that would help the PS3 be as competitive/dominant as possible? To me this sounds more like:

1) Sony plans the PS3 design with Cell = GPU
2) Cell = GPU = not so good
3) Sony calls up nVidia (late in the design process of the PS3)
4) nVidia says "we're busy, but we can let you hop on our latest chip that should meet your timeframe"
5) Sony looks around, realizes they have no other option, and says "ok"
6) Sony adds the (costly) tech to the designs
7) Tech demos from the early Cell = GPU work are finished or almost finished, so they use them at E3

Ideally it would have been:
1) Sony gets on the phone with nVidia to set up a meeting to discuss the next Playstation
2) design is implemented for use within a system considering cost/performance
3) overall designed system is either less costly or higher performance

Not saying the PS3 is inferior, etc., just that it seems the GPU was not on the drawing board from day one.

I've said this: the original plan was for Toshiba to make a GPU, not to use Cell.
 
I've said this: the original plan was for Toshiba to make a GPU, not to use Cell.
I think I read somewhere that there was sort of a fleeting intent to use Cell at first, but they nixed this idea very early on and hoped for Toshiba to deliver the goods.

Whether this is true or not, I doubt NVidia was actually contracted to design RSX early on as some posters are asserting because I don't think you can hide that sort of thing from investors legally for long. IMO the actual work (i.e. significant man-hours and cost) probably started well into 2004, possibly after summer since the press release was Dec. 7.

How long did it take for RSX to make its way into dev kits? NVidia has a very good execution record, so I can't imagine they've been working on it since 2003.
 
When was the last time PowerVR built anything that didn't end up in a mobile phone? Kyro II? The Dreamcast?
Kyro II.
When was the last time either ATI/AMD or NVIDIA built embedded graphics cores that didn't in comparison look like scrap metal?
ban25 said:
Are you serious about this thread?
Yes. Bang per amount of bandwidth is a real-world problem, and hence this is a real-world thread.
 
Kyro II.
When was the last time either ATI/AMD or NVIDIA built embedded graphics cores that didn't in comparison look like scrap metal?
An embedded core is a lot different from a GPU for the PS3. If NVidia can screw up programmable shaders as much as they did with NV30, then it's anything but trivial to get the same shading power out of 240 mm² as RSX does. There are a lot of parts in high-end GPU architectures that don't scale down well to handheld devices, and similarly, having the best handheld GPU (if true) doesn't mean you can design the best PS3 GPU.

Going back to your original thread, I seriously doubt NVidia would be willing to share G7x IP with anyone, especially the shader architecture, memory controller, and the finer aspects of how to get everything running at good efficiency.
 
If you're designing a chip from the ground up for a platform as influential as "Playstation", wouldn't you implement every feature (within reason) that would help the PS3 be as competitive/dominant as possible? To me this sounds more like:

Are you implying that RSX is not competitive enough here?

1) Sony plans the PS3 design with Cell = GPU
2) Cell = GPU = not so good
3) Sony calls up nVidia (late in the design process of the PS3)
4) nVidia says "we're busy, but we can let you hop on our latest chip that should meet your timeframe"
5) Sony looks around, realizes they have no other option, and says "ok"
6) Sony adds the (costly) tech to the designs
7) Tech demos from the early Cell = GPU work are finished or almost finished, so they use them at E3

Ideally it would have been:
1) Sony gets on the phone with nVidia to set up a meeting to discuss the next Playstation
2) design is implemented for use within a system considering cost/performance
3) overall designed system is either less costly or higher performance

Not saying the PS3 is inferior, etc., just that it seems the GPU was not on the drawing board from day one.

Sorry... but your theory of using Cell as a GPU is flawed. Devs have mentioned that there was no such thing.

As for the rest of the steps, I have no idea. But you can see a previous discussion on this very topic starting right around here:
http://www.beyond3d.com/forum/showpost.php?p=637740&postcount=451

And in that thread, you can find this: http://money.cnn.com/2003/08/27/commentary/game_over/column_gaming/?cnn=yes ... thanks to xbd.

Hmmm... if you are so curious about RSX, why don't you sign up for GDC this March? I think there is a session on it.
 
No system designer that competed against the Dreamcast, Kyro, or MBX devices wanted to end up with a less effective architecture, obviously, so ignorance of the benefits of PowerVR is the most likely reason for companies not to have selected them.
You seriously think that all these companies designing sophisticated electronic devices are ignorant of TBDR? Are you crazy?

We've had this debate many times. Read this thread again if you forgot. You have no basis to say IMR is less efficient than TBDR or that PowerVR could come up with something faster than RSX for today's workloads.
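For anyone new to the debate, the claimed TBDR advantage mostly comes down to overdraw. Here's a toy model with assumed numbers, purely illustrative (and note that real IMRs claw much of this back with early-Z and front-to-back sorting):

```python
# Toy overdraw model. An IMR shades every fragment that passes the
# depth test at submission time; a TBDR resolves visibility per tile
# first and shades each covered pixel of opaque geometry about once.
# Assumed numbers, purely illustrative.

pixels = 1280 * 720
overdraw = 3.0           # assumed average depth complexity

imr_shaded = pixels * overdraw    # worst case: back-to-front submission
tbdr_shaded = pixels * 1.0        # opaque geometry, visibility first

print(f"IMR  shades up to ~{imr_shaded / 1e6:.1f} M fragments")
print(f"TBDR shades about ~{tbdr_shaded / 1e6:.1f} M fragments")
```

Which is exactly why the argument never settles: the gap depends entirely on how much overdraw survives the IMR's own hidden-surface optimizations for a given workload.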
 
An embedded core is a lot different from a GPU for the PS3. If NVidia can screw up programmable shaders as much as they did with NV30, then it's anything but trivial to get the same shading power out of 240 mm² as RSX does. There are a lot of parts in high-end GPU architectures that don't scale down well to handheld devices, and similarly, having the best handheld GPU (if true) doesn't mean you can design the best PS3 GPU.
That was tit-for-tat.
Just because Img Tech doesn't compete in the desktop PC space doesn't mean they don't have the technology to render shiny pixels for a closed box. Likewise, just because ATI/AMD and NVIDIA dominate the PC graphics space doesn't mean they can outdo everyone else in every space at every design point.

The PC market is very different from closed boxes. Observing that PowerVR has not (re-)entered the PC market and extrapolating from there that they don't have a competitive hardware architecture is short-sighted.
Mintmaster said:
Going back to your original thread, I seriously doubt NVidia would be willing to share G7x IP with anyone, especially the shader architecture, memory controller, and the finer aspects of how to get everything running at good efficiency.
You sure? I have been quite sure that Sony fabs RSX and can have a peek inside, do shrinks, etc., though obviously with some agreement in place that they will only use it to build the PS3.
 
I think I read somewhere that there was sort of a fleeting intent to use Cell at first, but they nixed this idea very early on and hoped for Toshiba to deliver the goods.

Whether this is true or not, I doubt NVidia was actually contracted to design RSX early on as some posters are asserting because I don't think you can hide that sort of thing from investors legally for long. IMO the actual work (i.e. significant man-hours and cost) probably started well into 2004, possibly after summer since the press release was Dec. 7.

How long did it take for RSX to make its way into dev kits? NVidia has a very good execution record, so I can't imagine they've been working on it since 2003.

You've missed the massive threads we've had here on that very topic, but they're around to be found, and they discuss the very legal elements you're alluding to. Suffice it to say that neither Sony nor NVidia would need to disclose anything until it became material.

Anyway, this is from September of '03 - feel free to make of it what you will, but personally I find the news reports are always behind the curve rather than ahead of it, so I would expect that whatever work was performed at that time, talks and conceptualization were probably decently far along by late '03, if only for the purposes of determining whether or not Sony would indeed go with NVidia.

http://money.cnn.com/2003/08/27/commentary/game_over/column_gaming/
 
You've missed the massive threads we've had here on that very topic, but they're around to be found, and they discuss the very legal elements you're alluding to. Suffice it to say that neither Sony nor NVidia would need to disclose anything until it became material.
Thanks for the info, but you're basically emphasizing my point. NVidia is not going to invest millions of dollars in the design until it does indeed become material. They'll make proposals, and Sony will mull over them for a while wondering if they can make a better or cheaper design in house, but NVidia is not going to seriously work on it until they get the contract.
 
The PC market is very different from closed boxes. Observing that PowerVR has not (re-)entered the PC market and extrapolating from there that they don't have a competitive hardware architecture is short-sighted.
I disagree. Yes, they're somewhat different, but the PC market has requirements that are a lot closer to those of the PS3 than those of a handheld.

Your link also doesn't tell us anything about its real-world shader performance. Look at Matrox, S3, Trident, etc.: specs/features don't tell you much about the true perf/$ of a design.

You sure? I have been quite sure that Sony fabs RSX and can have a peek inside, do shrinks, etc., though obviously with some agreement in place that they will only use it to build the PS3.
I don't think you need to know the architecture for that, especially since Sony won't be increasing the clock speed. Even if you're right, Sony poses no threat to NVidia's other markets, whereas PowerVR does, especially with IGPs (where the space-efficient G7x architecture could be useful).
 