RSX - Best guess at what 300 million transistors are for

Is AA an issue given the higher resolution and that we'll have mipmapping on textures this time?
 
Mikage said:
"PS3 has not only 1 GPU, but 1+7 GPUs."
http://www.watch.impress.co.jp/game/docs/20050519/ps3_r.htm

David Kirk (nVIDIA) said:

1) RSX can use the XDR-RAM (256MB) as VRAM too.

2) The 7 SPEs and RSX can work together as one combined GPU, with the SPEs acting as vertex shaders, post-processing rendering results from RSX, etc.


Sorry, my English is poor. I expect Mr. one will translate this article.
If it uses the SPEs for vertex processing, would RSX be able to allocate its own vertex processors for pixel processing?
 
Luminescent said:
If it uses the SPEs for vertex processing, would RSX be able to allocate its own vertex processors for pixel processing?
R500 can ;) RSX being a 'classic' architecture, I don't think it will be able to re-use its vertex shader engines as pixel shading processors
 
nAo said:
Where is ONE when you need him? HEELLLLLLLP :)
Here's what I understand (keep in mind that my comprehension of the language is awfulistic):
The GPU can address the 256MB of XDR.
The RSX is connected to the CPU via FlexIO (CPU to GPU bandwidth: 20GB/s; GPU to CPU: 15GB/s; a rough budget sketch follows below).
The RSX can render to the XDR, and from there the Cell SPEs could do post-processing effects(?) (HDR, motion blur).
They give an example of possible GPU/SPE complementary work: the SPEs render a reflection map while the RSX renders the whole scene, then the two images are blended.
Nvidia compares the RSX-SPE teamwork to SLI.
There's a part about higher-order surfaces, but I don't get how it's supposed to work (SPEs are involved).
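
To put those figures in perspective, here's a quick back-of-the-envelope sketch. Only the bandwidth and RAM numbers come from the summary above; the 60fps target and all the identifiers are my own assumptions, nothing official:

Code:
/* Back-of-the-envelope per-frame budgets from the figures quoted above.
   Only the bandwidth/RAM numbers come from the article; the 60fps target
   and all identifiers are illustrative. */
#include <stdio.h>

int main(void)
{
    const double xdr_main_ram_mb = 256.0; /* XDR, also addressable by RSX */
    const double cpu_to_gpu_gb_s = 20.0;  /* FlexIO, Cell -> RSX          */
    const double gpu_to_cpu_gb_s = 15.0;  /* FlexIO, RSX -> Cell          */
    const double fps             = 60.0;  /* assumed target frame rate    */

    printf("XDR pool: %.0f MB shared by code, data and any RSX targets\n",
           xdr_main_ram_mb);
    printf("Cell -> RSX budget per frame: %.3f GB\n", cpu_to_gpu_gb_s / fps);
    printf("RSX -> Cell budget per frame: %.3f GB\n", gpu_to_cpu_gb_s / fps);
    return 0;
}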

they don't talk about AA, they don't talk about AA... bad sign
They didn't talk about fragment programs either. That doesn't mean the RSX won't have pixel shading capabilities. :p
 
nAo said:
Luminescent said:
If it uses the SPEs for vertex processing, would RSX be able to allocate its own vertex processors for pixel processing?
R500 can ;) RSX being a 'classic' architecture, I don't think it will be able to re-use its vertex shader engines as pixel shading processors
Well, at least it was worth a shot. ;) There's probably a way of getting around it and hacking together a solution, but I doubt it will ever be necessary.

It was mentioned previously in this thread that the connection between RSX and the SPUs may make it relatively feasible for the SPUs to take care of surface tessellation and displacement mapping.
 
They didn't talk about fragment programs either. That doesn't mean the RSX won't have pixel shading capabilities. :p
But they talked about pixel shading at the Sony conference; AA hasn't been mentioned on any occasion so far :(
Thank you for the translation anyway ;)
 
I'm still reading the article, and the part about the RSX/SPE teamwork sounds a lot like those "chaperone" rumors from a few months ago.

nAo said:
But they talked about pixel shading at the Sony conference; AA hasn't been mentioned on any occasion so far
Thank you for the translation anyway
It's not a GPU made by Toshiba, so image quality shouldn't be a problem, and I'm sure that in real-world situations 2xAA (or the infamous new version of Quincunx) should be affordable, if not free.
And, you're welcome. :D
I'm at home, so I have a dictionary installed. ;)
 
Quick question for you guys. RSX's bus to the memory is 128-bit, correct? I keep seeing a number of ppl throwing out 256-bit (Anand for example) and there's no way that's possible considering that it's using GDDR3 running at 1400MHz and only kicking out 22.4GB/s.
 
Since that interview with David Kirk conducted by Zenji Nishikawa contains so few words from Kirk that it's almost entirely technical illustration for readers and speculation by Nishikawa, I have little to add to Mikage's post. Kirk says nothing specific about RSX, only about how this RSX-CELL system works. Anyway, here's a slightly more descriptive version of the latter half of the article -

David Kirk: The SPEs and RSX can work together. The SPEs can preprocess graphics data in main memory or postprocess rendering results sent from RSX.

Nishikawa's speculation: For example, when you have to create a lake scene by multi-pass rendering with multiple render targets, the SPEs can render a reflection map while RSX does other things. Since a reflection map requires less precision, it's not much of an overhead even though you have to load the related data into both main RAM and VRAM. It works like SLI between the SPEs and RSX.
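
If that guess is right, a frame might be organised roughly like this. Everything below (types, function names, resolutions) is made up purely to illustrate the split; none of it is a real PS3 API:

Code:
/* Hypothetical frame split: the SPEs build a low-precision reflection map in
   XDR while RSX renders the main scene into VRAM, then RSX blends the two.
   Every type and function here is invented for illustration only. */
#include <stdio.h>

typedef struct { int width, height; } Surface;

/* Stubs standing in for the real work; on hardware these would be SPE jobs
   and RSX command-buffer submissions. */
static void spe_render_reflection_map(Surface *r)
{
    printf("SPEs: %dx%d reflection map in XDR\n", r->width, r->height);
}

static void rsx_render_scene(Surface *s)
{
    printf("RSX: %dx%d main scene in VRAM\n", s->width, s->height);
}

static void rsx_blend(Surface *scene, Surface *reflection)
{
    (void)scene; (void)reflection;
    printf("RSX: blend reflection into the water surface\n");
}

int main(void)
{
    Surface scene      = { 1280, 720 }; /* assumed main render target   */
    Surface reflection = { 512, 512 };  /* lower precision is fine here */

    /* "SLI-style" split: both units work on the same frame (sequential
       here for simplicity; in practice they would run in parallel). */
    spe_render_reflection_map(&reflection);
    rsx_render_scene(&scene);
    rsx_blend(&scene, &reflection);
    return 0;
}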

David Kirk: Post-effects such as motion blur, depth-of-field simulation, and bloom in HDR rendering can be done by the SPEs processing RSX-rendered results.

Nishikawa's speculation: RSX renders a scene into main RAM, then the SPEs add effects to the frames there. Or you can composite SPE-created frames with an RSX-rendered frame.
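
A crude sketch of how that work might be parcelled out, assuming the RSX output sits in XDR and each of the 7 SPEs takes one horizontal strip of the frame. The strip split and the tone-map placeholder are mine, not anything from the article:

Code:
/* Sketch of SPE post-processing over an RSX-rendered frame sitting in XDR.
   The tone-map stands in for bloom / motion blur / depth-of-field work;
   the strip split across 7 SPEs is an assumption, not from the article. */
#include <stdio.h>
#include <stdlib.h>

#define WIDTH    1280
#define HEIGHT    720
#define NUM_SPES    7

static float tone_map(float hdr)            /* placeholder post-effect */
{
    return hdr / (1.0f + hdr);
}

static void spe_postprocess_strip(float *frame, int y0, int y1)
{
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < WIDTH; ++x)
            frame[y * WIDTH + x] = tone_map(frame[y * WIDTH + x]);
}

int main(void)
{
    float *frame = calloc((size_t)WIDTH * HEIGHT, sizeof *frame);
    if (!frame) return 1;

    /* Hand each SPE one horizontal strip (sequential here; on the real
       machine the seven strips would be processed in parallel). */
    int rows_per_spe = HEIGHT / NUM_SPES;
    for (int spe = 0; spe < NUM_SPES; ++spe) {
        int y0 = spe * rows_per_spe;
        int y1 = (spe == NUM_SPES - 1) ? HEIGHT : y0 + rows_per_spe;
        spe_postprocess_strip(frame, y0, y1);
    }

    printf("post-processed %d rows in %d strips\n", HEIGHT, NUM_SPES);
    free(frame);
    return 0;
}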

David Kirk: Let the SPEs do vertex processing, then let RSX render the result.

Nishikawa's speculation: You can implement a collision-aware tessellator and dynamic LOD on the SPEs.
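
The dynamic-LOD half of that is the easy part to picture. A toy version of the per-patch decision an SPE job might make, with completely made-up thresholds and levels:

Code:
/* Toy dynamic LOD: pick a tessellation level per terrain patch from its
   distance to the camera. A "collision-aware" version would also raise the
   level for patches near colliding objects. All thresholds are made up. */
#include <stdio.h>

static int tessellation_level(float distance_to_camera)
{
    if (distance_to_camera < 10.0f) return 32; /* close: dense grid */
    if (distance_to_camera < 50.0f) return 8;  /* mid range         */
    return 2;                                  /* far: coarse grid  */
}

int main(void)
{
    const float patch_distance[] = { 4.0f, 25.0f, 120.0f };
    for (int i = 0; i < 3; ++i) {
        int level = tessellation_level(patch_distance[i]);
        printf("patch %d at %5.1f units -> %2d x %2d grid\n",
               i, patch_distance[i], level, level);
    }
    return 0;
}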

David Kirk: SPE and GPU work together, which allows physics simulation to interact with graphics.

Nishikawa's speculation: To express water wavelets, a normal map can be generated by a wave physics simulation with a height-map texture. This job is done by the SPEs and RSX in parallel.
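
The last step of that, turning a simulated height field into a normal map, is standard stuff. A minimal central-differences version, with the wave simulation itself left out and replaced by a static ripple so it runs standalone:

Code:
/* Turn a height field into a normal map by central differences. The heights
   would come from whatever wave simulation the SPEs run; here they're just a
   static ripple so the example is self-contained. */
#include <stdio.h>
#include <math.h>

#define N 64   /* N x N height map, wrapping at the edges */

static void normal_at(const float h[N][N], int x, int y, float n[3])
{
    float dx  = h[y][(x + 1) % N] - h[y][(x + N - 1) % N];
    float dy  = h[(y + 1) % N][x] - h[(y + N - 1) % N][x];
    float len = sqrtf(dx * dx + dy * dy + 4.0f);
    n[0] = -dx / len;  n[1] = -dy / len;  n[2] = 2.0f / len;
}

int main(void)
{
    static float height[N][N];
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            height[y][x] = 0.1f * sinf(0.4f * (float)x) * cosf(0.4f * (float)y);

    float n[3];
    normal_at(height, N / 2, N / 2, n);
    printf("normal at the centre texel: (%.3f, %.3f, %.3f)\n", n[0], n[1], n[2]);
    return 0;
}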
 
PC-Engine said:
So basically part of CELL will need to be used for part of the graphics rendering pipeline.

My guess is that nVidia could integrate that into their custom Cg for the PS3 so it would be seamless to developers.
 
one said:
Since that interview with David Kirk conducted by Zenji Nishikawa contains so few words from Kirk that it's almost entirely technical illustration for readers and speculation by Nishikawa, I have little to add to Mikage's post. Kirk says nothing specific about RSX, only about how this RSX-CELL system works. [...]

Neat. Ties in with my speculation here,

Code:
[XDR]<--->[L2-Cache]--->[SPE:1-7]--->[SPE:8]--->[GPU]---> Output
            ^__________________________|__________|

http://www.beyond3d.com/forum/viewtopic.php?p=469475#469475

8)
 
PC-Engine said:
So basically part of CELL will need to be used for part of the graphics rendering pipeline.

Need? I don't think it's a need issue, but an optional extra. If it was a need issue, for example, that would suggest the GPU was all pixel shaders (e.g. you had to do vertex shading on cell), but I don't believe that's the case.

I think they're just talking about boosting an already powerful GPU with further help from Cell. Good stuff :)

Storing the framebuffer in XDR would also be useful... save some of your less latency-prone bandwidth for just non-framebuffer GPU stuff?

Surely some other western sites got an interview with Kirk on RSX?
 
So basically part of CELL will need to be used for part of the graphics rendering pipeline.

Why do you need to make it sound bad?

Unless you think PS3 will be CPU-bound?

Just thinking of what freedom it gives. I've seen Novodex physics involving heavy vertex creation and destruction to create completely on-the-fly chunks of geometry based on material properties, to basically make things explode.

Also, if I want to be able to massively post-process the whole screen, doing 9x9 convolutions, SFX, and so on...

This is a good feature, isn't it?
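
For what it's worth, a naive 9x9 convolution over a 720p frame is roughly 1280*720*81 ~ 75 million multiply-adds per pass, which is exactly the kind of bulk work the SPEs should chew through. A plain, non-separable version just to show the shape of it (buffer sizes and the box-blur kernel are my choice):

Code:
/* Naive 9x9 convolution of a single-channel 720p frame (box-blur weights).
   On an SPE this would run per tile out of local store, with the kernel
   chosen for the actual effect. Sizes and weights here are illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define W 1280
#define H 720
#define K 9         /* 9x9 kernel, radius 4 */
#define R (K / 2)

static void convolve_9x9(const float *src, float *dst)
{
    const float weight = 1.0f / (K * K);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float sum = 0.0f;
            for (int ky = -R; ky <= R; ++ky) {
                for (int kx = -R; kx <= R; ++kx) {
                    int sx = x + kx, sy = y + ky;
                    if (sx < 0) sx = 0; else if (sx >= W) sx = W - 1; /* clamp */
                    if (sy < 0) sy = 0; else if (sy >= H) sy = H - 1;
                    sum += src[sy * W + sx];
                }
            }
            dst[y * W + x] = sum * weight;
        }
    }
}

int main(void)
{
    float *src = calloc((size_t)W * H, sizeof *src);
    float *dst = calloc((size_t)W * H, sizeof *dst);
    if (!src || !dst) return 1;

    src[(H / 2) * W + W / 2] = 1.0f;  /* a single bright pixel */
    convolve_9x9(src, dst);
    printf("centre after blur: %f (should be 1/81)\n", dst[(H / 2) * W + W / 2]);

    free(src);
    free(dst);
    return 0;
}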
 
Ardrid said:
Quick question for you guys. RSX's bus to the memory is 128-bit, correct? I keep seeing a number of ppl throwing out 256-bit (Anand for example) and there's no way that's possible considering that it's using GDDR3 running at 1400MHz and only kicking out 22.4GB/s.

Yes, it's a 128-bit GDDR3 bus at 700 MHz (1400 MHz effective).

1400 MT/s * 128 bits / 8 = 22.4 GB/s
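
Written out in full, using the effective data rate rather than the core clock:

Code:
/* The 22.4 GB/s figure, written out from bus width and effective data rate. */
#include <stdio.h>

int main(void)
{
    const double bus_width_bits = 128.0;  /* GDDR3 bus width          */
    const double data_rate_mts  = 1400.0; /* 700 MHz DDR -> 1400 MT/s */

    /* 128 bits * 1400 MT/s = 179200 Mbit/s = 22400 MB/s = 22.4 GB/s */
    double gb_per_s = bus_width_bits * data_rate_mts / 8.0 / 1000.0;
    printf("GDDR3 bandwidth: %.1f GB/s\n", gb_per_s);
    return 0;
}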
 
PC-Engine said:
So basically part of CELL will need to be used for part of the graphics rendering pipeline.

How in the world did you come to this conclusion? I hardly think it needs to be used as part of the rendering pipeline. It gives developers the option to though.
 
This also contradicts PCE's idea that Sony used an off-the-shelf GPU as a last-minute fix. If the G70 is an off-the-shelf PC component, but it can't run without Cell's help, it wouldn't do too well in the PC space, would it? Or did Sony customise the chip for PS3 by removing functionality?

I can't see how you can have it both ways.
 
I think it's very closed-minded to think that Cell will not take part in the graphics calculations.

Cell itself is just sitting there screaming "Please use me, I want to calculate lots of vertices all over the place and move them realistically!!".

And seeing how the SPEs are both more flexible and much faster than GPU vertex shaders will be for a long time, it can only be a Good Thing (TM).

But it's obvious that PC-Engine will find things to complain about even now.
 
I think PS3's current setup looks very versatile. I can understand Sony wanting to go with a strong conventional GPU now, while also allowing interaction between the two for all that programmable FP power to be used wherever the devs want. In something like SOTC or ICO, where the amount happening on screen is limited due to the genre, the visuals are going to be breathtaking and Cell can help that with bits of raytracing and goodness knows what else. Whereas in other complicated games, Cell will be on physics and AI and leave the visuals to the still-very-able RSX.
 