Official speculate before it's too late: RSX Spec thread

Edge said:
That's a given when connected to CELL, but the memory bus will not be using Rambus technology.

Sony seems to have a good relationship with Rambus, and the technology they have fits well into a console. A GPU thrives on bandwidth, and Rambus has a solution that delivers it in spades.
 
DeanoC said:
Lol

That diagram appears to be missing the quad hamster wheels...
There has been a lot of dev pressure to replace one of the hamster wheels with a coffee machine to improve the development environment and SDK.

Bugger, wonder if I've let Sony's big secret out now...

This forum must be immensely entertaining for you :devilish:

I don't even care if RSX is a piece of crap on toast anymore...I just would like to know the details.
 
ROG27 said:
This forum must be immensely entertaining for you :devilish:

I don't even care if RSX is a piece of crap on toast anymore...I just would like to know the details.

Well, if the RSX is based on next-gen nVidia technology, Sony wouldn't be allowed to talk about it.
 
ROG27 said:
This forum must be immensely entertaining for you :devilish:

I don't even care if RSX is a piece of crap on toast anymore...I just would like to know the details.
Want something to fill up your time? Reverse engineer the NVIDIA G70 driver and start to figure out how it really works (not the simple API/PR view that gets repeated ad nauseam), then you can start to speculate on RSX differences...

All NVIDIA tech is evolutionary, so whatever the differences, they will be related to current hardware. This actually applies to PC chips as well...
 
DeanoC said:
Want something to fill up your time? Reverse engineer the NVIDIA G70 driver and start to figure out how it really works (not the simple API/PR view that gets repeated ad nauseam), then you can start to speculate on RSX differences...

All NVIDIA tech is evolutionary, so whatever the differences, they will be related to current hardware. This actually applies to PC chips as well...

Hah! I knew it!

The driver can load balance between pixel shader units and vertex shader units and probably SPUs too! So basically it's a unified shader architecture at the 'software' level!



[Image: RSXspecs.jpg]


Note: "independent vertex/pixel shaders"

"independent" is a 'strange' word to describe them together in that context! There are some NVIDIA patents to support this too...

Btw, happy new year everyone!
 
Jaws said:
Note: "independent vertex/pixel shaders"
"independent" is a 'strange' word to describe them together in that context! There are some NVIDIA patents to support this too...
Not really strange.. it's just pointing out that RSX is not using ATI/MS's unified shader approach. That's all..

Btw, I think you've misinterpreted DeanoC's previous comments about the driver a little.. lol. :)

Dean
 
Why would RSX want SPEs? What's wrong with using vertex shaders? They're a much more efficient use of silicon when it comes to specialised vertex work. Would there be any advantage to having massive programmability at a cost of a large drop in raw vertex performance?

As far as I'm concerned, the Visualizer idea was just a concept on how to produce a Cell-like GPU so the same fab can run off CPUs and GPUs, with very close CPU-GPU interrelation, which saves buying in components and facilitates a distributed rendering solution for networked Cells. I don't think the Visualizer represents a magic uber-GPU that's better than existing solutions, and given a choice between Visualizer and RSX/Xenos, the latter would give better bang per buck.
 
DeanA said:
Not really strange.. it's just pointing out that RSX is not using ATI/MS's unified shader approach. That's all..

Btw, I think you've misinterpreted DeanoC's previous comments about the driver a little.. lol. :)

Dean

Hah! My bait worked!
 
Shifty Geezer said:
Why would RSX want SPEs? What's wrong with using vertex shaders? They're a much more efficient use of silicon when it comes to specialised vertex work. Would there be any advantage to having massive programmability at a cost of a large drop in raw vertex performance?
...

My hangover's too crippling for me to dig those NVIDIA patents up, but IIRC, there was a multi-threading driver/load-balancing CPU-GPU patent.

It's not a matter of RSX wanting SPUs, it's a matter of the driver distributing workload to increase efficiency over a given number of cycles/frames etc. across both CPU/GPU...
 
Jaws said:
It's not a matter of RSX wanting SPUs, it's a matter of the driver distributing workload to increase efficiency over a given number of cycles/frames etc. across both CPU/GPU...
If you were a console developer and got told you had to use a graphics driver that automagically load-balanced between a CPU and a GPU, it's highly likely that you'd pull a knife out of your kitchen drawer, drive round to the home of whichever individual authorised such an abomination, and then carve 'TARD' into that individual's forehead (reversed, of course, so he/she could read it when looking in the mirror).

A (good) developer, with control over most of the things that are executing on the target, is likely to be able to do this way way way better than a driver that would have to carve work up based on some kind of history of CPU & GPU usage taken over the previous few frames.
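
Just to make the contrast concrete, here's a purely hypothetical sketch of the kind of heuristic such a driver would be stuck with (every name and number is invented; this isn't any real driver interface):

Code:
#include <cstddef>
#include <deque>

// Hypothetical history-based balancer: look at how long the CPU-side (SPU)
// vertex work and the GPU vertex units took over the last few frames, then
// guess how much of the next frame's vertex work to push onto the CPU.
struct FrameTimings
{
    float cpuMs; // time spent on vertex work offloaded to the CPU/SPUs
    float gpuMs; // time the GPU's vertex units were busy that frame
};

class HistoryBalancer
{
public:
    explicit HistoryBalancer(std::size_t window = 4) : m_window(window) {}

    void recordFrame(const FrameTimings& t)
    {
        m_history.push_back(t);
        if (m_history.size() > m_window)
            m_history.pop_front();
    }

    // Fraction of the next frame's vertex work to hand to the CPU, based only
    // on stale averages - the "history of the previous few frames" that a
    // developer who actually knows what the frame contains can easily beat.
    float cpuShareForNextFrame() const
    {
        if (m_history.empty())
            return 0.0f;

        float cpu = 0.0f, gpu = 0.0f;
        for (std::deque<FrameTimings>::const_iterator it = m_history.begin();
             it != m_history.end(); ++it)
        {
            cpu += it->cpuMs;
            gpu += it->gpuMs;
        }
        // If the GPU has dominated recent frame time, shift more onto the CPU.
        return gpu / (cpu + gpu + 1e-6f);
    }

private:
    std::size_t m_window;
    std::deque<FrameTimings> m_history;
};

Stale averages like that are all a driver ever gets to see, which is exactly why a developer in control of the whole frame will always carve the work up better.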

Oh, and Happy New Year. :)

Dean
 
DeanA said:
If you were a console developer and got told you had to use a graphics driver that automagically load-balanced between a CPU and a GPU, it's highly likely that you'd pull a knife out of your kitchen drawer, drive round to the home of whichever individual authorised such an abomination, and then carve 'TARD' into that individual's forehead (reversed, of course, so he/she could read it when looking in the mirror).

A (good) developer, with control over most of the things that are executing on the target, is likely to be able to do this way way way better than a driver that would have to carve work up based on some kind of history of CPU & GPU usage taken over the previous few frames.

Oh, and Happy New Year. :)

Dean

Hehe, that patent must be for PC devs then!
 
weaksauce said:
Would NURBS modelling be possible in PS3 games?
NURBS are a thing of the past. Not interesting for games, just good for the design industry.
Overrated hype word #785
 
weaksauce said:
Would NURBS modelling be possible in PS3 games?

We keep hearing NURBs being brought up, generation after generation.

As generally used in modelling packages, I don't think they'll ever be used in-game. They just don't make a lot of sense for a realtime application when the same surface can be turned into a more manageable representation.

The main use for such surfaces is just for ease of modelling for something smooth. You want operations that let you edit that surface without having to tweak thousands of vertices. Change parameters on a handful of control points, cut the surface, intersect it with other curves... those are the kinds of operations you want when you're editing something.

However at runtime, you just want a simple representation of the end result - storing all the modelling information and reconstructing it every frame would be silly. Why bother trimming the surface at runtime if it's static? The only thing you want to be able to do at runtime is probably to dynamically subdivide the surface to retain smoothness.

If it's animating then it might need more work but you'd still be vastly better off turning it into something more manageable at export.

Could a PS3 handle it? Well, probably, but even if it could do it faster than other platforms, the difference wouldn't be enough to justify it over just using beziers or such-like.
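
To put something concrete behind that (purely a sketch; every name is invented and nothing here is platform-specific), this is roughly what "a more manageable representation" means at runtime: a bicubic Bezier patch baked out of the modelling package, evaluated on a grid at whatever tessellation level the frame can afford.

Code:
#include <vector>

// Minimal 3D vector for the sketch.
struct Vec3
{
    float x, y, z;
};

static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
{
    Vec3 r = { a.x + (b.x - a.x) * t,
               a.y + (b.y - a.y) * t,
               a.z + (b.z - a.z) * t };
    return r;
}

// De Casteljau evaluation of a cubic Bezier curve from four control points.
static Vec3 evalCubic(const Vec3 p[4], float t)
{
    Vec3 a = lerp(p[0], p[1], t);
    Vec3 b = lerp(p[1], p[2], t);
    Vec3 c = lerp(p[2], p[3], t);
    Vec3 d = lerp(a, b, t);
    Vec3 e = lerp(b, c, t);
    return lerp(d, e, t);
}

// Evaluate a 4x4 bicubic patch at parameters (u, v) in [0,1].
static Vec3 evalPatch(const Vec3 ctrl[4][4], float u, float v)
{
    Vec3 curve[4];
    for (int row = 0; row < 4; ++row)
        curve[row] = evalCubic(ctrl[row], u);
    return evalCubic(curve, v);
}

// Turn the patch into an (n+1)x(n+1) grid of vertices; n >= 1 can be chosen
// per frame (distance, screen size, etc.) so it stays smooth where it matters.
std::vector<Vec3> tessellatePatch(const Vec3 ctrl[4][4], int n)
{
    std::vector<Vec3> verts;
    verts.reserve((n + 1) * (n + 1));
    for (int j = 0; j <= n; ++j)
        for (int i = 0; i <= n; ++i)
            verts.push_back(evalPatch(ctrl, float(i) / n, float(j) / n));
    return verts;
}

The only runtime decision left is picking n, which is the "dynamically subdivide the surface to retain smoothness" bit above.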

If you just mean what many people seem to mean when they say NURBs, which is actually just "curved surfaces of some kind", then yes, this is possible on PS3 and we'll be seeing them in games. We'll also be seeing them on every other platform, as we have been doing for a while now. Many PS2 games probably use curved surfaces in places - anything with a big landscape or race track quite possibly uses them, for instance.

I'd prefer to see better tools for subdivision surfaces (they're in most modellers now, but I'm not convinced they're exposed in the right way to build real-time content - again the focus is on modelling, so maybe it needs another couple of iterations to converge on something we can all use) - they're good for runtime and much more flexible for the artist anyway.
 
I think often people use the term 'NURBS' just to cover HOS. And yes, PS3 could use NURBS, or SDS, or Beziers, or CSGs. Whether you'd want to or not is a different matter entirely. I'd quite like to see someone have a go at something like Tetris using CSGs, raytracing and a realtime GI solution. Or Snooker/Pool, where the benefit of curved surfaces would be evident (though TBH, with the poly budgets these machines have, a Snooker/Pool game is unlikely to show much by way of jaggies on the balls even as tri-meshes!). A fully raytraced game would be nice as a landmark graphics event even if it's something very simple in game terms.
 
A 256bit bus would be nice too...
The bus is the best kept conspiracy secret in the industry. Only a few individuals actually know that it's in fact 256bit, but 128 bits are reserved for Ken's OS, so it's as if they never existed. :p

Shifty said:
Why would RSX want SPEs? Would there be any advantage to having massive programmability at a cost of a large drop in raw vertex performance?
Depends on what raw performance you are targeting. If you can get within a reasonable range of the performance target, there are certainly compelling reasons for such a setup.
However, given the marketability of higher numbers, I suspect we'll still be using hacky fixed hw solutions for a long time to come.



And I second the Happy New Year wishes :)
 
DeanA said:
If you were a console developer and got told you had to use a graphics driver that automagically load-balanced between a CPU and a GPU, it's highly likely that you'd pull a knife out of your kitchen drawer, drive round to the home of whichever individual authorised such an abomination, and then carve 'TARD' into that individual's forehead (reversed, of course, so he/she could read it when looking in the mirror).
I have my knife sharpened and ready, and I've been practising my reverse writing :)

I moan about OS thread switching policies and quad pixel allocation strategies, let alone something like CPU/GPU load-balancing. Eeek!

Happy New Year
 