Questions about PS2

Sony's biggest mistake with the PS2 was not getting the tools right before launch. Or indeed any reasonable time after that. If at all.
There was no real incentive to push the hardware, because everything sold reasonably well on the back of the insanely large user base. Everyone "knew" what the machine could do, so a false floor, a level of "normal", was established, and the hardware was hardly ever pushed. Even the games that dared to push the envelope barely scratched the surface, content to do a few things right and be mediocre otherwise.

The EE had an on-die macroblock decompressor, the IPU, with insanely high throughput, far higher than would ever be needed for DVD playback. The intent, of course, was to decompress textures for games on the fly.
No one to my knowledge ever utilized that feature. With MIP mapping and decent culling, the texture throughput per frame would never need to go above a few megabytes, when the frame buffer itself was only one megabyte. It would most likely have been even less had 16-bit textures been used.
Huffman decompression of textures was done on the CPU and VUs, but why burn precious time on that when much better hardware with higher compression ratios is sitting right there?!
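As a rough sketch of what I mean, assuming hypothetical ipu_decode_chunk() / dma_to_gs_vram_async() wrappers around the real DMA-driven mechanisms (none of these names are actual SDK calls), the decode of one tile could overlap the upload of the previous one:

```c
/* Conceptual sketch only: stream a compressed texture from main RAM through
 * the IPU into GS VRAM. All functions below are hypothetical placeholders
 * for the real DMA-driven mechanisms, not actual PS2 SDK calls. */
#define CHUNK_PIXELS (64 * 64)      /* decode one 64x64 tile at a time */

/* Hypothetical wrappers: a blocking IPU decode and an async DMA upload. */
void ipu_decode_chunk(const unsigned char *stream, unsigned chunk,
                      unsigned int *out_pixels);
void dma_to_gs_vram_async(const unsigned int *pixels, unsigned vram_addr);
void dma_wait_gs(void);

void stream_texture_to_vram(const unsigned char *compressed,
                            unsigned num_chunks, unsigned vram_base)
{
    static unsigned int buf[2][CHUNK_PIXELS]; /* double buffer in main RAM */
    unsigned cur = 0;

    for (unsigned i = 0; i < num_chunks; ++i) {
        /* Decode chunk i while the DMA of chunk i-1 is still in flight. */
        ipu_decode_chunk(compressed, i, buf[cur]);

        dma_wait_gs();                   /* previous upload must finish   */
        dma_to_gs_vram_async(buf[cur], vram_base + i * CHUNK_PIXELS);
        cur ^= 1;                        /* flip decode/upload buffers    */
    }
    dma_wait_gs();                       /* flush the final upload        */
}
```

The point is just that the IPU works in parallel with the DMA, so the decompression is essentially free once the pipeline is primed.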

Geometry on the PS2 was meant to be synthesized. That much is clear from what Kutaragi said early on, before release. The geometry was meant to be largely dynamic, instanced and generated on the fly. If that had been done more thoroughly, PS2 games would have looked a hell of a lot better.
The MIP level could have been set per inclined surface, or per object with roughly uniform surface inclination towards the screen, to avoid shimmer. Culling could have been done more effectively, and triangles could have been generated at the right size, avoiding cache thrashing when too small and texture jumping when too large, etc.
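A sketch of the per-object MIP idea (the bias formula here is just one plausible choice, not a known PS2 recipe):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Derive a LOD bias from how steeply the surface is inclined towards the
 * viewer. Edge-on surfaces cover few screen pixels per texel, so nudging
 * them to coarser MIP levels cuts shimmer. Inputs are unit vectors. */
float mip_bias_for_inclination(Vec3 unit_normal, Vec3 unit_view_dir)
{
    float facing = fabsf(dot3(unit_normal, unit_view_dir));
    if (facing < 0.01f)
        facing = 0.01f;         /* clamp grazing angles, avoid log2f(0) */

    /* facing == 1: head-on, bias 0; facing -> 0: edge-on, coarser MIPs. */
    return -log2f(facing);      /* positive bias selects smaller MIPs   */
}
```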

Doing stuff like that would of course have meant the games would have been harder to port to other, less advanced platforms. But who cares, if the install base is so large?!
The real answer, I suppose, is that the computer industry is the most conservative, stiff and dogmatic of all the branches of engineering when it comes down to it. Double that for game programmers. ;-)

Why didn't even Sony use these techniques? I don't really understand, but it sounds like a fantastic opportunity. They could have extended the life of a machine that was essentially printing money by then.
 
Why didn't even Sony use these techniques? I don't really understand, but it sounds like a fantastic opportunity. They could have extended the life of a machine that was essentially printing money by then.
For the same reason no one else was using them. Sony's few in-house devs were, and are, part of the industry, with everything that is good and bad about it.
Take a look at this presentation from early 2003: http://lukasz.dk/mirror/research-scea/research/pdfs/GDC2003_Intro_Performance_Analyzer_18Mar03.pdf Sony's hardware and tools guys were really very slow and mealy-mouthed about guiding people towards the right paths.
Also look at how bad the utilization of the EE-to-GS bus is, even in the "good" example.

Using the IPU to decompress textures would of course not have been as straightforward as just storing textures in DXTC format. But if streaming textures and geometry off the incredibly slow (compared to RAM) disc worked, as it did in many games, then using the IPU to stream-decompress textures from memory could certainly have been done.
It would have made a huge difference with natural-material (complex) or photographic textures, stretching the 32 MB far further than ordinary compression would have allowed.
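To put some rough, back-of-the-envelope numbers on that (my figures, purely illustrative): a 256x256 texture at 24 bits per pixel is 192 KB raw, or 32 KB as a 4-bit CLUT. A JPEG/MPEG-style macroblock encode of natural material can plausibly reach 10:1 or better, i.e. under 20 KB, while keeping full colour depth instead of 16 palette entries. That kind of ratio multiplies how much unique texture data fits in the 32 MB alongside geometry, code and audio.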
 
When multipass is used, does all geometry have to be sent to the GS for each pass, or only part of it? As an example: if a room is rendered in a single pass, and in that room there is an object (a cube) that also has environment mapping, will all polygons be sent twice, or only the polygons needed for the cube?
 
Only the polygons drawn. Each pass is a separate render: triangles, lights, pixels.
So that means only the polygons for the cube will be sent twice? :D

Ok, next question. After the first pass is done and the back buffer is written to eDRAM, does the GS have to read the whole back buffer from eDRAM for the second pass, or only the part of the back buffer covered by the cube that requires the second pass? (Yes, maybe my questions are hard to understand.) :D
 
Yes, only the polygons drawn will consume fillrate, and untextured polygons (useful for lighting, shadows and other special effects) will in theory render twice as fast. And only the part of the back buffer touched is rendered to. Or rather the 32x32 pages rendered to, so it does load a bit more pixels into the render cache than are necessarily drawn to.

PS2 really is very, very fast at multipass. All the talk you'll find about additional passes halving fillrate is kind of true, but really just FUD.
The only thing that could kill it would be polygons that are too small, because the rendering quad (the stamp of pixels that walks across the screen during rendering) would be wasted on them.
 
Yes, only the polygons drawn will consume fillrate, and untextured polygons (useful for lighting, shadows...
Very true. The drop in shadow quality from PS2 to PS3 was so jarring! Makes one wonder how a PS3 game would have performed if it had strived for some PS2 quality targets, like shadow quality (pixel-perfect) and particle density.
 
Kutaragi should have had his way, and something like the GSCube should have been the rendering engine in the PS3.
They would have needed to take the necessary APIs and tools really effing seriously, but Sony was in a very good place after the PS2, so it should have been doable.
It would have meant a much more scalable architecture than what we have now.
Just add more APUs and eDRAM as new processes make it possible. Rendering would also have been very flexible, going far further down that road than where we are now.
 
The fantasy train has to stop somewhere.

Resubmitting geometry to a fast but simple rasteriser with limited blend modes, limited culling and CLUT-only texture compression was not the optimal way to utilise 3D hardware. GSCube was doubling down on a dead end of graphics hardware, which is why it was dropped. Stone cold. Dead. By people who knew what they were doing.

They killed it.

Graphics left the interesting but evolutionary dead-end GS behind for a reason. Meanwhile, Nvidia and ATI went on to dominate the PC space, and the TBDR went on to dominate the world of portable devices, reaching further than Nvidia or AMD ever managed.

Despite having far more development resources poured into it than any other platform in its generation, the PS2 ended up where it ended up. No other system had the luxury of almost unlimited resources poured into it over several generations of software.

Every other system in that generation ended up less well developed than the PS2, mostly because graphics pipelines and asset-creation pipelines weren't ready yet.

The PS2 had a better shot at achieving its potential - for the most part - than any other system.
 
BS! The PS2 was different. People on a deadline and under budget don't like different. That doesn't mean it was bad, though. Five years is nothing in which to learn something different, especially not when Microsoft was luring developers with a repackaged PC.
The PS2 might have had the biggest budget, but that doesn't say much in an industry where the common philosophy is to leave hardware to a few magic companies and let Moore's law work for you.

The PS3 GS equivalent would have contained APUs, essentially as a special extension of Cell, plus far more eDRAM for a larger buffer.
Those two things would have made all the difference.
Those APUs could have been used for anything, and when they were finished with that, they could be state-changed within the same frame to do something else.
Not that the GS wasn't a good design, but anything done within a timeframe and a budget has to make some compromises.
The GSCube was used, and was very successful at what it set out to do, i.e. rendering high-res interactive previews of CG movies. It was used on a few films. It was never meant to be a big seller. It was a bit like the original Pixar Image Computer: a showcase.
 
BS! The PS2 was different. People on a deadline and under budget don't like different. That doesn't mean it was bad, though. Five years is nothing in which to learn something different, especially not when Microsoft was luring developers with a repackaged PC.
The PS2 might have had the biggest budget, but that doesn't say much in an industry where the common philosophy is to leave hardware to a few magic companies and let Moore's law work for you.

Naughty Dog was a Sony first party with a completely in-house technology stack built from the ground up specifically for the PS2 and refined by very good engineers for the length of the generation: engine, pipeline, compiler, programming language. You cannot get much more different from the industry at large than that.

If even their best efforts fell short of the hardware's ideal, then maybe the fault did not lie with the developers.
 
Kutaragi should have had his way, and something like the GSCube should have been the rendering engine in the PS3.
They would have needed to take the necessary APIs and tools really effing seriously, but Sony was in a very good place after the PS2, so it should have been doable.
It would have meant a much more scalable architecture than what we have now.
Just add more APUs and eDRAM as new processes make it possible. Rendering would also have been very flexible, going far further down that road than where we are now.

It was the first plan before going with NVIDIA. My friend who was working at Quantic Dream tells me the two-Cell plan is an urban legend, but they wanted to do a GS 2 in the PS3 and changed late in development to the NVIDIA RSX.
 
He didn't like the idea technically, but he thinks it maybe helped the PS3 commercially; some developers could have stopped supporting the PS3 if it had shipped with a GS 2...
 
When multipass is used, does all geometry have to be sent to the GS for each pass, or only part of it? As an example: if a room is rendered in a single pass, and in that room there is an object (a cube) that also has environment mapping, will all polygons be sent twice, or only the polygons needed for the cube?

It's really not as complicated as you are making it out to be. You can draw a triangle using 3 verts, a texture and a blend mode. If you want to, you can draw another triangle that happens to have the same vertex positions, but with other state that's different. That's all there is to multipass. There is no scene-level/full-frame-level/anything-complicated-level logic in the GS. Just triangles.
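A toy sketch of that model; the gs_* calls below are hypothetical stand-ins for building GIF packets and DMAing them to the GS, not a real API:

```c
/* Conceptual two-pass sketch. The gs_* calls and GS_BLEND_* modes are
 * hypothetical placeholders; the real interface is register settings
 * inside GIF packets sent over DMA. */
typedef struct { float x, y, z, u, v; } Vert;

enum blend_mode { GS_BLEND_NONE, GS_BLEND_ADD };

void gs_set_texture(unsigned vram_addr);
void gs_set_blend(enum blend_mode mode);
void gs_draw_triangle(Vert a, Vert b, Vert c);

void draw_face_two_pass(const Vert tri[3], unsigned base_tex, unsigned env_tex)
{
    /* Pass 1: base texture, no blending. */
    gs_set_texture(base_tex);
    gs_set_blend(GS_BLEND_NONE);
    gs_draw_triangle(tri[0], tri[1], tri[2]);

    /* Pass 2: the same three verts resubmitted with different state
     * (environment map, additive blend; the UVs could differ too).
     * The GS never knows this is "the cube again"; it just rasterises
     * whatever triangles it is fed. */
    gs_set_texture(env_tex);
    gs_set_blend(GS_BLEND_ADD);
    gs_draw_triangle(tri[0], tri[1], tri[2]);
}
```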
 
It was the first plan before going with NVIDIA. My friend who was working at Quantic Dream tells me the two-Cell plan is an urban legend, but they wanted to do a GS 2 in the PS3 and changed late in development to the NVIDIA RSX.

I don't know if it was an urban legend. The chip was specifically designed for multi-chip...designs. I have no inside knowledge though.
 
I don't know if it was an urban legend. The chip was specifically designed for multi-chip...designs. I have no inside knowledge though.

No, it is an urban legend... They needed a true GPU on the other side. The plan was to build the PS4 with a Cell processor too (a multi-chip design, maybe), or to use it in other fields, like they did when building a supercomputer...
 
No, it is an urban legend... They needed a true GPU on the other side. The plan was to build the PS4 with a Cell processor too (a multi-chip design, maybe), or to use it in other fields, like they did when building a supercomputer...

I have no idea if they could have used two Cells and a GPU. I'm guessing you're right that two Cells and no GPU was not considered, though.
 
I have no idea if they could have used two Cells and a GPU. I'm guessing you're right that two Cells and no GPU was not considered, though.

In his opinion it was a more interesting idea than a PS3 with an Nvidia GPU, but it was too risky...
 
It's really not as complicated as you are making it out to be. You can draw a triangle using 3 verts, a texture and a blend mode. If you want to, you can draw another triangle that happens to have the same vertex positions, but with other state that's different. That's all there is to multipass. There is no scene-level/full-frame-level/anything-complicated-level logic in the GS. Just triangles.
Ok, thanks.
Why then, when multipass is used, is the fillrate divided by two for each pass?
 