Questions about PS2

BS! PS2 was different, and people on a deadline and under budget don't like different. That doesn't mean it was bad, though. Five years is not much time to learn something different, especially not when Microsoft lured developers with a repackaged PC.
The PS2 might have had the biggest budget, but that doesn't say much in an industry where the common philosophy is to leave hardware to a few magic companies and let Moore's law work for you.

The PS3 GS equivalent would have contained APUs, essentially a special extension of Cell, plus far more eDRAM for a larger buffer.
Those two things would have made all the difference.
Those APUs could have been used for anything, and when they were finished with that, they could be state-changed within the same frame and do something else.
Not that the GS wasn't a good design, but anything done within a timeframe and a budget has to make some compromises.
The GSCube was used and was very successful at what it set out to do, i.e. rendering high-res interactive previews of CG movies. It was used on a few movies. It was never meant as a big seller. It was a bit like the original Pixar Image Computer: a showcase.

I think you're overstating it when you say the GSCube was very successful even in its intended application. From what I've read, fewer than 20 were ever made, and it wasn't long before they were sent back to Japan, where they were destroyed. It allegedly took several minutes to upload the results from the GSCube to the attached server, which defeated the purpose of having real-time rendering. Which, if true, shows how much of a hacked-together effort this was. And I wonder whether it would have competed well against a hypothetical similar effort from a rival like nVidia, and whether that simply didn't happen due to lack of interest and demand rather than Sony holding an intrinsic advantage for this application.

There's also a claim that they were sold at a tremendous loss. All of this information is taken from here: http://assemblergames.com/l/threads/gscube-information.18036/#post-271056

Still, it's kind of moot; even if it was great for movie previews (which could be useful under very specific criteria, like high polygon count but low shading quality), that alone doesn't support the argument that something like it would have been well suited for PS3.

GS had really serious limitations that I don't think you're really addressing. function mentioned a big one: the lack of decent texture compression. Culling is a little more debatable; we could argue the suitability of keeping that largely in the vertex shading domain (the VUs), although I think there's a reason other GPUs haven't done this, and the problem (the percentage of polygons that need to be culled geometrically) would have gotten worse with resolution scaling.

But I'd say that the entire shading model had real intrinsic limitations, not just because of the lack of blend operations but because the precision/dynamic range of the values was limited to 8 bits per component. You could only do so many blend passes before quantization noise ruined the image quality. And the way the eDRAM-based framebuffer is used for intermediate value storage is inefficient compared to the register model other GPUs have followed. The bandwidth needed to perform RMWs from eDRAM almost certainly resulted in more power consumption than the bandwidth needed to update SRAM-based registers, especially when accounting for forwarding in the ALUs. With SRAM-based register files in GPUs the intermediate operations don't have to be coherent in screen space, and they can manage better than GS can at aggregating many small polygons over the same shader: GS has to update 4x2 or 8x2 blocks in eDRAM, whereas other GPU shaders typically work in 2x2 quads.
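As a rough illustration of that precision point, here's a small sketch (arbitrary blend values, not real GS blend maths): quantizing the framebuffer to 8 bits after every pass makes the error grow with the number of passes, where higher-precision intermediates would not.

Code:
# Sketch of 8-bit framebuffer precision loss over repeated blend passes.
# All values here are arbitrary and only meant to show the trend.

def blend(dst, src, alpha):
    """Standard 'over' blend: dst*(1-alpha) + src*alpha."""
    return dst * (1.0 - alpha) + src * alpha

def quantize8(x):
    """Round to the nearest representable 8-bit value (0..255 scale)."""
    return round(x * 255.0) / 255.0

def run(passes, quantized):
    dst = 0.5       # arbitrary starting framebuffer value
    src = 0.123     # arbitrary blend source
    alpha = 0.1     # low-alpha layers are the worst case
    for _ in range(passes):
        dst = blend(dst, src, alpha)
        if quantized:
            dst = quantize8(dst)   # read-modify-write of an 8-bit buffer
    return dst

for passes in (1, 4, 16, 64):
    exact = run(passes, quantized=False)
    quant = run(passes, quantized=True)
    # Error expressed in 8-bit steps; it grows to several LSBs by 64 passes.
    print(passes, abs(exact - quant) * 255.0)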

I'm sure there's also a very real bandwidth/power overhead in having to resubmit triangles for multi-pass, and some inefficiencies on the VU (Cell SPU?) side in keeping the GS FIFOs full.

The lack of implicit texture caching and the requirement to load textures into the eDRAM almost certainly would have resulted in a lot of needless redundancy in textures loaded to different eDRAM tiles. The solution to this, making the tiles larger, would have needed more ALUs per tile and would have exacerbated the aforementioned screen-space coherency problem.

Then there's the lack of many other features that improved efficiency, like early/hierarchical depth testing, which were already present in the PS3 console generation.

No matter how you slice it, 512MB of eDRAM is enormous for the 1080p targets GSCube was designed for. Even if we had been looking at an 8xGS with 256MB of eDRAM instead for something targeting PS3, that still would have been a tremendous amount of eDRAM for the time, and I doubt it would have been economical. 1GB of RDRAM would have also been fairly expensive.
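For scale, a quick back-of-envelope on what raw 1080p render targets actually occupy, assuming 32-bit colour and Z and a double-buffered colour target (the formats are my assumption):

Code:
# Back-of-envelope: how much of GSCube's 512MB would raw 1080p render
# targets occupy? 32-bit colour, 32-bit Z and double-buffered colour
# are assumptions for illustration.

width, height = 1920, 1080
bytes_colour, bytes_z = 4, 4

colour = width * height * bytes_colour           # one colour buffer
z = width * height * bytes_z                     # one depth buffer
front_back_z = 2 * colour + z                    # front + back + Z

print(colour / 2**20, "MB per colour buffer")     # ~7.9 MB
print(front_back_z / 2**20, "MB for front+back+Z")  # ~23.7 MB
# Even with generous room for supersampling and texture pages, that is a
# long way from needing 512MB (GSCube) or 256MB (a hypothetical 8xGS).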

For its generation, you could say GS was a decent architecture. You could also say Flipper and NV2A were, each having different strengths and weaknesses and being very different from GS. But I can't agree that it would have been competitive moving forward by simply brute-force scaling any of these upwards like GSCube did.
 
Where's that info from? It doesn't match my understanding. Maybe it's referring to some other feature.
http://slideplayer.com/slide/8727367/ Here, slide 30.
 
I was thinking more about the prospect of a GSCube-based PS3 and when you get down to it I just don't see it as technically feasible to come anywhere remotely close to its eDRAM storage.

For comparison: the Xbox 360 had a mere 10MB of eDRAM on a 90nm process, and it was on a daughter die with integrated ROPs. Not a particularly small die either, at 80 mm^2. The Wii U, years later, had 32MB of eDRAM, which took up around 41mm^2 of its "Latte" chip built on a 40/45nm process (AFAIK).

Looking at it another way, the CPU+GPU reduction was about 6x in area going from 250nm to 90nm for PS2, and some of that was likely due to consolidation of the two chips into one, but we'll use 6x transistors as a reasonable starting point. PS3's GPU was about the same size as PS2's was originally (around 260mm^2); let's say that using eDRAM would save some external costs by not needing two different pools of memory, so it could maybe afford a somewhat larger GPU. So maybe you could fit 8x GS onto one die, but not while also increasing the individual GS eDRAM pool by 8x. You'd have a (quite large) chip with 32MB of eDRAM, not 256MB of eDRAM. You'd still hit theoretical fillrate peaks of half the 1080p-targeting GSCube, so similar IQ at around 720p, but with much less eDRAM. And as was said earlier, it was the huge increase in eDRAM that made GSCube at all viable for its purposes. It would have been really hard to deal with this tiled setup where every tile needs different pieces of textures manually uploaded, dynamically competing for a small amount of available eDRAM space. You'd have had a lot of redundancy, meaning that the realistic texture-to-framebuffer ratios would likely have been even worse than they were with PS2.

Those GS-I32 chips on GSCube must have been monsters, maybe 300-400mm^2 each, and they had 16 motherboards with these. Starting to see how these would have been sold at a major loss even at $50-80k...
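To put a rough number on why a 256MB pool looks infeasible, extrapolating naively from the eDRAM figures quoted above (the 360 daughter die includes ROP logic, so this overstates pure eDRAM area; treat it as an order-of-magnitude sketch):

Code:
# Order-of-magnitude sketch of the eDRAM area problem, using the figures
# quoted above. The 360 daughter die includes ROPs, so its mm^2/MB
# overstates pure eDRAM density; this only shows the scale.

x360_mb, x360_mm2 = 10, 80    # Xbox 360 daughter die, 90nm (eDRAM + ROPs)
wiiu_mb, wiiu_mm2 = 32, 41    # Wii U "Latte" eDRAM portion, ~40/45nm (AFAIK)

for name, mb, mm2 in [("360 @ 90nm", x360_mb, x360_mm2),
                      ("Wii U @ 40/45nm", wiiu_mb, wiiu_mm2)]:
    density = mm2 / mb                             # mm^2 per MB of eDRAM
    print(name, density, "mm^2/MB ->",
          density * 256, "mm^2 for a 256MB pool")  # hypothetical 8xGS pool
# At 90nm-era density that is on the order of 2000 mm^2 for 256MB alone,
# i.e. several times an entire ~260 mm^2 GPU die before any logic.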
 
No, it is an urban legend... They needed a true GPU on the other side. The plan was to build the PS4 with a Cell processor too (maybe a multi-chip design), or to use it in other fields, like they did building a supercomputer...

There is no such thing as a true GPU. That is a construct concocted by makers of graphics hardware and APIs.
Of course the possible PS3 GS wouldn't just have been an upscaled GS. Good ideas don't often scale.
The main thing that the on-die eDRAM solved was the incessant hammering of RAM with monotonous framebuffer writes, which is becoming even more of a problem with larger framebuffers.
The graphics would have to be tiled, and the die would have APUs/SPEs on it to do shading, blends, transformation, etc.
There was a patent drawing out at some point (which is of course impossible to find now) that showed exactly that setup, with a main Cell-like CPU and a mainly graphics-oriented die with eDRAM and SPEs.

Sony would have owned their graphics architecture instead of having to shop around every generation, with all the associated problems. They would also have been able to scale it much more freely and make cost reductions. They did that with both PSX and PS2, even if it wasn't apparent to the end user.
It would have meant an equally huge investment in software tools to make it frictionless for devs to work it into their existing procedures and habits. But that would still have been more than worth it.
This is not magic, guys. Nvidia and AMD are much smaller companies than Sony. Sony would have had more than enough resources to do this had they started early enough.

The GSCube was obviously never anything but a testing of the waters and a demonstration of technology.
The main problem with using it was actually getting the data into the system. But that has always been a problem with any kind of rendering system.
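To put some rough numbers on the framebuffer-hammering point above (overdraw, pass count and pixel formats are assumptions picked for illustration, not measurements of any real game):

Code:
# Rough numbers for the framebuffer-traffic point a few lines up.
# Overdraw, pass count and formats are illustrative assumptions.

width, height, fps = 1920, 1080, 60
bytes_colour, bytes_z = 4, 4   # assumed 32-bit colour and Z
overdraw = 3                   # assumed layers touched per pixel
passes = 3                     # assumed multi-pass renders of the scene

# Per touched pixel per pass: a Z-test read plus a colour read-modify-write.
per_pass_bytes = bytes_z + bytes_colour * 2
traffic = width * height * overdraw * passes * per_pass_bytes * fps

print(traffic / 1e9, "GB/s of raw colour/Z traffic")   # ~13 GB/s here
# That is a large slice of a mid-2000s external memory bus before a single
# texture is read, which is the case for keeping the framebuffer on-die.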
 
This is not magic, guys. Nvidia and AMD are much smaller companies than Sony. Sony would have had more than enough resources to do this had they started early enough.

Just because a company is big does not mean that they can produce good software. Nokia had more software developers than Apple* but still was not able to produce an OS in any reasonable time frame**.


*According to a source I can't find anymore.

**Probably because they made a lot of stupid decisions, like first forking GTK+ and then abandoning that for Qt.
 
There is no such thing as a true GPU. That is a construct concocted by makers of graphics hardware and APIs.
Of course the possible PS3 GS wouldn't just have been an upscaled GS. Good ideas don't often scale.
The main thing that the on-die eDRAM solved was the incessant hammering of RAM with monotonous framebuffer writes, which is becoming even more of a problem with larger framebuffers.
The graphics would have to be tiled, and the die would have APUs/SPEs on it to do shading, blends, transformation, etc.
There was a patent drawing out at some point (which is of course impossible to find now) that showed exactly that setup, with a main Cell-like CPU and a mainly graphics-oriented die with eDRAM and SPEs.

Sony would have owned their graphics architecture instead of having to shop around every generation, with all the associated problems. They would also have been able to scale it much more freely and make cost reductions. They did that with both PSX and PS2, even if it wasn't apparent to the end user.
It would have meant an equally huge investment in software tools to make it frictionless for devs to work it into their existing procedures and habits. But that would still have been more than worth it.
This is not magic, guys. Nvidia and AMD are much smaller companies than Sony. Sony would have had more than enough resources to do this had they started early enough.

The GSCube was obviously never anything but a testing of the waters and a demonstration of technology.
The main problem with using it was actually getting the data into the system. But that has always been a problem with any kind of rendering system.

I did not say that the RSX was not a true GPU, but I don't think a two-Cell machine was a good idea.

They needed a GPU... a GS 2, or the RSX, or a GPU by AMD, or a PowerVR...

After that, the problem would have been having good documentation, and I am not sure the third-party devs would have been happy to work with a machine with a GS 2 and a Cell.
 
From a technical point of view, my friend tells me he thinks the first idea of working with Cell and a GS 2 was a good one...

But he understood that it was too risky, and this is the reason they decided to work with NVIDIA for the GPU of the PS3, and for the PS4 chose an x86 architecture...
 
There is no such thing as a true GPU. That is a construct concocted by makers of graphics hardware and APIs.
Of course the possible PS3 GS wouldn't just have been an upscaled GS. Good ideas don't often scale.
The main thing that the on-die eDRAM solved was the incessant hammering of RAM with monotonous framebuffer writes, which is becoming even more of a problem with larger framebuffers.
The graphics would have to be tiled, and the die would have APUs/SPEs on it to do shading, blends, transformation, etc.
There was a patent drawing out at some point (which is of course impossible to find now) that showed exactly that setup, with a main Cell-like CPU and a mainly graphics-oriented die with eDRAM and SPEs.

Sony would have owned their graphics architecture instead of having to shop around every generation, with all the associated problems. They would also have been able to scale it much more freely and make cost reductions. They did that with both PSX and PS2, even if it wasn't apparent to the end user.
It would have meant an equally huge investment in software tools to make it frictionless for devs to work it into their existing procedures and habits. But that would still have been more than worth it.
This is not magic, guys. Nvidia and AMD are much smaller companies than Sony. Sony would have had more than enough resources to do this had they started early enough.

The GSCube was obviously never anything but a testing of the waters and a demonstration of technology.
The main problem with using it was actually getting the data into the system. But that has always been a problem with any kind of rendering system.

I get the desire for a new Krazy Ken architecture; I used to love that about PlayStation as well, but there's no way this is ever going to be cost-effective unless they're able to sell their chips outside of the console industry.

It made sense with earlier PlayStations when the graphics industry wasn't as mature, but now it's just not worth it for a device that gets updated every six years or so.
 
It made sense with earlier PlayStations when the graphics industry wasn't as mature, but now it's just not worth it for a device that gets updated every six years or so.

I would think a six year cycle would make it more cost effective. I'm ignorant about the process, but why would custom be worse?

I mean, if you target 60-80 million sold, that could be profitable. Where is the gap or fail point?
 
I would think a six year cycle would make it more cost effective. I'm ignorant about the process, but why would custom be worse?

I mean, if you target 60-80 million sold, that could be profitable. Where is the gap or fail point?

It's a product sold at a loss or minimal margins, and you have to spend large amounts on R&D, production, tools and libraries.

AMD and Nvidia sell new products every year at pretty good margins, and actual IC production is handled by external companies that specialise in fabbing chips. Intel is vertically integrated, but they also sell their products for a lot more than what Sony could charge for a console, and the PC market is larger.

For something low-margin or loss-leading, it's better to contract out to hardware vendors for something cheap but semi-custom.
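A purely hypothetical amortisation sketch of that trade-off, with every figure made up just to show the shape of the per-unit math:

Code:
# Purely hypothetical amortisation sketch for the "where is the fail
# point" question above. Every figure below is a made-up assumption.

rnd_spend = 2_000_000_000      # assumed in-house GPU R&D + tools spend, USD
units_sold = 70_000_000        # middle of the 60-80 million target above

per_unit_rnd = rnd_spend / units_sold
print(round(per_unit_rnd, 2), "USD of R&D amortised per console")

# A semi-custom part spreads the vendor's R&D across the whole GPU market,
# so the console only carries a per-chip margin instead (also assumed).
semi_custom_margin = 15
print(round(per_unit_rnd - semi_custom_margin, 2),
      "USD/unit extra, and only if the in-house design stays competitive",
      "for the whole cycle with no mid-life redesign")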
 
I remember Shifty said that Snowblind Studios used 2xSSAA in their Snowblind Engine. I tried to find some info about SSAA but there isn't much, so maybe we can also discuss it here.
1) As I understand it, 4xSSAA requires a 4x backbuffer size, right?
2) How does 2xSSAA work then? Does it use only a 2x backbuffer? Is it scaled horizontally or vertically?
3) If I'm right that the backbuffer is two times larger when 2xSSAA is used, then how can it fit in eDRAM? Is the Z-buffer also twice as big?
4) Did Snowblind Studios use some kind of tiling? I mean, first render one half of the frame, then downsample it, then write that half of the front buffer?
Anyone please explain this to me.
 
I would think a six year cycle would make it more cost effective. I'm ignorant about the process, but why would custom be worse?

I mean, if you target 60-80 million sold, that could be profitable. Where is the gap or fail point?


The fail point is getting people to make software for the architecture. It was far easier to go nuts on hardware when dev teams were 1-20 people and you could strong-arm the publishers with lucrative licensing arrangements. That ship sailed long ago.

It's too bad in some ways, because I loved the crazy hardware, but it's just a reminder that software is still king.
 
I remember Shifty said that Snowblind Studios used 2xSSAA in their Snowblind Engine. I tried to find some info about SSAA but there isn't much, so maybe we can also discuss it here.
1) As I understand it, 4xSSAA requires a 4x backbuffer size, right?
2) How does 2xSSAA work then? Does it use only a 2x backbuffer? Is it scaled horizontally or vertically?
3) If I'm right that the backbuffer is two times larger when 2xSSAA is used, then how can it fit in eDRAM? Is the Z-buffer also twice as big?
4) Did Snowblind Studios use some kind of tiling? I mean, first render one half of the frame, then downsample it, then write that half of the front buffer?
Anyone please explain this to me.
1. Yes.
2. Depends on which way the developer wants it; not sure which was used.
3. Yes, SSAA requires the color/Z buffers to be the same size.
The buffers are still quite small, so 4MB is enough.

4. Pretty sure that no developer did multi-pass SSAA on PS2; it would have complicated things a lot (re-sending texture data and polygons for each pass).
 
It was uniform XY supersampling AFAIK - there wasn't aliasing in one direction only. So a ~900x680-ish buffer downsampled to 640x480 (root two larger in each dimension for two times the total area).
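A quick sanity check that a buffer like that fits in the GS's 4MB of eDRAM, assuming 16-bit colour and Z and a 16-bit 640x480 front buffer (the actual formats Snowblind used are not something I know):

Code:
# Sanity check that a root-two supersampled buffer can fit in the PS2's
# 4MB of eDRAM. The ~905x679 size comes from the post above; 16-bit
# colour/Z and a 16-bit 640x480 front buffer are assumptions here.

edram = 4 * 2**20                  # 4MB of GS eDRAM

ss_w, ss_h = 905, 679              # ~root-two supersampled render size
colour = ss_w * ss_h * 2           # assumed 16-bit colour back buffer
zbuf   = ss_w * ss_h * 2           # assumed 16-bit Z buffer
front  = 640 * 480 * 2             # assumed 16-bit displayed front buffer

used = colour + zbuf + front
print(used / 2**20, "MB used,", (edram - used) / 2**20, "MB left for textures")
# Roughly 2.9MB of buffers, leaving about 1.1MB of eDRAM for texture pages.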
 