Was GC more or less powerful than PS2? *spawn

Xbox had the additional problem of the hardware costs not being renegotiable AFAIK. MS left NVIDIA behind because of that. I also wonder about working with Intel (they are probably not popular in consoles for a good reason).

I'd say that Xbox's biggest problem was that Xbox Live was not adequately developed yet. Clearly, with Ethernet and an HDD, they intended it to be much more than a games machine, but the foundation wasn't there yet and content delivery like Netflix/YouTube/etc. didn't exist.

Anyway, I liked the GC, really enjoyed Twilight Princess, and in general liked the way GC stuff looked (trilinear filtering ftw) compared to the PS2.
I remember Gamecube and Wii having frequent problems with posterization because of a color depth limit. I've also seen really bad texture aliasing in some games, including Twilight Princess on Wii, because sometimes they would forgo mipmapping, apparently in an effort to save memory.
 
But it wasn't really envelope-pushing, nor did it buy Sony any particular advantage.
An SPU-equipped GPU with a decent chunk of eDRAM for tiling AND with a good API would have been much better suited to the needs of today, would have been much more scalable, and, not least, Sony would have owned it.

The synergy achievable between the two processors, with the GPU really a subset of CELL, would have been very hard to beat.
It would have been more "true 3D" than anything available now or then.

Trying to get a feel for what you are talking about here. Are you referring to a single die with a command processor, SPUs, plus the necessary graphics systems like TMUs and ROPs, with the SPUs replacing traditional shaders? Or are you referring to something more like the PS2, where 3D was still handled on the main processor and a separate die did the 2D work?

I think meshed approaches are interesting, like the PS2's and even AMD's APUs (though those tend to keep each workload on its own part of the die).

I'll admit the idea of a very large SPU array (16+ SPUs) that runs the software a la Cell while containing the necessary graphics equipment on one die is extremely interesting. What makes the approach efficient, however, could be damning for a developer: you have this general pool of GFLOPS and you have to balance it carefully to get what you want out of it, and it's not just graphics-related processes either, it's physics, animation and sound too. If you're going to go with two dies to do this kind of thing, however, I'm sure it could get costly real quick, with such a large main CPU die and the necessary high-bandwidth connections to the 2D fill GPU. There are many ways to go about this, with tradeoffs in cost and performance. eDRAM would've been pricey; instead of it, how costly would it have been to provide 32 or 64 GB/s of bandwidth to 512 MB or 1 GB of XDR in place of XDR + eDRAM?
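A rough sketch of that bandwidth arithmetic (assuming XDR's 3.2 GHz effective per-pin rate, as on the real PS3; everything else is back-of-envelope):

```c
#include <stdio.h>

int main(void) {
    /* XDR on the shipping PS3 signals at 3.2 GHz effective per pin.
       Peak bandwidth = bus width in bytes * effective pin rate.     */
    double pin_rate_hz = 3.2e9;
    int widths_bits[] = { 64, 128, 256 };

    for (int i = 0; i < 3; i++) {
        double bytes = widths_bits[i] / 8.0;
        printf("%3d-bit XDR bus: %5.1f GB/s\n",
               widths_bits[i], bytes * pin_rate_hz / 1e9);
    }
    /* 64-bit  ->  25.6 GB/s (what PS3 actually shipped with)
       128-bit ->  51.2 GB/s
       256-bit -> 102.4 GB/s                                  */
    return 0;
}
```

So a 128-bit XDR interface would have landed right in that 32-64 GB/s range without eDRAM, though the extra pins and routing are exactly where the board cost creeps in.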

At least with a dedicated GPU and CPU die you have this "guaranteed amount of processing potential" (let's not get into memory subsystems :p ). I think, certainly in Sony's case this gen, a Cell-based graphics approach with a separate 2D GPU, or the hardware on the actual CPU die, would've worked well in Sony's favor once devs caught up with the technology, since they ended up having to do that anyway to make up for RSX's lack of competitive geometry and vertex-pushing capabilities.

I think it's safe to say Sony could've designed a better hardware layout this gen. It's overly complicated and, coming a year late, just underwhelming in real-world performance compared to the 360.
 
Trying to get a feel for what you are talking about here. Are you referring to a single die with a command processor, SPUs, plus the necessary graphics systems like TMUs and ROPs, with the SPUs replacing traditional shaders? Or are you referring to something more like the PS2, where 3D was still handled on the main processor and a separate die did the 2D work?

Well it could go either way. That's what was so exciting about the approach; it was very flexible.
The SPUs on the die with the tilebuffer would probably mostly have been dealing with 2.5D stuff like shading and decompression of various textures, for bandwidth reasons.
But the whole point of the Cell approach was that SPUlets could be allocated around the system, and even across a network, as resources fluctuated, though there was still latency to contend with.

At least with a dedicated GPU and CPU die you have this "guaranteed amount of processing potential" (let's not get into memory subsystems :p ). I think, certainly in Sony's case this gen, a Cell-based graphics approach with a separate 2D GPU, or the hardware on the actual CPU die, would've worked well in Sony's favor once devs caught up with the technology, since they ended up having to do that anyway to make up for RSX's lack of competitive geometry and vertex-pushing capabilities.

Load balancing is already an issue with the programmable shaders of today.
It's a balancing act in general between rigid, fast, guaranteed power, where you will inevitably waste resources by stalling the very specialized hardware at some points, and more flexible hardware that can simply be reconfigured.
But that's an age-old story.

I think it's safe to say Sony could've designed a better hardware layout this gen. It's overly complicated and, coming a year late, just underwhelming in real-world performance compared to the 360.

The point here, I think, is that software and development tools are a huge factor, and possibly take longer to get right than anything else. If you tap into something existing like DX or OpenGL, you get a lot of stuff served on a silver platter.
But, and this is the other important point, you won't make more than incremental advances; you won't be able to do things in a new and better way and gain an advantage that way.

Sony made the mistake of not investing as much in development software as they ought to have with the PS2.

I'm imagining that with PS3, Kutaragi and co., as the very plausible and well-grounded rumor goes, were planning on doing a GS2 with SPUs and eDRAM, and were behind on the dev tools.
Someone at Sony panicked and insisted that PS3 should have a "normal" GPU, to launch it faster and because they had lost faith in him after the GS debacle.
This could be what broke Kutaragi and eventually got him ousted.
 
Well now Squeak you really got my mind going off topic here....new thread for this?

Say we did get a single-die Cell-based graphics system, would it be able to run TMUs and ROPs at 3.2 GHz?

If not...

2 PPEs
~16 SPUs @ 3.2 GHz
+
20 TMUs
8 ROPs @ 500 MHz domain clock

Shared 128-bit memory bus with 1 GB of XDR RAM. Like with AMD Trinity, all the memory is available for whatever. No eDRAM, so we can get more memory and a simpler mobo.
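For fun, the paper throughput of that spec (assuming the usual 8 single-precision flops per SPU per cycle, one texel per TMU per clock and one pixel per ROP per clock; purely back-of-envelope):

```c
#include <stdio.h>

int main(void) {
    /* Paper numbers for the hypothetical Cell-GPU above. */
    double spu_ghz = 3.2, gpu_ghz = 0.5;
    int spus = 16, tmus = 20, rops = 8;

    /* Each SPU: 4-wide single-precision FMA = 8 flops/cycle. */
    printf("SPU array : %.1f GFLOPS\n", spus * 8 * spu_ghz);
    printf("texel rate: %.1f Gtexels/s\n", tmus * gpu_ghz);
    printf("pixel rate: %.1f Gpixels/s\n", rops * gpu_ghz);
    return 0;
}
```

That works out to roughly 410 GFLOPS of programmable power, 10 Gtexels/s and 4 Gpixels/s on paper, before any of the balancing questions above come into play.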
 
Well now Squeak you really got my mind going off topic here....new thread for this?

Say we did get a single-die Cell-based graphics system, would it be able to run TMUs and ROPs at 3.2 GHz?

If not...

2 PPEs
~16 SPUs @ 3.2 GHz
+
20 TMUs
8 ROPs @ 500 MHz domain clock

Shared 128-bit memory bus with 1 GB of XDR RAM. Like with AMD Trinity, all the memory is available for whatever. No eDRAM, so we can get more memory and a simpler mobo.

I'd keep the eDRAM; it makes a whole lot of things much, much faster and easier. It also makes the memory interface cheaper. You wouldn't need XDR or other high-bandwidth, low-latency RAM; bog-standard memory would work fine as long as it could support a front buffer and the occasional texture read.
But if you went with unified memory, there would be that much more bandwidth for the CPU.
As a nice fringe benefit it could emulate the GS more easily than anything else.

I'm not crazy about the PPEs as such. They seem unnecessary and inelegant. Why not keep the same microarchitecture throughout the whole design? Either have an SPU with special abilities, or a common interface that all SPUs can hook into for memory access and such. But I digress. Why wouldn't one PPE be enough for this application?

Edit: Something like an updated SpursEngine with eDRAM would probably do well.
http://en.wikipedia.org/wiki/SpursEngine
 
Dreamcast's use of a tile buffer ensured aspects of IQ that cost extra memory on other systems, so DC didn't have to compromise its texture quality for IQ to the same degree as PS2.

The division of workload among the processors in PS2, and to some extent PS3, was misguided in how it balanced specialization against generalization for extracting the most performance, IQ, and features from a given area and power budget. The maturity of the API and programming model was really secondary there.
 
I had Hitman: Blood Money on the PS2 years ago (unfortunately I have lost my copy) and it showed really impressive stuff which I never saw on GC from what I remember: normal mapping and bump mapping with a simple software solution. Of course it wasn't at the level of the PC, but IO had probably made one of the most technically impressive games on the PS2. I really invite the most curious to give it a try, because it is incredible for the hardware. I even reconsidered Chaos Theory's achievement on the PS2 after seeing Blood Money's results on the same hardware.
 
The thing with the PPEs is that there will always be some type of work they can do better than an SPE. Not sure that would be the case when it comes to straight graphics processing, but if it can assist SPEs in even a small task and speed up performance, then why the hell not.

I do wonder what the games would look like on PS3 if Sony had gone with Toshiba's design. I do recall Panajev2001, or whatever his/her name was, posting or linking to a diagram of it. Looked cool to me, but obviously it was grandiose and lacked things in hardware like shading. Still, I think SPEs would do a strong job of shading; they already help out RSX in that regard.
 
Dreamcast's use of a tile buffer ensured aspects of IQ that cost extra memory on other systems, so DC didn't have to compromise its texture quality for IQ to the same degree as PS2.

The division of workload among the processors in PS2, and to some extent PS3, was misguided in how it balanced specialization against generalization for extracting the most performance, IQ, and features from a given area and power budget. The maturity of the API and programming model was really secondary there.

DC's tiling had very little to do with texture quality from what I can see. It got you more texture fillrate and also saved overdraw. Other than that I don't see any reason why the tiling would play a part in that aspect.

WRT balancing and programmable vs. fixed, that is the way the trend goes today. Sony anticipated it with PS2, and I don't think it was too early. There will always be a need for specialized fixed hardware that you can trust to perform the same every time, but you have to consider very carefully which functions get that treatment. It has to be tasks that are performed the same way every time and are performed all the time. Otherwise you don't "extract enough performance", as you said, from a given slice of silicon.
The fixed hardware on the old GPUs was done for convenience more than optimal design.
If there had been one standard PC GPU, it would have been programmable long before it actually became so.

The thing with the PPEs is that there will always be some type of work they can do better than an SPE. Not sure that would be the case when it comes to straight graphics processing, but if it can assist SPEs in even a small task and speed up performance, then why the hell not.

I do wonder what the games would look like on PS3 if Sony had gone with Toshiba's design. I do recall Panajev2001, or whatever his/her name was, posting or linking to a diagram of it. Looked cool to me, but obviously it was grandiose and lacked things in hardware like shading. Still, I think SPEs would do a strong job of shading; they already help out RSX in that regard.

As I said, the PPE is inelegant and in a way goes against the whole grain of the idea in CELL.
It has a different architecture, takes up way too much space for what it does (there are die images available online) and consumes a lot of power. For instance, it has a large L2 cache that takes up the space of one entire SPU, and the AltiVec units are completely redundant given the SPUs.
Sure, coders might have found it nice to find something more conventional and well known in there, but it is wrong for the same reasons C++'s and Java's approach to OO is wrong: it's on the fence about the new idea, botches it up, doesn't force people to think about it and embrace it, and creates doubt as to its wholeheartedness.
I have a suspicion that it was just included to please some people at IBM, so there would be something IBM-native in the design.
It would be perfectly possible to have one of the SPUs, or an ad hoc assembly of them at times, take care of the tasks the PPE does.
 
That kind of wrap around texturing was bad practice even back then. You don't waste anything if you use texturing to texture single objects or parts of a model, instead of the whole model.

That's not really true. If you're using power-of-two textures and you don't want unevenly stretched textures - because they look bloody awful - you can easily end up with "white space" that consumes memory and does nothing. By packing lots of surfaces onto a single texture you can minimise the space lost in maintaining a regular distribution of texels over your polys. You also minimise the overhead of having a lookup table per texture. Changing the current texture you're sampling from also has an overhead, although I don't know how big an impact increasing this chore severalfold would have.
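To put toy numbers on that white-space point (the surface sizes here are made up purely for illustration):

```c
#include <stdio.h>

/* Round a dimension up to the next power of two. */
static int next_pow2(int x) {
    int p = 1;
    while (p < x) p <<= 1;
    return p;
}

int main(void) {
    /* Four made-up surfaces for one model. */
    int w[] = { 200, 90, 60, 130 };
    int h[] = { 150, 90, 40, 70 };
    int used = 0, padded = 0;

    for (int i = 0; i < 4; i++) {
        used   += w[i] * h[i];
        padded += next_pow2(w[i]) * next_pow2(h[i]);
    }
    printf("texels actually used      : %6d\n", used);
    printf("four padded pow2 textures : %6d\n", padded);
    printf("one shared 256x256 atlas  : %6d\n", 256 * 256);
    return 0;
}
```

Here the four separate power-of-two textures burn well over twice the texels actually used, while a single atlas (if the packing works out) roughly halves the waste, on top of saving the per-texture lookup tables and state changes.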

If you're loading everything in during one loading screen and you want decent texturing with properly spaced texels then large textures make sense. If you're streaming textures in and the gain from small textures is greater than the wastage then it's probably a different matter.

Particularly on old systems where a single texel often covers several or many pixels, uneven stretching across X and Y really is horrible though. I can't stress this enough.

The 8-bit consoles had to define the whole world with 16 colours, whereas on PS2 you had polygons, textures and 16 million colours to do it with, so no comparison there.

A single texture could easily cover a large proportion of the screen (potentially the entire screen), depending on camera and object placement. It's valid to point out that 16 colours was limiting in the 1980s and could easily be very limiting (and have a major negative impact) in later decades too!

You are not supposed to draw details when you have that many polys. The rare poster, painting, sign or other object in the game that needs more than 16 colours can be done fine with 8-bit without breaking the bank.

Details are still "drawn" using textures even today, even with 20 or 30 or more times the polygon complexity! You should try asking an artist whether there is any benefit to > 16 colours in their texturing work. Seriously.

Try doing some conversions of monochrome-ish stuff to 16 colours. Often, if there are no large colour gradations (which could be done with vertex colours anyway) but many small details, it's quite hard to spot the difference.
And again, even if we suppose that colour depth was a bigger problem than I'm making it out to be, there would still be no explaining the difference in resolution. S3TC has the same bits per pixel, and the DC had fewer MBs per frame.

Well if you don't think quantity of texture memory used and texture compression supported (CLUT vs VQ/S3TC) were a huge issue - and we've had devs flat out state that they were - what do you put it down to?

Unless of course you are implying that devs used 8bit textures very often to compensate, which I think we can safely say, based on anecdotal evidence and visual evidence, was not the case.
8bit CLUT textures, while they would have been on par with VQ in colours, or better in fact, would have been a complete waste of resources most of the time and would have been much blurrier.

And PS2 textures were, in fact, blurrier. I think for the DC ports they probably did use quite a few 8-bit CLUTs. Perhaps the PS2 was also less efficient with the way it stored textures - perhaps using more, smaller textures. This might lead to some of the issues I was talking about above, where textures are broken up and space (memory) is wasted. Maybe?

What size textures did the PS2 and Dreamcast support? I'm sure this has been discussed here before, but search is failing me.

If that was true, which I don't think it is, it's strange that not a single game chose to go in the other direction. Even games that were quite sparing on geometry and had limited environments, like ICO and Ecco the Dolphin, only had slightly better textures than other games.
Incidentally, Ecco, a DC port, has some of the best textures ever on PS2, even though they are still downgraded from the DC original. :???:

Ecco was really colourful. Using 16-colour textures for everything would have wrecked it - I can quite imagine that game using quite a few 8-bit CLUTs. ICO seemed to have more detailed geometry than DC games, so that should eat into memory (relative to DC games) too.

There was one game, Jak and Daxter, that had quite detailed textures, with none of the low-res blur apparent in almost all other games (the sequels didn't impress me as much). The downside was that it mainly used small, heavily tiled textures, which was noticeable once you saw it.
Point being, it was not some kind of deficiency in the system that prevented a high texel-to-pixel ratio.

I didn't think there was such a deficiency - texture resolution is the only issue I can think of for the blurry textures thing.

They were more detailed without question. Look at Metroid and RE4 for some good examples. 256x256 and 512x512 textures all over the place.

I was thinking specifically of ports from PS2 to Gamecube (the kind ERP mentioned at the start of this thread), where the structure of the game might not have allowed the GC's A-RAM to be used optimally.

Well, detail maps that are not supposed to be transparent in the details, water, windows, quads where you don't want the hard edges of binary alpha, etc. Depending on the game, of course, but they are not that rare.

They were on the DC! Opacity and binary alpha ahoy! The DC also supported 4- and 8-bit CLUTs though, iirc (it definitely supported some form of CLUT), so it had a reasonable fallback when VQ couldn't be used. I think some of the early Naomi games actually used quite a lot of CLUTs even when they didn't need to. I guess artist familiarity, and liking a certain "look" that fits your older hardware, could be issues, or perhaps the art was being worked on before the hardware and tools were finalised.
 
That's not really true. If you're using power-of-two textures and you don't want unevenly stretched textures - because they look bloody awful - you can easily end up with "white space" that consumes memory and does nothing. By packing lots of surfaces onto a single texture you can minimise the space lost in maintaining a regular distribution of texels over your polys. You also minimise the overhead of having a lookup table per texture. Changing the current texture you're sampling from also has an overhead, although I don't know how big an impact increasing this chore severalfold would have.

If you're loading everything in during one loading screen and you want decent texturing with properly spaced texels then large textures make sense. If you're streaming textures in and the gain from small textures is greater than the wastage then it's probably a different matter.

It's a matter of modeling style, I guess. But characters are only a small part of the total texture budget. The majority, and the largest, are environment textures, which are mostly tiled, at least if you are not using megatextures/clip-maps.
If using all the space was a big issue, it would be pretty easy to interleave many textures into one, with one big 8-bit palette that is changed instead of the texture (you'd have to have or write the appropriate software to do palette "animation"). That way you'd also be able to do textures with an arbitrary number of colours (up to 256) "on top" of each other.

Going completely out on a limb with this idea, you'd also be able to vary the texture over a model by using a fast untextured poly mask with alpha vertex blending to transition between either of two bitfields.
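A minimal sketch of what I mean by the shared-palette part (hypothetical code, not any real PS2 API): pack up to sixteen 16-colour textures into one 8-bit index texture, with the upper four bits selecting a palette bank, so "changing the texture" is just rewriting part of one 256-entry CLUT.

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t rgba;

/* One 256-entry CLUT viewed as sixteen 16-colour banks. Rewriting
   a bank re-skins every texel that references it - no texture
   upload at all, which is the "palette animation" trick.          */
static rgba clut[256];

/* Compose an 8-bit index from a bank number and a 4-bit texel. */
static uint8_t pack_index(int bank, int texel4) {
    return (uint8_t)((bank << 4) | (texel4 & 0xF));
}

/* Point one bank at a new set of 16 colours. */
static void set_bank(int bank, const rgba colours[16]) {
    for (int i = 0; i < 16; i++)
        clut[bank * 16 + i] = colours[i];
}

int main(void) {
    rgba greys[16];
    for (int i = 0; i < 16; i++)
        greys[i] = 0xFF000000u | (rgba)(i * 17) * 0x00010101u;

    set_bank(0, greys);                 /* sub-texture 0 -> greys   */
    uint8_t idx = pack_index(0, 15);    /* bank 0, brightest entry  */
    printf("decoded texel: %08X\n", (unsigned)clut[idx]);
    return 0;
}
```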

A single texture could easily cover a large proportion of the screen (potentially the entire screen), depending on camera and object placement. It's valid to point out that 16 colours was limiting in the 1980s and could easily be very limiting (and have a major negative impact) in later decades too!

Details are still "drawn" using textures even today, even with 20 or 30 or more times the polygon complexity! You should try asking an artist whether there is any benefit to > 16 colours in their texturing work. Seriously.

Well if you don't think quantity of texture memory used and texture compression supported (CLUT vs VQ/S3TC) were a huge issue - and we've had devs flat out state that they were - what do you put it down to?

Of course there are advantages to more than 16 colours. I'm just saying that in many instances resolution is more important. In that regard CLUT is on par with S3TC. You'd be making the wrong choice as a PS2 dev if you always prioritized colour over resolution.

And PS2 textures were, in fact, blurrier. I think for the DC ports they probably did use quite a few 8-bit CLUTs. Perhaps the PS2 was also less efficient with the way it stored textures - perhaps using more, smaller textures. This might lead to some of the issues I was talking about above, where textures are broken up and space (memory) is wasted. Maybe?

What size textures did the PS2 and Dreamcast support? I'm sure this has been discussed here before, but search is failing me.
They both supported much larger textures than were ever used in a 3D game. 1024x1024, I think.

GS VRAM was chopped into 8 KB pages, so it liked certain formats better than others. It was a minor issue though, as long as the geometry and UV setup of the game wasn't thrashing the page buffer for frame and texture too much, something that could happen if there were many polys with detailed textures slanted away from the camera. Think lack of inclination-aware MIP mapping. This, as I understand it, was never a great concern in well-written games.

Some devs seemed to think, though, that this meant you should never use more than 128x128 4-bit or the like, with MIP maps on a different page.
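A toy model of that page thrashing (the 128x64-texel page for 8-bit textures matches the documented GS layout; the sampling walk itself is invented):

```c
#include <stdio.h>

/* GS pages are 8 KB; for 8-bit (PSMT8) textures that is a
   128x64-texel tile of VRAM.                               */
#define PAGE_W 128
#define PAGE_H 64

static int page_of(int u, int v, int tex_w) {
    return (v / PAGE_H) * (tex_w / PAGE_W) + (u / PAGE_W);
}

int main(void) {
    /* Walk 64 screen pixels across a 512-texel-wide texture.
       The slanted case models a poly angled away from the
       camera: v jumps many texels per screen pixel.          */
    int flat = 0, slanted = 0, last_a = -1, last_b = -1;

    for (int x = 0; x < 64; x++) {
        int a = page_of(x * 2, 8,      512);  /* facing camera   */
        int b = page_of(x * 2, x * 24, 512);  /* steeply slanted */
        if (a != last_a) { flat++;    last_a = a; }
        if (b != last_b) { slanted++; last_b = b; }
    }
    printf("page switches, flat poly   : %2d\n", flat);
    printf("page switches, slanted poly: %2d\n", slanted);
    return 0;
}
```

The flat walk stays on one page; the slanted one hops pages every couple of pixels, and without inclination-aware MIP selection each hop means another page fetch.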

Ecco was really colourful. Using 16-colour textures for everything would have wrecked it - I can quite imagine that game using quite a few 8-bit CLUTs. ICO seemed to have more detailed geometry than DC games, so that should eat into memory (relative to DC games) too.

The geometry in ICO was quite simple.

I was thinking specifically of ports from PS2 to Gamecube (the kind ERP mentioned at the start of this thread), where the structure of the game might not have allowed the GC's A-RAM to be used optimally.

Well, one interesting example is the Sonic Adventure 2 GC port.
That game had some of the best textures ever on DC and was perfectly ported to GC (I even think it ran a tad smoother). The PS2 version, predictably, looked really bad.

They were on the DC! Opacity and binary alpha ahoy! The DC also supported 4- and 8-bit CLUTs though, iirc (it definitely supported some form of CLUT), so it had a reasonable fallback when VQ couldn't be used. I think some of the early Naomi games actually used quite a lot of CLUTs even when they didn't need to. I guess artist familiarity, and liking a certain "look" that fits your older hardware, could be issues, or perhaps the art was being worked on before the hardware and tools were finalised.

I don't think DC supported CLUT, actually. Simon would have to chime in to give us the final answer on that. It didn't have much use for it either, with VQ.
CLUT being VQ with one pixel per entry instead of a 2x2 block, it does seem like it would be quite easy to do, though.
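It's easy to show that relationship in code - a sketch (the 256-entry, 2x2 codebook is how DC VQ is usually described; none of this is real hardware code):

```c
#include <stdint.h>
#include <stdio.h>

typedef uint16_t rgb565;

/* CLUT: one 8-bit index -> one texel (8 bpp plus a 256-entry table). */
static rgb565 clut_lookup(const rgb565 *table, uint8_t idx) {
    return table[idx];
}

/* DC-style VQ: one 8-bit index -> a 2x2 block from a codebook of
   256 four-texel entries, i.e. 8 bits / 4 texels = 2 bpp.         */
static void vq_lookup(rgb565 book[256][4], uint8_t idx, rgb565 out[4]) {
    for (int i = 0; i < 4; i++)
        out[i] = book[idx][i];
}

int main(void) {
    static rgb565 table[256];
    static rgb565 book[256][4];
    rgb565 block[4];

    table[7]   = 0x1234;          /* same colour, stored both ways */
    book[7][3] = 0x1234;
    vq_lookup(book, 7, block);

    printf("CLUT texel %04X, VQ texel %04X\n",
           clut_lookup(table, 7), block[3]);
    printf("index cost: 8-bit CLUT = 8 bpp, 4-bit CLUT = 4 bpp, "
           "2x2 VQ = 2 bpp\n");
    return 0;
}
```

Same per-index lookup either way; VQ just amortises each index over four texels, which is where its bpp advantage comes from.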
 
Ecco was really colourful. Using 16-colour textures for everything would have wrecked it - I can quite imagine that game using quite a few 8-bit CLUTs.
16-colour textures could of course be layered in multipass rendering, and are further lit to 256 (or whatever it is) lightness values, along with whatever colour the light source is. So the same 16-colour texture could be quite colourful and varied throughout a scene due to placement and viewing. I'd be curious to see examples of PS2 games where the use of 16-colour textures is obvious. I think it was an option well utilised by developers when appropriate.
 
16-colour textures could of course be layered in multipass rendering, and are further lit to 256 (or whatever it is) lightness values, along with whatever colour the light source is. So the same 16-colour texture could be quite colourful and varied throughout a scene due to placement and viewing. I'd be curious to see examples of PS2 games where the use of 16-colour textures is obvious. I think it was an option well utilised by developers when appropriate.
In the '70s and '80s there were professional flight sims that only supported black-and-white 16-shade textures, which were then coloured with vertex colours.

On PS2, two-pass luminance compression would get you very good rates depending on the material (between 1 and 2.25 bpp for 4-bit material), with results that weren't shy of DC VQ, if not better in some instances. The low-res colour layer could be put on the same page as the MIP maps. This would be done by changing the palette and UV setup and then alpha blending.
But of course, all at the additional cost of an extra pass.
Anyone know if that method was actually used in a game?
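To make that concrete, a rough sketch of the maths (the layer resolutions and the plain modulate blend here are just one possible choice, not a recipe):

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t r, g, b; } rgb;

/* Pass 1: 4-bit luminance, expanded 0..15 -> 0..255.
   Pass 2: modulate by a colour texel from a low-res layer. */
static rgb shade(uint8_t luma4, rgb colour) {
    int l = luma4 * 17;
    rgb out = { (uint8_t)(l * colour.r / 255),
                (uint8_t)(l * colour.g / 255),
                (uint8_t)(l * colour.b / 255) };
    return out;
}

int main(void) {
    /* One way to land in the 1-2.25 bpp range mentioned above:
       4-bit luma at half res each axis = 1 bpp, plus a 16-bit
       colour layer at 1/8 res each axis = 0.25 bpp, so about
       1.25 bpp total for the material.                         */
    rgb sea = { 40, 120, 200 };
    rgb px  = shade(12, sea);
    printf("shaded texel: %d %d %d\n", px.r, px.g, px.b);
    return 0;
}
```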
 
I still think Ghost Hunter is one of the best-looking PS2 games available and a true case of what could be done on PS2 given the right effort. I remember being completely blown away by it; it was one of the cleanest-looking PS2 games as well, iirc.

[screenshot: ghost1zootk.jpg]

[screenshot: ghost3e8rim.jpg]
 
I was more impressed with the GC's best-looking games (RE4, IMO) than the PS2's best, to be perfectly honest, though I was more technically impressed with what was pulled off on the PS2.

To deliver my opinion on the earlier RE4 vs MGS3 debate..........

RE4 clearly had better shading, the addition of bump mapping, and awesome lighting, not to mention better textures on average. Its art was really good. The overall polygon counts also seem quite a bit higher than what we see in MGS3. What blows me away most, now that I remember it, is the lack of slowdown in RE4 GC.

MGS3 clearly puts its polygon throughput into the density of the vegetation. I also remember the high amount of real-time water reflection used in appropriate parts of the game. Textures were average at best, but of course probably would've been better with more RAM or better texture decompression schemes. Anyone know if Kojima Productions used both VUs? I would assume so.

Speaking of water reflections........

Versus bump and normal maps, why are water reflections so ubiquitous on the PS2, yet we see nary a bump or normal map on the system? In what regard do they differ in hardware and memory resource use that makes reflection maps seem easy to deploy versus other "advanced" shaders (more vertex-shader based?)?
 
I still think Ghost Hunter is one of the best-looking PS2 games available and a true case of what could be done on PS2 given the right effort. I remember being completely blown away by it; it was one of the cleanest-looking PS2 games as well, iirc.
It helps when you link to PR shots or PC emulators running 16xSSAA. ;)
[screenshot: ghosthunter-20040708045711601.jpg]
 
It helps when you link to PR shots or PC emulators running 16xSSAA. ;)
[screenshot: ghosthunter-20040708045711601.jpg]

I did that because native shots when displayed on here wouldn't look as good as the game does actually running on a TV; it stops the game from being made to look worse than it actually is.

I think you can see the point in the details though; it is a purdy-looking game.

This one's from an emulator though, with no AA..
 
almighty said:
I still think Ghost Hunter is one of the best-looking PS2 games available and a true case of what could be done on PS2 given the right effort. I remember being completely blown away by it; it was one of the cleanest-looking PS2 games as well, iirc.

Lol. Yeah, you might also just start showing pictures of CGI cutscenes if you use those pictures!
 
I did that because native shots when displayed on here wouldn't look as good as the game does actually running on a TV...
The PS2 had no magical IQ improvement thanks to display on a TV versus any other games or platform. CRTs made things easier on the eye back then. In the cold light of LCD monitors, though, the game is clearly no better than others. Maybe they had ground-texture filtering, which would be a marked improvement on monstrosities like FFX, but all in all I don't see that it's any cleaner than other games. Just that the palette hides jaggies better. Possibly also native res, where other games might be upscaled (no pixel counting for the PS2 era!).
 
The PS2 had no magical IQ improvement thanks to display on a TV versus any other games or platform. CRTs made things easier on the eye back then. In the cold light of LCD monitors, though, the game is clearly no better than others. Maybe they had ground-texture filtering, which would be a marked improvement on monstrosities like FFX, but all in all I don't see that it's any cleaner than other games. Just that the palette hides jaggies better. Possibly also native res, where other games might be upscaled (no pixel counting for the PS2 era!).

You named quite a few reasons why it looked cleaner than the average game.

Pretty sure it had 480p support too
 