Was GC more or less powerful than PS2? *spawn

In terms of raw processing the GC was ahead of the PS2 in most areas. Hardware T&L on Flipper was 8 GFLOPS, plus 2.5 GFLOPS on the CPU, 10.5 GFLOPS total, compared to only 6.2 GFLOPS for the PS2's entire chipset, and even then you had to pull some magic tricks to get that out of it. The pixel side of the PS2's GS was beefy and could pump crazy pixel numbers, but in real-world situations with multilayer textures it dropped like a rock, even below the GC's.
 
Is this argument still going on?

Let's propose a different scenario to compare the machines' power. What if one machine had the EE and Flipper, and the other had Gekko and the GS? Which machine would be more powerful then?
 
Is this argument still going on?

Let's propose a different scenario to compare the machines' power. What if one machine had the EE and Flipper, and the other had Gekko and the GS? Which machine would be more powerful then?

My biggest complaints about the PS2 were always the washed-out colors, low-resolution textures and aliasing.

If you exchanged the PS2's GS for Flipper, those issues would probably vanish, as we would get a more balanced machine.
 
I think there is a lot more psychology in this than people are willing to believe. Not the least factor is whether the developers are fans of that particular system, both with regard to who made it and whether it does things the way they are used to doing them.
Being a fan of things is not something confined to mere mortal gamers.

The last generation was a kind of watershed in development complexity, not only when it came to assets but perhaps even more so with regard to code. Console coders finally ran into some of the same complexity problems that have plagued large computer programs since the '60s. And as with the rest of the business, or perhaps even worse, the reaction has been slow and incremental, even on already well-known problems.
What was a good way to code on small machines very quickly turned into messy, intractable garbage heaps at some point.
Good ideas often don't scale.

It's as if The Mythical Man-Month was never written, as if higher-level development in the vein of Lisp and Smalltalk was simply not applicable to games, etc.

I suspect the main reason the Xbox got such good results was the factors above. Even though the DX libs are abysmal, they are still better than panicking, half-decent-to-bad coders trying to learn (or not learn, even) the PS2 architecture while doing a multi-console port.

A developer once described the difference between the PS2 and the Xbox as the difference between writing with a brush and with a ballpoint pen.
I think that's a very apt description.
I also think the GC fits the ballpoint-pen category well, just like another Nintendo machine, the DS.
It is set up to perform to a certain spec and isn't very malleable or flexible beyond that.

Five years isn't long to learn a completely new and very different architecture, and on top of that you have the constant pressure of deadlines and budgets.
I think Sony counted on being able to dominate much more than they were able to. The Xbox was sort of a spanner in the works because it allowed an "easy" escape route into console land, with tools and methods that developers were already familiar with.

I think multiplatform games were developed for some imaginary in-between console that just covered the common ground between the machines, with none of the three machines' strengths catered to. Except perhaps the Xbox, where it was relatively straightforward to add some extra assets because of the extra memory and to use the API to put in some detail texturing and other nice effects.

I'm not saying the PS2 was a great misunderstood architecture with no flaws. Not at all. I just think that, counterintuitive as it may sound given how successful it ultimately was, it was the most underutilized of the three machines.

Developers just never had much incentive, time or will to push the machine beyond a certain almost preordained, agreed-upon limit.

For instance, no one ever used the MPEG-2 decompression unit to do anything other than decompress video for FMV sequences, when in fact it was envisioned that, with the right method, it could be used to increase the texture budget by as much as 5x. And that's just one small example...

The PS2 was an interesting architecture that contained a lot of forward-looking concepts, some still in the bud and others just not used very well at all.
The PS3 continued some of the ideas started with the PS2, but I don't think it brought a lot of them to their logical conclusion, and others were watered down or scrapped entirely.
Pity.
 
The PS3 continued some of the ideas started with the PS2, but I don't think it brought a lot of them to their logical conclusion, and others were watered down or scrapped entirely.
Pity.

Thank god for the rise of the true 3D GPU. I'm sure it made things quite a bit easier when developers started to implement bump and normal mapping, as well as other advanced effects. Of course, Xbox and PC developers could essentially port to and from either platform as long as they managed their CPU and memory limits accordingly.
 
Thank god for the rise of the true 3D GPU. I'm sure it made things quite a bit easier when developers started to implement bump and normal mapping, as well as other advanced effects. Of course, Xbox and PC developers could essentially port to and from either platform as long as they managed their CPU and memory limits accordingly.

But it wasn't really envelope-pushing, nor did it buy Sony any particular advantage.
An SPU-equipped GPU with a decent chunk of eDRAM for tiling AND a good API would have been much better suited to today's needs, would have been much more scalable, and, not least, Sony would have owned it.

The synergy achievable between the two processors, with the GPU really being a subset of Cell, would have been very hard to beat.
It would have been more "true 3D" than anything available now or then.
 
I'm not saying the PS2 was a great misunderstood architecture with no flaws. Not at all. I just think that, counterintuitive as it may sound given how successful it ultimately was, it was the most underutilized of the three machines.

I disagree! If your console sells 150 million, dominates the games industry for 7 - 8 years in a way no other console in history has, and is for most of that the target platform for all multiplatform games, and it's still the most underutilised console of all (when the others live for 2 ~ 5 years and sell no more than 25 million) then you built the wrong console, you're to blame, and both you and your console should be ridiculed!

I think the PS2 was probably utilised pretty well by midway through its life, and much better than the Xbox, which was continually hobbled by games targeting the PS2. The Xbox barely ever had a chance to shine in its horribly short lifespan, but when it did it was massively ahead of both the PS2 and GC - almost a half-generation gap in my (subjective) view.

For instance, no one ever used the MPEG-2 decompression unit to do anything other than decompress video for FMV sequences, when in fact it was envisioned that, with the right method, it could be used to increase the texture budget by as much as 5x. And that's just one small example...

IIRC, the JPEG textures would still need to be decompressed fully into main memory and converted into CLUTs before they could be used as textures for triangles. If so, it would save you some latency on accessing textures from disc, but you'd need a pool of main memory dedicated to a JPEG cache and still need to store all the uncompressed textures for your current scene in main memory. Scene management would also become more complex and processor-intensive, and additional constraints would be placed on the art and design side (as less uncompressed texture data would be available per scene). If my recollection is even remotely accurate then it's not surprising no one ever used it. It would make far more sense to use it for non-interactive animated backgrounds - something that mostly got left behind with the PS1.

Given the pressures on PS2 texture memory I find it inconceivable that even PS2 exclusives would opt to ignore a practical and cost effective way to increase texture quality by 5X.

We should now talk about the Dreamcast, how it was twice as fast as a Neon 250, and how it's a pity that its aniso filtering, Dot3 bump mapping, and modifier volumes were awesome but so rarely used. Because that's the kind of territory we're heading into.* :p

*No-one does "b...b...b..but my favourite console was underutilised!!!" like a Dreamcast fanboy!
 
In terms of raw processing the GC was ahead of the PS2 in most areas. Hardware T&L on Flipper was 8 GFLOPS, plus 2.5 GFLOPS on the CPU, 10.5 GFLOPS total, compared to only 6.2 GFLOPS for the PS2's entire chipset, and even then you had to pull some magic tricks to get that out of it. The pixel side of the PS2's GS was beefy and could pump crazy pixel numbers, but in real-world situations with multilayer textures it dropped like a rock, even below the GC's.

Unless I'm missing something, Gekko was 1.9 GFLOPS. I'm also not sure how much of an advantage the GC had in multi-texturing. The average GC game used 2 or 3 texture layers, probably around what the PS2 achieved with multi-pass.

The PS2's flexible approach did result in some nice-looking games. I don't know if I'd say superior to the GameCube, but definitely competitive, and on older hardware. It's too bad deferred rendering was held back by bus limitations going from the GS to main memory; normal mapping on the PS2 would've been ridiculous.
 
*No-one does "b...b...b..but my favourite console was underutilised!!!" like a Dreamcast fanboy!

FWIW my favourite console ever is the N64.

I disagree! If your console sells 150 million, dominates the games industry for 7 - 8 years in a way no other console in history has, and is for most of that the target platform for all multiplatform games, and it's still the most underutilised console of all (when the others live for 2 ~ 5 years and sell no more than 25 million) then you built the wrong console, you're to blame, and both you and your console should be ridiculed!

Not if you expected the market conditions to be significantly different. Instead the Xbox comes in and gives developers, constrained as they are in a number of ways, a shortcut to technically good-looking games. Sugar daddy Microsoft's treat. ;-)
The Xbox became the "elite" platform: where you put your effort if you had any to spare.

The history of computing is full of people not using seemingly obvious and sometimes straightforward good ideas, out of conservatism, out of spite, out of mental laziness, out of FUD, for religious reasons, etc.

With the Xbox it was a lot easier to get important, differentiating stuff like detail texturing and normal mapping into your game. It didn't require thinking up methods and writing a lot of code.

For instance, bump mapping in a variety of implementations (CLUT cycling, offset mapping, normal mapping with or without textured normalization, etc.) would be pretty trivial to get working with multipass on the PS2, but I can only think of one game (the last Hitman game on the system) that used it to even the slightest extent.
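
To make the multipass idea a bit more concrete, here is a rough sketch of the CPU-side setup for the classic two-pass emboss/offset variant. This isn't from any shipped title, and draw_pass() is just a stand-in for whatever actually submits the geometry; the blend configuration details are glossed over.

```c
#include <stdio.h>

/* Rough sketch of the CPU-side setup for two-pass emboss/offset bump mapping.
 * draw_pass() is a stand-in for whatever actually submits geometry; the point
 * is the UV shift: draw the height map, then a copy shifted against the light
 * direction with subtractive blending, then modulate the base texture on top. */

typedef struct { float x, y, z; } Vec3;
typedef struct { float u, v; } UV;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

enum { BLEND_NONE, BLEND_SUBTRACT, BLEND_MODULATE };

static void draw_pass(const char *name, UV offset, int blend)
{
    printf("pass %-8s  uv offset (%+.4f, %+.4f)  blend mode %d\n",
           name, offset.u, offset.v, blend);
}

int main(void)
{
    /* Per-surface tangent frame and a unit light direction (example values). */
    Vec3 tangent   = { 1.0f, 0.0f, 0.0f };
    Vec3 bitangent = { 0.0f, 1.0f, 0.0f };
    Vec3 light     = { 0.577f, 0.577f, 0.577f };

    float shift = 1.0f / 128.0f;   /* roughly one texel on a 128x128 height map */

    UV none   = { 0.0f, 0.0f };
    UV offset = { -dot3(light, tangent) * shift,     /* shift against the light in u */
                  -dot3(light, bitangent) * shift }; /* and in v                     */

    draw_pass("height", none,   BLEND_NONE);     /* height map as-is              */
    draw_pass("emboss", offset, BLEND_SUBTRACT); /* shifted copy, subtracted      */
    draw_pass("base",   none,   BLEND_MODULATE); /* base texture modulated on top */
    return 0;
}
```

The whole trick is the per-surface UV shift against the light direction: the subtract pass approximates the directional derivative of the height map, which is what reads as bumpiness.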

I think the PS2 was probably utilised pretty well by midway through its life, and much better than the Xbox, which was continually hobbled by games targeting the PS2. The Xbox barely ever had a chance to shine in its horribly short lifespan, but when it did it was massively ahead of both the PS2 and GC - almost a half-generation gap in my (subjective) view.

Midway through its life the PS2 already looked old hat even to developers, because to the naive the Xbox looked so much more powerful on paper. Also, the Xbox, together with the GC, was so much more straightforward to get good results out of, mainly because of the well-developed DX and OpenGL libs and APIs.
In the last years of its lifespan it was just the lowest-common-denominator system and was used and developed for as such. So those years don't count; by then it was "too late".

IIRC, the JPEG textures would still need to be decompressed fully into main memory and converted into CLUTs before they could be used as textures for triangles. If so, it would save you some latency on accessing textures from disc, but you'd need a pool of main memory dedicated to a JPEG cache and still need to store all the uncompressed textures for your current scene in main memory. Scene management would also become more complex and processor-intensive, and additional constraints would be placed on the art and design side (as less uncompressed texture data would be available per scene). If my recollection is even remotely accurate then it's not surprising no one ever used it. It would make far more sense to use it for non-interactive animated backgrounds - something that mostly got left behind with the PS1.

A single frame very rarely has larger bandwidth and memory usage for textures than what the finished frame takes up in total, and certainly not back then. With realistically 15 to 35 MB available per frame to the GS, that should be plenty to use even 16-bit textures, or even 24-bit if it made sense. CLUT conversion could probably also have been handled by the decompressor, even if the advantage would be smaller.
Getting the JPEG decompression working would be a matter of setting up a virtualization scheme, not unlike LODing and MIP mapping. Not trivial, but certainly doable.
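
Purely as an illustration of what such a virtualization scheme might look like (a host-side simulation only, nothing hardware-specific; decode_compressed_tile(), the slot count and the tile size are all made-up placeholders), the core of it is just an on-demand cache of decompressed tiles with LRU eviction:

```c
#include <stdio.h>
#include <string.h>

/* Host-side simulation of a virtualized texture cache: compressed source data
 * stays put, and only the mip tiles needed right now live decompressed in a
 * small resident pool, evicted LRU-style. decode_compressed_tile() stands in
 * for whatever the decompression hardware would actually do. */

#define CACHE_SLOTS 32
#define TILE_BYTES  (64 * 64)      /* one 64x64 8-bit tile, purely illustrative */

typedef struct {
    int texture_id;                /* -1 while the slot is empty */
    int mip_level;
    int last_used_frame;           /* for LRU eviction           */
    unsigned char texels[TILE_BYTES];
} TileSlot;

static TileSlot cache[CACHE_SLOTS];
static int frame = 0;

/* Placeholder for the real decode step. */
static void decode_compressed_tile(int texture_id, int mip_level, unsigned char *dst)
{
    memset(dst, (texture_id * 31 + mip_level) & 0xFF, TILE_BYTES);
}

/* Return a resident copy of the requested tile, decoding it on a miss. */
static const unsigned char *request_tile(int texture_id, int mip_level)
{
    int i, victim = 0;

    for (i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].texture_id == texture_id && cache[i].mip_level == mip_level) {
            cache[i].last_used_frame = frame;          /* cache hit             */
            return cache[i].texels;
        }
        if (cache[i].last_used_frame < cache[victim].last_used_frame)
            victim = i;                                /* remember the LRU slot */
    }

    /* Miss: evict the least recently used slot and decode into it. */
    cache[victim].texture_id = texture_id;
    cache[victim].mip_level = mip_level;
    cache[victim].last_used_frame = frame;
    decode_compressed_tile(texture_id, mip_level, cache[victim].texels);
    return cache[victim].texels;
}

int main(void)
{
    int i;
    for (i = 0; i < CACHE_SLOTS; i++)
        cache[i].texture_id = -1;

    for (frame = 0; frame < 3; frame++) {   /* fake a few frames of requests */
        request_tile(0, 0);
        request_tile(0, 1);
        request_tile(1, 0);
    }
    printf("simulated %d frames of virtualized tile requests\n", frame);
    return 0;
}
```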

Given the pressures on PS2 texture memory I find it inconceivable that even PS2 exclusives would opt to ignore a practical and cost effective way to increase texture quality by 5X.

Why bother if the customer seems satisfied with halved or quartered texture sizes?
BTW, PS2 memory wasn't that constrained compared to the competition: 24 MB for the GC and 15 MB for the DC. It was just Microsoft that was willing to bleed money.
 
Squeak, I definitely agree about the N64, it is my favourite as well. I have been replaying Donkey Kong 64, and it remains impressive for its time (perhaps less so than Banjo Tooie, which I found exceptionally good). The environment mapping is impressive on those selected metal surfaces that use it. All of those Rare games had a great, consistent look in my opinion.

It also seems an example of the era when Nintendo hardware was technically quite strong - the R4300i and the RCP were seemingly very sound choices for the release date (though I know the relative merits of that generation were debated extensively many years ago, including the persuasive critique of the memory subsystem).

Anyway, it is off-topic, but I certainly find the N64 a great console.
 
Not if you expected the market conditions to be significantly different. Instead the Xbox comes in and gives developers, constrained as they are in a number of ways, a shortcut to technically good-looking games. Sugar daddy Microsoft's treat. ;-)
The Xbox became the "elite" platform: where you put your effort if you had any to spare.

You're kind of saying that an entire global community of publishers disregarded the business practice of putting resources where they'll get the most return, and instead spent all their spare efforts on Xbox? Just because developers wanted to? I really find that difficult to believe.

ERP has already said that this was not the general case, and commented on just how little work typically went into making the Xbox version of a PS2-focused multiplatform game (and this was most multiplatform games).

The history of computing is full of people not using seemingly obvious and sometimes straightforward good ideas, out of conservatism, out of spite, out of mental laziness, out of FUD, for religious reasons, etc.

But you're implying that this is pretty much the entire story of PS2 development, yet not the case for Xbox and GC development. That's rather one-sided and appears rather conspiratorial.

With the Xbox it was a lot easier to get important, differentiating stuff like detail texturing and normal mapping into your game. It didn't require thinking up methods and writing a lot of code.

Which means that Microsoft did a much better job of developing a platform than Sony did! I disagree with you that PS2 developers didn't like thinking and writing code to get good results though. If they have the time, a lot of game developers like nothing more than being able to experiment and push.

For instance, bump mapping in a variety of implementations (CLUT cycling, offset mapping, normal mapping with or without textured normalization, etc.) would be pretty trivial to get working with multipass on the PS2, but I can only think of one game (the last Hitman game on the system) that used it to even the slightest extent.

I think Baldur's Gate used bump mapping too.

Bump mapping that doesn't use normal maps is kind of limited and outside of certain constraints would look wrong. It also eats up texture memory, which would hurt the PS2 more than most of its competitors. I doubt it was trivial to get normal mapping working at acceptable speed on the PS2.

Midway through its life the PS2 already looked old hat even to developers, because to the naive the Xbox looked so much more powerful on paper. Also, the Xbox, together with the GC, was so much more straightforward to get good results out of, mainly because of the well-developed DX and OpenGL libs and APIs.
In the last years of its lifespan it was just the lowest-common-denominator system and was used and developed for as such. So those years don't count; by then it was "too late".

The Xbox wasn't just more powerful on paper, and the difference in terms of graphics was really big.

On the Xbox and GC, better and better-looking titles came along from 2003 to 2005. Sony continued to push some great-looking exclusives out on the PS2, but they still didn't add normal mapping, find a way around the texture compromises, or get visuals near Xbox quality. "Too late" in the context of this argument seems like a bit of a cop-out for why the PS2 never saw the kind of mega-breakthroughs you're saying were within easy reach.

For a 2000 console the PS2 ended up doing pretty damn well, but some advantages - like a smaller process node, a huge advance in underlying GPU design and an Intel OoOE CPU - are just too hard to overcome. There's no shame in the Xbox being more powerful given the time it arrived and the huge cost of the machine.

BTW, PS2 memory wasn't that constrained compared to the competition: 24 MB for the GC and 15 MB for the DC. It was just Microsoft that was willing to bleed money.

GC and DC both had a lot more memory than that, and the DC needed a lot less memory than the PS2 for textures of the same quality. The PS2 was only really short of memory compared to the Xbox, but as that seems to be the main comparison being made, it's certainly true to say it was constrained - even more than the numbers suggest, as the Xbox had more effective texture compression and 1GB of HDD scratch pad.
 
You're kind of saying that an entire global community of publishers disregarded the business practice of putting resources where they'll get the most return, and instead spent all their spare efforts on Xbox? Just because developers wanted to? I really find that difficult to believe.

ERP has already said that this was not the general case, and commented on just how little work typically went in to making the Xbox version of PS2 focused multiplatform game (and this was most multiplatform games).

Developers are not as rational and purely "going for gold" as it seems popular to think.
As already implied in my previous posts, there is a good deal of favoritism and macho behaviour in the industry, at all levels.
This is the reason Nintendo has had such a hard time getting American developers to put big-budget games on the Wii, even though they would probably have sold more, even with the mixed demographic.
The kiddy image (to an American male, is there anything worse, apart from being called gay, than being called kiddy?) and the "weak hardware" stigma are probably most of what it comes down to.

But you're implying that this is pretty much the entire story of PS2 development, yet not the case for Xbox and GC development. That's rather one-sided and appears rather conspiratorial.
Again brush and ballpoint pen...


Which means that Microsoft did a much better job of developing a platform than Sony did! I disagree with you that PS2 developers didn't like thinking and writing code to get good results though. If they have the time, a lot of game developers like nothing more than being able to experiment and push.
They did have an almost seven-year head start when it came to the API and libs. Not that that's cheating, but it doesn't prove the PS2 couldn't have done better given the right software tools at the right time.

The Xbox wasn't just more powerful on paper, and the difference in terms of graphics was really big.
Well, it wasn't as clear-cut as it would appear to a naive reader just looking at the numbers. The biggest and most real advantage was probably the 64 MB of RAM. But even there, the bandwidth-to-size ratio was better on the PS2.

On the Xbox and GC, better and better-looking titles came along from 2003 to 2005. Sony continued to push some great-looking exclusives out on the PS2, but they still didn't add normal mapping, find a way around the texture compromises, or get visuals near Xbox quality. "Too late" in the context of this argument seems like a bit of a cop-out for why the PS2 never saw the kind of mega-breakthroughs you're saying were within easy reach.

I never said it was easy. I said it was doable if the effort had been more concerted and focused on that one machine. Also, Sony's follow-up strategy with tools was not great. Maybe Kutaragi was already feeling the pinch at that time?

For a 2000 console the PS2 ended up doing pretty damn well, but some advantages - like a smaller process node, a huge advance in underlying GPU design and an Intel OoOE CPU - are just too hard to overcome. There's no shame in the Xbox being more powerful given the time it arrived and the huge cost of the machine.

There was a year and a half between the machines, not a lot, and not enough to explain that large a difference.
We've been over this before, but just look at the DC: much better textures from a machine that was also a year and a half behind.
Don't start with VQ compression now. The issue was not colour depth but resolution, and there VQ and CLUT are on pretty even footing.
 
Well, it wasn't as clear-cut as it would appear to a naive reader just looking at the numbers. The biggest and most real advantage was probably the 64 MB of RAM. But even there, the bandwidth-to-size ratio was better on the PS2.

I'd say the biggest advantage was the graphics chip - lots and lots of polygons with much better texture filtering, fast multitexturing, DXTC, and per-pixel effects that really made things stand out. The memory certainly helped though - the Xbox would have been half crippled with only 32 MB of RAM.

I never said it was easy. I said it was doable if the effort had been more concerted and focused on that one machine. Also, Sony's follow-up strategy with tools was not great. Maybe Kutaragi was already feeling the pinch at that time?

Was Kutaragi ever that bothered about the software/tools side of things? It seems that he was more interested in hardware and performance. Then again, if he had been bothered about the software side of things you'd probably never have gotten Cell, and that's got a lot of love from hardcore codebangers (I just made that word up).

There was a year and a half between the machines, not a lot, and not enough to explain that large a difference.
We've been over this before, but just look at the DC: much better textures from a machine that was also a year and a half behind.
Don't start with VQ compression now. The issue was not colour depth but resolution, and there VQ and CLUT are on pretty even footing.

Fafalada reckoned you needed 2-3 times the memory for CLUTs to achieve similar quality to the DC's VQ. When you're talking about 4- and 8-bit CLUTs you can't separate colour and resolution issues, because the two are interchangeable - hence choosing optimally between higher-resolution 4-bit and lower-resolution 8-bit textures is part of development.

The PS2 had the memory to match DC texture quality using CLUTs, but that was not the best way to use the PS2's memory. I think Fafalada touched on this too. No system is perfect, I guess - the DC was held back by polygon transform and lighting on the CPU, the PS2 lacked advanced texture compression support and certain texture filtering options, and the Xbox was short of bandwidth and Microsoft were short of billion$ to lose making it (lol).
 
I'd say the biggest advantage was the graphics chip - lots and lots of polygons with much better texture filtering, fast multitexturing, DXTC, and per-pixel effects that really made things stand out. The memory certainly helped though - the Xbox would have been half crippled with only 32 MB of RAM.

The PS2 had a much more synergistic take on doing 3D. The EE was really made to work with the GS: do all of the 3D processing in one place, then let the GS take care of the 2D and 2.5D stuff. It's a different approach, but not a worse one as such.
Having flexible and powerful 3D processing in one place allows you, for instance, to do early culling of a lot of geometry instead of using bandwidth to push raw meshes around the machine. It also allows you to create new geometry procedurally, giving huge savings on many kinds of geometry, and generally to have a much more alive, interactive world. You could potentially also do completely wacky, off-the-wall things in software that would be impossible on a GPU even today: voxels, for instance, or creative stuff with volumetrics or particles.
Even though the Xbox had higher geometry throughput on paper, I never saw a game that proved it. I suspect that once everything is culled, clipped and generated, the PS2 could have had a real advantage in that department.
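
As a generic illustration of the "cull before you spend bandwidth" point (not PS2-specific code, just the standard bounding-sphere-versus-frustum test you would run per object before submitting its mesh):

```c
#include <stdio.h>

/* Standard bounding-sphere-vs-frustum test: reject an object before any of
 * its mesh data is pushed anywhere. Plane normals are assumed to point
 * inward, so dot(n, p) + d >= 0 means "inside this plane". */

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 n; float d; } Plane;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* 1 if the sphere is at least partly inside all six planes, 0 otherwise. */
static int sphere_visible(const Plane frustum[6], Vec3 centre, float radius)
{
    int i;
    for (i = 0; i < 6; i++)
        if (dot3(frustum[i].n, centre) + frustum[i].d < -radius)
            return 0;              /* fully outside one plane: cull it */
    return 1;
}

int main(void)
{
    /* Toy "frustum": an axis-aligned box from -10 to 10 on each axis. */
    Plane frustum[6] = {
        {{ 1, 0, 0}, 10}, {{-1, 0, 0}, 10},
        {{ 0, 1, 0}, 10}, {{ 0,-1, 0}, 10},
        {{ 0, 0, 1}, 10}, {{ 0, 0,-1}, 10},
    };
    Vec3 near_object = { 0, 0, 5 };
    Vec3 far_object  = { 0, 0, 50 };

    printf("near object visible: %d\n", sphere_visible(frustum, near_object, 1.0f));
    printf("far object visible : %d\n", sphere_visible(frustum, far_object, 1.0f));
    return 0;
}
```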

Was Kutaragi ever that bothered about the software/tools side of things? It seems that he was more interested in hardware and performance. Then again, if he had been bothered about the software side of things you'd probably never have gotten Cell, and that's got a lot of love from hardcore codebangers (I just made that word up).

Well, we can only speculate. It seems, though, that he had some of the same biological cell analogies going on in his head with the EE and Cell as the people who invented object-oriented programming did. Ideally, software and hardware really should be designed for each other and interwoven much more than is the case today.

Fafalada reckoned you needed 2-3 times the memory for CLUTs to achieve similar quality to the DC's VQ. When you're talking about 4- and 8-bit CLUTs you can't separate colour and resolution issues, because the two are interchangeable - hence choosing optimally between higher-resolution 4-bit and lower-resolution 8-bit textures is part of development.

The PS2 had the memory to match DC texture quality using CLUTs, but that was not the best way to use the PS2's memory. I think Fafalada touched on this too. No system is perfect, I guess - the DC was held back by polygon transform and lighting on the CPU, the PS2 lacked advanced texture compression support and certain texture filtering options, and the Xbox was short of bandwidth and Microsoft were short of billion$ to lose making it (lol).

As already mentioned, the limitation was not texture throughput per frame, for which there was more than enough bandwidth on any of the four consoles. If we take it that main memory size was the problem, then that leaves us with the problem of explaining how the GC and especially the DC could trump the PS2 in that respect so thoroughly. I can't remember whether the DC could texture from the 16 MB pool or transfer textures to VRAM while building a frame, or whether you were stuck with the 8 MB per frame.
Either way, even with VQ it still had much less texture memory in total than the PS2. The 2bpp VQ could only help so much, because the actual average compression rate would only be somewhat higher than 4-bit CLUT (not double) once the larger tables and extra transparencies needed are counted.
If the PS2 had used JPEG decompression then... well, you do the math. :smile:
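
For what it's worth, some back-of-the-envelope arithmetic on the VQ-versus-CLUT point (all sizes assumed purely for illustration: a fixed 256-entry codebook of 2x2 16-bit texels on the VQ side, 32-bit palette entries on the CLUT side) shows how much the answer depends on texture size; the codebook overhead really does eat the advantage on small maps, while large ones get close to the nominal 2 bpp:

```c
#include <stdio.h>

/* Rough per-texture storage for DC-style 2bpp VQ versus PS2-style 4-bit CLUT.
 * Assumed (for illustration only): a fixed 256-entry codebook of 2x2 16-bit
 * texels on the VQ side, and 32-bit palette entries on the CLUT side. */

static void compare(int w, int h)
{
    int texels = w * h;
    int vq    = texels / 4 + 256 * 4 * 2;   /* one 8-bit index per 2x2 block + codebook */
    int clut4 = texels / 2 + 16 * 4;        /* 4 bits per texel + 16-entry palette      */

    printf("%4dx%-4d  VQ: %6d bytes (%.2f bpp)   4-bit CLUT: %6d bytes (%.2f bpp)\n",
           w, h, vq, vq * 8.0 / texels, clut4, clut4 * 8.0 / texels);
}

int main(void)
{
    compare(64, 64);      /* codebook overhead dominates: VQ ends up the larger one */
    compare(256, 256);    /* overhead amortised: VQ approaches its nominal 2 bpp    */
    compare(512, 512);
    return 0;
}
```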
 
Didn't the last Matrix game on PS2 use normal mapping?

Overall, the biggest advantages that the Xbox had over the PS2 were texture compression and pixel shaders; those two things make a large difference.

Could Sony have made a custom chip with pixel shaders back when they were designing the PS2?
 
Didn't the last Matrix game on PS2 use normal mapping?

Overall, the biggest advantages that the Xbox had over the PS2 were texture compression and pixel shaders; those two things make a large difference.

Could Sony have made a custom chip with pixel shaders back when they were designing the PS2?

No.
Texture compression was not an advantage with regard to resolution; S3TC is still 4bpp. Colour depth doesn't matter that much, because depending on the game the majority of textures will tend toward the monochrome or only have two sets of gradating colours. For that, 4-bit CLUT is, if certainly not ideal, then adequate. What matters is resolution.

The pixel shaders were:
A - Not powerful enough to make a real difference to the overall look of most games.
B - Replaceable, in that you could do a lot of the same stuff in other ways, faster, on the PS2 by using the frugal functions of the GS for blending and CLUT manipulation.

What really made the difference was detail mapping, better MIP mapping (which took into account the inclination of the surface to the line of sight, something that could have been compensated for on the PS2 by doing it on the EE), and to a much lesser extent normal mapping.
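
As a sketch of the kind of CPU-side compensation being described (a generic approximation, not any particular engine's method): bias the mip level per polygon by roughly log2 of the cosine of the angle between the view direction and the surface normal, clamped so it can't sharpen by more than a couple of levels.

```c
#include <math.h>
#include <stdio.h>

/* Per-polygon compensation for isotropic mip selection at grazing angles:
 * sharpen the LOD by roughly log2(cos(theta)), theta being the angle between
 * the (unit) view direction and the (unit) surface normal, clamped so it can
 * never sharpen by more than two levels. A generic approximation only. */

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static float grazing_lod(float base_lod, Vec3 view_dir, Vec3 normal)
{
    float c = fabsf(dot3(view_dir, normal));  /* cos(theta)                        */
    float bias;

    if (c < 0.01f) c = 0.01f;                 /* avoid log2(0) at extreme angles   */
    bias = log2f(c);                          /* negative: selects a sharper mip   */
    if (bias < -2.0f) bias = -2.0f;           /* limit the aliasing this can cause */

    base_lod += bias;
    return base_lod < 0.0f ? 0.0f : base_lod;
}

int main(void)
{
    Vec3 view    = { 0.0f, 0.0f, 1.0f };
    Vec3 facing  = { 0.0f, 0.0f, 1.0f };      /* polygon facing the camera         */
    Vec3 grazing = { 0.0f, 0.9798f, 0.2f };   /* tilted ~78 degrees away           */

    printf("facing  polygon: lod %.2f\n", grazing_lod(3.0f, view, facing));
    printf("grazing polygon: lod %.2f\n", grazing_lod(3.0f, view, grazing));
    return 0;
}
```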
 
How can the environment be far more open if you have to stop and wait for loading? That's a complete contradiction in terms.

There's no contradiction since - like I said before - even though there are load times, each section of MGS3 is still much larger and more wide open than the ones in RE4. Keep in mind that RE4 has load times as well, and that each of its sections is basically a straight, limited, narrow/linear path.


You keep talking about RE4's environments being lifeless; detail-wise they are not in the slightest. Various animals roam around in RE4 and there are also lots of little details like dust/fog/leaves blowing around.

Animals in RE4? The only time I noticed any animals was in the first chapter, and it's just a few cows and some chickens running around, whereas living creatures are found in nearly every part of MGS3. MGS3's environments are just full of life and detail.


Also, RE4's lighting was dynamic, that much is completely obvious... I mean, where are you even getting this stuff from? Did you play the PS2 or PC version or something?

A lot of the lighting in RE4 is pre-baked - this was achieved by increasing the contrast/brightness of the textures. It's a trick. And no, I played the GC version.


I actually agree in part about textures. RE4 was inconsistent to a degree: some of its texture work was a bit low-detail while other parts were extremely detailed. MGS3's textures were consistent, though - consistently bland green and brown smudges (outside of some cut scenes)... razor sharp? You're joking, yeah?

MGS3's textures are very consistent in quality since they remain great-looking throughout the game, and don't suddenly drop down to "low-res" quality in certain sections like RE4's do.


As for the lava room being one small area, yes it was a smaller area than most in RE4, it was also absolutely stunning. The lava, the heat effects in the air, probably the most impressed I've ever been by a game.

But then again, that is just one small section, whereas there are many incredible-looking moments in MGS3.
 
As already mentioned, the limitation was not texture throughput per frame, for which there was more than enough bandwidth on any of the four consoles. If we take it that main memory size was the problem, then that leaves us with the problem of explaining how the GC and especially the DC could trump the PS2 in that respect so thoroughly. I can't remember whether the DC could texture from the 16 MB pool or transfer textures to VRAM while building a frame, or whether you were stuck with the 8 MB per frame.

It wasn't the PS2's memory size that was the problem - that seemed fine taking into account the machine's release date and comparing it to the DC and GC. It was the amount of space that textures of similar quality to the DC, GC and Xbox took up in main memory that was the problem. A 16-colour CLUT simply doesn't have the colour depth to replace 2bpp VQ and 4bpp S3TC/DXTC.

I remember that large textures were often used to cover all or most of a model (character body, car, etc.) because they were the most efficient way to manage textures and also to store the data (arranging surfaces optimally within the texture to minimise wasted blank space). 16 colours could easily be an appallingly low number of colours to use in a texture for a 24-bit rendered image, depending on what you wanted to reproduce. 16 colours is what the Master System had for backgrounds (and sprites), btw. :D

The DC could only texture from video RAM - meaning every game had around 5.5 MB of texture memory available all the time. I don't think you could really use it for anything else (at least without copying to main RAM first), so that's probably why DC games tended to have such detailed textures compared to the PS2. On the PS2, texture memory had to fight for space with everything else. Using a mix of 4- and 8-bit CLUTs, and using Fafalada's rough conversion/comparison, that equates to something like 11-16.5 MB of PS2 main RAM lost if you want "DC quality" textures - which would barely leave you enough memory for DC-quality models and maps. It's easy to see, IMO, why PS2 devs chose to go for much higher poly counts (which would also benefit from more and better per-vertex lighting).

On the GC it would seem that S3TC and the A-RAM made a similar difference. Although, now I think about it, while I remember GC textures being far more colourful, I'm not sure they were always that much higher resolution than the PS2's. Perhaps that depended on how you used the A-RAM. As with the PS2, textures had to fight for space with everything else.

Either way, even with VQ it still had much less texture memory in total than the PS2. The 2bpp VQ could only help so much, because the actual average compression rate would only be somewhat higher than 4-bit CLUT (not double) once the larger tables and extra transparencies needed are counted.

Transparent textures were rare though, and with large textures (the norm) the space taken by conversion tables should be relatively tiny (unless I'm missing something). And with quite a few 8-bit CLUTs needed in the mix to do a similar-looking range of textures, I can easily imagine needing more than double the space.

If the PS2 had used JPEG decompression then... well, you do the math. :smile:

Even without JPEG decompression, the kind of from-disc texture streaming that we see on current systems would have benefited the PS2 a lot. Check out RalliSport Challenge 2 on the Xbox to see tens or hundreds of MB of data per (full) course being streamed in from the HDD - the game is incredible, and the most impressive racing/driving game of last gen by a frikkin * mile.

*That's how impressive it is - I almost swore.
 
It wasn't the PS2's memory size that was the problem - that seemed fine taking into account the machine's release date and comparing it to the DC and GC. It was the amount of space that textures of similar quality to the DC, GC and Xbox took up in main memory that was the problem. A 16-colour CLUT simply doesn't have the colour depth to replace 2bpp VQ and 4bpp S3TC/DXTC.

I remember that large textures were often used to cover all or most of a model (character body, car, etc.) because they were the most efficient way to manage textures and also to store the data (arranging surfaces optimally within the texture to minimise wasted blank space). 16 colours could easily be an appallingly low number of colours to use in a texture for a 24-bit rendered image, depending on what you wanted to reproduce. 16 colours is what the Master System had for backgrounds (and sprites), btw. :D

That kind of wrap-around texturing was bad practice even back then. You don't waste anything if you use textures on single objects or parts of a model instead of the whole model.
The 8-bit consoles had to define the whole world with those 16 colours, whereas on the PS2 you had polygons, textures and 16 million colours to do it with, so there's no comparison there.
You are not supposed to draw detail with textures when you have that many polys. The rare poster, painting, sign or other object in a game that needs more than 16 colours can be done with 8-bit just fine without breaking the bank.
Try doing some conversions of monochrome-ish material to 16 colours. Often, if there are no large colour gradations (which could be done with vertex colours anyway) but many small details, it's quite hard to spot the difference.
And again, even if we suppose that colour depth was a bigger problem than I'm making it out to be, that still wouldn't explain the difference in resolution. S3TC has the same bits per pixel, and the DC had fewer MB per frame.
Unless, of course, you are implying that devs used 8-bit textures very often to compensate, which I think we can safely say, based on anecdotal and visual evidence, was not the case.
8-bit CLUT textures, while they would have been on par with VQ in colours, or better in fact, would have been a complete waste of resources most of the time and would have been much blurrier.

The DC could only texture from video RAM - meaning every game had around 5.5 MB of texture memory available all the time. I don't think you could really use it for anything else (at least without copying to main RAM first), so that's probably why DC games tended to have such detailed textures compared to the PS2. On the PS2, texture memory had to fight for space with everything else. Using a mix of 4- and 8-bit CLUTs, and using Fafalada's rough conversion/comparison, that equates to something like 11-16.5 MB of PS2 main RAM lost if you want "DC quality" textures - which would barely leave you enough memory for DC-quality models and maps. It's easy to see, IMO, why PS2 devs chose to go for much higher poly counts (which would also benefit from more and better per-vertex lighting).

If that were true, which I don't think it is, it's strange that not a single game chose to go in the other direction. Even games that were quite spare on geometry and had limited environments, like ICO and Ecco the Dolphin, only had slightly better textures than other games.
Incidentally, Ecco, a DC port, has some of the best textures ever on the PS2, even though they are still downgraded from the DC original. :???:
There was one game, Jak and Daxter, that had quite detailed textures, with none of the low-res blur apparent in almost all other games (the sequels didn't impress me as much); the downside was that it mainly used small, heavily tiled textures, which was noticeable once you saw it.
The point being that it was not some deficiency in the system that prevented a high texel-to-pixel ratio.

On the GC it would seem that S3TC and the A-RAM made a similar difference. Although, now I think about it, while I remember GC textures being far more colourful, I'm not sure they were always that much higher resolution than the PS2's. Perhaps that depended on how you used the A-RAM. As with the PS2, textures had to fight for space with everything else.

They were more detailed without question. Look at Metroid and RE4 for some good examples: 256x256 and 512x512 textures all over the place.

Transparent textures were rare though, and with large textures (the norm) the space taken by conversion tables should be relatively tiny (unless I'm missing something). And with quite a few 8-bit CLUTs needed in the mix to do a similar-looking range of textures, I can easily imagine needing more than double the space.

Well, detail maps that aren't supposed to be transparent in the details, water, windows, quads where you don't want the hard edges of binary alpha, etc. It depends on the game of course, but they are not that rare.

Even without JPEG decompression, the kind of from-disc texture streaming that we see on current systems would have benefited the PS2 a lot. Check out RalliSport Challenge 2 on the Xbox to see tens or hundreds of MB of data per (full) course being streamed in from the HDD - the game is incredible, and the most impressive racing/driving game of last gen by a frikkin * mile.

Some PS2 games did actually stream textures from the DVD. I seem to remember the sequel to Baldur's Gate being one of them.
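
For illustration, a minimal double-buffered streaming loop of the sort such games would be built around (host-side simulation only; begin_async_read() and read_finished() are stand-ins for whatever the platform's asynchronous disc API actually was):

```c
#include <stdio.h>
#include <string.h>

/* Minimal double-buffered streaming loop (host-side simulation only):
 * begin_async_read() and read_finished() stand in for the platform's real
 * asynchronous disc API. One buffer feeds the renderer while the next chunk
 * of texture data is read behind it. */

#define CHUNK_BYTES 4096

static unsigned char buffers[2][CHUNK_BYTES];

static void begin_async_read(int chunk, unsigned char *dst)
{
    memset(dst, chunk & 0xFF, CHUNK_BYTES);   /* pretend the data arrived */
}
static int read_finished(void) { return 1; }  /* pretend it always has    */

int main(void)
{
    int front = 0;        /* buffer the renderer uses this frame */
    int next_chunk = 0;
    int frame;

    begin_async_read(next_chunk++, buffers[front]);        /* prime the first chunk */

    for (frame = 0; frame < 8; frame++) {
        int back = 1 - front;

        begin_async_read(next_chunk++, buffers[back]);      /* fetch ahead */

        /* "Use" the front buffer for this frame's texture uploads. */
        printf("frame %d: using chunk %d (first byte %u)\n",
               frame, next_chunk - 2, (unsigned)buffers[front][0]);

        if (read_finished())
            front = back;           /* swap once the background read lands */
    }
    return 0;
}
```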
 