PS2 question

Regarding JPEG texture compression:
The IPU is on the EE die, and the decompressed data still has to be transferred into GS memory. Wouldn't JPEG decompression be a bit like building a ship in a bottle: a lot of work, but not much gained?

Since 99.9% of PS2 games use CLUT, it's like the claim that the PS2 can do bump mapping :oops:
 
Wildstyle said:
Uhh, how could you say that the PS2 has a gargantuan pile of texture-only bandwidth? If I'm not mistaken, doesn't the GameCube have 10.4GB/sec of texture-only bandwidth, and what exactly would the GC's core use all that bandwidth for?

GameCube also has 1MB fixed space for textures which has to move around a lot more.

Couple that with GameCube using trilinear as standard...

I suppose if devs actually used MIP mapping reliably, let alone trilinear...

Also:

marconelly! said:
6x compared to what? Non-compressed 32 bit textures? Certainly not 6x compared to 4 or 8 bit clut textures.

As far as I can see colour quality would improve, though S3TC artefacts would be added. <g> And don't dig up those old 4- and 8-bit pieces of art - they had VERY limited colour ranges.
 
GameCube also has 1MB fixed space for textures which has to move around a lot more.

Couple that with GameCube using trilinear as standard...

I suppose if devs actually used MIP mapping reliably, let alone trilinear...

So what are you saying, bro, that the GC has less overall texture bandwidth than the PS2 because the GC has a 1MB fixed TC and the textures have to be moved around a lot more? And also, doesn't the PS2 have a fixed cache too, or is it whatever is left over after the frame/Z-buffer is taken into account?
 
Where to start...

So explain what the core's going to do with that gargantuan pile of texture-only bandwidth? 9.6GB/sec if I'm not mistaken. I suppose actually that could be used for really hi-res rendered textures... but otherwise...

Depends... Obviously you're dealing with a theoretical derived from the hardware specs. What actually comes from it though is more of a factor of latency and what the programmer does with it...

As far as "hi-res" textures go, that'd be a no-no...

Yeah, well, were you making a serious effort to push numerous large textures? Not to belittle you or anything, I deeply respect your opinion on such things, but you came from Square, correct? FFX isn't exactly the most incredibly detail-textured game I've seen, and it doesn't use MIP maps either which saves on texture size too.

Well for one, FFX is a poor example for you to dig up. Not only is it older than iron (for a PS2 game), it's also loaded with a ton of legacy code adapted for the PS2. Certainly not the shining example of a well-done PS2 title. Secondly, even with three client paths, you've still got more to worry about utilizing the existing bus bandwidth before worrying about needing more. And finally, MIP maps do not save on texture size! If anything you're increasing your memory footprint by 1.333x (unless you're just talking about storing everything in main memory and loading only the specified MIP level into your local buffer).
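For anyone following along, the 1.333 factor is just the geometric series of the mip chain (1 + 1/4 + 1/16 + ... = 4/3). A quick Python sketch (illustrative only; `mip_chain_texels` is a made-up helper, not from any SDK):

```python
# Each mip level has 1/4 the texels of the one above it, so a full
# chain approaches 4/3 (~1.333x) of the base texture's footprint.
def mip_chain_texels(base_w, base_h):
    total = 0
    w, h = base_w, base_h
    while w >= 1 and h >= 1:
        total += w * h
        w //= 2
        h //= 2
    return total

print(mip_chain_texels(256, 256) / (256 * 256))  # ~1.3333
```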

Nobody said it would... but having S3TC would mean up to 6x the texture space and total bandwidth... giving the artists and designers more freedom in slapping hi-res, colourful textures on everything.
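Where the "up to 6x" comes from, for reference: S3TC/DXT1 packs each 4x4 texel block into 64 bits, a fixed 4 bits per texel. A quick sanity check in Python (just the arithmetic, not an actual encoder):

```python
# DXT1 block: two RGB565 endpoint colours + sixteen 2-bit palette indices.
block_bits = 2 * 16 + 16 * 2   # 32 bits of endpoints + 32 bits of indices
bits_per_texel = block_bits / 16
print(bits_per_texel)          # 4.0 bpp
print(24 / bits_per_texel)     # 6.0 -> "6x" vs 24-bit RGB
print(32 / bits_per_texel)     # 8.0 vs 32-bit RGBA
```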

If anything, S3TC might have the side-effect of thrashing your page-buffer just as much as a non-compressed large UV map, while also causing even more DRAM accesses, killing your actual fill-rate even more... At least in a simple implementation... A more sophisticated implementation would probably require significant redesign of the GS (which may have proven too much of an undertaking within its design timeframe).

Wouldn’t JPEG decompression be a bit like building a ship in a bottle: A lot of work but not much gained?

No, it's still quite useful for storing data in main mem that would otherwise consume MUCH more...
 
And don't dig up those old 4- and 8-bit pieces of art - they had VERY limited colour ranges.
I hope you realize that most textures have even more limited colour ranges. I simply don't think S3TC offers any significant improvement, and various comparisons that I've seen on the internet bear that out. Sure, it would look somewhat better and would take somewhat less space depending on texture colour complexity, but I don't think it would be significant at all.
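One way to test that claim empirically would be to count distinct colours: if a texture uses 256 or fewer, an 8-bit CLUT stores it losslessly and S3TC buys you little beyond colour precision. A Python sketch (`fits_clut` is a hypothetical helper, not from any toolchain):

```python
# A texture fits a CLUT losslessly if its distinct-colour count is
# within the palette size (256 entries for 8-bit, 16 for 4-bit).
def fits_clut(pixels, palette_size=256):
    return len(set(pixels)) <= palette_size

gradient = [(i, i, i) for i in range(200)]  # 200 distinct greys
print(fits_clut(gradient))       # True  -> lossless in an 8-bit CLUT
print(fits_clut(gradient, 16))   # False -> 4-bit CLUT would quantize
```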
 
Yeah, but programming that kind of streaming by hand is a PITA that wouldn't be (as) necessary if GS had say an additional 2MB, or S3TC support.

From my understanding, one of the things you do is sort by shader unless you want to kick your performance in the crotch.
 
marconelly! said:
And don't dig up those old 4- and 8-bit pieces of art - they had VERY limited colour ranges.
I hope you realize that most textures have even more limited colour ranges. I simply don't think S3TC offers any significant improvement, and various comparisons that I've seen on the internet bear that out. Sure, it would look somewhat better and would take somewhat less space depending on texture colour complexity, but I don't think it would be significant at all.

I would consider myself fairly well versed in the field of texture compression and, IMHO, both S3TC and VQ offer significant improvements over 4/8 bpp CLUT textures in terms of quality/bit. Examples comparing the various techniques can be found here: I may eventually add PVR-TC to this as well.

To answer someone else's comments on why the DC could do so well with 'only' 800MB/s of access, it was of course due to a number of factors, but the main ones were:
  • a ~2bpp texture mode (i.e. VQ)
  • an efficient texture cache and rendering system
  • the deferred texturing and shading mechanism.
Simon
 
I would consider myself fairly well versed in the field of texture compression and, IMHO, both S3TC and VQ offer significant improvements over 4/8 bpp CLUT textures in terms of quality/bit
Ah, yes, that's exactly the site I was thinking about. Well, I guess it depends on what you consider by 'significant'.
 
FF12 to use tweaked Bouncer engine! :oops: :oops: :oops:

FFX has too many dithered looking textures and shimmering! The quantity is varied but the quality is lacking. ;)
 
chaphack said:
FF12 to use tweaked Bouncer engine! :oops: :oops: :oops:

FFX has too many dithered looking textures and shimmering! The quantity is varied but the quality is lacking. ;)



I might be wrong, but I remember playing The Bouncer and I really don't recall anything too impressive apart from the admittedly very nice character models and the physics... oh, and the motion blur, which was cool at first but got tiring after a while. Other than that it was a mile and a half below the WOW effect FFX gave me... and that's without mentioning THE GAME itself :oops:

anyone care to enlighten me?
 
Wildstyle said:
So what are you saying, bro, that the GC has less overall texture bandwidth than the PS2 because the GC has a 1MB fixed TC and the textures have to be moved around a lot more? And also, doesn't the PS2 have a fixed cache too, or is it whatever is left over after the frame/Z-buffer is taken into account?

The textures have to move a lot more because GC has *less* space. It's also invisible to the developer, so I doubt it's as efficient as PS2's cache either.

archie4oz said:
Depends... Obviously you're dealing with a theoretical derived from the hardware specs. What actually comes from it though is more of a factor of latency and what the programmer does with it...

As far as "hi-res" textures go, that'd be a no-no...

By hi-res textures I meant render-to-texture textures which could very well be hi-res. Right?

And that's also what I'm talking about in general, "hi-res" textures shouldn't be a no-no. They should be feasible. But they aren't, for various reasons which we're in the process of discussing now. 8)

archie4oz said:
Well for one, FFX is a poor example for you to dig up. Not only is it older than iron (for a PS2 game), it's also loaded with a ton of legacy code adapted for the PS2. Certainly not the shining example of a well-done PS2 title. Secondly, even with three client paths, you've still got more to worry about utilizing the existing bus bandwidth before worrying about needing more. And finally, MIP maps do not save on texture size! If anything you're increasing your memory footprint by 1.333x (unless you're just talking about storing everything in main memory and loading only the specified MIP level into your local buffer).

I mentioned FFX because IIRC you came from Square. That's correct, isn't it? It's the best example I could come up with from what I know some of your experience comes from. You said:

archie4oz previously said:
As far as the GIF-GS interface... The performance is what you make of it. I personally haven't hit performance bottlenecks with it myself (other than stupid things, which cause too much bus chatter and leave you hanging dry, but that applies to pretty much any bus and is a problem of the programmer not the hardware.)

And FFX certainly wouldn't max out the GS's main memory pipe.

I'm just working from stuff I know something about, hun. ^_^;
 
The textures have to move a lot more because GC has *less* space. It's also invisible to the developer, so I doubt it's as efficient as PS2's cache either.

I suppose that would depend on how good the developer is at managing the PS2's cache.

BTW this brings me to something I've been thinking about recently. Hopefully a GC dev can clear this up for me:

GC's cache is invisible to the dev when using Nintendo's dev tools. But couldn't a dev change that by not using those tools and instead programming closer to the metal?

For instance, AFAIK, plenty of PS2 devs use tools that attempt to manage the cache for them. But a lot of devs also choose to program closer to the metal and manage everything themselves. Isn't this sort of thing also possible on GC?
 
Teasy said:
The textures have to move a lot more because GC has *less* space. It's also invisible to the developer, so I doubt it's as efficient as PS2's cache either.

I suppose that would depend on how good the developer is at managing the PS2's cache.

BTW this brings me to something I've been thinking about recently. Hopefully a GC dev can clear this up for me:

GC's cache is invisible to the dev when using Nintendo's dev tools. But couldn't a dev change that by not using those tools and instead programming closer to the metal?

For instance, AFAIK, plenty of PS2 devs use tools that attempt to manage the cache for them. But a lot of devs also choose to program closer to the metal and manage everything themselves. Isn't this sort of thing also possible on GC?

Sure it is. But the maximum efficiency - texture compression ignored for a moment because the two consoles use totally different formats - of a 1MB cache will be half that of a 2MB cache (they have comparable bandwidth... 9.6 vs 10.4)
 
Simon F said:
I would consider myself fairly well versed in the field of texture compression and, IMHO, both S3TC and VQ offer significant improvements over 4/8 bpp CLUT textures in terms of quality/bit. Examples comparing the various techniques can be found here: I may eventually add PVR-TC to this as well.

True, but the test on your site uses photographic or multicolour images; most textures of individual objects in the real world consist of very few colours (and therefore don't benefit much from S3TC). Large individual surfaces with many colours are really only common on man-made objects like photos, paintings or rugs.
 
On the whole JPEG thing, I've said this before - it's a means to expand your memory. Most of you have no trouble understanding the benefits of stuff like streaming data from obscenely slow optical media, or using slow-ass ARAM when main memory is tight, or generating data dynamically via whatever procedural approaches.
I don't see what's so hard to grasp about using a unit with much higher throughput than any of the above cases. (Except maybe that the IPU is in a Sony console, but whatever.)

Tagrineth said:
Nobody said it would... but having S3TC would mean up to 6x the texture space and total bandwidth... giving the artists and designers more freedom in slapping hi-res, colourful textures on everything.
Mem/bandwidth improvement would be more like between 1:1 and 2:1. Probably closer to 1:1 in current games. You'd be improving the colour quality of textures, but that's pretty much the only benefit worth mentioning.
Speaking of improvements, automatic texture caching would be more notable - as far as programmers go, but it wouldn't really help the artists though.

Tagrineth said:
And FFX certainly wouldn't max out the GS's main memory pipe.
It could very well be memory limited before any of the buses is touched though.

Teasy said:
GC's cache is invisible to the dev when using Nintendo's dev tools. But couldn't a dev change that by not using those tools and instead programing closer to the metal?
Flipper texture cache management is a hardware function - not that of tools. It is, however, possible to lock portions of the texture cache and use them as normal memory - much the same way the GS does.
So if you absolutely want to, you could hand manage the texture uploads on GC too.
 
Tag:

GC's 1MB dedicated TC is not less than the ~2MB framebuffer space (NOT TC!) the PS2 has, because GC uses texture compression in its cache in almost all cases (except when rendering to texture, naturally). So that 1MB is comparable to 4-6MB in reality.

And since it uses virtual texturing (only fetching visible portions of a texture), that lasts even longer than in other hardware which has to be fed the entire texture.

Faf already mentioned the texture locking mechanism too, so basically, your complaints/comments re. GCs texturing mechanism seem overly harsh. :D


*G*
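For what it's worth, the 4-6MB equivalence follows directly from S3TC's fixed 4 bits per texel; a quick check (assuming 16bpp and 24bpp uncompressed formats as the comparison points):

```python
# 1MB of S3TC-compressed data holds as many texels as 4MB of 16bpp
# or 6MB of 24bpp uncompressed texture data.
cache_bytes = 1 * 1024 * 1024
texels = cache_bytes * 8 // 4              # 4 bits per texel under S3TC
print(texels * 16 // 8 // (1024 * 1024))   # 4 (MB equivalent at 16bpp)
print(texels * 24 // 8 // (1024 * 1024))   # 6 (MB equivalent at 24bpp)
```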
 
Grall said:
Tag:

GC's 1MB dedicated TC is not less than the ~2MB framebuffer space (NOT TC!) the PS2 has, because GC uses texture compression in its cache in almost all cases (except when rendering to texture, naturally). So that 1MB is comparable to 4-6MB in reality.

I know, and PS2 has 2MB for uncompressed textures - but with PS2's best compression method, 8-bit CLUT, its 2MB becomes much larger too. S3TC > 8-bit CLUT for quality, but they're still two different means to one end, which is why I said 'not counting TC'.

And since it uses virtual texturing (only fetching visible portions of a texture), that lasts even longer than in other hardware which has to be fed the entire texture.

Right, right, I'm stupid sometimes, I was one of the preachers of this one a while back, remember? I forgot about that for a moment. VT makes the 1MB texture cache "seem" much larger compared to a conventional 1MB cache, even with the same other capabilities...

Faf already mentioned the texture locking mechanism too, so basically, your complaints/comments re. GCs texturing mechanism seem overly harsh. :D

*G*

Don't get me wrong. I think GCN basically took PS2's 'proof-of-concept' cache setup and fixed pretty much every problem it had. I wasn't aware of the texture locking though, which could be VERY useful in many games (especially ones where certain bump textures are used way too often <g>).
 
The textures have to move a lot more because GC has *less* space. It's also invisible to the developer, so I doubt it's as efficient as PS2's cache either.

Sure it is. But the maximum efficiency - texture compression ignored for a moment because the two consoles use totally different formats - of a 1MB cache will be half that of a 2MB cache (they have comparable bandwidth... 9.6 vs 10.4)

Hmm, so generally then, what would be the efficiency of the GC's texture cache compared to the PS2's? And also, on the last part of the last quote, were you trying to say that the GC's texture cache would be like 5.2GB/s, half of its bandwidth, compared to the PS2's 9.6GB/s, because the PS2 has 2MB compared to the GC's 1MB?
 
Squeak said:
True, but the test on your site uses photographic or multicolour images; most textures of individual objects in the real world consist of very few colours (and therefore don't benefit much from S3TC). Large individual surfaces with many colours are really only common on man-made objects like photos, paintings or rugs.
I'm a bit surprised by your comments:
  • You seem to imply that I have chosen images that would not be used in a game - I'm therefore bewildered as to what you think the "Unreal texture" is. Furthermore, I tend to collect textures/images sent to me from developers and I feel that my choice of images is fairly representative. (well with the exception of the last example which is deliberately chosen to show problems with all the methods!)
  • In S3TC, each 4x4 block can only use a very restricted set of four colours - 2 chosen and 2 which are implied.
  • As for textures with many colours - they do occur in games. As I said, I have a corpus of textures from various sources. Are you sure you're not looking at examples that are targeted specifically at systems which only have, say a CLUT system, thus forcing developers to restrict their art work? (BTW have you seen the colour ranges of Normal Maps!!)
Actually, I found your last statement a bit ironic given someone else wrote "And don't dig up those old 4- and 8-bit pieces of art - they had VERY limited colour ranges". :) It just shows that some people clearly see such 'near monochromatic' textures as somewhat bland. <shrug>
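The "two chosen and two implied" colours can be sketched as a tiny decoder (DXT1 opaque-block mode assumed; `dxt1_palette` is illustrative, not production code):

```python
# In each 4x4 DXT1 block, only c0 and c1 are stored; the decoder
# derives c2 and c3 by interpolating 1/3 and 2/3 between them.
def dxt1_palette(c0, c1):
    c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1))
    c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))
    return [c0, c1, c2, c3]

print(dxt1_palette((255, 0, 0), (0, 0, 255)))
# [(255, 0, 0), (0, 0, 255), (170, 0, 85), (85, 0, 170)]
```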
Tagrineth said:
The textures have to move a lot more because GC has *less* space. It's also invisible to the developer, so I doubt it's as efficient as PS2's cache either.
Tagrineth, a "true cache" will have the HW managing data at the atomic level of a few texels - in the majority of cases, it should be more efficient than a block of dedicated scratch RAM (which, to my understanding is what the PS2 provides), because only the required texels will ever be loaded.

marconelly! said:
I would consider myself fairly well versed in the field of texture compression and, IMHO, both S3TC and VQ offer significant improvements over 4/8 bpp CLUT textures in terms of quality/bit
Ah, yes, that's exactly the site I was thinking about. Well, I guess it depends on what you consider by 'significant'.
I would think that the RMS errors and RMS error/bit factors that I gave in comparison are quite significant.

Taking the "unreal" test example, which was a CLUT texture to start with(!!), the 4bpp CLUT has an RMS of 17, the 4bpp DXTC scores 13 (~ 23% improvement), and 2bpp VQ has an error of 16. From this it is clear that DXTC has better quality (typically more significant on other images) for ~ the same storage costs, while VQ has slightly better quality at ~1/2 the cost.
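For reference, an RMS error of the kind quoted above is straightforward to compute; a Python sketch (the exact channel weighting used on the comparison page is an assumption on my part):

```python
# Per-pixel root-mean-square error over the RGB channels.
def rms_error(original, compressed):
    # both arguments are equal-length lists of (r, g, b) tuples
    total = sum((a - b) ** 2
                for o, c in zip(original, compressed)
                for a, b in zip(o, c))
    return (total / (len(original) * 3)) ** 0.5

print(rms_error([(10, 20, 30)], [(13, 17, 30)]))  # sqrt(6) ~ 2.449
```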
 