PS2 question

london-boy

For comparison's sake, it'd be unfair to compare both consoles under different circumstances. The more polygons a system is pushing, the more textures it needs to fill those polygons. Also, more polygons == more data == less memory for textures. That's why a game using heavy texturing is more likely to feature less geometry.
 
Johnny Awesome said:
But it doesn't when you look at what comes out on screen. :)

Seriously, the fact that DC developers used VQ over CLUT when both were available speaks volumes about which is generally more useful.

Honest question: Are there any full-3D games on the PS2 with good IQ and texture variety? ICO and MGS2 have limited colors for example, even though they have reasonably good IQ.

Many of you think that my comments about PS2 are harsh, but it's very difficult not to see the problems with PS2 texturing/filtering when you have an HDTV setup. The difference between Xbox/Cube quality vs. PS2 quality is like night and day for all but a very small handful (maybe 3-4) PS2 titles.

Geometry and particles can only get you so far when you are still using Bilinear filtering, 4bpp CLUT, very little mip-mapping, and no bump-mapping or per-pixel lighting.
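To put rough numbers on the 4bpp CLUT vs. VQ comparison, here is a back-of-the-envelope sketch. The sizes assume a 256x256 16-bit texture; the DC's VQ scheme stores a 256-entry codebook of 2x2 texel blocks plus one index byte per block:

```python
def clut4_bytes(w, h):
    # 4bpp CLUT: 4 bits (one nibble) per texel, plus a 16-entry
    # palette of 16-bit colours (16 * 2 bytes).
    return (w * h) // 2 + 16 * 2

def dc_vq_bytes(w, h):
    # Dreamcast-style VQ: a 256-entry codebook of 2x2 blocks of
    # 16-bit texels (256 * 4 * 2 bytes), plus one index byte
    # per 2x2 block of the image.
    codebook = 256 * 4 * 2
    indices = (w // 2) * (h // 2)
    return codebook + indices

w, h = 256, 256
print(clut4_bytes(w, h))  # 32800 bytes, ~4.0 bpp
print(dc_vq_bytes(w, h))  # 18432 bytes, ~2.25 bpp
```

So even against 4bpp CLUT, VQ stores roughly half the data per texel, and each one-byte index expands to four full 16-bit texels rather than a palette lookup.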

To be fair though, many Xbox games that are technically good, suffer from really poor art direction (Quantum Redshift comes to mind), so I can understand why the average Joe with a regular 25" NTSC set doesn't see what I see, but it's baffling that people on this board can't tell the difference. :?:


i play my games on an LCD display with progressive scan, and to be honest there are games on PS2 that look MUCH better (IQ-wise) than XBOX and GC games. i guess my comparison isn't fair since i play those games in pro-scan while XBOX and GC don't have it (i'm in little old unlucky europe)... also, mentioning ICO and MGS2 as examples of IQ isn't really fair... ICO renders at half the resolution all the other games run at and MGS2 is OK, as good as any other good full-frame rendered game out there...

In europe the games with the best IQ are on PS2 because it's the only console to have pro-scan... so yeah my opinion will be a bit biased...
 
Johnny Awesome said:
Honest question: Are there any full-3D games on the PS2 with good IQ and texture variety? ICO and MGS2 have limited colors for example, even though they have reasonably good IQ.

Baldur's Gate: Dark Alliance has excellent image quality and texture variety. Primal (new 1st-party game) seems to as well, although the color palette is pretty dark. This game is also the first progressive-scan PS2 game I've played, and this really narrows the gap quite a bit. I'd say that the game looks better than Eternal Darkness. Primal also supports Dolby PLII and widescreen. The dev didn't leave anything out 8)

Johnny Awesome said:
Many of you think that my comments about PS2 are harsh, but it's very difficult not to see the problems with PS2 texturing/filtering when you have an HDTV setup. The difference between Xbox/Cube quality vs. PS2 quality is like night and day for all but a very small handful (maybe 3-4) PS2 titles.

Speaking from my experience, an HDTV is not the way to go with PS2 (unless more games start supporting 480p). If you have the extra cash, invest in a nice interlaced CRT TV. My old Sony WEGA looks much better than what I get on my HDTV.
 
I find it sad that Sony chose to use the poorer CLUT when even the DC had the nicer VQ. If only Sony had dissected the DC and analysed its strengths, or at least consulted with leading graphics rendering houses, PS2 would not be a 3D graphics headache.


PS2 apologists can always try to believe PS2 had better texturing than DC, but where are my PS2 games that truly surpass the highly textured, dynamically colored and smoothly filtered SA2?


Please don't quote me JD, because I am not seeing it that way. It has nice PS2-level texturing, but nowhere near the intricacy and detail of SA2.


AFAIK, PS2 texturing, when done right, is about DC level. CLUT gives PS2 games a more muted and gritty look (good for realistic games, I guess), whereas DC VQ is really nice and smooth.

But is that acceptable for 2000 hardware (vs. 1998)? :oops:


Sony + IBM + Toshiba had better have a good solution. I suggest they try to get Nvidia and the like into the partnership and ensure the GPU is the last part of the design process. We don't want a GS2 again. :p


Many of you think that my comments about PS2 are harsh, but it's very difficult not to see the problems with PS2 texturing/filtering when you have an HDTV setup. The difference between Xbox/Cube quality vs. PS2 quality is like night and day for all but a very small handful (maybe 3-4) PS2 titles.

:D Yes sir! Right sir!
 
This game is also the first progressive-scan PS2 game I've played, and this really narrows the gap quite a bit. I'd say that the game looks better than Eternal Darkness.

I'm not sure why you mentioned Eternal Darkness there. What's the significance of this game looking better than Eternal Darkness?
 
marconelly! said:
Overall memory comparison would be 16+8 on DC and 32+4+2+2 on PS2, which is 1.7x more.

This has probably already been covered, but the DC has 2 megs of sound ram, giving a 26/40 ratio. In other words, the PS2 has 1.5 times the memory of the DC!

Assuming 5.5 megs of V-RAM used for (VQ-compressed) textures on DC, and 2-3 times that amount of memory needed to match their quality using a combination of 4 and 8 bit CLUTs on PS2, you're looking at 11 to 16.5 megs of main memory to match quality. (Though of course, there will still be some differences in appearance). This would leave the PS2 with a very similar amount of main memory to the DC, only with far more of it taken up by world/model data (it's easily shifting several times the number of polygons), and that may well take up several megs.
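The arithmetic here can be made explicit; all inputs below are the post's own assumptions, not measurements:

```python
dc_vq_textures_mb = 5.5        # MB of VQ textures in DC VRAM (assumed)
clut_factor = (2.0, 3.0)       # CLUT needs 2-3x the memory for similar quality

ps2_equiv = tuple(dc_vq_textures_mb * f for f in clut_factor)
print(ps2_equiv)               # (11.0, 16.5) MB of PS2 main memory

# Total memory: DC = 16 main + 8 video + 2 sound = 26 MB
# PS2 = 32 main + 4 video + 2 sound + 2 IOP = 40 MB
print(round(40 / 26, 2))       # 1.54, roughly the "1.5 times" ratio quoted
```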

It doesn't take much to see that the DC may well have a real-world advantage in terms of texture variety. But that's only a fraction of what actually makes games look good, and the size of any advantage is debatable.

I think a lot of the improvement in PS2 visuals is down to increasingly experienced artists and increasingly efficient use of caching, rather than any magic increase in what the PS2 is storing. DC development effectively stopped back in 2000, so there are no fair comparisons.

And before anyone mentions Shenmue 2, a lot of the modelling and texture work on that dates back to 1998. It could well be the most impressive first-generation title I've ever seen ... :eek:
 
Not entirely correct. GS data generated by VU1 does not go over EE's internal bus unless you for some stupid reason decide to make it so. VU1 has a dedicated path to the GIF unit (which handles EE<->GS transfers). As far as memory transfers are concerned, if you run a GB+ of data every second from memory to GS, it won't matter you tie up a GB+ of bus bandwidth because you'd tie up a GB+ of memory bandwidth also; it's not as if the EE is being deprived of anything it could otherwise have put to some other use.

Besides, I'm not sure the so-called "path3" connection to the EE really touches the internal EE bus. It may go straight from the DMAC to the GIF unit. I could be mistaken here; I'd have to look at the Ars Technica blackpaper on PS2 first and I am too tired to be arsed to do that right now.

As I think archie already covered, path3 touches the EE bus, and thus transfers using path 3 DO take control of the bus.

Also, I was thinking of the data that feeds the EE's execution units, in addition to the texture transfers ( and any DMA operation ) tying up the EE's bus bandwidth...

It is true that if we send all our geometry data from VU1 to the GIF through its direct connection, and use the buses which connect VU0 to VU1 and the RISC core to VU0, we might avoid too-horrible bus contention issues... if our texture transfers take 600 MB/s ( half of the GIF-to-GS bandwidth ) then they will take at least 600 MB/s off the EE's bus bandwidth, lowering it to 1.8 GB/s....
My point was: if you had a 2.4 GB/s GIF-to-GS bus and you were planning to use the much increased bandwidth...


As far as memory transfers are concerned, if you run a GB+ of data every second from memory to GS, it won't matter you tie up a GB+ of bus bandwidth because you'd tie up a GB+ of memory bandwidth also; it's not as if the EE is being deprived of anything it could otherwise have put to some other use.

I beg to differ... if we sent 1 GB/s of texture data to the GS and left the EE's bus with 1.8 GB/s we would still deprive the EE of something: yes we are stealing 1 GB/s of main RAM bandwidth ( another reason why the doubling of GIF-to-GS bus bandwidth would not be a holy grail for PlayStation 2 ), but the EE bus might be used for other purposes too... VU0, VU1, RISC core and IPU need to use EE's bus bandwidth...

The more I think about it, the more there might be scenarios in which the GS has free fill-rate to burn, and it would be nice to decompress S3TC or VQ on the GS ( doesn't S3TC need like 4 passes ? we could also hide DOT3 bump-mapping and anisotropic filtering in there ;) [helped by the EE] )

Having S3TC or VQ, even decompressed by the GS, would lower the overall bandwidth used to transfer textures, which in turn would free the EE's bus bandwidth and main RAM bandwidth, allowing the EE to take advantage of the extra memory bandwidth with more aggressive pre-fetching, etc... lowering the number of stalls due to main RAM contention ( data for the GS would steal cycles from data to the EE's units ) and lowering their performance hit...
 
Teasy said:
This game is also the first progressive-scan PS2 game I've played, and this really narrows the gap quite a bit. I'd say that the game looks better than Eternal Darkness.

I'm not sure why you mentioned Eternal Darkness there. What's the significance of this game looking better than Eternal Darkness?

Simple. Because both games are similar in style and somewhat similar in gameplay.
 
Ozymandis said:
Teasy said:
This game is also the first progressive-scan PS2 game I've played, and this really narrows the gap quite a bit. I'd say that the game looks better than Eternal Darkness.

I'm not sure why you mentioned Eternal Darkness there. What's the significance of this game looking better than Eternal Darkness?

Simple. Because both games are similar in style and somewhat similar in gameplay.


haven't really played eternal darkness, but from screenshots and such, the models are much more detailed in Primal, and the game also features some pretty cool effects.... but again, i haven't played eternal darkness so... oh and it supports progressive scan which eternal darkness does not here in europe...
 
london-boy said:
haven't really played eternal darkness, but from screenshots and such, the models are much more detailed in Primal, and the game also features some pretty cool effects.... but again, i haven't played eternal darkness so... oh and it supports progressive scan which eternal darkness does not here in europe...

The textures in ED are a bit better. Other than that I'd say Primal probably looks superior.

primal_032403_02.jpg
 
Ozymandis said:
Johnny Awesome said:
Honest question: Are there any full-3D games on the PS2 with good IQ and texture variety? ICO and MGS2 have limited colors for example, even though they have reasonably good IQ.

Baldur's Gate: Dark Alliance has excellent image quality and texture variety. Primal (new 1st-party game) seems to as well, although the color palette is pretty dark.

Of course, BG:DA is not a full-3D game. It has a limited view that really helps the game engine get away with what it does.

I haven't seen Primal yet, so I can't compare, but ED has amazing texture filtering and IQ.
 
Simple. Because both games are similar in style and somewhat similar in gameplay.

I didn't mean why should those two games be compared. I meant, what was the reason for comparing them? The way you said it, it sounded like being better looking than ED had some sort of significance to it. Like maybe you consider ED a benchmark for games of its type, visually.

I was just wondering if that's what you meant. Not that it's important or anything, I was just wondering.
 
Of course, BG:DA is not a full-3D game. It has a limited view that really helps the game engine get away with what it does.
Of course it is full-3D... The view is limited by a gameplay decision; why should anyone care if it helps the engine get away with anything? It's not like it's the first game with that kind of viewpoint, and it's not like that viewpoint doesn't change during the cutscenes, which are simply a camera transition to and from the gameplay viewpoint, with no extra loading.

You asked about games with good textures and good image quality - I can think of the few that I have: SH2, Primal, BGDA, War of the Monsters, Burnout 2, J&D.

If so, just why these were included as part of the memory which could be used for textures is a little beyond me...
Well, you wanted to compare complete memory sizes. I forgot about the 2MB of sound memory on DC, though, so the 1.5 ratio is actually correct; my bad. Still, I made the point that you can actually decide how much memory to dedicate to textures on PS2; it's unreasonable to use the on-chip GS memory for storage.

Please don't quote me JD, because I am not seeing it that way. It has nice PS2-level texturing, but nowhere near the intricacy and detail of SA2.
Yeah, yeah...
 
Teasy said:
I didn't mean why should those two games be compared. I meant, what was the reason for comparing them? The way you said it, it sounded like being better looking than ED had some sort of significance to it. Like maybe you consider ED to be a benchmark for games of its type, visually.

I was just wondering if that's what you meant. Not that its important or anything, I was just wondering.

Because ED is the only game of that type that I've played on Xbox or Gamecube besides the Silent Hill 2 port. I didn't really think about ED being a graphical benchmark for the genre, but rather that the "feel" and style of the games were very similar.

Fair enough? :D

Johnny Awesome said:
Of course, BG:DA is not a full-3D game. It has a limited view that really helps the game engine get away with what it does.

I haven't seen Primal yet, so I can't compare, but ED has amazing texture filtering and IQ.

Well, you can rotate the camera in Baldur's Gate. I wouldn't say it's really that limited.
 
Indeed! What tool did you use? The only other PC product I had produced an even worse result than the GIMP, and I think the NetPBM tools on Unix only use Heckbert's method, which doesn't work well when there is only a small palette.

Eh... Photoshop (which is what I used) has really improved its color reduction capabilities of late. Of course, if I were actually doing a production title I'd rely on OPTIXImageStudio since that's what we used at Square (it gets used at quite a lot of studios, actually), and it can really embarrass even Photoshop at color reduction...

I've sometimes wondered if I should use the VQ compressor to just generate palettes to see if the algorithm I used could do a better job.

VQ compressors are OK for that (I mean, the IPU does it on the fly to generate IDXT8 CLUTs from I-pictures if you want) but there are too many pathological cases where they fall apart... Plus, once you start banging down into obscenely small palettes, controlling diffusion becomes the most critical aspect and can become rather voodoo-ish. I'd rather have an artist go over the image with a fine-toothed comb than rely strictly on any mathematical solution (but that's also me as an artist speaking :) )
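For illustration only, here is a minimal sketch of the "use a VQ-style compressor to generate a palette" idea: plain k-means clustering over RGB triples. This is a toy, not what Photoshop or OPTIXImageStudio actually do, and `kmeans_palette` is a hypothetical helper name:

```python
def kmeans_palette(pixels, k, iters=10):
    """Build a k-colour palette from (r, g, b) tuples via k-means."""
    # Toy initialization: seed with the first k distinct colours.
    centroids = []
    for p in pixels:
        if p not in centroids:
            centroids.append(p)
        if len(centroids) == k:
            break
    k = len(centroids)  # in case there are fewer distinct colours than k
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest centroid (squared RGB distance)
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            buckets[i].append(p)
        for j, b in enumerate(buckets):
            if b:  # move the centroid to the mean colour of its bucket
                centroids[j] = tuple(sum(c) // len(b) for c in zip(*b))
    return centroids

# Two well-separated colour clusters quantize to two palette entries.
pixels = [(250, 10, 10)] * 50 + [(10, 10, 250)] * 50
print(kmeans_palette(pixels, 2))  # [(250, 10, 10), (10, 10, 250)]
```

The "pathological cases" mentioned above show up exactly here: k-means happily merges perceptually distinct colours when the palette is tiny, which is why hand-tuning by an artist still wins at 16 colours.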

I don't know exactly when the GS was completed, so I couldn't comment on the difficulties involved - Savage3D appeared around July 1998, Playstation 2 was released in March 2000. The timelines may well have been too tight to make an attempt to include S3TC. Like I said, I'm sure Sony had their reasons.

The GS itself was completed sometime in late '98...

Seriously, the fact that DC developers used VQ over CLUT when both were available speaks volumes about which is generally more useful.

It does? Sounds more like they just wanted to simplify their production pipeline by sticking to a uniform format. I don't think about it too much, but most of my experience (plus me being an artist as well as a programmer) is with large art staffs, so it's no real biggy. Sega's houses (at least according to my old uni roommate) tend to have smaller staffs, so it'd make sense to streamline that as much as possible.

Many of you think that my comments about PS2 are harsh, but it's very difficult not to see the problems with PS2 texturing/filtering when you have an HDTV setup. The difference between Xbox/Cube quality vs. PS2 quality is like night and day for all but a very small handful (maybe 3-4) PS2 titles.

Actually, even if you have an HDTV, the set alone can play a large role in the output differences, based on its display method (CRT, direct-view LCD, LCD projection, plasma) and its filtering and scan capabilities. (Not to mention A/V receivers can also induce visual variation, if you're passing through them)...

Sony + IBM + Toshiba better have a good solution. I suggest they try to get Nvidia and the likes into the partnership and ensure the GPU is the last part of the design process.

<cheapshot>Why? So it can be delayed and released requiring a hoover cooler to deliver decent performance? ;) </cheapshot>
 
My point was: if you had a 2.4 GB/s GIF-to-GS bus and you were planning to use the much increased bandwidth...

Devs are having a hard enough time trying to keep the existing bus saturated; adding more really wouldn't accomplish much (especially if you're just widening it) other than in a few distinct cases. In the end, you'd be adding more cost and complexity and just be left with a more idle bus.

The more I think about it, the more there might be scenarios in which the GS has free fill-rate to burn, and it would be nice to decompress S3TC or VQ on the GS ( doesn't S3TC need like 4 passes ? we could also hide DOT3 bump-mapping and anisotropic filtering in there [helped by the EE] )

Why would you need a faster bus to the GS to do what pretty much amounts to a bunch of buffer maths on the local buffer? And anisotropic is too slow (it, like trilinear, is brutal on the page-buffer and kills too much useful fill-rate).

... which in turn would free the EE's bus bandwidth and main RAM bandwidth, allowing the EE to take advantage of the extra memory bandwidth with more aggressive pre-fetching, etc... lowering the number of stalls due to main RAM contention ( data for the GS would steal cycles from data to the EE's units ) and lowering their performance hit..

How are you going to go about a more aggressive prefetch? It's not like it's loaded with a programmable data-touch engine like AltiVec... Besides, it's not the "GS stealing cycles" that one has to worry about, it's the "EE core stealing cycles" that one has to worry about. There's a reason the EE is loaded with all sorts of caches sprinkled all over...
 
My point was: if you had a 2.4 GB/s GIF-to-GS bus and you were planning to use the much increased bandwidth...

Devs are having a hard enough time trying to keep the existing bus saturated; adding more really wouldn't accomplish much (especially if you're just widening it) other than in a few distinct cases. In the end, you'd be adding more cost and complexity and just be left with a more idle bus.

My point was that widening the GIF-to-GS bus was not going to help much, not that increasing its bandwidth would be the holy grail...

If you had a faster GIF-to-GS bus and IF you managed to use the extra bandwidth to stream more texture data to the GS you would steal even more bandwidth from the EE's bus and the main RAM...

The more I think about it, the more there might be scenarios in which the GS has free fill-rate to burn, and it would be nice to decompress S3TC or VQ on the GS ( doesn't S3TC need like 4 passes ? we could also hide DOT3 bump-mapping and anisotropic filtering in there [helped by the EE] )

Why would you need a faster bus to the GS to do what pretty much amounts to a bunch of buffer maths on the local buffer? And anisotropic is too slow (it, like trilinear, is brutal on the page-buffer and kills too much useful fill-rate).

Again, I was not the one calling for a faster bus; I just called for, amongst other things, the use of compressed textures like S3TC or VQ that would get decompressed on the fly by the GS in 3-4 passes... doing that would have the PRACTICAL effect of increasing the bandwidth of main RAM, of the EE's main bus and of the GIF-to-GS bus... it is not that those buses get wider or faster, but that the textures get smaller...

In the case where you have fill-rate to spare and you want more bandwidth for the EE's bus and main RAM, you could use VQ or S3TC textures... of course you are spending 3-4 passes decompressing them, but the GS would not be sitting idle... during that time you could perform DOT3 and some anisotropic filtering ( you can use per-vertex algorithms too )... the idea is this: "if you have 3-4 rendering passes to spend decompressing S3TC or VQ textures, why not do something else while the textures are decompressed ;) ?"



... which in turn would free the EE's bus bandwidth and main RAM bandwidth, allowing the EE to take advantage of the extra memory bandwidth with more aggressive pre-fetching, etc... lowering the number of stalls due to main RAM contention ( data for the GS would steal cycles from data to the EE's units ) and lowering their performance hit..

How are you going to go about a more aggressive prefetch? It's not like it's loaded with a programmable data-touch engine like AltiVec... Besides, it's not the "GS stealing cycles" that one has to worry about, it's the "EE core stealing cycles" that one has to worry about. There's a reason the EE is loaded with all sorts of caches sprinkled all over...
Technically you only have TWO caches ( not trying to be Mr. Smarty Pants, archie, just that "caches sprinkled all over" and the actual EE don't go together SO well... ): the L1 data and L1 instruction caches, both attached to the RISC core... the SPRAM and the VUs' data and instruction micro-memories are not caches ( which doesn't exclude the fact that programmers might treat them as caches and implement a caching policy of their own... and since prefetching and caching are manual, they can be tuned with the idea of spending more memory bandwidth to prefetch useful data and lower latency )... the EE is not exactly receiving awards for best caching hierarchy, even if I like the idea of having relatively nicely sized local memories rather than straight HW caches ( that is why I like the Cell processor featured in that now-famous patent )...

Anyway, OK, the problem is the EE stealing cycles... we can turn the thing around and look at this scenario, but it seems to follow quite nicely that if the GS stole even fewer cycles, there would be more cycles to divide between the EE's execution units...

If you sit at a table where someone eats most of the pasta, it seems to me that if you serve 2x as much pasta, that person will most likely not eat 2x as much... maybe he will eat more than before, but not so much as to negate the fact that we are now serving 2x as much pasta :)
 
Panajev2001a said:
As I think archie already covered path3 touches the EE bus and thus transfers using path 3 DO take control of the bus.

Yes, I would think so too, I just wasn't 100% certain (you don't sound entirely certain either btw so I guess the issue is still unresolved). However, it doesn't matter either way because like I ALREADY SAID, if you tie up a gig+ of mem b/w a second and a gig+ of bus b/w too, that doesn't matter compared to a scenario where the bus was free of that traffic. What ELSE would you use the bus for? You couldn't! You're already down a gig/sec of memory bandwidth, remember? The bus would just sit there idle and not do anything since you couldn't access memory anyway while textures are being fetched!

Also, I was thinking of the data that feeds the EE's execution units, in addition to the texture transfers ( and any DMA operation ) tying up the EE's bus bandwidth...

What are you babbling about now? The EE bus is there to feed the EE's execution units. It's not as if it's being "burdened" by it or anything; the bus exists to serve the execution units, not the other way around!

Think of it like a car. The car is meant to transport passengers, that's why it exists. The passengers do not exist for the car (though some delusional car owners sure seem to think so! :)).

if our texture transfers take 600 MB/s ( half of the GIF-to-GS bandwidth ) then they will take at least 600 MB/s off the EE's bus bandwidth, lowering it to 1.8 GB/s....

Yes, so what? Point being...? EE can't access memory anyway, so what would it need those 600MBs of bus for?

My point was: if you had a 2.4 GB/s GIF-to-GS bus and you were planning to use the much increased bandwidth...

I don't see your point at all. You set up a supposition, but there is no conclusion. Your example is flawed and incomplete.

If the GIF-to-GS bus was 2.4GB/s, what exactly do you think would improve? Remember, REAL developers have already said the machine is rarely if ever bottlenecked at that point.

Assuming 30 bytes per vertex and a quarter million vertices per frame, you still have some 12+ megs of texture upload bandwidth PER FRAME AT 60FPS. That's over a third of the machine's RAM capacity! You seriously believe you're going to be seeing a third of the machine's RAM capacity on-screen in textures at any one time? Especially since you can use the IPU to compress textures in main RAM? :)
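This figure checks out as rough arithmetic; the 1.2 GB/s bus, 30 bytes per vertex and 250K vertices per frame are the post's stated assumptions:

```python
bus = 1.2e9           # GIF-to-GS bus bandwidth in bytes/s (assumed 1.2 GB/s)
fps = 60
verts = 250_000       # vertices per frame (assumed)
vert_bytes = 30       # bytes per vertex (assumed)

per_frame = bus / fps                # 20 MB of bus traffic available per frame
geometry = verts * vert_bytes        # 7.5 MB/frame of vertex data
textures = per_frame - geometry      # what's left for texture uploads
print(textures / 1e6)                # 12.5 -> the "12+ megs" per frame
```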

I beg to differ... if we sent 1 GB/s of texture data to the GS and left the EE's bus with 1.8 GB/s we would still deprive the EE of something: yes we are stealing 1 GB/s of main RAM bandwidth ( another reason why the doubling of GIF-to-GS bus bandwidth would not be a holy grail for PlayStation 2 ), but the EE bus might be used for other purposes too... VU0, VU1, RISC core and IPU need to use EE's bus bandwidth...

You're not making any sense. You're not "stealing" main RAM b/w from anything, we need that b/w to fetch our textures! Btw, 1GB of b/w just for textures on a PS2 sounds like total overkill. I don't ever see that happening in reality...

Having S3TC or VQ, even decompressed by the GS, would lower the over-all bandwidth used to transfer textures

...But that's not the limiting factor, as testified by actual developers. So stop flogging this dead horse already, okay? It's getting tiresome hearing this myth repeated over and over again.


*G*

Edit:Quote error. Why does this silly board need a slash to close a quote block? Stupid idea... Grr.
 
Grall... catch a breath while posting...

What are you babbling about now? The EE bus is there to feed the EEs execution units. It's not as if it's being "burdened" by it or anything, the bus exists to serve the execution units, not the other way around!

Babbling ? First, tone it down; I am not using that kind of paternalistic tone of voice with you, so you should at least grant me the same favour...

Still,... babbling... sorry, Mr. Bernard Shaw, if my wording is sub-optimal... sub-optimal might also describe the time you spent actually reading my posts in their entirety, though...

The bus has a finite bandwidth... part of it is used to feed the execution units of the EE and part of it is used to feed the GS... and if you wanted to be a... I mean precise all of it is used to feed EE's units, with the GIF counting as one of them...

If you increased ( I said IF ) the GIF-to-GS bandwidth, with everything else staying the same, and you decided to stream more texture data to the GS, you would use more and more of the main RAM and EE bus bandwidth for data that is not processed by either the RISC core or the VUs... thus increasing the GIF-to-GS bandwidth in itself is not worth the effort ( considering cost, etc... ).

The funny thing is that the reason I posted here in the first place was because the GIF-to-GS bandwidth was being blamed as THE PlayStation 2 bottleneck, and I disagreed... of course, in the parallel universe that online forums are, suddenly it seemed that I was advocating for the GIF-to-GS bus to be widened or clocked higher... :?

I don't see your point at all. You set up a supposition, but there is no conclusion. Your example is flawed and incomplete.

I agree with you, it is incomplete; you quoted like half of it, and understood half of what I was trying to say too...

Assuming 30 bytes per vertex and a quarter million vertices per frame, you still have some 12+ megs of texture upload bandwidth PER FRAME AT 60FPS. That's over a third of the machine's RAM capacity!

First, my point was NOT advocating simply more bandwidth for the GIF-to-GS bus ( as it would cause more problems than it would solve ); I know TOO that the situation is more complex, and increasing bandwidth there would prove it... still, you want to throw around numbers...

[church lady]Let's do some more math shall we ? [/church lady]

15 MVps at 60 fps, yielding 250K vertices/frame, and no multi-texturing ? ;)

Assuming 3 layers of textures, we can cut that per-frame figure by 3 and we get ~84K vertices/frame for actual geometry ( multi-pass multi-texturing, here we come )... if we wanted a somewhat higher polygon count ( say 150K vertices/frame with 3 layers of textures per pixel )... we would have to send 150K * 3 = 450K vertices to the GS each frame, and that would mean ~810 MB/s, leaving ~390 MB/s for texture transfers... that would mean 6.5 MB/frame of texture upload bandwidth, assuming 100% utilization of the GIF-to-GS bus...
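These multi-pass numbers can be reproduced the same way (again assuming 30 bytes per vertex and a 1.2 GB/s GIF-to-GS bus):

```python
bus = 1.2e9                     # GIF-to-GS bandwidth, bytes/s (assumed)
fps = 60
layers = 3                      # texture layers, each a full geometry pass
target_verts = 150_000          # desired visible geometry per frame

sent = target_verts * layers    # multi-pass: geometry is resent per layer
geo_bw = sent * 30 * fps        # vertex traffic in bytes/s
tex_bw = bus - geo_bw           # what's left for texture transfers
print(geo_bw / 1e6, tex_bw / 1e6, tex_bw / fps / 1e6)
# 810.0 390.0 6.5  (MB/s geometry, MB/s textures, MB of textures per frame)
```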

We would like to have more... simply increasing the GIF-to-GS bandwidth would not help... supporting something like on-chip HW VQ decompression ( or maybe 8 nice little IPUs ;) ) would, and would also reduce the bandwidth stolen for texture transfers from main RAM bandwidth ( assuming we want to send the same amount of textures, just at a better compression ratio ) and the bandwidth stolen from the EE bus...


If the GIF-to-GS bus was 2.4GB/s, what exactly do you think would improve? Remember, REAL developers have already said the machine is rarely if ever bottlenecked at that point.

I will repeat again: that is not the way to go to improve performance... not with everything else being equal...

Having S3TC or VQ, even decompressed by the GS, would lower the over-all bandwidth used to transfer textures

...But that's not the limiting factor, as testified by actual developers. So stop flogging this dead horse already, okay? It's getting tiresome hearing this myth repeated over and over again.

You're not making any sense. You're not "stealing" main RAM b/w from anything, we need that b/w to fetch our textures!


On some occasions it might also become the limiting factor; still, the GIF-to-GS bus utilization is not the only thing affected by using texture compression ( something that yielded better-than-CLUT results )... the textures do not magically pop onto the GIF-to-GS bus; they come from main RAM, pass through the EE bus and then get to the GIF-to-GS bus...


the EE is not applying textures; those little things are sent to the GS and processed there...

Let's assume we had better texture compression: something that, let's say, instead of 1:4 had a 1:5 compression ratio...

Let's use again our 1 GB/s number...

1 GB/s of compressed data = 4 GB/s of effective data transfer ( at 1:4 )

4 GB/s / 5 = 800 MB/s

This way we would use 200 MB/s less bandwidth to transfer the same amount of effective texture data... 200 MB/s that we would not take away from main RAM bandwidth, leaving more bandwidth for the EE...
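The 1:4 vs. 1:5 arithmetic, spelled out (the 1 GB/s figure is the running example from earlier in the thread):

```python
transfer_gbps = 1.0              # compressed texture traffic at 1:4 (assumed)
effective = transfer_gbps * 4    # 4 GB/s of uncompressed texel data delivered
at_1_to_5 = effective / 5        # GB/s needed to move the same data at 1:5
print(at_1_to_5)                 # 0.8 -> 800 MB/s, i.e. a 200 MB/s saving
```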
 
Grall said:
Edit:Quote error. Why does this silly board need a slash to close a quote block? Stupid idea... Grr.

Go complain to the guys that designed HTML. That's what it's based on.
 