Brief inquiry on PS2 GS capabilities.

GS sucks...the proof is in the games... What sucks more is how Sony could lock down the design in 1997 (as said by some Sony people) AND the very same people act as though it's all fine and dandy.

Ain't that a sign of a short-sighted 3D hardware manufacturer?

ANYWAY, I think GT4 uses a 16-bit framebuffer; I can see the dithering up close and personal.

There is also ANOTHER DC game, Surf Rider or something, that had bump mapping all over the water waves.
 
Huh? I thought Lazy8s and Sonic (and maybe 1-2 others) confirmed a while ago that no Dreamcast game ever actually made use of bump mapping...?
 
I have a Tomb Raider title here
Really? OK, I played the TR4 *demo* once on my DC, and I don't remember seeing it. As a matter of fact, I remember being disgusted by how poor that game looked. It looked as if they spent no effort improving it over the PS1 beyond bumping up the resolution and textures a bit.

GS sucks...the proof is in the games...
I think that for such an old piece of hardware running at a puny 150MHz, it does a very good job when used properly and efficiently. As a PC-port machine it does a poor job, but with specialized games it shines. For example, just take a look at what kind of monster PC you need to run SH3 at a resolution and framerate that match or better the PS2 version.
 
Ah yes...SH3 is brought up once again...

It is Konami's "famous" PC port job, for one. You need a monster PC? Hmm...if only the demo weren't 500MB, I'd be trying it on my PC, but how monstrous a PC is needed? I think a good GF3 will do the job nicely at 480 VGA + mipmapping, i.e. the game will look much closer to those screenshots than the PS2 will ever display. :LOL:

Sorry Marc, but aren't you the guy who said ZOE2 runs at 60fps 90% of the time (IIRC), Jak2 looks leaps and bounds above Jak1 and runs at 50-60fps (more like 30fps/60fps), MGS2X is piss-poor visually compared to the PS2 version, and, recently, that TTS looks bad vs SOL because it isn't blurry enough?

No offense, brudder, but I find you exaggerating all the good things about the PS2 and doing the opposite with everything else. :?
 
Tagrineth:
no Dreamcast game ever actually made use of bump mapping...?
Just no well-known cases, nor examples you'd particularly want to brag about. The Tomb Raider port was appalling, as marconelly! observed, although it did use the Windows CE libraries to put bump mapping in a DC game. Even then the execution was poor, but that was evidently the fault of the developers, as PowerVR demos showed high-quality implementations of the feature.

Missed following up on this line of discussion:
marconelly!:
On the other hand, many people enjoy that type of visuals. It was just something different from what the PC at the time could offer (or the Dreamcast, for that matter), and it's also obvious that with some effort you can get good-looking textures out of the machine as well...

...Back then, it seemed no one really could do anything comparable.
The machine that first showed me those cinematic fillrate effects at impressive levels was actually the Dreamcast. When Kasumi flips out in front of the stained-glass window in Dead or Alive 2, or when Tengu drops down onto the final stage, there are full-screen distortion and motion blur effects that still look great today.
 
That's probably why those cut-scenes run at 30Hz, rather than 60Hz in-game.

Anyway the PVR chip in the DC is also pretty fast - if I remember correctly, occluded geometry is rasterised at 32 pixels/cycle, which isn't too shabby...
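As a quick sanity check on that figure, here's a rough sketch; the 100MHz core clock for the DC's PowerVR chip is the commonly cited number and should be treated as an assumption here:

[code]
/* Quick sanity check of the "occluded geometry rasterised at 32 pixels/cycle"
   claim. The 100MHz core clock for the DC's PowerVR chip is the commonly
   cited figure and is an assumption here, not something from this thread. */
#include <stdio.h>

int main(void)
{
    const double clock_hz = 100e6;       /* assumed PVR core clock */
    const int hidden_px_per_clock = 32;  /* rejection rate quoted above */

    printf("hidden-surface removal rate: %.1f Gpixels/s\n",
           hidden_px_per_clock * clock_hz / 1e9);  /* ~3.2 Gpixels/s */
    return 0;
}
[/code]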

As PCs get faster it's easy to look at the GS and criticise it, but it's not doing too badly... You could argue that if the fillrate had been dropped to allow more complex pixel functionality, it might have made the machine look more dated in comparison with newer tech.

As for DM's 8-way PS1 chipset - if he wants to look at it that way he can, but it doesn't really match the way the chipsets perform at all... :)
 
It is Konami's "famous" PC port job, for one. You need a monster PC? Hmm...if only the demo weren't 500MB, I'd be trying it on my PC, but how monstrous a PC is needed? I think a good GF3 will do the job nicely at 480 VGA + mipmapping, i.e. the game will look much closer to those screenshots than the PS2 will ever display.
Famous or not, maybe you should ask Deano C and question his programming skills. He did the SH2 port, and SH3 turned out even better. It doesn't work well with a GF3 AFAIK; you need at least a GF4 to achieve the same look and run it at the same resolution at an acceptable framerate. With an ATI 9800 you can run it at 60FPS. Besides, even a GF3 + 1.5-2GHz processor seems like pretty much a monster PC to me compared to a machine with a 300MHz processor and 150MHz GPU, which is supposed to be crappy and weak.

Sorry Marc, but aren't you the guy who said ZOE2 runs at 60fps 90% of the time (IIRC)
I said 80% of the time, I think, which still looks pretty valid to me. I don't know if you fully realize how much 20% of something as long as a several-hour game is. I'm betting that if some game were running with slowdown 50% of the time, people would be inclined to say that the whole game runs in slow motion. That's a simple matter of our perception and what sticks in the mind.

Jak2 looks leaps and bounds above Jak1 and runs at 50-60fps (more like 30fps/60fps)
I have yet to see the moment that Jak 2 drops to 30FPS, and I have played the game for hours on end. Every time the screen tears, the framerate drops below 60FPS, but you can tell by the smoothness of the motion that it's not 30FPS, but higher than that. The game does look leaps and bounds above Jak 1 and is in progressive scan and widescreen to boot. You, of all people, should see the difference as night and day on your plasma TV, as I don't even want to imagine how J&D's half frame buffer output looks on it.

MGS2X is piss-poor visually compared to the PS2 version, and, recently, that TTS looks bad vs SOL because it isn't blurry enough?
What the heck are you talking about? I think you mixed me up with someone else. I have never claimed that MGS2X is piss-poor. It's very good looking, just not as good as the PS2 version. Dark10x on GAF has a theory that TTS uses fewer post-processing effects than SOL, which is something I'm not so sure about, so stop putting words into my mouth. I simply think the game looks about the same as SOL, with better textures in places, less dramatic lightmaps (maybe, not 100% sure about that either) and a worse framerate (which is confirmed by pretty much every preview).
 
Crazyace:
That's probably why those cut-scenes run at 30Hz, rather than 60Hz in-game.
Good possibility, though I never claimed it was an even match for the PS2. Just that, as the PS2's design was not focused heavily on texturing performance yet can still put out nice textures with hard work, so could a PC-like design (which the DC was being characterised as) put out nice framebuffer-manipulated effects with hard work.

The DC going down to 30Hz for its heaviest usage of fillrate-intensive effects is not so unlike the PS2 going to 30Hz with MGS2's intensive cut-scenes.
 
WELL, the DC is a 1998 design costing $199 (Yen price maybe higher). Ain't that bad. I really wonder, if Sega weren't in the shiite they were, could they have held off the DC and released it in 2000 for $299 with a Naomi 2 with less RAM? It would rockZ, possibly. :LOL:

AND Marc, I DON'T have a PLASMA TV! :LOL:
I wish I did, or at least that me dad did... :oops:
 
Steve Dave Part Deux said:
So AFAIK it's not really a case of waiting for the entire contents of the buffer to clear while your entire pipeline stalls.

So how do you propose to access the on-chip memory from the 3D renderer while the on-chip memory is being accessed by the dedicated framebuffer-and-antialiasing hardware? Without multiported memory, you can't do that (think back to the good ol' Matrox Millennium here, or PS2 Graphics Synth :LOL:).

Flipper doesn't have a multiported memory implementation, or the quoted bandwidth figures for the RAM would have been much higher. How else do you think the GS gets 48GB/s when running at a mere 150MHz?
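As a back-of-the-envelope answer to that rhetorical question: the headline number is just very wide on-chip buses times a modest clock. The 1024/1024/512-bit split (frame buffer read, frame buffer write, texture) used below is the commonly quoted one and should be taken as an assumption:

[code]
/* Back-of-the-envelope: the GS's 48GB/s figure is wide on-chip buses times
   a 150MHz clock. The 1024/1024/512-bit split (FB read, FB write, texture)
   is the commonly quoted breakdown and is assumed here. */
#include <stdio.h>

int main(void)
{
    const double clock_hz = 150e6;
    const int bus_bits[3] = { 1024, 1024, 512 };  /* FB read, FB write, texture */
    double total = 0.0;

    for (int i = 0; i < 3; i++)
        total += (bus_bits[i] / 8.0) * clock_hz;  /* bytes per clock * clocks per second */

    printf("texture bus: %.1f GB/s\n", (512 / 8.0) * clock_hz / 1e9);  /*  9.6 GB/s */
    printf("aggregate:   %.1f GB/s\n", total / 1e9);                   /* 48.0 GB/s */
    return 0;
}
[/code]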
 
Crazyace:
That's probably why those cut-scenes run at 30Hz, rather than 60Hz in-game.
Wanted to add that the developer's official explanation as to why DC DOA2's cut-scenes are 30fps was so that they'd look more cinematic, closer to the framerate of a movie.... similar to one of the reasons MGS2's devs mentioned for their game.

Being a realist though, it'd be hard not to acknowledge that the less demanding framerate allowed both developers to go as far as they did during the cut-scenes.
 
OH! and DeanoC, if ya reading, I ain't questioning your l33t programming skills (who am I to? :LOL: ), JUST that I'm sure you + your team had to work within the limiting constraints of a port job: tons of legacy items left over, and PS2-optimised code that's impossible to rewrite to suit the PC platform. I'm sure that with enough time + resources SH3 would be very doable on a GF3/8500 (which happen to be the minimum requirements); I don't see anything that special about the graphiX, especially with the camera angles and pacing the game uses.

Now the main thing I be saying is: please, SH3 is PS2-ishly good, but let's not go overboard now. Please also don't take the optimised-original vs. port talk too far. It just be not a good comparison. :)


How else do you think the GS gets 48GB/s when running at a mere 150MHz?
Ain't sure, BUT that 48GB/s be the BEST-case theoretical number, for WHEN all 16 pipes are working full time. I hear the PS2 ain't an efficiency monster...
 

aaaaa00 said:
Wait, what is my GF4 telling me when it says it supports D3DFMT_D24S8 then? I'd call that a 32-bit z-buffer - 24 bits for depth, and 8 for stencil.
Yes, 24-bit depth precision - that's what DM was talking about.
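For illustration, a tiny sketch of what that format amounts to; the packing layout and names below are illustrative only, not how any GPU stores it internally:

[code]
/* Illustrative sketch of a D24S8 value: 24 bits of depth and 8 bits of
   stencil sharing one 32-bit word. The bit layout and function name are
   for illustration only, not the GPU's actual internal storage. */
#include <stdint.h>
#include <stdio.h>

static uint32_t pack_d24s8(double z, uint8_t stencil)  /* z normalised to 0..1 */
{
    uint32_t depth = (uint32_t)(z * ((1u << 24) - 1));  /* quantise to 24 bits */
    return (depth << 8) | stencil;
}

int main(void)
{
    printf("distinct depth values: %u\n", 1u << 24);        /* 16,777,216 */
    printf("packed sample:         0x%08X\n", pack_d24s8(0.5, 0xFF));
    return 0;
}
[/code]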
 
Hey Chap,

Very few things work at their peak performance figures ( that's why they are peak )

PC titles are ( by their very nature ) more difficult to optimise for, as the different HW combinations often require special work, and there is a great temptation to rely on faster HW rather than fully optimised code.
 
Guden Oden said:
Squeak said:
To me it seems odd that the engineers who made the GS chose not to allocate more internal bandwidth for textures. Out of the 48GB/s there is only 9.6GB/s available for textures

Only? 9.6 is more than, say, a GF3 Ti500 has for its entire framebuffer. Considering the basic architecture with 8 textured pixels/clock (2 clocks if you want trilinear, AFAIK), it's entirely sufficient. What could it possibly use more for, anyway?

It's already got a 2560-bit memory interface, isn't that enough for you? :)

I don't think 9.6GB/s is bad for its time

Not bad for its time? :) It's a totally, overwhelmingly class-leading device for its time and you say it's not BAD? The thing went on sale in 2000, was there anything even remotely similar in consumer space? No.

it just seems obvious to me that texture bandwidth should have first priority, and the more the better.

I don't know what I'm missing, but it must be something. Assume 16-bit textures and bilinear filtering: 4 taps * 2 bytes * 8 pipes = 64 bytes/clock * 150MHz (actually a little less, but whatever), which is no more than the ~9.6GB/s of texture bandwidth available. And we know PS2 games rarely ever use 16-bit textures, because actual devs working on the thing have posted here and said as much! What would it possibly use more than the currently available bandwidth for - 32-bit textures it doesn't have room to store, nor any need for? ;) Also, it'd be unlikely (to say the least) that the chip would need to fetch 4 unique texel samples for each pixel. Real-world texture b/w would likely be far less.
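Spelling that worst case out, using the same assumptions as the post above (8 textured pixels per clock, bilinear's 4 taps, 16-bit texels, and pessimistically no texel reuse between neighbouring pixels):

[code]
/* Worst-case texture fetch estimate from the post above: 8 pixel pipes,
   4 bilinear taps per pixel, 16-bit texels, and (pessimistically) no texel
   reuse at all between neighbouring pixels. */
#include <stdio.h>

int main(void)
{
    const double clock_hz     = 150e6;  /* nominal GS clock */
    const int pipes           = 8;      /* textured pixels per clock */
    const int taps_per_pixel  = 4;      /* bilinear filtering */
    const int bytes_per_texel = 2;      /* 16-bit texture */

    double bytes_per_clock = (double)pipes * taps_per_pixel * bytes_per_texel;  /* 64 */

    printf("worst-case fetch: %.1f GB/s\n", bytes_per_clock * clock_hz / 1e9);  /* ~9.6 */
    printf("texture bus:      %.1f GB/s\n", (512 / 8.0) * clock_hz / 1e9);      /* ~9.6 */
    /* Real traffic is far lower: adjacent pixels share most of their taps. */
    return 0;
}
[/code]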

The "cinematic" effects made easier by large screenbuffer bandwidth is nice, but if textures look bad, it doesn’t really matter how much blur and how many reflection maps you can do.

But it's already got sufficient texture b/w for the task, so why worry?

Sony must have known that texture bandwidth would be one of the first areas competitors would try to hammer them on

It's got over nine and a half gigs of texture b/w ALONE. The on-chip memory already makes it ridiculously fast at just about any framebuffer operation. Bandwidth is most likely the least of Sony's worries. ;)

when apparently they could have had so much better.

Yeah, because all their opponents SO out-bandwidth Sony, get outta here! :) Come on now, give them some cred for frig's sake. 48 gigs, can't mess with that. There's nothing on the market right now, four years and counting after PS2's release, that offers higher aggregate numbers. You could pull a Nvidia and factor in compression of course, but that'd be cheating...

I guess the question I'm really asking is whether the GS is bottlenecked at the eDRAM-to-texture-cache bus?

Uh, why would it be? I don't really know if the GS can be described as being bottlenecked anywhere internally; it's pretty fast for what it does. :)
I don't think you can compare a PC GPU's external bandwidth with the internal texture bandwidth of the GS. A better comparison would be between the L2-to-L1 texture cache bus on, for example, the NV2A, versus the bus from eDRAM to texture cache on the GS.
But the size of that bus on modern GPUs seems to be a big secret for some reason.

But if we suppose that you are right that 9.6GB/s is more than enough for textures, it still doesn't answer the question of why the engineers chose to use so much die space for the 38GB/s framebuffer bus, when other architectures get by with much, much less. They could have used the space for more eDRAM, or a register combiner, or something else.
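One plausible way to read that bus split, as a sketch: the frame buffer buses are sized so that Z-testing plus blending never throttle the pixel pipes. This assumes the commonly quoted GS figures of 16 pixels/clock untextured with 32-bit colour, 32-bit Z and a read-modify-write per pixel, so treat the numbers as assumptions rather than a confirmed rationale:

[code]
/* Sketch: why a ~38GB/s frame buffer path isn't arbitrary. Assumes the
   commonly quoted GS figures: 16 pixels/clock untextured, 32-bit colour,
   32-bit Z, and a read-modify-write per pixel (Z read + destination colour
   read, then Z write + colour write). Numbers are assumptions, not a quote
   from Sony. */
#include <stdio.h>

int main(void)
{
    const double clock_hz = 150e6;
    const int pixels_per_clock = 16;
    const int read_bytes_per_pixel  = 4 + 4;  /* Z read + destination colour read */
    const int write_bytes_per_pixel = 4 + 4;  /* Z write + colour write */

    double read_bw  = pixels_per_clock * read_bytes_per_pixel  * clock_hz;
    double write_bw = pixels_per_clock * write_bytes_per_pixel * clock_hz;

    printf("FB read:  %.1f GB/s\n", read_bw  / 1e9);             /* 19.2 GB/s */
    printf("FB write: %.1f GB/s\n", write_bw / 1e9);             /* 19.2 GB/s */
    printf("total:    %.1f GB/s\n", (read_bw + write_bw) / 1e9); /* 38.4 GB/s */
    return 0;
}
[/code]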
 
ERP said:
The issue is the setup for mip mapping, not the GS side; it's just that pretty much every piece of hardware since Voodoo 1, and a lot before it, has done it for you. You basically have to compute or fudge the parameters. Usually it ends up being the latter.
I would argue with this.
While it's technically possible to recalculate mip coefficients per polygon - and that should give you accurate lookups - if they wanted me to do this, why are the respective settings not part of the vertex registers? As it is, I'd have to clutter my display lists with extra texture register settings, not to mention the pretty sizeable calculation overhead involved in doing this on a per-vertex basis.
I don't mind doing something myself if the interface that allows me to do so is accessible, as opposed to almost prohibitive.
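For illustration, here is roughly what that per-primitive fudge can look like: picking one LOD for a whole triangle by comparing its texture-space footprint to its screen-space footprint. The area-ratio heuristic and all names below are mine, not the GS register interface, which (as noted above) expects precomputed LOD parameters via texture register settings:

[code]
/* Sketch of a per-triangle mip level estimate: compare how many texels the
   triangle covers in texture space to how many pixels it covers on screen.
   The heuristic and names are illustrative only. */
#include <math.h>
#include <stdio.h>

/* Twice the area of a 2D triangle (absolute value of the cross product). */
static float area2(float ax, float ay, float bx, float by, float cx, float cy)
{
    return fabsf((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}

static float triangle_lod(const float scr[3][2],  /* screen-space xy, in pixels */
                          const float uv[3][2],   /* texture coords, 0..1 */
                          float tex_w, float tex_h)
{
    float pixel_area = area2(scr[0][0], scr[0][1], scr[1][0], scr[1][1], scr[2][0], scr[2][1]);
    float texel_area = area2(uv[0][0] * tex_w, uv[0][1] * tex_h,
                             uv[1][0] * tex_w, uv[1][1] * tex_h,
                             uv[2][0] * tex_w, uv[2][1] * tex_h);
    if (pixel_area <= 0.0f) return 0.0f;
    /* Each mip level halves resolution in both axes, so LOD grows with
       half the log2 of the texel-to-pixel area ratio. */
    float lod = 0.5f * log2f(texel_area / pixel_area);
    return lod > 0.0f ? lod : 0.0f;
}

int main(void)
{
    const float scr[3][2] = { {0, 0}, {32, 0}, {0, 32} };  /* small on screen      */
    const float uv[3][2]  = { {0, 0}, {1, 0},  {0, 1}  };  /* covers whole texture */
    printf("approx LOD: %.2f\n", triangle_lod(scr, uv, 256.0f, 256.0f));  /* ~3.00 */
    return 0;
}
[/code]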

Deadmeat said:
GS = an 8-way PSX1 GPU SLI
OK, I'm just about tired of this, so you either give me an explanation or I will reply to every post you make from now on with this same question, regardless of what you posted about.
SLI = Scan Line Interleaving. Now, to the best of my knowledge, scanlines all go in one direction, usually horizontal.
So how, pray tell, do you interleave 8 chips on scan lines and get them drawing in an 8x2 quad pixel configuration (textureless) and a 4x2 quad pixel configuration (textured)?
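To make the distinction concrete, a toy sketch of the difference between handing pixels to units by scanline versus by position inside a block footprint. The mappings are my own illustration, not the GS's actual pixel assignment:

[code]
/* Toy illustration of the point above: scan line interleaving assigns whole
   scanlines to units, while a block-based rasteriser assigns pixels by their
   position inside a small 2D footprint. Both mappings are illustrative only,
   not the GS's real assignment. */
#include <stdio.h>

/* SLI-style: a unit owns every 8th scanline. */
static int unit_scanline_interleave(int x, int y) { (void)x; return y % 8; }

/* Block-style: each of 8 units owns a column within an 8-pixel-wide footprint;
   an 8x2 footprint means each unit touches two vertically adjacent pixels. */
static int unit_block_8wide(int x, int y) { (void)y; return x % 8; }

int main(void)
{
    printf("pixel (3,5): SLI unit %d, block unit %d\n",
           unit_scanline_interleave(3, 5), unit_block_8wide(3, 5));
    printf("pixel (3,6): SLI unit %d, block unit %d\n",
           unit_scanline_interleave(3, 6), unit_block_8wide(3, 6));
    /* Under SLI the owning unit changes as y changes; under the block scheme
       neighbouring pixels in a row go to different units every clock. */
    return 0;
}
[/code]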
SPU2 = two PSX1 SPUs in one chip.
Yes, that's how they got PS1 compatibility, among other things.
CLUT textures will never compare to real compressed textures.
So you are saying that VQ doesn't qualify as texture compression?

Marconelly said:
You can't have a skydome textured with a 4-bit CLUT texture, for example, and 8-bit CLUT is not very space efficient.
No, skydomes are usually best done with luminance compression, which averages as low as 2 bits/texel. 8)
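For a rough sense of the storage side of this argument, a sketch comparing the bytes needed for one 256x256 texture under the formats being discussed. Palette and codebook sizes are the straightforward ones (32-bit palette entries, a 2KB VQ codebook as on DC); the ~2 bits/texel figures for VQ indices and luminance textures come from the posts above:

[code]
/* Rough storage comparison for a single 256x256 texture under the formats
   discussed above. Palette/codebook overheads are assumptions (32-bit palette
   entries, 2KB DC-style VQ codebook); the ~2 bits/texel figures for VQ and
   luminance come from the posts above. */
#include <stdio.h>

int main(void)
{
    const int texels = 256 * 256;

    int raw16 = texels * 2;            /* 16-bit truecolour, no palette  */
    int clut8 = texels     + 256 * 4;  /* 8-bit indices + 256-entry CLUT */
    int clut4 = texels / 2 +  16 * 4;  /* 4-bit indices + 16-entry CLUT  */
    int vq    = texels / 4 + 2048;     /* ~2-bit VQ indices + codebook   */
    int lum   = texels / 4;            /* ~2 bits/texel luminance        */

    printf("16-bit raw: %6d bytes\n", raw16);
    printf("CLUT8:      %6d bytes\n", clut8);
    printf("CLUT4:      %6d bytes\n", clut4);
    printf("VQ (DC):    %6d bytes\n", vq);
    printf("luminance:  %6d bytes\n", lum);
    return 0;
}
[/code]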

Chap said:
Hmm...if only the demo werent 500mb, i be trying on my PC but how monstrous the PC is needed? I think a good GF3 will do the job greatly at 480 VGA + mipmapping. i.e The game be looking very much closer to those screenshots than PS2 will ever display.
The game looks like those high-res screens when you run it at that kind of resolution. To run SH3 at ~30fps in 1024x1024, you need more than a GF3.
 
Hi Crazyace,
Yeah, that be about peak. BUT I hear the GS is a bit less efficient than the "norm", SINCE there are 16 pipes to keep constantly fed.

Hi Faf,
I be meaning 640x480 VGA like these
[attached screenshots: PC_02.jpg, PC_04.jpg, PC_01.jpg]

Looking better than PS2 SH3 ever could. NOW I know some be gonna mention the Blaze VGA adapter, but I really doubt it could replicate the true VGA IQ a PC puts out.
 
CLUT textures will never compare to real compressed textures.
So you are saying that VQ doesn't qualify as texture compression?

You know, technically, if I take a 512x512 texture and scale it down to 64x64, that is a form of "compression", just like CLUT is a form of "compression". However, CLUT is a long out-of-date form of compression.
 
chapban. said:
Hi Faf,
I be meaning 640x480 VGA like these
... snip, cannot be displayed with referrer logging ...
Looking better than PS2 SH3 ever could. NOW I know some be gonna mention the Blaze VGA adapter, but I really doubt it could replicate the true VGA IQ a PC puts out.

To achieve image quality like those shots, you will need a back buffer of at least 1024x1024 downsampled to a 640x480 front buffer.

If you render at 1024x512 or even 512x512, you will get a screen that looks very much like PS2 output.

To render at 1024x1024 and run smoothly, you will need an ATI 9700 or faster. GF4s don't cut it.
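For a rough sense of the extra work that supersampling implies, this is nothing more than pixel counting over the resolutions mentioned above:

[code]
/* Simple pixel-count comparison for the suggestion above: rendering into a
   1024x1024 back buffer and downsampling to a 640x480 front buffer costs a
   multiple of the fill/shading work of rendering at 640x480 directly. */
#include <stdio.h>

int main(void)
{
    const double front  = 640.0 * 480.0;    /*   307,200 pixels */
    const double super  = 1024.0 * 1024.0;  /* 1,048,576 pixels */
    const double medium = 1024.0 * 512.0;   /*   524,288 pixels */

    printf("1024x1024 vs 640x480: %.1fx the pixels\n", super  / front);  /* ~3.4x */
    printf("1024x512  vs 640x480: %.1fx the pixels\n", medium / front);  /* ~1.7x */
    return 0;
}
[/code]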
 