PS2 EE question

DOOM 3 ran poorly on GF3, as it couldn't do fancy per-pixel lighting in one pass. It also lacked the bandwidth and fillrate for the heavy stencil shadow rendering system.

As I understand it, DOOM 3 worked like that by design, on any hardware: a new pass for every light. That plus the use of stencil shadow volumes made the game very fillrate-hungry.
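To put it concretely, the frame loop is roughly like this. A simplified C++ sketch with stubbed-out draw calls, not id's actual code; all the names here are mine:

```cpp
#include <vector>

// Minimal stand-in types for this sketch; not id Tech 4's real structures.
struct Light {};
struct Scene { std::vector<Light> lights; };

void FillDepthBuffer(const Scene&) {}   // depth/ambient pre-pass (stub)
void ClearStencil() {}                  // reset stencil for the next light
void DrawShadowVolumes(const Light&) {} // extrude + rasterize volumes: huge fillrate
void DrawLitSurfaces(const Light&) {}   // additive lit pass, masked by stencil

// One full lighting pass per light: fill/bandwidth cost scales with the
// number of lights times shadow-volume overdraw. On GF3-class hardware the
// lit pass itself also split into two or more passes.
void RenderFrame(const Scene& scene) {
    FillDepthBuffer(scene);
    for (const Light& light : scene.lights) {
        ClearStencil();
        DrawShadowVolumes(light);
        DrawLitSurfaces(light);
    }
}
```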
 
I did finish Doom 3 on a Pentium III 1 GHz with 320 MB RAM and a GeForce4 Ti 4200 with 128 MB. It ran at an average of 15 to 20 FPS! Yes, I'm a very patient guy :LOL:
 
Doom 3 on Xbox isn't what it's like on PC; the differences aren't that small either in some effects/lighting. For example, in the first level, where a body comes hanging from the ceiling, the light effects are completely different in that room/hallway.

Not saying it looks bad, not at all. It has DD 5.1 too and runs at 30 fps.
What he meant was that there wasn't much of a difference going from GF3 to GF4, beyond one extra vertex shader. Then again, I'm also sure a GF4 Ti would have some more improvements than just an added vertex shader. At 640x480, maybe fillrate was less of a problem than at the higher resolutions common in 2004? I can't remember which DX path I was running Doom 3 on, but at 640x480 it wasn't worse than on Xbox, at least not to a degree that was easy to spot. Mind you, I don't think Doom 3 was built from the ground up for the Xbox either.

Besides, from searching around, both the GF3 and GF4 seemed to struggle with Doom 3 at about the same level, at least going by old forum posts around the net.
Upgrading from a GF3 to a GF4 wasn't worth the money IMO; it was better to wait for the 9700 Pro, or something from ATI in that range. Nvidia's 5800/5900 wasn't much of a success, or even that much better than the 4x00 series.
 
DOOM 3 ran poorly on GF3, as it couldn't do fancy per-pixel lighting in one pass. It also lacked the bandwidth and fillrate for the heavy stencil shadow rendering system.

And World of Warcraft chugged quite badly on GF3 as well, due to lots of overdraw, even when the game had far less view distance than it does today. There were undoubtedly other games that ran poorly on it too, especially towards the end of its lifespan... The original Far Cry perhaps? I don't recall when that one came out.
I never really got into WoW, but yeah, Doom 3 was pretty slow if you didn't run it at 480p or maybe 600p. Doom 3 and Far Cry came out in 2004; I know I'd moved on from the card by then, first to a Radeon 8500 and then to a 9700. I had an X700 right around the time of Battlefield 2, which would have been summer 2005. Anyway, the point was that I don't think the extra geometry engine in NV2A made a world of difference, as games that were available on both Xbox and PC ran equally well or better on a GeForce 3 at comparable resolutions (640x480 or 800x600), and sometimes even higher ones.
 
As I understand it, DOOM 3 worked like that by design, on any hardware: a new pass for every light.
Yes, but GF3 could not do a light in a single pass, requiring at least two (I don't recall exactly). DX9-class hardware was required for single-pass lighting in D3 (i.e., GeForce FX, Radeon 9800 or better).

My R9800 Pro, which overclocked quite well both on core and memory as I recall, ran D3 at 1080p @ 60 fps steady after some driver updates. It was quite nice actually! Of course, if you modded the game to give dynamic lights to the plasma rifle especially, the framerate crashed, but still, it was fun.

I should try to find that old mod I made so long ago now and see how a modern card handles dynamic light plasma rifle fire...
 
@Grall

You're right in that to play Doom 3 at its best you needed DX9 hardware, and with that a 9700 Pro or better, to enjoy the game like it was meant to be. Neither GF3 nor GF4, nor NV2A for that matter, would do it. GF3 and GF4 had quite equal performance in most games.

Yeah, the 9800 Pro was a real nice GPU for its time. I had a 9600 XT because it was cheaper, and it ran Doom 3 and HL2, even Far Cry, very nicely. Of course, that was paired with a fast CPU too.
Do you mean Sikkmod for Doom 3?


I can also recommend the OpenCoop mod; it's possible to play through the whole game with a friend, new levels, etc.

http://www.moddb.com/mods/opencoop
 
Do you mean Sikkmod for Doom 3?
No, just a bunch of changes to the game's plain-text weapon definitions that turned on dynamic lights (and raised damage/ammo capacity as well, because hell, why not. :D) The dynamic lights bit I had help with from people on forums who knew which lines to change into what, and the damage/ammo bit I just reverse-engineered myself by reading the config files - they're all plain English text, so it's very easy to do.

Just plug new numbers into the right places and BOOM, you have a shotgun firing 50 pellets instead of 6 or whatever... :p I couldn't get it to work with the D3 expansion though, since technically that one was also a mod of the original game... Very saddening, yes. It would have been fun to be able to mod the double-barreled shotgun.
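For flavour, the kind of edit I mean looks roughly like this. This is a from-memory sketch in the style of Doom 3's plain-text .def files; the entityDef names, keys and values here are illustrative, so check the real files before copying anything:

```
// Illustrative only: entityDef names, keys and values from memory, not the real files.
entityDef damage_shotgun {
    "damage"        "50"            // per-pellet damage, raised from stock
}

entityDef projectile_plasmablast {
    "light_color"   "0.4 0.7 1.0"   // bluish dynamic light on the bolt
    "light_radius"  "160"           // how far the light reaches
}
```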
 
Yeah, Doom 3 is very nice to modders; there are many mods out there to alter the graphics, but also co-op mods etc., which made the game worth it IMO.
Back to the VU vs. a vertex shader: we were comparing VU1 against a vertex shader, but didn't VU1 have more work to do on average than a vertex shader? I mean, the EE had to transform and do many graphics functions, while a vertex shader has its specific functions, with other parts of NV2A taking care of pixel shading effects.
 
Depends on the game. Clearly VU1 being tasked to run animation and physics on top of T&L would be insanely compromised. It's difficult to find hard data on exact VU usage.
 
I know VU1 takes on T&L for most games, but do the NV2A's vertex shaders take care of T&L as well? The GeForce 256 and on had hardware T&L, and the GeForce 3 and up got vertex/pixel shaders; is there a separate hardware unit left that takes care of T&L?
 
Transform and (per-vertex) lighting was the primary role of vertex shaders. That's mostly what they did (along with some aspects of animation, if it was implemented in VS code).

NV2A could also perform per-pixel lighting in a way nothing else could (well, maybe the Dreamcast, kinda, but almost nothing used it), and that used both vertex normals (VS) and per-pixel/fragment normal values (PS).
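For reference, the per-vertex work both the old T&L units and a basic vertex shader performed boils down to something like this. A minimal C++ sketch with my own toy vector types, not any real driver or shader code:

```cpp
#include <algorithm>

// Minimal vector/matrix types for this sketch.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec4 Transform(const Mat4& m, const Vec4& v) {
    return { m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z + m.m[0][3]*v.w,
             m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z + m.m[1][3]*v.w,
             m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z + m.m[2][3]*v.w,
             m.m[3][0]*v.x + m.m[3][1]*v.y + m.m[3][2]*v.z + m.m[3][3]*v.w };
}

struct VertexOut { Vec4 clipPos; float diffuse; };

// The core of fixed-function T&L and of a basic vertex shader alike:
// transform the position into clip space, then evaluate a Lambert (N.L)
// term per vertex for the lighting.
VertexOut ShadeVertex(const Mat4& mvp, const Vec3& lightDir,
                      const Vec4& position, const Vec3& normal) {
    return { Transform(mvp, position),
             std::max(0.0f, Dot(normal, lightDir)) };
}
```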
 
I didn't know that; so the T&L engine from GeForce 1/2 was taken away in favor of the new vertex shaders for GF3 and up?

That's mostly what they did (along with some aspects of animation, if it was implemented in VS code).

So was animation possible on vertex shaders? Also things like particles?
 
I didn't know that; so the T&L engine from GeForce 1/2 was taken away in favor of the new vertex shaders for GF3 and up?

Basically, yeah.

Probably the most powerful T&L unit around in 1999 was Elan from PVR: 12 Mpps with six point (not directional!) lights. It powered VF4 (and Naomi 2) in the arcades, but as with short-dev-cycle arcade games, it was never pushed to anything like its limits.

So was animation possible on vertex shaders? Also things like particles?

Certain types of animation could be done with little CPU input, and particles, being primarily point-position based with time affecting the transform, would be ideal VS workloads.
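To illustrate why: with a closed-form motion equation, the GPU can evaluate every particle's position from just its birth state and the current time, with no CPU touching the vertex buffer each frame. A minimal C++ sketch, my own toy types, illustrative only:

```cpp
// Each particle's position is a closed-form function of its birth state and
// the current time, so a vertex shader can evaluate it per vertex with no
// per-frame CPU update of the vertex buffer.
struct Vec3 { float x, y, z; };

Vec3 ParticlePosition(const Vec3& p0, const Vec3& v0, const Vec3& gravity,
                      float birthTime, float now) {
    float t = now - birthTime;                 // particle age
    return { p0.x + v0.x * t + 0.5f * gravity.x * t * t,
             p0.y + v0.y * t + 0.5f * gravity.y * t * t,
             p0.z + v0.z * t + 0.5f * gravity.z * t * t };
}
```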
 
Aside from vertex shaders there were pixel shaders as well. Is that a different hardware unit, and if so, what can it/they do compared to vertex shaders?
 
what can it/they do compared to vertex shaders?
Well, they were maths processors, but they worked on pixel data instead of polygon vertices. Back in the day, before shader processors unified, there were different rule-sets for each type of shader processor, like the max number of allowed instructions, plus some differences in instruction sets, registers and such. Pixel shaders were quite primitive back then, largely described as register combiners on steroids.

You didn't have a lot of instruction slots per pixel shader program, and if you did more than a few instructions per pixel, your performance crashed and burned anyhow. It's not like these days, when high-end GPUs and cutting-edge game engines often run dozens to maybe hundreds of instructions per pixel without any visible slowdown at all. With NV2A-era hardware, for anything even slightly advanced, chances were you'd have to do multipass rendering, which could tank performance; plus, 8-bit integer precision per color channel meant banding could occur due to precision loss.

It wasn't until floating-point color buffers got fast enough to be useful that pixel shading could really start to stretch its legs as far as effects go. Modern games run an entire laundry list of special effects over much if not all of the screen, and still show no visible banding or other artefacts...
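As a toy model of what "register combiners on steroids" means: a DOT3 bump term done on 8-bit fixed point, with C++ standing in for the hardware. Illustrative only, not how the silicon literally worked:

```cpp
#include <algorithm>
#include <cstdint>

// A register-combiner-style DOT3 bump term on 8-bit fixed point, the way
// DX8-era hardware worked: normals are stored biased into [0,255], expanded
// to [-1,1], dotted, then squashed back to 8 bits. Those 256 output levels
// are exactly where the banding came from.
float Expand(uint8_t c) { return c / 127.5f - 1.0f; }  // [0,255] -> [-1,1]

uint8_t Dot3Combiner(const uint8_t n[3], const uint8_t l[3]) {
    float d = Expand(n[0]) * Expand(l[0])
            + Expand(n[1]) * Expand(l[1])
            + Expand(n[2]) * Expand(l[2]);
    d = std::clamp(d, 0.0f, 1.0f);                     // saturate, as in hardware
    return static_cast<uint8_t>(d * 255.0f + 0.5f);    // quantize to 256 levels
}
```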
 
Yes, but GF3 could not do a light in a single pass, requiring at least two (I don't recall exactly). DX9-class hardware was required for single-pass lighting in D3 (i.e., GeForce FX, Radeon 9800 or better).
Slightly OT, but that is not entirely true. The DX8-class Radeons (Radeon 8500 and 9000, mostly) could do it in a single pass as well; their pixel shaders were quite capable. Too bad, though, that the GF4 Ti 4200 was still quite a bit faster despite needing 2-3 passes anyway...
 
Well, they were maths processors, but they worked on pixel data instead of polygon vertices. Back in the day, before shader processors unified, there were different rule-sets for each type of shader processor, like the max number of allowed instructions, plus some differences in instruction sets, registers and such. Pixel shaders were quite primitive back then, largely described as register combiners on steroids.

You didn't have a lot of instruction slots per pixel shader program, and if you did more than a few instructions per pixel, your performance crashed and burned anyhow. It's not like these days, when high-end GPUs and cutting-edge game engines often run dozens to maybe hundreds of instructions per pixel without any visible slowdown at all. With NV2A-era hardware, for anything even slightly advanced, chances were you'd have to do multipass rendering, which could tank performance; plus, 8-bit integer precision per color channel meant banding could occur due to precision loss.

It wasn't until floating-point color buffers got fast enough to be useful that pixel shading could really start to stretch its legs as far as effects go. Modern games run an entire laundry list of special effects over much if not all of the screen, and still show no visible banding or other artefacts...

I understand the proprietary nature and purpose of the register combiners, but how are the PS2's "pixel pipes" different in comparison when it comes to architecture? Also, towards the end of dedicated shaders on Nvidia hardware, were the pixel shaders more akin to a combined vector & scalar implementation, like ATI did with the X1000 series?

We've come so far... but I long for the days when I really didn't know poop about the hardware, not because I didn't know poop, but because there still was a place and purpose to each console. They each had their own proprietary hardware philosophy: PS2 had too-far-forward-looking hardware ideas (oodles of vector processing), Xbox had a directly programmable graphics pipeline, and the Gamecube had... er... Resident Evil 4. Honestly, the GC was just a cost-effective, well-engineered, efficient box of goodness for those devs willing to put in the time.
 
I understand the proprietary nature and purpose of the register combiners, but how are the PS2's "pixel pipes" different in comparison when it comes to architecture? Also, towards the end of dedicated shaders on Nvidia hardware, were the pixel shaders more akin to a combined vector & scalar implementation, like ATI did with the X1000 series?

We've come so far... but I long for the days when I really didn't know poop about the hardware, not because I didn't know poop, but because there still was a place and purpose to each console. They each had their own proprietary hardware philosophy: PS2 had too-far-forward-looking hardware ideas (oodles of vector processing), Xbox had a directly programmable graphics pipeline, and the Gamecube had... er... Resident Evil 4. Honestly, the GC was just a cost-effective, well-engineered, efficient box of goodness for those devs willing to put in the time.

Rogue Leader was awesome at the time; Factor 5 knew how to use the hardware...
 
The Metroid Primes looked hella good too. They had skinned characters with ragdoll physics and some really nice environments and special effects (the original Prime especially was monstrously well polished).
 
And, I'm biased because I loved the game so much on N64, but the Gamecube version of Wave Race was very nice too at the time; the water effects and physics blew my mind.
 