Aquamark3 Preview at 3DGPU

Joe,

No, it's not.

Yes, it is. 3dfx spent lots of marketing muscle to convince people that 16/22-bit color looked as good as 24-bit color (or so close you couldn't tell the difference). They pushed for it to be benchmarked against 24-bit performance on other video cards.

Other than Nvidia saying it's a DX9 video card (and that may or may not be the case), how different is this situation? 3dfx would have turned the issue of supporting DX features backwards by saying the REAL issue is about supporting the features the "majority" of games actually use today. They also would have said that by the time developers really start using pixel shader 2.0 features, there will be newer/faster hardware out there that can utilize those features better.

Just curious, but do you still agree with that school of thought, or do you think people shouldn't make such a big fuss over pixel shader 2.0 support?
 
Ratchet said:
Yes, AquaMark3 only has 4 PS2.0 shaders, but those 4 shaders are used a lot throughout the benchmark. Here is a pic I snapped while testing SVIST (Shader VISualization Test). The red areas are PS2.0, the yellow are PS1.x, and the blue areas have no PS applied (the bright plume represents an area of high overdraw, in this case an explosion created by the particle system). Going by SVIST, the terrain, the vehicles, the buildings, and the rock formations all use PS2.0. On the ATI DX9 hardware I have here, when running SVIST most of the scenes are at least 50-60% "red".

So you're simply saying the ATi card uses PS2.0 for 60% of the frame and the Nvidia cards only for ~30%? Wow, how misleading.
The question now is: is the R9800 Pro faster because it can do PS2.0, or is it slower because it has to do PS2.0 when PS1.4 would give the same result?
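
To make that concrete (my own toy example, nothing from AquaMark): a shader like this gives the same visual result whether you compile it as ps_1_4 or ps_2_0; the only difference is which hardware path runs it.

Code:
// Hypothetical shader that fits comfortably in PS1.4 -- compiling it with
// fxc /T ps_1_4 or fxc /T ps_2_0 produces the same picture.
sampler2D diffuseMap : register(s0);

float4 main(float2 uv : TEXCOORD0, float4 vertColor : COLOR0) : COLOR
{
    return tex2D(diffuseMap, uv) * vertColor; // one texture modulate
}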
 
Qroach said:
Yes, it is. 3dfx spent lots of marketing muscle to convince people that 16/22-bit color looked as good as 24-bit color (or so close you couldn't tell the difference). They pushed for it to be benchmarked against 24-bit performance on other video cards.

Funny, I don't recall 3dfx's drivers ever accepting 32-bit frame-buffer requests from a game and rendering in 22-bit mode.

That's the difference.

With 3dfx, it was rendering increased quality in 16-bit framebuffer mode, not decreased quality in 32-bit framebuffer mode. 3dfx didn't support a 32-bit frame buffer, and didn't claim their cards did by passing off their 22-bit solution as a 32-bit mode.

Of course 3dfx pushed for 22 bit to be compared to 24 bit. But they didn't do it by claiming their hardware supported 32 bit, letting games think they were rendering in 32 bit while producing a 16/22-bit result.

Just curious, but do you still agree with that school of thought, or do you think people shouldn't make such a big fuss over pixel shader 2.0 support?

Everyone must decide for themselves how much shader 2.0 support matters.

The point is, nVidia shouldn't be making that decision for us. Render 2.0 if that's what the game calls for.

If 3dfx had said "yes, our V3 cards support a 32-bit frame buffer," letting developers treat the V3 as if it had one while only rendering 22 bit, then 3dfx would have been just as bad.
 
I agree with Joe. From my point of view, the issue at the moment is not that NV can't do FP24+; it's that NV can't do it at a good speed (compared to the R3** cards).
 
Qroach said:
Wait a sec, I thought NV could do FP16 and 32 but not FP24?
No, you have it right. When NV has to do FP24 it's forced to use FP32; it's kind of one of the crux-o-the-problems of their initial design decision. ;)
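
For reference, the bit layouts usually quoted for the three formats (the FP24 split is R300's as commonly reported, so treat it as approximate):

Code:
format  sign  exponent  mantissa  total
FP16      1      5        10      16 bits  (DX9 partial-precision minimum)
FP24      1      7        16      24 bits  (R3xx; DX9 full-precision minimum)
FP32      1      8        23      32 bits  (NV3x full precision / IEEE single)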
 
OK, just so I've got this straight...

NV supports FP16 and FP32
ATI supports FP24 (and not FP16?)

Does DX9.x "officially" support all three formats? FP16, FP24, and FP32?
 
DX9 supports "Full Precision" and "Partial Precision".

Full precision corresponds to a minimum of FP24 on all operations. Partial Precision allows some ops (bar texture addressing) to drop down to a minimum of FP16.
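
If it helps, here's a minimal HLSL sketch of the distinction (my own toy example, not from any game): 'float' math must run at full precision, 'half' gives the driver permission to drop those ops to partial precision, and texture addressing stays full precision either way.

Code:
sampler2D baseMap : register(s0);

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    // Texture addressing: always full precision (min FP24) under DX9.
    float4 texel = tex2D(baseMap, uv);

    // 'half' ops may run at partial precision (min FP16).
    half3 tinted = (half3)texel.rgb * half3(1.0, 0.9, 0.8);

    return float4(tinted, texel.a);
}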
 
OK, I understand. So when Joe said...

Funny, I don't recall 3dfx's drivers ever accepting 32-bit frame-buffer requests from a game and rendering in 22-bit mode.

This is currently happening to Nvidia, but a reversal could also happen to ATI if a game requests FP32, correct?

Joe,

Didn't the 3dfx Voodoo3 accept everything at 24 bit internally, only to output the rendered image as 22-bit color, even when the game requested 24-bit output? If that's true, then how is that different from what you just said?
 
Is the spec supposed to change at all when DX9b or c rolls along? (Is there any chance of M$ dropping the DX9 requirement to FP16?)
 
digitalwanderer said:
Is the spec supposed to change at all when DX9b or c rolls along? (Is there any chance of M$ dropping the DX9 requirement to FP16?)
I would doubt it. You can't really do much in the way of dependent texture lookups down at that precision.
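
Rough numbers on why, assuming the standard s10e5 FP16 layout:

Code:
FP16 (s10e5): 10 mantissa bits -> step size at uv = 1.0 is 2^-10 = 1/1024
1024-texel texture: one texel spans 1/1024 of the [0,1] UV range
=> at the top of the range one FP16 step is a whole texel, so a computed
   (dependent) texture coordinate has no sub-texel accuracy left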
 
Veridian3 said:
Ratchet said:
Yes, AquaMark3 only has 4 PS2.0 shaders, but those 4 shaders are used a lot throughout the benchmark. Here is a pic I snapped while testing SVIST (Shader VISualization Test). The red areas are PS2.0, the yellow are PS1.x, and the blue areas have no PS applied (the bright plume represents an area of high overdraw, in this case an explosion created by the particle system). Going by SVIST, the terrain, the vehicles, the buildings, and the rock formations all use PS2.0. On the ATI DX9 hardware I have here, when running SVIST most of the scenes are at least 50-60% "red".

Interestingly, when you run SVIST on an FX and a Radeon you get the same output from the program... i.e. AquaMark still thinks it's using PS2.0 in all the same places on both cards.

Stu
...but correct me if I'm wrong (I often am regarding this stuff): couldn't the shader be running in PS2.0, but only at partial precision? Does a PS2.0 shader automatically become a PS1.x shader when you lower its precision?

Do we now also need a PVIST? Precision Visualization Test? ;)
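
(My tentative reading, for what it's worth: the shader version and the precision are separate things. Something like this toy example is still ps_2_0 even if every op is allowed to run at FP16:)

Code:
// Still compiles and runs as ps_2_0; 'half' only flags the math as
// partial precision. Toy example, not AquaMark3 code.
sampler2D map : register(s0);

half4 main(float2 uv : TEXCOORD0) : COLOR
{
    half4 c = (half4)tex2D(map, uv);
    return c * (half)0.5; // partial-precision multiply, same shader model
}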
 
Qroach said:
This is currently happening to Nvidia, but a reversal could also happen to ATI if a game requests FP32, correct?

If a game demands FP32 then it is also going beyond the DX9 baseline and would have to present a particularly good reason for doing so--especially considering FP32 on nVidia runs like a dog and FP32 on ATi would get culled to FP24 anyway. Offhand I can't think of any good reason to do so, knowing that such a decision would be gimped or unnecessary on every DX9 card on the market.

The difference is that nVidia presented the FX series as "DX9 cards" and marketed them by saying "you need to see DX9 to really appreciate the FX cards," while what they currently have are cards that either run tremendously slowly within DX9 or force you outside it to regain speed.

I don't think people would have an issue if instead nVidia was saying "so who needs DX9 right now? Our cards PROUDLY scoff at such trivial matters to bring you the performance you want TODAY on the games you'll be playing JUST today!" But obviously such is not the case. ;) People could take issue with a statement like THAT in and of itself, but at least they couldn't claim misrepresentation. Hehe...
 