Luminescent said: If clocked equally, NV40 would open up a can on R420.
So what? What if the R300 was clocked equally to the NV30?
Luminescent said: If clocked equally, NV40 would open up a can on R420.
I think it depends on what the benchmark is demonstrating. Game development conditions? A technique? In the case of the former, emulating a feature that other hardware has may not be particularly fair -- unless it is deemed necessary. It doesn't make sense for a game developer to force one video card to be slower without giving the option for an alternative technique. In the case of demonstrating a technique, it is totally fair, as it shows how fast various hardware can render the effect with the same quality.

ERP said: If I were a betting person (which I ain't) my money would be on NV40 being faster; I base this solely on the use of Perspective Shadow Maps.
Assuming Futuremark uses the built-in depth texture sampling and PCF on NV hardware, emulating it at the same quality on ATI hardware is going to require a lot of extra shader ops.
Which I guess brings up another issue: is using a DX feature unique to one manufacturer, one that significantly accelerates rendering of a key feature in the benchmark, fair?
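To make the cost concrete, here's a rough HLSL sketch of the two paths ERP describes - nothing from 3DMark itself, just an illustration with made-up names (ShadowHardwarePCF / ShadowEmulatedPCF are mine). On NV hardware a single projected lookup into a bound depth texture returns an already-filtered PCF result; without depth texture support, the shader has to fetch the neighbouring depth texels, compare, and blend by hand:

[code]
// Hypothetical sketch of hardware PCF vs. manual emulation.

// NV path: sampling a bound depth texture with tex2Dproj does the depth
// compare and bilinear PCF in the texture unit - a single lookup.
float ShadowHardwarePCF(sampler2D shadowMap, float4 shadowPos)
{
    return tex2Dproj(shadowMap, shadowPos).r;
}

// Emulated path: the shadow map stores raw depth, so the shader spells
// out four fetches, four compares, and a manual bilinear blend.
float ShadowEmulatedPCF(sampler2D depthMap, float4 shadowPos, float2 texelSize)
{
    float2 uv    = shadowPos.xy / shadowPos.w;
    float  depth = shadowPos.z  / shadowPos.w;

    float2 f  = frac(uv / texelSize);   // approximate sub-texel blend weights
    float2 dx = float2(texelSize.x, 0.0);
    float2 dy = float2(0.0, texelSize.y);

    float s00 = (depth <= tex2D(depthMap, uv).r)           ? 1.0 : 0.0;
    float s10 = (depth <= tex2D(depthMap, uv + dx).r)      ? 1.0 : 0.0;
    float s01 = (depth <= tex2D(depthMap, uv + dy).r)      ? 1.0 : 0.0;
    float s11 = (depth <= tex2D(depthMap, uv + dx + dy).r) ? 1.0 : 0.0;

    return lerp(lerp(s00, s10, f.x), lerp(s01, s11, f.x), f.y);
}
[/code]

That's roughly a dozen extra arithmetic and texture instructions per shadow tap for the emulated version, which is the kind of gap ERP means.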
Bouncing Zabaglione Bros. said: Luminescent said: If clocked equally, NV40 would open up a can on R420.
But they don't clock equally, do they? Probably one of the reasons why ATI left out SM3.0 was so that they would have that clock speed advantage.
trinibwoy said: Bouncing Zabaglione Bros. said: Luminescent said: If clocked equally, NV40 would open up a can on R420.
But they don't clock equally, do they? Probably one of the reasons why ATI left out SM3.0 was so that they would have that clock speed advantage.
So when ATI is forced to move to SM3.0, who do you guys think will have the upper hand? That will be interesting to see, since we won't have this precision / feature support disparity that we do now. Unless nvidia decides to support SM3.0b or some shit and increase their transistor count even more.
trinibwoy said: So when ATI is forced to move to SM3.0, who do you guys think will have the upper hand? That will be interesting to see, since we won't have this precision / feature support disparity that we do now. Unless nvidia decides to support SM3.0b or some shit and increase their transistor count even more.

maosee said: SM2b is for cheerleaders.
I think you'll find that ATI's implementation of PS1.4 had additional precision as well as range - remember that a major advance of PS1.4 was generalised dependent texture lookups. If a PS1.4 implementation only had 8 bits of precision you would get no bilinear filtering on a 256x256 texture on a dependent read, and wouldn't even be able to address all the texels in a 512x512 texture individually, which wouldn't exactly be great...

Ostsol said: Before DX9 there was simply no FPxx in the programmable pixel pipeline. Everything was some sort of fixed-point precision. PS1.x cards had 8 bits of mantissa precision, plus the sign bit. ATI's implementation of PS1.4 allowed the same precision, but more range. Instead of a range of [-1,1], they had [-8,8].
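Worth spelling out the arithmetic behind andypski's point (my numbers, just restating his claim):

2^8 = 256 addressable positions along a [0,1] texture axis.
256 < 512, so a dependent read couldn't even hit every texel of a 512x512 map.
On a 256x256 map, all 8 bits are spent just picking the texel (one position per texel), leaving no sub-texel fraction to drive bilinear weights.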
True. . . If I still had my Radeon 8500 I'd experiment with this.

andypski said: I think you'll find that ATI's implementation of PS1.4 had additional precision as well as range [...]
ERP said: I don't know about the average PC developer, but if I were implementing PSM on both pieces of hardware (and FWIW I think it's probably the right way to go), I would implement it using NV's depth textures and emulate it on ATI hardware. I might give an option (or just default) to reduce the quality on ATI cards.
The real question is what 3DMark is trying to simulate (and I think it's game-like environments), so it comes down to what the Futuremark developers think subsequent games will do and how they will implement it: multiple paths for cards with depth texture support and those without, or a single path that ignores depth texture support on both sets of cards. I can see arguments that are valid either way.
ERP, Scali: Am I missing something?

Scali said: ERP said: I don't know about the average PC developer, but if I were implementing PSM on both pieces of hardware [...]
This seems to be the general trend. Carmack is going to use PCF...
Carmack seems to have spent a LOT of time with shadow maps, and is saying built-in PCF doesn't look very good. I sort of wonder if PCF + multiple jittered samples may look a bit better than w/o PCF, but probably not (I'm sure Carmack thought of that and tried it).

Carmack said (http://www.gamedev.net/community/forums/topic.asp?topic_id=266373): With shadow buffers, the new versions that I've been working with, there's a few things that have changed since the time of the original Doom 3 specifications. One is that we have fragment programs now, so we can do pretty sophisticated filtering on there, and that turns out to be the key critical thing. Even if you take the built-in hardware percentage closer filtering [PCF], and you render obscenely high resolution shadow maps (2000x2000 or more than that), it still doesn't look good. In general, it's easy to make them look much worse than the stencil shadow volumes when you're in that basic kind of hardware-only level filtering on it. You end up with all the problems you have with biases, and pixel grain issues on there, and it's just not all that great. However, when you start to add a bit of randomized jitter to the samples - you have to take quite a few samples to make it look decent - it changes the picture completely. Four randomized samples is probably going to be our baseline spec for normal kind of shipping quality on the next game. That looks pretty good.
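For what it's worth, here's a minimal HLSL sketch of the jittered approach Carmack describes - four randomized depth comparisons averaged in the fragment program. The jitter texture, tiling factor, and offsets are all made up for illustration; this is not Carmack's actual code:

[code]
// Hypothetical 4-sample jittered shadow lookup (illustrative names/constants).
float ShadowJittered(sampler2D depthMap,    // shadow map storing raw depth
                     sampler2D jitterMap,   // small tiled texture of random 2D offsets
                     float4 shadowPos,      // projected shadow-space position
                     float2 screenUV,       // indexes the jitter texture per pixel
                     float2 filterRadius)   // blur radius in shadow-map UV units
{
    float2 uv    = shadowPos.xy / shadowPos.w;
    float  depth = shadowPos.z  / shadowPos.w;

    float lit = 0.0;
    for (int i = 0; i < 4; i++)
    {
        // Per-pixel random offset; tiling the jitter map over the screen
        // decorrelates neighbouring pixels so the noise averages out.
        float2 jitter = tex2D(jitterMap, screenUV * 32.0 + i * 0.37).rg * 2.0 - 1.0;
        float  stored = tex2D(depthMap, uv + jitter * filterRadius).r;
        lit += (depth <= stored) ? 0.25 : 0.0;   // each sample contributes 1/4
    }
    return lit;   // 0 = fully shadowed, 1 = fully lit
}
[/code]

More samples and smarter jitter patterns look better still, but even four trades the hard pixel-grain edges Carmack complains about for soft noise.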
yep, cheerleaders with "a little somethin' extra", ifyaknowutimean. So what's SM3.0? Is that for male cheerleaders?
see colon said: yep, cheerleaders with "a little somethin' extra", ifyaknowutimean. So what's SM3.0? Is that for male cheerleaders?
i have nothing productive to add to this conversation
Just giving a one-sided fact. I know from a consumer's standpoint it's quite insignificant, although I believe the 6800 Ultra Extreme wins a good number of SM 2.0 benchmarks against the XT PE without SM 3.0-optimized programming.

Bouncing Zabaglione Bros. said: Luminescent said: If clocked equally, NV40 would open up a can on R420.
But they don't clock equally, do they? Probably one of the reasons why ATI left out SM3.0 was so that they would have that clock speed advantage.
Going by this, I wonder if there is any significance to the newly bolded text.

Luminescent said: although I believe the 6800 Ultra Extreme wins a good number of SM 2.0 benchmarks against the XT PE without SM 3.0-optimized programming.
Luminescent said: 6800 Ultra Extreme