Who says Nvidia is faster in Doom3?

russo121

Regular
I think I am out of date, but due to the confusion around HL2, TRAOD, AM3... I re-read the .plan file from JC at http://www.webdog.org/cgi-bin/finger.plm?id=1 (I think it's the last one), and what we can check is:

1- "...ARB2 (floating point fragment shaders..."

2 - "...The NV30 runs the ARB2 path MUCH slower than the NV30 path. Half the speed at the moment. This is unfortunate, because when you do an exact, apples-to-apples comparison using exactly the same API, the R300 looks twice as fast, but when you use the vendor-specific paths, the NV30 wins..."

3 - "...and because I feel Nvidia still has somewhat better driver quality (ATI continues to improve, though)..."

I know he's talking about the NV30 and not the NV35... but there was also no .plan update about the NV35. Why? Is it the same story?

Isn't what we see in point 2 exactly what is happening with HL2, TRAOD, etc.?

What does he mean by "...vendor-specific paths..."? Dropping from 32-bit to 16-bit?

Is it just me, or is JC biased toward Nvidia in point 3?

Lots of questions from a dummy...


PS: don't read this one - why do I feel that when Doom3 comes out it will already be outdated by other games?
 
You might wanna try the "search" function on the forum; Doom3 has pretty well been covered here from every imaginable angle, and a few I would never have thought of. ;)

Basically:

-FX probably will be faster at D3 than Radeon.

-Carmack is coding a special path for the FX, and that's why.

-Someone has received an undisclosed (rumored $5 million) amount for the nVidia/D3 tie-in. (Not sure if it's id, their publisher, or whoever.)

-Carmack pretty much acknowledges the R3xx as being the superior product, he just doesn't come out and say it.

-The benchmarks that "prove" the FX is faster than the Radeon are total BS; you'll have to use that "search" button to find out why, 'cause I'm just a bit sick to death of listing it out.

Hope that helps ya some. Welcome to Beyond3D and enjoy your stay. :)
 
I imagine it will be, but I also imagine there will be NOWHERE near the performance gap we see with the DX9 games coming out now, and probably little in the way of IQ differences.

Carmack is programming to OGL and nVidia's strengths, and he's pretty well known for being able to drag great capacity out of specific hardware. ^_^ I don't think he'll be doing as much to program specifically for Radeons, but meanwhile the game on the whole doesn't use the FX's weak points, so we'd expect to see general performance comparisons much as we do in 99% of the games right now, leaning more in the FX's direction because of what Carmack wants to draw out of them.

The FXes are not horrible cards, but they have an Achilles' heel which is getting more pronounced in new games. However, if games are not shader-heavy or playing to DX9 standards, we're not going to see those problems pop up. Doom III seems to be playing to neither, and concentrating in very different directions both technically and artistically.
 
cthellis42 said:
The FXes are not horrible cards, but they have an Achilles' heel which is getting more pronounced in new games. However, if games are not shader-heavy or playing to DX9 standards, we're not going to see those problems pop up. Doom III seems to be playing to neither, and concentrating in very different directions both technically and artistically.

Here is what JC had to say about shader performance on the NV3x just ten days ago:

GD: John, we've found that NVIDIA hardware seems to come to a crawl whenever Pixel Shaders are involved, namely PS 2.0.

Have you witnessed any of this while testing under the Doom3 environment?

"Yes. NV30 class hardware can run the ARB2 path that uses ARB_fragment_program, but it is very slow, which is why I have a separate NV30 back end that uses NV_fragment_program to specify most of the operations as 12 or 16 bit instead of 32 bit."

John Carmack

Source:
http://www.gamersdepot.com/hardware/video_cards/ati_vs_nvidia/dx9_desktop/001.htm
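
For russo121's question about what a "vendor-specific path" actually is: it's a separate fragment-program back end chosen at startup from whichever extensions the driver exposes. Here's a rough sketch in C of how that selection might look (purely illustrative - the enum and pick_backend() are made up for this post, not id's code):

[code]
#include <string.h>
#include <GL/gl.h>

/* Illustrative only: choose a fragment back end the way a Doom3-style
 * renderer might, from the extensions the driver advertises.  The
 * ARB2/NV30 names follow Carmack's .plan. */
typedef enum { BACKEND_BASIC, BACKEND_NV30, BACKEND_ARB2 } backend_t;

static int has_ext(const char *all, const char *name)
{
    return all != NULL && strstr(all, name) != NULL;
}

static backend_t pick_backend(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);

    /* Vendor path: NV_fragment_program lets most operations be declared
     * as 12/16-bit instead of 32-bit, which is where the speed comes from. */
    if (has_ext(ext, "GL_NV_fragment_program"))
        return BACKEND_NV30;

    /* Generic floating-point path (what JC calls ARB2). */
    if (has_ext(ext, "GL_ARB_fragment_program"))
        return BACKEND_ARB2;

    /* Older hardware falls back to simpler per-pass paths. */
    return BACKEND_BASIC;
}
[/code]

The point being that NV30 hardware normally takes the first branch by default, and only runs ARB2 if you force it - which is where the "half the speed" comparison in the .plan comes from.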
 
I can only hope ATI's R3xx will always be 32-bit at playable frame rates - no mixed mode... quality above all!
 
russo121 said:
I can only hope ATI's R3xx will always be 32-bit at playable frame rates - no mixed mode... quality above all!
The R3xx chips only support 24-bit floats. Framebuffer output depends on the surface being rendered to, whether 16-bit, 32-bit, 64-bit or 128-bit. The mixed modes in HL2 that people are referring to are adding _pp modifiers to pixel shader instructions so that the driver can use lower quality (16-bit floats) if desired.
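
To put rough numbers on what those precisions mean, here's a quick C sketch (my own back-of-the-envelope illustration - it just truncates the mantissa, which is not exactly how any of these chips round): compare a value like a texel address in a 512-wide map at roughly fp32, fp24 and fp16 mantissa widths.

[code]
#include <math.h>
#include <stdio.h>

/* Crude mantissa rounding to mimic fp32 (23+1 bits), R3xx fp24
 * (16+1 bits) and the fp16 "half" that _pp hints allow (10+1 bits). */
static double quantize(double v, int mantissa_bits)
{
    if (v == 0.0)
        return 0.0;
    int exp;
    double m = frexp(v, &exp);               /* v = m * 2^exp, 0.5 <= m < 1 */
    double scale = ldexp(1.0, mantissa_bits);
    return ldexp(floor(m * scale + 0.5) / scale, exp);
}

int main(void)
{
    double v = 511.3;  /* e.g. a texture coordinate scaled to a 512-texel map */
    printf("fp32: %.5f\n", quantize(v, 24));  /* ~511.29999 */
    printf("fp24: %.5f\n", quantize(v, 17));  /* ~511.30078 */
    printf("fp16: %.5f\n", quantize(v, 11));  /*  511.25000 */
    return 0;
}
[/code]

A quarter-texel error like the fp16 case is harmless in one lookup, but it piles up through dependent reads and long shader chains, which is why forced fp16/fp12 can show up as banding in some of those DX9 titles while fp24/fp32 stay clean.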
 
But remember that when JC was talking about NV30 vs R300, he said that running ARB2, the NV30 was half as fast as on its special NV30 path and was much slower than the R300. However, when running the NV30 special path, the NV30 was faster than the R300 in most *but not all* circumstances.

Given that the NV35 and R350 seem to be just incremental increases (for the purposes of D3), I expect the performance delta between the two to be pretty similar, i.e. very close, with the NV35 pulling ahead in most, but not all, scenes, and only then due to its special NV30/reduced-precision paths.

However, who knows how much IQ Nvidia will be willing to sacrifice by the time D3 arrives in order to try and make their "killer app" look more impressive in terms of framerate numbers (if not IQ)?
 
russo121 said:
I can only hope ATI's R3xx will always be 32-bit at playable frame rates - no mixed mode... quality above all!
Although I'd love 32-bit precision everywhere as well, I'm not sure if it would make much of a difference.

Have you seen the new Half-Life 2 movie? If you look closely at the reflecting rooftop, you see a lot of aliasing. That's most probably caused by not using dsx/dsy to get the right mipmaps for the environment bump mapping. So before we need more precision in the shader registers, we need better texture sampling...
 
It's not the texture sampling that's the problem. The textures are sampled just fine, but the effect applied to those values introduces aliasing. Even the most perfect filter could not solve it. To get the lighting correct, you need to evaluate the lighting on all the normal vectors and come up with an average, rather than coming up with an average normal and then applying lighting to that single normal.
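
To put a number on that (my own toy example, not anything out of HL2): take two normals in one pixel's footprint, tilted 20 degrees either side of the half-angle vector, and a specular exponent of 32.

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    const double exponent = 32.0;            /* specular power, picked for illustration */
    const double tilt = 20.0 * pi / 180.0;   /* each normal is 20 deg off the half vector */

    /* cos(angle to half vector) for the two underlying normals */
    double c1 = cos(tilt);
    double c2 = cos(-tilt);

    /* Correct: light each normal, then average the results. */
    double avg_of_lit = 0.5 * (pow(c1, exponent) + pow(c2, exponent));

    /* What filtering the normals does: the averaged (and renormalized)
     * normal points straight along the half vector, so it lights up fully. */
    double lit_of_avg = pow(1.0, exponent);

    printf("average of lit normals : %.3f\n", avg_of_lit); /* ~0.137 */
    printf("lit averaged normal    : %.3f\n", lit_of_avg); /* 1.000 */
    return 0;
}
[/code]

One value is about 0.14, the other jumps to 1.0; as the footprint slides across the bump map that bright value pops in and out, which is exactly the shimmer on the rooftop. No texture filter can fix it, because the non-linearity happens after the filtering.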
 
Nick said:
Have you seen the new Half-Life 2 movie? If you look closely at the reflecting rooftop, you see a lot of aliasing. That's most probably caused by not using dsx/dsy to get the right mipmaps for the environment bump mapping. So before we need more precision in the shader registers, we need better texture sampling...

I don't think this is the problem. Hardware is supposed to choose the correct mipmap automatically (although the Radeon 8500 had a driver problem with it, I think). I know for sure that when doing texm3x3spec and similar instructions, the R300 chooses the mipmap correctly by using a pixel quad.

Besides, I'm pretty sure HL2 uses math for the lighting, i.e. DOT3 with exponential specular, and not an environment map. The aliasing is a fundamental flaw of DOT3 lighting. What they need to do is blur out the mipmaps of the normal maps a bit instead of using a simple box filter. Check Humus' recent thread on this matter. Another possible solution is using Polynomial Texture Maps, which can be filtered correctly, but they require more computations, and are an approximation which has strengths and weaknesses compared to DOT3.
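
A sketch of the "blur the normal-map mipmaps" idea in C (just illustrative - the 1-3-3-1 weights and the clamping are my choices, not something from Humus' thread): build each lower mip from a wider-than-box footprint and renormalize, so the bumps smooth out with distance instead of aliasing.

[code]
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* src is a w*h normal map; writes the (w/2)*(h/2) next mip into dst.
 * Uses a 4x4 footprint with separable 1-3-3-1 weights instead of a
 * plain 2x2 box, then renormalizes the filtered normal for DOT3. */
void downsample_normals(const vec3 *src, int w, int h, vec3 *dst)
{
    static const float k[4] = { 1.0f, 3.0f, 3.0f, 1.0f };

    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            vec3 sum = { 0.0f, 0.0f, 0.0f };
            float wsum = 0.0f;
            for (int j = 0; j < 4; ++j) {
                for (int i = 0; i < 4; ++i) {
                    /* clamp the footprint to the source image */
                    int sx = x * 2 - 1 + i;
                    int sy = y * 2 - 1 + j;
                    if (sx < 0) sx = 0;
                    if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0;
                    if (sy >= h) sy = h - 1;
                    float wt = k[i] * k[j];
                    sum.x += wt * src[sy * w + sx].x;
                    sum.y += wt * src[sy * w + sx].y;
                    sum.z += wt * src[sy * w + sx].z;
                    wsum += wt;
                }
            }
            sum.x /= wsum; sum.y /= wsum; sum.z /= wsum;
            dst[y * (w / 2) + x] = normalize3(sum);
        }
    }
}
[/code]

The renormalize at the end keeps the vectors unit-length for the DOT3 math; the wider filter is what kills the high-frequency sparkle in the distance.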

EDIT: Thanks for replying, Humus.
 
John Carmack @ QuakeCon (NVNews blurb, 9/11/03, http://www.nvnews.net/cgi-bin/archives.cgi?category=1&view=9-03) said:
The GeForce FX is currently the fastest card we've benchmarked the Doom technology on and that's largely due to NVIDIA's close cooperation with us during the development of the algorithms that were used in Doom. They knew that the shadow rendering techniques we're using were going to be very important in a wide variety of games and they made some particular optimizations in their hardware strategy to take advantage of this and that served them well.
 
This is why I lost a lot of respect for John Carmack, especially when we all know the cards in question are getting 'a lot' of special attention AND not even meeting DX9 criteria AND forcing developers to use 'custom code paths'.
Harping about higher precision but becoming a salesman for low-precision-mode cards.

Show me the money ;)
 
Doomtrooper said:
This is why I lost a lot of respect for John Carmack, especially when we all know the cards in question are getting 'a lot' of special attention AND not even meeting DX9 criteria AND forcing developers to use 'custom code paths'.
Harping about higher precision but becoming a salesman for low-precision-mode cards.

Show me the money ;)

The scenes in Doom3 don't need as high precision as those in HL2; that's why Carmack can make an NV30 path with lower precision but without big IQ degradation (if there's any). Nvidia designed the hardware for the Doom3 engine and Carmack coded for that hardware; I don't see what's wrong with it.
 
Show me the money.

That's the easiest explanation according to popular conspiracy theories (Doom3/NVIDIA, HL2/ATI).

At the end of the day, it has been known for ages that JC would make extensive use of dynamic shadows in the game, and it's really only ATI's responsibility that they didn't concentrate more on Z/stencil performance with their recent designs.

Had the R3xx had twice the Z/stencil abilities it has today, the FXes wouldn't stand a chance, even with 12/16-bit mixed-mode precision.
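
For anyone wondering why raw Z/stencil rate is the number that matters here: every shadow volume in a Doom3-style renderer is drawn with colour writes disabled, touching nothing but depth and stencil. A bare-bones sketch of that pass in plain OpenGL (depth-fail variant, what people call "Carmack's reverse"; draw_shadow_volume() is just a placeholder for submitting the extruded volume geometry):

[code]
#include <GL/gl.h>

/* Placeholder: whatever submits the extruded shadow volume geometry. */
extern void draw_shadow_volume(void);

/* Bare-bones depth-fail stencil shadow pass: no colour, no texture,
 * no shader work - just depth tests and stencil inc/dec, which is why
 * Z/stencil throughput dominates. */
void stencil_shadow_pass(void)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* colour writes off */
    glDepthMask(GL_FALSE);                               /* depth was laid down in a prior pass */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    /* Back faces: increment stencil where the volume fails the depth test. */
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);
    draw_shadow_volume();

    /* Front faces: decrement stencil where the volume fails the depth test. */
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    draw_shadow_volume();

    /* Anything left with stencil != 0 is in shadow for this light. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glDisable(GL_STENCIL_TEST);
}
[/code]

Multiply that by several lights per scene and two passes per volume, and doubling Z/stencil fill rate would translate almost directly into frame rate, which is the point above.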
 
The scenes in Doom3 don't need as high precision as those in HL2

Eh?? Is Doom 3 released? We don't know that, do we.

Even if there is little difference, it is still very hypocritical. I do feel HL2 will be a more 'modern' engine than Doom III... but I also feel Carmack has swept the poor performance under the rug, so to speak.
Who knows how much time was invested in the NV30 codepath, 5x?? I do know I have the Doom 3 alpha leak and there is an NV30 code path in that old release :LOL: ... we are talking about 6 months prior to any FX hardware being made.

So tell me, how would Carmack 'know' about NV30 performance issues without hardware? IMO he has had plans for a custom codepath for the NV30 from day 1.
With all the image quality shots, from Tomb Raider: AOD to Halo to HL2 to Aquanox... it will be interesting to see how Carmack hides the low precision on NV30 hardware, as these developers couldn't.

It's easy to have the speed crown when you are rendering sub-par DX9 precision on supposedly DX9 hardware. FX5200->FX5900.
 
Doomtrooper said:
it will be interesting to see how Carmack hides the low precision on NV30 hardware, as these developers couldn't.

He'll be able to "hide it" mostly because the Doom3 engine doesn't really need high precision for its effects.
 
It's easy to have the speed crown when you are rendering sub-par DX9 precision on supposedly DX9 hardware. FX5200->FX5900.

Please read above. JC was able to lower precision in Doom3 since it actually has only a fraction of the advanced shaders HL2 has, and internal precision is less important; but he refused, for understandable reasons, to cut the dynamic shadows from his game - it would kill the whole purpose of it anyway.
 
I will reserve judgement until I see the final product... Doom 3 has been delayed again (anyone remember that Nvidia Kyro .PDF where a GF2 is 'best' for Doom3?) :LOL: ... guess that slide is outdated now.

[image: 04-06b-pic3.gif]


Maybe there is HDR being added, who knows.
 