David Kirk talks about NV25 improvements

ram,

thanks for the link, it's a very good read. And yes, a nice change from the usual marketing hogwash the internet is subjected to.

According to Kirk, his design team was able to improve upon inefficiencies in the NV2A's twin vertex pipeline architecture by observing and studying actual Xbox game code running on the hardware, and discovering stall conditions, conflicts and pipeline bubbles.
XBox developers as beta-testers? ;)

ta,
.rb

_________________
http://www.nggalai.com -- it's not so much bad as it is an experience.

nggalai, I wouldn't say that *too* loud, you might catch on fire. Spontaneous human combustion is becoming much more wide spread than initially believed. :smile:

 
Am I the only one who thinks these guys couldn't get 2x SmoothVision to work at all on the Athlon with the 8500?
 
lol
i noticed that too shark
im wondering what they were thinking when they posted those graphs

i noticed a lot of graph inconsistencies at digitlife as well

seems like everyone's always in a hurry to get these stupid reviews out
 
It appears from the AMD graph that ATI gives you 2X FSAA for "free", but what really happened is that despite our setting the FSAA to 2X in the control panel, the driver failed to put 2X FSAA "into gear". In other words, the setting didn't take, and so we wound up running sans FSAA when we wanted 2X. We saw some instances of nVidia's drivers doing the same thing in other tests, most notably Unreal Tournament. Both nVidia and ATI are running on their latest beta, non-WHQL drivers in this roundup.
 
And why are they the only ones running into this issue.. and only on the Athlon platform?

I guess the SmoothVision(tm) slider in Windows was specifically optimized for Pentium 4? Gee, I did notice it slides a little smoother between 2x and 6x on my P4 system. :smile:

(**cough.. monkeys.. cough**)
 
From what I gather, both of the GF4's VS are programmable. The 8500, however, has one fixed-function VS (for backward compatibility with DX7 games?).

So tell me, how will the GF4 handle DX7 games that used fixed T&L? Will it take a hit? And similarly, how will the 8500 handle games that are written with >=DX8 in mind? Will it take a performance hit?

How does the 8500 use its VS, since one of them is programmable and the other is not? Is it parallel processing similar to what the GF4 does with its VS? I am (very) new at all this, so thanks for being patient with me :smile:

-m
 
On 2002-02-07 07:28, merlin wrote:
So tell me, how will the GF4 handle DX7 games that used fixed T&L?

IIRC NV20 uses its vertex shader to emulate DX7 T&L. I would imagine the same to be true of NV25, but I don't know if having two vertex shaders will affect its DX7 T&L performance either way.

HTH :smile:
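
If it helps, here's roughly what "emulating DX7 T&L in the vertex shader" boils down to per vertex: the driver just expresses the old fixed-function transform-and-light math as a shader program. A toy C sketch, only to illustrate the idea (the names and structs are made up, this is not anyone's actual driver code):

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[4][4]; };   /* row-major */

    /* Transform a position (w = 1) into clip space. */
    static void transformPoint(const Mat4 *M, const Vec3 *p, float out[4])
    {
        for (int r = 0; r < 4; ++r)
            out[r] = M->m[r][0] * p->x + M->m[r][1] * p->y
                   + M->m[r][2] * p->z + M->m[r][3];
    }

    /* Per-vertex "fixed function" work: clip-space position plus one
       directional diffuse light, clamped at zero like the old T&L path. */
    static void fixedFunctionVertex(const Vec3 *pos, const Vec3 *normal,
                                    const Mat4 *worldViewProj,
                                    const Vec3 *lightDir, /* normalized */
                                    float outPos[4], float *outDiffuse)
    {
        float ndotl;
        transformPoint(worldViewProj, pos, outPos);
        ndotl = normal->x * lightDir->x
              + normal->y * lightDir->y
              + normal->z * lightDir->z;
        *outDiffuse = (ndotl > 0.0f) ? ndotl : 0.0f;
    }

Whether a programmable unit running this as a program keeps pace with a dedicated fixed-function block is a separate question, but the work itself is the same dot-product and multiply-add math the shader hardware does anyway.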
 
Man he worked awfully hard at not mentioning the actual word hierarchical-Z while describing the Z-culling.
 
On 2002-02-07 07:55, MfA wrote:
Man he worked awfully hard at not mentioning the actual word hierarchical-Z while describing the Z-culling.

:smile:
We must give nVidia credit where it is due -- they certainly know how to dance.
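
For anyone wondering what the Z-culling he's dancing around actually does, here's a toy C sketch of the hierarchical-Z idea: keep one coarse "farthest depth" value per screen tile, so a whole block of pixels can be rejected with a single compare instead of dozens of per-pixel Z reads. Purely illustrative, all names and tile sizes are invented, and it's not a claim about how NV25 (or R200) implements it:

    #define TILE     8                        /* 8x8 pixel tiles, for example */
    #define TILES_X  (1024 / TILE)
    #define TILES_Y  (768  / TILE)

    /* Farthest depth currently stored in each tile; cleared to 1.0f
       (the far plane) at the start of a frame. */
    static float tileMaxZ[TILES_Y][TILES_X];

    /* If the nearest depth of an incoming block is still farther away than
       everything already in the tile, the whole block is hidden: skip it. */
    static int tileOccluded(int tx, int ty, float blockMinZ)
    {
        return blockMinZ > tileMaxZ[ty][tx];
    }

    /* Conservative update, assuming the block covered the whole tile and
       every fragment passed the per-pixel depth test: the tile's farthest
       depth can only move closer. */
    static void tileUpdate(int tx, int ty, float blockMaxZ)
    {
        if (blockMaxZ < tileMaxZ[ty][tx])
            tileMaxZ[ty][tx] = blockMaxZ;
    }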

"IIRC NV20 uses its vertex shader to emulate DX7 T&amp;L."

Surely an emulation would be slower than the real thing? Maybe this is why developers are not jumping on the DX7/DX8 bandwagon: DX7 cards are affordable but the tech isn't attractive enough, whereas DX8 gives the freedom but the cards aren't in enough of the market's PCs yet. Thanks for the reply :smile:

-m
 
Whoever wrote the article certainly doesn't seem to have the greatest mathematical ability when comparing the GF4 to the Radeon 8500:

Its faster memory certainly plays a role, and here the GF4 enjoys an 36% bandwidth advantage (18% actual clock advantage)

It makes me wonder how many other errors are in the article!
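
To spell out why that 36% figure can't follow: assuming the commonly quoted clocks (Ti 4600 memory at 325 MHz, 650 MHz effective DDR, versus the 8500's 275/550 MHz) and the 128-bit bus both cards use, bandwidth scales directly with memory clock. 650 / 550 is about 1.18, so an 18% clock advantage is an 18% bandwidth advantage, not 36%.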
 
By the time DX9-class content is shipping in earnest, we'll likely be telling you about NV30.

I hate comments like that. NV30 will probably ship long before any meaningful number of games support DX8, let alone DX9.

 
The fact is that 3D graphics hardware, for 99% of developers out there, is already evolving too fast for them to keep up with. I think this is why people care more about speed than features.
 