What makes nVIDIA's cards so special?

K.I.L.E.R

In CPU-limited situations, nVIDIA cards have an advantage over ATi cards.

Why is that?

I believe it has something to do with nVIDIA making better use of the CPU's instruction set to the benefit of its cards.

Am I right?
 
K.I.L.E.R said:
In CPU-limited situations, nVIDIA cards have an advantage over ATi cards.

Why is that?

I believe it has something to do with nVIDIA making better use of the CPU's instruction set to the benefit of its cards.

Am I right?

Can you cite proof of this? I don't see any difference in frame rates between the cards in games like Jedi Knight 2, other than that the 9800 Pro and 9700 Pro are much, much faster than my 5800 Ultra.
 
Could the driver be a determining factor?

Say, the nVidia driver allocates resources better among local, AGP, and system memory.
 
K.I.L.E.R said:
In CPU-limited situations, nVIDIA cards have an advantage over ATi cards.

Why is that?

I believe it has something to do with nVIDIA making better use of the CPU's instruction set to the benefit of its cards.

Am I right?

I'm not sure I'd agree with this as a general statement, but if it is true in some cases then it's possible the gpu is utilizing more of the cpu than the vpu does....Heh...;)

A few years ago I looked at some interesting tests between a 3Dlabs card and an nVidia card of the time. The nVidia card walked all over the 3Dlabs card in the API framerate tests, but the interesting thing was that in doing so the nVidia drivers were using 100% of the CPU, while the 3Dlabs drivers were always using less than 50% (it's been a long time, so this is really fuzzy). I corresponded with the author of the article, and we agreed that nVidia was effectively piggy-backing off the CPU to gain much of its performance advantage. Of course, CPUs were much less powerful then than they are today. So if it's true, that would be my guess as to why you might see a slight nVidia advantage in some CPU-limited cases. Again, though, without looking at IQ this doesn't mean a whole lot. Back then neither the 3Dlabs card nor the nVidia card was using FSAA or AF, and I won't even guess about the filtering...
 
I would say that this isn't the case. R300-based cards beat NV30-based ones by quite a bit in CPU-limited situations, especially when the NV30 was released. This is probably because the R300's drivers have had more time to mature. At its own release, the R300 was beaten by the GF4s in CPU-limited situations for the same reason.


edit: edited vpu to cpu in last sentence
 
WaltC said:
A few years ago I looked at some interesting tests between a 3Dlabs card and an nVidia card of the time. The nVidia card walked all over the 3Dlabs card in the API framerate tests, but the interesting thing was that in doing so the nVidia drivers were using 100% of the CPU, while the 3Dlabs drivers were always using less than 50% (it's been a long time, so this is really fuzzy). I corresponded with the author of the article, and we agreed that nVidia was effectively piggy-backing off the CPU to gain much of its performance advantage. Of course, CPUs were much less powerful then than they are today. So if it's true, that would be my guess as to why you might see a slight nVidia advantage in some CPU-limited cases. Again, though, without looking at IQ this doesn't mean a whole lot. Back then neither the 3Dlabs card nor the nVidia card was using FSAA or AF, and I won't even guess about the filtering...

It was mentioned on opengl.org once that 3Dlabs' hardware has the ability to suspend driver threads and let the hardware wake the driver thread up again through an interrupt once it has finished its work. For instance, while waiting for vsync, the driver can just sleep until the hardware is done. I'm not sure any consumer-level hardware has this capability, so those drivers are essentially doing a busy-wait, which causes 100% CPU utilization.
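
Roughly speaking, the difference between the two wait styles looks like the toy sketch below. All the names (gpu_done, gpu_completion_irq, etc.) are made up for illustration; this isn't any real driver's code:

[code]
// Toy sketch contrasting busy-wait vs. interrupt-driven waiting.
// All names here are illustrative, not any real driver's internals.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

std::atomic<bool> gpu_done{false};   // flipped when the "hardware" finishes its work

// Busy-wait: the driver spins on a status flag, so the waiting thread
// burns 100% of a CPU core until the GPU is finished.
void wait_busy() {
    while (!gpu_done.load()) {
        // spin
    }
}

// Interrupt-driven: the driver thread sleeps and is woken up later
// (the hardware interrupt is modelled with a condition variable),
// leaving the CPU free for other work in the meantime.
std::mutex m;
std::condition_variable cv;

void wait_sleeping() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return gpu_done.load(); });   // thread sleeps here
}

// The "completion interrupt": mark the work done and wake the sleeper.
void gpu_completion_irq() {
    {
        std::lock_guard<std::mutex> lock(m);
        gpu_done.store(true);
    }
    cv.notify_one();
}

int main() {
    // Pretend the hardware finishes after a few milliseconds.
    std::thread hw([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
        gpu_completion_irq();
    });
    wait_sleeping();   // returns once the "interrupt" fires, using no CPU while asleep
    hw.join();
}
[/code]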
 
Want to know what makes Nvidia cards so special? Well, my 5900 Ultra makes real cool beeps when mouse scrolling and displays real cool strobing/flickering video during 3d games. My Radeon 9800 Pro does not, so the 5900 must be REAL special. The best part of it all is that the 5900 was only $500.00!! I bet no one else sells a card at that price with those cool extra features!!
 
skoprowski said:
Want to know what makes Nvidia cards so special? Well, my 5900 Ultra makes real cool beeps when mouse scrolling and displays real cool strobing/flickering video during 3d games. My Radeon 9800 Pro does not, so the 5900 must be REAL special. The best part of it all is that the 5900 was only $500.00!! I bet no one else sells a card at that price with those cool extra features!!
My 5800 Ultra has to ride the little yellow bus to school every day. All the other kids make fun of him.
 
Humus said:
It was mentioned on opengl.org once that 3Dlabs' hardware has the ability to suspend driver threads and let the hardware wake the driver thread up again through an interrupt once it has finished its work. For instance, while waiting for vsync, the driver can just sleep until the hardware is done. I'm not sure any consumer-level hardware has this capability, so those drivers are essentially doing a busy-wait, which causes 100% CPU utilization.

Yes, the article as I recall it had to do with examining how much work the compared processors were off-loading from the CPU. The presumption of the article was that the 3Dlabs card, while much slower, was far less CPU-dependent, presumably providing the potential for more parallel processing (in an application written for it) to occur between the GPU and the CPU. I'm assuming that a busy-wait would effectively tie up the CPU in this regard. But it's probably not applicable here anyway, as I really don't think there'd be much difference between what ATi and nVidia are currently doing in this regard.
 
skoprowski said:
Want to know what makes Nvidia cards so special? Well, my 5900 Ultra makes real cool beeps when mouse scrolling and displays real cool strobing/flickering video during 3d games. My Radeon 9800 Pro does not, so the 5900 must be REAL special. The best part of it all is that the 5900 was only $500.00!! I bet no one else sells a card at that price with those cool extra features!!

Wow...Talk about uncovering those hidden Easter eggs....:) I'm sure you were "delighted" to discover these features.... :D
 
You can always try to synchronize the gamma flickering with the mouse scrolls and do a synchronized office chair dance. 8)
 
[edit]In case it wasn't clear, this was in response to the mini-discussion about some drivers spinning until the GPU is done, and thus eating up CPU time it shouldn't be[/edit]

It's less of a problem now with DX9, since if the card is busy drawing the scene (and the program is set up properly), the driver returns control to the application, which can do whatever it wants with the extra CPU cycles until the GPU is available again. Unfortunately, very few people seem to be taking advantage of this for some reason (I've found the extra time is generally several milliseconds, which can be put toward processing the next frame, doing more time steps for your physics simulation, or caching level resources essentially for free).
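
In rough form, the pattern looks something like the sketch below. The GPU is faked with a counter and all the names (submitFrame, gpuStillBusy, etc.) are made up for illustration; in a real D3D9 program the "still busy" check would typically come from an event query rather than a counter:

[code]
// Toy, self-contained sketch of overlapping CPU work with GPU rendering.
// The "GPU" is simulated with a counter; nothing here is a real graphics API.
#include <cstdio>

static int gpu_work_left = 0;   // fake GPU: a frame takes a few polls to "finish"
static int physics_steps = 0;   // work we got done while the GPU was busy

void submitFrame()  { gpu_work_left = 5; }            // pretend to issue draw calls
bool gpuStillBusy() { return --gpu_work_left > 0; }   // pretend to poll for completion
void stepPhysics()  { ++physics_steps; }              // CPU work done "for free"
void presentFrame() { std::printf("present (physics steps so far: %d)\n", physics_steps); }

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        submitFrame();

        // Instead of blocking (or spinning) until the GPU catches up,
        // spend the spare milliseconds on the next frame's CPU work:
        // physics time steps, resource streaming, and so on.
        while (gpuStillBusy()) {
            stepPhysics();
        }

        presentFrame();
    }
    return 0;
}
[/code]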
 
I think this view comes from, as mentioned above, the launch of the Radeon 9700, where in CPU-limited cases a GeForce4 4600 could beat a Radeon 9700.

see

http://www20.tomshardware.com/graphic/20020819/radeon9700-12.html

http://www20.tomshardware.com/graphic/20020819/radeon9700-10.html

There are loads of better examples (e.g. at 640x480), but these give the general idea. Anyway, I presume it was down to the early drivers for the 9700; I can't see it making much difference now (not that there is much difference in my links!).
 
This isn't a recent view.

This view has been around since the 8500, IIRC (and perhaps before that). nV cards regularly outperformed ATi cards at lower resolutions, but the two evened out at higher resolutions, a fact attributed to superior nV drivers (and detailed in Anand's 5900U p/review).
 
Why have nV's drivers gone downhill now? (Not referring to PS/VS performance.) Or is it just that ATi's drivers have severely improved?
 