Do you expect GFFX performance improvements with drivers?

Do you expect any significant GeForce FX performance improvements with future drivers?

  • No, I do not expect any performance improvements at all
  • No, I do not expect any significant performance improvements
  • Performance improvements be damned, just reduce the noise!

  Total voters: 225
Brent said:
couldn't aniso see an improvement via drivers, if say they improve the algorithm which determines when to aniso etc... something like that
In my book, that's not improving AF performance - that'd be changing when and where to apply AF. Hardly the same thing.
 
Himself said:
Where it counts, with FSAA and anisotropic performance I wouldn't expect much, it's pretty much all hardware there.
Actually, that's generally not the case. Most of the gains show up at higher resolutions, which suggests that most of the driver tweaking that improves performance later on comes from low-level tuning of hardware settings.

I'm pretty certain that nVidia's unified driver architecture keeps the CPU side of the drivers very streamlined, so that even with a new architecture, CPU overhead is not going to increase sharply.

But one of the places where driver tweaking can help the most is texture management. This is something that took nVidia a while to get right, and even now there is the occasional driver release that seems to have texture management issues on some cards. Improving texture management directly affects high-resolution performance.
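
For a concrete picture of what that bookkeeping involves, here is a minimal sketch of LRU-style texture residency management - purely hypothetical code, not any vendor's driver, just the general idea of deciding which textures stay in video memory:

Code:
#include <cstdint>
#include <list>
#include <unordered_map>

// Hypothetical sketch of driver-side texture residency management:
// keep textures in video memory until space runs out, then evict the
// least recently used one. Real drivers juggle AGP memory, priorities
// and swizzled layouts on top of this.
class TextureCache {
public:
    explicit TextureCache(std::uint64_t vramBytes) : budget_(vramBytes) {}

    // Called whenever a texture is bound for rendering.
    void touch(int texId, std::uint64_t sizeBytes) {
        auto it = lookup_.find(texId);
        if (it != lookup_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second); // already resident: mark most recent
            return;
        }
        while (used_ + sizeBytes > budget_ && !lru_.empty())
            evictOldest();                               // make room for the upload
        lru_.push_front({texId, sizeBytes});             // (actual VRAM upload would go here)
        lookup_[texId] = lru_.begin();
        used_ += sizeBytes;
    }

private:
    struct Entry { int id; std::uint64_t size; };

    void evictOldest() {
        const Entry& victim = lru_.back();               // least recently used texture
        used_ -= victim.size;
        lookup_.erase(victim.id);
        lru_.pop_back();
    }

    std::uint64_t budget_;
    std::uint64_t used_ = 0;
    std::list<Entry> lru_;                               // front = most recently used
    std::unordered_map<int, std::list<Entry>::iterator> lookup_;
};

Getting the eviction policy and upload timing right is exactly the sort of thing that shows up most at high resolutions, where big textures fight for whatever video memory the framebuffer leaves over.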
 
Reverend said:
Brent said:
couldn't aniso see an improvement via drivers, if say they improve the algorithm which determines when to aniso etc... something like that
In my book, that's not improving AF performance - that'd be changing when and where to apply AF. Hardly the same thing.
Which isn't a bad thing if, say, anisotropic filtering is disabled for lightmaps. However, it doesn't look like we've seen this happen to date, and it may never happen.
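
To be clear about what such a change would look like: it would be a driver-side heuristic choosing filtering per texture stage, not a speedup of the AF hardware itself. A rough, entirely hypothetical sketch (the usage tags and function are made up for the example):

Code:
// Hypothetical driver heuristic: keep full anisotropic filtering on detail
// textures, but drop back to plain trilinear for low-frequency lightmaps,
// where AF adds cost without much visible benefit.
enum class TexUsage { Diffuse, NormalMap, Lightmap, Unknown };

struct FilterChoice {
    bool anisotropic;   // enable AF for this texture stage?
    int  maxDegree;     // e.g. 8 for "8x"
};

FilterChoice pickFilter(TexUsage usage, int requestedAniso) {
    switch (usage) {
        case TexUsage::Lightmap:
            return {false, 1};                 // trilinear only
        case TexUsage::Diffuse:
        case TexUsage::NormalMap:
        default:
            return {true, requestedAniso};     // full anisotropic filtering
    }
}

The hard part in practice would be the driver reliably guessing which textures are lightmaps in the first place.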
 
Chalnoth said:
Tahir said:
Don't Rivastation and Guru3D do something similar to what you mention, Brent?

edit: Here is one such comparison from Guru3D, looking at the Detonator releases and their performance with a GF4 Ti 4600.

http://www.guru3d.com/detonator-dbase-xp/
Only problem is, a comparison like that isn't particularly telling. One covering drivers throughout the lifecycle of, say, the TNT, GeForce DDR, or GeForce3 would be far more revealing. The drivers were already reasonably mature at the release of the GeForce4 (since it was a refresh part).

I can't be bothered, Chalnoth... if you have a TNT/GF DDR/GF3, please prove your point or find some evidence to back up your statements. Don't speculate without evidence, it's really annoying. I gave you something to think about and a little bit of numerical data, but you spat it out because it doesn't agree with what you thought.

The Radeon 9700 Pro drivers at release were nowhere near as mature or feature-rich in tweaks as they are now, or even as the GFFX drivers are at this time (maybe this is a hint at how long NVIDIA have been working on them)... so test the Radeon 9700 Pro too. :arrow:
 
Brent said:
couldn't aniso see an improvement via drivers, if say they improve the algorithm which determines when to aniso etc... something like that

and with aa i would think you could get improvements via driver as well

i think we saw this with the 9700 pro, didn't ati come out with some drivers a while back that improved aa in some games?

You're asking me like you think I know something. :)

For the former, I think that if they could have fixed the 8500's issue with adaptive anisotropic filtering at certain angles via drivers, they would have.
I would think that any decision about when and how to apply it has to be made after you know what the angles are, so somewhere after the final view is resolved, and from what I understand that's the vertex shader or its older fixed-function equivalent. I can't imagine how the drivers would be able to improve matters. How programmable the hardware registers are in that regard, I have no idea, but I doubt it's infinitely flexible. In any event, I would think NVIDIA would want to improve image quality, not hack in shortcuts that reduce it; I think they have gone as far as they dare there.
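
Roughly, the degree of anisotropy comes out of the per-pixel texture-coordinate footprint once the geometry has been projected. A sketch of the textbook scheme (along the lines of the EXT_texture_filter_anisotropic description, not any particular chip's implementation):

Code:
#include <algorithm>
#include <cmath>

// Sketch of how the anisotropy degree falls out of the screen-space texture
// footprint. Steeply angled surfaces produce elongated footprints, which ask
// for more samples along the major axis (up to the user-selected cap).
struct AnisoDecision {
    float samples;  // number of probes along the major axis
    float lod;      // mip level used for each probe
};

AnisoDecision chooseAniso(float dudx, float dvdx,   // texcoord change across one pixel in x
                          float dudy, float dvdy,   // texcoord change across one pixel in y
                          float maxAniso)           // e.g. 8.0f for "8x AF"
{
    float px = std::sqrt(dudx * dudx + dvdx * dvdx);        // footprint extent along x
    float py = std::sqrt(dudy * dudy + dvdy * dvdy);        // footprint extent along y
    float pMax = std::max({px, py, 1e-6f});
    float pMin = std::max(std::min(px, py), 1e-6f);

    float n   = std::min(std::ceil(pMax / pMin), maxAniso); // elongation ratio, capped
    float lod = std::log2(pMax / n);                        // (hardware clamps this to >= 0)
    return {n, lod};
}

How much of that is wired into fixed-function logic versus exposed through driver-programmable registers is exactly the open question.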

As for ATI, I would guess they found some large chunk of constipated code rather than changed how the hardware was working. From what I read, the improvements didn't translate much to the 9700; they were mostly for the 7500 and 8500. They always test games I played several months ago and finished already. :)

As for NVIDIA, they are all about just changing a HAL and reusing the Windows bits, so I don't know that they would have the same kind of slack. The FSAA seems rather far along optimization-wise, with all the different modes. I think the cooling solution suggests the drivers are doing all they can. :)
 
I think an extra 10-20% is not unreasonable in a few days. Drivers are complex beasts, so there are a lot of areas for improvement. Who knows, even a 100% improvement might be possible, or none at all, perhaps even negative. A tight asm loop can yield a 100-1000% improvement. Some logic errors could also be taken care of. The same goes for ATI and others.
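
As a toy example of the kind of tight inner loop where hand-tuning can pay off by integer factors (illustrative only, obviously not actual driver code): a scalar scale-and-add over an array of floats versus an SSE version doing four elements per iteration.

Code:
#include <xmmintrin.h>  // SSE intrinsics
#include <cstddef>

// y[i] += a * x[i] over a big array: naive scalar loop vs. a hand-vectorized
// SSE loop processing four floats per iteration.
void scaleAddScalar(float* y, const float* x, float a, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        y[i] += a * x[i];
}

void scaleAddSSE(float* y, const float* x, float a, std::size_t count) {
    const __m128 va = _mm_set1_ps(a);
    std::size_t i = 0;
    for (; i + 4 <= count; i += 4) {
        __m128 vx = _mm_loadu_ps(x + i);
        __m128 vy = _mm_loadu_ps(y + i);
        _mm_storeu_ps(y + i, _mm_add_ps(vy, _mm_mul_ps(va, vx)));
    }
    for (; i < count; ++i)          // scalar tail for leftover elements
        y[i] += a * x[i];
}

Whether a driver still has loops like the scalar one left to find is another question, but that's where the big one-off jumps tend to come from.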
 
I just can't see NV pulling a 20% increase out of the bag within a year, let alone a month. As the Guru3D 3DMark comparison demonstrates, it's taken years just to refine their drivers to get 4%.
 
THe_KELRaTH said:
I just can't see NV pulling a 20% increase out of the bag within a year, let alone a month. As the Guru3D 3DMark comparison demonstrates, it's taken years just to refine their drivers to get 4%.

Oh for god's sake, will people get a sense of proportion and check their facts before just bashing. Years, snort, 7 months in fact.

The GF3 launched with what, the 10.xx series, in February/March '01?

The Guru3D analysis covers the 27.xx to 40.72 drivers, roughly February '02 to September '02.

Therefore these tell us nothing about improvements from launch to maturity. They just tell us what we all know - that the GF4 was launched against a backdrop of already mature GF3 drivers.

I also have a sneaking suspicion that testing on a 1.33GHz T-Bird doesn't help either.
 
Heh.. ok Randell, it just seems like years, as I was thinking back to when I got my GeForce2 GTS (deep thought - great card in its timeframe too). ;)
 
THe_KELRaTH said:
Heh.. ok Randell, it just seems like years, as I was thinking back to when I got my GeForce2 GTS (deep thought - great card in its timeframe too). ;)

Indeed, the perfect nVidia execution - a surprise that upstaged the competition. Although I got a V5, as I was still wary of nVidia cards on Socket 7 mobos and had loved my V3.
 
According to Carmack,

Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.

Translation: Nvidia released the card without having optimized drivers, at least for the fragment program part. This is not impossible, given the problems they had with fabbing NV30.
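
For a sense of what "driver compiler technology" can mean in practice, here is a toy peephole pass - hypothetical, nothing to do with NVIDIA's actual compiler - that folds a MUL followed by a dependent ADD into a single MAD, a classic win on fragment hardware:

Code:
#include <string>
#include <vector>

// Toy representation of a fragment-program instruction.
struct Instr {
    std::string op;                 // "MUL", "ADD", "MAD", ...
    std::string dst;                // destination register
    std::vector<std::string> src;   // source operands
};

// True if 'reg' is read by any instruction other than the one at index 'skip'.
static bool readElsewhere(const std::vector<Instr>& prog, std::size_t skip,
                          const std::string& reg) {
    for (std::size_t i = 0; i < prog.size(); ++i) {
        if (i == skip) continue;
        for (const auto& s : prog[i].src)
            if (s == reg) return true;
    }
    return false;
}

// MUL t, a, b ; ADD d, t, c  ==>  MAD d, a, b, c   (when t has no other readers)
std::vector<Instr> foldMulAdd(const std::vector<Instr>& prog) {
    std::vector<Instr> out;
    for (std::size_t i = 0; i < prog.size(); ++i) {
        const Instr& cur = prog[i];
        if (cur.op == "MUL" && i + 1 < prog.size()) {
            const Instr& next = prog[i + 1];
            if (next.op == "ADD" && next.src.size() == 2 &&
                next.src[0] == cur.dst && next.src[1] != cur.dst &&
                !readElsewhere(prog, i + 1, cur.dst)) {
                out.push_back({"MAD", next.dst, {cur.src[0], cur.src[1], next.src[1]}});
                ++i;                // consume the ADD we just merged
                continue;
            }
        }
        out.push_back(cur);
    }
    return out;
}

A real driver compiler would also do register allocation, instruction scheduling and precision selection on top of tricks like this, which is presumably where the room Carmack mentions is supposed to come from.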
 
One of the biggest things that I learned from my machine architecture class is that your biggest optimization gains (somewhere around 80%) are probably going to come from the first 10-20% of your optimizations. You'll spend the next 80% of your time shaving off half a percent here, maybe a percent there if you're lucky. Unless you've missed some important things when you started optimizing, you probably won't see a whole lot more improvement after you take care of the big issues.

My guess is that Nvidia wanted to have most of those big issues ironed out before letting reviewers get their hands on the cards. Maybe there are some smaller things that can be revamped or redone, but often these take a lot of work and only yield marginal improvements (though with lots of marginal improvements you can get somewhat significant gains).

My guess is that whatever optimizations they haven't already done are probably the kind that will take a lot of work to implement and yield rather small gains; otherwise they would have done them already. I actually wouldn't be all that surprised if some of the rendering problems they are having are due to optimizations that weren't tested thoroughly enough and are resulting in rendering bugs. Fixing those might reduce performance, depending on what exactly they are doing.

Nite_Hawk
 
boobs said:
According to Carmack,

Nvidia assures me that there is a lot of room for improving the fragment program performance with improved driver compiler technology.

Translation: Nvidia released the card without having optimized drivers, at least for the fragment program part. This is not impossible, given the problems they had with fabbing NV30.

I can see this kind of optimization helping complex shaders like those in Doom 3, but not synthetic benchmarks like ShaderMark. I think the raw shader power is just lacking on the NV30.
 
I expect a 25% increase in performance from nVidia drivers in the next 75 days. I also expect a 10% increase in performance from ATI drivers in a similar time-frame.


Uttar
 
Reverend said:
I must be getting old... what are the "complex shaders" in DOOM3?

IIRC, there is a fairly complex shader that applies the shadows derived from the additional geometry in a single pass - it's complex enough that it can't be done with PS 1.4.
 