NV30 to break NVIDIA's Unified Driver Architecture?

I had no idea it had any real impact on performance. I thought it was just a few 3dmarks disappearing. Oh well, lucky I don't game on a GF2 anymore. :)
 
UDA = SHIT!

I was so glad when I heard ATi would be making entirely new drivers for the R300 instead of going to a gimpy UDA. The last thing I want is to have to worry about upgrading my drivers because they might be de-optimized.

I think the only people who really like UDA are the guys at Anandtech, and don't ask me why either.
 
I suspect UDA has something to do with 3rd party boards. ATi's UDA came around the time of 3rd party licensing. I have to say there is a lot of fragmentation with regard to clock speeds, memory types, and extra connector functionality. UDA is not flawless, though: I have seen 3rd party ATi and nVidia products that needed the packaged disc drivers before any other drivers would take to the cards.
 
The main reason, however, to go for a UDA is ease of development. Since nVidia's driver team has been working on the same driver architecture for close to five years (since the original TNT...fall 1997, if I remember correctly), they have quite a lot of experience in developing these drivers.

This significantly cuts down on development time for new drivers, and allows the driver teams to leap right into debugging and performance optimizations.

In other words, since a UDA is easier to develop for, we get faster, more mature drivers more quickly.
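To make the idea concrete, here is a rough sketch of what "one driver architecture, many chips" looks like in practice: a shared core that calls into per-chip backends through a table of function pointers. Everything below (the names, the structures, the two backends) is purely hypothetical and only illustrates the concept; it is not anyone's actual driver code.

/* Illustrative sketch of a unified driver: one shared core that dispatches
 * to per-chip backends via function pointers. All names are hypothetical. */

#include <stdio.h>

/* Per-architecture backend: the only part that has to be rewritten for a new chip. */
struct gpu_backend {
    const char *name;
    void (*upload_texture)(const void *pixels, int w, int h);
    void (*draw_triangles)(int count);
};

/* --- Hypothetical TNT-class backend --- */
static void tnt_upload_texture(const void *pixels, int w, int h) {
    (void)pixels;
    printf("[TNT] uploading %dx%d texture\n", w, h);
}
static void tnt_draw_triangles(int count) {
    printf("[TNT] drawing %d triangles (fixed function)\n", count);
}

/* --- Hypothetical NV30-class backend --- */
static void nv30_upload_texture(const void *pixels, int w, int h) {
    (void)pixels;
    printf("[NV30] uploading %dx%d texture, swizzled layout\n", w, h);
}
static void nv30_draw_triangles(int count) {
    printf("[NV30] drawing %d triangles through the programmable pipeline\n", count);
}

static const struct gpu_backend backends[] = {
    { "TNT",  tnt_upload_texture,  tnt_draw_triangles  },
    { "NV30", nv30_upload_texture, nv30_draw_triangles },
};

/* Shared "core" code: API validation, state tracking, resource management
 * and so on live here and are reused unchanged across chip generations. */
static void render_frame(const struct gpu_backend *gpu) {
    gpu->upload_texture(NULL, 256, 256);
    gpu->draw_triangles(1000);
}

int main(void) {
    /* The same core path runs on every supported chip. */
    for (size_t i = 0; i < sizeof(backends) / sizeof(backends[0]); i++)
        render_frame(&backends[i]);
    return 0;
}

The payoff is that the shared core carries over to each new chip and only the backend has to be written and debugged; the flip side, as the rest of this thread argues, is that the core's assumptions follow you from generation to generation.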

And if you don't like the new drivers, don't download them.
 
This significantly cuts down on development time for new drivers, and allows the driver teams to leap right into debugging and performance optimizations.

And it also limits the types of optimizations you can do. UDAs lose their appeal when the architecture changes drastically (such as the R300 / NV30 architectures). At that point, starting fresh and being able to optimize without worrying about the impact on older hardware outweighs the advantage of using a global code base.

There's no reason why nVidia or ATI couldn't START with the previous drivers, and then go in and completely rip out and rebuild sections of code piece by piece.

And if you don't like the new drivers, don't download them.

That's great, except that not all games work with older drivers. So you end up having to have one set to run game 1 optimally, a second set for game 2, etc.
 
Joe DeFuria said:
And it also limits the types of optimizations you can do. UDAs lose their appeal when the architecture changes drastically (such as the R300 / NV30 architectures). At that point, starting fresh and being able to optimize without worrying about the impact on older hardware outweighs the advantage of using a global code base.

Well, I'm not so sure that, from a driver-side perspective, the NV30 is any more of a leap above the GeForce4 than the GeForce3 is above the GeForce2, or than the GeForce is above the TNT.

There's no reason why nVidia or ATI couldn't START with the previous drivers, and then go in and completely rip out and rebuild sections of code piece by piece.

Yes, but that still means that nVidia would need to go back to the older drivers from previous video cards and build in the required backwards compatibility for newer games. This is another place where the UDA really cuts down on driver development time.

That's great, except that not all games work with older drivers. So you end up having to have one set to run game 1 optimally, a second set for game 2, etc.

True, but I found that while not all newer drivers worked well with my GeForce DDR, I could usually find newer ones that worked just fine.
 
Yes, but that still means that nVidia would need to go back to the older drivers from previous video cards and build in the required backwards compatibility for newer games. This is another place where the UDA really cuts down on driver development time.

You're not hearing me.

I KNOW of the "economic" advantage of UDA. That is, given basically zero resources for driver development, the best bet is UDA.

I am saying that there is a performance / efficiency advantage for having a separate driver for every single different architecture. That is, given unlimited resources for driver development, you have one driver for every piece of hardware, specially tuned for that hardware.

In reality, nVidia has to make a decision to allocate somewhere between zero and infinite resources to driver development, and to land somewhere between one UDA driver for all and a different driver for each.

I am simply making the case that the RIGHT TIME to break from the UDA is when you have a rather drastic change in hardware architecture. That's the point where the need for more efficient drivers to get the most out of the new architecture outweighs the additional cost of maintaining two driver sets.

The difference between "OK" and "highly optimized" drivers for a new architecture could be the difference between winning and losing benchmark scores to the competition.

Well, I'm not so sure that, from a driver-side perspective, the NV30 is any more of a leap above the GeForce4 than the GeForce3 is above the GeForce2, or than the GeForce is above the TNT...

What do you mean..."driver-side" perspective? Drivers relate to the hardware. nVidia themselves are touting how the NV30 is the single biggest contribution to 3D since the dawn of time, or something like that. Since when DON'T you go along with nVidia PR?
 
Joe DeFuria said:
What do you mean..."driver-side" perspective? Drivers relate to the hardware. nVidia themselves are touting how the NV30 is the single biggest contribution to 3D since the dawn of time, or something like that. Since when DON'T you go along with nVidia PR?

Because when changes are made to the hardware, they have different implications for different people.

For example, the increased programmability, in my mind, doesn't necessarily mean that the driver paths for legacy code need to change at all. The same goes for the addition of the PPP.
 