X1800/7800gt AA comparisons

Nite_Hawk said:
That's great, but you should stop making me jealous, I got my 7800gtx for a good cause! :LOL:

In whatever capacity you can answer, how do your mappings differ compared to what you did previously? ~36% improvements are dramatic, and if you are expecting more improvements, that is remarkable. I'm not really a memory guy, but are these mostly latency or throughput improvements? Better data packing? Re-ordering? Compression? Caching? Inquiring minds want to know! :)

Oh, any chance for improvements in D3D as well?

Nite_Hawk

D3D already got some tuning for launch, but there's a lot to do there as well.

This specific change has more to do with throughput and data ordering. It's empirical in nature, though, so we need to go back and tune things with a better understanding.
 
sireric said:
D3D already got some tuning for launch, but there's a lot to do there as well.

This specific change has more to do with throughput and data ordering. It's empirical in nature, though, so we need to go back and tune things with a better understanding.

So still in the "warmer" "colder" stage looking for the pattern. Yeah, that does suggest good things for the future unless you were inordinately lucky.
 
sireric said:
D3D already got some tuning for launch, but there's a lot to do there as well.

This specific change has more to do with throughput and data ordering. It's empirical in nature, though, so we need to go back and tune things with a better understanding.

Wow, that's great... I'm still yearning to understand more about what data exactly you are re-ordering and how you are doing it to get that kind of an improvement, but I imagine that you probably can't publicly say too much about it. (boo!)

Do you know if these changes will be rolled into your Linux drivers as well?

Nite_Hawk
 
geo said:
So still in the "warmer" "colder" stage looking for the pattern. Yeah, that does suggest good things for the future unless you were inordinately lucky.

No luck at all. More like being dumb for not doing this earlier. It was very simple. It just required people to start looking into what is going on.
 
Nite_Hawk said:
Wow, that's great... I'm still yearning to understand more about what data exactly you are re-ordering and how you are doing it to get that kind of an improvement, but I imagine that you probably can't publicly say too much about it. (boo!)

Do you know if these changes will be rolled into your Linux drivers as well?

Nite_Hawk

There's some slides given to the press that explain some of what we do. Our new MC has a view of all the requests for all the clients over time. The "longer" the time view, the greater the latency the clients see, but the higher the BW is (due to more efficient request re-ordering). The MC also looks at the DRAM activity and settings, and since it can "look" into the future for all clients, it can be told different algorithms and parameters to help it decide how to best make use of the available BW. As well, the MC gets direct feedback from all clients as to their "urgency" level (which refers to different things for different clients, but, simplifying, tells the MC how well they are doing and how much they need their data back), and adjusts things dynamically (following programmed algorithms) to deal with this. It also gets feedback from the DRAM interface to see how well it's doing.

We are able to download new parameters and new programs to tell the MC how to service the requests and which clients' urgency is more important; basically, how to arbitrate the DRAM requests between over 50 clients. The amount of programming available is very high, and it will take us some time to tune things. In fact, we can see that per application (or groups of applications), we might want different algorithms and parameters. We can change all of these in driver updates. The idea is that we generally want to maximize BW from the DRAM and maximize shader usage. If we find an app that does not do that, we can change things.

You can imagine that AA, for example, significantly changes the pattern of access and the type of requests that the different clients make (for example, Z requests jump up drastically, and so do ROP requests). We need to re-tune for different configs. In this case, the OpenGL driver was just not tuning AA performance well at all. We did a simple fix (it's just a registry change) to improve this significantly. In future drivers, we will do a much more proper job.
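
To make the windowed arbitration idea a bit more concrete, here is a toy sketch of that kind of urgency-weighted scoring loop; the client names, weights and scoring rule are invented for illustration and are not ATI's actual MC program.

Code:
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <string>
#include <vector>

// Toy model only: the real MC is programmable hardware; everything here is illustrative.
struct Request {
    std::string   client;    // e.g. "z", "color", "texture", "vertex"
    std::uint32_t page;      // DRAM page the request targets
    int           urgency;   // feedback from the client: higher = more starved
    std::uint64_t issue_ts;  // when the request entered the window
};

struct ArbiterParams {
    std::size_t window;          // how far "into the future" the MC looks: a bigger
                                 // window means better re-ordering (BW) but worse latency
    int         urgency_weight;
    int         page_hit_bonus;  // reward requests that hit the currently open DRAM page
};

// Pick the next request to service from the pending window.
const Request* arbitrate(const std::vector<Request>& pending,
                         const ArbiterParams& p,
                         std::uint32_t open_page,
                         std::uint64_t now) {
    const Request* best = nullptr;
    int best_score = std::numeric_limits<int>::min();
    const std::size_t n = std::min(p.window, pending.size());
    for (std::size_t i = 0; i < n; ++i) {
        const Request& r = pending[i];
        int score = p.urgency_weight * r.urgency
                  + (r.page == open_page ? p.page_hit_bonus : 0)
                  + static_cast<int>(now - r.issue_ts);  // age term, to avoid starvation
        if (score > best_score) { best_score = score; best = &r; }
    }
    return best;
}

The only point of the sketch is the trade-off sireric describes: a larger window gives the scorer more candidates to re-order (better bandwidth) at the cost of higher latency for individual clients.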
 
I had no idea that things were API specific at such a low level. Interesting stuff. I really want to see how the XL fares against the GT with the update. It was so very nice of ATi and Nvidia to give us two cards with such similar specs to compare :)
 
sireric said:
There's some slides given to the press that explain some of what we do. Our new MC has a view of all the requests for all the clients over time. The "longer" the time view, the greater the latency the clients see, but the higher the BW is (due to more efficient request re-ordering). The MC also looks at the DRAM activity and settings, and since it can "look" into the future for all clients, it can be told different algorithms and parameters to help it decide how to best make use of the available BW. As well, the MC gets direct feedback from all clients as to their "urgency" level (which refers to different things for different clients, but, simplifying, tells the MC how well they are doing and how much they need their data back), and adjusts things dynamically (following programmed algorithms) to deal with this. It also gets feedback from the DRAM interface to see how well it's doing.

We are able to download new parameters and new programs to tell the MC how to service the requests and which clients' urgency is more important; basically, how to arbitrate the DRAM requests between over 50 clients. The amount of programming available is very high, and it will take us some time to tune things. In fact, we can see that per application (or groups of applications), we might want different algorithms and parameters. We can change all of these in driver updates. The idea is that we generally want to maximize BW from the DRAM and maximize shader usage. If we find an app that does not do that, we can change things.

You can imagine that AA, for example, significantly changes the pattern of access and the type of requests that the different clients make (for example, Z requests jump up drastically, and so do ROP requests). We need to re-tune for different configs. In this case, the OpenGL driver was just not tuning AA performance well at all. We did a simple fix (it's just a registry change) to improve this significantly. In future drivers, we will do a much more proper job.

Sireric,

This is much more detailed information than I expected, thank you! Have you considered doing any data mining to match performance against access patterns? You could probably come up with some predictive models to do on-the-fly configuration based on whatever attributes you care about (say Z requests, ROPs, etc.), even for unknown situations. Sounds like it would be a very fun research project. :p

Nite_Hawk
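
For what it's worth, the kind of predictive lookup Nite_Hawk is suggesting might look something like the following; the counter-derived features, thresholds and preset values are all hypothetical.

Code:
// Hypothetical per-frame feature vector mined from hardware counters.
struct AccessProfile {
    double z_request_share;    // fraction of memory traffic that is Z
    double rop_request_share;  // fraction of traffic coming from the ROPs
    double page_hit_rate;      // how DRAM-friendly the stream already is
};

// Arbiter settings the driver could program into the MC.
struct MCPreset { int window; int urgency_weight; int page_hit_bonus; };

// Stand-in for a trained model: a couple of hand-written decision rules
// mapping an observed profile to MC parameters.
MCPreset predict_preset(const AccessProfile& f) {
    if (f.z_request_share > 0.4)   // AA-heavy frame: Z traffic dominates
        return {64, 3, 8};
    if (f.page_hit_rate < 0.5)     // scattered accesses: re-order harder
        return {96, 2, 12};
    return {32, 4, 4};             // default: favour latency
}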
 
http://www.hexus.net/content/item.php?item=3668

Updated with Riddick scores and comparison to the GTX.

Adding in results for the NVIDIA GeForce 7800 GTX running the 81.84 driver shows that, with the memory controller tweak, an ATI product can overtake NVIDIA's flagship single-board SKU for the first time since Doom3's release.

Man...those nVidia OpenGL driver writers must suck. What have they been doing?
 
Although I normally refrain from using this word: KUDOS to ATi, for the performance and their Open(GL)ness on this matter.

It's really appreciated and makes the X1K series a LOT more attractive all of a sudden.

And amazing is the speed at which Rys processes everything; one refresh after another, the data (and opinion) trickles in...
 
Nite_Hawk said:
Sireric,

This is much more detailed information than I expected, thank you! Have you considered doing any data mining to match performance against access patterns? You could probably come up with some predictive models to do on-the-fly configuration based on whatever attributes you care about (say Z requests, ROPs, etc.), even for unknown situations. Sounds like it would be a very fun research project. :p

Nite_Hawk

There's a rather elaborate system in place, but the issue is that access patterns vary greatly even within one application -- one scene might be dominated by a shader, another by a single texture, and another by geometry (imagine spinning around in a room). You could optimize on a per-scene basis, but that's more than we plan at this point (a lot of work). But we do plan on improving the "average" for each application. The basis for this, btw, is us measuring the internal performance of apps (in real time), and then adjusting things based on this. Multiple levels of feedback and thinking involved :)
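
Caricaturing that feedback loop in code, under the assumption of invented counter names and a made-up adjustment rule, it is essentially "measure utilization, then nudge the parameters":

Code:
#include <cstdint>

// Invented counter snapshot; the real driver reads internal performance
// counters from the hardware in real time.
struct Counters {
    std::uint64_t dram_busy_cycles;
    std::uint64_t dram_total_cycles;
    std::uint64_t shader_stall_cycles;
    std::uint64_t shader_total_cycles;
};

struct MCParams { int window; };  // one knob stands in for the whole MC program

// One iteration of the loop: measure how well DRAM and the shaders are
// being used, then adjust the arbitration window in the direction that helps.
void tune_step(const Counters& c, MCParams& p) {
    const double dram_util   = double(c.dram_busy_cycles)    / c.dram_total_cycles;
    const double shader_wait = double(c.shader_stall_cycles) / c.shader_total_cycles;

    if (dram_util < 0.80)         // bandwidth being left on the table:
        p.window += 8;            //   look further ahead, re-order more aggressively
    else if (shader_wait > 0.30)  // shaders starving on latency:
        p.window -= 8;            //   shrink the window
}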
 
Rys updated (the link upstream) with Riddick numbers, some comparo to the GTX and a hint on Serious Sam 2. Also:

Just goes to show hardware is nothing without good software. Driver development and release schedules just got interesting again.

Wonder if they are wearing "the lemon face" over at NV today, or still feeling cocky about their 512MB part.
 
geo said:
Wonder if they are wearing "the lemon face" over at NV today, or still feeling cocky about their 512MB part.

teh N000s! another crisis meeting in Amsterdam? I should go there after my ISA2004 "class" tomorrow..
 
sireric said:
This change is for the X1K family. The X1Ks have a new programmable memory controller and gfx subsystem mapping. A simple set of new memory controller programs gave a huge boost to memory-BW-limited cases, such as AA (need to test AF). We measured 36% performance improvements on D3 @ 4xAA/high res. This has nothing to do with the rendering (which is identical to before). X800s also have partially programmable MCs, so we might be able to do better there too (basically, having discovered such a large jump, we want to revisit our previous decisions).

But it's still not optimal. The work space we have to optimize memory settings and gfx mappings is immense. It will take us some time to really get the performance closer to maximum. But that's why we designed a new programmable MC. We are only at the beginning of the tuning for the X1Ks.

As well, we are determined to focus a lot more energy into OGL tuning in the coming year; shame on us for not doing it earlier.

How does the change in MC programs affect scores in other games (d3d and ogl)?
The reason why I am asking this is because there seems to be a large jump in frames @ 1600x1200 4xAA for Doom3, but a smaller jump for Riddick (relatively speaking).
In other words, will the fix work only for Doom3 and Riddick or will it lower scores in other games?
 
What's most interesting is that in CoR the X1800XT loses way more performance going from no-AA/no-AF to 4xAA/8xAF than the 7800GTX (at 1024 and 1280).

Since that is, specifically, the inverse of what we see in pretty much every other game, I guess that means there's an awful lot of optimisation to be done in CoR.

Jawed
 
Jawed said:
What's most interesting is that in CoR the X1800XT loses way more performance going from no-AA/no-AF to 4xAA/8xAF than the 7800GTX (at 1024 and 1280).

Since that is, specifically, the inverse of what we see in pretty much every other game, I guess that means there's an awful lot of optimisation to be done in CoR.

Jawed
I think this much we already knew;)
 