X1800/7800GT AA comparisons

Well, the problem is that how do you maintain a low-level API along with vendor-agnostic interfaces?

Edit:
Actually, now that I think about it, it might be quite nice to have a low-level API as an intermediate step between a higher-level API or language and the hardware. This would allow the implementation of compilers and API's without having to go through the graphics pipeline and also without IHV's having to write drivers for the specific API's. It would then be up to the API/compiler vendors to support the various types of hardware.
 
Chalnoth said:
Well, the problem is that how do you maintain a low-level API along with vendor-agnostic interfaces?

Edit:
Actually, now that I think about it, it might be quite nice to have a low-level API as an intermediate step between a higher-level API or language and the hardware. This would allow the implementation of compilers and API's without having to go through the graphics pipeline and also without IHV's having to write drivers for the specific API's.

That's actually exactly what I was thinking. What I *really* want is public APIs at multiple levels, so that we have the option of coding closer to the metal or at a higher level of abstraction. We want vendor-agnostic high-level APIs and vendor-specific low-level APIs. Cross-platform too. :)

Nite_Hawk
 
Hehe, yep, I was thinking along the same lines (if I understood you right). The other APIs would be implemented on top of the vendor-specific low-level API. Also, for GPGPU, you could have libraries implemented on top of the low-level API that are specific to a certain domain. But if you really want maximum performance and features, just code to the low-level API.
 
Nite_Hawk said:
Why exactly is the color buffer cache needed?
'Cache' inside graphics chips has three uses, not all of which may be familiar if you're only used to the term 'cache' in the way that CPU's use it.

1. Cache avoids going to memory when an item of data is frequently accessed in a short period of time. This is important in some places in graphics chips (vertex accesses, texture filtering, small triangles) but it's not always the main raison d'etre. It's what I thought cache was until I had the other two uses explained to me...

2. Caching to increase burst lengths - speculative reads and holding write data for more items to come into the same cache line and so improve memory efficiency. This is absolutely essential for graphics as most data items are much smaller than the amounts that it is efficient to get from memory in one go.

3. Caching for efficient pipelining; when you find out you are going to need a particular cache line further down the pipe you can issue the request to fill that cache line immediately, and then later in the pipeline the data is already present in the cache by the time it's needed to be used.

So while the colour buffer cache may not always need much of function 1, function 2 is absolutely essential and function 3 allows for more efficient design.
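
If a software analogy helps, here's a toy sketch of uses 2 and 3 (purely illustrative - the numbers and structure are invented, and real hardware looks nothing like this):

```python
# Toy analogy for cache uses 2 and 3 above. Not real hardware, just illustration.

BURST = 32  # pretend memory is only efficient when read/written 32 bytes at a time

class ToyWriteCombiner:
    """Use 2: hold small writes until a whole line can go to memory as one burst."""
    def __init__(self, base_addr):
        self.base = base_addr
        self.data = bytearray(BURST)
        self.valid = set()

    def write(self, offset, byte):
        self.data[offset] = byte
        self.valid.add(offset)
        if len(self.valid) == BURST:          # line is full: one efficient burst
            print(f"burst write of {BURST} bytes to 0x{self.base:x}")
            self.valid.clear()

class ToyPrefetcher:
    """Use 3: the front of the pipe requests a line as soon as it knows the
    back of the pipe will need it, so the data is there by the time it's used."""
    def __init__(self):
        self.in_flight = set()

    def hint(self, line_addr):                # issued early in the pipeline
        self.in_flight.add(line_addr)
        print(f"fill request for line 0x{line_addr:x} issued ahead of use")

    def read(self, line_addr):                # issued much later; data already arrived
        assert line_addr in self.in_flight, "latency not hidden"
        return bytes(BURST)
```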
 
Dave Baumann said:
On this test on the XL it appears to make no difference in 6x FSAA and to be slightly detrimental to performance in 2x.

What I get out of that is that separate settings are required for all three. What I get out of sireric's posts is that they will provide those separate settings by the time this hits a released driver.
 
krychek said:
Hehe, yep, I was thinking along the same lines (if I understood you right). The other APIs would be implemented on top of the vendor-specific low-level API. Also, for GPGPU, you could have libraries implemented on top of the low-level API that are specific to a certain domain. But if you really want maximum performance and features, just code to the low-level API.
To be honest I am not that fond of low-level APIs; I would prefer a solution (a gcc-like compiler, like ATI mentioned in one of its presentations) built on top of OpenGL (or Direct3D, but OGL is not bound to a certain platform), so that it could use not only ATI cards but nV cards as well. Of course you have the problem that a) graphics APIs are not really designed for general programming, so you are bound to miss certain general-purpose functions that must be created somehow, and b) an API built on top of another API means lost speed and efficiency.

Eric, do you have any idea whether nVidia has any plans to head where ATI is now headed with its GPGPU involvement? [wishful thinking]It would be nice if there were an nVidia employee around here to answer that question[/wishful thinking]
 
Nite_Hawk said:
I'm still trying to process everything in your post... Why exactly is the color buffer cache needed? Couldn't the RBE directly request blocks from the MC and remove the CBC layer?
I'm hypothesising that RBE is just another "latency-tolerant" client of the MC.

In order to be latency-tolerant it, presumably, runs render tasks in a disjoint fashion. One way to do this is to implement a really long pipeline, so that by the time the data-sensitive portion of the RBE task occurs, the data is all in place (latency has run its course).

But in typical RBE tasks, I don't know what you'd fill the pipeline with. It's pixel colour data with an address in the back-buffer. There's not very much you can do with that data until you have access to the relevant portion of back-buffer.

So an alternative to this is to "batch-up" RBE tasks. Instead of working on a single quad of pixels (e.g. writing four pixels' colour values into the back-buffer), it makes sense to work in blocks. A block might be 8 pixels. Or 64 pixels. I don't know. Increasing the size makes each access to memory more efficient. But it also costs more in terms of on-die CBC space.

Then you also have to bear in mind that the shader/texture pipelines produce colour-writes out of order (or at least I presume they do, since the threads are themselves able to execute out of order).

So the RBE has to take care to write order-sensitive pixels in the correct order.

So the RBE would seem to have to be able to re-order incoming tasks, and block them up into memory-efficient packets.
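
Something along these lines, perhaps (a very crude sketch - the block size and everything else are numbers I've plucked out of the air):

```python
# Crude sketch of an RBE that gathers out-of-order colour writes into
# memory-sized blocks before it ever talks to the MC. Block size invented.

BLOCK_PIXELS = 16   # pretend one memory-efficient packet is 16 pixels

class ToyRBE:
    def __init__(self):
        self.blocks = {}   # block base address -> {offset within block: colour}

    def submit(self, pixel_addr, colour):
        # Colour writes arrive in whatever order the shader pipes finish them.
        base = pixel_addr - (pixel_addr % BLOCK_PIXELS)
        block = self.blocks.setdefault(base, {})
        # Last write wins here; real blending would need the writes
        # applied in API submission order.
        block[pixel_addr - base] = colour
        if len(block) == BLOCK_PIXELS:        # block complete: flush as one packet
            self.flush(base)

    def flush(self, base):
        block = self.blocks.pop(base)
        print(f"one packed write of {len(block)} pixels at address {base}")
```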

I should point out that the "Xenos" AA EDRAM patent covers similar ground.

http://patft.uspto.gov/netacgi/nph-Parser?patentnumber=6873323

But the concepts I'm discussing aren't the main focus of that patent.

[image: b3d16.gif]


Instead, notice the use of Packing and Unpacking units. It's a tenuous link, but I think something similar is prolly happening in relation to CBC.

I think the missing peice of information for me is how often the same data gets requested from the CBC over again. If we have 4 RBEs, do each of them only contact one CBC? If so I could see why this is important...
I'm suggesting that each CBC is owned by a single RBE. And each RBE solely owns screen-tiles. The net effect being that a given pixel is always the property of a single RBE. This increases parallelism in the GPU, without creating intra-GPU dependencies.

There is no need to enforce cache-coherency in the CBCs, if the RBEs are screen-tile bound.
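
To illustrate what I mean by ownership (the tile size and interleave pattern here are invented):

```python
# Hypothetical pixel-to-RBE assignment by screen tile. Tile size and
# interleave pattern are made up purely for illustration.
TILE = 16        # 16x16 screen tiles
NUM_RBES = 4

def owner_rbe(x, y):
    # The same pixel always maps to the same RBE (and hence the same CBC),
    # so the CBCs never need to exchange or snoop each other's data.
    return ((x // TILE) + 2 * (y // TILE)) % NUM_RBES
```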

In general, rendering into the back-buffer is an incoherent process. Even within a screen-tile of 16x16 pixels I expect you'll find that each pixel is only written once or twice in each "visit" that a screen-tile makes into CBC.

In the natural progress of rendering triangles ("walking them" one quad at a time), the pixel colour data for triangles will tend to arrive in the RBE in bursts. So there is a level of coherency within a screen-tile (or a CBC block, if it's smaller than a screen-tile).

Jawed
 
Meh. I think the point that was _trying_ to be made (before it got lost in all the posturing and nit-picking) was that ATI's design philosophy tends to go for a more flexible and general-purpose design (programmable AA, programmable MC, push for unified shaders), whereas nVidia tends to go for hardwiring specific features. At an abstract level each approach has its own pros and cons: flexible designs tend to cope well with future developments and allow tuning for future software, but at the expense of transistor count and thus speed in specific design-target functions; hardwired designs tend to give good speed in design-target functions for a smaller transistor count, at the expense of the possibility of future hardware-utilisation efficiency boosts and the ability to deal well with usage patterns outside the design parameters.

That's not to say that either company is the perfect embodiment of either philosophy, or that design quality in various areas is the same on either side, or anything like that; it's just to say that generally ATI goes in one direction and nVidia goes in the other. ATI wants unified shaders, nVidia wants to keep them separate; ATI has programmable this and that, nVidia gives a boost in Z/stencil ops or whatever (the terminology goes over my head - talking D3 shadows etc.) for specific gains in specific circumstances. Both companies show general trends towards design philosophies in different directions. Some prefer the focus on hardwired speed, some prefer the focus on flexibility.


Also, anyone who thinks philosophy has nothing to do with reality has no idea what they're talking about and shouldn't be commenting on the subject.
 
Dio said:
'Cache' inside graphics chips has three uses, not all of which may be familiar if you're only used to the term 'cache' in the way that CPU's use it.

1. Cache avoids going to memory when an item of data is frequently accessed in a short period of time. This is important in some places in graphics chips (vertex accesses, texture filtering, small triangles) but it's not always the main raison d'etre. It's what I thought cache was until I had the other two uses explained to me...

Ah ha. Is this the answer to my question upstream about the hardware reason for the (relatively) low performance cost of rotation-independent AF vs optimized AF? The large texture cache increase in R5xx?
 
Dio said:
'Cache' inside graphics chips has three uses, not all of which may be familiar if you're only used to the term 'cache' in the way that CPU's use it.

1. Cache avoids going to memory when an item of data is frequently accessed in a short period of time. This is important in some places in graphics chips (vertex accesses, texture filtering, small triangles) but it's not always the main raison d'etre. It's what I thought cache was until I had the other two uses explained to me...

2. Caching to increase burst lengths - speculative reads and holding write data for more items to come into the same cache line and so improve memory efficiency. This is absolutely essential for graphics as most data items are much smaller than the amounts that it is efficient to get from memory in one go.

3. Caching for efficient pipelining; when you find out you are going to need a particular cache line further down the pipe you can issue the request to fill that cache line immediately, and then later in the pipeline the data is already present in the cache by the time it's needed to be used.

So while the colour buffer cache may not always need much of function 1, function 2 is absolutely essential and function 3 allows for more efficient design.

Hi Dio,

Thank you much for the explanation! From Sireric's posts, it really sounds like #2 and #3 are the areas in which they are focusing improvements in the memory controller. Specifically when Sireric mentions data-reordering it makes me think that they are perhaps reordering speculative data (or data that is not immediately needed) to better fill data packets when doing memory reads (sorry, I think of all of this in terms of networking protocols).

Jawed said:
I'm hypothesising that RBE is just another "latency-tolerant" client of the MC.

In order to be latency-tolerant it, presumably, runs render tasks in a disjoint fashion. One way to do this is to implement a really long pipeline, so that by the time the data-sensitive portion of the RBE task occurs, the data is all in place (latency has run its course).

But in typical RBE tasks, I don't know what you'd fill the pipeline with. It's pixel colour data with an address in the back-buffer. There's not very much you can do with that data until you have access to the relevant portion of back-buffer.

So an alternative to this is to "batch-up" RBE tasks. Instead of working on a single quad of pixels (e.g. writing four pixels' colour values into the back-buffer), it makes sense to work in blocks. A block might be 8 pixels. Or 64 pixels. I don't know. Increasing the size makes each access to memory more efficient. But it also costs more in terms of on-die CBC space.

Then you also have to bear in mind that the shader/texture pipelines produce colour-writes out of order (or at least I presume they do, since the threads are themselves able to execute out of order).

So the RBE has to take care to write order-sensitive pixels in the correct order.

So the RBE would seem to have to be able to re-order incoming tasks, and block them up into memory-efficient packets.

I should point out that the "Xenos" AA EDRAM patent covers similar ground.

http://patft.uspto.gov/netacgi/nph-P...number=6873323

But the concepts I'm discussing aren't the main focus of that patent.



Instead, notice the use of Packing and Unpacking units. It's a tenuous link, but I think something similar is prolly happening in relation to CBC.

Ah, thank you for the explanation! That makes a lot of sense. I would expect that you are probably correct in your hypothesis about the packing/unpacking units in relation to the CBC. It seems like it would be basically necessary if you have larger batch sizes. I imagine that the savings you get would far outweigh the die space needed for implementation, though I suppose it depends on what kind of latency you are willing to accept.

Nite_Hawk
 
Since ATI's older GPUs couldn't execute pixel shader threads out of order, packing colour writes in the RBE would have been easier, I guess.

The packing would match up well with the triangle walk, I expect. Though prolly not perfectly (since triangles aren't memory-tile sized).

I doubt the CBC is entirely new in R520 - but I expect the scope of RBE operation has evolved so much in R520 that CBC has had to grow significantly, both in terms of size and functionality. I dare say in much the same way that texture caches have evolved.

Jawed
 
Kombatant said:
To be honest I am not that fond of low-level APIs; I would prefer a solution (a gcc-like compiler, like ATI mentioned in one of its presentations) built on top of OpenGL (or Direct3D, but OGL is not bound to a certain platform), so that it could use not only ATI cards but nV cards as well. Of course you have the problem that a) graphics APIs are not really designed for general programming, so you are bound to miss certain general-purpose functions that must be created somehow, and b) an API built on top of another API means lost speed and efficiency.

There has already been some work on GPGPU-friendly libraries built on top of the graphics APIs (Brook, Sh). But these only make the GPGPU programmer's job easier - they don't expose any special features of a particular chip (as you mentioned). Once the functionality of a GPU becomes almost fixed, then we could definitely do with a high-level non-graphics API. But until then, I don't see how a common API can be designed that offers all the capabilities of NV and ATI. ATI's GPU has support for scatter this generation but NV's does not; how will this be handled? An extension? That will just slow down or dumb down the API. OGL doesn't change fast enough because it has to be backward compatible, and hence only really consistent extensions make it into the core.

The reason for the low-level APIs is to immediately provide low-level access to an architecture without caring about backward compatibility. This takes the burden off the IHVs too, and it's up to the community/academia to come up with an API on top of this that is hardware-architecture independent. In this scenario, there is no room for IHVs to disagree with each other or worry about any backward compatibility (other than the graphics APIs), and they can just focus on the hardware.

If it happens that all IHVs agree on the architecture then obviously we would ask for a high level vendor agnostic API :D.
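
Just to sketch the kind of layering I'm imagining (all of the names and methods below are made up purely for illustration):

```python
# Hypothetical layering: a hardware-independent library written against
# vendor-specific low-level back ends. Every name here is invented.

from abc import ABC, abstractmethod

class LowLevelGPU(ABC):
    """What an IHV would expose: raw, architecture-specific, no compatibility promises."""
    @abstractmethod
    def run_kernel(self, binary, buffers): ...

    def supports_scatter(self):
        return False

class VendorA(LowLevelGPU):
    def run_kernel(self, binary, buffers):
        print("dispatch on vendor A silicon")
    def supports_scatter(self):
        return True          # exposes scatter directly, no API committee required

class VendorB(LowLevelGPU):
    def run_kernel(self, binary, buffers):
        print("dispatch on vendor B silicon")
        # no native scatter: the layer above has to emulate it somehow

class DomainLibrary:
    """Hardware-independent layer maintained by the community/academia."""
    def __init__(self, backend: LowLevelGPU):
        self.backend = backend

    def scatter(self, data, indices):
        if self.backend.supports_scatter():
            self.backend.run_kernel("native_scatter", (data, indices))
        else:
            self.backend.run_kernel("scatter_via_extra_pass", (data, indices))
```

The point being that scatter doesn't have to wait for an API committee: the vendor that has it exposes it immediately, and the library above decides how to fall back on the vendor that doesn't.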
 
This whole thread makes me go :oops: ^3

Not even in my wildest dreams would I have expected people from the GPU world to be so open about their hardware implementations. At least not a couple of years ago, when I started with the simulator. In fact, part of the reason I started it was that there was little information available, and I like to gather and speculate about topics where information is scarce. That's why my master's thesis was just a large, boring document where I tried to explain how to write a computer system emulator (aka console or arcade emulator), as most of the usual information sources for the topic were very fragmented or missing key methods. In the end it was mostly about CPU emulation, because of lack of time, and I still wonder if someone will ever compile something about how all those sprite- and tile-based graphics systems are emulated (M.A.M.E. IS NOT DOCUMENTATION!). And the other part was ... that I like to emulate hardware in software ...

To be a bit more on topic: if ATI is really committed to becoming the DEC of the GPU industry (DEC used to release almost every bit about their implementations), which I doubt (it's more likely they go the Intel/AMD way), there is going to be an even larger BOOM! in GPU/GPGPU-related research. Which I don't like much, as I was precisely trying to get away from the crowd of yet-another-branch-predictor or, back when I started my PhD, yet-another-SMT-technique work that computer architecture research was turning into (at least where I am) ... so please don't release information :LOL:

On another matter, Jawed seems to be explaining that if you assign work on a framebuffer-tile basis to completely separate quad pixel pipelines, you get some interesting benefits. The first is that there is no need for cache coherence between the pipelines, as a pipeline never touches a single bit of a framebuffer region belonging to another pipeline. Then you could assign the ROP in a pipeline to an MC, if you had enough of them, providing something like semi-dedicated bandwidth per pipeline, which may or may not be a good idea. Each pipeline's ROP would access its own separate portion of memory and mostly (at least as far as the ROPs are concerned) have its own separate share of bandwidth, reducing conflicts and similar problems. One possible problem would be work imbalance if the queues around the pipelines aren't large enough, or if someone with very bad intentions only renders to the regions assigned to a single pipeline ;) (but what would be the point of chess-like rendering?).

The current implementation of the simulator implements that kind of work partition at the ROP level (but not necessarily at the shader and upper levels), though with our very simple MC and memory simulation we don't even bother to guarantee that each ROP pipe goes to a single MC. The tile size is set at 8x8 (ATI seems to use 16x16), basically because I like it that way and because our cache lines are, for a number of bad reasons, of that absurd size.
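
Just to put rough numbers on that adversarial case (toy figures: four pipes in a checkerboard of 8x8 tiles, a 1024x768 target):

```python
# Toy estimate of the worst-case imbalance: four pipes own 8x8 framebuffer
# tiles in a checkerboard, and someone only ever draws into pipe 0's tiles.
TILE, PIPES, W, H = 8, 4, 1024, 768

def pipe_of(tx, ty):
    return tx % 2 + 2 * (ty % 2)      # checkerboard assignment, invented pattern

work = [0] * PIPES
for ty in range(H // TILE):
    for tx in range(W // TILE):
        if pipe_of(tx, ty) == 0:      # adversary covers only pipe 0's tiles
            work[0] += TILE * TILE    # pixels pipe 0 has to shade/blend
print(work)   # -> [196608, 0, 0, 0]: one pipe does everything, three sit idle
```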
 
Jawed said:
Since ATI's older GPUs couldn't execute pixel shader threads out of order, packing colour writes in the RBE would have been easier, I guess.

The packing would match up well with the triangle walk, I expect. Though prolly not perfectly (since triangles aren't memory-tile sized).

I doubt the CBC is entirely new in R520 - but I expect the scope of RBE operation has evolved so much in R520 that CBC has had to grow significantly, both in terms of size and functionality. I dare say in much the same way that texture caches have evolved.

Jawed

In my opinion (and I don't really know much about rasterization, and I had more than enough pain implementing it in the simulator), only triangle setup, if anything, may be shared across all the quad pipes in ATI designs. As they have explained a number of times, they bin triangles to each pipe based on their tile distribution algorithm, and then they seem to implement a fragment generator per pipeline (I wonder if it would be better named a tile generator; what do they send to the HZ test, fragments, or a single tile that may later be further divided into quads?). So they have N rasterizers, each 'walking' their own assigned triangles.
 
krychek said:
There has already been some work on GPGPU-friendly libraries built on top of the graphics APIs (Brook, Sh). But these only make the GPGPU programmer's job easier - they don't expose any special features of a particular chip (as you mentioned). Once the functionality of a GPU becomes almost fixed, then we could definitely do with a high-level non-graphics API. But until then, I don't see how a common API can be designed that offers all the capabilities of NV and ATI. ATI's GPU has support for scatter this generation but NV's does not; how will this be handled? An extension? That will just slow down or dumb down the API. OGL doesn't change fast enough because it has to be backward compatible, and hence only really consistent extensions make it into the core.

The reason for the low-level APIs is to immediately provide low-level access to an architecture without caring about backward compatibility. This takes the burden off the IHVs too, and it's up to the community/academia to come up with an API on top of this that is hardware-architecture independent. In this scenario, there is no room for IHVs to disagree with each other or worry about any backward compatibility (other than the graphics APIs), and they can just focus on the hardware.

If it happens that all IHVs agree on the architecture then obviously we would ask for a high level vendor agnostic API :D.
I agree with you, there are many things still to be done. I was discussing this very subject with Sireric in Ibiza, and he said there were many things to take into consideration for these sorts of applications; for instance, one parameter to consider is the fact that, currently, the graphics driver works in a certain way because it is designed with game-playing in mind. So certain optimizations, although very valid for games, tend to give totally different results when used by a GPGPU-type application - and if that's not enough, think about different driver revisions which introduce new optimizations, or alter existing ones :D

The reassuring part is that ATI seems very committed to this whole thing, and I believe that, with academia actively working on this, we will have some very pleasant surprises within 2006 :)
 
Kombatant said:
The reassuring part is that ATI seems very committed to this whole thing, and I believe that, with academia actively working on this, we will have some very pleasant surprises within 2006 :)

Who in academia is working on this? Any links?

Nite_Hawk
 
Nite_Hawk said:
Who in academia is working on this? Any links?

Nite_Hawk
If you notice the GPGPU people in the right column of the site, you will see that some of them are students working on their PhD theses :)
 
RoOoBo said:
(I wonder if it would be better named a tile generator; what do they send to the HZ test, fragments, or a single tile that may later be further divided into quads?)
Each quad actually contains the HiZ (which is why its effectiveness at higher res actually scales with the number of quads), so each of the HiZ's will only see the tiles that quad is operating on, AFAIK.
 
RoOoBo said:
On another matter, Jawed seems to be explaining that if you assign work on a framebuffer-tile basis to completely separate quad pixel pipelines, you get some interesting benefits. The first is that there is no need for cache coherence between the pipelines, as a pipeline never touches a single bit of a framebuffer region belonging to another pipeline. Then you could assign the ROP in a pipeline to an MC, if you had enough of them, providing something like semi-dedicated bandwidth per pipeline, which may or may not be a good idea. Each pipeline's ROP would access its own separate portion of memory and mostly (at least as far as the ROPs are concerned) have its own separate share of bandwidth, reducing conflicts and similar problems. One possible problem would be work imbalance if the queues around the pipelines aren't large enough, or if someone with very bad intentions only renders to the regions assigned to a single pipeline ;) (but what would be the point of chess-like rendering?).
This isn't a problem specific to having ROPs assigned to MCs. Even if the ROPs have free access to any MC, that doesn't help if all tiles that need to be written belong to one MC.
However, having no fixed ROP-MC relation potentially gives you more freedom in optimizing the framebuffer layout/distribution across multiple channels for a specific bandwidth requirement mix.
 