Multisample of non-uniform resolution Patent

hjs

Newcomer
Slashhead over at the Dutch C!T forum found this nVidia patent.
Can someone explain what it means?

Link

Something about reducing the resolution at the edges so there will be less aliasing.
Why? Less banding, or more speed?
 
I just downloaded the entire patent and, after a very quick glance, I have a feeling it's to do with framebuffer compression.

It looks like it works on 2x2 pixels at a time, storing a base and 3 deltas but that may depend on whether there is an edge present or not.

Patents always need careful reading and I don't have a spare hour or three to read it at the moment, so take that with a HUGE grain of salt.
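To make that guess concrete, here's a rough sketch of what a base-plus-three-deltas scheme for a 2x2 block could look like. None of this is from the patent text itself; the 8-bit channel and 4-bit delta sizes are just assumptions for illustration.

Code:
#include <cstdint>

// Purely illustrative base+delta storage for one 8-bit channel of a 2x2
// block: one full-precision base plus three narrow signed deltas
// (8 + 3*4 = 20 bits instead of 32). If any delta doesn't fit in 4 bits
// (e.g. a polygon edge runs through the block), the block would have to
// fall back to uncompressed storage.
struct CompressedBlock {
    uint8_t base;      // full-precision anchor sample
    int8_t  delta[3];  // 4-bit signed offsets, range [-8, 7]
    bool    exact;     // false => a delta overflowed, store uncompressed
};

CompressedBlock encodeBlock(const uint8_t px[4]) {
    CompressedBlock c;
    c.base  = px[0];
    c.exact = true;
    for (int i = 0; i < 3; ++i) {
        int d = int(px[i + 1]) - int(px[0]);
        if (d < -8 || d > 7)
            c.exact = false;             // edge in the block: deltas too big
        c.delta[i] = int8_t(d);
    }
    return c;
}

// Only meaningful when c.exact is true; otherwise the raw samples are kept.
void decodeBlock(const CompressedBlock& c, uint8_t px[4]) {
    px[0] = c.base;
    for (int i = 0; i < 3; ++i)
        px[i + 1] = uint8_t(int(c.base) + c.delta[i]);
}

Again, that's just one reading of the 2x2/base+delta wording, not the actual claims.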
 
This looks sort of similar in some of its principles to the 22-bit color technique, doesn't it? The idea of more "reduced resolution" data samples representing "higher resolution" color data, and offering performance and storage benefits over fewer "higher resolution" samples.

I thought about it, and came up with all sorts of whacky stuff. For your enjoyment and target practice:

Whacky Theory 1:

Would something like a 32-bit primary buffer, with 16-bit multisample buffers, work if blended on the premise that any stored 16-bit value that matched the primary buffer color value (with its LSBs dropped) could be treated as the full 32-bit primary buffer value for blending? Of course, the 32-bit/16-bit (8-bit/4-bit per channel) split is a bit arbitrary... there is no reason (?) this couldn't be used to reduce the demands of a frame buffer format aimed a bit higher than current ones... a 10-bit primary buffer seems pretty reasonable. Color storage solutions should be able to address the challenges that would arise, right?

It would be adding more color information, although less accurate color information. Wouldn't this still successfully add position and color information to a pixel as samples came from multiple discrete surfaces (i.e., samples at edges), even though the added color information is less accurate? I'd think that in such a situation, discrete surface sampling would still be beneficial even at reduced detail, because it is better than no representation at all.

Could the primary buffer sample selection be effectively picked to represent an "interior region", and the other reduced-precision samples be picked to represent the "exterior"? Trying to presume some connection with the NV40, would a solution that tried to be "stochastic" lend itself to this type of implementation by virtue of flexibility in "primary sample" designation? Perhaps distance-based criteria for selecting it?
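To make Whacky Theory 1 a bit more concrete, here is a per-channel sketch of the blending premise. The 8-bit primary / 4-bit sample split and the helper name are my own, purely for illustration.

Code:
#include <cstdint>

// Hypothetical per-channel resolve for Whacky Theory 1: one 8-bit "primary"
// sample plus n reduced-precision 4-bit samples. A 4-bit sample that matches
// the primary's top 4 bits is treated as the full-precision primary value;
// otherwise it is expanded from 4 bits to 8 as best we can.
uint8_t resolveChannel(uint8_t primary, const uint8_t* samples4, int n) {
    unsigned sum = primary;
    for (int i = 0; i < n; ++i) {
        uint8_t s = samples4[i] & 0x0F;
        if (s == (primary >> 4))
            sum += primary;                // matches: reuse full precision
        else
            sum += uint8_t((s << 4) | s);  // differs: replicate nibble to 8 bits
    }
    return uint8_t(sum / (n + 1));         // simple box-filter blend
}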

At the end of the text:

"The enhanced resolution signal is not a significant aspect of one emodiment because in some emodiments the low resolution signal and the high resolution signal are used directly".

It fits my thoughts (assuming they are workable), I think, except it seems to imply that there is an intermediate between my "16-bit" and "32-bit" color storage formats. My outlook depends on that intermediate "resolution reducer/enhancer" being expressed only mathematically by the blending criteria, yet this wording seems to imply some process that alters values before the blending for output.
 
It looks to me like this is sort of a lossy compression scheme where the edges of the screen are considered less important than the center. I'm not entirely sure that this is a good idea for most games: in FPS's, most of the HUD is along the edges of the screen. But it is an interesting idea nonetheless.
 
If you process pixels in a 2x2 block (SIMD) all the time, you take a performance hit at the edges of polygons. Anything you can come up with to use the extra processing power (like combining multiple incomplete pixel blocks, or using the free processing units to help process the relevant pixels) would be a speed improvement.
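Back-of-the-envelope numbers for that 2x2 hit, with made-up figures, just to show the kind of waste involved:

Code:
#include <cstdio>

// Illustration only: if a triangle covers `covered` pixels spread over
// `quads` 2x2 blocks, the fraction of SIMD lanes doing useful work is
// covered / (quads * 4). Small or thin triangles touch many partial quads.
int main() {
    int covered = 10;  // pixels actually inside the triangle (made up)
    int quads   = 6;   // 2x2 blocks the triangle touches (made up)
    printf("lane utilisation: %.0f%%\n", 100.0 * covered / (quads * 4));
    return 0;  // prints ~42%
}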

That was the first thing that came to my mind. Second: try to reduce the amount of stored data (registers) by doing an approximation first and seeing if you can improve upon it without having to store intermediate results.

But I have no real idea whatsoever.

:D
 
Chalnoth said:
It looks to me like this is sort of a lossy compression scheme where the edges of the screen are considered less important than the center.
It wasn't edges of the screen but edges of polygons.
 
demalion said:
This looks sort of similar in some principles to the 22-bit color technique, doesn't it?
That seems possible. It was filed in 1998, so maybe it was one of 3dfx's?
 
Simon F said:
It wasn't edges of the screen but edges of polygons.
Argh, sorry, misread that :p

Anyway, is the basis of the paper that some lost data can be regained after recombination through FSAA? That is, combining color data from separate surfaces reduces aliasing patterns, I think?

If this is the premise, then it seems misguided, as with highly-tessellated surfaces, interior triangle edges may well have the exact same color as the neighboring polygon at that point (since they use the same texture), which could result in some strange outlining of interior triangle edges.
 
Chalnoth said:
Anyway, is the basis of the paper that some lost data can be regained after recombination through FSAA? That is, combining color data from separate surfaces reduces aliasing patterns, I think?

If this is the premise, then it seems misguided, as with highly-tessellated surfaces, interior triangle edges may well have the exact same color as the neighboring polygon at that point (since they use the same texture), which could result in some strange outlining of interior triangle edges.
Yes, this is one reason why you can't use centroid sampling all the time.
 
OpenGL guy said:
Chalnoth said:
Anyway, is the basis of the paper that some lost data can be regained after recombination through FSAA? That is, combining color data from separate surfaces reduces aliasing patterns, I think?

If this is the premise, then it seems misguided, as with highly-tessellated surfaces, interior triangle edges may well have the exact same color as the neighboring polygon at that point (since they use the same texture), which could result in some strange outlining of interior triangle edges.
Yes, this is one reason why you can't use centroid sampling all the time.

Ahhh DAMN IT!

Are you trying to kill me?
Damn you guys are so logical, I want your brain. o_O

So centroid sampling may produce more aliasing when applied to tessellated surfaces *cough* Truform *cough*?
 
Truform isn't necessary for this to happen. The situation will happen basically whenever you have a skinned model: the same texture is spread across neighboring triangles.

And the aliasing will only happen if the rendering uses a method where triangle edges are treated differently.
 
I think I understand.

EXAMPLE:
So I spread a texture across 5 triangles. I start applying strange properties to the triangle edges.
Then the problem will appear?
END EXAMPLE

Don't vertex shaders have the ability to alter triangle edges by manipulating the vertices?

So basically centroid sampling will only produce more problems under HL2?
 
Centroid sampling isn't meant to be used on skinned textures. It's meant to be used when the texture ends at the edge of the polygon.
 
So in order to make any game look perfect with FSAA we would need to enable 2 different sampling patterns at once?

Centroid on polys where the texture ends and a spatial pattern on every other poly.
 
In the end, MSAA can probably end up more costly in terms of silicon space than SSAA. All we need to do is minimise the fillrate hit of SSAA and throw it on a gamma-corrected, sparse grid.

Problems all solved. :D
Unless there are other problems with SSAA?
 
Pete said:
A minor one: the corresponding memory bandwidth hit. ;)

Isn't the bandwidth hit the same for MSAA and SSAA? The only difference is that with SSAA you get half the fillrate, while with MSAA there is no hit on fillrate?
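Some rough 4x numbers under simple assumptions of my own (32-bit color, 32-bit Z, no framebuffer or Z compression), just to frame the question:

Code:
#include <cstdio>

// Back-of-the-envelope 4x comparison, assuming 32-bit color + 32-bit Z per
// sample and no framebuffer/Z compression. The raw sample write traffic is
// similar either way; the difference is how often the shader/texture
// pipeline runs per screen pixel (and therefore texture bandwidth too).
int main() {
    const int    samples        = 4;
    const double pixelsPerFrame = 1024.0 * 768.0;   // assumed resolution
    const double bytesPerSample = 4 + 4;            // color + Z

    double writesMB = pixelsPerFrame * samples * bytesPerSample / 1e6;
    printf("color+Z writes per frame: %.1f MB (SSAA and MSAA alike)\n", writesMB);
    printf("shader/texture work per pixel: SSAA %dx, MSAA 1x\n", samples);
    return 0;
}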
 
K.I.L.E.R said:
So centroid sampling may produce more aliasing when applied to tessellated surfaces *cough* Truform *cough*?
???

Imagine a quad rendered as a triangle fan: if you used centroid sampling, then you would see a blurry seam where the two triangles meet. Like I said, you can't use centroid sampling all the time. As Chalnoth mentioned, the best time to use centroid sampling is when the texture should end at the edge of the triangle.
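A toy numerical sketch of that seam, with made-up sample positions (not any real hardware pattern): the texture coordinate u varies linearly across the pixel, so the texture itself is continuous, yet with centroid sampling the two triangles end up evaluating u at very different points.

Code:
#include <cstdio>

// Toy illustration of why centroid sampling can show a seam on an interior
// edge shared by two triangles. A pixel has four coverage samples; a vertical
// edge at x = 0.5 splits them between triangle A (left) and triangle B
// (right). The texture coordinate u varies linearly with x (u = x here).
struct Vec2 { double x, y; };

// Four sample positions inside a unit pixel (made-up pattern).
const Vec2 kSamples[4] = { {0.375, 0.125}, {0.875, 0.375},
                           {0.125, 0.625}, {0.625, 0.875} };

// Average position of the samples a triangle actually covers.
Vec2 centroidOfCovered(bool leftSide) {
    Vec2 c = {0, 0};
    int n = 0;
    for (const Vec2& s : kSamples) {
        bool covered = leftSide ? (s.x < 0.5) : (s.x >= 0.5);
        if (covered) { c.x += s.x; c.y += s.y; ++n; }
    }
    c.x /= n; c.y /= n;
    return c;
}

int main() {
    // Pixel-center interpolation: both triangles evaluate u at x = 0.5.
    printf("center:   A u=%.3f  B u=%.3f  (continuous)\n", 0.5, 0.5);

    // Centroid interpolation: each triangle evaluates u at the centroid of
    // its own covered samples, so u jumps across the interior edge.
    Vec2 a = centroidOfCovered(true), b = centroidOfCovered(false);
    printf("centroid: A u=%.3f  B u=%.3f  (jump -> visible seam)\n", a.x, b.x);
    return 0;
}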
 