Curious new patent by nVIDIA (AA)

http://appft1.uspto.gov/netacgi/nph...=PG01&s1=nvidia.AS.&OS=AN/nvidia&RS=AN/nvidia

United States Patent Application 20090079747
Kind Code A1
Johnson; Philip Browning ; et al. March 26, 2009
Distributed Antialiasing In A Multiprocessor Graphics System

Abstract

Multiprocessor graphics systems support distributed antialiasing. In one embodiment, two (or more) graphics processors each render a version of the same image, with a difference in the sampling location (or locations) used for each pixel. A display head combines corresponding pixels generated by different graphics processors to produce an antialiased image. This distributed antialiasing technique can be scaled to any number of graphics processors.

(from a conversation I was having on MSN a few seconds ago):

Say you have two GPUs
two chips in an MCM module, kind of like GTX295
or two GPUs...
well, load balancing has a problem
MSAA has another one (surface aliasing????)
you render on each chip the same scene but you change sample pattern/sample locations
on each card
although each card produces 4 color samples that are equal to each other within its own batch,
the two groups of 4 samples could very well have different colors
thus blending the unresolved output
(the 8 samples together)
you get something that is much better than regular 8xMSAA
I think you could do the same on a single GPU rendering the scene twice, but you'd have to keep the RT unresolved and resolve it in the shader at the end
here the display head would do the trick so it would still be a HW MSAA resolve.
so basically the more GPUs you use, the better the AA gets
some people might see it as wasteful... but if you thought about leaner and meaner GPU cores
with a properly adapted display controller/ROPs
you could change the way you go about scaling graphics over multiple GPUs a bit
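The display-head blend described in the chat can be sketched in plain Python. This is an illustrative assumption on my part, not the patent's actual hardware: two GPUs each render the same pixel with a different 4-sample pattern, and the "display head" averages all 8 unresolved samples into the final pixel.

```python
# Hypothetical sketch of the distributed-AA resolve: each GPU supplies
# an unresolved batch of 4 RGB samples for the same pixel, taken at
# different sub-pixel locations; the display head averages all 8.

def resolve_pixel(samples_gpu0, samples_gpu1):
    """Average the two unresolved 4-sample batches into one pixel."""
    all_samples = samples_gpu0 + samples_gpu1  # 8 samples total
    n = len(all_samples)
    # Per-channel mean over every sample from both GPUs.
    return tuple(sum(s[c] for s in all_samples) / n for c in range(3))

# Example: GPU 0's pattern landed entirely on a white triangle, while
# GPU 1's offset pattern caught the black background on 2 of 4 samples.
gpu0 = [(1.0, 1.0, 1.0)] * 4
gpu1 = [(1.0, 1.0, 1.0)] * 2 + [(0.0, 0.0, 0.0)] * 2
print(resolve_pixel(gpu0, gpu1))  # -> (0.75, 0.75, 0.75)
```

A single 4xMSAA card would have reported this pixel as pure white; the second card's offset pattern is what recovers the intermediate value.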
 
Looking through the claims, an offset between render targets is described, and stereoscopic 3D rendering is in there too, along with mixing a 3D display area into a 2D "desktop" environment.

So, while the MSAA from multiple GPUs is part of it, it seems to be more than that.

Jawed
 
The patent might have valid claims, but failing to disclose prior art that anyone in the industry should be familiar with, and building a set of claims on a clearly invalid one, undermines the strength of the patent.
 
The patent seems to focus on how the samples are combined in the display head.
Only the claims matter - but I don't have time to look at them.

FWIW, from the summaries given here, it just sounds like a parallel version of "The accumulation buffer: Hardware support for high-quality rendering" by Haeberli and Akeley, or 3dfx's "T-Buffer" variant.
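For comparison, here is a minimal sketch of the accumulation-buffer idea from that paper: render the scene several times with a sub-pixel jitter, accumulate the passes, and divide by the pass count. The `render_pass` function here is a made-up stand-in for a real renderer, chosen so the averaging is visible.

```python
# Accumulation-buffer antialiasing, in outline: N jittered renders of
# the same scene, summed and averaged.  render_pass() is a placeholder;
# a real implementation would offset the projection by the jitter.

JITTER = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def render_pass(jitter, width, height):
    # Stand-in renderer: a flat image whose brightness depends on the
    # jitter, so the averaging below produces a visible result.
    jx, jy = jitter
    return [[(jx + jy) / 2.0] * width for _ in range(height)]

def accumulate(width, height):
    accum = [[0.0] * width for _ in range(height)]
    for jx, jy in JITTER:
        image = render_pass((jx, jy), width, height)
        for y in range(height):
            for x in range(width):
                accum[y][x] += image[y][x]
    scale = 1.0 / len(JITTER)
    return [[v * scale for v in row] for row in accum]

print(accumulate(2, 2))  # -> [[0.5, 0.5], [0.5, 0.5]]
```

The patent's scheme reads like the same accumulate-and-average step, except the jittered passes run concurrently on separate GPUs and the divide happens in the display head.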
 
Are you even allowed to say you have looked at a specific patent Simon? :p Also more than just the claims matter, at least in the US (inequitable conduct).
 
Are you even allowed to say you have looked at a specific patent Simon? :p
I'll happily place my hand on a copy of Foley, van Dam et al, and swear that I have not looked at that patent.
Also more than just the claims matter, at least in the US (inequitable conduct).
Eventually, but in these instances, I'd say (to a third party) read the claims first.
 
If they do have different pixel centers, we get 2xSSAA + 8xMSAA ?
Single NVidia GPUs can already combine SSAA and MSAA. Thinking about it, I don't actually know how the MSAA samples are coloured - how do they choose which pixel centre they're based upon? Or are they based upon the pixel's real centre, rather than either of the two SSAA sample locations?

But anyway, it sounds like 2xSSAA + 8xMSAA. But it also does funky things for 3D. An edge in 3D can be antialiased by the left image and the right image simultaneously, each of which has been generated by a separate GPU.

I've got no idea what NVidia's GPUs currently do with MSAA when stereoscopic 3D is enabled.

Jawed
 
Single NVidia GPUs can already combine SSAA and MSAA. Thinking about it, I don't actually know how the MSAA samples are coloured - how do they choose which pixel centre they're based upon? Or are they based upon the pixel's real centre, rather than either of the two SSAA sample locations?

I think it does the multisampling around each of the pixel centers. E.g. 1x2 supersampling + 2xMSAA actually yields 4 MSAA samples per pixel. After each subpixel with its respective MSAA samples is resolved, the 1x2 image is downscaled to output resolution. This is based on the vague description of the mode in nHancer (1x2 SSAA + 2xMSAA is referred to as 4xS, 1x2 + 4xMSAA = 8xS, etc.).
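If that reading of the mixed modes is right, the effective sample count is just the SSAA grid times the MSAA level. A tiny sketch to make the bookkeeping explicit (the function name is mine, not nHancer's):

```python
# Effective samples per output pixel when every SSAA sub-pixel carries
# its own full set of MSAA samples (the assumption described above).

def total_samples(ss_x, ss_y, msaa):
    """ss_x x ss_y supersampling grid, each sub-pixel with `msaa` samples."""
    return ss_x * ss_y * msaa

print(total_samples(1, 2, 2))  # 1x2 SSAA + 2xMSAA -> 4   ("4xS")
print(total_samples(1, 2, 4))  # 1x2 SSAA + 4xMSAA -> 8   ("8xS")
print(total_samples(2, 1, 8))  # two GPUs as 2xSSAA + 8xMSAA -> 16
```

On the same arithmetic, the patent's two-GPU case with different pixel centres and 8xMSAA each would behave like 16 samples per output pixel.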

But anyway, it sounds like 2xSSAA + 8xMSAA. But it also does funky things for 3D. An edge in 3D can be antialiased by the left image and the right image simultaneously, each of which has been generated by a separate GPU.

Thanks. :smile:
 
Hmm, only horizontal edges in 3D could be antialiased, so I'm thinking I've misunderstood :oops:

:oops: Hmm, thinking about it some more, even that's not going to work :oops:

I dunno. The 3D thing needs more careful reading. Think I better stop now.

Jawed
 