Sun's description of Multisampling

Althornin

With multisampling, all 3-D primitives in a scene are sampled multiple times at each pixel. The entire scene is rendered at a higher resolution into available off-screen memory, then run through a low-pass filter, and displayed to the screen at normal resolution. The graphic below simulates the difference between an aliased image and one that has been antialiased using multisampling.
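The "render high, filter down" pipeline in the quote can be sketched in a few lines. This is a minimal illustration, not Sun's actual implementation; it uses a 2x2 box filter (the simplest low-pass filter) on a grayscale image, with the image data made up for the example.

```python
# Sketch of "render at higher resolution, low-pass filter, display at
# normal resolution" from the quoted text. downsample_2x2 is a made-up
# helper name; the 2x2 box filter stands in for the low-pass filter.

def downsample_2x2(img):
    """Average each 2x2 block of the high-res image into one pixel."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

# A hard black/white edge rendered at 2x resolution...
hi = [[0, 0, 1, 1],
      [0, 0, 1, 1],
      [0, 1, 1, 1],
      [1, 1, 1, 1]]
# ...filters down to a 2x2 image where the edge pixel becomes
# an intermediate gray, which is the antialiasing effect.
lo = downsample_2x2(hi)
```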

I am taking some Sun certification stuff.
This quote describing "multisampling" sure sounds a lot more like supersampling to me.
Any explanations?
 
I have seen this use of "multisampling" before; basically, the idea is that "multisampling" covers any case where more than one sample is taken per pixel, with supersampling being the specific case where all per-pixel data (polygon inside/outside test, texture, gouraud color, pixel shaders) are sampled for every sample. At least 3dfx and the OpenGL 1.3 standard have used the term "multisampling" in this sense.

Which of course leaves the question of what you would call NV/ATI's current schemes that do a polygon inside/outside test for every sample, but sample textures/gouraud colors/shaders only once per pixel per polygon.
 
I posted this a long time ago about SS and MS. I don't know why this topic gets revisited time and again.

Supersampling is just that: "super" sampling, which is calculating more samples. 2x2 supersampling means calculating 2x2, or 4 samples per pixel. Within each pixel, 4 "samples" are calculated independently. For each sample, for every rendered polygon that covers that sample point, texture is sampled and filtered (bilinear, trilinear, or anisotropic, as requested), shading is calculated, and stencil and Z are calculated. The frame buffer holds a unique value for each supersampled color, alpha, Z, and stencil. 4X supersampling takes 4X texture bandwidth, 4X pipeline computation, and 4X frame buffer bandwidth and space.
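A toy sketch of the per-sample work described above, under made-up assumptions: `inside` is a half-plane standing in for a polygon edge, and `shade` is a flat-color stand-in for the full texture/shading pipeline. The point is that both the coverage test and the shading run once per sample.

```python
# Hedged sketch of 2x2 (4X) supersampling on one row of pixels.
# inside() and shade() are illustrative stand-ins, not a real API.

OFFSETS = ((0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75))

def inside(x, y):
    """Half-plane standing in for a polygon edge."""
    return x + y < 4.0

def shade(x, y):
    """Stand-in for full per-sample texturing + shading."""
    return 1.0  # flat white for simplicity

def supersample_pixel(px, py):
    # Every sample gets its own coverage test AND its own shade()
    # call: 4X pipeline computation and 4X texture bandwidth.
    samples = [shade(px + dx, py + dy) if inside(px + dx, py + dy) else 0.0
               for dx, dy in OFFSETS]
    return sum(samples) / len(samples)

row = [supersample_pixel(x, 0) for x in range(8)]
# Pixels fully inside the edge resolve to 1.0, fully outside to 0.0,
# and the pixel the edge crosses gets a fractional value.
```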

Multisampling is an innovation that was introduced by SGI some years ago. The observation is that texture operations (like alpha mask), texture filtering, and shading can be calculated correctly, or at least approximately (very close to) correctly, by calculating them once per pixel. That's what level of detail (LOD) is: the appropriate level of filtering for the texture, within the pixel. So, in true SGI-style Multisampling, a single textured, shaded sample is produced per pixel, and independent Z and stencil values are produced per sample. The color/alpha value is replicated across all of the samples within a pixel, but the individual samples have unique Z and stencil. Also, there are unique, separate frame buffer entries for all: color, alpha, Z, stencil, as before. So, for 2x2 (4X) multisampling, you require 1X texture bandwidth, 1X pipeline computation, but still 4X frame buffer bandwidth and space.
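The same toy scene as the supersampling sketch, reworked for SGI-style multisampling as described above: coverage (and, on real hardware, Z/stencil) is still tested per sample, but the shading stand-in runs only once per pixel per polygon and its result is replicated into every covered sample. `inside`/`shade` remain made-up illustrative names.

```python
# Hedged sketch of 4X multisampling: 1X shading cost, per-sample
# coverage. shade() is counted so the cost difference is visible.

OFFSETS = ((0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75))

def inside(x, y):
    """Half-plane standing in for a polygon edge."""
    return x + y < 4.0

shade_calls = 0

def shade(x, y):
    """Counted to show shading runs once per pixel, not per sample."""
    global shade_calls
    shade_calls += 1
    return 1.0

def multisample_pixel(px, py):
    # Coverage is evaluated per sample...
    covered = [inside(px + dx, py + dy) for dx, dy in OFFSETS]
    # ...but color is computed ONCE, at the pixel center, and
    # replicated across every covered sample.
    color = shade(px + 0.5, py + 0.5) if any(covered) else 0.0
    return sum(color if c else 0.0 for c in covered) / len(OFFSETS)

row = [multisample_pixel(x, 0) for x in range(8)]
# For flat shading this resolves to the same edge gradient as 4X
# supersampling, with only one shade() call per covered pixel.
```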
 
better call it adaptive vs constant multi/supersampling..

i prefer multisampling over supersampling.. what's so super about it? dunno. but it does multiple samples => multisampling fits well..

now constant multisampling means it has a constant amount of samples per pixel. adaptive multisampling means it has an adapting amount of samples (up to the set max samples) per pixel.
 
Ho hum ...

Geometric primitives are sampled multiple times, otherwise there would be no effect at all, yet there is one. Think "inside or outside of polygon", that's what's "sampled" at a greater resolution. The deal is that a single color is assigned to the multiple samples over the geometry. Increased spatial resolution, normal color resolution, that's multisampling. Nothing new here.

davepermen said:
now constant multisampling means it has a constant amount of samples per pixel. adaptive multisampling means it has an adapting amount of samples (up to the set max samples) per pixel.
Multisampling as implemented on the consumer cards we're surely talking about here always uses a constant sample count. Maybe you're thinking of frame buffer compression (which isn't actually as sophisticated as it sounds), but that's not the same thing as reducing the number of generated samples.

Matrox' FAA is an exception, and it doesn't work right (intersection edges).
Z3 is a true adaptive scheme, but it's lossy.
 
i know, zeckensack.

actually, most implement adaptive, too. nv30 and r300 based chips do. i've followed these discussions now since long, and i still remember that this was written everywhere in the hw specs. they only add samples where "needed".. that is adaptive, then.
 
davepermen said:
i know, zeckensack.

actually, most implement adaptive, too. nv30 and r300 based chips do. i've followed these discussions now since long, and i still remember that this was written everywhere in the hw specs. they only add samples where "needed".. that is adaptive, then.
Are you talking about AF now?

NVidia and ATI currently use a fixed number of AA samples for every pixel. Nothing adaptive here. I believe 3Dlabs have an adaptive multisampling scheme, but I don't know how it works.
 
You're right Xmas. But 3Dlabs' adaptiveness isn't wrt the number of samples per pixel. They always have the same number of sample positions, but they adapt the number of different colors in each pixel.
Instead of storing one color per subpixel, they store a color and a coverage mask per polygon and pixel, initially having space for two(*) colors per pixel and allocating more dynamically as needed.

(*) IIRC; it's been some time since I read it.
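The color-plus-coverage-mask storage described above can be sketched as a resolve step over (color, mask) pairs. This is a toy illustration under the post's own caveats (the "two initial slots" detail is from memory), with made-up helper names and a 4-sample pixel.

```python
# Hedged sketch: instead of one stored color per sample, a pixel
# holds a small list of (color, coverage_mask) fragments, one per
# polygon touching the pixel. resolve() is a made-up helper name.

SAMPLES = 4  # sample positions per pixel

def resolve(fragments):
    """fragments: list of (color, coverage_mask) pairs for one pixel."""
    total = 0.0
    for color, mask in fragments:
        # Weight each fragment's color by how many samples it covers.
        total += color * bin(mask).count("1") / SAMPLES
    return total

# Interior pixel: one polygon covers all four samples -> one entry.
interior = resolve([(1.0, 0b1111)])
# Edge pixel: two polygons split the samples -> two entries, still
# cheaper to store than four independent per-sample colors.
edge = resolve([(1.0, 0b0011), (0.0, 0b1100)])
```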
 
I think Sun's definition is okay, it's just that the true meaning of "rendered" in that definition is not the same as ordinary rendering for ATI/NV/3DLabs. When rendering at the higher resolution, they use the same texture and colour calculations for a group of pixels. For ATI's 6xAA, a group of 6 pixels at the higher resolution all share the same pixel shader output (provided they're all in the same polygon), and those 6 pixels are not arranged in an ordinary grid.

It all depends on your point of view. MSAA is just rendering at a higher resolution while taking some shortcuts, and then filtering down.
 
The central problem with the supersampling/multisampling distinction (as "we" know it) is that there aren't really any formal definitions of it; it's more of an ad-hoc definition (or distinction). I believe I said something similar on the last incarnation of these forums.

I think "our" current definition (cba to define it myself; I've had a few beers tonight fwiw) / distinction of/between supersampling/multisampling is useful, but that doesn't mean everyone (eg Sun) has to follow it, or that you'll actually find it in most textbooks. Furthermore I, for one (with a mother tongue different from English or Latin), can't syntactically deduce that one term would mean multiple spatial samples from different fragments and the other multiple spatial samples from the same fragment.
 