Throw your NDAs away

muted said:
Take a cube: other techniques would try to AA the face you are looking at directly...

And I think game developers could implement an "edge detection" mechanism in their code... (tell me if that's just stupid)

The first thing is not necessarily a bad thing, especially in the case of SuperSampling, where this will actually increase the quality of the texture (both edges and textures get supersampled, which is kind of like extra-sampled anisotropic filtering).
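Roughly speaking, supersampling just means rendering more samples per pixel and filtering them back down, so texture detail benefits along with the edges. A quick sketch of the idea (just an illustration of an ordered-grid resolve with a box filter, not any particular card's implementation):

```cpp
// Illustration only: the scene is rendered at factor*factor the resolution,
// then every block of samples is box-filtered down to one screen pixel,
// so texture detail and edges are both averaged.
#include <vector>

struct Color { float r, g, b; };

std::vector<Color> downsample(const std::vector<Color>& hiRes,
                              int width, int height, int factor)
{
    std::vector<Color> loRes(width * height);
    const int hiWidth = width * factor;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Color sum = {0.0f, 0.0f, 0.0f};
            for (int sy = 0; sy < factor; ++sy)
                for (int sx = 0; sx < factor; ++sx) {
                    const Color& s = hiRes[(y * factor + sy) * hiWidth + (x * factor + sx)];
                    sum.r += s.r; sum.g += s.g; sum.b += s.b;
                }
            const float inv = 1.0f / (factor * factor);
            loRes[y * width + x] = { sum.r * inv, sum.g * inv, sum.b * inv };
        }
    }
    return loRes;
}
```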

The second suggestion would be kind of hard, since the developer does not know where the edges will be, given the power of hardware TnL and fully flexible, object-deforming vertex shaders.
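To make that concrete, CPU-side silhouette edge detection would look roughly like the sketch below (hypothetical data structures, purely to show the dependency): it needs the final, deformed vertex positions, which is exactly the data a vertex shader produces on the GPU and never hands back to the application.

```cpp
// Rough illustration: an edge is a silhouette edge when one adjacent triangle
// faces the eye and the other faces away. This only works if you have the
// final, deformed vertex positions.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Edge { int v0, v1; int tri0[3]; int tri1[3]; };  // edge plus its two adjacent triangles

// Returns true if the triangle faces the eye point.
static bool facesEye(const Vec3* verts, const int tri[3], const Vec3& eye)
{
    Vec3 n = cross(sub(verts[tri[1]], verts[tri[0]]), sub(verts[tri[2]], verts[tri[0]]));
    return dot(n, sub(eye, verts[tri[0]])) > 0.0f;
}

std::vector<int> findSilhouetteEdges(const Vec3* deformedVerts,
                                     const std::vector<Edge>& edges,
                                     const Vec3& eye)
{
    std::vector<int> silhouette;
    for (int i = 0; i < (int)edges.size(); ++i) {
        bool f0 = facesEye(deformedVerts, edges[i].tri0, eye);
        bool f1 = facesEye(deformedVerts, edges[i].tri1, eye);
        if (f0 != f1)                 // one triangle front-facing, the other back-facing
            silhouette.push_back(i);  // -> this edge is on the silhouette
    }
    return silhouette;
}
```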

K-
 
muted,

maybe I came across a little negative. I think Matrox FAA looks really nice, certainly much better than what's currently out there in terms of quality/performance. That squid demo is for sure an extreme; games will not look like that for some time yet. But nonetheless, I don't think poly counts will stay as they are. For instance, one of the main selling points of Parhelia is displacement mapping, and even with dynamic LOD that probably won't go easy on poly count.

What I'm questioning is that 3% to 5% figure. It sounds conservative to me. IMHO it seems strange that they will only AA the outer edges of objects and not all visible triangle edges. If adjacent edges of polygons aren't AA'ed, won't that miss a bunch of texture aliasing? Or am I misunderstanding things here?

Regards / ushac
 
I'm just talking about general guidelines...

But they probably don't need it anyway.

I think I'll get back to coding once I get my hands on a new video card - my current one is starting to chug when modeling, so that's not good.

Maybe I'll feel a little more like it once I go back to college, and maybe get an MBA after that. Right now, computers are pissing me off.

Well, today is job-hunting day... back to retail, I guess.
 
No no, I didn't think you were negative. I'm just saying that what really matters is the general geometry of the object.

Sure, it'll slow down, but I'm sure you'll be able to turn FAA on in a lot more games in the future than you can with most of the video cards on the market now.

I'd like to see the squid demo run on Matrox hardware... unless they use their proprietary extensions.

And the Codecreatures benchmark.
 
Agreed. Maybe Parhelia will be able to run the Codecreatures demo with acceptable framerates :)

What I'd really like to see though is for this forum to start working properly again :p

Regards / ushac
 
Kristof,

Care to speculate on how 3Dlabs' SuperScene AA manages to store a variable number of samples per pixel?

That's the kind of thing that sounds ideal, but for the life of me I can't figure out how to (efficiently) find a pixel's address in memory with such a scheme...
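Just to spell out what bugs me (a toy illustration of my own, nothing to do with how 3Dlabs actually does it): with a fixed sample count the address of any sample is a closed-form function of the pixel coordinates, but with a variable count the offset of a pixel depends on how many samples every earlier pixel got, so some extra indirection has to sit in between.

```cpp
// Fixed sample count: every pixel owns samplesPerPixel consecutive slots,
// so the address is a closed-form expression - no lookup needed.
inline int fixedSampleAddress(int x, int y, int sample, int width, int samplesPerPixel)
{
    return (y * width + x) * samplesPerPixel + sample;
}
// With a *variable* number of samples per pixel, the offset of pixel (x, y)
// depends on how many samples every earlier pixel was given, so some per-pixel
// pointer or allocation table has to sit in between.
```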

Regards,
Serge
 
First I heard some rumours that Digit-Life / iXBT's info wasn't up to date, and then I ran across this:

Originally posted by Jazz:

Haig, do you think there are advantages in using PS2 (pixel shader) over PS1.3 or PS1.4? I don't know much about graphics cards, so could you tell me: if games were to make use of PS2, would there be a lot of difference if the card only had PS1.3? (I think you know why I may be asking this question :))

and Haig's reply:
Originally posted by Haig:

Jazz - yes there are plenty of advantages of using PS2 which you will find out tomorrow.

If they don't have PS 2.0 support, I find this a pretty strange comment...
The source is: http://forum.matrox.com/mgaforum/Forum8/HTML/001133.html
 
Care to speculate on how 3dlabs SuperScene AA manages to store a variable number of samples per pixel?

I don't think it stores a variable number of samples per pixel, but rather variable sample positions with the same number of samples.
 
Dave,

From this URL : http://www.3dlabs.com/product/technology/superscene_antialiasing.htm

Multisample Buffer
SuperScene antialiasing defines a new OpenGL buffer type - the multisample buffer. The multisample buffer is the same size as the screen and contains multisample pixels just as the image buffer contains image pixels. While there may be a front, back, right, and left image buffer, there is only one multisample buffer.

Multisample Pixel
Each multisample pixel is effectively divided into a 16 by 16 grid from which 2, 4, 8, or 16 samples are taken. Each sample taken has a location as well as a color, depth, and stencil value. When an OpenGL primitive covers a sample, a depth and stencil value is computed using the position of the sample. The sample is conditionally updated with new color, depth, and stencil values depending on the depth and stencil operations. This allows the intersection of primitives to be handled at the sub-pixel level, providing improved edge blending.

The position of a sample taken within the multisample pixel is called the sample location. The sample location for each sample depends on the number of samples and the x and y location of the multisample pixel relative to the origin of the window. The sample locations are varied from multisample pixel to multisample pixel using a pseudo-random noise generator. This creates an apparent increase to the number of samples per multisampled pixel, much as the way color dithering is used to generate an apparent increase to the number of possible displayable colors. Varying sample locations from pixel to pixel also eliminate distracting moiré patterns that could be generated if the same sample locations were used for all multisample pixels.

Dynamic Sample Allocation
Dynamic Sample Allocation - which only allocates the number of memory 'slots' necessary to accurately cover each pixel - assures that the minimum amount of memory is used to perform SuperScene antialiasing while generating the highest image quality. In typical multisample implementations, 16 samples per pixel implies that 16 times the amount of frame buffer memory is required - greatly increasing the cost of the graphics system. Due to this fact, competing multisample implementations - available only on UNIX workstations costing over $100,000 - offer only 4 or 8 samples per pixel. SuperScene antialiasing overcomes this problem by dynamically allocating memory 'slots' for multisample pixels only when they are needed. In most scenes, the majority of the multisample pixels are covered by only one or two primitives with few multisample pixels being covered by three or more primitives. By dynamically allocating sample 'slots', a multisample buffer that is only three to four times the size of the frame buffer is required to render most scenes using 16 samples. This allows Wildcats SuperScene antialiasing to deliver high-performance multisampled antialiasing at significantly lower cost while delivering a higher image quality than competing systems.

Sounds like they are varying the number of samples to me...
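If I'm reading that right, one way to picture it (pure speculation on my part, not 3Dlabs' actual hardware) is a small per-pixel header plus a shared pool of sample 'slots' that only gets tapped once a second primitive partially covers a pixel:

```cpp
// Speculative sketch of dynamic sample allocation: each pixel starts with a
// single resolved value; only when a second primitive partially covers the
// pixel does it get a block of N samples from a shared pool. This illustrates
// the idea in the quoted text, not 3Dlabs' implementation.
#include <cstdint>
#include <vector>

struct Sample { uint32_t color; float depth; uint8_t stencil; };

struct PixelHeader {
    int32_t slotBlock;   // index into the sample pool, or -1 if the pixel is "simple"
    Sample  single;      // used while only one primitive covers the pixel
};

class MultisampleBuffer {
public:
    MultisampleBuffer(int width, int height, int samplesPerPixel)
        : width_(width), samples_(samplesPerPixel),
          headers_(width * height, PixelHeader{ -1, {} }) {}

    // Called when a primitive only partially covers a pixel that was simple so
    // far: promote it by allocating a block of per-sample storage from the pool.
    // (Returned pointer is only valid until the next promote - fine for a sketch.)
    Sample* promote(int x, int y)
    {
        PixelHeader& h = headers_[y * width_ + x];
        if (h.slotBlock < 0) {
            h.slotBlock = (int32_t)pool_.size();
            pool_.resize(pool_.size() + samples_, h.single);  // replicate the old value
        }
        return &pool_[h.slotBlock];
    }

    // Most pixels never call promote(), so the pool stays a small fraction of
    // width * height * samplesPerPixel - which is the whole point.
    size_t poolSamples() const { return pool_.size(); }

private:
    int width_, samples_;
    std::vector<PixelHeader> headers_;
    std::vector<Sample> pool_;
};
```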
 
Personally I think a fixed maximum number of fragments per pixel with associated sub-pixel masks could work quite well. If you use a framebuffer compression method similar to NVIDIA's Z-buffer compression, you would still have low bandwidth requirements with large triangles, and be no worse than multisampling (of much lower quality on edges, of course, since it doesn't have the masks for greater sub-pixel accuracy) when triangles are small... you might even be able to merge fragments for internal edges while they have not even left the pixel cache.
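Something like the following is what I have in mind for the merging step (my own sketch, not any vendor's scheme): two fragments meeting along an interior edge of the same surface have disjoint coverage masks and nearly identical depth, so they can be collapsed before they ever leave the pixel cache.

```cpp
// Illustration only: a fragment carries a sub-pixel coverage mask (e.g. 16
// bits for a 4x4 grid). Fragments from an interior edge of one surface have
// disjoint masks and near-identical depth, so they can be merged into a
// single fragment while still sitting in the pixel cache.
#include <cstdint>
#include <cmath>

struct Fragment {
    uint16_t coverage;   // one bit per sub-pixel sample position
    uint32_t color;
    float    depth;
};

// Returns true (and updates 'a') if 'b' could be folded into 'a'.
bool tryMerge(Fragment& a, const Fragment& b, float depthTolerance)
{
    const bool disjoint  = (a.coverage & b.coverage) == 0;
    const bool sameDepth = std::fabs(a.depth - b.depth) < depthTolerance;
    if (!disjoint || !sameDepth)
        return false;            // genuine silhouette or intersection: keep both
    a.coverage |= b.coverage;    // interior edge: one fragment now covers both halves
    // (A real implementation would blend color weighted by coverage; keeping
    //  either color is close enough when both fragments belong to one surface.)
    return true;
}
```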
 
Sounds like they are varying the number of samples to me...

Ahhh, it's a bit in between actually. Each pixel that requires AA will have the same number of samples applied - i.e. if AA is applied, 16 samples will always be stored for the pixels which need it...
 