Talking nasty with Tim Sweeney

No, parallelism must be built into the language for it to be useful. Doesn't have anything to do with the "guy that writes the compiler."
 
Humus said:
CosmoKramer said:
Yes of course SS is costly, primarily in fill-rate (including shader fill-rate) but it gets the job done while reducing all types of aliasing mentioned in this thread.

The problem is that supersampling does NOT get the job done. Multisampling does exactly what it's supposed to do, no more, no less. Supersampling either does a helluva lot more than it needs to do (which sometimes leads to visual flaws such as blurry text) or it does too little, leaving plenty of aliasing in place. Take for instance the problem with sparkling per pixel specular with normal maps. Applying supersampling will not solve this problem. Yes, the artifacts will be moved a tiny bit further away, but you'd probably need something like 64x supersampling or even more to make it really go away.
While I agree that supersampling is too costly to be applied always, I disagree on it not getting the job done.

Multisampling only reduces edge aliasing, the more samples you throw at it, the better. Same goes for supersampling, which reduces all kinds of spatial aliasing. The only time SS does more than it needs to is when the frequency of your shading function becomes too low, e.g. when you're magnifying textures.

As for blurred fonts (using your words):
The hack here is the use of aliased fonts and omitting mipmaps, not the use of supersampling.

Supersampling simply isn't a general solution. There is no general situation-independent solution for internal aliasing at all, unlike edge-aliasing which can gracefully be handled automatically. The only solution in the general case is that the developer who knows all the facts about the shader's behavior deals with any aliasing it may create himself.
Supersampling is a general solution for reducing aliasing, because it simply works, on all kinds of spatial aliasing. You can't fully "solve" the aliasing problem unless you have unlimited resolution.
There are certainly better solutions than supersampling for specific cases (though for several cases, supersampling is the best way). But that's the drawback, you need lots of different solutions for lots of different cases.
 
Xmas said:
Supersampling is a general solution for reducing aliasing, because it simply works, on all kinds of spatial aliasing. You can't fully "solve" the aliasing problem unless you have unlimited resolution.
There are certainly better solutions than supersampling for specific cases (though for several cases, supersampling is the best way). But that's the drawback, you need lots of different solutions for lots of different cases.
I'm not so sure that supersampling is the best way for several cases. Imagine a complex shader that has some sort of visibility test (similar to an alpha test). It is conceivable that only a small portion of the complex shader determines visibility, for example, one texture read. Now, supersampling may be the most general solution to take care of aliasing with alpha tests (if not the best-looking), but in a complex shader there's no reason to calculate the entire shader multiple times, only the portion that determines visibility.
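
A rough sketch of that idea (hypothetical HLSL; opacityMap, baseMap and the quarter-pixel offsets are invented) would sub-sample only the lookup that decides visibility and run the expensive shading once per pixel:

Code:
// Hypothetical ps_3_0-style sketch: supersample only the visibility term.
sampler2D opacityMap;   // assumed alpha/visibility texture
sampler2D baseMap;      // stands in for "the rest of the complex shader"

float4 PartialSupersamplePS(float2 uv : TEXCOORD0) : COLOR
{
    // Quarter-pixel offsets derived from the screen-space UV derivatives.
    float2 du = 0.25 * ddx(uv);
    float2 dv = 0.25 * ddy(uv);

    // Four sub-samples of the visibility test only.
    float coverage = 0.0;
    coverage += step(0.5, tex2D(opacityMap, uv - du - dv).a);
    coverage += step(0.5, tex2D(opacityMap, uv + du - dv).a);
    coverage += step(0.5, tex2D(opacityMap, uv - du + dv).a);
    coverage += step(0.5, tex2D(opacityMap, uv + du + dv).a);
    coverage *= 0.25;

    // The expensive part is evaluated once per pixel.
    float4 color = tex2D(baseMap, uv);
    return float4(color.rgb, coverage);
}

The fractional coverage goes out in alpha, so this assumes the result is alpha-blended rather than alpha-tested.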

I seriously doubt that there will be many situations at all in coming games that would really be best-handled by supersampling.
 
Chalnoth said:
No, parallelism must be built into the language for it to be useful. Doesn't have anything to do with the "guy that writes the compiler."

And HLSL has parallelism built into it, right? So it's the compiler guy's issue how to take what the programmer wrote using HLSL and its paradigms and turn it into machine code for that soi-distant cpu back-end.
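
For what it's worth, the parallelism in HLSL is implicit rather than spelled out: a pixel shader is written as a pure function of its per-pixel inputs, with no reads or writes shared between pixels, which is what leaves a back-end compiler free to run as many instances side by side as the hardware allows. A minimal illustration (hypothetical texture and light names):

Code:
sampler2D diffuseMap;
float3 lightDir;     // assumed unit vector pointing from the surface toward the light
float3 lightColor;

float4 LambertPS(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1) : COLOR
{
    // No dependency on any other pixel: every invocation can run in parallel.
    float ndotl = saturate(dot(normalize(normal), lightDir));
    return float4(tex2D(diffuseMap, uv).rgb * lightColor * ndotl, 1.0);
}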
 
geo said:
And HLSL has parallelism built into it, right? So it's the compiler guy's issue how to take what the programmer wrote using HLSL and its paradigms and turn it into machine code for that soi-distant cpu back-end.
But it's not just that. It assumes the existence of a fair amount of specialized hardware that's just always going to be very slow to emulate on a CPU (triangle setup, texture filtering). Not only that, but due to design differences, the GPU's always going to be many times faster. And no matter how good your compiler is, using that standard Direct3D or OpenGL interface with a software renderer is always going to be very inefficient. There are more efficient (if lower-quality) ways to get things done if you're going to make a software renderer.
 
Chalnoth said:
geo said:
And HLSL has parallelism built into it, right? So it's the compiler guy's issue how to take what the programmer wrote using HLSL and its paradigms and turn it into machine code for that soi-distant cpu back-end.
But it's not just that. It assumes the existence of a fair amount of specialized hardware that's just always going to be very slow to emulate on a CPU (triangle setup, texture filtering). Not only that, but due to design differences, the GPU's always going to be many times faster. And no matter how good your compiler is, using that standard Direct3D or OpenGL interface with a software renderer is always going to be very inefficient. There are more efficient (if lower-quality) ways to get things done if you're going to make a software renderer.

Hardware is cheap. Human assets are expensive. I really do take your point, and (finally winding back to Sweeney!) 2007 is not doable for what I'm pointing at. 2017? 2027? Could be -- and high-level languages are an enabler on that path.
 
geo said:
Hardware is cheap. Human assets are expensive. I really do take your point, and (finally winding back to Sweeney!) 2007 is not doable for what I'm pointing at. 2017? 2027? Could be -- and high-level languages are an enabler on that path.
Well, looking that far into the future is rather unrealistic, as we are fast approaching the limitations of silicon-based technologies. We have entered the region of diminishing returns, which will slow down the implementation of smaller and more advanced processes. It is quite possible that by 2017 we will have some computing technologies other than silicon-based transistor technologies coming onto the market.

Anyway, compilers really aren't that intelligent. You just cannot do that much with a compiler. High-level languages are really there to make complex program design easier for us. Attempting to make code designed for one architecture work on another just isn't going to work well in general, no matter how intelligent your compiler.
 
Xmas said:
Multisampling only reduces edge aliasing, the more samples you throw at it, the better. Same goes for supersampling, which reduces all kinds of spatial aliasing. The only time SS does more than it needs to is when the frequency of your shading function becomes too low, e.g. when you're magnifying textures.

But the difference is that in 99% of real-world situations all of the scene's edge aliasing can be taken care of with a constant multisample factor, and, more importantly, no work is wasted. Pretty much all the multisampling effort by the chip contributes to the quality of the final picture, while supersampling commonly wastes huge amounts of work. You mention textures, but that's a solved problem with mipmapping and AF; supersampling on textures is already wasted work as it is. That supersampling is not the general solution is obvious from the huge number of samples that would be needed for a supersampled solution to reach the quality of anisotropic filtering.
Also, it's not like low-frequency functions are uncommon or anything. I would say they are more common than high-frequency functions. And whenever you do have those high-frequency functions, supersampling seldom does a good enough job to be worth it.

Xmas said:
You can't fully "solve" the aliasing problem unless you have unlimited resolution.
There are certainly better solutions than supersampling for specific cases (though for several cases, supersampling is the best way). But that's the drawback, you need lots of different solutions for lots of different cases.

Which is the problem with supersampling: no number of samples will ever cover all cases, and many common cases cannot be solved without a very high number of samples. A developer, however, can make proper adjustments to reduce frequencies and prevent these artifacts from occurring in the first place. There will never be a toggle-on/off solution for internal aliasing. But a developer can always deal with it. In the worst case, nothing prevents him from supersampling in the shader if necessary.
 
Xmas wrote: You can't fully "solve" the aliasing problem unless you have unlimited resolution.

The solution to aliasing is to filter properly. If the scene is defined by a sampled image that is itself properly filtered (e.g. a texture taken from a photograph), then filtering by blending the samples can be made to work quite well, provided that there are enough samples relative to the resolution of the final image.

If the scene is defined by a function that is defined everywhere in space, it can still be filtered by generating point samples and then blending them. If the function is linear (e.g. Gouraud shading) or close to linear, this works well, because an even (and fairly coarse) distribution of point samples can accurately model the function.

Blending point samples doesn't work as well for non-linear functions, e.g. specular lighting. It can take a *lot* of point samples to accurately approximate a non-linear function. This is quite wasteful of memory bandwidth, since the number of samples must be chosen for the worst-case non-linearity, which typically applies to a very small number of pixels on the surface. The number of samples per pixel typically must be constant for the entire scene, so *all* surfaces suffer the same inefficiency. If that isn't bad enough, the samples per pixel typically must be chosen before rendering the scene and probably can't be changed from frame to frame, so it is hard to make it adaptable to the degree of non-linearity in the functions defining the scene.
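
A toy illustration of that non-linearity (hypothetical HLSL; the normals, half vector and exponent are arbitrary):

Code:
// Average of the shaded samples -- what blending point samples converges to.
float AverageOfShaded(float3 n0, float3 n1, float3 h, float power)
{
    return 0.5 * (pow(saturate(dot(n0, h)), power) +
                  pow(saturate(dot(n1, h)), power));
}

// Shading a single filtered (averaged) input instead.
float ShadedAverage(float3 n0, float3 n1, float3 h, float power)
{
    return pow(saturate(dot(normalize(n0 + n1), h)), power);
}

With n0 and n1 tilted about 20 degrees to either side of h and power = 64, the first comes out around 0.02 while the second is 1.0. A coarse, even set of point samples tracks a linear function closely, but it takes a great many samples to converge for a sharply non-linear one.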

So while SSAA does reduce surface aliasing, it comes at a huge cost and it is at best only a limited solution to the problem. The other class of solutions is to change the rendering algorithm to take account of the size and shape of the pixels. This is not as simple as SSAA, but on the other hand, it can be made adaptable, so that most of the cost is incurred at the pixels where there is a lot of non-linearity. It increases the processing required per pixel, but processing power per pixel rendered is increasing faster than memory bandwidth per pixel rendered.

Edge aliasing is a somewhat different problem. Algorithmic edge filtering is only possible if the rendered triangles are sorted in some fashion. Typically this is far too expensive to even consider. So at present, MSAA or SSAA are the only practical approaches for reducing edge aliasing.

Let's consider two cases for edge aliasing. First, suppose that the object being rendered is small relative to a pixel. In that case, neither SSAA nor MSAA works very well, since the object can fall between the samples. This can cause a moving object to flash as it covers different numbers of samples at different positions. So there has to be some other solution for that problem.

MSAA works very well for removing edge aliasing if the object is large compared to the pixels. In that case, a correctly jittered MSAA pattern (e.g. rotated grid) allows N samples to provide nearly Nx higher resolution in both the X and Y dimensions. Extra memory bandwidth is mostly just required for pixels at surface edges, so the cost is adaptive relative to the size of the primitives. The effective increase in resolution is constant over the scene, and can be chosen in advance.
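
For reference, a common 4-sample rotated-grid layout looks something like this (coordinates are fractions of a pixel; actual hardware patterns vary):

Code:
// Each sample has a distinct x and a distinct y coordinate, which is what
// gives close to 4x effective resolution on both near-horizontal and
// near-vertical edges.
static const float2 rotatedGrid4[4] =
{
    float2(0.125, 0.375),
    float2(0.375, 0.875),
    float2(0.625, 0.125),
    float2(0.875, 0.625)
};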

So I think that for edge aliasing, MSAA is a far better solution than SSAA. For surface aliasing, SSAA certainly has some benefits, but it seems to me to be both a costly and an incomplete solution. But what do I know? In the end, the real answer will come from game designers who either choose to develop shaders that filter algorithmically or choose to significantly reduce the number of pixels that can be rendered per frame. Possibly they will choose both, picking whatever level of SSAA the memory bandwidth will support and solving any remaining surface aliasing problems algorithmically.

Enjoy, Aranfell
 
aranfell said:
The other class of solutions is to change the rendering algorithm to take account of the size and shape of the pixels. This is not as simple as SSAA, but on the other hand, it can be made adaptable, so that most of the cost is incurred at the pixels where there is a lot of non-linearity. It increases the processing required per pixel, but processing power per pixel rendered is increasing faster than memory bandwidth per pixel rendered.

Well, what we also know is that developers have real limits on how much effort they can spend on any given problem, and the easier you make it for them to tackle one, the more likely it is that they will.

This leads me to think it would be awfully nice to have one or more of "the other class of solutions" you point at above standardized and made easier to use either thru the API or the tools available to program for the API, or both.

Anybody doing anything about that?
 
aranfell said:
So while SSAA does reduce surface aliasing, it comes at a huge cost and it is at best only a limited solution to the problem. The other class of solutions is to change the rendering algorithm to take account of the size and shape of the pixels. This is not as simple as SSAA, but on the other hand, it can be made adaptable, so that most of the cost is incurred at the pixels where there is a lot of non-linearity. It increases the processing required per pixel, but processing power per pixel rendered is increasing faster than memory bandwidth per pixel rendered.
More importantly, in my opinion, is that supersampling requires more fillrate for the reduction in aliasing than a shader-based method.
 
geo said:
Well, what we also know is that developers have real limits on how much effort they can spend on any given problem, and the easier you make it for them to tackle one, the more likely it is that they will.

This leads me to think it would be awfully nice to have one or more of "the other class of solutions" you point at above standardized and made easier to use either thru the API or the tools available to program for the API, or both.
I expect the "other class of solutions" to be made available through shader libraries in game engines, not through the API.
 
Chalnoth said:
geo said:
Well, what we also know is that developers have real limits on how much effort they can spend on any given problem, and the easier you make it for them to tackle one, the more likely it is that they will.

This leads me to think it would be awfully nice to have one or more of "the other class of solutions" you point at above standardized and made easier to use either thru the API or the tools available to program for the API, or both.
I expect the "other class of solutions" to be made available through shader libraries in game engines, not through the API.

Which wouldn't be inconsistent with tools/SDKs making it easier to develop those shader libraries. It would be a nice differentiator with OGL for MS. These things are getting so complex, and game engines are such a huge investment, that I wonder how long it will be before MS buys itself one, or a team to develop them.
 
Well, I'm sure HLSL and GLSL could use some extensions to make it easier to use shader libraries in general, but I don't think anything much beyond that is really feasible.
 
Geo wrote: it would be awfully nice to have one or more of "the other class of solutions" you point at above standardized and made easier to use...

Basically, the idea is to convert a function that defines a value at any point in space to a "filtered" form of the same function, which defines the result of applying a filter function around any point in space, where the area of the filter depends on pixel/sample size. This solution would seem to be specific to the function being computed, so I'm not clear how to "standardize" it in the sense of "just do this and your shader will work," except for "blend lots of point samples", of course, which is the attraction of SSAA.
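
One small, concrete instance of that conversion is replacing a hard threshold with a version filtered over the pixel footprint (a sketch; fwidth and smoothstep are standard HLSL intrinsics, everything else is made up):

Code:
// A step function band-limited to roughly the pixel footprint.
float FilteredStep(float edge, float x)
{
    float w = fwidth(x);                       // how much x changes across one pixel
    return smoothstep(edge - w, edge + w, x);  // ramp instead of a hard edge
}

A procedural stripe or checker pattern built from FilteredStep() fades toward a flat average as the features approach pixel size, instead of breaking up into aliasing.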

On the other hand, filtered non-linear blending functions may not be that hard to approximate in practice. In particular, the filtered function may not need to be very complex in order to work well. For example, there's an old Gupta/Sproull paper that shows how to make beautiful AA lines by using a special sort of bell-shaped function to produce the alpha coverage based on the distance from the line center. They computed the bell curve by filtering on a circular region with linear fall-off, but a couple of employers ago I saw demonstrations that AA lines look just as good with a three-segment approximation. The key turned out to be having a "falloff" region on the curve -- that's what made it look a lot better than a one-segment "sawtooth" falloff function.
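
A three-segment coverage curve of that sort might be sketched like this (hypothetical HLSL; the break points are invented tuning values):

Code:
float LineCoverage(float dist)     // perpendicular distance from the line centre, in pixels
{
    const float flatEnd    = 0.4;  // segment 1: full coverage out to here
    const float falloffEnd = 1.2;  // segment 3: zero coverage beyond here
    // Segment 2: linear falloff in between.
    return saturate((falloffEnd - dist) / (falloffEnd - flatEnd));
}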

So I suspect that if people writing shaders think in terms of doing quick hacks to adjust the shading equations based on pixel size (e.g. to flatten specular highlights in regions where the normal vector is changing fast relative to the pixel size), then they may find that the result looks as good as or better than SSAA, without the massive fillrate multiple.
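
As one hypothetical form of such a quick hack (the constant k is an invented tuning value), the specular exponent can be damped wherever the normal is changing quickly across the pixel:

Code:
float3 DampedSpecular(float3 n, float3 h, float3 specColor, float power)
{
    float variation = length(fwidth(n));                     // per-pixel change of the normal
    float k = 8.0;                                           // made-up strength factor
    float adjPower = power / (1.0 + k * power * variation);  // broaden (flatten) the lobe
    return specColor * pow(saturate(dot(n, h)), adjPower);
}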

Enjoy, Aranfell
 
As pixels become increasingly programmable there are fundamental, hard-to-solve problems.

The classic example is a single pixel of a high-gloss, bump-mapped surface that represents three facets in equal proportions. The reflected vector from each of these three facets is lucky enough to impinge directly upon a light source, one each of red, green and blue of identical intensity.

The only 'correct' result for that pixel is white (at one third the intensity of the light sources). Tiny shifts in eye position should also generate one-third red, one-third blue, one-third green, all the combinations, and black (as you make individual reflections miss the light source).

I can't see a 'filtering' algorithm (that can be solved for 'one pixel') that can generate this (although I'd love to be proved wrong in this respect). You have to upsample one way or another - some raytrace-type solution perhaps, or more obviously supersampling (but out to infinity might be required).
 
I'm not a shader programmer, but I have a couple thoughts about the case of rendering a pixel that contains three micro-facets that point directly to red, green, and blue lights (and which therefore should appear white).

1) Does SSAA handle this case well? It seems to me that with SSAA, slight movements of the surface would cause the pixel to flash between combinations of the three lights, depending on exactly where the samples are relative to the micro-facets, even when the "correct" result (if we cast hundreds of rays from the pixel) still should be white.

2) What if we zoom out, so that the micro-facets are very small relative to the pixel? SSAA can't use enough samples to fix that. The pixel shader could handle this by adaptively sampling multiple positions in the bump map and blending the results to produce the pixel color (see the first sketch after point 4 below). That should allow the shader to produce a white(ish) pixel. Presumably that should work even if we don't zoom out. The exception would be for a pixel that is at the edge of its surface, but that is what centroid sampling is for.

3) As an alternative, does anyone ever mip-map the bump map? It seems unlikely that simply filtering the normal vectors would create a good mip-map. So maybe the author needs to provide bump maps in multiple scales, or perhaps there could be an extra channel of information that expresses the rate-of-change of the normal vectors near the sample. This could be encoded in separate channel(s), or possibly could be encoded as the length of the normal vector (see the second sketch after point 4 below). A normal that points somewhere between the three lights, with a shallow specular highlight, ought to produce a white-ish result.

4) Finally, the map is not the territory. The purpose of micro-faceting is to create more realistic rendering effects. If the author wants individual pixels to flash on really tiny movements, perhaps there is another way to achieve it. If the author wants smooth color transitions due to small changes in position (e.g. due to water ripples), then perhaps the basic micro-faceting algorithm needs to be adjusted. In any case, knowing the "correct" result in an ultimate sense requires knowing what the author is trying to achieve.
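
A sketch of the adaptive blending from point 2 (hypothetical HLSL; the 2x2 footprint offsets and names are made up):

Code:
sampler2D bumpMap;   // assumed tangent-space normal map

float LitSample(float2 p, float3 h, float power)
{
    float3 n = normalize(tex2D(bumpMap, p).xyz * 2.0 - 1.0);
    return pow(saturate(dot(n, h)), power);
}

float BumpSpecular(float2 uv, float3 h, float power)
{
    // Quarter-pixel offsets spanning the pixel footprint.
    float2 du = 0.25 * ddx(uv);
    float2 dv = 0.25 * ddy(uv);
    // Blend four evaluations of just the bump-lit term.
    return 0.25 * (LitSample(uv - du - dv, h, power) +
                   LitSample(uv + du - dv, h, power) +
                   LitSample(uv - du + dv, h, power) +
                   LitSample(uv + du + dv, h, power));
}

And for point 3, one published take on the "length of the normal vector" idea is Toksvig-style normal-map mipmapping: if the mip levels store averaged, unnormalized normals, the fetched vector gets shorter wherever the underlying normals disagree, and that shortness can drive a lower effective specular exponent. A rough sketch (hypothetical HLSL, invented names):

Code:
sampler2D normalMap;   // mipmapped; mips built by averaging unnormalized normals

float AntialiasedSpecular(float2 uv, float3 h, float power)
{
    float3 na  = tex2D(normalMap, uv).xyz * 2.0 - 1.0;   // averaged normal, length <= 1
    float  len = max(length(na), 1e-4);
    float3 n   = na / len;
    // Shorter normal => more variation under this pixel => broader, duller highlight.
    float adjPower = (len * power) / (len + power * (1.0 - len));
    return pow(saturate(dot(n, h)), adjPower);
}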

Enjoy, Aranfell
 
(A good point is that the same issue exists if the polygons making up the object are smaller than a single pixel).

1) As I noted, SSAA handles the case well as the number of samples tends towards infinity. Otherwise, you will see aliasing artifacts. The more samples you take, the higher the chance of seeing the correct result.

2) Ignoring the first two sentences, which confuse me: some approach along these lines may work if combined with zero-cost branching. But see the answer to 3...

3) Yes, bump-maps are usually mip-mapped (because without them performance is horrible) and as you say, any conventional filter cannot generate the case I describe. The solution you describe isn't particularly accurate, though: the majority of the time the answer should be black, not any type of white at all, if the surface is highly glossy.

4) I agree, although I don't think the case I describe is a 'designed-in' one - for which a designer would certainly come up with a better solution - it's one that just happens to occur as an object that is usually seen relatively close up moves into the distance.

I suspect that although these particular "rock hard to solve" cases aren't particularly important in isolation, they will contribute to a loss of photorealism as the rest of the rendering improves. Overall, the point of adding it to the discussion was that there are no 'easy hacks' to replace the ideal of infinite supersampling.
 
Dio said:
Overall, the point of adding it to the discussion was that there are no 'easy hacks' to replace the ideal of infinite supersampling.
Or rather, there are no easy hacks that work on a wide variety of situations. Each one that does work well only does so in a very specific situation. I'm hopeful that shader libraries will, within a couple of years, include the option for shader antialiasing for most shaders.
 
Tim said:
MasterBaiter said:
You guys need to upgrade your monitors. ;)

I noticed that Tim mentioned Zbrush, but I'd be curious to know what other software they are using. I would assume Maya....

They might use Maya, but 3ds Max has a bigger market share in the gaming industry, has a better workflow for polygon modeling, and is in general more gaming-oriented. Maya has an edge with regard to animation, and has improved a lot when it comes to gaming-oriented features.

Well, we do a lot of special animation, extremely high-rez, everything you can imagine - and yet we use max only for poly-stuff. ;)

The development tools are available for both Maya and Max.

True.
 