Humus said:
CosmoKramer said:
Yes of course SS is costly, primarily in fill-rate (including shader fill-rate), but it gets the job done while reducing all types of aliasing mentioned in this thread.

The problem is that supersampling does NOT get the job done. Multisampling does exactly what it's supposed to do, no more, no less. Supersampling either does a helluva lot more than it needs to (which sometimes leads to visual flaws such as blurry text) or it does too little, leaving plenty of aliasing in place. Take for instance the problem of sparkling per-pixel specular with normal maps. Applying supersampling will not solve this problem. Yes, the artifacts will be moved a tiny bit further away, but you'd probably need something like 64x supersampling or even more to make them really go away.

While I agree that supersampling is too costly to be applied always, I disagree that it doesn't get the job done.
Supersampling is a general solution for reducing aliasing, because it simply works on all kinds of spatial aliasing. You can't fully "solve" the aliasing problem unless you have unlimited resolution.

Supersampling simply isn't a general solution. There is no general, situation-independent solution for internal aliasing at all, unlike edge aliasing, which can gracefully be handled automatically. The only solution in the general case is for the developer, who knows all the facts about the shader's behavior, to deal with any aliasing it may create himself.
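To make that concrete: the sparkling-specular case mentioned above is usually attacked inside the shader itself, for instance by damping the specular exponent wherever the mipmapped normal map gets rough (Toksvig's method). The following is only a minimal HLSL sketch of that idea; the sampler name, the exponent of 64, and the entry point are invented for illustration.

[code]
// Hypothetical per-shader specular antialiasing (Toksvig-style sketch).
sampler2D normalMap;

float4 SpecularAA_PS(float3 lightDir : TEXCOORD0,
                     float3 halfVec  : TEXCOORD1,
                     float2 uv       : TEXCOORD2) : COLOR
{
    // Fetch the normal WITHOUT renormalizing first: mipmapped normals
    // get shorter where the bump detail is rough, and that shortening
    // measures how much the highlight would sparkle.
    float3 n   = tex2D(normalMap, uv).xyz * 2.0 - 1.0;
    float  len = max(length(n), 1e-4);

    // Toksvig factor: shrink the specular exponent where normal
    // variance is high, trading sparkle for a wider, dimmer highlight.
    float specPower = 64.0;   // assumed material constant
    float ft        = len / (len + specPower * (1.0 - len));
    float aaPower   = max(1.0, specPower * ft);

    float3 N    = n / len;
    float  spec = pow(saturate(dot(N, normalize(halfVec))), aaPower);
    float  diff = saturate(dot(N, normalize(lightDir)));

    // ft also scales the peak down so the widened lobe keeps roughly
    // the same total energy.
    float lum = diff + spec * ft;
    return float4(lum, lum, lum, 1.0);
}
[/code]

This runs at one sample per pixel and band-limits the shading function directly, which is the kind of shader-specific knowledge being argued for here: no fixed supersampling rate can exploit it.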
Xmas said:
Supersampling is a general solution for reducing aliasing, because it simply works, on all kinds of spatial aliasing. You can't fully "solve" the aliasing problem unless you have unlimited resolution. There are certainly better solutions than supersampling for specific cases (though for several cases, supersampling is the best way). But that's the drawback: you need lots of different solutions for lots of different cases.

I'm not so sure that supersampling is the best way for several cases. Imagine a complex shader that has some sort of visibility test (similar to an alpha test). It is conceivable that only a small portion of the complex shader determines visibility, for example a single texture read. Now, supersampling may be the most general solution for taking care of aliasing with alpha tests (if not the best-looking), but in a complex shader there's no reason to calculate the entire shader multiple times, only the portion that determines visibility. A sketch of that idea follows below.
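Here is one way that partial supersampling could look in HLSL. Everything in it is hypothetical (the sampler names, the 4-sample rotated-grid pattern, the 0.5 alpha threshold); it only illustrates evaluating the cheap visibility term several times while the expensive shading runs once.

[code]
sampler2D alphaMap;   // assumed: the single texture read that decides visibility
sampler2D baseMap;    // stands in for the rest of an expensive shader

float4 PartialSSAA_PS(float2 uv : TEXCOORD0) : COLOR
{
    // Pixel footprint in texture space (derivatives need ps_2_x or later).
    float2 dx = ddx(uv) * 0.25;
    float2 dy = ddy(uv) * 0.25;

    // Run ONLY the cheap visibility test at four sub-pixel offsets
    // (a rotated-grid pattern) and count how many pass.
    float coverage = 0.0;
    coverage += (tex2D(alphaMap, uv + dx + 3.0 * dy).a > 0.5) ? 0.25 : 0.0;
    coverage += (tex2D(alphaMap, uv + 3.0 * dx - dy).a > 0.5) ? 0.25 : 0.0;
    coverage += (tex2D(alphaMap, uv - dx - 3.0 * dy).a > 0.5) ? 0.25 : 0.0;
    coverage += (tex2D(alphaMap, uv - 3.0 * dx + dy).a > 0.5) ? 0.25 : 0.0;

    clip(coverage - 0.125);   // all four samples failed: kill the pixel

    // The expensive part of the shader runs exactly once per pixel;
    // the fractional coverage is resolved by ordinary alpha blending.
    float4 color = tex2D(baseMap, uv);
    color.a = coverage;
    return color;
}
[/code]

The fractional coverage then has to be resolved by alpha blending (with the usual sorting caveats); in spirit this is what alpha-to-coverage hardware automates.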
Chalnoth said:
No, parallelism must be built into the language for it to be useful. Doesn't have anything to do with the "guy that writes the compiler."
geo said:
And HLSL has parallelism built into it, right? So it's the compiler guys' issue how to take what the programmer wrote using HLSL and its paradigms and turn it into machine code for that soi-disant CPU back-end.

But it's not just that. It assumes the existence of a fair amount of specialized hardware that is always going to be very slow to emulate on a CPU (triangle setup, texture filtering). Not only that, but due to design differences, the GPU is always going to be many times faster. And no matter how good your compiler is, using the standard Direct3D or OpenGL interface with a software renderer is always going to be very inefficient. There are more efficient (if lower-quality) ways to get things done if you're going to write a software renderer.
geo said:
Hardware is cheap. Human assets are expensive. I really do take your point, and (finally winding back to Sweeney!) 2007 is not doable for what I'm pointing at. 2017? 2027? Could be -- and high-level languages are an enabler on that path.

Well, looking that far into the future is rather unrealistic, as we are fast approaching the limits of silicon-based technologies. We have entered the region of diminishing returns, which will slow down the introduction of smaller and more advanced processes. It is quite possible that by 2017 we will have computing technologies other than silicon-based transistors coming onto the market.
Xmas said:
Multisampling only reduces edge aliasing; the more samples you throw at it, the better. The same goes for supersampling, which reduces all kinds of spatial aliasing. The only time SS does more than it needs to is when the frequency of your shading function becomes too low, e.g. when you're magnifying textures.
aranfell said:
So while SSAA does reduce surface aliasing, it comes at a huge cost, and it is at best only a limited solution to the problem. The other class of solutions is to change the rendering algorithm to take account of the size and shape of the pixels. This is not as simple as SSAA, but on the other hand, it can be made adaptable, so that most of the cost is incurred at the pixels where there is a lot of non-linearity. It increases the processing required per pixel, but processing power per pixel rendered is increasing faster than memory bandwidth per pixel rendered.

More importantly, in my opinion, supersampling requires more fillrate for a given reduction in aliasing than a shader-based method does.
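As a toy illustration of that "other class of solutions", here is a hypothetical HLSL fragment that band-limits a hard-edged procedural stripe pattern analytically: it measures the pixel's footprint with screen-space derivatives and widens the edge to exactly one pixel, so only pixels that actually straddle an edge change. The pattern and names are invented for illustration.

[code]
float4 AnalyticAA_PS(float2 uv : TEXCOORD0) : COLOR
{
    // A hard-edged procedural stripe pattern: infinite frequency at the
    // transitions, so it aliases at any finite sample count.
    float t       = uv.x * 10.0;
    float stripes = frac(t);

    // fwidth(t) is how far t moves across one pixel: the size and shape
    // of the pixel expressed in the pattern's own space.
    float w = fwidth(t);

    // Replace the step edge with a ramp one pixel wide. The cost is a
    // handful of ALU ops per pixel, no extra fillrate or bandwidth.
    float s = smoothstep(0.5 - w, 0.5 + w, stripes);

    return float4(s, s, s, 1.0);
}
[/code]

The same measure-the-footprint trick generalizes to procedural noise, specular terms, and alpha edges, which is why it adapts its cost to where the non-linearity actually is.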
I expect the "other class of solutions" to be made available through shader libraries in game engines, not through the API.geo said:Well, what we also know is that developers have real limits on how much effort they can spend on any given problem, and that the easier you make it for them to do so, the more likely it is that they will do so.
This leads me to think it would be awfully nice to have one or more of "the other class of solutions" you point at above standardized and made easier to use either thru the API or the tools available to program for the API, or both.
Dio said:
Overall, the point of adding it to the discussion was that there are no "easy hacks" to replace the ideal of infinite supersampling.

Or rather, there are no easy hacks that work in a wide variety of situations. Each one that does work well only does so in a very specific situation. I'm hopeful that shader libraries will, within a couple of years, include the option of shader antialiasing for most shaders.
Tim said:
MasterBaiter said:
You guys need to upgrade your monitors.
I noticed that Tim mentioned ZBrush, but I'd be curious to know what other software they are using. I would assume Maya...
They might use Maya, but 3ds Max has a bigger market share in the games industry, has a better workflow for polygon modeling, and is in general more gaming-oriented. Maya has an edge with regard to animation, and it has improved a lot when it comes to gaming-oriented features.

The development tools are available for both Maya and Max.