Isn't the majority of game media released nowadays exactly that? It's supersampled with ridiculous sample counts, usually using a non-ordered grid (Poisson distributions FTW), and of course better-than-box-filter averaging.
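For anyone who wants to play with the idea: here's a toy dart-throwing sketch of Poisson-disk sample placement. The counts and minimum distance are made up for illustration; real tools would use the grid-accelerated Bridson variant rather than brute force.

```python
import random

def poisson_disk_samples(n, min_dist, tries=30):
    """Toy dart throwing: place up to n points in the unit square so
    that no two are closer than min_dist. (Illustration only; Bridson's
    grid-accelerated algorithm is what you'd actually use.)"""
    pts = []
    attempts = 0
    while len(pts) < n and attempts < n * tries:
        attempts += 1
        p = (random.random(), random.random())
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= min_dist**2 for q in pts):
            pts.append(p)
    return pts

# e.g. 16 well-spread subpixel sample offsets for one pixel footprint
offsets = poisson_disk_samples(16, min_dist=0.2)
```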
It started out as print media only, but these days it's just common practice.
I don't know that I'd say a "majority" actually take that level of care in getting good AA into so-called "bullshots." Most of what I've seen is just obscene oversampling on a regular grid and, at best, a tent filter. To be fair, though, it may be different for the companies that flood the media with a million and one screenshots.
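For reference, that kind of resolve boils down to something like this -- a quick numpy sketch of an n x n regular-grid supersample resolved with a separable tent kernel. The kernel width and normalization here are my own choices, purely for illustration:

```python
import numpy as np

def tent_downsample(img, n):
    """Resolve a 2D (grayscale) image supersampled n x n on a regular
    grid with a separable tent (triangle) kernel, then decimate by n."""
    # Tent weights, e.g. n=4 -> [1,2,3,4,3,2,1], normalized to sum 1.
    k = np.concatenate([np.arange(1, n + 1), np.arange(n - 1, 0, -1)]).astype(float)
    k /= k.sum()
    # Filter separably along rows, then columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)
    # Keep one sample per n x n block, roughly centered.
    return out[n // 2::n, n // 2::n]
```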
Moreover, I'd say that short of those who are really attentive to these sorts of details, I don't think there are even that many people within the industry who can catch the difference. You could apply a sinc filter to heavily oversampled images, and after downsampling, only those who know anything about image processing and sampling theory (an extreme minority, FWIW) would catch the ringing. It usually takes something that dramatically and obviously outlines the failings of a specific filtering technique to show most people how bad it is -- and that often means animations, not still screens.
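If anyone wants to see that ringing in actual numbers, here's a tiny demo: filter a hard step edge with a truncated sinc sized for an assumed 8x downsample and watch the output over- and undershoot.

```python
import numpy as np

# A hard step edge, heavily oversampled.
x = np.arange(-64, 64)
edge = (x >= 0).astype(float)

# Truncated sinc low-pass for an assumed 8x downsample
# (cutoff at the new Nyquist, i.e. 1/16 cycles per original sample).
n = np.arange(-32, 33)
lpf = np.sinc(n / 8.0) / 8.0

filtered = np.convolve(edge, lpf, mode='same')
print(filtered.min(), filtered.max())  # < 0 and > 1: Gibbs ringing
```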
If you really want to be pedantic, it doesn't. If you sampled, at 192 kHz, an audio signal that contained a significant frequency component at, say, (192 - 5) kHz = 187 kHz, you'd end up with something that sounds like a 5 kHz signal.
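You can check that arithmetic in a few lines of numpy, assuming the 192 kHz sample rate as above:

```python
import numpy as np

fs = 192_000                           # assumed 192 kHz sample rate
t = np.arange(0, 0.01, 1.0 / fs)

hi = np.sin(2 * np.pi * 187_000 * t)   # (192 - 5) kHz: above Nyquist (96 kHz)
lo = np.sin(2 * np.pi * 5_000 * t)     # a genuine 5 kHz tone

# The sampled values are identical up to a sign flip -- indistinguishable:
print(np.allclose(hi, -lo))            # True
```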
Fair enough. Though, as you say, you can pre-filter...
The problem with 3D graphics is that we can't easily put in such a pre-filter. It would be incredibly difficult to do, so we just resort to sampling at a higher rate and hope that any higher-frequency components are insignificantly small. You then put in a post-filter to remove frequencies above what is displayable at the target resolution.
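Here's a 1D toy of exactly that pipeline -- the frequencies are arbitrary stand-ins for scene content, not anything from a real renderer:

```python
import numpy as np

target_fs = 48_000
dur = 0.02

def scene(t):
    # Stand-in for "the scene": 1 kHz content we want, plus a 70 kHz
    # component above the target Nyquist (24 kHz) that we cannot
    # pre-filter and that would alias down to 22 kHz.
    return np.sin(2*np.pi*1_000*t) + 0.5*np.sin(2*np.pi*70_000*t)

# Naive: point-sample at the target rate -- the alias is baked in.
naive = scene(np.arange(0, dur, 1.0 / target_fs))

# "Supersample" 4x, post-filter below the target Nyquist, decimate.
hi = scene(np.arange(0, dur, 1.0 / (4 * target_fs)))
n = np.arange(-64, 65)
lpf = np.sinc(n / 4.0) / 4.0        # truncated sinc, ~24 kHz cutoff
clean = np.convolve(hi, lpf, mode='same')[::4]

# Energy at the 22 kHz alias frequency: large for naive, tiny for clean.
for name, s in (("naive", naive), ("post-filtered", clean)):
    spec = np.abs(np.fft.rfft(s)) / len(s)
    freqs = np.fft.rfftfreq(len(s), 1.0 / target_fs)
    print(name, spec[np.argmin(np.abs(freqs - 22_000))])
```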
Well, I can think of a few ways to accumulate samples that can sort of get that effect, but I don't see it as a possibility for hardware pretty much at any point in time. Not so much because the hardware manufacturers don't care, but because certain other constraints can easily supersede the problem of "correctness."
That too. The various methods I can think of for accumulating weighted samples all get wrenches thrown into the mix when overdraw and blending become an issue -- that's part of why direct rasterization is evil. We should be sampling polygons, not drawing them, but again, no such hardware will ever exist on the mass market. We'll just have GPGPU people go through pointless academic exercises to that effect, which run in "realtime" when they draw a scene of 3 objects at 8 FPS. And then more idiots will go around on forums thinking this will be the next big thing.
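For what it's worth, here's the "sampling polygons, not drawing them" idea in toy form: analytic pixel coverage by clipping the triangle to the pixel (Sutherland-Hodgman) and taking the area (shoelace). And note this is exactly where overdraw/blending throws the wrench -- coverage fractions from different polygons don't compose without sub-pixel geometry or at least an ordering.

```python
def clip_poly(poly, inside, intersect):
    """Sutherland-Hodgman: clip a polygon against one half-plane."""
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))
    return out

def pixel_coverage(tri, px, py):
    """Exact fraction of the unit pixel [px,px+1]x[py,py+1] covered by
    triangle tri -- 'sampling' the polygon analytically rather than
    point-sampling it."""
    poly = list(tri)
    for axis, bound, keep_ge in ((0, px, True), (0, px + 1, False),
                                 (1, py, True), (1, py + 1, False)):
        def inside(p, a=axis, b=bound, ge=keep_ge):
            return p[a] >= b if ge else p[a] <= b
        def intersect(p, q, a=axis, b=bound):
            t = (b - p[a]) / (q[a] - p[a])
            return tuple(p[k] + t * (q[k] - p[k]) for k in (0, 1))
        poly = clip_poly(poly, inside, intersect)
        if not poly:
            return 0.0
    # Shoelace formula for the clipped polygon's area.
    area = 0.0
    for i in range(len(poly)):
        x0, y0 = poly[i - 1]
        x1, y1 = poly[i]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

print(pixel_coverage([(0, 0), (4, 0), (0, 4)], 0, 0))  # 1.0: fully covered
```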
Screen blurring doesn't sound like it can solve the temporal aspect of aliasing... the way those near-horizontal, slow-moving edges jolt from frame to frame.
Well, it could, if the blur is so broad and unbiased that it hides any trace of having previously sampled on a regular grid.
Actually, what you really need is some way of adjusting the blurring that is (or at least can be) not so dependent on the regularity of the pixel grid. Which is hard to do in image space, since you've already lost any information about the actual geometry.
Still, I did wonder if there was an edge-AA type solution that might make sense on the PS3, e.g. with SPUs processing geometry (including backface culling) to pick out edges... hmmm...
Does seem like the sort of "thinking-out-of-the-box" solution that may have some merit (on a machine with an overpowered CPU and an underpowered GPU).
There are tricks that use an additional geometry pass and look for edges by seeing that the rendered sample is not quite centered on the pixel, which drives a resampling of the previously rendered image. But the real problem is simply: can you afford to send the geometry down again? If the answer were always yes, I think just about everybody and his brother would use these sorts of tricks.
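Sketched in the abstract, the resolve end of such a trick might look something like the snippet below. Everything here is a hypothetical input -- the per-pixel offsets and coverage would come from that extra geometry pass -- so treat it as a shape of the idea, not any shipping technique:

```python
import numpy as np

def edge_resolve(img, offsets, coverage):
    """Hypothetical resolve: 'offsets' is a per-pixel subpixel vector
    toward the neighbor across a detected edge, 'coverage' is how much
    of the pixel the front surface covers (1.0 where no edge was found).
    Blends each pixel with a resampled neighbor accordingly."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Nearest neighbor across the edge (a real version would filter).
    ny = np.clip(np.rint(ys + offsets[..., 1]).astype(int), 0, h - 1)
    nx = np.clip(np.rint(xs + offsets[..., 0]).astype(int), 0, w - 1)
    c = coverage[..., None]
    return c * img + (1.0 - c) * img[ny, nx]
```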