Will next gen games be fully anti-aliased?

"Will next gen games be fully anti-aliased?"

Yes! Blur filters à la MLAA and FXAA ensure that the screen will be, errr, fully anti-aliased. The remaining edges will be called pixel tears, caused by all the washed-out non-edge detail being suppressed. PS: for $10 I can anti-alias all your current-gen games. I take no responsibility for any damage the Vaseline will do to your screen, though.

All I know is that GoWIII's MLAA has been the best AA implementation of any console game I've seen this generation. I didn't spot any blur or subpixel aliasing, and I'd be perfectly happy for all games next gen to have that level of image quality or better.
 
Silly question:
Why not just render at a very high resolution (say, QFHD 2160p or higher) and then use a high-quality hardware downscaler to get to 1080p?
Wouldn't that achieve better quality than multisampling + blur filters?



As higher-DPI screens become standard, I think anti-aliasing beyond 2xMSAA + FXAA will eventually lose its importance.

For example, in my experience a 1080p 15" laptop screen will hardly benefit from more than 2xMSAA.
 
Tottentranz: Shifty already answered that question above. >_>

Ok sorry.

But wouldn't higher-resolution rendering + downscaling provide the best possible visual outcome?
Is the performance cost so big that it doesn't justify pursuing this method over alternatives that bring their own disadvantages?



BTW, why isn't this a default feature in PC graphics drivers?
How hard would it be to "trick" games into accepting huge resolutions and then have the driver handle the downscaling to the monitor's native resolution (as we already see with videos)?

I know I would definitely enjoy that in less demanding games like console ports.
 
But wouldn't higher-resolution rendering + downscaling provide the best possible visual outcome?
Of course.

Is the performance cost so big that it doesn't justify pursuing this method over alternatives that bring their own disadvantages?
Well, you're increasing the texture sampling and shading requirements linearly with the pixel count: rendering at 2160p means shading four times as many pixels as at 1080p.
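To make that concrete, here's a toy numpy sketch of the render-high-then-downscale idea with a 2x2 box filter (which is exactly a 4x ordered-grid supersample resolve); the random frame is just a stand-in for rendered output:

```python
import numpy as np

def box_downscale_2x(img):
    """Average each 2x2 block: (2H, 2W, C) -> (H, W, C).
    Equivalent to a 4x ordered-grid supersample resolve."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2, -1).mean(axis=(1, 3))

# 2160p has four times the pixels of 1080p, so texture sampling and
# shading costs scale by roughly 4x before the downscale even runs.
hi_res = np.random.rand(2160, 3840, 3)   # stand-in for a 2160p frame
lo_res = box_downscale_2x(hi_res)
print(lo_res.shape)                      # (1080, 1920, 3)
```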
 
I sometimes downscale from 2720x1536 to my 1360x768 TV, and it's about as good as 4x ordered-grid SSAA (which is effectively what it is). It's not even close to AMD's RGSSAA modes, but I definitely prefer it over FXAA and the like.

I can see post-process AA being the preferred route because it's cheap and easy. Combined with 1080p being a lot nicer than 640-720p, it will be a noticeable improvement for the console folk.
 
4x SSAA is only going to provide the same edge anti-aliasing as 4xMSAA. For really smooth edges, algorithmic processes are more effective. The trick is filtering the edges and not the surfaces/textures.
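A toy numpy sketch of that "filter the edges, not the textures" idea: find luminance discontinuities and blend only there. Real MLAA classifies edge shapes and blends along their length; the 3x3 blur here is just a crude stand-in:

```python
import numpy as np

def edge_only_aa(img, threshold=0.1):
    """Toy MLAA-flavoured filter: blend only where a luminance
    discontinuity is detected, leaving flat surfaces untouched."""
    luma = img @ np.array([0.299, 0.587, 0.114])
    # Horizontal and vertical luminance deltas mark candidate edges.
    edge = np.zeros_like(luma, dtype=bool)
    edge[:, :-1] |= np.abs(np.diff(luma, axis=1)) > threshold
    edge[:-1, :] |= np.abs(np.diff(luma, axis=0)) > threshold
    # 3x3 box blur as a crude stand-in for shape-aware blending.
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    blurred = sum(pad[y:y + img.shape[0], x:x + img.shape[1]]
                  for y in range(3) for x in range(3)) / 9.0
    out = img.copy()
    out[edge] = blurred[edge]  # filter the edges, not the textures
    return out
```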
 
From a PC? How do you do that?

The NVIDIA custom resolution trick: you define a huge resolution and let the GPU downscale it to your native one. It works very reliably, but it only helps about as much as OGSSAA. Still, considering that other AA rarely works these days, you often can't use transparency AA, shader aliasing is everywhere, and the current FXAA/MLAA stuff sucks, the downscale trick is useful.

And while MSAA is better for poly edges, the downscale trick arguably improves the consistency of the whole picture. It is supersampling, after all, and that counts for something even if it's not RGSSAA. Sometimes I add 2x MSAA on top for a little extra aliasing-killing, but it's hard to notice.
 
Last time I dug around in Catalyst Control Centre there was an option to enable supersample AA. It does a great job on edges, but I believe it blurs textures by softening all detail in the frame:

http://www.hardocp.com/article/2009/09/22/amds_ati_radeon_hd_5870_video_card_review/5

"An old feature being brought back to life on the Radeon HD 5800 series is Supersample AA! Yes, full Supersample AA is now selectable in Catalyst Control Center. You will be able to select whether Multisample AA or Supersample AA is used in all your games. This of course demands a LOT from the hardware, but it has the performance to at least allow some level of it in most games, at lower resolutions. We will show you how this looks in games on the next page. It is great to have this option now present."


"Supersampling AA definitely works in reducing aliasing, on all textures, polygons and even eliminates shader aliasing. However, as you can see above in all three screenshot comparisons, enabling Supersampling AA also causes a reduction in texture quality. By enabling even the lowest 2X Supersampling AA we found that textures started to look blurry, losing their detail and crispness. This, unfortunately, is the nature of Supersampling AA. It literally is a full screen AA method that eliminates jaggies on EVERYTHING, but at the same time degrades texture quality."
 
Ok sorry.

But wouldn't higher-resolution rendering + downscaling provide the best possible visual outcome?
Is the performance cost so big that it doesn't justify pursuing this method over alternatives that bring their own disadvantages?



BTW, why isn't this a default feature in PC graphics drivers?
How hard would it be to "trick" games into accepting huge resolutions and then have the driver handle the downscaling to the monitor's native resolution (as we already see with videos)?

I know I would definitely enjoy that in less demanding games like console ports.

Adaptive SSAA is already a better solution than this, IMO: less overhead, with the same advantage of being able to sample from a higher-resolution image.
 
Silly question:
Why not just render at a very high resolution (say, QFHD 2160p or higher) and then use a high-quality hardware downscaler to get to 1080p?
No, it wouldn't. To see this, take the Fourier transform of a rectangular grid of points (representing your "QFHD 2160p or higher" source image) and compare the frequency-space result with the transform of a "less regular" placement of sample locations. The latter should shift the unrepresentably high frequencies (i.e. those that alias) further towards the blue end of the spectrum, which makes them easier to filter out.
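A quick 1D numpy sketch of that comparison (domain size and sample spacing are arbitrary): the regular grid's spectrum is a set of sharp spikes, i.e. structured aliases, while jittering the same number of samples spreads that energy into broadband noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N, step = 512, 8                         # domain size, sample spacing

# Regular grid of sample positions (an ordered-grid pattern).
regular = np.zeros(N)
regular[::step] = 1.0

# Jittered samples: one per cell, at a random offset inside it.
jittered = np.zeros(N)
cells = np.arange(0, N, step)
jittered[cells + rng.integers(0, step, size=cells.size)] = 1.0

# Compare power spectra (skipping the DC term). The regular grid
# concentrates its energy into a few tall spikes; jitter spreads it
# out towards high frequencies, where it is easier to filter away.
spec_reg = np.abs(np.fft.rfft(regular))[1:] ** 2
spec_jit = np.abs(np.fft.rfft(jittered))[1:] ** 2
print(spec_reg.max() / spec_reg.mean())  # large: structured aliases
print(spec_jit.max() / spec_jit.mean())  # much smaller: broadband noise
```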
 
Adaptive SSAA is already a better solution than this, IMO: less overhead, with the same advantage of being able to sample from a higher-resolution image.
Are you referring to some sort of special SSAA, or to AMD's Adaptive AA (alpha-texture anti-aliasing)? Combining MSAA and Adaptive/Transparency AA still misses a lot of the aliasing in a current-day game. You have to go far back in time to find a game that is thoroughly anti-aliased by MSAA + AAA/TAA.
 
I don't know if adaptive SSAA is implemented in any GPU solution, but it's a technique that effectively measures what you're rendering and adds samples as needed. It was used in the Cell terrain demo many years ago. You don't waste cycles rendering 16 samples of blue sky or uniform grass, but you would render 16 samples per pixel along edges or noisy surfaces.
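Conceptually something like this toy sketch (the scene function and sample counts are made up, and a real implementation would live in the renderer, not in per-pixel Python):

```python
import numpy as np

def render(x, y):
    """Hypothetical scene: a hard diagonal edge across the frame."""
    return 1.0 if y > 0.7 * x else 0.0

def adaptive_pixel(px, py, base=4, extra=16):
    """Shade with `base` samples; if they disagree (an edge or a
    noisy surface), refine with `extra` more jittered samples."""
    rng = np.random.default_rng(px * 100003 + py)    # per-pixel seed
    samples = [render(px + u, py + v)
               for u, v in rng.random((base, 2))]
    if max(samples) != min(samples):                 # variance found
        samples += [render(px + u, py + v)
                    for u, v in rng.random((extra, 2))]
    return sum(samples) / len(samples)               # resolved colour

# Flat sky costs 4 samples per pixel; edge pixels get 20.
img = np.array([[adaptive_pixel(x, y) for x in range(64)]
                for y in range(64)])
```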
 
I don't know if adaptive SSAA is implemented in any GPU solution, but it's a technique that effectively measures what you're rendering and adds samples as needed. It was used in the Cell terrain demo many years ago. You don't waste cycles rendering 16 samples of blue sky or uniform grass, but you would render 16 samples per pixel along edges or noisy surfaces.
This should be quite easily possible with any deferred renderer using MSAA.
You already have to find edges for the secondary pass that performs supersampling.
In this phase you could mark certain materials, or search for areas with strong normal variation, for extra sampling.

IMHO, the proper way to reduce shading aliasing is in the shader itself; that way you have no sampling limits, and you can do it analytically or prefilter the data (Clean/LEAN mapping, etc.).
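As one concrete flavour of that prefiltering idea: Toksvig's normal-map mipping trick treats the shortening of averaged normals as a variance measure and widens the specular lobe to match (Clean/LEAN mapping store extra moments to do this more accurately). A rough sketch with made-up inputs:

```python
import numpy as np

def toksvig_power(avg_normal, spec_power):
    """Toksvig-style prefiltered specular exponent: mipmapped
    normals that average to a shorter vector indicate a bumpy
    surface at this scale, so the highlight is widened rather
    than left to alias."""
    n = np.linalg.norm(avg_normal)           # < 1.0 on bumpy mips
    ft = n / (n + spec_power * (1.0 - n))    # Toksvig factor
    return ft * spec_power                   # effective exponent

print(toksvig_power(np.array([0.0, 0.0, 1.0]), 64.0))  # flat: ~64
print(toksvig_power(np.array([0.0, 0.0, 0.8]), 64.0))  # bumpy: ~3.8
```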
 
They'll be using the newer versions of FXAA/MLAA, but MSAA is still the best option there is.

This is why I love being a PC gamer: MSAA + TrSAA is the king of IQ!

And it's even better now that TrSAA is fully supported in DX9/10/11 :p
 
This should be quite easily possible with any deferred renderer using MSAA.
You already have to find edges for the secondary pass that performs supersampling.
In this phase you could mark certain materials, or search for areas with strong normal variation, for extra sampling.

IMHO, the proper way to reduce shading aliasing is in the shader itself; that way you have no sampling limits, and you can do it analytically or prefilter the data (Clean/LEAN mapping, etc.).

Well, the problem is that you don't want to supersample just the lighting pass; you also want to supersample the shader that outputs the G-buffer properties (and the alpha test, if you're using that). This is trickier, unless you decide per shader/material what's going to run at per-sample frequency. It can also be really expensive.
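For the lighting pass at least, the usual classification looks something like this toy numpy sketch (the single-channel "G-buffer" and the shade function are stand-ins); the tricky part described above is producing per-sample G-buffer data in the first place:

```python
import numpy as np

def shade(g):
    """Stand-in lighting function over G-buffer values."""
    return g * 0.5

def resolve_deferred_msaa(gbuffer):
    """gbuffer: (H, W, S) MSAA samples of one G-buffer channel.
    Shade once per pixel where the samples agree; run the expensive
    per-sample path only on pixels whose samples differ."""
    edges = np.ptp(gbuffer, axis=2) > 1e-6           # per-pixel edge test
    out = np.empty(gbuffer.shape[:2])
    out[~edges] = shade(gbuffer[~edges, 0])          # one sample is enough
    out[edges] = shade(gbuffer[edges]).mean(axis=1)  # shade all S samples
    return out
```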
 