*spinoff* Quality of Edge AA... stuff

I think 900p with FXAA is the better option: aliasing is a far more noticeable (and suspension-of-disbelief-breaking) visual artifact than slightly blurrier textures from post-process AA, and 792p with MSAA has more jaggies.

I was very impressed by post-process AA in the later games of last generation, e.g. GTA5 looked fantastic with Rockstar's post-process AA (definitely not a noticeable downgrade from GTA4's or RDR's 2xMSAA) - and this was just at 720p; post-AA works far better as resolution increases.

Games like GTA5 definitely look cleaner than sub-720p games with 2xMSAA as well (e.g. COD).
So I think here too 900p with post-AA would have been a better option than 792p 2xMSAA.
I would also add Skyrim to the list. Skyrim is a game where I have yet to see a single jaggie. :oops:

Globalisateur, you are pretty good at studying pixels, keep it up, you might go far.
 

I wonder if the aliasing problems can ever be solved in realtime. Movie VFX renders spend a lot of performance on computing the extra samples, catching up to it would take insane amounts of power...
 
I wonder if the aliasing problems can ever be solved in realtime.
It's never going to be completely solved as long as we're trying to pick up infinitely high-frequency detail with discrete point-sampling techniques.

But it might be ameliorated to the point that people stop caring.
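As an aside, the point-sampling problem above has a concrete, minimal illustration: any frequency above the Nyquist limit of the sample grid folds back into a false lower frequency, which is exactly what happens when thin geometry or specular highlights fall between pixel centers. A quick Python sketch (all numbers illustrative):

```python
import math

# Point-sampling a signal whose frequency exceeds the Nyquist limit
# produces a false, lower-frequency alias.
true_freq = 9.0    # cycles per unit
sample_rate = 8.0  # samples per unit -> Nyquist limit is 4 cycles/unit

samples = [math.sin(2 * math.pi * true_freq * (n / sample_rate))
           for n in range(16)]

# A 9-cycle signal sampled at 8 samples/unit is indistinguishable
# from a 1-cycle signal: sin(2*pi*9*n/8) == sin(2*pi*n + 2*pi*n/8).
alias = [math.sin(2 * math.pi * 1.0 * (n / sample_rate)) for n in range(16)]

assert all(abs(a - b) < 1e-9 for a, b in zip(samples, alias))
```

No amount of clever filtering after the fact can tell those two signals apart from the samples alone - hence "never completely solved" with discrete point sampling.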
 
Given the apparent result and quality provided by SMAA T2x (InFamous) and SMAA 1Tx (Ryse), why isn't it more widespread among other titles? Why the insistence on using FXAA? Are those techniques that much more expensive?
 
Given the apparent result and quality provided by SMAA T2x (InFamous) and SMAA 1Tx (Ryse), why isn't it more widespread among other titles? Why the insistence on using FXAA? Are those techniques that much more expensive?

It needs to spread... programmers need to get used to it... and even "find" it first. Performance seems to be a non-issue for the moment (the cost of FXAA and SMAA on a high-end card is comparable).

FXAA is a fully "drop-in" post process. I don't know about SMAA and the others.
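For what it's worth, that "drop-in" nature can be sketched in a few lines: a pass like FXAA reads only the final color buffer, finds high-contrast pixels, and blends them with their neighbors. This toy grayscale version is not the real FXAA algorithm, just its general shape (the threshold value is an assumption):

```python
def fxaa_like(img):
    # img: 2D list of grayscale values in [0, 1] - the only input needed,
    # which is why this style of AA drops into any pipeline.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            n, s = img[y - 1][x], img[y + 1][x]
            e, wv = img[y][x + 1], img[y][x - 1]
            c = img[y][x]
            contrast = max(n, s, e, wv, c) - min(n, s, e, wv, c)
            if contrast > 0.25:  # edge threshold (illustrative value)
                # blur across the detected edge
                out[y][x] = (n + s + e + wv + c) / 5.0
    return out

# A hard vertical edge gets softened into a gradient:
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
smoothed = fxaa_like(img)
```

The real thing searches along the edge direction and weights samples far more carefully, but the key property is the same: no scene data, no extra buffers, just the finished image in and a filtered image out.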
 
It's never going to be completely solved as long as we're trying to pick up infinitely high-frequency detail with discrete point-sampling techniques.

I dunno, I consider movie VFX level AA to be enough to solve the issues.

This is actually the source of the problem - movies were always expected to deliver aliasing free images, because the effects would not have been accepted as "real" footage otherwise. So VFX and animation has allocated enough offline processing power for the extra samples from the start.

Games were, on the other hand, given a free pass to spend all their resources on an aliased image, because interactivity was such a great extra feature that could not have been achieved otherwise - and there were too many other shortcomings to keep the image from appearing "real" anyway.
But now that they're almost done catching up in the general looks, the IQ issues are becoming a more obvious problem. And people will care about pixel crawl and flickering highlights, they do get in the way IMHO.

But it might be ameliorated to the point that people stop caring.

For a while I thought that the gap is going to start to close soon, as the engines get close to feature complete and the resolution caps at 1080p. Spending more on IQ instead of on more detail would be less of a competitive disadvantage and it could become the new standard in realtime as well, eventually...

But apparently VR will need something like an order of magnitude increase in pixel count, so there goes the room for spending that extra power on better pixels. In fact, the industry seems to be heading toward all kinds of clever undersampling and reprojection tech for the next few years or more. Just look at Killzone.
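A rough sketch of the reprojection idea: look up last frame's shaded result where the pixel used to be, and blend only a small amount of the new sample in. All names and the blend weight here are illustrative (a 1D toy, not any shipping implementation):

```python
def temporal_blend(history, current, motion, alpha=0.1):
    # history: last frame's resolved colors
    # current: this frame's raw (possibly undersampled) samples
    # motion:  per-pixel offset in pixels since last frame
    out = []
    for x, (cur, m) in enumerate(zip(current, motion)):
        src = x - m  # where this pixel's content was last frame
        if 0 <= src < len(history):
            # mostly reuse history, refresh it slowly with new samples
            out.append((1 - alpha) * history[src] + alpha * cur)
        else:
            out.append(cur)  # disocclusion: no valid history, fall back
    return out

history = [0.0, 0.0, 1.0, 1.0]
current = [0.0, 1.0, 1.0, 1.0]  # the edge moved one pixel to the left
motion  = [0, -1, -1, 0]
frame = temporal_blend(history, current, motion)
```

The hard parts in practice are exactly the fallback cases: disocclusions, transparency, and anything the motion vectors can't describe, which is where the ghosting and dithering artifacts come from.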
 
I dunno, I consider movie VFX level AA to be enough to solve the issues.
I meant in a kind of theoretical sense. Until you're actually accurately capturing coverages of everything, high-frequency detail (which includes basic geometry edges!) will be prone to misrepresentation.

But like I said: we can ameliorate it to the point that nobody cares. (Maybe that needs an addendum: or, for any reasonable scene, can see the difference. Obviously you could specify a scene that would inaccurately shimmer like crazy even when rendered at something silly like 8K with 16384x SGSSAA, but that sort of imagery probably doesn't come up very often.)

Certainly in most reasonable cases, the insane sampling already in use by CGI studios is sufficient for the human eye.
 
I've been in the CG business for like 14 years - if I'd put theoretical sense first, that would not be possible ;)

Which is also why my question was about the practical solution for AA issues in real time. "Complete" meaning that there's no noticeable artifacting, but the image isn't blurred either and retains the sharpness expected from a photo or a movie.
 
Given the apparent result and quality provided by SMAA T2x (InFamous) and SMAA 1Tx (Ryse), why isn't it more widespread among other titles? Why the insistence on using FXAA? Are those techniques that much more expensive?

I think it's explained by several reasons:

- First, like you said, SMAA is still a bit more expensive than FXAA (in bandwidth and VRAM, even considering only the morphological part, SMAA 1x), and it only becomes really attractive now with these multi-Tflops GPUs with very fast bandwidth.

- Because of its memory buffer requirements, it's more time-consuming to implement SMAA in a game than FXAA (we are talking about hours of additional work).

- Also, SMAA is fairly recent, roughly 2 years younger than FXAA.

- I suppose ignorance: some devs may not be aware that such an efficient and cheap AA technique exists.

- Finally, image quality is usually not a top priority among game directors, especially when you have the PC build to show off for marketing purposes.

The very good morphological MLAA (technically SMAA's predecessor) was often seen in multiplatform titles (Tomb Raider, BF3, hell, even BF4 has it) and first-party games (KZ3, God of War 3, LittleBigPlanet) on PS3 only, I assume because the devs were given the whole procedure to implement it in their game and, most importantly, because MLAA runs on the SPEs, so it's literally GPU-free.
 
But apparently VR will need something like an order of magnitude increase in pixel count, so there goes the room for spending that extra power on better pixels. In fact it's probably going to be all kinds of clever undersampling and reprojection tech that the industry seems to be heading for the next few years or more. Just look at Killzone.
I don't see anything wrong with that, and if it works well, it makes sense for the CG industry to adopt similar solutions for speed of development. How the visuals are produced doesn't matter as long as the result looks right, and I believe mathematically sound interpolations and reprojections will prove perfectly serviceable - in the same way the lossy compression of video codecs, completely unlike real life, provides a convincing simulation of a real-life view.

In ten (or twenty!) years' time, the difference between CGI and realtime/games may just be a matter of scale - the algorithms may be identical.
 
In a VR gear you may have a good idea about where the focus of the viewer is, but in a movie theater, every pixel is equally important (even with DOF or motion blur, as high quality still matters there).

You see, the costs of the infrastructure for a Weta or ILM sized render farm are so astronomical that if there was a simple way to do it with less, they certainly would have found it by now. I mean, even we are somewhat limited in our choices for office space because of the various requirements for networking, electric power, load bearing of the floor and aircon and such, and we're not even at 1% of such a facility's computing power.
 


That's really interesting. Good find!

So the AA in AC4 really is a modification of SMAA 2tx, like I suspected when I first saw it (in November 2013).

But I would say that even if AC4's AA has advantages over Infamous's AA (which is itself a modified SMAA), its edges are still less clean than those in Infamous - though I admit AC4's AA may look crisper/sharper.

Like I already said, I would be very happy if they used AC4's AA in Watch Dogs! It's now confirmed to be a modification of SMAA 2tx (like Infamous's, in fact)!
 
Funny that in the NeoGAF next-gen screenshot thread (which is currently basically the Infamous screenshot thread), some people are asking if Infamous's AA could be used (via a patch) in Killzone and MGS5, because of complaints that those two games are comparatively blurry and/or have only average AA.

Specifically with Killzone, some have noticed nasty (FXAA?) artifacts that don't mix well at all with the temporal AA, leaving "dots/dithering" near the edges in motion. One also said the game could benefit from being sharper.

For Ground Zeroes, one said the game didn't look 1080p because it was so blurry (again this theory about perceived resolution mattering more than real resolution). Anecdotal evidence, I know, but still, the pile of evidence is growing.

But for MGS5 I don't think post-process approximate AA is to blame (for once); I think it's the TXAA-ish temporal AA used (but I could be wrong). Anyway, those two games could really benefit from SMAA 2tx.
 
Has anyone officially done a pixel count for ISS (>1080p)? Something about the pixel density seems more akin to super-sampling at a higher resolution. I'm no expert in this area, but the pixels seem finer (smaller) and more densely packed than in AC4 and KZSF.
 
Has anyone officially done a pixel count for ISS (>1080p)? Something about the pixel density seems more akin to super-sampling at a higher resolution. I'm no expert in this area, but the pixels seem finer (smaller) and more densely packed than in AC4 and KZSF.

It's the temporal anti-aliasing, SMAA 2TX, which adds sampling (information) to the scene, particularly on geometry detail. It's also called temporal super-sampling AA.
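A toy illustration of that temporal super-sampling idea (not Sucker Punch's actual implementation): jitter the camera by half a pixel on alternate frames and average the two, so each output pixel effectively sees two samples. In 1D Python:

```python
def render(scene, jitter, width):
    # point-sample a continuous 1D "scene" function at jittered pixel centers
    return [scene(x + 0.5 + jitter) for x in range(width)]

def scene(u):
    # a hard edge that falls between pixel centers
    return 1.0 if u >= 3.25 else 0.0

frame_a = render(scene, 0.0, 6)    # even frame, no jitter
frame_b = render(scene, -0.5, 6)   # odd frame, half-pixel jitter
resolved = [(a + b) / 2 for a, b in zip(frame_a, frame_b)]
# The edge pixel resolves to 0.5 - partial coverage recovered from
# two 1-sample frames, i.e. 2x super-sampling spread over time.
```

The catch, of course, is that the scene must hold still (or be reprojected accurately) between the two frames, which is where motion artifacts come from.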
 
It's the temporal anti-aliasing, SMAA 2TX, which adds sampling (information) to the scene, particularly on geometry detail. It's also called temporal super-sampling AA.

I somewhat figured that... the IQ is just way too clean at 1080p. Hopefully Sucker Punch inspires other PS4 developers to achieve not just equal IQ, but even better results. Not saying ISS is perfect, just that it's amazing at this early stage of the PS4's life.
 
Why doesn't every game use this type of AA??? Not that I have any complaints from all the other games I saw running on ps4, to be honest.
 
Why doesn't every game use this type of AA??? Not that I have any complaints from all the other games I saw running on ps4, to be honest.

I would think it boils down to development time, the performance needed for a given game, and the overall trade-offs of using SMAA 2TX. Unless SMAA 2TX isn't overly taxing for the PS4 regardless of game scope - but I'm doubtful of that...
 