Graphics performance tricks that can be reused for next generation

Proelite

Veteran
Supporter
I am interested in compiling a list of tricks that devs use to achieve the illusion of better graphics, ones that can be used ubiquitously, regardless of architecture, power, and platform.

These are at the top of my mind:

Post processing AA such as FXAA, MLAA, or TAA
Deferred Rendering
Dynamic Resolution
Texture Streaming (see the sketch after this list)
Bungie's imposter system
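
Since texture streaming is just a name on this list, a minimal sketch of the decision at its core may help: estimate which mip level a texture actually needs from its on-screen footprint, and keep only that level and coarser ones resident. This is an illustrative C++ sketch under assumed inputs (texture width in texels, approximate on-screen width in pixels), not any particular engine's streamer:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical helper: which mip level does this texture need right now?
// texWidth: texture width in texels; screenPixels: approximate on-screen
// width of the textured surface; mipCount: number of mips in the chain.
int requiredMip(float texWidth, float screenPixels, int mipCount) {
    // Texels mapped per pixel; above 1.0 the finest mip is wasted detail.
    float texelsPerPixel = texWidth / std::max(screenPixels, 1.0f);
    // Each mip halves resolution, so log2 picks the first mip that is
    // roughly 1:1 with the screen; clamp to the coarsest mip available.
    int mip = (int)std::floor(std::log2(std::max(texelsPerPixel, 1.0f)));
    return std::min(mip, mipCount - 1);
}
```

For example, a 2048-texel-wide texture covering roughly 100 pixels maps about 20 texels to each pixel, so requiredMip returns 4 and mips 0-3 never need to leave disk.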
 
This set is so vast that pretty much every technique would fit into it.

Which is why this question is rather meaningless.
 
You may be interested in the "Little Big Planet" talk that was given at SIGGRAPH this year (and at other conferences, I believe).

[UPDATE: The slides are here]
 
Impostors and reprojection techniques will become more and more important in the future, as the cost of processing each pixel gets higher and higher (more complex lighting and materials). Virtual texturing (with uniquely mapped surfaces) can also be used as a runtime surface cache. The higher the frame rate, the more similar two consecutive frames become, and the more data we can reuse.
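
As a concrete illustration of that reuse, here is a minimal sketch of the reprojection math: given a pixel's position and depth this frame, find where that surface point was on screen last frame so its shading can be fetched instead of recomputed. Matrix layout and all names here are illustrative assumptions, not any shipping engine's code:

```cpp
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major (assumption)

// 4x4 matrix * column vector.
Vec4 mul(const Mat4& M, const Vec4& v) {
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
             M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

// Where was the surface at (ndcX, ndcY, depth) last frame? If the result
// lands on screen and the depth stored there agrees, reuse last frame's
// color; otherwise the point was occluded or off screen and must be shaded.
void reproject(const Mat4& invCurrViewProj, const Mat4& prevViewProj,
               float ndcX, float ndcY, float depth,
               float& prevU, float& prevV) {
    // Unproject the current-frame clip-space position to world space.
    Vec4 world = mul(invCurrViewProj, { ndcX, ndcY, depth, 1.0f });
    world.x /= world.w;  world.y /= world.w;  world.z /= world.w;

    // Re-project the world-space point with last frame's camera.
    Vec4 prev = mul(prevViewProj, { world.x, world.y, world.z, 1.0f });
    prevU = 0.5f * (prev.x / prev.w) + 0.5f;  // NDC [-1,1] -> UV [0,1]
    prevV = 0.5f * (prev.y / prev.w) + 0.5f;
}
```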

Pixels are also getting smaller as resolutions increase (especially on mobile phones and tablets). Soon you won't need (or want) to render everything at full resolution. Low-frequency content such as smoke, fog, and out-of-focus areas can be rendered at lower resolution without any noticeable effect on image quality. Some areas of the screen need more resolution than others, and if we can design clever algorithms that spend our cycles on the areas that matter most, the perceived image quality will improve drastically. Upgrading from 720p to 1080p requires 2.25x the (pixel shader) performance if we do nothing to combat the increased number of pixels that need to be processed.
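
The arithmetic behind that figure, plus the saving from quarter-area effect buffers, fits in a few self-contained lines (pure pixel counting, nothing engine-specific):

```cpp
#include <cstdio>

int main() {
    const double p720  = 1280.0 * 720.0;   //   921,600 pixels
    const double p1080 = 1920.0 * 1080.0;  // 2,073,600 pixels
    printf("1080p / 720p pixel ratio: %.2fx\n", p1080 / p720);  // 2.25x

    // A low-frequency effect (smoke, fog) rendered into a half-width,
    // half-height off-screen buffer shades only a quarter of the pixels.
    printf("half-res effect cost: %.0f%% of full res\n", 100.0 * 0.25);
    return 0;
}
```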
 
I wouldn't call it a trick, but I think (hope?) next gen should net very nice increases in visuals without a ton of extra work on the content-creation side *in all cases*. Take Gears of War, where Epic is already using a lot of very high-resolution source art: millions, even tens of millions, of polygons. Their current in-game models are in the thousands-of-polygons range, with normal maps.

Next gen, hopefully, will allow for a nice bump in in-game model geometry plus higher-quality normal, diffuse, and specular maps, as well as POM and maybe even some intelligent displacement mapping where it makes sense, in addition to much better lighting and shadowing techniques. These things will require more engine work, especially displacement mapping (but the same was said of PRT on animated objects this gen). Anyhow, once the rendering tech is worked out, the same killer source art used this gen can be put to much better use in-game.

Laa-Yosh and others can correct me if I am wrong, but it seems this generation had a lot of growing pains as many developers went from models of hundreds of polygons to thousands, and then moved on up this generation to models in the 10k-50k range with an emphasis on geometry faked via normal maps. This requires new tools, new skill sets, and a lot more time. The tools are a lot better now and the industry is more experienced, and while cutting-edge games may need even more detailed source art, I am not sure every game's art budget needs to take the kind of hit that the jump from Xbox/PS2 to 360/PS3 imposed.
 
I feel this generation is where developers really understood scalable graphics frameworks, so a lot of what they do now will carry over without having to throw out the code base.
 
I am interested in compiling a list of tricks that devs use to achieve the illusion of better graphics

Call me pedantic, but I would take issue with this wording. "Good graphics" just means something looks pleasing to the player, so I'd have a hard time coming up with something developers do that somehow tricks players into thinking they see something better than what they actually see. You could talk about common approximations and simplifications used in real-time rendering, where a technique doesn't accurately reflect some physical phenomenon, but an accurate physical simulation isn't equivalent to "good graphics".

Either way, things like "deferred rendering" and "texture streaming" have absolutely no place on your list, since they are merely optimizations meant to tailor rendering to current hardware rather than simplifications or approximations.
 
I have a feeling that adaptation will be rather big in future games! Adaptation in the sense of adaptive antialiasing, adaptive screen resolution, adaptive effects resolution... I wonder what else in an engine could be adapted?
I also wonder which adaptation goals one should follow: it seems to me that at the moment, the only adaptation goal is "keep the framerate high".
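
To make that one goal concrete, a dynamic-resolution controller can be as small as a damped feedback loop on the measured GPU frame time. A minimal sketch, where the 60 fps target, gain, and clamp range are all invented values (a real controller would likely also smooth the timing signal):

```cpp
// Nudge the render-resolution scale each frame toward the frame-time budget.
float updateResolutionScale(float scale, float gpuFrameMs) {
    const float targetMs = 16.6f;  // 60 fps budget (assumed target)
    const float gain     = 0.05f;  // small gain so the scale doesn't oscillate

    // Positive error = headroom (raise resolution); negative = over budget.
    float error = (targetMs - gpuFrameMs) / targetMs;
    scale += gain * error;

    if (scale < 0.5f) scale = 0.5f;  // never drop below half resolution
    if (scale > 1.0f) scale = 1.0f;  // never exceed native resolution
    return scale;
}
```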
 
Please let's get away from post-process AA next generation. On today's very limited resources it makes sense, but the next consoles will be able to do much better.
 
I think we'll see more of the BF3 Xbox "HD install" approach, with standard texture packs delivered to digital distribution, diskless consoles, and free users, while HD packs are reserved for optical-media SKUs, consoles with HDDs, and premium users.
 
Please let's get away from post-process AA next generation. On today's very limited resources it makes sense, but the next consoles will be able to do much better.

Maybe not do away with it completely, but I hope for a more selective approach to when it's used. I still think God of War 3 is far and away the best example of a successful implementation of post AA, and it looks phenomenal because of it. No subsequent game has fared anywhere near as well, unfortunately.
 
Maybe not do away with it completely, but I hope for a more selective approach to when it's used. I still think God of War 3 is far and away the best example of a successful implementation of post AA, and it looks phenomenal because of it. No subsequent game has fared anywhere near as well, unfortunately.

Yeah, I wonder myself... GOW3 looks so good and clean; compare that to Killzone 3, which is a jaggy mess by comparison, imo!
I do think that post-process AA remains valid for future consoles, as I am using it right now in BF3 without MSAA (too expensive; I prefer more effects and bling), even though I have a GTX 480 in my rig and enough RAM and everything else...
 
MSAA doesn't necessarily have to replace post-process AA - this paper shows MLAA extended to use multisampling:
http://www.iryoku.com/smaa/
Nor should it.

Perhaps we will see future work that incorporates this edge detection with a high number of subsamples and framebuffer scaling.
We should be able to scale the image down quite far before it becomes obvious when using 4x or 8x MSAA sample patterns.
When the scene is not that demanding, we would get nice image quality as well.
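
For reference, the edge-detection pass such filters start from is simple enough to sketch. Below is a CPU-side luma-threshold version in C++ for clarity; real implementations run as pixel shaders, and the 0.1 threshold is an assumed value rather than the SMAA paper's exact constants:

```cpp
#include <cmath>
#include <vector>

// Mark a pixel as an edge when its luma differs enough from its left or
// top neighbour. The border row/column are left unmarked for simplicity.
std::vector<bool> detectEdges(const std::vector<float>& luma,
                              int w, int h, float threshold = 0.1f) {
    std::vector<bool> edge(static_cast<size_t>(w) * h, false);
    for (int y = 1; y < h; ++y) {
        for (int x = 1; x < w; ++x) {
            float c  = luma[y * w + x];
            float dl = std::fabs(c - luma[y * w + (x - 1)]);  // left delta
            float dt = std::fabs(c - luma[(y - 1) * w + x]);  // top delta
            edge[y * w + x] = (dl > threshold) || (dt > threshold);
        }
    }
    return edge;
}
```

Only pixels flagged here would then go through the more expensive blend-weight and resolve steps, which is part of what keeps these filters cheap.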
 