Hundreds of new graphics rendering papers are released every year. Ground-breaking techniques that radically change our rendering pipelines get introduced almost every year. We are nowhere close to knowing how to craft a "perfect" rendering pipeline. I have been working in the field for 10 years, and the progress hasn't slowed down; every year we get more new research. The arrival of DX11 compute shaders and a new console generation that supports efficient general-purpose GPU compute will result in more new research than ever in the game graphics field. Mark my words: there will be huge new innovations in graphics rendering in the following 3-5 years. And programmable pipelines are the key that allows this innovation to happen.

Some argue that computer graphics rendering is no magic that needs endless iterations of new approaches: that the list of all rendering problems can be put on a single page, and the list of unique concepts to solve them can be condensed to a couple of lines.
But seemingly simple things such as real time shadow map rendering are still mostly based around hacks and shortcuts. There is no perfect way that suits every occasion and is widely accepted as the best solution. All the main areas of shadow mapping are still heavily debated: shadow filtering / antialiasing (PCF, ESM, VSM, EVSM, CSM, screen space bilateral, etc.), various projections / warping / logarithmic rendering, texel distribution / cascade splitting / tile splitting, acne prevention (various biasing techniques, rendering back faces / midpoints, etc.), resolution improvement techniques (storing edge info in pixels), and so on. In addition to quality improvements, there's a huge amount of open research on improving shadow map rendering performance. I have to point out that this list of open issues is incomplete and covers only simple hard-edged shadows. Soft shadows (from area light sources) are an area with a huge amount of ongoing research as well (such as sparse voxel octree cone tracing based techniques). And translucent shadow research has started to become a hot topic lately too.
Link to SIGGRAPH 2013 shadow mapping course slides:
http://www.realtimeshadows.com/sites/default/files/sig2013-course-hardshadows_0.pdf
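To make the filtering and biasing debate above concrete, here is a minimal CPU-side sketch of percentage-closer filtering (PCF), the oldest of the filtering techniques listed. This is illustrative pseudocode in Python, not a GPU implementation; the function name, the fixed 3x3 kernel, and the constant depth bias are my own simplifying assumptions.

```python
def pcf_shadow(shadow_map, u, v, receiver_depth, bias=0.005, radius=1):
    """Percentage-closer filtering: average binary depth comparisons over a
    (2*radius+1)^2 neighborhood instead of filtering depths directly.
    shadow_map is a 2D list of depths as seen from the light.
    Returns the lit fraction in [0, 1], giving softened shadow edges."""
    h, w = len(shadow_map), len(shadow_map[0])
    lit, taps = 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x = min(max(u + dx, 0), w - 1)  # clamp-to-edge addressing
            y = min(max(v + dy, 0), h - 1)
            taps += 1
            # The bias pushes comparisons toward "lit" to fight shadow acne;
            # too much of it causes peter-panning. This is the core tradeoff.
            if receiver_depth - bias <= shadow_map[y][x]:
                lit += 1
    return lit / taps
```

A receiver nearer to the light than every stored depth gets 1.0 (fully lit), one farther than every depth gets 0.0, and a receiver straddling an occluder edge gets a fractional value, which is exactly why PCF antialiases shadow edges while a single comparison cannot.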
And this is just one field of real time graphics rendering research (shadow mapping). Since deferred rendering became popular (~2009), there's been an endless stream of papers about different lighting/material pipelines that are decoupled from the fixed function rasterization.
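The decoupling that makes deferred rendering interesting can be sketched in a few lines: geometry is rasterized once into a G-buffer of surface attributes, and lighting runs afterwards over pixels with no knowledge of how they were produced. This is a toy Python sketch under my own assumptions (a G-buffer of dicts, point lights as position/intensity pairs, scalar Lambert shading); real G-buffer layouts and BRDFs are far richer.

```python
import math

def geometry_pass(scene_pixels):
    """Rasterization stand-in: write surface attributes (position, normal,
    albedo) into a G-buffer. No lighting happens in this pass."""
    return [{'pos': p, 'n': n, 'albedo': a} for (p, n, a) in scene_pixels]

def lighting_pass(gbuffer, lights):
    """Shade every G-buffer pixel by looping over lights. This loop is fully
    decoupled from geometry submission, which is what lets lighting/material
    pipelines evolve independently of the rasterizer."""
    out = []
    for px in gbuffer:
        c = 0.0
        for light_pos, intensity in lights:
            d = [light_pos[i] - px['pos'][i] for i in range(3)]
            dist = math.sqrt(sum(x * x for x in d))
            # Lambert term with inverse-square distance falloff.
            ndotl = max(0.0, sum(px['n'][i] * d[i] for i in range(3)) / dist)
            c += px['albedo'] * intensity * ndotl / (dist * dist)
        out.append(c)
    return out
```

For example, a single pixel at the origin facing +Z, lit by an intensity-4 light two units away along Z, shades to 4 * 1 / 2^2 = 1.0. Adding a light touches only `lighting_pass`; changing how geometry reaches the G-buffer touches only `geometry_pass`.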
I don't think there's any real possibility for fixed function rendering hardware as long as we cannot even decide what is the most efficient way to store our geometry in the future (voxels, mathematical curves / subdivision surfaces, triangles) and how we want to get this data set to the screen (rasterize/project, i.e. "scatter", or ray/path/cone trace, i.e. "gather"). And this is just the opaque geometry. Dust/fog/water/particles and other volumetric substances require their own techniques as well (and would need their own fixed function hardware units, if we didn't have programmable hardware to render them).
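The scatter-versus-gather distinction above can be boiled down to a toy 1D example, assuming a scene of point "primitives" with a pixel column, a depth, and a color (my own simplification; real pipelines rasterize triangles or march rays through acceleration structures). Both loops produce the same image; they differ only in which side drives the iteration, which is exactly why fixing one of them in hardware is a bet on the future.

```python
def render_scatter(points, width):
    """Rasterization-style scatter: each primitive writes itself into the
    framebuffer, with a depth test resolving overlaps."""
    color = [None] * width
    depth = [float('inf')] * width
    for x, z, c in points:  # (pixel column, depth, color)
        if 0 <= x < width and z < depth[x]:
            depth[x], color[x] = z, c
    return color

def render_gather(points, width):
    """Ray-tracing-style gather: each pixel searches the scene for the
    nearest primitive covering it."""
    image = []
    for x in range(width):
        hits = [(z, c) for (px, z, c) in points if px == x]
        image.append(min(hits)[1] if hits else None)
    return image
```

Scatter iterates over primitives (cheap when geometry is sparse and projectable), gather iterates over pixels (cheap when visibility queries are what you need, e.g. for reflections or cone-traced soft shadows). The same scene rendered both ways yields identical images.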