Beyond Programmable Shading SIGGRAPH 2010 slides posted

What do you find contentious there? Deferred shading does have high bandwidth costs and complications when combined with MSAA.

1) A forward renderer with a high MSAA setting would demand even more bandwidth, a lot more than a next-gen deferred system, because ...

2) Given sufficiently programmable hardware (specifically post-pixel-shader), MSAA should become obsolete.
Solving edge aliasing along the lines of MLAA is already proving popular, and given a fast GPU implementation it can become the method of choice very soon. It could also be extended to include subpixel anti-aliasing, similar to how Order Independent Transparency works.
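For what it's worth, the core idea behind these filters is easy to sketch. Below is a toy CPU-side version of the "find the discontinuity, then blend across it" step; it is nowhere near real MLAA (no pattern classification, no analytic coverage, no subpixel handling), and the struct, function names, and the 0.1 luminance threshold are all made up for illustration.

// Toy post-process edge smoothing on a CPU-side luminance buffer.
// Real MLAA classifies edge shapes and computes analytic coverage; this
// only shows the "detect discontinuities, then blend across them" structure.
#include <cmath>
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> lum;                                // luminance per pixel
    float at(int x, int y) const { return lum[y * w + x]; }
};

// Blend each pixel toward its neighbor when a strong luminance step is found.
std::vector<float> smoothEdges(const Image& img, float threshold = 0.1f)
{
    std::vector<float> out = img.lum;
    for (int y = 1; y < img.h - 1; ++y) {
        for (int x = 1; x < img.w - 1; ++x) {
            float c  = img.at(x, y);
            float dx = std::fabs(c - img.at(x + 1, y));    // horizontal step
            float dy = std::fabs(c - img.at(x, y + 1));    // vertical step
            if (dx > threshold)                            // blend across the edge
                out[y * img.w + x] = 0.5f * (c + img.at(x + 1, y));
            else if (dy > threshold)
                out[y * img.w + x] = 0.5f * (c + img.at(x, y + 1));
        }
    }
    return out;
}

int main()
{
    Image img{ 4, 4, std::vector<float>(16, 0.0f) };
    img.lum[5] = 1.0f;                                     // one bright pixel -> hard edge
    std::vector<float> smoothed = smoothEdges(img);
    std::printf("center after smoothing: %.2f\n", smoothed[5]);
}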
 
Note your #2 there about not using MSAA and wanting a different hardware setup. Things are always better if we build you hardware for it. Just like Kayvon's stuff is problematic without the hardware to do it.

The main issue is that people sometimes go to deferred shading because the standard pipeline doesn't do what they want. The problem is that the standard pipelines are heavily tuned to be good at what they do, and then people complain that their out-of-pipe stuff is slow, like the multiple round trips through global memory that deferred rendering takes. ;-)
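To make "round trips" concrete, here is a trivial CPU-side mock-up of the structure in question: the geometry pass writes a fat G-buffer out to memory, and the lighting pass reads the whole thing back before any shading happens. The G-buffer layout and all the names are invented for illustration; on a real GPU each pass is a full trip through off-chip memory.

// Minimal sketch of the deferred round trip, with an assumed 24-byte G-buffer texel.
#include <cstdint>
#include <vector>

struct GBufferTexel {
    float    depth;
    float    normal[3];
    uint32_t albedo;       // packed RGBA8
    uint32_t specular;     // packed roughness, etc.
};

struct Framebuffer {
    int w, h;
    std::vector<GBufferTexel> gbuffer;   // off-chip memory in a real GPU
    std::vector<uint32_t>     color;
};

void geometryPass(Framebuffer& fb)
{
    // Round trip #1: every covered pixel writes ~24 bytes out to memory.
    for (auto& t : fb.gbuffer)
        t = GBufferTexel{ /* rasterized surface attributes would go here */ };
}

void lightingPass(Framebuffer& fb)
{
    // Round trip #2: every texel is read back in before any shading is done.
    for (size_t i = 0; i < fb.gbuffer.size(); ++i)
        fb.color[i] = fb.gbuffer[i].albedo;   // stand-in for the real lighting math
}

int main()
{
    Framebuffer fb{ 1920, 1080 };
    fb.gbuffer.resize(size_t(fb.w) * fb.h);
    fb.color.resize(size_t(fb.w) * fb.h);
    geometryPass(fb);
    lightingPass(fb);
}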

Most uses of deferred rendering have higher bandwidth requirements because you can't keep data on chip and must take round trips through memory. That said, if you have massive data movement during forward rendering, say a large expansion in the GS on the first implementations, you either go through memory or you drop to single-threaded in hardware. (JRK mentioned this in his slides.)
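A back-of-the-envelope traffic estimate shows how the comparison can swing either way depending on the G-buffer layout, the MSAA level, and how many times the lighting pass re-reads the G-buffer. Every number below (1080p, RGBA8 targets, 32-bit depth, roughly 4 overlapping light reads, no compression, caching, or overdraw) is my own illustrative assumption, not something from the slides:

// Rough per-frame render-target traffic at 1080p; all layouts and counts assumed.
#include <cstdio>

int main()
{
    const double px  = 1920.0 * 1080.0;          // 1080p
    const double MiB = 1024.0 * 1024.0;

    // Forward, 4x MSAA: RGBA8 color + 32-bit depth per sample,
    // plus a resolve pass that reads the color samples back.
    const double fwd = px * 4 * (4.0 + 4.0) + px * 4 * 4.0;

    // Deferred, no MSAA: 4 RGBA8 targets + 32-bit depth (20 bytes/pixel).
    const double gbuf = px * (4 * 4.0 + 4.0);

    const double defOnce   = gbuf + gbuf;        // write once, read once (single full-screen light pass)
    const double defLights = gbuf + 4.0 * gbuf;  // classic per-light passes, ~4 overlapping reads (assumed)

    std::printf("forward, 4x MSAA        : %5.0f MiB/frame\n", fwd / MiB);
    std::printf("deferred, single read   : %5.0f MiB/frame\n", defOnce / MiB);
    std::printf("deferred, ~4 light reads: %5.0f MiB/frame\n", defLights / MiB);
}

In this toy estimate the single-read deferred path comes out ahead of 4x MSAA forward, while the per-light version does not, which is roughly the trade-off being argued over in this thread.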

In the end, the game is really to manage data movement and keep the ALUs busy. If you have a deferred scheme that is better at keeping the chip busy, you win. If you can do effects that the standard pipe cannot and you can still make framerate, you win. That is where the research is, along with the (truly) minor hardware modifications that can be done to improve a technique.
 