I've been thinking about this one for quite a while.
Basically, I had this idea floating about - that for a deferred renderer, you could get away without using MRT at all. Simply render depth + diffuse colour.
Now, whether it's actually practical is beside the point (although I could see a use on the 360 - tiling advantages, less of a bandwidth hit).
My 'ultimate' idea revolves around not using normal maps at all - go all out with displacement maps, and let the depth dictate the lighting. The 360 rears its head here again. However, one point crops up, which I'll get to in a moment.
Anyway, I was mucking about, and actually got the damned thing to work. Which is always a nice thing. And to an extent it works better than I expected.
So. The whole idea is that you can get all the information you need for your light pass - i.e. position + normal - from the depth buffer.
The first challenge was getting position. Naturally, being me, I didn't look up the answer but worked it out myself. It's a pretty simple quadratic equation problem. But enough of that.
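To sanity-check that depth linearisation on the CPU, here's a quick Python sketch. It assumes a standard D3D-style projection, where the depth buffer stores far/(far-near) * (1 - near/z) for a view-space depth z; the function names are mine, just for illustration.

```python
# Round-trip test of the depth linearisation, assuming a D3D-style
# projection with written depth in [0,1].
near, far = 50.0, 5000.0

def write_depth(z_view):
    """What the depth buffer stores for view-space depth z (D3D convention)."""
    return (far / (far - near)) * (1.0 - near / z_view)

def linearize(d):
    """Recover view-space depth from the stored value - the 'quadratic' step."""
    return -near / (-1.0 + d * ((far - near) / far))

z = 1234.5
d = write_depth(z)
print(linearize(d))  # recovers ~1234.5
```

Note the endpoints work out too: a stored value of 0 linearises back to the near plane, and 1 back to the far plane.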
The other bit was getting a normal.
And this is where things get fun - and SM3.0 only.
So basically I finally have a use for ddx() and ddy() ;-). ddx and ddy are partial derivatives between adjacent pixels in the quad, horizontally and vertically, so you can work out how a value changes compared to the neighbouring pixel. This, as it turns out, makes it nice and easy to get the change in surface position along the x and y axes in screen space. Cross these, and you have a normal.
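On the CPU you can mimic ddx/ddy with plain finite differences between neighbouring samples. A little Python sketch (made-up surface, just to show the cross-product step) - for a plane z = 0.5x + 0.25y, crossing the two difference vectors should give something proportional to (-0.5, -0.25, 1), the true plane normal:

```python
# ddx/ddy are just neighbour differences within a pixel quad; mimic them
# with finite differences over positions sampled on a grid.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/l, v[1]/l, v[2]/l)

# surface z = 0.5*x + 0.25*y, sampled at pixel centres
def pos(x, y):
    return (x, y, 0.5*x + 0.25*y)

p   = pos(10.0, 10.0)
ddx = tuple(b - a for a, b in zip(p, pos(11.0, 10.0)))  # horizontal neighbour
ddy = tuple(b - a for a, b in zip(p, pos(10.0, 11.0)))  # vertical neighbour
normal = normalize(cross(ddx, ddy))
print(normal)  # proportional to (-0.5, -0.25, 1)
```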
Anyway. Here is some shader code:
Code:
float4x4 wvpMatrixINV; // inverse world * view * projection matrix
float near = 50.0;     // near clip plane
float far = 5000.0;    // far clip plane
float2 window = float2(800, 600);
float2 windowHalf = window * 0.5;
...
float writtenDepth = .... // load depth from the depth buffer

// quadratic to get real depth back (aka w)
float depth = -near / (-1 + writtenDepth * ((far - near) / far));

float2 screen = (vpos.xy - windowHalf) / windowHalf; // pixel position in NDC

// rebuild the clip-space position; the * depth undoes the /w perspective divide
float4 worldPoint = float4(screen, writtenDepth, 1) * depth;
float4 worldPointInv = mul(worldPoint, wvpMatrixINV); // back to world space

//yay
float3 normal = normalize(cross(ddx(worldPointInv.xyz), ddy(worldPointInv.xyz)));
//dance
Output.Colour = float4(normal * 0.5 + 0.5, 0);
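If you want to convince yourself that the "* depth" trick really undoes the perspective divide, here's the whole reconstruction round-tripped on the CPU in Python. The projection is a made-up one (x scale 1.5, y scale 2.0, D3D-style depth), not taken from the renderer - just enough to forward-project a view-space point and rebuild it from screen xy plus the stored depth, the same way the shader does:

```python
# Round-trip of the position reconstruction: project a view-space point,
# then rebuild it from NDC xy + stored depth. Assumes a D3D-style
# projection with illustrative scales xs=1.5, ys=2.0.
near, far = 50.0, 5000.0
q = far / (far - near)
xs, ys = 1.5, 2.0

def project(p):
    """View space -> (ndc_x, ndc_y, stored depth), including the /w divide."""
    x, y, z = p
    clip = (x * xs, y * ys, q * z - q * near, z)  # clip space, w = z
    return (clip[0] / z, clip[1] / z, clip[2] / z)

def reconstruct(ndc_x, ndc_y, stored):
    """Undo the divide by multiplying by the recovered w, then unproject."""
    w = -near / (-1.0 + stored * ((far - near) / far))  # the shader's 'depth'
    cx, cy = ndc_x * w, ndc_y * w  # back to clip space
    return (cx / xs, cy / ys, w)   # undo the projection scales

p = (300.0, -120.0, 800.0)
print(reconstruct(*project(p)))  # ~(300.0, -120.0, 800.0)
```

In the shader the last step is a single mul by the inverse matrix rather than dividing out the scales by hand, but the maths is the same.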
And here is a not-so-pretty pictar.
There are lots of ways this could be modified, of course - store height in the alpha channel of your colour map to jitter things around, for example. Could work.
Of course, the thing I didn't think of was that it's all flat shaded (*slap*) - so... er... MORE TESSELLATION!
I'm not actually too sure you could realistically see a performance boost using this method, although it does, in effect, provide a fairly potent form of normal map compression.
I suppose in bandwidth restricted environments, there could well be a boost.
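Back-of-envelope on that bandwidth claim - the formats here are purely illustrative (depth plus two RGBA8 targets for a 'fat' G-buffer, vs depth plus albedo only), not what any particular renderer uses:

```python
# Rough G-buffer write traffic per frame, illustrative formats only:
# 'fat' = depth + RGBA8 normals + RGBA8 albedo; 'slim' = depth + albedo.
width, height = 800, 600
pixels = width * height

fat  = pixels * (4 + 4 + 4)  # 4-byte depth, RGBA8 normal, RGBA8 albedo
slim = pixels * (4 + 4)      # 4-byte depth, RGBA8 albedo

print(fat / 2**20, slim / 2**20)  # ~5.49 MiB vs ~3.66 MiB per frame
```

A third less G-buffer traffic at 800x600, before you even count the extra texture fetches the light pass saves or costs.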
Thoughts? Questions? Links to papers already demonstrating this?
* Ohh, and yes, that is a Quake 3 map in an XNA application. I'm slowly porting an old .NET Q3 renderer I upgraded a year or so back. Pics: 1, 2, 3 (that's not lightmapped, btw)