Variance Shadow Maps Demo (D3D10)

How do you fetch the raw samples individually in D3D10? A 512x512 render texture with 4xAA has 1024^2 samples in it.

jbizzler, are you trying to get the samples before resolving? I would expect that to look more aliased due to the jittering of the samples. AA works with VSM because the resolving is mathematically correct, and storage/performance penalties are low. You might get better results with your own custom resolve with a bigger kernel.
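For reference, a quick note on why the standard resolve is valid here: the map stores the two moments M1 = E[d] and M2 = E[d^2], which are linear in the samples, so averaging them across the MSAA samples is exactly the pre-filtering the shadow test expects:

mu = M1,   sigma^2 = M2 - M1^2,   p_max(t) = sigma^2 / (sigma^2 + (t - mu)^2)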

BTW, did you mean Andy instead of Alex?
 
Yeah, Andy, not Alex, sorry. It has been edited.

A 512x512 render texture with 4xAA has 1024^2 samples in it.
I'm confused. Does this mean the other "samples" in the multisampled texture are out beyond the 512x512 I think of as the regular texture? Do I then have to use an offset or something to access that info?
 
I tried something like that:

Code:
for (int i = 0; i < 4; i++)
    moments += Texture.Load(int3(tex.x * 512, tex.y * 512, 0), i).xy;
moments *= 1.0f / 4;
And it produced the same ugly shadow as before.

Is there something you have to do elsewhere? Like, after a texture is flagged as multisampled, do you have to do something manually in the render loop?
 
I'm afraid I'm seriously outta my depth on this, but I thought you'd like to see another code snippet that accesses the multisamples. Someone else will hopefully help with the VSM specific stuff.
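Something along these lines (a minimal sketch only, assuming the shadow map is declared as a Texture2DMS<float2, 4> and that "tex" holds normalized coordinates; the names and the 512 resolution are just illustrative):

Code:
Texture2DMS<float2, 4> ShadowMapMS;   // 4x MSAA VSM render target

float2 FetchMoments(float2 tex)
{
    // Texture2DMS.Load takes an int2 texel position plus a sample index
    // (unlike Texture2D.Load, which takes an int3 with a mip level).
    int2 texel = int2(tex * 512.0f);
    float2 moments = float2(0, 0);
    for (int i = 0; i < 4; ++i)
        moments += ShadowMapMS.Load(texel, i);
    return moments / 4.0f;
}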

Jawed
 
Sorry for the delay in responding... I've been at a conference for the past few days.

I don't have a lot of time to look at the problem in depth but here are a few notes that may help:

1) As Mintmaster noted, just use a standard resolve of the MSAA surface into a non-MSAA surface... that works just fine with VSM. (There's a minimal sketch of the resolve call below, after this list.)

2) Use a standard texture read if you're doing standard VSM with fp32: "Load" does not support filtering. Use a sampler with all of the filtering the hardware can give you. Of course "Load" is (probably?) required if you're using int32 but you won't get filtering with that... of course that isn't a problem with SAVSM since you're doing your own filtering.

3) Full demo source will be in Gems 3. I can post snippets here though if you have specific questions.
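For 1), the resolve itself is just a single API call on the device. A minimal sketch (resource names are illustrative; the two textures need matching dimensions and a compatible format):

Code:
// Resolve the MSAA VSM render target into an ordinary (non-MSAA) texture
// that can then be bound and filtered like any other shadow map.
pd3dDevice->ResolveSubresource(pShadowMapResolved, 0,      // dest, subresource
                               pShadowMapMSAA,     0,      // source, subresource
                               DXGI_FORMAT_R32G32_FLOAT);  // moment format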

Good luck!
 
My problem is, if I just do a regular Sample() call, the shader fails to compile for some reason. It's just a float2 (R32G32_FLOAT), so I don't see why Sample() doesn't work. Is there anything special I need to do to the SamplerState?
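(For reference, Sample() isn't defined on the multisampled texture objects at all, so one likely cause of the compile error is calling it on a Texture2DMS; it has to go through the resolved, non-MSAA map. A minimal sketch of declarations that should compile, FX-style state block and illustrative names:)

Code:
Texture2D<float2> ShadowMap;    // resolved (non-MSAA) VSM
SamplerState ShadowSampler
{
    Filter   = MIN_MAG_MIP_LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};

// in the pixel shader:
float2 moments = ShadowMap.Sample(ShadowSampler, shadowTexCoord);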
 
Mintmaster, thanks for solving this problem for me a while back. I had no idea what resolving was, so I didn't try it. But now I do. Thanks so much guys.
 
:eek: AndyTX, you're in the now-released GPU Gems 3! Congratulations!
Thanks! I had to deal with the pain of being at SIGGRAPH (where Gems 3 apparently crushed everything else in sales), but not being able to buy it because I knew my free copy would be arriving at my house that week... but I'm home now and have the copy in my hands and it looks great! I'm really happy with how the chapter came out and I hope that it will be a good reference for anyone working with VSMs or even shadow maps in general (I go over PCF and related algorithms in quite a lot of detail as well).

I'm excited to read the rest of the chapters, but so far it looks like another good addition to the GPU Gems series, and I'd recommend picking it up if you're at all interested in real-time graphics algorithms.

Thanks again :)
 
congratulations Andy! already ordered my copy on amazon, but it seems it won't arrive before mid September :(
 
congratulations Andy! already ordered my copy on amazon, but it seems it won't arrive before mid September :(
Odd... I suspect that will probably get updated once they get stock (probably soon). I doubt that there will be significant production shortages for something like this.

Just a note though, it's significantly cheaper on Amazon.ca for some reason... perhaps to the point of it being worth ordering from there and getting it shipped internationally.

As always if there are any questions about the chapter I'd be happy to answer them here or by e-mail, although it's pretty thorough and hopefully quite understandable :)
 
Here's an example using your VSM. Many thanks for the inspiration.
Ooh very pretty! Does there happen to be a high-res video available for DL anywhere?

I know this is a little off-topic, but would you care to tell us a little bit more about the specific project and rendering tech in that demo?

Cheers,
Andrew
 
Very nice technique. I tested it on my X3100 as well; it has some issues with the VSM and PSVSM algorithms... at the edges of the soft shadow there is sometimes a black border, which looks like a wraparound issue... no doubt caused by the driver.
The other methods all work without visual artifacts... However, the SAVSM method with fp is really, REALLY slow on the X3100. It runs at about 3 fps, while the int method runs at 12 fps.
Again, probably the driver though :)

Just wanted to throw it in for general reference.
 
problem when going deferred...

Hi guys,

I am performing some tests and implemented a traditional deferred pipeline in my engine.

I am experiencing some issues with my cascade VSM implementation, which worked flawlessly with the forward rendering pipeline.

Using the deferred pipeline, I get some artifacts at polygon edges, as shown below (screenshot showing the result of the shadow being applied, nothing else - no lighting, no texture, no postprocess):

deferred_vsm.png


Now, I have checked everything is reconstructed from the G-Buffers correctly: normals, view space position, etc...

The problem definitely comes from the shadow map not being sampled correctly. See the screenshot below, showing the result of the shadow map sampling (I adjusted the contrast to show the error clearly):

deferred_vsm2.png


Now, I think this is due to the derivatives used for the texture sampling being wrong. The texture coordinates used for shadow map sampling are constructed in the pixel shader, using the view space position multiplied by the shadow light's view matrix.

However, the view space position is reconstructed using the linear view space depth, which is read from the G-Buffer in the pixel shader.

Hence, the texture coordinate derivatives are not correct.

I am pretty sure I have to compute them manually, but I have no idea how to proceed.

Could anybody help me?

Thanks,
Greg

[EDIT]:
I found an interesting method here: http://visual-computing.intel-research.net/art/publications/sdsm/ (Andy is everywhere :))

Now, one last question:

I can reconstruct the view space position of (x,y), of (x+1, y) and of (x, y+1) (x & y being pixels). Call them ViewPos, ViewPosdX, ViewPosdY.

So, I guess I have to transform these 3 coordinates into shadow light space, LightPos, LightPosdX, LightPosdY.

Then, what should I call SampleGrad with?

I would say:
ddx = LightPosdX - LightPos
ddy = LightPosdY - LightPos

Am I right?
 
Replied to PM, but yeah you're on the right track. Once you've worked out the position derivatives in screen space you just have to transform them to shadow space and forward difference them there. All the code (including EVSM/PCF/etc.) is in the demo that you linked :)

Note that once you handle derivatives properly like this, you also no longer need to play any tricks to ensure that each of the pixels in a quad chooses the same cascade (you may not have done this even in the forward renderer anyways, but I just wanted to mention it). This should thus also fix the odd lines that you're seeing at the edges of the cascades in the background of your image.
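Roughly like this, as a sketch (illustrative names; the actual code is in the linked demo):

Code:
Texture2D<float2> ShadowMap;     // resolved VSM
SamplerState ShadowSampler;      // linear/aniso filtering

// ViewPos, ViewPosdX, ViewPosdY: view-space positions reconstructed from the
// G-buffer depth for this pixel and its +1 pixel neighbours in x and y.
// ProjectToShadowUV() is a placeholder for "transform by the shadow light's
// view-projection matrix and convert to texture coordinates".
float2 uv   = ProjectToShadowUV(ViewPos);
float2 uvdx = ProjectToShadowUV(ViewPosdX) - uv;   // forward difference in x
float2 uvdy = ProjectToShadowUV(ViewPosdY) - uv;   // forward difference in y

// Explicit gradients keep the filtering footprint consistent even though the
// coordinates were reconstructed in the pixel shader.
float2 moments = ShadowMap.SampleGrad(ShadowSampler, uv, uvdx, uvdy);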
 