Shading Decoupled from Rasterization

chris1515

@dankbaker : https://twitter.com/dankbaker/status/559493273241137152?s=09

@Reedbeta Our shading is decoupled from rasterization already. Basically a real time REYES, so these are rasterization samples.

I found this paper as a PDF to download:
http://cg.ivd.kit.edu/english/ShadingReuse.php

In this paper we present decoupled deferred shading: a rendering technique based on a new data structure called compact geometry buffer, which stores shading samples independently from the visibility. This enables caching and efficient reuse of shading computation, e.g. for stochastic rasterization techniques. In contrast to previous methods, our decoupled shading can be efficiently implemented on current graphics hardware. We describe two variants which differ in the way the shading samples are cached: the first maintains a single cache for the entire image in global memory, while the second pursues a tile-based approach leveraging local memory of the GPU's multiprocessors. We demonstrate the application of decoupled deferred shading to speed up the rendering in applications with stochastic supersampling, depth of field, and motion blur.
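To make the abstract's core idea concrete, here is a minimal CPU-side Python sketch of the caching concept (all names are hypothetical, and this is not the paper's actual GPU implementation): many visibility samples map to few shading samples, so the expensive shading runs once per unique shading-sample key and is reused by every visibility sample that hits it.

```python
# Hypothetical sketch of decoupled shading's cache idea: shading is stored
# independently of visibility, so stochastic visibility samples (motion blur,
# depth of field) reuse previously computed shading results.

shade_calls = 0

def shade(key):
    """Stand-in for an expensive BRDF/lighting evaluation."""
    global shade_calls
    shade_calls += 1
    return hash(key) & 0xFF  # fake colour value

def resolve(visibility_samples):
    cache = {}   # plays the role of the "compact geometry buffer" cache
    frame = []
    for vis in visibility_samples:
        key = vis["shading_key"]      # e.g. (triangle id, quantised barycentrics)
        if key not in cache:
            cache[key] = shade(key)   # shaded once...
        frame.append(cache[key])      # ...reused by every visibility sample
    return frame

# 16 stochastic visibility samples that cover only 4 unique shading locations:
samples = [{"shading_key": ("tri7", i % 4)} for i in range(16)]
frame = resolve(samples)
print(len(frame), shade_calls)  # 16 visibility samples, only 4 shade() calls
```

The real techniques in the paper differ mainly in where this cache lives (global memory vs. per-tile local memory) and in using hashing rather than a Python dict, but the reuse pattern is the same.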

Oxide Games use it on PC, but do you think it is useful for PS4 and Xbox One? If I understand it well, it is good for high-quality AA, high-quality motion blur, and high-quality depth of field.
 
There are many decoupled shading papers at this point. Are you claiming Oxide uses this method or is it just the first paper you found?
 
There are many decoupled shading papers at this point. Are you claiming Oxide uses this method or is it just the first paper you found?

No, I don't know which method they used. It is just the first paper I found.
 
Without conservative rasterization, like on the latest Nvidia GPUs and future DX12 GPUs, it may not be possible to use this rendering method with good performance on PS4 and Xbox One.
 
Possibly dumb question, but is this similar to what sebbbi talked about regarding the use of compute shaders on the current gen consoles and it allowing them to skip the render outputs?
 
Maybe GDS can help. None of the implementations I read about in the different papers used functionality of the GCN architecture that is hidden by DirectX 11 or OpenGL. I think in the 2 or 3 years to come we will have an answer.

And probably some devs have an idea about the answer. :)
 
I think sebbi is doing something similar, shading the virtual texture that's generated for each frame... But I'd rather not get into this as he's probably far more qualified to talk about it ;)
 
I think sebbi is doing something similar, shading the virtual texture that's generated for each frame... But I'd rather not get into this as he's probably far more qualified to talk about it ;)

Yes, I just found the topic outside of the console forum. :p
 
I think sebbi is doing something similar, shading the virtual texture that's generated for each frame... But I'd rather not get into this as he's probably far more qualified to talk about it ;)
I have been prototyping various data reuse/decoupling techniques since Trials HD (2009). Uniquely mapped virtual texturing + texture space lighting is a perfect fit for decoupled shading. Namely, it makes finding the shading samples trivial (a direct texture lookup instead of hashing) and it trivially supports blending between shading samples (standard texture filtering works perfectly). This kind of technique doesn't need any extra hardware (unlike many of the other proposed techniques), but it has heavy implications for renderer (and content pipeline) design. Everything needs to be designed around it.

Prototyping radical techniques is easy. However, solving all the issues and limitations they cause for content production, and figuring out how to combine them with other rendering techniques, is often hard. It would definitely be interesting to see when these kinds of techniques get mature enough (and fast enough) that we can see them in actual games. This console generation might be too early.
 
Pretty sure this is similar to what Oxide is doing. (texture space shading)
https://software.intel.com/sites/de...re_Space_Shading_for_Stochastic_Rendering.pdf
That paper pretty much describes the advantages of texture space shading (for decoupled shading).
To minimize state changes for very large scenes, we would like to develop a system for dynamically allocating properly-sized shading textures (or using a part of a shading texture) on a per-object basis. This would not require any artist interaction and at the same time it would reduce state changes significantly, which would also increase the performance of our algorithm.
They are clearly not using virtual texturing, as it would be a perfect solution for this particular problem. With virtual texturing they could also shade only the visible fragments (small fragment tiles). Now they are shading at triangle granularity, and also shading triangles that could be occlusion culled. I wish I had more time to do research in this area, but unfortunately these kinds of techniques are not yet fast enough for 60 fps console games.
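The visibility point above can be sketched in a few lines of Python (hypothetical names and sizes, just an illustration of the principle): a visibility pass marks which small texture pages are actually on screen, and only those pages get shaded, so occluded geometry contributes no shading work at all.

```python
# Hypothetical sketch of virtual-texturing page marking: only texture pages
# referenced by visible fragments are shaded, instead of shading whole
# triangles (including ones that could have been occlusion culled).

PAGE = 16  # texels per page side (illustrative)

def pages_touched(visible_uvs, tex_size):
    """Collect the set of pages referenced by visible fragments."""
    return {(int(u * tex_size) // PAGE, int(v * tex_size) // PAGE)
            for u, v in visible_uvs}

# A 256x256 texture has 16x16 = 256 pages, but these visible fragments
# only touch two of them:
visible = [(0.01, 0.02), (0.03, 0.01), (0.90, 0.91)]
needed = pages_touched(visible, 256)
print(len(needed))  # 2 pages to shade instead of all 256
```

A real renderer would gather these page IDs on the GPU (e.g. from a UV/feedback pass) and dispatch shading only for the marked pages, but the culling effect is the same.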
 