Layman's Idea - Deferred Transparencies with Flexible ID Tagged G-Buffer

Hi guys.
Disclaimer: I'm admittedly a layman on modern 3D graphics programming. All the programming and game development has been a hobby side thing I did for fun, with few fruits coming out of it. Though as an enthusiast, I've always read papers and researched real-time and offline rendering technology (the latter is marginally related to my job), initially because I was just curious about how that shit worked, now because I'm curious about what steps games are gonna take next. I have a fair understanding of how modern rendering engines work, but it's all very high level, and I have no idea what the true costs of each step are, what the hardware limitations are, and such. With that said, as I was reading some rendering related stuff on the internet recently, I realized there were a bunch of separate ideas floating around that could converge into a very robust solution for many of the most typically talked-about problems of real-time rendering. I have no idea how feasible actually doing it would be, and that's why I'm here. Out of curiosity, I'd love to hear from the pro devs over here what could and couldn't work in what I thought up here:

Deferred Transparencies with Flexible ID-Tagged G-Buffer (terrible name from a layman who reads SIGGRAPH papers here and there)

Deferred rendering is good. It not only reduces overdraw and decouples your geometry pass from everything else, but having a G-buffer around is great for a bunch of cool lighting and screen-space (SS) effects like AO, GI, reflections, SSS, DOF, MB, post AA and such. Even engines that are actually forward end up incorporating more and more G-buffer layers for those effects, to the point where there's not even a reason for them not to become deferred anyway (UE3 is a great example).
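
Just to make concrete what I mean by "G-buffer layers", here's a toy C++ sketch of the kind of per-pixel data a deferred renderer keeps around (the field names and bit widths are my own invention, not any real engine's layout):

```cpp
#include <cstdint>

// Hypothetical "fat" G-buffer texel, one per screen pixel.
// Every SS effect (AO, SSR, DOF, MB...) reads some subset of this.
struct GBufferTexel {
    uint16_t normalXY[2];   // encoded surface normal
    uint8_t  albedoRGB[3];  // base color
    uint8_t  roughness;     // drives specular / SS reflections
    float    depth;         // used by basically every SS effect
    int16_t  motionXY[2];   // motion vectors for MB / temporal AA
    uint8_t  materialID;    // which shading model the lighting pass runs
};
```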

All great. Until... transparencies.
So imagine a world where you can do the shading and post-processing of your transparencies the very same way as your opaque geometry. Transparencies have no sorting problems, intersect correctly, and are lit per pixel (or almost) with all the lights, shadowing and effects your opaque pixels are getting. They not only receive all your SS effects like reflections, GI and such, but also contribute to them correctly for other opaque or transparent samples (no SS reflections of your particles warping over the opaque geo behind them). Oh, and they are all in HDR, tonemapped, DOF'd, motion blurred. The full package. Plus, some additional layers of depth-peeled opaque geometry for better SS effects and cool multi-fragment effects for volumetrics and stuff. And cleaner AA wouldn't hurt either, now would it?

UE4 went for some of the same goals with its secondary G-buffer for transparencies, but that solution is anything but robust. Inferred rendering is a bit more flexible (more layers of transparency), yet not completely: it only supports up to 4 layers of transparency (Red Faction: Armageddon and SR3 got away with it, though). But the mentality is clever. If a surface is transparent, well, you see less of it, so you don't really need as much detail there. Equally, if an opaque surface is behind a thick layer of transparent stuff, neither does that surface need much detail. Yet sharp edges and high-frequency effects might still be noticeable (though transparencies very often end up causing defocusing of their background in the real world anyway...)

So what if instead of doing spatial down-sampling of your areas covered by transparencies, you down-sampled their bit depth?
Say you get 16-bit precision normals (not sure of the number, just an example) for your opaque geometry. Once there is transparency in front of it, each surface gets only 8 bits. You are reducing the accuracy of shading of each one, but their resulting color will also have reduced influence over the final pixel color on-screen. That could be done for everything: normals, depth, albedo, specular... You could divide precision according to alpha value; say, a particle that is only 10% opaque gets very low precision, while affecting stuff behind it the least possible. Depth, for example, could always favor opaque stuff over transparent, since you'll want to use soft z-clipping for most of them anyway (soft particles). Something like the sketch below.
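
To illustrate the packing side (a toy C++ sketch, with bit budgets and function names I made up just to show the idea of splitting one channel's bits between layers):

```cpp
#include <cstdint>
#include <cmath>

// Quantize a [0,1] value into `bits` bits.
static uint32_t quantize(float v, int bits) {
    return (uint32_t)std::lround(v * float((1u << bits) - 1));
}

// One opaque surface alone: spend the full 32-bit channel on it,
// 16 bits per normal component.
uint32_t packOpaque(float nx, float ny) {
    return (quantize(nx, 16) << 16) | quantize(ny, 16);
}

// A transparent layer in front of an opaque one: each gets 8 bits per
// component, since each contributes less to the final pixel color.
uint32_t packTwoLayers(float frontNx, float frontNy,
                       float backNx,  float backNy) {
    return (quantize(frontNx, 8) << 24) | (quantize(frontNy, 8) << 16) |
           (quantize(backNx,  8) <<  8) |  quantize(backNy,  8);
}
```
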
This seems perfectly reasonable to me, but I could be missing something, or it could just not be GPU friendly.
For maximum robustness, each material could have a different bit-precision trade-off; the renderer would read its ID tag and interpret its G-buffer data accordingly, which would otherwise just be a bit-soup of values.
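
And the decode side would look something like this (again a made-up sketch: the tag values and layouts are invented, the point is just that the lighting pass reads the ID first and only then knows what the rest of the bits mean):

```cpp
#include <cstdint>

// Hypothetical per-pixel tag describing how this texel's bits are laid out.
enum class TexelLayout : uint8_t {
    OpaqueFull16,  // one surface, 16 bits per normal component
    TwoLayers8,    // front + back layer share the channel, 8 bits each
};

static float unquantize(uint32_t q, int bits) {
    return float(q) / float((1u << bits) - 1);
}

// Decode the front-most layer's normal-x from one packed 32-bit channel.
float frontNormalX(uint32_t packed, TexelLayout layout) {
    switch (layout) {
    case TexelLayout::OpaqueFull16:
        return unquantize(packed >> 16, 16);
    case TexelLayout::TwoLayers8:
        return unquantize((packed >> 24) & 0xFFu, 8);
    }
    return 0.0f; // unreachable with a valid tag
}
```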

This could also incorporate spatial down-sampling into the mix, further using ID tags to prioritize spatial frequency where it's needed most. E.g. flat materials would require less normal frequency, so they could allow transparent surfaces in front of them to use every other normal sample from their G-buffer. Smoke particles, on the other hand, could require less spatial resolution for their albedo, since they don't have very high-frequency detail there. On that note, things like roughness could be lower resolution for everything, since it's rarely high frequency... Something like the little lookup sketched below.
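
In code, I picture that as a tiny per-material table (all the divisor values here are pure guesses for illustration):

```cpp
#include <cstdint>

// Per-channel resolution divisor: 1 = full resolution,
// 2 = store this channel at every other sample only, and so on.
struct ChannelResolution {
    int normal;
    int albedo;
    int roughness;
};

// Hypothetical mapping from material ID tag to resolution trade-off.
ChannelResolution resolutionFor(uint8_t materialID) {
    switch (materialID) {
    case 0:  return {1, 1, 2}; // default: full res, half-res roughness
    case 1:  return {2, 1, 2}; // flat wall: normals can be half-res too
    case 2:  return {1, 2, 4}; // smoke: albedo is low frequency anyway
    default: return {1, 1, 1};
    }
}
```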

Since you'll be rendering, shading, and post-processing all your transparent geo together with your opaque, is it such a stretch to ask for a slightly-larger-than-1920x1080 buffer? Say, with the savings of not having to do any additional rasterizing and shading over your deferred stuff, does it become more feasible to have a 2880x1080 G-buffer (50% wider) for some additional samples? You get less blurring when transparencies are around, and sharper AA (super-sampled!!) when you have none.
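
Back-of-envelope on the memory cost, assuming a totally made-up 20 bytes per texel:

```cpp
#include <cstdio>

int main() {
    const long long bytesPerTexel = 20;  // pure assumption
    const long long base = 1920LL * 1080 * bytesPerTexel;
    const long long wide = 2880LL * 1080 * bytesPerTexel;
    // Prints roughly 39.6 MB vs 59.3 MB: the 50% wider buffer costs 50%
    // more memory and bandwidth, which is the real question for the pros.
    std::printf("%.1f MB vs %.1f MB\n",
                base / (1024.0 * 1024.0), wide / (1024.0 * 1024.0));
    return 0;
}
```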

Then yeah, that opens up space for crazy stuff like temporally super-sampling fast-moving objects for better MB (render multiple transparent instances of the same object instead of one opaque), depth peeling for better everything (less obvious SSR holes, better DOF, better AO, better SSSSS, and even AA).

Does next gen make this sort of ambitious renderer design possible (compute shaders would probably be a must), or is this still too complex and costly? If anything, it would at least be a very hard engine to develop. I'm curious.
 