Deferred rendering, virtual texturing, IMR vs TB(D)R, and more...

TBDRs are used in embedded systems, which are bandwidth-starved, so it would be surprising if they didn't go for data compression as techniques used on desktops make their way into mobile hardware.
Shadow buffers would benefit from taking less memory/bandwidth, and given the number of fullscreen filters run in modern games, they would also benefit from a compressed colour buffer.
Also, doing deferred shading on chip would be quite nice...
(It wouldn't help with big kernels that need to access data outside the tile, hence the benefit of compressing your colour buffer if possible.)

Bandwidth is the major issue on CPU and GPU alike IMO.
(And lack of standard for GPU is a major one too :p)
(Also I'd like 4KiB pages; that would save internal fragmentation. I could live with 8/16KiB, but 64KiB seems too big, let alone 2MiB...)
(In fact I have the whole design of a streaming engine in my drawer, waiting only for VM on CPU/GPU sharing the same mapping and a streamlined low-level API + standard texture layouts [although I can get around that by using it only for compressed textures, which are standard in layout].)
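To put rough numbers on the page-size point above: if each live allocation wastes about half a page on average (a common rule of thumb, not a measurement of any particular GPU), total internal fragmentation scales linearly with page size. A minimal sketch:

```python
# Back-of-envelope internal fragmentation: assume each live allocation wastes
# about half a page on average (a rule of thumb, not a measurement).

def expected_waste_bytes(num_allocations: int, page_size: int) -> int:
    return num_allocations * page_size // 2

allocations = 10_000  # e.g. resident texture tiles / streamed chunks
for page in (4 << 10, 64 << 10, 2 << 20):
    mib = expected_waste_bytes(allocations, page) / (1 << 20)
    print(f"{page >> 10:>5} KiB pages -> ~{mib:.1f} MiB wasted")
```

With these illustrative numbers, 4KiB pages waste ~19.5 MiB while 2MiB pages waste ~10 GiB, which is the whole argument for small pages in a streaming engine.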
 
TBDRs have less incentive to compress color/Z etc. since they try to keep them on chip. But when you are designing one for UAV-like use, you would want to compress your buffers.
 
This isn't a big problem yet, because most games don't use more than two textures per object (eight channels, for example, can fit: RGB color, XY normal, roughness, specular, and opacity). But in the future materials will become more complex and g-buffers will become fatter (as we need to store all the texture data in the g-buffer for later stages).
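A minimal sketch of the two-texture packing mentioned above; the channel names and the layout dictionary are purely illustrative:

```python
# Illustrative mapping of eight material channels onto two 4-channel textures,
# matching the example channel list above (names are hypothetical).
MATERIAL_LAYOUT = {
    "texture0": {"r": "color.r", "g": "color.g", "b": "color.b", "a": "normal.x"},
    "texture1": {"r": "roughness", "g": "specular", "b": "opacity", "a": "normal.y"},
}

def channel_count(layout: dict) -> int:
    return sum(len(channels) for channels in layout.values())

print(channel_count(MATERIAL_LAYOUT))  # -> 8
```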

I wonder about that... we use quite complex materials (after all, it's offline rendering), but we don't have that many textures.

Probably the only exception is skin shading, where we need one or two extra RGB layers for the subdermal and epidermal components. Otherwise it's color, normal, displacement, spec, roughness, and sometimes opacity and/or self-illumination.

So my question is, what other texture layers do you see for more complex materials? Is it about precalculating lots of data and storing them in the textures instead of computing it at runtime?
 
In Trials Evo we stored all our material data in two DXT5 textures: (rgb, normal.x) + (spec, roughness, special, normal.y). We had the following g-buffer layout: depth (24/8), 8888 (rgb, spec), 10-10-10-2 (normal.xy, roughness, lighting_mode). Obviously it's very hard to beat a super tightly packed 64-bits-per-pixel g-buffer (+ depth buffer) by storing texture coordinates instead.
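For illustration, a pack/unpack sketch of that 10-10-10-2 word; the exact quantization (simple [0,1] scaling to 10 bits) is an assumption, not necessarily the game's encoding:

```python
# Pack/unpack for a 10-10-10-2 g-buffer word (normal.xy, roughness, lighting_mode).
# The [0,1] -> 10-bit quantization here is an assumption for illustration.

def pack_1010102(nx: float, ny: float, rough: float, mode: int) -> int:
    q10 = lambda v: max(0, min(1023, round(v * 1023.0)))  # [0,1] -> 10-bit integer
    return (q10(nx) << 22) | (q10(ny) << 12) | (q10(rough) << 2) | (mode & 0x3)

def unpack_1010102(word: int):
    d10 = lambda q: q / 1023.0  # 10-bit integer -> [0,1]
    return (d10((word >> 22) & 0x3FF),
            d10((word >> 12) & 0x3FF),
            d10((word >> 2) & 0x3FF),
            word & 0x3)

word = pack_1010102(0.5, 0.25, 1.0, 2)
assert word < 1 << 32  # the whole word fits one 32-bit render-target write
```

Round-tripping loses at most half a quantization step per channel, which is the usual trade-off of a packed g-buffer.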

But that layout isn't good if you want a better lighting model (RGB specular for metals, proper RGB HDR emissive instead of on/off via the lighting-mode bit, LEAN/CLEAN mapping, etc.). RGB specular adds an extra 16 bits (assuming 8-bit color is enough in the long run), and HDR emissive adds 32 bits (11-11-10 float at minimum). If you want to support unnormalized normal vectors, you need at least one extra 10-bit channel as well (and a 10-bit normal is only barely acceptable in the future). So in total you need around 128 bits for the main g-buffer (and of course some extra render targets for real-time ambient occlusion, per-pixel motion vectors, etc.).
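Tallying those figures (the 64-bit base layout from the previous paragraph plus the additions listed above) shows where the "around 128 bits" comes from; the labels are mine:

```python
# Per-pixel bit budget implied above (depth buffer excluded); labels are mine.
budget = {
    "base g-buffer (8888 + 10-10-10-2)": 64,
    "RGB specular": 16,
    "HDR emissive (11-11-10 float)": 32,
    "unnormalized-normal extra channel": 10,
}
print(sum(budget.values()))  # -> 122, i.e. "around 128" once padded to aligned targets
```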

In comparison, Battlefield 3's g-buffer on PS3 is 128 bits (+ depth buffer): four 8888 render targets. The biggest changes compared to our method are that they use 32 bits (one 8888 target) just for irradiance data, and have an 8+8+8 normal (instead of 10+10). They also have some extra parameters such as an envmap id and sky visibility (an ambient occlusion term?). If you change all this data to acceptable bit depths for future rendering (and add RGB specular and emissive), you will end up with at least 256 bits per pixel. In this case a single 16+16-bit texture coordinate into a VT cache would be a very good idea.
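A minimal sketch of what that 16+16-bit indirection could look like: the g-buffer stores a virtual texel coordinate, and a page table (maintained by the streaming system) maps virtual tiles to physical tiles in a cache atlas. The tile size, table layout, and miss handling are all assumptions:

```python
# Hypothetical virtual-texture indirection: a 16-bit (u, v) pair stored in the
# g-buffer is resolved through a page table into a physical cache atlas.
TILE = 128        # texels per cache tile side (illustrative)
PAGE_TABLE = {}   # (virtual_tile_x, virtual_tile_y) -> (atlas_tile_x, atlas_tile_y)

def resolve(u16: int, v16: int):
    """Map a virtual texel coordinate to an atlas texel coordinate."""
    tile = (u16 // TILE, v16 // TILE)
    ax, ay = PAGE_TABLE[tile]  # a real system would fall back to a coarser mip on miss
    return (ax * TILE + u16 % TILE, ay * TILE + v16 % TILE)

PAGE_TABLE[(2, 5)] = (0, 1)                 # pretend the streamer mapped one tile
print(resolve(2 * TILE + 7, 5 * TILE + 3))  # -> (7, 131)
```

The point is that shading reads material data through one extra dependent fetch instead of carrying 256 bits of material channels per pixel.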
 
Okay, so you were talking more about the possible g-buffer channels instead of strictly just texture layers, I understand.

Yeah, for compositing we render out lots of layers too, mostly to have the option of quick fixes when there isn't enough time left to re-render shots at high sampling quality. We use .exr, so I can't quickly check the layout, but I think we can do post-process lighting, shader tweaking, and such, so it must be a lot of data.

Then there's the concept of deep compositing, which could eventually also make its way into realtime deferred rendering. Although that stuff is beginning to get too complex for me ;)
 
Thanks for this. I am looking for something that has handwriting recognition and math formula -> LaTeX recognition/conversion. Is there anything out there that does that?

I saw this a while back: visionobjects demo. It'll convert to TeX and MathML.

I can't tell from the main website whether that's integrated into a larger app that does the other stuff you want. I also haven't done more than toy with it, so I don't know whether the recognizer is any good.

P.S. If you find something that works well, I for one would be interested to hear about it!
 