OpenEXR format HDR won't survive...

for computer gaming.

OpenEXR format HDR requires a lot of bandwidth and instruction slots; the HDR data needs to be carried through the whole pixel pipeline. Therefore it's hard to manipulate anything if you want to re-render the HDR data again. This is why AA cannot be applied when OpenEXR format HDR is used (it actually CAN, but the cost is TOO HUGE).
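To put rough numbers on the bandwidth claim, here's a back-of-the-envelope sketch. The resolution and the "no compression, color buffer only" assumptions are mine, purely for illustration — real GPUs use framebuffer compression and the hit varies:

```python
# Rough framebuffer bandwidth sketch (illustrative figures, not vendor data):
# RGBA8 stores 4 bytes/pixel, FP16 RGBA stores 8 bytes/pixel.
def framebuffer_bytes(width, height, bytes_per_pixel, msaa_samples=1):
    """Color buffer size for one frame, ignoring Z and compression."""
    return width * height * bytes_per_pixel * msaa_samples

w, h = 1280, 1024                                      # assumed resolution
rgba8   = framebuffer_bytes(w, h, 4)                   # ~5 MB
fp16    = framebuffer_bytes(w, h, 8)                   # ~10 MB
fp16_4x = framebuffer_bytes(w, h, 8, msaa_samples=4)   # ~40 MB

print(fp16 / rgba8)      # 2.0 -> FP16 alone doubles color traffic
print(fp16_4x / rgba8)   # 8.0 -> with 4x MSAA the multipliers compound
```

The compounding of the FP16 cost with the MSAA sample count is why "HDR + AA" was considered so expensive.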

On the other hand, we see the fragment shader bloom effect gaining much more appreciation with average joe gamers. Microsoft even claims the bloom effect is actually HDR. The bloom effect is a simple shader trick that costs much less than OpenEXR format HDR, so applying AA is feasible. Most importantly, to the eyes of average joes, Bloom and HDR DON'T LOOK MUCH DIFFERENT.

Any thoughts?:?:
 
There was a thread a little while back where DeanoC mentioned an 8-bit-per-channel format nAo came up with that is as good as FP16. The trick seemed to be changing the color space from RGB to something else. The known cost is a few shader instructions.
 
3dcgi said:
There was a thread a little while back where DeanoC mentioned an 8-bit-per-channel format nAo came up with that is as good as FP16. The trick seemed to be changing the color space from RGB to something else. The known cost is a few shader instructions.
Yup, definitely the future - either that, or deferred renderer mania, imo, which is much less likely. Richteralan's explanation of why FP16 is expensive isn't quite accurate though, imo... it's not carrying the data that's expensive, it's reading/writing/storing it.
I still want to see the "different blending" in action, but I'm sure that'll be soon enough.

Uttar
 
Last edited by a moderator:
What's this obsession with "openexr" HDR? From what I've read, that's just a file format for disk storage, used in image rendering/manipulation software. The only thing AFAIK in common with 3D accelerator display buffers is that both use the same number of bits of mantissa/exponent per pixel.

Besides, "deep" buffers ought to be useful for other things than just HDR effects (which for the most part has become little more than a popular buzzword in software released so far)... Or that's what people claimed back when we were still debating whether it was a good or bad decision of ATi to go with 24bpp float precision in the R300.
 
Guden Oden said:
What's this obsession with "openexr" HDR? From what I've read, that's just a file format for disk storage, used in image rendering/manipulation software.
Thank NVIDIA marketing for the misconception. IIRC, OpenEXR just means FP16+Exposure Control. The format itself supports basic compression, which is kind of retarded since zlib or bzip2 will give you better results anyway.
Besides, "deep" buffers ought to be useful for other things than just HDR effects (which for the most part has become little more than a popular buzzword in software released so far)...
Yup, there are plenty of nice uses, obviously, but I can't think of any off the top of my head that require FP16 blending, which is what NV40/R520 added. For example, motion blur (and not trail!) via velocity buffer can work that way. FP16 filtering can still be useful sometimes, though.
Or that's what people claimed back when we were still debating wether it was a good or bad decision of ATi to go with 24bpp float precision in the R300.
That's roughly unrelated, though; 24 bits-per-component (not bpp, damnit!) was internal precision, that was never stored externally.


Uttar
 
This is why AA cannot be applied when OpenEXR format HDR is used(it actually CAN, but the cost is TOO HUGE).

4xAA on top of FP blending HDR is a 20/25% hit on the R520, definitely not huge.
 
Apple740 said:
4xAA on top of FP blending HDR is a 20/25% hit on the R520, definitely not huge.

I think the reason is that the R520 needs to do FP filtering in the pixel shader, so the bandwidth problem was hidden.
 
You'll probably find that there aren't that many cases where FP textures & filtering are used where FP16 blending is at the moment; they certainly aren't with Far Cry, which I assume those rough performance numbers are based on. Filtering via the shader will still require the same number of samples as well, so the bandwidth utilisation in comparison to FP filtering is the same.
 
Sorry that I sound so noobish :oops: . But I do know OpenEXR is just a file format for storing HDR data.

I'm just thinking there's a lot of emphasis on the claim that AA cannot be used with HDR. And people complain that with HDR the jaggies are more visible, so AA is needed even more than in non-HDR rendering.

And I've also argued with some gamers who think the bloom effect in AOEIII is HDR, the bloom effect in NFS:MW is HDR, etc. A lot of people can't really distinguish what's HDR and what's bloom.

That's what led me to the original question.
 
3dcgi said:
There was a thread a little while back where DeanoC mentioned an 8-bit-per-channel format nAo came up with that is as good as FP16. The trick seemed to be changing the color space from RGB to something else. The known cost is a few shader instructions.
Never saw that thread. Does it work with today's blending hardware, or would it require new hardware? (i.e. the best thing I can think of is to have data stored in log2 channels.)

Dave Baumann said:
You'll probably find that there aren't that many cases where FP textures & filtering are used where FP16 blending is at the moment; they certainly aren't with Far Cry, which I assume those rough performance numbers are based on. Filtering via the shader will still require the same number of samples as well, so the bandwidth utilisation in comparison to FP filtering is the same.
Caching might suffer depending on how intelligent they try to make it. If there were truly no difference, why isn't it transparently supported? There is no texture read limit in PS3.0, so it should handle it fine.
 
bloodbob said:
Never saw that thread. Does it work with today's blending hardware, or would it require new hardware? (i.e. the best thing I can think of is to have data stored in log2 channels.)
I think it works with existing blending hardware, but I don't remember the details. Maybe nAo, DeanoC, or someone else can jump in and point to the thread. I didn't find it with a quick search.
 
Still, I have found some more info in a linked thread. Basically, no, you can't blend, which would mean new chips to actually use it decently. Also, nAo says it doesn't look quite good enough at 24 bits, and unless this method has more than 3 channels (which it appears not to), it definitely isn't doing 8 bits per channel.

Presumably it's something like HSV or HLS or HSI or HVC (I know that one of HSV/HLS/HSI tends to be a bit more expensive than 5 instructions, since I used a crapload of instructions in my ATI Digital Vibrance Smart Shader, at least on pre-branching hardware), or maybe the CIE ones.

So unless the R580 or G71 are going to support this new thing, we can pretty well forget about it for a while. It could be useful as a texture format, but then again, if you're going to do linear interpolation, it's possibly going to cost you 20 instructions for a bilinear sample.
 
Bloodbob: Errr, seems to me you didn't understand the technique at all. The method IS using more than 3 channels: it's using alpha. The hacky part is that two channels are "combined" for luminance, and you can't do 100% accurate blending without ping-ponging and using the pixel shader (expensive as hell).
However, just LERPing in that domain gives satisfactory results according to nAo, which means it also works nicely for MSAA/textures. I'd like to see it in action to make sure it appears okay to MY eyes too, but trust me, it's more than usable on current GPUs. What do you think he developed it on, RSX? Doubt it :p

Uttar
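For what it's worth, here's a tiny sketch (with made-up numbers) of why a LERP in a log-luminance domain gives "different" blending than the same LERP in linear space — a log-space lerp is effectively a geometric mean rather than an arithmetic one:

```python
import math

# Sketch: lerping directly on log2-encoded luminance vs. the "correct"
# result of lerping in linear space. Values are illustrative only.
def enc(y):          # log2 encode, as a stand-in for a log-luminance channel
    return math.log2(y)

def dec(e):          # decode back to linear luminance
    return 2.0 ** e

a, b, t = 1.0, 16.0, 0.5
correct = a * (1 - t) + b * t                  # linear-space blend: 8.5
approx  = dec(enc(a) * (1 - t) + enc(b) * t)   # log-space lerp:     4.0

print(correct, approx)   # 8.5 4.0
```

The two only diverge badly when the blended values span a large luminance range, which matches the claim that the result looks acceptable in practice while being theoretically less accurate.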
 
Uttar said:
Bloodbob: Errr, seems to me you didn't understand the technique at all. The method IS using more than 3 channels: it's using alpha. The hacky part is that two channels are "combined" for luminance, and you can't do 100% accurate blending without ping-ponging and using the pixel shader (expensive as hell).
However, just LERPing in that domain gives satisfactory results according to nAo, which means it also works nicely for MSAA/textures. I'd like to see it in action to make sure it appears okay to MY eyes too, but trust me, it's more than usable on current GPUs. What do you think he developed it on, RSX? Doubt it :p

Uttar
What technique? Where is it frigging posted? If it's hacking by combining two channels into one, then it definitely isn't using 8-bit channels, since it's using two 8-bit channels and one 16-bit channel. If it's using the alpha channel, you're not going to be able to blend anyway, unless you have two RGBA8 buffers, one for channels 1 and 2 and one for channel 3, at which point you're using as much bandwidth as FP16 — which is rather useless, since the whole point was to avoid it.

Closest details I can get to a technique is
Yep Vince, colors are moved to another space where we have 2 main components: one representing luminance and the other representing the color itself (don't worry, it's not YUV; it would be no better to use RGBE, since you don't have normalized color components in that case), even though those 2 quantities are deeply interconnected.
Moreover, the second component is in some way stretched and clamped so as to leave us more bits for colors that the human eye can actually see.
It works so well that I also evaluated a 24-bit-per-pixel version, which in the end was not used because the quality we expected was not there in some cases
(but I have some ideas to improve it.. it would be nice to have a spare color channel to store some additional info per pixel ;) )
 
Heh, well, I asked nAo a bit further about it, so I can't remember exactly what the thread revealed/didn't reveal. Basically, chrominance is stored in RG and luminance in BA. Luminance would obviously be stored less hackily if there were such a thing as an 8/8/16 framebuffer, but that doesn't exist (yet, at least).
The standard algorithm thus uses 32 bits, but as nAo said, he also developed a 24-bit version with a significantly lower luminance range; even that, according to him, should be nearly as precise as FP16 implementations (while the 32-bit implementation is significantly more precise, but with "different"-looking blending that would be less accurate, at least theoretically).

Uttar
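To make the layout concrete, here's a rough encode/decode sketch in the spirit of what's described above: chrominance in two 8-bit channels, a 16-bit log luminance split across the other two. The constants, the luminance range, and the quantization choices are my own illustrative guesses (loosely after Ward's LogLuv idea), not nAo's actual format:

```python
import math

# Linear-RGB -> CIE XYZ, Rec.709 primaries.
M = [[0.4124, 0.3576, 0.1805],
     [0.2126, 0.7152, 0.0722],
     [0.0193, 0.1192, 0.9505]]

def encode(r, g, b):
    """Pack linear RGB into 4 bytes: u'v' chroma in R/G, log2 luma in B/A."""
    X, Y, Z = (row[0]*r + row[1]*g + row[2]*b for row in M)
    d = X + 15.0*Y + 3.0*Z + 1e-9
    u, v = 4.0*X/d, 9.0*Y/d                        # CIE u'v' chromaticity
    # log2 luminance mapped into 16 bits; assumed range [-16, +16):
    le = int((math.log2(max(Y, 1e-9)) + 16.0) / 32.0 * 65535.0)
    le = max(0, min(65535, le))
    ue = max(0, min(255, int(u * 410.0)))          # 8-bit chroma channels
    ve = max(0, min(255, int(v * 410.0)))
    return ue, ve, le >> 8, le & 0xFF              # R, G, B, A

def decode(ue, ve, lhi, llo):
    """Inverse of encode: recover linear RGB (approximately)."""
    le = (lhi << 8) | llo
    Y = 2.0 ** (le / 65535.0 * 32.0 - 16.0)
    u, v = ue / 410.0, ve / 410.0
    X = Y * 9.0*u / (4.0*v + 1e-9)
    Z = Y * (12.0 - 3.0*u - 20.0*v) / (4.0*v + 1e-9)
    # XYZ -> linear RGB (inverse of M, Rec.709)
    r =  3.2406*X - 1.5372*Y - 0.4986*Z
    g = -0.9689*X + 1.8758*Y + 0.0415*Z
    b =  0.0557*X - 0.2040*Y + 1.0570*Z
    return r, g, b
```

With these assumed constants, a round trip of an HDR value like (8.0, 4.0, 2.0) comes back within a couple of percent, which is the point: FP16-like range out of four 8-bit channels, at the cost of a few shader instructions each way.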
 
You mean 8/8/16/8 buffers, unless we are going to ping-pong. Nice as these things are, IMHO they have no place forcing these nonlinear formats into the framebuffer until we get programmable blends. And if you want to use them as textures, can people please calculate the mip-map chain properly (i.e. convert to RGB colour space and convert back).
 
bloodbob: It's 8/8/8/8, trying to simulate LogLuv 32-bit. Amusingly enough, that format was created by an ex-SGI guy, based on CIELuv.
www.anyhere.com/gward/papers/jgtpap1.pdf


Uttar
 
Uttar said:
bloodbob: It's 8/8/8/8, trying to simulate LogLuv 32-bit. Amusingly enough, that format was created by an ex-SGI guy, based on CIELuv.
www.anyhere.com/gward/papers/jgtpap1.pdf


Uttar
Read the LogLuv papers before :p Quite a nice texture format.
 