The Most Detailed Tech Information on the Xbox 360 Yet

Are you sure? 8.8 fixed doesn't expand the dynamic range very much. The raison d'être of HDR is to model the dynamic range of real-world lighting (typically 100,000:1, but up to 1,000,000:1 for special cases like being indoors while seeing outside through a window). You then try various tone-mapping techniques to fit this huge range inside the 8-10 f-stops that your monitor can handle.
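To make that tone-mapping step concrete, here is a minimal sketch in C of a Reinhard-style global operator, assuming a linear HDR luminance value and a hypothetical key/average-luminance exposure control; a real engine would run this per pixel in a shader:

```c
#include <math.h>

/* Minimal Reinhard-style global tone mapping sketch (assumption: the
 * renderer keeps scene luminance in a linear HDR buffer, e.g. FP16).
 * Compresses an unbounded luminance L into [0,1) so it fits the ~8-10
 * f-stops a monitor can reproduce. "key" is a hypothetical exposure knob. */
static float tonemap_reinhard(float L, float key, float avg_luminance)
{
    float Ls = key * L / avg_luminance;  /* scale by scene key/exposure   */
    return Ls / (1.0f + Ls);             /* compress: 100,000:1 -> [0,1)  */
}

/* Gamma-encode the tone-mapped value for an 8-bit display target. */
static unsigned char to_display(float v, float gamma)
{
    float g = powf(v, 1.0f / gamma);
    if (g < 0.0f) g = 0.0f;
    if (g > 1.0f) g = 1.0f;
    return (unsigned char)(g * 255.0f + 0.5f);
}
```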
 
DemoCoder said:
Are you sure? 8.8 fixed doesn't expand the dynamic range very much. The raison d'être of HDR is to model the dynamic range of real-world lighting (typically 100,000:1, but up to 1,000,000:1 for special cases like being indoors while seeing outside through a window). You then try various tone-mapping techniques to fit this huge range inside the 8-10 f-stops that your monitor can handle.

It's the first mistake everybody makes when they think about HDR: that it's about reality-level luminosity. But as we are faking light response, trying to mimic reality just looks crap. Even if you have the range to model reality, it looks terrible, because at these massive luminosity levels our brains see through the fake tone mapping (for 100,000:1 you would need an actual HDR display...).

Ignoring reality is just as important in rendering as copying reality...
 
When I make photographs, I "fake" exposures all the time. My camera cannot record more than about a 2^7 to 2^9:1 intensity ratio. As a result, during the development process I use all sorts of tricks to fit impossible exposures onto film (dodge/burn, 'pushing' film, etc.), or, for digital cameras, I take multiple pictures and combine them in Photoshop. For example, take a look at the "unreal" shots here: http://www.tawbaware.com/maxlyons/cgi-bin/image.pl?gallery=1 (not my work)
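A rough sketch of that multiple-exposure merge, assuming the shots are already linearized and their exposure times are known; this is the simplest weighted radiance recovery, not any particular tool's algorithm:

```c
#include <math.h>
#include <stddef.h>

/* Merge N bracketed exposures of the same scene into one HDR radiance
 * estimate (simplified sketch; real tools also calibrate the camera
 * response curve and align the frames).
 * pixels[i][p] : linearized pixel value in [0,1] for exposure i, pixel p
 * exposure[i]  : exposure time of shot i in seconds
 * out[p]       : recovered relative radiance for pixel p               */
void merge_exposures(const float *const *pixels, const float *exposure,
                     size_t num_shots, size_t num_pixels, float *out)
{
    for (size_t p = 0; p < num_pixels; ++p) {
        float sum = 0.0f, weight_sum = 0.0f;
        for (size_t i = 0; i < num_shots; ++i) {
            float v = pixels[i][p];
            /* Trust mid-tones most; near-black and near-white pixels are
             * noisy or clipped, so weight them down (hat function).     */
            float w = 1.0f - fabsf(2.0f * v - 1.0f);
            sum += w * (v / exposure[i]);  /* radiance ~ value / exposure */
            weight_sum += w;
        }
        out[p] = (weight_sum > 0.0f) ? sum / weight_sum : 0.0f;
    }
}
```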

Of course, the final scene will always have a response of about 2^8:1, and not just because of monitor limitations; this goes for film as well. But that doesn't mean artists won't work with intermediate formats that have a higher dynamic range before the final result, since it just makes things easier.

That's the essence. HDR makes it easier to preserve contrast information so that the final shot, however unreal, has more detail and less artifacting. You could do HDR techniques on a normal RGBA framebuffer with workarounds if you wanted: using alpha or MRT to store an exponent, using scaling operations or gamma to map 8 bits to a bigger range, etc. But HDR makes development easier, and the final result has fewer problems. Whether you are doing the photoreal or the photo-unreal, I still think having an HDR buffer is better.
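As an illustration of the "exponent in alpha" workaround, here is a sketch of a shared-exponent encode/decode over a plain 8-bit RGBA target; the encoding shown is one assumed variant (Greg Ward's RGBE), not necessarily what any given engine would use:

```c
#include <math.h>

/* Pack a linear HDR colour into 8-bit RGBA by storing a shared exponent
 * in the alpha channel (RGBE), one way to fake HDR on an RGBA8 buffer. */
void rgbe_encode(float r, float g, float b, unsigned char out[4])
{
    float m = r > g ? r : g;
    if (b > m) m = b;

    if (m < 1e-32f) {                  /* effectively black */
        out[0] = out[1] = out[2] = out[3] = 0;
        return;
    }

    int e;
    float scale = frexpf(m, &e);       /* m = scale * 2^e, scale in [0.5,1) */
    scale = scale * 256.0f / m;

    out[0] = (unsigned char)(r * scale);
    out[1] = (unsigned char)(g * scale);
    out[2] = (unsigned char)(b * scale);
    out[3] = (unsigned char)(e + 128); /* biased shared exponent in alpha */
}

/* Unpack back to linear HDR floats. */
void rgbe_decode(const unsigned char in[4], float *r, float *g, float *b)
{
    if (in[3] == 0) { *r = *g = *b = 0.0f; return; }
    float f = ldexpf(1.0f, (int)in[3] - (128 + 8)); /* 2^(e - 8) */
    *r = in[0] * f;
    *g = in[1] * f;
    *b = in[2] * f;
}
```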
 