HDR Method used in Splinter Cell: Chaos Theory

Discussion in 'Architecture and Products' started by bigz, Mar 30, 2005.

  1. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    That's not correct at all (especially the part I bolded).

    Even with tone-mapping, you need to have a scaling factor. You can't just use the same curve for all data, or you'd have really crappy black levels in bright scenes and dim whites in dark scenes. E.g.: if outdoors the sky has a value of 1,000 and the shadows are 1, you'd want the shadows to be pretty much black. Indoors, if an illuminated white wall has a value of 1.0 and the shadows have a value of 0.001, you'd want 1.0 to be close to white now, and 0.001 will be black.
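    The scaling argument can be sketched in a few lines of Python, using a Reinhard-style curve (x / (1 + x)) as the fixed tone curve; the exposure values here are made up for illustration. With exposure fixed at 1.0, the outdoor shadow at 1.0 would map to 0.5 (mid-grey, not black); a per-scene scale fixes both cases with the same curve:

```python
def tone_map(value, exposure):
    # A fixed Reinhard-style curve; only 'exposure' changes per scene.
    v = value * exposure
    return v / (1.0 + v)  # maps [0, inf) into [0, 1)

# Outdoors: sky = 1000, shadows = 1.  Pick an exposure that keeps the sky bright.
outdoor_exposure = 1.0 / 100.0
sky = tone_map(1000.0, outdoor_exposure)   # ~0.91, near white
shadow = tone_map(1.0, outdoor_exposure)   # ~0.01, near black

# Indoors: wall = 1.0, shadows = 0.001.  A much larger exposure is needed.
indoor_exposure = 10.0
wall = tone_map(1.0, indoor_exposure)      # ~0.91, near white
dark = tone_map(0.001, indoor_exposure)    # ~0.01, near black
```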

    In other words, you can't have a fixed global iris. You can't "not have to worry about the overall light level", or your tone mapping implementation will look like crap. Imagine a video camera with a fixed shutter speed and aperture. It would be next to useless.

    Look at the RTHDRIBL demo. It downsamples the full scene to a single pixel, and uses that value, averaged over a few frames to simulate auto-exposure delay in a camera/eye, to determine the exposure level.
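    A rough sketch of that auto-exposure loop, with hypothetical luminance values (the real demo does the downsample on the GPU over several reduction passes, and 0.18 is a commonly used middle-grey "key", not a number taken from the demo):

```python
def frame_average(luminances):
    # Stand-in for the GPU pass that downsamples the scene to one pixel.
    return sum(luminances) / len(luminances)

def adapt_exposure(current, target, rate=0.1):
    # Move a fraction of the way toward the target each frame,
    # mimicking the delayed eye/camera response the demo simulates.
    return current + (target - current) * rate

# Hypothetical scene luminances: a few dim pixels plus a bright sky.
avg_lum = frame_average([0.5, 1.0, 2.0, 200.0, 400.0])

# Key the exposure to the average scene luminance.
target_exposure = 0.18 / avg_lum

exposure = 1.0          # previous (indoor) exposure
for _ in range(30):     # 30 frames of gradual adaptation
    exposure = adapt_exposure(exposure, target_exposure)
```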

    Now, with FP16 you don't have to worry about the scaling until the tone-mapping pass. With FP10 or FX16 or another alternative, you use the exposure value from the previous frame to scale the value rendered into the buffer, and avoid scaling during the tone-mapping pass. A one-frame latency for an effect that looks better with a time delay anyway is meaningless.
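    A sketch of that pre-scaling idea for a 16-bit fixed-point (FX16-style) buffer; the normalization and function names are my own for illustration, not from any particular API:

```python
FX16_MAX = 65535  # 16-bit fixed point, normalized to [0, 1]

def encode_fx16(radiance, prev_exposure):
    # Scale by the previous frame's exposure before quantizing, so the
    # visually interesting range lands inside the buffer's [0, 1] window.
    scaled = min(radiance * prev_exposure, 1.0)
    return round(scaled * FX16_MAX)

def decode_fx16(stored, prev_exposure):
    # Undo the pre-scale when the value is read back in a later pass.
    return (stored / FX16_MAX) / prev_exposure
```

With FP16 the same value could be written unscaled and the pre-scale skipped; the scaling is the price of the narrower format, paid with one frame of exposure latency.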

    No hardcoding needed.
     
  2. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
    That's far from universal.

    I don't think I've stated any opinion at all here. I'm saying there's more than one solution to HDR; I see at least five ways of doing it, all with pros and cons.

    The R300 supports FP16 and FP32 already. It just doesn't support blending and filtering with those formats. You can't just ignore what's currently on the market. You have an assload of R300 and NV30 cards out there. I'm a bit rusty on the NV30 capabilities, but IIRC it doesn't support FP blending or filtering either. This is obviously something every developer will have to take into account. I'm not saying FP HDR won't be the way to go in the future, but there's certainly more than one solution, and given the installed user base I'm not surprised to see some developers choose other alternatives for now. It's a bit like using stencil for dynamic branching. A good solution with much wider compatibility, without in any way discrediting ps3.0-style branching.

    Rendering to FP16, then doing filtering in the shader (if filtering is needed at all). Pros: Good quality. Cons: Slow.

    Rendering to FP16, then doing a conversion pass to Int16 for filtering, with range stored in alpha. Pros: Much faster. Cons: Slight quality reduction (usually not really visible though).

    Rendering to Int16 directly. Pros: Even faster. Cons: Lower quality.

    Rendering to RGB10_A2. Pros: Even faster. Cons: More like MDR.

    Rendering to 2x RGBA8 MRTs. Pros: Fast. Supports blending. Cons: More like MDR.
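    Option 2's "range stored in alpha" can be illustrated with an RGBM-style encoding (a guess at the general shape; the actual scheme Humus has in mind may differ, and MAX_RANGE is an assumed constant):

```python
MAX_RANGE = 64.0  # assumed ceiling on encodable brightness; not from the post

def encode_range_in_alpha(r, g, b):
    # Store a per-pixel range (the max component) in alpha, and the
    # color relative to that range, so a fixed-point target can hold it.
    m = max(r, g, b, 1e-6)
    a = min(m / MAX_RANGE, 1.0)
    scale = a * MAX_RANGE
    return (r / scale, g / scale, b / scale, a)

def decode_range_in_alpha(r, g, b, a):
    # Recover the original HDR color from the normalized RGB and the range.
    scale = a * MAX_RANGE
    return (r * scale, g * scale, b * scale)
```

This only sketches the idea of trading absolute range for a per-pixel scale; a real implementation also has to worry about hardware filtering blending texels that carry different alpha (range) values.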


    Edit: WTH is up with Explorer? First the window froze for like 5 minutes before my post actually made it. Then when I tried to edit, the same thing happened again. Even after a reboot. Had to use Firefox.
     
    #42 Humus, Aug 9, 2005
    Last edited by a moderator: Aug 9, 2005
  3. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, the NV3x doesn't support FP blending/filtering, but perhaps more importantly it is much more limited in what it can do. Specifically, its floating-point support was designed for pixel-buffer-only use. That is to say, the only feasible use of floating point on the NV3x is a rendertarget that is read back full-screen in the next pass.

    But that's the hardware support. I don't think DirectX supported NV's limited FP support at all until DX9c, and even then I'm not certain as to how that support is implemented.
     
  5. Apple740

    Newcomer

    Joined:
    Aug 9, 2004
    Messages:
    239
    Likes Received:
    2
    Location:
    Rotterdam - NL
    Why? Most likely R300/Nv30 don't have enough power to render HDR in recent and future games anyway, so why bother?
     
  6. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
    Uhm, I'd consider SCCT a recent game, with some pretty kickass advanced graphics, and it seems to render with HDR at good framerates on R300.
     
  7. DeanoC

    DeanoC Trust me, I'm a renderer person!
    Veteran Subscriber

    Joined:
    Feb 6, 2003
    Messages:
    1,469
    Likes Received:
    185
    Location:
    Viking lands
    It's not a matter of precision, it's the tone-mapping operator. Any global operator will cause massive dynamic range to bleed out of the scene too much; using a global operator, you have to restrict the dynamic range to start with. We found that using a sun brightness of ~2000 caused too many issues with the scene light balancing, so we reduced it to much less.

    Local tonemap operators are currently very expensive, and I know of no game using one...

    Even with FP16 we don't use much of that range except in a few special effects...
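    DeanoC's balancing problem can be shown with a toy global operator (Reinhard-style, x / (1 + x)); all values except the ~2000 sun figure are hypothetical:

```python
def reinhard(v, exposure):
    # Toy global operator: every pixel goes through the same curve.
    s = v * exposure
    return s / (1.0 + s)

# Hypothetical scene values: sun disc, sunlit wall, shadow detail.
sun, wall, shadow = 2000.0, 2.0, 0.05

# Expose for the sun, and everything else collapses toward black.
exposure = 1.0 / 200.0
crushed_wall = reinhard(wall, exposure)    # ~0.01: the wall is nearly gone

# Reduce the sun brightness, as the post describes, and one global
# exposure can keep the sun bright AND the wall readable.
sun, exposure = 20.0, 0.5
bright_sun = reinhard(sun, exposure)       # ~0.91
readable_wall = reinhard(wall, exposure)   # 0.5
```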
     
