You can't make a blanket statement that game X isn't HDR - or is HDR. In some ways the term is meaningless without context - and that context is the individual parts of the game's rendering pipeline.
Problem is, I don't see 'HDR' as being a good term to use here, because different stages and data in the pipeline have different requirements - in terms of both the range the data covers
and the precision of that data. The two have a drastic impact on one another, but must be treated differently.
...
In general, most games share an essentially similar abstract rendering pipeline. How each element is generated, stored and used may differ (deferred, forward, pre-pass, etc) - but at the end of the day very similar data is required to generate the output image.
That data pipeline looks something like this:
So. Lots of little parts that add up to the whole. Let me run through them. Note that most are optional, some can be combined, and some are intentionally left out or simplified for sanity's sake.
Geometry:
Things like normals, depth, etc.
For hopefully obvious reasons this data needs to be as precise as possible; the ranges may be constrained (normals, for example), but position/depth may not be.
Diffuse + Specular colours:
This is the diffuse and specular reflective colour of the surface. This data generally doesn't need to be very precise - and it physically makes no sense for it to go outside the [0,1] range. The underlying quantity is linear, but it is often simply stored as 8bit gamma space - as this is one of the best tradeoffs for size and perceptual quality.
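To make that tradeoff concrete, here's a minimal sketch of what 'stored as 8bit gamma space' means in practice - assuming a plain 2.2 power curve for simplicity (real pipelines often use the exact piecewise sRGB function):

```cpp
#include <cmath>
#include <cstdint>

// Gamma encoding spends more of the 256 codes on dark values, which is
// where our eyes are most sensitive - hence the better perceptual quality.
uint8_t store_gamma8(float linear)            // linear colour in [0,1]
{
    float g = std::pow(linear, 1.0f / 2.2f);  // encode into gamma space
    return (uint8_t)(g * 255.0f + 0.5f);      // quantize to 8 bits
}

float load_linear(uint8_t stored)
{
    return std::pow(stored / 255.0f, 2.2f);   // decode back for lighting math
}
```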
Emissive:
This is light emitted from a surface. This can be from a glowing surface, a lightmap or other baked lighting. Ideally precision should be as high as possible, and the data range is arbitrary (hence RGBM 8bit encoding is a very popular choice in many g-buffers).
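As a rough illustration of the idea (the range constant below is an assumption - engines pick their own maximum), RGBM packs an HDR colour into an ordinary RGBA8 texel by storing a shared brightness multiplier in alpha:

```cpp
#include <algorithm>
#include <cmath>

const float kRGBMRange = 6.0f;  // assumed maximum brightness multiplier

struct RGBA8 { unsigned char r, g, b, a; };

// Encode: divide rgb by the multiplier so it fits in [0,1], and store the
// (normalized) multiplier itself in alpha.
RGBA8 encode_rgbm(float r, float g, float b)
{
    float m = std::max({r, g, b}) / kRGBMRange;
    m = std::clamp(m, 1.0f / 255.0f, 1.0f);
    m = std::ceil(m * 255.0f) / 255.0f;          // round up so rgb stays <= 1
    float s = 1.0f / (m * kRGBMRange);
    return { (unsigned char)(r * s * 255.0f + 0.5f),
             (unsigned char)(g * s * 255.0f + 0.5f),
             (unsigned char)(b * s * 255.0f + 0.5f),
             (unsigned char)(m * 255.0f + 0.5f) };
}

// Decode: rgb * alpha * range recovers the HDR value.
void decode_rgbm(RGBA8 t, float out[3])
{
    float m = (t.a / 255.0f) * kRGBMRange;
    out[0] = (t.r / 255.0f) * m;
    out[1] = (t.g / 255.0f) * m;
    out[2] = (t.b / 255.0f) * m;
}
```

So four 8bit channels buy you a [0, 6] range (or whatever the multiplier is) - high range, moderate precision, as described above.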
Light Accumulation:
Dynamic lighting is accumulated - each light added together. Typically this ignores the surface's diffuse colour, etc.; the only important inputs are position/depth and normals. Output again is ideally as high precision and as large a range as possible - hence this is often done in FP16 in a deferred renderer.
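A minimal sketch of that per-pixel accumulation (the names and the falloff curve are illustrative, not any particular engine's code) - note that only position and normal are read, and the running sum is never clamped:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct PointLight { Vec3 pos; Vec3 colour; float radius; };

// Diffuse-only light accumulation for one pixel of an FP16 target.
Vec3 accumulate_lights(Vec3 p, Vec3 n, const PointLight* lights, int count)
{
    Vec3 sum = { 0, 0, 0 };
    for (int i = 0; i < count; ++i) {
        Vec3 toLight = { lights[i].pos.x - p.x,
                         lights[i].pos.y - p.y,
                         lights[i].pos.z - p.z };
        float d = std::sqrt(dot(toLight, toLight));
        if (d >= lights[i].radius) continue;           // out of range
        Vec3 L = { toLight.x / d, toLight.y / d, toLight.z / d };
        float ndl = std::fmax(dot(n, L), 0.0f);        // Lambert term
        float atten = 1.0f - d / lights[i].radius;     // crude falloff
        sum.x += lights[i].colour.x * ndl * atten;
        sum.y += lights[i].colour.y * ndl * atten;
        sum.z += lights[i].colour.z * ndl * atten;
    }
    return sum;  // may well exceed 1.0 - which is why FP16 matters here
}
```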
Composite:
Here, the results of light accumulation are combined with diffuse, specular, emissive, etc. Once again, FP16 is ideal.
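Per channel this is little more than a multiply-add - a sketch, with all values linear and unclamped:

```cpp
// lightDiffuse/lightSpecular come straight from the FP16 accumulation
// buffer; the result can exceed 1.0, hence compositing into FP16 again.
float composite(float diffuse,  float lightDiffuse,
                float specular, float lightSpecular,
                float emissive)
{
    return diffuse * lightDiffuse + specular * lightSpecular + emissive;
}
```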
Early Post FX:
This can sometimes include things like bloom, dof, warps, etc - things that benefit from high precision and range.
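Bloom is the classic example - a toy bright-pass, run before tone mapping, shows why the range matters (the threshold value is illustrative):

```cpp
// With real HDR input, a threshold above 1.0 isolates genuinely bright
// pixels. In an 8bit [0,1] buffer nothing can sit above 1.0, so the
// effect degrades into blooming anything merely near-white.
float bright_pass(float hdr, float threshold = 1.5f)
{
    return hdr > threshold ? hdr - threshold : 0.0f;
}
```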
Tone Map, Gamma Correction, etc:
The important one. This is where the linear space output of the render (composite, post, etc) is converted into a [0,1] range output ready for display. This will often include gamma correction as a last step - and should include some form of tone mapping. Tone mapping compresses the image into a [0,1] displayable range - this can involve applying an exposure scale (eg eye adaption) and will often compress the whites and blacks to boost displayable range without too much loss of detail (see the Uncharted presentations or
this for examples)
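For a concrete feel, here's one common form of the above - the filmic curve and constants published in the Uncharted 2 presentation, with an illustrative exposure and white point:

```cpp
#include <cmath>

// Hable's filmic curve: compresses the shoulder (whites) and toe (blacks).
float hable(float x)
{
    const float A = 0.15f, B = 0.50f, C = 0.10f,
                D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float tonemap_channel(float linearHdr)
{
    const float exposure   = 2.0f;   // e.g. driven by eye adaption
    const float whitePoint = 11.2f;  // HDR value that maps to pure white
    float mapped = hable(linearHdr * exposure) / hable(whitePoint);
    return std::pow(mapped, 1.0f / 2.2f);  // gamma correction as a last step
}
```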
Late Post + Display:
Some post Fx can more easily be done with a fixed range (eg, the displayable range). This includes most complex forms of colour grading. Some games perform bloom or effects like dof and motion blur here.
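Colour grading is a good example of why: the usual implementation indexes a lookup table with the colour itself, which only works if that colour sits in a known, fixed range. A sketch (nearest-neighbour for brevity - real implementations interpolate; the LUT size is illustrative):

```cpp
const int N = 16;  // LUT resolution along each axis

struct Colour { float r, g, b; };

Colour grade(const Colour lut[N][N][N], Colour in)
{
    // 'in' must already be in [0,1] - an unbounded HDR value would
    // index past the end of the table.
    int r = (int)(in.r * (N - 1) + 0.5f);
    int g = (int)(in.g * (N - 1) + 0.5f);
    int b = (int)(in.b * (N - 1) + 0.5f);
    return lut[r][g][b];
}
```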
Green represents data that is ideally stored in high range - blue is [0,1]. Note that the precision of this data may vary wildly.
Where a game places each effect in the pipeline will vary game by game - it's all based on tradeoffs made for memory, performance and meeting project goals.
So why say all this?
Because I'm trying to show that you cannot easily label a game as HDR.
For example, BF3 and KZ3 share similar g-buffer formats. They store emissive light in RGBM 8bit format. Technically this is pretty high dynamic range with moderate precision. They accumulate dynamic light and composite (to the best of my knowledge) in FP16.
What people are seeing in the KZ3 screenshots is (to my eye) a limitation of the colour grading and some of the extra post FX (I believe KZ3 applies the vignette effect post-tonemap in 8bit). This isn't an indication that the sky was limited to 8bit when it was rendered; it indicates that when the colourization and certain post FX occurred, the data was limited to 8bit (to the best of my knowledge).
Technically, the best solution is a forward renderer that does everything in one go, in 32bpp on the shader hardware.
...
As for the whole linear space / gamma space thing - I think it's important to realise that we don't perceive light in a linear way. Gamma space represents how we perceive light - and is roughly a power of 2.4. What this means is that if we perceive that the brightness of something has doubled, it is actually emitting approximately 5.3x more light. Hence the very obvious realization that when mathematically calculating how light is accumulated, it must be done in linear space - then later converted into gamma space (for human perception). This also explains why it makes more sense to store most 8bit images in gamma space - as you get more perceptual precision.
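For the curious, that 5.3x figure is just 2^2.4 - a two-line check:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // perceived brightness ~ emitted_light^(1/2.4), so doubling the
    // perceived brightness scales the emitted light by 2^2.4
    std::printf("%.2f\n", std::pow(2.0, 2.4));  // prints 5.28
}
```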