Tone mapping operators are not invertible (particularly since they often end up clamping values based on the current exposure), so you can't reconstruct the proper scene luminance. Unfortunately the problem is worst exactly when you need the scene luminance the most: when the exposure is changing quickly.
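To make that concrete, here's a toy sketch (my own names and numbers, assuming a simple exposure-scaled Reinhard curve followed by a clamp): once the clamp fires, every bright input maps to the same output, so the algebraic inverse can't recover the original luminance.

```cpp
#include <algorithm>
#include <cstdio>

// Toy exposure-scaled Reinhard operator (names and constants are mine).
float tonemap(float sceneLum, float exposure) {
    float l = sceneLum * exposure;
    l = l / (1.0f + l);            // Reinhard: maps [0, inf) onto [0, 1)
    return std::min(l, 0.99f);     // stand-in for the clamp/quantization an
                                   // 8-bit render target imposes
}

// Algebraic inverse of the curve above; it breaks down wherever the clamp
// fired, since all of those inputs produced the same output.
float inverseTonemap(float displayLum, float exposure) {
    return (displayLum / (1.0f - displayLum)) / exposure;
}

int main() {
    const float exposure = 0.5f;
    const float lums[] = {0.5f, 2.0f, 500.0f, 5000.0f};
    for (float lum : lums) {
        float recovered = inverseTonemap(tonemap(lum, exposure), exposure);
        std::printf("scene %8.1f -> recovered %8.1f\n", lum, recovered);
    }
    // 0.5 and 2.0 round-trip fine; 500 and 5000 both come back as ~198,
    // so the "reconstructed" luminance is useless right where it matters.
}
```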
TBH it's a stretch to call what HL2 does "true HDR". It's more of a hybrid LDR/HDR implementation (even their assets are hybrid) that captures "some" of the effects of fully HDR rendering. I can understand why they chose to do what they did given their target hardware, but it's undesirable moving forward.
To reply to an earlier post, I'm not entirely convinced that texture filtering makes any assumptions about the dynamic range of the underlying data (the sampling arguments are about frequency content, not amplitude). It *does* assume that the functions consuming the data (usually the BRDF, etc.) don't change its frequency content, but reasonable tone mapping shouldn't do that (it certainly shouldn't introduce higher frequencies!).
Thus I'm not totally convinced by the earlier comment that texture filtering is improper with HDR rendering. With edge AA it's a bit clearer, since you're talking about super-sampling a non-band-limited (effectively infinite-frequency) signal, but even then it's not clear-cut how AA should interact with tone mapping, even in the offline rendering world; a quick sketch of the ordering question follows.
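What is clear is that the two orderings disagree, because a nonlinear tone map doesn't commute with the linear averaging that both texture filtering and an AA resolve perform. A toy illustration (my own example using plain Reinhard, not anyone's engine code); note it only shows that the orders differ, not which one is "right":

```cpp
#include <cstdio>

float reinhard(float l) { return l / (1.0f + l); }

int main() {
    // Two HDR samples straddling an edge: dim geometry next to a bright light.
    float a = 0.1f, b = 100.0f;

    // Resolve (average) in HDR, then tone map.
    float resolveThenMap = reinhard(0.5f * (a + b));

    // Tone map each sample, then resolve -- what an LDR pipeline does.
    float mapThenResolve = 0.5f * (reinhard(a) + reinhard(b));

    std::printf("%f vs %f\n", resolveThenMap, mapThenResolve);
    // ~0.980 vs ~0.541: the edge pixel comes out drastically different,
    // which is why filtering/AA before vs. after tone mapping is a real
    // question and not just a hardware-ordering detail.
}
```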
I'd appreciate any references to material covering these things formally... my brief search hasn't turned up many relevant results.