This info is all here already, but I thought I'd summarize it, as there appears to be some confusion...
There are basically 3 types of video surfaces (actually there are more than 3 but for this discussion 3 will do).
1) Texture
Can be read by the texture units. Includes compressed formats (DXT), float formats (FP32, FP16, etc.) and integer formats (ARGB8, ARGB16, etc.). Textures are often also 'swizzled' (stored in a tiled rather than linear layout) to speed up bilinear reads.
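Swizzling usually means something like a Morton (Z-order) layout, where the bits of the x and y texel coordinates are interleaved. A minimal sketch of the index computation, just for illustration (real layouts vary per GPU):

```python
def morton_index(x: int, y: int) -> int:
    """Interleave the bits of x and y (Morton / Z-order).

    Texels that are close together in 2D end up close together in
    memory, so the 2x2 quad a bilinear fetch needs usually sits in
    one cache line -- which is why swizzled layouts speed up reads.
    """
    index = 0
    for bit in range(16):  # enough for textures up to 65536 texels wide
        index |= ((x >> bit) & 1) << (2 * bit)
        index |= ((y >> bit) & 1) << (2 * bit + 1)
    return index
```

Walking x then y, you get the characteristic Z pattern: (0,0), (1,0), (0,1), (1,1) map to indices 0, 1, 2, 3.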
2) Render targets
Can be rendered to. Formats are things like integer (ARGB8, RGB10A2) and float (FP16, FP32). A subset of texture formats (e.g. you can't render to DXT). Some render targets can have MSAA applied, but not all (e.g. ARGB8 can, FP16 can't).
3) Scan-out
Can be displayed on a monitor; a further subset of render targets. Usually controlled by the DAC, which is currently a maximum of 10-bit integer, so the two common formats are ARGB8 and RGB10A2. No card can scan out a float render target (actually, IIRC, 3DLabs' top-end card can...).
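The subset relationships above can be sketched with some hypothetical format sets (just for illustration; the real capabilities vary per card and have to be queried through the API's caps checks):

```python
# Hypothetical per-card format lists, purely illustrative.
TEXTURE_FORMATS = {"DXT1", "DXT5", "ARGB8", "ARGB16", "RGB10A2", "FP16", "FP32"}
RENDER_TARGET_FORMATS = {"ARGB8", "RGB10A2", "FP16", "FP32"}  # no DXT
SCANOUT_FORMATS = {"ARGB8", "RGB10A2"}  # DAC limited to <= 10-bit integer

# Each class is a strict subset of the one above it.
assert SCANOUT_FORMATS < RENDER_TARGET_FORMATS < TEXTURE_FORMATS
```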
If you want to use the same surface in multiple places (texture and render target), you have to meet the limitations of both, AND often a few more. E.g. textures can't read from MSAA render targets, so if you want to texture from something that was rendered with MSAA you have to lose the per-sample information (via a copy blit, a.k.a. a resolve).
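That resolve blit is the lossy step. A minimal sketch of what it does, with each pixel held as a list of sub-samples:

```python
def resolve(msaa_pixels):
    """Resolve blit: average each pixel's sub-samples down to one
    colour. After this the per-sample edge coverage is gone, which
    is why you can't get it back when you later texture from the
    resolved surface."""
    return [sum(samples) / len(samples) for samples in msaa_pixels]

# A 4x-MSAA edge pixel half covered by a bright triangle over black:
edge_pixel = [1.0, 1.0, 0.0, 0.0]
resolved = resolve([edge_pixel])  # -> [0.5], the anti-aliased grey
```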
A conventional HDR renderer will render the scene into an FP16 render target, with no MSAA, in linear colour space. You then use that as a texture and run a tone-mapper on it, rendering into an ARGB8 surface with gamma on (converting linear to gamma 2.2), which can then be displayed to the user. There is nowhere to get MSAA: the FP16 target can't do MSAA, and the tone-mapping pass (which could have MSAA on, since it renders to ARGB8) has no edge information left to anti-alias.
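The FP16 -> tone-map -> ARGB8 pass can be sketched per channel like this (the Reinhard-style curve here is my hypothetical stand-in for "a tone-mapper"; the gamma 2.2 encode is the part the hardware does for you when gamma is on):

```python
def tonemap(x: float) -> float:
    # Hypothetical Reinhard-style operator: compresses [0, inf) into [0, 1).
    return x / (1.0 + x)

def linear_to_gamma(x: float) -> float:
    # Convert linear light to gamma 2.2 for the ARGB8 display surface.
    return x ** (1.0 / 2.2)

def encode_ldr(hdr: float) -> int:
    """One channel of the pipeline: linear FP16 HDR value in,
    tone-mapped, gamma-encoded 8-bit channel value out."""
    return round(linear_to_gamma(tonemap(hdr)) * 255)
```

However bright the input (hdr = 100.0, say), the output stays a displayable 0-255 value; that compression is the whole point of the tone-mapping pass.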