When I make photographs, I "fake" exposures all the time. My camera can't record more than about a 2^7:1 to 2^9:1 intensity ratio. As a result, during development I use all sorts of tricks to fit impossible exposures onto film (dodging and burning, 'pushing' the film, etc.), or, with a digital camera, I take multiple exposures and combine them in Photoshop. For example, take a look at the "unreal" shots here
http://www.tawbaware.com/maxlyons/cgi-bin/image.pl?gallery=1 (not my work)
Of course, the final image will always have a dynamic range of about 2^8:1, and not just because of monitor limitations; the same goes for film. But that doesn't mean artists won't work with intermediate formats that have a higher dynamic range before the final result, since it just makes things easier.
That's the essence. HDR makes it easier to preserve contrast information, so the final shot, however unreal, has more detail and less artifacting. You could do HDR techniques on a normal RGBA framebuffer if you wanted, with workarounds: use the alpha channel or MRT to store a shared exponent, or use scaling operations or gamma to map 8 bits onto a bigger range. But a real HDR buffer makes development easier, and the final result will have fewer problems. Whether you're doing the photoreal or the photo-unreal, I still think having an HDR buffer is better.
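To make the alpha-as-exponent workaround concrete, here's a minimal sketch in C of the classic RGBE encoding from Greg Ward's Radiance format: pack a linear float RGB color into four 8-bit channels, with a shared exponent in the fourth (alpha) channel. The function names and the round-trip demo are just illustrative, not from any particular engine.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Pack a non-negative linear float RGB color into 8-bit RGBE:
       three mantissa channels plus a shared exponent in the fourth
       (alpha) channel. This is the Radiance / Greg Ward scheme. */
    static void rgb_to_rgbe(const float rgb[3], uint8_t rgbe[4])
    {
        float max = rgb[0];
        if (rgb[1] > max) max = rgb[1];
        if (rgb[2] > max) max = rgb[2];

        if (max < 1e-32f) {                 /* effectively black */
            rgbe[0] = rgbe[1] = rgbe[2] = rgbe[3] = 0;
            return;
        }

        int e;
        float scale = frexpf(max, &e) * 256.0f / max;  /* = 2^(8-e) */
        rgbe[0] = (uint8_t)(rgb[0] * scale);
        rgbe[1] = (uint8_t)(rgb[1] * scale);
        rgbe[2] = (uint8_t)(rgb[2] * scale);
        rgbe[3] = (uint8_t)(e + 128);       /* biased shared exponent */
    }

    /* Unpack RGBE back to linear float RGB. */
    static void rgbe_to_rgb(const uint8_t rgbe[4], float rgb[3])
    {
        if (rgbe[3] == 0) {
            rgb[0] = rgb[1] = rgb[2] = 0.0f;
            return;
        }
        float scale = ldexpf(1.0f, (int)rgbe[3] - (128 + 8));
        rgb[0] = (rgbe[0] + 0.5f) * scale;
        rgb[1] = (rgbe[1] + 0.5f) * scale;
        rgb[2] = (rgbe[2] + 0.5f) * scale;
    }

    int main(void)
    {
        float hdr[3] = { 12.0f, 3.5f, 0.25f };  /* well outside [0,1] */
        uint8_t packed[4];
        float back[3];

        rgb_to_rgbe(hdr, packed);
        rgbe_to_rgb(packed, back);
        printf("%.4f %.4f %.4f -> %.4f %.4f %.4f\n",
               hdr[0], hdr[1], hdr[2], back[0], back[1], back[2]);
        return 0;
    }

The price is obvious, though: all three channels share one exponent, so a pixel with one very bright channel loses precision in the dim ones, and you can't meaningfully alpha-blend into a buffer encoded this way. That's exactly the kind of friction a native float buffer removes.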