Mintmaster said:
FP16 filtering, while nice, is not very necessary at all for HDR in games. There are great workarounds with Int16 (see Humus's site), and even better is that these workarounds save speed on all hardware, including NVidia's. Besides, ATI can do pixel shaded filtering if need be at a speed cost, even making it transparent to the developer.
While I generally agree with what you're trying to say, I don't agree that it "is not very necessary at all". With filtering, HDR render targets allow faster light blooms. And yes, INT16 formats can do the job, but not in all situations unless you slave away at the fragment program; with more and longer shaders, I think the goal should be to get away from that kind of low-level optimisation. And the "16 bits is enough" argument can be risky, as has been proven in the past. We're not talking about a feature no IHV has and we wish they did. Is this a huge problem? No, but I think filtering is needed, and in this generation.
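To make the INT16 workaround concrete, here's a minimal sketch in C of the kind of fixed-point encoding it relies on: HDR intensities are squeezed into a 16-bit unsigned integer channel (which older hardware can filter) by scaling against a chosen maximum range. The `HDR_RANGE` value and the function names are my own assumptions for illustration, not anything from Humus's actual demos.

```c
#include <stdint.h>

/* Assumed maximum representable intensity; a real implementation
   picks this to match the scene's dynamic range. */
#define HDR_RANGE 64.0f

/* Encode a linear HDR value into a filterable 16-bit fixed-point channel. */
static uint16_t hdr_encode(float v) {
    if (v < 0.0f) v = 0.0f;
    if (v > HDR_RANGE) v = HDR_RANGE;
    return (uint16_t)(v / HDR_RANGE * 65535.0f + 0.5f);
}

/* Decode back to a linear float in the fragment program. */
static float hdr_decode(uint16_t s) {
    return (float)s / 65535.0f * HDR_RANGE;
}
```

The catch, as above, is exactly the "16 bits is enough" gamble: with a fixed scale you trade range against precision, and the decode step is extra shader work on every sample.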
Anyway, similarly, nVidia's vertex texture look-up also requires doing the filtering in the vertex program (not really a big deal), but the need to roll your own LoD there is, I think, a significant hindrance as the instructions begin to pile up; that, and the inherent latency, of course. Before making up my mind about R2VB, what I'd really like to see is a test showing its performance penalty.