They're published on NVIDIA's site:
http://developer.nvidia.com/object/nvidia_opengl_specs.html
Some highlights:
1. EXT_framebuffer_sRGB: sRGB color space framebuffer support. This appears to be a fully gamma-correct framebuffer (the sRGB transfer curve approximates gamma 2.2); a sketch of enabling it follows the list.
2. EXT_packed_float: a 32-bit pixel holding three unsigned floating-point values, with a 5-bit exponent per channel and 6, 6, and 5 mantissa bits for R, G, and B, respectively. There is no sign bit. (A decoding sketch follows the list.)
3. EXT_texture_compression_latc and EXT_texture_compression_rgtc: new texture compression formats for one- and two-component textures (luminance/luminance-alpha and red/red-green, respectively); an upload sketch follows the list.
4. EXT_texture_shared_exponent: another 32-bit floating-point format, with a 5-bit exponent shared by all three channels and 9 mantissa bits each for R, G, and B. There is no sign bit. (A decoding sketch follows the list.)
5. NV_depth_buffer_float: support for 32-bit floating-point depth buffers whose values aren't necessarily clamped to [0,1], plus a 64-bit packed depth-stencil format (32-bit floating-point depth, 8-bit stencil, 24 bits unused). A usage sketch follows the list.
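
For EXT_framebuffer_sRGB, a minimal sketch of turning the feature on, assuming a current GL context whose driver exports the extension (the token value is from the extension spec):

```c
#include <GL/gl.h>

/* Token from the EXT_framebuffer_sRGB spec. */
#define GL_FRAMEBUFFER_SRGB_EXT 0x8DB9

/* With this enabled, linear shader outputs are encoded through the
 * sRGB curve (~gamma 2.2) on write, and blending happens in linear
 * space -- that's what makes the framebuffer gamma-correct. */
void enable_srgb_framebuffer(void)
{
    glEnable(GL_FRAMEBUFFER_SRGB_EXT);
}
```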
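For EXT_packed_float, an illustrative decoder for the layout described in item 2; the helper names are ours, not part of the extension, and the bit positions (R in the low bits) follow the spec's UNSIGNED_INT_10F_11F_11F_REV packing:

```c
#include <math.h>
#include <stdint.h>

/* Decode one unsigned-float channel: 5 exponent bits (bias 15),
 * `mbits` mantissa bits, no sign bit. */
static float unpack_unsigned_float(uint32_t v, int mbits)
{
    uint32_t e = v >> mbits;
    uint32_t m = v & ((1u << mbits) - 1u);

    if (e == 0)                   /* denormal: 2^-14 * m / 2^mbits   */
        return ldexpf((float)m, -14 - mbits);
    if (e == 31)                  /* all-ones exponent: Inf or NaN   */
        return m ? NAN : INFINITY;
    /* normal: 2^(e-15) * (1 + m / 2^mbits) */
    return ldexpf(1.0f + (float)m / (float)(1u << mbits), (int)e - 15);
}

/* One 32-bit pixel: R in bits 0..10, G in 11..21, B in 22..31. */
static void unpack_r11f_g11f_b10f(uint32_t p, float rgb[3])
{
    rgb[0] = unpack_unsigned_float( p        & 0x7FFu, 6); /* 5e + 6m */
    rgb[1] = unpack_unsigned_float((p >> 11) & 0x7FFu, 6); /* 5e + 6m */
    rgb[2] = unpack_unsigned_float( p >> 22,           5); /* 5e + 5m */
}
```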
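For the compression extensions, a sketch of uploading a two-component image as LATC2, letting the driver compress on upload; the assumption is a tightly packed luminance-alpha source image (two bytes per texel), and the token value is from the EXT_texture_compression_latc spec:

```c
#include <GL/gl.h>

/* Token from the EXT_texture_compression_latc spec. */
#define GL_COMPRESSED_LUMINANCE_ALPHA_LATC2_EXT 0x8C72

/* Hand the driver an uncompressed luminance-alpha image and request
 * the two-component LATC2 compressed internal format. */
void upload_latc2(GLsizei w, GLsizei h, const GLubyte *pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_COMPRESSED_LUMINANCE_ALPHA_LATC2_EXT,
                 w, h, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, pixels);
}
```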
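For EXT_texture_shared_exponent, an illustrative decoder for the shared-exponent layout from item 4 (9-bit mantissas for R, G, B in the low 27 bits, the shared 5-bit exponent on top; bias 15, no implied leading 1); the helper name is ours:

```c
#include <math.h>
#include <stdint.h>

/* Decode one RGB9E5 pixel: every channel is mantissa * 2^(E - 15 - 9),
 * i.e. all three share the same scale factor. */
static void unpack_rgb9e5(uint32_t p, float rgb[3])
{
    int   exponent = (int)(p >> 27);                  /* shared 5-bit exponent */
    float scale    = ldexpf(1.0f, exponent - 15 - 9); /* 2^(E - bias - 9)      */

    rgb[0] = (float)( p        & 0x1FFu) * scale;     /* 9-bit R mantissa */
    rgb[1] = (float)((p >> 9)  & 0x1FFu) * scale;     /* 9-bit G mantissa */
    rgb[2] = (float)((p >> 18) & 0x1FFu) * scale;     /* 9-bit B mantissa */
}
```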
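For NV_depth_buffer_float, a sketch of the unclamped depth range; the entry point and its signature are from the spec, and fetching the function pointer (via wglGetProcAddress / glXGetProcAddress) is left to the caller:

```c
#include <GL/gl.h>

#ifndef APIENTRY
#define APIENTRY
#endif

/* Entry point from the NV_depth_buffer_float spec. */
typedef void (APIENTRY *PFNGLDEPTHRANGEDNVPROC)(GLdouble zNear, GLdouble zFar);

/* Unlike glDepthRange, glDepthRangedNV does not clamp its arguments
 * to [0,1], so depth values can leave the unit range when rendering
 * to a floating-point depth buffer. */
void set_unclamped_depth_range(PFNGLDEPTHRANGEDNVPROC depthRangedNV)
{
    depthRangedNV(-1.0, 2.0);
}
```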