What is the future of graphics?

BrandonFurtwangler said:
(joking of course) Since DX10 is already old news, what's the speculation on DX11?

Seriously though, where do you think computer graphics will be going in the next 5 years? Can we keep scaling up the current model? Will we see languages that blur the line between CPU and GPU? Will the GPU ever really do non-graphics processing on a large scale? (physics?) Any promising new techniques just waiting for certain hardware features?

Let's hear some ideas! Bring on the speculation.
PER-PIXEL BUMP MAPPING!!!!


......I'm just kidding, I don't know what the hell I'm talking about.
 
Xmas said:
No you can't.
Code:
These formats are the only valid formats for a back buffer or a display.

Format      Back buffer      Display 
A2R10G10B10    x                x (full-screen mode only) 
A8R8G8B8       x  
X8R8G8B8       x                x 
A1R5G5B5       x  
X1R5G5B5       x                x 
R5G6B5         x                x
Wait, so what is Far Cry using for HDR that ATI can AA it? I thought it was using an FP16 back buffer. Or is it tone-mapping from an FP16 something into an A/X8R8G8B8 back buffer and then a similar front/display buffer?

I'm mixing things up, aren't I....
 
Pete said:
Wait, so what is Far Cry using for HDR that ATI can AA it? I thought it was using an FP16 back buffer. Or is it tone-mapping from an FP16 something into an A/X8R8G8B8 back buffer and then a similar front/display buffer?

I'm mixing things up, aren't I....
It renders everything to an FP16 render target, then draws a full-screen quad onto the XRGB8888 back buffer with a shader that does a texture fetch from the FP16 target, performs tonemapping and writes the result to the 8-bit-per-component target.
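The exact operator Far Cry's shader uses isn't spelled out here; as a hedged sketch, a common Reinhard-style mapping from linear FP16 values down to an 8-bit target looks like this:

```python
# Illustrative only: the game's real tonemap operator is unknown, so this
# uses the common Reinhard curve x / (1 + x) as a stand-in.
def reinhard_tonemap(hdr):
    """Map an unbounded linear HDR value into [0, 1)."""
    return hdr / (1.0 + hdr)

def to_8bit(hdr):
    """Quantize the tonemapped value for an 8-bit-per-component back buffer."""
    return round(reinhard_tonemap(hdr) * 255)

print(to_8bit(3.0))   # 191: an HDR value of 3.0 lands at ~75% of full scale
print(to_8bit(0.25))  # 51: low values pass through almost linearly
```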
 
Now I have to reread those Source HDR articles. I thought they were doing the same.

So, is ATI AAing the (tone-mapped) back-buffer or the FP RT?

Does tone-mapping necessitate rendering to something other than the back-buffer first, or does the lack of FP16 back buffers simply mean tone-mapping makes more sense when writing the RT to the back buffer? If NV does end up moving the ROPs into the pixel shaders, would this speed up Far Cry's FP RT -> back buffer process by reducing the # of steps required?

tEd says R5x0 supports FP framebuffers but D3D doesn't. Can ATI implement them in OGL? Not asking why they don't (b/c I know the answer will be "effort not justifiable"), just if they can.
 
I think dynamic lighting models are going to dominate the next few years - a combination of HDR and true dynamic lighting with soft shadows. Hopefully we will see the kind of lighting and shadows currently done with static shadow maps/radiosity, but in real time.
 
Pete said:
So, is ATI AAing the (tone-mapped) back-buffer or the FP RT?

Does tone-mapping necessitate rendering to something other than the back-buffer first, or does the lack of FP16 back buffers simply mean tone-mapping makes more sense when writing the RT to the back buffer? If NV does end up moving the ROPs into the pixel shaders, would this speed up Far Cry's FP RT -> back buffer process by reducing the # of steps required?

tEd says R5x0 supports FP framebuffers but D3D doesn't. Can ATI implement them in OGL? Not asking why they don't (b/c I know the answer will be "effort not justifiable"), just if they can.

Humus said it's possible that they'll expose it in OpenGL via an extension at some point.
 
Pete said:
So, is ATI AAing the (tone-mapped) back-buffer or the FP RT?
They're using AA on the FP RT, then downsample it and use it as a texture (there is no way to use a multisampled RT directly as a texture on current hardware; this will change with D3D10). This is slightly problematic because the tonemapping should conceptually be applied before the downsampling.

The back buffer itself is a simple non-multisampled, color-only buffer since the downsampling has already taken place.
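The "slightly problematic" part can be put in numbers (assuming a Reinhard-style operator purely for illustration): averaging the HDR samples first and then tonemapping the result differs from tonemapping each sample before averaging, which is what hurts AA quality on bright edges.

```python
# Downsample-then-tonemap (what the hardware resolve forces) vs. the
# conceptually correct tonemap-then-downsample, on one 4x AA pixel that
# straddles a very bright polygon edge. Reinhard curve as a stand-in operator.
def tonemap(x):
    return x / (1.0 + x)

samples = [0.1, 0.1, 0.1, 8.0]  # three dim samples, one very bright one

resolve_first = tonemap(sum(samples) / len(samples))
tonemap_first = sum(tonemap(s) for s in samples) / len(samples)

print(round(resolve_first, 3))  # 0.675 - the edge comes out much brighter
print(round(tonemap_first, 3))  # 0.29  - the conceptually correct result
```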

Does tone-mapping necessitate rendering to something other than the back-buffer first, or does the lack of FP16 back buffers simply mean tone-mapping makes more sense when writing the RT to the back buffer? If NV does end up moving the ROPs into the pixel shaders, would this speed up Far Cry's FP RT -> back buffer process by reducing the # of steps required?
Tone mapping in the HDR context is the process of mapping a non-displayable range of color values to a displayable range. There are basically three points in the pipeline where you could apply tonemapping:
- at the end of the shader, writing to a LDR back buffer/RT. This has to be performed per rendered pixel (including overdraw), but saves framebuffer bandwidth. The limitation is that transparency looks odd because you're blending pixels in tonemapped color space rather than in linear color space.
- at the end of a frame, reading an FP16 RT and outputting it to the LDR back buffer. This is performed per pixel per frame and needs an HDR RT and LDR back and front buffers.
- on output, displaying an FP16 front buffer directly. This could be implemented with the color LUT already used for gamma correction, extended to 16 bits. This is performed once per pixel per screen refresh and needs HDR front and back buffers.

The latter two are very similar to AA downsampling at the end of a frame vs. downsampling on scanout. For double buffering and FP16, they need the same amount of memory, and bandwidth requirements depend on the fps:refresh ratio.
End-of-frame tonemapping, however, is more flexible because it can use shaders instead of either a small LUT that does piecewise linear interpolation or some fixed-function tonemapping.
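The transparency limitation of the first option can also be demonstrated numerically (again assuming a Reinhard-style operator just for illustration): alpha-blending two already-tonemapped values is not the same as tonemapping the blend of the linear values.

```python
# In-shader tonemapping writes LDR values, so the ROP alpha-blends in
# tonemapped space; correct blending would happen in linear space first.
# Reinhard curve as a stand-in operator.
def tonemap(x):
    return x / (1.0 + x)

src, dst, alpha = 4.0, 0.5, 0.5  # bright transparent surface over dim background

blend_linear = tonemap(alpha * src + (1 - alpha) * dst)               # correct
blend_tonemapped = alpha * tonemap(src) + (1 - alpha) * tonemap(dst)  # what the LDR buffer does

print(round(blend_linear, 3))      # 0.692
print(round(blend_tonemapped, 3))  # 0.567 - the transparency looks too dim
```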

Moving the ROPs into the shaders changes nothing; it's a die-space optimization that replaces fixed-function hardware with programmable hardware that is already present.
 
Well, you've given me something to chew on. I'll put off installing DoD:S until I've understood all that.
 