What is the future of graphics?

(joking of course) Since DX10 is already old news, what's the speculation on DX11?

Seriously though, where do you think computer graphics will be going in the next 5 years? Can we keep scaling up the current model? Will we see languages that blur the line between CPU and GPU? Will the GPU ever really do non-graphics processing on a large scale? (physics?) Any promising new techniques just waiting for certain hardware features?

Let's hear some ideas! Bring on the speculation.
 
BrandonFurtwangler said:
(joking of course) Since DX10 is already old news, what's the speculation on DX11?

Seriously though, where do you think computer graphics will be going in the next 5 years? Can we keep scaling up the current model? Will we see languages that blur the line between CPU and GPU? Will the GPU ever really do non-graphics processing on a large scale? (physics?) Any promising new techniques just waiting for certain hardware features?

Let's hear some ideas! Bring on the speculation.

Microsoft is creating a physics API for DirectX.
 
More high-level functions in hardware, total virtualization of all resources, cleaner APIs, drivers out of kernel mode... and cheaper ICs :p
 
BrandonFurtwangler said:
Is that a prediction, or did you hear that somewhere?

They were hiring people a while back for a project to build a physics engine on a VPU, which hints strongly in that direction. What else could it be?
 
rwolf said:
Microsoft is creating a physics API for DirectX.

But that doesn't mean the graphics card would have anything to do with it; after all, there's a lot in DX that has absolutely nothing to do with the graphics card you have.
 
Interesting you should mention SA, as he was dead-on last time about many features that would be added, but he was way too optimistic in his timeline...
 
- Much more shader power
- Higher-order surfaces for real
- Improved indirect lighting / ambient occlusion algorithms to simulate global illumination
- HDR monitors and framebuffers
 
Something to auto-generate geometry/objects (TruForm2) - content creation is hard, and once a good solution is found and implemented in software it moves on to hardware, because it is a very general problem, I think.
 
bdmosky said:
Interesting you should mention SA, as he was dead-on last time about many features that would be added, but he was way too optimistic in his timeline...

If IHVs hadn't screamed that it wouldn't all fit in hardware, D3D10 might have contained programmable on-chip adaptive tessellation.
 
I've not heard much from the DX team w.r.t. DirectPhysics - but they were definitely recruiting developers for the project. I'd forgotten about it actually - might take this as a reminder to go ask some people some questions ;)

I figured it was much more an XNA-style component than anything directly related to GPGPU-style work. As it currently stands we have software-only physics solutions (which are going to be changing a fair bit as multi-core/multi-CPU gets bigger), dedicated physics hardware (Ageia's PhysX adapter?) and a secondary use of regular GPUs as physics accelerators (like ATI were on about). That sort of diversity strikes me as a nice opportunity for an abstraction API :cool:
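
To be clear this is pure speculation on my part - a hypothetical sketch of what such an abstraction layer might look like, with every interface and enum name here invented for illustration:
Code:
// Hypothetical physics abstraction - all names below are made up for this sketch.
struct RigidBodyDesc { float mass; float position[3]; float velocity[3]; };

class IPhysicsDevice
{
public:
    virtual ~IPhysicsDevice() {}
    virtual int  CreateRigidBody(const RigidBodyDesc& desc) = 0;   // returns a body handle
    virtual void Simulate(float deltaSeconds) = 0;                 // step the whole scene
    virtual void GetTransform(int body, float outMatrix[16]) = 0;  // read back the result
};

// The three back-ends mentioned above could all sit behind one interface:
// a software/multi-core solver, Ageia-style dedicated hardware, or a GPU-assisted one.
enum PhysicsBackend { PHYSICS_SOFTWARE, PHYSICS_PPU, PHYSICS_GPU };
IPhysicsDevice* CreatePhysicsDevice(PhysicsBackend backend);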

As far as the adaptive tessellation/amplification stuff in D3D10 goes - I keep hearing noises that, at least on the early hardware, heavy amplification isn't going to be the best use of it. The GS allows for 1024 outputs iirc, whereas the word on the grapevine is that more than a "few dozen" outputs isn't going to be very fast. Instead you've got things like object-space motion blurring, improved rendering flow (single-pass cubemaps) and types of ray-tracing effects.

hth
Jack
 
If you take a look at offline rendering, you can see it has already gone the way real-time rendering is probably going:

  • In early times (read: 1980) there wasn't enough network bandwidth/storage for high-res textures, so shaders were used instead - the same thing is happening for graphics cards.
  • Geometry becomes more expensive, so higher-order primitives are needed - this seems to be coming with DX10 or later.
  • More sophisticated scene management - things like conditional rendering and such (something like RenderMan's Delayed Read Archive in real time, although that probably falls into the hard/impossible-to-implement-on-a-graphics-card category); see the sketch after this list.
  • Eventually ray tracing, when the memory/time to create reflection maps and shadow buffers becomes too high - I think this is unlikely to happen anytime soon for graphics cards. I'd love to see ray-tracing capabilities in the shading languages though (but before that happens, I want to see light shaders, i.e. shaders evaluating other shaders).
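
A rough D3D9-flavoured sketch of the kind of CPU-side conditional rendering I mean: draw a cheap bounding proxy with an occlusion query, then only submit the real object if any pixels passed. The query calls are standard D3D9; the two Draw* helpers and the device pointer are hypothetical stand-ins.
Code:
// Rough conditional-rendering sketch using a D3D9 occlusion query.
IDirect3DQuery9* query = NULL;
device->CreateQuery(D3DQUERYTYPE_OCCLUSION, &query);

query->Issue(D3DISSUE_BEGIN);
DrawBoundingProxy();                    // hypothetical: cheap bounding-box stand-in
query->Issue(D3DISSUE_END);

DWORD visiblePixels = 0;
while (query->GetData(&visiblePixels, sizeof(visiblePixels),
                      D3DGETDATA_FLUSH) == S_FALSE)
{
    // spinning here is just for the sketch; a real engine would do other work
}

if (visiblePixels > 0)
    DrawRealObject();                   // hypothetical: the expensive geometry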

I doubt much non-GPU-related work will be pushed onto the GPU; other solutions will come, like CPUs with dedicated floating-point cores (Cell-like stuff).
 
JF_Aidan_Pryde said:
- HDR monitors and framebuffers

i think theoretically we have hdr framebuffers already

i heard (from Humus) the X1x00 can display fp16 but it's not supported in the api.
 
tEd said:
i think theoretically we have hdr framebuffers already

i heard (from Humus) the X1x00 can display fp16 but it's not supported in the api.
AFAIK, can't you just set D3DFMT_A16B16G16R16F as the back buffer format in the D3DPRESENT_PARAMETERS and get the infamous HDR+AA?
 
Cypher said:
AFAIK, can't you just set D3DFMT_A16B16G16R16F as the back buffer format in the D3DPRESENT_PARAMETERS and get the infamous HDR+AA?
No you can't.
Code:
These formats are the only valid formats for a back buffer or a display.

Format         Back buffer   Display
A2R10G10B10    x             x (full-screen mode only)
A8R8G8B8       x
X8R8G8B8       x             x
A1R5G5B5       x
X1R5G5B5       x             x
R5G6B5         x             x
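
For what it's worth, here's a minimal D3D9 sketch of how you'd ask the runtime and then do the usual workaround (render HDR into an FP16 render target and tone-map into a plain back buffer). The display format, resolution and the already-created `device` pointer are placeholders, not anything specific to a real app:
Code:
// Ask D3D9 whether an FP16 back buffer is supported at all.
// Per the table above, on current hardware/runtimes this comes back D3DERR_NOTAVAILABLE.
IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
HRESULT hr = d3d->CheckDeviceType(D3DADAPTER_DEFAULT,
                                  D3DDEVTYPE_HAL,
                                  D3DFMT_X8R8G8B8,       // current display format (placeholder)
                                  D3DFMT_A16B16G16R16F,  // desired FP16 back buffer format
                                  FALSE);                // full-screen

// The usual approach instead: render HDR into an FP16 texture render target,
// then tone-map into an ordinary A8R8G8B8 back buffer for presentation.
IDirect3DTexture9* hdrTarget = NULL;
device->CreateTexture(1280, 720, 1,                      // width/height are placeholders
                      D3DUSAGE_RENDERTARGET,
                      D3DFMT_A16B16G16R16F,
                      D3DPOOL_DEFAULT,
                      &hdrTarget, NULL);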
 