What's the next big thing in realtime graphics?

Fred

Newcomer
When this question was asked a few years ago, SA gave a post that more or less hit the nail on the head.

He correctly pinpointed the rage over shadow implementations, the bandwidth saving mechanisms that would become standard fare, as well as the eventual unification of pixel and vertex shaders with standardized instructions.

Well, same question.

Are we going to see merely incremental upgrades to a static feature set (like some developers claim), or is there still room for radically new and different algorithms, lighting solutions, etc.? What about *real* displacement maps... feasible or not?

Etc
 
The next big step visually (regarding the quality of what we actually see, not just an engine that is hailed as advanced despite not offering high-quality visuals... cough, Doom 3, cough...) will be when developers actually start utilizing vertex shaders to process some actual geometry, rather than just taxing our hardware by letting us play at super high res with max AA and AF.
 
Fred said:
Are we going to see merely incremental upgrades to a static feature set (like some developers claim), or is there still room for radically new and different algorithms, lighting solutions, etc.? What about *real* displacement maps... feasible or not?
As far as software goes, the next major leap will be to have shaders that correctly model the light interaction with a variety of surfaces. In recent years, the application of shaders has often seemed little more than a gimmick, with developers making far too many things look shiny, or has been limited by support of low-end hardware, making everything look plastic.

In the coming years, we'll see developers really strive to make the shaders they apply to various surfaces actually appropriate to those surfaces.

As far as the hardware goes, the next big change will really be a programmable primitive processor. This is something we've been waiting on for a long time: the ability to create vertices in shaders has been denied us since the inception of vertex processing. I expect this will be a very big change, and will allow developers to scale geometry counts from low-end to high-end hardware.
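To make that concrete, here's a minimal CPU-side C++ sketch of the kind of geometry amplification a programmable primitive processor might do on-chip. The flat midpoint subdivision and all the names are made up purely for illustration, not a description of any actual hardware:

#include <array>
#include <vector>

// Hypothetical illustration: the sort of geometry amplification a
// programmable primitive stage could do, instead of the CPU pre-tessellating.
struct Vec3 { float x, y, z; };
using Triangle = std::array<Vec3, 3>;

static Vec3 midpoint(const Vec3& a, const Vec3& b) {
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

// One level of midpoint subdivision turns each triangle into four. A
// primitive shader could vary 'levels' per object (or per hardware tier)
// to scale geometry counts from low-end to high-end parts.
std::vector<Triangle> amplify(const std::vector<Triangle>& in, int levels) {
    std::vector<Triangle> tris = in;
    for (int l = 0; l < levels; ++l) {
        std::vector<Triangle> next;
        next.reserve(tris.size() * 4);
        for (const Triangle& t : tris) {
            Vec3 ab = midpoint(t[0], t[1]);
            Vec3 bc = midpoint(t[1], t[2]);
            Vec3 ca = midpoint(t[2], t[0]);
            next.push_back({ t[0], ab, ca });
            next.push_back({ ab, t[1], bc });
            next.push_back({ ca, bc, t[2] });
            next.push_back({ ab, bc, ca });
        }
        tris.swap(next);
    }
    return tris;
}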

Beyond that, what we'll see are various efficiency improvements. The primary areas that need them today are dynamic branching, small polygon rendering, and the balancing of vertex/pixel processing power.

Dynamic branching has no easy fix. IHVs will just need to put a lot of time and effort into discovering ways to optimize performance with dynamic branching without harming performance without it; I don't think there is any easy solution. For example, if you get rid of today's processing of quads of pixels at a time, you may improve branching performance, but you'll lose the memory bandwidth efficiency gained from rendering in quads in the first place.
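To illustrate why divergence hurts (the costs and structure below are invented, not any particular GPU's behaviour), here's a toy C++ model of a 2x2 quad shaded in lockstep:

#include <array>
#include <cstdio>

// Toy model: if the four pixels of a quad disagree about a branch, the quad
// effectively pays for both sides of it.
int quad_branch_cost(const std::array<bool, 4>& takes_branch,
                     int cost_if_taken, int cost_if_not) {
    bool any_taken = false, any_not = false;
    for (bool t : takes_branch) {
        if (t) any_taken = true; else any_not = true;
    }
    int cost = 0;
    if (any_taken) cost += cost_if_taken;
    if (any_not)   cost += cost_if_not;
    return cost;
}

int main() {
    // Coherent quad: all four pixels take the cheap path.
    std::printf("coherent : %d\n", quad_branch_cost({false, false, false, false}, 40, 10));
    // Divergent quad: one pixel takes the expensive path, so all pay for both.
    std::printf("divergent: %d\n", quad_branch_cost({true, false, false, false}, 40, 10));
}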

When talking about small polygon rendering, what I mean are the inherent inefficiencies in the pixel shader's rendering of small polygons (or even just long, thin ones). Perhaps the best way to handle these is to have a polygon cache where you queue up multiple polygons before rendering, figure out what tiles on the screen are covered by which polygons, and render separate tiles (I believe this is what ATI does, actually). But this only helps so much; after that, it will likely be beneficial to allow pixel quads to work on different polygons.
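As a rough illustration of the binning idea (the tile size and data structures here are my own assumptions, not ATI's actual design), a small C++ sketch:

#include <algorithm>
#include <vector>

// Bin a small queue of triangles to screen tiles before shading, so the
// rasterizer can then work through one tile's polygons at a time.
struct ScreenTri { float min_x, min_y, max_x, max_y; };  // bounding box in pixels

constexpr int kTile = 16;  // assumed tile size in pixels

std::vector<std::vector<int>> bin_to_tiles(const std::vector<ScreenTri>& queue,
                                           int screen_w, int screen_h) {
    int tiles_x = (screen_w + kTile - 1) / kTile;
    int tiles_y = (screen_h + kTile - 1) / kTile;
    std::vector<std::vector<int>> bins(tiles_x * tiles_y);
    for (int i = 0; i < (int)queue.size(); ++i) {
        const ScreenTri& t = queue[i];
        int x0 = std::max(0, (int)t.min_x / kTile);
        int y0 = std::max(0, (int)t.min_y / kTile);
        int x1 = std::min(tiles_x - 1, (int)t.max_x / kTile);
        int y1 = std::min(tiles_y - 1, (int)t.max_y / kTile);
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                bins[y * tiles_x + x].push_back(i);  // triangle i touches this tile
    }
    return bins;
}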

Then there's the balancing of vertex and pixel shader power. The simple fact is that with a traditional architecture, it's really impossible to do properly. For example, if you have a shader-heavy game that uses shadow mapping, that shadow mapping will turn out to be much more vertex-shader hungry than normal rendering, and thus for part of the scene you'll be entirely vertex processing limited, and for the rest entirely pixel processing limited. But you don't even have to go that far to hit these sorts of limits. In any real game, you will view the same object from many different distances, which means that when the object is far away it will be vertex processing limited, and when it's close up it will be pixel processing limited.
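A quick back-of-envelope in C++ (all the numbers are invented) showing how the pixel-to-vertex work ratio collapses with distance while the vertex count stays fixed:

#include <cstdio>

// A mesh with a fixed vertex count covers roughly 1/d^2 as many pixels as it
// moves away, so the bottleneck shifts from pixel shading to vertex shading.
int main() {
    const double vertices = 10000.0;
    const double pixels_at_1m = 400000.0;  // assumed screen coverage up close
    for (double d : {1.0, 4.0, 16.0}) {
        double pixels = pixels_at_1m / (d * d);
        std::printf("distance %5.1f: %8.0f pixels, %.1f pixels per vertex\n",
                    d, pixels, pixels / vertices);
    }
}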

So, what we really need to combat this are unified pixel and vertex pipelines. The pipelines are already capable of many of the same things (at least in the NV4x). The primary differences are that the two pipelines are optimized for handling different data. What we need is an efficient way to combine the two different pipelines into one without losing this optimization.
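Here's a toy C++ model of what I mean by unified pipelines: one pool of identical units pulling from whichever queue (vertex or pixel) is deeper, so neither stage sits idle while the other is the bottleneck. The scheduling policy is entirely made up:

#include <cstdio>
#include <queue>

int main() {
    std::queue<int> vertex_jobs, pixel_jobs;
    for (int i = 0; i < 300; ++i) vertex_jobs.push(i);  // vertex-heavy pass
    for (int i = 0; i < 50;  ++i) pixel_jobs.push(i);   // e.g. a shadow-map pass

    int ran_vertex = 0, ran_pixel = 0;
    const int unified_units = 16;
    while (!vertex_jobs.empty() || !pixel_jobs.empty()) {
        for (int u = 0; u < unified_units; ++u) {
            // Each unit grabs work from the longer queue this cycle.
            if (vertex_jobs.size() >= pixel_jobs.size() && !vertex_jobs.empty()) {
                vertex_jobs.pop(); ++ran_vertex;
            } else if (!pixel_jobs.empty()) {
                pixel_jobs.pop(); ++ran_pixel;
            }
        }
    }
    std::printf("vertex jobs run: %d, pixel jobs run: %d\n", ran_vertex, ran_pixel);
}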
 
GPUs that can create and destroy vertices efficiently and quickly would be really nice.
 
Chalnoth said:
Perhaps the best way to handle these is to have a polygon cache where you queue up multiple polygons before rendering, figure out what tiles on the screen are covered by which polygons, and render separate tiles (I believe this is what ATI does, actually).

I realise that I have quoted you out of context, but this sounds like TBDR to me. Go wash your mouth out with soap. ;)
 
I think the next big thing is improved materials, using things like sub-surface scattering; this will get rid of the plastic look of the current generation.

Some might argue that the improved materials are more of the same, just with longer shaders etc., and not really an innovation. But I think what really matters is that it will result in a major image quality improvement, not whether it requires any improvements (other than speed) on the hardware side of things.
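As one illustrative approximation (not a claim about how anyone will actually implement it), a cheap stand-in for sub-surface scattering is "wrap" diffuse lighting, which softens the terminator so skin stops looking like plastic; a minimal C++ sketch:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// wrap = 0 gives standard Lambert diffuse; wrap around 0.3-0.5 lets light
// "wrap" past the terminator, faking light bleeding through the surface.
float wrap_diffuse(const Vec3& normal, const Vec3& to_light, float wrap) {
    float ndotl = dot(normal, to_light);
    return std::max(0.0f, (ndotl + wrap) / (1.0f + wrap));
}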
 
Roger Kohli said:
I realise that I have quoted you out of context, but this sounds like TBDR to me. Go wash your mouth out with soap. ;)
No, because it only queues up a few polygons. TBDR seeks to cache the entire scene.
 
You know, I'm surprised we haven't seen anyone start to use some sort of simplified BRDFs yet for lighting different materials. Of course, to be able to do that they need access to existing BRDFs or have to make their own.
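Something like this, say: a normalized Blinn-Phong lobe whose exponent and reflectance could be fitted per material from existing or measured data. Just a C++ sketch; the names and the choice of lobe are mine, not a standard anyone has committed to:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Specular BRDF value for one light/view pair; n, l, v are unit vectors.
float blinn_phong_brdf(const Vec3& n, const Vec3& l, const Vec3& v,
                       float specular_exponent, float specular_reflectance) {
    Vec3 h = normalize({ l.x + v.x, l.y + v.y, l.z + v.z });
    float ndoth = std::max(0.0f, dot(n, h));
    // (e + 8) / 8*pi is the usual energy-normalization factor for Blinn-Phong.
    float norm = (specular_exponent + 8.0f) / (8.0f * 3.14159265f);
    return specular_reflectance * norm * std::pow(ndoth, specular_exponent);
}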
 
Chalnoth said:
Roger Kohli said:
I realise that I have quoted you out of context, but this sounds like TBDR to me. Go wash your mouth out with soap. ;)
No, because it only queues up a few polygons. TBDR seeks to cache the entire scene.

This is truly way OT, but yes, under ideal circumstances; it's not a necessity, though.

And yes, considering ATI and one of Eric's most recent posts about tiling/geometry in general, it seems they are already on that track and are continuing research on the subject.
 
Tons of procedural or semi-procedural content? (Tuned by artists/level architects, of course.) Like IDV SpeedTree, for instance.
 
Actually, I think that the single most important thing to be released in the coming years will not be a graphics technology per se but, rather, tools that allow artists (not programmers) to really make use of the existing technology (well, "existing" meaning SM2/SM3/DX Next... nothing too revolutionary). We have the technical capability, but what a lot of games are missing is putting that technology to proper use!
 
Fast dynamic world geometry. Remember the HL2 E3 video? Right at the beginning, when the terrain changed, that was the most exciting part of the entire video.
 