The Future of 3D Graphics

Most image space filters used for movies are too fragile for interactive applications, and compositing does not really need specific support.
 
I'd rather have a floating point frame buffer to comp together color, specular, light and other passes... That is, if it'll be practical to render in passes in realtime... but why not?

And what do you mean by "fragile" filters? Sharpening, blurring, edge detection and combinations of these can do a lot of nice and cool stuff, and AFAIK most of these are possible to implement on current hardware (like the bloom effect on highlights using blur + brightness/contrast adjustment). But these require rendering to a texture and re-feeding it to the pixel shaders, which sounds a bit awkward to me...
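For instance, the bloom boils down to three cheap passes. Here's a rough CPU sketch in C++ of what each pass computes over a single-channel float buffer (on hardware each would be a render-to-texture plus a pixel shader pass; the function names are just mine, and the dst buffers are assumed pre-sized to match src):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Pass 1: keep only the highlights above some threshold.
void brightPass(const std::vector<float>& src, std::vector<float>& dst,
                float threshold) {
    for (std::size_t i = 0; i < src.size(); ++i)
        dst[i] = std::max(0.0f, src[i] - threshold);
}

// Pass 2: cheap horizontal box blur (run again vertically for the full blur).
void boxBlurH(const std::vector<float>& src, std::vector<float>& dst,
              int w, int h, int radius) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int k = -radius; k <= radius; ++k) {
                int xk = x + k;
                if (xk < 0 || xk >= w) continue;   // clamp at the edges
                sum += src[y * w + xk];
                ++count;
            }
            dst[y * w + x] = sum / count;
        }
}

// Pass 3: add the blurred highlights back onto the frame.
void addBloom(std::vector<float>& frame, const std::vector<float>& bloom,
              float strength) {
    for (std::size_t i = 0; i < frame.size(); ++i)
        frame[i] += strength * bloom[i];
}
```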
 
In the sense that it is hard to know when to apply them and when they will cause artifacts, without shrinking the artist down to fit on the chip too so he can direct the shot in realtime.

The more physically accurate the models used for rendering are, the less you have to worry about edge cases where your approximations break down. Realtime rendering is more difficult than offline rendering in a way: you control both the view and the scene only in a very limited sense ... and there are no second chances.

Marco

PS. I think in DX9+ shaders the framebuffer will be an input ... so it really is no problem, no specific support necessary IMO.
 
YeuEmMaiMai said:
I seriously doubt that ATi is going to miss one product cycle let alone three. R300 should be a very nice clue as to where ATi is going.
R300 set the bar so high in terms of reality vs. hype, I don't think anything will top it for a while (plus, we may have been expecting a slight disappointment after Parhelia's inauspicious 256-bit DDR debut). For me, R300 redefined my expectations of a high-end card: it literally had no compromises, no cases where one could honestly say performance was not a large improvement over the previous generation.

I'm not really expecting the same out of the current next gen (not updates like R350 and NV35, but "new" architectures like Loci and NV40), but I suppose it's possible we'll see something like we have with R300. I'm not sure memory tech will advance fast enough (AA+AF performance was the biggest surprise with R300), though I know even less about that than I do about GPU tech.

I also was in no way expecting something like the R300 given ATi's and ArtX's less-than-exemplary track records. So nVidia may rebound higher than ATi with the NV40. Anything's possible.
 
Realtime...

mboeller said:
Realtime Raytracing :
http://www.saarcor.de/
http://graphics.cs.uni-sb.de/~jofis/SaarCOR/SaarCOR-Descr-Engl.pdf
...
The advantage of this system is that it needs only a very small amount of bandwidth compared to normal texture mapping. The prototype, for example, uses normal 133MHz SDRAM

Quite interesting. But there are a few glitches:

1. You need an axis-aligned BSP of the whole scene. When something changes you have to rebuild the affected BSP nodes. This is very slow (especially when the changed geometry spans a root node) and cannot be done efficiently in hardware.

2. The architecture requires good ray coherence - e.g. lots of rays span the same BSP nodes and intersect the same triangles (hence the advertised low bandwidth requirements). Not good if you have lots of pixel-sized triangles - which will happen a lot when games start to use HOS + displacement mapping or just denser geometry. Also not good for things like reflections/refractions from bumpy surfaces.

3. I don't see how this architecture can be less complex than the classic hardware implementation - they replace the very simple triangle rasterization + depth compare unit with a bunch of parallel raytracing units that perform BSP traversal and triangle intersections (see the sketch below for the work one such unit does per ray).
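Here's a rough C++ sketch of that per-ray work: walk the axis-aligned BSP (a kd-tree) and run a fixed-function ray/triangle test in the leaves. None of the structure or names come from the SaarCOR paper; this is just an illustration:

```cpp
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vec3 v0, v1, v2; };

// Moeller-Trumbore ray/triangle test: the small fixed-function block that
// replaces the rasterizer's edge/depth tests in a raytracing pipeline.
bool intersectTriangle(Vec3 orig, Vec3 dir, const Triangle& tri, float& tHit) {
    Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, tri.v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    if (t <= 0.0f) return false;
    tHit = t;
    return true;
}

struct KdNode {
    int axis;                    // 0/1/2 = split axis, -1 = leaf
    float split;                 // splitting plane position
    const KdNode* left;          // child below the plane
    const KdNode* right;         // child above the plane
    std::vector<Triangle> tris;  // leaf payload
};

// Returns the distance to the nearest hit, or FLT_MAX for a miss.
// A real traverser uses the ray's parametric interval to visit children
// front-to-back and cull the far one (this is exactly where ray coherence
// pays off, per glitch #2); for brevity this sketch just visits both.
float traceRay(Vec3 orig, Vec3 dir, const KdNode* node) {
    if (!node) return FLT_MAX;
    if (node->axis < 0) {        // leaf: test its triangles
        float best = FLT_MAX, t;
        for (const Triangle& tri : node->tris)
            if (intersectTriangle(orig, dir, tri, t) && t < best) best = t;
        return best;
    }
    return std::min(traceRay(orig, dir, node->left),
                    traceRay(orig, dir, node->right));
}
```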

This approach has a few advantages - the capability to calculate reflections/refractions and shadows, and the zero overdraw when calculating the shading. But because of glitches #1 and #2 it is unsuitable for rendering anything other than static and relatively (by 2003 standards) simple scenes.

All the other disadvantages of standard raytracing still apply - lots of context switching, the need for the whole scene in memory at once, etc.
 
MfA said:
In the sense that it is hard to know when to apply them and when they will cause artifacts, without shrinking the artist down to fit on the chip too so he can direct the shot in realtime.

Well, I generally agree with you, but there are a couple of easy ways you could use filters in realtime:
- apply them to the whole screen, like a sharpen + glow combination
- render elements separately with a matte in the alpha channel, manipulate the layers individually, then comp everything together for display (see the sketch below)
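The second one is basically the classic "over" operator per layer. A minimal C++ sketch, assuming premultiplied alpha (the names are mine):

```cpp
#include <cstddef>
#include <vector>

struct RGBA { float r, g, b, a; };

// dst = src OVER dst, premultiplied-alpha form; run back-to-front over
// all layers. Each layer can be filtered or color-corrected on its own
// before this step, which is where a floating point frame buffer helps.
void compOver(std::vector<RGBA>& dst, const std::vector<RGBA>& src) {
    for (std::size_t i = 0; i < dst.size(); ++i) {
        float k = 1.0f - src[i].a;   // how much of what's below shows through
        dst[i].r = src[i].r + k * dst[i].r;
        dst[i].g = src[i].g + k * dst[i].g;
        dst[i].b = src[i].b + k * dst[i].b;
        dst[i].a = src[i].a + k * dst[i].a;
    }
}
```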

And artists can simply set up a few post effects so that they'll look good for a whole level. Also, users are much more forgiving about rendering artifacts in interactive applications. Just look how long we've been going on without antialiasing, or even perspective-correct texture mapping :)
And of course you don't have to match the realtime imagery to live action, so it won't require as much or as complex manipulation, just a few effects with a high "wow" factor.

Oh, and I forgot to mention color correction, which is just as important. Its wow factor can be pretty big as well, and it should also reduce the artist time spent on fine-tuning assets to get a consistent look.
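Even a simple per-channel lift/gain/gamma control gets you most of the way there. A minimal sketch (parameter names are just illustrative):

```cpp
#include <algorithm>
#include <cmath>

// One channel of lift/gain/gamma grading: lift raises the blacks,
// gain scales the whites, gamma bends the midtones.
float correct(float c, float lift, float gain, float gamma) {
    float v = lift + c * (gain - lift);
    return std::pow(std::max(v, 0.0f), 1.0f / gamma);
}

// e.g. correct(c, 0.05f, 1.1f, 1.2f) lifts the shadows a little, brightens,
// and flattens contrast; different settings per channel give you a tint.
```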


The more physically accurate the models used for rendering are, the less you have to worry about edge cases where your approximations break down. Realtime rendering is more difficult than offline rendering in a way: you control both the view and the scene only in a very limited sense ... and there are no second chances.

I still don't believe in the supremacy of physically correct rendering. It's a lot harder to manipulate reality to get the effects you want; there are too many rules and restrictions, and it needs too much computing capacity.

Let me bring up an example. A few years ago, several renderers were introduced that used Monte-Carlo sampling for relatively fast Global Illumination. It created a lot of buzz in the industry, and many people thought that the end for lighting artists had come. Everybody started to pump out those dull greyish or bluish skylight + 1 sunlight renders, which looked quite realistic but also incredibly boring.
Things soon went back to normal though, as it turned out that a GI renderer cannot replace a good artist. The new abilities of the software have found their right place as they got integrated into existing toolsets (see about ambient occlusion above). And the most important requirements for a renderer remain the same: good support for large, complex scenes, good displacement mapping, and also motion blur and depth of field.
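For reference, the core Monte-Carlo trick behind those renderers is tiny: estimate how much of the sky a point sees by firing random rays over the hemisphere. A C++ sketch, where the occluded callback stands in for the renderer's actual ray query (nothing here is a real API):

```cpp
#include <cstdlib>

struct Vec3 { float x, y, z; };

// p = surface point, n = unit normal, occluded = the renderer's ray query.
// Returns 1 for a fully open point, 0 for a fully buried one.
float ambientOcclusion(Vec3 p, Vec3 n, int samples,
                       bool (*occluded)(Vec3 origin, Vec3 dir)) {
    int open = 0;
    for (int i = 0; i < samples; ++i) {
        // Rejection-sample a uniform direction inside the unit sphere,
        // then flip it into the hemisphere around n.
        Vec3 d;
        float len2;
        do {
            d = { std::rand() / (float)RAND_MAX * 2.0f - 1.0f,
                  std::rand() / (float)RAND_MAX * 2.0f - 1.0f,
                  std::rand() / (float)RAND_MAX * 2.0f - 1.0f };
            len2 = d.x * d.x + d.y * d.y + d.z * d.z;
        } while (len2 > 1.0f || len2 < 1e-6f);
        if (d.x * n.x + d.y * n.y + d.z * n.z < 0.0f)
            d = { -d.x, -d.y, -d.z };
        if (!occluded(p, d))
            ++open;                    // this direction reaches the sky
    }
    return open / (float)samples;
}
```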


PS. I think in DX9+ shaders the framebuffer will be an input ... so it really is no problem, no specific support necessary IMO.

Sounds cool, but then again I'm no programmer :)
 
JF_Aidan_Pryde said:
I wonder how one can shift hardware support from rasterisation to raytracing/radiosity. If 'hardwired' raytracing/radiosity is not on chip, then the 'pool of computation resource' model will have to be exploited. (?)

I'd like to hear some more on this too, in addition to what _GeLeTo_ has already said. Also whether there are any current API limitations that prevent the practical realisation of such processing within current architectures. :?:

How does Kirk's hinting WRT NV40 relate to what is coming in terms of API development, if at all? OGL 2.0 finalisation, PS/VS 3.0 and then quite possibly a prolonged period of stagnation. :?:

MuFu.
 