Beyond the current OOTB features like AA and AF

Well, in this case I wouldn't change the existing interface, I'd just add an option to "Force Max supported refresh to all resolutions", something like RivaTuner has. But I'd leave the advanced settings, even the bit-depth ones, as they are. Some would certainly prefer very advanced settings, even if they look stupid to you (since you don't need bit depth).
But with the added option you'd be able to do what you need.
But I don't think it will change any time soon. Maybe in the new NV CP, hard to say.
 
Luminescent said:
I have a lot of fun using the B3D competition smartshader entries for enhancing older games.
They are here. The only download link I could find that was allegedly to them pointed to ATi's examples.
 
RejZoR said:
Well, in this case I wouldn't change the existing interface, I'd just add an option to "Force Max supported refresh to all resolutions", something like RivaTuner has. But I'd leave the advanced settings, even the bit-depth ones, as they are. Some would certainly prefer very advanced settings, even if they look stupid to you (since you don't need bit depth).
As Pete said, a GUI can have too many options. The chance that you need separate settings for different bit depths is less than one in a million. And even then the interface does not allow you to force different refresh rates for different bit depths. You can only choose between one forced refresh rate or default.

But indeed, "force max supported refresh" is probably the only thing 99.99% of users ever want.
 
Getting back to the starting point about new OOTB features, I didn't see anyone mention geometry shading, which will be included in DX-10. IMHO, this is the big one.

What is missing with the existing level of 3D graphics is that no real curved surfaces exist - only polygon-based surfaces. If you want to generate a smooth-looking surface today, you need lots of vertices. With geometry shading you will be able to generate vertices with a shader from some arbitrary high-level surface definition.

This will allow a figure to use only as many vertices as are really necessary to describe its shape - the shader can adapt the vertex count to the level of detail an object needs, depending on its distance from the camera, rather than switching between a limited number of LODs.
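
To make the idea concrete, here is a minimal CPU-side sketch in Python (rather than actual shader code) of deriving the vertex count from camera distance; all names here are illustrative, not any real API:

```python
import math

def tessellation_level(distance, base_level=64, min_level=4):
    """Halve the vertex budget each time the distance doubles."""
    level = int(base_level / max(distance, 1.0))
    return max(level, min_level)

def tessellate_circle(radius, distance):
    """Emit just enough vertices for a circle to look smooth."""
    n = tessellation_level(distance)
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

print(len(tessellate_circle(1.0, distance=1.0)))   # 64 vertices up close
print(len(tessellate_circle(1.0, distance=16.0)))  # 4 vertices far away
```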

The end-effect will be better looking human figures, smoother curved surfaces in general, better lighting and reflections on curved surfaces, and no more popping when the LOD changes.
 
MipMap said:
Getting back to the starting point about new OOTB features, I didn't see anyone mention geometry shading, which will be included in DX-10. IMHO, this is the big one.

What is missing with the existing level of 3D graphics is that no real curved surfaces exist - only polygon-based surfaces. If you want to generate a smooth-looking surface today, you need lots of vertices. With geometry shading you will be able to generate vertices with a shader from some arbitrary high-level surface definition.

This will allow a figure to use only as many vertices as are really necessary to describe its shape - the shader can adapt the vertex count to the level of detail an object needs, depending on its distance from the camera, rather than switching between a limited number of LODs.

The end-effect will be better looking human figures, smoother curved surfaces in general, better lighting and reflections on curved surfaces, and no more popping when the LOD changes.

While that sounds great, it has nothing to do with OOTB features like forced AA/AF, DV, etc.
 
MipMap said:
Getting back to the starting point about new OOTB features, I didn't see anyone mention geometry shading, which will be included in DX-10. IMHO, this is the big one.
That's not an "Out of the box" feature: it needs software support. And from everything we've seen, it seems more and more doubtful that the use of higher-order surfaces will ever become widespread.

Granted, perhaps the geometry shader will finally allow the flexibility and performance required to make game devs go for it, but we'll have to see about that. It also needs to be easy to make use of them: i.e. good developer tools.
 
Chalnoth said:
That's not an "Out of the box" feature: it needs software support. And from everything we've seen, it seems more and more doubtful that the use of higher-order surfaces will ever become widespread.

Granted, perhaps the geometry shader will finally allow the flexibility and performance required to make game devs go for it, but we'll have to see about that. It also needs to be easy to make use of them: i.e. good developer tools.

I'm not sure whether just a geometry shader, as required in D3D10, is really enough for what developers want or would have wanted.

I have the impression that developers would also have wanted fully programmable tessellation, and I doubt IHVs would have left it out if there had been enough hardware space to include it.

My impression for D3D10 is that the requirements are already insanely high and that IHVs had to set certain priorities. While it's more than a little early to speculate on aspects beyond D3D10, I wouldn't exclude the possibility for the future either.
 
I think the best OOTB addition to the desktop for gaming would be high dynamic range monitors. I have heard of their existence, but not of their support in the PC space. Can't see how this would help older games though.

I'd like to see a feature that would end the 30/60/80/100/10000 FPS debate by properly motion blurring frames: for example, calculating how much an object has transformed from one frame to the next - i.e. got slightly further away (smaller), moved to the right and rotated slightly - and, once this is calculated, interpolating, drawing the object in many states between the two to create a blurred object. This would be instrumental in properly fooling our eyes into accepting fluid motion.
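
A rough sketch of that interpolation idea in Python, assuming we can re-draw an object at arbitrary in-between transforms; everything here (field names, the per-object transform representation) is hypothetical:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def motion_blur_states(prev, curr, subframes=8):
    """Interpolate position/rotation/scale between two frames; drawing
    all the returned in-between states blended together produces the
    blurred object."""
    states = []
    for i in range(1, subframes + 1):
        t = i / (subframes + 1)
        states.append({
            'x':     lerp(prev['x'], curr['x'], t),
            'y':     lerp(prev['y'], curr['y'], t),
            'angle': lerp(prev['angle'], curr['angle'], t),
            'scale': lerp(prev['scale'], curr['scale'], t),  # smaller = further away
        })
    return states

# Object moved right, rotated slightly and got slightly further away:
a = {'x': 0.0, 'y': 0.0, 'angle': 0.0, 'scale': 1.0}
b = {'x': 4.0, 'y': 0.0, 'angle': 10.0, 'scale': 0.9}
for state in motion_blur_states(a, b, subframes=3):
    print(state)
```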

It also doesn't sound, to me, all that difficult - just expensive in certain situations. Bottom line is, if your attempt to hide the fact that your scene is being shown as a very fast slideshow turns your output into a slow, motion-blurred slideshow, then wtf is the point!
 
This sounds more like the T-Buffer from the Voodoos ;) A real-time blurring "engine".
Remember the motion blurred Quake 3 Arena?
 
Demirug said:
Believe me, this could be a very bad idea, because old games do dirty tricks with the back buffer.
As long as it works in 99% of all games, I don't think it would be a bad idea.
 
RejZoR said:
This sounds more like the T-Buffer from the Voodoos ;) A real-time blurring "engine".
Remember the motion blurred Quake 3 Arena?

Motion blur in the T-buffer was temporal antialiasing.
 
RejZoR said:
? Temporal AA has nothing to do with Motion Blur...

Depends on what exactly you mean by temporal AA. Motion blur means antialiasing in the temporal dimension.

The method for creating smooth images is known as Spatial Anti-aliasing (means smoothing out space), and the method for creating smooth motion in animations is known as Temporal Anti-aliasing (means smoothing out time).

http://freespace.virgin.net/hugo.elias/graphics/x_motion.htm

If you're confusing it with ATI's temporal AA, that's in my book an abuse of the term.
 
I wouldn't call that temporal AA either way.
It's simply motion blurring. Viewing a fast moving object in real life gives the same effect if you don't follow it with your eyes but look at it statically.
 
RejZoR said:
Viewing a fast moving object in real life gives the same effect if you don't follow it with your eyes but look at it statically.
Yeah, and you don't see that on a computer. What you see instead is multiple discrete images blended together. This is temporal aliasing. Thus, the motion blur would be temporal anti-aliasing.
 
RejZoR said:
I wouldn't call that temporal AA either way.
It's simply motion blurring. Viewing a fast moving object in real life gives the same effect if you don't follow it with your eyes but look at it statically.

In addition to the other replies above:

The term "motion blur" refers to antialiasing in the temporal dimension. All high quality CGI feature film work performs this, otherwise CGI imagery would look very unrealistic (esp at 24 fps). Not fully understanding temporal antialiasing was a major problem is early film special effects such as hand animated miniatures.

The methods of temporal anti-aliasing (motion blur) are the same as for the spatial dimensions. The most straightforward is to supersample at a higher temporal resolution (frame rate) and then downsample to the desired frame rate using some filter kernel (often just a box filter, meaning all the in-between frames are simply averaged). You get better quality by increasing the number of samples and by using a better filter shape (such as a Gaussian).
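
As a concrete illustration, here is a minimal Python sketch of that supersample-then-downsample approach, assuming the scene can be rendered at N times the target frame rate; the function name and setup are mine, not any real API:

```python
import numpy as np

def temporal_downsample(subframes, weights=None):
    """subframes: array of shape (N, H, W), rendered within one display
    frame's time interval; returns one motion-blurred output frame."""
    subframes = np.asarray(subframes, dtype=np.float64)
    n = subframes.shape[0]
    if weights is None:                  # box filter: plain average
        weights = np.ones(n) / n
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()    # normalize so brightness is preserved
    return np.tensordot(weights, subframes, axes=1)

# 4x temporal supersampling of a 2x2 "image" of a moving bright pixel:
frames = np.zeros((4, 2, 2))
for i in range(4):
    frames[i, 0, i % 2] = 1.0            # pixel hops back and forth
print(temporal_downsample(frames))       # energy smeared over its path
```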

Aliasing gets its name from the fact that frequencies higher than the display frequency alias (take on another identity) as frequencies below the display frequency.

This means that spatial frequencies (rapid changes in image contrast) at resolutions higher than the display resolution alias to lower spatial frequencies and become visible. This is the same phenomenon as the beat frequency from two separately oscillating strings, tuning forks, etc. In that case, the two frequencies interfere to cause a third, low-frequency beat. In the case of images, the spatial display frequency of the CRT screen (say 1280 pixels across) interferes with the spatial frequencies in the image (spatial patterns in textures, spatial patterns formed by triangle edges, etc.) and causes artifact "beat" frequencies.

Thus, increasing the display resolution can never eliminate aliasing, only raise the frequency of the artifacts. You must antialias to remove aliasing. In the temporal case, raising the frame rate can never remove temporal aliasing; again, it only increases the frequency of the temporal artifacts. You must antialias in the temporal dimension (use correct motion blur) to remove temporal aliasing.
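
A small numerical illustration of that frequency folding, using a 1-D signal for simplicity: a sine above the Nyquist limit, sampled at rate fs, is indistinguishable from the lower "beat" frequency it folds to, exactly like the interfering strings and tuning forks above:

```python
import numpy as np

fs = 60.0                      # display/sample rate, e.g. 60 Hz
f_high = 70.0                  # signal frequency above Nyquist (30 Hz)
t = np.arange(0, 1, 1 / fs)    # one second of sample times

aliased = np.sin(2 * np.pi * f_high * t)
f_alias = abs(f_high - round(f_high / fs) * fs)   # folds to 10 Hz
reference = np.sin(2 * np.pi * f_alias * t)

# At these sample points the 70 Hz signal equals a 10 Hz one:
print(f_alias)                               # 10.0
print(np.max(np.abs(aliased - reference)))   # ~0 (numerical noise)
```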

Note that correct temporal antialiasing does not truly "blur" the image any more than correct spatial antialiasing blurs the image. In fact, the image actually appears much sharper, with more resolution than is actually being displayed. In the temporal case, this means a correctly motion-blurred image actually appears to be running at a much higher frame rate than it actually is, with no flicker or jerkiness.

It does not depend on where the eyes are looking, any more than spatial antialiasing. It is applied to the whole image on each frame.

http://www.beyond3d.com/forum/showpost.php?p=13331&postcount=46

"Blurring" as you present it is simply wrong. If then we could add a simple blur filter for either the entire scene or fast moving objects in the temporal dimension and call it anti-aliasing. Blurring doesn't anti-alias, it just blurs the hell out of it.
 