GDC Europe - Nvidia and ATI to speak on DX9 shaders together

Nice! So hopefully we won't see a DX9.1 after all.

We'll take a whistle-stop tour through per-pixel lighting, procedural textures, procedural noise, procedural anti-aliasing within a pixel shader, float surfaces, high dynamic range rendering and a few of the other lesser known features in DirectX 9.
Um, how does that work? Never even heard of it...

Regards / ushac
 
Re: GDC Europe - Nvidia and ATI to speak on DX9 shaders toge

ben6 said:
http://www.gdc-europe.com/conference/index.htm


:eek: I wonder if they will break down into an all-out fist fight.. 8) That is a rather interesting development; this degree of cooperation, granted at a high level, speaks volumes about the rivalry between these two companies. I bet there will be a lot of cheap-shot jokes between them at that conference. (All in good fun, of course.) I would love to sit in on that...

Sabastian
 
You know, it really makes you thankful that at least SOMEBODY has the cojones to at least TRY to take Nvidia head on. It's really cool that they are together in this. It just makes one wonder what it would have been like if ATi had not gotten a new CEO and top management etc... with some vision.

It would be NVIDIA presents DX9, now bow down in humble adoration ;)
 
 
ushac said:
Nice! So hopefully we won't see a DX9.1 after all.

We'll take a whistle-stop tour through per-pixel lighting, procedural textures, procedural noise, procedural anti-aliasing within a pixel shader, float surfaces, high dynamic range rendering and a few of the other lesser known features in DirectX 9.
Um, how does that work? Never even heard of it...

Regards / ushac

It's not procedural FSAA. It's texture filtering done by the pixel shader itself on a procedural texture, like brick or marble.
 
Oh, wow, programmable texture filtering. That could be very cool :)

But the most exciting thing to me is all the unimplemented conditional statements available in Cg...
 
"Oh, wow, programmable texture filtering. That could be very cool :) "

I recently realised one texture sampling operation that would be nice. It's a variant of anisotropic filtering. With regular anisotropic filtering you give a texture coordinate for each filtered texel and let the HW calculate how to stretch the sample pattern. But if you instead give two coordinates and let the texturing unit sample the texture along a line from the first point to the second, you could do some interesting effects.

Regular anisotropic filtering could of course be done with such hardware (duh).
The anisotropic gloss maps that MfA linked to recently could also be done.
And a cheap way of doing fur (not so good for contours though).
And probably more.

Haven't thought about how difficult that function would be to implement though.
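The proposed two-coordinate mode is easy to mock up in software. A minimal Python/NumPy sketch (function and parameter names are made up; nearest-neighbour samples with wrap addressing stand in for proper filtering):

```python
import numpy as np

def sample_line(tex, p0, p1, n=8):
    """Average n nearest-neighbour samples taken along the line from
    texture coordinate p0 to p1 (texel units, wrap addressing).
    A software stand-in for the proposed two-coordinate sampling mode."""
    h, w = tex.shape[:2]
    acc = 0.0
    for t in np.linspace(0.0, 1.0, n):
        u = p0[0] + t * (p1[0] - p0[0])
        v = p0[1] + t * (p1[1] - p0[1])
        acc += tex[int(round(v)) % h, int(round(u)) % w]
    return acc / n
```

Regular anisotropic filtering falls out as the special case where the two endpoints are the ends of the footprint the HW would have computed itself.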
 
Why stop there? How about providing a filter kernel to the HW sampler and doing convolutions in HW while sampling? How about providing two kernels: one containing the set of positions to integrate over, and a second containing a weight for each.
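The two-kernel idea can be sketched in a few lines of Python (all names hypothetical; integer texels and wrap addressing assumed for brevity):

```python
import numpy as np

def kernel_sample(tex, u, v, offsets, weights):
    """Weighted sum over a caller-supplied set of sample positions:
    one 'kernel' of (du, dv) offsets to integrate over, and a second
    kernel holding a weight for each position."""
    h, w = tex.shape
    acc = 0.0
    for (du, dv), wgt in zip(offsets, weights):
        acc += wgt * tex[int(v + dv) % h, int(u + du) % w]
    return acc
```

A 3-tap box filter is then just `offsets=[(-1, 0), (0, 0), (1, 0)]` with `weights=[1/3] * 3`; a single centre tap with weight 1.0 degenerates to an ordinary point sample.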


One problem I see with procedural textures is the inherent inefficiency. There is no caching of the output of a "procedural texture" in today's pixel shaders, so procedural textures are expensive: instead of one cycle spent fetching a sample, you spend dozens computing it. You trade memory bandwidth for computation, but historically, using memory has always been cheaper than computing (it's why caches exist!)


There should actually be a new type of "shader" in today's hardware: texture generators, which run inside the texture unit and can cache their output. Think of it as a more advanced form of DXTC compression.

For example, if I want to use a marble texture, I write a procedural generator for it. The texture unit can cache parts of the generated texture (for those that don't vary based on inputs). These can be "sampled" by the pixel shader stage.

The difference between "inlining" the procedural generator in the pixel shader and making it a separate routine inside another HW unit is that the hardware can deal with the generator intelligently, whereas in the pixel shader case it is hard to determine what can be cached and reused.

Another solution is to allow pixel shaders to WRITE to a scratch-pad memory, so that you can write the cache logic yourself inside the shader, but I think this is bad for efficiency.
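The generator-plus-cache split can be illustrated with a memoised texel function in Python (the marble formula and all names here are invented for illustration, not an actual generator design):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=4096)
def marble_texel(u, v):
    """Hypothetical procedural 'marble' generator: many ALU operations
    the first time a texel is requested, a cache hit afterwards.
    Stands in for a texture-unit-side generator that caches its output,
    somewhat like a DXTC decompressor reusing decoded blocks."""
    x = 0.1 * u + 4.0 * math.sin(0.07 * v)
    return 0.5 + 0.5 * math.sin(x)      # value in [0, 1]

def sample(u, v):
    # The "pixel shader" only ever sees a plain texel fetch;
    # whether it hit the cache or ran the generator is invisible to it.
    return marble_texel(int(u), int(v))
```

The key property is that the caching decision lives with the generator, not with the shader that consumes the texels.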
 
A procedural texture could theoretically be superior in performance if it was simple enough to be calculated in one clock, including texture filtering.
 
Chalnoth said:
A procedural texture could theoretically be superior in performance if it was simple enough to be calculated in one clock, including texture filtering.

Well, the simplest possible procedural texture generator that does something non-trivial (e.g. doesn't just linearly interpolate between two colors) is at least 8 instructions, probably more. When we have 3D hardware that can dispatch 8 or more vector/scalar operations in a single cycle, call me from the year 2010. Until then, caching previously calculated texels will always be a performance win.
 
Procedural anti-aliasing is better called analytical anti-aliasing.

To understand it, you first need to realize that the process of generating images is really just the process of evaluating a multi-dimensional integral, and anti-aliasing is accomplished by integrating over area.

This is normally accomplished by using rectangular approximation (take a number of samples, average them, and use the property that as the number of samples approaches infinity, the error in your approximate integral approaches 0). Since a large number of primitives in computer graphics are sampled, or have no basis in math (e.g., texture maps), using approximations such as rectangular (or trapezoidal) are often the best available techniques.

However, many procedural shaders (the simplest being things like checkerboards) are based entirely in the world of mathematics, and you *can* write an equation that maps f(u,v,w) to RGB values. If, instead of computing f(u,v,w) on just the texture coordinates for the current fragment, you compute the integral of f(u,v,w) over the range from the lowest point on the fragment to the highest point, you can compute the exact value for the shader directly, and get perfect texture anti-aliasing at any resolution.

This requires the ability to compute the derivative of the texture coordinates relative to screen coordinates at every fragment, in order to know what the correct filter width (range of texture coordinates) is.

Procedurally anti-aliasing noise (often used in marble and wood textures) is much more difficult, since most noise implementations are non-integrable (and non-differentiable), resulting in, at best, an anti-aliasing implementation that takes a variable number of samples.
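For the checkerboard the integral really is closed-form. A minimal Python sketch of box-filtered analytical anti-aliasing in 1-D (the 2-D case is separable; helper names are made up):

```python
import math

def checker(x):
    """1-D checkerboard: 1 on [n, n + 0.5), 0 on [n + 0.5, n + 1)."""
    return 1.0 if (x - math.floor(x)) < 0.5 else 0.0

def checker_integral(x):
    """Closed-form antiderivative of checker(): each whole period
    contributes 0.5, plus whatever part of the current 'on' stripe
    has been covered so far."""
    f = x - math.floor(x)
    return 0.5 * math.floor(x) + min(f, 0.5)

def filtered_checker(x0, x1):
    """Exact box-filtered checkerboard over the footprint [x0, x1]:
    the 'integrate over the fragment' idea, in one dimension. The
    footprint width (x1 - x0) is what the texture-coordinate
    derivatives would supply in a real shader."""
    return (checker_integral(x1) - checker_integral(x0)) / (x1 - x0)
```

As the footprint grows to cover many periods, `filtered_checker` converges smoothly to the 0.5 average grey instead of shimmering, which is exactly the behaviour point sampling fails to deliver.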
 
elimc said:
side stepping a little.. what the HELL does Cajones mean anyway?

Cajones are eggs. Go into a Mexican restaurant and ask for some cajones. They will give you eggs.

Umm, eggs are huevos, as in huevos rancheros.

If you ask for cajones, you'll get strange looks.
 
RussSchultz said:
elimc said:
side stepping a little.. what the HELL does Cajones mean anyway?

Cajones are eggs. Go into a Mexican restaurant and ask for some cajones. They will give you eggs.

Umm, eggs are huevos, as in huevos rancheros.

If you ask for cajones, you'll get strange looks.

Drawers.

Ok, I know you are not talking about that ;). Huevos would work too.
 