Graphics Cards using advanced Curved Surface techniques?

jpr27


I was wondering if there has been any word on the next generation of graphics cards using advanced curved-surface techniques? With the power and bandwidth of graphics cards ever increasing, how do you see curved surfacing evolving?


Thanks


***There is no luck, only the will and desire to succeed***
 
Nothing definitive. One can always hope, but the fact that Microsoft is not planning any API revisions anytime soon seems to indicate that advanced higher-order surface (HOS) techniques will not become available with the next generation of cards.

I expect that when we finally do see higher-order surfaces implemented in a satisfactory way, they'll be in the form of a "primitive processor," where the programmer can write a shader for how to split triangles into more triangles. These could be in the form of "patches," where the programmer sends explicit surface-shape information along with the primitive, or something closer to ATI's Truform, where the programmer can write an algorithm that calculates a more highly tessellated surface based upon existing geometry (but in a much more sophisticated way, of course, to avoid Truform's artifacts).
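
To make the Truform-style idea concrete, here is a minimal CPU-side sketch of my own (a toy illustration, not ATI's actual algorithm or any vendor's API): synthesize extra geometry from the triangles you already have by subdividing each one and lifting the new vertices along averaged normals. A hypothetical primitive processor would run something like this per primitive on the card.

```cpp
// Toy Truform-style refinement step: split each triangle into four and push
// the new edge midpoints out along the averaged endpoint normals so the
// mesh bulges toward a smoother surface.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 normalize(Vec3 a) {
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return scale(a, 1.0f / len);
}

struct Vertex { Vec3 position; Vec3 normal; };

// Midpoint of an edge, displaced along the average of the endpoint normals
// by a fixed "bulge" amount (real N-patches derive the lift from a cubic
// Bezier construction rather than a constant).
static Vertex CurvedMidpoint(const Vertex& a, const Vertex& b, float bulge) {
    Vertex m;
    m.normal   = normalize(add(a.normal, b.normal));
    Vec3 flat  = scale(add(a.position, b.position), 0.5f);
    m.position = add(flat, scale(m.normal, bulge));
    return m;
}

// One refinement pass: every input triangle (3 consecutive vertices)
// becomes four output triangles.
static std::vector<Vertex> TessellateOnce(const std::vector<Vertex>& tris, float bulge) {
    std::vector<Vertex> out;
    for (std::size_t i = 0; i + 2 < tris.size(); i += 3) {
        const Vertex &v0 = tris[i], &v1 = tris[i + 1], &v2 = tris[i + 2];
        Vertex m01 = CurvedMidpoint(v0, v1, bulge);
        Vertex m12 = CurvedMidpoint(v1, v2, bulge);
        Vertex m20 = CurvedMidpoint(v2, v0, bulge);
        Vertex refined[12] = { v0, m01, m20,   v1, m12, m01,
                               v2, m20, m12,   m01, m12, m20 };
        out.insert(out.end(), refined, refined + 12);
    }
    return out;
}
```

Running the pass again on its own output keeps refining the surface, which is the sort of recursive amplification a primitive processor would do on-chip instead of on the CPU.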

But, unfortunately, if the fact that Microsoft is not updating DirectX is any indication, it'll be over a year before we see any of this.
 
MS Sucks (for curved surfaces at least)

From memory, for doing DX9 rect patches, you cannot submit a list of patches; you have to issue a draw call for every patch, so any speed you would get out of patches is lost in draw-call overhead!

Silly MS.
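
For reference, this is roughly what that per-patch submission looks like with IDirect3DDevice9::DrawRectPatch. I'm writing it from memory, so treat the handle/caching details as approximate; DrawPatchGrid is just an illustrative wrapper of my own.

```cpp
// Sketch (from memory -- details approximate) of why D3D9 patch rendering is
// chatty: every rect patch needs its own DrawRectPatch call, so a surface
// made of N patches costs N API calls instead of one batched draw.
#include <d3d9.h>

void DrawPatchGrid(IDirect3DDevice9* device,
                   const D3DRECTPATCH_INFO* patchInfos,  // one entry per patch
                   unsigned patchCount)
{
    // Tessellation level for the four edges of each patch.
    const float numSegs[4] = { 8.0f, 8.0f, 8.0f, 8.0f };

    for (unsigned i = 0; i < patchCount; ++i) {
        // Handle 0: submit the full patch description every time instead of
        // referring to a cached tessellation (caching semantics recalled
        // from memory). Either way it is one call per patch -- the overhead
        // complained about above.
        device->DrawRectPatch(0, numSegs, &patchInfos[i]);
    }
}
```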
 
Carmack has said a lot on this subject:

http://unrealities.com/web/johncchat.html

Polygons and curved surfaces are both analytic representations that have serious problems of scalability. The "number of visible polygons on screen" problem comes up if you build your world out of polys. Curved surfaces seem to help with that, but not for long... soon you run into problems of "number of visible curves on screen" and you're back to square one.

John's hunch is that eventually 3D hardware will be based on some kind of multiresolution representation, where the world is kept in a data structure that fundamentally supports rendering it in near-constant time from any viewpoint.

Voxels are one example of such a structure. John mentioned that he actually wrote a voxel renderer and converted an entire Quake 2 level to use it. It wound up being about 3 gigabytes of data! But he said that that's not actually that far off from today's hardware capacity, if you use intelligent streaming techniques to move the relevant portion of the voxel set into and out of memory. And he said it was really nice having only one data structure for the entire world--no more points versus faces versus BSPs... just one octree node (and subnodes of the same type) representing everything.
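
(To picture that "one octree node" structure, here's a minimal sparse-voxel-octree sketch of my own -- purely illustrative, not anything from his renderer.)

```cpp
// Minimal sparse voxel octree: every node is the same type, leaves carry the
// voxel payload, and empty space costs nothing but a null child pointer.
#include <array>
#include <cstdint>
#include <memory>

struct VoxelOctreeNode {
    // Payload for leaf nodes (here just a color and an occupancy flag);
    // a real engine would store material/normal data or a compressed brick.
    uint32_t rgba     = 0;
    bool     occupied = false;

    // Eight children, all the same node type, null when empty -- the
    // "just one octree node (and subnodes of the same type)" property.
    std::array<std::unique_ptr<VoxelOctreeNode>, 8> children;

    bool IsLeaf() const {
        for (const auto& c : children)
            if (c) return false;
        return true;
    }
};

// Insert a voxel at integer coordinates (x, y, z) inside a cube of side
// 2^depth, splitting nodes on the way down. The work per voxel depends on
// the depth, not on the total amount of geometry in the scene.
void Insert(VoxelOctreeNode& node, int x, int y, int z, int depth, uint32_t rgba) {
    if (depth == 0) {
        node.rgba = rgba;
        node.occupied = true;
        return;
    }
    int half  = 1 << (depth - 1);
    int index = (x >= half ? 1 : 0) | (y >= half ? 2 : 0) | (z >= half ? 4 : 0);
    if (!node.children[index])
        node.children[index] = std::make_unique<VoxelOctreeNode>();
    Insert(*node.children[index],
           x & (half - 1), y & (half - 1), z & (half - 1),
           depth - 1, rgba);
}
```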

Other possibilities include frequency space representations (i.e. wavelet compression schemes). (Think something like fractal compression, and fractal-compression-like progressive rendering, applied to an entire 3D space. He didn't mention fractal compression at all, though; he just talked about "frequency space" and nodded when I said "wavelets" :) He mentioned that there is one multiresolution graphics text that includes some techniques for frequency-space 3D modeling and rendering, but didn't say which one; one possibility is Tony DeRose's (et al.) Wavelets for Computer Graphics.
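
For anyone who hasn't run into wavelets before, the core "frequency space" idea can be shown with a toy 1D Haar step (my own illustration; real 3D schemes are far more involved): each pass splits a signal into coarse averages plus detail coefficients, and dropping details gives progressively coarser approximations, which is exactly the multiresolution property being talked about above.

```cpp
// One level of the 1D Haar wavelet transform: averages = low frequency,
// details = high frequency. Keeping only the averages gives a half-resolution
// approximation; keeping everything reconstructs the original exactly.
#include <vector>

// One Haar analysis step (input length is assumed even).
void HaarStep(const std::vector<float>& in,
              std::vector<float>& averages,
              std::vector<float>& details)
{
    averages.clear();
    details.clear();
    for (std::size_t i = 0; i + 1 < in.size(); i += 2) {
        averages.push_back((in[i] + in[i + 1]) * 0.5f);  // coarse signal
        details.push_back((in[i] - in[i + 1]) * 0.5f);   // lost detail
    }
}

// Exact reconstruction from averages + details; nothing is lost unless you
// deliberately drop or quantize the detail coefficients.
void HaarInverse(const std::vector<float>& averages,
                 const std::vector<float>& details,
                 std::vector<float>& out)
{
    out.clear();
    for (std::size_t i = 0; i < averages.size(); ++i) {
        out.push_back(averages[i] + details[i]);
        out.push_back(averages[i] - details[i]);
    }
}
```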

There's also been a lot in his .plan updates preaching against doing curved-surface tessellation in hardware.

<thinking cap> So far, real-time graphics has basically been about doing the same thing an offline renderer does, but on a much smaller scale. As real-time 3D continues to become more and more complex, will there be aspects of offline rendering that will not lend themselves to a scaled-down real-time implementation? I don't think it would be a smart idea to try to render "Finding Nemo" in real time with the same methodology an offline renderer uses, even if you had infinite hardware resources. </thinking cap>
 
Oh, but I guess I will have to state that with the availability of texture reads in the vertex shader in Shader Model 3.0, any SM 3.0 part will be able to do a basic form of displacement mapping. Unfortunately, every vertex will still have to be sent to the graphics card for rendering, as there will be no way to generate new vertices in VS 3.0.

So, this form of displacement mapping would essentially consist of sending a flat, highly tessellated surface, then perturbing that flat surface via the displacement map.
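
Here is a CPU-side sketch of that data flow (all names are mine, purely illustrative). On actual SM 3.0 hardware the height-map sample and the displacement would happen in the vertex shader via vertex texture fetch, but the flat grid still has to be submitted at full tessellation.

```cpp
// Build a flat, densely tessellated grid and displace each vertex along +Y
// by a value sampled from a single-channel height map. The grid resolution
// is fixed up front -- displacement perturbs vertices, it cannot create them.
#include <cstddef>
#include <vector>

struct GridVertex { float x, y, z; };

// Nearest-texel sample from a square height map with texel values in [0, 1].
float SampleHeight(const std::vector<float>& heightMap,
                   std::size_t mapSize, float u, float v)
{
    std::size_t tx = static_cast<std::size_t>(u * (mapSize - 1) + 0.5f);
    std::size_t ty = static_cast<std::size_t>(v * (mapSize - 1) + 0.5f);
    return heightMap[ty * mapSize + tx];
}

// n x n flat grid in the XZ plane, displaced along Y by the height map.
std::vector<GridVertex> BuildDisplacedGrid(const std::vector<float>& heightMap,
                                           std::size_t mapSize,
                                           std::size_t n, float scale)
{
    std::vector<GridVertex> verts;
    verts.reserve(n * n);
    for (std::size_t j = 0; j < n; ++j) {
        for (std::size_t i = 0; i < n; ++i) {
            float u = static_cast<float>(i) / (n - 1);
            float v = static_cast<float>(j) / (n - 1);
            // Every vertex of the flat grid still has to be sent to the
            // card; the sampled height only moves it up or down.
            verts.push_back({ u, SampleHeight(heightMap, mapSize, u, v) * scale, v });
        }
    }
    return verts;
}
```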
 
Where does he say that he means the upcoming generation of graphics cards and not the one after that?
 
Heh well that article was written way back before Quake 3 came out. I think he was talking about the distant future, like >5 years from now.
 
nobie said:
Heh well that article was written way back before Quake 3 came out. I think he was talking about the distant future, like >5 years from now.
Given the use of curved surfaces in Quake 3, he probably thought we'd have 'em by now.
 
Re: MS Sucks (for curved surfaces at least)

Jodi said:
From memory, for doing DX9 rect patches, you cannot submit a list of patches; you have to issue a draw call for every patch, so any speed you would get out of patches is lost in draw-call overhead!

Silly MS.
It's been a very long time since I looked at this, but I thought you might have been able to send a batch of patches. The annoying thing, I recall, was that you couldn't use indexed control points.
 