Curved surfaces, why not?

MfA said:
The only right way for a tessellation unit in a GPU would be to have a pre- and post-tessellation vertex shader ... unfortunately that probably won't be the way it will be delivered :(
Wanna CELL? ;)
 
nAo said:
MfA said:
The only right way for a tessellation unit in a GPU would be to have a pre- and post-tessellation vertex shader ... unfortunately that probably won't be the way it will be delivered :(
Wanna CELL? ;)
What, are you gonna do the PPU work on the Cell chip, stream to the graphics chip, then stream back to the Cell chip, and then stream to the GPU again?
 
bloodbob said:
What, are you gonna do the PPU work on the Cell chip, stream to the graphics chip, then stream back to the Cell chip, and then stream to the GPU again?
If you have to 'touch' your vertices before AND after vertex shading, you just shade them on CELL, then send the transformed vertices to the GPU.
 
So you're gonna do texture lookups with the Cell chip too? So are we gonna load textures into both GPU and CPU memory now?
 
Or if the tessellation unit is in front of the vertex shader, which seems likely, you could do only the pre-tessellation vertex shading on Cell ...
 
Simon F said:
Approximated perhaps, but that's not as interesting :)
I must agree. :D But IMO our vision is incapable of distinguishing continuity levels (of compound surfaces) higher than C0. It can only tell whether a surface is C0 continuous or higher. And of course, if the component density is high enough, it can't even tell whether the surface is only C0 continuous. (I'd go further: it can't even tell whether the surface is continuous at all, as with point-sprite demos.) So you can approximate well with polygons too. But the C1 continuous approximation is far more effective, because (in the case of simple surfaces) with just a few components (say, less than a dozen) you can still trick human vision, which is not true for polygons. What do you think?
 
Mate Kovacs said:
Simon F said:
Approximated perhaps, but that's not as interesting :)
I must agree. :D But IMO our vision is incapable of distinguishing continuity levels (of compound surfaces) higher than C0. It can only tell whether a surface is C0 continuous or higher.

Compute the normals, do some shading and then tell me if you can't see the difference between C0, C1 and perhaps C2....
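Simon F's point is that shading makes continuity visible: the shading normal comes from the derivative, so a join that is only C0 produces a visible kink. A minimal sketch (not from the thread, purely illustrative) using two quadratic Bézier curve segments that share an endpoint but not a tangent:

```python
# Sketch: why shading exposes continuity. Two quadratic Bezier segments
# share an endpoint (C0 continuous), but their tangents at the join
# differ, so the shading normal jumps there and the eye sees a crease.

def bezier2(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t."""
    return tuple((1-t)**2 * a + 2*(1-t)*t * b + t**2 * c
                 for a, b, c in zip(p0, p1, p2))

def bezier2_tangent(p0, p1, p2, t):
    """Derivative of the quadratic Bezier at t (the tangent vector)."""
    return tuple(2*(1-t)*(b - a) + 2*t*(c - b)
                 for a, b, c in zip(p0, p1, p2))

# Segment A ends where segment B starts -> the compound curve is C0.
A = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
B = [(2.0, 0.0), (2.5, -1.0), (4.0, 0.0)]

tan_out = bezier2_tangent(*A, 1.0)   # tangent leaving segment A
tan_in  = bezier2_tangent(*B, 0.0)   # tangent entering segment B
print(tan_out, tan_in)  # different directions -> not C1: a visible kink
```

Making the join C1 would mean choosing B's second control point so that `tan_in` equals `tan_out`, i.e. placing it on the line through A's last two control points.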
 
bloodbob said:
So you're gonna do texture lookups with the Cell chip too? So are we gonna load textures into both GPU and CPU memory now?
What?! Are you going to have texture sampling in every vertex shader? :)
 
jpr27 said:
Well, I guess my point was: why do we limit ourselves to using triangles? I do understand that triangles are what the current graphics generation uses, but couldn't we evolve beyond them or implement something else at some point?

I mention per-pixel because I'm thinking in terms of connecting pixels to form your curve instead of triangles. Now forgive me for talking in the simplest terms, as like I said I am no expert (Simon F: such as putting texture filtering in my initial statement, because AA and AF are usually used together in general, which shows my limited knowledge on this subject :oops: ).

Razor1, I think, has the general idea of what I'm getting at. If it's possible to coordinate triangles in such a manner, what is the limiting factor of doing it per pixel, or with splines etc., in place of the triangle to form the curve or edge? Would the processing power needed be that much more? (Again, as a general theory, not with the normal methods we use today.)

Thanks


General theory: if the chips were made to do true curves like NURBS, we would be set by now; unfortunately M$ came along with DX and screwed all that up. Right now everything is converted to polygons, and that's what is really holding graphics back.

For example, some raytrace cards with the computational power of an Athlon 64 at 3 GHz can do the equivalent of 10 million polys in a scene at 30 fps with full lighting, bump mapping, soft shadows, etc. This is unimaginable with today's gfx cards.
 
nAo said:
bloodbob said:
So you're gonna do texture lookups with the Cell chip too? So are we gonna load textures into both GPU and CPU memory now?
What?! Are you going to have texture sampling in every vertex shader? :)

No, but okay, let me guess: when there is no texture lookup we do

CELL PPU->CELL VERTEX->CELL PPU->GPU PIXELSHADER
but when there is a texture lookup we do
CELL PPU->GPU VERTEX->CELL PPU->GPU PIXELSHADER

I'm sure that's gonna cause some batching problems.
 
Razor1 said:
For example, some raytrace cards with the computational power of an Athlon 64 at 3 GHz can do the equivalent of 10 million polys in a scene at 30 fps with full lighting, bump mapping, soft shadows, etc. This is unimaginable with today's gfx cards.
If you're thinking of the SAARCOR raytracer, which is in that ballpark performance-wise, it actually requires its input to be polygonized. It can reasonably well handle scenes with totally insane numbers of polygons (~30fps with 180+ MPolys/frame), but it's still polygons.
 
Simon F said:
Compute the normals, do some shading and then tell me if you can't see the difference between C0, C1 and perhaps C2....
Could you show me some pictures of a (shaded) surface made of n patches of 4th order and its approximation with 4n patches of 2nd order?
I'm really curious by now. :)
 
arjan de lumens said:
Razor1 said:
For example, some raytrace cards with the computational power of an Athlon 64 at 3 GHz can do the equivalent of 10 million polys in a scene at 30 fps with full lighting, bump mapping, soft shadows, etc. This is unimaginable with today's gfx cards.
If you're thinking of the SAARCOR raytracer, which is in that ballpark performance-wise, it actually requires its input to be polygonized. It can reasonably well handle scenes with totally insane numbers of polygons (~30fps with 180+ MPolys/frame), but it's still polygons.

Hmm, no, this is something different; I can't remember the name off the top of my head. They are working with Intel right now, if I remember correctly. It's a fairly new company, I would say from the past 5 to 6 years.

The artwork is a real mess though: textures are pretty much procedurally made, and that's just a pain; artists have essentially no control over the end results.

But you're right about the SAARCOR raytracer.
 
Given that no one seems able to agree on which surface types such a hardware unit should be able to tessellate, I'm hoping they'll just extend vertex shaders to be able to create and destroy vertices, as well as operate on a group of related vertices (control points, or vertices belonging to the tangent/evaluation mask of a subdivision surface).
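The "create and destroy vertices" idea above amounts to vertex amplification. A minimal sketch (illustrative only, not an actual shader API) of the simplest such operation, the 1-to-4 triangle split used by Loop-style subdivision:

```python
# Sketch of a "vertex shader that can create vertices": a stage that
# takes one triangle and emits four by inserting edge midpoints (the
# topological refinement step of Loop-style subdivision).

def midpoint(a, b):
    """Midpoint of two points given as coordinate tuples."""
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def split_triangle(v0, v1, v2):
    """1-to-4 refinement: three edge midpoints, four new triangles."""
    m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
    return [(v0, m01, m20), (m01, v1, m12), (m20, m12, v2), (m01, m12, m20)]

tris = split_triangle((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
print(len(tris))  # -> 4
```

Each refinement level quadruples the triangle count, which is exactly why fixed-function vertex shaders (one vertex in, one vertex out) can't express it.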
 
psurge said:
Given that no one seems able to agree on which surface types such a hardware unit should be able to tessellate,

But like jpr27 asked, why would you want to tessellate? Just render the surface directly.
 
Well, I have never written code that performs a ray intersection test against a higher-order surface, but I'm guessing that doing this is significantly more expensive than tessellation. Could someone confirm/refute this?

Anyway - there are other reasons. Some surfaces (subdivision surfaces) are actually defined as the limit of a tessellation process. That is, to evaluate normals, tangents, and surface points you must subdivide/tessellate. I think methods to compute these without subdivision exist for special points on the surface (depending on the subdivision scheme), but I'm not an expert.

Another benefit of tessellation is that you can apply displacement maps (of finer and finer detail) at each subdivision level.
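The "defined as the limit of a tessellation process" point can be illustrated with the simplest curve analogue of a subdivision surface, Chaikin's corner-cutting scheme (a sketch, not from the thread):

```python
# Sketch: the curve analogue of a subdivision surface. Chaikin's
# corner-cutting scheme refines a control polygon; repeating it
# converges to a smooth (quadratic B-spline) limit curve. Surface
# schemes like Catmull-Clark work the same way on meshes.

def chaikin_step(points):
    """One round of corner cutting on an open polyline."""
    refined = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        # Replace each edge with points at 1/4 and 3/4 along it.
        refined.append((0.75*x0 + 0.25*x1, 0.75*y0 + 0.25*y1))
        refined.append((0.25*x0 + 0.75*x1, 0.25*y0 + 0.75*y1))
    return refined

poly = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
for _ in range(3):   # each level roughly doubles the detail ...
    poly = chaikin_step(poly)
print(len(poly))     # ... and the corners get progressively rounder
```

The smooth curve exists only as the limit of this process, which is psurge's point: you evaluate it by subdividing.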
 
Killer-Kris said:
But like jpr27 asked, why would you want to tessellate? Just render the surface directly.
Interesting HOSs cannot be rendered directly (basically nothing beyond a quadric can be).
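nAo's "nothing beyond a quadric" remark reflects that a quadric is about the most complex surface with a closed-form ray intersection. A sketch (illustrative names, not from the thread) using the simplest quadric, a sphere:

```python
import math

# Sketch: a quadric has a closed-form ray intersection (solve a
# quadratic in t). Higher-order surfaces have no such formula and
# need iterative or subdivision-based root finding instead.

def ray_sphere(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius,
    or None if the ray misses. Direction is assumed normalized."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * q for d, q in zip(direction, oc))
    c = sum(q * q for q in oc) - radius * radius
    disc = b * b - 4.0 * c            # a == 1 for a unit direction
    if disc < 0.0:
        return None                   # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
    return t if t > 0.0 else None

print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # -> 4.0
```

A quartic surface (like a torus) already requires solving a degree-4 polynomial per ray, and general NURBS patches have no closed form at all.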
 
bloodbob said:
No, but okay, let me guess: when there is no texture lookup we do

CELL PPU->CELL VERTEX->CELL PPU->GPU PIXELSHADER
but when there is a texture lookup we do
CELL PPU->GPU VERTEX->CELL PPU->GPU PIXELSHADER

I'm sure that's gonna cause some batching problems.

In the specific case of displacement mapping you will usually want to do it at the end anyway, after tessellation ... letting the GPU handle that is not a problem.

No need to go back to Cell.
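The "displacement at the end, after tessellation" step amounts to pushing each generated vertex along its normal by a sampled height. A minimal sketch (names are illustrative, not an actual GPU API):

```python
# Sketch: displacement mapping applied after tessellation, per generated
# vertex, by offsetting each vertex along its unit normal by a height
# sampled from a displacement map.

def displace(vertices, normals, heights, scale=1.0):
    """Offset each vertex along its (unit) normal by scale * height."""
    return [tuple(v + scale * h * n for v, n in zip(vert, norm))
            for vert, norm, h in zip(vertices, normals, heights)]

verts   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
norms   = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
heights = [0.5, 0.25]                  # as if sampled from a height map
print(displace(verts, norms, heights)) # vertices pushed up by the heights
```

Since this is a purely per-vertex operation on the tessellator's output, it fits naturally on the GPU with no round trip back to Cell, which is MfA's point.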
 
He's just saying there is no analytical solution for any surface that you'd want to model.

You're pretty much stuck with subdivision or Newton-Raphson, which is anything but a stable raytracing solution.
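The Newton-Raphson approach being criticized iterates on the ray parameter t to solve f(origin + t*dir) = 0 for an implicit surface f. A sketch (illustrative, with the derivative supplied by hand) showing both convergence and the failure mode that makes it fragile:

```python
# Sketch: Newton-Raphson ray/surface intersection. Iterate on the ray
# parameter t to solve g(t) = f(origin + t*dir) = 0. Convergence depends
# heavily on the starting guess, which is why it is considered unstable
# as a general raytracing method without bracketing or safeguards.

def newton_ray_hit(f, df_dt, t0, iters=20, eps=1e-9):
    """Newton iteration on g(t) = f(ray(t)); df_dt is g'(t)."""
    t = t0
    for _ in range(iters):
        g, dg = f(t), df_dt(t)
        if abs(dg) < eps:
            return None          # flat spot: the iteration breaks down
        t_next = t - g / dg
        if abs(t_next - t) < eps:
            return t_next        # converged
        t = t_next
    return None                  # no convergence: the instability in question

# Unit sphere at origin, ray from (0,0,-5) along +z: g(t) = (t-5)^2 - 1.
g  = lambda t: (t - 5.0)**2 - 1.0
dg = lambda t: 2.0 * (t - 5.0)
print(newton_ray_hit(g, dg, 0.0))   # converges to the near root t = 4
print(newton_ray_hit(g, dg, 5.0))   # starts on a flat spot -> None
```

For higher-order surfaces g(t) has many roots and flat spots, so a bad t0 converges to the wrong root or not at all, matching the post's complaint.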
 