MfA said:
> The only right way for a tessellation unit in a GPU would be to have a pre- and post-tessellation vertex shader... unfortunately that probably won't be the way it will be delivered.

Wanna CELL?
nAo said:
> MfA said:
> > The only right way for a tessellation unit in a GPU would be to have a pre- and post-tessellation vertex shader... unfortunately that probably won't be the way it will be delivered.
> Wanna CELL?

What are you going to do, PPU on the Cell chip streams to the graphics chip, then streams back to the Cell chip, and then streams to the GPU again?
bloodbob said:
> What are you going to do, PPU on the Cell chip streams to the graphics chip, then streams back to the Cell chip, and then streams to the GPU again?

If you have to 'touch' your vertices before AND after vertex shading, you just shade them on CELL, then send the transformed vertices to the GPU.
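The pre/post-tessellation shading split MfA and nAo are talking about can be sketched in a few lines. This is a toy illustration, not any real GPU or CELL API: all function names are hypothetical, the "patch" is a single quadratic Bezier segment, and the shaders are trivial stand-ins.

```python
# Sketch of the pre/post-tessellation shading idea discussed above.
# All names are hypothetical illustrations, not a real API.

def lerp(p, q, t):
    return tuple(pi + (qi - pi) * t for pi, qi in zip(p, q))

def pre_tess_shader(control_point):
    # Runs once per control point, BEFORE tessellation:
    # e.g. skinning/animating the coarse control mesh.
    x, y, z = control_point
    return (x, y + 0.1 * x, z)          # toy "animation"

def tessellate(patch, steps):
    # Uniformly subdivide a quadratic Bezier segment into steps+1 vertices.
    p0, p1, p2 = patch
    verts = []
    for i in range(steps + 1):
        t = i / steps
        # de Casteljau evaluation of a quadratic Bezier
        a = lerp(p0, p1, t)
        b = lerp(p1, p2, t)
        verts.append(lerp(a, b, t))
    return verts

def post_tess_shader(vertex):
    # Runs once per generated vertex, AFTER tessellation:
    # e.g. the usual model-view-projection transform (identity here).
    return vertex

patch = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (2.0, 0.0, 0.0)]
shaded_patch = [pre_tess_shader(p) for p in patch]
vertices = [post_tess_shader(v) for v in tessellate(shaded_patch, 4)]
print(len(vertices))  # 5 vertices generated from 3 control points
```

The point of the split is data amplification: the pre-tessellation stage touches only the few control points, while the post-tessellation stage touches every generated vertex.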
Simon F said:
> Approximated perhaps, but that's not as interesting.

I must agree. But IMO our vision is incapable of making a distinction between continuity levels (of compound surfaces) higher than C0. It can only tell whether a surface is C0 continuous or higher. And of course, if the component density is high enough, it can't even tell whether the surface is only C0 continuous. (I'd go further: it can't even tell whether the surface is continuous at all, as with point-sprite demos.) So you can approximate well with polygons too. But a C1-continuous approximation is far more effective, because (in the case of simple surfaces) you can still trick human vision with just a few components (say, less than a dozen), which is not true for polygons. What do you think?
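The continuity classes being debated here can be made concrete with curves: two segments join with C0 continuity if their endpoints coincide, and with C1 continuity if their first derivatives also match at the join. A minimal sketch with two quadratic Bezier segments (purely illustrative, not from the thread):

```python
# Toy check of C0/C1 continuity at the join of two quadratic Bezier segments.
# C0 = positions match at the join; C1 = first derivatives also match.

def bezier2(p0, p1, p2, t):
    # Point on a quadratic Bezier.
    u = 1.0 - t
    return tuple(u*u*a + 2*u*t*b + t*t*c for a, b, c in zip(p0, p1, p2))

def bezier2_deriv(p0, p1, p2, t):
    # Derivative: B'(t) = 2[(1-t)(p1-p0) + t(p2-p1)]
    u = 1.0 - t
    return tuple(2*(u*(b - a) + t*(c - b)) for a, b, c in zip(p0, p1, p2))

s1 = ((0.0, 0.0), (1.0, 1.0), (2.0, 0.0))
# Placing the second segment's middle control point on the line through
# s1's last two control points (with the same offset) makes the join C1.
s2 = ((2.0, 0.0), (3.0, -1.0), (4.0, 0.0))

c0 = bezier2(*s1, 1.0) == bezier2(*s2, 0.0)                      # endpoints coincide
c1 = c0 and bezier2_deriv(*s1, 1.0) == bezier2_deriv(*s2, 0.0)   # tangents match
print(c0, c1)
```

Mate Kovacs's claim is then that the eye reliably spots a C0-only join (a visible crease in the shading), but cannot distinguish a C1 join from a C2 one.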
Mate Kovacs said:
> Simon F said:
> > Approximated perhaps, but that's not as interesting.
> I must agree. But IMO our vision is incapable of making a distinction between continuity levels (of compound surfaces) higher than C0. It can only tell whether a surface is C0 continuous or higher.
bloodbob said:
> So you're going to do texture lookups with the Cell chip too? So are we going to load textures into both GPU and CPU memory now?

What?! Are you going to have texture sampling in every vertex shader?
jpr27 said:
> Well, I guess my point was: why do we limit ourselves to using triangles? Now, I do understand that that is what we are using in our current graphics generation, but I'm asking whether we could evolve or implement something else at some point.
I mention per-pixel because I'm thinking in terms of connecting pixels to form your curve instead of triangles. Now forgive me for talking in the simplest terms, as, like I said, I am no expert (Simon F: that's why I put texture filtering in my initial statement; AA and AF are usually used together in general, which shows my limited knowledge of this subject).
Razor1, I think, has the general idea of what I'm getting at. If it's possible to coordinate triangles in such a manner, what is the limiting factor of doing it per pixel, or with splines etc., in place of the triangle, to form the curve or edge? Would the processing power needed be that much more? (Again, as a general theory, not with the normal methods we use today.)
Thanks
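jpr27's "connect pixels to form the curve" idea can be sketched naively: describe the curve implicitly and test every pixel in its bounding box for a sign change. This is not how any real hardware works; it's a toy illustration of why per-pixel curve evaluation costs work proportional to the tested area, whereas tessellated primitives only touch pixels near the curve.

```python
# Naive per-pixel curve rasterization: evaluate an implicit curve equation at
# pixel corners and mark pixels where the sign changes. Toy example: the
# parabola y = x^2 / 16 on a small grid. (Illustrative only.)

W, H = 32, 16

def f(x, y):
    # Implicit form: f(x, y) = y - x^2/16 ; zero exactly on the curve.
    return y - x * x / 16.0

pixels = set()
for y in range(H):
    for x in range(W):
        # The curve passes through a pixel iff f changes sign across it.
        corners = [f(x, y), f(x + 1, y), f(x, y + 1), f(x + 1, y + 1)]
        if min(corners) <= 0.0 <= max(corners):
            pixels.add((x, y))

print(len(pixels), "pixels touched;", W * H, "pixels tested")
```

The gap between "pixels touched" and "pixels tested" is the brute-force overhead; triangle setup plus edge walking avoids it, which is one answer to the "would the processing power needed be that much more?" question.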
nAo said:
> bloodbob said:
> > So you're going to do texture lookups with the Cell chip too? So are we going to load textures into both GPU and CPU memory now?
> What?! Are you going to have texture sampling in every vertex shader?
Razor1 said:
> For example, some raytracing cards with the computational power of a 3 GHz Athlon 64 can do the equivalent of 10 million polys in a scene at 30 fps with full lighting, bump mapping, soft shadows etc. This is unimaginable with today's gfx cards.

If you're thinking of the SAARCOR raytracer, which is in that ballpark performance-wise, it actually requires its input to be polygonized. It can handle scenes with totally insane numbers of polygons reasonably well (~30 fps with 180+ Mpolys/frame), but it's still polygons.
Simon F said:
> Compute the normals, do some shading, and then tell me if you can't see the difference between C0, C1 and perhaps C2...

Could you show me some pictures of a (shaded) surface made of n patches of 4th order and its approximation with 4n patches of 2nd order?
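A numeric stand-in for the comparison Mate Kovacs is asking about: approximate one quartic (4th-order) Bezier with 4 quadratic (2nd-order) pieces and measure the worst-case gap. This uses scalar curves rather than surface patches and a simple interpolation fit, so it's only a rough illustration of how close 4n quadratics can get to n quartics.

```python
# Approximate a quartic Bezier with 4 quadratic pieces, each interpolating the
# quartic at its endpoints and midpoint, then measure the maximum deviation.

from math import comb

def bezier(ctrl, t):
    # Evaluate a Bezier curve with scalar control points via the Bernstein form.
    n = len(ctrl) - 1
    return sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * p
               for i, p in enumerate(ctrl))

# One quartic: five scalar control points (a wiggly test case).
quartic = [0.0, 4.0, -4.0, 4.0, 0.0]

def quad_piece(t0, t1):
    # Quadratic Bezier matching the quartic at t0, (t0+t1)/2 and t1.
    a, m, b = (bezier(quartic, t) for t in (t0, (t0 + t1) / 2, t1))
    # From B(1/2) = (p0 + 2*p1 + p2)/4, the middle control point is:
    return [a, 2 * m - (a + b) / 2, b]

pieces = [quad_piece(k / 4, (k + 1) / 4) for k in range(4)]

# Maximum deviation between the quartic and its piecewise-quadratic stand-in,
# sampled densely on each quarter interval.
err = max(abs(bezier(quartic, (k + s / 100) / 4) - bezier(pieces[k], s / 100))
          for k in range(4) for s in range(101))
print("max deviation:", err)
```

The positional error is small, but note it says nothing about the shading question: the pieces here only meet with C0 continuity, so normals can still show creases at the joins, which is exactly Simon F's point.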
arjan de lumens said:
> Razor1 said:
> > For example, some raytracing cards with the computational power of a 3 GHz Athlon 64 can do the equivalent of 10 million polys in a scene at 30 fps with full lighting, bump mapping, soft shadows etc. This is unimaginable with today's gfx cards.
> If you're thinking of the SAARCOR raytracer, which is in that ballpark performance-wise, it actually requires its input to be polygonized. It can handle scenes with totally insane numbers of polygons reasonably well (~30 fps with 180+ Mpolys/frame), but it's still polygons.
psurge said:
> Given that no-one seems to be able to agree on the surface types such a hardware unit should be able to tessellate,
Killer-Kris said:
> But like jpr27 asked, why would you want to tessellate? Just render the surface directly.

Interesting HOSs cannot be rendered directly (basically nothing better than a quadric can be).
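One way to see why a quadric is roughly the limit for direct rendering: intersecting a ray with a quadric reduces to one closed-form quadratic equation per ray, while higher-order surfaces need iterative root-finding. A minimal sketch with a sphere (my example, not from the thread):

```python
# Why a quadric is easy to render "directly": a ray/sphere intersection is a
# closed-form quadratic. Substituting the ray o + t*d into |p|^2 = r^2 gives
# a*t^2 + b*t + c = 0. Higher-order surfaces have no such closed form.

from math import sqrt

def ray_sphere(origin, direction, radius):
    # Nearest positive hit distance along the ray, or None on a miss.
    # Sphere is centred at the origin for simplicity.
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(origin, direction))
    c = sum(o * o for o in origin) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the quadric
    t = (-b - sqrt(disc)) / (2.0 * a)    # nearer root
    return t if t > 0.0 else None

# Ray from z = -5 aimed straight at a unit sphere on the origin:
print(ray_sphere((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), 1.0))  # hits at t = 4.0
```

For a quartic surface the analogous substitution yields a degree-8 polynomial in t, which is why "interesting" HOSs get tessellated instead of intersected analytically.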
bloodbob said:
> No, but okay, let me guess: when there is no texture lookup we do
>
> CELL PPU -> CELL VERTEX -> CELL PPU -> GPU PIXEL SHADER
>
> but when there is a texture lookup we do
>
> CELL PPU -> GPU VERTEX -> CELL PPU -> GPU PIXEL SHADER
>
> I'm sure that's going to cause some batching problems.
MfA said:
> Interesting HOSs cannot be rendered directly (basically nothing better than a quadric).

You're the genius, I'm nowhere near... what would qualify as "interesting" HOSs?