Tessellation

No one is doing that. They are just being realistic. You're not going to see movie-quality CGI generated on a desktop in real time in your lifetime. Cranking up the tessellation factor isn't going to make it happen sooner; it just means you're making other sacrifices.

I'd be very sad if I die before we reach present movie-quality CG being rendered in real time. Of course it's always a moving target, and until you hit the actual point of diminishing returns you'll never get movie-quality CGI out of a desktop, but being that I'm only 23, I expect to see some pretty cool stuff in the next 50-odd years. :D
 
Hmm, is this some sort of tessellation for the vehicles in GT5 between standard and premium?
pic

Standard cars are GT4 cars with up to 5,000 polygons (AFAIK the highest LOD for cars in GT4's photo mode/menus). The premium cars are modelled with, according to the devs, 200-400k polygons at the highest LOD.
 
I'd be very sad if I die before we reach present movie-quality CG being rendered in real time. Of course it's always a moving target, and until you hit the actual point of diminishing returns you'll never get movie-quality CGI out of a desktop, but being that I'm only 23, I expect to see some pretty cool stuff in the next 50-odd years. :D

The requirement is desktop computers being several billion times as powerful as they are now to reach current CGI renders in real time. I suppose that's a possibility within your lifetime.

Single frames of Avatar often took several hours to render on a 40,000-core server farm. Rendering 30 frames per second on a single core (you can scale to what you think your system would be from there) would need it to be more than 4 billion times as fast. And that doesn't account for having a player interact with the frame.
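
For a rough back-of-the-envelope version of that number (a sketch only, taking one hour per frame as the low end of "several hours" and the 40,000-core figure at face value):

```python
# Back-of-the-envelope lower bound: how much faster than one render-farm core
# a single core would need to be to hit 30 fps on the same frames.
cores = 40_000                 # reported render-farm size
seconds_per_frame = 3_600      # assume ~1 hour of wall-clock time per frame
target_fps = 30

core_seconds_per_frame = cores * seconds_per_frame
speedup_needed = core_seconds_per_frame * target_fps
print(f"single core would need to be ~{speedup_needed:,}x faster")  # ~4,320,000,000x
```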
 
The requirement is desktop computers being several billion times as powerful as they are now to reach current CGI renders in real time. I suppose that's a possibility within your lifetime.

Single frames of Avatar often took several hours to render on a 40,000-core server farm. Rendering 30 frames per second on a single core (you can scale to what you think your system would be from there) would need it to be more than 4 billion times as fast. And that doesn't account for having a player interact with the frame.

Maybe for today's stuff. But watching the original Toy Story, without the redone textures of the newest re-release, I have to say we are catching up to the early '90s rather quickly. Looks like we will only be two decades behind!
 
The requirement is desktop computers being several billion times as powerful as they are now to reach current CGI renders in real time. I suppose that's a possibility within your lifetime.

Single frames of Avatar often took several hours to render on a 40,000-core server farm. Rendering 30 frames per second on a single core (you can scale to what you think your system would be from there) would need it to be more than 4 billion times as fast. And that doesn't account for having a player interact with the frame.
I doubt a single frame was distributed among all 40,000 of those cores.
 
I'm not trying to say games should use the same techniques as film, but I do think we shouldn't be so eager to restrict tessellation to making big triangles.

More broadly, on a technology forum, why are so many people trying so hard to defend the status quo? Why are we so eager to ensure future hardware is limited in the same way as today's hardware?

Presumably, everyone here wants richer detail in games. Tessellation, and small triangles generally speaking, are obviously the future, but they can't be shoehorned into a traditional GPU hardware pipeline, because tiny triangles overload the hi-z, clipping, perspective correction, backface culling and pixel shader hardware, which is carefully balanced for large triangles. There is also the issue that micropolygons would require "vertex quads" much as the pixel shader requires "pixel quads," and this in turn requires models to have a consistent UV parameterization. These changes would bubble up the graphics pipeline and out into everyone's game engine and tools pipelines.
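
To make the quad issue concrete, here's a tiny sketch (my own illustration, not any particular GPU's implementation) of why shading runs on 2x2 quads at all: the screen-space derivatives used for texture LOD are finite differences between quad neighbours, and shaded micropolygon vertices would need the same kind of neighbours, hence "vertex quads" and the consistent UV parameterization.

```python
# Sketch of derivative computation over a 2x2 pixel quad (illustrative only).
def quad_derivatives(uv):
    """uv[y][x] is the texture coordinate (u, v) of pixel (x, y) in a 2x2 quad."""
    ddx = (uv[0][1][0] - uv[0][0][0], uv[0][1][1] - uv[0][0][1])  # horizontal difference
    ddy = (uv[1][0][0] - uv[0][0][0], uv[1][0][1] - uv[0][0][1])  # vertical difference
    return ddx, ddy

quad = [[(0.10, 0.20), (0.11, 0.20)],
        [(0.10, 0.22), (0.11, 0.22)]]
print(quad_derivatives(quad))  # roughly ((0.01, 0.0), (0.0, 0.02))
```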

There is no way to solve these problems, other than to remove the unnecessary hardware units from the GPU pipeline. And eventually they must go - but not yet, because it would make all existing 3D applications very slow and benefit only as-yet unwritten applications.

The eventual switch from vertex/pixel shading to vertex-only shading of micropolygons (like how movies have always done it) is probably the single most disruptive change to the GPU pipeline in decades, and it will take a long time to make it into mainstream products.

It's even possible that by the time micropolygons make it in there, there won't be a hardware pipeline left to change, because by that time an architecture like Larrabee may already have moved everything into software. In this case, the switchover to micropolygons may be even more piecemeal, since it would no longer ride on the product release cycle of a GPU hardware manufacturer or indeed Microsoft.
 
I'm not trying to say games should use the same techniques as film, but I do think we shouldn't be so eager to restrict tessellation to making big triangles.
Clearly, moving towards smaller polys sounds like a great idea - it's just that if you make them really small, the performance hit might not be worth it. Maybe, at the same performance, it would look better to make your polys a bit larger but use some more advanced pixel shaders instead?
 
If the polygons are subpixel, can't you just discard them?
If they do not contribute to the final image, yes.
Subpixel polygon culling is included in the EDGE tools; it checks the sample positions for each polygon and discards it if there are no hits. (If I remember correctly...)
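
A minimal sketch of that kind of sample test (my own illustration of the idea; I don't know the actual EDGE code):

```python
# Cull a triangle if none of the sample positions in its screen-space
# bounding box land inside it (illustration only).
def edge_fn(a, b, p):
    # Signed area test: which side of edge a->b the point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers_any_sample(tri, samples):
    a, b, c = tri
    for p in samples:
        w0, w1, w2 = edge_fn(b, c, p), edge_fn(c, a, p), edge_fn(a, b, p)
        # Inside the triangle if all edge tests agree in sign (either winding).
        if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
            return True
    return False

# A sub-pixel triangle sitting between two pixel-centre samples gets culled.
tri = [(10.2, 10.2), (10.4, 10.2), (10.3, 10.4)]
print(covers_any_sample(tri, [(10.5, 10.5), (11.5, 10.5)]))  # False -> discard
```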

Currently, one of the problems I continuously see in tessellation benchmarks is undersampling and aliasing of displacement maps.
The problem is that even with very small polygons the surface is evaluated too coarsely.
This is easily seen in Stone Giant, as surfaces deform into shape while the camera moves toward them.

View- and slope-dependent tessellation with quite high tessellation factors would most likely fix this.
The idea is not to decrease the average polygon size across the board, but to use small polygons in the areas where they really matter (see the sketch below).
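
Something like this per-edge factor is what I mean; the constants and the gradient input are made-up tuning knobs, not from Stone Giant or any real engine:

```python
# Hedged sketch of a tessellation factor driven by both view (projected edge
# length in pixels) and slope (local displacement-map gradient).
def tess_factor(edge_len_px, disp_gradient,
                px_per_tri=8.0, slope_boost=4.0, max_factor=64.0):
    view_term = edge_len_px / px_per_tri                   # longer on-screen edges -> more triangles
    slope_term = 1.0 + slope_boost * abs(disp_gradient)    # steep height map -> more triangles
    return max(1.0, min(max_factor, view_term * slope_term))

print(tess_factor(edge_len_px=40.0, disp_gradient=0.0))    # flat area: 5.0
print(tess_factor(edge_len_px=40.0, disp_gradient=0.75))   # spiky area: 20.0
```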
 
Also, if you tessellate a complicated texture (with bigger height differences, like spikes) from a flat low-polygon object, it's quite hard to control the whole thing. (It grows everywhere. :LOL:)
In the end, maybe it would be easier to model a high-polygon model with all the basic surface detail modelled, and add a normal map for finer details up close (or use tessellation just for fine details up close).

It would be interesting to model the stone giant in the Stone Giant demo manually from lots of polygons (something like 500k, using more polygons only where they're needed) and compare how it would run.
 
Actually, don't movies use tessellation just to get sub-pixel shading, unlike games? :?:
So they already use really high-geometry models, which are then tessellated at render time to extreme levels but without any height map (they are just subdivided). Then they render at extreme resolutions with supersampling, where even the smallest sub-pixel polygons contribute to the final, super-fine shading.
 
You are correct that movies start with highly detailed models, though they do sometimes use displacement maps.
 
Also, if you tessellate a complicated texture (with bigger height differences, like spikes) from a flat low-polygon object, it's quite hard to control the whole thing. (It grows everywhere. :LOL:)
A further issue might be: how do you make tessellated geometry interact with the player/game world?

Collision detection would vary depending on the tessellation factor - or can you even do collision detection against algorithmically generated geometry? This could be a big issue for action titles. If, as you say, you have spikes or other protrusions generated from flat polys and a player has to run across them, how would they interact with that player versus a player who doesn't have a tessellation-capable card? Could tessellated bits sticking out help shield a player against things like gunfire, for example, giving him an added advantage over other players, or would anything tessellated basically be transparent, giving a false sense of security?

This is stuff worth discussing IMO...
 
A further issue might be: how do you make tessellated geometry interact with the player/game world?

Collision detection would vary depending on the tessellation factor - or can you even do collision detection against algorithmically generated geometry? This could be a big issue for action titles. If, as you say, you have spikes or other protrusions generated from flat polys and a player has to run across them, how would they interact with that player versus a player who doesn't have a tessellation-capable card? Could tessellated bits sticking out help shield a player against things like gunfire, for example, giving him an added advantage over other players, or would anything tessellated basically be transparent, giving a false sense of security?

This is stuff worth discussing IMO...
Graphics should have nothing to do with gameplay calculations, ever.
You would be surprised how many games consider the player to be a cylinder or an ellipsoid moving on simplified geometry.

In terms of tessellation and hit detection, you do the calculations on the CPU, not the GPU.
You can do it in whatever way you want (ray-casting, ray-marching, or some ridiculously stupid way ;)).
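
For what it's worth, here is a minimal sketch of the ray-marching option against a height field, with a hypothetical height_at() standing in for whatever displacement sampler the engine actually uses, so hits don't depend on the player's tessellation settings:

```python
# CPU-side hit detection by ray-marching the same height map the tessellator
# would displace with (illustrative sketch only).
def height_at(x, z):
    # Hypothetical displacement lookup; a real engine would sample the
    # displacement texture with proper filtering.
    return 0.5 if 2.0 < x < 3.0 else 0.0   # a one-unit-wide "spike" for the demo

def ray_march_hit(origin, direction, max_dist=100.0, step=0.05):
    """Walk along the ray and report the first point below the displaced surface."""
    t = 0.0
    while t < max_dist:
        x = origin[0] + direction[0] * t
        y = origin[1] + direction[1] * t
        z = origin[2] + direction[2] * t
        if y <= height_at(x, z):
            return t        # hit distance along the ray
        t += step
    return None             # no hit within max_dist

# A bullet flying flat at height 0.3 hits the spike; at height 0.8 it doesn't.
print(ray_march_hit((0.0, 0.3, 0.0), (1.0, 0.0, 0.0)))  # ~2.05
print(ray_march_hit((0.0, 0.8, 0.0), (1.0, 0.0, 0.0)))  # None
```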
 
because tiny triangles overload the hi-z
Yes.
clipping, perspective correction, backface culling and pixel shader hardware, which is carefully balanced for large triangles.
Clipping is an exception; perspective correction is handled on a per-pixel basis, backface culling is trivial, and pixel shaders are already fed agglomerated pixels from multiple triangles. All this is trivial to solve.
There is also the issue that micropolygons would require "vertex quads" much as the pixel shader requires "pixel quads,"
Geometry shaders ...

PS. I think NVIDIA's way of solving the Hi-Z problem is the most reasonable one ... rasterize triangles without Hi-Z into a buffer of tiles (with some complicated assignment rules to maintain rendering order). At the output, check Hi-Z for the entire tile and throw the tile to the pixel shader if it is potentially visible.
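
A toy sketch of that per-tile check, just to illustrate the idea (definitely not how the hardware actually does it):

```python
# Triangles are binned into a tile without per-triangle Hi-Z tests; the Hi-Z
# check happens once, when the tile is flushed toward the pixel shader.
def flush_tile(tile_fragments, hiz_max_depth):
    """tile_fragments: list of (x, y, depth) produced while rasterizing into the tile."""
    if not tile_fragments:
        return []
    tile_min_depth = min(depth for _, _, depth in tile_fragments)
    if tile_min_depth > hiz_max_depth:
        return []                 # whole tile occluded: nothing reaches the pixel shader
    return tile_fragments         # potentially visible: shade the tile

print(len(flush_tile([(0, 0, 0.9), (1, 0, 0.95)], hiz_max_depth=0.5)))  # 0 (culled)
print(len(flush_tile([(0, 0, 0.3), (1, 0, 0.95)], hiz_max_depth=0.5)))  # 2 (shaded)
```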
 
Yes. All this is trivial to solve.

I didn't mention vertex shader output parameter memory, which is a real problem even without micropolygons. If a micropolygon vertex has 16 output parameters, that's ~256 bytes per pixel (16 four-component float parameters at 16 bytes each) that has to stick around from the end of the vertex shader to the middle of the pixel shader.

Geometry shaders ...

Your sentence no verb. What did you mean to say here?
 
Your sentence no verb. What did you mean to say here?
I meant that geometry shaders could compute difference values, at least for inputs ... not for intermediates, but meh ... using shifted differences for differentials is a filthy hack anyway. I guess I simply don't agree that quads are necessary at the vertex level.

As for the storage issues, with micropolygons you would want to reduce the latency between dicing and rasterization ... but the only reason the latency is big now is that it's efficient now: more storage versus less task switching.
 