Tessellation

Would this be of any help? It shows the current situation, anyway.

The only way I can make sense of numbers like that is if the benchmark is purely bandwidth-limited, since all other differences between the 6850 and 6870 are much greater than the mere 1 or 2% difference exhibited by this benchmark.
 
Is your English broken? Froblins is a DX10.1 demo. An unimpressive one at that. Really, if that's the best you can come up with, you're essentially making my point for me :) Please don't turn this into a defend-AMD-at-all-costs campaign. It's clear for everyone to see that tessellation has not been brought to market effectively.

That post was not directed at me, but I would like to know why the AMD Froblins demo is a bad showcase of the tessellator on AMD hardware.

It is not DX11, but it uses the "old" tessellator hardware that has existed since the HD 2000 series to tessellate the terrain and the frogs, and IMO it seems to do very well what it was meant to do: showcase the tessellator hardware on the Radeon cards.
 
The Froblins demo was actually a rare example of good tessellation out there. Not another gimmicky, over-tessellated demo of the kind that comes out every month these days.

The tessellation only kicked in when the froblins were quite close, and it greatly increased the detail while keeping the stored geometry compact.
It could be used in many games to add detail to objects that are usually quite small on screen and are only seen up close when you zoom in on them. Something like a fine-detail LOD.
That's the worst thing in today's games: you come close to some object that looked really good from a distance and can only wonder how ugly it looks up close.
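Something like that fine-detail LOD can be sketched as a simple distance-based tessellation factor. This is only an illustration in C++; the function name, parameters, and clamp range are all made up, not taken from the Froblins demo:

```cpp
// Illustrative distance-based tessellation factor: full detail up close,
// falling back to no subdivision as the object recedes. All names and
// ranges here are hypothetical, not from the actual demo.
#include <algorithm>

float DistanceBasedTessFactor(float distanceToCamera,
                              float nearDistance,  // full detail at or inside this distance
                              float farDistance,   // no extra detail beyond this distance
                              float maxFactor)     // e.g. 16.0f
{
    // 0 at nearDistance, 1 at farDistance
    float t = (distanceToCamera - nearDistance) / (farDistance - nearDistance);
    t = std::clamp(t, 0.0f, 1.0f);
    // Blend from maxFactor (close) down to 1.0 (far, i.e. no subdivision).
    return maxFactor + t * (1.0f - maxFactor);
}
```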
 
All the talk about Good uses of tessellation and Bad uses of tessellation seems a little strange to me. Micropolygons (i.e., sub-pixel-sized polygons) are widely used in film rendering. So why do we not want them to be used in games?

For interactive 3D fidelity to approach film quality, I think we need to embrace tessellation and use it everywhere, not put it in a sandbox of tricks to be used sparingly and only by programming wizards.
 
All the talk about Good uses of tessellation and Bad uses of tessellation seems a little strange to me. Micropolygons (i.e., sub-pixel-sized polygons) are widely used in film rendering. So why do we not want them to be used in games?

For interactive 3D fidelity to approach film quality, I think we need to embrace tessellation and use it everywhere, not put it in a sandbox of tricks to be used sparingly and only by programming wizards.
Sounds good in theory, but the hardware just isn't built to handle that efficiently, even if it could push infinite amounts of geometry without a performance loss. That goes for both Nvidia and AMD (and others): as long as the hardware processes pixel quads, you will always take a very steep performance hit if you go to sub-pixel-sized polygons (actually the limit is quite a bit higher than sub-pixel size).
So using overly aggressive levels of tessellation is not an efficient use of hardware resources.
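To put a rough number on the quad point, here's a toy model. It is entirely my own simplification: it assumes every 2x2 quad a triangle touches gets shaded in full and ignores everything else the hardware does, so the figures are illustrative only.

```cpp
// Simplified model: every 2x2 pixel quad a triangle touches is shaded in
// full, so the smaller the triangle, the more shading work is wasted on
// pixels it never actually covers. Numbers are illustrative only.
#include <cstdio>

int main()
{
    struct Case { const char* label; double coveredPixels; double quadsTouched; };
    const Case cases[] = {
        {"~16 px triangle", 16.0, 4.0},  // roughly fills the quads it touches
        {"1 px triangle",    1.0, 1.0},  // still shades a whole 2x2 quad
        {"sub-pixel tri",    0.5, 1.0},  // 8x the useful work
    };

    for (const Case& c : cases) {
        const double shadedPixels = c.quadsTouched * 4.0;  // 4 pixels per quad
        std::printf("%-16s covers %4.1f px, shades %4.1f px -> %.1fx overshading\n",
                    c.label, c.coveredPixels, shadedPixels,
                    shadedPixels / c.coveredPixels);
    }
    return 0;
}
```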
 
Real-time 3D is all about shortcuts to make it look 'good enough' versus offline rendering.

In offline rendering you have Top500-scale render farms (e.g. the five render farms that Weta Digital used for Avatar were roughly the equal of #140 on the Top500 at the time, and of about #463 currently) and pretty much as long as you need for it to render (hours per frame).
In real-time rendering you want frames to render every 30 ms or better, across a wide variety of hardware and resolutions, so of course you need to take shortcuts.
 
Sounds good in theory, but the hardware just isn't built to handle that efficiently, even if it could push infinite amounts of geometry without a performance loss. That goes for both Nvidia and AMD (and others): as long as the hardware processes pixel quads, you will always take a very steep performance hit if you go to sub-pixel-sized polygons (actually the limit is quite a bit higher than sub-pixel size).
So using overly aggressive levels of tessellation is not an efficient use of hardware resources.

"overly aggressive" is relative. Presumably Pixar is tessellating to micropolygons because it makes sense for them - giving them a good tradeoff between artistic expression, computational load, and visual quality. I want to see hardware evolve and become more capable, to get interactive graphics more like film graphics. All this talk of making sure triangles are always a 16 pixels large in screen space seems very reactionary to me. Why not embrace the challenge of making hardware that efficiently renders micropolygons, since we know they're important to high-quality rendering? Tessellation is a step in that direction.

If we decide that tessellation should only be used to make big polygons, we'd have to proclaim that Pixar et al. are misusing tessellation in every computer rendering they make. Seems weird to me.
 
Real-time 3D is all about shortcuts to make it look 'good enough' versus offline rendering.

In offline rendering you have Top500-scale render farms (e.g. the five render farms that Weta Digital used for Avatar were roughly the equal of #140 on the Top500 at the time, and of about #463 currently) and pretty much as long as you need for it to render.
In real-time rendering you want frames to render every 30 ms or better, across a wide variety of hardware and resolutions, so of course you need to take shortcuts.

Not really. Films have only a limited time and money budget for rendering. See, for example, the Wikipedia entry on Reyes rendering: http://en.wikipedia.org/wiki/Reyes_rendering

Of course the tradeoffs are going to be different when you have a render farm for 3 minutes to render a frame, versus when you have a single GPU for 30 ms, but still, any technology that makes it more feasible to use film-quality techniques interactively seems like a good thing to me.
 
That is all decided at the budget/timeline-setting stage, though.

If you're going to be doing more impressive CG work, you set a bigger budget and get bigger render farms, or allow more time for it to render.
If the budget or timeline is restricted, you scale back the CG.
 
If we decide that tessellation should only be used to make big polygons, we'd have to proclaim that Pixar et al. are misusing tessellation in every computer rendering they make. Seems weird to me.
There's more to REYES than just micropolygon tessellation, though that's its most well-known aspect. Shading is done differently. Recommending against sub-pixel tessellation for games isn't saying Pixar is misusing tessellation, because the goals and target hardware are different.

In order to get the best performance on today's graphics hardware it's best to have triangles covering a full pixel quad or larger. For AMD, 16-pixel triangles provide the best balance with the rasterization hardware, so that's what's being recommended. Also, MSAA loses some of its benefit with micropolygons.
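For a sense of scale, here's my own back-of-the-envelope count (ignoring overdraw, partially covered triangles, and anything off screen) of how many ~16-pixel triangles it takes to cover a 1080p frame:

```cpp
// Rough ballpark of how many ~16-pixel triangles cover a 1080p frame.
// Ignores overdraw, off-screen geometry, and partially covered triangles.
#include <cstdio>

int main()
{
    const double framePixels  = 1920.0 * 1080.0;  // ~2.07M pixels
    const double pixelsPerTri = 16.0;              // recommended triangle size
    std::printf("~%.0fk visible triangles per frame\n",
                framePixels / pixelsPerTri / 1000.0);  // ~130k
    return 0;
}
```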

If developers want micropolygons they should push hardware vendors in this direction, but if they want the best performance today they should avoid them. The two goals aren't mutually exclusive either.

I always wondered why games never pushed prim rates prior to DX11 tessellation and now I wonder if developers really want tiny triangles or if they're only pushing for it in partnership with IHVs. I'm sure some developers would rather see resources spent on more generalization to facilitate ray tracing and other algorithms.
 
I always wondered why games never pushed prim rates prior to DX11 tessellation and now I wonder if developers really want tiny triangles or if they're only pushing for it in partnership with IHVs. I'm sure some developers would rather see resources spent on more generalization to facilitate ray tracing and other algorithms.

Me too. Why can't your main characters have something like 30k polygons up close? Or why can't we spare at least 200 polygons on objects that look like octagons? :rolleyes: Especially with even low-end Radeons pushing 500 million triangles/s.
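Running the numbers on that figure (raw setup rate only, so treat it as an upper bound; real scenes hit vertex shading, bandwidth, and culling limits first):

```cpp
// Per-frame triangle budget implied by the 500 Mtri/s figure quoted above,
// assuming a 60 fps target. Raw setup rate only; real scenes are limited
// by other factors long before this.
#include <cstdio>

int main()
{
    const double trisPerSecond = 500e6;  // "low-end Radeons pushing 500 million triangles/s"
    const double targetFps     = 60.0;
    const double trisPerFrame  = trisPerSecond / targetFps;  // ~8.3M

    std::printf("budget: ~%.1fM triangles per frame\n", trisPerFrame / 1e6);
    std::printf("that's room for ~%.0f characters at 30k polygons each\n",
                trisPerFrame / 30e3);  // ~278
    return 0;
}
```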
 
Not really. Films have only a limited time and money budget for rendering. See, for example, the Wikipedia entry on Reyes rendering: http://en.wikipedia.org/wiki/Reyes_rendering

Of course the tradeoffs are going to be different when you have a render farm for 3 minutes to render a frame, versus when you have a single GPU for 30 ms, but still, any technology that makes it more feasible to use film-quality techniques interactively seems like a good thing to me.

The only problem is that films are not interactive. They render a bunch of images and make a 2-hour film in 2 years. That's less than 1/10 of what game developers face. They will always be ahead by a mile.
 
The only problem is that films are not interactive. They render a bunch of images and make a 2-hour film in 2 years. That's less than 1/10 of what game developers face. They will always be ahead by a mile.

Sure.
I'm just trying to make a simple point:
1. Current game visual quality needs improvement.
2. Extreme tessellation down to sub-pixel-sized triangles is used very successfully to make high-quality renderings.
3. Therefore, limiting tessellation to producing 16-pixel triangles seems a little weird.

I'm not trying to say games should use the same techniques as film, but I do think we shouldn't be so eager to restrict tessellation to making big triangles.

More broadly, on a technology forum, why are so many people trying so hard to defend the status quo? Why are we so eager to ensure future hardware is limited in the same way as today's hardware?
 
More broadly, on a technology forum, why are so many people trying so hard to defend the status quo? Why are we so eager to ensure future hardware is limited in the same way as today's hardware?

No one is doing that. They are just being realistic. You're not going to see movie-quality CGI generated on a desktop in real time in your lifetime. Cranking up the tessellation factor isn't going to make it happen sooner. It just means you're making other sacrifices.
 
If the polygons are sub-pixel, can't you just discard them?

If you did that, you'd see nothing on the screen while watching a Pixar movie. Or Unigine. Basically, anybody banging the drum of my-tessellator-is-bigger-than-yours wouldn't be happy.
 