Tessellation

The answer's bleeding obvious: Evergreen is a significantly feature-/performance-reduced chip from what was originally planned. AMD kludged the whole thing. That's why it's so inefficient (not just in tessellation, but in pretty much everything). See Barts for a pointer on the inefficiencies.

As to whether Cayman will be a minor or major improvement (on everything, not just tessellation), well, I guess my first sentence hangs in the balance :p Cayman should be more than Cypress was ever planned to be (1 year of extra development), which will distort comparisons, but I don't for one second believe that Evergreen's tessellation architecture is as it was originally planned. It's just horrible.

As to why Cypress wasn't built to Barts' specification, who knows. AMD could have made much more effective use of 40nm - what turned out to be a very limited supply of 40nm - for a negligible performance loss. Seems like another indication that Cypress was a hatchet job.

Barts is a year newer. That means one more year of experience with TSMC's 40nm process, one more year of tweaking, improving, etc. It is far from certain that AMD would have been able to release a 151W Barts XT in Q3'2009.
 
Barts is a year newer. That means one more year of experience with TSMC's 40nm process, one more year of tweaking, improving, etc. It is far from certain that AMD would have been able to release a 151W Barts XT in Q3'2009.
Who cares about that power level - whatever it would have been, it would have been less than Cypress :oops:

Regardless, Cypress wouldn't have suffered appreciably with fewer cores.
 
Let's forget Fermi and its tessellation performance for a minute. What was ATi really thinking when they set out to implement DX11 tessellation? After all those years of pushing tessellation only to get blue-balled at every turn they should have come out with a bang. Instead we got a whimper and nary an impressive tessellation demo in sight. Were they just testing the waters and Cayman will be the hotness or are they in fact not as hot for tessellation as they claim to be?

Why forget Fermi? Without looking at the whole picture we only get a small viewpoint.

AMD was very smart with Cypress: they were able to launch the first implementation of DX11 in September 2009, right alongside Windows 7. They put out a card that was not only faster than the previous generation but also supported the new API and offered power savings over the previous gen.

If we look at Nvidia, they were not able to supply Fermi until the end of the second quarter of 2010, leaving AMD with at least half a year as the only DX11 hardware in town. Due to problems with 40nm, Nvidia wasn't even able to ship a fully functional Fermi until over a year after Cypress landed. And unlike AMD, Nvidia's GTX 480 also sent power usage skyrocketing.

If we look at the line-ups it becomes more apparent that Nvidia has chips bigger than Cypress losing out to parts based on Barts. It's definitely a win-win for AMD even if tessellation isn't as fast as Nvidia's.

As others have said, AMD's hardware is providing more than playable framerates in DX11 games, and I'm pretty sure that by the time DX11 games come out that prove unplayable on AMD's hardware, Nvidia will be in the same boat and serious PC gamers will have moved on to DX12 cards.
 
Trini, my theory is that no new features will really "wow" us anymore. Things are more incremental for the amount of additional computing power needed. I mean, honestly, that seems to be the case in all areas. For example, how much better is the scene with twice the triangles? It depends on the scene obviously; the artists could come up with something like you ask to highlight the change, but in general it will not be as huge. Same with incremental improvements in physics calculations. The first time I played a game with destroyable objects it seemed neat (even though they just went to shards and disappeared). After that, adding physics simulations so boxes rolled down seemed pretty exciting, but when they get them just a bit closer to reality it won't make much difference. At some point a corner will be turned when they finally seem natural (if that ever occurs). Likewise, maybe if tessellation makes it so you cannot see polygons or notice them, then it will be enough to wow. But not before.
 
We need some more sensible uses of tessellation by developers to really judge if Radeon's tessellation capabilities are 'good enough' as AMD seems to imply. I am inclined to believe them, as all the implementations of tessellation that I've seen have been grossly exaggerated and wasteful, seemingly at the behest of NVIDIA, whose architecture isn't so negatively impacted by bucketloads of useless triangles.
 
We need some more sensible uses of tessellation by developers to really judge if Radeon's tessellation capabilities are 'good enough' as AMD seems to imply. I am inclined to believe them, as all the implementations of tessellation that I've seen have been grossly exaggerated and wasteful, seemingly at the behest of NVIDIA, whose architecture isn't so negatively impacted by bucketloads of useless triangles.


It's never ever "good" enough; if the competition can do more, it's a way of downplaying something till they can compete. We have seen this time and time again:

R300 DX9 capable; GF4/FX: we don't need DX9 yet.
NV40 DX9c capable; R420: we don't need DX9c yet, there isn't much difference.
R520 physics capable; NV70: we can too (small print: just not that good).
G80 strong in physics; R600: physics not so important, the CPU can handle it.
GF100 strong tessellation; Cypress: we don't need that much tessellation right now.

It's just a marketing ploy...
 
It's never ever "good" enough; if the competition can do more, it's a way of downplaying something till they can compete. We have seen this time and time again:

R300 DX9 capable; GF4/FX: we don't need DX9 yet.
NV40 DX9c capable; R420: we don't need DX9c yet, there isn't much difference.
R520 physics capable; NV70: we can too (small print: just not that good).
G80 strong in physics; R600: physics not so important, the CPU can handle it.
GF100 strong tessellation; Cypress: we don't need that much tessellation right now.

It's just a marketing ploy...

The question is, is it wrong in regards to Cypress? The card is already over a year old and is performing well in the DX11 games we have now. Do we look at games in its second year? Third year? How about the fourth year of its life? At what point does it stop mattering for a card?

Do we know how Cayman will perform in high-tessellation cases? Does it bring it closer to Nvidia's cards? Will many companies go hog wild with tessellation when Cypress and below make up the large majority of DX11 cards right now?
 
The question is, is it wrong in regards to Cypress? The card is already over a year old and is performing well in the DX11 games we have now. Do we look at games in its second year? Third year? How about the fourth year of its life? At what point does it stop mattering for a card?

Do we know how Cayman will perform in high-tessellation cases? Does it bring it closer to Nvidia's cards? Will many companies go hog wild with tessellation when Cypress and below make up the large majority of DX11 cards right now?


I think Cayman will be much better at tessellation. As for Cypress, they really didn't need strong tessellation, so the design choice came down to saving money and getting something out quickly, and because of this developers used tessellation sparingly. If Fermi had come out in time, the picture would have been much different. We're getting into "if this, then that would have happened" territory.

"Go hog wild" isn't the term I would use. The more tessellation you have the better, because as you increase tessellation iterations you will get aliasing, and you will need sub-pixel triangles to reduce that aliasing, because typical AA methods right now will downright kill performance with tessellation. MLAA might solve this, but I just don't know. So in truth "we don't need that much" means nothing.
 
Perhaps they wanted tessellation to be a useful option for enhancing games as opposed to some e-peen-inflating synthetic that allows you to make more triangles than is in any way useful.

Good point. It doesn't make any sense (to me) to create more triangles than needed with very little to no visual improvement. Besides, why do some think I should look at a game's level of tessellation (normal vs. extreme) when the true comparison in that game is no tessellation vs. tessellation?
 
The degree that we see in offline renderers, which is the ultimate goal of real time graphics.

So... we should buy GTX 580s for our games because the developers who use the professional variants to make pre-rendered scenes can do so? I don't follow your logic. In fact, I suspect you are not actually using logic at all.
 
The question is, is it wrong in regards to Cypress? The card is already over a year old and is performing well in the DX11 games we have now.

But isn't that simply because these DX11 games were written on Cypress, and hence are targeted at its capabilities? If Cypress had better tessellation, would these DX11 games have made more use of it?

Of course, setting feature performance baselines is the advantage in being first to any DX level. I wonder how different things might have been if r600 or nv30 had got out first?
 
So... we should buy GTX 580s for our games because the developers who use the professional variants to make pre-rendered scenes can do so? I don't follow your logic. In fact, I suspect you are not actually using logic at all.


No, so we can get games to look like offline-rendered scenes; that's why you need more tessellation.

Edit:

The main bottleneck for this has been memory limitation, and AMD hits that bottleneck after fewer iterations. AMD has to make design changes in Cayman to alleviate the iteration limitations, so either way you are going to get more tessellation iterations in the future. Just because Cypress can't do what Fermi can do doesn't mean the increased iteration factor is useless; it's just that, because Cypress can't do it right now, it's useless for AMD to push tessellation, and if they were to push it they would see a negative effect.
 
If Cypress had better tessellation, would these DX11 games have made more use of it?

Better than good? Maybe the games would have been worse because everyone would be focusing on the cool tess-tech and doing less creative work? ;) That's not an interesting theory.

The ones that saw it as a rich feature did so in the past (TruForm); the ones that drown in mega-company idiocies buy UT licenses (which, by the way, did have TruForm support too) or whatever engine, and leave technical abilities to be served by the provider; there's no way an artist pushes the programmer to enhance the engine.

I've got no tessellation in my 3D package, though I have SDS objects on which I can tweak Catmull-Clark N-gon subdivision factors for the viewport and the renderer. What better context is there than that? I just don't have it.

How long will it take until ZBrush's SDSs are displayed in hardware? The entire displacement-map-to-tessellated-surface backend is already there; now just feed the goddamn hardware with it instead of doing "triangulation".

GLU has tessellator functions; possibly we seriously just need a (vendor-made :rolleyes: again) super-easy library. Plug it in, no need for function calls; just think of the objects you want to have tessellated, then think of which way, and the card with its brain-wave receiver will just do it for you.
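For reference, a minimal sketch of the CPU-side GLU tessellator being alluded to (the quad and function name are just made up for illustration; the callbacks route straight to immediate-mode GL). It triangulates an N-gon contour, which is the sort of "plug it in" convenience layer being asked for, not DX11 hardware tessellation:

```c
#include <GL/glu.h>

/* Triangulate a simple quad contour with the classic GLU tessellator.
   Generated primitives are handed straight to immediate-mode GL. */
void triangulate_quad(void)
{
    static GLdouble v[4][3] = {
        {0.0, 0.0, 0.0}, {1.0, 0.0, 0.0},
        {1.0, 1.0, 0.0}, {0.0, 1.0, 0.0}
    };

    GLUtesselator *tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_BEGIN,  (GLvoid (*)()) glBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX, (GLvoid (*)()) glVertex3dv);
    gluTessCallback(tess, GLU_TESS_END,    (GLvoid (*)()) glEnd);

    gluTessBeginPolygon(tess, NULL);
    gluTessBeginContour(tess);
    for (int i = 0; i < 4; ++i)
        gluTessVertex(tess, v[i], v[i]);   /* coordinates and per-vertex data */
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);
    gluDeleteTess(tess);
}
```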

As a modeller I'm frustrated that I'm not supported in my creative processes (see higher-resolution SDS objects, or SDS displacement maps in my viewport). As a programmer I'm frustrated because apparently simple features, apparently good ways to enhance LOD, are not implemented. As a gamer I'm frustrated to still see extremely low-resolution silhouettes, even rectangular stuff here and there (see, I'm not frustrated because I don't see 5 trillion tris on screen; I'm frustrated because I still sometimes see just 5000).

And who do I blame? Definitely not ATI; I find their persistence with hardware tessellation incredible, almost counter to business logic, almost academically stubborn. I'd never blame a vendor, except for bad feature documentation, and again ATI is tops on that. With that much documentation you can almost create your own software implementation of the platform.

Who to blame then? ...
 
Are there any tessellation tests of near game complexity that highlight the memory and bandwidth savings that come with use of tessellation?

For example, I would be interested to see, if possible, how Cypress performs in the Heaven benchmark with all the tessellated geometry precomputed. From the profiler I gathered that the meshes total 20 MB, so with an amplification factor of 20x (a wild guess) that would put the total geometry at 400 MB, which might still leave sufficient space for textures and buffers.
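Just to make that back-of-the-envelope explicit (the 20 MB and 20x figures are from the paragraph above; the 1 GB board is my assumption), a quick sketch:

```c
#include <stdio.h>

int main(void)
{
    const double base_mesh_mb    = 20.0;    /* Heaven meshes, from the profiler */
    const double amplification   = 20.0;    /* wild guess, as above */
    const double board_memory_mb = 1024.0;  /* assumed 1 GB card */

    const double precomputed_mb = base_mesh_mb * amplification;   /* 400 MB */
    printf("Precomputed geometry: ~%.0f MB\n", precomputed_mb);
    printf("Left for textures and buffers: ~%.0f MB\n",
           board_memory_mb - precomputed_mb);
    return 0;
}
```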

It would be amusing if it turned out that it does better with precomputed geometry than with tessellated geometry...
 
No, so we can get games to look like offline-rendered scenes; that's why you need more tessellation.

Are you implying that the only major difference between offline and real-time rendering is geometric detail?



I have to agree with homerdog and Mat3. For now it seems the only applications where tessellation makes an appreciable improvement to image quality are demos/benchmarks, partly because the tessellation-disabled modes have so little detail by comparison. From my layman's perspective, tessellation is a lot more interesting when used more specifically for improving performance with adaptive LOD rather than just adding bucketloads of hardly noticeable tris.
It might be a while yet before game developers are able or willing to sacrifice IQ for all the sub-DX11 users by reducing the quality of the base meshes. In the meantime we've got titles like Metro 2033, which don't appear to have been developed with tessellation in mind and barely use it for anything.

As an aside, I'd like to see a bench/demo that adjusts tessellation factors in real time to approach a framerate setpoint. Then we could discuss the supposed inferiority of AMD's tessellation approach based on perceived IQ difference.
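Purely as an illustration of that aside (nothing from any real benchmark; the setpoint, gain, and starting factor are invented), here is a tiny sketch of how such a demo might steer a global tessellation factor toward a framerate target:

```c
#include <stdio.h>

#define TARGET_FPS  60.0f
#define MIN_FACTOR   1.0f
#define MAX_FACTOR  64.0f   /* DX11 caps edge tessellation factors at 64 */
#define GAIN         0.05f  /* small proportional gain to avoid oscillation */

/* Nudge the tessellation factor toward the framerate setpoint:
   raise it while there is headroom, lower it when we fall below target. */
static float update_tess_factor(float factor, float measured_fps)
{
    factor += GAIN * (measured_fps - TARGET_FPS);
    if (factor < MIN_FACTOR) factor = MIN_FACTOR;
    if (factor > MAX_FACTOR) factor = MAX_FACTOR;
    return factor;
}

int main(void)
{
    /* Fake a few frame timings just to show the factor adjusting. */
    const float fps[] = { 90.0f, 85.0f, 75.0f, 66.0f, 61.0f, 58.0f };
    float factor = 8.0f;
    for (int i = 0; i < 6; ++i) {
        factor = update_tess_factor(factor, fps[i]);
        printf("frame %d: %.1f fps -> tess factor %.2f\n", i, fps[i], factor);
    }
    return 0;
}
```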
 
The ones that saw it as a rich feature did so in the past (TruForm); the ones that drown in mega-company idiocies buy UT licenses (which, by the way, did have TruForm support too) or whatever engine, and leave technical abilities to be served by the provider; there's no way an artist pushes the programmer to enhance the engine.


I totally disagree with that; creatives have quite a bit of say in what the end product is, everyone from the game designers to the graphics artists. At the end of the day it's up to the producers and tech to decide how far to push the tech based on input from the creatives. So if 50% of your intended target doesn't have as much tessellation potential as the other 50%, you have to scale down the product to the lowest common denominator.
 
Are you implying that the only major difference between offline and real-time rendering is geometric detail?

I have to agree with homerdog and Mat3. For now it seems the only applications where tessellation makes an appreciable improvement to image quality are demos/benchmarks, partly because the tessellation-disabled modes have so little detail by comparison. From my layman's perspective, tessellation is a lot more interesting when used more specifically for improving performance with adaptive LOD rather than just adding bucketloads of hardly noticeable tris.
It might be a while yet before game developers are able or willing to sacrifice IQ for all the sub-DX11 users by reducing the quality of the base meshes. In the meantime we've got titles like Metro 2033, which don't appear to have been developed with tessellation in mind and barely use it for anything.

As an aside, I'd like to see a bench/demo that adjusts tessellation factors in real time to approach a framerate setpoint. Then we could discuss the supposed inferiority of AMD's tessellation approach based on perceived IQ difference.

That's not the only difference, but it has been one of the major restrictions for the past few years.
 
But isn't that simply because these DX11 games were written on Cypress, and hence are targeted at its capabilities? If Cypress had better tessellation, would these DX11 games have made more use of it?

Of course, setting feature performance baselines is the advantage in being first to any DX level. I wonder how different things might have been if r600 or nv30 had got out first?

Would we even have DX11 games now if Cypress hadn't come out when it did and instead the GTX 480 had been the first DX11 card? As it is, we have so few DX11 games even though we've had capable hardware for over a year.

As for NV30 vs. R300, what does it matter? The R300 had its compromises, as did the NV30. The NV30 got its special INT8 and FP16 paths and the R300 got its FP24 paths. Real 32-bit paths were delayed till hardware came out that could run them.

Looking back, the R300 was a great GPU for its time, and looking at it in that period it was a good card to have too. It was better than what came before and allowed you to play games at resolutions and FSAA levels that were pipe dreams back in the day. The 9700 and 9500 were good cards to own all the way through Far Cry.

As it stands now the 5870/5850 seem to be running everything just fine except, from what I can see, Metro 2033, but they sure do run it better than Nvidia's 2009 DX11 card and run it comparably with Nvidia's DX11 card from spring 2010.


I think the more important question is whether Cayman is going to increase its tessellation capabilities; I don't see a major problem if AMD continues to increase performance. I thought tessellation was supposed to be dynamic and adaptive.
 
If Cypress had better tessellation, would these DX11 games have made more use of it?
Until developers figure out how to go from high poly models directly to displacement mapped models with fully automatic LOD ... no. (Even if they do figure it out they will have to whip their artists with chains to get them to change their workflow.)
 