Tessellation

Well, the workflow doesn't change much; displacement maps are already used in current bump-mapping techniques anyway. So take a game like Unreal 3: if the engine supports tessellation, you just have to change the shader used and it should work without a hitch.
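
To make that concrete, here's a minimal CPU-side sketch (Python, purely illustrative; the functions and data layout are assumptions, not any real engine's API) of what the tessellate-then-displace step does: subdivide a patch, then push each new vertex along the normal by a height read from the same displacement map a bump shader would sample.

Code:
# Illustrative only: uniform tessellation of a quad patch, then
# displacement along the patch normal. All names here are hypothetical.

def tessellate_quad(corners, level):
    """Uniformly subdivide a quad patch into a (level+1) x (level+1) grid."""
    verts = []
    for i in range(level + 1):
        for j in range(level + 1):
            u, v = i / level, j / level
            # Bilinear interpolation of the four corner positions.
            p = [(1-u)*(1-v)*corners[0][k] + u*(1-v)*corners[1][k]
                 + (1-u)*v*corners[2][k] + u*v*corners[3][k]
                 for k in range(3)]
            verts.append((p, (u, v)))
    return verts

def displace(verts, normal, height, scale):
    """Push each tessellated vertex along the normal by the sampled height."""
    return [[p[k] + normal[k] * height(u, v) * scale for k in range(3)]
            for (p, (u, v)) in verts]

# A unit quad in the XY plane, displaced along +Z by a stand-in height map.
quad  = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
bumps = lambda u, v: u * v    # stand-in for a displacement texture sample
print(displace(tessellate_quad(quad, 4), (0, 0, 1), bumps, 0.1)[:3])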

There is only so far you can drop the polygon count in the lowest LOD anyway. I think pretty much what you see in today's games is where you want to stay to preserve quality on lower-end systems.
 
I totally disagree with that. Creatives have quite a bit of say in what the end product is, everyone from the game designers to the graphics artists.

They have no say (about the low-level capabilities) if the engine is bought and externally supported. You can't tell the Unity guys to just add tessellation for you, and you can't do it yourself.
 
Of course you can; it depends on what license you get. If you get the typical license from their website, you can't, but you can contact them and get low-level access to the core engine code. In any case, with most engines used for gaming you do get access to the code. Unity is a good engine (I have used it in the past), but I wouldn't put it in the league of AAA game engines, hence the cost difference.
 
So take a game like Unreal 3: if the engine supports tessellation...
I think that's where he was going, though -- the number of people doing a completely-written-in-house, fully-DX11-enabled 3D rendering engine is incredibly few and far between. So for all those folks buying engines that don't support DX11 tessellation (great example: Unreal 3), you're shit out of luck. Unless of course you want to buy rights to the source, in which case: is tessellation worth the zillions of dollars needed to license it at that level? Getting access to source code is expensive, if you didn't know...

Which directly supports his point. And I'm far more willing to believe a guy who builds 3D models for games than an armchair architect. :)
 
If you want to build a game and sell it for profit, most AAA game engines come with the code; there is no way around that. I have produced games using the Cry engine, and have created demos and architectural renderings using the Unity and Unreal engines. So I'm pretty comfortable talking about engine tech and what can and can't be done.

Expensive is a loose term when you compare the cost of creating your own 3D engine against purchasing one, especially when you look at budgets in the tens of millions.
 
We're running in circles ... why exactly doesn't a ten-million-dollar game support custom-programmed tessellation if it's merely an issue of money for access to the engine source?
 
Agreed. If you're talking about tens of millions of dollars, then you're either building your own engine or you have a staff of hundreds. "Most games" aren't productions in the tens of millions of dollars, and in fact may not even be in the seven digits to begin with.

Until you or someone else can show me a game where AMD's tessellation truly is insufficient, I call shens on everyone who wants to proclaim it so.
 
Most AAA games cost around $5 million to produce today, and another $5 million, possibly more, goes into marketing. Most AAA games only sell around 250k units per platform in total at the launch price. You do the math.
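
Actually doing that math, with some hypothetical retail numbers (the $60 unit price and the retail/platform cut below are assumptions for illustration, not figures from this thread):

Code:
# Back-of-envelope check of the figures above. The unit price and the
# retail/platform cut are hypothetical assumptions for illustration.
dev_cost  = 5_000_000      # production (figure from the post)
marketing = 5_000_000      # marketing (figure from the post)
units     = 250_000        # launch-price sales per platform (from the post)
price     = 60             # assumed launch price per unit
cut       = 0.45           # assumed share lost to retail/platform/royalties

net_per_platform = units * price * (1 - cut)
platforms_needed = (dev_cost + marketing) / net_per_platform
print(f"net revenue per platform: ${net_per_platform:,.0f}")
print(f"platforms needed to break even: {platforms_needed:.1f}")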

Now, if you were to buy a current engine, let's say Unreal 3, and it was only DX10, you would be spending around $350k + royalties, or $1.35 million without royalties (both options come with the code). The only options where you don't get the code are for architectural renderings and virtual walkthroughs, or medical advertising, and the cost is much lower; it's under $100k if I remember correctly. Now, to add a feature like tessellation you would have to port the engine to DX11. I'm not sure how long or expensive this would be, but adding tessellation wouldn't be an easy effort, since you have to modify the engine at the core level and you have to take the physics engine into account as well. Going from DX9 to DX10 wasn't too bad, but going from DX8 to DX9 was a major change and would have meant rewriting the entire engine. So the cost of buying an engine to port sometimes isn't warranted because of time and money.

The reason to purchase a pre-made engine is to expedite the development process. Creating a fully featured engine takes around 2 to 3 years with a team of 10 programmers, each costing $100k per year, so what is the trade-off between buying an engine and making one? By purchasing an engine you gain 2-3 years of development time and you get proven tech that works.
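
Putting the engine figures side by side gives a rough build-versus-buy comparison (a sketch using only the numbers quoted in this thread):

Code:
# Build vs. buy, using the figures from the posts above.
team, salary = 10, 100_000             # engine programmers and yearly cost
build_low  = team * salary * 2         # 2-year in-house build
build_high = team * salary * 3         # 3-year in-house build

license_royalty = 350_000              # engine license plus royalties
license_flat    = 1_350_000            # engine license, no royalties

print(f"in-house engine: ${build_low:,} - ${build_high:,}, plus 2-3 years")
print(f"licensed engine: ${license_royalty:,} + royalties, or "
      f"${license_flat:,} flat, available on day one")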
 
Maybe there isn't really a tessellation issue. Perhaps it is really just an enthusiast issue. Perhaps the bigger problem is that enthusiasts see their $250+ graphics card, whether it's a GTX 460, a GTX 580, or the ATI equivalent, simply spinning its wheels in the majority of games. These people still want to buy their enthusiast-level hardware, but then they get frustrated when their overclocked and overpowered gaming rig is seeing 120+ FPS on the console ports which make up the majority of games released. Tessellation, NVIDIA-style, is simply a way for these cards to actually show some sort of gameplay and/or visual advantage over that store-bought $500 PC with an HD 5670 slapped into it.

We even have our resident GPU slayer Nebula with an HD 4890, still.
 
The reason to purchase a pre-made engine is to expedite the development process. Creating a fully featured engine takes around 2 to 3 years with a team of 10 programmers, each costing $100k per year, so what is the trade-off between buying an engine and making one? By purchasing an engine you gain 2-3 years of development time and you get proven tech that works.

Thanks for the confirmation (or logical deduction, if you prefer) that tessellation feature-adoption is essentially a middleware issue: it has nothing to do with the vendor, and nothing to do with artists. And of all things, donating a tessellation implementation to a studio will do zip towards adoption.
 
It isn't essentially a middleware issue. If a company wants it, they will implement it even if they license an engine. Considering it takes 3 to 5 years to create a AAA game: if I were to purchase an engine right now, I would add those features, because the engine programmers could be working on upgrading the engine while the game is still being programmed; you don't lose development time, but you do have to increase resources. But if I was making the game 3 years ago, I wouldn't, even though the game would be coming out in the near future, because we would be too far into the development pipeline to start switching over. But this is also premature to say, because:

It's all about money versus time. If you want me to be descriptive: the limit of scope as it approaches quality = the limit of time as it approaches quality = the limit of cost as it approaches quality, all of which (each part of the ratio) are affected by resources and by the inherent risk of the changes.
 
As a modeller, I'm frustrated that I'm not supported in my creative processes (see higher-resolution SDS objects, or SDS displacement maps in my viewport). As a programmer, I'm frustrated because apparently simple features, apparently good ways to enhance the LOD, are not implemented. As a gamer, I'm frustrated to still see extremely low-resolution silhouettes, even blocky geometry here and there (see, I'm not frustrated because I don't see 5 trillion tris on screen; I'm frustrated because I still sometimes see just 5,000).
I haven't tried it, but if you use Maya and have a Radeon card, there's a plugin to use the non-DX11 tessellator:
http://developer.amd.com/gpu/wgsdk/Pages/default.aspx
 
It isn't essentially a middleware issue ...

I give up. Not only are you not taking into account your own previous conclusions (three posts up), you're also helping maintain the vendor-blame crowd. Overall tessellation adoption is driven by middleware tessellation adoption. Even DX11 is middleware.

Here's a very simple example from processor features: if a CPU manufacturer implements op-code A, it can per se only be utilized by assembler programmers who know how to emit raw machine code. Once assemblers (middleware) know instruction A, all the other assembler programmers can utilize it. It's only when compilers (middleware) start to adopt the instruction that the majority of software developers take advantage of op-code A, probably without even knowing about it. Let's do the math (some hypothetical numbers):

Code:
number of high-level programmers: 9.5 million
number of compilers: 95 (equal share)
number of low-level programmers: 0.5 million
number of assemblers: 5 (equal share)

programmers able to use A at time of introduction:
  25,000 (cumulative) => 0.25% overall availability rate
programmers able to use A after adoption by 1 assembler:
  100,000 (cumulative) => 1.00% overall availability rate
programmers able to use A after adoption by all assemblers:
  500,000 (cumulative) => 5.00% overall availability rate
programmers able to use A after adoption by 1 compiler:
  600,000 (cumulative) => 6.00% overall availability rate
programmers able to use A after adoption by 10 compilers:
  1,500,000 (cumulative) => 15.00% overall availability rate
programmers able to use A after adoption by all compilers:
  10,000,000 (cumulative) => 100% overall availability rate
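
For anyone who wants to check the table, here is the same hypothetical arithmetic as a small script (a sketch that uses only the made-up figures above):

Code:
# Reproduces the hypothetical availability rates from the table above.
HI, LO = 9_500_000, 500_000            # high- and low-level programmers
COMPILERS, ASSEMBLERS = 95, 5          # tools, each with an equal share
TOTAL = HI + LO

def availability(n_assemblers, n_compilers, raw_coders=25_000):
    """Programmers able to use op-code A, and their share of everyone."""
    able = max(raw_coders, n_assemblers * LO // ASSEMBLERS)
    able += n_compilers * HI // COMPILERS
    return able, 100 * able / TOTAL

for a, c in [(0, 0), (1, 0), (5, 0), (5, 1), (5, 10), (5, 95)]:
    able, pct = availability(a, c)
    print(f"{a} assemblers, {c} compilers: {able:>10,} => {pct:.2f}%")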

Now replace "assembler" by "API", "assembler-programmer" by "3d-engine-programmer (doing the engine)", "compiler" by "prefabricated 3d-engine" and "software-developer" by "3d-engine-programmer (utilizing the engine)".

Code:
3d-engine-programmer (doing the engine) able to use Tess. after adoption by 1 API:
  100,000 (cumulative) => 1.00% overall availability rate
3d-engine-programmer (doing the engine) + 3d-engine-programmer (utilizing the engine) able to use Tess. after adoption by 0 prefabricated 3d-engines:
  100,000 (cumulative) => 1.00% overall availability rate

I'd be surprised if you found a sound way to raise the overall availability rate without middleware. I'd not be surprised if a lot of reasons surfaced that affect the "availability-to-adoption" ratio.
 
Adoption of new tech is also affected by the target audience having the new tech available to them; that is one part of the risk.

My equation takes all of that into consideration, and your resources as well. I just didn't want to type it all up, because I can't give you all the variables for risk; I would need a lot more data points to create a fully blown-out equation. But at the end of the day, if AMD's products can't do something as well as their competitor's counterparts, they will downplay it, and they have. Will it help them? Not really, because it's pretty obvious it will be used.

Whether the feature gets used is more on the devs. But it's up to the IHVs to supply the devs with what they need to use those features, so if one of the IHVs is not as good, well, that holds back everyone.

You are picking out one step and thinking that it is the only reason; well, that's not the case. Look at how APIs are created, then adopted by the IHVs, then by the devs, then see how products are created on the APIs. Four different steps, and each of the steps has its own scope, cost, and time equations, where each step affects the next step. It's actually kind of like game theory in some ways.
 
I totally disagree with that. Creatives have quite a bit of say in what the end product is, everyone from the game designers to the graphics artists.
IMO the number one thing which makes higher-order surfaces so damn impossible is the fact that artists don't want to let go of native access to the underlying data, mostly texels. It constrains the solution space to bad solutions. I've said it elsewhere: it's time to provide texture artists with only temporary/local unwrapped 2D textures which get projected to the native texture maps automatically after editing ... together with 3D painting, that is all they really need.

As long as the artists don't work at an abstracted level the programmers are constrained in what they can do at the native level.
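
A sketch of what that tooling could look like (hypothetical code; the round trip from native UV through the 3D surface to the local unwrap is collapsed into a direct UV mapping for brevity):

Code:
# Hypothetical tooling sketch: the artist paints into a temporary local
# unwrap of one region; the tool projects the edit back into the native
# texture automatically, so the native layout is never edited by hand.
W = 8  # tiny native texture for the example

def native_to_local(u, v, region):
    """Map a native UV into the region's temporary local unwrap."""
    u0, v0, u1, v1 = region
    return (u - u0) / (u1 - u0), (v - v0) / (v1 - v0)

def reproject(native, local_sample, region):
    """Copy the edited local unwrap back into the native texels."""
    u0, v0, u1, v1 = region
    for (u, v) in list(native):
        if u0 <= u <= u1 and v0 <= v <= v1:
            lu, lv = native_to_local(u, v, region)
            native[(u, v)] = local_sample(lu, lv)  # sample the artist's edit

# Native map as texel-center UV -> color; the artist edits only the region.
native = {((i + .5) / W, (j + .5) / W): 0.0
          for i in range(W) for j in range(W)}
region = (0.25, 0.25, 0.75, 0.75)                 # the unwrapped patch
artist_edit = lambda lu, lv: 1.0                  # stand-in for the painting
reproject(native, artist_edit, region)
print(sum(1 for c in native.values() if c > 0), "texels updated")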
 
I completely agree. I have been with teams like that, and the first thing I do is start some lunch-and-learns with programmers and artists, focused on what the optimal way is of bringing the latest tech and the best creative possibilities to the table. Sometimes it's extremely painful, and yeah, it doesn't work all the time; it really depends on the team. Personally I think creatives actually have the most control over the overall project; this was one of the reasons I became a producer, I wanted the ability to guide the entire team to one focal point. Each discipline is the best at what it does, but they tend to forget what the rest of the team "has to do", and each change of direction has business ramifications. It's not like 15 years ago, when games like Paratrooper could be made overnight; games today are possibly more complex than some OSes. Games are cutting-edge; they push tech and future software.

Funny thing is, when I did my own 3D models, what you describe is exactly the way I did them: I would create low-poly 3D meshes in 3ds Max, increase their detail to around 4 to 5 million polys, use vertex paint in 3ds Max, and render to texture (there is no need to use Mudbox or any other high-poly modeler). That gives the base texture, normal, and displacement maps with all the details, and I would hand the color texture to the texture artists to add decals and enhance it. The best part: we ended up saving about 3 weeks' worth of work per model and we saved money (granted, this was 3 to 4 years ago; I'm pretty sure it's easier to do now), and we got the best of both worlds, tech and art capabilities.
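
For anyone curious what that render-to-texture step boils down to, here's a toy version of the bake (illustrative Python; not how 3ds Max actually implements it): sample the high-detail surface and the low-poly base on a texel grid and store the difference as displacement.

Code:
# Toy bake: per texel, store the height difference between the detailed
# high-poly surface and the low-poly base it will later displace.
# Illustrative only; not 3ds Max's actual render-to-texture.
import math

def bake_displacement(low_height, high_height, size):
    """Sample both surfaces on a texel grid and keep the difference."""
    disp = []
    for j in range(size):
        row = []
        for i in range(size):
            u, v = (i + .5) / size, (j + .5) / size
            row.append(high_height(u, v) - low_height(u, v))
        disp.append(row)
    return disp

# Stand-ins: a flat low-poly base and a wavy "high-poly" surface.
low  = lambda u, v: 0.0
high = lambda u, v: 0.05 * math.sin(8 * math.pi * u) * math.sin(8 * math.pi * v)
dmap = bake_displacement(low, high, 16)
peak = max(max(abs(x) for x in row) for row in dmap)
print(f"baked {len(dmap)}x{len(dmap[0])} displacement map, peak {peak:.3f}")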
 
You are picking out one step and thinking that it is the only reason; well, that's not the case. Look at how APIs are created, then adopted by the IHVs, then by the devs, then see how products are created on the APIs. Four different steps, and each of the steps has its own scope, cost, and time equations, where each step affects the next step. It's actually kind of like game theory in some ways.

Nope:

I'd be surprised if you found a sound way to raise the overall availability rate without middleware. I'd not be surprised if a lot of reasons surfaced that affect the "availability-to-adoption" ratio.

I "clearly" say the considered variables give you an optimistic upper bound for adoption, and you just get worst adding more variables. A ratio above 1.0 is not permited. ;)

Anyway, I don't know if we disagree or not, or if we're just talking for the sake of talking. I'm trying to develop a coherent explanation of why tessellation isn't adopted (and essentially couldn't be adopted even if it were wanted), and why ATI or NVIDIA can't do anything about it. Are you in disagreement with that?
 
I am in disagreement with that. Again, you only had one variable; there are many more variables, which I explained to you.

You are picking out one step and thinking that it is the only reason; well, that's not the case. Look at how APIs are created, then adopted by the IHVs, then by the devs, then see how products are created on the APIs. Four different steps, and each of the steps has its own scope, cost, and time equations, where each step affects the next step. It's actually kind of like game theory in some ways.

Within that one step you have picked, you have multiple variables, I agree, but there are 3 other steps you aren't taking into consideration.
 
Right now, detailed models are turned into super-detailed models, but that's not what tessellation should be for. It should be taking low- and medium-detail base models and turning them into super-detailed ones, with the substantial memory savings that come with that.

Doing so would actually increase the tessellation level, not reduce it. It's not about the complexity of the resulting mesh but the level of tessellation required to get from the base mesh to the end result.
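
Rough numbers on the memory side of that argument (the mesh sizes, vertex layout, and map resolution below are made up for illustration):

Code:
# Hypothetical memory comparison: shipping a dense mesh vs. a coarse base
# mesh plus a displacement map that tessellation expands at runtime.
BYTES_PER_VERT = 32            # position, normal, UV... (assumed layout)

dense_verts = 1_000_000        # detail modeled directly into the mesh
base_verts  = 10_000           # coarse base mesh
disp_map    = 1024 * 1024      # 1K x 1K, 8-bit single channel (assumed)

dense  = dense_verts * BYTES_PER_VERT
coarse = base_verts * BYTES_PER_VERT + disp_map
print(f"dense mesh:      {dense / 2**20:5.1f} MiB")
print(f"base mesh + map: {coarse / 2**20:5.1f} MiB")
print(f"vertex amplification needed: ~{dense_verts // base_verts}x")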

Why forget Fermi? Without looking at the whole thing we only get a small viewpoint.

To avoid discussions about whose tessellator is bigger, because that's where a lot of people seem to end up, and in doing so they completely miss the point. Even if there were no Fermi, we could still make the argument that tessellation as we've gotten it in games and hardware is rather unimpressive. I see folks saying that AMD's tessellation hardware is handling the weak implementations in current games just fine. That's not exactly a redeeming argument.

Trini, my theory is that no new features will really "wow" us anymore. Things are more incremental relative to the amount of additional computing power needed.

That's the thing, this change isn't incremental at all. It's a massive leap in the count of the basic building blocks of a 3D scene. I'm not buying that the triangle counts we've had for years are somehow adequate and we don't need any more - that's just a lack of imagination talking. Current game models are woefully lacking in detail.

We need some more sensible uses of tessellation by developers to really judge whether Radeon's tessellation capabilities are "good enough", as AMD seems to imply. I am inclined to believe them, as all the implementations of tessellation that I've seen have been grossly exaggerated and wasteful, seemingly at the behest of NVIDIA, whose architecture isn't so negatively impacted by bucketloads of useless triangles.

Yes exactly, you can't say something is "good enough" without defining what it's good enough for. Current DX11 games are a laughable basis upon which to make that statement. I'm looking forward to seeing Futuremark's implementation. Hopefully they were able to put the additional geometry to good use and if they were we'll have a better benchmark to determine what's good enough. I hope DICE does the same with Frostbite 2 - that's probably going to be it until the console refresh cycle.
 
That's the thing, this change isn't incremental at all. It's a massive leap in the count of the basic building blocks of a 3D scene. I'm not buying that the triangle counts we've had for years are somehow adequate and we don't need any more - that's just a lack of imagination talking. Current game models are woefully lacking in detail.
Another micro polygon fan, I like that. ;)
 