Xenos hardware tessellator

I'm sure we've hit over 1 million triangles/frame in some recent games. Crysis will definitely go over that. I think PGR3 hit 2-3 million (something like 8 cars @ 90k average, Brooklyn Bridge at 1.5 million + rest of environment, though obviously there would only be rare peak cases where all 8 cars + most of the bridge + a good chunk of the rest of the environment was visible).

Good point :) I was meaning more in terms of a pixel/triangle ratio getting close to 1:1 average across the screen. Midnight + posting = confusion (such as now).
 

A pixel:triangle ratio below 8-10 (some even say 20) is a very bad idea with today's GPUs - you'd be wasting a lot of fillrate, as pixels are shaded in groups of 2x2. Not sure about the G80, but with Xenos and RSX you definitely want to avoid 1-pixel triangles.
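To put some rough numbers on that: because pixels are shaded in 2x2 quads, a quad always costs four shader invocations even if the triangle only covers one of its pixels. A back-of-the-envelope sketch (the pixel counts and quad counts below are illustrative assumptions, not measurements from any particular GPU):

Code:
#include <cstdio>

// Pixels are shaded in 2x2 quads, so a quad always costs 4 shader
// invocations even if the triangle only covers one of its pixels.
double quadEfficiency(double coveredPixels, double quadsTouched)
{
    return coveredPixels / (quadsTouched * 4.0);
}

int main()
{
    // Illustrative cases: a large triangle mostly fills the quads it touches,
    // while a 1-pixel triangle still pays for a whole quad.
    std::printf("100 px over ~30 quads: %.0f%% of shader work useful\n",
                100.0 * quadEfficiency(100.0, 30.0));
    std::printf("1 px over 1 quad:      %.0f%% of shader work useful\n",
                100.0 * quadEfficiency(1.0, 1.0));
    return 0;
}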
 
So I'm wondering, in terms of usefulness, is this another TruForm, or something actually usable?

It's a pretty useful feature even if not too flexible: it can be used not only to implement displacement mapping but for other purposes as well (very fast particle instancing springs to mind), although it's not as flexible as Cell or a Geometry Shader.
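For what it's worth, the displacement-mapping use is conceptually simple: tessellate the patch, then push each generated vertex along its normal by a height sampled from a texture (on Xenos the displacement step would presumably run in the vertex shader after the tessellator). A minimal CPU-side sketch of that idea; the Vertex struct, the heightmap sampling, and the scale factor are assumptions for illustration, not the Xenos API:

Code:
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 position; Vec3 normal; float u, v; };

// Hypothetical height lookup standing in for a displacement-map fetch.
float sampleHeight(const std::vector<float>& heightMap, std::size_t width,
                   float u, float v)
{
    std::size_t xi = static_cast<std::size_t>(u * (width - 1));
    std::size_t yi = static_cast<std::size_t>(v * (width - 1));
    return heightMap[yi * width + xi];
}

// After the tessellator has generated the dense vertices, displace each one
// along its interpolated normal by the sampled height.
void displace(std::vector<Vertex>& verts, const std::vector<float>& heightMap,
              std::size_t mapWidth, float scale)
{
    for (Vertex& vtx : verts) {
        float h = sampleHeight(heightMap, mapWidth, vtx.u, vtx.v) * scale;
        vtx.position.x += vtx.normal.x * h;
        vtx.position.y += vtx.normal.y * h;
        vtx.position.z += vtx.normal.z * h;
    }
}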

I was under the impression that when using automatic tiling, memexport could not be used... wasn't there a thread about this a few months back? Something to the effect that using predicated tiling breaks memexport, or doesn't allow you to use it effectively?

That's correct. You can't memexport inside a begin/end tiling bracket.
 
If a developer is advanced enough to use adaptive tessellation then manual tiling should pose no problem. I'd guess most games will eventually use manual tiling, though I have no numbers or inside info to support this guess.

That's a tough problem. Tiling is a pain in the butt whatever angle you look at it from. Manual tiling tends to be pretty efficient if you have tight bounding volumes, but you are basically trading CPU time and engine complexity for more vertex throughput.
Using PTR in general is not a walk in the park.
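To illustrate where that CPU time and engine complexity go with manual tiling: the engine classifies each object's screen-space bounds against the tile rectangles and only submits it to the tiles it overlaps, instead of letting the GPU resubmit everything to every tile. A very stripped-down sketch of that binning step; the Rect/object representation and the two-tile 720p split in the comment are assumptions for illustration:

Code:
#include <cstddef>
#include <vector>

struct Rect { int x0, y0, x1, y1; };   // screen-space bounds, inclusive-exclusive

bool overlaps(const Rect& a, const Rect& b)
{
    return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
}

// Build one draw list per tile: an object is submitted only to the tiles its
// (hopefully tight) bounding rectangle touches.
std::vector<std::vector<int>> binObjects(const std::vector<Rect>& objectBounds,
                                         const std::vector<Rect>& tiles)
{
    std::vector<std::vector<int>> perTile(tiles.size());
    for (int obj = 0; obj < static_cast<int>(objectBounds.size()); ++obj)
        for (std::size_t t = 0; t < tiles.size(); ++t)
            if (overlaps(objectBounds[obj], tiles[t]))
                perTile[t].push_back(obj);
    return perTile;
}

// Example: 1280x720 with 2xAA split into two 1280x360 tiles.
// std::vector<Rect> tiles = { {0, 0, 1280, 360}, {0, 360, 1280, 720} };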
 

How much easier would your life be if the Xbox 360 had enough eDRAM to fit 720p and 4xAA without tiling? How much would that be actually? Or how about with 2xAA?
I have got the impression that maybe the inclusion of 10MB of eDRAM and tiling may not have been the best way to go, especially considering the large number of multiplatform games and, for example, UE3 not using tiling.
 
Too much. 720p with 4xAA needs 3 tiles, so you'd need at least 2 and a bit times the eDRAM size. 20+MB of eDRAM would have been very costly, if even doable.
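The arithmetic behind those tile counts, assuming the usual back-of-the-envelope figure of a 32-bit colour target plus 32-bit depth/stencil per sample (8 bytes per sample; exact formats vary):

Code:
#include <cstdio>

int main()
{
    const double pixels         = 1280.0 * 720.0;         // 720p
    const double bytesPerSample = 4.0 + 4.0;               // 32-bit colour + 32-bit depth/stencil
    const double edramBytes     = 10.0 * 1024.0 * 1024.0;  // 10MB of eDRAM

    const int sampleCounts[] = {1, 2, 4};
    for (int samples : sampleCounts) {
        double total = pixels * bytesPerSample * samples;
        int tiles = static_cast<int>((total + edramBytes - 1) / edramBytes);
        std::printf("%dxAA: %.1f MB -> %d tile(s)\n",
                    samples, total / (1024.0 * 1024.0), tiles);
    }
    return 0;
}

// Prints roughly: 1xAA ~7.0 MB -> 1 tile, 2xAA ~14.1 MB -> 2 tiles,
// 4xAA ~28.1 MB -> 3 tiles, matching the figures above.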

The 10MB buffer seems a fair compromise for performance and cost. The problem with UE3.0 is more Epic's fault. They're just dragging over their PC engine rather than creating a tiling-friendly engine, and MS can't say 'oi! You use tiling like we intended!' as UE3.0 is essential to the success of the platform. Thus Epic can ignore the eDRAM and produce non-tiling games, wasting the abilities of the machine somewhat - though I don't know if they use the eDRAM for other tasks.
 

PTR giveth, PTR taketh away :)
I think 10MB of eDRAM is a good compromise, since it opens up several possibilities and rendering to it is insanely fast. But you have to design your engine with PTR in mind and that's not easy.
 

Excuse me Fran, but what is PTR? And I always thought the functions of the eDRAM were fixed...? What else could you do with it?
 

I think it's predicated tiling rendering. (shoot me if I'm wrong :LOL:)

edit: Even though it's a good compromise, I would say that the pros will have to be pretty darn amazing to offset the cons. I mean, the system has been out for a year now. The best looking title is a game based on UE3 which doesn't use tiling, and on top of that tons of games are multiplatform, of which possibly none will use tiling. It also seems like UE3 will become a very popular engine for the X360, and whether future updates will make tiling possible remains to be seen. So I might be sadly mistaken when I say that it looks like only about a handful of first party titles will actually use tiling, and even in those titles developers will have to do lots of work to get it up and running properly. So often when I think about this, I feel like maybe it could have been more useful if the transistors had been put elsewhere. Basically I think the system is very neat, but if it's hardly ever utilised like it was meant to be, then something somewhere has gone wrong. Of course I will get through this by just looking at the games, which are very nice indeed :) I don't mind if somebody shoots my line of thought to the ground though and explains to me how eDRAM actually is the best thing since sliced bread.
 

I think MS probably got as much as they could hope for from Epic, all things considered. They were basically looking for a two-fer when they signed Epic: get a game, and at the same time get a solid, optimized, multiplatform engine for other developers to use.

Now it may not make the best use of the 360 hardware, but looking at Gears of War, they've done their job, as it really is the nicest looking game on the platform. Hopefully this strategy is more a bridge for years 1-2 and not a persistent strategy throughout the console's life, i.e. hopefully they concentrate now on creating custom engines, even if the title takes a little longer to produce.

Don't want to drag this off course though, thx for the input Fran.
 

Let's hope. That is one area where I think Sony has excelled: a desktop GPU and tools. PC devs have more time/resources (tools, engines, experience) with modern GPUs and their evolution, and leveraging that is a good thing. Creating a business model that targets PC<>Console migration, like MS has, but then creating a graphics sub-system that sits outside the superset of APIs, as well as a rendering situation that is non-intuitive to the workflow of the very developers you are trying to leverage... seems like an odd path to take. Almost like RSX and Xenos should have swapped places.

I do hope developers are working on their own solutions in the background and that in a couple of years we see them exposing features like tessellation, memexport, vertex texturing, and so forth. I would be a little more encouraged if MS had not selected middleware for most of their funded projects. 2 years is a looong time in game years though, and the 360 should be here for at least 5 years as MS's main console platform. I guess it is hard to complain when you see games like Gears and Viva. If that is underusing the platform, well, I hope other 3rd parties start underusing it like that ;)
 
The 10MB buffer seems a fair compromise for performance and cost. The problem with UE3.0 is more Epic's fault. They're just dragging over their PC engine rather than creating a tiling-friendly engine, and MS can't say 'oi! You use tiling like we intended!' as UE3.0 is essential to the success of the platform. Thus Epic can ignore the eDRAM and produce non-tiling games, wasting the abilities of the machine somewhat - though I don't know if they use the eDRAM for other tasks.

What do you mean by ignoring the eDRAM :?:

If you're referring to the MSAA... isn't the deferred shadowing inherently costly with MSAA (independent of PTR) or is there a way to get around this?
 
At first I thought it was great that all of these features were going into the GPU, but then I realized that some of this stuff may never really be used in a game this gen. It would be a lot of potential going to waste, kind of like building the HDD into the console last gen, or barely seeing shaders put to good use in most titles last gen. If these features are hardly ever used, it would just prove the point of some that MS should have just spent the transistors on more shader power. Now, things that are in DX10, like stream-out, will eventually be used, but I don't think console devs are going to explore this on their own. They will likely wait for PC devs to exploit it before using it, and who knows how long that will take. I wonder if devs even asked for some of this stuff.

Have there been any accounts of the Xenos tessellator being used in a game?
 
What do you mean by ignoring the eDRAM
I mean designing their renderer to work optimally with the eDRAM via tiled rendering, which is what it was designed for. They'll still use the eDRAM for the back-buffer etc., but not really for its main purpose by design, and at the very least the MSAA ability is going unused.

It'd be nice to see a UE3.0 XB360 version, with a renderer built from scratch specifically for the platform (and a PS3 one too), so that the tools and games are portable, but the rendering is optimal.
 
I am excited to know that there is ample room to grow with Xenos once devs come to grips with it (just like Cell for PS3).
 
If you're referring to the MSAA... isn't the deferred shadowing inherently costly with MSAA (independent of PTR) or is there a way to get around this?

Yeah, it was discussed in this thread which in turn references even older threads.

The shadow approach in UE3 is not MSAA friendly, which has little to do with PTR (i.e. it would be as costly to implement on RSX).

Cheers
 
As far as I can tell, Xenos is able to resolve the individual AA samples in a render target when AA is applied - this means that deferred shading and AA are compatible in Xenos, as is also the case in D3D10.

It is only DX9's inability to allow the GPU to see AA samples individually (when the render target is viewed as a texture) that prevents deferred shading (e.g. in the PC version of GR:AW) from having AA.

Jawed
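A toy illustration of why per-sample access matters for deferred shading with MSAA: shading each G-buffer sample and then averaging the results preserves the edge, whereas if the API only hands you the already-resolved (averaged) G-buffer you end up lighting a blend of two surfaces. This is just a conceptual sketch in plain C++; the 2-sample pixel and the lighting function are made-up stand-ins, not any real API:

Code:
#include <cstdio>

struct Sample { float normalZ; float albedo; };   // toy G-buffer sample

// Toy "lighting": intensity depends on the normal's facing, scaled by albedo.
float light(const Sample& s)
{
    return s.albedo * (s.normalZ > 0.0f ? s.normalZ : 0.0f);
}

int main()
{
    // A silhouette-edge pixel with 2xAA: one sample faces the light, one away.
    Sample a = {  1.0f, 0.8f };
    Sample b = { -1.0f, 0.8f };

    // Shade-then-resolve (per-sample access available): light each sample,
    // then average the lit results.
    float perSample = 0.5f * (light(a) + light(b));

    // Resolve-then-shade (only the averaged G-buffer is visible): light an
    // averaged normal that belongs to neither surface.
    Sample resolved = { 0.5f * (a.normalZ + b.normalZ),
                        0.5f * (a.albedo + b.albedo) };
    float postResolve = light(resolved);

    std::printf("shade-then-resolve: %.2f, resolve-then-shade: %.2f\n",
                perSample, postResolve);
    return 0;
}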
 
I thought tiling takes place regardless at resolutions of 720p and above. What I don't understand is why post-processing can't be done to the frame regardless of what was done in eDRAM?
I think it's kinda amazing that developers are basically using Xenos as if it were a traditional GPU and getting such great results as Gears anyway...
 
As far as I can tell, Xenos is able to resolve the individual AA samples in a render target when AA is applied - this means that deferred shading and AA are compatible in Xenos, as is also the case in D3D10.

It is only DX9's inability to allow the GPU to see AA samples individually (when the render target is viewed as a texture) that prevents deferred shading (e.g. in the PC version of GR:AW) from having AA.

Jawed

I remember a quote from an MS rep that they will also bring DX10 (probably a subset of it) to the XB360 in time. Maybe that's the reason, instead of sticking to DX9 for the entire lifetime of the 360.
 

So essentially, devs are waiting for MS to update the devkits appropriately, and then we should be able to get that method of rendering with MSAA without the performance penalty?

(Hurry it up then, MS!)
 