So exactly how does the Xenos emulate a Physics Unit?

I read recently that it can sacrifice shader power to in effect, be a PPU for the X360.

Anyone have any idea how this works? Is this what memexport is used for? I read this on elitebastards.com:

Just prior to the release of this article, ATI senior architect Clay Taylor replied, confirming the physics processing potential of Xenos. Although specific workloads (i.e. physics-specific instructions) are not assignable to ALU arrays in a discrete manner, he confirmed it is entirely possible that: “Physics processing could be interleaved into the command stream and would use the percentage of the ALU core that the work required.” The ability to effectively scale the use of Xenos as both a PPU and GPU opens many creative doors for developers. I’m rather surprised that Microsoft isn’t touting this capability, as it was obviously intended by ATI. The physics processing ability will come at the cost of shading ability but the proportion is entirely at the developer’s discretion. I can think of many instances (e.g. indoor scenes) or even game style choices in which the powerful shading unit will have plenty of extra cycles to act as a PPU.

http://www.elitebastards.com/cms/index.php?option=com_content&task=view&id=20&Itemid=28


Thoughts?
 
Barnaby Jones said:
I read recently that it can sacrifice shader power to in effect, be a PPU for the X360.

Anyone have any idea how this works? Is this what memexport is used for? I read this on elitebastards.com:



http://www.elitebastards.com/cms/index.php?option=com_content&task=view&id=20&Itemid=28


Thoughts?

I'm sure it could act as a limited PPU, but at what cost? Sure, it has a decent share of shader power, but certainly not enough to spare a significant percentage for physics compared to the PS3, never mind PC GPUs for the next few years.
 
pjbliverpool said:
I'm sure it could act as a limited PPU, but at what cost? Sure, it has a decent share of shader power, but certainly not enough to spare a significant percentage for physics compared to the PS3, never mind PC GPUs for the next few years.


What makes the PS3 special? Both contain the same SDKs integrated via Gamebryo, and neither system has a dedicated PPU chip.
 
Faux Moderator steps in... ;)

Let's not mix the PS3 or PC into this in some lame attempt to start a system war here. Keep replies on-topic, thank you...
 
I'm guessing it's marketing buzz.

That said, Xenos and the R520/R580 have an advantage in their 'super multi mega thready zesty nature' engine. Simply put, they can be far more efficient working on smaller batches of more dynamic data, i.e. they can branch a lot in their shaders. And physics, generally, requires a hell of a lot of branching.
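To make the branching point concrete, here's a toy sketch (plain Python, not real shader code) of a per-particle physics step. The timestep and restitution values are made up for illustration. Each "if" below is a branch that only some elements take; on a GPU this makes threads in a batch diverge, and architectures that work on smaller batches (like Xenos and R5xx) pay less for that divergence than large-batch designs.

```python
# Toy per-particle physics step: gravity, ground collision, bounce.
# On a GPU, one thread would run this body per particle; the branches
# below diverge because only some particles hit the ground each frame.

DT = 0.016          # assumed 60 Hz timestep, seconds
GRAVITY = -9.8      # m/s^2
RESTITUTION = 0.5   # assumed energy kept on bounce

def step(particles):
    """particles: list of dicts with 'y' (height) and 'vy' (velocity)."""
    for p in particles:                 # one "GPU thread" per particle
        p["vy"] += GRAVITY * DT
        p["y"]  += p["vy"] * DT
        if p["y"] < 0.0:                # branch: only colliding particles
            p["y"] = 0.0                # clamp to the ground plane
            if p["vy"] < 0.0:
                p["vy"] = -p["vy"] * RESTITUTION  # branch: bounce
    return particles

parts = [{"y": 1.0, "vy": 0.0}, {"y": 0.001, "vy": -5.0}]
step(parts)  # second particle hits the ground and bounces; first doesn't
```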

This doesn't mean that a GeForce FX can't do physics, or a 7800 (and therefore the RSX too); it's just that what we know of those architectures makes them significantly less suitable for the task.

Naturally a dedicated PPU will still kick Xenos' butt.

It's most likely just a way of saying 'well, yeah, so they say Cell can do this little physics thing; well, so can we, but on the gfx chip, because we are spangly'. Yeah, well, great, but the short answer to that is physics is still damned hard to do, and even harder to do efficiently, so it will far more likely come down to the implementation than the actual hardware it's running on.


in one word:


markettttting
 
Griffith said:
there are different ways to obtain this, but I think the best will be via MEMEXPORT

Congratulations on reading the article.

In PCs I can understand doing physics on the GFX chip because the FP performance of the CPU sucks, but with Xenon and Cell, why offload it to the GFX chip? With MEMEXPORT the 360 has a very blurred distinction over what you use the CPU for, but pushing physics onto the GPU and moving mesh generation etc. onto the CPU seems a bizarre choice. Seems simpler to keep everything nicely where it's supposed to be in this case.

Why the article stresses this point seems ridiculous, because Cell and RSX will be able to do just the same (perhaps with a branching penalty requiring a bit of a re-think). Seems to be more idle speculation and extrapolation on what is known about the PS3 vs. the Xbox. Hopefully Sony will release more details and shut these stupid arguments up (because an nVidia conference call saying 'no royalties' implies 'PS3 is late', and Merrill Lynch really thinks the BOM will be $900 - please).
 
Kryton said:
In PCs I can understand doing physics on the GFX chip because the FP performance of the CPU sucks. But with Xenon and Cell why offload it to the GFX chip?

I doubt you'd want to do it with Cell, really, though RSX could be employed in this manner (Havok's FX libraries were originally developed on G70s). It's been suggested Xenon's capability in this regard is smaller, thus you see speculation about how Xenos could be used to help bridge the gap. Of course, bridging this gap with Xenos - if you could do that, which I'm not sure you could comprehensively do if GPU physics effects were limited to non-gameplay objects - would open a wider one elsewhere.

I agree it makes a hell of a lot more sense in PCs, but not necessarily for the same reason. Mostly because you have

a) SLI systems where you're not getting "full value" out of one card, which could be tasked with physics instead

b) graphics cards that are doubling in power pretty quickly, such that the higher end is often surplus to requirements for today's games - thus on the higher end you often have power to spare for things like physics etc., at least for a while.

BTW, you can't do "mesh generation" on Xenos or RSX. Xenos has a tessellator, but original mesh creation, if it's being done at runtime (procedurally or whatever), would happen on the CPU.
 
Titanio said:
BTW, you can't do "mesh generation" on Xenos
Why not? It can write arbitrary FP data to arbitrary memory locations.

or RSX. Xenos has a tessellator, but original mesh creation, if it's being done at runtime (procedurally or whatever) would happen on the CPU.
Xenos is explicitly designed to perform animation, skinning and programmable tessellation/HOS.

The design model for XB360 is very much CPU gives GPU the high-level geometry, and the GPU does the rest. It's crucial to getting the most out of multi-pass rendering techniques as it means the CPU is no longer involved in pushing data to the GPU for each pass, but merely feeds the GPU with the correct data extents and shaders.

Jawed
 
Titanio,

BTW, you can't do "mesh generation" on Xenos...

Xenos has a tesselator, but original mesh creation, if it's being done at runtime (procedurally or whatever) would happen on the CPU.

Why are you stating this as a "fact" when it's clear you really don't know? I mean, I'm fairly certain you don't have any proof of what you speak...
 
Jawed said:
Why not? It can write arbitrary FP data to arbitrary memory locations.


Xenos is explicitly designed to perform animation, skinning and programmable tessellation/HOS.

The design model for XB360 is very much CPU gives GPU the high-level geometry, and the GPU does the rest.

You can't create geometry out of nothing though, correct? At least that was my understanding. You take a coarse mesh and can work with it, tessellate it etc., but its geometry creation capability is less general than, say, a geometry shader or CPU, no?
 
A complex mesh is created in a graphics package and loaded in. But creation of a mesh only needs vertex data, and a simple mesh, say a cube, only takes a few vectors describing vertices and triangles. It should be possible for Xenos to produce some vertex data (just vectors) and pass that into its setup engine to render. A cube can be easily described in a vertex shader: MEMEXPORT the vector data, then channel that back into Xenos as a model for rendering.

That's one way Xenos can be used. Dunno about others, nor if output from the vertex pipes can be piped straight to the setup engine without needing to export to RAM. As to practical use of Xenos creating geometry, I don't know.
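As a CPU-side illustration of the point above, here's a minimal sketch (plain Python, function name made up) of the vertex data a cube actually needs. In the scheme Shifty describes, a vertex shader would compute vectors like these and MEMEXPORT them to RAM, from where they'd be fed back into Xenos as an ordinary model.

```python
# Procedurally generate a cube: 8 corner vectors plus 12 triangles of
# index data is the entire mesh. This is the kind of small, computable
# output a vertex shader could MEMEXPORT for a later rendering pass.

def make_cube(size=1.0):
    """Return (vertices, triangles) for an axis-aligned cube at the origin."""
    h = size / 2.0
    # 8 corners: every combination of +/- h on each axis
    vertices = [(x, y, z) for x in (-h, h) for y in (-h, h) for z in (-h, h)]
    # 12 triangles (two per face), as indices into the vertex list
    triangles = [
        (0, 1, 3), (0, 3, 2),  # -x face
        (4, 6, 7), (4, 7, 5),  # +x face
        (0, 4, 5), (0, 5, 1),  # -y face
        (2, 3, 7), (2, 7, 6),  # +y face
        (0, 2, 6), (0, 6, 4),  # -z face
        (1, 5, 7), (1, 7, 3),  # +z face
    ]
    return vertices, triangles

verts, tris = make_cube(2.0)
# Just 8 vertices and 12 triangles describe the whole cube.
```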
 
Shifty Geezer said:
But creation of a mesh only needs vertex data, and a simple mesh, say a cube, only takes a few vectors describing vertices and triangles. It should be possible for Xenos to produce some vertex data (just vectors) and pass that into its setup engine to render. A cube can be easily described in a vertex shader: MEMEXPORT the vector data, then channel that back into Xenos as a model for rendering.

The original vertex data, or where that comes from, is what I'm wondering about.

What's the deal with geometry shaders then? I thought they explicitly brought the capability to create and destroy geometry, relatively generally.
 
Titanio said:
The original vertex data, or where that comes from, is what I'm wondering about.

What's the deal with geometry shaders then? I thought they explicitly brought the capability to create and destroy geometry, relatively generally.

I still don't understand the point you want to make. Are you trying to say it's not possible to procedurally generate geometry on Xenos?
 
http://www.ati.com/developer/gdc/Tatarchuk_Data_Amplification.pdf

Page 45 and then read both backwards and forwards.

That's how SM2 can be made to handle geometry creation. It renders intermediate triangles, upon which pixel shaders are executed that actually generate the required geometry (vertex buffers) via a tricksy render target (i.e. "render to vertex buffer"). The triangle's (or "screen-covering quad's") input parameters and textures encode the source data for the geometry creation. The geometry is created as pixels (e.g. floating point), and locality in the 2D render target encodes the 1D or 2D data formats that are re-interpreted as vertex data for further rendering.
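To make the "render to vertex buffer" trick easier to picture, here's a toy model in plain Python (the function and names are illustrative, not any real API): a "pixel shader" writes one vertex per pixel of a float render target, and the 2D target is then reinterpreted as a flat vertex buffer for the next pass.

```python
# Toy model of SM2-style geometry creation via a render target:
# each pixel of a W x H float target holds one generated vertex, and
# the pixel's 2D position encodes its linear index in the vertex buffer.

W, H = 4, 2   # render-target dimensions; W * H = 8 output vertices

def geometry_pixel_shader(u, v):
    """Each 'pixel' computes one vertex from its 2D coordinate."""
    i = v * W + u                            # index encoded by locality
    return (float(i), float(i) * 0.5, 0.0)   # placeholder procedural position

# "Render" the screen-covering quad: run the shader once per pixel.
render_target = [[geometry_pixel_shader(u, v) for u in range(W)]
                 for v in range(H)]

# Reinterpret the 2D target row-by-row as a 1D vertex buffer.
vertex_buffer = [px for row in render_target for px in row]
```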

In Xenos you have memexport available directly in vertex shaders, so there's no need to go round the houses with pixel shaders performing geometry creation and writing the required data to memory.

In D3D10 you have streamout which has instructions for the direct writing of geometry in RAM.

In all these cases, as I hinted earlier, the geometry is generally going to be required for more than one rendering pass. On that basis there's little point in trying to architect geometry shading as a process that's internal to the GPU. The data might as well reside in RAM, because it'll be too large to hold within the GPU for the duration of the rendering passes it's required for. (Current SM3 GPUs only hold ~20-50 vertices internally, after they've been vertex shaded, before they're assembled into triangles for scan conversion and pixel shading.)
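The multi-pass argument above can be sketched as follows (plain Python; the pass names and functions are stand-ins, not a real pipeline): one generation step writes the geometry to RAM once, and several rendering passes then read the same buffer, which is why keeping it inside the GPU wouldn't help.

```python
# Sketch: geometry generated once (via memexport / streamout /
# render-to-vertex-buffer) lives in RAM and is reused by every pass.

def generate_geometry(n):
    """Stand-in for a generation pass that writes n vertices to RAM."""
    return [(float(i), 0.0, 0.0) for i in range(n)]

def render_pass(name, vertex_buffer):
    """Stand-in for one rendering pass consuming the shared buffer."""
    return f"{name}: {len(vertex_buffer)} vertices"

# Far more vertices than the ~20-50 a GPU holds internally post-shading,
# so the buffer has to live in RAM between passes anyway.
buffer_in_ram = generate_geometry(10_000)

passes = [render_pass(p, buffer_in_ram)
          for p in ("shadow", "z-prepass", "final lighting")]
```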

So in XB360 the CPU will pull model and scene data together, send the geometry that requires instancing, animation, skinning, tessellating and shadowing to the GPU. Then when that's all done it can ask the GPU to assemble the entire scene with the final lighting and texturing.

---

As for physics on the GPU, you should watch the Toyshop demo which has a whole wealth of water physics effects generated solely by R5xx:

http://www2.ati.com/multimedia/radeonx1k/ATI-Demo-ToyShop-v1.1.zip - 91MB

There are lower-quality versions of the video on:

http://www.ati.com/designpartners/media/edudemos/RadeonX1k.html

Discussion including links to slides explaining what's going on:

http://www.beyond3d.com/forum/showthread.php?t=24254

Jawed
 
Thanks for the info, Jawed - I wasn't aware of approaches like that to geometry creation in existing shader models. Interesting, thank you!
 