Balancing Work between Xenon & Xenos

Rangers

We hear so much about Cell helping RSX... is there anything XCPU can do likewise?

I mean, you don't hear anything about that. But if I recall the raw FLOPS numbers correctly, Xenon/XCPU had around half the FLOPS of Cell, so it wouldn't seem to be totally helpless.

I also know that a few programmers have mentioned XCPU and Cell can be a bit alike, because XCPU can run 6 threads, which is somewhat comparable to Cell's 6 available SPUs in a very general way.

Now, I have a feeling one answer I'll get is that you wouldn't want to use XCPU for graphics because it would take away from AI, physics, and other general CPU duties. However, that's not really the answer I'm after; it's more the hypothetical "what can be done" with XCPU if you set out to squeeze every last ounce of graphics out of the 360, like Sony first-party programmers seem adept at doing with the PS3.

One thing I'm wondering right off: I know the bandwidth setup on the 360 is much different; does that kill this idea?
 
I don't think two threads can run on a VMX unit at once. So it's at best 3 integer threads + 3 floating-point threads vs. 6 "threads" (integer or float). And SPUs are better (no load-hit-store penalty, lower latency to local memory, more bandwidth, etc.).
It would be nice anyway to see how many triangles a VMX unit can cull per second vs. an SPU.
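
For illustration only, here is a rough sketch (not from anyone in this thread) of what per-triangle culling on a CPU core might look like. It's plain scalar C++ for readability; a real implementation would use VMX128 or SPU SIMD intrinsics and stream vertex batches, and every type and function name below is invented for the example.

```cpp
// Hypothetical sketch of CPU-side triangle culling, the kind of work a VMX
// unit or an SPU could take off the GPU. Scalar C++ for clarity only.
#include <cstddef>
#include <cstdint>

struct Vec4 { float x, y, z, w; };   // clip-space vertex position

// Returns true if the triangle can be dropped (outside the frustum or back-facing).
static bool CullTriangle(const Vec4& a, const Vec4& b, const Vec4& c)
{
    // Trivial frustum reject: all three vertices outside the same clip plane.
    auto outside = [](const Vec4& v, int plane) {
        switch (plane) {
            case 0: return v.x < -v.w;  case 1: return v.x > v.w;
            case 2: return v.y < -v.w;  case 3: return v.y > v.w;
            case 4: return v.z <  0.f;  case 5: return v.z > v.w;
        }
        return false;
    };
    for (int p = 0; p < 6; ++p)
        if (outside(a, p) && outside(b, p) && outside(c, p))
            return true;

    // Triangles straddling the near plane are left for the GPU (conservative).
    if (a.w <= 0.f || b.w <= 0.f || c.w <= 0.f)
        return false;

    // Back-face test via signed area after the perspective divide
    // (assumes counter-clockwise front faces).
    float ax = a.x / a.w, ay = a.y / a.w;
    float bx = b.x / b.w, by = b.y / b.w;
    float cx = c.x / c.w, cy = c.y / c.w;
    float area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
    return area <= 0.0f;
}

// Compacts an index buffer so the GPU only sees triangles that survived culling.
size_t CullIndexBuffer(const Vec4* clipPos, const uint16_t* in, size_t indexCount,
                       uint16_t* out)
{
    size_t written = 0;
    for (size_t i = 0; i + 2 < indexCount; i += 3) {
        if (!CullTriangle(clipPos[in[i]], clipPos[in[i + 1]], clipPos[in[i + 2]])) {
            out[written++] = in[i];
            out[written++] = in[i + 1];
            out[written++] = in[i + 2];
        }
    }
    return written;   // the GPU draws only this many indices
}
```

The point of the exercise is the same on either CPU: shrink the index buffer before the GPU ever sees it, so Xenos or RSX doesn't spend vertex and setup time on triangles that can't contribute pixels.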
 
It's my impression that Cell is being used for some graphics tasks partly because RSX isn't nearly as spiffy as Xenos. That GeForce 7 heritage and all.
 
The real question Xbox fans should be asking is when the tessellator will start to get used.

I think only Viva Piñata has used it so far, and to me that's a lot of untapped potential.
 
Of course there is; search for T.B.'s post on the Sacred 2 deferred renderer around here.

Halo Wars used the tessellator in a big way; look for the lecture on its terrain system from this year's GDC.
 
Of course there is; search for T.B.'s post on the Sacred 2 deferred renderer around here.

Halo Wars used the tessellator in a big way; look for the lecture on its terrain system from this year's GDC.
I'm not sure we're speaking of the same thing. I haven't searched for T.B.'s posts, but if my memory is right he spoke of the GPU mostly. Most of the issues they had with tearing and framerate were CPU-related, because they didn't focus enough on it and the model they chose was old (development has been pretty long). I don't understand the relevance of Halo Wars either. They use the tessellator, and they run a low-res simulation on Xenon and Cell (as DICE does), but I don't think that is what Rangers is asking.
My understanding of his question is: "Would it make sense to offload some rendering/graphics work from Xenos to Xenon?" Basically, using Xenon in the same way as the SPUs.
Would it make sense to do some triangle culling on Xenon?
Would it make sense to do some post-processing on Xenon?
etc.
 
The Halo Wars mention was in reply to somebody saying something along the lines of "Xenon exclusives will benefit from the tessellator; is anybody using it after Viva Piñata?" - that post seems to have disappeared.

T.B. spoke of classifying the screen into tiles to determine which lights affect which tiles. This is where you use the CPU to reduce the GPU load.

I don't think it would make sense to do triangle culling on the CPU; postprocessing is a bit more likely, if you come up with a postprocessing step for which the GPU is pathologically ill-suited (e.g. histogram building).
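
As a concrete, purely hypothetical illustration of what "pathologically ill-suited" means here: building a luminance histogram needs millions of scattered increments, which the CPU does trivially once the framebuffer has been resolved to main memory. A minimal sketch, assuming an RGBA8 layout; all names are invented:

```cpp
// Hypothetical example of a postprocessing step that is awkward on a GPU of
// that era but trivial on a CPU: a luminance histogram built from a
// framebuffer already resolved to main memory. The RGBA8 layout is an assumption.
#include <cstdint>
#include <cstring>

static const int kBins = 256;

void BuildLuminanceHistogram(const uint32_t* pixels,   // resolved framebuffer
                             int width, int height,
                             uint32_t histogram[kBins])
{
    std::memset(histogram, 0, kBins * sizeof(uint32_t));
    for (int i = 0, n = width * height; i < n; ++i) {
        uint32_t p = pixels[i];
        uint32_t r = (p >> 0)  & 0xFF;
        uint32_t g = (p >> 8)  & 0xFF;
        uint32_t b = (p >> 16) & 0xFF;
        // Integer approximation of Rec. 601 luma; millions of scattered
        // increments like this are exactly what GPUs of the time did badly.
        uint32_t luma = (77 * r + 150 * g + 29 * b) >> 8;
        ++histogram[luma];
    }
}
```

The CPU could then derive exposure or tone-mapping parameters from the histogram and hand just those few numbers back to the GPU.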
 
The Halo Wars mention was in reply to somebody saying something along the lines of "Xenon exclusives will benefit from the tessellator; is anybody using it after Viva Piñata?" - that post seems to have disappeared.
Indeed, I was lost.
T.B. spoke of classifying the screen into tiles to determine which lights affect which tiles. This is where you use the CPU to reduce the GPU load.

I don't think it would make sense to do triangle culling on the CPU; postprocessing is a bit more likely, if you come up with a postprocessing step for which the GPU is pathologically ill-suited (e.g. histogram building).
Thanks for the insight ;)
I'll browse T.B.'s post history; it seems it's a bit too old for my memory.
 
postprocessing is a bit more likely, if you come up with a postprocessing step for which the GPU is pathologically ill-suited (e.g. histogram building).
Does XCPU have direct access to the framebuffer eDRAM in the 360, or would you need to dump out the entire framebuffer to memory first, then read it back in with the GPU to do the actual tone-mapping? That'd lead to a lot of bandwidth getting sucked up... Or is that not how it works? :D
 
Yes, that's how it works; the GPU can't read from eDRAM (except for the "reading" part of alpha blending, of course), so you regularly "dump out the entire framebuffer to memory" (usually called a "resolve") during different stages of the postprocessing, even when doing it entirely on the GPU. These resolves are half the cost of postprocessing for us. If at some point you want to do one of the stages on the CPU, you'll actually save one resolve.
 
Thanks for the explanation!

Seems a relatively cruddy way of doing things. Surely it would have been smarter if ATI had engineered the GPU with a read-back capability? Considering the 360's total main memory bandwidth is only around 22.4 GB/s, how many such 'resolves' can you really afford when the CPU surely needs some of that same bandwidth...
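
For a rough sense of scale (my own back-of-the-envelope numbers, not from anyone in the thread), assuming a 1280x720 RGBA8 target and the commonly quoted ~22.4 GB/s of main memory bandwidth, one resolve moves only a few megabytes:

```cpp
// Back-of-the-envelope cost of a single resolve under assumed numbers:
// a 1280x720 RGBA8 target and ~22.4 GB/s of main memory bandwidth.
#include <cstdio>

int main()
{
    const double bytesPerPixel = 4.0;
    const double pixels        = 1280.0 * 720.0;
    const double resolveBytes  = pixels * bytesPerPixel;   // one eDRAM -> RAM copy
    const double mainMemBw     = 22.4e9;                   // bytes per second (assumed)
    const double frameBudget   = mainMemBw / 30.0;         // bytes available per 30 fps frame

    std::printf("one resolve        : %.1f MB\n", resolveBytes / 1e6);   // ~3.7 MB
    std::printf("per-frame bandwidth: %.1f MB\n", frameBudget / 1e6);    // ~747 MB
    std::printf("share of budget    : %.2f%%\n",
                100.0 * resolveBytes / frameBudget);                     // ~0.5%
    return 0;
}
```

So raw bandwidth probably isn't the limiter by itself; the bigger cost is presumably the GPU time spent doing the copy and the synchronization around it, which would explain the "half the cost of postprocessing" figure above.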
 
Of course there is; search for T.B.'s post on the Sacred 2 deferred renderer around here.

I think I just hinted at that during the DigitalFoundry interview. :)
But yes, we used the CPU on both platforms to classify screen tiles based on light influence (i.e. detect which light-sources affect which 8x8 pixel tiles). Now on the PS3, we can basically go crazy: grab the z-buffer, reconstruct screen-space coordinates and determine for each pixel which light sources influence it. This gives us a really nice "shader culling", so to speak. On the Xbox, we take the corners of a tile, shoot rays through them and see which light-volumes they interact with.

The Xbox algorithm works well enough with our scenes, but sometimes the difference is very pronounced.

So yes, you can use the CPU to accelerate rendering. It's really not specific to cell or anything and people have been doing it forever. That said, both platforms have their issues when it comes to implementing it. The Xbox is a little weak on the processing and bandwidth side, but provides easy access to "VRAM", while the PS3 is the exact opposite.
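
To make the Xbox-side approach described above a bit more concrete, here is a hedged sketch of per-tile light classification on the CPU. Instead of literal ray tests it uses an equivalent formulation: test each sphere-shaped light volume against the small frustum spanned by the rays through a tile's corners. Every name and type is invented for the example.

```cpp
// Hedged sketch of per-tile light classification on the CPU, in the spirit of
// the tile/light classification described above. All names are invented.
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };                    // dot(n, p) + d >= 0 means "inside"
struct PointLight { Vec3 position; float radius; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Conservative sphere-vs-frustum test: a light is rejected only if it lies
// entirely behind one of the tile frustum's planes.
static bool LightTouchesTile(const Plane* planes, int planeCount, const PointLight& l)
{
    for (int i = 0; i < planeCount; ++i)
        if (Dot(planes[i].n, l.position) + planes[i].d < -l.radius)
            return false;
    return true;
}

// For one 8x8 pixel tile, given the planes built from the rays through its
// corners, collect the indices of all lights that can influence it.
void ClassifyTile(const Plane* tilePlanes, int planeCount,
                  const std::vector<PointLight>& lights,
                  std::vector<int>& outLightIndices)
{
    outLightIndices.clear();
    for (int i = 0; i < (int)lights.size(); ++i)
        if (LightTouchesTile(tilePlanes, planeCount, lights[i]))
            outLightIndices.push_back(i);
}
```

The output per tile is just a short light list, which is what lets the pixel shader skip lights that can't possibly affect that 8x8 block.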
 
The real question Xbox fans should be asking is when the tessellator will start to get used.

I think only Viva Piñata has used it so far, and to me that's a lot of untapped potential.

AFAIK Halo Wars uses some tessellation features to render terrain too, but I may be wrong.
 
What does the tessellator on Xenos have to do with Xenon? Talk about taking a thread off topic.

The reason I brought up the tessellator in this thread was because it's something barely being used that can take a load off the CPU, and with that, off the GPU/bus/RAM. It can free up other resources to help with graphics.
 
The reason I brought up the tessellator in this thread was because it's something barely being used that can take a load off the CPU, and with that, off the GPU/bus/RAM. It can free up other resources to help with graphics.

HUH?!?!?!?

How does the use of the tessellator take any load whatsoever off the CPU/GPU?

By tessellating your geometric meshes, aren't you in effect creating more polygons?... More polys that will need shading?

Surely the tessellator will increase GPU vertex load more than anything, hence maybe that's the reason it isn't used much...?

You really need to explain how using the tessellator can help with anything, because as far as I understand how tessellation works, it would likely decrease performance overall.
 