Xbox360 CPU for "animation, post fx, HDR and so on"

Titanio said:
I wouldn't take it as absolutely certain that the CPU is doing those things in the MGS4 demo, despite Kojima's comment, as some devs have a habit of referring to "CPU power" as meaning the whole system, non-specifically. I know Kojima is a bit more technically minded, but in the context of the unfinished dev kits it would also seem to make more sense that way. Same reason I'm wary of what the GRAW designer was saying here.
Those are my thoughts.
 
ERP said:
Both systems are completely capable of doing this, although it would be a bit painful right now on the PS3 devkits. The question is: is it the best use of the available resources? If your game is simple and you're massively underutilising the CPU, moving an operation from GPU to CPU could be a win, as long as that operation can be performed in parallel, no matter how much faster the GPU can do it. It's not quite that simple, since you're going to bang the bus a lot with the CPU while you're doing it, but one pass over the screen is unlikely to be a killer.

A lot of what you choose to do on the CPU depends on the focus of your game: if it's pretty graphics, there are ways to use your CPU there; if it's physics, you'll probably try to move stuff from CPU to GPU.

The thing to remember about multiprocessor systems (and a GPU is just a very specialised processor) is that the fastest place to do something doesn't necessarily equate to the best place to do something in a given application.

When you say a 'filter', do you mean things like giving the image a sepia color palette, or adding film effects like grain/dirt/etc.? That's how I interpreted it, and it seems that GRAW does use this 'dirty film' look from time to time, kind of like the movie 'Traffic'.
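
To make that concrete, here's a minimal sketch (plain C with pthreads; all names are hypothetical, not from any actual engine) of the kind of full-screen filter a spare CPU core or two could run over the back buffer: a sepia transform split into horizontal bands, one per worker thread, which satisfies the "performed in parallel" condition ERP mentions above.

Code:
#include <pthread.h>
#include <stdint.h>
#include <stddef.h>

/* Sepia-tone a 32-bit RGBA back buffer on the CPU. Each worker thread
   filters one horizontal band of the image, so the pass scales across
   however many cores the game has to spare. */

typedef struct {
    uint8_t *pixels;   /* RGBA8 framebuffer copy in system RAM */
    int      width;
    int      y0, y1;   /* rows [y0, y1) handled by this thread */
} FilterJob;

static uint8_t clamp255(int v) { return (uint8_t)(v > 255 ? 255 : v); }

static void *sepia_band(void *arg)
{
    FilterJob *job = (FilterJob *)arg;
    for (int y = job->y0; y < job->y1; ++y) {
        uint8_t *p = job->pixels + (size_t)y * job->width * 4;
        for (int x = 0; x < job->width; ++x, p += 4) {
            int r = p[0], g = p[1], b = p[2];
            /* classic sepia weights, in fixed point */
            p[0] = clamp255((r * 393 + g * 769 + b * 189) / 1000);
            p[1] = clamp255((r * 349 + g * 686 + b * 168) / 1000);
            p[2] = clamp255((r * 272 + g * 534 + b * 131) / 1000);
        }
    }
    return NULL;
}

/* Split the frame into one band per thread and wait for them all. */
void sepia_filter(uint8_t *pixels, int width, int height, int nthreads)
{
    pthread_t tid[8];
    FilterJob job[8];
    if (nthreads > 8) nthreads = 8;
    for (int i = 0; i < nthreads; ++i) {
        job[i] = (FilterJob){ pixels, width,
                              height * i / nthreads,
                              height * (i + 1) / nthreads };
        pthread_create(&tid[i], NULL, sepia_band, &job[i]);
    }
    for (int i = 0; i < nthreads; ++i)
        pthread_join(tid[i], NULL);
}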
 
It seems true that one of the XB360 CPU cores is dedicated to graphics processing.

The game is across all formats including the Xbox 360. What will be the differences beyond the obvious visual enhancements: is the Xbox 360 version based on entirely new technology, or are there restrictions imposed by the current-gen versions? To conclude, please tell us a little about how you've managed to make GR3 look so impressive.

To clarify for the X360 development: GRAW will have no limits as a result of the old-gen consoles, as the engine is built from the ground up. All assets are created especially for the next-gen platforms. This translates into larger textures, more shaders and more animation blending, for starters!

But we do not just have more power visually; we are very focused on the immersive and gameplay improvements that the next generation allows us to achieve, such as sound, physics, AI and so on. Effectively we now have 6 'virtual' processors and we use two of them for the visual content of the game. That leaves us 4 to do a whole lot of very cool stuff.

http://www.totalvideogames.com/arti..._Warfighter_Feature_8658_4595_0_0_0_20_40.htm



Also from the same interview, they say that HDR and Post rendering FX are what they're most proud of when it comes to the visuals.

GRAW has become a signature title for the Xbox 360; what do you feel are the game’s most significant visual tricks?

Personally I think the use of Post rendering FX and High Dynamic Range will give the player a new sense of realism and effectively a more immersive feel of the battlefield. With these visual treatments, you really feel the heat of the Mexican sun, and the intensity of the urban battlefield.
 
LunchBox said:
I think it's quite possible for the processor to be used to aid the GPU...

I vaguely recall a discussion where it was stated that, aside from the Xenos being able to lock and use the CPU cache, it can also lock and use one or two of the cores...

Don't quote me on that as my memory is hazy...

Does anyone remember that thread?

Or am I mixing it up with the RSX/Cell SPU thing?

Algebraic Ring posted the Hot Chips slides a while back that mentioned that stuff.

The GPU can read directly from the CPU Level 2 cache.

The CPU includes D3D compressed data formats so that it can share data with the GPU while effectively doubling the bandwidth between them.

http://www.beyond3d.com/forum/showthread.php?t=21587&page=2


Here is an article from Electronic Design discussing the Hot Chips info.

The CPUs can provide high-bandwidth data streaming support with minimal cache thrashing by using a 128-byte cache line size. A tight data streaming capability between the CPU and the graphics processor unit (GPU) is also available. The GPU can read 128 bytes at a time from the L2 cache and it provides low-latency cacheable writebacks to the CPU. The GPU also shares Direct3D (D3D) compressed data formats with the CPU to at least double bus bandwidth for graphics data.

http://www.elecdesign.com/Articles/Index.cfm?AD=1&AD=1&ArticleID=11014


So it seems likely that GRAW has one of the CPU cores slaved to the Xenos GPU to aid in graphics processing.
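
As a rough illustration of the compressed-format point (the format choice and names here are mine, not the actual Xenos path): packing a float4 vertex attribute into 16-bit signed-normalised integers, in the style of D3D9's D3DDECLTYPE_SHORT4N, halves the bytes that have to cross the bus compared with raw 32-bit floats, which is where the "at least double bus bandwidth" figure comes from.

Code:
#include <stdint.h>

/* Compress float4 vertex attributes into four 16-bit signed-normalised
   integers (the D3DDECLTYPE_SHORT4N idea): 16 bytes of float data
   become 8 bytes on the bus, and the GPU-side decode is just a scale. */

static int16_t pack_snorm16(float v)
{
    if (v >  1.0f) v =  1.0f;   /* clamp to the representable range */
    if (v < -1.0f) v = -1.0f;
    return (int16_t)(v * 32767.0f);
}

static float unpack_snorm16(int16_t v)
{
    return (float)v / 32767.0f;
}

/* Pack `count` float4 attributes from `in` into `out`. */
void pack_vertices(const float *in, int16_t *out, int count)
{
    for (int i = 0; i < count * 4; ++i)
        out[i] = pack_snorm16(in[i]);
}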
 
Brimstone said:
It seems true that one of the XB360 CPU cores is dedicated to graphics processing.

It's completely up to the developer how they want to configure the cores, so in other games they could set it up differently.
 
Brimstone said:
The CPU includes D3D compressed data formats so that it can share data with the GPU while effectively doubling the bandwidth between them.
This is the same silly stuff Nintendo paraded off on GCN. The CPU includes instructions for scalar quantization of data, something many SIMDs have done over the years without PRing a pie in the sky about it.
But then again, if IBM has the balls to parade off the "preferred slot paradigm" -_- , I guess I shouldn't be surprised.

Anyway, Gekko's approach was still the best as far as CPUs go - compression/decompression was embedded as part of the load/store instructions - which, especially on an in-order CPU, can make a great difference.
 
aaaaa00 said:
It's completely up to the developer how they want to configure the cores, so in other games they could set it up differently.

I should have been more specific: the statement was targeted at those in this thread who were unsure about the GRAW developers utilizing one of the CPU cores to assist with rendering graphics. Yes, they could have used the core for other things; as you said, it's up to the developers to configure the cores for whatever type of workload they want them to perform.


Fafalada said:
Anyway, Gekko's approach was still the best as far as CPUs go - compression/decompression was embedded as part of the load/store instructions - which, especially on an in-order CPU, can make a great difference.

Jeff Brown, the Chief Engineer of the XB360 CPU, mentioned some of the tweaks to the CPU cores in his blog.

VMX 128
We developed a Microsoft unique implementation of VMX called VMX128 which focused on improving graphics, game physics, and artificial intelligence.

Power management within the FPU / VMX128 units is especially valuable as it is rare that all three cores would be running threads with active numeric computation.

We implemented a Delayed Execution Issue Queue which reduces the effective load latency to 2 cycles vs. 8-10 cycles without it. There are separate load target buffers for the FPU and VMX128 units that essentially enable out-of-order FP/VMX execution relative to loads and stores.

We made a number of architectural changes to the VMX unit when we created VMX128. We extended the number of vector registers from 32 to 128. All 128 registers are directly addressable, and the original 32 registers are mapped to the first 32 entries of the 128-entry vector register file. We also added a number of instructions:

• Floating-point dot-product instructions supporting 3-vectors and 4-vectors
• Permute-class instructions for rotate and insert operations
• Pack/unpack instructions for converting Direct3D data types to/from single-precision FP format
• Storage access instructions to improve access to misaligned data

http://gametomorrow.com/blog/index....etails-described-at-mpr-fall-processor-forum/
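
For a sense of what the dot-product instructions buy: on plain VMX a 4-vector dot product takes a vector multiply plus a cross-lane reduction (permutes and adds), while VMX128 does the whole thing in one instruction. A scalar C sketch of the operation being fused (the example values are hypothetical):

Code:
#include <stdio.h>

/* The operation a single VMX128 4-vector dot-product instruction
   performs; on plain VMX this costs a multiply followed by a chain
   of permutes and adds to sum across the register. */
static float dot4(const float a[4], const float b[4])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

int main(void)
{
    /* typical game use: signed distance of a point from a plane,
       the inner loop of frustum culling and collision tests */
    float plane[4] = { 0.0f, 1.0f, 0.0f, -2.0f };  /* the plane y = 2 */
    float point[4] = { 5.0f, 3.5f, 1.0f,  1.0f };  /* w = 1 */
    printf("distance = %f\n", dot4(plane, point)); /* prints 1.5 */
    return 0;
}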

Why is Gekko's approach better than "Microsoft VMX128"?
 
Brimstone said:
Why is Gekko's approach better than "Microsoft VMX128"?
I was referring specifically to built-in quantization instructions, nothing else (although there are some other things I don't like about VMX period, but I've made that clear before).
Anyway here:

On Gekko:
LoadCompressed
...do some math
StoreCompressed

On VMX128:
Load
Decompress
...do some math
Compress
Store
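
Rendered as C for illustration (the 8-bit fixed-point format is invented for the example; Gekko's real mechanism is its paired-single quantized load/store instructions, psq_l / psq_st, which perform the conversion during the memory access itself):

Code:
#include <stdint.h>

/* The same inner loop, annotated with which steps cost a separate
   instruction on each CPU. Data is 8-bit unsigned fixed point. */
void scale_quantized(uint8_t *data, int n, float gain)
{
    for (int i = 0; i < n; ++i) {
        /* Gekko: one psq_l does the load AND this conversion.
           VMX128: a load, then a separate unpack instruction. */
        float x = data[i] * (1.0f / 255.0f);

        x *= gain;  /* ...do some math */

        /* Gekko: one psq_st does this conversion AND the store.
           VMX128: a separate pack instruction, then a store. */
        float q = x * 255.0f + 0.5f;
        data[i] = (uint8_t)(q < 0.0f ? 0.0f : q > 255.0f ? 255.0f : q);
    }
}

Two extra instructions per element may sound small, but in a hot loop on an in-order core they sit right on the load-use path, which is the difference being pointed out here.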
 
Alstrong said:
I vaguely recall Dave saying something about it...

yeah...

I can't recall for the life of me where I read it...

but I know I read it somewhere...

Brimstone said:
Algebraic Ring posted the Hot Chips slides a while back that mentioned that stuff. [...] So it seems likely that GRAW has one of the CPU cores slaved to the Xenos GPU to aid in graphics processing.

Thank you for confirming :)


_________________________________________________
on a side note... edit function rocks!!! :)
 
No prob. I was just trying to remember which words he definitely used in the post before searching. :)
 
Probably just postprocessing the output buffer to add bloom from HDR. A single pass shouldn't be expensive. Perhaps it facilitates doing other post-processing in the same pass.

Anyway, it seems more likely that an element of HDR is done on the CPU simply out of convenience, but utilizes the multiple cores.
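
For reference, a minimal sketch (plain C; the buffer layout and names are assumptions) of the bright-pass step such a CPU bloom pass would start with: keep only the energy above a threshold in the HDR buffer, then blur the result and add it back over the tonemapped frame.

Code:
#include <stddef.h>

/* Bright pass over a linear HDR buffer (interleaved RGB floats).
   Pixels brighter than `threshold` contribute to the bloom term;
   everything else is zeroed. The bloom buffer would then be
   blurred and composited back onto the tonemapped frame. */
void bright_pass(const float *hdr, float *bloom,
                 size_t pixel_count, float threshold)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        const float *p = hdr + i * 3;
        /* Rec. 601 luma approximation of pixel brightness */
        float luma = 0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2];
        float glow = (luma > threshold) ? (luma - threshold) / luma : 0.0f;
        bloom[i * 3 + 0] = p[0] * glow;
        bloom[i * 3 + 1] = p[1] * glow;
        bloom[i * 3 + 2] = p[2] * glow;
    }
}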
 