Optimizations on Xbox 360

Btw, can anyone tell me approximately how much better Xenos is vs. Hollywood?
Very.

'Better' is a relative term. Typically we use flops as a ballpark, but with an improved architecture you can get more out of the same maths. So where a new GPU may have 2x the flops of an older model, it may be able to do 3x as much on screen. And beyond the flops, you have other aspects like how many pixels you can draw or how many texels you can read. With these various aspects to a GPU, do you consider the GPU's overall performance the aggregate of them? Or the lowest value, as that's the weakest part of the design bottlenecking the rest?

Basically, your question as presented cannot be answered - there is no 'Xenos is n times better than Hollywood'. It's superior in every way, and applying a single metric to that is impossible. We can't even settle on a way to measure system bandwidth, let alone the whole GPU system! In the real world, GPU comparisons are done with benchmarks. You could try MW3?

If you really, truly want numbers to compare the two instead of looking at the screen, you'll have to ask for specific metrics such as flops, bandwidth... ah, just don't! Just look at pictures and see how much better Xenos is than Hollywood!
 

Ah, thanks. I knew it was more complicated than 'x times better', but I wondered if someone with the know-how had thought about it that way before, haha.

Yes, I can see how much better Xenos is :p MW3 on Wii is a chopped-to-hell port though; it would run on a PS2 like that. I'd say Eurocom did the best work with realistic graphics on Wii.
 
Both COD Black Ops and MW3 were actually really well done Wii ports. Considering just how dated the Wii hardware was compared to the 360/PS3, it was quite the accomplishment for Treyarch really. Both games ran a mostly constant 30fps in multiplayer, with host lag being the exception: if you were the host, your framerate felt like it was in the teens. While the games certainly look rough on Wii compared to 360/PS3, they were remarkably intact, and the best shooters on Wii. Goldeneye had a nicer look to it, but COD had the more consistent framerate and the more complete online multiplayer. Playing the jungle map in Black Ops for the first time was actually amazing to me; I honestly couldn't believe it was running on Wii. The amount of foliage was something I hadn't seen in a Wii game, and it looked pretty darn good considering. The last time I was that impressed with Wii-level hardware was Black on PS2, which I still consider to be the best-looking game of that gen.
 
The best port was actually Reflex; it was the only port that was fully intact and not missing anything. Or if it was missing a map or an easter egg, it got something in return.

And it didn't look THAT far away from CoD4, because MW kind of had one foot stuck in last gen. The other games are missing substantial content; they took their time with Reflex though. Multiplayer, sure, the COD games were better than GoldenEye, but I've had frame drops in the COD games too.

Wii isn't PS2-level hardware though; it's over twice as strong. Black has simple, few-frame animations, typical PS2-quality textures and geometry, and GoldenEye-quality sound. Never understood the gawking over that game.
 
Black has simple, few-frame animations, typical PS2-quality textures and geometry, and GoldenEye-quality sound. Never understood the gawking over that game.

Me neither... Always felt like there were dozens of more impressive games before it.
 
Halo 3 to 4 is just insane
And far from a strict improvement. It's a great illustration of the kind of compromises you get as a generation progresses.

Some of it is industry-wide stuff that compromises apparent quality a little for massive throughput gains, like half-resolution transparencies.
On the other hand, a lot of stuff was just cut down, like Bungie's area specular that directionally modulated area cubemaps according to the lightmap, so that objects wouldn't have specular reflections from nowhere. Halo 4 also has the most feature-limited dynamic lights in the entire series, even counting the original Xbox games; dynamic lights can't cast specular reflections, and there are no spotlights.
There are other compromises that are more oblique, like the greater geometry density seemingly doing poorly with split-screen mode.

The amount of real, unqualified "optimization allows us to do this much more than before" that happens over a generation tends to be a lot smaller than this thread makes it out to be, I think.

On PS3, almost all of the hard work was done on Cell.
This is false. PS3 did have a "CPU" which was unusually capable in patterned parallel computations. While RSX was perhaps not quite as good as it should have been, it was still a viable 2006 GPU. Cell was not.

Saying that the CPU did a greater chunk of the work than normal doesn't mean that the GPU wasn't doing a ton of the work.
 
I'm curious why the Xbox 360 needed such high EDRAM bandwidth. It's 32 GB/s between the main die and the EDRAM die, and 256 GB/s inside the EDRAM die. I understand the difference is so big because of MSAA, but why is such high bandwidth needed at all? On PS2 it was because of multipass rendering, but why on Xbox 360? There's only 10 MB of memory, so even if tiling is used with 3 tiles and the game runs at 60fps, that's only 1.8 GB/s of data.
 
They designed it so that the 8 ROPs could read and write 32bpp pixels to the eDRAM at full speed (500MHz). The 256GB/s figure is really just that: it's exactly what the 8 ROPs needed when 4xMSAA was used with blending (read+write). If MSAA wasn't used, the ROPs would only consume 32GB/s per read or per write (64GB/s with blending).
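If it helps, here's a minimal sketch of the arithmetic behind those figures, assuming 8 ROPs at 500MHz and 8 bytes per sample (32-bit colour plus 32-bit Z/stencil, which is my assumption about the format rather than something stated above):

```python
# Rough sketch of the eDRAM figures quoted above (assumed sample format:
# 32-bit colour + 32-bit Z/stencil = 8 bytes per sample, per direction).
ROPS = 8
CLOCK_HZ = 500e6
BYTES_PER_SAMPLE = 4 + 4  # colour + Z/stencil

def edram_bandwidth(msaa_samples, blending):
    """Peak bytes/s the ROPs can demand; blending needs a read and a write."""
    directions = 2 if blending else 1
    return ROPS * CLOCK_HZ * BYTES_PER_SAMPLE * msaa_samples * directions

print(edram_bandwidth(1, blending=False) / 1e9)  # 32.0  GB/s
print(edram_bandwidth(1, blending=True) / 1e9)   # 64.0  GB/s
print(edram_bandwidth(4, blending=True) / 1e9)   # 256.0 GB/s
```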
 
In addition to what AINets said, which explains many things btw, I remember posts from devs commenting that while the bandwidth was great, the compression techniques used these days buy you more than the raw 256GB/s of X360 EDRAM did. What I don't get is that games run better overall on the Xbox One; a game with full 4xMSAA like GRID and similar games should run worse on the Xbox One.
 
I understand that, but the back buffer and Z buffer are only 3.51 MB and 3.51 MB, 7 MB total. That amount of data goes from the main die to the EDRAM die per frame. Why is there 32 GB/s? :D
 

This is something that was explained in the PS2 thread, so I apologize if I'm repeating other people. But I think you have a flawed understanding of bandwidth figures.

32GB/s doesn't mean it can perform 32GB worth of writes in a second whenever and however you want. It means that's the speed you can attain if you are doing nothing but writing to it. But you can't always be writing to it, because often you are busy doing other things and don't have the writes prepared, or the EDRAM is blocked because it's being copied back to main RAM. Any time you're not writing to the eDRAM, that bandwidth is wasted and cannot be recovered.

So real games realistically did not come anywhere close to writing 32GB/s on average, but if the bandwidth were lower they'd probably be slower, because there are select times when they need to write to the EDRAM that quickly.

The other thing is, your characterization of buffers doesn't really match how real games tended to work. Xbox 360 was well into the era of deferred rendering, so you'd instead be doing something like rendering material content to a G-buffer (maybe after a pre-Z pass), resolving that to main RAM, then combining it with light volumes into a backbuffer, rendering alpha content on top of it in forward, etc. And you'd be performing additional renders and resolves for separate things like shadow maps or maybe SSAO.
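To put some numbers on that, here's a back-of-envelope sketch of why per-frame eDRAM traffic dwarfs the naive "7 MB of buffers" figure. Every workload number below is an assumption for illustration, not a measurement from any real game:

```python
# Illustrative only: per-frame eDRAM traffic vs. "7 MB of buffers".
# All workload numbers are assumed for the sake of the arithmetic.
W, H = 1280, 720
FPS = 60
BYTES_PER_PIXEL_TOUCH = 8        # colour + Z per pixel sent to the ROPs (assumed)
avg_touches_per_pixel = 12       # pre-Z, G-buffer with overdraw, lighting, alpha
shadow_map_bytes = 2 * 1024 * 1024 * 4   # two hypothetical 1K 32-bit shadow maps

per_frame = W * H * avg_touches_per_pixel * BYTES_PER_PIXEL_TOUCH + shadow_map_bytes
print(per_frame / 2**20)         # ~92 MiB pushed into eDRAM per frame
print(per_frame * FPS / 1e9)     # ~5.8 GB/s on average -- and it arrives in
                                 # bursts, which is why the peak rate matters
```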
 
OK, many thanks for the explanation!
How strongly did tiling affect Xbox 360's capabilities? Why did so many developers try to avoid using tiling?
 
EDRAM size didn't play so nicely with deferred rendering. Worth reading this thread: https://forum.beyond3d.com/threads/xenos-c1-and-deferred-rendering-g-buffer.42770/page-2
Many devs giving many opinions.
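A rough back-of-envelope shows why, assuming a hypothetical 720p deferred setup with three RGBA8 G-buffer targets plus a 32-bit depth buffer (the layout is an assumption for illustration, not any specific engine's):

```python
# Hypothetical 720p G-buffer footprint vs. the 10 MiB of eDRAM.
W, H = 1280, 720
BYTES_PER_PIXEL = 4          # RGBA8 or D24S8
render_targets = 3 + 1       # three G-buffer MRTs + depth

footprint = W * H * BYTES_PER_PIXEL * render_targets
print(footprint / 2**20)     # ~14.1 MiB -> doesn't fit in 10 MiB without
                             # tiling, reduced resolution, or tighter packing
```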
Always interesting to read old posts. This one from you:

From reading your comments, it would seem the PS3 benefits more from a deferred rendering engine setup, and Xbox 360 more along the lines of forward rendering. Both rendering styles/engines having benefits and draw backs.

Not quite. You can implement a deferred renderer on any platform, but XB360 is set up for a traditional renderer with the eDRAM choice. A deferred renderer would be a huge gobbler of RAM BW, of which XB360 has about half of PS3's, something that isn't a problem when much of the BW requirement is moved onto the eDRAM and its logic.

And if you had a choice between the two (deferred / forward) which one would you select, and why?
It depends entirely on the game and what you want to create! You may as well ask, 'If you had the choice between using a seven-seater family wagon and a sports coupe, which would you select and why?' The choice of vehicle, or graphics engine, is determined by the work you need it to do.

It would appear that in making XBO follow X360 they encountered the same issues. The only differences are that all the developers moved towards deferred setups, and the eventual shift away from embedded RAM with Scorpio.
 
Despite the 360 only having 10MB of EDRAM, deferred rendering was AFAIK still pretty common on it. Developers used less-than-ideal resolutions and got really creative with packing lower-precision values to save on G-buffer size.

That said, I agree that XB360 was probably designed with forward renderers in mind, although the magic ROPs/free RMWs do help save bandwidth when compositing lights against a G-buffer.

It may be that some developers used it more to make multi-platform support easier, or because they thought it was a good idea without really exploring the problem thoroughly (ERP hints at this in the linked thread). Still, it was used a lot.

In that thread Fafalada also commented on some other uses for rendering to places other than backbuffers:

Fafalada said:
Every single game on 360 has to do that multiple times per frame for stuff like shadowmaps, reflectionmaps, HDR resolves, postprocess resolves etc.
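For a concrete flavour of the "creative packing" mentioned above, here's one hypothetical trick of that sort: squeezing a normal and a gloss term into a single RGBA8 texel instead of spending a wider-format target on them. This isn't taken from any particular 360 title, just an illustration of the idea:

```python
# Hypothetical G-buffer packing sketch: normal.xy + gloss + sign(normal.z)
# in one RGBA8 texel, with z reconstructed on unpack.
def pack_normal_gloss(nx, ny, nz, gloss):
    to_u8 = lambda v: max(0, min(255, int((v * 0.5 + 0.5) * 255)))
    r, g = to_u8(nx), to_u8(ny)             # normal.xy in [-1,1] -> [0,255]
    b = max(0, min(255, int(gloss * 255)))  # gloss in [0,1]
    a = 255 if nz >= 0.0 else 0             # only the sign of z is kept
    return bytes([r, g, b, a])

def unpack_normal_gloss(texel):
    r, g, b, a = texel
    nx = r / 255.0 * 2.0 - 1.0
    ny = g / 255.0 * 2.0 - 1.0
    nz = max(0.0, 1.0 - nx * nx - ny * ny) ** 0.5
    if a == 0:
        nz = -nz
    return nx, ny, nz, b / 255.0

# Round trip (small quantization error is the price of the 8-bit channels):
print(unpack_normal_gloss(pack_normal_gloss(0.0, 0.6, 0.8, 0.5)))
```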
 
I had to wonder if 10MB was a design minimum chosen for 480p 4xMSAA. If only they had gotten it up to 16MB or so. :p (though maybe the choice at the time was either 512MB GDDR3 or larger eDRAM. :/)
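The back-of-envelope does roughly work out, assuming 32-bit colour plus 32-bit Z/stencil per sample:

```python
# Framebuffer footprint for a few formats, assuming 8 bytes per sample
# (32-bit colour + 32-bit Z/stencil).
MiB = 2**20

def fb_size(w, h, samples, bytes_per_sample=8):
    return w * h * samples * bytes_per_sample

print(fb_size(640, 480, 4) / MiB)    # ~9.4 MiB  -> 480p 4xMSAA just fits in 10 MiB
print(fb_size(1280, 720, 1) / MiB)   # ~7.0 MiB  -> 720p no AA fits
print(fb_size(1280, 720, 2) / MiB)   # ~14.1 MiB -> 720p 2xAA needs 2 tiles
print(fb_size(1280, 720, 4) / MiB)   # ~28.1 MiB -> 720p 4xAA needs 3 tiles
```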
 
I have some more screens to compare. The first is from Call of Duty: Advanced Warfare (60fps), the second is from Crysis 2 (30fps).
[Attached screenshots: COD AW a/b/c vs. Crysis 2 a/b/c]

Crysis 2 was released in March 2011 and Call of Duty: Advanced Warfare in November 2014, that is 3.5 years later. COD AW really looks great; some moments look even better than in Crysis 2, especially the characters' faces. What do you think about it?
Maybe, if the moderators allow, I can search for more screens from some games and post them here.
 