Optimizations on Xbox 360

Discussion in 'Console Technology' started by Liandry, Aug 3, 2016.

  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,028
    Location:
    Under my bridge
    Very.

'Better' is a relative term. Typically we use flops as a ballpark, but where you have an improved architecture, you can do more with those maths. So where a new GPU may have 2x the flops of an older model, it may be able to do 3x as much on screen. But then beyond the flops, you have the other aspects like how many pixels you can draw or texels you can read. And where you have these various aspects to a GPU, do you consider the GPU's overall performance the aggregate of them? Or the lowest value, as that's the weakest part of the design bottlenecking the rest?

    Basically, your question as presented cannot be answered - there is no 'Xenos is n times better than Hollywood'. It's superior in every way and applying a metric to that is impossible. We can't even settle on a way to measure system bandwidth, let alone the whole GPU system! In real world GPU comparisons, we use benchmarks to get a comparison. You could try MW3?

    If you really, truly want numbers to compare the two instead of looking at the screen, you'll have to ask for specific metrics such as flops, bandwidth... ah, just don't! Just look at the pictures and see how much better Xenos is than Hollywood!
     
    #21 Shifty Geezer, Aug 27, 2016
    Last edited: Aug 27, 2016
    Goodtwin and phoenix_chipset like this.
  2. phoenix_chipset

    Regular Newcomer

    Joined:
    Aug 26, 2016
    Messages:
    546
    Likes Received:
    246
    Ah, thanks. I knew it was more complicated than 'x times better', but I wondered if someone with the know-how had thought about it that way before, haha.

    Yes, I can see how much better Xenos is :p MW3 on Wii is a chopped-to-hell port though; it'd practically run on a PS2 like that. I'd say Eurocom did the best work with realistic graphics on Wii.
     
  3. Goodtwin

    Veteran Newcomer Subscriber

    Joined:
    Dec 23, 2013
    Messages:
    1,151
    Likes Received:
    627
    Both COD Black Ops and MW3 were actually really well done Wii ports. Considering just how dated the Wii hardware was compared to the 360/PS3, it was quite the accomplishment for Treyarch. Both games ran a mostly constant 30fps in multiplayer, with host lag being the exception; if you were the host, the framerate felt like it was in the teens. While the games certainly look rough on Wii compared to 360/PS3, they were incredibly intact, and the best shooters on Wii.

    Goldeneye had a nicer look to it, but COD had the more consistent framerate and the more complete online multiplayer. Playing the jungle map in Black Ops for the first time was actually amazing to me; I honestly couldn't believe it was running on Wii. The amount of foliage was something I hadn't seen in a Wii game, and it looked pretty darn good considering. The last time I was that impressed with Wii-level hardware was Black on PS2, which I still consider to be the best looking game of that gen.
     
  4. phoenix_chipset

    Regular Newcomer

    Joined:
    Aug 26, 2016
    Messages:
    546
    Likes Received:
    246
    The best port was actually Reflex; it was the only port that was fully intact and not missing anything. Or if it was missing one map or an easter egg, it got something in return.

    And it didn't look THAT far away from CoD4, because MW kinda had one foot stuck in last gen. The other games are missing substantial content; they took their time with Reflex though. Multiplayer, sure, the COD games were better than Goldeneye. But I've had frame drops in the COD games too.

    Wii isn't PS2-level hardware though; it's over twice as strong. Black has simple, few-frame animations, typical PS2-quality textures and geometry, and Goldeneye-quality sound. I never understood the gawking over that game.
     
  5. milk

    milk Like Verified
    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,442
    Likes Received:
    3,323
    Me neither... Always felt like there were dozens of more impressive games before it.
     
  6. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,293
    Location:
    Helsinki, Finland
    betternatethanlever, Cyan and Liandry like this.
  7. HTupolev

    Regular

    Joined:
    Dec 8, 2012
    Messages:
    936
    Likes Received:
    564
    And far from a strict improvement. It's a great illustration of the kind of compromises you get as a generation progresses.

    Some of it is industry-wide things that compromise apparent quality a little for massive throughput gains, like half-res transparencies.
    On the other hand, a lot of stuff was just cut down, like Bungie's area specular that directionally modulated area cubemaps according to the lightmap, so that objects wouldn't have specular reflections from nowhere. Halo 4 also has the most feature-limited dynamic lights in the entire series, even counting the original Xbox games; dynamic lights can't cast specular reflections, and there are no spotlights.
    There are other compromises that are more oblique, like the greater geometry density seemingly doing poorly with split-screen mode.

    The amount of real, unqualified "optimization allows us to do this much more than before" that happens over a generation tends to be a lot smaller than this thread makes it out to be, I think.

    This is false. PS3 did have a "CPU" which was unusually capable in patterned parallel computations. While RSX was perhaps not quite as good as it should have been, it was still a viable 2006 GPU. Cell was not.

    Saying that the CPU did a greater chunk of the work than normal doesn't mean that the GPU wasn't doing a ton of the work.
     
  8. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    I'm curious why the Xbox 360 needed such high EDRAM bandwidth. It's 32 GB/s between the main die and the EDRAM, and 256 GB/s inside the EDRAM die. I understand the difference is so big because of MSAA, but why is such high bandwidth needed at all? On PS2 it was because of multipass rendering. But why on Xbox 360? There's only 10 MB of memory, so even if tiling is used with 3 tiles and the game runs at 60fps, that's only about 1.8 GB/s of data.
     
  9. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    21,720
    Likes Received:
    7,361
    Location:
    ಠ_ಠ
    They designed it so that the 8 ROPs could read & write 32bpp to the eDRAM at full speed (500MHz). The 256GB/s figure is just a figure really since that's exactly what the 8 ROPs needed when 4xMSAA was utilized with blending (read+write). If MSAA wasn't used, the ROPs would only consume 32GB/s per read or per write (64GB/s with blending).
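    To put rough numbers on that, here is a quick back-of-the-envelope sketch (my own arithmetic, assuming 4 bytes of colour and 4 bytes of Z per sample, with read+write on both for blending):

    Code:
    # Rough eDRAM bandwidth arithmetic for Xenos' daughter die.
    # Assumed values: 8 ROPs at 500 MHz, 4 B colour + 4 B Z per sample, RMW on both.
    ROPS = 8
    CLOCK_HZ = 500e6
    BYTES_PER_SAMPLE = 4 + 4          # colour + Z
    RW_FACTOR = 2                     # read + write for blending / Z test

    def edram_bandwidth(msaa_samples, blending=True):
        """Peak bytes/s the ROPs can demand from the eDRAM."""
        per_clock = ROPS * msaa_samples * BYTES_PER_SAMPLE * (RW_FACTOR if blending else 1)
        return per_clock * CLOCK_HZ

    print(edram_bandwidth(4) / 1e9)                  # 4xMSAA + blending -> 256.0 GB/s
    print(edram_bandwidth(1) / 1e9)                  # no MSAA, blending -> 64.0 GB/s
    print(edram_bandwidth(1, blending=False) / 1e9)  # write (or read) only -> 32.0 GB/s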
     
  10. Cyan

    Cyan orange
    Legend Veteran

    Joined:
    Apr 24, 2007
    Messages:
    9,061
    Likes Received:
    2,670
    in addition to what AlNets said, which explains many things btw, I remember posts from devs commenting on the fact that while the bandwidth was great, the compression techniques used these days are much better than the raw 256GB/s of the X360 EDRAM. What I don't get is why games run better overall on the Xbox One; a game with full 4xMSAA like Grid and similar games should run worse on the Xbox One.
     
    betternatethanlever and Liandry like this.
  11. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    21,720
    Likes Received:
    7,361
    Location:
    ಠ_ಠ
    The majority of GPUs have colour/Z compression, so they don't necessarily require the full bandwidth for MSAA.
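    Purely as an illustration of why that compression matters (the edge-pixel fraction here is an assumption, not a measurement): with MSAA, fully covered pixels can store one colour for all of their samples, so only triangle-edge pixels pay the full per-sample cost.

    Code:
    # Illustrative only: average colour traffic per pixel with MSAA compression.
    # Assumption: interior pixels compress to one colour value, edge pixels store
    # every sample uncompressed.
    def avg_colour_bytes_per_pixel(samples, edge_fraction, bytes_per_colour=4):
        interior = (1 - edge_fraction) * bytes_per_colour        # one value
        edge = edge_fraction * samples * bytes_per_colour        # all samples
        return interior + edge

    # With 4xMSAA and ~10% of pixels on triangle edges, average colour traffic is
    # ~5.2 bytes/pixel instead of the uncompressed 16 bytes/pixel.
    print(avg_colour_bytes_per_pixel(4, 0.10))   # -> ~5.2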
     
    Cyan and Liandry like this.
  12. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    I understand that, but the back buffer and Z buffer are only 3.51 MB each, 7 MB total. That amount of data goes from the main die to the EDRAM die per frame. Why is 32 GB/s needed? :-D
     
  13. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,439
    Likes Received:
    280
    The buffers are written many times while rendering a frame.
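    A rough sketch of what that means for traffic (the overdraw figure is just an assumed average for illustration):

    Code:
    # Rough per-frame eDRAM traffic estimate (all numbers hypothetical).
    # The 7 MB buffer footprint says little about traffic because each pixel may be
    # touched several times: Z pre-pass, opaque overdraw, decals, transparencies, HUD...
    WIDTH, HEIGHT = 1280, 720
    BYTES_PER_PIXEL = 4 + 4        # colour + Z, no MSAA for simplicity
    OVERDRAW = 4                   # assumed average writes per pixel
    FPS = 60

    traffic_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL * OVERDRAW
    print(traffic_per_frame / 2**20, "MB per frame")        # ~28 MB, vs 7 MB of buffers
    print(traffic_per_frame * FPS / 1e9, "GB/s average")    # ~1.8 GB/s sustained...
    # ...yet the peaks within a frame (alpha-heavy scenes, MSAA, particle passes)
    # are what the 32/256 GB/s links are sized for.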
     
    Goodtwin and Silent_Buddha like this.
  14. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    This is something that was explained in the PS2 thread, so I apologize if I'm repeating other people. But I think you have a flawed understanding of bandwidth figures.

    32GB/s doesn't mean you can perform 32GB worth of writes in a second whenever and however you want. It means that's the speed you can attain if you are doing nothing but writing to it. But you can't always be writing to it, because often you are busy doing other things and don't have the writes prepared, or the EDRAM is blocked because it's being copied back to main RAM. Any time you're not writing to the EDRAM, that bandwidth is wasted and cannot be recovered.

    So real games realistically did not come anywhere close to a sustained 32GB/s, but if the bandwidth were lower they'd probably be slower, because there are select times when they need to write to the EDRAM that quickly.

    The other thing is, your characterization of buffers doesn't really match how real games tended to work. Xbox 360 was well into the era of deferred rendering, so you'd instead be doing something like rendering material content to a G-buffer (maybe after a pre-Z pass), resolving those to main RAM, then combining it with light volumes into a backbuffer, rendering alpha content on top of it in forward, etc. And you'd be performing additional renders and resolves for separate things like shadow maps or maybe SSAO.
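    As a rough illustration of that kind of frame (pass names and ordering are just a sketch, not any particular engine's pipeline):

    Code:
    # Illustrative only: the sequence of render targets a 360-era deferred renderer
    # might write into eDRAM and resolve out to main RAM each frame.
    frame_passes = [
        ("shadow map (per light)",  "depth into eDRAM -> resolve to RAM"),
        ("depth pre-pass",          "Z only, stays in eDRAM"),
        ("G-buffer (material)",     "colour/normal MRTs in eDRAM -> resolve to RAM"),
        ("SSAO",                    "small RT in eDRAM -> resolve to RAM"),
        ("light volumes",           "accumulate lighting into backbuffer in eDRAM"),
        ("forward alpha",           "blend transparencies on top, often half-res"),
        ("final resolve",           "backbuffer -> RAM for post-processing / scan-out"),
    ]

    for name, traffic in frame_passes:
        print(f"{name:24s} {traffic}")
    # Each resolve is another trip over the 32 GB/s die-to-die link, which is why
    # per-frame traffic dwarfs the 7 MB buffer footprint.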
     
    Goodtwin, TheAlSpark and Liandry like this.
  15. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    Ok, many thanks for the explanation!
    How strongly did tiling affect the Xbox 360's capabilities? Why did so many developers try to avoid using tiling?
     
  16. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,028
    Location:
    Under my bridge
    Goodtwin likes this.
  17. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,812
    Likes Received:
    10,841
    Location:
    The North
    always interesting to read old posts, this one from you:
    It would appear that in making XBO follow X360 they encountered the same issues. The only differences are that all the developers moved towards deferred setups, and the eventual shift away from embedded RAM with Scorpio.
     
  18. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    430
    Location:
    Cleveland, OH
    Despite only having 10MB of EDRAM, deferred rendering was AFAIK still pretty common on Xbox 360. Developers used less-than-ideal resolutions and got really creative with packing lower-precision values to save on G-buffer size.
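    As one illustrative example of that kind of packing (my own sketch, not any specific game's layout): a common trick is to store only the x and y of a view-space normal in two 8-bit channels and reconstruct z in the lighting pass.

    Code:
    import math

    # Illustrative G-buffer packing trick: 2 bytes per normal instead of 12.
    def pack_normal_xy(nx, ny):
        """Map [-1, 1] components to 8-bit unorm channels."""
        return (round((nx * 0.5 + 0.5) * 255), round((ny * 0.5 + 0.5) * 255))

    def unpack_normal(px, py):
        nx = px / 255 * 2 - 1
        ny = py / 255 * 2 - 1
        nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))  # assumes z >= 0 (view space)
        return (nx, ny, nz)

    packed = pack_normal_xy(0.36, -0.48)
    print(packed, unpack_normal(*packed))   # ~(0.36, -0.48, 0.80) recovered from 2 bytes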

    That said, I agree that XB360 was probably designed with forward renderers in mind, although the magic ROPs/free RMWs do help save bandwidth when compositing lights to a G-buffer.

    It may be that some developers used it more to make multi-platform support easier, or because they thought it was a good idea without really exploring the problem thoroughly (ERP hints at this in the linked thread). Still, it was used a lot.

    In that thread Fafalada also commented on some other uses for rendering to places other than backbuffers:

     
    TheAlSpark likes this.
  19. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    21,720
    Likes Received:
    7,361
    Location:
    ಠ_ಠ
    I had to wonder if 10MB was a design minimum chosen for 480p 4xMSAA. If only they had gotten it up to 16MB or so. :p (though maybe the choice at the time was either 512MB GDDR3 or larger eDRAM. :/)
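    The arithmetic does roughly work out, for what it's worth (a back-of-the-envelope check, assuming 4 bytes of colour + 4 bytes of Z per sample):

    Code:
    # Framebuffer footprint for colour + Z with MSAA (assumed 8 bytes per sample).
    def framebuffer_mb(width, height, msaa, bytes_per_sample=8):
        return width * height * msaa * bytes_per_sample / 2**20

    print(framebuffer_mb(640, 480, 4))    # ~9.4 MB  -> 480p 4xMSAA just fits in 10 MB
    print(framebuffer_mb(1280, 720, 2))   # ~14.1 MB -> 720p 2xMSAA already needs tiling
    print(framebuffer_mb(1280, 720, 4))   # ~28.1 MB -> 720p 4xMSAA needs ~3 tiles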
     
    #39 TheAlSpark, Apr 20, 2017
    Last edited: Apr 20, 2017
  20. Liandry

    Regular Newcomer

    Joined:
    Feb 26, 2011
    Messages:
    319
    Likes Received:
    37
    I have some more screens to compare. The first is from Call of Duty: Advanced Warfare (60fps), the second is from Crysis 2 (30fps).
    COD AW a.jpeg
    Crysis 2 a.jpg

    COD AW b.jpeg
    Crysis 2 b.jpeg
    COD AW c.jpeg
    Crysis 2 c.jpeg

    Crysis 2 was released in March 2011 and Call of Duty: Advanced Warfare in November 2014, that is 3.5 years later. COD AW really looks great; some moments look even better than in Crysis 2, especially the characters' faces. What do you think about it?
    Maybe, if the moderators allow, I can search for more screens from some games and post them here.
     
    Goodtwin likes this.