Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    I'm unsure about some of the numbers he's comparing between the two systems, at least for the ones that I don't think Sony described completely. One risk I do see is the possibility that the numbers being cited are not measuring the same thing.
    They could be, but I don't think the tech writers (and leakers) for Sony and Microsoft got together and agreed on where each little diagram blurb or bullet point came from.
     
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    I moved the versus discussion out. AFAICS there's nothing in Penello's post that gives further insight into XB1's hardware. He's just regurgitating the facts for an audience that isn't B3D.
     
  3. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    This was never proved. They used a 7990 to demo the MP on PC.
     
  4. XpiderMX

    Veteran

    Joined:
    Mar 14, 2012
    Messages:
    1,768
    Likes Received:
    0
    I have a question...

    Can they "add" the DDR3 bandwidth to the eSRAM bandwidth?

    I mean, is that a wrong way to describe system bandwidth?
     
  5. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    They're two numbers with the same units, so from a math point of view it's fine.
    Is it wrong? I'd say it's accurate, if not precise or detailed enough to dispel ambiguities and reveal the tradeoffs involved.
    That depends on what part of the system you are looking at.

    For the CPU and audio block, no.
    The big bandwidth consumer, the one the eSRAM is tied most closely to and is designed to benefit the most, is the GPU. The GPU can vacuum up the additive bandwidth of the eSRAM and main memory with little problem.

    Even if you don't have a specific load that can leverage all the bandwidth, from the point of view of a very complex system that is doing a lot of things at once, the aggregate demand can use bandwidth that no single consumer requires on its own.

    The caveat remains that not everything can use that much bandwidth, or even needs to, but that's going to be true of almost anything.
    There are workloads that can leverage it, and there is hardware designed to be more than capable of using it.
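A minimal numeric sketch of the additive-bandwidth point above. The pool peaks are the commonly reported Xbox One figures (assumed here, not taken from this thread), and the per-consumer demands are made up purely for illustration:

```python
# Each memory pool delivers up to its own peak, independently.
# Pool peaks: commonly reported Xbox One figures (assumption).
DDR3_BW = 68.0    # GB/s, 256-bit DDR3-2133 main memory
ESRAM_BW = 109.0  # GB/s, an often-quoted realistic eSRAM figure

# Hypothetical per-consumer demand in GB/s (illustrative only).
demand = {
    "gpu_esram": 95.0,  # GPU render targets living in eSRAM
    "gpu_ddr3": 40.0,   # GPU texture fetches from main memory
    "cpu_ddr3": 15.0,   # CPU working set
    "audio_ddr3": 5.0,  # audio block
}

ddr3_used = demand["gpu_ddr3"] + demand["cpu_ddr3"] + demand["audio_ddr3"]
esram_used = demand["gpu_esram"]

# System-level delivered bandwidth is the sum of what each pool
# actually serves, with each pool capped at its own peak.
delivered = min(ddr3_used, DDR3_BW) + min(esram_used, ESRAM_BW)
print(f"aggregate: {delivered} GB/s of {DDR3_BW + ESRAM_BW} GB/s peak")
# → aggregate: 155.0 GB/s of 177.0 GB/s peak
```

No single consumer here needs 155 GB/s, yet the aggregate demand uses more than either pool could supply alone, which is the sense in which the two numbers add.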
     
    #6225 3dilettante, Sep 10, 2013
    Last edited by a moderator: Sep 10, 2013
  6. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,680
    If you can read or write to both at the same time, I don't see why not. That would be the peak theoretical bandwidth for a given time. The eSRAM is only 32 MB, so depending on how it is used, the ability to take advantage of all that bandwidth will vary. We already know you generally won't be able to hit the peak for the eSRAM. I'm not sure how many systems are able to sustain peak memory bandwidth, whether DDR3, GDDR5, embedded RAM, etc.
     
  7. XpiderMX

    Veteran

    Joined:
    Mar 14, 2012
    Messages:
    1,768
    Likes Received:
    0
    Thanks, I got it. Then, in the cases where the GPU and CPU are each using their own pool, the bandwidth is the sum of both pools?
     
  8. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    You're more likely to get it if the GPU is using both pools with an emphasis on the eSRAM, and it's a favorable load that gets past whatever restrictions the eSRAM has for simultaneous operations, and the CPU is pulling its share from main memory, and the audio block is pulling a share as well.

    The CPU portion seems to be unable to fully utilize main memory, and it has a fixed link to the GPU domain it has to go over to access the eSRAM (not sure how that is handled, exactly).
     
  9. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,680
    If you were looking at utilization in a particular case, then I guess. DDR bandwidth will get split between the CPU and GPU. It gets a little more complicated when you get to those buses between the CPU and GPU.
     
  10. Jay

    Jay
    Veteran

    Joined:
    Aug 3, 2013
    Messages:
    4,033
    Likes Received:
    3,428
    I can see why it's more correct to add it than not to.

    Neither is 100% correct; I guess one may be more correct than the other, as long as you at least mention the eSRAM capacity and speed, which they do. Hence they're not fiddling the numbers.

    Some people see the eSRAM as a 'band aid', whereas I see it as a considered part of the design of the system.
    In that context, they designed it to work concurrently with the RAM, not separately, hence that's the bandwidth available.

    Games will need to be designed to make good use of the combined bandwidth.
    How it pans out, I guess we'll see.
     
  11. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,680
    Why would it not be correct to add them? If you are talking about available gpu bandwidth (not utilization) then adding makes sense, as long as you can access DDR and ESRAM at the same time. I would say that is correct. The size of the pools and utilization are a different issue.
     
  12. DaveNagy

    Newcomer

    Joined:
    Jan 18, 2013
    Messages:
    51
    Likes Received:
    0
    I've always thought of it as both. A chunk of fast, tightly integrated memory has the dual properties of allowing certain operations to proceed more efficiently than possible with any sort of "external" memory, while simultaneously shielding the console from some of the adverse effects resulting from its comparatively slow external memory. Both things seem possible and likely. There's no need to look at it from only one angle or the other.

    (Having ESRAM and super fast system RAM would be the best of all possible worlds, but would be ruinously expensive. So Sony picked one of those two, and MS picked the other. Both seem like valid approaches, but it remains to be seen which will prove out as the smarter play.)
     
  13. Jay

    Jay
    Veteran

    Joined:
    Aug 3, 2013
    Messages:
    4,033
    Likes Received:
    3,428
    Although I do agree with you, when people say band aid, they usually talk like it wasn't part of the design of the system, and was just thrown in at the last minute to solve the problem. That's what a band aid is.

    I see the bandwidth as a design consideration from the start.
    So I see it as solving a problem, not a 'band aid'.

    Both systems had different approaches to the problem, and both are just as valid as the other at solving it.
    You could say GDDR5 is a band aid, and that if they had thought it through they could have come up with a different solution instead of throwing lots of expensive RAM in there. (No, I don't think that.)
    I.e. a bandage instead of only using a plaster that might have done the job.

    It will be interesting to see if all their simulations etc. work as they expect in the real world.
    Like I said I don't disagree with you, just a nice conversation.
     
  14. -tkf-

    Legend

    Joined:
    Sep 4, 2002
    Messages:
    5,634
    Likes Received:
    37
    Isn't the real question that needs to be answered how much extra work the eSRAM will require in order to circumvent the supposedly slow DDR3 RAM?

    I am talking about 3rd-party multiplatform titles here. Considering that 360/PS3 multiplatform games ended up looking pretty much identical, which I would guesstimate required way more work given the difference between the PS3 and 360 hardware, I am not worried for Xbox One games.

    I would on the other hand expect 1st party games to take advantage and use the esram for stuff, effects, whatever, that will be hard to duplicate on other platforms.
     
  15. MrFox

    MrFox Deludedly Fantastic
    Legend

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    It's already 365mm2; how big would it be with 18 CUs and 32 ROPs?
     
  16. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    If the Bonaire die shot is accurate, eyeballing it makes me think half or less of the chip with 14 CUs is in the CU array.
    At 180 mm2, half of that is 90. Cut that in half to get the area of the 6 or so extra CUs, plus some ROP area.

    Maybe that puts the chip over 400 mm2?
    Power consumption would be higher as well.
     
  17. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
  18. Esrever

    Regular

    Joined:
    Feb 6, 2013
    Messages:
    846
    Likes Received:
    647
    Why do people keep saying the DDR3 is slow? 68 GB/s is pretty fast for DDR3 RAM, and is probably decent enough as long as some basic tools exist from MS for devs to use the ESRAM.
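As a side note, the 68 GB/s figure follows directly from the commonly reported configuration, a 256-bit bus of DDR3-2133 (an assumption about the setup, not something stated in this post):

```python
# Peak bandwidth = transfer rate x bus width in bytes.
bus_width_bits = 256        # assumed 256-bit DDR3 interface
transfers_per_sec = 2133e6  # DDR3-2133: 2133 MT/s
bytes_per_transfer = bus_width_bits / 8

bw_gb_per_s = transfers_per_sec * bytes_per_transfer / 1e9
print(round(bw_gb_per_s, 1))  # → 68.3
```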
     
  19. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    I think Bonaire is 160 mm2. I would think it could get bigger than 400 mm2, though.
    Mars includes 8 ROPs and 6 CUs; including the various DSPs and specialized hardware and the memory controller, it weighs 77 mm2.

    Anyway, I think that had MSFT been that concerned with specs, they would have attempted to raise the clock further. All the HD7xxx cards AMD has been shipping seem to overclock really well. Even with the minor overclock, it doesn't seem to me like MSFT is trying real hard to close the gap with Sony.
    Actually, they overclocked the CPU more than the GPU.
    I think the most obvious solution, had MSFT been aiming at higher performance, would have been to clock the whole thing higher (at least the iGPU). It would burn more power, but the chip is not tiny, so it should not be that hard to dissipate the heat (not to mention the case is huge).

    I think they have a really good design, and price reduction, as you pointed out multiple times, should go really well for them. They are not pulling a Nintendo with the WiiU's 35 watts, but the "problem" (if there really is one) of the design does not seem to be on the silicon side to me but on the business one: they wanted a really silent system, etc.
    Looking at AMD cards, I would be surprised if an overwhelming majority of the "good" chips can't have their iGPU clocked @950 MHz.
    It means more power drawn, more heat, more noise; it seems that that is a more significant concern to them than absolute gaming performance. And they may have turned a bit "anal retentive" wrt system reliability too ... :lol:
    --------------------------------
    Attempt to estimate the die size of the esram + DSP:

    I would think that the extra memory controller (in Durango) should make up for the lost CUs (assuming MSFT did not rely on coarse-grained redundancy).
    Then there are the two Jaguar clusters, whose size could be extrapolated from a Kabini die shot (~).
    Eyeballing it, the CPU block is south of a third of the die (in Kabini).

    So a gross estimate for Durango before adding the esram and the DSP (and some glue) should be around:
    160 + 70 = 230 mm2.

    That leaves about 135 mm2 for the esram, audio DSP, glue/bus, etc.
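The back-of-envelope budget in the post above, restated as arithmetic (the GPU and CPU areas are the post's own estimates; the total die size uses the ~365 mm2 figure mentioned earlier in the thread):

```python
total_die = 365.0    # mm^2, approximate Durango SoC size
gpu_portion = 160.0  # mm^2, Bonaire used as a GPU-side stand-in
cpu_portion = 70.0   # mm^2, two Jaguar clusters, scaled from Kabini

accounted = gpu_portion + cpu_portion  # GPU + CPU area
remainder = total_die - accounted      # left for eSRAM, audio DSP, glue
print(accounted, remainder)  # → 230.0 135.0
```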
     
    #6239 liolio, Sep 12, 2013
    Last edited by a moderator: Sep 13, 2013
  20. steveOrino

    Regular

    Joined:
    Feb 11, 2010
    Messages:
    549
    Likes Received:
    242
    Because it IS slow. Have you ever used a dedicated graphics card or integrated graphics that used DDR3?
     