Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. bkilian

    Veteran

    Joined:
    Apr 22, 2006
    Messages:
    1,539
    Likes Received:
    3
    Actually, they did. Yukon and Durango are the same project. Even when I left, I was still checking code into the Yukon tree. In the early days of the 360, the vision doc was called "The book of Xenon". The equivalent doc for the X1 was called "The book of Yukon".
     
  2. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    Interesting, I always assumed they were separate projects. It's strange that it changed so much over time.
     
  3. bkilian

    Veteran

    Joined:
    Apr 22, 2006
    Messages:
    1,539
    Likes Received:
    3
    Why is that strange? For one, the leaked doc was an early vision doc, not a technical specs doc, and for another, the design changed a number of times as things firmed up in talks with suppliers and the software team did exploratory proof-of-concepts.
     
  4. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    Well, the fact that they ditched hardware B/C and a bunch of other stuff is what I found strange.
     
  5. bkilian

    Veteran

    Joined:
    Apr 22, 2006
    Messages:
    1,539
    Likes Received:
    3
    Standard cost/benefit analysis on that one, I'm afraid. Dropping a feature that adds cost over the life of the console but is only really useful or popular in the first year is a bit of a no-brainer. Look at what Sony did with the PS3, for instance: the feature got removed pretty quickly.
     
  6. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    I'm curious how it would affect games in practice.

    Take physics/particles for example: the CPU preps and batches the data structs,
    the GPU takes the coherent memory address and runs the simulation on the data,
    then the GPU takes the prior result and renders the output.

    Even if the CPU needs to read back the result from the coherent memory, what sort of realistic impact is there from the GPU having to flush?
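    As a rough way to picture that flow, here's a minimal sketch using Python threads, with an Event standing in for the GPU fence/flush point. All the names and the "simulation" are made up for illustration; this isn't any real GPU API:

    ```python
    # Toy model of the pipeline described above: the CPU preps a batch in shared
    # (coherent) memory, the "GPU" simulates and renders, and the CPU reads the
    # result back only after the fence. All names here are illustrative.
    import threading

    shared = {"particles": None, "sim_result": None}  # stands in for coherent memory
    sim_done = threading.Event()                      # stands in for a fence/flush

    def gpu():
        # "GPU": run the simulation on the data the CPU prepared, then "render".
        shared["sim_result"] = [p * 2 for p in shared["particles"]]  # fake simulation
        sim_done.set()                                # results now visible to the CPU
        print("GPU rendered:", sum(shared["sim_result"]))  # fake render pass

    shared["particles"] = list(range(8))              # CPU preps and batches the data
    worker = threading.Thread(target=gpu)
    worker.start()
    sim_done.wait()                                   # the flush point the CPU pays for
    print("CPU read back:", shared["sim_result"])     # safe only after the fence
    worker.join()
    ```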
     
  7. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    If I am remembering correctly, a cache flush on it flushes out everything: graphics and GPGPU cachelines/work, read-only work, texture caches, etc.

    You then have to read everything back in from memory. If you're flushing a full 512 KB entirely to the eSRAM, this will take ~4096 cycles.
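    For anyone checking that figure, it appears to assume the flush path can move 128 bytes per cycle (an assumption on my part, not a published spec):

    ```python
    # Back-of-the-envelope check of the ~4096-cycle figure, assuming a
    # writeback width of 128 bytes per cycle (an assumption, not a spec).
    cache_bytes = 512 * 1024              # 512 KB of dirty cache to write back
    bytes_per_cycle = 128                 # assumed flush/writeback width
    print(cache_bytes // bytes_per_cycle) # 4096 cycles
    ```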
     
  8. adev

    Newcomer

    Joined:
    Oct 2, 2013
    Messages:
    35
    Likes Received:
    0
    I think it's going to be generally bad to use the same memory on the CPU and GPU at the same time on Xbox One.
     
  9. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    Do elaborate?
     
  10. adev

    Newcomer

    Joined:
    Oct 2, 2013
    Messages:
    35
    Likes Received:
    0
    You're going to want to minimise cache contention as much as possible for pure performance reasons.

    The coherency is good in that you can read and write the same memory on both the CPU and GPU and guarantee that it will be "correct", but accessing it at exactly the same time on both would still be expensive. Interleaving access would likely be faster.

    In Beta's terms, the GPU can't technically "write" coherently; it just gets told to flush when the CPU wants to access data it has in its cache.

    I may have been wrong earlier when I said the GPU can invalidate the CPU cache. I'll update when I know for sure. That was my understanding, but it could be wrong.
     
  11. taisui

    Regular

    Joined:
    Aug 29, 2013
    Messages:
    674
    Likes Received:
    0
    IMO the right way to use it is to interleave the access pattern. The advantage should be that you don't need to copy/stage the data for the GPU, because of the coherent memory. Not sure how the sync happens, but I imagine jobs being done in batches, at high latency.
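    A minimal sketch of what that interleaving could look like, assuming a simple double-buffer over the shared region; everything below is made up for illustration, and the swap marks where the real sync/flush would sit:

    ```python
    # Interleaved access over shared memory: two buffers, the CPU fills one
    # while the "GPU" drains the other, swapping each batch. No staging copy
    # is needed because both sides see the same memory; the swap is the
    # point where a flush/fence would go on real hardware.
    buffers = [[], []]

    def gpu_consume(buf):
        return sum(buf)                     # stand-in for a GPU job on one batch

    front, back = 0, 1
    for batch in range(4):
        buffers[back] = [batch * 10 + i for i in range(4)]  # CPU fills the back buffer
        if batch > 0:
            print("GPU job:", gpu_consume(buffers[front])) # GPU drains the front buffer
        front, back = back, front           # swap at the sync point
    print("GPU job:", gpu_consume(buffers[front]))          # drain the final batch
    ```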
     
  12. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    Ok, change the 8GB to 'a large pool of main memory' if you wish. It doesn't change the point of my post though. They wanted DDR as opposed to GDDR for the main memory for power and cost reasons, and so esram was also included to supplement the bandwidth.

    Low latency does not appear to have been the driving advantage behind its inclusion, and from the statements in the article there's at least reasonable cause to doubt whether it will have any major latency-based advantages at all. Otherwise this surely would have been touched upon, given that the question presented the perfect opportunity to do so.
     
  13. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    I agree with that assessment, pjbliverpool. The RAM choice was entirely for cost and power draw reasons, and the upshot of ESRAM relates only to those aspects. There's no performance advantage from latency that the design gets from this choice. So the trade-off appears to be price+heat gains in exchange for added development complications.
     
  14. Airon

    Banned

    Joined:
    Dec 12, 2012
    Messages:
    172
    Likes Received:
    0
    I see them more as different solutions for the same need: a large memory pool + high bandwidth.

    Sony has made a choice and a bet.

    MS just followed their X360 experience with ESRAM.
    It is quite telling that we know the esram/edram was there from the beginning. Would it have been possible or reasonable to have esram + GDDR5? I suspect they never considered GDDR5 as a solution.

    In the end it is a different solution that allows the X1 to have high bandwidth (with a higher peak than the competitor), with an esram that this time around allows more creative uses for developers, and a large memory pool of DDR3 that is better suited to the CPU. Because MS (like Sony) has a high concern for the CPU. Without forgetting power and cost.


    Are you really sure that, considering each platform as a whole, one solution is incredibly better than the other?
    To me, for example, it seems that the two platforms could be bandwidth-bottlenecked in a very similar way.
     
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    I never said anything about anyone having a better solution. I never even compared solutions. I never even said it was a bad solution. Every engineering decision has to prioritise, and MS prioritised cost and power draw, and perhaps a little extra peak BW over what they could realistically target with GDDR5, over ease of use. Simply an observation without comparison, and an identification that a previous theory, that the choice of ESRAM included a performance interest due to low latency, is pretty much proven invalid.

    This isn't a versus thread and Sony's choices are immaterial, save as proof that there was another option available (not that such proof is necessary).
     
  16. Cranky

    Newcomer

    Joined:
    May 22, 2013
    Messages:
    134
    Likes Received:
    0

    Well, they also ended up with, according to the DF article, a 45% bandwidth advantage over the alternative memory configuration (200 GB/s / (172 GB/s × 0.8)) in typical use cases. It may have been a case of having their cake and eating it as well.
     
  17. Strange

    Veteran

    Joined:
    May 16, 2007
    Messages:
    1,698
    Likes Received:
    428
    Location:
    Somewhere out there
    What????

    How did you come up with these numbers? :roll:

    Why would you put a 0.8 coefficient before the 172 GB/s figure?
     
  18. french toast

    Veteran

    Joined:
    Jan 5, 2012
    Messages:
    1,667
    Likes Received:
    9
    Location:
    Leicestershire - England
    Although they don't make a direct case for latency benefits, they do at least mention it a couple of times in the interview. Could it be they didn't prioritise latency, as bandwidth was their main concern, but the much lower latency is an upshot of that decision? Could the reason for them not going into detail about those benefits be that they have simply not looked into it?
     
  19. Ceger

    Newcomer

    Joined:
    Aug 21, 2013
    Messages:
    59
    Likes Received:
    1
    Because the equation assumes that 80% of peak bandwidth is what is attainable in the real world as the average in most cases (as MS asserts, and it makes sense until proven otherwise). The numbers they gave for the ESRAM and DDR3 were on such measures (80% of peak bandwidth), which they say has been proven with actual real code, not synthetic tests.

    So Cranky was using that to make comparative points. If you want to leverage the full GDDR5 bandwidth, then make sure to apply the full ESRAM and DDR3 bandwidth as well. One party does not get an exemption while the other is derated.
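    Spelled out with the figures from the thread (treating the 200 GB/s as the already-derated typical figure for the combined eSRAM + DDR3 pools, which is my reading of the DF article):

    ```python
    # The comparison being made, with the same 80% real-world derating
    # applied to the GDDR5 configuration's peak figure.
    x1_typical    = 200                  # GB/s, quoted typical eSRAM + DDR3 combined
    gddr5_peak    = 172                  # GB/s, peak of the alternative configuration
    gddr5_typical = gddr5_peak * 0.8     # 137.6 GB/s after the same 80% derating
    print(x1_typical / gddr5_typical)    # ~1.45, i.e. roughly a 45% advantage
    ```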

    And please do not bring up the indie dev who said they got the high end of bandwidth on the other console, as no information has been provided as to what exact code was used to achieve that. I am sure any dev can write code to maximize bandwidth, but it doesn't represent a real-world application.
     
  20. zupallinere

    Regular Subscriber

    Joined:
    Sep 8, 2006
    Messages:
    768
    Likes Received:
    109

    Something I'd like to know too. Other presentations discuss 47 MB of coherent memory, IIRC, but that doesn't say anything about bandwidth, just size (which is the biggest for a console or the like). The MS guy makes the comparison and states the big bet on coherent memory speed, but what exactly is that metric?
     