Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. brasnial

    Newcomer

    Joined:
    Jan 22, 2013
    Messages:
    151
    Likes Received:
    0
    Location:
    london
    Hi guys,
    I'm totally new here... just an interested party with a question for the more informed out there. Why would Microsoft put 8 GB of RAM in their machine if they didn't need it and weren't able to use it?
    Wouldn't that be overkill and a waste of resources?
    So what I'm asking is this: if we assume neither Sony nor Microsoft are fools and both know what they're doing, what in your opinion would Microsoft be looking to do with 8 GB of slow RAM?

    Sorry if my question seems dumb, but no one puts components into a machine that they don't think they need or don't intend to use.
     
  2. McHuj

    Veteran Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,613
    Likes Received:
    869
    Location:
    Texas
    FLOPS are really only a measure of the number of multiply-adds the GPU can do; they say nothing about the rest of the ISA. It would not surprise me if MS decided to add instructions to the ISA that they have in mind and that would help with future DirectX releases.

    Perhaps there are common instruction patterns in GPU code and some instructions can be combined into one. That's not always possible, but when it is, it makes the GPU more efficient. I would expect MS Research to be pretty forward-thinking about future GPU usage and how rendering algorithms may evolve over the next 5 years or so, whereas AMD only needs to provide the best solution for the next year or two.


    On paper they seem very similar in terms of raw performance. I would speculate that 12 CUs at 800 MHz will be more power efficient than 10 CUs at 1 GHz, and that's why they are going with that configuration.
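As a rough sanity check on that comparison, peak single-precision throughput for a GCN-style GPU can be estimated as CUs × 64 lanes × 2 ops per FMA × clock. The lane count and FMA factor are standard GCN figures rather than anything stated in this thread, so treat this as a back-of-the-envelope sketch:

```python
# Peak single-precision throughput estimate for a GCN-style GPU.
# Each CU has 64 SIMD lanes; a fused multiply-add counts as 2 FLOPs.
def peak_tflops(cus, clock_ghz, lanes=64, flops_per_lane=2):
    return cus * lanes * flops_per_lane * clock_ghz / 1000.0

print(peak_tflops(12, 0.8))  # 12 CUs @ 800 MHz -> ~1.23 TFLOPS
print(peak_tflops(10, 1.0))  # 10 CUs @ 1 GHz   -> 1.28 TFLOPS
```

The two configurations land within about 4% of each other on paper, which is why the argument above comes down to power efficiency rather than raw throughput.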
     
  3. brasnial

    Newcomer

    Joined:
    Jan 22, 2013
    Messages:
    151
    Likes Received:
    0
    Location:
    london
    Hi, me again.
    I just want to add to my earlier question. If we believe Microsoft is using a 1.2 TFLOPS GPU like the leaks state, why would you pair it with 8 GB of RAM?
    I have never heard of any PC GPU of the same spec being paired with this much memory. Have any of you?
    So my question still stands: let's forget about the power of the GPU and think about why Microsoft, a software company, would pair a 1.2 TFLOPS GPU with 8 GB of memory.
    It just seems such an unusual setup, unlike anything I've read about or heard of before on PC.
     
  4. fehu

    Veteran

    Joined:
    Nov 15, 2006
    Messages:
    2,068
    Likes Received:
    992
    Location:
    Somewhere over the ocean
    In fact I specified a Cape Verde with 12 CUs @ 800 MHz.
     
  5. V3

    V3
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    3,304
    Likes Received:
    5
    On PC you don't have access to 32 MB of eSRAM; no dedicated PC GPU has that. In the future, when RAM chips reach certain densities, you will see 1.2 TFLOPS PC GPUs paired with 8 GB of RAM, just for marketing purposes.
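To put the two memory pools in perspective, peak bandwidth is just the transfer rate times the bus width. The bus widths and data rates below come from the commonly circulated Durango leaks, so treat them as assumptions rather than confirmed specs:

```python
# Rough peak-bandwidth estimate: transfer rate (MT/s) * bus width (bytes).
def peak_bandwidth_gbs(transfers_mts, bus_width_bits):
    return transfers_mts * 1e6 * (bus_width_bits // 8) / 1e9

# Leaked Durango figures (assumptions, not confirmed specs):
print(peak_bandwidth_gbs(2133, 256))  # DDR3-2133, 256-bit -> ~68.3 GB/s
print(peak_bandwidth_gbs(800, 1024))  # eSRAM, 1024-bit @ 800 MHz -> ~102.4 GB/s
```

Under those assumptions the small eSRAM pool offers roughly 1.5x the bandwidth of the entire 8 GB DDR3 pool, which is the whole point of pairing them.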
     
  6. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
  7. bgassassin

    Regular

    Joined:
    Aug 12, 2011
    Messages:
    507
    Likes Received:
    0
    I definitely look forward to learning more about the DMEs. To be honest, looking at the name and some of the explanations, they sound to me right now more like something to maximize the performance of the CPU, GPU, and memory rather than something that makes up for whatever performance those components may lack, if that makes sense.
     
  8. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
    Would it possibly be correct to think of it as a scheduler (at least in part) for the entire system that is coherent to main memory?
     
  9. Prophecy2k

    Veteran

    Joined:
    Dec 17, 2007
    Messages:
    2,468
    Likes Received:
    379
    Location:
    The land that time forgot
    I doubt it would be called a "Data Move Engine" if it's as fancy as some kind of HW scheduler.

    Imho it's just a DMA engine. Because of second-hand rumours of people crying "secret sauce", together with people like Aegis on NeoGAF, who admits he's not a HW engineer and has no clue what he's seeing in the docs in his possession, people are trying to read too much into what are essentially quick-fix bandwidth-saving features to mitigate the slow main memory bandwidth.

    In short, MS wanted 8 GB of main RAM, and DDR3 was their only option (perhaps they evaluated HMC and stacking with TSVs/interposers but realised it wouldn't be ready in time). They went with DDR3 but needed a high-bandwidth scratchpad and memexport engine to ensure their main components weren't bandwidth-starved. Joe Public and Joe Gaming Journalist catch wind after months of being drip-fed false rumours of MS's nextbox being a "beast" by overexcited devs and internet trolls, and now try to rationalise some "secret sauce" and magic voodoo out of what is effectively a relatively low-cost/low-performance console design. Acert93 and Interference are both right.
     
  10. bgassassin

    Regular

    Joined:
    Aug 12, 2011
    Messages:
    507
    Likes Received:
    0
    Yeah. I think it was mentioned a little while back (working off a few hours of sleep, so I'm not all there) that the design compares to HSA, maybe as MS's take on it or something like that. A quick peek at an AMD HSA doc shows the scheduler can be handled in software, hardware, or both. Maybe MS decided on a full hardware approach. Speculation, obviously.
     
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    If the fast memory pool isn't treated as a cache, I would lean towards it being a software (at least partly) controlled scratchpad. Filling and committing data to and from the primary address space would be handled by the CPU/GPU threads using it.

    The scratchpad has some unknown granularity and banking. If not kept coherent, it could be composed of a number of large banks, possibly at the granularity of a memory page.
    DMA engine(s) could offload most of the grunt work of arbitrating access and moving data back and forth to this pool of memory.
    In this scenario, the SRAM/eDRAM pool is on a parallel portion of the non-coherent bus used by the GPU.

    The data engines would have ancestry in the DMA engines discrete GPUs have had for quite a while, or the DMA engines used in a number of server and HPC designs. It would save time and power compared to having a CPU or GPU sending commands and reading in scratchpad memory back into coherent memory space before writing it back out, then reading in new data and then exporting it back to the scratchpad.
    A DMA engine for the GPU, a DMA engine for the CPUs, and maybe a DMA engine for everything else.

    It's not known how many independent accesses the memory pool can support in that scenario. Even if it's not three, accelerating and offloading all the little moves and access negotiations implied in managing such a large memory space might make the scratchpad more easily utilized.
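The offload described above can be sketched abstractly: instead of a CPU or GPU thread performing every copy itself, it enqueues a descriptor and a data-move engine works through the queue. This is a toy model (all names hypothetical), not the actual Durango hardware interface:

```python
from collections import deque

# Toy model of a descriptor-based data-move engine: threads enqueue
# copy requests instead of moving the bytes themselves (names hypothetical).
class DataMoveEngine:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, src, src_off, dst, dst_off, length):
        """Record a copy descriptor; the requester is free to continue."""
        self.queue.append((src, src_off, dst, dst_off, length))

    def run(self):
        """Drain the queue, performing each copy (the hardware's job)."""
        while self.queue:
            src, so, dst, do, n = self.queue.popleft()
            dst[do:do + n] = src[so:so + n]

main_ram = bytearray(b"texture data in slow DDR3 " * 4)
scratchpad = bytearray(64)  # stands in for the fast eSRAM/eDRAM pool

dme = DataMoveEngine()
dme.enqueue(main_ram, 0, scratchpad, 0, 26)  # "fill" the scratchpad
dme.run()
print(bytes(scratchpad[:26]))  # b'texture data in slow DDR3 '
```

The win in hardware comes from the engine arbitrating bus access and streaming data while the compute units keep working, which a sequential software model like this can only hint at.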
     
  12. brasnial

    Newcomer

    Joined:
    Jan 22, 2013
    Messages:
    151
    Likes Received:
    0
    Location:
    london
    If we assume both Sony's and Microsoft's next-gen specs are roughly correct, what type of game will benefit from the differences in memory size?

    As a gamer I don't view Kinect as bad for the industry, nor the Wii for that matter. Anything that brings in non-gamers is good for the industry, especially if some of those new gamers turn into more hardcore gamers.

    It seems to me that sticking with the old idea of "more power, better graphics, more sales" is living in the past. We now live in a world where doing everything well, at a price your customers are willing to pay, is the key to more sales.

    To lose sight of this and get engaged in a pissing contest over who can push the most pixels, and therefore base all your business decisions around it, is a recipe for losing market share, in my opinion.
     
  13. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    13,878
    Likes Received:
    4,727
    $300 is also the target for the retail Oculus Rift. I'd rather get that, or a new video card, or a new tablet. $300 is the same price as the 360 back in 2005, and that brought a whole lot of kit compared to this.
     
  14. liquidboy

    Regular

    Joined:
    Jan 16, 2013
    Messages:
    416
    Likes Received:
    77
    What an interesting patent pic...

    [patent diagram image]
     
  15. Pugger

    Regular

    Joined:
    Dec 8, 2004
    Messages:
    419
    Likes Received:
    2

    Looks like my fuse box. Who's the leak?
     
  16. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    It's a patent, those don't need to be leaked.
     
  17. jbq.junior

    Newcomer

    Joined:
    Jan 20, 2010
    Messages:
    1
    Likes Received:
    0
    So, if the Durango GPU has 12 CUs, given the GCN architecture, it may be able to output only 1 primitive/clock, like Cape Verde, right?
     
  18. ERP

    ERP
    Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA
    It would depend on how the input is configured.
    It "could" certainly be limited to 1 prim/clk, or it could be more; these things aren't off-the-shelf parts jammed onto a piece of silicon. If MS thought they needed more, I can't imagine that level of customization would be out of order.
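For scale, setup rate is just primitives per clock times core clock. At the 800 MHz figure from the leaks (an assumption, not a confirmed spec), the two cases ERP mentions work out to:

```python
# Peak primitive setup rate = primitives/clock * core clock (Hz).
def peak_prims_per_sec(prims_per_clock, clock_hz):
    return prims_per_clock * clock_hz

# 1 prim/clk at 800 MHz -> 800 million primitives/second
print(peak_prims_per_sec(1, 800e6) / 1e6, "Mprims/s")
# A customized 2 prim/clk front end at the same clock would double that.
print(peak_prims_per_sec(2, 800e6) / 1e6, "Mprims/s")
```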
     
  19. ROG27

    Regular

    Joined:
    Oct 27, 2005
    Messages:
    572
    Likes Received:
    4
    Perhaps the "Data Move Engines" are some sort of advanced (semi?)programmable schedulers that ensure the GPU is running at or near 100% efficiency at all times.

    If MS could get near 100% efficiency out of the GPU rather than say a more typical 60-70% efficiency at any given moment, they could save even more money (and heat) on the silicon in the future after subsequent die shrinks while still getting similar real world performance that rivals a GPU with greater brute strength but a less efficient architecture.
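The trade described above is easy to quantify: effective throughput is peak FLOPS times achieved utilization. The numbers below are the post's hypothetical 60-70% vs ~100% figures applied to illustrative peak ratings, not measurements of any real GPU:

```python
# Effective throughput = peak throughput * achieved utilization.
def effective_tflops(peak_tflops, utilization):
    return peak_tflops * utilization

# Hypothetical comparison: a 1.2 TF GPU running near 100% efficiency
# vs a nominally stronger 1.8 TF GPU at a more typical 65%.
print(effective_tflops(1.2, 1.00))  # 1.2 effective TFLOPS
print(effective_tflops(1.8, 0.65))  # ~1.17 effective TFLOPS
```

Under those assumptions the smaller GPU matches the bigger one in delivered work, which is the crux of the efficiency argument.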
     
  20. Xenio

    Regular Banned

    Joined:
    Jan 18, 2013
    Messages:
    447
    Likes Received:
    0
    It's a patent from Microsoft from 3 years ago, when they started to develop the Durango project.
     