Xbox One (Durango) Technical hardware investigation

Discussion in 'Console Technology' started by Love_In_Rio, Jan 21, 2013.

Thread Status:
Not open for further replies.
  1. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    Yeah, you're right, and that may have been the emphasis in the decision.
     
  2. (((interference)))

    Veteran

    Joined:
    Sep 10, 2009
    Messages:
    2,499
    Likes Received:
    70
    I think you'd still want some level of MSAA (2x, 4x) coupled with post-AA next gen.
    Post-AA is poor at things like subpixel aliasing (e.g. chain-link fences, thin geometry viewed from far away, etc.).
     
  3. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    As a side note, if it is better this gen to have a single big chip, the same may have been true last gen too.
    Nvidia's G80 launched at over 400mm^2, scaled down to 334mm^2 (@65nm) and finished at 270mm^2 on a 55nm process.
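    As a rough illustration of the shrink trajectory being discussed (assumed ~480mm^2 starting die, ideal scaling only; real shrinks fall short because analog/IO blocks and SRAM do not scale as well as logic), die area under a perfect shrink goes with the square of the feature size:

```python
# Hedged sketch: ideal die-area scaling across process nodes.
# Numbers are assumptions for illustration, not measured die sizes.

def ideal_shrink(area_mm2, node_from_nm, node_to_nm):
    """Area scales with the square of the linear feature size."""
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

a90 = 480.0  # assumed ~480 mm^2 die at 90nm
print(round(ideal_shrink(a90, 90, 65)))  # ~250 mm^2 at 65nm
print(round(ideal_shrink(a90, 90, 55)))  # ~179 mm^2 at 55nm
```

    Actual 65nm and 55nm parts came in larger than these ideal figures, which is the gap between theoretical and practical scaling.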

    eDRAM served the 360 well, but one may wonder, when all is said and done, if it was the right choice (not that the PS3 is a perfect piece of kit either). The PS3 did quite well with a lot less bandwidth; we will never know what a system with a plain UMA set-up ala PS4 could have achieved (even looking at costs).

    At the moment Nintendo takes a lot of bashing, but I wonder if the CPU they chose for the Wii and Wii U could have proved a better choice than MSFT's and Sony's speed-demon, throughput-oriented designs. It was damned tiny and power efficient; even a 90nm process allowed for a sane multi-core set-up (Broadway is 19mm^2 according to the numbers on this very board).

    OK, I'll quit the a posteriori thinking; it's not an option in the real world, not to mention I'm going off topic.
    MSFT's choices for Durango are sound and allowed for 8GB at a reasonable cost; if there are production issues, that is another matter which might have nothing to do with the merits of the design.
     
    #3643 liolio, Jun 6, 2013
    Last edited by a moderator: Jun 6, 2013
  4. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    Eh? I was under the impression that MSAA does nothing for chain-link fences or grass, as they're usually rendered as a texture (possibly requiring heavy GPU lifting like alpha blending). That's why NV and ATI came up with all these new AA modes like AAA/TXAA [sic?] and other acronyms I can't remember.

    Do game engines do things differently these days?
     
  5. Mianca

    Regular

    Joined:
    Aug 7, 2010
    Messages:
    333
    Likes Received:
    19
    I don't see how G80 was in any way related to last gen consoles? Plus, I remember the original die comprising some ~480mm² @ 90nm (if memory serves well - too lazy to check).

    And finally: G80 in my perception basically was the last "big" GPU chip that didn't run into considerable manufacturing problems of some kind. AMD's RV770 (following their R600 disaster) basically marked the beginning of the reign of "sweet-spot" chips - and Nvidia hasn't exactly been very lucky with any of their huge monolithic dies after G80.

    Returning to the topic at hand: given the track record of large GPUs and the added complexity of the integrated APU design plus its heavy customizations, going for a mainstream chip (and we have to regard console chips as such; this is not some small-volume, high-price product on which you can accept or compensate for below-average yields) that is considerably bigger than ~300mm² (maybe even in the range of ~400mm²) was basically a tremendous risk for Microsoft/AMD.
     
  6. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,176
    Location:
    La-la land
    There you have it!

    onQ just double confirmed that Xbone is 100% underclocked!!!11!1

    :lol:
     
  7. (((interference)))

    Veteran

    Joined:
    Sep 10, 2009
    Messages:
    2,499
    Likes Received:
    70
    I might have got confused between alpha textures used for chain-link fences, which MSAA can't help with, and subpixel features, where it can.

    http://www.eurogamer.net/articles/digital-foundry-future-of-anti-aliasing?page=2
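    That distinction can be sketched with a toy coverage model (illustrative only; the sample positions below are made up, not any real GPU's pattern): a feature narrower than a pixel is either fully hit or fully missed by a single centre sample, while multiple samples recover its fractional coverage. Alpha-tested textures don't benefit the same way, because the texture fetch and alpha test run once per pixel, so every covered sample gets the same pass/fail result.

```python
# Toy model of MSAA coverage on subpixel geometry (illustrative only).
# A vertical line 0.25px wide crosses a pixel whose x range is [0, 1).

def coverage(sample_xs, line_x0=0.0, line_x1=0.25):
    """Fraction of sample positions falling inside the thin line."""
    hits = sum(1 for x in sample_xs if line_x0 <= x < line_x1)
    return hits / len(sample_xs)

centre = [0.5]                         # 1x: single centre sample
grid4 = [0.125, 0.375, 0.625, 0.875]   # assumed 4x ordered-grid offsets

print(coverage(centre))  # 0.0  -> the thin line is missed entirely
print(coverage(grid4))   # 0.25 -> matches the line's true coverage
```

    With one sample the line pops in and out of existence as it moves; with four, its contribution to the pixel varies smoothly, which is exactly the subpixel case the article describes.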
     
  8. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    OT
    I will just answer you quickly, as the link I made between the ongoing discussion about eDRAM/scratchpads in Durango (and in the 360) is indeed not that obvious. I was just pointing out that if single chips are an option this gen, when wafer costs, silicon implementation costs and overall R&D are at an all-time high (with the associated risks), they were also an option last gen, when all of those costs were lower. eDRAM is sort of parallel to that, as a single chip would have meant a chip bigger than the Cell, Xenon, Xenos or RSX, which allows for a wider bus even after a couple of shrinks (that is why I referred to the G80 and its heirs as an example; I did not mean that the single chip would have to be that big).
    I think the cost of a wider bus is overstated; it was debated in the run-up to the upcoming consoles, and ultimately both systems use a 256-bit bus. I think it was an option last gen, along with a single chip.

    Anyway, this is not a "redo last-gen hardware" thread. Though Durango's solution seems pretty good (putting possible production issues aside /noise), I'm not sure that referring to last-gen systems is a good way to sell eDRAM or eSRAM/scratchpads, as I think both the 360 and the PS3 were in fact suboptimal on more than one count (cf. my reference to Nintendo, which may have chosen a more proper CPU architecture than both MSFT and Sony, who have gone for more similar CPUs this gen; I would say the Jaguar cores are closer to Broadway than to either the Cell or Xenon).
     
    #3648 liolio, Jun 6, 2013
    Last edited by a moderator: Jun 6, 2013
  9. loekf

    Regular

    Joined:
    Jun 29, 2003
    Messages:
    617
    Likes Received:
    65
    Location:
    Nijmegen, The Netherlands
    AFAIK, the Wii uses a separate die for DRAM mounted side by side in a single package with the CPU and GPU, much like the original Xbox 360.

    Microsoft apparently went with 32 MB of fast, refresh-free static RAM running synchronously (I guess synced to the on-chip bus). It will save you power (fewer static currents), but it's killing for yield. As someone explained, you can repair SRAM, but only so much. If the yield is horrible, they have to throw away the complete die, which for a big die is quite costly.

    SRAM also saves you extra mask costs compared to on-chip eDRAM, so it's not surprising they went this route. If latency was a concern, they also could have gone with a small DRAM inside the package.
     
  10. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    No, Wii U integrates it on the GPU die.
    Then it's a severe planning mistake: not enough redundancy. If the yield were so bad that redundancy in the SRAM didn't help, one wouldn't get a single CPU core or the GPU working (as there are a lot of non-redundant parts there). As others have said already, getting SRAM to yield is one of the easier problems.
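    The redundancy point can be made concrete with a standard Poisson yield model (the defect density and areas below are made-up illustrative numbers, not actual Durango figures): a block that can repair up to k defects, e.g. via spare SRAM rows and columns, yields the cumulative Poisson probability rather than the bare exponential that non-redundant logic gets.

```python
import math

# Poisson defect yield model (illustrative numbers, not actual Durango data).
# A block of area A (cm^2) at defect density D0 sees lambda = A * D0 defects
# on average; it works if no more than `tolerated` defects land in it
# (repairable SRAM tolerates a few, random logic tolerates none).

def poisson_yield(area_cm2, d0, tolerated=0):
    lam = area_cm2 * d0
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(tolerated + 1))

D0 = 0.5  # assumed defects per cm^2
print(f"logic, no repair:   {poisson_yield(1.0, D0):.3f}")    # ~0.607
print(f"SRAM, no repair:    {poisson_yield(0.8, D0):.3f}")    # ~0.670
print(f"SRAM, 4 repairable: {poisson_yield(0.8, D0, 4):.3f}") # ~1.000
```

    Even modest repair capacity pushes the SRAM array's yield to nearly 100%, which is why a correctly provisioned eSRAM block should not be the yield limiter on a die that also carries large amounts of non-repairable logic.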
     
    #3650 Gipsel, Jun 6, 2013
    Last edited by a moderator: Jun 6, 2013
  11. McHuj

    Veteran Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,613
    Likes Received:
    869
    Location:
    Texas
    SRAM will have much higher leakage than eDRAM, even accounting for eDRAM's refresh, and it will only get worse as nodes shrink.

    IBM paper on Blue Gene's eDRAM:

    http://www.d.umn.edu/~tkwon/course/5315/HW/BG/BG_DRAM.pdf
     
  12. bagofsuck

    Newcomer

    Joined:
    Apr 22, 2008
    Messages:
    219
    Likes Received:
    0
    what's new so far? are we still in limbo with the GPU down clock?
     
  13. Ketto

    Newcomer

    Joined:
    Jul 30, 2012
    Messages:
    39
    Likes Received:
    0
    Location:
    Winter Park, Florida; and London UK.
    So far it's still unfounded rumors originating from NeoGAF. Not something to worry about or take stock in unless it's being repeated by more places with their own sources.
     
  14. Wynix

    Veteran

    Joined:
    Feb 23, 2013
    Messages:
    1,052
    Likes Received:
    57
    The rumour is on ice.
     
  15. BeyondTed

    Newcomer

    Joined:
    May 20, 2013
    Messages:
    233
    Likes Received:
    0
    Yes, I do see the pattern. Hence what I have been saying about 1T-SRAM. I know there is a reason they are using eDRAM; that is part of my point. While it is possible that it is 6T or 8T given the transistor-count argument, I think there is another good argument that it is a type of SRAM physical IP that is actually eDRAM at the core:

    Due to its one-transistor bit cell, 1T-SRAM is smaller than conventional (six-transistor, or “6T”) SRAM, and closer in size and density to embedded DRAM (eDRAM). At the same time, 1T-SRAM has performance comparable to SRAM at multi-megabit densities, uses less power than eDRAM and is manufactured in a standard CMOS logic process like conventional SRAM.

    MoSys markets 1T-SRAM as physical IP for embedded (on-die) use in System-on-a-Chip (SoC) applications. It is available on a variety of foundry processes, including Chartered, SMIC, TSMC, and UMC. Some engineers use the terms 1T-SRAM and "embedded DRAM" interchangeably, as some foundries provide MoSys's 1T-SRAM as "eDRAM". However, other foundries provide 1T-SRAM as a distinct offering.

    In other words, it is eDRAM at the memory-cell level but is called eSRAM in the industry.



    I think the reasons the others use eDRAM apply to the Xbox One too. You see it in everything from pretty low-cost Wii U designs all the way up to big PPC server chips.



    As for IBM, I do not know, but I saw the LinkedIn profile mentioning a 32nm Xbox chip, so if that is real then we can say they worked on that particular project. I think it is reasonable that they helped with the other project too. MS certainly built up their own team, but that does not mean they would turn away help from a good partner. A company can have a great design and still benefit from IBM's help in other areas related to manufacturing and yield. (I know from first-hand experience.) MS's team is not the size of IBM's, and it can really help to use proven blocks and to get help from the big guys regarding yield and manufacturability. MS has no fab, for example; that is a huge area of expertise that IBM has, and it makes a big difference when you apply that expertise before tapeout through reviews, etc.
     
  16. Silenti

    Regular

    Joined:
    May 25, 2005
    Messages:
    711
    Likes Received:
    423
    Controller Question - I assume this is the correct thread at this point.
    From the following story - http://news.xbox.com/2013/06/xbox-one-controller-feature

    The story only mentions rumble/vibration motors and then uses the word "haptic" to describe the feedback. Do we know for a fact that the ability to control the pressure required on the triggers is present, or are there just rumble motors that affect the triggers? There is a big difference between the two. Having force feedback / pressure feedback in the triggers could be very nice; if it is just rumble motors, I am nowhere near as intrigued.
     
  17. bkilian

    Veteran

    Joined:
    Apr 22, 2006
    Messages:
    1,539
    Likes Received:
    3
    As far as I know, it's just tiny rumble motors in the triggers, no actual force feedback.
     
  18. Silenti

    Regular

    Joined:
    May 25, 2005
    Messages:
    711
    Likes Received:
    423
    That seems odd considering the talk of "OMG they can make gun triggers that feel like REAL triggers." Something must have been mistaken somewhere. Bummer.
     
  19. Tap In

    Legend

    Joined:
    Jun 5, 2005
    Messages:
    6,382
    Likes Received:
    65
    Location:
    Gravity Always Wins
    Allegedly there is now a magnetic connector in the triggers and no more springs or dead zone, so I do not know about the technology or how that would affect the trigger feel.
     
  20. Cjail

    Cjail Fool
    Veteran

    Joined:
    Feb 1, 2013
    Messages:
    2,027
    Likes Received:
    211
    They say reduced deadzones.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.