News & Rumors: Xbox One (codename Durango)

Discussion in 'Console Industry' started by Acert93, Mar 8, 2012.

Thread Status:
Not open for further replies.
  1. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,693
    Likes Received:
    1,516
    senjetsuu sage on neogaf is really betting against this one...

    he's even willing to verify his info with a mod, on top of already betting his account it's false. not looking good for this rumor.
     
  2. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,755
    Likes Received:
    267
    Location:
    In the land of the drop bears
Is that only the downclock, or is it to do with the eSRAM stuff overall?
     
  3. temesgen

    Veteran Regular

    Joined:
    Jan 1, 2007
    Messages:
    1,680
    Likes Received:
    486
    To be fair this wouldn't be the first time someone thought they were right and ended up with a ban over there for bad info... IMO you have to stick to reputable sources that have a track record and take everyone else with a grain of salt.

    Edit: not to mention that several have taken him to task for some of his other comments, he may not be the most objective person on things related to XB1. That said he thinks this is wrong but who knows???
     
  4. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,693
    Likes Received:
    1,516
    yeah thuway and proelite have been banned for being wrong in the past.

    i know thuway was one of the early ones to get a xbone spec sheet, but since then he hasn't been right on anything provable that i know of. or wrong really, since he's not said anything concrete that i know of.

    i dont really consider either of them too reliable anymore, proelite never was.

    i consider Matt reliable, but he doesn't say or even post much.

    more senjetsuu

     
  5. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,565
    Likes Received:
    4,744
    Location:
    Well within 3d
    My thinking was a scenario where there was a certain percentage of parts that were showing higher than desired error rates early on, but that improvements weren't coming along as expected.

The errors would be transient and comparatively rare, possibly at corners of the voltage and thermal envelope of multiple test devices in hundreds of runs. The physical size and unknown implementation of the eSRAM could lead to some unexpectedly high error rates if variation or run-time temperature changes scaled worse than projected.
    Raising voltages and adding timing margin could be used to increase the safety margins, if whatever level of tuning that was originally designed proved less successful than expected.

    It could be a matter of time before manufacturing improvements reduced the severity of variability, which would bring things further within the design parameters of the tuning circuitry and redundancy, which would then allow the safety margins to be reduced. If they felt they had the time.

    One counterpoint is that the error rates probably don't need to meet the same requirements as AMD's mainline x86 chips.
    It also could be a problem that they could work feverishly to improve in the coming months, and perhaps grit their teeth for a time until things caught up to where they needed to be.
    I've only brought this up because the eSRAM->overheating->25% down-clock rumor seems odd to me.
     
  6. babybumb

    Regular

    Joined:
    Dec 9, 2011
    Messages:
    608
    Likes Received:
    24
    What a double whammy..

If they only started in Fall 2010 they should maybe have gone for GDDR5 as well... there was no way this was releasing in 2012.
     
  7. BeyondTed

    Newcomer

    Joined:
    May 20, 2013
    Messages:
    233
    Likes Received:
    0
    Design Techniques for Yield

    If it is 6T or 8T then the heat generated is a non-issue.

    If it is 1T then the area and the heat is pretty small compared with the CPU and GPU.

    Finally, if there is a rumor of a yield issue, whenever you design a large array of repeating structures in a SOC (large = large enough to impact the yield and cost calculations) then you employ various pretty mature design, test and fuse techniques to solve the problem before you tape out the chip.

    Everyone here should be familiar with AMD and Nvidia fusing off clusters for yield.

    Everyone here should be familiar with AMD fusing off CPU cores and/or modules.

Everyone here should be familiar with Intel fusing off CPU cores and/or cache and features (SMT).

    Everyone here should not be shocked that the memory industry has had replacement rows, columns and modules for quite some time. Also 2 bits detect and 1 bit correct and much more than that for quite some time too.
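The "2 bits detect and 1 bit correct" scheme mentioned above is classic SECDED, built on Hamming codes. As an illustrative sketch only (this is not a claim about what Durango's eSRAM actually implements), here is single-bit correction with a minimal Hamming(7,4) code in Python:

```python
def hamming74_encode(d):
    """Encode 4 data bits (d1..d4) into a 7-bit Hamming codeword.
    Bit positions 1..7, with parity bits at positions 1, 2 and 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Return (corrected codeword, syndrome). A nonzero syndrome is the
    1-based position of a single flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the offending bit back
    return c, syndrome

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[5] ^= 1  # simulate a single transient bit flip
fixed, pos = hamming74_correct(corrupted)
print(fixed == word, pos)  # True 6
```

The syndrome directly names the flipped bit position, which is what makes hardware correction cheap: a handful of XOR trees and a decoder per word.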



So please stop the nonsense about eSRAM power and/or yield issues. :roll:



I am quite sure that MS (allied with AMD, ATI and IBM) knows how to handle such a simple yield issue (via design) before tape-out, certainly not after the press conference/release. Please, do you know what you are talking about?!? :roll:

Has anyone read about IBM's yield and fault-tolerance techniques for Power8? GPU and CPU harvesting alone is enough, and it's nothing in the face of what can be done today.
     
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,576
    Likes Received:
    16,034
    Location:
    Under my bridge
What has IBM got to do with XB1? I don't believe this rumour at all, but your argument isn't very solid, because engineering mistakes and faults do happen. "Do you think MS and AMD aren't smart enough to design a console that doesn't burn itself up and have massive failure rates? Do you really think Intel, the world leaders in chip design, are going to be stupid enough to make a CPU with a fatal floating-point calculation error?" Shit happens, as the saying goes, and no amount of experience or brilliance is an impenetrable defence against that.

The argument against the rumour needs to be based on engineering understanding, not corporate reputations (especially for companies not involved in the product!).
     
  9. dumbo11

    Regular

    Joined:
    Apr 21, 2010
    Messages:
    440
    Likes Received:
    7
But would there be a potential issue with heat 'across the chip' (e.g. a temperature gradient)?
     
  10. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,565
    Likes Received:
    4,744
    Location:
    Well within 3d
    If the characterization of the device and the process is freakishly dead-on, sure.
    It's not like these companies arrived at their experienced state by not going through trial and error for every design, particularly for actual physical effects and manufacturing unknowns.
Test runs exist for a reason, and test and fuse didn't save AMD from Llano, or allow it to scale its gate oxide at 65nm, or give it competitive L3 cache array density for Barcelona, or fully erase that gap at 45nm.

    Are you saying there are CUs fused off for Durango, and how does that help problems not in the CU array?
    Same question goes to cores.

    That doesn't work when there's just the One Bin.

Are you saying that the eSRAM has ECC? It's a handy feature to minimize errors, but depending on the error requirements, not always sufficient.

    The lateness of the rumor is a reason for skepticism. The existence of test runs and respins is evidence that not everything can be solved before a chip physically exists.

    You'd have to provide links, and go into more detail on what you mean by fault tolerance. The big iron processors have a massively lower focus on yield than a console component.
     
  11. patsu

    Legend

    Joined:
    Jun 25, 2005
    Messages:
    27,709
    Likes Received:
    145
    It's his job to guide the users and manage the relationships.

    I don't think he needs to reveal the specs to address the concern.

    There you go.
     
  12. HollovVpo1nt

    Newcomer

    Joined:
    Apr 17, 2010
    Messages:
    123
    Likes Received:
    0
More and more prominent posters on NeoGAF are reporting the downclocking as being true.
     
  13. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,175
    Location:
    La-la land
It'd be exceedingly improbable that it would have ECC, or even parity actually. It's not a mainframe product, and those 1.6 billion transistors are a huge-enough-as-it-is investment already, and that's merely for the SRAM cells; even more would be required to turn those cells into a functioning, addressable memory array, and more memory and logic still to make it into an autonomously working cache, as has been suggested by some people.

    ECC on top would be what, 20-25% overhead for "sufficient" accuracy? Parity is generally 1 bit/byte. Plus additional logic to manage the detection, and in case of ECC, correction, of course. Sounds rather unlikely.
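The 20-25% overhead estimate can be sanity-checked: a SECDED (extended Hamming) code over k data bits needs the smallest r with 2^r ≥ k + r + 1 check bits, plus one overall parity bit for double-error detection. A quick illustrative back-of-the-envelope in Python (not tied to any actual eSRAM implementation):

```python
def secded_check_bits(k: int) -> int:
    """Check bits for single-error-correct/double-error-detect
    (extended Hamming) protection over k data bits."""
    r = 0
    while 2 ** r < k + r + 1:  # Hamming bound for correcting one bit
        r += 1
    return r + 1  # extra overall parity bit adds double-error detection

for k in (8, 32, 64):
    c = secded_check_bits(k)
    print(f"{k}-bit word: {c} check bits ({100 * c / k:.1f}% overhead)")
# 8-bit words cost 62.5%, 32-bit words ~21.9%, 64-bit words 12.5%;
# plain parity is a flat 1 bit per byte, i.e. 12.5%.
```

Protecting at wider word granularity is cheaper per bit, so the practical overhead depends heavily on the array's access width; ~22% at 32-bit granularity lands in the 20-25% range quoted above.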

    Don't think I believe in any 20% downclock rumors. It's rather late in the game for surprise stuff like that, and it's not as if the clocks were enormously high to begin with. Usually these fantastic rumors turn out to be people just making waves to get attention.

    ...And of course, even if it was to be true I'm sure xbone will still be a solid piece of engineering. After all, John Carmack said so. (Appeal to authority, yes I know... Sue me! :))
     
  14. scently

    Veteran Regular

    Joined:
    Jun 12, 2008
    Messages:
    1,113
    Likes Received:
    502
Indeed. It's not that I have any industry knowledge or anything like that, but just using logical thinking, one would assume that issues requiring such drastic measures would have been ironed out in time. We already have documents from as far back as Feb 2012 and as recent as early this year detailing what the system specs are. There seems to be more certainty from MS this time around with regards to what the system makeup is going to be. So this sounds ridiculous really.

    Btw with E3 around the corner we will see games that are running on the hardware.
     
  15. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,693
    Likes Received:
    1,516
    err, links? or at least names.

not sure who's "more prominent" in the rumor community than those already mentioned, anyway. I'm probably forgetting someone, but since Lherre shut up, Matt is the only true insider there that I can think of currently. Oh, and Crazy Buttocks on a Train, but I've never noticed him speak on hardware; he's a software insider.

    yes it will help some, but it wont clear anything up if they dont mention the clocks. those predisposed to this rumor will claim the software was made on pre-downclock dev kits or what have you if it looks too good.

    gaf already seems to have some ready made excuses. Chiefly there's always been a sub-rumor (again from thuway and the like actually, he is one of the prominent ones saying this for sure) that early Durango dev kits used a 7970. Thuway has said if E3 software looks great it's because it was made on a 7970 and we will see historic pre-release downgrades.

    The 7970 thing obviously seems like total bunk for countless reasons, not least I've specifically heard it isn't remotely true from others, yet it has stuck throughout and is basically treated as fact. Such is neogaf.

The mods just tagged thuway with "The bird speaks truth" LOL. As I said, just spit some anti-MS rumors and your credibility retroactively grows by the second. SemiAccurate is of course also super accurate in any discussion now, where normally it's treated as the worst of jokes (e.g., when reporting bad Nvidia news).

    Thuway is in some trouble if this turns up false...

    then again i'm sure he'll be excused in this case. "plans changed back to 800 mhz" or, ms never reveals their clocks officially and he'll skate by.
     
  16. Themrphenix

    Newcomer

    Joined:
    Jun 1, 2013
    Messages:
    58
    Likes Received:
    6
    Location:
    Westerly Rhode Island
    This is what I heard about the downclock rumor!

The rumor is true, but not for the reason you think or they claim! Microsoft was doing some test runs of the chip with a higher clock speed on the GPU. I heard they were getting some decent results around 900 to 950 MHz. Then with the last couple of batches they weren't getting good yields, and they went back to the original 800 MHz clock speed!
     
  17. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,755
    Likes Received:
    267
    Location:
    In the land of the drop bears
Microsoft's target clock from the start has been 800 MHz.

An extra 100-150 MHz on top of that would require a large bump in voltage and thermals. I think 800 MHz is the sweet spot.
     
  18. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,693
    Likes Received:
    1,516
    Hmm, very interesting...

    It could be that they still managed to squeeze 900 out?
     
  19. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,755
    Likes Received:
    267
    Location:
    In the land of the drop bears
    I wouldn't take his word over GAF. He tried to pass himself off as a developer on the forums previously.
     
  20. Bagel seed

    Veteran

    Joined:
    Jul 23, 2005
    Messages:
    1,533
    Likes Received:
    16
    New user on B3D with a rumor is 0-12 so far.
     