News & Rumors: Xbox One (codename Durango)

senjetsuu sage on neogaf is really betting against this one...

Nobody else on this site outs their sources. And clearly I'm in no position to say who their sources are, or how they come by that information, but I've heard from a very reliable individual whom I trust completely, and that individual has said both that this is not true and that they have heard no such thing. I place extra weight on who is saying it, more so than on what I might hear from any site, or even from typically reliable posters on this forum, because it is my firm belief that if this person doesn't know what's going on, then shit has truly hit the fan.

People can take that for what it's worth. If a mod deems it necessary to ask me why I'm so sure, I'm more than happy to tell them why, on the grounds that it remains confidential, of course.

he's even willing to verify his info with a mod, on top of already betting his account it's false. not looking good for this rumor.
 

Is that only about the downclock, or does it relate to the eSRAM stuff overall?
 

To be fair, this wouldn't be the first time someone thought they were right and ended up with a ban over there for bad info... IMO you have to stick to reputable sources that have a track record and take everyone else with a grain of salt.

Edit: not to mention that several have taken him to task for some of his other comments; he may not be the most objective person on things related to XB1. That said, he thinks this is wrong, but who knows???
 

yeah thuway and proelite have been banned for being wrong in the past.

i know thuway was one of the early ones to get an xbone spec sheet, but since then he hasn't been right on anything provable that i know of. or wrong really, since he's not said anything concrete that i know of.

i don't really consider either of them too reliable anymore; proelite never was.

i consider Matt reliable, but he doesn't say or even post much.

more senjetsuu

Yep, the manufacturing issues have been confirmed many times over, and by many others outside the individual I'm referencing currently. The ESRAM is indeed having manufacturing issues and giving them some headaches, but while those headaches were serious enough to delay the console back in December of last year, they aren't nearly as serious now, and things are more or less precisely where they want them to be. The yields have improved, and Microsoft are pretty confident that ESRAM will run extremely cool without any heating issues. It's just a bitch to manufacture, but this is nothing they didn't anticipate and, again, they are more or less where they thought they would be at this point (that is said quite often). They also expect it will easily allow them to cost-reduce the console faster than might have been the case going with any other design route, because they can have this made at more places than is the case for other parts they considered.

They also believe ESRAM is going to be a bigger payoff than people are expecting right now, which is largely why I've been stressing the ESRAM in other threads. I've mostly avoided using stuff I've heard on my own, because I can't exactly say where I got it from nor prove it to anyone, and have instead mostly used posts from Beyond3D to support my suggestion and belief that ESRAM will be a design win for the Xbox One.
 
My question was more in regards to the testing process surrounding eSRAM. The process for fabbing a new chip can often have a bit of back and forth between the fab and design house to tweak the chip after testing for functional problems, yields, etc. As such, like 3dilettante said, it seems surprising that they didn't (hypothetically) catch this till now, so I was wondering if the process for testing eSRAM timings is more prone to error or... I dunno.

My thinking was a scenario where there was a certain percentage of parts that were showing higher than desired error rates early on, but that improvements weren't coming along as expected.

The errors would be transient and comparatively rare, possibly at corners in the voltage and thermal envelope of multiple test devices in hundreds of runs. The physical size and unknown implementation of the eSRAM could lead to some unexpectedly high error rates if factors like variation or run-time temperature changes proved to be factors that scaled worse than projected.
Raising voltages and adding timing margin could be used to increase the safety margins, if whatever level of tuning that was originally designed proved less successful than expected.

It could be a matter of time before manufacturing improvements reduced the severity of variability, which would bring things further within the design parameters of the tuning circuitry and redundancy, which would then allow the safety margins to be reduced. If they felt they had the time.
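To put some rough numbers on how the physical size of the array magnifies even tiny per-bit error probabilities, here's a back-of-the-envelope sketch; the error rate and access pattern below are pure assumptions for illustration, not anything from the leaks:

Code:
# Toy model: how a tiny per-bit transient error probability scales up over
# a 32 MB array. Every number here is an assumption chosen for illustration.
array_bits = 32 * 1024 * 1024 * 8          # 32 MB of eSRAM expressed in bits
per_bit_error_prob = 1e-15                 # assumed per-bit, per-access upset probability
full_array_accesses_per_sec = 800e6        # assumed worst case: whole array touched every cycle

expected_errors_per_access = array_bits * per_bit_error_prob
expected_errors_per_sec = expected_errors_per_access * full_array_accesses_per_sec

print(f"expected errors per full-array access: {expected_errors_per_access:.2e}")
print(f"worst-case expected errors per second: {expected_errors_per_sec:.1f}")

Even a per-bit probability that looks negligible in isolation turns into a steady trickle of potential upsets at array scale, which is exactly the sort of thing the extra voltage and timing margin would be there to absorb.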

One counterpoint is that the error rates probably don't need to meet the same requirements as AMD's mainline x86 chips.
It also could be a problem that they could work feverishly to improve in the coming months, and perhaps grit their teeth for a time until things caught up to where they needed to be.
I've only brought this up because the eSRAM->overheating->25% down-clock rumor seems odd to me.
 
What a double whammy...

If they only started in Fall 2010 they should maybe have gone for GDDR5 as well... there was no way this was releasing in 2012.
 
Design Techniques for Yield

That's one of the parts I don't get with the rumors concerning overheating and lumping in the eSRAM. Microsoft could be that inflexible in its TDP requirements, but I'm not clear on how a pool of SRAM off to the side can make a chip with 12 CUs founder when a chip with 18 is allegedly just fine.

It may just be the popular conclusion to jump to, since it's the one large difference between the two architectures. On-die memory should be engineered so it doesn't generate enough heat to force a down-clock that high.
Other tricks like raising SRAM voltage can be used to combat variability, and that can raise power consumption somewhat. Why that can't be contained to the eSRAM would be the question raised, unless they ran out of headroom or some parts of the interface on-die need higher voltages as well.

My first reaction was that there was an uncore downclock like the one VGleaks said happened with Orbis.
The bandwidth numbers in the Durango diagrams are higher and point to a more aggressive uncore implementation (and maybe some diagram ambiguities too, pick your poison), and that would mean Durango's uncore could draw more power.
The counterargument is that there's no reason given for why that would drag down the GPU or eSRAM in particular. The CPU and GPU are capable of operating at a different multiplier ala Sony.

If it is 6T or 8T then the heat generated is a non-issue.

If it is 1T then the area and the heat are pretty small compared with the CPU and GPU.
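For a rough sense of scale, simple arithmetic on the 32 MB capacity, assuming plain 6T cells and ignoring decoders, sense amps and any redundancy or ECC logic:

Code:
# Raw cell-array transistor count for 32 MB of eSRAM, assuming classic 6T cells.
bytes_total = 32 * 1024 * 1024          # 32 MB
bits_total = bytes_total * 8
transistors_per_cell = 6                # 6T SRAM cell; 8T would scale this by 8/6
cell_transistors = bits_total * transistors_per_cell
print(f"{cell_transistors / 1e9:.2f} billion transistors")   # ~1.61 billion

That's where the ~1.6 billion transistor figure for the cell array alone (quoted further down the thread) comes from. It's a lot of die area, but only a tiny fraction of those cells switch on any given cycle, which is why the heat from the array itself shouldn't be the problem.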

Finally, if there is a rumor of a yield issue, whenever you design a large array of repeating structures in a SOC (large = large enough to impact the yield and cost calculations) then you employ various pretty mature design, test and fuse techniques to solve the problem before you tape out the chip.

Everyone here should be familiar with AMD and Nvidia fusing off clusters for yield.

Everyone here should be familiar with AMD fusing off CPU cores and/or modules.

Everyone here should be familiar with Intel fusing off CPU cores and/or cache and features (SMT).

Everyone here should not be shocked that the memory industry has had replacement rows, columns and modules for quite some time. Likewise 2-bit-detect/1-bit-correct ECC, and much more than that, for quite some time too.
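As a toy illustration of why row/column redundancy buys so much yield, here's a minimal Poisson-style sketch; the defect density, array area and spare counts are all made-up numbers, not anything specific to Durango:

Code:
import math

# Toy yield model: defects landing in the array follow a Poisson distribution,
# and each spare row can repair one defective row. All parameters are assumed.
defect_density = 0.25      # assumed defects per cm^2 that land in the array
array_area_cm2 = 0.8       # assumed eSRAM array area
lam = defect_density * array_area_cm2   # expected number of defects in the array

def yield_with_spares(lam, spares):
    # Probability that the number of defects is <= the number of repairable rows.
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(spares + 1))

for spares in (0, 1, 2, 4):
    print(f"{spares} spare rows -> estimated array yield {yield_with_spares(lam, spares):.3f}")

Even one or two repairable rows move the array yield from roughly 82% to 98-99% in this toy example, which is the whole point of designing the redundancy in before tape-out.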



So please stop the nonsense about eSRAM power and/or yield issues. :rolleyes:



I am quite sure that MS (allied with AMD, ATI and IBM) know how to handle such a simple yield issue (via design) before tape-out. Certainly not after the press conference/release. Please, do you know what you are talking about?!? :rolleyes:

Has anyone read about IBM and their yield and fault-tolerance techniques for Power8? GPU and CPU harvesting alone would be enough, and even that is nothing next to what can be done today.
 
What has IBM got to do with XB1? I don't believe this rumour at all, but your argument isn't very solid, because engineering mistakes and faults do happen. "Do you think MS and AMD aren't smart enough to design a console that doesn't burn itself up and have massive failure rates? Do you really think Intel, the world leaders in chip design, are going to be stupid enough to make a CPU with a fatal floating-point calculation error?" Shit happens, as the saying goes, and no amount of experience or brilliance is an impenetrable defence to that.

The argument against the rumour needs to be based in engineering understanding, not corporate reputations (especially for companies not involved in the product!).
 
Finally, if there is a rumor of a yield issue, whenever you design a large array of repeating structures in a SOC (large = large enough to impact the yield and cost calculations) then you employ various pretty mature design, test and fuse techniques to solve the problem before you tape out the chip.
If the characterization of the device and the process is freakishly dead-on, sure.
It's not like these companies arrived at their experienced state by not going through trial and error for every design, particularly for actual physical effects and manufacturing unknowns.
Test runs exist for a reason, and test and fuse didn't save AMD from Llano, or allow it to scale its gate oxide at 65nm, or give it competitive L3 cache array density for Barcelona nor fully erase that gap at 45nm.

Everyone here should be familiar with AMD and Nvidia fusing off clusters for yield.

Everyone here should be familiar with AMD fusing off CPU cores and/or modules.
Are you saying there are CUs fused off for Durango, and how does that help problems not in the CU array?
Same question goes to cores.

Everyone here should be familiar with Intel fusing off CPU cores and/or cache and features (SMT).
That doesn't work when there's just the One Bin.

Everyone here should not be shocked that the memory industry has had replacement rows, columns and modules for quite some time. Likewise 2-bit-detect/1-bit-correct ECC, and much more than that, for quite some time too.
Are you saying that the eSRAM has ECC? It's a handy feature for minimizing errors, but depending on the error requirements, not always sufficient.

I am quite sure that MS (allied with AMD, ATI and IBM) know how to handle such a simple yield issue (via design) before tape-out. Certainly not after the press conference/release.
The lateness of the rumor is a reason for skepticism. The existence of test runs and respins is evidence that not everything can be solved before a chip physically exists.

Has anyone read about IBM and their yield and fault-tolerance techniques for Power8? GPU and CPU harvesting alone would be enough, and even that is nothing next to what can be done today.
You'd have to provide links, and go into more detail on what you mean by fault tolerance. The big iron processors have a massively lower focus on yield than a console component.
 
Are you saying that the eSRAM has ECC?
It'd be exceedingly improbable that it would, or even parity, actually. It's not a mainframe product, and those 1.6 billion trannies are a huge-enough-as-it-is investment already, and that's merely for the SRAM cells; even more would be required to turn those cells into a functioning, addressable memory array, and more memory and logic still to make it into an autonomously working cache, as has been suggested by some people.

ECC on top would be what, 20-25% overhead for "sufficient" accuracy? Parity is generally 1 bit/byte. Plus additional logic to manage the detection and, in the case of ECC, the correction, of course. Sounds rather unlikely.
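For what it's worth, those overhead figures can be sanity-checked with the standard Hamming/SECDED relation (smallest r with 2^r >= k + r + 1 for single-error correction on a k-bit word, plus one extra bit for double-error detection). A quick sketch, nothing Durango-specific:

Code:
# Check-bit overhead for SECDED (single-error-correct, double-error-detect)
# protection on a k-bit data word.
def secded_check_bits(data_bits):
    r = 1
    while 2**r < data_bits + r + 1:
        r += 1
    return r + 1   # +1 for the extra parity bit that adds double-error detection

for k in (8, 32, 64, 256):
    c = secded_check_bits(k)
    print(f"{k:>3}-bit word: {c} check bits ({100 * c / k:.1f}% overhead)")

So ~22% lines up with protecting 32-bit words, and wider words drop the overhead quickly, but either way it's extra cells plus check/correct logic in a part that has no obvious need for it.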

Don't think I believe in any 20% downclock rumors. It's rather late in the game for surprise stuff like that, and it's not as if the clocks were enormously high to begin with. Usually these fantastic rumors turn out to be people just making waves to get attention.

...And of course, even if it was to be true I'm sure xbone will still be a solid piece of engineering. After all, John Carmack said so. (Appeal to authority, yes I know... Sue me! :))
 
Indeed. It's not that I have any industry knowledge or anything like that, but just using logical thinking, one would assume that issues requiring such drastic measures would have been ironed out in time. We already have documents from as far back as Feb 2012 and as recent as early this year detailing what the system specs are. There seems to be more certainty from MS this time around with regards to what the system makeup is going to be. So this sounds ridiculous, really.

Btw with E3 around the corner we will see games that are running on the hardware.
 
More and more prominent posters on NeoGAF are reporting the downclocking as being true.

err, links? or at least names.

not sure who's "more prominent" in the rumor community than those already mentioned, anyway. I'm probably forgetting someone, but since Lherre shut up, Matt is the only true insider there that I can think of currently. Oh, and Crazy Buttocks on a Train, but I've never noticed him speak on hardware; he's a software insider.

Btw with E3 around the corner we will see games that are running on the hardware.

yes it will help some, but it won't clear anything up if they don't mention the clocks. those predisposed to this rumor will claim the software was made on pre-downclock dev kits or what have you if it looks too good.

gaf already seems to have some ready-made excuses. Chiefly there's always been a sub-rumor (again from thuway and the like, actually; he is one of the prominent ones saying this for sure) that early Durango dev kits used a 7970. Thuway has said if E3 software looks great it's because it was made on a 7970 and we will see historic pre-release downgrades.

The 7970 thing obviously seems like total bunk for countless reasons, not least that I've specifically heard from others it isn't remotely true, yet it has stuck throughout and is basically treated as fact. Such is neogaf.

The mods just tagged thuway with "The bird speaks truth" LOL. As I said, just spit some anti-MS rumors and your credibility retroactively grows by the second. SemiAccurate is of course also super accurate in any discussion now, where normally it's treated as the worst of jokes (eg, when reporting bad Nvidia news).

Thuway is in some trouble if this turns up false...

then again i'm sure he'll be excused in this case. "plans changed back to 800 MHz", or MS never reveals their clocks officially and he'll skate by.
 
This is what I heard about the downclock rumor!

The rumor is true, but not for the reason you think or they claim! Microsoft was doing some test runs of the chip with a higher clock speed on the GPU. I heard they were getting some decent results, around 900 to 950 MHz! Then the last couple of batches weren't getting good yields, so they went back to the original 800 MHz clock speed!
 

Microsoft's target clock from the start has been 800 MHz.

An extra 100-150 MHz on top of that would require a large bump in voltage and thermals. I think 800 MHz is the sweet spot.
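As a rough feel for why an extra 100-150 MHz isn't free: dynamic power scales roughly with C * V^2 * f, so a clock bump that also needs a voltage bump compounds quickly. A minimal sketch with entirely assumed voltages:

Code:
# Rough dynamic-power scaling: P_dyn ~ C * V^2 * f (leakage ignored).
# The voltages below are assumed purely for illustration.
base_freq, base_v = 800e6, 1.00     # assumed baseline: 800 MHz at 1.00 V
test_freq, test_v = 950e6, 1.10     # assumed: 950 MHz needing a ~10% voltage bump

scale = (test_freq / base_freq) * (test_v / base_v) ** 2
print(f"dynamic power scales by ~{scale:.2f}x")   # ~1.44x for these numbers

For these made-up numbers, a ~19% clock increase costs roughly 44% more dynamic power before leakage is even considered, which is the kind of thermal headroom a fixed console chassis and TDP may simply not have.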
 

Hmm, very interesting...

Could it be that they still managed to squeeze 900 out?
 