Xbox One (Durango) Technical hardware investigation

I've reopened the Durango rumours thread and posted the last few pages of non-technical rumour discussion there. We can theorise here, on a technical level, about whether the downclock is or isn't real. We can talk about how biased and insiderish GAF posters are over there. ;)
 
(((interference))) spoke about 3GB for the OS long before Kotaku, so there's no need to go far to look for a reliable source.
Also DF and VGLeaks, if I am not mistaken, say the same.
 
I've removed the rumour talk because it's still GAF rumouring. Wait until some reputable sites provide some reference points. I'm sure Richard Leadbetter is phoning around. The topic of the downclock belongs here only on a technical level, not one reliant on the reputations of supposed insiders.
 
Yeah, I wouldn't put any stock in that. (And no, I don't know anything, but I do know they would have to drop to 600 MHz for that much of a downgrade, and that seems extremely unlikely.)
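For scale, here is a back-of-the-envelope sketch of how peak GPU throughput moves with clock, assuming the leaked (and unconfirmed) 12-CU / 768-ALU figure from the VGLeaks spec; treat it as arithmetic, not insider information:

Code:
# Peak throughput scales linearly with clock.
# Assumes the rumored Durango GPU: 12 CUs x 64 ALUs = 768 ALUs,
# each capable of one fused multiply-add (2 flops) per cycle.
ALUS = 12 * 64          # 768 ALUs, per the leak (unconfirmed)
FLOPS_PER_ALU = 2       # one FMA counts as 2 flops

def peak_gflops(clock_mhz):
    return ALUS * FLOPS_PER_ALU * clock_mhz / 1000.0

print(peak_gflops(800))  # 1228.8 GFLOPS at the rumored 800 MHz
print(peak_gflops(600))  # 921.6 GFLOPS after a hypothetical 600 MHz drop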
I ask this knowing you are under NDA because the console hasn't been released to the public, among other reasons, so I will try to be very tactful and careful when asking this. My question is:

Do you know if the hardware was reliable as it is, bkilian?

I mean, weren't they performing tests all the time?

It's not as if the specs came as a surprise to anyone. They had a LOT of time to test everything out. And the specs weren't astronomically over the top either.

If the rumours are to be believed I'd expect a better job from MS engineers.

It is going to affect games that are in development if it's true.
 
I haven't really been keeping up with X1 developments, just skimming through a few threads here and elsewhere.

I do not recall any downclocks for the Xbox 360's Xenon CPU or its C1/Xenos GPU.


I clearly remember for the original Xbox, officially announced at GDC 2000, the Nvidia GPU (called the XChip at that time) was to be clocked at 300 MHz.

All kinds of insane figures were thrown around, like Bill Gates stating 141 GigaFlops (re: NvFlops). Other figures: a fill rate of 4.8 billion pixels/sec, basically implying there'd be 16 pixel pipelines like the PS2's Graphics Synthesizer; 300M micro-polygons/sec; etc. Remember, at this time the GPU was thought to be Nvidia's NV25. Most of the specs were the same as the GPU that lost the Xbox contract at the 11th hour, GigaPixel's GP4 (anyone remember that?).

The Xbox demos at GDC 2000 that were realtime were running on the newest Nvidia GPU at the time, the yet-to-be-released NV15 (GeForce 2). The XChip was said to be 3 generations beyond it.

When the Xbox GPU was named NV2A, I believe the clock was still 300 MHz, but it soon dropped to 250 MHz, and finally to 233 MHz in the final production version. Around that time, Nvidia was still claiming NV2A pushed 80 GFlops (still NvFlops). In the book Opening The Xbox, page 270 states: "The Xbox had 21.6 gigaflops of computing power." Now that's unconfirmed, yet a much more reasonable figure. Naturally, the vast majority of that was in the NV2A, as the 733 MHz PIII/Celeron CPU could provide only either 1.5 or 3.0 GFlops -- I don't know which, but it hardly matters.
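(As an aside, the 1.5-vs-3.0 ambiguity is just a question of how many single-precision flops per cycle you credit the SSE unit with; the 2 and 4 flops/cycle figures below are my assumption about how those numbers were counted, not something from the book.)

Code:
# Peak flops for the 733 MHz PIII/Celeron, depending on how many
# single-precision flops per cycle the SSE unit is credited with.
CLOCK_MHZ = 733

for flops_per_cycle in (2, 4):  # assumed counting conventions
    gflops = CLOCK_MHZ * flops_per_cycle / 1000.0
    print(f"{flops_per_cycle} flops/cycle -> {gflops:.2f} GFLOPS")
# ~1.47 or ~2.93 GFLOPS, i.e. the "1.5 or 3.0" above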


So my point is, these rumored downclockings of the Xbox One's ESRAM and/or the GPU portion of the APU seem fairly mild to me, compared to what happened with Microsoft and Nvidia's original effort during 2000-2001.
 
Continuing a train of thought from elsewhere concerning clocks and eSRAM:

SRAM on its own is one of the structures most capable of being tweaked to resist defects and variation. I don't think the eSRAM needs to meet the reliability metrics that server processor caches are held to, and it's trucking along at a more sedate clock than the high-end chips use.
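As a toy illustration of the kind of repair scheme meant here (spare columns fused in at test time to cover defective ones; entirely illustrative, not Durango's actual mechanism):

Code:
# Toy model of column redundancy: a defective column in an SRAM
# array is remapped to a spare, so the die still tests as good.
# Real repair is done with fuses at wafer/package test.
COLS, SPARES = 8, 2

class RedundantRow:
    def __init__(self, bad_cols):
        spares = iter(range(COLS, COLS + SPARES))
        self.remap = {c: next(spares) for c in bad_cols}  # "fused" map
        self.cells = [0] * (COLS + SPARES)

    def write(self, col, bit):
        self.cells[self.remap.get(col, col)] = bit

    def read(self, col):
        return self.cells[self.remap.get(col, col)]

row = RedundantRow(bad_cols=[3])   # column 3 is defective
row.write(3, 1)
print(row.read(3))  # 1, transparently served from a spare column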

The eSRAM is a rather active bit of memory, with traffic of the same magnitude as both Jaguar L2s, but even if it were drawing more power than expected, cutting the clocks of the whole GPU complex by a quarter is drastic.

Something like that doesn't sound like a power consumption fix, unless the eSRAM somehow really, really missed its power targets.
Another possibility is that the eSRAM's variability measures did not meet AMD's or Microsoft's expectations for the full 32 MB, whereas their characterization of the more compact L2 arrays may have been more on-target.
In theory, dropping clocks could provide more slack, so that a given chip's eSRAM can be considered functional despite worse-than-expected variation across the arrays (edit: skew over the many wires in a wide read port?).
The knock-on effects from there would have to come from a GPU side that isn't designed to operate at such a clock disparity.
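To put a number on that quarter cut: the leaked eSRAM figure was 102.4 GB/s at 800 MHz, which implies 128 bytes moved per clock, and bandwidth then scales linearly with whatever clock the array actually runs at. A minimal sketch (the bytes-per-cycle value is inferred from the leak, not an official figure):

Code:
# eSRAM bandwidth scales with the clock it shares with the GPU complex.
# 102.4 GB/s at 800 MHz is the leaked figure -> 128 bytes per cycle.
BYTES_PER_CYCLE = 102.4e9 / 800e6   # = 128.0 (inferred)

def esram_gb_per_s(clock_mhz):
    return BYTES_PER_CYCLE * clock_mhz * 1e6 / 1e9

print(esram_gb_per_s(800))  # 102.4 GB/s, the leaked spec
print(esram_gb_per_s(600))  # 76.8 GB/s if the rumored quarter cut happened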

It seems pretty late in the game to get caught by something like that, though, especially since they let some hard numbers slip.
Yes, that's what puzzles me the most. It is pretty late, really late, to make any significant changes -aside from downclocking the whole APU (I think the CPU would have to be downclocked too, if that were the case).

If it's true it would be a case of huge ineptitude on MS's part. :rolleyes:

Developers would have to redesign their games to meet the new standards. :/

I can only assume this happened because they were too focused on the TV side of things.

I had pretty much accepted the specs since the very beginning -which were actually nothing to write home about, but the architecture is interesting at least-, and many people here trusted an engineer who actually worked on the hardware; our trust in him was well placed.

And now... this? Sigh, Microsoft... this sucks so much it's unbelievable.
 
I had pretty much accepted the specs since the very beginning -which were actually nothing to write home about, but the architecture is interesting at least-, and many people here trusted an engineer who actually worked on the hardware; our trust in him was well placed.

What exactly was interesting?
That in every conceivable way it's falling behind the competition?
 
What exactly was interesting?
That in every conceivable way it's falling behind the competition?
What's interesting is that consoles don't suck; they are just different. And the architectural design of the Xbox One is simply different from the innards of a PC.

This has been happening since the PlayStation 1 days. Consoles are meant to be durable and unique, and they don't tend to become obsolete the way PCs do; PCs are much more affected by the natural "wear & tear" of technology because it is continuously evolving.

What do you mean by "in every conceivable way"? Specs-wise, yes: save maybe latency, deferred engines and virtual texturing, it is behind overall, really far from the PS4's specs, but that was pretty much known before.
 
*AHEM* This is not a versus thread so keep all comparisons out of it.
 
I ask this knowing you are under NDA because the console hasn't been released to the public, among other reasons, so I will try to be very tactful and careful when asking this. My question is:

Do you know if the hardware was reliable as it is, bkilian?

I mean, weren't they performing tests all the time?

It's not as if the specs came as a surprise to anyone. They had a LOT of time to test everything out. And the specs weren't astronomically over the top either.

If the rumours are to be believed I'd expect a better job from MS engineers.

It is going to affect games that are in development if it's true.
I can't say. Not because of NDA, but because I left before the first beta kits became available. Can't really test out a cooling solution without near-final silicon and near-final enclosure.
 
I can't say. Not because of NDA, but because I left before the first beta kits became available. Can't really test out a cooling solution without near-final silicon and near-final enclosure.
Many thanks for the reply, as usual. I expected better from them anyway. And now they come up with this.

I hope they can solve it; there is no way I am going to buy a downclocked console. Especially if they are using the same silicon, it makes no sense to me.

I hope Microsoft don't end up destroying the Xbox One as it is. It was a bit weak, sure, but it seemed a capable and neatly designed console.

Paraphrasing Laa-Yosh here, this is the main issue for me if the specs change for the worse:

It's unrealistic to gain more than a few million units of a headstart in the first year. And even the PS3 has managed to catch up to the X360.

Dropping the clock speeds on the other hand would be a setback for nearly a decade.
 
I hope Microsoft don't end up destroying the Xbox One as it is. It was a bit weak, sure, but it seemed a capable and neatly designed console.

My feelings exactly, on all points.

<900 GFLOPS is just a bridge too far.
 
Those who didn't understand percentages have been right all along... they said the Xbox One was 50% less powerful, and now it actually is. ;)
 
Those who didn't understand percentages have been right all along... they said the Xbox One was 50% less powerful, and now it actually is. ;)

I don't remember anyone saying it was 50% less powerful; people said the PS4 was 50% more powerful, which was true, & now it looks like it's going to be about 100% more powerful than the Xbox One.
 
I don't remember anyone saying it was 50% less powerful; people said the PS4 was 50% more powerful, which was true, & now it looks like it's going to be about 100% more powerful than the Xbox One.
Sorry, it was a joke. A few months ago some people started equating 50% more for one with 50% less for the other, and the information got lost in general conversations. The B3D residents were quick to correct it, because the B3D crowd is pretty good at math. The rest of the internet, sadly, wasn't as lucky.
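To spell the arithmetic out with hypothetical numbers (pure ratios, no console specs implied):

Code:
# "50% more" one way is not "50% less" the other way.
a, b = 1.5, 1.0   # hypothetical relative throughputs: a is 50% more than b

more = (a - b) / b * 100   # going from b up to a
less = (a - b) / a * 100   # going from a down to b

print(f"b to a: +{more:.1f}%")  # +50.0%
print(f"a to b: -{less:.1f}%")  # -33.3%, not -50%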

I've... seen things... you people wouldn't believe.
 
I have to say B3D has behaved much better than GAF. GAF is kinda like Ghostbusters 2, with all the pink ooze on everything amplifying the hate for MS. It's really bad.

We will know what's going on soon, and whether there is anything true about this.
 
I have to say B3D has behaved much better than GAF. GAF is kinda like Ghostbusters 2, with all the pink ooze on everything amplifying the hate for MS. It's really bad.

We will know what's going on soon, and whether there is anything true about this.
It does smell like FUD, or misunderstanding.
 
Still Going On About The 32MB...

OK, here are some facts from IBM, straight from IBM's website:

https://www-950.ibm.com/events/wwe/grp/grp017.nsf/vLookupPDFs/PatO'RourkeDeep%20Dive/$file/PatO'RourkeDeep%20Dive.pdf

Please see slides 15, 16 & 17.

From IBM, on IBM's website: it is 10MB per core. 10MB x 8 = 80MB, without the sky-is-falling horrible yields, TDP, downclocks, etc. And no, it is not 800 MHz; it is much higher.

Not saying it is easy, but 80MB is certainly possible. 32MB is not crazy at all. Unusual but not crazy.



Note: I am not saying it is a PPC core. I am talking about the 32MB and whether it is possible or not.



So I hope we can all please stop repeating the "32MB is impossible! Too Hot! Too Large! Horrible Yields!!! Must Down Clock!!! The End Has Come!!!!! MS in Disarray!" line.



The 360 had a 500 MHz, 10 MiB embedded-DRAM daughter die (256 GB/s internal) as a framebuffer, on 90 nm (NEC). That was ***2005***.

32MB in 2013 on 32/28nm at 800 MHz is not a big deal.

90nm to 28nm gives about 9x the density (super rough estimate), while 32MB is only about 3x the capacity; and going from 500 MHz to 800 MHz is NOTHING when you're also going from 90nm to 28nm or 32nm.
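The rough scaling behind that claim, treating area per bit as proportional to the square of the feature size (and glossing over the fact that the 360's array was eDRAM while Durango's is SRAM):

Code:
# Very rough node scaling: area per bit ~ (feature size)^2, so
# density improves by the square of the node ratio. This ignores
# cell type (eDRAM on the 360's daughter die vs SRAM on Durango).
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(density_gain(90, 28))  # ~10.3x density from 90 nm to 28 nm
print(32 / 10)               # capacity only grows ~3.2x (10 MiB -> 32 MB)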


Sorry, but the nonsense rumors and fake insiders are getting to be far too much. Are the insiders working as janitors at MS?
 
OK, here are some facts from IBM, straight from IBM's website:

https://www-950.ibm.com/events/wwe/grp/grp017.nsf/vLookupPDFs/PatO'RourkeDeep%20Dive/$file/PatO'RourkeDeep%20Dive.pdf

Please see slides 15, 16 & 17.

From IBM, on IBM's website: it is 10MB per core. 10MB x 8 = 80MB, without the sky-is-falling horrible yields, TDP, downclocks, etc. And no, it is not 800 MHz; it is much higher.

Not saying it is easy, but 80MB is certainly possible. 32MB is not crazy at all. Unusual but not crazy.

But IBM is using eDRAM. Isn't that a whole different story from using eSRAM? I think it is, because that IBM POWER7+ chip with 80MB of eDRAM has only slightly more transistors* than the XBOne's eSRAM block alone.

* IBM POWER7+: 2.1B transistors. The rumoured XBOne eSRAM takes 1.6B to 2B transistors by itself.

EDIT: What I mean is you can't compare apples to oranges. ;)
 
Yes. IBM switched to embedded DRAM instead of SRAM for their large caches. Likewise, the Xbox 360's daughter die was embedded DRAM. The Xbox One is using SRAM, which takes 6-8 times as many transistors as embedded DRAM, and there has never been a commercial product with such a large contiguous block of SRAM.
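For a ballpark of that transistor gap (cell-only counting: tags, redundancy and peripheral logic are ignored, and the 6T/8T figures are standard cell assumptions, not a confirmed detail of the design):

Code:
# Transistor cost of 32 MB of storage, by cell type.
# 6T is the classic SRAM cell; 8T variants are also common.
# eDRAM is ~1 transistor + 1 capacitor per bit.
BITS = 32 * 2**20 * 8   # 32 MB in bits

for name, t_per_bit in [("eDRAM (1T1C)", 1), ("SRAM (6T)", 6), ("SRAM (8T)", 8)]:
    print(f"{name}: {BITS * t_per_bit / 1e9:.2f}B transistors")
# ~0.27B for eDRAM vs ~1.61B-2.15B for SRAM,
# which matches the 1.6B-2B range quoted above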
 