Xbox One (Durango) Technical hardware investigation

HD 7770 (10CU)=1.5B transistors...

CPU=??? (I remember somebody estimating this at ~1B, no idea if accurate)

That would put you at 4.5B and missing 2 CUs, close enough that you probably can't say for sure there are any mystery transistors (though there still could be).

Another thing: I'm not sure how accurate the 1.6B estimate for the ESRAM is. Or the one for the CPU.

And the GPU is probably a lot different from a 7770 too. A 256-bit main DDR3 bus vs. a 128-bit GDDR5 bus? An ESRAM bus vs. no ESRAM bus? 12 CUs vs. 10. Redundancy vs. no redundancy (since there is a cut-down Cape Verde, the 7750)?

Bonaire=14CU=2.1B transistors...
 
Each compute unit might be around the 125M-transistor mark (ballpark). I assume the VCE/UVD transistor counts are part of the GPU figures AMD has been giving out.
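
Quick back-of-envelope from those two data points (a rough sketch only; both numbers are AMD's whole-chip counts, so everything else that scaled between the two chips is baked in):

Code:
cape_verde_cus, cape_verde_trans = 10, 1.5e9   # HD 7770 (Cape Verde)
bonaire_cus, bonaire_trans = 14, 2.1e9         # HD 7790 (Bonaire)

per_cu = (bonaire_trans - cape_verde_trans) / (bonaire_cus - cape_verde_cus)
print(f"~{per_cu / 1e6:.0f}M transistors per CU")
# ~150M marginal per CU, with everything else that scaled between the
# two chips included, so ~125M for the CU logic itself is plausible.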
 
Another thing is for example, I'm not sure how accurate the 1.6B estimate for ESRAM is.

I doubt the 1.6B for the ESRAM. People kept saying there was no 1T for 28 nm TSMC, yet here is a TSMC publication from 2011 about 28 nm eDRAM:

TSMC in 2011 said:
This paper presents industry's smallest 0.035 um2 high performance embedded DRAM cell with cylinder-type Metal-Insulator-Metal (MIM) capacitor and integrated into 28 nm High-K Metal Gate (HKMG) logic technology. This eDRAM memory features an HKMG CMOS compatible (low-thermal low-charging process) high-K MIM capacitor with extreme low leakage (<0.1 fA/cell). Access transistor with HKMG shows excellent driving capability (>50 uA/cell) with <1 fA/cell leakage in 28 nm cell and <3 fA/cell in 20 nm cell (0.021 um2). We demonstrate first functional silicon success of 28 nm eDRAM macro. 600/550 MHz operating frequency is achieved at typical/worst cases.
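
To put that 0.035 um2 cell in perspective, here's the raw area math for a hypothetical 32 MB array (32 MB being the rumored ESRAM size; the 6T cell figure is the commonly cited TSMC 28 nm number, not from this paper):

Code:
CELL_1T_UM2 = 0.035              # the 28 nm eDRAM bit cell quoted above
CELL_6T_UM2 = 0.127              # commonly cited TSMC 28 nm 6T SRAM cell
bits = 32 * 1024**2 * 8          # a hypothetical 32 MiB array

print(f"1T: ~{bits * CELL_1T_UM2 / 1e6:.1f} mm^2")   # ~9.4 mm^2
print(f"6T: ~{bits * CELL_6T_UM2 / 1e6:.1f} mm^2")   # ~34.1 mm^2
# Raw cell area only; real macros add sense amps, refresh logic and
# redundancy. Still, it shows why 1T is so much denser than 6T.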



Consider that only 105 million transistors bought 10 MB, 8 ROPs and 192 parallel pixel processors in the eDRAM of the 360:

Wiki Xbox 360 Hardware said:
500 MHz, 10 MiB daughter embedded DRAM (at 256Gbit/s) framebuffer on 90 nm, 80 nm (since 2008 [3]) or 65nm (since 2010 [4]).
NEC designed eDRAM die includes additional logic (192 parallel pixel processors) for color, alpha compositing, Z/stencil buffering, and anti-aliasing called “Intelligent Memory”, giving developers 4-sample anti-aliasing at very little performance cost.
105 million transistors [5]
8 render output units
Maximum pixel fillrate: 16 gigasamples per second fillrate using 4X multisample anti aliasing (MSAA), or 32 gigasamples using Z-only operation; 4 gigapixels per second without MSAA (8 ROPs × 500 MHz)
Maximum Z sample rate: 8 gigasamples per second (2 Z samples × 8 ROPs × 500 MHz), 32 gigasamples per second using 4X anti aliasing (2 Z samples × 8 ROPs × 4X AA × 500 MHz)[1]
Maximum anti-aliasing sample rate: 16 gigasamples per second (4 AA samples × 8 ROPs × 500 MHz)[1]
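
Those quoted figures check out, and if you assume the cells are 1T (which the 105M total strongly suggests), the storage/logic split is telling:

Code:
ROPS, CLK = 8, 500e6                 # 8 ROPs at 500 MHz
print(ROPS * CLK / 1e9)              # 4.0 Gpixels/s without MSAA
print(ROPS * CLK * 4 / 1e9)          # 16.0 Gsamples/s with 4x MSAA
print(2 * ROPS * CLK * 4 / 1e9)      # 32.0 Gsamples/s, Z-only with 4x AA

cells = 10 * 1024**2 * 8             # 10 MiB of storage as 1T cells
print(f"~{cells / 1e6:.1f}M")        # ~83.9M transistors just for bits
# Out of 105M total that leaves only ~21M for the 8 ROPs and 192 pixel
# processors -- a hint of how cheap 1T storage is compared to logic.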

People saying that it is this or that don't know. The apparent MS and AMD people around here who answer questions have said they don't know whether it is 1T or 6T or something else. I see no sign of anyone who does know being willing to answer. If they did know, they most likely would not be posting anything.

Also, there is evidence that MS spoke to several people over the years about the CPU, and yes, also the GPU. And I think they did actually fab another CPU for the canceled "Apple TV"-like box. With the amount of money MS invested, I would expect that they fabricated more than one of the APU designs over the last 2-3 years. Likely 2-3. I am not talking steppings or revisions of one design; I am talking separate designs.

I think the rumors over the years point to MS talking to AMD, Nvidia and Intel. I would be really shocked if, after the 360, they did not talk to IBM too. The 360 chips (IP and manufacturing) worked out way better than the first Xbox's (Nvidia), yet they still spoke to Nvidia this round. They likely didn't talk much during the 360 round.


Also:

http://www.memorystrategies.com/report/embeddeddram.html
 
I wish that MSFT would actually give more details about the hardware. Statements like this one:
Penello proceeds to say, "Our guys'll say, we touched every single component in the box and everything there is tweaked for optimum performance."
make me curious about what MSFT tweaked. I do not believe in dust fairies and the like; the CPU and GPU look like standard GCN and Jaguar, so I wonder about the physical implementation?
 
I know that's what they used to be called, but I think different names were mentioned in the XB1 architectural roundtable?

....

Can we figure out where the 5 billion transistors are 'hiding' in XB1?

6T SRAM = ~1.6 billion
SHAPE block = ~400 million
CPU = ?
GPU = ?
Move engines, display planes, video encoder/decoders = ?

There's also the dedicated Kinect silicon which, even if it's in the camera/mic module, I'm sure is part of the 5 billion number.
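
To make the accounting concrete, here's that list as a toy sum. Every entry is an estimate floated in this thread, nothing confirmed:

Code:
esram_6t = 32 * 1024**2 * 8 * 6          # 32 MiB as 6T SRAM: ~1.61B
budget = {
    "6T ESRAM": esram_6t,
    "SHAPE block": 0.4e9,                # the ~400M estimate above
    "CPU (8 Jaguar cores)": 1.0e9,       # the ~1B guess from earlier
    "GPU (Bonaire-class)": 2.1e9,        # 14 CUs, 12 enabled = 768 shaders
}
total = sum(budget.values())
print(f"~{total / 1e9:.2f}B of the 5B figure")   # ~5.11B
# The round numbers already exhaust the budget, leaving essentially
# nothing for move engines, display planes, codecs or Kinect logic --
# so either some guesses run high, or 5B is itself a rounded figure.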
 
I wish that MSFT would actually give more details about the hardware. Statements like that one make me curious about what MSFT tweaked. I do not believe in dust fairies and the like; the CPU and GPU look like standard GCN and Jaguar, so I wonder about the physical implementation?

3dci (?) already said both Sony and MSFT made changes to their GPUs. I can't remember if he said CPU as well.
 

You'd have to make changes to the GPU for the eSRAM, but aside from that, the only thing I can think of that's different from the desktop cards is the low-latency graphics pipe.
 

If I could remember the correct user name, I believe he works at AMD and worked on the customizations. He obviously can't give details, but he said both companies made changes to the GPUs, some more significant than others. Some changes he said would be carried into other products and other changes would not.

Edit: Found it - 3dcgi

http://forum.beyond3d.com/showpost.php?p=1761117&postcount=4238
http://forum.beyond3d.com/showpost.php?p=1761314&postcount=4287
http://forum.beyond3d.com/showpost.php?p=1761127&postcount=4243

So, I'm not sure "bog standard" is the right way to describe these GPUs, even though the major parts look to be the same.
 
Do you know that or are you making it up? Have you seen a dev kit? Did anyone tell you what is in the dev kit?

I guess you didn't see photos of things being demoed at E3 on hardware like a GeForce Titan?

The only thing I saw (and I don't read everything in gaming forums, although, almost :p ) was Lococycle, a non-graphically-demanding indie-type game, shown to be running on a PC with what looked like a GTX 780.

It was always stupid for some to try to play the "haha look, XB1 games were all running on PCs!" nonsense with that.

If nothing else, I'm sure you don't need a PC to run Lococycle. For a non-demanding game like that, it practically doesn't even matter whether it's on a generic PC or not.

Tahiti is the first product with working silicon that has the GCN architecture, a large departure from previous architectures and the fundamental IP basis for our current/future products. When talking about "dev kits", consider what versions went out and when. Tahiti working silicon was available in 2011.

Aha, I'd argued a lot with guys like thuway on NeoGAF who said a 7970 was in early XB1 dev kits. I just didn't believe such a disparity between dev kit and final GPU would be allowed.

Mystery solved, I see. I was wrong, but not for the reasons I thought.

I think most everyone finds it strange. But a 7790 in a massive 5-billion-transistor chip comes off as a strange choice too. Who knows.

Now, I don't want to sound like I believe the pastebin rumor, but just for fun: if MS already knew they were working on a new/more powerful design (more powerful than the 7790, due to dev complaints), that might explain why they have a more powerful card in the dev kits.

Now don't misinterpret this and say that I claimed it is likely or that I support it. I'm just arguing one possible reason why.

Another argument would be that they wanted the dev kits to be exciting and almost promotional in nature. (A Sandy Bridge-E 8-core, for example.)

Or maybe it is a 1.6 GHz Q19D Sandy Bridge-E and they wanted a four-channel DDR3 controller in the dev kit instead of a four- or six-core AMD with a two-channel DDR3 controller.

http://www.ebay.com/itm/INTEL-XEON-...SOR-/120991964684?pt=CPUs&hash=item1c2baede0c

If it was a Q19D "Intel Confidential ES" with 20 MB cache, maybe it is related to the memory controller and ESRAM ideas. Maybe MS even talked to Intel about a CPU at one point. Maybe they talked to Intel about Knights Corner at one point too. (Long since dead.)

Or maybe it was just the first 8-core they could get their hands on.

We can't guess at this point. Just have to wait and see if any wild rumors end up being true.

No idea about Ballmer, but if I saw a 5-billion-transistor budget being spent and it ended up looking worse than a 3-billion-transistor budget, I might knock a few heads. Especially if I had signed off on a 50 kW water-cooled monster simulation/emulation system to design it. (And hired a bunch of big names in Si arch.)

http://www.zdnet.com/blog/microsoft...rch-and-why-did-they-hire-a-sun-chip-guy/2477

We don't know the transistor count or die size of other designs...

I guess you stick to the angle that it's 1T SRAM and somehow the XB1 has significantly more CUs than reported. Well, there's almost no way you're right.

Again, all you have to do is assume ~1.6B transistors for the ESRAM, and all mysteries are solved. Suddenly 768 shaders in a 5B SoC are accounted for.


*businessy*snippity*copity*


I would wager that when the ESRAM is used effectively, the performance of the Xbox One's graphics subsystem will far and away outstrip any of those discrete parts you mention.

Now that's a pretty interesting comment, considering the 7790 is 1.8 teraflops.
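
For reference, the FLOPS math behind that comparison (the XB1 numbers are from the leaks, nothing official at this point):

Code:
def gflops(shaders, mhz):
    return shaders * 2 * mhz / 1000   # 2 ops per clock per ALU (FMA)

print(gflops(896, 1000))   # HD 7790 (Bonaire): ~1792 GFLOPS
print(gflops(768, 800))    # XB1 per the leaks (12 CUs x 64): ~1228.8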
 

It is interesting, but is it due mostly to the ESRAM, or because it's a closed box? The comment implies that it's primarily a result of the ESRAM; otherwise, why mention it...

Dave, can you expand any more on this?
 
We don't know the transistor count or die size of other designs...

I guess you stick to the angle that it's 1T SRAM and somehow the XB1 has significantly more CUs than reported. Well, there's almost no way you're right.

My argument is a little different: I say maybe 70% it is X, 20% it is Y and 10% it is Z. We don't know. It might be 6T, it might be 1T. It might be > 6T and wrapped into some sort of new transactional memory.

So I would not be surprised if it was X, Y or Z. But I call bogus on blanket statements that anyone knows it is X or Y. I strongly suspect from the 360 heritage that they at *least* looked at 1T.

But I think it is clear that no one here knows and that those who do know are not talking.



Again, all you have to do is assume ~1.6B transistors for the ESRAM, and all mysteries are solved. Suddenly 768 shaders in a 5B SoC are accounted for.

I strongly disagree. There is way too much unknown. There are way too many different ways to use 5B. That is why I call bogus on claims to know the makeup of the chip.



Finally:

Dave Baumann said:
I would wager that when the ESRAM is used effectively, the performance of the Xbox One's graphics subsystem will far and away outstrip any of those discrete parts you mention.

THAT is what I am expecting. And once it is revealed and the inner workings explained, I think most everyone is going to stop saying that they knew this and that about it.

I expect new stuff, new configurations and new ideas that none of us are debating, plus some of our debates turning out to be false and others turning out to be true or slightly correct.

I want to see the results of the compression/move engines, ESRAM and PRT used together in concert.
 
Didn't Xbox guy Albert Penello recently say that final specs would be published sometime before launch?

I recall a post of his being quoted somewhere here.
 
His latest comments actually suggest that MS has no intention to reveal the specs of the Xbox One at all.
 
Is Microsoft going to pull a Nintendo and never give us even cursory info like clock speeds?

If they are smart they will not, unless the specs are better than what leaked. Which is very unlikely.

Once they release the specs, people lose all hope. The Wii U is a great example: even today people still believe it's some beast just waiting for the right developer to take advantage of the hardware.

Just look at this thread: even though we have a huge amount of leaked data, people throw it out in the hope that it's changed. :p
 
If they are smart they will not, unless the specs are better than what leaked. Which is very unlikely.


You could have a winning hand and still take it to your grave. A famous sheriff proved it. :smile2:




Maybe the highlight for Microsoft has been the games and not the tech, while still being willing to bend on performance if necessary.

And I thought it was already said that the specs could be adjusted all the way up to mass production of the console. So maybe, if Microsoft wishes, next year there will be official info directly from them.
 
If I had to guess at why three OSes (and I have no insight):
I'd suggest it was a technical solution to a political problem. They were probably required to be able to run Windows RT apps, and they likely wanted to keep the GameOS as close to the existing bare-bones OS as possible, with minimum isolation from the hardware.
That would dictate that the hypervisor sit under both OSes.

I doubt it would be Windows RT, as that implies ARM. I'd guess it's Windows x86 or AMD64, a very custom one?!
 