Xbox One (Durango) Technical hardware investigation

Do you know that or are you making it up? Have you seen a dev kit? Did anyone tell you what is in the dev kit?

I guess you didn't see the photos of what was being demoed at E3 on hardware with things like a GeForce Titan?

I understand what the reports are about the dev kits. I just don't understand why.

A 7970 has 32 CUs, 32 ROPs, 3.8 Tflops... does that sound remotely like what's in the XB One box? The dev kits are seriously overspecced unless there's something we don't know about the final silicon in the XB One.
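
For reference, GCN peak FP32 throughput is just CUs x 64 lanes x 2 ops per clock (FMA) x clock. A quick sketch; the XB1 line uses the commonly rumoured 12 CU / 800 MHz configuration, which is an assumption, not a confirmed spec:

Code:
def gcn_peak_gflops(cus, clock_ghz):
    # GCN peak FP32: CUs * 64 lanes * 2 ops per clock (FMA) * clock in GHz -> GFLOPS
    return cus * 64 * 2 * clock_ghz

print(gcn_peak_gflops(32, 0.925))  # Tahiti / HD 7970: ~3789 GFLOPS, i.e. the ~3.8 Tflops above
print(gcn_peak_gflops(12, 0.800))  # rumoured XB1 GPU (assumed 12 CU @ 800 MHz): ~1229 GFLOPS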
 
It'll mean the difference between 30 and 33 frames a second. Or rather between 30 and 27 fps, with a smidgeon of screen tearing versus none. Or 1920x1080 versus 1728x1080. If placed side by side, you might notice that the higher-clocked XB1 looks a little better than the normal-clocked one, but in terms of consumer experience it's not enough to make a real, notable difference. Heck, there's even debate over how much difference 50% more CUs will actually make on screen and whether that'll be enough to sway consumers. It's all a matter of value. If that overclock comes at negligible extra cost, we want it. But if it results in, say, an increased failure rate, would you really prefer the 10% extra framerate/resolution over an increased chance of your console dying? Or if it's the difference between silent operation and noticeable noise? Different folks will value it differently, but from a business POV I'm not seeing the value in spending big bucks on an upclock. I'm not really seeing the value in spending small bucks on an upclock. ;)
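
Spelling out the arithmetic behind those numbers (a rough sketch that assumes frame rate scales linearly with GPU clock, which is optimistic):

Code:
base_fps = 30.0
upclock = 1.10                      # a ~10% clock bump (hypothetical figure)
print(base_fps * upclock)           # 33.0 -> the "30 vs 33 fps" case above

# The resolution trade: 1728x1080 has exactly 90% of the pixels of 1920x1080
print(1728 * 1080 / (1920 * 1080))  # 0.9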

If the cost of such an upclock is negligible, I'd go for it. They already have a hefty case and fan in the system; let it spin up and make a little more noise under load. It could still be silent in low-power modes, if that's what they were going for.

It might bring the two systems (more so) in line, if the lower-latency memory and eSRAM also have some positive effect, and if the PS4 doesn't have an equivalent of the XB One's SHAPE audio block.
 
Tahiti is the first product with working silicon that has the GCN architecture, a large departure from previous architectures and the fundamental IP basis for our current/future basis. When talking about "dev kits" consider what versions and when they were sent out. Tahiti working silicon was available in 2011.
 
Well, then I suppose they would be firmware-limited GPUs used to emulate the final silicon.
 
do you find it strange also?

I think most everyone finds it strange. But a 7790 in a massive 5 billion transistor chip comes off as a strange choice too. Who knows.

Now I don't want to sound like I believe the pastebin rumor, but just for fun: If MS already knew they were working on a new/more powerful design (more powerful than 7790 due to dev complaints) that might explain why they have a more powerful card in the dev kits.

Now don't misinterpret that as me saying it's likely or that I support it. I'm just offering one possible reason why.

Another argument would be that they wanted the dev kits to be exciting and almost promotional in nature. (Sandy Bridge-E 8-core, for example.)

Or maybe it is a 1.6GHz Q19D Sandy Bridge-E because they wanted a 4-channel DDR3 controller in the dev kit, rather than a 4- or 6-core AMD part with only a 2-channel DDR3 controller (some rough bandwidth numbers below).

http://www.ebay.com/itm/INTEL-XEON-...SOR-/120991964684?pt=CPUs&hash=item1c2baede0c

If it was a Q19D "Intel Confidential ES" with 20MB cache maybe it is related to the memory controller & ESRAM ideas. Maybe MS even talked to Intel about a CPU at one point. Maybe they talked to Intel about Knights Corner at one point too. (Long since dead.)
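
To put the 4-channel vs 2-channel DDR3 point above in rough numbers (a back-of-the-envelope sketch; DDR3-1600 is just an assumed speed grade for the dev kit):

Code:
def ddr3_bandwidth_gbs(channels, mt_per_s):
    # peak bandwidth = channels * 8 bytes per 64-bit channel * transfer rate (MT/s)
    return channels * 8 * mt_per_s / 1000.0  # GB/s

print(ddr3_bandwidth_gbs(4, 1600))  # 51.2 GB/s - quad-channel, Sandy Bridge-E style
print(ddr3_bandwidth_gbs(2, 1600))  # 25.6 GB/s - a typical dual-channel desktop part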

Maybe just the first 8 core they could get their hands on.

We can't guess at this point. Just have to wait and see if any wild rumors end up being true.

No idea about Ballmer, but if I saw a 5-billion-transistor budget being spent and it ended up looking worse than a 3-billion-transistor budget, I might knock a few heads. Especially if I had signed off on a 50kW water-cooled monster simulation/emulation system to design it. (And hired a bunch of big names in Si Arch.)

http://www.zdnet.com/blog/microsoft...rch-and-why-did-they-hire-a-sun-chip-guy/2477
 
I wish I knew more about HW and SW development to add to the conversation. This is a very mysterious launch.
 
Well, either way, MSFT did not expect in the slightest that Sony would upgrade to 8GB, or worse, they thought it was possible but unlikely. Then it happened, and some high-up executives thought they were fine with it. Some mess in between, a couple of months lost. Now that a few heads have rolled, does MSFT still feel the same? I don't know.
 
Well, either way, MSFT did not expect in the slightest that Sony would upgrade to 8GB.
Or worse, they could have thought it was possible but unlikely. Then it happened, and some high-up executives thought they were fine with it. Some mess in between. Now that a few heads have rolled, does MSFT feel the same? I don't know.

Realistically though... 4 GB versus 8 GB... with the budget that the hypervisor and system OS take up, it's kind of a shrug... To hinge your entire HW design on whether or not someone else can acquire 4GB is rather foolish...
 
I doubt their thinking had much to do with planning around the competition. More likely they felt their console needed 8 GB, and GDDR5 would be too expensive or unavailable at that size. APU design would have started years ago, and they'd have been projecting the availability of RAM at launch. So they went with safe DDR3 and added the embedded RAM to improve bandwidth.
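
A rough sketch of that bandwidth trade-off, using the commonly reported (rumour-grade) figures for both machines:

Code:
def peak_bandwidth_gbs(bus_bits, gtransfers_per_s):
    # peak bandwidth = bus width in bytes * transfer rate (GT/s)
    return bus_bits / 8 * gtransfers_per_s

print(peak_bandwidth_gbs(256, 2.133))  # ~68.3 GB/s - XB1 main DDR3-2133 pool (reported)
print(peak_bandwidth_gbs(1024, 0.8))   # 102.4 GB/s - eSRAM, if it really moves 128 bytes/cycle at 800 MHz
print(peak_bandwidth_gbs(256, 5.5))    # 176.0 GB/s - PS4 GDDR5-5500 (reported)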
 
Realistically though... 4 GB versus 8 GB... with the budget that the hypervisor and system OS take up, it's kind of a shrug... To hinge your entire HW design on whether or not someone else can acquire 4GB is rather foolish...
Please, that post of mine was a grammar disaster; could you copy/paste the corrected one into your post? (/edit: I guess it is still not perfect, but definitely better... :oops: )
 
As far as I know early dev kits are grossly overpowered to handle the early incomplete and inefficient dev tools (debuggers, O/S, emulation, etc) while still allowing a reasonable approximation of the final hardware. Also it makes it hard for anyone to divine what your final specs will be if your dev machine is overpowered relative to the final specs. I presume the monitoring tools will get better and the dev kits far more modestly powered in line with the final silicon.
 
Or 1920x1080 versus 1728x1080.

1920x972 (or why not go a little more widescreen @ 960? ;))

That said, I wonder about optimum resolution dimensions to fit certain caches. Sebbbi's already done some preliminary tests with particles (128x128, FP16x4 pixels).

1920 is easily divisible by 128.

1080, not so much. 1024 would be the nearest whole-number divisible by 128 (or 64).
768 is also whole-number divisible by 128 (ultra-widescreen cut-scenes?)
816 would give the typical cinematic ratio.

hm...
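
For the curious, a quick way to sanity-check those numbers (a throwaway sketch; the 128/64-pixel tile sizes just follow sebbbi's FP16x4 particle test mentioned above, and it shows 1024 and 768 dividing cleanly by 128, 960 at least fitting 64-pixel tiles, and 816 landing on the ~2.35:1 cinematic ratio):

Code:
# Which frame heights fit whole 128- or 64-pixel tiles, and the aspect ratio
# each gives against a 1920-pixel-wide frame (1920 itself is 15 * 128).
for height in (1080, 1024, 972, 960, 816, 768):
    print(height,
          height % 128 == 0,          # fits 128-pixel tiles?
          height % 64 == 0,           # fits 64-pixel tiles?
          round(1920 / height, 3))    # resulting aspect ratio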
 
Imagine they finally launched with those alpha or beta kits, with that monster Intel CPU and discrete graphics card! In this fantasy realm, would it even be legally possible to tell AMD "we don't want that APU anymore"?
The alpha kits had 6 fans, sounded like a 747 taking off, weighed about a hundred pounds, and were in the largest PC case I'd ever seen. Would you want that in your living room? The Beta kits are close to identical to the final hardware. (In hardware manufacturing terms, they would be EVT or DVT models)

The 12GB rumour is not new though, and people said it seemed possible because the dev kits already have that amount of memory.
Alpha dev kits, the ones with 12GB, are PCs. They have DIMM slots. Do not base any speculation about the final product on the contents of the alpha kits.
Or was E3 just smoke and mirrors, and will games like Forza take a big visual hit in the final build?
Forza was running on close-to-final Xbox One hardware, as were Ryse and a number of other games.
 
I think it would be safest to avoid that type of baseless speculation; it won't do you any good. On this matter it is wiser to simply wait and see ;)

That was my friendly advice of the day.
 
*AHEM* This is not a versus thread. This is a technical thread, so please keep things technical rather than posting visual impressions, which are highly subjective.
 
That said, I wonder about optimum resolution dimensions to fit certain caches. Sebbbi's already done some preliminary tests with particles (128x128, FP16x4 pixels).

1920 is easily divisible by 128.

1080, not so much. 1024 would be the nearest whole-number divisible by 128 (or 64).
768 is also whole-number divisible by 128 (ultra-widescreen cut-scenes?)
816 would give the typical cinematic ratio.
Why not adjust the tile size to 120x120 pixels for 1080p? Or just use non-square tiles (128x120)?
But if the eSRAM's higher bandwidth can really be used, the tiling probably gives relatively small gains (as the XB1 has only 16 ROPs; at 800 MHz with 64-bit FP16x4 blending, the highest usable bandwidth without MSAA would be 204.8 GB/s [without Z traffic], assuming the ROPs can really sustain that blending at full rate). Sebbbi tested on a GPU with twice the ROPs of the XB1.
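
That 204.8 GB/s is straight ROP arithmetic (a sketch using the rumoured 16 ROP / 800 MHz configuration; blending a 64-bit FP16x4 target reads and writes 8 bytes per pixel):

Code:
# Peak colour bandwidth the ROPs can demand when blending a 64-bit (FP16x4) render target:
# each blended pixel is an 8-byte read plus an 8-byte write; no MSAA, no Z traffic.
rops = 16                # rumoured XB1 ROP count
clock_hz = 800e6         # rumoured 800 MHz GPU clock
bytes_per_pixel = 8      # FP16x4
accesses = 2             # blend = read + write
print(rops * clock_hz * bytes_per_pixel * accesses / 1e9)  # 204.8 GB/s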
 