Xbox One (Durango) Technical hardware investigation

They glossed over the data move engines, which are pretty important factors in Durango's high performance.

If that's what's keeping the GPU at the supposed 100% utilization.

This diagram reminds me of the Wii U hardware architecture, wow just wow, it's almost an overclocked Wii U... :LOL:

And where are the Xbox fanboys who laughed at the Wii U hardware? (1.2 GHz CPU? Single-threaded cores? Only 32 MB of eDRAM? Slow DDR3 main RAM?...)

Just incredible... and where are those guys betting on "Microsoft is wealthier, Sony is financially doomed, so Microsoft will come up with a more powerful console" and "there won't be big differences between the Sony and Microsoft next-gen consoles, because they are buying silicon from the same vendor"...

Where are you guys? :LOL:

Even this is still significantly more powerful than the Wii U. You can do a basic FLOPS comparison of the CPU or GPU, plus RAM size and memory throughput, to see it's still quite far ahead of what the Wii U offers.
 
Still not sure I understand the utility of a data move engine. Unless there is some kind of dual porting on the DDR3, having a DMA engine isn't going to save any bandwidth. Were 360 developers having to do memory copies with the CPU or GPU, taking away compute performance in any meaningful way?

Edit: That last NeoGAF quote does suggest it's a matter of reducing processor load, so that makes sense.
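
To make the "reducing processor load" point concrete, here's a minimal sketch. The move-engine API is entirely hypothetical and the engine is just simulated with a background thread: a dedicated copy unit doesn't add any bandwidth, but it lets a transfer between the DDR3 and ESRAM pools run while the CPU and GPU keep doing real work, instead of burning their cycles on a memcpy.

```cpp
// Sketch only: hypothetical move-engine interface, simulated with a thread.
// The copy costs the same bus traffic either way; the win is that the CPU/GPU
// don't have to spend their own time shuffling bytes between pools.

#include <cstddef>
#include <cstring>
#include <future>
#include <vector>

// Simulated "move engine": queue a copy and return a handle to wait on.
std::future<void> dme_copy_async(void* dst, const void* src, std::size_t bytes)
{
    return std::async(std::launch::async, [=] { std::memcpy(dst, src, bytes); });
}

int main()
{
    std::vector<char> dram_source(64 * 1024, 'x');  // stand-in for a DDR3 buffer
    std::vector<char> esram_target(64 * 1024);      // stand-in for an ESRAM tile

    // Kick the transfer, then keep doing useful work while it is in flight.
    auto copy = dme_copy_async(esram_target.data(), dram_source.data(),
                               dram_source.size());

    // ... simulate physics, build command buffers, etc. ...

    copy.wait();  // the render pass that needs the data in ESRAM waits here
    return 0;
}
```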
 
My guess is that manufacturing is probably the reason. Perhaps there may be unpredictability in how well eDRAM will shrink on future 20nm and 16/14nm nodes.
I just saw that MoSys claims their 1T-SRAM cell can be made on a standard process at TSMC and doesn't require the expensive extra steps of eDRAM.

There's also coolSRAM-1T, which supposedly can be made on a bulk process.
http://www.mentor.com/products/ip/memory-ip/coolsram-1t
 
Yep... I'm expecting a considerable amount of the budget to go on controllers, after what happened with the Wii... but I just hope they haven't undershot the performance of this thing... it looks uninspiring, to be honest.
I think the Wii U is proving that a mediocre touch screen isn't a selling point, especially since consumers are used to high-res, high-quality ones in phones and tablets. Will they add an expensive peripheral? It could go either way. Without one, this should have a low BOM. That doesn't mean MS will pass the savings on to the consumer.
 
Still, it seems the data mover would only be of use because of the separate fast ESRAM pool. Presumably such copies would not be needed with HSA and a single GDDR5 pool à la Orbis.

Unless it has compression built in or the like?
 
More from aegies [he has documentation for Durango, it seems, but can't make heads or tails of it]:
http://www.neogaf.com/forum/showpost.php?p=46711414&postcount=1387

derFeef: Can the eSRAM buffer be sideloaded into the main memory? Guess not.

aegies: Let me see if this helps. For Durango:

Rendering into ESRAM: Yes.
Rendering into DRAM: Yes.
Texturing from ESRAM: Yes.
Texturing from DRAM: Yes.
Resolving into ESRAM: Yes.
Resolving into DRAM: Yes.

For the 360, that would be yes, no, no, yes, no, yes.
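
A stubbed illustration of what that yes/no list implies in practice; the function names below are invented placeholders, not any real graphics API. On the 360 the eDRAM is effectively render-only, so anything that is later sampled has to be resolved out to main memory first, whereas on Durango, per the list, each step can reportedly target either pool.

```cpp
// Illustration only: placeholder calls showing the flows the yes/no list describes.

#include <cstdio>

enum class Pool { ESRAM, DRAM };   // eDRAM on 360, ESRAM on Durango

const char* name(Pool p)  { return p == Pool::ESRAM ? "ESRAM/eDRAM" : "DRAM"; }
void render_to(Pool p)    { std::printf("render target in %s\n", name(p)); }
void resolve_to(Pool p)   { std::printf("resolve into %s\n", name(p)); }
void texture_from(Pool p) { std::printf("sample texture from %s\n", name(p)); }

int main()
{
    // Xbox 360-style flow: render into eDRAM, then a mandatory resolve (copy)
    // into DRAM before the result can be sampled.
    render_to(Pool::ESRAM);
    resolve_to(Pool::DRAM);
    texture_from(Pool::DRAM);

    // Durango-style flows (per the list above): the intermediate copy is optional.
    render_to(Pool::DRAM);     // render straight into main memory...
    texture_from(Pool::DRAM);

    render_to(Pool::ESRAM);    // ...or keep a buffer in ESRAM and sample it there.
    texture_from(Pool::ESRAM);
    return 0;
}
```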
 
This diagram reminds me of the Wii U hardware architecture, wow just wow, it's almost an overclocked Wii U... :LOL:

And where are the Xbox fanboys who laughed at the Wii U hardware? (1.2 GHz CPU? Single-threaded cores? Only 32 MB of eDRAM? Slow DDR3 main RAM?...)

Just incredible... and where are those guys betting on "Microsoft is wealthier, Sony is financially doomed, so Microsoft will come up with a more powerful console" and "there won't be big differences between the Sony and Microsoft next-gen consoles, because they are buying silicon from the same vendor"...

Where are you guys? :LOL:

The same 1.6 GHz CPU rumored to be in the PS4?

And hmm, 8 GB of DDR3 on a 256-bit bus vs 2 GB on a 64-bit bus...

Besides everything else.

With people talking up the DMA engines now too, this will run circles around the Wii U and probably compete with Orbis (hence lherre's comment that they are close, like PS3/360).
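
For reference, the rough peak-bandwidth math behind that bus-width comparison, assuming the commonly rumored DDR3-2133 for Durango and the Wii U's DDR3-1600 (peak GB/s = bus width in bytes × transfer rate):

```cpp
// Back-of-the-envelope peak bandwidth for the two DDR3 setups being compared.
// The DDR3-2133 figure for Durango is the rumored spec, not confirmed.

#include <cstdio>

int main()
{
    const double durango = (256.0 / 8.0) * 2133e6 / 1e9; // ~68.3 GB/s
    const double wiiu    = ( 64.0 / 8.0) * 1600e6 / 1e9; //  12.8 GB/s
    std::printf("Durango DDR3: %.1f GB/s, Wii U DDR3: %.1f GB/s\n", durango, wiiu);
    return 0;
}
```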
 
Would it be possible to come up with a more practical implementation for tiling?

Especially as polygon sizes can be expected to shrink further, and even more so if tessellation gets more common... They'd need to process all geometry first and do some sort of binning, sure, but at least splits or re-submitted polygons would be less common and would cause a smaller performance loss...

Edit: after all, a lot of offline CGI renderers work with square buckets of pretty small sizes.

The tiling in CGI is mostly about managing enormous textures efficiently.
The issue with tiling is twofold: the duplicate geometry work, which is complicated by indexed primitives, and the extra copies of the tiles to main memory.

You could probably come up with a way to make the overhead smaller by binning primitives somewhere in the pipeline (which is what TBDR GPUs do).

The reason I wouldn't hold out hope for it is that MS almost certainly looked at how people use the 360 and designed around those usage patterns.
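
For anyone unfamiliar with the binning step being described, here's a rough sketch of screen-space binning (not how the 360 or Durango actually handle tiling): every triangle is dropped into the bin of each tile its bounding box touches, and that per-tile duplication and re-submission is exactly the overhead discussed above.

```cpp
// Rough sketch of screen-space binning for a tiled renderer. A triangle whose
// bounding box spans several tiles is referenced in every one of those bins.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Tri { float x[3], y[3]; };

int main()
{
    const int kScreenW = 1280, kScreenH = 720, kTile = 64;
    const int kTilesX = (kScreenW + kTile - 1) / kTile;
    const int kTilesY = (kScreenH + kTile - 1) / kTile;

    std::vector<Tri> tris = {
        {{10, 40, 20},    {10, 15, 50}},   // small: fits in one tile
        {{100, 700, 300}, {100, 120, 600}} // large: spans many tiles
    };

    // One bin (list of triangle indices) per tile.
    std::vector<std::vector<int>> bins(kTilesX * kTilesY);

    for (int i = 0; i < (int)tris.size(); ++i) {
        const Tri& t = tris[i];
        // Conservative bound: the triangle's screen-space bounding box.
        float minx = std::min({t.x[0], t.x[1], t.x[2]});
        float maxx = std::max({t.x[0], t.x[1], t.x[2]});
        float miny = std::min({t.y[0], t.y[1], t.y[2]});
        float maxy = std::max({t.y[0], t.y[1], t.y[2]});

        int tx0 = std::max(0, (int)(minx / kTile));
        int tx1 = std::min(kTilesX - 1, (int)(maxx / kTile));
        int ty0 = std::max(0, (int)(miny / kTile));
        int ty1 = std::min(kTilesY - 1, (int)(maxy / kTile));

        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * kTilesX + tx].push_back(i);  // duplicated per overlapped tile
    }

    // Each tile's bin can now be processed while that tile sits in fast on-chip memory.
    std::size_t refs = 0;
    for (const auto& b : bins) refs += b.size();
    std::printf("%zu triangles produced %zu tile references\n", tris.size(), refs);
    return 0;
}
```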
 
Still not sure I understand the utility of a data move engine. Unless there is some kind of dual porting on the DDR3, having a DMA engine isn't going to save any bandwidth. Were 360 developers having to do memory copies with the CPU or GPU, taking away compute performance in any meaningful way?

Edit: That last NeoGAF quote does suggest it's a matter of reducing processor load, so that makes sense.

I keep thinking back to lherre's statement that you can't compare the GPUs in the next consoles to PC parts because of their custom hardware (when everybody was whining about the weak FLOPS numbers).

It definitely seems he was on to something now that we are learning more; we've even learned Orbis has special sauce too, since he referred to both consoles.
 
Sony will have a lite version; this is the design for MS's lite version. It's also what will go into OEM-brandable products.

The lite version for both Sony and MS is what comes out this year. The 'Next' versions, the powerful ones, will be based off the lite design but with extra SoCs.

Lite in 2013
Next in 2014


From a strategic standpoint, something seriously weird is going on here. 2013 was always the worst year to launch; either 2012 or 2014 would have been significantly better. Given the specs from VGleaks, that box could probably have launched last November and sold at a profit at $299.

2014 brings a host of improvements that would enable 2-4x the power of the 2013 box for the same price, so launching in '13 is very odd.

If those specs are correct, MS should meet its goal of a $299 box with stereo Kinect 2.0 that is gross-margin positive from day one.

The box still seems seriously underpowered for gaming. I am still on board for a PRO model of the Xbox with significantly better specs. I could definitely see HW BC and dual GPUs for the Pro. If the base model ships at $299, I would gladly pay $450+ for that box.
 
Microsoft is really not doing a good job; the leaks are a flood now.

It's less than 11 months out; there's nothing they can do at this point.
There are so many "leaks", I guess it means we're close to the official announcement ;)
 
Personally, I wouldn't hold out much hope for the special blocks being able to make up for such raw differences. Sure, they'll help bridge some of the gap, but generally there's only so much to be gained by being clever over raw performance potential. But we won't know until we have more details on both systems.
 
From a strategic standpoint, something seriously weird is going on here. 2013 was always the worst year to launch; either 2012 or 2014 would have been significantly better. Given the specs from VGleaks, that box could probably have launched last November and sold at a profit at $299.

2014 brings a host of improvements that would enable 2-4x the power of the 2013 box for the same price, so launching in '13 is very odd.

If those specs are correct, MS should meet its goal of a $299 box with stereo Kinect 2.0 that is gross-margin positive from day one.

The box still seems seriously underpowered for gaming. I am still on board for a PRO model of the Xbox with significantly better specs. I could definitely see HW BC and dual GPUs for the Pro. If the base model ships at $299, I would gladly pay $450+ for that box.

They likely couldn't have launched in November of 2012 due to poor 28nm volume and availability.

You launch when the market is right; otherwise you could hold off indefinitely waiting for more power.
 
Personally, I wouldn't hold out much hope for the special blocks being able to make up for such raw differences. Sure, they'll help bridge some of the gap, but generally there's only so much to be gained by being clever over raw performance potential. But we won't know until we have more details on both systems.

Well, if it can move the needle so the Durango GPU performs "like" 1.5 TF instead of 1.2, I think that's all it needs, combined with more RAM.

What I find a bit odd is why MS spent all this engineering time and effort rather than just adding a few more CUs. Nothing's more expensive these days than people; they'll quickly get more expensive than silicon.

It would seem like MS/AMD would have expended a lot of effort on these "DMA engines" and it's kind of weird.
 
I guess these DME pieces might be what's giving them a pseudo HSA environment in the absence of being able to wait for AMD's parts that'll have it built in.
 
It would seem like MS/AMD would have expended a lot of effort on these "DMA engines" and it's kind of weird.

If I had to guess, the decision to use DDR3 and a fast memory pool probably happened very early on, and the DMA engines are specifically there to deal with moving data between the pools.
I would bet they correspond to perceived pain points developers had dealing with the two pools on the 360.
 