Xbox One (Durango) Technical hardware investigation

That also flies in the face of what insiders have been saying for a long time now, and in the face of a few dev comments saying the two are pretty close in performance. I think when they show games that look every inch as good as the PS4 titles, people will take notice and share the bewilderment. In the meantime, they will continue to assume there is a vast power gap (which may be true, but it doesn't explain the things I mentioned above). Maybe Durango really does have a notable efficiency advantage in real-world applications?

One thing I don't understand: some people say the extra hardware is all bog-standard stuff found in any GPU (DMEs are just DMAs, display planes are worthless, eSRAM is only there as a patch to make up for low bandwidth)... If that is true, then why didn't they use DDR3 for the OS and GDDR5 for application-accessible RAM and save the die space? Why go through the effort of designing special DMEs when the AMD GPU already has DMA engines built in? Why move them outside the GPU?

I personally don't think the specs are changing (though I do think there may be more to them than meets the eye at first glance relative to Orbis). That said, I also think there is still some room to speculate about the real-world outcomes of MS's design compared to the more obvious alternatives.


Actually, those additions, especially the DMEs, do have a specific role to play in the architecture of the new box. They will perform tiling/untiling of textures and resources, and they include texture compression/decompression hardware as well as texture swizzle/unswizzle, all in addition to their typical DMA functions. They seem to be gearing the system towards virtual/mega-texturing and mega-meshes, as they link to the id Tech 5 PDF and Lionhead's mega-mesh PDF. Although this can be done on any system, they appear to be providing hardware support to make it easier and cheaper to use.

And we already know what the ESRAM can do. In addition to providing space and bandwidth for the GPU, it is low latency, something they keep pointing out; apart from its use in compute jobs, it seems the ROPs are also sensitive to latency, going by the info on vgleaks. It also doesn't have the limitations of the EDRAM in the 360. So while these components can help alleviate a bandwidth bottleneck, that is not the only reason they chose to use them. Quite simply, it seems they have a vision of where graphics development is going, and they designed a system to support it, one that should provide optimal performance given the resources available to it.
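To give a concrete picture of the tiling/swizzling work mentioned above, here is a minimal sketch of a Morton-order (Z-order) swizzle in C. This is not Durango's actual tile format (which isn't public), just a generic example of the kind of address remapping a move engine could offload from the CPU and GPU; it assumes a square, power-of-two, 32-bit-per-texel texture.

```c
#include <stdint.h>

/* Interleave the low 16 bits of x and y into a Morton (Z-order) index.
 * This is a generic swizzle pattern, NOT Durango's actual tile format;
 * it only illustrates the kind of address remapping that
 * tiling/untiling hardware performs on texture data. */
static uint32_t morton2d(uint16_t x, uint16_t y)
{
    uint32_t vx = x, vy = y;
    vx = (vx | (vx << 8)) & 0x00FF00FFu;
    vx = (vx | (vx << 4)) & 0x0F0F0F0Fu;
    vx = (vx | (vx << 2)) & 0x33333333u;
    vx = (vx | (vx << 1)) & 0x55555555u;
    vy = (vy | (vy << 8)) & 0x00FF00FFu;
    vy = (vy | (vy << 4)) & 0x0F0F0F0Fu;
    vy = (vy | (vy << 2)) & 0x33333333u;
    vy = (vy | (vy << 1)) & 0x55555555u;
    return vx | (vy << 1);
}

/* Copy a linear (row-major) 32-bit-per-texel texture into the swizzled
 * layout.  Assumes width == height and both are powers of two so the
 * Morton index stays dense.  A DMA/move engine doing this in hardware
 * spares the CPU/GPU both the address math and the copy itself. */
void swizzle_texture(const uint32_t *linear, uint32_t *tiled,
                     uint16_t width, uint16_t height)
{
    for (uint32_t y = 0; y < height; ++y)
        for (uint32_t x = 0; x < width; ++x)
            tiled[morton2d((uint16_t)x, (uint16_t)y)] =
                linear[y * width + x];
}
```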
 
So MS brought this team on specifically to head up the entire architecture of the new Xbox, they spent three years doing so, and you think they did this just to investigate their options for BC?
No, I wasn't saying anything other than that there can be other reasons why we're not seeing the ex-IBM engineers' obvious involvement in the CPU and GPU design. There are plenty of other possibilities too. My point is only a challenge to the notion that "MS hired senior IBM engineers, ergo there must be major customisation to the CPU and GPU."
 
Don't magazines take like four weeks to get to print or something? Most likely it's the same story they had on their site a month ago.
I don't know, I didn't read the story on the website. As I said, for what it's worth, the article is quite extensive and they quote a source who is familiar with both machines.
 
No, I wasn't saying anything other than that there can be other reasons why we're not seeing the ex-IBM engineers' obvious involvement in the CPU and GPU design. There are plenty of other possibilities too. My point is only a challenge to the notion that "MS hired senior IBM engineers, ergo there must be major customisation to the CPU and GPU."

MS did once have a custom IBM CPU in development, which was taped out in December 2011. But bad yields led to the switch to AMD's x86 architecture, I suppose.
 
I am curious whether this will be confirmed or whether it's just someone feeling self-important: 8-16 cycles to ESRAM.

If this is confirmed I have quite a few snippets from the source.

I assume that's GPU cycles. That would still be very fast if that is the case.

Cheers
 
What if the IBM engineers are there for some kind of backwards-compatibility adjustments to Durango? Alternatively, they could be there to help integrate the 360's eDRAM, perhaps to reduce the size of that chip at 28nm. Maybe one of the SKUs has the Xbox 360 chip installed as a backwards-compatibility device, and developers are unaware and cannot leak it simply because they do not need to know about backwards compatibility at this point?
 
But it is strange that the CPU needs many more latency cycles to hit its L2 than the GPU needs to access ESRAM. That is almost L1-like behaviour.

8-16 GPU cycles are 16-32 CPU cycles, so GPU-to-ESRAM latency would be comparable to a CPU L2 cache hit (20 cycles). In general, it is easier to design a low-latency scratchpad memory than a low-latency cache.
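As a quick sanity check of that conversion (assuming the rumoured clocks from the leaked specs, roughly 800 MHz for the GPU and 1.6 GHz for the CPU, i.e. a 2x ratio), a few lines of C reproduce the numbers:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed clocks, based on the rumoured/leaked specs (not confirmed):
     * GPU ~800 MHz, CPU ~1.6 GHz, i.e. the CPU runs at 2x the GPU clock. */
    const double gpu_hz = 800e6;
    const double cpu_hz = 1.6e9;

    for (int gpu_cycles = 8; gpu_cycles <= 16; gpu_cycles += 8) {
        double ns = gpu_cycles / gpu_hz * 1e9;     /* latency in nanoseconds     */
        double cpu_cycles = ns * 1e-9 * cpu_hz;    /* same latency in CPU cycles */
        printf("%2d GPU cycles = %4.0f ns = %4.0f CPU cycles\n",
               gpu_cycles, ns, cpu_cycles);
    }
    /* Prints:  8 GPU cycles = 10 ns = 16 CPU cycles
     *         16 GPU cycles = 20 ns = 32 CPU cycles
     * i.e. in the same ballpark as an L2 hit (~20 CPU cycles). */
    return 0;
}
```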
 
I am curious whether this will be confirmed or whether it's just someone feeling self-important: 8-16 cycles to ESRAM.

If this is confirmed I have quite a few snippets from the source.

How about you post those snippets, since we are all speculating here anyway?
 
What is it about the current Durango architecture that can't 'handle' 360 games? Is it just the CPU, or is the whole thing the wrong fit in the wrong places (CPU, GPU, eDRAM)?

I doubt it could be done on the CPU side. But even if it could technically be brute-forced through software BC, I think it was bkilian who said the time/manpower/cost of doing it is very high.

Xbox BC on the 360 was dropped fairly quickly. I may be wrong, but I think software BC would be even harder this time?
 
Yeah, and that is in relation to the 360 GPU which, according to their estimate, had about 53% efficiency.

Would you mind sharing whether, and how, efficiency is defined in the document? Are they using a benchmark or is it all generalities?
 
The latency to GDDR5 is 400-500 cycles. But it is strange that the CPU needs many more latency cycles to hit its L2 than the GPU needs to access ESRAM. That is almost L1-like behaviour.

I don't believe the 400-500 cycles number for a moment.

The internal structure of GDDR5 modules is similar to DDR3, the command/address bus rate is comparable, it just has twice the data rate.

The high latency seen in some GPU benchmarks is likely the result of the memory system delaying individual requests in order to bunch several requests to identical banks together for higher aggregate throughput (fewer stalls waiting for banks to open).

This is probably performance-enhancing for a GPU, which can tolerate the added latency, but not for a CPU; requests from the CPU should be serviced ASAP.

I'd expect main memory latency to be on the order of 50-100 ns; ESRAM (if the 8-16 cycles figure is true) would be 10-20 ns.

Cheers
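For anyone who wants to sanity-check that 50-100 ns DRAM figure on their own machine, here is a generic pointer-chasing microbenchmark sketch in C. Nothing in it is Durango-specific; each load depends on the previous one, so the time per step approximates raw memory latency rather than bandwidth.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define ELEMS (64u * 1024 * 1024 / sizeof(size_t))  /* 64 MiB, well past the caches */
#define STEPS (10 * 1000 * 1000)

/* Small xorshift PRNG so we don't depend on RAND_MAX being large enough. */
static uint64_t rng_state = 0x9E3779B97F4A7C15ull;
static uint64_t xorshift64(void)
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

int main(void)
{
    size_t *chain = malloc(ELEMS * sizeof *chain);
    if (!chain) return 1;

    /* Sattolo's algorithm: build a single random cycle so the walk visits
     * every element and the hardware prefetcher can't guess the pattern. */
    for (size_t i = 0; i < ELEMS; ++i) chain[i] = i;
    for (size_t i = ELEMS - 1; i > 0; --i) {
        size_t j = (size_t)(xorshift64() % i);
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (long s = 0; s < STEPS; ++s)
        idx = chain[idx];                 /* dependent loads: latency-bound */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (idx=%zu)\n", ns / STEPS, idx);

    free(chain);
    return 0;
}
```

On a typical PC this should land in the DRAM range quoted above; a buffer small enough to sit in cache will show the much lower figures being discussed for the ESRAM.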
 
Do you really think the CPU shown by VGLeaks is ALSO meant to run the OS of Durango?
If so, isn't it somewhat weak for that purpose too?
 