Xbox One (Durango) Technical hardware investigation

What's happening on March 6?

Absolutely nothing.

In relation to what he was claiming, it's when his bogus rumor is proven false. Though I expect he'll create more FUD saying things have been delayed.
 
The most surprising thing about this thread since the new 'rumor' is that bkilian, knowing what he knows, hasn't completely dismissed the idea outright. It's really the only reason why I haven't completely ruled it out as well. The whole thing just seems very un-"Microsoft 2012".
I can't dismiss it out of hand. He very carefully stated that it is a decision only 2 months old, and I've been gone from the team for 3 months now. I still think it is bull, but I can't say I _know_ it's bull, since I don't. I've seen some of the executives make stupid decisions like this in the past, so it has a small chance of being true (but a very small chance).
 
I can't dismiss it out of hand. He very carefully stated that it is a decision only 2 months old, and I've been gone from the team for 3 months now. I still think it is bull, but I can't say I _know_ it's bull, since I don't. I've seen some of the executives make stupid decisions like this in the past, so it has a small chance of being true (but a very small chance).

Well, sure, but I suppose what really threw me off was your coming up with ideas as to why this would actually make sense (motherboard redesigns and such).


All that said, and this is to anyone: before this craziness started I posted a link to the system diagram which showed a block labeled "Display", which I assumed was for the HDMI output. However, this block also has a read speed that is over triple the write speed. I'm wondering whether this could represent the HDMI in, and why it would be triple the HDMI out unless there's more than one HDMI input. Anyone?
 
Well, sure, but I suppose what really threw me off was your coming up with ideas as to why this would actually make sense (motherboard redesigns and such).
I would have been much more likely to believe it if he had said they were dropping in a second stock mobile APU, say a 4-core Jaguar and a 1 or 2 CU GPU, with its own pool of DDR3, and then told developers, "Remember that system reservation we told you about? It's gone."
Something like that would not be unprecedented; it's how the HD-A2/A20 and HD-A3/A30 HD DVD players worked.
All that said, and this is to anyone: before this craziness started I posted a link to the system diagram which showed a block labeled "Display", which I assumed was for the HDMI output. However, this block also has a read speed that is over triple the write speed. I'm wondering whether this could represent the HDMI in, and why it would be triple the HDMI out unless there's more than one HDMI input. Anyone?
I'd go with display planes too.
 
I would have been much more likely to believe it if he had said they were dropping in a second stock mobile APU, say a 4-core Jaguar and a 1 or 2 CU GPU, with its own pool of DDR3, and then told developers, "Remember that system reservation we told you about? It's gone."
Something like that would not be unprecedented; it's how the HD-A2/A20 and HD-A3/A30 HD DVD players worked.

I assume something like that (an OS APU) would be fairly easy to re-integrate into a more powerful single APU in a future revision of the console?
 
I would have been much more likely to believe it if he had said they were dropping in a second stock mobile APU, say a 4-core Jaguar and a 1 or 2 CU GPU, with its own pool of DDR3, and then told developers, "Remember that system reservation we told you about? It's gone."
Something like that would not be unprecedented; it's how the HD-A2/A20 and HD-A3/A30 HD DVD players worked.
I'd go with display planes too.

Actually, what you describe is pointed to in the Yukon scheme.

By the way, is there in fact a system reservation?
 
Actually, what you describe is pointed to in the Yukon scheme.

By the way, is there in fact a system reservation?

Outside of the barely rumored 2 cores and 3 GB of RAM... I haven't heard anything.

I still think Yukon is largely in play.

We won't know until someone details the system reservation for the GPU... That's the most important part of Yukon. If there is a reservation outside of the display planes (which are helpful for merging 2 framebuffers from 2 GPUs), there's likely no Xenos on board, which also means likely no backwards compatibility.

Right now the only thing that appears to have changed from Yukon is clocks and RAM (DDR4 to DDR3); even the 2-core reservation fits into Yukon. eSRAM was even listed as an option in the leak (the first reference to its possibility).
 
I would have been much more likely to believe it if he had said they were dropping in a second stock mobile APU, say a 4-core Jaguar and a 1 or 2 CU GPU, with its own pool of DDR3, and then told developers, "Remember that system reservation we told you about? It's gone."
Something like that would not be unprecedented; it's how the HD-A2/A20 and HD-A3/A30 HD DVD players worked.

As a developer that would be the elegant solution, no?

You don't have to worry about shared resources and from a user perspective, your OS experience is never degraded regardless of what game you're playing.
 
Actually, what you describe is pointed to in the Yukon scheme.

By the way, is there in fact a system reservation?
Of course there's a system reservation. The PS3 has a system reservation, the 360 has a system reservation, the PS4 will have a system reservation, irrespective of what some people think, and whatever console MS comes out with will have a system reservation. (By system reservation, I mean resources usable only by the system and not accessible to games, whether it's memory, CPU cores, GPU time, or some piece of hardware.)
 
What's happening on March 6?
Apparently MS are having a meeting with developers behind closed doors.

I don't understand the need to ban everybody. If it's bringing some thoughtful discussion to the table, then why do you care? There's got to be a point where you should just let discussions develop on their own.

Tommy McClain
I for one kind of miss astrograd and interference. Reading Shifty's response to a question I asked, the reason for the ban doesn't seem to be that serious.

I thought it must've been something really truly outstanding to warrant such a ban.

And if they are banned, I hope they are back in a couple of weeks or so, but banned until post-E3? That's a very long time.
 
Look at the architecture design. CPU+GPU+DMEs+eSRAM all on the same unit. Is MS doubling all that? 8 DMEs for the devs to worry about instead of just doubling the performance of the 4? Two APUs sharing 60 GB/s DDR3 and then with 100 GB/s local storage each, while the framebuffer is supposed to be in DDR3? How do they interact and share workloads? It's a crazy design! A lot of work. Far easier to just add a second GPU and disable it for low-power functions. The whole architecture fits the design of 8 x86 cores, 12 CUs, 60 GB/s DDR3 and local eSRAM.

You're overcomplicating this. It wouldn't be two SoCs sharing 68GB/s DDR3 bandwidth, that's not possible because they'd each have a separate memory controller. It'd be two SoCs each with some bandwidth allotment (possibly still 68GB/s) to a static partition of DDR3 RAM (probably 4GB), where one has access to the other via a coherent interconnect like HTcc. This isn't a crazy exotic design, this is exactly what AMD has already done with its dual die server parts for years now. GPU sharing is also not especially complex because there isn't much data that has to be communicated between the two. The whole thing could be handled transparently in libraries if developers desired - both GPUs get the same primitives with different scissor windows, the same read assets duplicated to their eSRAMs, and different static partitions of render targets in their eSRAMs.

I'm not saying this is a likely design at all but if MS wants to spend the money and deal with the cooling implications it's at least plausible.
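
To make that concrete, here's a toy sketch of what "handled transparently in libraries" could look like - every name in it (gpu_t, submit, etc.) is made up for illustration, not anything from an actual Durango SDK. The idea is simply: same draw list to both SoCs, a different scissor window for each, read assets duplicated into both eSRAMs, and each half of the render target living in its own eSRAM:

```c
/* Hypothetical sketch only: gpu_t and submit() are invented names, not a real API. */
#include <stdio.h>

typedef struct { int x, y, w, h; } rect_t;
typedef struct { const char *name; } gpu_t;

/* Stand-in for "submit the same primitives, clipped to this scissor window,
   into this SoC's slice of the render target (held in its own eSRAM)". */
static void submit(const gpu_t *gpu, rect_t scissor)
{
    printf("%s: same draw list, scissor %dx%d at (%d,%d)\n",
           gpu->name, scissor.w, scissor.h, scissor.x, scissor.y);
}

int main(void)
{
    gpu_t soc0 = { "SoC 0" };
    gpu_t soc1 = { "SoC 1" };
    const int w = 1920, h = 1080;

    /* Split-frame rendering: each SoC rasterises one half of the screen.
       Read-only assets would be duplicated to both eSRAMs beforehand. */
    rect_t top    = { 0, 0,     w, h / 2 };
    rect_t bottom = { 0, h / 2, w, h / 2 };

    submit(&soc0, top);
    submit(&soc1, bottom);
    return 0;
}
```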
 
If there is a reservation outside of the display planes (which are helpful for merging 2 framebuffers from 2 GPUs)
Not really. There's one plane in front of the other. Unless you divide GPUs into fore- and backgrounds somehow, the display planes won't help. But the logic of the display planes suits one frontbuffer, one UI buffer, and one system buffer - three planes.
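
For what it's worth, the three-plane idea is just the classic "over" composite done in fixed-function logic at scan-out. A toy per-pixel version (purely illustrative, not how the actual hardware is specified):

```c
/* Toy per-pixel composite of three planes: game frontbuffer, UI/title overlay, system overlay. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t r, g, b, a; } px_t;

/* Standard source-over blend: put 'top' on top of 'bottom'. */
static px_t over(px_t top, px_t bottom)
{
    px_t out;
    out.r = (uint8_t)((top.r * top.a + bottom.r * (255 - top.a)) / 255);
    out.g = (uint8_t)((top.g * top.a + bottom.g * (255 - top.a)) / 255);
    out.b = (uint8_t)((top.b * top.a + bottom.b * (255 - top.a)) / 255);
    out.a = 255;
    return out;
}

int main(void)
{
    px_t game = { 200,  40,  40, 255 };  /* opaque game frontbuffer pixel      */
    px_t ui   = {  40,  40, 200, 128 };  /* half-transparent UI/title overlay  */
    px_t sys  = { 255, 255, 255,   0 };  /* system overlay, transparent here   */

    /* Scan-out order: system over UI over game - no extra GPU pass needed. */
    px_t final = over(sys, over(ui, game));
    printf("displayed pixel: %u %u %u\n", final.r, final.g, final.b);
    return 0;
}
```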
 
You're overcomplicating this. It wouldn't be two SoCs sharing 68GB/s DDR3 bandwidth, that's not possible because they'd each have a separate memory controller. It'd be two SoCs each with some bandwidth allotment (possibly still 68GB/s) to a static partition of DDR3 RAM (probably 4GB), where one has access to the other via a coherent interconnect like HTcc. This isn't a crazy exotic design, this is exactly what AMD has already done with its dual die server parts for years now. GPU sharing is also not especially complex because there isn't much data that has to be communicated between the two. The whole thing could be handled transparently in libraries if developers desired - both GPUs get the same primitives with different scissor windows, the same read assets duplicated to their eSRAMs, and different static partitions of render targets in their eSRAMs.
Okay, but that's not what I'd call an efficient solution, which makes a bit of a mockery of the other efficiency solutions. Surely MS would be better off with dual off-the-shelf 4-CPU + 10+ CU APUs? I don't see the sense of designing a system clearly to use a pool of DDR3 for assets and eSRAM for GPU working space, plus support units to facilitate their operation, and then banging two together. ;) That said, the rumour wasn't two identical APUs, but two different APUs, which is more plausible (if still unlikely).
 
Not really. There's one plane in front of the other. Unless you divide GPUs into fore- and backgrounds somehow, the display planes won't help. But the logic of the display planes suits one frontbuffer, one UI buffer, and one system buffer - three planes.

The OS from one GPU layered on top of the buffer from the game's GPU.

It explains why they did it in logic - at least one of the reasons.
 
On the bottom right, there's the block labeled "Display", which has a write speed of 1.1 GB/s (8.8 Gb/s); I suppose that's the HDMI video output?

If that's the case, why is the read speed 3.9 GB/s? If this is the HDMI input, it seems like there might be more than one input?

The display planes vgleak (IIRC) says the display hardware, apart from assembling the final displayed image from the various planes and sending it to the screen, can also output the same composite image to RAM.

At 1080p@60Hz, 1.1 GB/s allows for ~8 bytes per pixel, or maybe 2x frame buffers including alpha, which could be the "title" (i.e., game) buffer and the final one (i.e., with the OS overlay) -- that kind of makes some sense to me... -- or else 3D support (if it's still alive by then :p).

The 3.9 GB/s read could then be the corresponding input for the 3 display planes: triple the output, plus some more for overhead.
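
A quick back-of-envelope check of those figures (assuming 1080p60 and 4-byte RGBA pixels - my assumptions, not anything stated in the leak):

```c
/* Bytes-per-pixel implied by the "Display" block's read/write figures at 1080p60. */
#include <stdio.h>

int main(void)
{
    const double pixels_per_sec = 1920.0 * 1080.0 * 60.0;  /* ~124.4 Mpix/s */
    const double write_bw = 1.1e9;                         /* write side, B/s */
    const double read_bw  = 3.9e9;                         /* read side, B/s  */

    /* ~8.8 bytes/pixel written: roughly two 32-bit RGBA buffers. */
    printf("write: %.1f bytes/pixel\n", write_bw / pixels_per_sec);

    /* ~31 bytes/pixel read: roughly triple the write figure, plus some overhead. */
    printf("read:  %.1f bytes/pixel\n", read_bw / pixels_per_sec);
    return 0;
}
```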

Let me add that I'll believe the HDMI input when I see it, and I'm glad that the published specs from Sony have already shot it down for the PS4. Not because it's impossible, but rather because a) I've never seen it done before apart from AV receivers, and b) I can't imagine it makes business sense (but I'm sort of limited in this department).
 
Okay, but that's not what I'd call an efficient solution, which makes a bit of a mockery of the other efficiency solutions. Surely MS would be better off with dual off-the-shelf 4-CPU + 10+ CU APUs?

What off-the-shelf APU are you thinking of? Trinity isn't even GCN, and Richland and Kabini aren't out yet. More to the point, AMD's SKUs for consumer products don't align well with what Sony and MS would prefer for a game console. Hence they're using a large number of relatively weak Jaguar cores instead of a smaller number of high-clocked Piledrivers, because this is a better way to spend their power budget. This doesn't change whether you go with a one-chip or two-chip solution. AMD hasn't shown any public interest in bringing out an APU that marries Jaguars with anything even remotely as powerful as a 10 CU GCN part like you propose. Kabini is something like 2 or 4 CUs.

Furthermore, it's not like MS wouldn't want their own customization and IP in the SoC, not least of which includes the eSRAM, but would go far beyond that. They're also probably getting it lower cost by footing some of the chip design on their end and merely licensing the IP they need from AMD, which for large volume is cheaper than buying a complete design from them.

MS would end up with two display controllers this way too. Maybe one would be disabled, but even two active could have uses.

I don't see the sense of designing a system clearly to use a pool of DDR3 for assets and eSRAM for GPU working space, plus support units to facilitate their operation, and then banging two together. ;) That said, the rumour wasn't two identical APUs, but two different APUs, which is more plausible (if still unlikely).

The DDR3 + eSRAM design is completely orthogonal to a multicore design. If anything, local memory oriented solutions can scale nicely. IMG's GPUs do just fine scaling cores with fast tile SRAM.

IMO two different APUs is less plausible because it negates the cost benefit of only having to design one chip. 16 CPU cores is far-fetched, but if MS has plans out of the gate to use a few of them for their own purposes then it isn't that crazy.

If MS did this it may not have been their first choice, and could be a reaction to Sony's plans (although the decision would have needed to be made a long time ago, of course). MS has more resources than Sony to put into design, and they have more cash to burn on early console sales; gambling it on a more powerful CPU + GPU makes sense since there's a fairly reliable timeline for cost reduction on those parts. But doing a 500 mm^2 SoC is out of the question.
 