I can't dismiss it out of hand. He very carefully stated that it is a decision only 2 months old, and I've been gone from the team for 3 months now. I still think it is bull, but I can't say I _know_ it's bull, since I don't. I've seen some of the executives make stupid decisions like this in the past, so it has a small chance of being true (but a very small chance).

The most surprising thing about this thread since the new 'rumor' is that bkilian, knowing what he knows, hasn't completely dismissed the idea outright. It's really the only reason why I haven't completely ruled it out as well. The whole thing just seems very un-"Microsoft 2012".
Well sure, but I suppose what really threw me off was your coming up with ideas as to why this would actually make sense (motherboard redesigns and such).
I'd go with display planes too.

All that said, and this is to anyone: before this craziness started I posted a link to the system diagram, which showed a block labeled "Display" that I assumed was for the HDMI output. However, this block also has a read speed that is over triple the write speed. Wondering if this would represent the HDMI in, and why it would be triple what the HDMI out is, unless there's more than one HDMI input. Anyone?
I would have been much more likely to believe it if he had said they were dropping in a second stock mobile APU, say a 4-core Jaguar and 1 or 2 CU GPU, with its own pool of DDR3, and then told developers, "Remember that system reservation we told you about? It's gone."

Something like that would not be unprecedented; it's how the HD-A2/A20 and HD-A3/A30 HD DVD players worked.
Actually, what you say is pointed out in the Yukon scheme.

By the way, is there in fact a system reservation?
Of course there's a system reservation. The PS3 has a system reservation, the 360 has a system reservation, the PS4 will have a system reservation, irrespective of what some people think, and whatever console MS comes out with will have a system reservation. (By system reservation, I mean resources usable only by the system and not accessible to games, whether it's memory, CPU cores, GPU time, or some piece of hardware.)
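To make the term concrete, here's a toy sketch in C of what a system reservation boils down to: a static split of each resource between the OS and games. The table and all the numbers in it are made up for illustration; they're not figures from any leak.

```c
#include <stdio.h>

/* Toy model: each resource has a total and a slice held back for the OS. */
struct reservation {
    const char *resource;
    double      total;     /* total on the box            */
    double      reserved;  /* held back for the system OS */
    const char *unit;
};

int main(void) {
    /* Hypothetical split: games only ever see total - reserved. */
    struct reservation table[] = {
        { "memory",    8.0,   2.0,  "GB"         },
        { "CPU cores", 8.0,   2.0,  "cores"      },
        { "GPU time",  100.0, 10.0, "% of frame" },
    };
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        printf("%-10s games see %.1f of %.1f %s\n",
               table[i].resource,
               table[i].total - table[i].reserved,
               table[i].total,
               table[i].unit);
    return 0;
}
```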
What's happening on March 6?

Apparently MS are having a meeting with developers behind closed doors.
I for one kind of miss astrograd and interference. Reading Shifty's response to a question I asked, the reason for the ban doesn't seem to be that serious.

I don't understand the need to ban everybody. If it's bringing some thoughtful discussion to the table, then why do you care? There's got to be a point where you should just let discussions develop on their own.
Tommy McClain
Look at the architecture design: CPU + GPU + DMEs + eSRAM all on the same unit. Is MS doubling all that? 8 DMEs for the devs to worry about instead of just doubling up the performance of the 4? Two APUs sharing 60 GB/s DDR3, and then with 100 GB/s local storage each, while the framebuffer is supposed to be in DDR3? How do they interact and share workloads? It's a crazy design! A lot of work. Far easier to just add a second GPU and disable that for low-power functions. The whole architecture fits the design of 8 x86 cores, 12 CUs, 60 GB/s DDR3, and local eSRAM.
If there is reservation outside of display planes (which is helpful for merging 2 framebuffers from 2 GPUs)...

Not really. There's one plane in front of the other. Unless you divide GPUs into fore- and backgrounds somehow, the display planes won't help. But the logic of the display planes suits one frontbuffer, one UI buffer, and one system buffer - three planes.
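For what it's worth, that three-plane logic is easy to see in code. A rough C sketch of scanout compositing under those assumptions (plane order and blend rule are my guesses, not documented hardware behaviour): for every pixel sent out, the display block reads one pixel from each of the three planes, which is exactly the kind of read-heavy traffic being discussed.

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } px;

/* Source-over blend of src onto dst. */
static px over(px src, px dst) {
    px out;
    out.r = (uint8_t)((src.r * src.a + dst.r * (255 - src.a)) / 255);
    out.g = (uint8_t)((src.g * src.a + dst.g * (255 - src.a)) / 255);
    out.b = (uint8_t)((src.b * src.a + dst.b * (255 - src.a)) / 255);
    out.a = 255;
    return out;
}

/* One scanline: three plane reads per single pixel emitted. */
void scanout_line(const px *game, const px *ui, const px *sys,
                  px *out, int width) {
    for (int x = 0; x < width; x++) {
        px p = game[x];       /* read 1: game frontbuffer */
        p = over(ui[x], p);   /* read 2: UI plane         */
        p = over(sys[x], p);  /* read 3: system plane     */
        out[x] = p;           /* one composited pixel out */
    }
}
```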
You're overcomplicating this. It wouldn't be two SoCs sharing 68 GB/s DDR3 bandwidth; that's not possible, because they'd each have a separate memory controller. It'd be two SoCs, each with some bandwidth allotment (possibly still 68 GB/s) to a static partition of DDR3 RAM (probably 4 GB), where one has access to the other via a coherent interconnect like HTcc. This isn't a crazy exotic design; it's exactly what AMD has already done with its dual-die server parts for years now. GPU sharing is also not especially complex, because there isn't much data that has to be communicated between the two. The whole thing could be handled transparently in libraries if developers desired - both GPUs get the same primitives with different scissor windows, the same read assets duplicated to their eSRAMs, and different static partitions of render targets in their eSRAMs.
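A minimal sketch of what that library-side split could look like, assuming the scissor-window scheme described above. The structs and the submit() stub are hypothetical stand-ins, not any real console or AMD API:

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { int x, y, w, h; } rect;

typedef struct {
    int      gpu_id;
    rect     scissor;        /* clip all rasterization to this window   */
    uint64_t esram_rt_base;  /* this GPU's render-target slice in eSRAM */
} gpu_partition;

/* Stub standing in for a real command-submission call. */
static void submit(const gpu_partition *p, const char *draw_list) {
    printf("GPU%d: scissor %dx%d at (%d,%d), RT slice @0x%llx, list=%s\n",
           p->gpu_id, p->scissor.w, p->scissor.h,
           p->scissor.x, p->scissor.y,
           (unsigned long long)p->esram_rt_base, draw_list);
}

int main(void) {
    /* Same primitives go to both GPUs; only the scissor window and
     * the render-target partition differ. */
    gpu_partition top    = { 0, {0,   0, 1920, 540}, 0x0000 };
    gpu_partition bottom = { 1, {0, 540, 1920, 540}, 0x8000 };
    submit(&top,    "frame_draw_list");
    submit(&bottom, "frame_draw_list");
    return 0;
}
```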
not really. There's one plane in front of the other. Unless you divide GPUs into fore- and backgrounds somehow, the display planes won't help. But the logic of the display planes suits one frontbuffer, one UI buffer, and one system buffer - three planes.
On the bottom right, there's the block that is labeled "Display", which has a write speed of 1.1 GB/s (8.8 Gb/s), which I suppose is the HDMI video output?

If that's the case, why is the read speed 3.9 GB/s? If this is the HDMI input, it seems like there might be more than one input?
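Quick back-of-envelope on those figures (the 3.9 and 1.1 GB/s are from the posted diagram; the 1080p60, 4-bytes-per-pixel stream assumption is mine):

```c
#include <stdio.h>

int main(void) {
    double read_gbs  = 3.9;  /* diagram: Display block read  */
    double write_gbs = 1.1;  /* diagram: Display block write */

    /* One 1080p60 stream at an assumed 4 bytes per pixel: */
    double stream_gbs = 1920.0 * 1080 * 4 * 60 / 1e9;

    printf("read/write ratio: %.1f\n", read_gbs / write_gbs); /* ~3.5  */
    printf("one 1080p60 stream: %.2f GB/s\n", stream_gbs);    /* ~0.50 */
    return 0;
}
```

A roughly 3.5:1 read-to-write ratio is what you'd expect if the block reads several planes (and possibly an HDMI-in stream) while emitting a single composited output, rather than there being multiple HDMI inputs.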
Okay, but that's not what I'd call an efficient solution though, which makes a bit of a mockery of the other efficiency solutions. Surely MS would be better off with dual off-the-shelf 4-core + 10+ CU APUs? I don't see the sense of designing a system clearly to use a pool of DDR3 for assets and eSRAM for GPU working space, and support units to facilitate their operation, and then banging two together.

That said, the rumour wasn't two identical APUs, but two different APUs, which is more plausible (if still unlikely).