On XB1, the 12 CUs, two geometry primitive engines, and four render backends (the depth and color engines) support two independent graphics contexts. So it may let devs easily use most or all of the XB1 GPU's resources for graphics. As sebbbi and others explained, it's possible to use the ACEs for graphics too (in some special circumstances), but the second GCP may be the easier or more efficient route, since it keeps the fixed-function hardware in play and allows synchronous compute rather than async compute.
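To make the split concrete, here's roughly how the two submission paths surface in D3D12 on PC (a minimal sketch, not the console API; the queue names are mine): a DIRECT queue feeds a graphics command processor and the full fixed-function pipeline, while a COMPUTE queue is serviced by the ACEs.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// A DIRECT queue drives the graphics command processor (and with it
// the fixed-function front end); a COMPUTE queue is dispatch-only and
// maps to the ACEs, so it runs asynchronously to the graphics queue.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```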
I'm speculating here, but I would be curious as to why it wouldn't be harder to use the second GCP in rendering towards a unified output.
The ACEs were designed and marketed from the outset as better virtualized and capable of coordinating amongst themselves. Because of their simplified contexts, prioritization, context switching, and more recently some form of preemption were rolled out for them first.
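As a rough illustration of the prioritization the ACEs enable, later PC APIs expose it directly on compute queues; a minimal D3D12 sketch, assuming `device` is a valid ID3D12Device*:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Compute queues can be created at elevated priority so their
// dispatches get scheduled ahead of normal-priority work; the
// equivalent control for the graphics context arrived much later.
void CreateHighPriorityComputeQueue(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& queue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_HIGH;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
}
```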
The graphics front end has not kept pace, and neither have significant portions of the fixed-function pipeline.
It will not be until Carrizo that preemption finally rears its head for the graphics context, and paranoia over the GCP being DoSed has been a point of contention in Kaveri's kernel development discussion for Linux. If a platform is paranoid about a game DoS-ing the GPU, or it needs some level of responsiveness, one way to get in edgewise is to have a secondary front end that can sneak something in.
(Fun fact: it's not just graphics. The SDK cautions against having long-running compute kernels active when the system tries to suspend; if the GPU takes too long to respond, it's a reboot. Similar GPU driver freakouts can occur on the PC.)
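One common mitigation (my sketch, not anything the SDK mandates) is to never let a single submission run long: chop the job into chunks and fence between them so a suspend request can get in edgewise. `RecordChunk()` below is a hypothetical helper that returns a closed command list covering one slice of the work.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical: records and closes a command list for slice i.
ID3D12CommandList* RecordChunk(UINT i);

// Submit one long compute job as many small chunks, fencing between
// them so the GPU reaches a preemptable/suspendable point regularly
// instead of being stuck inside one monolithic dispatch.
void RunInChunks(ID3D12Device* device, ID3D12CommandQueue* queue,
                 UINT chunkCount)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    for (UINT i = 0; i < chunkCount; ++i)
    {
        ID3D12CommandList* lists[] = { RecordChunk(i) };
        queue->ExecuteCommandLists(1, lists);
        queue->Signal(fence.Get(), i + 1);
        fence->SetEventOnCompletion(i + 1, done);
        WaitForSingleObject(done, INFINITE); // idle point between chunks
    }
    CloseHandle(done);
}
```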
I may be pessimistic, given AMD's slower progress on this front, but it may be harder to get proper results out of a front end that has never needed the means to coordinate with an equivalent front end before.
The delay until the new OS rollout might be another indicator of the complexity involved. The ability to properly virtualize a GPU without serious performance concerns is recent, and both the VM and the hardware need to be up for it. If the older OS system model predates these changes, it may have leveraged a secondary GCP as a shortcut to present a "second" GPU, for the sake of a simpler target and improved performance.
The quoted passage on cooling solutions is "meh" to me. Unless it's a ROG or other boutique solution, why would a cooler be specced to dissipate a power level greater than one that would likely blow out the VRMs of a PCIe-compliant device?
No modern GPU of significant power consumption is physically capable of full utilization without proper power management clamping down clocks or voltages almost immediately.
Didn't that dink from Stardock say DX12 would allow draw calls from different threads to be submitted in parallel? It'd be interesting to know if any PC parts already have two command processors, or if the next wave of them does.
Maybe. However, it takes quite a bit to saturate the command processor, particularly once other bottlenecks come into play. If one of the motivations for the two command processors in the consoles was better QoS and system accessibility, Carrizo's introduction of graphics context switching might mean upcoming APUs have less need for a duplicated GCP. The other reason may be that upcoming APUs will probably bottleneck elsewhere well before the gains from a second front end could be realized.
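For what it's worth, the parallel-submission claim is real at the API level whether there are one or two hardware front ends: D3D12 lets each thread record its own command list, and the results all land on one queue. A hedged sketch (`RecordDraws()` is a hypothetical stand-in for per-thread scene traversal):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
using Microsoft::WRL::ComPtr;

// Hypothetical: records this thread's draw calls into the list.
void RecordDraws(ID3D12GraphicsCommandList* list);

// Each worker thread records into its own command list/allocator pair
// (recording is the part DX12 parallelizes); submission happens once,
// on a single queue, regardless of how many GCPs consume it.
void SubmitInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                      unsigned numThreads)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(numThreads);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(numThreads);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < numThreads; ++t)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[t]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[t].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[t]));
        workers.emplace_back([&lists, t] {
            RecordDraws(lists[t].Get());
            lists[t]->Close();
        });
    }
    for (auto& w : workers) w.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```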