Well, the crisis is indeed set to triple-dip... I don't know, but if MSFT wants to keep the 360 around for a long time, I wonder if they should do more than shrink it: redesign it a bit, like what Nintendo did going from the Wii to the Wii U, though to a lesser extent. One issue I see is that they may want to lower the clock speed of the CPU.
As it is, the 360 is problematic in my view; I would think they may want to shift from IBM's SOI process to TSMC 28nm. I mean, Redwood on TSMC 40nm is ~100 mm^2, and the "same" GPU in Llano on a 32nm SOI process is roughly the same size.
Another reason I would see for a massive redesign is that if they want to keep the system around for a long while, extra power may ease porting from next gen to "current gen v2". The same applies for services/Kinect.
I don't know how far the 'emulation ninjas' can go. They got some Xbox games to run on the 360; how many titles could they get to run on a significantly redesigned system? What would MSFT consider good enough? With the installed base of 360s set to stay relevant for a while (and it is a pretty big one), I would think they would want that number to be big. There is also the issue of alienating owners of a current 360: developing games for the 360, a 360++, and Durango sounds like a nightmare. Even if they could get the system to $99, more or less forcing the upgrade from the 360 to the 360++ sounds like a disaster in the making.
If MSFT were to do that, I wonder if they could maybe bundle it with games à la CoD and take a loss to push the upgrade on users who won't be able to afford Durango; think the next CoD + 361 @ $99.
Or they could buy back old 360s traded in for a new one (lowering the price of the system to the price of a AAA game). And/or bundle in a one-year Gold commitment. They could give users a proper timeline: in one year the system will no longer be supported, presented alongside their upgrade path, either toward Durango or the 361.
It would be costly (at first) but I kind of like the last three solutions.
If the 360 is to have a "second life", there are things that need to change; a shrink alone won't cut it.
Even without speaking of the silicon chips themselves, to be cheap you may not want an HDD, and DVD capacity is problematic => you may want a BRD player plus some flash for the system and caching, and let customers use their own solution for external storage. They may want to use the new Kinect, which could be cheaper to produce.
Wrt the "silicon", in the long run they may want to move away from GDDR3, have the CPU run at a lower clock speed, add more RAM, and remove the artificial (though implemented in hardware) limitations they introduced when they fused Xenon and Xenos, etc.
For the EDRAM I wonder too: is it cheaper to keep what would be a tiny chip @40nm, or ultimately to include 10MB of ESRAM on a single chip along with the GPU and CPU (maybe allowing extra performance in non-BC mode) built on a denser and cheaper process? (You lose density by jumping from EDRAM to SRAM for the scratchpad memory, but it is a win for the CPU and GPU.)
On TSMC 28nm, Xenon cores should be in the same ballpark as Bobcat/Jaguar cores in mm^2, let's say really tiny (actually I would think Jaguar cores should be a tad bigger).
The L2 is pretty tiny too (1MB would not take that much room).
The Xenos should be tiny too, really tiny. Cape Verde is 123 mm^2, and that includes the fixed-function hardware, 10 much more complex SIMD arrays (vs 3), 16 ROPs, the memory controller, etc.
That leaves the daughter die. The educated people here think the 32MB of ESRAM in Durango could end up around 65 mm^2; as a gross guesstimate the daughter die should be ~20 mm^2, let's say 25 mm^2 as a worst-case scenario.
Putting everything together, we're speaking of something quite tiny. The issue I see is the CPU, which has to run @3.2GHz, and how that affects transistor "density"; it could also make a tiny chip pretty hot, with a lot of Watts to dissipate per mm^2.
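To put rough numbers on the "quite tiny" claim, here's a toy back-of-the-envelope sum. The per-block mm^2 figures are my own guesstimates (a Jaguar-class core size, a rough L2 macro, a GPU well under Cape Verde's 123 mm^2, the ~25 mm^2 worst-case daughter die from above), not measured data:

```python
# Guesstimated die area for a hypothetical 360 SoC at TSMC 28nm.
# Every figure below is an assumption for illustration, not real silicon data.
components_mm2 = {
    "6x Xenon cores": 6 * 3.1,      # assuming ~Jaguar-class core area each
    "2MB L2": 8.0,                  # rough SRAM macro guess
    "Xenos-class GPU": 40.0,        # far below Cape Verde's 123 mm^2
    "10MB ESRAM + ROPs": 25.0,      # worst-case daughter-die guess from above
    "misc (IO, video, DSP)": 20.0,  # catch-all guess
}
total = sum(components_mm2.values())
print(f"total ~{total:.0f} mm^2")  # prints "total ~112 mm^2"
```

Even with generous padding, the sum lands well under today's shrunk 360 SoC, which is the point of the exercise.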
I've no idea of the costs, but shrinking is not trivial and I wonder how it compares to a redesign. In both cases I think we're speaking about a lot of money (billions of $), but one option, while cheaper, brings nothing new to the table.
Again, if the software guys can wrap their heads around it, I could see something like this being interesting:
Xenon cores and Xenos SIMD arrays would be mostly unchanged (fixing some issues with the CPU or including slight refinements could be fine, as long as it doesn't make BC an even greater issue à la Wii U).
* CPU:
- 6 Xenon cores, 1.6 GHz < X < 2.4 GHz (depending on the power consumption target, though not dynamic: fixed at design time) + 2MB of L2
- the L2 runs at the same speed as the CPU cores (vs half speed)
- the CPU cores + cache can run at multiple clock speeds (though not individually; managed by software)
- simple power management: cores can be power-gated (individually), and half the L2 too (managed by software/OS)
* GPU:
- 4 SIMD arrays (vs 3) and a bump in clock speed (+/- 650MHz)
- "low" latency communication between the main processor and the GPU (not like in Valhalla), maybe taking from AMD's recent APUs and the PS4. Could the CPU and GPU communicate through a "delayed queue" / software trick to recreate the latencies of the original design?
* Integrated daughter die:
- 10MB of ESRAM, same number of ROPs, a bump in clock speed to match the shader core throughput, no changes outside the shift from EDRAM to ESRAM. Linked to Xenos v2.
* Misc:
- Include an up-to-date video processing unit.
- A sound DSP could be nice.
- Maybe support for display planes as in Durango.
- Maybe some encoding hardware if they are to stream video output to other devices.
* IO, memory, storage, peripherals
- 128-bit bus to DDR3, ~33 GB/s of bandwidth (half of Durango's / same memory type => more volume when ordering). 2GB of RAM, 512MB reserved for the OS
- 16GB of flash
- 2x/3x BRD
- Support for external storage through USB 3
- Integrated WiFi
- Support for the "new" Kinect and the Durango controller (as well as existing controllers)
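As a sanity check on the ~33 GB/s figure for the 128-bit DDR3 bus above, the arithmetic works out if you assume DDR3-2133 (my assumption; slower bins would land below the quoted number):

```python
# Peak bandwidth of a 128-bit DDR3 bus, assuming DDR3-2133 (2133 MT/s).
# The memory speed grade is my assumption, not something stated in the spec list.
bus_width_bits = 128
transfers_per_s = 2133e6                  # mega-transfers per second
bytes_per_transfer = bus_width_bits / 8   # 16 bytes moved per transfer
bandwidth_gb_s = transfers_per_s * bytes_per_transfer / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")       # prints "34.1 GB/s"
```

~34 GB/s peak, so the ~33 GB/s in the list is consistent with a plain 128-bit DDR3-2133 setup, and half of the ~68 GB/s usually attributed to Durango's 256-bit bus.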
If the software side is doable, that hardware should be pretty cheap to produce, at least every bit as cheap as a shrunk 360, using either two chips (keeping the daughter die separate) or one chip on a more expensive (and less dense) process that allows for the use of EDRAM.
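On the software side, the "delayed queue" idea from the GPU section above could be sketched like this: a FIFO that holds CPU-issued commands for a fixed number of ticks before the GPU side sees them, so the new lower-latency link still presents the old design's timing to BC titles. This is a toy model under my own assumptions (the class name, the tick granularity, and the latency value are all made up):

```python
from collections import deque

class DelayedQueue:
    """Toy model: commands become visible to the consumer only after a
    fixed latency, mimicking the original CPU<->GPU round-trip timing.
    'latency_ticks' is a hypothetical parameter, not a real 360 figure."""

    def __init__(self, latency_ticks):
        self.latency = latency_ticks
        self.pending = deque()   # entries of (ready_tick, command)
        self.now = 0

    def push(self, command):
        # Stamp the command with the tick at which it may become visible.
        self.pending.append((self.now + self.latency, command))

    def tick(self):
        """Advance one tick and return the commands that just became visible."""
        self.now += 1
        ready = []
        while self.pending and self.pending[0][0] <= self.now:
            ready.append(self.pending.popleft()[1])
        return ready

q = DelayedQueue(latency_ticks=3)
q.push("draw_call_A")
visible = [q.tick() for _ in range(4)]
print(visible)  # the command only surfaces on the 3rd tick
```

In non-BC mode the artificial latency could simply be set to zero, which is what would buy the "extra perfs" mentioned earlier without touching BC titles.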
I guess they could have to stick with IBM, which might be reluctant to design something on a non-proprietary process when they have free fab capacity; though even if they go with IBM's 32nm process they should imo redesign the system significantly (making the L2 out of EDRAM could make sense if they use a process that allows for it: a win in power, and it makes up for some density lost elsewhere).
I would hope they are not stuck with such a process, as the density of their last revision is pretty awful imho (400+ million transistors => ~180 mm^2).
It may have something to do with money, time, and effort, as the Power A2 looks better in this regard.
I also wondered a few times if it could have something to do with power: the CPU (the worst offender) and GPU still burn their fair share of power, and a too-tiny chip generating the same amount of heat could have created more problems (and costs) than it would have solved (I wonder if that affects AMD's APUs too).
Actually, if they were to stick with IBM and its process, I wonder if modding a Power A2 would be a better idea (and a better use of time and money) than reimplementing Xenon on a new process: disable 2 threads (in hardware or software) and bolt a VMX128 unit onto it. If the emulation could deal with the kind of changes I'm speaking about, it could possibly handle that too.