There is no Z-RAM for AMD, and there will never be Z-RAM for AMD. AMD abandoned the tech some time ago.
I am generally skeptical of using Bulldozer as we know it in the console space.
In the case of x86, Bobcat would seem to be more likely since it is designed to be synthesizable and portable to foundry processes.
Bulldozer is a server architecture that is ill-suited for consoles. The chip is too big, the cache subsystem is a poor fit, and the acceptable yields for a server/enthusiast desktop CPU are unacceptable for a low-margin volume console CPU.
Redesigning it to meet a wholly different segment would cost serious cash.
Why AMD would do this for a few bucks a chip, and why MS would pay hundreds of millions to get a far-diminished Bulldozer-lite, does not compute for me.
Do you think it would make sense to "Bulldozerize" Bobcat cores? It's something that has bothered me since we got more info on the Bobcat family. Looking at the specs, one Bobcat core @1.2 GHz consumes as much power as the two-core version running @1 GHz.
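To make that concrete, here's a back-of-envelope sketch in Python. It assumes throughput scales roughly with cores × clock, which is a crude model (it ignores memory bottlenecks and imperfect multi-core scaling), but it shows why the dual-core part looks so much better per watt:

```python
# Back-of-envelope: one Bobcat core @1.2 GHz vs. two cores @1.0 GHz,
# which (per the specs above) burn about the same power.
# Assumes throughput ~ cores * clock -- a crude model that ignores
# memory limits and imperfect multi-core scaling.

def relative_throughput(cores: int, ghz: float) -> float:
    return cores * ghz

single = relative_throughput(1, 1.2)   # 1.2
dual   = relative_throughput(2, 1.0)   # 2.0

print(f"dual/single throughput at equal power: {dual / single:.2f}x")
# -> ~1.67x: at the same power budget the two-core part does far more
# work, which is why sharing hardware between cores is so attractive
# for perf/W.
```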
As I see it, AMD should have passed on the single-core Bobcat and gone with modules, as on Bulldozer.
I can't help but believe that AMD could have achieved something even more impressive than Bobcat.
Is there a reason for AMD to pass on that option? Or did they simply not have the resources for the project? (Bulldozer, Llano, Bobcat, GPUs, etc.: they are spread thin... too thin. I don't know how Bulldozer or Llano will fare, but my gut feeling is that AMD should have focused on only one project, and that it should have been Bobcat.)
"Bulldozerzize" bobcat would bring a lot of advantages for a low power device or I mislead (happens often), the chip would be 50% than a single core version but would offer 80% of the perfs of a two cores system. Actually it could be even better as AMD could have go for a single 128bits wide SIMD unit. Single thread perfs would still be the best in town in regard to power consumption. Either way AMD could have invest the saved die space to add more cache or GPU SIMD or improve things here and there.
I'm curious to see where AMD is heading with their Bobcat 2.0.
For a console I could see the SIMD pumped up (256 bits wide instead of 128). Do you think it would be worth it to go with more than two modules?
I know some people here want a big, huge system, but as I see it such a chip, while tiny and power-efficient (even clocked @1.6 GHz), would beat the crap out of today's console CPUs. (When I see benchmarks of Atom vs. other x86 CPUs, and on top of that consider the P4 vs. more efficient x86 chips, I feel like, oh my god, Xenon has to suck so badly... yet the 360 still pushes what I consider acceptable graphics.)
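For a rough sense of what the pumped-up SIMD would deliver, here's a peak-FLOPS estimate for the two-module chip at the 1.6 GHz mentioned above. One 256-bit FMA-capable unit per module is my assumption, not anything AMD has announced:

```python
# Hypothetical chip: 2 modules, one 256-bit SIMD unit per module,
# @1.6 GHz. FMA capability is assumed; without it, halve the result.
modules        = 2
lanes          = 256 // 32   # single-precision lanes per 256-bit unit
flops_per_lane = 2           # FMA: multiply + add per cycle (assumed)
clock_ghz      = 1.6

peak_gflops = modules * lanes * flops_per_lane * clock_ghz
print(f"peak: {peak_gflops:.1f} GFLOPS")  # 51.2 GFLOPS
# Well short of Xenon's paper peak, but the argument above is about
# sustained per-thread performance, where a modern out-of-order x86
# core fares far better than Xenon's in-order cores.
```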
For me, a working Fusion chip for a console now (so @40nm) would be:
Two modules, 256KB of L2 per module, 3MB of L3 shared by the CPUs and GPU (AMD should borrow Intel's SNB "uncore"), a "Kurt-class" GPU, and a 128-bit bus to 2GB of GDDR5.
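For reference, the memory side of that spec works out as follows, assuming a 4.0 Gbps per-pin GDDR5 data rate (a common speed grade; the exact grade is my assumption, not part of the spec above):

```python
# Peak bandwidth of the proposed 128-bit GDDR5 bus.
bus_width_bits = 128
data_rate_gbps = 4.0    # assumed effective per-pin GDDR5 rate

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"peak bandwidth: {bandwidth_gbs:.0f} GB/s")  # 64 GB/s
# Roughly 3x the Xbox 360's 22.4 GB/s to its GDDR3, for context.
```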
@32/28nm, I'm not sure "adding more" would be the best way to go; making things better sounds like the wiser move to me.