I don't see the upsides to two SoCs or SoC + GPU... You increase board complexity and cooling complexity, split the memory pool (or add latency to it by making the controller external and shared), and end up with a machine that's harder to get the most performance out of. I swear, some people just want exotic hardware for the sake of obscuring comparisons to existing, measurable systems so they can continue to live in a console-utopia la-la-land.
Well, my take on the matter is just production and R&D costs. I suspect that AMD will not have an off-the-shelf product that includes Jaguar cores and is not an APU. In their high-power range of products they don't have a chip of sane size with no GPU either. I can't see Sony, for example, using those high-power 4-module chips, even with a disabled module.
So if you go with off-the-shelf parts you might end up with an APU. Either way you don't go with an off-the-shelf CPU.
If you don't go with an off-the-shelf CPU it costs money, and it gets worse if you also go with a non-off-the-shelf GPU. That is quite some money on top of already quite some money.
Thing is, I don't think that either MSFT or Sony would go with truly off-the-shelf parts. Money has to be spent, but my POV is that Sony is in bad shape and could afford to invest in only one chip from scratch, a chip of reasonable size with good yields and a sane price, and use two of them.
I would not bet on a shrink coming fast to lower the production price; it will require another massive R&D investment on an ever costlier process.
I take it that you want a single SoC because two chips is too complex (well, the 360 launched with three, and most PCs still have two main chips, a CPU and a GPU). Thing is, I could see a relatively big (not insane) chip fitting the bill, but with shrinks being a more long-term goal than they used to be, I would be wary, especially for Sony. Then you have the bandwidth issue and the amount of RAM you want for your system.
I can't defend others, only my POV: Sony should aim for something cheap to produce. Two SoCs head to head (through a HyperTransport link), each linked to its own DDR3, should not be such a crazily complex thing, no more than the RSX and Cell combo with their separate pools of memory, or most PCs for that matter.
Just for reference, guesstimating based on AMD presentations, I think that a quad-core Jaguar set-up (with cache) should be ~80mm², and if you add a Cape Verde-class GPU you are at ~200mm² (for the whole APU).
That is a bit light IMO; you may want to push a bit further, but you are already past 185mm², and going further seems to imply extra cost with no cheap reduction in size coming from a shrink any time soon.
On the other hand, you could go with a quad core and between 6 and 8 CUs on a reasonably sized chip and use two of them.
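A crude back-of-the-envelope sketch of that trade-off, built only from the guesstimates above (the ~80mm² Jaguar block, ~200mm² whole APU, and the derived per-CU figure are my assumptions, not official numbers):

```python
# Crude die-area model from the guesstimates above: ~80 mm^2 for a quad-core
# Jaguar block with cache, ~200 mm^2 for a whole Cape Verde-class APU.
# These are assumptions, not official figures, and the per-CU cost lumps all
# the GPU uncore in with the CUs.

JAGUAR_QUAD_MM2 = 80.0                    # quad-core Jaguar + cache (assumed)
GPU_BLOCK_MM2 = 200.0 - JAGUAR_QUAD_MM2   # ~120 mm^2 for a ~10 CU GPU block
MM2_PER_CU = GPU_BLOCK_MM2 / 10.0         # ~12 mm^2 per CU in this crude model

def soc_area_mm2(cus: int) -> float:
    """Area of one SoC: one quad-core Jaguar block plus `cus` CUs."""
    return JAGUAR_QUAD_MM2 + cus * MM2_PER_CU

single_apu = soc_area_mm2(10)       # ~200 mm^2 on one die
dual_soc_each = soc_area_mm2(7)     # ~164 mm^2 per die
dual_soc_total = 2 * dual_soc_each  # ~328 mm^2 of silicon in total, but split
                                    # across smaller, better-yielding dies

print(f"single APU:       {single_apu:.0f} mm^2")
print(f"dual SoC (each):  {dual_soc_each:.0f} mm^2")
print(f"dual SoC (total): {dual_soc_total:.0f} mm^2")
```

The dual-SoC option burns more total silicon, but each die stays small, which is where the yield argument comes from.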
If you have enough money you could indeed go with a massive single chip, or a custom CPU and a custom GPU, etc., do the engineering masks, tests, etc. for all those chips, and have something better.
Another thing is that 4 Jaguar cores is not that much of a jump versus this generation of products; with a dual SoC you have CPU cores to spare for the OS and for possible resource-consuming future peripherals. You could also use defective parts: say you have one SoC that is fully functional and another with only 2 or 3 CPU cores active (out of four).
If they are confident in their load-balancing technique (and if it is something more than an empty patent) they could also pair "odd configurations" for the GPUs.
For example:
System1:
SoC 1: 4 cores 6 CUs
SoC 2: 2 cores 7 CUs
System2:
SoC 1: 4 cores 7 CUs
SoC 2: 2 cores 6 CUs
System 1 and system 2 perform the same (or too close to call), and a fully functional SoC would be 4 cores and 7 CUs.
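A quick sanity check on those pairings (a trivial sketch that just sums the hypothetical configurations above):

```python
# Each system pairs a fully working SoC with a partially defective one;
# both systems end up with the same totals of cores and CUs.

systems = {
    "System1": [(4, 6), (2, 7)],  # (cores, CUs) per SoC
    "System2": [(4, 7), (2, 6)],
}

for name, socs in systems.items():
    cores = sum(c for c, _ in socs)
    cus = sum(u for _, u in socs)
    print(f"{name}: {cores} cores, {cus} CUs total")

# Both print 6 cores, 13 CUs, so (assuming the load balancing actually works)
# the two bins should perform the same.
```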
My POV is focused on costs, or tries to be.
Overall, wrt software, I would think that being symmetric, a dual SoC would be a tad less tricky to begin with: identical GPUs, and until better use of resources is mastered (load balancing between the GPUs) you can still rely on an AFR kind of solution; on consoles, with v-sync on more often than not nowadays, it would not be much of an issue.
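To illustrate the AFR idea (a minimal sketch of alternate frame rendering; `render_on_gpu` is a hypothetical stand-in, not any real console or graphics API):

```python
# Minimal sketch of alternate frame rendering (AFR): even frames go to one
# GPU, odd frames to the other, with no attempt at finer-grained balancing.

def render_on_gpu(gpu_id: int, frame_index: int) -> str:
    # Placeholder for actual command submission to GPU `gpu_id`.
    return f"frame {frame_index} rendered on GPU {gpu_id}"

def render_frames_afr(frame_count: int, gpu_count: int = 2):
    """Assign each frame to a GPU in round-robin fashion (AFR)."""
    for frame in range(frame_count):
        yield render_on_gpu(frame % gpu_count, frame)

for line in render_frames_afr(6):
    print(line)
```

With v-sync capping the frame rate anyway, the extra frame of latency that AFR-style schemes tend to add matters less, which is the point being made above.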
I can see a lot of Sony's blind Cell-pimping fan base being very disappointed when they announce a system that's actually comparable to something existing and testable. They don't seem to understand that an average-in-theory yet efficient system with an easy learning curve trumps a monstrous-in-theory, inefficient system with a stupidly steep learning curve that no one can hope to fully utilise. They just want to attach emotion to hardware and fantasise about godlike devs unlocking 100% of this generation's latest edition of inefficient hardware.
It's like entering a 5000hp quad-turbo, octal-rotary-powered tractor into F1 and saying "yes, it's hard to make use of that extra power in a tractor, and the engine isn't proven to be reliable at all - but we have the best driver!".
That was funny; real geeks want a Larrabee clone.