It is an SoC, but then again, so was Hollywood. Hollywood consisted of the GPU, an ARM core responsible for IO and security, and an audio DSP. The main CPU wasn't part of the SoC (and the ARM core couldn't access the GPU, but that's a different story).

Also, here's where you said it was an SoC:
http://forum.beyond3d.com/showpost.php?p=1571829&postcount=72
You're still saying that, correct?
Edit: I guess the point is, how is it "obvious you know someone" (BG's words) if all your actual hardware-specific posts are then played off as speculation? If they aren't all speculation, which of your hardware/info-specific posts are not, for reference?
Question for anyone that knows: is anyone fabbing large, relatively fast processors with large amounts of eDRAM on them other than IBM yet?

As large as IBM's? Unlikely - the 1Mb eDRAM macro making that possible is IBM's tech, which they've been perfecting for a few (POWER) generations now - POWER4 already relied on massive (off-die) eDRAM L3. I suppose IBM are the de-facto market leader ATM.
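To put that macro size in perspective, here's a quick back-of-envelope - the 32 MB pool size below is purely an assumed/rumoured figure used for illustration, not anything confirmed:

    # Back-of-envelope: how many 1 Mbit macros a large eDRAM pool would need.
    macro_bits = 1 * 1024 * 1024          # 1 Mbit per macro (the IBM figure above)
    pool_bits  = 32 * 1024 * 1024 * 8     # assumed 32 MB pool - illustrative only
    print(pool_bits // macro_bits)        # -> 256 macros

So even with dense 1 Mbit macros, a pool that size means laying down a couple of hundred macro instances.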
Also: do we know for definite that the GPU has direct access to the eDRAM?

We don't, but I'd expect the GPU's access to be at least as 'direct' as Xenos'. For reference, Yamato (Xenos' little cousin in the handheld space) has way more direct access to its SRAM/eDRAM tile buffer.
I certainly hope the Wii U GPU is at least 640 SPs, for your sake - otherwise you're going to look like a fool and, I suspect, post a whole lot less.
In that case every chip is an SOC, but that's not what's meant by the term. Are you saying the CPU and GPU are on one die?

No idea. I know a former AMD engineer is referring to the chip as "SoC", and there's once again an ARM core on the GPU die. That's all I know. Someone from IBM's Entertainment Processors division was working on a 32nm VLSI project, which might refer to the Wii U chip, or it might refer to a 360 shrink or Oban.
I guess the other thing I'd worry about is that even if the GPU is OK, maybe it'll be crippled by slow RAM, considering the leaks point at DDR3.
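For a rough sense of scale, here's a peak-bandwidth comparison; the Wii U bus width and data rate below are pure assumptions for illustration, not leaked specs:

    # Peak bandwidth in GB/s from bus width and transfer rate.
    def peak_gb_s(bus_bits, mtransfers_per_s):
        return bus_bits / 8 * mtransfers_per_s * 1e6 / 1e9

    print(peak_gb_s(64, 1600))    # ~12.8 GB/s - hypothetical 64-bit DDR3-1600
    print(peak_gb_s(128, 1400))   # ~22.4 GB/s - 360's 128-bit GDDR3 at 700 MHz

A 128-bit DDR3-1600 bus would land at ~25.6 GB/s instead, so how worried to be depends as much on bus width as on the DDR3 label itself.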
Maybe the "SoC" term is used because it has an applications processor, memory controller, embedded memory and GPU - even though the applications processor never gets to be used for applications per se.
Could that be a confirmation that it has a "Starlet 2"?
For the extra dedicated graphics+sound processor mentioned upstream, maybe a Cortex-A5?
It's too weak - a damn slow CPU won't give you 1.5x an Xbox 360.
Would you want to make a console with a four-core Atom? I guess not. It also comes with a useless GPU.
There was this rumor all the way back in 2010.
The PR I've seen from Marvell in the past about the processor makes it sound like something Nintendo would want.
Looking at Marvell's portfolio, I can't really find anything that "fits". Armada 500 and 600 are out of the question, since they all have 3D GPUs.
Armada 168 is only ARMv5 (though that's just like Starlet, so BC would be maintained), but it does go all the way to 1GHz.
It would be a replacement for the ARM9 in the Wii, not the PowerPC. Besides, main CPU is already confirmed to be coming from IBM.
You might want to google for "Wii Starlet" to see what we're talking about.
It also connects pretty well to AMD's last conference, where there were subtle hints that they're working on interconnecting their GPUs with ARM CPUs. That could be a byproduct of their current efforts with the Wii U's "SoC".
How efficient would it be to have a dedicated ARM CPU for running the graphics driver, memory controller and I/O?
Would a very fast interconnect between a slow ARM CPU and the GPU be better than a slower interconnect between a fast PowerPC and the GPU, for some tasks?
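To make that trade-off concrete, here's a toy model - every number in it is made up purely to show the shape of the question, none of it is a real Wii U figure:

    # Toy cost model: per-frame driver work plus data pushed over the CPU-GPU link.
    def frame_ms(cpu_ms, bytes_moved, link_gb_s, hop_us, hops):
        transfer_ms = bytes_moved / (link_gb_s * 1e9) * 1e3   # bulk transfer time
        latency_ms  = hops * hop_us / 1e3                     # per-submission round trips
        return cpu_ms + transfer_ms + latency_ms

    # Slow ARM core on a fast, low-latency link vs. fast PowerPC on a slower link.
    print(frame_ms(cpu_ms=6.0, bytes_moved=8e6, link_gb_s=10, hop_us=1, hops=500))  # ~7.3 ms
    print(frame_ms(cpu_ms=3.0, bytes_moved=8e6, link_gb_s=4,  hop_us=5, hops=500))  # ~7.5 ms

With those made-up numbers the two setups come out roughly even, which is the point: for chatty, latency-bound driver work the interconnect can matter as much as raw CPU speed.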