1 Million Tears or Why the Wii U is Weak *SPAWN*

So there isn't actually an official document that talks about the CPU microarchitecture. Oh well.
Man, if knowing that this CPU is three Broadway cores stuck together at 1.24GHz with a bit more cache tells us nothing about the micro-architecture of the chip, I hope you can explain what would tell us about it.

Of course Nintendo won't tell us "hey, look at what we've done", but this is like the Wii: we never had "official documents" for it either, yet we all know what can be found in there.
 
Man, if knowing that this CPU is three Broadway cores stuck together at 1.24GHz with a bit more cache tells us nothing about the micro-architecture of the chip, I hope you can explain what would tell us about it.
We only have one reference to it being exactly three Broadways stuck together, and that's an anonymous forum post if I understand you right. Even if the source is legitimate, it could be 'similar' to three Broadways. That's what we've heard elsewhere (the hacker Marcan's description of Espresso). The difference could be twice the floating-point power for all we know, or a whole extra vector engine. It could even be a whole different PPC design that still counts as 'similar' because it runs the same code; it depends on how technical the sources were being.
 
We only have one reference to it being exactly three Broadways stuck together, and that's an anonymous forum post if I understand you right. Even if the source is legitimate, it could be 'similar' to three Broadways. That's what we've heard elsewhere (the hacker Marcan's description of Espresso). The difference could be twice the floating-point power for all we know, or a whole extra vector engine. It could even be a whole different PPC design that still counts as 'similar' because it runs the same code; it depends on how technical the sources were being.
Well, yes, but on the other hand we have its die size, which is only 33mm^2... considering Broadway was 19mm^2, and that making a multi-core design adds some extra logic, I don't think there is any secret magic part in there that could bump the specs to decent levels.
 
At what node is that Broadway figure? Googlage says 19mm^2 at 90nm. Thus we get a CPU notably larger than 3x Broadway would be at 45nm. Some of that's cache and glue, but some must also be architectural differences improving efficiency. It's clocked 70% faster, times 3 cores, maybe 50% more efficient per clock (could be better or worse), so I reckon we're a good 7+ times faster than Wii on the CPU front. It'd have to be to manage the cross-platform games at all.
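Just as a sanity check on that arithmetic (a rough sketch only; the 70% clock bump, three cores and 50% per-clock gain are the guesses above, with Wii's 729MHz Broadway clock used for the ratio):

```python
# Back-of-the-envelope Wii -> Wii U CPU estimate using the guesses above.
# None of these are confirmed figures except the Wii's 729 MHz Broadway clock.
wii_clock_ghz = 0.729       # Broadway in the Wii
wiiu_clock_ghz = 1.24       # rumoured Espresso clock from this thread
cores = 3                   # rumoured core count
per_clock_gain = 1.5        # assumed ~50% better per-clock efficiency

clock_ratio = wiiu_clock_ghz / wii_clock_ghz     # ~1.7x
estimate = clock_ratio * cores * per_clock_gain  # ~7.7x
print(f"rough multiplier vs. Wii: {estimate:.1f}x")
```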
 
Well, yes, but on the other hand we have its die size, which is only 33mm^2... considering Broadway was 19mm^2, and that making a multi-core design adds some extra logic, I don't think there is any secret magic part in there that could bump the specs to decent levels.

It's two process nodes ahead of Broadway which gives it what, as much as 3x the density benefit?
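For context, a quick sketch of the theoretical ceiling (assuming 90nm to 45nm counts as two full shrinks and ideal linear scaling, which real chips never reach):

```python
# Ideal area scaling from 90 nm to 45 nm: linear features halve, so density
# quadruples in theory. Real designs fall short, hence ~3x as a practical guess.
old_node_nm, new_node_nm = 90, 45
ideal_density_gain = (old_node_nm / new_node_nm) ** 2   # 4x
print(f"ideal density gain: {ideal_density_gain:.0f}x")
```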
 
At what node is that Broadway figure? Googlage says 19mm^2 at 90nm. Thus we get a CPU notably larger than 3x Broadway would be at 45nm. Some of that's cache and glue, but some must also be architectural differences improving efficiency. It's clocked 70% faster, times 3 cores, maybe 50% more efficient per clock (could be better or worse), so I reckon we're a good 7+ times faster than Wii on the CPU front. It'd have to be to manage the cross-platform games at all.
Yes, Broadway was 19mm^2 at 90nm, but there are some things to point out here:
1. More CPU speed = more transistors.
2. CPU designs, and even more so old, outdated designs from 1997, can't be scaled down proportionally. PS3's Cell, a much more modern CPU approach, went from 235mm^2 to 115mm^2 when fabricated at 45nm. And this was not only a much more modern CPU, but also one from a company that invests BILLIONS in its hardware.
A reduction from 19mm^2 to 10mm^2 is the absolute maximum anyone can expect for the WiiU, and even that is overly optimistic.
10mm^2 x3 = 30mm^2. The extra 3mm^2 are spent on glue and other things that are meaningless from a technical perspective.
Furthermore, the outdated Broadway architecture wasn't even made with SMT in mind, so it's obvious that three cores like those won't scale performance in any way comparable to current multi-core processors.
I would say that even factoring in the increase in speed, 2x Broadway is what you get in real applications.

I mean, we all remember how Miyamoto and Iwata said that the Wii was "an Xbox 360 without HD, or even better" and that "its graphics will impress you all". Nintendo is a company that loves to bluff and to make things seem much more powerful on paper than they really are once games are thrown at them.
Compared to the more refined, balanced and perfected designs from Sony or Microsoft (both companies always give performance numbers below what is later achieved in their games), I can only say that I've been really generous in my analysis of the WiiU.

If the CPU was as weak as you suggest, it could definitely not run AC3 or ME3...
Those ports, which can run as low as 10fps in certain areas (ME3 in the hospital, for example), are more like remakes. The amount of optimization behind them must be crazy compared to the rushed versions that the PS360 got (those games went on sale much earlier than the WiiU versions, so they had less development time behind them). And hell, none of them is comparable to the PS360 versions in any way, running at much lower framerates and with a lot of environment detail or displayed objects missing.

Regards!
 
The amount of optimization behind them must be crazy compared to the rushed versions that the PS360 got

I'm sorry but that's not true. All of these games have 5-6 years' worth of an entire development team's experience behind them, the programmers are very familiar with both the systems' limitations and best practices, and none of the games were rushed. ME3 had a longer development cycle than ME2, and the AC3 engine has been in development since AC2's release.

They were making the most out of the current consoles and it is quite clear, too.

Also, the Wii U versions are usually the same on the pixel level. Some ports have a few details missing, like some trees in Darksiders 2, but altogether the games are so close that even an experienced gamer couldn't tell the difference most of the time. A blind test with DigitalFoundry's material (and without slow-downs or framerate graphs) would almost certainly fool anyone.
 
I'm sorry but that's not true. All of these games have 5-6 years' worth of an entire development team's experience behind them, the programmers are very familiar with both the systems' limitations and best practices, and none of the games were rushed. ME3 had a longer development cycle than ME2, and the AC3 engine has been in development since AC2's release.

They were making the most out of the current consoles and it is quite clear, too.

Also, the Wii U versions are usually the same on the pixel level. Some ports have a few details missing, like some trees in Darksiders 2, but altogether the games are so close that even an experienced gamer couldn't tell the difference most of the time. A blind test with DigitalFoundry's material (and without slow-downs or framerate graphs) would almost certainly fool anyone.
Sorry, but I obviously know what I'm talking about, and I'm telling the truth. Those games had several months if not years of pure optimization on top of the already optimized PS3 and 360 versions, and their performance is an absolute disaster.

When we factor in the tiny chips, the outdated 1997 CPU architecture, and possibly 2005 GPU technology that also has to execute code for 1999 hardware natively, things get pretty clear. This console has already given ALL it can give from a technical point of view.

Will we get "beautiful" games on it? Of course, the same way the SNES, PSX, PS2 or PS3 have games that are still beautiful by today's standards, but technically speaking, the WiiU has to be placed as an intermediate console between the Wii and the PS3 or 360.
 
Those games had several months if not years of pure optimization on top of the already optimized PS3 and 360 versions, and their performance is an absolute disaster.

I don't think you've quite got this optimisation thing.

Because the systems are different, not all pre-existing optimisations may transfer across, and there may be some new ones that have yet to be made. The memory difference, for example, could benefit storing assets in a less compressed state, and adopting soft vsync in the overscan area in some games could benefit frame rates considerably (as it does for the PS360).
 
Yes, Broadway was 19mm^2 at 90nm, but there are some things to point out here:
1. More CPU speed = more transistors.

Not necessarily. A shrink can enable improved frequencies by itself, as can process maturity. Plus you don't know that Wii was maxing out Broadway's frequency headroom... in fact we pretty much know it wasn't, because IBM sold the chip specified up to 1GHz.

2. CPU designs, and even more so old, outdated designs from 1997, can't be scaled down proportionally. PS3's Cell, a much more modern CPU approach, went from 235mm^2 to 115mm^2 when fabricated at 45nm. And this was not only a much more modern CPU, but also one from a company that invests BILLIONS in its hardware.

That makes no sense. A design's age has nothing to do with how well you can shrink it.

A reduction from 19mm^2 to 10mm^2 is the absolute maximum anyone can expect for the WiiU, and even that is overly optimistic.

Nah that's just you making something up. You're claiming less than 40% average density improvement at each full node shrink. That's absolutely meager, especially if you go back that far. You'd be hard pressed to find an example in the industry of such poor scaling.
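Putting numbers on it, using only the figures already cited in this thread and assuming 90nm -> 65nm -> 45nm counts as two full node shrinks:

```python
import math

# Per-node density gain implied by the claimed 19 mm^2 -> 10 mm^2 "maximum".
broadway_90nm = 19.0    # mm^2, Broadway at 90 nm
claimed_45nm = 10.0     # mm^2, the claimed best case at 45 nm
per_node = math.sqrt(broadway_90nm / claimed_45nm)
print(f"implied density gain per node: {per_node:.2f}x")     # ~1.38x

# For comparison, the Cell shrink cited above (235 -> 115 mm^2 over the same
# two nodes) already beats that supposed maximum on a per-node basis.
cell_per_node = math.sqrt(235.0 / 115.0)
print(f"Cell's per-node gain: {cell_per_node:.2f}x")          # ~1.43x
```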

Furthermore, the outdated Broadway architecture wasn't even made with SMT in mind, so it's obvious that three cores like those won't scale performance in any way comparable to current multi-core processors.

Obvious how?

You either support SMT or you don't, and obviously support had to be added. That doesn't mean it scales worse. Never mind why it would; how would it? How do you even plausibly justify this claim?

BTW, your older post had numerous errors too. For instance, no phone is using 22nm (some are 28nm or 32nm), and there's no reason whatsoever to assume the GPU isn't on TSMC 40nm - outside of business deals there's no real reason why manufacturing both CPU and GPU at IBM's 45nm would save money (and it's probably substantially less dense than TSMC's 40nm). On the other hand, AMD designed their GPUs for TSMC, so it would be a real design effort to port one elsewhere; anything else would be a big cost. There's a reason current PS3s are doing exactly this: IBM 45nm for Cell and TSMC 40nm for RSX.

Your stuff about DDR3 being higher latency than GDDR3 is also totally fabricated. We don't know what the main RAM latency is like on Wii U but on XBox 360 and PS3 it was VERY bad - over 150ns. Most of that isn't captured by the memory itself.
 
I don't think you've quite got this optimisation thing.

Because the systems are different, not all pre-existing optimisations may transfer across, and there may be some new ones that have yet to be made. The memory difference, for example, could benefit storing assets in a less compressed state, and adopting soft vsync in the overscan area in some games could benefit frame rates considerably (as it does for the PS360).
Of course, a different architecture means different optimizations are possible, so you're right when you say that, in the end, we can only speculate about this.
But going by the facts, we have a 1997 CPU clocked at a third of Xenon's speed or less, and with per-clock performance much, much, much lower than the Xbox 360 CPU's.
On the GPU side, we have at best a 2004-2005 part that has to be compatible with a 1999 architecture (so it can't be as efficient as a 2004-2005 part without that limitation), and which, despite being at 45nm, is only 156mm^2 even with a whole chunk of memory on it (32MB is a HUGE amount, to say the least).
A 50-60mm^2 GPU is the most optimistic scenario I can think of here, looking at how big the 10MB of eDRAM on the Xenos daughter die was.

We don't know what the main RAM latency is like on Wii U but on XBox 360 and PS3 it was VERY bad - over 150ns. Most of that isn't captured by the memory itself.
Where did you get that from? Even Wii had much lower latencies than that. Wii's main memory latency was 15ns if I recall correctly, so Xbox360 or PS3 at least must be below 10ns.
The WiiU, on the other hand, has to be over 100ns because it uses low-class memory (DDR3, not even GRAPHICS DDR3!!!!).

Regards!
 
Yep, still an example, though I believe IBM gave Sony a really good deal on fabbing which is why they never started Cell production in the Nagasaki plant.
 
Where did you get that from? Even Wii had much lower latencies than that. Wii's main memory latency was 15ns if I recall correctly, so Xbox360 or PS3 at least must be below 10ns.
The WiiU, on the other hand, has to be over 100ns because it uses low-class memory (DDR3, not even GRAPHICS DDR3!!!!).

Regards!

Several presentations show XBox 360 L2 cache misses to be over 500 cycles and posts from developers here have confirmed that it's similar for PS3: http://forum.beyond3d.com/showthread.php?t=20930

At 3.2GHz, 500 cycles is over 150ns. This isn't the latency of the DRAM in isolation, but of getting the read back to the CPU through the memory controller.
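For anyone checking the arithmetic, the conversion is just cycles divided by clock frequency:

```python
# Converting an L2-miss penalty in cycles to wall-clock time at 3.2 GHz.
cycles = 500
clock_hz = 3.2e9
latency_ns = cycles / clock_hz * 1e9
print(f"{cycles} cycles at 3.2 GHz = {latency_ns:.0f} ns")   # ~156 ns
```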

Why do you think graphics memory would be optimized for better latency? Graphics isn't latency sensitive. Saying that the RAM itself has to be over 100ns is just pulling numbers out of thin air.
 
Of course, a different architecture means different optimizations are possible, so you're right when you say that, in the end, we can only speculate about this.
But going by the facts...
Which you repeat despite us not knowing them as facts. The CPU is not a 1997 CPU. It may be very close in design, or may be fairly different, but it's not 1997 tech (unlike Wii which was a higher clocked 1997 CPU). And the GPU is not a 1999 architecture - there's nothing the TEV can do that a pixel shader can't. The custom BC hardware could simply be a TEV interpreter for the modern DX10 GPU for all we know.

Where did you get that from? Even Wii had much lower latencies than that. Wii's main memory latency was 15ns if I recall correctly, so Xbox360 or PS3 at least must be below 10ns.
Based on what? XB360/PS3 being more powerful ergo the numbers must be smaller? Wii uses a different RAM tech (1T-SRAM).

I don't mind people having opinions, but be clear your views are nothing like facts and you shouldn't be pushing them as such.
 