Wii U hardware discussion and investigation

Status
Not open for further replies.
I think we can all agree at this point (and probably would have for years) that Nintendo makes console hardware for Nintendo. Or at least this is clearly the case for Wii, DS, 3DS, and now Wii U. This has two obvious implications - the hardware gets the new peripherals that support their game ideas and the raw power is only as much as they feel they comfortably want to utilize for that generation (or maybe just a little more). But it seems like there could be more nuances to this.

Nintendo, as a first-party software developer, is very used to the Broadway core at this point, and many of its developers probably have little experience with the Xbox 360 and PS3 since their work is exclusive. Giving them three such cores, clocked twice as fast and with a better L2 cache, is going to look like a huge improvement, and more than enough to facilitate a generational improvement in their games.

It's totally reasonable that Nintendo would want to stick with the level of tech they're at for their own games, since AAA titles on other platforms cost far more to develop. So right now Nintendo is probably making much more profit per game sold, taking much less risk (although most of their titles are inherently pretty safe anyway), and selling almost as many copies. Given that, they could be dead last in the console race and probably still consider the console an overwhelming success if they can keep selling the same quantity of first-party titles. And I don't think that weaker graphics and a weaker CPU threaten that; if anything threatens it, it'd be Nintendo losing franchise appeal and stagnating. But that's a very different problem.

In this light the alleged design actually makes a ton of sense. It's enough to support the budget levels they want to spend on games. It's an architecture they're very familiar with and have a lot of tool support in place for, so it costs little to move to. You get cheaper BC (putting a separate Broadway on the die might not be a huge area expense, but it'd definitely cost some engineering time), and you get a good licensing deal from IBM, who seems about as eager to promote eDRAM on CPUs as to push newer CPUs.

I could actually see going for better hardware as a minor disadvantage in some ways, even ignoring costs. Letting third parties produce much better-looking games would put more pressure on Nintendo to increase their own development efforts. And keeping the generational boosts in check gives them more room to grow next time. Eventually everyone will hit a wall, so it makes sense that Nintendo would want to take things as slowly as possible. It could be that Nintendo has a backup plan to release the next console early if this one is in trouble and they think it'll help. It's possible they even had such a plan for Wii but stuck with it because they thought they could get away with it (and did).

Sure, Nintendo could probably have offered the same performance using even smaller, lower-power, more modern cores, but if more performance isn't even desirable, why bother? The power consumption difference is negligible with the GPU taking the bulk of it, and the die area is also pretty negligible and possibly already pad-limited (which could in fact be a reason why they stuck with a 64-bit DRAM interface).

But yeah, this is all a big downer for third parties and probably not good for the industry at large. But none of that necessarily matters much to Nintendo. What I find really mind-boggling is why they'd hamper the battery life on the controller. They seem really determined to screw up battery life these days, which is pretty disappointing given the history of their older handhelds. There is no way the dollar they saved on the battery is justifiable given the bad PR and bad reaction they'll get from users. People want devices they don't have to constantly remember to charge. No one wants to reserve a wall socket for their controller (or be plugged into the wall while playing, for that matter). All it'll take is the thing dying enough times to make some owners want to use it less, and while they're using it less they end up buying fewer games.
 
Talk about shooting yourself in the foot if the clock speed is true. For the love of god, Nintendo, boost the speed up some and add VMX units or something. Geez, no wonder the 4A guy was so disappointed. While I'm sure plenty of multiplatform titles never pushed the Cell or Xenon to the max, having a highly refined memory system (supposedly) can only do so much. I'm a bit shocked the multiplatform titles on the Wii U are even capable of running at all. Any indications of huge CPU slowdown in games like AC3 or Darksiders 2?

I don't know for sure, but I think it may be too late ;)
 
I would imagine that quite a lot of games are fill rate bound by this point in the life cycle.

That's why native resolutions have been dropping slowly, COD being a prime example of this.

COD and Halo raised resolution in their most recent installments ;)
 
Are some people here really thinking they can value a CPU with its clock speed?

I thought the people here have some knowledge about PC tech?
 
Are some people here really thinking they can value a CPU with its clock speed?

I thought the people here have some knowledge about PC tech?

That's Nintendo's choice. It is mainly a 3-core, single-threaded GC/Wii CPU with more cache and a higher frequency (and maybe out-of-order, who knows). How else, and based on what criteria, do you want people to value it? :rolleyes: With this CPU you are not going to get miracles at 1.24 GHz, especially since we know its silicon budget....
 
Are some people here really thinking they can value a CPU with its clock speed?

I thought the people here have some knowledge about PC tech?

No one is just straight comparing clock speed.

We are knowledgeable enough to know that a very old CPU design with limited OoOE is unlikely to be able to compete with a CPU with about 2x the transistor count (assuming similar density) running at nearly 3x the clock speed.

BTW: This has nothing to do with PC.
 
How else, and based on what criteria, do you want people to value it? :rolleyes: With this CPU you are not going to get miracles at 1.24 GHz

No one expects miracles from the Wii U CPU, but to say it is "slower" than the Xbox 360's Xenon based on clock speed alone is nonsense, if you look at the technical details of Xenon and the known specifications of the Wii U CPU.

The CPU is truly the weakest part of the Wii U system. But with the DSP and the fast connection to GPU and eDRAM it has some big advantages.

In fact, with optimised code even the CPU alone should show more power than Xenon in games, but not much more. Code ported from Xbox 360 to Wii U really needs optimizing; for now, most games are straight ports of Xbox 360 games and use timings that are good on an Xbox 360 but don't work efficiently on the Wii U.

The thing is: a straight port from Xbox 360 is obviously very easy. The Darksiders 2 developers mention that it took only five weeks and three people, and most developers just wanted to get the Wii U game finished before launch. Optimizing the graphics engines for the Wii U will take some time, but once that is done the Wii U will show its full potential.
 
No one expects miracles from the Wii U CPU, but to say it is "slower" than the Xbox 360's Xenon based on clock speed alone is nonsense, if you look at the technical details of Xenon and the known specifications of the Wii U CPU.

The CPU is truly the weakest part of the Wii U system. But with the DSP and the fast connection to GPU and eDRAM it has some big advantages.

In fact, with optimised code even the CPU alone should show more power than Xenon in games, but not much more. Code ported from Xbox 360 to Wii U really needs optimizing; for now, most games are straight ports of Xbox 360 games and use timings that are good on an Xbox 360 but don't work efficiently on the Wii U.

The thing is: a straight port from Xbox 360 is obviously very easy. The Darksiders 2 developers mention that it took only five weeks and three people, and most developers just wanted to get the Wii U game finished before launch. Optimizing the graphics engines for the Wii U will take some time, but once that is done the Wii U will show its full potential.

Nice selective quoting :rolleyes:
The parts you missed:
It is mainly a 3-core, single-threaded GC/Wii CPU with more cache and a higher frequency (and maybe out-of-order, who knows). How else, and based on what criteria, do you want people to value it? :rolleyes: With this CPU you are not going to get miracles at 1.24 GHz, especially since we know its silicon budget....
He did not say it was slower based just on clock speed!
 
No one expects miracles from the Wii U CPU, but to say it is "slower" than the Xbox 360's Xenon based on clock speed alone is nonsense, if you look at the technical details of Xenon and the known specifications of the Wii U CPU.

Nobody has said this. It's a classic strawman.

In fact, with optimised code even the CPU alone should show more power than Xenon in games, but not much more.

How on earth do you come to that conclusion? I mean, it may or may not be true, but you seem to have simply stuck your finger in the air and come up with a complete guess.
 
Some things to consider before saying anything about the CPU.


  • The cores are a very different architecture from the PPE-style cores. Among other things they have OoOE, a shorter pipeline, and a larger, different instruction set.
  • The OS runs on a separate ARM processor. AFAIR on Xenon the OS uses an entire thread on one of the cores.
  • Also the sound, while not a huge part of the processing, is handled by a DSP, freeing up even more resources.
  • The cache is much larger.

This means that the CPU is good at doing all of the non vector/SIMD stuff, leaving that to the GPU.

The CPU could well be as powerful as, or more powerful than, the equivalent in the PS3 or 360 at what it is designed to do.

It bears repeating because some people still don't seem to get it: clock speed alone is not a very useful way of comparing architectures that are not very similar.

Also, high clock rates are one of the chief sources of heat.
The lower the clock rate you can run while maintaining performance, the better:
you'll need less cooling hardware, have less noise, and have a system that lasts longer with fewer failures.
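The clock-vs-heat point above follows from the standard dynamic-power relation for CMOS logic, P ≈ C·V²·f. A lower clock often permits a lower supply voltage too, so power falls faster than linearly with frequency. A minimal sketch, with purely illustrative capacitance and voltage numbers (not real console figures):

```python
# Rough illustration of why lower clocks help so much with heat.
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f
# (switched capacitance, supply voltage squared, frequency).
# All numbers below are invented for illustration.

def dynamic_power(c, v, f):
    """Relative dynamic power: capacitance * voltage^2 * frequency."""
    return c * v ** 2 * f

base = dynamic_power(c=1.0, v=1.2, f=3.2e9)   # hypothetical 3.2 GHz part at 1.2 V
# Halving the clock often permits a lower supply voltage as well:
slow = dynamic_power(c=1.0, v=1.0, f=1.6e9)   # hypothetical 1.6 GHz part at 1.0 V

print(f"relative power of the slower part: {slow / base:.2f}")
```

With these made-up numbers, halving the clock and shaving the voltage cuts dynamic power to roughly a third, which is why a modest-clock design can get away with a small, quiet cooler.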
 
A2 is in-order with four-way multithreading, so it relies on a lot of parallelism. That would probably be bad for games.
A2 has excellent SIMD peak throughput (256 bit SIMD with FMA x 16 cores) and excellent memory/cache performance (4x SMT memory latency hiding + huge 32 MB L2 cache). Both of these factors are very important for game performance.

Yes, A2 relies a lot on parallelism (multithreading, SMT, SIMD and FMA are all forms of parallelism). But games often tend to have lots of parallelism. A2 has hardware transactional memory support built into its big 32 MB L2 cache, which makes it easier to write efficient multithreaded code.

I wouldn't compare A2 to Cell (someone made this comparison earlier), because A2 has unified memory access, automatic (very large) caches, and a single instruction set (the Cell SPUs had a separate instruction set). You had to manually move data between computational units on Cell (and ensure synchronization and data correctness). A2 does all that automatically, and it has transactional memory as well, making things even more efficient and easier (even compared to PC CPUs).
 
Some things to consider before saying anything about the CPU.


  • The cores are a very different architecture from the PPE-style cores. Among other things they have OoOE, a shorter pipeline, and a larger, different instruction set.
  • The OS runs on a separate ARM processor. AFAIR on Xenon the OS uses an entire thread on one of the cores.
  • Also the sound, while not a huge part of the processing, is handled by a DSP, freeing up even more resources.
  • The cache is much larger.

This means that the CPU is good at doing all of the non vector/SIMD stuff, leaving that to the GPU.

The CPU could well be as powerful as, or more powerful than, the equivalent in the PS3 or 360 at what it is designed to do.

It bears repeating because some people still don't seem to get it: clock speed alone is not a very useful way of comparing architectures that are not very similar.


Also, high clock rates are one of the chief sources of heat.
The lower the clock rate you can run while maintaining performance, the better:
you'll need less cooling hardware, have less noise, and have a system that lasts longer with fewer failures.


But we know the following:

1- The Wii U CPU bears a lot of similarity to a very old CPU design from the 1999-2000 era, the GC/Wii CPU. Whatever improvements IBM brought to the table, the Wii U CPU should still be considered partly an ancient CPU. It is not a completely new design, so we should not expect unrealistic levels of improvement in efficiency.

2- The Wii U CPU is 3 cores, single-threaded, at 1.24 GHz; the Xbox 360 CPU is 3 cores, dual-threaded, at 3.2 GHz. Unless the Wii U cores are 6 times more efficient than the Xbox 360 cores (very unlikely, bearing in mind my first point), it is safe to assume the Xbox 360 CPU is more powerful than the Wii U CPU. I would even say that if the Wii U CPU turns out in practice to be not a lot less powerful than the Xbox 360 CPU, that would still be a technological achievement for IBM (improving an ancient CPU design, and at only 1.24 GHz).
 
1- The Wii U CPU bears a lot of similarity to a very old CPU design from the 1999-2000 era, the GC/Wii CPU. Whatever improvements IBM brought to the table, the Wii U CPU should still be considered partly an ancient CPU. It is not a completely new design, so we should not expect unrealistic levels of improvement in efficiency.
Incorrect. It may be that the difference between the original performance of the 1999 CPU and twice that performance is a simple tweak to the execution units. Being 'based on' doesn't tell us anything about how efficient the final design is. We'd need low-level details to know that.

2- The Wii U CPU is 3 cores, single-threaded, at 1.24 GHz; the Xbox 360 CPU is 3 cores, dual-threaded, at 3.2 GHz. Unless the Wii U cores are 6 times more efficient than the Xbox 360 cores (very unlikely, bearing in mind my first point), it is safe to assume the Xbox 360 CPU is more powerful than the Wii U CPU. I would even say that if the Wii U CPU turns out in practice to be not a lot less powerful than the Xbox 360 CPU, that would still be a technological achievement for IBM (improving an ancient CPU design, and at only 1.24 GHz).
In total throughput, Xenon will be more powerful. But in terms of executing game code, a lot depends on the code devs are using. I believe Xenon will be more powerful because I believe devs are writing optimised code that makes efficient use of the processor, but it's wrong to compare the straight numbers. GHz*threads is not at all accurate!
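The "GHz*threads is not accurate" point can be made concrete with a toy model: effective throughput also depends on instructions retired per clock (IPC), which differs wildly between a narrow in-order core and a short-pipeline out-of-order core. The IPC values below are invented purely for illustration, not measured figures for either console:

```python
# Toy model of why GHz * cores * threads is a poor cross-architecture
# metric. IPC numbers are hypothetical, chosen only to show how the
# ranking can flip once per-clock efficiency is accounted for.

def naive_score(ghz, cores, threads):
    """The headline-number comparison: clock * cores * hardware threads."""
    return ghz * cores * threads

def throughput(ghz, cores, ipc_per_core):
    """Giga-instructions/sec, ignoring memory stalls and SIMD width."""
    return ghz * cores * ipc_per_core

xenon_naive = naive_score(3.2, 3, 2)    # 19.2 "points"
wiiu_naive  = naive_score(1.24, 3, 1)   # ~3.7, a ~5x gap on paper

# But if the in-order core averages 0.5 IPC on branchy game code while
# the out-of-order core averages 1.5 IPC (both figures made up):
xenon_tp = throughput(3.2, 3, 0.5)
wiiu_tp  = throughput(1.24, 3, 1.5)

print(f"naive ratio: {xenon_naive / wiiu_naive:.1f}x")
print(f"IPC-adjusted ratio: {xenon_tp / wiiu_tp:.2f}x")
```

The naive metric shows a five-fold gap, while the (entirely hypothetical) IPC-adjusted one shows rough parity; the real answer sits wherever the actual per-clock efficiencies sit, which is exactly the unknown in this thread.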
 
DoctorFouad:

There is a lot of "ancient" technology in the computing world, both on the hardware and software side (I'd say most real ideas in use today in the field, not fluff or variations, are 20 to 30 years old).
That a core is old doesn't necessarily mean it is worse (the PPE core is very likely also based on old designs). For example, the cores of the ARM and Intel ISAs date back to the mid-80s, with key ideas and mechanisms going back much further.
On the contrary, an old design often means the core is more tightly and frugally engineered, so you can fit a lot of auxiliary logic or extra cores on the same die and run it at higher speeds.

Xenon can't quite be equated with five 1.6 GHz CPUs; there is still an overhead to multithreading, and some tasks don't benefit from it (there is a reason people use separate cores and not just single CPUs with huge register sets).
Espresso could well be faster at branching and in cases where IPC counts (what some would call general-purpose code).
Xenon also handles a lot of SIMD, and the more advanced GPU in the Wii U would be better for all of that.
 
In total throughput, Xenon will be more powerful. But in terms of executing game code, a lot depends on the code devs are using. I believe Xenon will be more powerful because I believe devs are writing optimised code that makes efficient use of the processor, but it's wrong to compare the straight numbers. GHz*threads is not at all accurate!

Plus, isn't the 2nd thread on 360 cores sort of a "light" or hyperthreading-type thread? It isn't a true 100% full thread, just something to grab some unused execution resources, IIRC.

FWIW, the Wii U CPU is 3 cores at 1.7x the Wii CPU clock. So 3 x 1.7 = 5.1x the Wii CPU. Then with more cache maybe you can call it 6x. I got that from a GAF post, but I guess 6x isn't a horrible generational leap, though back in the PS360 day it wasn't a great one either.
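The back-of-envelope multiplier above checks out against the commonly cited clocks (Broadway in the Wii at 729 MHz, Espresso in the Wii U at roughly 1243 MHz across 3 cores), with the caveat that it counts clocks only and ignores cache and per-clock efficiency differences:

```python
# Sanity check of the "3 cores at 1.7x the Wii clock" figure, using
# the widely reported clock speeds. Clocks-only: ignores the larger
# cache and any IPC improvements.

wii_clock_mhz  = 729.0      # Broadway (Wii)
wiiu_clock_mhz = 1243.125   # Espresso (Wii U), as commonly reported
cores = 3

per_core = wiiu_clock_mhz / wii_clock_mhz   # per-core clock multiplier
total = per_core * cores                    # naive whole-CPU multiplier

print(f"per-core: {per_core:.2f}x, naive total: {total:.1f}x")
```

That lands at about 1.71x per core and 5.1x overall, matching the GAF figure; the jump from 5.1x to "call it 6x" is the hand-wave for the bigger cache.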
 
Plus, isn't the 2nd thread on 360 cores sort of a "light" or hyperthreading-type thread? It isn't a true 100% full thread, just something to grab some unused execution resources, IIRC.

In some ways, I think you could make a comparison between OoOE and this dual-threading?
 
Plus, isn't the 2nd thread on 360 cores sort of a "light" or hyperthreading-type thread? It isn't a true 100% full thread, just something to grab some unused execution resources, IIRC.
That's what threading is. If you have a thread that has its own parallel code stream and execution units, you've got a core. ;) Threading is about optimising use of execution units by running multiple streams of code through the processor. Depending on the number and type of execution units, threading can have no benefits at all.

For Wii U's CPU capability, we need to know firstly the peak throughput in terms of execution units, and then compare its efficiency vs. code running on Xenon.
 