1 Million Tears or Why the Wii U is Weak *SPAWN*

It just feels like some of the criticism is kind of overblown. I know the Wii U is really underwhelming in terms of processing power, but I think if developers had given the Wii U port the same effort/time as their PS3/360 version, then it should have been at least as good as the PS3/360 version, if not better.
 
It has half the bandwidth and a slower CPU, and who knows what the GPU is like. Based on the results we've seen so far, it's just not the case.
 
It just feels like some of the criticism is kind of overblown. I know the Wii U is really underwhelming in terms of processing power, but I think if developers had given the Wii U port the same effort/time as their PS3/360 version, then it should have been at least as good as the PS3/360 version, if not better.

Isn't matching 8 year old hardware setting the bar pretty low?

I believe the wiiu games will surpass ps360, but it's pretty sad that it's up for debate at this point.
 
Isn't matching 8 year old hardware setting the bar pretty low?

I believe the wiiu games will surpass ps360, but it's pretty sad that it's up for debate at this point.

Yeah, I thought Wii U would be able to do at least 360/PS3-level IQ at 1080p, but you know what happened. But I do think its games will look considerably better than what would be possible on 360/PS3, especially with its much more modern feature set. It will take a while, unfortunately.
 
Don't rely on a simple step up in shader model and compute proficiency (which is what you're talking about) to somehow make games look significantly better, if better at all. What it can do with those abilities will be severely limited by the hardware itself.

Just because something is supported doesn't mean it will be taken advantage of, or can be. The 360 had a tessellation unit inside it, but because it was a huge resource hog compared to the capabilities of the 360 itself, it was rarely if ever utilized in games.

You should already know all of this; it's common sense.
 
Exophase said:
Not necessarily. A shrink can enable improved frequencies by itself, as well as process maturity. Plus you don't know that Wii was maxing out Broadway's frequency headroom.. in fact we pretty much know it wasn't because IBM sold the chip specified up to 1GHz.
Of course Nintendo didn't max out Broadway's frequency... but considering that the Wii U is even more pathetic hardware-wise relative to its release date than the Wii ever was, it's obvious that Espresso is even less maxed out than Broadway was in 2006.

That makes no sense. A design's age has nothing to do with how well you can shrink it.
But the fact that both Sony and Microsoft invest billions of dollars to make their chips powerful and scalable, while Nintendo spends only thousands of dollars grabbing the cheapest and most inefficient chips available, surely does.

Nah that's just you making something up. You're claiming less than 40% average density improvement at each full node shrink. That's absolutely meager, especially if you go back that far. You'd be hard pressed to find an example in the industry of such poor scaling.
Yes, it was something I made up... but because I was being overly optimistic in Nintendo's favour!!!!
I mean, I compared the shrink to the one the PS3 had, and I didn't even factor in the fact that Sony invested a lot to achieve those results, while Nintendo only invested in that granny-gimmick that is its controller, and not in the chips.
If we try to be realistic, we could assume a shrink from 19 to 16 mm^2 and we would still be really generous in that regard.
Remember that I'm always assuming the best possible scenario for the WiiU, and the worst possible scenario for PS3-360, so no one can say that I'm an anti-nintendo fanboy or something like that.

Obvious how?

You either support SMT or you don't. Obviously support had to be added. That doesn't mean it scales worse. Not just why would it, but how would it? How do you even plausibly justify this claim?
Because it's not the same to add SMT support to a chip that wasn't designed with that feature in mind as it is to support SMT from the roots of the chip.
I mean, which would be better:
A recently cooked meal served at the optimal temperature, or a meal that has been sitting in the freezer for years and then warmed up in the microwave?

On the other hand, AMD designed their GPUs for TSMC, so it would be a real design effort to port them elsewhere, and anything else would be a big cost.
Never underestimate Nintendo when talking about hardware. A less powerful console means lower costs for game development, so it wouldn't be strange if they invested in making the console even less powerful than it would be if they just grabbed generic chips from IGN.

Nintendo can also customize its chips, and if less power means lower costs when developing, those modifications surely were made, and not in small quantities, I presume.

The fact that Epic has stated that it's impossible for the Wii U to run UE4, and the fact that we will never see a game on the Wii U that can compare to Halo 4 from a technical standpoint, demonstrate that this is what we have, and this is what the Wii U really is.

Your stuff about DDR3 being higher latency than GDDR3 is also totally fabricated.
We aren't comparing only DDR3 to GDDR3, but Nintendo's cheapest, low-customized DDR3 to Microsoft's and Sony's highest-quality GDDR3, optimized for insane bandwidth and low latency.
This is what we have, and I'm really sorry, because I consider myself a great Nintendo fan (although not a fanboy, and this is why I have to defend the truth above any company or personal tastes).

Exophase said:
At 3.2GHz 500 cycles is over 150ns. This isn't the latency of the DRAM in isolation but getting the read back to the CPU over the memory controller.
Ok, that makes a lot more sense to me. But taking into consideration that the Xbox 360 was a highly advanced design optimized to the limit, this 150ns figure is surely much lower than what Nintendo offers in its new machine. Of course, since the CPU is much slower, in cycles we would be looking at similar figures (let's say 750 to 1000 cycles), but that would be much more time in ns.
Thanks for the information; it has helped us a lot in determining how inefficient the Wii U is compared to both the PS3 and the X360!

Shifty Geezer said:
The CPU is not a 1997 CPU. It may be very close in design, or may be fairly different, but it's not 1997 tech (unlike Wii which was a higher clocked 1997 CPU).
Well, I rely on the rumours pointing at a 3-core overclocked Broadway as the Wii U CPU. If that same person nailed the cores and frequency well before they were leaked or confirmed, and considering that die sizes also clearly point in that direction, I think we can speculate treating this as fact.
It's like when the Wii was confirmed to be a 50% higher-clocked Gekko; it wasn't official, but that's what we had.

Based on what? XB360/PS3 being more powerful ergo the numbers must be smaller? Wii uses a different RAM tech (1T-SRAM).
Of course, Wii uses an older, much slower tech. So if those were the numbers for PS3 and 360, I can't even imagine how big they would be (we are speaking of latencies here) on the original Wii.
Of course, on WiiU they would be a bit better, but still far worse than on PS360.

We have the clocks (DDR3 1600), bus width (64 bit), and bandwidth (12.8 GB/s). And only half is available for games.
So the maximum bandwidth available for games is only 6.4GB/s then. Thanks, it's even worse than I expected.
The Wii U having the exact same memory bandwidth as the original Xbox.
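(For reference, here is a minimal sketch, in Python, of where those peak figures come from; the original Xbox numbers, DDR-400 on a 128-bit bus, are my own assumption for the comparison, and none of this says anything about how bandwidth is split between the OS and games.)

```python
# Minimal sketch: peak DRAM bandwidth = transfer rate * bus width in bytes.
# Assumptions: "DDR3 1600" means 1600 MT/s on the quoted 64-bit (8-byte) bus;
# the original Xbox figures (DDR-400, 128-bit) are my assumption for comparison.

def peak_bandwidth_gb_s(mega_transfers_per_s, bus_width_bits):
    """Peak theoretical bandwidth in GB/s."""
    return mega_transfers_per_s * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gb_s(1600, 64))  # 12.8 GB/s -- the Wii U main RAM figure quoted above
print(peak_bandwidth_gb_s(400, 128))  # 6.4 GB/s  -- original Xbox unified memory
```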
 
Of course, Wii uses an older, much slower tech.
You appear to believe older = slower. SRAM is far older than DDR5, so I guess it's far slower...

So the maximum bandwidth available for games is only 6.4GB/s then. Thanks, it's even worse than I expected.
This perfectly highlights how your biased thinking is affecting your understanding. Read what I wrote in context as a response to a specific question:
How much info is out there about 2GB of main memory on Wii U other than it is DDR3 based?
We have the clocks (DDR3 1600), bus width (64 bit), and bandwidth (12.8 GB/s). And only half is available for games.
Honestly, you are completely out of touch with technological discussion, like theizzzeee, who is just as clueless in the other direction, imagining Wii U could have a DX 11 APU and a CPU based on Power7/8 technology. Your rationale is just noise and you have a lot of learning to do.
 
Of course Nintendo didn't max out Broadway's frequency... but considering that the Wii U is even more pathetic hardware-wise relative to its release date than the Wii ever was, it's obvious that Espresso is even less maxed out than Broadway was in 2006.

It's not obvious at all what the maximum frequency is for this uarch under this implementation, power, etc.

You're also completely deflecting the entire point of the comment, which was countering your claim that Nintendo must be spending proportionately more die area to reach the higher clock speed. You have no evidence for this, now you're saying that it has plenty of extra headroom! Why use a bigger design only to end up with lots of headroom?

But the fact that both Sony and Microsoft invest billions of dollars to make their chips powerful and scalable, while Nintendo spends only thousands of dollars grabbing the cheapest and most inefficient chips available, surely does.

Your arguments sure tend to rely on made-up figures..

Yes, it was something I made up... but because I was being overly optimistic in Nintendo's favour!!!!
I mean, I compared the shrink to the one the PS3 had, and I didn't even factor in the fact that Sony invested a lot to achieve those results, while Nintendo only invested in that granny-gimmick that is its controller, and not in the chips.

What are you talking about? Maybe you don't understand that Wii U's CPU is two nodes ahead of Wii's, not one. Compare with other cases of two node leaps:

http://beyond3d.com/showthread.php?t=62651

PS2 shrunk 2.5x from shrinking plus integration.
PS3's Cell shrunk over 2.04x. There's no data on RSX.
Gekko to Broadway shrunk 2.26x.
XBox 360 got 2.13x from shrinking plus integration.

So I don't know how your 1.9x figure is highly optimistic for Nintendo.

If we try to be realistic, we could assume a shrink from 19 to 16 mm^2 and we would still be really generous in that regard.

That's just absurd.
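(As a rough sanity check on the shrink argument, here is a minimal sketch in Python under the textbook assumption that an ideal full-node shrink scales linear dimensions by about 0.7x, i.e. roughly half the area per node. The real two-node shrinks listed above landed in the 2.0-2.5x range, which is why a 1.9x assumption sits at the low end rather than being optimistic.)

```python
# Rough sanity check on two-node shrink factors.
# Assumption: an ideal full node scales linear dimensions ~0.7x, i.e. area ~0.5x.

ideal_area_scale_per_node = 0.7 ** 2                 # ~0.49x area per node
ideal_two_node_gain = 1 / ideal_area_scale_per_node ** 2
print(ideal_two_node_gain)                           # ~4.2x ideal density gain over two nodes

claimed_two_node_gain = 1.9                          # the figure being debated
observed_two_node_gains = [2.5, 2.04, 2.26, 2.13]    # PS2, Cell, Gekko->Broadway, 360 (from the list above)

print(claimed_two_node_gain ** 0.5)                  # ~1.38x per node, i.e. the "<40% per
                                                     # full node" improvement mentioned earlier
print([round(g ** 0.5, 2) for g in observed_two_node_gains])  # ~1.43-1.58x per node actually observed
```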

Because it's not the same to add SMT support to a chip that wasn't designed with that feature in mind as it is to support SMT from the roots of the chip.
I mean, which would be better:
A recently cooked meal served at the optimal temperature, or a meal that has been sitting in the freezer for years and then warmed up in the microwave?

I will consider listening to you if you can even show you understand what the technical requirements are for SMT, and what parts of the core it impacts. Your vague analogies fall very flat considering this. But I won't spoil it for you just yet.

At the very least give a plausible explanation for this "bad SMT" that has much worse performance scaling, that doesn't rely on poor assumptions for how the technology works.

Never underestimate Nintendo when talking about hardware. A less powerful console means lower costs for game development, so it wouldn't be strange if they invested in making the console even less powerful than it would be if they just grabbed generic chips from IGN.

You were making an argument on the assumption that both parts must have been developed by IBM. Trying to argue that Nintendo will choose the inferior path is very poor support. It's not enough to claim that they could have done it; you have to actually give some reason to believe that they did. Given the big engineering cost of migrating a modification of a design AMD made on TSMC's 40nm (or 55nm, depending) to IBM's 45nm, why would you assume Nintendo did this? For that matter, if Nintendo is spending only thousands of dollars by using already-designed products as much as possible, how does this even make a shred of sense?

Also, grabbing generic chips from IGN.. what on earth does that mean? IGN is a parts distributor now??

Nintendo can also customize its chips, and if less power means lower costs when developing, those modifications surely were made, and not in small quantities, I presume.

This has nothing whatsoever to do with what we're talking about. Moving from TSMC 40nm to the much less dense 45nm IBM node is more expensive to develop AND would most likely cost significantly more per chip. You think it's cheaper because they get other stuff made at IBM but you're basically just making that up too.

We aren't comparing only DDR3 to GDDR3, but Nintendo's cheapest, low-customized DDR3 to Microsoft's and Sony's highest-quality GDDR3, optimized for insane bandwidth and low latency.
This is what we have, and I'm really sorry, because I consider myself a great Nintendo fan (although not a fanboy, and this is why I have to defend the truth above any company or personal tastes).

Low customized? There's no such thing as customized DDR3. If you use a custom RAM it stops being DDR3. And I'm not aware of any console ever that has developed their own custom RAM (although a few have used parts that were much less prevalent in the industry like 1T-SRAM or RDRAM). And the DDR3 they're using is far from the lowest spec you can buy - as far as bandwidth is concerned. We don't know about latency and there's no point guessing.

I don't care if you love or hate Nintendo. I care about technical discussions, and the technical quality of your posts is just awful. You assume and make up so much about things you don't know about. That may fly in other forums you post on, but it isn't going to fly here. And you're doing yourself a great disservice if you aren't willing to listen and learn (and go out and do your own real research).

Ok, that makes a lot more sense to me. But taking into consideration that the Xbox 360 was a highly advanced design optimized to the limit, this 150ns figure is surely much lower than what Nintendo offers in its new machine. Of course, since the CPU is much slower, in cycles we would be looking at similar figures (let's say 750 to 1000 cycles), but that would be much more time in ns.
Thanks for the information; it has helped us a lot in determining how inefficient the Wii U is compared to both the PS3 and the X360!

No it hasn't at all!

You don't know anything about the latency of a cache miss on Wii U. Your guesses are based on NOTHING. Believing a cache miss burns as much as 800ns is insane. The numbers for XBox 360 and PS3 are some of the worst I've seen for a performance-optimized part (as opposed to a power-optimized one, for instance). There's no way Wii U is going to be several times worse. It could be about the same, or it could be better (even substantially better).
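(To make the arithmetic behind these numbers explicit, here is a minimal sketch in Python; the 3.2GHz clock is the figure quoted earlier for Xenon/Cell, while the ~1.24GHz Wii U CPU clock is the rumoured value and an assumption on my part. An 800ns miss only falls out if you assume the miss costs as many cycles on Wii U as it does on the 3.2GHz parts; the same latency in nanoseconds costs far fewer cycles at a lower clock.)

```python
# Minimal sketch of the cycles <-> nanoseconds conversion being argued about.
# Assumptions (mine, not confirmed specs): Xenon/Cell at 3.2 GHz as quoted,
# and the rumoured ~1.24 GHz clock for the Wii U CPU.

def cycles_to_ns(cycles, clock_ghz):
    """Latency in nanoseconds for a given number of cycles at a given clock."""
    return cycles / clock_ghz

def ns_to_cycles(ns, clock_ghz):
    """Latency in CPU cycles for a given number of nanoseconds at a given clock."""
    return ns * clock_ghz

print(cycles_to_ns(500, 3.2))      # ~156 ns -- the ">150ns" figure for 360/PS3
print(cycles_to_ns(1000, 1.24))    # ~806 ns -- where an "800ns" guess comes from
print(ns_to_cycles(156.25, 1.24))  # ~194 cycles -- the same 156 ns costs far
                                   # fewer cycles on a slower-clocked CPU
```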

Well, I rely on the rumours pointing at a 3-core overclocked Broadway as the Wii U CPU. If that same person nailed the cores and frequency well before they were leaked or confirmed, and considering that die sizes also clearly point in that direction, I think we can speculate treating this as fact.
It's like when the Wii was confirmed to be a 50% higher-clocked Gekko; it wasn't official, but that's what we had.

We don't know how he derived clock speed, but in many cases that's much less work than fully reverse engineering a processor. It'd take a lot of work to really confirm that it behaves just like Gekko with no improved (or modified, even for the worse) timings, no increased buffer sizes (like for OoO), no added execution units, and no new instructions. Since he only had the thing for a few days I really question how comprehensive his research could have been.

Of course, Wii uses an older, much slower tech. So if those were the numbers for PS3 and 360, I can't even imagine how big they would be (we are speaking of latencies here) on the original Wii.
Of course, on WiiU they would be a bit better, but still far worse than on PS360.

It's clear you have no idea what 1T-SRAM is vs the GDDR3 and XDR used in the XBox 360 and PS3 respectively. Hint: it's lower latency, not higher.

So the maximum bandwidth available for games is only 6.4GB/s then. Thanks, it's even worse than I expected.
The Wii U having the exact same memory bandwidth as the original Xbox.

He was saying half of the RAM (i.e., 1GB) is available for games, not half of the bandwidth. Use your brain. Why would only half of the system bandwidth be available? If we're talking about utilization efficiency, why would you rate Wii U at 50% and then rate the XBox at 100% to make the two comparable? It makes no sense.
 