Wii U hardware discussion and investigation *rename

I hate to bring up the elephant in the room, but after arguing with a dude that loves to spam articles in his posts, his reasoning for not buying into the 160:8:8 GPU conclusion did make me question it a bit. A couple of reasons: one being, are there actually enough GPU die photos out there to make an accurate comparison? Is there actually definitive reasoning behind the 160 SPU theory? I know some of the theory comes from the idea that a 320 SPU GPU should trounce the Xenos, but then again, shouldn't the 1.2 TFLOP GPU powering the Xbox One be performing much better using the same assumptions? Not to mention the fact that it's known that developers offloaded work to the Xenon CPU, something that may close the gap a bit, since the CPU powering the Wii U would not be able to accommodate that approach. I guess my real question is, did the specs ever get beyond theory? Was anyone ever able to actually prove anything?
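
For reference, the raw GFLOP figures being thrown around all come from the same simple peak-throughput formula: shader ALUs x 2 FLOPs per clock (multiply-add) x clock speed. A minimal sketch, assuming the commonly reported ~550 MHz Wii U GPU clock and the usual unofficial figures for Xenos and Xbox One (none of these are confirmed by the platform holders):

# Peak single-precision throughput: ALUs * 2 FLOPs/clock (multiply-add) * clock in GHz.
# All shader counts and clocks below are commonly reported/assumed figures, not official specs.
def peak_gflops(shader_alus, clock_ghz):
    return shader_alus * 2 * clock_ghz

print(peak_gflops(160, 0.55))  # ~176 GFLOPS  - Wii U, 160 SP theory
print(peak_gflops(320, 0.55))  # ~352 GFLOPS  - Wii U, 320 SP theory
print(peak_gflops(240, 0.50))  # ~240 GFLOPS  - Xenos (48 VLIW units x 5 ALUs @ 500 MHz)
print(peak_gflops(768, 0.80))  # ~1229 GFLOPS - Xbox One (the "1.2 TFLOP" figure; the retail clock ended up at 853 MHz, ~1.31 TFLOPS)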
 
We've never gotten an official Wii U spec (nor, to my knowledge, did the Wii ever get a definitive spec sheet from Nintendo either), but 160 SPs makes the most sense based on what is known from die shots et al. I'm not the person to discuss this with, as I can't differentiate between groups of transistors on a die shot and say "this is X, that is Y", but those arguing for 320 seem less convincing to me, as their arguments tend to devolve too quickly into statements of faith: 'but Nintendo wouldn't...', 'title X looks awesome and way better than title Y on 360/PS3', 'Look at X' (I actually really want this to launch so I can 'look at X' and not CG or badly compressed YT), etc., etc.
 
8 TUs on a 320 SP GPU would be a strange, lopsided design.

160 SPs and 8 TUs on the other hand was a configuration used by the Radeon 6450 (someone mentioned this earlier actually). The 8 ROPs I see mentioned could be a Nintendo upgrade from the more typical 4 for such a config.
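
Purely as a point of comparison, here is how the speculated Latte configuration would line up next to that retail part; the Latte column is just the 160:8:8 speculation under discussion, not anything official:

# Side-by-side of the speculated Latte config and the retail chip it most resembles.
# The Latte numbers are forum speculation (160:8:8), not a confirmed spec.
configs = {
    "Radeon HD 6450 (Caicos)": {"SPs": 160, "TMUs": 8, "ROPs": 4},
    "Latte (160:8:8 theory)":  {"SPs": 160, "TMUs": 8, "ROPs": 8},
}
for name, c in configs.items():
    print(f"{name}: {c['SPs']} SPs, {c['TMUs']} TMUs, {c['ROPs']} ROPs")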
 

X really does look like a great game. And a pretty great-looking one as well. I just wish Nintendo released a high-quality feed of the uncut gameplay segment like they did with the past two trailers. The web stream is so horribly compressed you can't even make out fingers or facial features; everything's a chunked-out blob.

But the first two trailers are a different story. On the Wii U eShop, you can find the highest-quality videos of the trailers available; while they aren't lossless, they are much, much cleaner than even the best YouTube version of the trailers, to the extent that you can see a lot of details that were compressed out on YouTube.

And it's all real-time to boot; if you keep an eye out, even in the hangar bay sequence, you can see the LOD engine in action.

This and Destiny are probably the biggest things I'm looking forward to right now.
 
The consensus seems to be that there are four block thingies; then I learnt that the Radeon that had 320 SPs used block thingies that had 40 SPs each, so the Wii U probably has four 40 SP block thingies, i.e. 160 SPs. I left it at that.
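
To spell that arithmetic out: the whole 160-vs-320 disagreement reduces to how many ALUs each of those four visible SIMD blocks holds. A quick sketch of both readings (the block count comes from the die-shot discussion, not from any measurement of mine):

# Four SIMD blocks are visible on the die; the dispute is over how wide each block is.
blocks = 4
print(blocks * 40)  # 160 SPs if each block is a 40-ALU SIMD (8 VLIW5 units, as in RV730-class parts)
print(blocks * 80)  # 320 SPs if each block is an 80-ALU SIMD (16 VLIW5 units, as in larger R700/Evergreen parts)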

I also remember the Radeon 5450 (not the same gen, but just one up) had rather high performance with only 80 SPs; you can play some ports of PS360 games on it. Yeah, no über details and res and rate, but the consoles were not that great at that either; they worked thanks to the guaranteed fixed hardware and targeted games. I remember that circa 2007-2008 people would compare the consoles to an 8600GT.
 
It's tough to know what to believe, honestly. There are people here and at GAF that have seemed pretty convinced that it's 160 SPUs based on looking at the die photo and comparing the density to other GPU die photos, but then you have Jim Morrison from Chipworks claiming you shouldn't try to compare the chip to any off-the-shelf parts. Most of the parts have yet to even be identified; I don't remember anyone conclusively identifying the ROPs or even the TMUs as an absolute. I was pretty confident in the 176 GFLOP theory presented, but there's the argument that if a more modern 176 GFLOP GPU can outclass the older 240 GFLOP Xenos, then shouldn't a very modern 1.2 TFLOP Xbox One GPU absolutely murder the Xenos? It should be even more efficient than the Wii U's GPU, and six times the performance. It's been treated as if a 352 GFLOP GPU should be able to do what the 1.2 TFLOP GPU powering the Xbox One isn't able to do.
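
For what it's worth, the ratios behind that argument work out roughly as follows (using the same assumed, unofficial figures as above, so treat the numbers as illustrative only):

# Rough peak-shader-throughput ratios; every input here is an assumed or rumoured figure.
xenos, wiiu_160, wiiu_320, xbo = 240, 176, 352, 1229  # GFLOPS
print(round(xbo / xenos, 1))      # ~5.1x - Xbox One vs Xenos
print(round(xbo / wiiu_160, 1))   # ~7.0x - Xbox One vs Wii U (160 SP theory)
print(round(xbo / wiiu_320, 1))   # ~3.5x - Xbox One vs Wii U (320 SP theory)
print(round(wiiu_320 / xenos, 1)) # ~1.5x - Wii U (320 SP theory) vs Xenos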
 
X really does look like a great game. And a pretty great-looking one as well.

I come off as dismissive of X in my post, and that's completely unintentional; it really does look great in what limited footage I have seen of it.

On the Wii U eShop, you can find the highest-quality videos of the trailers available; while they aren't lossless, they are much, much cleaner than even the best YouTube version of the trailers, to the extent that you can see a lot of details that were compressed out on YouTube.

Gah! Why hide your best footage on the Wii U eShop? How are you going to convince people (me) to buy your console that way? Dammit, Nintendo are as exasperating to me this gen as Sony were last go around (day-one PS3 buyer that time).

Drifting badly OT here, but the little footage I have seen of X does suggest they are going to wring every last drop from the Wii U, so I look forward to seeing that, but I doubt it will settle any arguments about h/w either way.
 

Well, Wii U is deficient in almost every 360 port it runs that I know of. So by your same argument, if it's really 352 GFLOPs, shouldn't the Wii U be "murdering" the 360 in straight ports?

If you ran straight 360/PS3 ports on XBO, it would murder them. Instead it's running "next gen" versions, and sometimes struggling, but that's not the same thing.

Beyond that, you're just into diminishing returns to some extent. Nothing on either PS4 or XBO could be argued to "murder" prior gen. Yet, at least.
 
Whatever power level Wii U needed to be a viable next gen port target, it fell well short of, and that's what matters.
Not in this thread. In this thread, we don't care whether Wii U is a business success or not. It's solely about what the hardware is.
 

I think that is pretty short-sighted for a few reasons, one of which Jason Gregory from Naughty Dog went over in his explanation of optimizing software. Developers can do tons of hardware-specific optimizations, and these optimizations for games like COD and Assassin's Creed didn't come in just one development cycle, but over the entire generation. I think developers and publishers had good intentions for launch titles, but Nintendo's crap SDKs held them back, and after that the business side of things made them reluctant to invest resources into a platform where their launch titles sold poorly and the hardware sales themselves were pretty lackluster.

After that, I feel it's safe to assume that the assistance from the Cell and Xenon would be significant enough to quickly close the gap. The Espresso has no real answer for what the VMX unit on the Xenon could do, and certainly not what the SPEs could do on the Cell.
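
To give a very rough feel for that SIMD gap, here is a peak-throughput sketch using the commonly reported clocks and vector widths (paired singles on Espresso, VMX128 on Xenon, the SPEs on Cell); these are theoretical peaks built on unofficial figures, not anything the platform holders have published in this form:

# Theoretical peak single-precision SIMD throughput, in GFLOPS:
# cores * clock (GHz) * SIMD width * 2 FLOPs per lane (multiply-add).
# Clocks and widths are the commonly reported figures, not official spec sheets.
def simd_gflops(cores, clock_ghz, simd_width):
    return cores * clock_ghz * simd_width * 2

print(simd_gflops(3, 1.24, 2))  # ~15 GFLOPS  - Espresso (3 cores, paired singles, 2-wide)
print(simd_gflops(3, 3.20, 4))  # ~77 GFLOPS  - Xenon (3 cores, VMX128, 4-wide)
print(simd_gflops(6, 3.20, 4))  # ~154 GFLOPS - Cell SPEs (6 usable by games, 4-wide)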

I still think observing the X1 does have some relevance. Do I think a 352 GFLOP GPU coupled with a CPU that has terrible SIMD performance could struggle to run code designed for a 240 GFLOP GPU coupled with a CPU with very good SIMD performance? Yes I do. The difference is the X1 has enough overhead. Even if we assume the Wii U is 352 GFLOPs, that still makes the X1 roughly four times stronger in shader performance, not to mention it has double the ROPs and TMUs that run at a much higher clock speed. It seems, after reading through tons of posts here yesterday, that one of the main reasons for the 160 SPU theory is poor-performing ports, but based on developers' comments, it seems more likely that the CPU is the culprit, coupled with dev kits and a lack of business incentive to really dig deep into the hardware.
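
As a concrete illustration of how the higher clock compounds the unit-count advantage on the ROP side (Wii U figures per the 160:8:8 theory and the reported ~550 MHz clock, Xbox One figures as widely reported; none of this is official):

# Peak pixel fillrate = ROPs * clock. All inputs are assumed/reported figures, not official.
def gpixels_per_sec(rops, clock_ghz):
    return rops * clock_ghz

print(gpixels_per_sec(8, 0.55))   # ~4.4 Gpixels/s  - Wii U (8 ROPs @ 550 MHz, per the 160:8:8 theory)
print(gpixels_per_sec(16, 0.853)) # ~13.6 Gpixels/s - Xbox One (16 ROPs @ 853 MHz)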
 

Pretty much nailed it. No competitive CPU SIMD capabilities in the Wii U has relegated it to sloppy-seconds past-gen ports, Nintendo exclusives, and indie games. Also, I tend to believe in the 320 SP theory, and even with GPGPU, the extra "oomph" it might provide the Wii U compared to the 360's SIMD wouldn't be enough to mean anything or be worth it to developers who know their ports won't sell that well on the Wii U.
 
You can completely forget GPGPU on the wuu, regardless of whether it has 320 SPs or not; the ancient VLIW5 architecture in the wuu GPU is simply not suited for it. Terrible efficiency, terrible latency. That hardware gen simply wasn't geared for those kinds of jobs; most GPGPU tasks run terribly on the 5xxx generation of hardware, and trying to interleave GPGPU and traditional rendering would likely completely kill performance, considering all caches and internal buffers would flush repeatedly every frame. Considering Nintendo has cheaped out at every chance they've had with the wuu, I'm betting the chip has no facilities whatsoever for rapid thread switching. It would have been (much) easier for them to stick a SIMD CPU core in there rather than try to re-jig a GPU which wasn't designed for GPGPU to do just that.

If they seriously wanted GPGPU they would have gone with a GCN-based GPU; it would have been feasible at the time. Also, even GCN is not at all suited to all types of general computing found in computer games. GPGPU is NOT a crutch for not having a decent SIMD instruction set in your CPU. Stupid Nintendo!
 
I don't doubt that the GPGPU functionality would be very limited, but it's still theoretically possible.

My main concern was how the 160 SPU theory came to be. I accepted it because it seemed like people in the know were accepting it. Outside of third-party ports not performing up to par, no concrete evidence was ever discovered to say what it is for sure. The idea that a GPU with 320 SPUs should murder the 360/PS3 is pretty presumptuous. It just seems like, if that is true, then why is the X1 not easily doing COD Ghosts in 1080p? I think it's safe to say the X1 build was given more attention than the Wii U build, so let's not pretend it's because it's a launch game for X1.
 
My main concern was how the 160 SPU theory came to be.

It primarily comes from the number of physical register banks. It's been pointed out so many times in this thread. It's even been pointed out accompanied by annotated die shots of various VLIW5 GPUs, across a range of process nodes from different fabs.

Secondary is the issue of die area, fab, and who did the layout. Not enough die area, not TSMC, not AMD.

Relative performance is someway down the list.

Outside of third party ports not performing up to par, no concrete evidence was ever discovered to say what it is for sure.

Third party port performance is far from the most compelling piece of evidence, although it is entirely in line with what a 160 shader part would offer.

You have been posting in this thread while these points have been raised time and time and time again, but always return to claim that "port performance" is the only evidence.

It is not. It is not even the main or most reliable indicator.

The idea that a GPU with 320 SPUs should murder the 360/PS3 is pretty presumptuous.

There is no presuming necessary, as you can see, as a matter of fact, 320 shader VLIW5 GPUs murdering the PS360 in multiplatform games on the PC, despite all kinds of bandwidth and driver constraints, and with no low-level optimisation.
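
As a worked example of why that isn't presuming much: the desktop 320 SP VLIW5 parts in question run at much higher clocks than anything plausible for Latte, so their raw throughput comfortably clears Xenos even before any efficiency improvements (HD 4670 figures from its public spec; Xenos and Latte figures assumed as before):

# Peak shader throughput of a retail 320 SP VLIW5 card vs the console figures used earlier.
print(320 * 2 * 0.75)  # 480 GFLOPS - Radeon HD 4670 (320 SPs @ 750 MHz)
print(240 * 2 * 0.50)  # 240 GFLOPS - Xenos
print(320 * 2 * 0.55)  # 352 GFLOPS - hypothetical 320 SP Latte @ ~550 MHz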

It just seems like, if that is true, then why is the X1 not easily doing COD Ghosts in 1080p? I think it's safe to say the X1 build was given more attention than the Wii U build, so let's not pretend it's because it's a launch game for X1.

This is a red herring. 'Why are there still monkeys?', etc.
 

Also, didn't bgassassin say that official documents implied 160 SP? Or did I dream that happening?

Personally, I think third-party performance does say something... if the Wii U had so much more ALU crunching power than the Xbox 360, wouldn't we expect to see at least something use noticeably more advanced shading? At least some of them should have been able to leverage more from their PC version. We do see some titles capitalize on the extra memory.
 
@Function

Sounds good. I had read about the registers, but I never came across the post where they were able to make a good comparison. I'm cool with the 160 SPU part; I just didn't like poor ports being a primary factor. Devs have said it's more powerful, so a game that underperforms isn't because of the GPU, but other factors. I wish devs could talk more so we could better understand where the problems come from.
 
...I wish devs could talk more so we could better understand where the problems come from.

Yup, Nintendo NDAs must come with a 1000-year serfdom clause or some such; it seems we knew everything about XB1 and PS4 months before launch, but we're still guessing on the Wii U (even if I regard the guesses as 99% sure). I guess when you stand aside from the graphics wars, not addressing it at all is a better stance than 'justifying' your hardware choices. Would still love an in-depth DF-style interview with the h/w architects at Nintendo, though.
 

I don't think hardware superiority has been a system selling point for Nintendo since the N64; the GC was competitive but definitely wasn't superior to the Xbox hardware. Console wars should be renamed forum wars; it's pretty much the only place the console wars really exist.

I think a lot of the concern for some Wii U owners when it comes to the specs is how much it will limit Nintendo's own IPs. For example, many people wondered if the Zelda tech demo is possible on the Wii U. I think this shows how much easier it is to dissect hardware than it is to dissect software. To many people's surprise, Link's character model in the demo seems to be taken right out of Zelda TP. If you look at pics side by side, they seem to be nearly identical, outside of better textures and lighting. For the majority of gamers, it's hard to decipher the difference between something that looks very nice and something that is technically advanced. Really good developers have a knack for blurring the line between a technical masterpiece and an artistic masterpiece. Zelda WW HD is an example of this; it's hardly a technical masterpiece by today's standards, but it's still very visually pleasing just the same.

What is somewhat interesting is that if these ports to the Wii U have been fillrate bound, why not just scale back the rendering resolution a bit, or scale back shader complexity? Take AC3 and AC4 for example: if framerate was limited by fillrate, why not just scale back to 600p, a resolution that was used on the 360 and PS3 quite often throughout the previous generation? This isn't trying to discredit the 160 SPU theory, but everything points elsewhere, from developer comments to the fact that it wouldn't be that hard to drop resolution to 600p and get the desired framerate if the GPU were the limiting factor, whereas CPU sacrifices and optimizations are much trickier to implement. Not to mention the memory hierarchy leans heavily on the eDRAM, something that doesn't seem to tie in flawlessly; developer comments, and not just for the Wii U, support that. Software performance seems far more likely to be limited by something other than the GPU.
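
To put numbers on the resolution argument: dropping from 720p to one of the sub-HD resolutions common last generation cuts the pixel count, and with it the per-frame fill and shading cost, by roughly a third (1024x600 is just one example; actual sub-HD targets varied per title):

# Pixels per frame; fill and pixel-shading cost scale roughly linearly with this.
full_720p = 1280 * 720  # 921,600 pixels
sub_hd    = 1024 * 600  # 614,400 pixels - one common sub-HD target last gen
print(round(sub_hd / full_720p, 2))  # ~0.67, i.e. roughly a third fewer pixels to shade and fill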
 