Wii U hardware discussion and investigation

Nintendo's philosophy is only good when it works, and in home consoles it only worked for a few years during the Wii era. Nintendo should have taken note and adjusted; they chose not to, and now they're paying the price.

Flexibility and adaptability; it's the key to success in most any modern business. If you just keep on like you always have then you're driving yourself right down the road to your own extinction.
 
Certainly you're a tough cookie, and apparently I have to be really careful with my wording so it doesn't get misinterpreted (C++ 101 all over again). At this point I'll just lay this thread to rest and wait for new findings. At the very least I'll research the die size comparisons and see the truth or falsehood of your statement.

Until it's concretely proven otherwise (and since I can't examine the physical GPU die myself, there's admittedly a hole in my own argument), I'll stick with 320 SPUs.

Because I'm feeling like much less of a nob end after a good night's sleep, here's a link to one of the most information-dense double uber posts in this thread (it's an uber post containing a quote of an uber post).

http://forum.beyond3d.com/showpost.php?p=1703033&postcount=4524

It highlights that the Wii U SIMD blocks contain only enough register banks for 40 shaders, and so to contain 80 Nintendo would have had to do low level work on AMD's designs (lol) for absolutely no damn reason at all (lool?) and then have Renesas beat AMD for density on a less dense process and do it without the aid of AMD's hugely valuable high density layout tools (that they've developed over many generations of making their processors with TSMC).

In other words ... it's 160. :)
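For a rough sense of what's at stake between the two counts, here's a back-of-the-envelope sketch. The ~550 MHz clock and the 2 FLOPs per SP per clock are assumptions for illustration, not figures taken from the die analysis:

```python
# Back-of-the-envelope theoretical shader throughput for the two candidate
# SP counts. Clock and FLOPs-per-SP are assumed, not measured from the die.
CLOCK_GHZ = 0.55
FLOPS_PER_SP_PER_CLOCK = 2  # one MADD per lane per cycle

for sp_count in (160, 320):
    gflops = sp_count * FLOPS_PER_SP_PER_CLOCK * CLOCK_GHZ
    print(f"{sp_count} SPs -> ~{gflops:.0f} GFLOPS theoretical")

# 160 SPs -> ~176 GFLOPS theoretical
# 320 SPs -> ~352 GFLOPS theoretical
# For context, Xenos (Xbox 360) is commonly cited at ~240 GFLOPS theoretical.
```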

If Nintendo had decided to skip BC and wave goodbye to their long time fab partner at Renesas, AMD could have knocked out a belting little chip for them at TSMC.
 
If Wii U was beyond last gen's consoles to any significant degree at all it wouldn't matter that ports were running gimpy code, it'd run faster regardless.

It wouldn't run faster regardless of how much stronger hardware A is than hardware B, because a game with gimpy code doesn't utilize the hardware properly. By that logic the Xbox 360 must have been much faster than the PlayStation 3 because Xbox 360 ports ran poorly on it, and the Wii must have been weaker than the PlayStation 2 because PS2 ports ran poorly on it, even though the Wii basically had the computational performance of three PlayStation 2s combined. I suppose you also believe an Xbox 360 game should run on Xbox One at a higher resolution and/or framerate without any optimization at all, since the Xbox One is 7-8 times more powerful, right? No. Dark Souls on PC ran poorly because of gimpy/lackluster game code, even on high-end rigs that were leaps and bounds more powerful than a PlayStation 3 or Xbox 360.

(it's six-seven years more recent - ON PAPER, which according to Moore should mean ~400% greater hardware power, all other things being equal.) However, Wii U is simply a crippled piece of hardware whichever way you look at it, and that's why ports stutter on it.

Let's be logical for a moment... Just because Moore's "law" exists does not mean everyone has to follow it, and it is not as relevant as it used to be, since things have changed as we hit more and more walls in more and more fields of industry and science.

How can a console's hardware be "crippled" when nothing was removed from it in the first place? Saying that it is crippled goes against logic, rationality and reasoning, because it is an utter contradiction. The only consoles that were "crippled" were the PlayStation 3, when Sony removed backward compatibility with PlayStation 2 software, and the Wii Mini, which Nintendo released with no ethernet/wifi and no backward compatibility with Gamecube software. So how can the Wii U's hardware be "crippled" when nothing has been removed from the original/standard hardware specifications of the Wii U since its release?

@underlined That is utter ignorance. You are just making excuses for developers that didn't bother to optimize their Wii U ports for its hardware, and you don't consider the fact that the Wii U's architecture is considerably different from the Xbox 360's and PlayStation 3's. It is like saying that ARM code would run well on PowerPC because both are RISC designs; the simple answer is NO, because the differences between the two are extreme and the RISC design is about their only similarity. That is it, nothing more and nothing less...


I don't understand why we're still discussing it, it's obvious. Everything's been said already, and yet some people feel the need to ride to the console's rescue, making excuses for it.

You don't understand because you're being narrow-minded. Just because some people hold a slightly positive opinion with a valid foundation, it becomes "riding to the console's rescue" and "making excuses for it", yet you are the one making excuses for third-party developers that shipped lackluster ports on Wii U, blaming the Wii U's hardware rather than the developers that didn't put enough effort into their ports.

Was hardware the reason Sniper Elite V2 didn't have co-op/online multiplayer? No. It was a rushed port, while other games on Wii U did have online multiplayer and/or co-op. Splinter Cell: Blacklist had online co-op but not local co-op because the porting team didn't have enough time to implement it; is that a fault of the Wii U's hardware? No.

These developers/publishers may set strict budgets for their port teams, so the teams are limited in what they can and cannot do, and they are probably understaffed for such tasks on top of that, having to prioritize what they can get done within a certain budget.


Would you stop making these fannish comparisons of games with wildly different art and settings to try and prove one console's superiority over another? Thank you. You see what you want to see - we get it. Subjective preference and all; having opinions is OK. However, technical thread; not subjective thread.

Since when does art have anything to do with visual fidelity? And am I wrong to compare games that are in the same or a comparable category/genre, even within the same gameplay setting? You claim they're different when in fact they're not.

LittleBigPlanet Karting is a racing game (kart racer) just like Mario Kart 8, White Knight Chronicles is a Monster Hunter-style game just like Monolith Soft's X apparently is, and PlayStation All-Stars Battle Royale is a fighter/beat-'em-up with (mostly) console-exclusive characters just like Super Smash Bros. for Wii U. They are all comparable, and so is Bayonetta with Bayonetta 2, where we see an increase in visual fidelity. You will again say "different art and setting", yet these games are comparable within their genres, which is primarily how I compare them.

You are just trying to deny it because you don't like what you see, yet you call me out for making "fannish comparisons" as if I were a fanboy, even though these comparisons are actually valid. You also say that I see what I want to see, yet I have been technical and not subjective as you claim, while you were doing exactly what you were accusing me of... Accusing me of being a fanboy is an unfounded accusation and a direct provocation!

Since when is being technical all of a sudden being subjective? What is with you and your double standards? Am I going against your personal interests, or do you have some grudge against Nintendo? The Wii U? Why all the negative bias against them?

I am asking legitimate questions!
 
Because I'm feeling like much less of a nob end after a good night's sleep, here's a link to one of the most information-dense double uber posts in this thread (it's an uber post containing a quote of an uber post).

http://forum.beyond3d.com/showpost.php?p=1703033&postcount=4524

It highlights that the Wii U SIMD blocks contain only enough register banks for 40 shaders, and so to contain 80 Nintendo would have had to do low level work on AMD's designs (lol) for absolutely no damn reason at all (lool?) and then have Renesas beat AMD for density on a less dense process and do it without the aid of AMD's hugely valuable high density layout tools (that they've developed over many generations of making their processors with TSMC).

In other words ... it's 160. :)

If Nintendo had decided to skip BC and wave goodbye to their long time fab partner at Renesas, AMD could have knocked out a belting little chip for them at TSMC.

No, it's not... Chipworks said that the Wii U GPU is highly customized, and by comparing the Wii U GPU die shot to a Radeon HD 4870 die shot we can see a drastic difference in design and layout. We know that the Wii U GPU is produced by Renesas, yet that does not mean AMD had no involvement in the development of the Wii U's GPU at all, nor do we know for sure what Renesas is capable of.

You and others are ignoring the fact that the Wii U's GPU is highly customized, as we can see.

Also, if it's 160 SPUs, then why are there 256 SRAM banks/registers? Why waste all of that silicon for no gain at all? The Wii U's GPU could be 320 SPUs, yet Nintendo may have chosen a 1 SPU to 1 SRAM bank/register ratio, which would mean 256 SPUs.
 
No, it's not... Chipworks said that the Wii U GPU is highly customized, and by comparing the Wii U GPU die shot to a Radeon HD 4870 die shot we can see a drastic difference in design and layout.

It's a custom chip, but that doesn't mean that Nintendo did low level customisation of the IP they licensed from AMD. Of course the layout is different. It's not a Radeon 4870, and it wasn't laid out by AMD.

We know that the Wii U GPU is produced by Renesas, yet that does not mean AMD had no involvement in the development of the Wii U's GPU at all, nor do we know for sure what Renesas is capable of.

We know that Nintendo licensed IP from AMD and that's about it. I'm willing to take the bet that Renesas can't do a better job than AMD at realising their own IP using their own tools at their "home" fab of TSMC on a more dense process.

You and others are ignoring the fact that the Wii U's GPU is highly customized, as we can see.

You can't "see" how customised AMD's IP is in the Wii U.

Also, if it's 160 SPUs, then why are there 256 SRAM banks/registers? Why waste all of that silicon for no gain at all? The Wii U's GPU could be 320 SPUs, yet Nintendo may have chosen a 1 SPU to 1 SRAM bank/register ratio, which would mean 256 SPUs.

Wat.
 
Why wouldn't Renesas match AMD?

AMD is not using 40nm anymore, so I assume they would gladly have sold their tools and technology for the 40nm node/process, since it's of no use to them now. Nintendo was building a console, so I wouldn't be surprised if they bought the IP/license for a 4000/5000/6000 series GPU along with the 40nm tools/technology from AMD, and hired some engineers from AMD as well.

I read somewhere in an Iwata Asks interview that employees/engineers from AMD also participated in the development of the GPU.
 
I disagree. In fact the gap in power between the PS4 and Xbox One isn't any different than the gap between the PS2 and the original Xbox. As for the Xbox One struggling to separate itself from the Wii U, you have to be joking. Take one title that's on all three platforms mentioned above and look at the real differences. Let's say AC4: the X1 and PS4 share the exact same assets, textures, poly counts, shadows and particle effects. In fact the PS4 and X1 versions are identical except in resolution, PS4 at 1080p, X1 at 900p. Then go look at the Wii U version: it shares all the assets of the PS360 versions, same resolution, but it can't keep up with the same frame rates.

In fact this goes for all multiplats on the X1 and PS4; they all share the exact same assets, textures, shaders, polys and so on. Most of them share the same resolution, and when there are differences it's resolution or frame rate. I understand the PS4 has 6 more compute units, but with both next-gen systems having the same CPU and amount of RAM they are in the same gen or class. I'm not sure if you have ever played any Xbox One games, but if you had you would know that the Wii U cannot compare graphically in any way to what even X1 launch titles look like. I know that in this day and age the internet is really pushing game resolution, but it's not what makes a game next gen. The increased draw distance, texture quality, lighting, shading, poly count, and characters on screen are what make a game next gen graphically.

You didn't read into what I'm saying: the X1 is struggling to make improvements across the board, i.e. increase the visual fidelity, move from 720p to 1080p, and obtain a stable 60fps. It's been able to do two of the three, but never all three. For example, Batman: Arkham Origins on Wii U is a 720p 30fps game; I doubt that the Xbox One could run that game at 1080p, 60fps, and still have room to improve the visual fidelity.
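For a sense of the gap being described, here's the raw pixel-throughput arithmetic for the modes mentioned in this thread (pixel counts only; it says nothing about the extra per-pixel shading cost that "visual fidelity" implies):

```python
# Raw pixel throughput (resolution x frame rate) for the modes discussed above.
modes = {
    "720p30":  (1280, 720, 30),
    "900p30":  (1600, 900, 30),
    "1080p30": (1920, 1080, 30),
    "1080p60": (1920, 1080, 60),
}

base = 1280 * 720 * 30  # the Wii U version's 720p/30fps target
for name, (w, h, fps) in modes.items():
    px_per_sec = w * h * fps
    print(f"{name}: {px_per_sec / 1e6:6.1f} Mpix/s ({px_per_sec / base:.2f}x of 720p30)")

# 720p30 -> 1080p60 alone is a 4.5x jump in raw pixel throughput,
# before any increase in per-pixel work.
```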


Games that have come late to Wii U seem to have fewer framerate troubles. Both Mass Effect 3 (even with the cruddy pre-launch SDK) and Deus Ex have far more stable framerates than the ports that released alongside the other versions. The publishers weren't going to devote much time to a Wii U build when the sales potential is so low; taking time away from the builds that will sell millions to optimize the Wii U build doesn't make a lot of sense. I can't blame them, but then again I can't blame Wii U gamers for not supporting substandard ports.
 
Because I'm feeling like much less of a nob end after a good night's sleep, here's a link to one of the most information-dense double uber posts in this thread (it's an uber post containing a quote of an uber post).

http://forum.beyond3d.com/showpost.php?p=1703033&postcount=4524

It highlights that the Wii U SIMD blocks contain only enough register banks for 40 shaders, and so to contain 80 Nintendo would have had to do low level work on AMD's designs (lol) for absolutely no damn reason at all (lool?) and then have Renesas beat AMD for density on a less dense process and do it without the aid of AMD's hugely valuable high density layout tools (that they've developed over many generations of making their processors with TSMC).

In other words ... it's 160. :)

If Nintendo had decided to skip BC and wave goodbye to their long time fab partner at Renesas, AMD could have knocked out a belting little chip for them at TSMC.

So, it has 40 shaders per "CU" instead of 80 in most designs?
That's interesting. AMD did have a part with 40 shaders per CU, in fact only 40 shaders at all, the 785G chipset (and similar)
 
PS2->Xbox had a considerable (huge, really) features gap in addition to a significant performance gap. Xbone and PS4 are both matched evenly on features - except for when it comes to audio, which frankly is not important.

You can trust what Cal_guy posts... He works for AMD. :D (IE: the product in question is probably a re-badged 4/5xxx series dealie.)

Rebadged and lower end cards from the 6xxx series are probably virtually the same as the 5xxx. So it's plausible, since it was mostly the mid-range/high-end that used the newer VLIW4. Of course I'm just pulling this off the top of my head.

Xbox One and PS4 might be evenly matched on their feature offerings (OpenGL 4.3 conformant class hardware), but the difference in system resource availability is as clear as night and day. The PS4 is quite frankly beyond the Xbox One's reach, leaving art design to determine how close the Xbox One can get.
 
So, it has 40 shaders per "CU" instead of 80 in most designs?
That's interesting. AMD did have a part with 40 shaders per CU, in fact only 40 shaders at all, the 785G chipset (and similar)

R600 gen was in multiples of 40. 780G/785G is similar to RV620.
 
So, it has 40 shaders per "CU" instead of 80 in most designs?
That's interesting. AMD did have a part with 40 shaders per CU, in fact only 40 shaders at all, the 785G chipset (and similar)
I think the HD 46xx has such a setting: half the width, half the texture units.
It had 8 SIMDs but only 320 stream processors.
 
R600 gen was in multiples of 40. 780G/785G is similar to RV620.

I remember reading that in a way these chipsets are halfway between the Radeon 3000 and 4000 series. Possibly some tiny difference between 780G and 785G, which claims DX10.1 support.

My speculation is/was the Wii U is based on the 785G/880G GPU

I think the HD 46xx has such a setting: half the width, half the texture units.
It had 8 SIMDs but only 320 stream processors.

I wondered about that for the HD 4350 (it has 80 SPs).
Indeed, for the 4670 there are 8 vec5 units per "cluster" and 8 clusters, so it's a 320 SP part made of 40 SP blocks (reference here: http://www.hardware.fr/articles/732-2/specifications-detail.html).
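To make the block arithmetic explicit, here's a quick sketch of how those totals fall out of the cluster layout. The two Wii U rows are just the competing interpretations in this thread, with the block count inferred from the 40-vs-80-per-block framing above, not a confirmed figure:

```python
# Total stream processors = clusters * VLIW5 units per cluster * 5 ALUs.
parts = {
    "785G / HD 4200":           (1, 8),   # a single 40-SP block
    "HD 4350 (RV710)":          (2, 8),   # 80 SPs, assuming the same 40-SP blocks
    "HD 4670 (RV730)":          (8, 8),   # 8 blocks of 40 -> 320 SPs
    "Wii U (40 SPs per block)": (4, 8),   # candidate interpretation -> 160 SPs
    "Wii U (80 SPs per block)": (4, 16),  # candidate interpretation -> 320 SPs
}

for name, (clusters, vliw_per_cluster) in parts.items():
    total_sp = clusters * vliw_per_cluster * 5
    print(f"{name}: {clusters} x {vliw_per_cluster * 5} SP = {total_sp} SPs")
```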
 
So the Wii U's GPU is based around the HD 4670, with improvements that were introduced in the 5000 series, including support for the latest available APIs like OpenGL 4.3 according to statements from various developers, plus software support (engines, tools, etc.)...

Marcan, who hacked the Wii U's hardware, said that the roots of the Wii U's GPU go all the way back to the R600 series, while he speculates that they moved through GPU series as time passed, developing/modifying their earlier designs, or something like that.

Thus I assume that Nintendo started developing their successor to the Wii right away, or a year or two after its release, by starting to mess with the R600 (Radeon HD 2000/3000) series, then moving to R700 and later...

Maybe Nintendo started developing its HD console right away, or even before the Wii's release, since they didn't expect the Wii to be such a success... Is this a logical speculation/assumption/guess?
 
Thus I assume that Nintendo started developing their successor to the Wii right away, or a year or two after its release, by starting to mess with the R600 (Radeon HD 2000/3000) series, then moving to R700 and later...

Maybe Nintendo started developing its HD console right away, or even before the Wii's release, since they didn't expect the Wii to be such a success... Is this a logical speculation/assumption/guess?

I don't think so, because from all third party accounts Wii U was barely ready to be released when it was in late 2012. Taking so long to reach a point like that is pretty embarrassing.

However I do think that they may lock themselves down to certain specifications earlier than their competitors do. PS4/XBOne leaks and rumors seem to show an evolution in parts considerations over the years, including for CPU/GPU. I doubt the same can be said for Wii U's CPU/GPU. Face it, Nintendo hasn't used hardware that could legitimately be called contemporaneous since way back with the Gamecube. This applies not just to Wii and Wii U but every handheld they've ever made.
 
I don't think so, because from all third party accounts Wii U was barely ready to be released when it was in late 2012. Taking so long to reach a point like that is pretty embarrassing.

I agree, yet you aren't considering various events, factors and variables... Nintendo most likely worked on a successor to the Wii before or sometime after its release, while the Wii was selling at a breathtaking pace. Nintendo could have worked on it and then halted development because the Wii was selling extremely well; in Iwata Asks they clearly said that Wii U development started in 2009, the first prototype arrived in 2011, and the console was released in Q4 2012.

Maybe Nintendo wasn't sure what their successor to the Wii should be, whether to continue the philosophy behind the Wii or return to the path of the Gamecube, and they chose the former.
However I do think that they may lock themselves down to certain specifications earlier than their competitors do. PS4/XBOne leaks and rumors seem to show an evolution in parts considerations over the years, including for CPU/GPU. I doubt the same can be said for Wii U's CPU/GPU. Face it, Nintendo hasn't used hardware that could legitimately be called contemporaneous since way back with the Gamecube. This applies not just to Wii and Wii U but every handheld they've ever made.

Well, blame others and maybe yourself for not buying a Gamecube when you could; it's your fault for not speaking with your wallet. The Gamecube, and the philosophy of matching/exceeding the competition, was discarded by Nintendo thanks to the consumer. Blame yourself, not Nintendo; you had a chance and you blew it.

It didn't stop the PlayStation from ruling over the Nintendo 64, nor the PlayStation 2 over the Gamecube, which had twice the computational performance.

 
I remember reading that in a way these chipsets are halfway between the Radeon 3000 and 4000 series. Possibly some tiny difference between 780G and 785G, which claims DX10.1 support.

Many of RV6x0 are D3D 10.1 too.

The improvements that I know of for the HD IGPs are 1) a superior UVD component 2) Flash acceleration. The Flash aspect is interesting because that was at least partially done via ATI Stream. For some reason ATI deemed the R600 and RV6x0 GPUs incapable of Flash via Stream. Even more interesting is Catalyst 9.11 will do Flash acceleration on all RV6x0 GPUs but it is flaky...

Maybe Nintendo started developing its HD console right away, or even before the Wii's release, since they didn't expect the Wii to be such a success... Is this a logical speculation/assumption/guess?
I am certain that I read a news report years ago about a forthcoming "Wii HD", long before WiiU was known. So yeah, I have been wondering if WiiU is tech that was on the back burner for a while. It makes the R600/R700 roots make more sense, I think.

I take it nobody has seen anything that looks like R800 / Cypress inside Latte? If WiiU had been in development in say 2009, it seems to me that would be the tech in it.
 
If Nintendo started development of their HD console with R600 (Radeon HD 2000-3000 series) and then halted it, I think Nintendo didn't want to spend more money keeping it up to date, so they instead bought the necessary parts of the IP from the R700-R800 (Radeon HD 4000/5000-6000) series to modernize it. That would explain developers' claims of a feature set comparable to DirectX 11 / OpenGL 4.3, the software that supports APIs at that level on Wii U, like the Unity 4 Pro engine and the latest CRYENGINE (the successor to CryEngine 3, nicknamed CryEngine 4), and also Marcan's statement that the roots of the Wii U's GPU trace back to R600 while he believes it is R700-R800, yet the nature of the Wii U's GPU sets it apart from any known AMD GPU.

Maybe we should call Wii U's GPU "Frankenstein" and not "Latte"...
 
backgroundpersona, please start quoting properly. What you're doing - embedding your responses inside the quoted block - makes it really annoying to respond to you because the forum gets rid of nested quotes in responses.

I agree, yet you aren't considering various events, factors and variables... Nintendo most likely worked on a successor to the Wii before or sometime after its release, while the Wii was selling at a breathtaking pace. Nintendo could have worked on it and then halted development because the Wii was selling extremely well; in Iwata Asks they clearly said that Wii U development started in 2009, the first prototype arrived in 2011, and the console was released in Q4 2012.

That's not considering anything, that's merely making assumptions without any evidence. You even say Nintendo admits that Wii U development started in 2009, doesn't that outright contradict any possibility that they started it before Wii was even released?

Well, blame others and maybe yourself for not buying a Gamecube when you could; it's your fault for not speaking with your wallet. The Gamecube, and the philosophy of matching/exceeding the competition, was discarded by Nintendo thanks to the consumer. Blame yourself, not Nintendo; you had a chance and you blew it.

It didn't stop the PlayStation from ruling over the Nintendo 64, nor the PlayStation 2 over the Gamecube, which had twice the computational performance.

You sound very fanboyish here. You're right, I didn't buy a Gamecube, and I also didn't buy an N64, because neither had games I cared an awful lot about playing. I don't apologize for either of these things, and why should I? If Nintendo drops the ball on getting the third-party support that attracts my attention, I'm not going to bankroll them just in the hope that they get it back. I didn't blow it, Nintendo blew it. There's no way you can argue that Gamecube struggled because its hardware was good; it struggled in spite of having good hardware. I did buy a GBA, DS, and 3DS, but that's not because they had weak hardware (although for handhelds the lower power consumption angle is a bigger deal, even if the 3DS's battery life isn't very good anyway).

Say what you want about PS2, it could at least be considered a reasonably powerful and novel design for its time, and the best looking Gamecube games didn't look an awful lot better than the best looking PS2 games. They took very different approaches, and I think Gamecube had the more elegant and more forward-compatible design (and Wii's lazy reuse is somewhat of a credit to this), but they were very different and both had their own strengths and weaknesses. That comparison is not a lot like the comparison between Wii and PS3/XBox360, or the comparison between Wii U and PS4/XBoxOne.

Today Nintendo is making a lot of what I would consider less than ideal decisions, ones that their competitors aren't making, and I don't see what any of it has to do with Gamecube sales. This has been covered already in the thread, but in short:

1) An emphasis on backwards compatibility, to the substantial detriment of performance
2) An emphasis on low power consumption, pushing size and noise levels beyond useful diminishing returns, to the substantial detriment of performance
3) Pushing experimental new input interfaces, to the substantial detriment of price
4) Relying on Japanese manufacturers and suppliers, to the detriment of price and performance

They may have gone from underwhelming sales with a strong hardware design in Gamecube to lightning in a bottle with a lazy hardware design in Wii, but that doesn't automatically mean that lazy design was a virtue (even if that really is what Nintendo took away from this), and it's pretty evident that they've now been suffering the consequences of that mentality.
 
Many of RV6x0 are D3D 10.1 too.

The improvements that I know of for the HD IGPs are 1) a superior UVD component 2) Flash acceleration. The Flash aspect is interesting because that was at least partially done via ATI Stream. For some reason ATI deemed the R600 and RV6x0 GPUs incapable of Flash via Stream. Even more interesting is Catalyst 9.11 will do Flash acceleration on all RV6x0 GPUs but it is flaky...

RV670/RV635/RV620 and newer had DX10.1 (but UVD1). Their shader:TMU:ROP configurations were:
RV670 - 320:16:16 (256-bit GDDR3)
RV635 - 120:8:4 (128-bit GDDR3)
RV620 - 40:4:4 (64-bit GDDR3)

the IGP from the 780G was basically the same as the older RV610 (also 40:4:4 but with DX10.0 support)
785G (HD 4200) was basically like the RV620 but with improved UVD (UVD2)

785G (HD 4200, RV620-based with UVD2) and 780G (HD 3200, RV610-based) had 100% the same performance in games (apart from when DX10.1 was used, or for video decoding), so the rest of the design was probably left untouched

the smallest discrete HD4000 was the RV710
80:8:4
next was the RV730 with
320:32:8

the RV710 successor in the 5000 series (5450) had exactly the same configuration (but it was an updated design with DX11; if I remember correctly, at the same clock there was a small performance reduction outside of DX11, but it could have been some software problem)
80:8:4
next smallest 5000 series was the GPU used on the 5670
400:20:8

the smallest 6000 series (6450) was changed to
160:8:4 (still VLIW5)

So yes, they've used a few different configurations.

I don't see why the Wii U couldn't have 320:16 or 160:16, maybe 160:8 or something else, but I don't understand the die shots like some people here do.
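To put those candidate configs in rough perspective, here's a quick sketch of their theoretical rates, assuming the commonly reported ~550 MHz clock, 2 FLOPs per SP per clock and 1 texel per TMU per clock (all assumptions for illustration, not measured figures):

```python
# Theoretical ALU and texture rates for the candidate shader:TMU configs above.
CLOCK_GHZ = 0.55  # assumed clock, not a confirmed figure

configs = {          # (stream processors, TMUs)
    "320:16": (320, 16),
    "160:16": (160, 16),
    "160:8":  (160, 8),
}

for name, (sps, tmus) in configs.items():
    gflops = sps * 2 * CLOCK_GHZ   # 2 FLOPs (MADD) per SP per clock
    gtexels = tmus * CLOCK_GHZ     # 1 bilinear texel per TMU per clock
    print(f"{name}: ~{gflops:.0f} GFLOPS, ~{gtexels:.1f} GTexels/s")
```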
 