Wii U hardware discussion and investigation *rename


AlNom
Hey folks, we're back again. This time we've racked our collective brains and pooled our extensive knowledge of GPU hardware to bring you a speculative piece on the GPU powering Nintendo's WiiU, based on some of the rumours and hints that have arisen before and after E3. We present several scenarios whilst trying to bring some modest reasoning to the forefront. So have a look here!

http://www.beyond3d.com/content/articles/118/

Big round of thanks to Farid, AlexV, WillardJuice and a pixel counter for their input and assistance in the timely release of yet another B3D article!
 
On the eDRAM question: what if the Wii U can use the CPU's eDRAM as the memory target to resolve the frame buffer to? Maybe they simply decided they didn't want to create two separate pools of that memory, and the slowdowns seen in games like Ghost Recon are because the developers still haven't decided whether or not to make use of that particular pool for rendering?
 
Are you that pixel counter, Mr Strong?
Maybe. :p

On the eDRAM question: what if the Wii U can use the CPU's eDRAM as the memory target to resolve the frame buffer to?

I'm not sure how complicated such an interface would be. I mean, we've seen shader export/memexport, but for starters, I don't think the bandwidth is anywhere near what we'd want for framebuffer reading/writing/blending. So there'd need to be a pretty wide interface between the CPU and the GPU RBEs. The problem there is that the two chips ought to be on the same package, I would think.

I suppose there's also cache locking on the CPU side to avoid resource contention or thrashing there.
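
To give a rough sense of why the bandwidth matters so much here, a quick back-of-the-envelope sketch in Python (the ROP count, clock and pixel format are made-up illustrative values, not anything known about the Wii U):

```python
# Back-of-the-envelope peak colour-buffer traffic the GPU's RBEs (ROPs) could
# generate while alpha blending, i.e. what any eDRAM render/resolve target
# would need to keep up with. All figures are illustrative assumptions.

def peak_blend_bandwidth_gbs(rops, gpu_clock_mhz, bytes_per_pixel):
    """Peak read+write colour traffic in GB/s if every ROP blends each clock."""
    # Alpha blending reads the destination pixel and writes the result back,
    # so each blended pixel costs roughly 2 * bytes_per_pixel of traffic.
    return rops * gpu_clock_mhz * 1e6 * bytes_per_pixel * 2 / 1e9

# Hypothetical example: 8 ROPs at 500 MHz with a 32-bit (4-byte) colour buffer.
print(peak_blend_bandwidth_gbs(8, 500, 4))  # 32.0 GB/s, before any Z/stencil traffic
```

Even under those modest assumptions the peak is in the tens of GB/s, which is roughly why a narrow side link to CPU-side eDRAM would struggle as a blend target.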
 

You're such a bashful one... :oops:

The problem there is that the two chips ought to be on the same package, I would think.

I suppose there's also cache locking on the CPU side to avoid resource contention or thrashing there.

That's the ten-million-dollar question, isn't it? The first fusion processor to market was an IBM POWER CPU and an AMD GPU, was it not? Do we have any reason to exclude the possibility that the chip will be yet another APU, but with even better integration than the Xbox 360 APU, which was merely a cost-saving move?
 
Excellent article: concise, accurate within the possibilities available at the time, and unparalleled compared to what has been discussed on forums and technical sites.
 
Nice article, but, *ahem*, maybe a bit speculative in nature... When it's so hard to find any "meat" on the WiiU, the whole article becomes nothing but guesswork. Oh, it'll be an IBM CPU and an AMD GPU... and then we extrapolate from there, based on the size of the enclosure shown at E3. :devilish:

Also, two very minor nitpicks: the SNES had, afaik, a maximum of 256 on-screen colors, shared between backgrounds and sprites. To exceed that you needed to write new values into the color registers on a per-scanline basis, but that's fakery and thus not really exceeding the total of 256 unique colors.

Also, the SuperFX was not a DSP coprocessor; from what I understand it was a custom RISC CPU (clocked at roughly 10.7 or 21.4 MHz depending on revision, with up to 512K of RAM available to it, I believe). The SNES had a whole slew of in-cart coprocessors, a bunch of them third-party creations, like Capcom's C4 for example. Some were actual DSPs, like the chip powering Nintendo's Pilotwings.
 
So even in the worst-case scenario, the WiiU will still have more than enough power to run current console games with somewhat better performance.

To be honest, I'm more interested in the CPU and RAM. The GPU will be good in any case.
 
So even in the worst-case scenario, the WiiU will still have more than enough power to run current console games with somewhat better performance.

To be honest, I'm more interested in the CPU and RAM. The GPU will be good in any case.

Yeah, it'd be virtually impossible to put in something that's worse. The question will be what they do about memory bandwidth. Even with GDDR5 on a 128-bit bus, they'd be spending a fair bit with the fastest available chips at 2Gbit density. I'm not too sure what the availability is now, but the 6970 uses 5.5Gbps GDDR5; that'd be ~88GB/s on 128-bit. Most other cards use something slower. It'd be hard to expect the absolute fastest GDDR5 (in 2012) anyway when you consider supply constraints and the effect on cost.

4 chips on a 128-bit bus makes sense for GDDR5 as well...
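
For anyone who wants to redo the arithmetic, the bandwidth and chip-count figures fall straight out of the per-pin data rate and bus width. A minimal Python sketch (the 4Gbps line is just an example of a slower memory grade):

```python
# Peak GDDR5 bandwidth = effective data rate per pin (Gbps) * bus width (bits) / 8.
# GDDR5 devices have a 32-bit interface, so a 128-bit bus takes 4 chips,
# and at 2Gbit density that works out to 1GB total.

def gddr5_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak theoretical bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

def gddr5_chip_count(bus_width_bits, chip_interface_bits=32):
    """Number of GDDR5 chips needed to populate a given bus width."""
    return bus_width_bits // chip_interface_bits

print(gddr5_bandwidth_gbs(5.5, 128))  # 88.0 GB/s, the "~88GB/s on 128-bit" above
print(gddr5_bandwidth_gbs(4.0, 128))  # 64.0 GB/s with slower (and cheaper) chips
print(gddr5_chip_count(128))          # 4 chips, matching the post above
```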
 
Worse, in Ubisoft’s Ghost Recon demo, we noticed slowdowns when a lot of smoke was on screen and micro stutters each time we fired the gun (the “muzzle flash” effect uses alpha blending).

Regarding this bit: here's an interview with Reggie where he admits that all third-party footage was actually from the 360 or PS3 (around the 4:40 mark in the video), so you can't really draw any conclusions from that footage.
 
I'm hoping for eDRAM again. It did wonders for the GC/Wii, and for the 360 too, of course.
 
I'm missing some discussion re: the lithographic process for a GPU that is not yet in a material state and is due to launch Q2 2012. The article seems to assume the same 40nm process AMD first delivered the HD4770 on in spring 2009, which is not necessarily a given and worthy of at least some mention.
 
Regarding this bit: here's an interview with Reggie where he admits that all third-party footage was actually from the 360 or PS3 (around the 4:40 mark in the video), so you can't really draw any conclusions from that footage.

Um... a 360/PS3 using the WiiU controller? I was playing it on the WiiU show floor, along with every other demo they had. Reggie is referring to the demo reel during the conference itself.
 
I'm missing some discussion re: the lithographic process for a GPU that is not yet in a material state and is due to launch Q2 2012. The article seems to assume the same 40nm process AMD first delivered the HD4770 on in spring 2009, which is not necessarily a given and worthy of at least some mention.

Just keep in mind that launching on 28nm, a new and certainly immature process, isn't going to be cheap.

For comparison's sake, TSMC had 65nm at the end of 2005, yet we didn't see its use in consoles or even other GPU markets until mid to late 2007. It's not like the complexity of the smaller process nodes is going to make the transition to 28nm any smoother for mass production - just take a look at how long 28nm has been delayed in the first place.
 
Just keep in mind that launching on 28nm, a new and certainly immature process, isn't going to be cheap.

For comparison's sake, TSMC had 65nm at the end of 2005, yet we didn't see its use in consoles or even other GPU markets until mid to late 2007. It's not like the complexity of the smaller process nodes is going to make the transition to 28nm any smoother for mass production - just take a look at how long 28nm has been delayed in the first place.

Fair enough - although 28nm being delayed apparently doesn't stop AMD from launching their new lineup on 28nm this year, and it makes targeting 40nm for a GPU with the assumed launch date less likely, not more, since Nintendo would have had reason to assume the process would be well under control by then.
I can't really comment on the die size vs. wafer cost tradeoff, other than that it should swing in favour of the smaller process with time.

Of course, you may also have some undisclosable inside info. :)
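
On the die size vs. wafer cost point, the usual back-of-the-envelope comparison looks something like the Python sketch below; the wafer prices, die sizes and yield numbers are invented purely for illustration and have nothing to do with actual TSMC or Nintendo figures:

```python
import math

# Rough cost-per-good-die comparison between two process nodes.
# All wafer prices, die sizes and yields below are invented for illustration.

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Gross dies per wafer: wafer area / die area, minus an edge-loss term."""
    return int(math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, die_area_mm2, yield_fraction):
    """Wafer cost spread over the dies that actually work."""
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_fraction)

# Hypothetical: the same design at 40nm vs. a ~0.55x area shrink at 28nm,
# with the newer node's wafers costing more and yielding worse at first.
print(round(cost_per_good_die(4000, 150, 0.85), 2))  # 40nm, mature yields
print(round(cost_per_good_die(6000, 85, 0.60), 2))   # 28nm, early yields
print(round(cost_per_good_die(6000, 85, 0.85), 2))   # 28nm, once yields mature
```

With made-up numbers like these, the smaller 28nm die actually starts out more expensive per good chip while yields are poor and only pulls ahead once the process matures, which is the "swings in favour with time" point.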
 
Fair enough - although 28nm being delayed apparently doesn't stop AMD from launching their new lineup on 28nm this year, and it makes targeting 40nm for a GPU with the assumed launch date less likely, not more, since Nintendo would have had reason to assume the process would be well under control by then.
I can't really comment on the die size vs. wafer cost tradeoff, other than that it should swing in favour of the smaller process with time.

Of course, you may also have some undisclosable inside info. :)

No, it won't stop AMD's PC parts, but those aren't quite as high volume as Nintendo's requirements, and the PC market has the advantage of binning and lol-prices at the high end, which Nintendo won't have. It'll just be tight. Of course, with time things get better, but we're also talking about a target date of mid-2012 (or even early 2012) for manufacturing, not just the release date. Nintendo sure has cash to burn, so that's their prerogative if they really need 28nm at launch.

I mean, it's not like 40nm is crappy at all right now, and shifting to a new process so early doesn't automatically mean good power characteristics (and thus has implications for clocks & usable chips).
 