Wii U hardware discussion and investigation *rename

UE4 support is really a non-factor since it won't be able to handle PS4/720 games anyway. It's like saying UE3 supports iPads: sure, it can run on the iPad, but it's not Gears of War. In the same way, games designed for next-gen consoles on UE4 will not be running on the Wii U.
 
But to go on topic, I don't think Nintendo is going to spend the money to do any custom Wii U GPU work beyond the necessary (memory bus work if needed, eDRAM integration). So I still think it will be an RV730. I don't see the need for an E6760-like GPU when an RV730 at 400 MHz or so would probably have the same power envelope and better yields. And I don't think you can just add a dollop of DX11 to an existing GPU without more expense than it's worth. I think it's a DX10.1 GPU only. If they wanted DX11, they would have just used a low-end DX11 GPU, of which there are many good candidates (why oh why didn't you use a Cape Verde, stupid Nintendo).
 

What you're saying they won't do is exactly the kind of thing Nintendo would do, especially if their plan was to stick to a 10.1-level GPU and the third parties they supposedly listened to said "that's not good enough". Nintendo would do just enough in this kind of situation. All they've done (at least in recent generations) is customize their GPUs, for better or worse. If the only support you give is that you don't think they will, then I have to disagree. That would be fine if you had history backing your belief, but as it stands that's not the case. After all, you saw and posted about Arkam's recent post, and you know he was very disappointed with how things looked back in January.

The other reason I don't quite agree comes from someone I spoke to back during Dec./Jan.

http://www.neogaf.com/forum/showthread.php?p=41883633&highlight=amd#post41883633

Well, I can't reveal too much. The performance target is still more or less the same as the last review from around E3. Now it's more balanced and "2012" now that it's nearer to complete and now AMD is providing proper stuff. As far as specs, I don't see any big change for better or worse, other than said cost/performance balance tweaks... It won't make a significant difference to the end user. As far as the kit goes, it's almost like what MS went through. Except more Japanese-ish... If you know what I mean.
http://www.neogaf.com/forum/showthread.php?p=41901585&highlight=2011#post41901585

Anyway, things are shaping up now with the new year. There was some anxiety with some less close third parties about what they were doing with GPU side, whether things were going to be left in the past... but it looks more modern now. You know, there simply wasn't actual U GPU data in third party hands this time last year, just the target range and R700 reference GPU for porting 360 titles to the new cafe control. Maybe now they finally can get to start debugging of the specifics and start showing a difference...
 
For now that kind of interconnect offers really low bandwidth, and parts have to be really low power too. In the mobile world 12.8 GB/s is a lot of bandwidth.

So if you were right, the thing would suck even more than most people already think it does. It would, in my opinion, be terrible.

You're saying they are not currently capable of bandwidths higher than 12.8 GB/s?
I thought that was just a target number for mobile devices this year.

But that brings me to the 32 MB eDRAM: if it is a shared pool for the CPU and GPU,
then maybe it's not meant to be used as a framebuffer?

Nintendo also stated that the Wii U (2 GB) has 20x the total memory of the Wii (91 MB).
So I'm thinking the 1 GB includes the 32 MB.
 
I don't see why it would be shared with the CPU, other than maybe a 2 MB partition or something. A console just doesn't have that much use for a large L3, and you wouldn't want contention between CPU and GPU on the bus. The CPU isn't grunty enough to contribute to the graphics, so there's no Cell-like cooperation with both processors working on graphics data. 32 MB would be great for framebuffers/render targets, allowing lots to be stored at once. E.g. with full GPU access, reads and writes, you could have very fast render-to-texture effects with the data immediately available for subsequent use, unlike the XB360's requirement to export to main RAM. This is a good design, and including the CPU in the eDRAM seems like cost for no benefit.
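To put a rough number on "lots", here's a back-of-envelope budget sketch in Python, assuming a 32 MB pool and 720p targets (the buffer formats are illustrative guesses, not confirmed Wii U specs):

```python
# Rough render-target budget for a 32 MB eDRAM pool at 720p.
# Formats and counts are illustrative assumptions, not confirmed specs.
EDRAM_BYTES = 32 * 1024 * 1024
WIDTH, HEIGHT = 1280, 720

def target_bytes(bytes_per_pixel):
    return WIDTH * HEIGHT * bytes_per_pixel

colour = target_bytes(4)   # RGBA8 colour buffer, ~3.5 MiB
depth  = target_bytes(4)   # 24-bit depth + 8-bit stencil
hdr    = target_bytes(8)   # FP16 RGBA target for render-to-texture effects

working_set = 2 * colour + depth + hdr   # double-buffered colour, depth, one HDR target
print(f"RGBA8 720p target: {colour / 2**20:.1f} MiB")
print(f"Working set: {working_set / 2**20:.1f} MiB of {EDRAM_BYTES / 2**20:.0f} MiB")
```

Even double-buffered colour plus depth plus an FP16 target comes in under 20 MiB, which is what makes keeping render-to-texture results resident (rather than resolving them out to main RAM as on the 360) look attractive.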
 
The tablet, if it's only showing 2D images like maps, inventory, etc., really doesn't take much GPU power. I think a guy did the calculations and found that it would take 12 MFLOPS of GPU power.
If it only shows 2D imagery it wouldn't take a single MFLOP. You'd just do some simple blitting to build up the inventory items on top of the background and transfer the finished 2D bitmap to the pad. Some FLOPs might be involved if the developer chose to build up the image as polygonal, textured flat objects, but it would be extremely minor. Four vertices per object doesn't amount to a whole lot after all.
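To make the "not a single MFLOP" point concrete, here's a minimal sketch (Python/NumPy, all names and sizes hypothetical): an opaque blit is a pure integer copy, and floating-point work only appears if you opt into per-pixel alpha blending.

```python
import numpy as np

# Hypothetical 854x480 RGBA8 pad framebuffer and a 64x64 inventory icon.
pad = np.zeros((480, 854, 4), dtype=np.uint8)
icon = np.full((64, 64, 4), 255, dtype=np.uint8)

def blit(dst, src, x, y):
    """Opaque blit: a straight integer copy, zero floating-point ops."""
    h, w = src.shape[:2]
    dst[y:y + h, x:x + w] = src

def alpha_blend(dst, src, x, y):
    """Optional alpha lerp: this is where per-pixel float ops would creep in."""
    h, w = src.shape[:2]
    a = src[..., 3:4].astype(np.float32) / 255.0
    region = dst[y:y + h, x:x + w].astype(np.float32)
    dst[y:y + h, x:x + w] = (src * a + region * (1.0 - a)).astype(np.uint8)

blit(pad, icon, 32, 32)   # composing a 2D inventory screen this way costs no FLOPs
```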

You're saying they are not currently capable of bandwidths higher than 12.8 GB/s?
I thought that was just a target number for mobile devices this year.
iPad3 is capable of roughly that much; 128-bit bus @ ~800MHz DDR.
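That figure is just bus width times transfer rate (a quick sketch; the iPad 3 numbers are as quoted above, with "800MHz DDR" read as ~800 MT/s effective):

```python
# Peak bandwidth = bus width in bytes x effective transfer rate.
bus_bits = 128
transfers_per_s = 800e6                                  # ~800 MT/s effective (DDR)
print((bus_bits / 8) * transfers_per_s / 1e9, "GB/s")    # -> 12.8 GB/s
```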

But that brings me to the 32 MB eDRAM: if it is a shared pool for the CPU and GPU,
then maybe it's not meant to be used as a framebuffer?
Of course it's meant to be used as a framebuffer. What else would its primary purpose be? A console CPU doesn't need 32MB cache, nor do you need that much texture cache either.

Nintendo also stated that the Wii U (2 GB) has 20x the total memory of the Wii (91 MB).
So I'm thinking the 1 GB includes the 32 MB.
I'm thinking your reasoning is flawed, but you're entitled to your opinion. :)
 
Hmm... I am thinking what I have come up with fits the GPGPU line, the 40 nm rumor, and the AMD R700-series rumor all pretty well.

What if an AMD FirePro Mobility GPU is in the Wii U?

The FirePro M7740, at 832 GFLOPS, is based on the AMD RV740. If it's another one, then it's likely the AMD FirePro M7820 at 1120 GFLOPS, based on the RV870, if we're talking DirectX 11/SM5. If it's newer and on DirectX 11 then it could have a little more juice, but most rumors point to the AMD R700 series. So I expect 832 GFLOPS for the GPU in the Wii U. Anything more is welcome, though, if it happens to have more. If you factor in the possible 264 GFLOPS max that POWER7 can output, then you get 1.096 TFLOPS, but I do not know exactly how much of that would factor into the Wii U. If it factors in like that, then it makes UE4 at 720p possible with a little optimization.

Just my guess, but a tweaked version of either would be possible at an acceptable TDP, considering that GPU is from 2009. They could turn the heat down on it if they needed to.
 

Your GPU figure is way off; even the optimists are only saying ~600 GFLOPS.
 
If it only shows 2D imagery it wouldn't take a single MFLOP. You'd just do some simple blitting to build up the inventory items on top of the background and transfer the finished 2D bitmap to the pad.
I'd expect more sophisticated layer blending these days, so combining two images could involve 4x 16-bit float ops per pixel blend (the simplest option being an alpha lerp). 854x480 is ~400k pixels. Combining two screens' worth of imagery could be 1.6 million floating-point operations; x 30 fps would be ~50 MFLOPS.
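Spelled out, that estimate looks like this (a sketch using the same assumptions as above: 4 ops per blended pixel and 30 fps):

```python
ops_per_pixel = 4          # alpha lerp: sub, mul, mul, add per pixel
pixels = 854 * 480         # ~410k pixels on the pad screen
fps = 30

flops = ops_per_pixel * pixels * fps   # blending two full-screen layers each frame
print(f"~{flops / 1e6:.0f} MFLOPS")    # -> ~49 MFLOPS
```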

It's certainly not impossible to use up any amount of processing power on that second screen; it needs to be evaluated on a per-game basis. However, the impact on the system will typically be no worse than on the other consoles. A vector map on that screen won't take much more than the same map in the corner of the TV on a PS3 or XB360. And something like an inventory need not be updated when the player isn't actively touching it, so it wouldn't take anything from the game while they are viewing the main screen. Some scenarios, like Batman's special vision mode being rendered on the Wuublet alongside normal vision on the TV, would have a significant impact, but from what little I've seen so far, those situations are few and far between. And given that developers can't trust the screen will be available to games, as it could be off around the house doing some other function, why would they dedicate important functions to it?
 
For the Wii U, the graphics is based on embedded GPGPU tech from AMD. According to Nintendo Nation they estimate that it's about equivalent to the embedded E6760 brand from AMD, which has about 600mhz clock speeds, 800mhz memory speeds, 128-bit 1GB GDDR5 video memory, 576 GFLOPs, shader model 5.0 support and DirectX 11 equivalent graphics support. In English terms that puts the E6760 around the equivalent to the AMD Radeon 4650 card save for faster video memory and DX11 support, which didn't become available in AMD cards until the 5xxx series. In even more English terms, the Wii U's GPGPU is already several years old based on the stats alone...2009 old and worth about $100.

http://www.cinemablend.com/games/Wii-U-Seems-Too-Expensive-Specs-46908.html

To wrap up, Wii U is packing what Iwata told us is a GPGPU – in other words, the GPU can perform computing tasks, a bit like OpenCL on traditional operating systems. So a fairly modern chip (we’re going to take a guess and say it’s based on one of AMD’s first embedded GPGPUs, released in 2011) without going too much into speculation.

http://www.nintendo-nation.net/nintendo-wii-u-ram-disc-speed/

Would be nice if the Wii U were packing something like the E6760, but somehow I doubt it.
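For what it's worth, the 576 GFLOPS figure quoted for the E6760 is just the standard peak-shader arithmetic (a sketch; 480 ALUs at 600 MHz is the E6760's published configuration, and each multiply-add counts as two ops):

```python
def peak_gflops(shader_alus, clock_mhz, ops_per_alu_per_clock=2):
    """Theoretical peak: each ALU retires a multiply-add (2 ops) per clock."""
    return shader_alus * ops_per_alu_per_clock * clock_mhz / 1000.0

print(peak_gflops(480, 600))   # E6760-class: 480 ALUs @ 600 MHz -> 576.0 GFLOPS
print(peak_gflops(320, 400))   # the RV730-at-400-MHz idea from earlier -> 256.0 GFLOPS
```

These are theoretical peaks, of course; they say nothing about bandwidth or real-world efficiency.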
 
I really hate statements like this:
In even more English terms, the Wii U's GPGPU is already several years old based on the stats alone...2009 old and worth about $100.
No, it's not "old." The Wii's GPU was "old," as it was truly just up-clocked 2000 technology. But this GPU isn't. Embedded GPGPU tech is "slow," but not "old." If he'd said, "The Wii U's GPU is about as powerful as a midrange(?) 2009 PC GPU," that would have been more accurate, but we are way, way past the point where a processor's age and its clock speed are directly correlated, what with so many computing devices spanning the spectrum from low-end phones to high-end desktop PCs. A lot of people looking for flame-war fodder are going to read this as, "The Wii U is based on 2009 tech."
 
GPGPU debuted on the R300, a 2002 chip, though you could say that was tech demos and proof of concept. G80 really started it, and R700-generation hardware got some real use out of it, with Folding@home and a supercomputer built from 4870X2 cards. So there's no way the Wii U GPU isn't a GPGPU. These days even cell-phone GPUs and VIA chipsets claim to be GPGPUs; it's a cheap way to boast about your hardware and doesn't say much about it.
 
Say they have a single memory channel (64-bit); that's not that much bandwidth to feed the CPU and supply textures to the GPU.
If they were to use the full 2 GB of memory the effective bandwidth would be "halved": not in real terms, but if you have twice the data to move with the same bandwidth, the result is the same.
Maybe Nintendo decided that 1 GB was the sweet spot.
After reading your and Shifty's posts on the matter, I wonder if they decided to go with 1 GB of cheap, very average RAM just for caching (which, even if mediocre, is leaps and bounds faster than a hard drive), and then chose a better option for the RAM dedicated to games, which use memory quite intensively.

I think that's a possibility, especially taking into account that they don't include an HDD in the SKUs.
 
DDR3 will be very cheap. You won't want to add a second type of RAM, as it won't be cheaper but will add cost to the system. I'm pretty certain this is 2 GB of unified RAM on a single bus, not too fast, with the eDRAM making up the graphics bandwidth.
 
Yes, I thought a certain amount of the reserved memory could be used for transparent caching, as with the unused RAM on your Linux or Windows box.

DDR3 on a 64-bit bus is quite sucky though: 57% of the Xbox 360's GDDR3 bandwidth if it's DDR3-1600, 67% if it's 1866 (why not, just run it at CL11 and count on the high supply of mature chips). So the eDRAM is wholly needed for your buffers (1080p, 32-bit, double-buffered already eats half of it).
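The arithmetic behind those percentages, plus the framebuffer point (a sketch; the 360 figure is its 128-bit GDDR3 at 700 MHz DDR):

```python
def bandwidth_gb_s(bus_bits, mt_per_s):
    # Peak bandwidth = bus width in bytes x transfers per second.
    return bus_bits / 8 * mt_per_s / 1000.0

xb360 = bandwidth_gb_s(128, 1400)       # 22.4 GB/s
ddr3_1600 = bandwidth_gb_s(64, 1600)    # 12.8 GB/s
ddr3_1866 = bandwidth_gb_s(64, 1866)    # ~14.9 GB/s
print(f"{ddr3_1600 / xb360:.0%}, {ddr3_1866 / xb360:.0%}")   # -> 57%, 67%

# 1080p, 32-bit colour, double-buffered:
fb_bytes = 1920 * 1080 * 4 * 2
print(f"{fb_bytes / 2**20:.1f} MiB of the 32 MB eDRAM")      # ~15.8 MiB, about half
```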
 
DDR3 will be very cheap. You won't want to add a second type of RAM, as it won't be cheaper but will add cost to the system. I'm pretty certain this is 2 GB of unified RAM on a single bus, not too fast, with the eDRAM making up the graphics bandwidth.

That's pretty much what I'm counting on now.
 
So this 1 GB OS thing: if, for example, one person was playing a game on the TV while also using non-gaming OS functions at the same time, and a second person was using the tablet for the OS, how much RAM would that require from the OS?
 
That depends on what they are doing.
 
IBM has moved to 32 nm with its new POWER7+:

The 32 nm SOI processor will have a die size of 567 mm2 and carry 2.1 billion transistors.

I'm wondering, has there been any recent confirmation that the Wii U CPU is still at 45 nm? Would it have been too late, or even worth it, to move to 32 nm since their 45 nm announcement? Here is something:

Of course with a larger cache there will be fewer cache misses and consequently potentially lower benefit of SMT because multithreading takes advantage of stall time in the CPU when it has a cache miss.

I recall some rumors that the Wii U CPU doesn't do much multi-threading; is that because the CPU is fed by a large enough cache?
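That relationship can be illustrated with a toy throughput model (a sketch with made-up parameters, purely to show why a bigger cache leaves less stall time for SMT to hide; it is not a claim about the actual Wii U CPU):

```python
def ipc(miss_rate, miss_penalty=100, threads=1, mem_accesses_per_instr=0.3):
    """Toy model: IPC limited only by memory stalls; with SMT a second
    thread runs (idealised) during the other thread's stall cycles."""
    stall_per_instr = mem_accesses_per_instr * miss_rate * miss_penalty
    single = 1.0 / (1.0 + stall_per_instr)
    return min(1.0, threads * single)          # cap at 1 instruction per cycle

for miss_rate in (0.05, 0.02, 0.005):          # bigger cache -> lower miss rate
    one, two = ipc(miss_rate, threads=1), ipc(miss_rate, threads=2)
    print(f"miss rate {miss_rate:.1%}: 1T {one:.2f} IPC, 2T {two:.2f} IPC, gain {two / one:.2f}x")
```

In this toy model the SMT gain shrinks from about 2x at a 5% miss rate to barely 1.15x at 0.5%, which is the "large cache makes SMT less worthwhile" intuition in numbers.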
 