Wii U hardware discussion and investigation

Nice try, but seriously ... applying H.264 compression to lower the bandwidth without introducing latency?

480p @ 30 fps = 15.5 MB/s (assuming YUV 4:2:0). Raw won't work over 802.11g (though the controller was only 802.11g-like).
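For reference, the arithmetic behind that figure (my assumption: a 720×480 frame at 12 bits per pixel for YUV 4:2:0):

$$720 \times 480\ \text{px} \times 1.5\ \text{B/px} \times 30\ \text{fps} \approx 15.5\ \text{MB/s} \approx 124\ \text{Mbit/s}$$

That's well beyond even the 54 Mbit/s signalling rate of 802.11g, never mind its real-world throughput, so raw video is clearly out.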
I don't think they use H.264. The system was explained in a recent Iwata Asks.

EDIT:
Generally, for a video compression/decompression system, compression will take place after a single frame of image data has been put into the IC. Then it is sent wirelessly and decompressed at the receiving end. The image is sent to the LCD monitor after decompression is finished.

But since that method would cause latency, this time, we thought of a way to take one image and break it down into pieces of smaller images. We thought that maybe we could reduce the amount of delay in sending one screen if we dealt in those smaller images from output from the Wii U console GPU on through compression, wireless transfer, and display on the LCD monitor.

Generally, compression for a single screen can be done per 16×16 macroblock. And on Wii U, it rapidly compresses the data, and the moment the data has built up to a packet size that can be sent, it sends it to the Wii U GamePad.
http://iwataasks.nintendo.com/interviews/#/wiiu/gamepad/0/1
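To illustrate why the slicing matters, here's a minimal toy model of the latency difference - not Nintendo's actual pipeline, and all the stage times and names below are invented:

```c
/* Toy model of whole-frame vs. sliced compress-and-send latency.
 * Purely illustrative; the real GamePad encoder's timings and slice
 * size are not public beyond the Iwata Asks description. */
#include <stdio.h>

#define SLICES          30    /* e.g. a 480-line frame in 16-line macroblock rows */
#define T_ENCODE_SLICE  0.5   /* invented per-slice stage times, in ms */
#define T_SEND_SLICE    0.5
#define T_DECODE_SLICE  0.5

int main(void)
{
    /* Whole-frame: each stage must finish the entire frame before the
     * next stage starts, so all three full-frame times add up. */
    double whole_frame = SLICES * (T_ENCODE_SLICE + T_SEND_SLICE + T_DECODE_SLICE);

    /* Sliced: encode, transmit and decode overlap. With equal stage
     * times, the first slice takes all three stages once, and each
     * remaining slice completes one bottleneck-stage time later. */
    double sliced = (T_ENCODE_SLICE + T_SEND_SLICE + T_DECODE_SLICE)
                  + (SLICES - 1) * T_ENCODE_SLICE;

    printf("whole-frame latency: %4.1f ms\n", whole_frame);  /* 45.0 ms */
    printf("sliced latency:      %4.1f ms\n", sliced);       /* 16.0 ms */
    return 0;
}
```

The exact numbers are meaningless; the point is that once the stages overlap, the added latency scales with a slice rather than a whole frame.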
 
Anyway, the problem I see here is that, as stated several times, Nintendo themselves highlight the feature. Why should they? They usually simply don't talk about tech; this was a very rare exception. Do you think they fell for some AMD snake oil? That's very hard to believe.

No, but I believe in Occam's Razor. Some consumers have fallen for Nintendo's Snake Oil of GPGPU.
 
No, but I believe in Occam's Razor. Some consumers have fallen for Nintendo's Snake Oil of GPGPU.
What's the point? With the DS, Nintendo simply didn't talk about the technology. Same with Wii. Same with 3DS. It's been their MO for roughly a decade. They usually do not consider technology a selling point. Why should they do it now? Why focus on this particular feature? Why not focus on the eDRAM instead, which is there, will be used, and has obvious benefits? And why highlight the feature in spec sheets aimed at developers, who would/should know better? Not to mention most consumers don't even know what GPGPU means, and neither do they care. Think about it for a second, and you'll probably agree that it makes no sense at all - unless there's more to it.
 
There's nothing more to it. Too many people drank too much Nintendo Kool-Aid. Nintendo themselves have not mentioned GPGPU as being the savior of the WiiU. It's the NDFers that have been hyping it as some WiiU savior. Once again, GPGPU is no magic bullet. It is no savior.
 
There's nothing more to it. Too many people drank too much Nintendo Kool-Aid. Nintendo themselves have not mentioned GPGPU as being the savior of the WiiU. It's the NDFers that have been hyping it as some WiiU savior. Once again, GPGPU is no magic bullet. It is no savior.
And once again: Nintendo did mention it (I even posted when and where, so just look it up if you don't believe me). Not as a "savior", because there's no indication that the system needs one to begin with, and there definitely wasn't any such indication at the time they mentioned it.
 
Wii U was made for quick PS360 ports until Nintendo starts dropping the 1st party games that everybody has actually been waiting for.

There is no hidden "GPGPU" function that developers have yet to discover and use to propel Wii U performance to new heights; it's simple. The Wii U CPU was made to be backwards compatible with the Wii and cheap to develop first; performance came second.

Every 3rd party game that you could consider demanding (BO2, Arkham City, ME3) has performance issues in more CPU-intense scenes on Wii U. ACIII has them too, only Digital Foundry hasn't yet made a comparison. The game is locked at 30fps, but it drops to the mid-20s in even the most basic scenes (the 360 version runs those at an average of 35fps), so I wouldn't expect much more from the Wii U on the 3rd party front. Can't even imagine how badly GTAV would chug on it...
 
Except the whole GPGPU stuff isn't based on rumors; it's a feature highlighted by Nintendo itself, both in public presentations and in the documentation for developers.

Still doesn't change the fact that most of us here would consider the stuff GPGPU is good for to be superficial features - it can't really help you run gameplay-related code or critical rendering engine work.


Think of it as having the ability to add another 100K particles into your scenes that can't have any effect on gameplay or anything happening; they're basically just dressing. You can design a game from the ground up to have visuals that rely on this feature and produce some nice results - but you can't take an existing game's code and just make some critical elements of it use GPGPU instead of the CPU.
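A toy sketch of what that kind of purely visual workload looks like (hypothetical and engine-agnostic - none of these names come from any real codebase): every particle is updated independently from a tiny amount of read-only input, and the output is only ever consumed by the renderer, so nothing feeds back into gameplay code. That's exactly why it maps nicely onto a GPU, and also why it stays "dressing".

```c
/* Hypothetical visual-only particle update. Written as a plain C loop;
 * on a GPU each iteration would simply become one thread, since no
 * particle depends on any other and nothing is read back by game logic. */
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos, vel; float life; } Particle;

void update_particles(Particle *p, size_t count, Vec3 gravity, float dt)
{
    for (size_t i = 0; i < count; ++i) {   /* embarrassingly parallel */
        p[i].vel.x += gravity.x * dt;
        p[i].vel.y += gravity.y * dt;
        p[i].vel.z += gravity.z * dt;

        p[i].pos.x += p[i].vel.x * dt;
        p[i].pos.y += p[i].vel.y * dt;
        p[i].pos.z += p[i].vel.z * dt;

        p[i].life -= dt;                   /* renderer just skips dead ones */
    }
    /* Results go straight into a vertex buffer for drawing; the CPU never
     * needs them, which is what makes this cheap to offload - and also
     * what keeps it from helping gameplay or engine-critical code. */
}
```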
 
Sebbi, your insight into game programming and your willingness to share your knowledge are both exceptional. Just wanted to say that I'm really grateful for your participation in this forum.
 
No, you don't understand the reality of the situation. I suggest you reread the posts of other developers and well-informed posters such as Sebbi, Erp, and Shifty Geezer. GPGPU is no magic bullet. Please stop with the silly dreams. Yours will just get crushed, as GPGPU will not do anything to vastly improve the performance of the WiiU.

[EDIT: Function, I truly hope you were being tongue-in-cheek sarcastic. If so, it was missed the first time I read your post.]

Of course he is joking.
 
And once again: Nintendo did mention it (I even posted when and where, so just look it up if you don't believe me). Not as a "savior", because there's no indication that the system needs one to begin with, and there definitely wasn't any such indication at the time they mentioned it.

Nintendo's claims that it supports GPGPU don't mean anything. You could probably find some useful non-graphical task that could be performed on the Xbox 360's GPU too; that's not to say it'd be a good use of resources. I don't know what you're really expecting here... nVidia and AMD's latest GPUs are far ahead of R700 in terms of programmability/GPGPU features. There isn't some special feature that AMD could have added to Wii U's GPU that would have suddenly put it ahead. So regardless of whether or not it's useful and practical for some real-world code that isn't running in the standard rendering pipeline (and I'm sure it is), if physics engine developers feel that the latest crop of PC GPUs are a poor fit for their core functionality then I guarantee you the same will hold for Wii U.

Wii U's GPU is going to largely remain a streaming processor. If the tasks need heavy caching to work well then they're going to bomb on GPGPU. Having slow main RAM only makes the situation worse, particularly when you don't want to spare any eDRAM for non-GPU tasks (which you normally wouldn't, I'd think, especially if you're texturing as much as you can from there).

The scary thing is that I have seen people state that and not be joking.

function is satirizing these people. He was exaggerating and trying pretty hard to look silly on purpose. If there are people who would phrase it the way he did and actually mean it then there's something pretty wrong with them.
 
Anyway, the problem I see here is that, as stated several times, Nintendo themselves highlight the feature. Why should they? They usually simply don't talk about tech; this was a very rare exception. Do you think they fell for some AMD snake oil?
Not at all. Wuu's GPU is definitely 100% capable of GPGPU. Only we don't know to what degree, and how that helps in games. Xenos and RSX are also capable of GPGPU work - the origins of GPGPU were in using graphics functions on non graphics data by formatting the data into a way that matched the graphics functions. It's a very clever, innovative solution to extracting performance from limited hardware. AMD and nVidia and friends (MS, Khronos Group) have taken these ideas and worked towards improving the flexibility of the GPUs to enable a broader range of workloads to be performed, but GPGPU itself isn't a feature that is or isn't present in a GPU. Like DirectX - a GPU isn't DirectX or not; a GPU has a degree of hardware support for various features in DirectX. A GPU isn't a GPGPU or not; it'll support a number of features that aid GPGPU work. We have no details on what Wuu's GPGPU level is, and can only guess by likely GPU architecture.

I tend to believe that hardware manufacturers like AMD or Nvidia usually have to follow PC paradigms, even more so considering neither of the two is the market leader. It makes no sense putting too much time and money in a feature nobody will use, especially not if it requires costly and rarely used additions or compatibility breaking changes to the hardware. In the embedded space, there's no reason to hold back.
GPGPU is finding its way into supercomputers. It's on all the GPU roadmaps. It's a feature that has the complete investment of the GPU IHVs as a necessary component to remain competitive, even if devs aren't using it particularly well in PCs yet.

And I'm afraid a few LinkedIn profiles got changed/erased after one or two got too much attention, so you probably won't find the sources for my claims anymore. But Nintendo has (or had) people working on that stuff. 3rd party middleware optimizations, I mean - I never found a concrete mention of GPGPU either.
They can also buy in people with experience, such as from AMD or nVidia. However, it's a field still in its early days and there's no way Nintendo can be trusted to bring broad expertise that helps other devs refactor their game engines - there's no reason for them to have this superior expertise unless they've heavily invested in research for years prior to Wuu's release. Furthermore, a GPU cannot (yet) replace a CPU in terms of the types of code it can handle or the types of jobs it can do. I'm no expert on game code, so I could entertain the notion of things like physics being executed on Wuu's GPU, but the likes of ERP are telling us otherwise. Nintendo's commentary about GPGPU is thus pretty meaningless. Yes, they did highlight it, which is rare for Nintendo, but then they were facing an uphill struggle dealing with the game media, and throwing them a bone of optimism seems a likely PR move.

If you just follow all the positive PR surrounding Wii and Wii U - all the talk of amazing secret features, special abilities, and Nintendo expertise - every single one was bunk. Is there really reason to think that this time Nintendo has a technological feature that'll make all the difference, especially in light of the evidence to the contrary? The GPGPU capabilities of Wuu will have some support roles, I'm sure, but it's highly unlikely that the GPU will be doing a lot of the game code heavy lifting.
 
Would it be safe to say that GPGPU is at least an order of magnitude more difficult than regular programming? Even compared to the Cell, GPGPU is still quite a bit harder if you want to do anything useful/important/critical for the game code, isn't it?
 
And once again: Nintendo did mention it (I even posted when and where, so just look it up if you don't believe me). Not as a "savior", because there's no indication that the system needs one to begin with, and there definitely wasn't any such indication at the time they mentioned it.
We have no details on what Wuu's GPGPU level is, and can only guess by likely GPU architecture.
Nintendo did mention compute shader support (I guess wsippel refers to that). But at the most basic level this just means the GPU can execute shader programs outside of the graphics pipeline (so you don't need to set up the complete pipeline with pass-through vertex/geometry shaders and render a quad of the desired size, with your shader disguised as a pixel shader, to an offscreen target). That basic level got introduced in the R700 generation (plus the LDS) and basically comes for free.
 
I think the 360 has been able to do that since 2005 thanks to MEMEXPORT. Sebbbi talked about using it in Trials Evolution - iirc he said the 360 was beyond even DX 10.1 in that regard. If Nintendo allowed a similar feature through their WiiU API then they would have no reason not to list compute shaders too.

It wouldn't have to mean that Nintendo had rearchitected AMD's graphics chip to be a GPGPU monster though, or that they had solved all the issues around GPU physics, or that it was the reason they chose such a weak CPU, etc., as wsippel seems to be implying.
 
So, WiiU edram bandwidth.

Looking at this (http://www.eurogamer.net/articles/digitalfoundry-black-ops-2-wii-u-face-off):

Digital Foundry said:
What's interesting about the read-out overall is that similar events can stress all three engines, but it's the extent to which frame-rates are impacted that varies dramatically. The initial scene doesn't look too promising for Wii U: indeed, we see three distinct performance bands - Xbox 360 at the top, PS3 in the middle and the new Nintendo console right at the bottom. It's clear that plenty of characters and full-screen transparencies are particular Achilles Heels for the Wii U, a state of affairs that persists in further clips later on. However, beyond that we see a fairly close match for the PlayStation 3 version in other scenarios and occasionally it even pulls ahead of the Sony platform.

... and looking at the missing trees in Darksiders 2 (which seem to use alpha textures and would fill the screen up close), it seems like the WiiU might have some issues. The GPU clock is 550 MHz and it probably has 8 ROPs, so triangle setup and raw fillrate shouldn't be the issue, but bandwidth might be.
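(Taking those numbers at face value, peak fillrate would indeed be in the same ballpark as Xenos' 8 ROPs at 500 MHz, so it's hard to blame raw pixel throughput:

$$8 \times 550\ \text{MHz} = 4.4\ \text{Gpixels/s} \quad \text{vs.} \quad 8 \times 500\ \text{MHz} = 4.0\ \text{Gpixels/s}$$

)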

So what kind of bandwidth would we be looking at? I started thinking about the PS3, where RSX had its 256-bit memory bus split into two buses (sort of), with a 128-bit bus connected to GDDR3 and the other half bent around and pointing at the CPU with that FlexIO thing.

So then I thought (perhaps a little naively) "the WiiU has a 64-bit bus going to main memory, what if it had another 64-bit bus (the "other channel") pointing at the edram?" The simplest way might be to run both channels at the same speed and simply address the different banks of memory sequentially. So I wanted to compare data transfer rates, and it seems that the 40nm edram process from NEC can scale up to 800 MHz:

http://www.simmtester.com/PAGE/news/shownews.asp?where=533424&num=10720

... which is the possible (likely?) clock of the DDR3 the WiiU uses. Would this be possible? Can edram be accessed using DDR data rates/protocols/whatever? Could ~13 GB/s of video memory bandwidth be in the right kind of area for what we're seeing? Seems very low, but, yunno .... Nintendo, and that.
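For what it's worth, the arithmetic behind that ~13 GB/s guess: a 64-bit interface at DDR3-1600 data rates (800 MHz clock, double data rate) gives

$$\frac{64\ \text{bit}}{8} \times 1600\ \text{MT/s} = 8\ \text{B} \times 1.6\ \text{GT/s} = 12.8\ \text{GB/s},$$

i.e. the same per-channel figure as the main DDR3 bus.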

MSAA, transparencies and lots of z-tests (no fancy CPU for culling on the WiiU) are the kind of things that should eat up available frame buffer bandwidth. How common are these things on WiiU and what kind of performance do we see where they occur?
 
So then I thought (perhaps a little naively) "the WiiU has a 64-bit bus going to main memory, what if it had another 64-bit bus (the "other channel") pointing at the edram?" The simplest way might be to run both channels at the same speed and simply address the different banks of memory sequentially.

... which is the possible (likely?) clock of the DDR3 the WiiU uses. Would this be possible? Can edram be accessed using DDR data pins? Could ~13 GB/s of video memory bandwidth be in the right kind of area for what we're seeing? Seems very low, but, yunno .... Nintendo

No. Just no. No.
 
No to what? Can't access edram using something like a DDR3 bus?

Or no to the bandwidth? Because as bad as that sounds - and it's just a far-out bit of pondering, I'm not pushing it as Truth - an A10 5800K with the same aggregate bandwidth absolutely kicks the WiiU's face off.

Hell, Llano / A8 will kick its face off with less.

And when the PS3 is beating you at transparencies ....

Edit: here you go, performance scaling with bandwidth on Trinity. At DDR3 1600 (and lower) you have something vastly beyond the WiiU, with probably 4X the CPU (more?) and a much beefier GPU.
http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224-5.html

And the Batman: Arkham City bit is somewhat topical:
http://www.tomshardware.com/reviews/a10-5800k-a8-5600k-a6-5400k,3224-15.html
 
No to what? Can't access edram using something like a DDR3 bus?
You probably COULD, but it wouldn't make sense, as DDR3 and the like are designed as off-chip interfaces, to tolerate use with add-in memory modules (DIMM, SODIMM, etc.) and so on.

To have eDRAM and NOT have massive on-chip bandwidth would be completely illogical, as the whole - in fact ONLY - point of putting DRAM straight on the chip is to provide large amounts of bandwidth.

To have a (presumably large) chunk of eDRAM with piddly bandwidth would not be a help but a hindrance: instead of a big, expensive, fast pool of memory you'd have a big, expensive, SLOW one. That cost could have been sunk into something else that would have provided a better return on investment.
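To put a purely hypothetical number on that (the 1024-bit width below is just an example, not a known spec): even a modest on-die interface dwarfs a 64-bit external bus. A 1024-bit connection at the 550 MHz GPU clock would already give

$$\frac{1024\ \text{bit}}{8} \times 550\ \text{MHz} = 128\ \text{B} \times 0.55\ \text{GHz} \approx 70\ \text{GB/s},$$

which is the kind of figure that actually justifies spending die area on eDRAM.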
 