Wii U hardware discussion and investigation *rename

There were rumours that the Wii U GPU was around 500 GFLOPS (I think they got lherre to do a hot-cold thing to narrow down the range), possibly an RV730/740 variant.

Is that borne out by the games to date? Is the GPU (and visuals) being held back by slow RAM & eDRAM?

Or should we revise this down to something more like 350 GFLOPS based on what we're seeing (no resolution increases, no better AA/AF, etc.)?
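
For reference, the usual back-of-the-envelope behind these figures is GFLOPS = ALUs * 2 FLOPs per MADD * clock. The clocks below are my assumptions, not known numbers: an RV730-class part (320 SPs) at 550 MHz would give 320 * 2 * 0.55 = 352 GFLOPS, which is basically the low-end estimate, while hitting ~500 GFLOPS would need either a much higher clock (~780 MHz on 320 SPs) or an RV740-class 640 SPs running at a rather low ~390 MHz.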
 
Revise it down. With that kind of GPU, even with these constraints, the games would show some marked improvement over the other consoles' GPUs, especially in GPU-related tasks. So far that hasn't been the case in any of the games, whether multiplatform or first party.

350 GFLOPS seems fair to me. Although which games have better AA or AF?
 

Yeah, that's what I thought, especially if it had 2x the flops of Xenos plus the efficiency gains of a more modern architecture.

I meant that we haven't seen games with better AA/AF
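
To put a number on "2x the flops of Xenos": Xenos is usually counted as 48 ALUs * (4 + 1 co-issued lanes) * 2 FLOPs per MADD * 500 MHz = 240 GFLOPS, so doubling it lands almost exactly on the ~500 GFLOPS the rumour suggested.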
 
Well yes, that's even worse. You'd expect it to at least have better anti-aliasing, but AA that's present in games on the other platforms seems completely absent on Wii U.
 

Well, I'm just gonna wait and see what ground-up games look like before I dig a grave for this console. We really don't know anything; it's all over the place.
 
Yes, he was misquoted. He stated as much right here.

I’m a bit reluctant to talk about the Wii U specs, when I mentioned something earlier it got totally blown out of proportion and twisted

Not to put words in his mouth, but that's basically the definition of cover-your-ass mode if I've ever seen it.
 


I dig it, but that was the first of two occasions when he was misquoted. I think it was just websites trying to make something out of nothing. Just like the producer of Sonic & All-Stars Racing saying the Wii U version of the "game" was on par with the PS360 versions: they flipped it and posted that he said the Wii U (hardware) was on par with PS360.
 

It's possible that it has more ALUs but it's still held back in other areas compared to PS360.

Personally, I just don't see them going for 2-3x better floating point performance when everything else indicates they were aiming to match current gen consoles. The 90% of a PS3 figure seems to hold fairly well based on what we've seen.
 
*ahem* This thread is about technical hardware discussion, not about blind faith or worthless PR statements and lies. Can we please stop with all those worthless words and get back to the technical bits?

I moved all the blind-faith items into a slightly less technical thread over here: http://forum.beyond3d.com/showthread.php?t=62800

Keep those sort of discussions out of this technical thread.
 
We still don't have any numbers on the eDRAM, do we? Damn... I've really been wanting to figure out the bandwidth. It has to be a huge part of why the Wii U can run these games, considering the very terrible speed of the main RAM.
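
Just to illustrate how such a number would fall out once we get figures (the width and clock below are pure assumptions on my part, not leaks): eDRAM bandwidth is basically bus width * clock, so a 1024-bit interface at a 550 MHz GPU clock would be 128 bytes * 0.55 GHz = 70.4 GB/s, and a 2048-bit one double that. Either would dwarf the main RAM's ~12.8 GB/s.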
 
In which case, why spend the cash on a fast GPU? That wouldn't be very Nintendo-like (or very sensible), would it?

From what I understand, the bandwidth restriction of the Wii U comes primarily from the fact that it's shared between read and write, whereas the 360 can do both at once, effectively doubling its bandwidth. Also, is it not the case that, because of the 360's low amount of eDRAM, resolves to main memory are often required? This would be quite nasty for Wii U. If an engine were modified for primarily read-only operations to main RAM, that could provide some significant performance enhancements over the 360, could it not, given that the Wii U eDRAM is likely operating at some mental speed compared to main RAM?
 

What are you talking about exactly? If you're referring to the GDDR3 on the Xbox 360, it's not dual-ported for reads and writes like you say. It just happens to be twice as wide (but at a lower clock speed: 700MHz vs 800MHz).

The eDRAM is a different story. It's not so much that it's fully read/write dual-ported as that it has built-in logic for certain read-modify-write operations, which can be optimized vs separate read and write. But we don't know anything about Wii U's eDRAM and whether it's similar to this or not, unless you do somehow know something.
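
Putting numbers on the "twice as wide" bit: bandwidth = bus width * data rate, so the 360's 128-bit GDDR3 at 700 MHz is 16 bytes * 1.4 GT/s = 22.4 GB/s, while a 64-bit DDR3 interface at 800 MHz (the configuration the Wii U is assumed to have) is 8 bytes * 1.6 GT/s = 12.8 GB/s.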
 
Just to point something out:
"However while developing Tag 2, since it’s very graphic intensive because of the multiple characters on screen,"

From the official blurb:
"Multiplayer Match: Fight in a 2-on-2 tag-team battle, 1-on-1 or 1-on-2
Pair Play: The ultimate team brawl with up to 4 players each controlling a character"
So "multiple characters" would be 4.
 

I can't find where I read that about simultaneous read/write, so I could be misremembering, and I apologise if I've misdirected the thread a bit, but I was pretty sure that's what someone was suggesting. I guess in the end it's a similar result: devs on the 360 are more than happy (and, for things like tiling, actually required) to use main memory for writing, whereas the Wii U's eDRAM pool might very well be able to alleviate the reliance on main memory writes.

What I'm wondering, though, is whether 32MB of eDRAM is enough for most applications (let's assume 720p)?
 
Enough for what exactly?
Framebuffer, backbuffer, G-buffer (deferred rendering) will probably fit there easily. Alpha blended particles, secondary render targets too.

Big cascaded shadow map buffers, now that's something else.

All textures necessary to render a scene without accessing the slow main memory, definitely not.
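
Rough numbers at 1280x720 (921,600 pixels), assuming 32 bits per pixel per target: one target is 921,600 * 4 bytes = ~3.5 MB, so colour + Z is ~7 MB, and even a four-target G-buffer plus depth is 5 * 3.5 = ~17.6 MB, comfortably inside 32MB. A single 2048x2048 32-bit shadow map is already 16 MB, though, so three or four cascades at that size simply don't fit; that's why the shadow buffers are the awkward case.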
 
So the slow main memory is going to detract from what can be done, then, even with the eDRAM. Makes you wonder if Nintendo knew about that going in. Well, they would have had to. But to okay such a lopsided configuration...
 
Well, people managed to do incredible things with the PS2's 4MB of eDRAM, using it as a sort of manually managed texture cache - although games at that time tended to use a lot of tiling.

Still, it should be possible on the Wii U to work in a way like this:
- start rendering the frame, allocate say 16MB of texture memory, 8-10MB frame buffer, 6MB for shadows or something
- calculate shadow buffers for the ground
- upload textures for the ground, render ground polygons
- calculate shadows for vegetation
- upload textures for vegetation, render that
- calculate shadows for characters
- upload textures for character #1, render that etc. etc.
- upload textures for smoke / fire / dust effects, render that

I suppose there are state switches and such as the GPU processes the various objects in the scene anyway, so it should be possible to structure memory in a way that provides various caches that get refilled many times throughout the rendering of a single frame. It would take a lot of painstaking optimization to split the scene into chunks that won't overload these relatively small caches, and it could have a bad effect on efficiency and GPU utilization; but it could still be better than trying to read textures from main RAM. Something like the sketch below.
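
A minimal C++ sketch of that frame loop, assuming a purely invented API (renderShadows, uploadTexturesToEDRAM and drawChunk are stand-ins for whatever Nintendo's real GX2-level calls look like, and the 16MB texture window is the allocation guessed at above):

#include <cstdint>
#include <cstdio>
#include <vector>

// Assumed split of the 32MB eDRAM: ~16MB left as a manually managed texture window.
constexpr uint32_t kTextureWindowBytes = 16u * 1024 * 1024;

struct Chunk {
    const char* name;
    uint32_t textureBytes;  // textures this chunk samples while drawing
};

// Hypothetical stand-ins for the real upload/draw entry points.
void renderShadows(const Chunk& c)         { std::printf("shadow pass for %s\n", c.name); }
void uploadTexturesToEDRAM(const Chunk& c) { std::printf("DMA %u bytes for %s\n", c.textureBytes, c.name); }
void drawChunk(const Chunk& c)             { std::printf("draw %s\n", c.name); }

int main() {
    // Scene split into pieces whose working sets each fit the texture window.
    std::vector<Chunk> frame = {
        {"ground",     12u << 20},
        {"vegetation", 10u << 20},
        {"characters",  8u << 20},
        {"effects",     4u << 20},
    };

    for (const Chunk& c : frame) {
        if (c.textureBytes > kTextureWindowBytes) {
            std::printf("%s: working set too big, split it further\n", c.name);
            continue;
        }
        renderShadows(c);          // refill the shadow slice for this chunk
        uploadTexturesToEDRAM(c);  // refill the texture slice from main RAM
        drawChunk(c);              // sample only from fast eDRAM while drawing
    }
    return 0;
}

The painful part, as said, is the splitting: each refill stalls unless the uploads for chunk N+1 can overlap the drawing of chunk N.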


Or maybe it would work well with a virtual texturing approach, even if it's not completely unique texturing like Rage; the very reason to implement such a system is limited video memory. Carmack first wrote about it when we had 16-32MB video cards running 1024x768 at 32-bit colour, and he specifically mentioned eDRAM. Granted, that was in March of 2000 (!) and a lot has happened in the 12 years since...
 
Please correct me if I am wrong: with deferred shading, don't you only run pixel shaders once per pixel*? How many texture reads does the average game perform for each pixel?

Considering a resolution of 1280x720 at 30 fps, even without taking caches into account, you perform
R = 1280 * 720 * 30 * Z
texture reads, with Z being the average number of texture reads required to shade each pixel. Assuming 32-bit textures and 12.8 GB/s of main memory bandwidth, and ignoring anisotropic filtering (which I suppose makes pretty good use of texture caches):
R * 4 bytes < 12.8 GB/s
Z < ~124 texels per shaded pixel

This means that for each pixel shaded, up to 124 texels can be read from main memory at the given resolution and frame rate. That seems more than enough for current-generation games, and it doesn't sound like a terrible limitation even if only half the bandwidth is actually usable for texturing and the game is running at 60 fps (up to 31 texels per shaded pixel). A forward renderer might be significantly less efficient if a lot of overdraw is present, though. Does the allegedly high bandwidth requirement for texturing depend on some pretty big efficiency loss that I am not aware of?

* You might still need separate rendering of transparent objects, though.
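
A quick sanity check of that arithmetic (the 12.8 GB/s and 4 bytes per texel are from above; note the ~124 figure assumes binary gigabytes):

#include <cstdio>

int main() {
    const double pixels        = 1280.0 * 720.0;            // shaded pixels per frame
    const double bandwidth     = 12.8 * 1024 * 1024 * 1024; // bytes/s (binary GB)
    const double bytesPerTexel = 4.0;                       // 32-bit texels
    const int rates[] = {30, 60};
    for (int fps : rates) {
        // Z_max = bandwidth / (pixels * fps * bytes per texel)
        std::printf("%d fps: Z < ~%.0f texels per shaded pixel\n",
                    fps, bandwidth / (pixels * fps * bytesPerTexel));
    }
    return 0;
}

This prints ~124 at 30 fps and ~62 at 60 fps; halving the usable bandwidth at 60 fps gives the 31 quoted above.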
 
You might want some memory B/W for the CPU, audio, I/O, video scanout (which at 1080p isn't peanuts when you only have 12-ish GB/s to work with), etc... :p
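
Scanout alone at 1080p60 is 1920 * 1080 * 4 bytes * 60 Hz = ~0.5 GB/s, i.e. roughly 4% of the 12.8 GB/s gone before the CPU, audio or I/O even touch the bus.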
 
From what I understand, the bandwidth restriction of the Wii U comes primarily from the fact that it's shared between read and write, whereas the 360 can do both at once
No, the 360, like any other system using standard commodity DRAMs, cannot both read and write to main memory at the same time. Don't get confused by the CPU-to-GPU bus having separate read and write lines; that's just a convenience to avoid bus turnaround penalties slapping you and increasing what is already quite high main memory latency (from the CPU's standpoint, that is).
 