Accordingly, the GPU would be clocked at 486MHz (4 x 121.5), and the eDRAM at either 486 or 729MHz.
The eDRAM can't be 729MHz; that is too fast. 600MHz at most would be reasonable.
Aligned clock rates can decrease latency.
Accordingly, the GPU would be clocked at 486MHz (4 x 121.5), and the eDRAM at either 486 or 729MHz.
Where does this fit in?
http://n4g.com/news/1123950/rumor-wii-u-cpu-could-be-clocked-at-1-22ghz
It's very close to, but not quite, 10 x 121.5.
On the RAM though, that's even less than the previously assumed worst-case bandwidth. Ouch. It would be more like 11.7GB/s, half the 360's. Just ouch.
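Just to put numbers on it, here's where 11.7GB/s comes from, assuming the commonly reported 64-bit (4 x 16-bit) DDR3 bus; that bus width is an assumption on my part, not part of the rumor:

# Rough DDR3 bandwidth estimate for the rumored 729MHz clock.
# The 64-bit bus width is an assumption, not a confirmed spec.
io_clock_hz = 729e6                    # rumored I/O clock (6 x 121.5MHz)
transfers_per_sec = io_clock_hz * 2    # DDR: two transfers per clock
bus_width_bytes = 64 / 8
bandwidth_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(bandwidth_gb_s)                  # ~11.7 GB/s, vs ~22.4 GB/s for the 360's GDDR3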
Pretty sure the DDR3 isn't clocked at 800MHz. It should be 729MHz. I was told the DSP would be running at 120MHz. Looking at Nintendo's MO, it's probably not really 120MHz, but 121.5MHz - same base clock as the Wii. Nintendo likes clean multipliers, so I would assume the RAM to be clocked at 729MHz (6 x 121.5). Same as the Wii CPU. Nintendo likes to keep RAM and CPU in sync, so the CPU should be running at 1458MHz (12 x 121.5). Accordingly, the GPU would be clocked at 486MHz (4 x 121.5), and the eDRAM at either 486 or 729MHz.
I don't know why Nintendo always seems to do this. I guess using a single fixed base clock and only changing multipliers for various components is simpler. And it definitely gives more predictable results. I don't see Nintendo giving that up.
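Spelling out the multiplier guesswork in the post above (these multipliers are speculation, nothing confirmed):

# Speculated Wii U clocks, all derived from the Wii's 121.5MHz base clock.
# The multipliers are guesses from the post above, not confirmed figures.
base_mhz = 121.5
multipliers = {"DSP": 1, "GPU": 4, "DDR3": 6, "eDRAM": 6, "CPU": 12}
for part, mult in multipliers.items():
    print(f"{part}: {mult} x {base_mhz}MHz = {mult * base_mhz}MHz")
# DSP 121.5, GPU 486, DDR3 729, eDRAM 729 (or 486 at 4x), CPU 1458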
I have to say, though, that given what we know today, it seems to punch above its weight even at this point in time. There are a number of multi-platform ports on the system, at launch day with all that implies, that perform roughly on par with the established competitors. And those games were not developed with the greater programmability, nor the memory organization, of the WiiU in mind. So even without having its strengths fully exploited, it does a similar job at less than half the power draw of its competitors on similar lithographic processes! And it's backwards compatible. To what extent its greater programmability and substantial pool of eDRAM can be exploited to improve visuals further down the line will be interesting to follow.
How what we have seen so far can be construed as demonstrating hardware design incompetence on the part of Nintendo is an enigma to me.
Developers have it wrong - the WiiU is powerful
OK, I just read through this article, and a bunch of the comments on top of that. I can't see why I should suffer alone so I dare some of you to read it as well. I warn you, you may feel your brain cells dying as you read through it.
Well, I read it and I am now enlightened.
"but it’s unlikely the PS4 and 720 will feature GPU’s that are any beefier than the Wii U"
@strange Batman: Arkham City did suffer framerate drops when enabling PhysX. It's funny that the writer understands this and then later on forgets and takes the opposite stance.
Seriously, what did you expect from the ZeldaInformer website? If that doesn't spell Nintendo fanboys, I don't know what would.
So, what are the chances that the CPU and GPU have a fairly fast bus between them, and that the eDRAM on the GPU can be properly accessed by the CPU?
This certainly would allow some interesting possibilities.
I've read some people on NeoGAF saying things like "when developers build GPU-centric games, the Wii U will show its next-gen face"...
What does "GPU centric" mean? Can the devs stop using the CPU and begin to use the GPU for general-purpose code?
All this sounds like Nintendo fan talk (IMO).
They basically mean that when developers start making games for the Wii U hardware, the graphics will improve; at the moment it's only really running games that are ports and thus were not designed for the Wii U's architecture or memory layout.
From what I've seen, some ports on Wii U have better graphics in places and others show worse graphics. But the most obvious thing seems to be reduced shadow resolution, as shown in COD and Mass Effect.
However, if reduced shadows are all that's really suffering on ports, then I don't think the bandwidth problem is as bad as people think.
GPU centric:
If developers design their games around the WiiU architecture:
-They can run those same games at higher resolutions and higher frame rates with no tearing (ex: 1080p 60fps).
-They can run AAAA next-gen games at lower resolutions (sub-HD or 720p, 30 to 60fps) with a steady frame rate. By the time we see games like this come out, it will probably be mid to late 2014, depending on which developers are still around.
But how many AAAA games will we have next gen? And how many will be multi-plat and not first-party exclusives? Most likely, the majority of games will stay within the same budget ranges as this gen, with enhancements the WiiU is most likely designed to handle.
Conclusion
The CPU and memory are good enough to handle current-gen games.
Which is what Nintendo wanted this first year.
Nintendo appears to have put their money into low latencies and advanced memory controllers to offset memory bandwidth restrictions, so I'm not seeing the memory as a bottleneck. The bottleneck is the publishers, who decide the money, the budget, and the quality of the team working on ports and games in general.
Then again, publishers could also decide in the future to use the WiiU as a base and cheaply port games to Durango and the PS4. If those consoles are that much more powerful than the WiiU, they should run games made for the WiiU with no problem, which means visual and performance differences between WiiU games and the future consoles would be minimal. But that will depend on the target audience and install base the WiiU has by E3.
What's mean "gpu centric"? Can the devs stop using cpu and begin to use gpu for general purpose code?
Well, sort of, in a limited sense. If you are aware of what's been going on in the PC space, we have a lot of games that are doing physics on the GPU instead of the CPU:
maths-based stuff that is parallelizable, like particles (smoke etc.), object destruction, and waves.
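To give a rough idea of why that kind of maths parallelizes so well, here's a purely illustrative sketch (NumPy on the CPU, names made up for the example): every particle's update is independent of every other particle's, which is exactly the shape of work a GPU can spread across thousands of threads.

# Illustrative particle step: each element is updated independently,
# so the whole array could be processed in parallel on a GPU.
import numpy as np

n = 100_000
pos = np.zeros((n, 3), dtype=np.float32)          # particle positions
vel = np.random.randn(n, 3).astype(np.float32)    # particle velocities
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = 1.0 / 60.0                                   # one frame at 60fps

# One simulation step: no particle reads another particle's data.
vel += gravity * dt
pos += vel * dt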
Here's an example.
But we are talking about using the Wii U GPU for CPU things. I know your example, dual GPUs on PC for physics and so on, but a good GPU for graphics + a second GPU for GPGPU + a good CPU (PC) is not the same as a poor CPU + an (unknown) GPU handling both graphics and GPGPU.
I don't know if I am explaining my point well.
PhysX can be used with a single GPU.
This demo is essentially demonstrating what GPGPU is best used for.
I believe many people have stated that trying to get GPGPU to take over other CPU tasks is a big no-no.
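A made-up counter-example of the kind of CPU work that doesn't move to the GPU well: each step depends on the previous one and branches on data, so there is nothing to spread across thousands of threads.

# Branchy, strictly sequential game logic: a poor fit for GPGPU,
# since step i cannot start until step i-1 has finished.
def run_ai_state_machine(events):
    state = "idle"
    for e in events:
        if state == "idle" and e == "enemy_seen":
            state = "chase"
        elif state == "chase" and e == "enemy_lost":
            state = "search"
        elif state == "search" and e == "timeout":
            state = "idle"
    return state

print(run_ai_state_machine(["enemy_seen", "enemy_lost", "timeout"]))  # -> idle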