Wii U hardware discussion and investigation *rename

I was wondering where the next 8-core POWER8 console was coming from, after it was mooted and dropped (then brought up again, then dropped) for every console of the now-current generation.
 
Really no need to post that silly Fusion "rumour" here. It's beyond baseless and not even the silliest of silly Nintendo fans actually believe there's a shred of truth to it. Even the website which put it up claims they don't know who the source is, lol.

The concept is cool, though again it's patently unbelievable and probably not very sound.
 
Maybe someone with a much better understanding of the industry could give a guess as to what Nintendo could/should realistically go with? Someone who is actually working in CPU/GPU system design?

I've already put forth my suggestion, i.e. a multicore MIPS/PowerVR console/portable.

Another option would be a bunch of TK1 processors.
 
If Nintendo releases a system before 2016 and abandons the Wii U early, a lot of people who bought into it are going to be pissed (including me). Can they afford that bad rep right now, when they're trying to rebuild their reputation with the core gamer audience after the casual-focused Wii? I realise MS did start designing the Xbox 360 after the Xbox had been out for two years, when it was clear they were going to lose a lot of money on the thing, but they at least planned a four-year lifespan.
 
If the Wii U isn't selling and 3rd parties are barely making games for it, I wouldn't expect Nintendo to keep pumping out 1st-party games that don't have much of an audience to buy them. But it seems like Nintendo is out of touch with reality, so I have no idea what to expect.
 
I must be one of the few people who bought it for 3rd-party titles, then. Nintendo co-owns the Fatal Frame IP; that seems like it was made for the tablet, and they could start funding Japanese studios like Grasshopper to develop more mature titles.
 
*ahem* this is the Technical investigation thread about WiiU so keep all the garbage bullshit rumors about possible future Nintendo failings out of it. Go start some new rumor nontechnical thread in the appropriate general console forum.

Kthx, bye!
 
I have been fortunate enough to talk with a developer who has worked on a title for Xbox Live Arcade and is now working on an eShop game for the Wii U. When the Eurogamer article came out, it really confirmed some suspicions: memory bandwidth is not an issue (even though some still wanted to act like it was) and the GPU is quite a bit better than the current-gen consoles', but the CPU, on the other hand, has some issues. Like you guys here were saying, optimized 360 code that doesn't run well on the Wii U CPU most certainly translates to poorer SIMD capabilities in comparison to the 360 and PS3.

The developer I spoke with is an indie developer, and he seemed to believe that the CPU in the Wii U has plenty of performance for the majority of indie developers out there, but he could see how it might be a problem for AAA developers, especially when trying to port software to the Wii U. He did bring up a good point, though: if you have SIMD-heavy code from the 360/PS3, wouldn't it make sense to move that onto the GPU? It seems like the GPU offers quite a bit more performance than the current-gen consoles. I was told his game became fillrate-bound on the 360 and had to go to 600p to get the desired performance, while on the Wii U he is able to apply even more post-processing effects and render the game at 720p.
 
He did bring up a good point, though: if you have SIMD-heavy code from the 360/PS3, wouldn't it make sense to move that onto the GPU?

It would depend very much on the type of SIMD heavy code they'd want to port. GPU compute is more than just SIMD; it's wide SIMD with severe branching and synchronization penalties/deadlocks and potentially tens of milliseconds in startup latency (for GCN, which is touted as being significantly improved over older generations).
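
A toy way to picture the branching cost (purely illustrative C++; the 64-wide wavefront and the cycle counts are made-up assumptions, not any vendor's real numbers): when lanes in a wavefront disagree on a branch, the hardware runs both paths back to back with the inactive lanes masked off, so one stray lane can make the whole group pay for the expensive path.

```cpp
#include <array>
#include <cstdio>

// Toy model of lockstep SIMD execution: a "wavefront" of lanes runs together.
// If any lane takes a branch, the whole wavefront executes that path (with the
// other lanes masked), so divergent control flow pays for both sides.
constexpr int kWaveWidth = 64;  // assumed width, for illustration only

int wavefront_cost(const std::array<bool, kWaveWidth>& lane_takes_branch) {
    bool any_taken = false, any_not_taken = false;
    for (bool taken : lane_takes_branch) {
        if (taken) any_taken = true; else any_not_taken = true;
    }
    int cycles = 0;
    if (any_taken)     cycles += 100;  // made-up cost of the heavy path
    if (any_not_taken) cycles += 10;   // made-up cost of the light path
    return cycles;
}

int main() {
    std::array<bool, kWaveWidth> coherent{};   // no lane branches
    std::array<bool, kWaveWidth> divergent{};  // one lane branches
    divergent[0] = true;

    std::printf("coherent wavefront:  %d\n", wavefront_cost(coherent));   // 10
    std::printf("divergent wavefront: %d\n", wavefront_cost(divergent));  // 110
}
```

CPU SIMD units hit the same issue in principle, but with 2-4 lanes instead of dozens the penalty is far smaller, which is part of why not every SIMD-heavy job moves across cleanly.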
 
AMD's VLIW5 arch never ran most GPU compute tasks very well; I can't imagine Nintendo spent even a dime to pay for any actual compute-specific improvements to the ancient GPU they picked, considering how fucking CHEAP the entirety of the wuu hardware is.
 
I was told his game became fillrate-bound on the 360 and had to go to 600p to get the desired performance, while on the Wii U he is able to apply even more post-processing effects and render the game at 720p.

Not sure which fillrate (pixel or texel) he may be speaking of, but there is the +10% core clock and the newer shader/tex hardware (including the cache hierarchy) to consider. (RV7xx RBEs are also quad-rate for depth-only, IIRC.)

Anything after R6xx (additional L2) should be better than Xenos' 32kB L1 texture cache (saving bandwidth to a degree). *shrug*
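
On the fillrate point, some rough arithmetic (assuming the common 1024x600 framebuffer for "600p" and the rumoured 500 vs. 550 MHz core clocks; all of it back-of-the-envelope):

```cpp
#include <cstdio>

// Back-of-the-envelope: how much more pixel fill does 720p need over "600p",
// and how much of that does a ~10% core-clock bump cover on its own?
// The 1024x600 figure and the 500 -> 550 MHz clocks are assumptions here.
int main() {
    const double px_600p = 1024.0 * 600.0;   //   614,400 pixels per frame
    const double px_720p = 1280.0 * 720.0;   //   921,600 pixels per frame

    std::printf("pixel ratio 720p/600p: %.2fx\n", px_720p / px_600p);  // ~1.50x
    std::printf("clock ratio 550/500:   %.2fx\n", 550.0 / 500.0);      //  1.10x
    // A 10% clock bump alone doesn't cover a 50% jump in pixels per frame;
    // the rest would have to come from the newer RBEs/caches and eDRAM bandwidth.
}
```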
 
It would depend very much on the type of SIMD heavy code they'd want to port. GPU compute is more than just SIMD; it's wide SIMD with severe branching and synchronization penalties/deadlocks and potentially tens of milliseconds in startup latency (for GCN, which is touted as being significantly improved over older generations).

Like I said, being an indie developer he wasn't finding any trouble with the CPU, so for him there was no reason to look to the GPU for any assistance; hence he wouldn't be that familiar with how well that would work.

Do you guys know specifically which tasks in games today are SIMD-heavy on the CPU? I was reading up on Havok's physics engine, and it seems they have a SIMD and a non-SIMD version available, but I couldn't really find much info on how much of a speed-up the SIMD version gives.
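
From what little I could find, the usual suspects seem to be physics and collision math, skinning, frustum culling, particle updates and audio mixing. The kind of loop where a SIMD build pulls ahead looks roughly like this (a hypothetical sketch using SSE intrinsics as a stand-in for the consoles' VMX units, not anything from Havok): four floats per instruction instead of one, provided the data is laid out to allow it.

```cpp
#include <xmmintrin.h>  // SSE intrinsics, standing in for VMX/AltiVec here
#include <cstddef>

// Scalar integration step: one float at a time.
void integrate_scalar(float* pos, const float* vel, std::size_t n, float dt) {
    for (std::size_t i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// SIMD integration step: four floats per instruction. Assumes n is a multiple
// of 4 and the arrays are 16-byte aligned (structure-of-arrays layout).
void integrate_simd(float* pos, const float* vel, std::size_t n, float dt) {
    const __m128 vdt = _mm_set1_ps(dt);
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 p = _mm_load_ps(pos + i);
        __m128 v = _mm_load_ps(vel + i);
        _mm_store_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
    }
}
```

The catch, as I understand it, is that the data has to be laid out for it (structure-of-arrays, aligned), which is presumably why the SIMD and non-SIMD builds are separate code paths.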

I was also told that the feature set for the Wii U GPU is far better than that of the 360. I'm not sure how much that really matters, since I seem to remember an article saying that on the PS3 and 360, developers were actually using the CPU to perform certain effects that were beyond DX9. Do you guys know if that is true?
 
I was also told that the feature set for the Wii U GPU is far better than that of the 360.

Most sources (rumours) point to the Latte being derived from the R7xx family, which is DX10.1 whereas the Xenos is DX9+, so it should be better.



I'm not sure how much that really matters, since I seem to remember an article saying that on the PS3 and 360, developers were actually using the CPU to perform certain effects that were beyond DX9. Do you guys know if that is true?

Yes, it's true. AFAIK it was more frequent on the PS3 because its G70-derived RSX is a lot more limited than Xenos but Cell proved to be efficient for some of those tasks.
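
The typical case was full-screen work: once the GPU had finished (or simply lacked the feature), the frame got post-processed in software, with Cell's SPUs chewing through it in tiles; SPU-based MLAA was probably the best-known example. A stripped-down, single-threaded sketch of the idea (illustrative only; the real jobs were hand-vectorised and streamed through SPU local store):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative CPU post-process: an exposure/tone-map pass over an 8-bit RGBA
// framebuffer, the sort of full-screen job that got offloaded to the CPU/SPUs
// when the GPU couldn't (or shouldn't) do it.
void tonemap_cpu(std::vector<std::uint8_t>& rgba, float exposure) {
    for (std::size_t i = 0; i < rgba.size(); ++i) {
        if (i % 4 == 3) continue;              // skip the alpha channel
        float c = (rgba[i] / 255.0f) * exposure;
        c = c / (1.0f + c);                    // Reinhard-style curve
        rgba[i] = static_cast<std::uint8_t>(std::min(c, 1.0f) * 255.0f + 0.5f);
    }
}
```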
 
Imo the best thing that could happen to the Wii U now is the gamepad getting hacked wide open. Let's see what some PC indie developers can come up with; then Nintendo could be inspired by some of the ideas.

AMD's VLIW5 arch never ran most GPU compute tasks very well; I can't imagine Nintendo spent even a dime to pay for any actual compute-specific improvements to the ancient GPU they picked, considering how fucking CHEAP the entirety of the wuu hardware is.
They should've gone with a cheap Fermi architecture rather than the R7xx-based Radeon. I've read Fermi can switch between GPGPU and graphics instructions in the same clock cycle.
 
They would have picked the same bus size regardless, hence the eDRAM.


edit:

Nevermind getting nVidia to do a custom design as needed... AMD have the ATi/ArtX IP for Gamecube/Wii as well.
 
Imo the best thing that could happen to the Wii U now is the gamepad getting hacked wide open. Let's see what some PC indie developers can come up with; then Nintendo could be inspired by some of the ideas.


They should've gone with a cheap Fermi architecture rather than the R7xx-based Radeon. I've read Fermi can switch between GPGPU and graphics instructions in the same clock cycle.
And I've read Fermi can't have graphics and compute work loaded into an SM at the same time.
 
Nvidia claimed, around the time Fermi launched, to have very fast thread-switching delays (~2ms, or whatever), although I'm not sure that was anything but marketing fluff. Run compute and graphics simultaneously on a GeForce 680 (one generation ahead of Fermi, so it should be "even" better at these things), and you're met by stutter city.
 
Nvidia claimed, around the time Fermi launched, to have very fast thread-switching delays (~2ms, or whatever), although I'm not sure that was anything but marketing fluff. Run compute and graphics simultaneously on a GeForce 680 (one generation ahead of Fermi, so it should be "even" better at these things), and you're met by stutter city.

My GK104 cards don't stutter when I activate PhysX, at least. Neither did my previous GTX580.
 