I was wondering where the next 8-core POWER8 console was coming from, after it was mooted and dropped (then brought up again, then dropped) for every console of the now-current generation.
He did bring up a good point, though: if you have SIMD-heavy code from the 360/PS3, wouldn't it make sense to move that onto the GPU?
I was told his game became fillrate-bound on the 360 and had to go to 600p to get the desired performance, while on Wii U he is able to apply even more post-processing effects and render the game at 720p.
It would depend very much on the type of SIMD-heavy code they'd want to port. GPU compute is more than just SIMD; it's very wide SIMD with severe branching and synchronization penalties (even deadlocks) and potentially tens of milliseconds of startup latency, even on GCN, which is touted as being significantly improved over older generations.
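To make the divergence point concrete, here's a minimal CUDA sketch (the kernel and its names are purely illustrative, not taken from any actual port): GPU threads execute in lockstep groups (32-wide warps on NVIDIA, 64-wide wavefronts on GCN), so a data-dependent branch that scalar or CPU SIMD code handles cheaply forces the whole group to run both sides, with inactive lanes masked off.

```cuda
// Illustrative only: shows why "just move the SIMD code to the GPU" isn't free.
// If neighbouring threads in the same warp/wavefront disagree on the branch
// condition, the hardware serialises: it runs the expensive path for some
// lanes, then the cheap path for the rest, paying for both.
__global__ void divergent_kernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.5f) {
        float acc = 0.0f;
        for (int k = 0; k < 64; ++k)           // "expensive" branch
            acc += __sinf(in[i] * (float)k);
        out[i] = acc;
    } else {
        out[i] = in[i] * 2.0f;                 // "cheap" branch
    }
}
```

The latency point is separate from divergence: even perfectly uniform work still has to be dispatched to the GPU and synchronized back mid-frame, which is overhead the old CPU SIMD path never had.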
I was also told that the feature set for the Wii U GPU is far better than that of the 360.
I'm not sure how much that really matters, since I seem to remember an article saying that on the PS3 and 360, developers were actually using the CPU to perform certain effects that were beyond DX9. Do you guys know if that is true?
They should've gone with a cheap Fermi architecture rather than Radeon 7xxx. I've read Fermi can switch between gpgpu and graphics instructions in the same clock cycle.

AMD's VLIW5 arch never ran most GPU compute tasks very well; I can't imagine Nintendo spent even a dime to pay for any actual compute-specific improvements to the ancient GPU they picked, considering how fucking CHEAP the entirety of the wuu hardware is.
And I've read Fermi can't have graphics and compute work loaded into an SM at the same time.

Imo the best thing that could happen to the Wii U now is the gamepad getting hacked wide open. Let's see what some PC indie developers can come up with; then Nintendo could be inspired by some of the ideas.
They should've gone with a cheap Fermi architecture rather than Radeon 7xxx. I've read Fermi can switch between gpgpu and graphics instructions in the same clock cycle.
Nvidia claimed, around the time Fermi launched, to have very fast thread switching delays (~2ms, or whatever), although I'm not sure that was anything but marketing fluff. Run compute and graphics simultaneously on a GeForce 680 (one generation ahead of Fermi, so it should be "even" better at these things) and you're met by stutter city.