Wii U hardware discussion and investigation *rename

Can you please explain why a GPU demoed in 2006, which as far as I'm aware only supports OpenGL ES 1.1 with a few extras (I could be wrong here, as Nintendo may have optimised for OpenGL ES 2.0), is a good GPU for 2011?

PowerVR SGX was also demoed in 2006..

It's not as if it is power efficient either, as the device sucks juice at a phenomenal rate, and it's only a very low resolution.
The PICA200 has only a small advantage in raw processing power over the PSP, even if it does push out a healthy texture fill rate.

I'm pretty sure there's no wattage info for the 3DS GPU publicly available, and also not enough firm info on its "processing power" to make the kind of claim you just made.

Also very low resolution compared to what?

Nintendo could have gone with a single SGX 543 at 200MHz and walked all over it... It's relevant because Nintendo places a very, very low priority on its internal hardware, especially its GPUs; they prioritise one feature at the expense of graphics. Don't forget the Nintendo 3DS is not exactly cheap either.

Can you explain to me how a single-core 543 at 200MHz would walk all over the 268MHz PICA200-based GPU in 3DS? I mean, apart from the usual "it's super duper programmable" argument.
 
Yes, comparisons to early PS360 development don't really apply: the technology inside Wuu is, for all intents and purposes, the same architecture as the Xbox 360. There are of course some big differences, but for the most part it's very, very similar.

I can't see there being a massive learning curve like there was when starting out with multi-threading and such. Programming for the Cell must have been a nightmare, likewise with the Xbox 360's limited storage space, and Xenos was like nothing devs had seen before. Wuu's GPU will be an evolution of Xenos, maybe more advanced, but in the same ballpark performance-wise.

The 360 has an in-order CPU and a DX9 GPU. WiiU has an out-of-order CPU + Audio DSP + I/O controller with a DX11-level GPGPU. So no, it's not the same architecture; in fact they're pretty different.
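
As a toy illustration of why that in-order vs out-of-order distinction matters (hypothetical C++, not actual 360 or WiiU code): the loop below mixes a load-dependent chain with independent ALU work. An out-of-order core can overlap the two, while an in-order core like Xenon stalls behind each miss.

Code:
#include <cstdio>
#include <vector>

struct Node { Node* next; int payload; };

int main() {
    // Build a simple linked list. Real pointer-chasing code would be far less
    // cache-friendly; this only shows the dependency shape of the two workloads.
    const int n = 1 << 16;
    std::vector<Node> nodes(n);
    for (int i = 0; i < n - 1; ++i) nodes[i] = { &nodes[i + 1], i };
    nodes[n - 1] = { nullptr, n - 1 };

    long long chained      = 0;  // serialised: each step waits on the previous load
    unsigned long long acc = 0;  // independent ALU work with no memory dependence
    int i = 0;
    for (Node* p = &nodes[0]; p != nullptr; p = p->next) {
        chained += p->payload;           // an in-order core stalls here on a cache miss
        acc += (unsigned)(i * 7u + 3u);  // an out-of-order core keeps this running meanwhile
        ++i;
    }
    std::printf("%lld %llu\n", chained, acc);
    return 0;
}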
 
The big open question isn't whether it can compete with PS360, it's whether it can remain relevant after PS720 ships. And no one can answer that; the answer is far more complicated than hardware specs.

That's the only real issue, since PS360 are pretty much out the door.
 
iPhone 4 has an SGX 535 @ 200MHz. Epic Citadel seems pretty nice at 960x540: https://www.youtube.com/watch?v=V_rR8ZEQztM

Does anyone else remember the PowerVR village benchmark? :)

What's impressive about Epic Citadel graphically, in comparison to 3DS games, is the resolution and the texture quality. The texture quality comes down to memory, and the resolution mostly comes down to the screen. After all, the SGX 535's fillrate is less than half that of the GPU in 3DS. Adding in the extra hidden surface removal efficiency of a TBDR, in a pretty perfect scenario like the village area of Epic Citadel, will bring its effective fillrate up, but not to the point where it's well past that of 3DS's GPU. Also, not all games can be designed to be the perfect scenario for your particular GPU (lots of opaque overdraw).
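
To put rough numbers on that, here's a quick back-of-the-envelope sketch in C++. The per-clock pixel rates and the overdraw factor are assumptions based on commonly quoted figures, not confirmed specs:

Code:
#include <cstdio>

int main() {
    // All per-clock figures below are assumptions based on commonly quoted
    // marketing numbers, not confirmed hardware specs.
    const double pica200_px_per_clk = 4.0;   // PICA200: ~800 Mpix/s quoted at 200 MHz
    const double pica200_mhz        = 268.0; // clock reported for the 3DS GPU
    const double sgx535_px_per_clk  = 2.0;   // SGX 535: 2 px/clk commonly quoted
    const double sgx535_mhz         = 200.0; // iPhone 4 figure used earlier in the thread

    const double pica_fill = pica200_px_per_clk * pica200_mhz; // Mpix/s
    const double sgx_fill  = sgx535_px_per_clk  * sgx535_mhz;  // Mpix/s

    // A TBDR only shades visible pixels, so with heavy opaque overdraw its
    // effective fillrate scales with the scene's overdraw factor.
    const double overdraw      = 2.5; // hypothetical average overdraw for a Citadel-like scene
    const double sgx_effective = sgx_fill * overdraw;

    std::printf("PICA200 raw fill:          %4.0f Mpix/s\n", pica_fill);
    std::printf("SGX 535 raw fill:          %4.0f Mpix/s\n", sgx_fill);
    std::printf("SGX 535 effective (%.1fx): %4.0f Mpix/s\n", overdraw, sgx_effective);
    return 0;
}

With numbers in that ballpark, the TBDR roughly closes the raw fillrate gap in an overdraw-heavy scene like Citadel's village, but doesn't blow past the 3DS GPU, which is the point above.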

What's particularly unimpressive is the total lack of dynamic shadows or lighting. This seems to be a recurring trait of any game on single-core SGX GPUs. I've never seen a game on a single-core version of that GPU throw around even 3DS-level effects such as lighting, shadows, particles etc.; even Epic just focused on high resolution and static everything. No doubt SGX has its strengths, but AFAICS it also has its weaknesses compared to a GPU like PICA200. I find the whole assumption that it's just better in every way and would automatically walk all over PICA200 to be quite silly.

I'd have loved to see a dual-core SGX 543 in 3DS, but it wasn't to be.
 
WiiU has an out-of-order CPU + Audio DSP + I/O controller with a DX11-level GPGPU.
(My emphasis.)
How do we know that it does? There's been no confirmation of this, or even any solid, believable rumors. Only assumptions, speculation and wishful thinking - and as I recall, you were one of the staunchest proponents for Hollywood being more than just a Gamecube graphics chip clocked 50% faster... :)

Now, don't get me wrong... I'd love to see a DX11 GPGPU in Wuu. Nothing would please me more. But I'm gonna assume there's nothing more than a Radeon 4xxx-class chip until it's been proven otherwise. I'm not gonna get all my toes stubbed and hopes dashed all over again like back in 2006. :p
 

As far as Hollywood goes, it was just a case of the chip being bigger than you'd expect Flipper to be at 90nm, so it seemed likely something was added. I don't remember making any grand claims about what I thought had been added, and I don't remember hearing any rumours of additions either (?) Who knows if anything was added in the end; if it was, it was some relatively small stuff.

When it comes to WiiU's GPU, we know that it started development around 2009 based on the R7xx GPU. It's been in development for 3 years at least; that's the first clue that it's going to be more modern than the DX10.1 GPU it's based on.

Also, we do have specific info/rumours on this. WiiU docs talk about compute shaders, and this goes along with the info posted on this forum by Prophecy2k about devs having some problems with the WiiU CPU because its GPU has been designed to do the heavy physics lifting.

Note I'm not saying it'll have every feature in DX11, because I've got no idea if it will, and it certainly won't use DirectX anyway. Saying "DX11 level" just seemed like the easiest way of saying that it'll have some features that weren't available before DX11.
 
Sure there are things to be gained, but as I said, the "bulk" of the difference is production and technique, not platform-level optimizations.

To be fair, if a title plays to the strengths of a particular piece of hardware, say fillrate on the 360, the other platforms will struggle. But you usually only see that in exclusives.
The bulk of the bad ports were early PS3 titles, because SPU programming was "hard" and RSX had issues with the geometry load from early 360 games. I just can't imagine Wii U having anything comparable to that.

Yes, I'm not expecting anything dramatic either, but some are claiming there will be no advantages or improvements. It seems that these discussions sometimes degrade into arguing at the extremes.
I'm guessing the truth lies somewhere in the middle.
 
How do we know that it does? There's been no confirmation of this, or even any solid, believable rumors.

There are several reasons why people are concluding that the Wii U has a GPGPU:

1) Prophecy2k's post stated that the GPU has compute support

2) The "leaked" document of Wii U's dev kit stated "compute shader support." Other insiders has said that several pieces in this document matches the info that they have seen before.

3) The first two pieces of evidence are backed up by the anonymous developers who were complaining about the CPU. One of them even said "I suppose you don’t need sophisticated physics to make a Mario game." This info would make sense if the Wii U's GPU was supposed to handle physics instead of the CPU. If this is the case, the Wii U has a GPGPU.
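
For what it's worth, the kind of physics work you'd push to a GPU is the embarrassingly parallel part. Here's a minimal sketch of that shape of workload in plain C++ (purely hypothetical, not any actual Wii U API):

Code:
#include <cstddef>
#include <cstdio>
#include <vector>

// Structure-of-arrays particle state: every element is updated independently,
// so each loop iteration below could map to one GPU thread in a compute kernel.
struct Particles {
    std::vector<float> x, y, z;     // positions
    std::vector<float> vx, vy, vz;  // velocities
};

void integrate(Particles& p, float dt, float gravity) {
    const std::size_t n = p.x.size();
    for (std::size_t i = 0; i < n; ++i) {   // no cross-particle dependencies
        p.vz[i] += gravity * dt;
        p.x[i]  += p.vx[i] * dt;
        p.y[i]  += p.vy[i] * dt;
        p.z[i]  += p.vz[i] * dt;
    }
}

int main() {
    Particles p;
    p.x  = {0.0f, 1.0f};  p.y  = {0.0f, 0.0f};  p.z  = {10.0f, 10.0f};
    p.vx = {1.0f, -1.0f}; p.vy = {0.0f, 0.0f};  p.vz = {0.0f, 0.0f};
    integrate(p, 1.0f / 60.0f, -9.81f);
    std::printf("z after one step: %f %f\n", p.z[0], p.z[1]);
    return 0;
}

That independence is what a compute-capable GPU is good at; whether Wii U's GPU is actually good at it is of course the open question being argued here.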
 
How are we defining "compute shader support"? The Radeon 4850 I had could run OpenCL stuff, so I doubt we can conclude WiiU's GPU goes beyond a DX10.1 feature set based on that. In any case, the fact that you have to steal seemingly meager GPU resources to make up for a weak CPU is hardly evidence of a strong design.
 
FWIW, supporting compute shaders and being a viable platform for compute aren't the same thing.

If you've never tried to write a non-trivial compute shader, you should give it a go... I bet a trivial CPU version runs faster than your first attempt using compute.

I do believe that compute is likely a big part of any forward-looking platform, but its application is limited, and it requires a level of micro-optimization that very few people are capable of for anything other than trivial problems.

If this is what Nintendo intended, I think we'll be well into the PS720 lifetime before developers have a real handle on it. Which brings us back to whether it can remain compelling after PS720 ships.
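
A crude way to see why a first attempt tends to lose to a plain CPU loop: the fixed costs of getting data to the GPU and back dominate until the job is both large and well tuned. All of the constants in this sketch are made up purely for illustration:

Code:
#include <cstdio>

int main() {
    // Entirely made-up costs; only the shape of the trade-off matters here.
    const double upload_us_per_mb      = 150.0;  // copy job data to GPU memory
    const double readback_us_per_mb    = 150.0;  // copy results back
    const double launch_us             = 30.0;   // dispatch + sync overhead
    const double cpu_us_per_elem       = 0.010;  // trivial scalar loop on the CPU
    const double naive_gpu_us_per_elem = 0.008;  // first-attempt, poorly tuned kernel

    const long sizes[] = {1000L, 100000L, 10000000L};
    for (long n : sizes) {
        const double mb  = n * 4.0 / (1024.0 * 1024.0);  // 4 bytes per element
        const double cpu = n * cpu_us_per_elem;
        const double gpu = launch_us
                         + (upload_us_per_mb + readback_us_per_mb) * mb
                         + n * naive_gpu_us_per_elem;
        std::printf("n=%9ld  cpu=%11.1f us  naive gpu=%11.1f us  -> %s\n",
                    n, cpu, gpu, cpu <= gpu ? "CPU wins" : "GPU wins");
    }
    return 0;
}

With a properly micro-optimized kernel the per-element cost drops a long way and the picture changes, which is exactly the tuning effort being described above.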
 
Yeah, and? "Support" doesn't mean it's viable. The Sandy Bridge IGP has compute shader support too.

2) The "leaked" document of Wii U's dev kit stated "compute shader support." Other insiders has said that several pieces in this document matches the info that they have seen before.
That's part of the DX10 spec. Every member of the Radeon 4000 family has that, even the lowliest of the low-budget models.

One of them even said "I suppose you don’t need sophisticated physics to make a Mario game." This info would make sense if the Wii U's GPU was supposed to handle physics instead of the CPU. If this is the case, the Wii U has a GPGPU.
You base this conclusion only on hearsay and speculation; it doesn't mean it's truly the case. Kinda mindboggling, really, that you'd use Mario as evidence in an argument that the GPU is geared for physics computations... :) Mario games have traditionally had some of the most fake "physics" ever in gaming, and nothing that actually needed anything more powerful than a MOS 6502 derivative to compute. :LOL:

You could say a Radeon 4000 family chip is a GPGPU purely on its (reluctant) DX10 compute shader support, but not many code kernels run well on its VLIW architecture, as evidenced by poor video transcoding performance, very poor Folding@home performance and so on. It was designed for graphics tasks, not general purpose code.

If WuuGPU is radeon 4000/VLIW-based it's not going to do well at GPGPU. It's just a fundamental shortcoming of that architecture that AMD never managed to get around no matter how much they tweaked their compiler. Even writing directly for the architecture using their close-to-the-metal STREAM API usually didn't give good performance (except for a few notable corner cases such as Milkyway@Home for example).
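
To make the VLIW complaint concrete: R700-class stream cores issue a 5-wide bundle per clock, so achieved ALU throughput is roughly peak multiplied by the fraction of slots the compiler manages to fill. The slot counts in this sketch are assumptions for illustration, not measurements of any real kernel:

Code:
#include <cstdio>

int main() {
    const double slots_per_bundle = 5.0;  // VLIW5 ALU lanes per stream core

    // Assumed average slot occupancy per workload type (illustrative only).
    struct Workload { const char* name; double avg_slots_filled; };
    const Workload loads[] = {
        { "graphics shader (vec4 + scalar)", 4.5 },
        { "well-tuned compute kernel",       3.5 },
        { "dependent/branchy GPGPU code",    1.5 },
    };

    for (const Workload& w : loads) {
        const double utilisation = w.avg_slots_filled / slots_per_bundle;
        std::printf("%-34s -> %3.0f%% of peak ALU throughput\n",
                    w.name, utilisation * 100.0);
    }
    return 0;
}

Graphics shaders pack vec4-plus-scalar work naturally; dependent, branchy GPGPU code often leaves most of the bundle empty, which is the "designed for graphics tasks, not general purpose code" point.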
 
FWIW, supporting compute shaders and being a viable platform for compute aren't the same thing.

I do believe that compute is likely a big part of any forward-looking platform, but its application is limited, and it requires a level of micro-optimization that very few people are capable of for anything other than trivial problems.
Thank you for your reply. I was curious about how difficult it is to program for a GPGPU architecture. If these assumptions about the Wii U are true, it would be a very good reason why the third-party ports we have seen so far would not take advantage of it.

If the other next-gen consoles are also designed to take advantage of GPGPU functions, it would mean that Wii U's architecture could greatly benefit from the advancement of new techniques throughout the next generation. I understand that there are a lot of factors involved, but this could positively affect how port-friendly the Wii U will be with the other systems.
 
Is there a misunderstanding here in thinking there is another GPU in the Wii U for GPGPU?

Or do people really think they based the GPU design on the R700 and then redesigned it for more GPGPU functions?

Seems like A LOT of wishful thinking. :LOL:
 
If WuuGPU is radeon 4000/VLIW-based it's not going to do well at GPGPU. It's just a fundamental shortcoming of that architecture that AMD never managed to get around no matter how much they tweaked their compiler.
The GPU may have the R700 series as its base, but who knows how it looks now after several years of development.

I was just listing reasons why other people and I are theorizing that Wii U's architecture may be relying on GPGPU functions. I have no idea how good it would be at that. If it is some type of GPGPU, I do agree with you that it would have to be seriously different from that family of GPUs to be effective.

You know, that Mario quote was just there to quickly point out that the dev was specifically claiming to have trouble with physics on the U-CPU. I meant nothing else by it. :smile:
 
If it's R700-based it's gonna be VLIW, and if it's VLIW it's not for GPGPU. VLIW is a giant terrible pain in the butt (just think "Itanic" here) on top of the other pain in the butt that is GPGPU in general, and multiple pains in the butt are never a good thing... :)

I can't see how any number of years of development would change this: if they're working on the R700, it's gonna stay an R700 fundamentally; otherwise you'd start from a clean slate and build something new (more or less) from scratch, like with Xenos for the 360. Also, I really doubt they've actually been developing an R700-based chip for three-ish years, because in that time you could have built a new GPU instead.

No, if it's R700-based it hasn't been tinkered on for three years, but rather less than one year of effective time most likely, and that just to mesh it with eDRAM and whatever custom CPU interconnect Wuu may have instead of PCI Express and such. There's a reason you pick an off-the-shelf design, and that's because you want to use that design, not to tear it down to its foundations, throw most of it out including its most fundamental parts, and then rebuild something new in its place. I mean... who does that? :)
 
FWIW, supporting compute shaders and being a viable platform for compute aren't the same thing.

I do believe that compute is likely a big part of any forward-looking platform, but its application is limited, and it requires a level of micro-optimization that very few people are capable of for anything other than trivial problems.
This issue could be alleviated by including some well-optimized, ready-to-use compute shaders with the SDK. And there's also this:

AMD Demonstrates Optimized Executions of Havok Middleware on AMD platforms

– Balanced Platform of CPU + GPU Processing Delivers Optimal Game Experience –

SAN FRANCISCO, Calif. — March 26, 2009 — Advanced Micro Devices, Inc. (NYSE: AMD) and Havok, the premier provider of interactive software for physics simulation and content development, are presenting new, optimized executions of Havok’s physics middleware on AMD platforms at the 2009 Game Developers Conference. The demonstrations include the first OpenCL-supported execution of Havok Cloth™.

Havok offers a complete modular suite of products that help visual and interactive content developers create more realistic games and cinematic special effects. As the latest software developer to take advantage of ATI Stream technology to leverage multi-core architectures and accelerate execution of highly parallel functions, like real-time cloth simulation, Havok will enable game developers to offer improved performance and interactivity across a broad range of OpenCL capable PCs. AMD has recently introduced optimized platform technologies, such as “Dragon” desktop platform technology, which balance performance between the CPU and GPU with ATI Stream technology to deliver outstanding value.

“Havok is committed to delivering highly optimized cross-platform solutions to our game customers and we are pleased to be working with AMD to ensure that gamers enjoy a great user experience when running Havok-powered games on AMD platforms,” said David Coghlan, vice president of development for Havok. “Unlocking the parallel processing capability of AMD’s hardware provides real advantages to our customers, and the greater the total computing resources available, the better the gaming experience developers can deliver.”

"Havok’s awesome toolset has allowed us to deliver astonishing physics interactions in our games, including detailed real-time destruction and complex ragdoll models, and we are excited about using ATI Stream technology to pursue more astounding in-game accomplishments,” said Andrey Iones, chief operating officer of Saber Interactive. “We are excited that AMD and Havok are working together and leveraging an open standard like OpenCL.”
I don't think the OpenCL versions were ever released to the public, but they were shown running on an R700 GPU.
 
You know, that BS IGN started when it reported devs saying Wii U will be the most difficult console to program for might have a ring of truth to it. If the GPU is meant to offload tasks like physics, then it could be devs complaining about a new paradigm shift. It would be novel, and it doesn't surprise me that Wii U's graphics chip might be a GPGPU; it could be a good way to increase performance on a variety of tasks. The problem I do see is that the CPU will still need to be beefy going into the future, with PS4/X720 on the horizon. If the hardware isn't up to snuff then the system will be left behind by the devs, just like the Wii was. And whatever this Ninty system can do graphically and processing-power-wise, the other two upcoming systems will be able to run circles around it. I think that's a safe assumption.

Kaotik, your argument comparing these pre-release games to end-of-life PS360 games is flawed. I do expect the graphics to get better on Wii U as devs gain a better grasp of its architecture; I just don't see it turning these games into stupidly awesome graphics 5 years down the road. As ERP said, all of the hard work has already been done by devs. Nintendo may have the biggest problem coming to grips with its hardware, simply due to the inexperience of their devs in making and delivering HD games; that also means they have the most to improve upon and could really make the hardware shine. Other than that, it is absolutely disappointing that Wii U's games aren't a step above PS360 games even with 6 months left to go, which is more like 5 months if they wish to launch in prime holiday shopping season.
 
If WuuGPU is radeon 4000/VLIW-based it's not going to do well at GPGPU. It's just a fundamental shortcoming of that architecture that AMD never managed to get around no matter how much they tweaked their compiler. Even writing directly for the architecture using their close-to-the-metal STREAM API usually didn't give good performance (except for a few notable corner cases such as Milkyway@Home for example).

And if Nintendo really wanted to drive GPGPU, it is unlikely that they would use Radeon 4000 tech for it. Neither AMD nor Nintendo are idiots; if they wanted to offload the CPU, they would put tech in the GPU that is capable of it.

EDIT: Well, you wrote more or less the same in post #1536 :)
 
And if Nintendo really wanted to drive GPGPU, it is unlikely that they would use Radeon 4000 tech for it. Neither AMD nor Nintendo are idiots
In an ideal universe you would be right, but in the real world things aren't always logical.

Nintendo or AMD may not be idiots, but suppose they started down one development path years ago with Radeon 4000 tech, only to discover much later, too close to release to change anything, that they needed additional capabilities. Then they wouldn't have any other choice but to go with what they've got: try to squeeze some GPGPU out of that R700 even though it isn't suited for it, and make the best of it that they can.

This isn't a terribly uncommon situation in the world of consumer devices btw.
 