OK, full interview from an anonymous third party about the Wii GPU.

What a clown. I stopped reading right there. There has been no CPU, ever, that can match a 2.4 GHz P4 while running at only 729 MHz. Even the Core 2 Duos and Athlons, which are the champions of work per MHz, aren't 2x better than a P4 per clock, let alone a relatively low-end PowerPC.

Forget that Wiinside stuff. It's not even worth the virtual paper it's written on.
 
There has been no CPU, ever, that can match a 2.4 GHz P4 while running at only 729 MHz.
I picture that the "logic" goes kinda like -
GC CPU @ 480 MHz >>>> 733 MHz P3.
P4 = 70% of a P3 per clock, so that's a 1 GHz P4.
* 150% ('conservative' Wii CPU upgrade estimate) * clock increase - and presto -
Wii CPU > 2.2 GHz P4.

:p
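Spelled out, the parody arithmetic comes to something like this (every factor below is the one claimed in the joke, not a measurement; the 480 MHz figure is the one quoted above):

Code:
/* The parody arithmetic, transcribed literally. Every factor is the one
 * claimed in the joke above, not a measurement. */
#include <stdio.h>

int main(void)
{
    double gc_as_p4_ghz   = 1.0;           /* claim: GC CPU ~ 733 MHz P3 ~ 1 GHz P4         */
    double wii_uplift     = 1.5;           /* claim: "conservative" per-clock upgrade       */
    double clock_increase = 729.0 / 480.0; /* Broadway clock over the GC clock quoted above */

    printf("Wii CPU ~ %.2f GHz P4 (by this 'logic')\n",
           gc_as_p4_ghz * wii_uplift * clock_increase);
    return 0;
}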
 
The information on the Wii GPU that has made it into the public domain hasn't changed one bit as far as I can see.
* GPU clock is up by 50%
* Die area indicates roughly a factor of three increase in logic gate budget
(* Early Nintendo ballpark statement that the Wii would be around three times as powerful as the GC.)

And that's it.
A solid leak of an engineering document would be appreciated.
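For what it's worth, here's the trivial arithmetic those data points imply, using the commonly cited GameCube clocks as a baseline; applying the same 50% scaling to the CPU simply reproduces the 729 MHz figure discussed above. Nothing here comes from a leaked document.

Code:
/* Sanity check of the public data points above against the commonly cited
 * GameCube clocks. The outputs are the widely reported Wii figures. */
#include <stdio.h>

int main(void)
{
    const double flipper_mhz = 162.0;  /* GameCube GPU clock            */
    const double gekko_mhz   = 486.0;  /* GameCube CPU clock            */
    const double clock_scale = 1.5;    /* "GPU clock is up by 50%"      */
    const double gate_scale  = 3.0;    /* rough die-area estimate above */

    printf("Implied Wii GPU clock: %.0f MHz\n", flipper_mhz * clock_scale); /* 243 */
    printf("Implied Wii CPU clock: %.0f MHz\n", gekko_mhz * clock_scale);   /* 729 */
    printf("Logic budget: roughly %.0fx the gates at %.1fx the clock\n",
           gate_scale, clock_scale);
    return 0;
}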
 
I picture that the "logic" goes kinda like -
GC CPU @ 480 MHz >>>> 733 MHz P3.
P4 = 70% of a P3 per clock, so that's a 1 GHz P4.
* 150% ('conservative' Wii CPU upgrade estimate) * clock increase - and presto -
Wii CPU > 2.2 GHz P4.

:p

Wait, this funny "logic" implies that the P4 is worse than the P3? :oops:
 
Wait, this funny "logic" implies that the P4 is worse than the P3? :oops:

Yes, remember the 1.5 GHz P4 was slower than a 1.2 GHz P3, but that was a long time ago…
And a 1.25 GHz G4 can fight a 2.5 GHz P4 in a lot of areas…
The Wii CPU isn't exactly the equivalent of a 2.2 GHz P4 all the time, but maybe in game code it's realistic?
 
So the TEV is now 16 stages.

I like this part
"Almost all the shader effects on PC, Xbox 360 and PS3 can be reproduced on the Wii"

I checked the rev gx.h and the dolphin gx.h.
Both of them contain 16 stages for the TEV.
So it's not proof of anything new; it just sets the maximum, and the rest comes down to the skills of the dev. :)
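For anyone who hasn't poked at gx.h: declaring how many of those 16 stages you actually use is a one-liner, which is part of why the header alone proves nothing about new hardware. A minimal sketch, using the homebrew libogc names (which closely mirror the official gx.h):

Code:
/* Minimal sketch using libogc names. The headers cap the TEV at 16 stages on
 * both GameCube and Wii; how many you actually burn per pixel is up to you. */
#include <gccore.h>   /* libogc; pulls in ogc/gx.h */

void setup_two_stage_modulate(void)
{
    /* Use 2 of the 16 available TEV stages: texture * vertex colour first,
     * then a second texture modulated with the previous stage's output. */
    GX_SetNumTevStages(2);

    GX_SetTevOrder(GX_TEVSTAGE0, GX_TEXCOORD0, GX_TEXMAP0, GX_COLOR0A0);
    GX_SetTevOp(GX_TEVSTAGE0, GX_MODULATE);

    GX_SetTevOrder(GX_TEVSTAGE1, GX_TEXCOORD1, GX_TEXMAP1, GX_COLOR0A0);
    GX_SetTevOp(GX_TEVSTAGE1, GX_MODULATE);
}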
 
Regarding the CPU of the Wii, there is a significant part of the equation that keeps being ignored around here (in CPU performance discussions generally, actually), and that is the memory subsystem. A large part of extracting good performance from any processor lies in managing your memory.
All evidence points towards the 750CL as the CPU of the Wii, although it may or may not be slightly modified. Be that as it may, given the likely base processing power of the CPU, it is backed up by a relatively spiffy memory hierarchy. Unless the memory controller is completely borked (and there is no reason it should be) the CPU should enjoy very low latency access to main memory and high bandwidth even for fine-grained accesses, making corner cases rare and lessening algorithm sensitivity to memory peculiarities. (Although I wonder a little about how the boundary between the 24 and the 64 MB partitions is handled.)
For most codes, it would seem that the Wii should offer quite robust performance.

It would be nice to know if the Wii CPU uses the 60x bus protocol, as it would seem a reasonable place to offer a custom solution for the Wii.

Quick edit: Of course, the CPU is never going to be a stream computing powerhouse, nice memory hierarchy or not. But still a pretty capable little chip particularly considering that it draws a maximum of 5-6W at full blast.
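To make the latency point concrete, here's a toy pointer-chasing loop anyone can run on a PC. It is not a Wii benchmark and the numbers are machine-dependent; it just shows how much dependent, cache-hostile loads cost compared to streaming through the same data.

Code:
/* Toy latency-sensitivity demo: the same amount of summing runs at very
 * different speeds depending on the access pattern. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 22)   /* 4M entries, larger than typical caches */

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: a permutation that is one big cycle, so the
     * pointer chase below never settles into a small, cache-resident loop. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(1234);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    size_t p = 0, sum = 0;
    for (size_t i = 0; i < N; i++) { p = next[p]; sum += p; }  /* dependent loads */
    clock_t t1 = clock();

    size_t sum2 = 0;
    for (size_t i = 0; i < N; i++) sum2 += next[i];            /* streaming loads */
    clock_t t2 = clock();

    printf("pointer chase: %.3fs, streaming: %.3fs (checksums %zu / %zu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum, sum2);
    free(next);
    return 0;
}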
 
Random question, but is it possible to have a RAM expansion pack for the Wii like N64 did for certain games?

Nope. There's no high-bandwidth port to plug it into.

That is one of the worst blogs ever. Why did that even get posted? I like where he said the Gamecube devkits were less powerful than the retail hardware. Seriously, the fake Revolution insider blogs should be dead now. It's not even English. I wish this were some student of mine so I could fail him.
 
I think it's rather humorous that we're arguing over whether Wii's CPU can keep up with a P4 2.2. That's not exactly much of a processor. Especially in 2007!

Besides, wouldn't it be more warm and fuzzy to compare it to, say, an Athlon XP 2000+? Not like these comparisons are worth jack anyway.

But remember, you can't crunch the vertices if you can't get them to your graphics chip from RAM fast enough. Despite the XGPU having that extra vertex shader over the GeForce 3, and despite the time spent optimizing the game, Doom 3 on Xbox didn't even reach the settings a GeForce 3 could run it at.

I'd blame RAM amount, and the fact that Doom3's shadows are very CPU intensive. Doom3 went for more geometry work on the CPU than the video card, if I remember correctly what I've read about it. It had to work on NV17, after all.

And let's not forget that Cube apparently wasn't capable of that at all. Doom3 uses normal mapping and that's not so possible on Flipper. Ever see what Doom3 looks like on a DX7-era card, when you can't do the lighting or normal mapping!? Ick!

From these interviews I doubt that Wii could do that much normal mapping or per-pixel lighting, either. But who knows, I guess. Personally, I think I'll believe what these guys have said about the hardware now. It's been repeated enough from enough sources that it feels rather authentic to me.
 
And let's not forget that Cube apparently wasn't capable of that at all.

I'm not saying it was. I think you incorrectly read my post as comparing Gamecube and Xbox, which it wasn't. The only point I was making is that it doesn't matter what your processor can do if you can't get information to it fast enough. That's an absolute statement, and it's completely true. And yes, the XGPU was severely bottlenecked. In theory, it was more powerful than any GF3. In practice, in actual games, a GF3-equipped PC with enough CPU and bandwidth would outperform it, meaning that extra power was largely wasted.
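To put a rough number on "fast enough": a quick back-of-envelope. The 6.4 GB/s is the commonly quoted figure for the Xbox's unified memory; the vertex size and the share of bandwidth left over for geometry are purely illustrative assumptions.

Code:
/* Back-of-envelope: peak vertex rate means nothing if the bus can't feed it. */
#include <stdio.h>

int main(void)
{
    double bus_bytes_per_s  = 6.4e9;  /* commonly quoted Xbox unified-memory bandwidth    */
    double geometry_share   = 0.25;   /* assumption: what's left after framebuffer,
                                         texture and CPU traffic                          */
    double bytes_per_vertex = 32.0;   /* assumption: position + normal + UV, uncompressed */

    double vertices_per_s = bus_bytes_per_s * geometry_share / bytes_per_vertex;
    printf("Bandwidth-limited vertex rate: ~%.0f million/s\n", vertices_per_s / 1e6);
    return 0;
}

That lands around 50 million vertices a second, well below the theoretical peaks that get thrown around, which is the whole point.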

Anyway, I thought the Doom 3 engine only used the CPU to do its shadows if you were running a GF4 MX, which partially executed vertex shader code on the CPU. On any of the Ti class cards, it used the vertex shader hardware and ran much faster. I could be wrong, though.
 
I'm not saying it was. I think you incorrectly read my post as comparing Gamecube and Xbox, which it wasn't. The only point I was making is that it doesn't matter what your processor can do if you can't get information to it fast enough. That's an absolute statement, and it's completely true. And yes, the XGPU was severely bottlenecked. In theory, it was more powerful than any GF3. In practice, in actual games, a GF3-equipped PC with enough CPU and bandwidth would outperform it, meaning that extra power was largely wasted.
Yeah that's very true. No doubt the PC version of the architecture has a huge advantage because of its dedicated RAM. An advantage that the CPU enjoys too, I'm sure.


Anyway, I thought the Doom 3 engine only used the CPU to do its shadows if you were running a GF4 MX, which partially executed vertex shader code on the CPU. On any of the Ti class cards, it used the vertex shader hardware and ran much faster. I could be wrong, though.
Intel has an article written up by J.M.P. van Waveren of id Software that spells out how the engine works. It's highly CPU-based for a lot of the rendering stages. He says it was basically a decision made to work around limits even on cards with early vertex shader hardware. I'd imagine if they'd targeted VS 2.0, things could've been different. Who knows....
http://www.intel.com/cd/ids/developer/asmo-na/eng/dc/games/293451.htm
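For anyone curious what that CPU-side work involves: a big chunk of stencil shadow volume construction is silhouette detection, i.e. finding the edges where a light-facing triangle meets a back-facing one, which then get extruded away from the light. A toy sketch of that step (the data layout and names are mine, not id's):

Code:
/* Illustrative silhouette-edge detection for stencil shadow volumes. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;
typedef struct {
    int v0, v1;      /* edge endpoints (these get extruded in the real engine) */
    int tri0, tri1;  /* the two triangles sharing the edge                     */
} Edge;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    return r;
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* 1 if triangle (a,b,c) faces the light. */
static int faces_light(Vec3 a, Vec3 b, Vec3 c, Vec3 light)
{
    Vec3 n = cross(sub(b, a), sub(c, a));
    return dot(n, sub(light, a)) > 0.0f;
}

/* Silhouette edges are those where exactly one adjacent triangle faces the light. */
static int count_silhouette_edges(const Vec3 *verts, int (*tris)[3],
                                  const Edge *edges, int num_edges, Vec3 light)
{
    int count = 0;
    for (int i = 0; i < num_edges; i++) {
        int *t0 = tris[edges[i].tri0];
        int *t1 = tris[edges[i].tri1];
        int f0 = faces_light(verts[t0[0]], verts[t0[1]], verts[t0[2]], light);
        int f1 = faces_light(verts[t1[0]], verts[t1[1]], verts[t1[2]], light);
        if (f0 != f1) count++;
    }
    return count;
}

int main(void)
{
    /* Two triangles folded along the shared edge (v0,v1); with the light
     * overhead, one faces it and one does not, so that edge is a silhouette. */
    Vec3 verts[4]   = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0.5f, -0.5f, -10} };
    int  tris[2][3] = { {0, 1, 2}, {1, 0, 3} };
    Edge edges[1]   = { {0, 1, 0, 1} };
    Vec3 light      = { 0.3f, 0.3f, 5.0f };

    printf("silhouette edges: %d\n",
           count_silhouette_edges(verts, tris, edges, 1, light));
    return 0;
}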

Is there any reason why a dev would use less than 16?
Well, the anonymous devs did say that each stage you use costs performance.
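The usual mental model (and it's only a model; the real pipeline details aren't public) is that the TEV recirculates, so each enabled stage adds roughly a cycle per pixel per pipeline and fill rate falls off with the stage count. Quick sketch, where the GC figures are the commonly cited ones and the Wii figures are just the interview's claims:

Code:
/* Rough fill-rate model: pipelines * clock / enabled TEV stages.
 * GC = 4 pipes @ 162 MHz (commonly cited); "Wii" = 8 pipes @ 243 MHz,
 * which is only what the interview above claims. */
#include <stdio.h>

static double fill_mpix(double pipes, double mhz, int stages)
{
    return pipes * mhz / stages;   /* million pixels per second */
}

int main(void)
{
    for (int stages = 1; stages <= 16; stages *= 2)
        printf("%2d stages: GC ~%6.0f Mpix/s, claimed Wii ~%6.0f Mpix/s\n",
               stages, fill_mpix(4, 162, stages), fill_mpix(8, 243, stages));
    return 0;
}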
 
Yeah, what would be the difference between GC and Wii, and wouldn't a faster clock and more available memory/bandwidth allow devs to come closer to that 16?

The point is the memory subsystem feeding the TEV.
But could you improve that while keeping 100% backward compatibility?
 
I checked the rev gx.h and the dolphin gx.h.
Both of them contain 16 stages for the TEV.
So it's not proof of anything new; it just sets the maximum, and the rest comes down to the skills of the dev. :)

Actually the dev (if he's a real dev) says:

4 Texture Pipe (16 Stages Each) to.... 8 Texture Pipelines (16 Stages Each)

So he's saying both are 16 stages but Wii now has twice the texture pipelines.

I suppose this does sound very plausible, given the die size differences between the GC and Wii GPUs. But who knows if it's actually true. If the person making the blog is DrTre from IGN then I'm pretty sure he's not making this up. But he could have been duped by the 'developer' answering the questions, so who knows.
 