So if Far Cry on Wii was given a second chance...

These specs comport well with what other sources have said. There are lots of plausible explanations for the large die size that don't involve secret hidden physics processors or massive stealth shader power that Nintendo just hasn't told any developers about.
And at this point, there's no rationale in favour of Wii having special magic sauce either. It's not doing anything on screen that we wouldn't expect of the overclocked GC hardware. If Nintendo aren't achieving fantastic physics, shaders, polycounts etc. in their games, you know the hardware isn't there to do it!
 
This is solid reasoning:
http://blog.newsweek.com/blogs/leve...amecube-one-point-five-yes-says-beyond3d.aspx

Rather detailed specs were released a long time ago:
http://www.maxconsole.net/?mode=news&newsid=8802

These specs comport well with what other sources have said. There are lots of plausible explanations for the large die size that don't involve secret hidden physics processors or massive stealth shader power that Nintendo just hasn't told any developers about.

That 2nd website lists the GPU as containing the 24MB of main memory on die. I thought it was already confirmed that the 24MB of 1T-SRAM was still external?
 
He probably just assumed they were on the same chip from the documentation given him. Either that, or they planned for a single die, but ended up being unable to do it. After all, they're both under the same heatsink.
 

Well in that case, what's the explanation for the large die size of the GPU? It could just be an inefficient shrink; I have no idea what's typical of the industry (or of designs with eDRAM), but the chip is apparently about twice as large as what a perfect shrink would have accomplished.
 

Actually, as was thoroughly gone through in the "ginourmous" Wii thread, once you take the embedded DRAM into account, the disparity between expected and actual GPU size is almost a factor of three.
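
As a rough sanity check of that kind of claim (every die size and process figure below is an approximate, commonly circulated number, not a confirmed spec), an ideal optical shrink scales area with the square of the node ratio:

```python
# Back-of-the-envelope "perfect shrink" estimate for the Wii GPU.
# All figures are approximate, publicly circulated numbers -- illustrative only.

flipper_area_mm2 = 110.0      # Flipper (GameCube GPU) on a 180 nm process, approx.
old_node_nm = 180.0
new_node_nm = 90.0            # Hollywood is generally reported as a 90 nm part.

# An ideal optical shrink scales die area with the square of the node ratio.
ideal_shrink_mm2 = flipper_area_mm2 * (new_node_nm / old_node_nm) ** 2
print(f"Ideal shrink of Flipper to 90 nm: ~{ideal_shrink_mm2:.0f} mm^2")

reported_hollywood_mm2 = 72.0  # rough figure floated for the Hollywood GPU die
print(f"Reported Hollywood die size:      ~{reported_hollywood_mm2:.0f} mm^2")
print(f"Disparity: ~{reported_hollywood_mm2 / ideal_shrink_mm2:.1f}x")

# Embedded memory (the 1T-SRAM framebuffer/texture cache) shrinks less well than
# logic, so once you budget for it separately, the unexplained *logic* area grows
# further -- which is where the "almost a factor of three" figure comes from.
```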

Since no further information ever surfaced, that thread just kept degenerating, and people who were interested in facts moved on. To my knowledge, there has been no leak of the tech specs of the Wii GPU into the public domain since. The die size of the GPU was never explained.

(The article that fearsomepirate linked to was posted 2006-07-30, at which time final kits and docs weren't widely available, so even if we assume that the leak was credible, we don't know how well it describes final hardware, nor do we know if there are things it fails to mention. It is known to contain at least one factual error.)

People who for some reason feel compelled to put down the Wii from a hardware standpoint are the only ones who make any statements regarding the GPU at all. The rest of us simply shut up, either out of lack of knowledge, or NDAs.

All that is in the public domain is that the Wii GPU is compatible with the GC GPU, and that it is far too large for a straight shrink. Not terribly satisfying, but that's where it stands, and if nothing substantial happens, that's where it'll stay.
 
I've been under the impression that it has 8 pixel pipes. I think I remember some guy had his hands on leaked dev kit docs (which were extremely similar to Cube's) and that there was some mention of that.

That info was also part of a deleted Ubisoft interview. That interview may have been fake, though, as whoever posted it received some serious flak for it.
http://www.rage3d.com/board/showthread.php?t=33857367
 

Ah, thanks for the info, guys. Still, even doubling the TEV to 8 pixel pipelines wouldn't account for the discrepancy, as that's only one part of the graphics chip.
I remember a rumor about some kind of DRM security function being integrated into the chip, but that'd be an awfully large DRM module if that's all that's extra.

I don't think it's entirely unreasonable that there are extra pixel/texel pipelines that have gone unused so far in most games. Sure, we're used to just being able to pop in a graphics card with more pixel pipelines and automatically get a performance improvement in our games, but from what I remember it wasn't always that way. I remember when the original Radeon came out, with its 3 texels per pixel, it was stated that games wouldn't immediately take advantage of it and would have to be programmed for it. Now, that could have been FUD, or perhaps it's only more recently that DirectX has automatically allocated graphics card resources. Anyone know?
If it's a nontrivial task to reallocate GPU resources on the Wii, then perhaps games that started out on the Cube would have required substantial redesigning to take advantage of the extra hardware.

Additionally, many devs are looking to sell their games on the wii/ps2/psp trifecta, which could cause them to design their engines for the least common denominator of all three. Say the fillrate and processing power of the psp, with the lack of features of the ps2.

Another possibility is a large surge in the use of middleware. Gamecube had quite a few games developed from the ground up for it over many years, whereas games are being pumped out on the Wii like there's no tomorrow. Devs may be happy with Gamecube/PS2 level performance but much shorter dev times, especially when the reward for a highly tuned Wii game is something that still graphically looks like something you'd download off Xbox Live Arcade, arriving 1-2 years later to market than your competition, on a system where games sell because of neat controls and not graphical wow.

...maybe. I'll readily admit that the only game on the Wii that I'd question a GameCube being able to handle is Super Mario Galaxy, but it's too tantalizing that there's unexplained die space on the Wii. Even if the explanation is that the GPU had to be "padded" to that size so there'd be room for the traces or for redundancy, I'd still like to hear it. I'd imagine unused die space would have been removed in a later revision to cut costs, though, and someone would have noticed if that happened.
Maybe Hollywood is one big failed SoC, and the 65nm version of the Wii will only have a single chip. :)
 
Nah, being able to do 3 textures per pixel still only does any good when three or more texture passes are being used, but GPUs aren't built like that these days anyway. More pipes, on the other hand, always get the work distributed between them.
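
To put that distinction into a toy model (the function and all the pipe counts below are invented purely for illustration, not Flipper or Hollywood figures): extra texture units per pipe only pay off once a game actually layers that many textures per pixel, whereas extra pipes raise throughput for any workload.

```python
# Toy throughput model: pixel pipes vs. texture units per pipe.
# Numbers are illustrative only, not real hardware specs.

def pixels_per_clock(pipes, tex_units_per_pipe, textures_per_pixel):
    """Pixels emitted per clock, assuming one texture lookup per unit per clock
    and loop-back passes when a pixel needs more textures than units."""
    if textures_per_pixel == 0:
        return float(pipes)
    clocks_per_pixel = -(-textures_per_pixel // tex_units_per_pipe)  # ceil division
    return pipes / clocks_per_pixel

# A single-textured game sees no benefit from extra texture units per pipe...
print(pixels_per_clock(pipes=4, tex_units_per_pipe=1, textures_per_pixel=1))  # 4.0
print(pixels_per_clock(pipes=4, tex_units_per_pipe=3, textures_per_pixel=1))  # 4.0

# ...but doubling the pipes helps any workload.
print(pixels_per_clock(pipes=8, tex_units_per_pipe=1, textures_per_pixel=1))  # 8.0

# Extra texture units only matter once the game layers enough textures.
print(pixels_per_clock(pipes=4, tex_units_per_pipe=1, textures_per_pixel=3))  # ~1.33
print(pixels_per_clock(pipes=4, tex_units_per_pipe=3, textures_per_pixel=3))  # 4.0
```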
 
That interview was posted by a friend of mine, and yes, she made the whole thing up and dragged me into it to gain more credibility. Remember, the fake interview also said the GPU was roughly on the level of an X1400, which we now know to be completely false.
 
Nah, being able to do 3 textures per pixel still only does any good when three or more texture passes are being used, but GPUs aren't built like that these days anyway. More pipes, on the other hand, always get the work distributed between them.

Compare G92's pixel fillrate to its texture fillrate. >3x textures per pixel.
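
Roughly, using commonly quoted G92 (8800 GTS 512-class) figures, which should be treated as approximate and purely illustrative:

```python
# Approximate G92-class figures (8800 GTS 512), illustrative only.
core_clock_mhz = 650
rops = 16    # pixel output units
tmus = 64    # texture filtering units

pixel_fill_gpix_s = rops * core_clock_mhz / 1000.0   # ~10.4 Gpixels/s
texel_fill_gtex_s = tmus * core_clock_mhz / 1000.0   # ~41.6 Gtexels/s
print(texel_fill_gtex_s / pixel_fill_gpix_s)         # ~4 texels per pixel
```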
 
Here's an interview from Kuju. They made Battalion Wars for the Wii. They're saying the Wii is basically 2x the power of the GC, and that 3rd parties aren't putting enough effort into their games.

http://www.videogamer.com/news/27-11-2007-6978.html

I think 2x the power isn't so hard to believe -- as opposed to the crazy idiots claiming the Wii could do Gears of War at 480p. Expecting UE3 on the Wii to look like its bigger and more powerful rivals at a lower resolution is crazy, but I don't think it's too unrealistic to think that 3rd parties aren't making use of whatever extra muscle has been added to the Wii. Check out another 3rd party putting some effort into their game.

[Godzilla: Unleashed screenshots]

It's no Crysis, but at least it doesn't look like a PS2 game, or even your average Gamecube game. I don't think it looks as good as Galaxy, but it's damn better than 90% of the other 3rd party efforts out there.
 
Personally I think that Godzilla game looks ugly.

But the interview is right: which 3rd party devs took the effort to create a good-looking Wii game? Which devs took the effort to create a game that at least looks like a high-end GC game? Few to none, I think. The best 3rd party effort I have seen is the last few levels of Red Steel, and that's pretty much made on GC kits.
 
Speaking of Red Steel, that reminds me, is the AI as intelligent as Ubi Soft claims? They said something like the AI's almost as good as FEAR.
 

That was what they were hoping to hit, but they really missed the mark. That Godzilla game doesn't look too different from the ones released on the Cube and Xbox.
 

I think you should go back and check out what they look like again.

Save The Earth on Xbox

[screenshot]

I admit, you have to be a total godzilla nerd to spot the huge difference in the models, but you can at least clearly see the improved shader effects in the skin and the water. Even the snow looks better... well, not like there was any snow in the previous games, but they have bump mapped snow in this. SSX on Wii didn't even have bump mapped snow, and that's a snowboarding game.
 
So a shift from normal mapping to EMBM is an improvement? Granted, Unleashed looks better, but not by a lot.
 
Normal mapping? Save The Earth never used any normal mapping at all. Just regular bump mapping.

Speaking of EMBM, what really pissed me off about the Far Cry port is that they didn't even bother making the water look halfway decent. Forget normal mapping and anything too advanced, they could have used EMBM for the water, but Ubi Soft couldn't even do that. I just recently started playing the game, and I realize how simple yet beautiful water can look even in a game from 2004 (or was it 2005).
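
For anyone unclear on the difference being argued about here, a minimal software-rendering sketch of the two techniques (the texture arrays and the sampling helper are made up for illustration): EMBM simply perturbs an environment-map lookup with a du/dv offset texture, while normal mapping feeds a per-pixel normal into the lighting equation.

```python
import numpy as np

def sample(tex, u, v):
    """Nearest-neighbour texture sample with wrap-around (helper for the sketch)."""
    h, w = tex.shape[:2]
    return tex[int(v * h) % h, int(u * w) % w]

def embm_pixel(env_map, dudv_map, u, v, scale=0.05):
    """Environment-mapped bump mapping: offset the env-map lookup coordinates
    with a du/dv texture. Cheap 'wavy water' look, no real per-pixel lighting."""
    du, dv = sample(dudv_map, u, v)
    return sample(env_map, u + scale * du, v + scale * dv)

def normal_mapped_pixel(albedo_map, normal_map, u, v, light_dir):
    """Normal mapping: per-pixel N.L lighting using a normal stored in a texture."""
    n = sample(normal_map, u, v) * 2.0 - 1.0          # unpack [0,1] -> [-1,1]
    n = n / np.linalg.norm(n)
    ndotl = max(float(np.dot(n, light_dir)), 0.0)
    return sample(albedo_map, u, v) * ndotl
```

As far as I understand, the EMBM-style perturbation maps naturally onto the GC/Wii indirect texture hardware, which is presumably why it reads here as a cheap ask for the water.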
 