The Wii CPU is equivalent to a 1.4GHz P3 at most...
So a 2.4GHz P4 then?.. Seriously though, the P3 was quite a bit faster than a P4 clock for clock.
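The "2.4GHz P4" quip follows from assuming the P3 did roughly 1.7x the work per clock of a P4; that ratio is my assumption for illustration, not something stated in the thread. A back-of-the-envelope check:

```python
# What P4 clock would match a 1.4 GHz P3, assuming the P3
# retires ~1.7x the work per clock? (ratio is an assumption)
p3_clock_ghz = 1.4
p3_per_clock_advantage = 1.7  # assumed P3-vs-P4 IPC ratio

equivalent_p4_ghz = p3_clock_ghz * p3_per_clock_advantage
print(f"{equivalent_p4_ghz:.1f} GHz P4")  # -> 2.4 GHz P4
```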
The Wii CPU is equivalent to a 1.4GHz P3 at most...
Running P3-optimized code. Which is gonna affect any high-latency/high-clocking designs, as they are sensitive to certain things low-clock CPUs aren't, and I disagree you should generalize that to overall performance.

Teasy said: Seriously though, the P3 was quite a bit faster than a P4 clock for clock.
On a final note, can someone please point me to benchmarks (preferably not some synthetic nonsense) that demonstrate how a 750x-series CPU is supposed to noticeably outperform a P3, let alone by a factor of 2?
Actually the dev (if he's a real dev) says:
So he's saying both are 16 stages but Wii now has twice the texture pipelines.
I suppose this does sound very plausible, given the die size differences between GC and Wii's GPUs. But who knows if it's actually true. If the person making the blog is DrTre from IGN, then I'm pretty sure he's not making this up. But he could have been duped by the 'developer' answering the questions, so who knows.
I thought the consensus (from developers) was that the Gekko@485MHz in GC was roughly equivalent to the 733MHz P3 in Xbox? Going by that, wouldn't a bone stock Gekko@729MHz be equivalent to a 1.1GHz P3? Since Broadway isn't just an overclocked Gekko (faster FSB, more cache etc), wouldn't it be safe to assume it's equivalent to a 1.4GHz P3 at most?
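The linear-scaling arithmetic behind that reasoning can be made explicit. Treating the dev consensus of "485 MHz Gekko ≈ 733 MHz P3" as the baseline, this is just a proportion (a naive sketch that ignores architectural differences):

```python
# Naive linear scaling of the "485 MHz Gekko ~= 733 MHz P3" equivalence
# up to Broadway's 729 MHz clock. Ignores the faster FSB and extra
# cache, which is why the post rounds the ceiling up to "1.4 GHz at most".
gekko_mhz = 485
p3_equiv_mhz = 733
broadway_mhz = 729

scaled_p3_mhz = p3_equiv_mhz * broadway_mhz / gekko_mhz
print(f"~{scaled_p3_mhz / 1000:.1f} GHz P3 equivalent")  # -> ~1.1 GHz
```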
No. Maybe a G4-class processor running some AltiVec-enhanced code, but real-world performance of the PPC750 rates it at perhaps 10-15% faster than the equivalent P3 at some tasks (but not all).
Julian Eggebrecht, president of Factor 5, a developer currently underway with Gamecube projects, offers a different opinion. "Gamecube's Gekko is the most powerful general-purpose CPU ever in a console. The PowerPC alone is so much better and faster structurally that Gekko not only is much, much faster than the PS2's main CPU but every bit as fast as a 733 MHz Pentium," rebukes Eggebrecht. "Don't forget how extremely powerful the 256K second level cache in Gekko makes Gamecube. The size of a CPU's second level cache determines how fast general game-code runs. The PS2 has a tiny 16K of second level cache, and even the X-Box only has 128K."
In terms of how it performs against PS2's Vector Units, Eggebrecht offers the following: "Gekko is not just a plain PowerPC CPU, it has special commands just for games and has special modes making it possible to run hand-written assembler code very, very fast. We did experiments with particles on Gamecube after the N64, and as opposed to the two weeks it took to get the particle system running and optimized on the vector unit it only took two days on Gamecube.
"Based on our calculations, Gamecube's CPU has all the power PS2's CPU and VU0 have combined and then some. ...
Running P3 optimized code. Which is gonna affect any high-latency/high-clocking designs, as they are sensitive to certain things low-clock CPUs aren't, and I disagree you should generalize that to overall performance.
E.g., in a benchmark that flushes the texture cache on every polygon, I'd expect the GS to wipe the floor with Xenos, RSX, and the Wii GPU combined - not just per clock, but in absolute terms. Should I conclude it's the fastest GPU overall?
I also have an issue with the whole "per-clock" performance thing when comparing across a 3-4x MHz gap. Memory subsystems don't scale linearly with clock speed, and neither do execution pipelines - if they did, high-clocked CPUs wouldn't be making the design decisions they do in the first place.
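One way to see the memory-subsystem point: a cache miss has a roughly fixed wall-clock latency, so it costs far more cycles on a high-clocked CPU, which makes per-clock comparisons across a big frequency gap misleading. A sketch with an assumed 100 ns miss latency (both the latency and the 3.2 GHz comparison clock are illustrative assumptions):

```python
# A miss with fixed wall-clock latency costs more cycles at higher
# clock speeds, so "work per clock" degrades even though absolute
# performance does not. 100 ns is an assumed miss latency.
miss_latency_ns = 100.0

for clock_mhz in (729, 3200):  # e.g. Broadway-class vs a high-clocked CPU
    cycles = miss_latency_ns * clock_mhz / 1000.0  # ns * (cycles per ns)
    print(f"{clock_mhz} MHz: {cycles:.0f} cycles per miss")
```

The low-clocked chip loses ~73 cycles per miss while the high-clocked one loses 320, which is exactly why high-clock designs spend transistors on deep buffering and prefetch that look "wasteful" in a per-clock comparison.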
And how can he see that from the SDK?
See what? The 16-stage TEV? Well, I'd imagine the SDK would tell you that.
I still don't think that people properly appreciate what the 750CL accomplishes.
It is small (inexpensive), fits within a very tight power envelope, and performs reasonably.
If we compare to the P-III for instance, the smallest die size was 79 mm² (at 130nm), more than five times the size of the 750CL, and the TDP of the mobile part at 866 MHz was 19.5W. TDPs are not maximum power draw, so let's estimate this as a factor of 5 higher power draw than the 750CL at Wii clocks.
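The ratios work out roughly like this. The P-III numbers (79 mm², 19.5 W) are from the post; the 750CL figures (~15 mm², ~4 W at Wii clocks) are my assumptions chosen to match the post's "more than five times" and "factor of 5" estimates, not datasheet values:

```python
# Rough area and power ratios, P-III mobile vs 750CL.
# P-III figures are from the post; 750CL figures are assumptions
# picked to illustrate the stated ~5x factors.
p3_die_mm2, p3_tdp_w = 79.0, 19.5
cl_die_mm2, cl_power_w = 15.0, 4.0  # assumed 750CL values

print(f"area ratio:  {p3_die_mm2 / cl_die_mm2:.1f}x")   # ~5.3x
print(f"power ratio: {p3_tdp_w / cl_power_w:.1f}x")     # ~4.9x
```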
That seems the issue, and one I wish Nintendo didn't place so much emphasis on. It's not like the GC was a runaway success and everyone wants to carry their existing GC libraries over! Heck, Nintendo were after new customers who'd never played a console before anyway, so why worry about BC with titles they've never heard of? I guess ultimately they wanted the same dev tools as they have now, and weren't willing to develop new tools for new hardware.

But then it wouldn't have the backwards-compatibility-of-questionable-worth.
I guess ultimately they wanted the same dev tools as they have now, and weren't willing to develop new tools for new hardware.
No, the number of pipelines. That's managed from the .lib, and you have to go down to the assembly level to be able to see the number of pipelines.
After hearing things like the Wii SDK being a copy-paste job of the Cube SDK, I think I'll agree 120% with you here. They want to save money across the board, and reusing previous development work is definitely one heck of an extra way to do that beyond simple hardware cost.
That seems the issue, and one I wish Nintendo didn't place so much emphasis on. It's not like the GC was a runaway success and everyone wants to carry their existing GC libraries over! Heck, Nintendo were after new customers who'd never played a console before anyway, so why worry about BC with titles they've never heard of? I guess ultimately they wanted the same dev tools as they have now, and weren't willing to develop new tools for new hardware.
Since the beginning, I've said that Nintendo should have made the case just a bit bigger (still smaller than the original PS2) and ditched BC completely. Let's be honest, very few would mourn the loss of being able to play GC games.