If Hollywood were showing framebuffer effects and multisampling up the wazoo, I'd accept that as the eDRAM showing its value.
It already is whenever you pop in one of the AAA Gamecube titles.
But right now the Wii is struggling to put out graphics that cards with 1/10th its memory bandwidth can manage.
You mean like the Radeon 7000 series and the GeForce 2 MX? I think it's been too long since you had one of those cards, because I still have a GF2 MX in an older machine, and it can't do jack.
I seem to recall the Gamecube's bandwidth being only around 10 GB/s, and a 50% overclock would only bring that up to 15 GB/s.
I got the specs from the wrong source. However, the 10.4 GB/s was just to the texture cache; the framebuffer had another 7.8 GB/s, for a total of 18.2 GB/s. A 50% overclock would give you 27.3 GB/s.
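Just to make the arithmetic explicit (a quick sketch using only the figures above):

```python
# Flipper's embedded memory bandwidth, per the figures above (GB/s)
texture_cache_bw = 10.4
framebuffer_bw = 7.8

total_bw = texture_cache_bw + framebuffer_bw
print(total_bw)        # 18.2 GB/s combined
print(total_bw * 1.5)  # 27.3 GB/s with a 50% overclock
```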
You know, even retrospectively I'm not sure what the rationale was for choosing eDRAM over a more powerful chip.
The bandwidth makes it easier to actually use the available power. Take the N64 as the extreme counterexample. Its logic silicon was theoretically pretty awesome for the time, but the high-latency, low-bandwidth RAM meant that only programming gods could get non-craptacular graphics out of the machine. With Gamecube, even EA could get the machine making pretty pictures with minimal effort. Think about it: this is a machine that could push only 30m polys/sec
max theoretical, but could do ~12m in-game. That's 40% of its available peak. Xbox could do 106m theoretically, but you'd never get anywhere close to 40m in-game, ever. NV2a had all kinds of power that couldn't be tapped because it was bandwidth-starved. The fact is, your processor can't do anything if there's no data for it to process...it just sits around picking its nose. Another good example is the entire GeForce FX series, which was completely bottlenecked by bandwidth. Great chips on paper, but the memory inefficiency meant Radeon 9x00 cards crapped all over them.
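Putting those throughput numbers side by side (a rough sketch; the in-game figures are the estimates above, not measured values):

```python
# Theoretical peak vs. typical in-game polygon throughput (polys/sec)
gamecube_peak, gamecube_ingame = 30e6, 12e6
xbox_peak = 106e6

print(gamecube_ingame / gamecube_peak)  # 0.4 -> Gamecube games hit ~40% of peak
print(0.4 * xbox_peak)                  # ~42.4m polys/sec: what Gamecube-level
                                        # efficiency would have meant on Xbox
```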
We know why the 360 has eDRAM, even if games aren't really utilizing it, but the Cube wasn't capable of the same type of multisampling effects.
First, as we all know, special effects on Gamecube depended on multitexturing, and high bandwidth and low latency to your texture memory are absolutely key to taking advantage of it. Second, AAA Cube games had lots of framebuffer effects. Off the top of my head, the water and lava in RE4, the targeting computer in the Rogue Squadron games, the proton bombs in Rebel Strike, the depth-of-field in Wind Waker, Baten Kaitos, and Mario Sunshine, and numerous special effects in Metroid Prime 1 & 2 were all framebuffer effects.
Then again, the Cube usually had the worst version of any multi-platform game.
Generally because it lacked online. Graphically, the PS2 was usually the worst.
In retrospect, what was the point of MS blowing all that money on NV2a? NV2a was significantly more powerful than a GF3, but in real-world games, a PC with a GeForce 3 would generally run the same titles at higher resolutions, with better framerates and better textures than the Xbox, and a GeForce 4 would just blow it away.
To put it in perspective, the NV2a theoretically processed polygons 3.5x as fast as Flipper, had 3x the texel fillrate and 5.7x the pixel fillrate, 2.7x as much RAM, and 10x the overall floating-point performance. Save for bandwidth and texture passes, the slowest things on NV2a were still roughly three times as fast as their counterparts on Flipper.
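As a quick sanity check on those multipliers (just arithmetic on the peak figures already quoted in this thread, nothing new):

```python
# Theoretical polygon peaks quoted earlier in the thread (polys/sec)
flipper_peak = 30e6    # Gamecube
nv2a_peak = 106e6      # Xbox

# Ratio works out to ~3.53, which lines up with the "3.5x" figure above
print(nv2a_peak / flipper_peak)
```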
Theoretically, Xbox should have been able to run Rebel Strike in HD with ~3x the number of TIE fighters on screen, 3x the geometric and texture detail on the terrain and/or capital ships, and possibly even more shader effects on top of that. It should have been able to run Half-Life 2 and Doom 3 comfortably at high settings and a steady 30 fps. There are a lot of things NV2a could have done that it never will, because the system it was embedded in just didn't cut it.