Uh, are you sure? GF3 cards launched with 460 MHz (effective) memory, and I have a really hard time believing the Box version of the GPU pre-dates the desktop version to such a degree that this faster memory wasn't on Nvidia's roadmap when the NV2A was conceived.
Actually, now that I think about it, weren't GF3 Ti500s out already by the time the Box launched? Or at least "released"?
I did say when it was designed, not when it launched. The design called for the fastest memory available at the time. They didn't upgrade it later, the way MS may do with Xbox 2.
I think the key issue being implied was that the GeForce 3 was perhaps too strapped for bandwidth to put even one vertex engine to good use, so expecting a second vertex engine (in the XGPU) to prove useful while competing for the same bandwidth is a bit dubious. Maybe the prospect of dual vertex engines was a bit optimistic and premature given the state high-speed memory and bus architectures were in at the time.
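Just to put rough numbers on that intuition, here's a back-of-envelope sketch. The vertex size and vertex rate below are purely illustrative assumptions; only the ~7.4 GB/s figure comes from the commonly quoted GeForce 3 setup of 230 MHz DDR on a 128-bit bus:

```python
# Back-of-envelope: how much of a GeForce 3-class memory bus raw vertex
# fetch alone might eat. Vertex size and vertex rate are illustrative
# assumptions, not measured figures.

VERTEX_BYTES = 32                # assumed: position + normal + one UV set (float32)
VERTS_PER_SEC = 30e6             # assumed sustained vertex rate
BUS_BANDWIDTH = 230e6 * 2 * 16   # 230 MHz DDR on a 128-bit bus ~= 7.36 GB/s

vertex_traffic = VERTEX_BYTES * VERTS_PER_SEC   # ~0.96 GB/s of vertex fetch
share = vertex_traffic / BUS_BANDWIDTH          # ~13% of the bus

print(f"Vertex fetch: {vertex_traffic / 1e9:.2f} GB/s "
      f"({share:.0%} of {BUS_BANDWIDTH / 1e9:.2f} GB/s), "
      "before any texture, framebuffer, or Z traffic.")
```

Whether that share is a squeeze or not depends entirely on how much texture, framebuffer, and Z traffic is fighting for the rest of the bus.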
...er, maybe I was thinking of texture pipelines when I made that spiel about vertex engines? Ahhh, whatever!
Yeah, I was just about to say that the dual vertex engines in the Xbox worked out very well for the console and proved extremely useful. The GeForce 3 had other bottlenecks to worry about on top of memory bandwidth.
But Quincy, the fact that the Box specs called for the fastest memory of the time doesn't change that it still launched with slower memory than its desktop counterpart, so I don't really see your point here. What exactly is your point, really?
That 400 MHz effective memory was the fastest at some point doesn't count for a whole lot of beans when considerably faster memory was already available by the time the thing actually went on sale, and it could easily have been incorporated into the design had the will for it existed (except it'd have meant another financial kick in the balls to MS's wallet, of course).
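For concreteness, here are the effective-bandwidth numbers this argument keeps circling around, a minimal sketch using the commonly quoted clocks (all three parts on 128-bit DDR buses; note the Xbox figure is also shared with the CPU over the unified memory bus, which this ignores):

```python
# Rough effective-bandwidth comparison on 128-bit DDR buses.
# Clock figures are the commonly quoted ones; treat them as approximate.

BUS_BYTES = 16  # 128-bit bus

parts = {
    "Xbox NV2A (200 MHz DDR, unified/shared with CPU)": 200e6 * 2 * BUS_BYTES,
    "GeForce 3 (230 MHz DDR)": 230e6 * 2 * BUS_BYTES,
    "GeForce 3 Ti 500 (250 MHz DDR)": 250e6 * 2 * BUS_BYTES,
}

for name, bw in parts.items():
    print(f"{name}: {bw / 1e9:.2f} GB/s")
# -> roughly 6.4, 7.4, and 8.0 GB/s respectively
```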
Guden wrote:
"But Quincy, the fact that the Box specs called for the fastest memory of the time doesn't change that it still launched with slower memory than its desktop counterpart, so I don't really see your point here. What exactly is your point, really?"
My point is based on what you wrote earlier, quoted below:
Guden wrote:
"You guys are nuts. Why on Earth do you expect MS to take a top-of-the-line graphics chip, DOUBLE it, and then stick it in a cheap piece of consumer electronics?"
MS and Nvidia did just that: they took the fastest GPU, doubled the vertex performance, paired it with 64 megs of the fastest RAM at the time, and put it in some relatively cheap consumer electronics. They did exactly what you just said.
Now, when you replied to people with...
Guden wrote:
...And MUCH less bandwidth available to it, and less effective memory too.
I said they used the fastest memory available at the time the Xbox was designed. Just because a PC card out at the time of the Xbox's launch used faster memory, what difference does it make that it had less bandwidth than a card released a year after its specs froze? Who cares if it had less effective memory than most PC cards on the market at the time? What's your point exactly?
Nothing stops MS from using the fastest memory out there up until the point of spec freeze, and with Xbox 2 they haven't frozen the specs as of yet. It's not like the memory MS ended up using really hurt them; it was still damn fast. I really don't get the point you're trying to get across.