If it performed like it had eight pixel pipes for shader ops, I might buy Vince's arguments. When the performance on shaders, supposedly this architecture's raison d'être, is terrible, I will tend to suppose that instead of an intelligent, flexible architecture, the NV30 is a GeForce 4 4x2 architecture with FP shaders awkwardly bolted on.
I also tend to discount the idea that the poor shader performance is due to poor driver optimization. They must have known the instruction scheduling properties of this architecture, and had it running on a simulator, for almost a year now. It's difficult to imagine that the current drivers are so bad that they cannot unlock more than a fraction of the part's shader processing power.
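To put rough numbers on that, here is a minimal back-of-the-envelope sketch. The clock speed and the one-FP-op-per-pipe-per-clock issue rate are assumptions for illustration only, not the real part's figures; the point is just the 2x gap between the two hypotheses.

```python
# Illustrative sketch only: theoretical peak FP shader throughput for two
# hypothetical layouts of the same chip. Clock and issue rate are assumed.

CLOCK_MHZ = 500          # assumed core clock, not NVIDIA's published figure
FP_OPS_PER_PIPE = 1      # assumed: one FP shader op issued per pipe per clock

def shader_ops_per_second(pixel_pipes):
    """Peak FP shader ops/sec if every pipe issues one op per clock."""
    return pixel_pipes * FP_OPS_PER_PIPE * CLOCK_MHZ * 1e6

eight_pipe = shader_ops_per_second(8)   # "true 8-pipe" hypothesis
four_by_two = shader_ops_per_second(4)  # 4x2 hypothesis: the second TMU per
                                        # pipe adds texture lookups, not FP ops

print(f"8-pipe layout: {eight_pipe / 1e9:.1f} G shader ops/s")
print(f"4x2 layout:    {four_by_two / 1e9:.1f} G shader ops/s")
# If measured shader throughput tracks the lower figure (or worse), that is
# more consistent with "4x2 with FP bolted on" than with eight pipes.
```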
If it performed like it had eight pixel pipes for shader ops, I might buy Vince's arguments. When the performance on shaders, supposedly this architecture's raison d'être, is terrible, I will tend to suppose that instead of an intelligent, flexible architecture, the NV30 is a GeForce 4 4x2 architecture with FP shaders awkwardly bolted on.
Excellent point, and I'd tend to agree if I weren't so apprehensive about the extremely efficient fillrate the 4x2 would have to be achieving - but it could go either way, and I'm trying to keep an open mind until we at least get an official explanation of this event; which is, in fact, false.
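For reference, a quick sketch of why the fillrate figure matters to this argument. The clock is an assumed value and the layouts are the two hypotheses being discussed, nothing more.

```python
# Illustrative per-clock capabilities of the two hypothetical layouts.
# The clock is assumed; real efficiency will be lower than these peaks.

CLOCK_MHZ = 500  # assumed core clock

layouts = {
    "8x1 (8 pipes, 1 TMU each)":  {"pipes": 8, "tmus_per_pipe": 1},
    "4x2 (4 pipes, 2 TMUs each)": {"pipes": 4, "tmus_per_pipe": 2},
}

for name, cfg in layouts.items():
    single_tex_mpix = cfg["pipes"] * CLOCK_MHZ                      # Mpixels/s, one texture per pixel
    multi_tex_mtex  = cfg["pipes"] * cfg["tmus_per_pipe"] * CLOCK_MHZ  # Mtexels/s, both TMUs busy
    print(f"{name}: {single_tex_mpix} Mpixels/s single-textured, "
          f"{multi_tex_mtex} Mtexels/s multitextured")

# Both layouts peak at the same texel rate, but single-textured pixel fillrate
# is halved on the 4x2.  A 4x2 part posting near 8-pipe single-texture numbers
# would therefore need implausibly high efficiency, or some other mechanism.
```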
Well, then I was wrong. Cool - is this due to the routines that run under the umbrella of Lightspeed Memory Architecture, or whatever their compression and rejection scheme is called? So, do you have any opinion [public] on the topic?
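In case it helps frame the question, here is a hedged sketch of how occlusion/Z rejection of the sort usually grouped under "Lightspeed Memory Architecture" can inflate an apparent fillrate score. The overdraw factor and rejection rate are made-up illustrative values, not measurements of this part.

```python
# Illustration only: how early rejection can make a fillrate test report more
# pixels per clock than the chip's raw pipes can actually shade.

PIPES = 4            # assumed 4x2-style layout
CLOCK_MHZ = 500      # assumed core clock
OVERDRAW = 2.0       # assumed: each screen pixel submitted twice on average
REJECT_RATE = 0.9    # assumed: fraction of occluded fragments culled before shading

raw_fill = PIPES * CLOCK_MHZ * 1e6                 # fragments actually shaded per second

hidden_fraction = (OVERDRAW - 1.0) / OVERDRAW      # share of submitted fragments that are occluded
culled_free = hidden_fraction * REJECT_RATE        # share discarded at no shading cost
apparent_fill = raw_fill / (1.0 - culled_free)     # submitted fragments the test credits as filled

print(f"raw fillrate:      {raw_fill / 1e6:.0f} Mpixels/s")
print(f"apparent fillrate: {apparent_fill / 1e6:.0f} Mpixels/s")
# With enough overdraw and a good rejection rate, a 4-pipe part can post a
# score well above 4 pixels/clock - the kind of effect the question above asks about.
```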
I think B3D should officially contact Mike Magee (yes, "Flame the Editor") and ask for a public acknowledgement of B3D's contribution to the story, either with an edit to the article (a correction at the top, not the bottom), or with a separate article noting it. Just b/c we're on the internet doesn't mean journalistic ethics fly out the virtual window.
Well, then I was wrong. Cool - is this due to the routines that run under the umbrella of Lightspeed Memory Architecture, or whatever their compression and rejection scheme is called?
At the moment I'm reacting to data, and so far all the data I've seen is leading in one direction. If there is a test that points to something else then I'm keen to see it and try it.
I'll probably save my opinion until the preview, and naturally I shall try and follow up my testing results with NVIDIA themselves and see what they have to say.
If you guys only knew what this thread, with all of its posts, looks like in the eyes of someone who just simply loves to play video games (me)... I realize that these politics must go on behind the curtains somewhere, but I must ask you all: at what point did you evolve from someone like me to the state of what I am seeing displayed before me now? ...I think I will stay a naive gamer, thank you... unless hanging out here has already made it too late for me...