The topic is a fact - think about what ATI did last gen with the HD4 series and now does with the HD5 series, compared to the nVidia competition.
At a quick glance, nV's "truly scalar" design is far more effective, as in, it's far easier to get the "full potential" out of it in real-world situations compared to ATI's VLIW design - and yet last gen ATI was only a tad behind, just like this gen, with substantially smaller chips.
And then there's tessellation - sure, tessellation and geometry performance overall is the strong point of GF100, but what nV does with 16 tessellators, ATI does with 1 or 1½ depending on how you count it.
So what is it that makes ATI's chips so much more efficient, when at least in shader utilization the GeForces should have the clear edge?
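To put some rough numbers on the utilization argument, here's a quick back-of-the-envelope sketch. The ALU counts, shader clocks and die sizes are the commonly quoted figures; the occupancy numbers (how many VLIW slots or scalar lanes actually get filled on average) are purely assumptions for illustration, not measurements:

```python
# Back-of-envelope: peak vs "usable" ALU throughput and perf per mm^2.
# Die sizes and clocks are the commonly quoted figures; the occupancy
# values below are assumptions for illustration only.

chips = {
    # name: (ALU lanes, shader clock in GHz, die size in mm^2, is_vliw)
    "RV770 / HD 4870":   (800,  0.750, 256, True),
    "GT200 / GTX 280":   (240,  1.296, 576, False),
    "Cypress / HD 5870": (1600, 0.850, 334, True),
    "GF100 / GTX 480":   (480,  1.401, 529, False),
}

VLIW_OCCUPANCY = 0.70    # assumed: average slots filled per VLIW5 bundle
SCALAR_OCCUPANCY = 0.95  # assumed: scalar issue rarely leaves lanes idle

for name, (lanes, clk_ghz, die_mm2, is_vliw) in chips.items():
    peak_tflops = lanes * 2 * clk_ghz / 1000.0  # MAD/FMA counted as 2 ops
    occupancy = VLIW_OCCUPANCY if is_vliw else SCALAR_OCCUPANCY
    usable_tflops = peak_tflops * occupancy
    print(f"{name:18s} peak {peak_tflops:4.2f} TFLOPS, "
          f"~{usable_tflops:4.2f} usable, "
          f"{usable_tflops * 1000 / die_mm2:4.1f} GFLOPS/mm^2")
```

Even if you assume the VLIW5 slots sit 30% idle on average, the ATI parts still come out ahead per mm² in this sketch - which is roughly the puzzle I'm asking about.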
edit:
pressed enter too early on the topic, bit drunk here