It's curious that a 700MHz, 8-pipe, 128-bit DDR GPU is targeting a 6600 and not a 6600GT. Perhaps they're correctly focusing on price b/c their (shader) hardware isn't as efficient per clock, or perhaps their drivers need some work. Or z is right and S3 is competing with full tri while everyone else is bringing bri/try to gameday.
But didn't OGL guy say that S3 implemented some try-like optimizations way back when? Is ATI doing the same thing with try, or are they cutting more corners in the name of speed?
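For anyone fuzzy on what "cutting corners" means here: full trilinear blends two mip levels using the fractional LOD everywhere, while brilinear-style optimizations only blend inside a narrow band around each mip transition and fall back to cheap bilinear (one mip level) elsewhere. A toy sketch of the idea, where the band width and the linear ramp are my own illustrative assumptions, not any vendor's actual hardware behavior:

```python
import math

def mip_blend_fraction(lod, blend_band=1.0):
    """Blend weight between mip floor(lod) and floor(lod)+1.

    blend_band = 1.0 -> full trilinear (blend across the whole interval).
    blend_band < 1.0 -> "brilinear"-style: blend only within a band of
    that width centered on the mip transition; outside it, snap to pure
    bilinear on the nearer mip level (fewer texture samples needed).
    """
    frac = lod - math.floor(lod)   # fractional LOD in [0, 1)
    half = blend_band / 2.0
    if frac < 0.5 - half:
        return 0.0                 # pure bilinear on lower mip
    if frac > 0.5 + half:
        return 1.0                 # pure bilinear on upper mip
    # linear ramp across the remaining band
    return (frac - (0.5 - half)) / blend_band
```

With blend_band=1.0 this degenerates to ordinary trilinear; shrinking the band is exactly the "corner" being cut, trading filtering quality near mip boundaries for fewer samples per fragment.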