Tagrineth said:
3dfx wasn't using the 1 Z per pixel trick; I'm not even familiar with how that would work at all
Supersampling: Color per sample, Z per sample
Multisampling: Color per pixel, Z per sample
Multisampling using the 1 Z per pixel trick: Color per pixel, Z per pixel
The advantage is that in many cases, you can use a very basic compression algorithm and still get efficiency as good as, if not better than, the complex compression systems used today.
It works quite well IQ-wise, too. But in cases where the Z test passes for only some of the samples, rather than all or none of them, it can even look as if there were no AA at all. Those cases are rare, but they *do* exist.
You could also do something like storing a Z value every 2 samples - compression wouldn't be as good anymore, but still, about 30% less Z usage isn't that bad! A rough sketch of the per-pixel storage for each scheme is below.
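Just to put some illustrative numbers on those schemes - this is purely a back-of-the-envelope sketch, and the 4x AA / 32-bit color / 24-bit Z formats are my own assumptions, not Rampage's (or anyone's) actual buffer layout:

/* Per-pixel framebuffer storage for the AA schemes discussed above.
 * ASSUMPTIONS: 4x AA, 32-bit color, 24-bit Z; illustrative values only,
 * not the real formats of Rampage, GF3 or NV30. */
#include <stdio.h>

int main(void)
{
    const int samples    = 4;   /* 4x AA */
    const int color_bits = 32;  /* assumed RGBA8 color */
    const int z_bits     = 24;  /* assumed 24-bit depth */

    /* Supersampling: one color value and one Z value per sample. */
    int supersampling = samples * color_bits + samples * z_bits;

    /* Multisampling: one color value per pixel, one Z value per sample. */
    int multisampling = color_bits + samples * z_bits;

    /* Multisampling with the "1 Z per pixel" trick: one color and one Z per pixel. */
    int msaa_one_z = color_bits + z_bits;

    /* The "Z value every 2 samples" variant mentioned above. */
    int msaa_z_per_two = color_bits + (samples / 2) * z_bits;

    printf("Supersampling         : %d bits/pixel\n", supersampling);   /* 224 */
    printf("Multisampling         : %d bits/pixel\n", multisampling);   /* 128 */
    printf("MSAA, 1 Z per pixel   : %d bits/pixel\n", msaa_one_z);      /*  56 */
    printf("MSAA, Z per 2 samples : %d bits/pixel\n", msaa_z_per_two);  /*  80 */
    return 0;
}

With those made-up formats, most of the footprint (and thus bandwidth) is in the per-sample Z data, which is exactly what the 1-Z-per-pixel trick throws away - so a very simple scheme already buys a big saving before any "real" compression is applied.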
The Z check units which would otherwise have gone unused, be it from pipeline combination or from loopback (Rampage doesn't loop, though, IIRC), are used for the additional Z samples, resulting in no fill rate hit.
Makes sense. Not a bad technology, either. But nVidia's "multiple Z units per pipeline" on the GF3 does the same thing, without extra cost when multitexturing.
Rampage has a lot of bandwidth, even more than the GeForce4, and saves some more by not using it for geometry.
Okay, so Rampage had a lot of bandwidth. But even considering the geometry bandwidth advantage, it *still* doesn't have more bandwidth than the NV30, which has Z compression too. And the NV30 at 4x AA is *slower* than the Rampage at 8x AA! That isn't very logical, IMO. The only way it would make sense is if it were a horrible artifact-fest, with at least 2-3% visibility error.
Releasing Rampage as-is would've been tantamount to admitting defeat - you all know that, and if nVidia had done it, they would've been ridiculed big-time by the 'in the know' lot.
Of course. But that wouldn't have prevented them from using Rampage technology in the NV30. Or wait, are you suggesting the GeForce FX sucks because nVidia wanted to make people think 3DFX technology sucked? Hey, wait a second, that makes sense!
And anyway, nVidia had the XBox contract with Microsoft. If they had released a superior product on the PC before the XBox shipped, Microsoft would have sued them like they'd never sued anyone before...
Also, as Uttar said, nVidia did NOT get 100% of 3dfx. They got a very high percentage of the assets, but a lot of engineers went to *ahem* other companies instead.
Well, from my understanding, they received all of the assets (that is, all the intellectual assets - nobody cares about what's in the fridge of the QA room!) *but* only a small fraction of 3DFX's employees. I don't think they hired many of the 3DFX marketing guys (although the accountants/management do seem to have interested them, j/k); it was mostly engineers. IIRC, it was about 100 engineers. Or do I have my figures wrong?
And Uttar: the HSR implemented in Rampage (and partially on VSA) was a stopgap measure before better bandwidth-savers could be implemented in later cores, and a mostly-last-minute addition.
Okay, so that HSR's goal was to increase performance even if it cost you some IQ, and it was probably a setting you could disable.
By "better bandwidth-savers", you're talking about Fusion, right? Or did I get all the codenames confused again?
I'd hazard a guess that maybe 10% of the actual Rampage team went to nVidia.
If those estimates are accurate, it would explain a LOT.
I'd estimate 50% or more joined ATI - am I possibly right on that?
Uttar