John Reynolds said:
Wrong on the low end, because there was no onboard T&L built into Rampage.
There was, but it was a basic VS1.0 T&L engine, specifically so they could claim DX8 compatibility on the low-end model. It wasn't anywhere near as fast as GeForce3's built-in T&L, probably not even as fast as GF2's, but hell, it was there.
Rampage was never meant to be more than dual-chip in consumer space - 3dfx learned their lesson with the incredibly infamous Voodoo5 6000.
Rampage did make it to silicon; however, one COULD make the argument that SAGE never existed, as it was still about a week or two from tapeout - so there really WASN'T ever a real SAGE.
Rampage had a really weird method for performance aniso: it would jitter the sample points while doing MSAA and push back the LOD. Trust me, in theory it should've looked better than what XGI is doing, but honestly not as good as true AF. They did support pure AF along the lines of GF3/4Ti, though. For its time, their performance AF would've been quite impressive, but of course today it'd look quite ridiculous compared to R3x0's and NV3x's implementations.
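To make the idea concrete, here's a back-of-the-envelope sketch of the trick as I described it - pick a sharper mip level than isotropic filtering would (the "pushed-back" LOD), then cover the stretched pixel footprint with jittered extra samples. This is purely my own toy illustration; the function names and the exact LOD rule are assumptions, not anything from real Rampage documentation.

```python
import math
import random

def isotropic_lod(footprint):
    # Standard isotropic LOD: log2 of the LONGER footprint axis,
    # which blurs anisotropic footprints into mush.
    return math.log2(max(footprint))

def performance_aniso_lod(footprint):
    # Hypothetical "pushed-back" LOD: key off the SHORTER axis instead,
    # keeping the texture sharper along the direction of stretch.
    return math.log2(min(footprint))

def jittered_offsets(footprint, n, rng):
    # Spread n sample points along the long axis of the footprint,
    # each jittered within its slot - the kind of pattern that could
    # fold into an existing MSAA sample grid.
    long_axis = max(footprint)
    step = long_axis / n
    return [(i + rng.random()) * step - long_axis / 2 for i in range(n)]

# An 8:2 anisotropic footprint: isotropic filtering picks mip 3 (blurry),
# the performance-aniso trick picks mip 1 and adds jittered taps instead.
```

The averaged jittered taps approximate the footprint's coverage, which is why it could look better than plain trilinear, while still falling short of true AF's properly weighted line of samples.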
NV25 incorporated almost all of Rampage's 2D engine.
I'm sure there are snippets of Rampage basic design in NV3x, but of course I can't be sure of this.
One facet of Rampage actually is used by R3x0 (I can't remember if NV3x uses it too): to make up for a lack of extra Z test units, the core can borrow and re-use them during texture loops (I believe we found that R300 has two Z units per pipeline), tremendously reducing pipeline strain when performing MSAA with multiple texture layers.
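To show why that helps, here's a toy cycle-count model - my own illustrative numbers, not real Rampage or R300 timing. The point is just that if the per-sample Z tests can run underneath the texture-loop cycles instead of being serialized after them, the extra MSAA cost mostly disappears once you're multitexturing anyway.

```python
import math

def cycles_per_pixel(texture_layers, msaa_samples, z_units, reuse_during_loop):
    # Toy model: each texture layer costs one loop cycle, and each cycle
    # the pipeline's z_units can perform that many Z tests.
    tex_cycles = texture_layers
    z_cycles = math.ceil(msaa_samples / z_units)
    if reuse_during_loop:
        # Z tests hide under the texture loop; only the overflow shows.
        return max(tex_cycles, z_cycles)
    # Otherwise the Z tests serialize after texturing.
    return tex_cycles + z_cycles
```

With 4 texture layers, 4x MSAA, and 2 Z units per pipe, the model gives 6 cycles without reuse but only 4 with it - the Z work rides along for free under the texture loop.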
I'm fairly sure the single-Rampage/single-Sage board was targeted to compete with GF3, and would've probably tied or slightly lost to the GF3 in most tests, but the dual-Rampage/single-Sage board would've quite handily crushed GF3.
Now, two points I'll address directly with quotes:
2. If rampage was so good, I can't see nvidia not using it. If it was just slightly better, I could see them using their own stuff for pride reasons. (or if rampage was like twice the price)
Rampage was good for its time, but by the time nVidia got it, they already had the not-necessarily-superior-but-one-hell-of-a-lot-cheaper GF4Ti nearly ready, so there was really no point in pursuing it. Besides, releasing Rampage would've basically amounted to nVidia admitting defeat and saying "Hey, yes, we bought them because they had a superior product and we were scared of them!"
3. I thought the initial specs 3dfx gave out about the rampage put the single chip version at the voodoo5 performance level, the dual chip at voodoo 5 6000, and a quad chip at twice that. So the mainstream part sounds more like a competitor to geforce 3, though I suppose maybe performance was underestimated and it could get that extra 50% to reach a geforce 4.
The thing about Rampage is, it had some new bandwidth-saving tech compared to VSA-100, came at a higher default clock, had two more pipelines per core, supported DDR memory, and was generally laid out more efficiently - keep in mind, VSA-100 natively ran everything through what amounts to a hacked-to-all-hell Glide! Rampage was designed from the ground up for D3D and OGL (D3D being the primary platform - nVidia's being OGL, of course).
The main difference, though, was that Rampage was due out the door before GF3 - April 2001 was the target release date - and the core was mostly bug-free, the main issue being a boneheaded reversed DAC on all three working cores produced.