3dfx/3dhq Revenge superchip.

MuFu said:
Sure, but one of the main reasons Creative swallowed Aureal was to finally bury all the Patent 990 litigation. Although A3D 2.0 was a fantastic technology I really don't think Aureal were head and shoulders above Creative in terms of tech, as so many people believed. In the 10K1 the latter probably had the upper hand when it came to handling multiple audio streams in a desktop environment and superior MIDI features, to name just a couple of areas. Plus they already had a very exploitable engine in EAX and of course there is no reason why they wouldn't have embraced Aureal tech since the acquisition, as nV surely will have embraced 3Dfx ideas.

3dfx and nVidia were in court too, and 3dfx was set to win :) You think nVidia didn't want to kill that before it ended?

nVidia will and is embracing 3dfx ideas, slowly but surely. GeForce4 has a good chunk of Rampage's 2D engine, keep in mind... but that's the only explicit use of 3dfx tech thus far.

I suspect some of NV30's tech IS 3dfx-influenced, for example the 4 colour-ops per cycle with two textures (4x2 architecture) which can do 8 any-other-ops per cycle (the whole 8 pipelines debacle)... 3dfx were good at tricks like that. :)

I just don't believe Rampage was as ahead of its time as you claim. They canned it because it really wasn't all that great - if it was, we would have seen explicit use of associated technologies by now.

The only explicit use so far is GeForce4's 2D engine, as stated.

I still say it's because nVidia doesn't want to admit their most bitter rival was in fact better than them... because that's what they'd be doing if they used specific tech from Rampage.

Besides... what explicit tech would they use, anyway? It's Rampage as a whole that's really significant (or at least, was). It's still only a DX8.0 part, when you get right down to it, and a lot of its really neat tricks are done better in some aspects anyway (for example, 'borrowing' Z-check units from other pipelines while multitexturing + MSAA compared to having multiple Z-checkers in a single pipe). Rampage as a whole would have been the first DX8 core, and would still be ridiculously fast in a select few situations.
 
Hey remember 3Dfx stating their interest in acquiring Aureal? That would have been one to watch... :LOL:

A lot of parallels between those two companies.

MuFu.
 
nVidia will and is embracing 3dfx ideas, slowly but surely.

We noticed ;) *cough* nv30 disaster *cough*

Good ol' Norton Antivirus found something... W32.3Dfx-Trojan-horse. (R)emove, (A)cquire, (L)et it fail in peace?
 
Tagrineth said:
Creepy. But 3dfx didn't do that.
In a way, that's a good thing. But in another, it makes me wonder how the heck Rampage managed to do 180FPS with 8x AA... :( I could understand it if they had Z Compression, but without that? Seems strange...

There *are* rumors of Z Compression not being applied when enabling AA on a GF3 - so the performance hit on a Rampage should be pretty much the same, except that the Rampage got a lot more memory bandwidth - but that couldn't make that much of an impact, anyway.

True. Actually, Rampage has no extra cost when not multitexturing, so in a heavily multitextured situation Rampage is much more efficient (transistors saved).
Yeah, we agree on that one - Rampage's tech was more efficient *when* multitexturing, since it then saved transistors. But if the trend *is* that singletexturing is going to make a comeback for a significant part of the scene (15%? 20%?), then this technology would be worthless today.

And anyway, does that mean Rampage's "Free" FSAA myth comes from something nVidia does BETTER? Hehe, now that's a good one :)

It has 200MHz DDR on a 256-bit bus, NV30 has 500MHz DDR on a 128-bit bus... making it 12.8GB/sec versus 16GB/sec. Factor in that NV30 is also using a chunk of that for geometry, which Rampage isn't; that NV30 is using a 100% precise HSR algo (probably eliminates a little less overdraw on average); that NV30 is a flawed architecture from the get-go...
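A quick sanity check on those two figures - just the standard peak-DDR rule of thumb (clock x 2 transfers per clock x bus width in bytes), using nothing beyond the clocks and bus widths quoted above. A minimal Python sketch:

[code]
# Peak DDR bandwidth: clock * 2 transfers/clock * bus width in bytes.
def ddr_bandwidth_gbs(clock_mhz: float, bus_bits: int) -> float:
    return clock_mhz * 2 * (bus_bits / 8) / 1000

print(ddr_bandwidth_gbs(200, 256))  # dual-chip Rampage: 12.8 GB/sec
print(ddr_bandwidth_gbs(500, 128))  # NV30: 16.0 GB/sec
[/code]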

Okay, so the NV30 is a flawed architecture. But the R300 isn't.
And the Radeon 9700, with 20GB/s of bandwidth, can't even be as fast!
http://www.theinquirer.net/default.aspx?article=5023

The R300, with 6x FSAA and no AF, plays at 140FPS! Okay, they're old numbers (okay, it's The Inquirer too, but similar numbers exist in other places) with old drivers, but still...

Also consider that Rampage is using a nearly-free 'optimised' AF algo which doesn't need to sample as much, while NV30 is using a slightly optimised algo which still does take a lot of additional samples.
Okay, fine - but without quality comparisons, it'd be hard to determine just how good Rampage's algorithm was.



Heh. Well, maybe. There's also the rumours of sabotage... nVidia sabotaged 3dfx's VSA line (which IS possible considering certain evidence), so bitter 3dfx engineers sabotaged key points of NV30 (again possible).

It's VERY far-fetched though, and has disturbing ramifications... I doubt it's true at all. But there are some BIG 'oopses' at 3dfx that can't be ignored or written off...
Like the rumor of someone who "forgot" to take chips back from the fab? Hehe, I loved that one :) But yeah, it does seem rather far-fetched.
If it was true - and that's nearly impossible - then nVidia would have fired the employees who did the sabotage. Which means there'd be no more sabotage for the NV35, if there ever was any, so it's probably a nonexistent problem we can already forget about ;)


Don't know the exact percentages, but yes, a lot went to ATi, some went to Matrox, and a good chunk of the 3dfx core went... elsewhere. (mysterious music plays)

Here are all the explanations I can find right now (in no particular order):
1. The XP4/Xabre *is* Rampage. Transistor count is about the same, and they don't seem to have real VS support either - sounds like they didn't get their hands on Sage! :p
2. They suicided. ( :( That would be bad... )
3. They joined 3DHQ, and Revenge is for real (yeah, right...)
4. They joined PowerVR, because they loved the whole GP idea.
5. They were really Jen Hsun Huang in disguise all along.
6. They joined Intel, and are currently working on something that'll crush nVidia & ATI forever. Or at least it'll be a lot better than Intel's current offerings, even if it might just end up integrated...

The last one seems the most likely, IMHO.


Uttar
 
Hmm, now that you put it that way, Ailuros, maybe the whole HSR thing makes no sense...
But what you've got to question is not whether HSR made sense. It's whether it was active in that 180FPS test! Because we don't know if it was an artifact-fest or not.

There are references on more than one website that tested the driver function on V5's if you want to really find out whether there were artifacts and under what conditions. Unless performance was limited to an upper threshold, the more aggressive the setting, the more the artifacts.

The rumour mill back then wanted Fear to achieve close to 200fps in q3a with just 4xMSAA on, but running at an insanely and rather unrealistic for final release clockspeed. I believe Fear was to be a TBDR; do I need to elaborate on HSR in that department? The little KYRO has 32 Z/stencil units alone, and I'm not going any further into this one, other than that the claims for the frequency of Fear running in the lab went as high as 500MHz.

What we DO know is that it probably *is* possible to implement that on any GPU. We also know that it can give nice performance boosts.

Where are the educated folks when you need them? How about, first of all, a fair explanation of how exactly the BSP system of the q3a engine works?

And with 8x AA and no Early Z, the boosts you can get with such a thing could be quite stunning indeed. Of course, there probably are artifacts - but it would certainly explain the 180FPS score. If it was the case, maybe my "one Z by pixel" theory isn't even required anymore.

Again even dual chip Spectre was limited to JUST 4x sample AA, despite the fairy tales that float around. Why? Not enough bandwidth for starters.

Also, you've got to realize that the NV30 design was already underway.
I'd guess Jen Hsun Huang and Marketing's goal of "Cinematic Computing" had already been given to the nVidia employees. And when 3DFX employees came in and learned about that goal, they probably thought:
"Yeah, right... Let's let them decide on the design. If we propose our technology, they're gonna ask us to make it 'Cinematic Compliant'. Pfff..."

Says who? Former 3dfx folks had their hands already in the NV25 development for some minor aspects, and worked with the other assigned engineers for the NV30 project.

More seriously though, I believe nVidia didn't have the time to include most of the 3DFX tech yet. nVidia has a lot of its own tech, too, and finding ways to unite both isn't always so obvious. The GeForce FX probably got some 3DFX tech, but not that much of it.

It is my understanding that past NV25, all former 3dfx/GP joint designs cannot leave much more headroom for implementations than those that have already been conducted, unless NVIDIA decides to move completely to deferred rendering.

The NV40 is really when you're gonna see the 3DFX (& GigaPixel, too!) influence, according to early rumors.

That's what I heard exactly for the NV25 and NV30 accordingly. If that one shouldn't come true, the next best guesstimate will be NV50 or NV60, up to eternity. In the meantime no one seems to realize that what 3dfx had on its drawing boards by the end of 2000 might well be outdated today.

They are all NVIDIA engineers today, and it's not as if this is the first significant number of former 3dfx engineers to find a place to work at NVIDIA. Just one name is enough for those with a good memory: BALLARD.

Rampage has a lot of bandwidth, moreso even than GeForce4, and saves some from not using it for geometry.

True compared to the NV25.

Dual chip Rampage= 12.8GB/sec bandwidth

Fear, which was to follow, was meant only for single chip solutions. Let's see: the most probable final clockspeed should have been between 250-300MHz, where DDR RAM availability and pricing would have played a very important role.

Since core and memory would most likely have run at isochronous speeds:

@250MHz = 8.0GB/sec bandwidth
@300MHz = 9.6GB/sec bandwidth

Any particular reason why at first glance they seemed to lower raw memory bandwidth? If Spectre had been a single chip TBDR, its 6.4GB/sec bandwidth would have been enough to compete against NV20. Wild exaggerations? Not in the least wilder than a supposed software hack that churns out completely unrealistic scores.
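For what it's worth, every figure in this exchange drops out of the same rule of thumb with a 128-bit DDR bus run at the core clock (the isochronous assumption above) - the bus width is my inference from the numbers, not a published spec:

[code]
# Assumed 128-bit DDR bus (16 bytes/transfer), clocked with the core:
for mhz in (200, 250, 300):
    print(f"@{mhz}MHz = {mhz * 2 * 16 / 1000:.1f}GB/sec per chip")
# -> 6.4 (x2 chips = 12.8GB/sec for dual Rampage), 8.0 and 9.6 for Fear
[/code]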

Also as Uttar said, nVidia did NOT get 100% of 3dfx. They got a very high percentage of assets, but a lot of engineers went to *ahem* other companies instead. And Uttar: the HSR implemented in Rampage (and partially on VSA) was a stopgap measure before better bandwidth-savers could be implemented in later cores, and a mostly-last-minute addition.

Well *ahem* let's hear then where they went to. I'm sure you can even name the companies. You couldn't by any chance verify where exactly the father of the supposed HSR thingy works today, can you?

From what I understand, 3dfx's last days were nuts. Nobody was really focusing on anything, except the core Rampage team. I'd hazard a guess that maybe 10% of the actual Rampage team went to nVidia.

I recall NV hiring around 100 people back then, and I dare to predict that they of course took the most talented and gifted engineers w/o a second thought.

By which exact calculations have you reached that 10%? If you recall a company that had engineers to spare, I personally don't.

I can see you daydreamers harping on the same old tiresome stuff even if we get cards with over 50GB/sec raw bandwidth and 5x the rendering power of today. Then the same old tune will still be... yeah, but Rampage would have........... ack.
 
http://www.lostcircuits.com/video/sound_compare1

Nvidia, who had built a reputation for designing 3D video chips, quietly hired some former employees of Aureal, but nobody really knew what Nvidia wanted with them...

...Nvidia revealed what they were doing with those ex-Aureal employees when it unveiled the original Nforce chipset featuring not only a built-in Nvidia 3D accelerator, but a new audio solution that took 5.1 channel support a step further than anybody else with on-the-fly Dolby Digital encoding.

The plot thickens...

MuFu.
 
Ailuros said:
There are references on more than one website that tested the driver function on V5's if you want to really find out whether there were artifacts and under what conditions. Unless performance was limited to an upper threshold, the more aggressive the setting, the more the artifacts.

Indeed. But with care you could get few to no visible artefacts and fantastic performance increases - especially with FSAA, and that's SuperSampled AA, not MS.


Again even dual chip Spectre was limited to JUST 4x sample AA, despite the fairy tales that float around. Why? Not enough bandwidth for starters.

SO if "NOT ENOUGH BANDWIDTH" is the excuse, please explain to everyone why 3dfx was going to push 8x FSAA with the Voodoo5 6000, which had even LESS bandwidth (10.4GB/sec) and LESS memory per chip (only 32MB)?

Says who? Former 3dfx folks had their hands already in the NV25 development for some minor aspects, and worked with the other assigned engineers for the NV30 project.

Yeah, most of the '3dfx tech' is "new" 3dfx tech, not the old stuff. The old stuff, individually, is more or less obsolete now.

It is my understanding that past NV25, all former 3dfx/GP joint designs cannot leave much more headroom for implementations than those that have already been conducted, unless NVIDIA decides to move completely to deferred rendering.

Pretty much.

That's what I heard exactly for the NV25 and NV30 accordingly. If that one shouldn't come true, the next best guesstimate will be NV50 or NV60, up to eternity. In the meantime no one seems to realize that what 3dfx had on its drawing boards by the end of 2000 might well be outdated today.

Actually NV25 does have some direct 3dfx tech in it: the 2D engine. Look it up. I'd assume NV30 inherits the same engine...

They are all NVIDIA engineers today, and it's not as if this is the first significant number of former 3dfx engineers to find a place to work at NVIDIA. Just one name is enough for those with a good memory: BALLARD.

Or how about everyone's favourite example, Brian Burke?

Rampage has a lot of bandwidth, moreso even than GeForce4, and saves some from not using it for geometry.

True compared to the NV25.

Dual chip Rampage= 12.8GB/sec bandwidth

Fear, which was to follow, was meant only for single chip solutions. Let's see: the most probable final clockspeed should have been between 250-300MHz, where DDR RAM availability and pricing would have played a very important role.

Since core and memory would most likely have run at isochronous speeds:

@250MHz = 8.0GB/sec bandwidth
@300MHz = 9.6GB/sec bandwidth

Any particular reason why at first glance they seemed to lower raw memory bandwidth? If Spectre had been a single chip TBDR, its 6.4GB/sec bandwidth would have been enough to compete against NV20. Wild exaggerations? Not in the least wilder than a supposed software hack that churns out completely unrealistic scores.

Fear was NOT a deferred renderer. Fear was a direct derivative of Rampage.

MOJO was to be the full deferred renderer.

Get your obscure 3dfx lore straight, dear! ;)

Well *ahem* let's hear then where they went to. I'm sure you can even name the companies. You couldn't by any chance verify where exactly the father of the supposed HSR thingy works today, can you?

I don't care where the father of VSA's HSR went. I really don't. But you people are missing the obvious. It really would NOT be good if I named the 'mystery company' though. It isn't that big a stretch of logic, if you ask me.

And please. Anyone here I may have told at some point? QUIET. This is NOT a good situation for a few individuals right now.

I recall NV hiring around 100 people back then, and I dare to predict that they of course took the most talented and gifted engineers w/o a second thought.

By which exact calculations have you reached that 10%? If you recall a company that had engineers to spare, I personally don't.

Heh... teh mystery company. And that number of engineers would be in a lot of trouble right now if we talked about them too much. So this tack is hereby ignored by me... I really can't give any more answers.

I *can* say that some of course went to ATi and Matrox.

I can see you daydreamers harping on the same old tiresome stuff even if we get cards with over 50GB/sec raw bandwidth and 5x the rendering power of today. Then the same old tune will still be... yeah, but Rampage would have........... ack.

Dude. The whole point to this thread is to dispel the myths around Rampage and try to write a few truths in stone. Can't you see that?
 
SO if "NOT ENOUGH BANDWIDTH" is the excuse, please explain to everyone why 3dfx was going to push 8x FSAA with the Voodoo5 6000, which had even LESS bandwidth (10.4GB/sec) and LESS memory per chip (only 32MB)?

Which would have been completely unplayable, except in a very few CPU-bound corner cases and low resolutions.

Is that the ONLY proof you have that Spectre in fact allowed 8x sample AA? Of course it would have been possible to enable even on single chip Rampage, but that still doesn't mean they had plans up to the end for more than 4x samples.

Indeed. But with care you could get few to no visible artefacts and fantastic performance increases - especially with FSAA, and that's SuperSampled AA, not MS.

*yawn*

Yeah, most of the '3dfx tech' is "new" 3dfx tech, not the old stuff. The old stuff, individually, is more or less obsolete now.

Well, for blindfolded 3dhq fans, obviously the late 3dfx had already developed DX10 chips.

Pretty much.

That I have to see first (concerning the switch to a full TBDR)

Actually NV25 does have some direct 3dfx tech in it: the 2D engine. Look it up. I'd assume NV30 inherits the same engine...

Thanks for the insufficient lecture, but there's more to it, even for the NV25.

Fear was NOT a deferred renderer. Fear was a direct derivative of Rampage.

MOJO was to be the full deferred renderer.

Get your obscure 3dfx lore straight, dear!

Obscure is nice coming from a daydreamer who is still NAIVE enough to believe that Spectre was capable of 180 or more fps with AA/AF on.

You don't by any chance have a reasonable speclist for it, do you?

I still wonder how it was slated to be a GF3-Killer with the specs I have in mind, and as a single chip at that.

I don't care where the father of VSA's HSR went. I really don't.

It doesn't serve the purpose, does it?

It really would NOT be good if I named the 'mystery company' though. It isn't that big a stretch of logic, if you ask me.

I stated myself that he works at NVIDIA and asked for verification. Clever excuse-line but I'm not eating it.

And please. Anyone here I may have told at some point? QUIET. This is NOT a good situation for a few individuals right now.

That should have been a consideration prior to publishing that idiotic speclist across several websites.

The only thing I really feel sorry about is the dozens of users some of you managed to convince to believe in this kind of crap.

Heh... teh mystery company. And that number of engineers would be in a lot of trouble right now if we talked about them too much. So this tack is hereby ignored by me... I really can't give any more answers.

10% of what again? Don't bother - they had 500 engineers and we didn't know it.

Dude. The whole point to this thread is to dispel the myths around Rampage and try to write a few truths in stone. Can't you see that?

The only truth there was - and still under a very hypothetical scenario, when dealing with vapourware - is that Spectre would have been very good competition for NV20. It was no GF3-Killer, however you want to harp on it from different perspectives.

I'd expect some of you, after three long years, to finally wake up. It has gotten so bad in the past two years that most of the time I feel uncomfortable stating that I used to have a preference for 3dfx.
 
Ailuros said:
Which would have been completely unplayable, except in a very few CPU-bound corner cases and low resolutions.

Is that the ONLY proof you have that Spectre in fact allowed 8x sample AA? Of course it would have been possible to enable even on single chip Rampage, but that still doesn't mean they had plans up to the end for more than 4x samples.

Well what other proof am I supposed to provide? A screenshot or something? As far as I know the test boards in question have been taken offline. They TECHNICALLY aren't even supposed to exist.

4x AA is equally unplayable on a V5. But talk to ANYONE at SimHQ on that tack.

The biggest proof of Spectre allowing 8x AA is simply that there's no reason for it NOT to.

And finally, would a 3dfx Rampage PowerPoint which even mentions 16x AA satisfy you on this tack?!

Indeed. But with care you could get few to no visible artefacts and fantastic performance increases - especially with FSAA, and that's SuperSampled AA, not MS.

*yawn*

Fuck what the creator of the HSR wants to think. He created a technique that CAN work under controlled conditions. There have been TWO people here who have given their testimony - one of whom referred to a Voodoo3. Not even VSA.

Well, for blindfolded 3dhq fans, obviously the late 3dfx had already developed DX10 chips.

I'd like to see who mentioned that.

As I said, Rampage wouldn't really be viable today, partly because the only real 'performance AF' is a cheat mode that doesn't stand up to 'true' AF, but 'looks a little smoother than nVidia's Aggressive' (testimony from an engineer at the mystery company).

Thanks for the insufficient lecture, but there's more to it, even for the NV25.

I said explicit, direct 3dfx tech. As in stuff that was a part of Rampage. I'm sure there's more 3dfx influence in other areas, but that's the clear 'point the finger HERE!' example.

Fear was NOT a deferred renderer. Fear was a direct derivative of Rampage.

MOJO was to be the full deferred renderer.

Get your obscure 3dfx lore straight, dear!

Obscure is nice coming from a daydreamer who is still NAIVE enough to believe that Spectre was capable of 180 or more fps with AA/AF on.

Cheat AF, fillrate-free AA with piles of bandwidth, and 30-50% efficient HSR. WHY is it so impossible to attain 180fps?
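To make that back-of-envelope explicit - a purely illustrative fill-rate calculation, where the clock, pixels per clock, and average depth complexity are all my assumptions rather than anything from a real Rampage spec:

[code]
# Hypothetical numbers: 4 pixels/clock at 200MHz, average overdraw of 3.0.
fill_rate = 200e6 * 4   # assumed peak fill rate, pixels/sec
pixels = 1024 * 768     # frame size
for hsr_eff in (0.0, 0.3, 0.5):
    # HSR that culls 30-50% of hidden pixels lowers the effective overdraw.
    overdraw = 1 + (3.0 - 1) * (1 - hsr_eff)
    print(f"HSR {hsr_eff:.0%}: ~{fill_rate / (pixels * overdraw):.0f} fps fill-bound")
[/code]

Under those assumptions the chip is nowhere near fill-bound at 180fps; whether the bandwidth side holds up is the separate question.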

You don't by any chance have a reasonable speclist for it, do you?

I still wonder how it was slated to be a GF3-Killer with the specs I have in mind, and as a single chip at that.

It IS a GeForce3 killer. But not in direct, straight, unfiltered, standard rendering (single chip). AA/AF combined totally murder GF3 though.

I don't care where the father of VSA's HSR went. I really don't.

It doesn't serve the purpose, does it?

No, it's IRRELEVANT. Why should it matter? FINE, so he went to fucking nVidia. WHAT'S THE BIG DEAL? I never said 'no 3dfx engineers went to nVidia'. I never said 'very few 3dfx engineers went to nVidia'. All I've said is that most of the core Rampage team did NOT go to nVidia.

It really would NOT be good if I named the 'mystery company' though. It isn't that big a stretch of logic, if you ask me.

I stated myself that he works at NVIDIA and asked for verification. Clever excuse-line but I'm not eating it.

Um... ok? Can we get past the HSR guy for a minute here? I'm talking about OTHER engineers.

And please. Anyone here I may have told at some point? QUIET. This is NOT a good situation for a few individuals right now.

That should have been a consideration prior to publishing that idiotic speclist across several websites.

The individuals knew what they were getting into and knew that things could get very bad for them very easily. They knew that at some point things would be said, but they had faith that some things would remain unsaid.

The only thing I really feel sorry about is the dozens of users some of you managed to believe in this kind of crap.

You seem to be the only one resisting so adamantly. Where's Vince? Even HE isn't beating me with his Anti-3dfx™ Stick.

10% of what again? Don't bother they had 500 engineers and we didn't know it.

Sure, plenty of engineers went to nVidia. But not all the engineers were working on Rampage. A good number of the core Rampage staff... did not go to nVidia.

Dude. The whole point to this thread is to dispel the myths around Rampage and try to write a few truths in stone. Can't you see that?

The only truth there was and still under a very hypothetical scenario when dealing with vapourware, is that Spectre would have been very good competition to NV20. It was no GF3-Killer, however you want to harp on it from different perspectives.

Sure, a single-Rampage Spectre would've been great competition for NV20. It would've killed it in AA+AF in all but doggedly single-textured games.

Dual-Rampage is another story.

The only thing Rampage lacks is raw texture rate, but it doesn't need it that incredibly much...

I'd expect some of you after three long years to finally wake up. It has gotten as bad in the past two years, that I feel most of the times uncomfortable to state that I used to have a preference for 3dfx.

Wake up and smell the hummus, so to speak. WHY do you find it so thoroughly impossible to believe that 3dfx had something marvellous up their sleeves? They knew that they needed to compete with a single core, especially after the flak they got for Voodoo5 ('Hey my GeForce2 does more than your Voodoo5 with half the chips! 3dfx sux0rz')... so they made sure the single chip could compete. Which would logically mean that dual chip would really handily beat the daylights out of its competition (see also: Voodoo2 vs. TNT).
 
The biggest proof of Spectre allowing 8x AA is simply that there's no reason for it NOT to.

That's it, I'm convinced.

Fuck what the creator of the HSR wants to think. He created a technique that CAN work under controlled conditions. There have been TWO people here who have given their testimony - one of whom referred to a Voodoo3. Not even VSA.

Getting nervous already?

As I said, Rampage wouldn't really be viable today, partly because the only real 'performance AF' is a cheat mode that doesn't stand up to 'true' AF, but 'looks a little smoother than nVidia's Aggressive' (testimony from an engineer at the mystery company).

Thanks for at least that verification, which had gone unanswered so far no matter how often I asked. It was too obvious where what's nowadays called "performance" anisotropic on NV30 could originate from.

Cheat AF, fillrate-free AA with piles of bandwidth, and 30-50% efficient HSR. WHY is it so impossible to attain 180fps?

See R300 three whole years later. See CPU's/memory/platforms of 2000 and today etc etc.

Tell me that someone found two Rampage cores in the garbage can and built a board out of them and I'll have another kneeslapper. I've seen worse claimed so far from the infamous 3dhq gang; nothing surprises me anymore.

It IS a GeForce3 killer. But not in direct, straight, unfiltered, standard rendering (single chip). AA/AF combined totally murder GF3 though.

At the cost of a predicted $500 pricetag for dual chip. As for the murder, it happens all too often in the vapourware wet dreams of some out there.

No, it's IRRELEVANT. Why should it matter? FINE, so he went to fucking nVidia. WHAT'S THE BIG DEAL? I never said 'no 3dfx engineers went to nVidia'. I never said 'very few 3dfx engineers went to nVidia'. All I've said is that most of the core Rampage team did NOT go to nVidia.

I'm puzzled whether I am actually having an argument with a female here or a truck-driver. ;)

I'm sure you counted them one by one, haven't you?

The individuals knew what they were getting into and know that things could get very bad for them very easily. They knew that at some point things would be said, but they had faith that somethings would remain unsaid.

I don't think I want to comment further on that one.

You seem to be the only one resisting so adamantly. Where's Vince? Even HE isn't beating me with his Anti-3dfx™ Stick.

I don't think that could even stand as an argument here. I don't see where 3rd parties are relevant.

Sure, plenty of engineers went to nVidia. But not all the engineers were working on Rampage. A good number of the core Rampage staff... did not go to nVidia.

Which were how many? I asked a perfectly logical question and expect a reasonable answer to it. When you "guesstimate" 10%, then I suppose you have roughly an idea how many engineers they had in total and how many were assigned to Rampage.

Sure, a single-Rampage Spectre would've been great competition for NV20. It would've killed it in AA+AF in all but doggedly single-textured games.

Dual-Rampage is another story.

The only thing Rampage lacks is raw texture rate, but it doesn't need it that incredibly much...

*yawn*

Wake up and smell the hummus, so to speak. WHY do you find it so thoroughly impossible to believe that 3dfx had something marvellous up their sleeves? They knew that they needed to compete with a single core, especially after the flak they got for Voodoo5 ('Hey my GeForce2 does more than your Voodoo5 with half the chips! 3dfx sux0rz')... so they made sure the single chip could compete. Which would logically mean that dual chip would really handily beat the daylights out of its competition (see also: Voodoo2 vs. TNT).

Because I am a realist.
Because I have been close to people back then that worked for the company and none of them came ever to similar wild exaggerations.
Because I won't allow myself to believe in fairy tales.
Because I won't allow myself to underestimate what the competition has to present.
Because we're debating about a piece of vapourware where nothing but paperspecs are known.
Because I never took part at the Voodoo vs TNT/GF debacles, just because it isn't my style.

Do I need a longer list than that?

Logic and reason are things that have nothing in common with 3dhq. Ridiculous claims of exaggerated performance ratios on outdated hardware, vapourware dreams, supposed driver sets that never saw the light of day, etc. etc. Not to mention that on the former incarnations of their message boards you didn't just need tranquilizers to read the unbelievable amount of bullshit that got piled up there - free speech was a word that didn't exist in their vocabulary either.
 
You seem to be the only one resisting so adamantly. Where's Vince? Even HE isn't beating me with his Anti-3dfx™ Stick.
Because it's patently obvious to most of us that you are totally irrational WRT this 3dfx thing, and as such, it's almost not worth our time to "discuss" it with you.


WHY do you find it so thoroughly impossible to believe that 3dfx had something marvellous up their sleeves?

Why do YOU find it so impossible to believe that 3dfx tech wasn't "all that and a bag of chips"?
I cannot believe some of the moronic stuff you 3dfx fanpeople post. If the tech was that good, it would have been used.
I also like your constant insistence that 3dfx tech IS good enough to be used, and the accompanying "proof" that it is used in the 3D part of the NV25...
If the only good part was 2D, why do you keep harping on the 3D performance?
Why must you people paint 3dfx in such a holy glowing light?
 
Problem for 3dfx and Rampage... something called Pixel and Vertex Shaders version 1.1. The Texture Computer may have been very flexible, but it just wasn't flexible in the right way to get PS1.1, and Sage didn't have support for the address register in VS1.1.
 
OK, after 4 pages I'm giving up on reading this whole thing through. I'm just going to say that I find it amazing that so many intelligent people are unable to think out of the box. You all assume that the NV/ATi way is the only way and that anything too radically different is impossible. Bitboys, you all assume, are totally fake because their designs were not the norm. IF they had been able to overcome the difficulties with bringing such a product to market (one of the largest difficulties being the closed-minded people that assume it's fake just because they cannot explain it), then they would have been able to meet, or almost meet, their performance projections. It never ceases to amaze me how brainwashed people can get, denying the existence of everything that they are not familiar with. You have no idea how many problems you may be causing. And I'm not just talking graphics cards anymore, I mean life in general and everything in it.

</rant>
 
Sage said:
OK, after 4 pages I'm giving up on reading this whole thing through. I'm just going to say that I find it amazing that so many intelligent people are unable to think out of the box. You all assume that the NV/ATi way is the only way and that anything too radically different is impossible. Bitboys, you all assume, are totally fake because their designs were not the norm. IF they had been able to overcome the difficulties with bringing such a product to market (one of the largest difficulties being the closed-minded people that assume it's fake just because they cannot explain it), then they would have been able to meet, or almost meet, their performance projections. It never ceases to amaze me how brainwashed people can get, denying the existence of everything that they are not familiar with. You have no idea how many problems you may be causing. And I'm not just talking graphics cards anymore, I mean life in general and everything in it.

</rant>
Funny. Now turn your argument around and look in the mirror.
 
On the contrary, it never ceases to amaze me how some people are willing to believe almost anything, no matter how improbable, without a shred of hard evidence. Let's save the blind faith for Sunday mass, shall we?
 
At the end of the day, in 2000 or 2001 when it was set to be released, it would have been one of the top of the line cards, if not the top of the line card. But compared to the new cards I believe it would lose. Tech doesn't stand still. But then again there are some things that the Rampage may still have done better. Nothing's perfect.
 
3dfx successful products:

Voodoo 1
Voodoo 2
Voodoo 3 2000
Voodoo 3 3000
Voodoo 4 4500
Voodoo 5 5500

3dfx failed products:

Voodoo Rush
Voodoo Banshee
Voodoo 3 3500
Voodoo 5 6000

Looks like a 40% failure rate to me, nice and evenly distributed throughout their history. Not exactly the kind of track record you'd expect from a company getting ready to leapfrog the competition by two years...
 
Crusher said:
3dfx successful products:

Voodoo 1
Voodoo 2
Voodoo 3 2000
Voodoo 3 3000
Voodoo 4 4500
Voodoo 5 5500

3dfx failed products:

Voodoo Rush
Voodoo Banshee
Voodoo 3 3500
Voodoo 5 6000

Looks like a 40% failure rate to me, nice and evenly distributed throughout their history. Not exactly the kind of track record you'd expect from a company getting ready to leapfrog the competition by two years...

Can we count the Voodoo 5 6000, since it was never released to the public?
 
It was announced, demonstrated, and the only reason it wasn't released was because it kept running into problems and eventually the company went bankrupt before it could work them out. I think that counts as a failed product. You could also count the Voodoo 3 2000 and 3000 as one product. The 4500 wasn't terribly more advanced than a V3 either, but enough so that it deserves to be listed separately, I think.

The point is, every product since the original Voodoo was just an evolution of the previous generation, with the addition of the T-buffer at the end. And quite a few of them failed miserably. It seems unlikely that this same company would have had such amazing things coming to fruition at the same time they were going bankrupt and couldn't secure loans or investment. I think what they probably had were a lot of ideas that seemed really good in theory. But even if those ideas had been given the opportunity to develop into products, I think it would have both taken a lot longer than some people suggest, and it wouldn't have ended up being as spectacular as the idea made it out to be (see T-Buffer).
 