3dfx/3dhq Revenge superchip.

rashly said:
Tag: those 250fps numbers came from Aqueel. He's the one who originally said that figure.

I was never told 250fps. I was told 180fps at the time. So was Devin. My additional source said 180 as well.

I think by now we know that Syed isn't the most trustworthy fellow around. When did he say 250fps, anyway? Just recently?
 
Well, Rampage was only about 30 million transistors and Sage was about 25 million (and at that time, on .18 micron). They cost a lot less per chip than an NV25 or NV30, but with multiple Rampages and a Sage, the board was not much more expensive than Nvidia or ATI single chips.

A dual-chip Spectre was estimated at $500 US back then, and you're better aware of it than most here. At that same cost level, ATI could easily create a $500 dual-chip board today that would beat the living shit out of any Spectre.

Can we please get back to reality now?
 
I wasn't talking about the retail cost of the card, just what the chips would have cost 3dfx to make. It would've been a very expensive card, but it would've also owned whatever was out in early 2001.

Edit: oh, and in early 2001 there's no way ATI could have created a dual-chip board that would rival Spectre. MAXX is weak, and Rampage would've been the first DX8 chip.

Tag: let me retract that 250 number attributed to Aqueel. What he said in summer 2002 was 170fps at 1600x1200x32 with 8xFSAA and 16-tap AF in Quake 3. I don't know where the 250 came from, nor do I know the validity of the 170fps.
 
Edit: oh, and in early 2001 there's no way ATI could have created a dual-chip board that would rival Spectre. MAXX is weak, and Rampage would've been the first DX8 chip.

*sigh* Please re-check how the specific hypothetical argument originated.

No one said anything about MAXX in the first place. One can check each company's OVERALL sales figures, and then you can tell me why one of them actually managed to stay alive. Officially, 3dfx's era ends with the VSA-100, with the rest being "ifs, buts and maybes".

I disagree with that "no way" comment; ATI's engineers weren't and aren't any freshmen in the business either. Their supposed "impotence" is contradicted by today's products; 3dfx, on the contrary, saw its sales figures take a nosedive after the V2 and lost deal after deal; gee, I wonder why.
 
Does anybody know who was primarily responsible for the Revenge design/specs?
Aqueel was responsible for them.

Ailuros: I know what you said, but obviously any chip made now in 2003 would own Spectre. No one is saying that it wouldn't. ATI wouldn't have been able to match the performance of Spectre in 2001; that's what I'm saying.

And what other dual-chip solution would ATI use on one card besides MAXX?

The ifs, buts and maybes are what this thread is about.
 
Aqueel was responsible for them.

I wonder what technical background Aqueel has to produce a design that would theoretically leap so far ahead of the seasoned engineering talent from ATI, Nvidia, etc. Does anyone know if he has an EE degree and work experience related to video cards? He seems to be at the center of some key issues in this thread, but how much is known about his credibility in this area?
 
And what other dual-chip solution would ATI use on one card besides MAXX?

AFR had its downsides, just as the initial SLI on VSA-100 did. Ifs and buts included, why would 3dfx theoretically have been able to improve its SLI implementation but not ATI its multichip approach? The way I see it, ATI just never needed a multichip solution in 2001, since they felt their R200/8500 line was perfectly adequate compared to what the competition had. It wouldn't have been too hard to spend time and resources on a dual 8500 and beat the NV25 by far, but they obviously preferred to concentrate on the generation to follow, with the known results.

AFAIK they don't use AFR on their professional quad-R300 setups.
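To make the distinction concrete, here's a toy sketch in Python; purely illustrative, not either vendor's actual driver logic:

```python
# Toy illustration of two multichip load-splitting schemes; not actual
# driver code from either vendor, just the concept.

def afr_chip_for_frame(frame_index, num_chips):
    """Alternate Frame Rendering: chips take turns rendering whole frames.
    Downside: added latency and trouble with inter-frame dependencies."""
    return frame_index % num_chips

def sli_chip_for_scanline(scanline, num_chips, band_height=32):
    """VSA-100-style SLI: each frame is cut into horizontal bands that are
    interleaved across chips. Downside: every chip still processes all of
    the frame's geometry."""
    return (scanline // band_height) % num_chips

print(afr_chip_for_frame(0, 2))                                # 0 (whole frame)
print([sli_chip_for_scanline(y, 2) for y in (0, 32, 64, 96)])  # [0, 1, 0, 1]
```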
 
rashly: I was told 180fps at 1280x1024x32 / 8xMSAA / 16x performance AF... though maybe 1600x1200x32 with the same settings could have managed 170fps, given 1-2% additional visible error or something like that.

Ailuros: I had a really big explanation of stuff written up, and a server timeout swallowed it :'(

Summary: Nobody's trying to argue that Rampage would be a truly viable product today. The extremely limited feature set (PS1.0?!) and dubious IQ in performance AF mode leave only two saving graces: the fantastic edge quality of the great-performing 8x AA, and the incredible overall visual quality (though poor performance) of 8x SSAA.

And wrt your dual R300 argument, keep in mind the power requirements of running two of those things; board complexity would increase, and you'd need more RAM... so it'd probably be $500-600, as you say. But then we could go back to square one and say there could've been a 4xRampage monster (a la V5-6k), which would just about pull ahead of 2xR300... but then ATi could, for the same price, make a 4xR300... etc. It's a pointless argument.
 
Ailuros said:
AFAIK they don't use AFR on their professional quad-R300 setups.

Yeah, it's a macrotiling technique of some sort. The framebuffer is unified.
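Something like the following, perhaps; this is purely speculative on my part, since the real scheme isn't publicly documented, and the tile size and 2x2 chip grid are made up just to show the idea:

```python
# Speculative sketch of macrotile-to-chip assignment over a unified
# framebuffer; tile size and the 2x2 chip grid are invented for illustration.

TILE = 64  # hypothetical macrotile edge length in pixels

def chip_for_pixel(x, y, chips_x=2, chips_y=2):
    """Checkerboard the screen's macrotiles across a 2x2 chip grid so that
    adjacent tiles land on different chips and the load stays balanced."""
    tx, ty = x // TILE, y // TILE
    return (ty % chips_y) * chips_x + (tx % chips_x)

print([chip_for_pixel(x, 0) for x in (0, 64, 128)])  # [0, 1, 0]
print([chip_for_pixel(0, y) for y in (0, 64, 128)])  # [0, 2, 0]
```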

MuFu.
 
The extremely limited feature set (PS1.0?!) and dubious IQ in performance AF mode leave only two saving graces: the fantastic edge quality of the great-performing 8x AA, and the incredible overall visual quality (though poor performance) of 8x SSAA.

Since you're repeating yourself, I can repeat myself too.

Good edge antialiasing with lower-than-mediocre texture filtering sounds oh so great. You may call it Half Scene Antialiasing (HSAA) for what it's worth :D

8xRGSS? Apart from CPU-limited corner cases, 200 MPixels/s of fillrate left at a 200MHz clock speed sounds great. I won't even touch high resolutions or bandwidth.
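The arithmetic behind that figure, for reference; note that the dual-chip board and four pixel pipelines per chip are rumored Spectre specs, nothing confirmed:

```python
# Where the 200 MPixels/s figure comes from; the dual-chip board and
# 4 pixel pipelines per chip are rumored Spectre specs, not confirmed.
chips, pipes_per_chip, clock_mhz = 2, 4, 200
ssaa_samples = 8  # 8xRGSS renders 8 full samples per output pixel

raw_fillrate = chips * pipes_per_chip * clock_mhz  # 1600 MPixels/s
effective = raw_fillrate // ssaa_samples           # 200 MPixels/s after 8x SSAA
print(raw_fillrate, effective)
```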

PS1.1/VS1.0 for Spectre.

And wrt your dual R300 argument, keep in mind the power requirements of running two of those things; board complexity would increase, and you'd need more RAM... so it'd probably be $500-600, as you say. But then we could go back to square one and say there could've been a 4xRampage monster (a la V5-6k), which would just about pull ahead of 2xR300... but then ATi could, for the same price, make a 4xR300... etc. It's a pointless argument.

Of course it's pointless; it started being pointless in December 2000. ATI has a real quad-R300 setup for professional simulators AFAIK, where cost is no consideration. If my memory serves me well, it can come with even just 64MB of RAM per chip (I think it's capable of 24xAA, but don't quote me on that; if that's sparsely sampled you may calculate the EER yourself).
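For the lazy, the EER arithmetic under the assumption of a fully sparse pattern (one sample per row and column of the subpixel grid; whether the 24x mode is actually laid out like that, I don't know):

```python
# Edge equivalent resolution (EER) for a fully sparse AA pattern: N samples
# placed one per row and one per column of an NxN subpixel grid resolve N
# edge steps per pixel on each axis. Whether ATI's 24x mode is actually
# this sparse is an assumption.
def sparse_eer(width, height, samples):
    return width * samples, height * samples

print(sparse_eer(1024, 768, 24))  # (24576, 18432)
```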

Power requirements? Let's just see how much power true next-generation cards will consume, TBDRs not excluded. If the tests I saw are accurate, the R300 still consumes about 25% less power than an NV30.

Speaking of which: hell, theoretically I could think of a dual-chip TBDR setup (4 pipes @ 300MHz) with 128 Z/stencil units each. 16xRGMS per chip, with far less bandwidth to worry about. *bursts pipe dream bubble* :p

Yeah, it's a macrotiling technique of some sort. The framebuffer is unified.

Do you mean some sort of viewports? From what I can tell macro/micro tiling is already there in hierarchical Z, or not?
 
Of course it's pointless; it started being pointless in December 2000. ATI has a real quad-R300 setup for professional simulators AFAIK, where cost is no consideration. If my memory serves me well, it can come with even just 64MB of RAM per chip (I think it's capable of 24xAA, but don't quote me on that; if that's sparsely sampled you may calculate the EER yourself).

Well you can also get 32-chip VSA-100 setups. ;)
 
Tagrineth said:
Of course it's pointless; it started being pointless in December 2000. ATI has a real quad-R300 setup for professional simulators AFAIK, where cost is no consideration. If my memory serves me well, it can come with even just 64MB of RAM per chip (I think it's capable of 24xAA, but don't quote me on that; if that's sparsely sampled you may calculate the EER yourself).

Well you can also get 32-chip VSA-100 setups. ;)
No thanks. I'd take a 256-chip R350 rig, however. ;)
 
OpenGL guy said:
Tagrineth said:
Of course it's pointless; it started being pointless in December 2000. ATI has a real quad-R300 setup for professional simulators AFAIK, where cost is no consideration. If my memory serves me well, it can come with even just 64MB of RAM per chip (I think it's capable of 24xAA, but don't quote me on that; if that's sparsely sampled you may calculate the EER yourself).

Well you can also get 32-chip VSA-100 setups. ;)
No thanks. I'd take a 256-chip R350 rig, however. ;)

LOL. As in that amazing Beavis & Butthead moment when they're surrounded by the army with loads of guns, find it really cool, and want a gun too:
"Hehe, can I have one too?"

BTW, I'd still prefer a 128-chip prototype NV40 to a mere 256-chip R350 rig. Of course, it would be harder to get one, but that's another story... But then again, you could riposte by saying an R400 256-chip rig is just as good... Bah! :)

BTW, how many nuclear power plants would you need to power a 256-chip R350, anyway?


Uttar
 
Well you can also get 32-chip VSA-100 setups.

*ahem*

AAlchemy AA5 Realtime 3D Graphics Subsystems combine 4, 8 or 16 3dfx® VSA-100™ graphics accelerators with up to 1 GB of dedicated graphics memory to deliver unmatched realtime 3D performance.

Application transparent T-Buffer-based single-pass, full-scene, sub-pixel anti-aliasing with 4 or 8 sub-samples

[aafillrate.jpg: Quantum3D chart of anti-aliased fillrate figures]


***Please note that the above fillrate numbers are at 22bpp colour depth.

http://www.quantum3d.com/product pages/aalchemy3.html

http://www.quantum3d.com/product pages/independence1.htm
 
Uttar - well, the 9700 Pro card uses about 50W. Let's say, for argument's sake, that the "rendering plant" version is clocked lower and has lower overall power needs... say... 40W. 256 x 40W = ~10kW :D
Mind you, a single VSA-100 uses about 15W, so 256 of them would still be over 4kW (including memory and suchlike). Anyone fancy the leccy bill for that lot?
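The sums, for the curious (a trivial sketch; the 40W per-chip figure is my guess above, not a measured number):

```python
# Back-of-envelope power draw using the per-chip wattages quoted above;
# the 40W figure for a downclocked "rendering plant" R350 is a guess.
def rig_power_kw(chips, watts_per_chip):
    return chips * watts_per_chip / 1000.0

print(rig_power_kw(256, 40))  # 10.24 kW for 256 R350-class chips
print(rig_power_kw(256, 15))  # 3.84 kW for 256 VSA-100s, before memory etc.
```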
 
Rekrul said:
Aqueel was responsible for them.

I wonder what technical background Aqueel has to produce a design that would theoretically leap so far ahead of the seasoned engineering talent from ATI, Nvidia, etc. Does anyone know if he has an EE degree and work experience related to video cards? He seems to be at the center of some key issues in this thread, but how much is known about his credibility in this area?

Aqueel (aquoess, Syed) has zero cred; a year ago he claimed to have made a new OpenGL ICD for 3dfx cards that made the Voodoo5 as fast as a GF3 and also had software aniso. The new ICD was to be released last May, but was never seen by anyone... Instead he started this *supersecret* Revenge project...

I remember a discussion Aqueel had with a former 3dfx hardware engineer (Insider) at the x3dfx forum. Insider didn't believe the claims of working aniso on the V5; Aqueel responded that he couldn't explain how it worked because Insider only understood hardware and Aqueel only understood software... :LOL:

Also, Colourless (GlideXP) posted this about Aqueel's programming abilities:
Back at the beginning, when I first released glidexp, I had some conversations with him and he pretty much didn't understand much of anything in the glide sources. From memory, he couldn't even grasp the concept of a pointer properly.
http://pub43.ezboard.com/fx3dfxfrm1.showMessageRange?topicID=14551.topic&start=41&stop=60
 
Neeyik said:
Uttar - well, the 9700 Pro card uses about 50W. Let's say, for argument's sake, that the "rendering plant" version is clocked lower and has lower overall power needs... say... 40W. 256 x 40W = ~10kW :D
Mind you, a single VSA-100 uses about 15W, so 256 of them would still be over 4kW (including memory and suchlike). Anyone fancy the leccy bill for that lot?
If you can afford to build a board (or systems) with 256 R350s, then your power bill is the least of your concerns :D It's kinda like asking someone who has a Porsche 911 GT2 how they pay for the gas!
 