Where is Revenge? (3dfx related)

Yeah, and I'd bet my left nut that they'd kick a GF3's ass. Let's not forget Gigapixel. Rampage was too late to be based on Gigapixel tech, but I'll bet they learned an awful lot about memory efficiency and had time to implement some sort of bandwidth-saving routines. I still believe that the NV30 has a hell of a lot of Gigapixel-based ideas in it to save bandwidth. Remember, it wasn't just tiling that Gigapixel had.
 
Tagrineth said:
I've said it before, I'll say it again, two functional dual Rampage dual SAGE boards do exist.

Not to sound like an ass, but have you seen them in person? Because everything and everyone keeps telling me that there were no plans for a 2-2 board, only for a 2-1 one.
 
Sage said:
Yeah, and I'd bet my left nut that they'd kick a GF3's ass. Let's not forget Gigapixel. Rampage was too late to be based on Gigapixel tech, but I'll bet they learned an awful lot about memory efficiency and had time to implement some sort of bandwidth-saving routines. I still believe that the NV30 has a hell of a lot of Gigapixel-based ideas in it to save bandwidth. Remember, it wasn't just tiling that Gigapixel had.

Okay, so you agree with me that Rampage had no bandwidth-saving technologies (recursive texturing does not help you in single- and dual-textured games). So how exactly would it defeat a GF4, then, which has an optimized crossbar memory controller with early-Z and so on? You seem to contradict yourself here...
 
Tagrineth said:
Ailuros said:
Oh trust me, 3Dfx tech is faster than comparable nVidia tech. I would pit an ultra-high-end Rampage/Sage board up against a GF4 and it would probably be a very tough fight for both cards.

Trust you on what? Theoretical or hypothetical assertions don't render anything on screen, nor does any vaporware.

I've said it before, I'll say it again, two functional dual Rampage dual SAGE boards do exist.

...and the proof of that would be? There's not one single confidential document from back then, not a notion or assertion from anyone at the time that even indicated anything of the kind.

I'll use some older stuff that was posted here at the B3D forums by Wavey (I think, since I have it stored under Dave B. *ahem*), and the rest by someone who posted under the nickname Anonymous (ask Dave or Rev who he was). Note: using published data is always safer:

Dave B:

As John has said, they were only due to ship Rampage revisions with single SAGE implementations, even though two SAGE units could work together with each other. There are two very simple reasons for this...
The first is that SAGE was going to be the AGP bus master device; they would have only wanted one, because it would be used to shield the dual Rampages from the AGP bus to enable full AGP capabilities (even though Rampage had its own integrated AGP bridge, it would have been easier to do it this way).
The second reason for not scaling SAGEs was that there would have been absolutely no need! Even though you can increase the number of Rampage raster units on the board, because of the way SLI works it would not have increased the triangle setup abilities of Rampage; the setup-rate cap of one Rampage is also the cap for two or four, etc. By adding more SAGE units you would just hit that setup cap sooner!
I've got a feeling the scalable nature of SAGE may have been intended for future developments they may have had planned with Quantum3D.

However, the issue I painted above would not have held true with Fear: with the Fear raster chip and SAGE2 (yes, they were working on it) they would have used GigaPixel's tiling, and hence polygon binning would have been in operation. This goes hand in hand with scaling both the T&L unit and the raster cores, since all the polys have already been sorted into the relevant regions and each raster core only needs to set up what is in its tiles (rather than the entire screen). The question remains whether or not they ever needed to go to more than one raster core for Fear, since engineering development had projected a maximum operating frequency of 500MHz for the chip!

Don't count too much on that frequency, because it makes more sense to use synchronous core/RAM speeds on TBRs. Think back to what the usual top DDR speeds were in 2001, and what their price and availability looked like.
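
To make the binning idea in Dave's quote concrete, here's a rough sketch (Python, purely illustrative; the tile size and the bounding-box test are my assumptions, not anything from 3dfx):

```python
# Illustrative polygon binning for a tile-based renderer.
# Assumptions (mine, not from any 3dfx document): 32x32-pixel tiles,
# triangles given as three (x, y) screen-space vertices.
TILE = 32

def bin_triangles(triangles, screen_w, screen_h):
    """Map each tile (tx, ty) to the triangles overlapping it,
    using each triangle's bounding box as a conservative test."""
    tiles_x = (screen_w + TILE - 1) // TILE
    tiles_y = (screen_h + TILE - 1) // TILE
    bins = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for tri in triangles:
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        tx0 = max(0, int(min(xs)) // TILE)
        tx1 = min(tiles_x - 1, int(max(xs)) // TILE)
        ty0 = max(0, int(min(ys)) // TILE)
        ty1 = min(tiles_y - 1, int(max(ys)) // TILE)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins[(tx, ty)].append(tri)  # a triangle can land in many bins
    return bins

# Each raster core then walks only the bins for its own tiles, so raster
# AND setup work scale with extra cores; under scanline SLI, by contrast,
# every chip has to set up every triangle.
```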

Anonymous:

Rampage was running. It was running DX. Fixed chips could run OpenGL, but if your chip wasn't fixed... (if I recall correctly, it could only support direct writes, so there were issues with the FIFO buffer).
Rampage had some transformation capabilities of its own. There would have been a low-end version without Sage, then a mid-range with Sage, and a high-end with 2 Rampage and 1 Sage. Sage was extremely powerful, though unfortunately it lacked address ops, so it only supported 1.0 vertex shaders (meaning there was no matrix palette, though our people had come up with some good tricks for getting around the issue).
Socketable... no, I've never heard of it being that way.
HOS and Photoshop-type filters: yes.

SAGE 2
No. I think, at the very least, SAGE2 needed to have its own RAM. But it could also have done some of the binning work, i.e. SAGE2 could have done all the binning and only sent each rasteriser the data it needs to process. I don't know if that was how it was due to operate, but it makes for some interesting thoughts as to exactly where you split the processes.
As for geometry data issues, I'm fairly sure that GP had a hierarchical Z-buffer before binning in the first place, which helped alleviate some of the geometry overhead. I also think SAGE2 had geometry compression, which would also have helped with the binning in Fear.

Tilers decrease rendering memory bandwidth by a lot, while their vertex bandwidth requirements can increase by up to 2.5 times. Hierarchical Z or any other workaround would have been there to reduce that single disadvantage of TBRs on the vertex side. Got any better thoughts on why a geometry processor would require its own RAM?
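
To put rough numbers on that vertex-side penalty (all figures made up for illustration, not measured from anything):

```python
# Hypothetical per-frame numbers, purely to show the shape of the trade-off.
tris_per_frame   = 100_000
verts_per_tri    = 3
bytes_per_vertex = 32          # say, position + normal + one UV set
overlap_factor   = 2.5         # worst case: a triangle's data is re-read
                               # once for every tile it touches

imr_vertex_bytes = tris_per_frame * verts_per_tri * bytes_per_vertex
tbr_vertex_bytes = imr_vertex_bytes * overlap_factor

print(f"IMR vertex traffic: {imr_vertex_bytes / 1e6:.1f} MB/frame")  # 9.6
print(f"TBR vertex traffic: {tbr_vertex_bytes / 1e6:.1f} MB/frame")  # 24.0
# The tiler wins this back many times over on the pixel side (depth and
# color stay in on-chip tile memory); hierarchical Z and geometry
# compression attack exactly this vertex-side overhead.
```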
 
Sage said:
Yeah, and I'd bet my left nut that they'd kick a GF3's ass. Let's not forget Gigapixel. Rampage was too late to be based on Gigapixel tech, but I'll bet they learned an awful lot about memory efficiency and had time to implement some sort of bandwidth-saving routines. I still believe that the NV30 has a hell of a lot of Gigapixel-based ideas in it to save bandwidth. Remember, it wasn't just tiling that Gigapixel had.

See above. Fusion was to be a TBR AFAIK.
 
Laa-Yosh said:
Okay, so you agree with me that Rampage had no bandwidth-saving technologies (recursive texturing does not help you in single- and dual-textured games). So how exactly would it defeat a GF4, then, which has an optimized crossbar memory controller with early-Z and so on? You seem to contradict yourself here...

When an accelerator can texture, or filter textures, in fewer clock ticks, then you ARE saving bandwidth.
 
Ailuros said:
When an accelerator can texture, or filter textures, in fewer clock ticks, then you ARE saving bandwidth.

Sorry, my fault... now I get what you mean. Reducing texture bandwidth should definitely help, especially with today's titles; however, how much of the total bandwidth utilization would this be, and how much would frame- and Z-buffer traffic take?
 
No idea (and I don't really like theoretical estimates), because I also have no idea how their adaptive anisotropic algorithm worked. Today we're mostly in the realm of quad-textured games. I've seen both K2s and R300s in tests gain about 11% when going from single to quad texturing in SS or SS:SE.
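
For a feel of the split Laa-Yosh is asking about, here's a crude per-pixel budget; every figure is an assumption (32-bit color and Z, no compression, a decent texture cache), so take the shares as hand-waving rather than measurement. Under these numbers the two kinds of traffic come out the same order of magnitude:

```python
# Crude per-pixel external-bandwidth budget for an immediate-mode renderer.
# All figures are assumptions for illustration only.
z_read, z_write  = 4, 4        # 32-bit depth buffer, read + write
color_write      = 4           # 32-bit framebuffer write
texel_bytes      = 4
texels_per_pixel = 1.5         # bilinear with a decent texture cache
layers           = 2           # dual texturing

texture_traffic = texel_bytes * texels_per_pixel * layers
frame_z_traffic = z_read + z_write + color_write

total = texture_traffic + frame_z_traffic
print(f"texture share: {texture_traffic / total:.0%}")   # ~50%
print(f"frame+Z share: {frame_z_traffic / total:.0%}")   # ~50%
```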

What I can remember from the past arguments, and still agree with, is the simpler argument from a marketing perspective:

While the high-end Spectre (2 Rampage / 1 Sage) would have been highly competitive with even the NV25 (200MHz core, 200MHz DDR, 4 pipes with 1 TMU each: 800MPps x2 = 1600MPps, 6.4GB/s x2 = 12.8GB/s bandwidth), like any other multichip setup it would have had a hard time competing on price as time went by. With an initially projected $500 price tag, there's not as much headroom for price reductions in the long run as with a single-chip setup.
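
Just to show the arithmetic behind those numbers (the 128-bit bus per chip is my assumption; the rest is from the parenthesis above):

```python
# Spectre (2x Rampage) paper specs from the figures above.
core_mhz = 200
pipes    = 4                     # per Rampage, 1 TMU each
chips    = 2

fillrate_per_chip = core_mhz * 1e6 * pipes      # 800 MPixels/s
total_fillrate    = fillrate_per_chip * chips   # 1600 MPixels/s

ddr_mhz   = 200
bus_bytes = 16                   # assumed 128-bit bus per chip
bw_per_chip = ddr_mhz * 1e6 * 2 * bus_bytes     # DDR: 2 transfers/clock
total_bw    = bw_per_chip * chips               # 12.8 GB/s aggregate

print(f"{total_fillrate / 1e6:.0f} MPps, {total_bw / 1e9:.1f} GB/s")
```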

Each Rampage rasterizer was ~30M transistors, while Sage should have been around ~20M (not sure about Sage's transistor count, just an estimate). Even if the company had stayed alive, there weren't any plans for refresh parts, due to lack of resources. Most likely they would have moved straight from that to Fear (Fusion + Sage II) and introduced a TBR approach instead, followed by plans for entering the PDA/mobile market (see MBX/PVR for comparisons).
 
Laa-Yosh said:
Sage said:
Yeah, and I'd bet my left nut that they'd kick a GF3's ass. Let's not forget Gigapixel. Rampage was too late to be based on Gigapixel tech, but I'll bet they learned an awful lot about memory efficiency and had time to implement some sort of bandwidth-saving routines. I still believe that the NV30 has a hell of a lot of Gigapixel-based ideas in it to save bandwidth. Remember, it wasn't just tiling that Gigapixel had.

Okay, so you agree with me that Rampage had no bandwidth-saving technologies (recursive texturing does not help you in single- and dual-textured games). So how exactly would it defeat a GF4, then, which has an optimized crossbar memory controller with early-Z and so on? You seem to contradict yourself here...

See underlined.

I'll have to ask about that, too, but I know Rampage isn't fully brute-force like Parhelia. Well, I take that back; Parhelia can optimise for burst transfers pretty well... :rolleyes:

I know I can't prove the existence of the 2x2 board. But I can give an argument for its existence: shaders. 3dfx saw shaders getting longer and longer, and with vertex shaders, having two SAGE processors would let them keep the Rampage setup engine full even under highly stressful situations. The benefits would grow as shader lengths and light-source counts increase... but yeah, granted, in general, dual SAGE would be somewhat inefficient... :rolleyes:
 
3dfx saw shaders getting longer and longer, and with vertex shaders, having two SAGE processors would let them keep the Rampage setup engine full even under highly stressful situations.

You're kidding me, right? Show me any "long" shader in any game worth mentioning, even today. Spectre was DX8.0 (PS1.1/VS1.0), spring 2001, while Fear was DX8.1 (PS1.1/VS1.1), projected for Q4 2001.

They didn't have to incorporate any further bandwidth saving techniques up to Spectre, because beyond that they would have changed the architecture/rendering approach fundamentally.

I even posted the solution for Sage II and a higher level of geometry with deferred rendering: it most probably would have had its own dedicated RAM and hierarchical Z.

No doubt the Gigapixel engineers were/are highly talented professionals, but it's not as if they invented something that hadn't been done or researched by others already.

If long shaders were going to be a point of worry anywhere, it was in Mojo; but then again, that existed mostly on paper. Wanna bet that we'll see a more advanced tiler than that one next year? ;)
 
Geeforcer said:
Since the demise of 3dfx, fanboys have come up with enough 3dfx-related myths and legends to rival the Greek and Roman mythologies combined.


What do you mean, "demise"...?

They've been sighted in numerous places: with the Pope, with aliens, all over, really.
 
Okay, so you agree with me that Rampage had no bandwidth-saving technologies (recursive texturing does not help you in single- and dual-textured games). So how exactly would it defeat a GF4, then, which has an optimized crossbar memory controller with early-Z and so on? You seem to contradict yourself here...
I never said it would beat the GF4; it would be competitive with the GF4. And yes, AFAIK Rampage did have some bandwidth optimizations other than recursive texturing (which, as has been pointed out, does indeed help in other ways). Also, as Tagrineth pointed out, two SAGEs would help with complex shaders.
 
How would it help with long shaders? What was the maximum vertex shader length SAGE could execute in 1 pass? Considering the time-frame, I don't think it was anywhere near the soon-to-be-surpassed VS 2.0 specifications. I think there is some serious grasping at straws going on here.
 
Geeforcer said:
How would it help with long shaders? What was the maximum vertex shader length SAGE could execute in 1 pass? Considering the time-frame, I don't think it was anywhere near the soon-to-be-surpassed VS 2.0 specifications. I think there is some serious grasping at straws going on here.

You'd be surprised. :) Seriously... :)
 
Tagrineth said:
Geeforcer said:
How would it help with long shaders? What was the maximum vertex shader length SAGE could execute in 1 pass? Considering the time-frame, I don't think it was anywhere near the soon-to-be-surpassed VS 2.0 specifications. I think there is some serious grasping at straws going on here.

You'd be surprised. :) Seriously... :)

Try me.
 
Also, as Tagrineth pointed out, two SAGEs would help with complex shaders.

Unless you two come up with a viable explanation of how triangle setup would be handled with a secondary geometry unit on board (with SLI as it was supposed to be modified on Spectre), I won't even touch that complex-shaders stuff (with VS1.0? Errmm...).

Again, 2 Rampage + 1 Sage (with 200MHz DDR) was predicted to be ~$500.
 
OK, it should be obvious that two SAGEs, despite the fact that such cards were never going to be produced, would help with longer vertex shaders, because they would execute them faster than a single SAGE. It should improve lighting speed as well. As long as things were transform-limited and not tri-setup-limited, an extra SAGE should help, but SAGE was still supposed to be faster than the other hardware out there. Since we are hardly transform-limited as it is (now), dual SAGE would have been pointless, and overly expensive.
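
To illustrate (throughput numbers invented for the example, not real SAGE or Rampage specs): the min() below is the whole story, since the second chip only matters while transform, not setup, is the bottleneck.

```python
# Hypothetical rates; the point is the min(), not the numbers.
SETUP_RATE = 30e6        # triangles/s one Rampage can set up (assumed)

def tri_throughput(instr_rate_per_sage, num_sage, instr_per_vertex):
    # Longer vertex shaders cut each SAGE's effective vertex rate.
    verts_per_sec = num_sage * instr_rate_per_sage / instr_per_vertex
    tris_from_xform = verts_per_sec / 3.0
    return min(tris_from_xform, SETUP_RATE)   # setup caps everything

for instrs in (1, 4, 16):
    one = tri_throughput(100e6, 1, instrs)
    two = tri_throughput(100e6, 2, instrs)
    print(f"{instrs:2d} instr/vert: 1 SAGE {one/1e6:5.1f} Mtri/s, "
          f"2 SAGE {two/1e6:5.1f} Mtri/s")
# With a 1-instruction shader both configs sit at the setup cap;
# only with longer shaders does the second SAGE roughly double throughput.
```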
 
Colourless said:
OK, it should be obvious that two SAGEs, despite the fact that such cards were never going to be produced, would help with longer vertex shaders, because they would execute them faster than a single SAGE. It should improve lighting speed as well. As long as things were transform-limited and not tri-setup-limited, an extra SAGE should help, but SAGE was still supposed to be faster than the other hardware out there. Since we are hardly transform-limited as it is (now), dual SAGE would have been pointless, and overly expensive.

Would the two-SAGE setup execute two long shaders in parallel, or two instructions in parallel? In other words, any ideas on how shader processing would be distributed among multiple SAGE chips?
 
The assumption would be "execute two long shaders in parallel", i.e. each chip processes different vertices. Of course, I don't know any specifics about it.
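
If so, the split could be as simple as alternating vertex batches between the chips. A purely speculative sketch; none of these names reflect the real SAGE interface or drivers:

```python
# Speculative sketch: distribute vertex batches across two geometry
# units round-robin, then merge results back into one setup stream.

def transform_on_chip(chip_id, batch, shader):
    """Stand-in for one SAGE running a vertex shader over a batch."""
    return [shader(v) for v in batch]

def dual_sage_transform(vertices, shader, batch_size=64):
    out = []
    batches = [vertices[i:i + batch_size]
               for i in range(0, len(vertices), batch_size)]
    for i, batch in enumerate(batches):
        # Even batches to chip 0, odd to chip 1; in hardware these
        # would run concurrently, hiding half the shader latency.
        out.extend(transform_on_chip(i % 2, batch, shader))
    # Order is preserved, so the single setup engine downstream sees
    # the same triangle stream it would get from one chip.
    return out
```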
 