3dfx/3dhq Revenge superchip.

Tag, the HSR drivers worked by utilising the branch prediction routines on Pentium processors (one of the reasons why it worked faster on P3s than AMDs at the time), and I highly doubt that SAGE would have implemented branching logic similar to Intel's. The other problem is that this was 'prediction'; it was not an accurate process, which is why it was never intended to actually be used - this was developed by one of the driver guys for a bit of a laff!
 
Dave, when I had a Voodoo 3 3K, those HSR drivers were flawless with HSR enabled along with VSYNC.

I played games at higher resolutions with much better framerate. Who told you it was developed as a joke?
I had no image problems when I had vsync enabled.

I wonder how much better FSAA performance would be if such a HSR algorithm was introduced on the R300 or NV3x line of cards?
 
Tag, who ran the Rampage tests April 2001? In addition, what system was used?

Surely, you are starting to see your own contradictions at this point. When I said that the fastest CPU available in April 2001 was P4-1.7 you told me, and I quote:
"Geeforcer: The Rampage board wasn't in a system made shortly after 3dfx went down, THANK YOU. Something closer to middle of last year. "

So, when exactly was it tested, middle of last year or April 2001?
 
Ailuros said:
Q3 is a dual-textured game if I'm not mistaken.

Mostly, although anything with cubemaps or those glowing/moving surfaces looks to have 3 or 4 texture layers. I assume Q3 supported doing these in one pass where possible.
 
Even more so, almost every game uses dual texturing. Base texture plus lightmap has been around since the early days of 3D gaming.
 
Mummy: That was MARCH 28. Rampage didn't even tape out for another few months.

DaveBaumann: As I said, Rampage didn't do the whole thing in hardware, it was hybrid hard/soft. Also, read what K.I.L.E.R said. Others have had similar results, with some care.

Rampage's HSR wasn't a 100% awesome fantastic whoa instant-working perfect technology. It eliminates, on average, probably 30-50% overdraw, except in some cases where it can skip more data.

Geeforcer: You're the only one seeing contradictions. Nobody said Rampage was tested in April 2001. That date is when Rampage should have hit the market. The tests in question were done sometime around the middle of last year. I don't remember specifics of the system; this was some time ago, and I wasn't told everything about it, but I was told the CPU was 'significantly higher than 2GHz'.

Re: Q3 engine: Yes, generally it uses two texture layers. In some cases it'll go up to three, or four... but IIRC four is the maximum. Also, IIRC, Carmack made the Q3 engine force a second pass after every pair of layers.
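Just as a rough back-of-the-envelope sketch of that pairing rule (the layer counts and the two-layers-per-pass assumption here are purely illustrative, not confirmed Rampage or Q3 internals):

```python
import math

def q3_passes(texture_layers, layers_per_pass=2):
    # Passes needed if the engine collapses at most `layers_per_pass`
    # texture layers (e.g. base texture + lightmap) into a single pass.
    return math.ceil(texture_layers / layers_per_pass)

print(q3_passes(2))  # 1 pass: base + lightmap on dual-TMU hardware
print(q3_passes(3))  # 2 passes: an extra glow/cubemap layer forces a second pass
print(q3_passes(4))  # 2 passes: four layers is, IIRC, the engine's maximum
```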
 
DaveBaumann said:
Tag, the HSR drivers worked by utilising the branch prediction routines on Pentium processors (one of the reasons why it worked faster on P3s than AMDs at the time), and I highly doubt that SAGE would have implemented branching logic similar to Intel's. The other problem is that this was 'prediction'; it was not an accurate process, which is why it was never intended to actually be used - this was developed by one of the driver guys for a bit of a laff!

The gentleman in question has worked at NVIDIA ever since, AFAIK. The idea behind the joke was, when the rendering took too long, to drop some geometry at an upper threshold of 60Hz or something like that (it's been a long time since I asked about it).

Of course software HSR is possible, but this one seemed rather idiotic to me.

Besides, I've asked countless times why it was supposedly limited to the Q3A engine alone. The dumb explanation that it was still in its infancy doesn't sit well with me.

Geeforcer: You're the only one seeing contradictions. Nobody said Rampage was tested in April 2001. That date is when Rampage should have hit the market. The tests in question were done sometime around the middle of last year. I don't remember specifics of the system; this was some time ago, and I wasn't told everything about it, but I was told the CPU was 'significantly higher than 2GHz'.

No, he's not alone. While I don't know his past and current preferences, I used to be a 3dfx fan myself, and I believed only what made sense and came from sources I trusted.

I don't know how often I've posted the following quotes; I know who the guy is who posted them, but he will remain anonymous. Here are the differences between NV20 and Spectre as he saw them, before the NV20 was released:

NV20: 800 MPPS/1600 MTPS - 4 textures in 2 clocks

1 Rampage: 800-1000 MPPS/MTPS - 8 textures in 8 clocks

2 Rampage(closer price to NV20): 1600-2000 MPPS/MTPS - 8 textures in 4 clocks (effective, if my thinking is correct)


NV20: 230 DDR + Z compression (variable rate) and early Z checks (variable gain)

1 Rampage: 200 DDR; 2 Rampage: 200 DDR + SLI (2x - effectively like having 400 DDR)

NV20: 40 MVPS peak
Sage: 50 MVPS sustained (lacking address ops)

NV20: RGMS 2x and OGMS 4x
Rampage: RGMS 2x and 4x

NV20: 64-tap anisotropic
Rampage: 128-tap anisotropic

The lack of address ops was the reason why Sage had only VS1.0.

Rampage was running. It was running DX. Fixed chips could run OpenGL, but if your chip wasn't fixed then (if I recall correctly) it could only support direct writes, and so there were issues with the FIFO buffer.

Rampage had some transformation capabilities of its own. There would have been a low-end version without Sage, and then a mid-range with Sage and a high-end with 2 Rampage and 1 Sage.

Sage was extremely powerful, though it unfortunately lacked address ops, so it only supported 1.0 vertex shaders (meaning there wasn't a matrix palette, though our people had come up with some good tricks for getting around the issue).

socketable.. no.. I've never heard of it being that way.
HOS and Photoshop type filters - yes.

SAGE 2
No. I think, at the very least, SAGE2 needed to have its own RAM. But it could also have done some of the binning work – i.e. SAGE2 could have done all the binning, and only sent the data that needed to be processed to the rasteriser. I don't know if that was how it was due to operate, but it makes for some interesting thoughts as to exactly where you split the processes.

As for Geometry data issues, I’m fairly sure that GP had a Hierarchical Z-Buffer before binning in the first place, which helped alleviate some geometry overhead. I also think SAGE had Geometry compression, which also would have helped with the binning with Fear.

In the second case above, and since it affects Fear, multisampling there would have been essentially 'free'. References to binning should give a hint as to why.

If you want a card TODAY with real full-speed multisampling and fast, high-quality anisotropic, the vanilla Radeon 9700s cost roughly $200 US and over. But that is apparently two years after the Spectre would have appeared. If that isn't a severe underestimation of other IHVs' engineering talent and at the same time a vast overestimation of the past 3dfx wizards, then I don't know what else to say.
 
Ailuros said:
The gentleman in question has worked at NVIDIA ever since, AFAIK. The idea behind the joke was, when the rendering took too long, to drop some geometry at an upper threshold of 60Hz or something like that (it's been a long time since I asked about it).

Of course software HSR is possible, but this one seemed rather idiotic to me.

Besides, I've asked countless times why it was supposedly limited to the Q3A engine alone. The dumb explanation that it was still in its infancy doesn't sit well with me.

Different average poly rates would want different settings for acceptable error rates. Same reason it only works in Q3A and not other Quake3-engine games.

Other than that, I don't know exactly how 3dfx's VSA HSR worked.

No, he's not alone. While I don't know his past and current preferences, I used to be a 3dfx fan myself, and I believed only what made sense and came from sources I trusted.

I don't know how often I've posted the following quotes; I know who the guy is who posted them, but he will remain anonymous. Here are the differences between NV20 and Spectre as he saw them, before the NV20 was released:

Your point? Nobody said the specs were any different. Just that it's faster than you'd expect, for various reasons. Those are correct fill-rate numbers. One unmentioned difference is that SAGE is a good bit more efficient at TCL than NV20 is... and that it has much more of the AGP bus to itself when paired with two Rampages. (framebuffer divided in half = more space for textures = less need for AGP texturing) I don't know if it has its own dedicated RAM space on Rampage though. I doubt it.

In the second case above, and since it affects Fear, multisampling there would have been essentially 'free'. References to binning should give a hint as to why.

Rampage does MSAA in conjunction with multitexturing to make it "free" - but in a single-textured game, 4x AA on Rampage = 1/4th non-AA performance. Simple enough, right? I suppose what you pointed out would mean single-textured AA performance would improve too...
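To put a toy number on that (my own illustration of the claim, not 3dfx's figures): if the extra Z samples ride on the cycles a pipe would otherwise spend looping for additional texture layers, the AA cost only shows up when there aren't enough layers to hide it.

```python
def relative_fillrate(aa_samples, texture_layers):
    # Toy model: a pixel costs max(texture_layers, 1) cycles without AA.
    # Extra Z samples hide behind texturing cycles, so with AA the cost
    # becomes max(texture_layers, aa_samples) cycles.
    return max(texture_layers, 1) / max(texture_layers, aa_samples)

print(relative_fillrate(4, 1))  # 0.25 -> 4x AA, single-textured: 1/4 speed
print(relative_fillrate(4, 4))  # 1.0  -> 4x AA hidden behind 4 texture layers
print(relative_fillrate(2, 2))  # 1.0  -> 2x AA hidden behind dual texturing
```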

If you want a card TODAY with real full-speed multisampling and fast, high-quality anisotropic, the vanilla Radeon 9700s cost roughly $200 US and over. But that is apparently two years after the Spectre would have appeared. If that isn't a severe underestimation of other IHVs' engineering talent and at the same time a vast overestimation of the past 3dfx wizards, then I don't know what else to say.

Well what can I say, even a single Rampage was rather powerful back in the day. Of course a dual-chip Rampage would storm ahead to remain competitive long after you'd expect it to.

Rampage ONLY gets 'free' MSAA while multitexturing. Nobody said it's always free.
 
I'm still hesitant on the whole HSR front. Maybe Rampage had it, maybe... But I just don't quite see how such a technology would work, anyway. Not saying it's impossible, just doubting it a little.

As for "AA is free when multitexturing" - well... okay...
How the heck does that work?

I've seen a lot of references to it and just accepted it "as-is" - but I still don't quite understand that thing.
I *can* understand how this gives fillrate-hit-free AA when using multitexturing (in the 4x AA case, 4 texture layers - for 2x AA, 2 texture layers, and so on - right?), because the pipes which are idle would be used to calculate values for multisampling.

But you keep the bandwidth hit, don't you? So I still don't quite understand the whole thing... Of course, there's no bandwidth cost for geometry, and that's already an advantage. But it *can't* be that significant.

As I already said before, I wonder if 3DFX wasn't using the :devilish: evyl :devilish: "one Z per pixel" trick. Considering a very basic compression engine, you could be pretty much certain bandwidth is not the bottleneck.
If it were true, then the 180FPS score is definitely possible. As for whether it is impressive, that's another question...

The problem is that in some rare cases, it looks bad. Oh yes, very bad - it's like there was no AA, actually.

What we'd need, thus, is the details of Rampage's MSAA implementation. And it's not like we're ever gonna get that :rolleyes:


Uttar
 
Well what can I say, even a single Rampage was rather powerful back in the day. Of course a dual-chip Rampage would storm ahead to remain competitive long after you'd expect it to.

A single chip was less "powerful" than a plain NV20 (single-chip Spectre estimated at ~$250 US).
A dual chip would have been (by guesstimate) quite ahead of the NV20, but in straight proportion to its higher price too (dual-chip Spectre estimated at ~$500 US).

Expect what? Dual-chip Spectre running ahead of an R300? Keep dreaming. A best-case scenario would have been less than or equal to an NV25, and against that an R300 is between 2.5 and 3x faster in worst-case AA/AF scenarios.


Rampage ONLY gets 'free' MSAA while multitexturing. Nobody said it's always free.

That still doesn't mean squat compared to competitive solutions of the past, and there isn't even a chance of comparing them to today's solutions. If it all was such a splendour, I wonder why there wasn't a sign or hint of multichip (rasteriser) solutions past Spectre. I also wonder why Fear's leaked specs did in fact claim "free" M-buffer AA, on the contrary. By further extension, I also wonder why Mojo wasn't to have a separate geometry processor anymore, etc. etc.

I don't know if it has its own dedicated RAM space on Rampage though. I doubt it.

The quote was clearly referring to SageII, which was to be on Fear and not Spectre. Why the heck do you think I pointed out references to binning space in the first place?

Rampage does MSAA in conjunction with multitexturing to make it "free" - but in a single-textured game, 4x AA on Rampage = 1/4th non-AA performance. Simple enough, right? I suppose what you pointed out would mean single-textured AA performance would improve too...

Fear contained the Fusion rasterizer and not "just" Rampage.

Your point? Nobody said the specs were any different. Just that it's faster than you'd expect, for various reasons. Those are correct fill-rate numbers. One unmentioned difference is that SAGE is a good bit more efficient at TCL than NV20 is... and that it has much more of the AGP bus to itself when paired with two Rampages. (framebuffer divided in half = more space for textures = less need for AGP texturing)

See reply to first quote above.

If I put all the claims together, what you're trying to tell me is that NVIDIA holds miracle solutions and patents in its hands and comes out with "just" an NV30, which came to life from a joint effort of former 3dfx engineering talent and their own. The next dumb theory I expect to hear is that the former 3dfx folks sit in there and sabotage their current employer's hardware on purpose.

Oh wait part of it wasn't supposedly sold....rightyeahsureok :rolleyes:

Different average poly rates would want different settings for acceptable error rates. Same reason it only works in Q3A and not other Quake3-engine games.

Other than that, I don't know exactly how 3dfx's VSA HSR worked.

Oh please... Z-buffering, and thus HSR, has been present on Voodoos for ages. Someone inside wanted to play a practical joke aimed at the GP folks and created a complete artifact-fest unless you truly limit the rendering with an upper threshold.
 
Hmm, now that you put it that way, Ailuros, maybe the whole HSR thing makes no sense...
But what you've got to question is not whether HSR made sense. It's whether it was active in that 180FPS test! Because we don't know if it was an artifact-fest or not.

What we DO know is that it probably *is* possible to implement that on any GPU. We also know that it can give nice performance boosts.
And with 8x AA and no Early Z, the boosts you can get with such a thing could be quite stunning indeed. Of course, there probably are artifacts - but it would certainly explain the 180FPS score. If that were the case, maybe my "one Z per pixel" theory isn't even required anymore.

Also...
Ailuros said:
See reply to first quote above.

If I put all the claims together, what you're trying to tell me is that NVIDIA holds miracle solutions and patents in its hands and comes out with "just" an NV30, which came to life from a joint effort of former 3dfx engineering talent and their own. The next dumb theory I expect to hear is that the former 3dfx folks sit in there and sabotage their current employer's hardware on purpose.

Oh wait part of it wasn't supposedly sold....rightyeahsureok :rolleyes:

I think there's something you've got to remember there: it wasn't a true merger.
nVidia probably got 90% of the employees who worked on Rampage, Fear, ... - but not all of them. They probably got the plans ( and code ) of all those things, too.

But those 10% that have joined other companies may be quite important to understanding some parts of Rampage / Sage / ...
The result is code that *is* understandable. But you'd need some time to really understand it *all*.

Also, you've got to realize that the NV30 design was already underway.
I'd guess Jen-Hsun Huang and Marketing's goal of "Cinematic Computing" had already been given to the nVidia employees. And when the 3DFX employees came in and learned about that goal, they probably thought:
"Yeah, right... Let's let them decide on the design. If we propose our technology, they're gonna ask us to make it 'Cinematic Compliant'. Pfff..." ;)

More seriously though, I really believe nVidia hasn't had the time to include most of the 3DFX tech yet. nVidia got a lot of tech of its own, too, and finding ways to unite both isn't always so obvious. The GeForce FX probably got some 3DFX tech, but not that much of it.

The NV40 is really when you're gonna see the 3DFX ( & GigaPixel, too! ) influence, according to early rumors.

BTW, after further thinking, it seems to me this system in Rampage to make AA free with multitexturing wouldn't be so useful anymore as soon as you include Early Z and good front-to-back ordering. Could be wrong on this, though, just speculation.


Uttar
 
Joke or not, those HSR drivers did give me about 15-20 fps (with no visible loss) more in Q3 after some tweaking :)
 
OK, once again, 3dfx's HSR wasn't 100% efficient unless it became an artefact-fest, as you all have pointed out. REALISTICALLY, with a roughly 1% visible error, you could expect 30-50% overdraw reduction, which would help the frame rate a good deal but would not be nearly as dramatic as full deferred rendering.
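For a feel of what that buys you, here's a toy calculation (the depth-complexity figure is an assumption for illustration, not measured Rampage data):

```python
def fill_cost(depth_complexity, overdraw_removed):
    # Relative shading work per screen pixel. depth_complexity is the average
    # number of times a pixel would be shaded with no HSR at all;
    # overdraw_removed is the fraction of that overdraw the HSR skips.
    overdraw = depth_complexity - 1.0
    return 1.0 + overdraw * (1.0 - overdraw_removed)

baseline = fill_cost(3.0, 0.0)     # depth complexity 3 -> 3.0 units of fill work
partial_hsr = fill_cost(3.0, 0.4)  # 40% of the overdraw skipped -> 2.2
deferred = fill_cost(3.0, 1.0)     # all opaque overdraw gone -> 1.0
print(baseline / partial_hsr)      # ~1.36x fill-limited speedup
print(baseline / deferred)         # 3.0x for a full deferred renderer
```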

3dfx wasn't using the 1 Z per pixel trick, I'm not even familiar with how that would work at all o_O

The Z check units which would otherwise have gone unused, be it from pipeline combination or from loopback (Rampage doesn't loop, though, IIRC), are used for the additional Z samples, resulting in no fill rate hit.

Rampage has a lot of bandwidth, moreso even than GeForce4, and saves some from not using it for geometry.

Couple of other things: 3dfx and nVidia hated each other. nVidia buying 3dfx's assets, then immediately canning the entire GeForce line and releasing Rampage as-is would've been identical to admitting defeat - you all know that, and if nVidia had done it, they would've been ridiculed big-time by the 'in the know' lot.

Also as Uttar said, nVidia did NOT get 100% of 3dfx. They got a very high percentage of assets, but a lot of engineers went to *ahem* other companies instead. And Uttar: the HSR implemented in Rampage (and partially on VSA) was a stopgap measure before better bandwidth-savers could be implemented in later cores, and a mostly-last-minute addition.

From what I understand, 3dfx's last days were nuts. Nobody was really focusing on anything, except the core Rampage team. I'd hazard a guess that maybe 10% of the actual Rampage team went to nVidia.
 
Tagrineth said:
Couple of other things: 3dfx and nVidia hated each other. nVidia buying 3dfx's assets, then immediately canning the entire GeForce line and releasing Rampage as-is would've been identical to admitting defeat - you all know that, and if nVidia had done it, they would've been ridiculed big-time by the 'in the know' lot.

LOL. Maybe so, but it's ludicrous to assume they canned the Rampage project because of some silly grudge.

It's always about making money, nothing else. At the end of the day, the engineers don't get much of a say in how everything pans out.

MuFu.
 
Tagrineth said:
3dfx wasn't using the 1 Z per pixel trick, I'm not even familiar with how that would work at all o_O
Supersampling: Color per sample, Z per sample
Multisampling: Color per pixel, Z per sample
Multisampling using the 1 Z per pixel trick: Color per pixel, Z per pixel

The advantage is that in many cases you can use a very basic compression algorithm and still get efficiency as good as, if not better than, the complex compression systems used today.
It works quite well IQ-wise, too. But in the cases where the Z test passes for only some samples, and not all or none, it may even look as if there was no AA. Those cases are rare, but they *do* exist.
You could also do something like a Z value every 2 samples - but then, compression wouldn't be so good anymore. But still, about 30% less Z usage isn't that bad!
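A tiny sketch of what those three variants cost in framebuffer storage, following the definitions above (32-bit colour and 32-bit Z are assumed purely for illustration):

```python
def bytes_per_pixel(samples, colour_per_sample, z_per_sample,
                    colour_bytes=4, z_bytes=4):
    # Framebuffer storage per screen pixel for one AA scheme.
    colour = colour_bytes * (samples if colour_per_sample else 1)
    z = z_bytes * (samples if z_per_sample else 1)
    return colour + z

# 4x AA:
print(bytes_per_pixel(4, True, True))    # supersampling:         32 bytes
print(bytes_per_pixel(4, False, True))   # multisampling:         20 bytes
print(bytes_per_pixel(4, False, False))  # "1 Z per pixel" trick:  8 bytes
```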

The Z check units which would otherwise have gone unused, be it from pipeline combination or from loopback (Rampage doesn't loop, though, IIRC), are used for the additional Z samples, resulting in no fill rate hit.

Makes sense. Not a bad technology, either. But nVidia's "multiple Z units per pipeline" on the GF3 does the same thing, without extra cost when multitexturing.

Rampage has a lot of bandwidth, moreso even than GeForce4, and saves some from not using it for geometry.

Okay, so Rampage had a lot of bandwidth. But even considering the geometry bandwidth advantage, it *still* doesn't have more bandwidth than the NV30, which got Z compression too. And the NV30 at 4x AA is *slower* than the Rampage at 8x AA! That isn't very logical, IMO. The only way it makes sense is if it was a horrible artifact-fest, with at least 2-3% visible error.

releasing Rampage as-is would've been identical to admitting defeat - you all know that, and if nVidia had done it, they would've been ridiculed big-time by the 'in the know' lot.

Of course. But that wouldn't have prevented them from using Rampage technology in the NV30. Or wait, are you suggesting the GeForce FX sucks because nVidia wanted to make people think 3DFX technology sucked? Hey, wait a second, that makes sense! :D
And anyway, nVidia had the XBox contract with Microsoft. If they had released a superior product on the PC before the XBox was released, Microsoft would have sued them like they'd never sued before...

Also as Uttar said, nVidia did NOT get 100% of 3dfx. They got a very high percentage of assets, but a lot of engineers went to *ahem* other companies instead.

Well, from my understanding, they received all of the assets (that is, all the intellectual assets - nobody cares about what's in the fridge of the QA room! :rolleyes: ) *but* only a small part of the 3DFX employees.
I don't think they hired many of 3DFX's marketing guys (although the accountants/management do seem to have interested them :p j/k), mostly engineers. IIRC, it was about 100 engineers. Or do I have my figures wrong?

And Uttar: the HSR implemented in Rampage (and partially on VSA) was a stopgap measure before better bandwidth-savers could be implemented in later cores, and a mostly-last-minute addition.

Okay, so that HSR's goal was to increase performance, even if it cost you IQ, and it probably was a setting you could disable.
By "better bandwidth-savers", you're talking about Fusion, right? Or did I get all the codenames confused again?

I'd hazard a guess that maybe 10% of the actual Rampage team went to nVidia.

If those estimates were accurate, then it would explain a LOT.
I'd estimate 50% or more joined ATI - am I possibly right on that?


Uttar
 
MuFu said:
Tagrineth said:
Couple of other things: 3dfx and nVidia hated each other. nVidia buying 3dfx's assets, then immediately canning the entire GeForce line and releasing Rampage as-is would've been identical to admitting defeat - you all know that, and if nVidia had done it, they would've been ridiculed big-time by the 'in the know' lot.

LOL. Maybe so, but it's ludicrous to assume they canned the Rampage project because of some silly grudge.

It's always about making money, nothing else. At the end of the day, the engineers don't get much of a say in how everything pans out.

MuFu.

Creative did the same thing. Buy Aureal, can their line in favour of your own inferior one.

Keep in mind you also make money by removing your competition... especially when said competition had five aces up its sleeve.
 
More seriously though, I really believe nVidia hasn't had the time to include most of the 3DFX tech yet. nVidia got a lot of tech of its own, too, and finding ways to unite both isn't always so obvious. The GeForce FX probably got some 3DFX tech, but not that much of it.

The NV40 is really when you're gonna see the 3DFX ( & GigaPixel, too! ) influence, according to early rumors.

I thought the NV30 was supposed to have the 3dfx tech? Just about all the interviews I remember reading indicate this.

What I still don't get, however, is how tech which is a couple of years old by now can be seen as the 'holy grail'. Very confused.
 
Uttar said:
Supersampling: Color per sample, Z per sample
Multisampling: Color per pixel, Z per sample
Multisampling using the 1 Z per pixel trick: Color per pixel, Z per pixel

The advantage is that in many cases you can use a very basic compression algorithm and still get efficiency as good as, if not better than, the complex compression systems used today.
It works quite well IQ-wise, too. But in the cases where the Z test passes for only some samples, and not all or none, it may even look as if there was no AA. Those cases are rare, but they *do* exist.
You could also do something like a Z value every 2 samples - but then, compression wouldn't be so good anymore. But still, about 30% less Z usage isn't that bad!

Creepy. But 3dfx didn't do that.

Makes sense. Not a bad technology, either. But nVidia's "multiple Z units per pipeline" on the GF3 does the same thing, without extra cost when multitexturing.

True. Actually, Rampage pays no extra hardware cost for it, so in a heavily multitextured situation Rampage is much more efficient (transistors saved).

Okay, so Rampage had a lot of bandwidth. But even considering the geometry bandwidth advantage, it *still* doesn't have more bandwidth than the NV30, which got Z compression too. And the NV30 at 4x AA is *slower* than the Rampage at 8x AA! That isn't very logical, IMO. The only way it makes sense is if it was a horrible artifact-fest, with at least 2-3% visible error.

It has 200MHz DDR on a 256-bit bus, NV30 has 500MHz DDR on a 128-bit bus... making it 12.8GB/s versus 16GB/s. Factor in that NV30 is also using a chunk of that for geometry, which Rampage isn't; that NV30 is using a 100% precise HSR algo (which probably eliminates a little less overdraw on average); and that NV30 is a flawed architecture from the get-go...
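Just to make that comparison concrete, here's the raw arithmetic behind those two figures (peak numbers only, no compression or geometry traffic accounted for):

```python
def peak_bandwidth_gb_s(clock_mhz, bus_bits, ddr=True):
    # Peak bandwidth = clock * 2 (for DDR) * bus width in bytes.
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)
    return transfers_per_sec * (bus_bits / 8) / 1e9

print(peak_bandwidth_gb_s(200, 256))  # Rampage: ~12.8 GB/s
print(peak_bandwidth_gb_s(500, 128))  # NV30:    ~16.0 GB/s
```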

Also consider that Rampage is using a nearly-free 'optimised' AF algo which doesn't need to sample as much, while NV30 is using a slightly optimised algo which still does take a lot of additional samples.

Of course. But that wouldn't have prevented them from using Rampage technology in the NV30. Or wait, are you suggesting the GeForce FX sucks because nVidia wanted to make people think 3DFX technology sucked? Hey, wait a second, that makes sense! :D
And anyway, nVidia had the XBox contract with Microsoft. If they had released a superior product on the PC before the XBox was released, Microsoft would have sued them like they'd never sued before...

Heh. Well, maybe. There are also the rumours of sabotage... nVidia sabotaged 3dfx's VSA line (which IS possible considering certain evidence), so bitter 3dfx engineers sabotaged key points of NV30 (again possible).

It's VERY far-fetched though, and has disturbing ramifications... I doubt it's true at all. But there are some BIG 'oopses' at 3dfx that can't be ignored or written off...

Well, from my understanding, they received all of the assets (that is, all the intellectual assets - nobody cares about what's in the fridge of the QA room! :rolleyes: ) *but* only a small part of the 3DFX employees.
I don't think they hired many of 3DFX's marketing guys (although the accountants/management do seem to have interested them :p j/k), mostly engineers. IIRC, it was about 100 engineers. Or do I have my figures wrong?

Don't know the exact numbers, but that makes some sense.

Okay, so that HSR's goal was to increase performance, even if it cost you IQ, and it probably was a setting you could disable.
By "better bandwidth-savers", you're talking about Fusion, right? Or did I get all the codenames confused again?

Fusion would've had similar tech, probably more refined and precise. Mojo was a merger of 3dfx's line with Gigapixel's tech, and was a full deferred renderer using GP's methods.

By the original timetables, Mojo would be out now had 3dfx survived.

I'd hazard a guess that maybe 10% of the actual Rampage team went to nVidia.

If those estimates were accurate, then it would explain a LOT.
I'd estimate 50% or more joined ATI - am I possibly right on that?

Don't know the exact percents, but yes, a lot went to ATi, some went to Matrox, and a good chunk of the 3dfx core went... elsewhere. (mysterious music plays)
 
Tagrineth said:
Creative did the same thing. Buy Aureal, can their line in favour of your own inferior one.

Keep in mind you also make money by removing your competition... especially when said competition had five aces up its sleeve.

Sure, but one of the main reasons Creative swallowed Aureal was to finally bury all the Patent 990 litigation. Although A3D 2.0 was a fantastic technology, I really don't think Aureal were head and shoulders above Creative in terms of tech, as so many people believed. In the 10K1 the latter probably had the upper hand when it came to handling multiple audio streams in a desktop environment, and superior MIDI features, to name just a couple of areas. Plus they already had a very exploitable 3D engine in EAX, and of course there is no reason why they wouldn't have embraced Aureal tech since the acquisition, as nV surely will have embraced 3Dfx ideas.

I just don't believe Rampage was as ahead of its time as you claim. They canned it because it really wasn't all that great - if it was, we would have seen explicit and substantial use of associated technologies by now.

MuFu.
 