Albatron GeForceFX 5900PV 128MB Review

Discussion in 'Beyond3D News' started by Dave Baumann, Aug 26, 2003.

  1. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Those who can't read German should wait for the English version (which should be coming any time now):

    http://www.3dcenter.de/artikel/cinefx/

    Otherwise, if you're impatient, use an online translator.
     
  2. Bry

    Bry
    Newcomer

    Joined:
    Aug 29, 2003
    Messages:
    64
    Likes Received:
    0
    Sorry, that was my post; I didn't realize I wasn't signed in :oops:
     
  3. foo

    foo
    Newcomer

    Joined:
    Sep 1, 2003
    Messages:
    3
    Likes Received:
    0
    I've watched Beyond 3D for a long time. It's the best Graphics card site around! :D

    I just got a 5900 to replace my 4400. The reason is Stereo Drivers.

    I've had Stereo Glasses (LCD) since the Atari ST days (1988) when the 3D effects needed to be programmed by hand to see them. I went to the PC and after a number of years, I got a Wicked 3D VooDoo 2 (Stereo). I had that card and used it with my ATI Radeon (original).

    Once my PC started getting too fast, and the VooDoo 2 couldn't keep up, I switched to Nvidia.

    ATI needs to add this capability as well. It is cool that Nvidia continues to support Stereo 3D even while they are struggling with driver bugs.
     
  4. Pit_Viper

    Newcomer

    Joined:
    Feb 18, 2003
    Messages:
    79
    Likes Received:
    0
    Location:
    Maryland
    ...I have always thought that environment-mapped bump-mapping was a great feature - a feature that, disappointingly, was not commonly utilized during the DirectX 6, ( or DirectX 7/8 ) era.

    Sometimes it takes a single concept or element, particularly a persistent concept or element, in order for things to completely dawn (no pun intended) on you. For those of you who are not waiting for a response from nVidia, or who think that they know the fate of the FX 5200/5600/5900 series, do not waste your time reading any further, as I'm simply "preaching to the choir," so to speak. For those of you who are still attempting to decide between ATi and nVidia hardware, I implore you to consider my [long-winded] thoughts.

    There was a time, not too long ago, when I used to purchase PC hardware based upon what I felt had the greatest potential to be the next innovative hardware item. This has always been somewhat of an ambitious approach - as enthusiasts, we're a bit more savvy than the typical consumer; however, we're also a distinct minority. One can certainly purchase a great piece of hardware, only to have it underutilized due to the fact that the average consumer will almost always purchase a more "plebeian" hardware item, rather than a full-featured, innovative item. By the way, my Gravis Ultrasound still lies in a box somewhere...

    Occasionally, however, full-featured hardware becomes common. Such was the case with the 3Dfx boards. Ironically, I've always felt that towards the end of the 3Dfx era, 3Dfx was responsible for suppressing the advancement of features in 3D entertainment software through their reluctance to support emerging technologies such as 32-bit color and hardware transformation and lighting. I "jumped ship" when 3Dfx began to campaign their "T-Buffer" garbage - effects that were becoming inherently feasible with all cards due to the ever-augmenting fill rate of 3D cards. However, because 3Dfx was the popular hardware item at the time, developers had no choice but to cater to the 3Dfx hardware.

    ...The aforementioned discussion brings me to the topic of environment-mapped bump mapping. Introduced in DirectX 6, this feature was used in a handful of games, but never became very popular. After observing this feature in use for years in the demoscene, I thought that it was great that it was finally implemented in 3D hardware. I thought that the Matrox G400 and the Pyramid 3D were the only two cards that supported this feature, which is why it never really became popular. However, while browsing the Rage3D forums yesterday, I stumbled upon the "Ark of Radeon" demo, and learned that the Radeon 7000 series also supported this feature. Additionally, I recently discovered that the Kyro cards supported this feature as well.
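    For readers unfamiliar with the technique: EMBM boils down to a per-pixel perturbation of environment-map coordinates by an offset read from a bump texture. A minimal Python sketch of the idea (all names and values here are illustrative, not taken from any actual driver or the DirectX 6 API):

    ```python
    # Minimal sketch of DirectX 6-style environment-mapped bump mapping (EMBM):
    # a (du, dv) pair read from a bump texture is transformed by a 2x2 matrix
    # and used to perturb the environment-map lookup coordinates.

    def embm_lookup(env_map, bump_duv, base_uv, bump_matrix):
        """Perturb env-map coordinates by a transformed bump offset.

        env_map     : dict mapping (u, v) -> color (stand-in for a texture fetch)
        bump_duv    : (du, dv) sampled from the bump texture
        base_uv     : un-perturbed environment-map coordinates
        bump_matrix : 2x2 matrix ((m00, m01), (m10, m11)) scaling/rotating the offset
        """
        du, dv = bump_duv
        (m00, m01), (m10, m11) = bump_matrix
        # Transform the offset, then add it to the base coordinates.
        u = base_uv[0] + m00 * du + m01 * dv
        v = base_uv[1] + m10 * du + m11 * dv
        return env_map[(round(u, 3), round(v, 3))]

    # Toy environment map: two distinguishable texels.
    env = {(0.5, 0.5): "sky", (0.6, 0.4): "sun-glint"}

    flat = embm_lookup(env, (0.0, 0.0), (0.5, 0.5), ((1, 0), (0, 1)))   # no bump
    bumpy = embm_lookup(env, (0.1, -0.1), (0.5, 0.5), ((1, 0), (0, 1)))  # perturbed
    print(flat, bumpy)  # sky sun-glint
    ```

    The whole effect is just that perturbed lookup, which is why it fit in DirectX 6-class fixed-function hardware.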

    The offerings from The Market Leader at the time, nVidia, did not support this feature. I awaited support for this feature when the Geforce 2 was announced, but this feature was not implemented in the Geforce 2 feature set. When the dominant market leader doesn't support this feature, why would developers add support for it?

    A retrospective review of the product offerings from nVidia reveals a somewhat disturbing trend: nVidia typically releases products that lack important features of their respective target DirectX version eras. Certainly, the DirectX feature set in any given revision is quite broad; however, one would expect that certain new, innovative features that predominantly define a DirectX revision would be supported. nVidia's failure to support such features can be noted in the lack of environment-mapped bump mapping support in the Geforce 1/2 series (DirectX 6/7), the lack of pixel shader 1.4 support in the Geforce 4 Ti cards (DirectX 8.1), the lack of any pixel shader support in the Geforce 4 MX series, and the lack of usable pixel shader 2.0 support (weak performance) in the Geforce FX 5800/5900 series of cards. (I also vaguely remember reading something about displacement mapping support not existing in the Geforce FX series, as well as improper support for multiple render targets in the drivers.) As many have stated before, "The Way It's Meant To Be Played" should be interpreted as "The Way It Should Have Been Played Two Years Ago".

    Now, could anyone imagine how many games would support pixel shader 2.0/vertex shader 2.0 if the Geforce FX 5800 had been the dominant card on the market? Would you, as a developer, attempt to write code to utilize a feature set that no card on the market could effectively utilize? (Keep in mind that efforts were supposedly made to augment the performance of the card to that of its current form, due to the threat from the Radeon 9700.)

    It is my understanding that in previous years, ATi has had poor driver support (I have never owned an ATi card). Weak performance and poor driver support were the primary reasons that ATi hardware never gained true market presence. ATi has always had a better feature set. Now that these two problems appear to have been resolved...

    ...I've requested a return-material authorization for my Geforce FX 5900. I guess that one of the few times that I chose to go with the "market leader," I made a slight mistake - developers actually appear to be supporting the Radeon 9xxx series. Hopefully, I will be able to replace this card with a Radeon 9800 (256 MB RAM). Oh well, I guess I will not be able to play Splinter Cell with shadow buffer support enabled...

    I know that a few people are waiting for a response from nVidia regarding pixel shader performance. I had to make a decision before my RMA time period expired, so I had to come to a conclusion this week. I will give you my opinion regarding future performance improvements from Detonator drivers: future drivers have about as much of a chance of providing satisfactory pixel shader performance improvements, while maintaining image quality, as they do of adding pixel shader 1.4 support to the Geforce 4 Ti series of cards.

    ...Sorry for the long, pointless post (I'm sure nobody has gotten this far); I guess that I get bored when I have a day off!
     
  5. hmmm

    Newcomer

    Joined:
    Jul 25, 2003
    Messages:
    56
    Likes Received:
    0
    Location:
    South Dakota, USA
    I rather enjoyed your post, and I made it all the way to the end. Just in case others don't though, I thought I'd repeat this little bit. I especially liked it.

     
  6. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    Pit_Viper points at Nvidia and says: "The Emperor has no clothes!"
     
  7. russo121

    Regular

    Joined:
    Aug 27, 2003
    Messages:
    283
    Likes Received:
    4
  8. Miksu

    Regular

    Joined:
    Mar 9, 2003
    Messages:
    997
    Likes Received:
    10
    Location:
    Finland
    Nice post. I enjoyed every line.
     
  9. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    Nvidia stopped being hungry and started believing their own hype. They think they are untouchable, that they can run the company primarily on a marketing (rather than engineering) philosophy. They are not making the loss after loss that brought 3DFX down, but they are exhibiting the same lack of willingness to adapt to the changing market, the same stubborn insistence that what they are doing is correct no matter what. All at the same time as the likes of ATI are prying open Nvidia's previously strong grip on the market.
     
  10. nonamer

    Banned

    Joined:
    May 25, 2002
    Messages:
    564
    Likes Received:
    7
    Russo, did you read the whole thread before bringing this up?

    EDIT: Actually this thread is still going on, and we should wait until they actually conclude the debate before linking it here, especially considering how off-topic this is.
     
  11. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    I can't agree here...The V5 shipped in June of 2000 and was fully 32-bit capable. The V3 that I bought in early '99 blew the doors off of my TNT at the time such that it was no contest. nVidia spent the rest of '99 trying to catch the V3 with the TNT2. It wasn't until the GF2 that they did it, and the V5 pretty well knocked out the GF2. As well, in '99 GLIDE was by far the predominant 3d API and I had a library full of GLIDE software that was useless on my TNT (which is why I ran a V2 alongside the TNT until I bought the V3 and got rid of both.)

    Hardware T&L was such a great feature that I can think of one, possibly two, titles that ever required it, and it's a feature that has already come and gone and isn't of much use anymore (indeed, it really never was more than a nVidia marketing bullet, IMO.) On the API front DX had to get to DX6 before it fairly equalled GLIDE functionality--which again didn't do anything to help in running all the GLIDE software I had and still used at the time. Of course, back in 1999 there were precious few 32-bit integer game engines--including DX and OpenGL, almost all of them were 16-bit engines--and the "32-bit debate" back in '99 mainly concerned alpha-channel blending as depicted in magnified screenshots. (It also rarely revolved around frame rates--which were pretty poor on the TNT2 in 32-bits and downright unplayable on the TNT in 24 bits.)

    But the main thing I notice when people talk about 3dfx "holding back the industry" is that they invariably forget all about the hardware-jittered, rotated-grid FSAA 3dfx introduced with the V5 (and many of them forget the V5 was a 32-bit 3D card, as well.) Unlike hardware T&L which is passe' already, FSAA of the caliber and type 3dfx introduced over three years ago is still very much alive and kicking today, and the current crop of 3d cards are often judged on the quality of their FSAA--rotated grids and all. The T-buffer effects were entirely secondary to the main use of the T-buffer in the VSA-100, which was hardware-jittered FSAA. That was the star attraction of the V5, because 3dfx was the first to begin forcing it via the driver control panel in games never designed to support it.

    My own thinking is that there was nothing wrong with 3dfx technology back in '99, and surely not in 2000. Indeed, 3dfx's position on the relative importance of AGP texturing for the future (or rather its unimportance) today seems visionary when contrasted to the pundit opinions of the time which talked about what a "crucial" 3d technology AGP texturing was and would be. With 128-mb 3d cards the norm it's easy to see that AGP texturing was never anywhere near good enough in the first place...;) It was pretty easy for me to see that then, but some people just couldn't, for some reason. People also laughed and jeered at the external power bus connector used on the V5--I remember Perez making derogatory remarks about it--and now it's the norm.

    It wasn't 3dfx's technology that did them in--it was 3dfx's management. Had they not changed their business model and embarked on a quest to become another ATi (whose current business model more closely resembles that of 3dfx's original--nVidia also copied 3dfx's original business model), being late with the V5 would have been but a bump in the road. But 3dfx management made a series of very expensive blunders in a short span of time--and did themselves in, IMO. To this day I think the *only* reason nVidia bought them out for $80M was to bury the multitexturing patents 3dfx was very close to proving nVidia had violated when the company just plain ran out of money to continue the suit. I agree completely with you that 3dfx made mistakes--but the ones that counted had nothing to do with their technology, IMO.
     
  12. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    UT2003 was the first real dx7/T&L game IMO, since it actually has completely T&L-optimized code; a couple of transformations here and there do not make up a T&L game.

    It's natural, though, that the transition to real T&L took as long as it did; I wouldn't be so optimistic as to expect full dx8.1 games before several card generations have passed after its introduction in hardware.

    And I completely disagree that dx7/T&L is passe these days; unless you consider upcoming games to be actual dx8/dx9 games, which leads to another hair-splitting debate about semantics. What I'd consider a full dx9.0 game, for instance, is a game that for the majority of its code relies on shaders, not a hybrid dx7/8 game with a few paltry reflections or whatever added to the mix.
     
  13. John Reynolds

    John Reynolds Ecce homo
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    4,491
    Likes Received:
    267
    Location:
    Westeros
    The GF1 certainly 'caught' the V3, and the V5 hardly "knocked out" the GF2, which had more than twice the multi-textured fill rate. Personally I preferred the V5 and used it until late 2001, but it was vastly outsold by the GF2s.

    It's all about marketing, but when you're overhyping features at least you have them to hype about.

    Yes, the V5's AA quality was better than anything Nvidia is offering even today in terms of just image quality, but that doesn't exactly give developers a helluva lot to play with, does it? Was the V5 even DX6 compliant? Not even close. 3dfx took that initial SST/V1 technology and respun it just a few too many times. Think about it: all those years in business and they did nothing but milk their initial technology, sparsely adding a bit of new technology here 'n there. It's hard to think of any other company that catalyzed a new market, had a virtual lock on it through a proprietary API, and then pissed it all away within an amazing 3-year timespan.

    And certain ATI AIB partners don't take pot-shots at Nvidia's high-end solutions for using up two slots? Or the fan noise on the 5800s? Personally I never had a problem with my V5's internal connector (had a major beef with the rolling lines on my GF2 Ultra), but if 3dfx hadn't insisted on using "more mature" fab processes they wouldn't have needed one.

    How many OEM deals do you think you can get by hawking technology that's essentially four years old?
     
  14. Dave H

    Regular

    Joined:
    Jan 21, 2003
    Messages:
    564
    Likes Received:
    0
    I'm going to have to disagree with you w.r.t. the implication that Nvidia has been in any way unique or even noteworthy in not supporting all the important new features du jour in their products. Compare to 3dfx: no 32-bit color or z until V3, no "large" textures until V5, no hardware T&L ever. (Apologies if I got any details wrong; I didn't follow the 3d market closely at the time.) Or ATI: how 'bout the R2x0, which skipped out on perhaps the most compelling new feature of its time, multisampling AA (yes, it became significantly more compelling in the R3x0/NV3x timeframe with the addition of color compression, but still).

    And then look at Matrox, which has been all about designing in special-case support for one particular feature well ahead of its time, and then hyping it in the hopes of posing as a serious competitor in the performance 3d market. EMBM is a fine effect, but it wasn't particularly appropriate in the G400's timeframe. The reason games with compelling bump-mapping are only being released in the past year or so is because it's taken that long for 3d performance to get to the point where pervasive bump mapping is appropriate from either a performance or an art perspective.

    Remember, it's not like any of these IHVs are inventing these features. They've all been around and well-studied for decades in software rendering. Matrox didn't invent EMBM; 3dfx didn't invent sparse sampling grids for AA; Nvidia didn't invent pixel shaders or multisampling. These are all old techniques, widely described in the public domain years and years ago. The IHVs only decide which features would provide the best bang-for-the-buck (in terms of transistor count and performance impact) if implemented in a particular hardware generation.

    The featuresets for upcoming cards are for the most part set in stone before the relevant DX standard is decided upon. In fact, the main way MS decides on the featuresets for new DX standards is by discussing with the IHVs what features they've already designed into their future cards. MS does their best to expose support for as many of these features as they can while still having a coherently targeted API. It's inevitable that any new architecture won't support every feature of its corresponding DX generation, because that would mean supporting every feature the competition is implementing.

    Let's not forget that R3x0 lacks significant features compared to NV3x--FP32, dynamic branching and high instruction count shaders. The problem is that none of these features is very compelling with the current level of shader performance from NV3x or R3x0. In a year or so, however, they'll all be crucial for high-end parts.

    R3x0 has the same crappy "displacement mapping support" (it's debatable it even deserves the term) as NV3x. But true displacement mapping is inherently a VS 3.0 technique. Matrox's decision to design in support for true displacement mapping in Parhelia was completely ahead of its time, which is one of the worst things you can say about an engineering decision. Meanwhile, Parhelia had laughable performance, pixel shader support only to PS 1.1, and their much-heralded displacement mapping is not and apparently never will be supported in any DX drivers. Not a coincidence, and not a surprise.

    (As for NV3x's poor substitute for MRTs--Multiple Element Textures (METs)--this is indeed apparently a somewhat significant weakness of the design, and moreover AFAIK even METs aren't supported yet in current drivers.)
     
  15. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Voodoo 3 (AKA Banshee 2 - yes, that was the originally scrapped name for the card, which came from the mouths of 3dfx employees and was later denied) was nothing more than a souped-up Banshee with the second texturing unit enabled. No more and no less. You could put the two cards side by side and have second thoughts about guessing which is which.

    All Voodoos rendered at 32 bits internally, but display output was 16-bit up to the Voodoo3; the V4 and V5 got 32-bit support. Even 256-color textures were still in use up to the V3.
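    The "32-bit internal, 16-bit display" split can be illustrated with a quick sketch of RGB565 truncation, the common 16-bit framebuffer format (illustrative only; real hardware also dithers on output, and 3dfx's internal pipeline details differ):

    ```python
    # Sketch: a card may compute blends at 8 bits per channel internally but
    # truncate to a packed 16-bit RGB565 word on display output.

    def to_rgb565(r, g, b):
        """Truncate 8-bit-per-channel color to a packed 16-bit RGB565 word."""
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

    def from_rgb565(word):
        """Expand a packed RGB565 word back to 8-bit channels (no rounding)."""
        r = (word >> 11) & 0x1F
        g = (word >> 5) & 0x3F
        b = word & 0x1F
        return (r << 3, g << 2, b << 3)

    # An 8-bit grayscale gradient collapses into coarser steps after the
    # 16-bit round trip: only 32 distinct red levels survive out of 256.
    steps = sorted({from_rgb565(to_rgb565(v, v, v))[0] for v in range(256)})
    print(len(steps))  # 32
    ```

    Those coarse steps are exactly the banding that fueled the 16-bit vs 32-bit debates of the era.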

    I have to agree with WaltC that management (deja vu?) was the primary factor in 3dfx's own demise, and I also believe nVidia bought 3dfx because it was cheaper than losing the lawsuit.

    Is it me, or has this gone way off topic? :lol: Anyway, it's interesting.
     
  16. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    I was specifically referring to '99 for the V3. GF1 didn't ship in volume, and certainly not with decent drivers, until 2000. The real contest in '99 was between the V3 and the TNT/2, IMO. Pre-shipping hype for GF1 in '99 reminded me a lot of pre-shipping hype for nv30.

    I actually bought a GF2, 3 & 4 after buying the V5, but it wasn't until the GF3 that I retired the V5--and only then regretfully, as nVidia never did get up to speed with its FSAA--in fact, I recall for a long time nVidia PR talking about how "high-resolution" was what 3d gamers wanted, not FSAA--as if the two were mutually exclusive.

    Yes, the V5 was not a good system OEM product, and suffered from that just as nVidia's nv30/5 products suffer from it today.


    Yes, but the benefit of sites like B3d is that it helps people penetrate the marketing fog...:) To me, nVidia's "hardware T&L" was entirely as valuable to end users as 3dfx's non-FSAA related T-buffer features...

    It's probably hard to think about it--especially in the 3d vpu markets--because there have been relatively very few such companies and the x86/Windows 3d marketplace is still not yet even 10 years old. I give 3dfx some breaks I won't give nVidia today, because 3dfx virtually jump started the market by nothing more than the seat of its pants. When they started, the x86/Windows 3d industry was so new 3dfx had to write its own API so that customers--and developers--could actually use their products in any sort of meaningful manner. Nobody else made anything as useful as GLIDE (or hardware) in the way of APIs for a long time.

    I disagree with an assessment of GLIDE as "proprietary" (although it surely was) in the sense that 3dfx used it to maintain its position. It was only after OpenGL ICDs began to appear (after much prodding and hyperbole from Carmack) and D3d began to mature, that this viewpoint was even possible.

    Prior to that time--the only useful 3d API that existed for x86/Windows 3d gaming was GLIDE. Basically, at one point, if you didn't run a GLIDE product you weren't a 3d gamer because at one time 100% of all 3d games were GLIDE. It was indeed proprietary--but only out of necessity in the beginning. And in the end, 3dfx made the decision to abandon GLIDE, when it became apparent that the other API's had matured enough to replace it. I agree that the V5's drivers were lacking in their DX6 support, but felt when I was using the card at that time that its GLIDE compliancy was far more important, considering the library of GLIDE software I had at the time...Just trying to frame the issue in the context of the time.

    I agree with your comments about milking--but I still think that was yet another syndrome of their bad management. Was nVidia's management inherently so terrific--or did it just look that way because 3dfx's was so bad by comparison? IMO, I think nVidia's been milking since the GF1 under the same illusions about its place in the market that 3dfx had, which is why ATi was able to score such a coup with R3x0. I agree that 3dfx pissed it away--but they did so on the basis of short-sighted and erroneous assumptions by their management--which just goes to show that in the face of bad management even good technology won't save you...;)

    3dfx was conservative about going to .18--again, a management decision. nVidia's management decision at the time was to go to .18. nVidia was right at the time, and 3dfx was wrong. Flash forward to the present and it's easy to see that with ATi's R3x0 versus nVidia's nV30/5, ATi was correct in being conservative and sticking with .15, and nVidia's move to .13 was premature. Management decisions again.

    I always felt that the criticisms of the V5 series from a systems OEM standpoint were justified. The cards were too big physically (although heat and noise weren't really problems.) I'm not surprised to see similar criticisms of nVidia's nv30/35 reference designs today for the same reasons (plus the heat and noise.) And unlike nVidia's careless, expensive marketing of nv30 (which they later pulled), 3dfx at least had the good sense to drop the V56k prior to production.

    Ask nVidia...;) They did it from GF1-GF4...got plenty of OEM deals. Again, I saw nothing remotely similar between the VSA-100 products and the V1. What hurt 3dfx in the OEM business and is now hurting nVidia in the same way is clunky reference designs competing with compact and efficient reference designs offered by competitors for space in OEM boxes.
     
  17. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Agreed, but my comments were relative to '99-2000, which was the year that "hardware T&L" was hawked as relevant--when it was anything but relevant. IMO, shader technology this year is already far more important than was T&L in '99.

    But rotated-grid FSAA is in another category entirely. I just wanted to reiterate that the case for 3dfx "holding back the industry" is really not so good when everything 3dfx did is considered in its proper context. Many were so warped by the incessant anti-3dfx propaganda of the time (which was often completely false) that even today they're amazed to discover the V5 was a 32-bit 3D card...And then there was the FSAA = blurry line of thinking so common at the time that also was an error. The "AGP texturing" errors...There's actually a long list of projections made in '99 about what the "future" of 3d would be which in the years since '99 have been proven false, IMO.

    Edit: One such projection, which I made at the time, was that 3dfx would survive...Heh...;)
     
  18. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    True, as far as it goes--but that's like saying there's nothing to a 3d chip since you can run software renderers...;) The trick, IMO, is not coding it for a general cpu to execute in software, the trick is in getting it into a commercial 3d chip in hardware...That's a different trick entirely...sort of like comparing the space shuttle to a Chinese fireworks skyrocket circa 500 A.D. The general principles are similar, but that's about as far as it goes...IMO, of course...
     
  19. Dave H

    Regular

    Joined:
    Jan 21, 2003
    Messages:
    564
    Likes Received:
    0
    Obviously hardware engineering is a completely different task than software engineering, and IMO probably a more interesting one. However, to me all the praise you're heaping on 3dfx's engineers for pioneering rotated grid sample patterns is slightly silly. The fact that a rotated grid offers better 4xAA quality than an ordered grid has been widely known for decades. And implementing it in hardware is relatively straightforward, although it certainly does require more transistors and more engineering work than using an ordered grid.

    The point is that 3dfx didn't invent non-ordered sample patterns, nor did they do the research to discover that they are superior. They just took an old idea off the shelf and made the (praiseworthy) decision to implement it. More to the point, R3x0's sparse sample patterns are not the descendants of V5's RGSS; both R3x0's and V5's sample patterns are descended from the same original research from probably 30 years ago.
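    The ordered-vs-rotated distinction is easy to demonstrate numerically. For near-horizontal and near-vertical edges, what matters is how many distinct x and y coordinates the sub-pixel samples occupy. A small Python sketch (the offsets and rotation angle are illustrative, not the actual V5 or R3x0 patterns):

    ```python
    # Sketch: why a rotated (sparse) sample grid beats an ordered grid for 4x AA.
    import math

    def distinct_levels(samples):
        """Count distinct x and y coordinates among sub-pixel sample offsets."""
        xs = {round(x, 6) for x, _ in samples}
        ys = {round(y, 6) for _, y in samples}
        return len(xs), len(ys)

    # Ordered 2x2 grid: samples share rows and columns.
    ordered = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]

    # The same four samples rotated ~26.6 degrees about the pixel center:
    # now every sample has a unique x and a unique y coordinate.
    theta = math.atan(0.5)
    rotated = [(x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta)) for x, y in ordered]

    print(distinct_levels(ordered))  # (2, 2): only 2 gradient steps on axis-aligned edges
    print(distinct_levels(rotated))  # (4, 4): 4 gradient steps from the same 4 samples
    ```

    Same sample count, same cost per pixel, but the rotated pattern yields twice as many intensity steps on the edges where aliasing is most visible.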
     
  20. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    I never said they "invented" general ideas about anything. I said they were the first to implement such concepts in commercial 3d chips--and they were. Regarding FSAA in general--3dfx was the first 3d-chip company to implement the ideas in a working 3d-chip. It's a feature that has survived and flourished since and today is considered a requirement for 3d chips in these markets. (Hardware T&L, btw, was a concept as old as the hills and twice as dusty when nVidia implemented it.)

    Your point isn't a very good one--heck, everything about 3d chips came from earlier ideas in 2D computing. That isn't news...You don't just take an idea off the shelf and implement it....;) It takes a whole lot of work and thought. It's kind of like the difference between reading a general patent on an idea and then looking at a schematic of a specific circuit design which makes execution of the idea possible in a shipping product. A world of difference.

    But to follow your line of reasoning there isn't anything that 3dfx, nVidia, or ATi has done that we should be impressed about--since they "just took old ideas and implemented them"....Heh...;) The process is nowhere near as simple or rudimentary as you'd like to pretend. I kid you not--there was no VSA-100 circuit design sitting around on a dusty shelf for 30 years which 3dfx just dusted off and used. Heh...not quite...;)
     


  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.