More than a year on and shaders are unimpressive

Above said:
Well, how about that. 3dfx was pilloried for not including the features of their competitors, but look how long it has taken for the software industry to catch up with the hardware. 3dfx died because they tried to build their own boards, not because of design decisions.

I'm glad that they did die, however, seeing as they certainly weren't bringing new features to the table nearly as fast as their competitors (mostly nVidia at the time).

Given the very long cycle it can take for hardware features to come into use in games, market saturation by hardware that doesn't support a given advanced feature only prevents that feature from ever being used.
 
I swear you guys from Nvnews need to take the blinders off... at least once in a while.

I'm glad that they did die, however, seeing as they certainly weren't bringing new features to the table nearly as fast as their competitors (mostly nVidia at the time).

So I guess industry-leading FSAA, years ahead of even Nvidia, does *not* count as a *new* feature... I guess a useless T&L engine that was outdated before even the first T&L game came out was really *leading the charge*
 
FSAA on the V5 added nothing to raise the minimum hardware performance level for games. It didn't substantially increase fill rate or polygon rate, add dot3, etc. It wouldn't have enabled a sea change in rendering quality, like Doom3, or a step closer to CG, the way fully programmable hardware will. Basically, 3dfx offered almost nothing for developers to improve their new games, only something for consumers to improve the look of their existing old games. (Yes, you could do multisampling tricks like depth of field, soft shadows, or motion blur, but their card had nowhere near the performance to pull that off, and 4 samples just aren't good enough.)
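
Those multisample tricks, for the curious, look roughly like this in OpenGL. A minimal motion-blur sketch using the accumulation buffer, assuming a GL context that actually has one; draw_scene() and the parameter names are mine, purely illustrative:

[code]
/* Motion blur via the OpenGL accumulation buffer: render several
   time-jittered frames and average them. Assumes a GL context with an
   accumulation buffer; draw_scene(t) is a hypothetical callback that
   renders the scene at time offset t. */
#include <GL/gl.h>

void draw_scene(float t); /* hypothetical scene renderer */

void motion_blur_frame(float shutter_open, float shutter_close, int samples)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < samples; i++) {
        float t = shutter_open +
                  (shutter_close - shutter_open) * (i + 0.5f) / samples;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene(t);                      /* full re-render per sample */
        glAccum(GL_ACCUM, 1.0f / samples);  /* add a weighted copy       */
    }
    glAccum(GL_RETURN, 1.0f);               /* averaged image to screen  */
}
[/code]

Note that every sample is a complete re-render of the scene, which is exactly why a V5-class card had no hope of doing this at playable framerates.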


Yes, it is an IQ improvement, but even if every single gamer had upgraded to a V5, the basic performance and features of the card were the same as every other card on the market. They didn't raise the performance or feature bar, just IQ. Let's not forget how they were dragged kicking and screaming into 32-bit. Funny how IQ didn't matter to them until FSAA. Same ole marketing spin: pump up the features you have, downplay the others. Developers were arguing that a 16-bit backbuffer wasn't good enough (even with the 22-bit filter) because of multi-pass artifacts (e.g. transparency), but all of the pixel nitpickers who currently pore laboriously over screenshots of anisotropically filtered textures were defending 3dfx.


Imagine if 3D cards had just kept adding new IQ features: more FSAA samples per pixel, higher anisotropy, better texture filtering, etc., while fill rate and polygon rate stagnated and there was still no way to do per-pixel lighting.

Well, Counter-Strike would still look a lot better, but years later we would have no Doom3, UT2k3, etc.

I'm not faulting 3dfx for introducing FSAA. FSAA is a must. I'm faulting them for betting the farm on it and shipping a card that was underdeveloped in other areas. I fault them for not shipping Rampage a lot earlier.

Basically, 3dfx could not keep up with the market. NVidia and ATI are advancing the state of the art far faster. Good ideas alone are not enough; you have to execute on them. That's IMGTec's problem: it's just not enough to invent a new algorithm, you've got to be able to mass-produce it, on a timely basis, for consumers. V6 might have killed the GF2, had they produced it before DDR speeds ramped up.


Yak all you want about T&L and the other features added. *Somebody* had to produce the first T&L card; there has to be a first mover. You can't put the cart before the horse. Developers aren't going to produce a T&L-enabled game until people have T&L-enabled cards. Therefore, the burden is on the hardware vendor to produce this card and sell people on it before it is truly usable. Ditto for DOT3, EMBM, programmable shading, etc.


When I bought my Orchid Righteous Voodoo1, there were no games for it except for this pathetic soccer game. I spent most of my time messing around with the demos. Then GLQuake came out, and it instantly made my purchase worth it.


I bought a DVD player before many DVDs were available. Then the Matrix DVD came out, and DVD players went mainstream. I owned an HDTV long before the first broadcast HDTV programs. Someone has to produce the hardware first, and then early adopters have to buy it to make it cheaper for the mainstream users.


My HDTV right now is fairly "useless" given the paucity of content. But because I spent thousands of dollars on it, more content will be produced for it, sets will get cheaper, and one day, you will benefit from my "wasteful" consumption, and get a much better HDTV than mine for 1/10th the price, and you will have a wide selection of content to view.


Yes, early adopters get screwed. But without us, it would be much harder to introduce new technology.
 
Hellbinder[CE] said:
So I guess industry-leading FSAA, years ahead of even Nvidia, does *not* count as a *new* feature... I guess a useless T&L engine that was outdated before even the first T&L game came out was really *leading the charge*

I should have specified what kind of features I was talking about.

I was talking about features that change the way games are programmed. FSAA is all well and good, but it's not "bad" for the video game industry overall if other companies don't support it as well, or at all. That is, the lack of support for improvements like increased fillrate, FSAA, anisotropic filtering, and so on will not affect the way games are programmed.

By contrast, the lack of hardware T&L, programmable pipelines, and so on, do affect how quickly the video game industry supports these features.
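
To make the distinction concrete: "using hardware T&L" means handing the card raw object-space geometry plus matrices and light state, instead of looping over every vertex on the CPU. A rough fixed-function OpenGL sketch; the mesh layout and names are my own, just for illustration:

[code]
/* Fixed-function hardware T&L in OpenGL: the app submits object-space
   vertices and normals; transform and lighting run on the card rather
   than in a per-vertex CPU loop. Mesh layout and names are illustrative. */
#include <GL/gl.h>

typedef struct { float pos[3]; float normal[3]; } Vertex;

void draw_mesh_hw_tnl(const Vertex *verts, const unsigned short *indices,
                      int index_count, const float modelview[16])
{
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(modelview);      /* the card applies the transform */

    glEnable(GL_LIGHTING);         /* the card does the lighting too */
    glEnable(GL_LIGHT0);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].pos);
    glNormalPointer(GL_FLOAT, sizeof(Vertex), verts[0].normal);

    /* One call: untransformed geometry goes straight to the card. */
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
[/code]

An engine targeting pre-T&L cards instead transforms and lights every vertex on the CPU and submits the results each frame, which is why the feature changes engine design rather than just image quality.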
 
DemoCoder said:
but all of the pixel nitpickers who currently pore laboriously over screenshots of anisotropically filtered textures were defending 3dfx.

Pfft, you think it's ex-3dfx fanboys currently trying to nitpick the 8500 to death?
 
I'm not faulting 3dfx for introducing FSAA. FSAA is a must. I'm faulting them for betting the farm on it and shipping a card that was underdeveloped in other areas. I fault them for not shipping Rampage a lot earlier.

See, I just can't understand thinking like this...

First, even today, years after the introduction of hardware T&L, there are only one or two games that require it. Kyro proves that. Today no one is suggesting that a GF2 is going to deliver playable frame rates in games like UT2k3 or Doom 3. Yet just because it has an outdated, unused T&L engine it is a superior product? No one can play this year's cutting-edge games on 2-3 year old technology... no one... I don't care what company it is. If 3dfx had not made bad business decisions, their next product would have had T&L, industry-leading occlusion culling, and FSAA that would probably still be superior to anything Nvidia has.

It's also funny that today everyone is focused on FSAA, yet you still REFUSE to acknowledge that it was a *cutting-edge* technology 3 years ago...

This is where some people apparently throw logic right out the window in favor of PR statements and press releases.

To each his own, I guess... :rolleyes:
 
Hey Randell you cheat... I saw that ;)

Bottom line is I personally use 3dfx FSAA today and just about get away with it. T&L and Dot3... I've played 'em. But I still haven't seen (a) enough improvement in FSAA to make me jump ship, or (b) that the newer features are as important. Old as my V5 is, it still blows away the integrated graphics that are supposed to be the baseline for developers!
 
Hellbinder[CE] said:
First, even today, years after the introduction of hardware T&L, there are only one or two games that require it. Kyro proves that. Today no one is suggesting that a GF2 is going to deliver playable frame rates in games like UT2k3 or Doom 3. Yet just because it has an outdated, unused T&L engine it is a superior product? No one can play this year's cutting-edge games on 2-3 year old technology... no one... I don't care what company it is. If 3dfx had not made bad business decisions, their next product would have had T&L, industry-leading occlusion culling, and FSAA that would probably still be superior to anything Nvidia has.

Now that's a bunch of bull.

The original GeForce was designed as the minimum spec for UT2k3. With a GeForce2 Pro at medium quality, you can still play UT2k3 at 1024x768x32 at around 60 fps, according to Anand's benchmarks (on the more strenuous of the two...). With a GTS, you may have to go a little bit lower.

You may now try to say, "but that's not good enough! I want to play games at 1280x1024x32 or higher! And with max detail!" Well, games are never designed so that everyone can run them with max detail. They're certainly not designed to run with max detail on older video cards.

The problems today mainly revolve around backwards compatibility. Imagine two games: one developed around Voodoo3 technology, and another developed around GeForce3 technology. In both games there are settings you can choose at which a GeForce3 will get identical framerates. I guarantee you that the game built on GeForce3 technology will look much better. And, presumably, it would be very, very challenging to build in support for older graphics cards that would work halfway decently. For example, the talk about pre-T&L video card performance in UT2k3 was about playable framerates at 640x480 and below (i.e. Voodoo3, TNT).

Anyway, what I guess I'm trying to say is: while games like UT2k3 and DOOM3 may not be considered playable by the enthusiast on older hardware, it will certainly be possible to play these games on older hardware (GeForce DDR or so) with a number of details turned down.
 
From Anand's shootout in question:

Until very recently, the limiting factor in the FPSes we all played was memory bandwidth. If you remember when the original GeForce 256 launched, the biggest complaint was the lack of DDR memory. The measly 2.7GB/s of memory bandwidth was easily saturated at higher resolutions, and thus the demand for higher-bandwidth memory solutions was upon us.

Since then, ATI and NVIDIA have both improved their GPUs considerably to offer as much memory bandwidth as possible. They have focused on being efficient by introducing Z occlusion culling technologies like ATI's HyperZ and NVIDIA's Visibility Subsystem which get rid of elements that will never be seen by the user before sending them through the texturing pipelines.

I figure that in a purely T&L-optimized game like UT2003, a combination of a strong T&L unit and effective bandwidth-saving techniques is a necessity for good or better playability.

A GF2 would score better if it handled overdraw better, just as a non-T&L tiler would with a T&L unit on board.

By my standards, though, neither is playable, so it's rather a moot point.
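
For what it's worth, the core idea behind those Z-occlusion schemes is simple enough to sketch: depth-test a pixel before doing any of the expensive texturing work. A toy software-rasterizer span loop (my own illustration; real HyperZ-style hardware operates on blocks of pixels with hierarchical/compressed Z, not per pixel like this):

[code]
/* Toy early-Z reject: test depth *before* the expensive texture/shade
   work, so occluded pixels cost almost nothing. Illustrative only. */
#include <stdint.h>

typedef uint32_t (*shade_fn)(int x, int y); /* expensive texture/shade */

void draw_span(uint32_t *color, float *zbuf, int width, int y,
               int x0, int x1, float z, float dzdx, shade_fn shade)
{
    for (int x = x0; x < x1; x++, z += dzdx) {
        int idx = y * width + x;
        if (z < zbuf[idx]) {          /* cheap test first             */
            zbuf[idx] = z;
            color[idx] = shade(x, y); /* only visible pixels pay this */
        }
        /* rejected pixels never enter the texturing pipeline at all */
    }
}
[/code]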
 
Hellbinder[CE] said:
It's also funny that today everyone is focused on FSAA, yet you still REFUSE to acknowledge that it was a *cutting-edge* technology 3 years ago...
Are developers focused on FSAA? Come on, how many games even OFFER you the option to enable FSAA? Would you call a TNT-class card with way superior antialiasing a cutting-edge card?
Features like FSAA are not cutting edge; they are nice, but that's all. Dot product 3, cube mapping, hardware T&L, and pixel & vertex shaders are features that influence the way games are made. You cannot just turn on dot product 3 in the driver and make every game use it.
And UT2k3 and Doom 3 are great evidence for that. You won't even be able to RUN Doom 3 on cards like the TNT 1, TNT 2, Voodoo 1, Voodoo 2, Voodoo 3, Voodoo 4, Voodoo 5, Kyro 1, Kyro 2, ...
There are also things YOU NEED TO KNOW when it comes to UT2k3 (well, that performance test on Anand). Why is Kyro so high? That performance test uses cube maps, and if they are not supported it simply drops them. The GeForce 1, 2, and MX support cube maps, so these cards must render the 6 faces of the cube map (which Kyro does not need to do), even if they are too slow for it.
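
To illustrate why you can't just "turn it on in the driver": DOT3 has to be wired into the renderer by the developer, who must supply a normal map and a per-vertex light vector. A minimal OpenGL 1.3 texture-environment sketch; my own setup code, not from any shipping game:

[code]
/* DOT3 bump mapping via the OpenGL 1.3 texture environment. The game
   must bind a normal map and pack the (range-compressed) light vector
   into the primary color; no driver switch can retrofit this. */
#include <GL/gl.h>

void setup_dot3_env(GLuint normal_map)
{
    glBindTexture(GL_TEXTURE_2D, normal_map); /* tangent-space normals */

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);

    /* Arg0 = normal-map texel, Arg1 = light vector in primary color;
       both stored range-compressed as 0.5 * v + 0.5. */
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
}
[/code]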
 
Btw, DOOM3 *should* run on a Kyro 2, since the Kyro 2 does support DOT3 bump mapping. Whether it will run with acceptable framerates, however, is another question entirely.
 
MDolenc,

When polygon rates increase, overdraw increases too. Hardware T&L does help, but it can't perform wonders either if overdraw doesn't get effectively addressed. Otherwise we'd see a GF2 Ultra in a DX7 T&L-optimized game waltz over a similarly clocked GF3.

Discussing whether last year's budget cards will or won't run next year's games is lunacy.

Chalnoth,

Reverend:

The Kyro (or specifically, the Kyro2) lacks cubemap support, and LightDirection is a cube map texture. Would disabling per-pixel normalization of LightDirection enable the Kyro2 to run DOOM3? Would you do this?

John Carmack:

I doubt it, but if they impress me with a very high performance OpenGL implementation, I might consider it.
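
For context, that LightDirection cube map works as a normalization lookup: each texel stores the unit vector pointing toward it, so an interpolated vector can be renormalized per pixel with a single texture fetch. A toy sketch of building the +X face of such a map (my code, certainly not DOOM3's):

[code]
/* Build the +X face of a normalization cube map: each texel encodes the
   unit direction from the cube center through that texel, range-
   compressed into RGB bytes as 0.5 * n + 0.5. Toy code, one face only. */
#include <math.h>
#include <stdint.h>

void build_normalization_face_posx(uint8_t *rgb, int size)
{
    for (int t = 0; t < size; t++) {
        for (int s = 0; s < size; s++) {
            float x = 1.0f; /* +X face: x fixed, y/z vary across texels */
            float y = -(2.0f * (t + 0.5f) / size - 1.0f);
            float z = -(2.0f * (s + 0.5f) / size - 1.0f);
            float inv = 1.0f / sqrtf(x * x + y * y + z * z);
            uint8_t *p = rgb + 3 * (t * size + s);
            p[0] = (uint8_t)(255.0f * (0.5f * x * inv + 0.5f));
            p[1] = (uint8_t)(255.0f * (0.5f * y * inv + 0.5f));
            p[2] = (uint8_t)(255.0f * (0.5f * z * inv + 0.5f));
        }
    }
}
[/code]

Without cubemap support, the Kyro2 would have to skip or approximate that per-pixel renormalization, which is exactly what Rev is asking about.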
 
By the time this wretched Unreal 2003 comes out, software T&L will be considerably faster than GeForce1 hardware T&L.
Kyro did not have way superior FSAA in any way, wash your mouth out! Above 800x600x16 the performance is much like other cards'. Articles at gamebasement come to mind in both cases.
Unreal 2003 and Doom 3 are such red herrings. Have you seen them running? They can turn out more unreliable than BAPCo/MadOnion in the hands of the person claiming benchmarks for them, because you can't check. If you like video cards enough to spend much time in places like this, I don't think you will want to stick with a card for two years, so why talk about next year's games as if you'll be running them on today's cards for long?
 
Ailuros said:
When polygon rates increase, overdraw increases too. Hardware T&L does help, but it can't perform wonders either if overdraw doesn't get effectively addressed. Otherwise we'd see a GF2 Ultra in a DX7 T&L-optimized game waltz over a similarly clocked GF3.

Yes, overdraw increases when you are drawing 10*X BIG polys instead of X BIG polys. But why would you want to do that? To use the full potential of hardware T&L you need to draw many SMALL polys (and hardware is faster at drawing small polys anyway). You won't increase overdraw if you use an 80k-triangle character instead of a 5k-triangle character.

Above said:
By the time this wretched Unreal 2003 comes out, software T&L will be considerably faster than GeForce1 hardware T&L.

Yes it will be, but who will do AI, physics, ... for you? Software T&L is only faster if you burn all your CPU resources on T&L. Even if software were faster, developers would still use the hardware to do it, because it frees up MANY CPU cycles that can be better spent elsewhere.
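
To put it concretely, here's roughly what the software path costs: a matrix transform plus lighting math for every vertex, every frame, on the CPU. A plain C sketch with my own names; multiply it by tens of thousands of vertices per frame and those are cycles AI and physics never get:

[code]
/* Software T&L sketch: transform and light each vertex on the CPU every
   frame. light_dir is assumed to already be in object space. */
typedef struct { float x, y, z; } Vec3;

/* Apply a 4x4 row-major matrix to a position (w assumed 1). */
static Vec3 transform(const float m[16], Vec3 v)
{
    Vec3 r;
    r.x = m[0] * v.x + m[1] * v.y + m[2]  * v.z + m[3];
    r.y = m[4] * v.x + m[5] * v.y + m[6]  * v.z + m[7];
    r.z = m[8] * v.x + m[9] * v.y + m[10] * v.z + m[11];
    return r;
}

void software_tnl(const float mv[16], const Vec3 *pos, const Vec3 *nrm,
                  Vec3 light_dir, float *out_xyz, float *out_lum, int count)
{
    for (int i = 0; i < count; i++) {       /* every vertex, every frame */
        Vec3 p = transform(mv, pos[i]);
        out_xyz[3 * i + 0] = p.x;
        out_xyz[3 * i + 1] = p.y;
        out_xyz[3 * i + 2] = p.z;
        /* one directional diffuse light: max(N . L, 0) */
        float d = nrm[i].x * light_dir.x + nrm[i].y * light_dir.y
                + nrm[i].z * light_dir.z;
        out_lum[i] = d > 0.0f ? d : 0.0f;
    }
}
[/code]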
 
MDolenc said:
Yes, overdraw increases when you are drawing 10*X BIG polys instead of X BIG polys. But why would you want to do that? To use the full potential of hardware T&L you need to draw many SMALL polys (and hardware is faster at drawing small polys anyway). You won't increase overdraw if you use an 80k-triangle character instead of a 5k-triangle character.

Most hardware is faster at drawing large polys than small polys - fill rate efficiency is typically lost at the edges of polygons. Smaller polys = more poly edges per unit area of screen = less fill rate efficiency. This will only slow you down if you are fill-limited, of course.
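
To put rough numbers on that (a back-of-envelope model of my own, since edge pixels scale with perimeter, i.e. roughly the square root of area): cover the same 10,000 screen pixels with 1 triangle versus 100, and the edge-pixel fraction jumps by about 10x.

[code]
/* Back-of-envelope: fraction of "edge" pixels when covering the same
   screen area with one big triangle vs many small ones. Edge pixels
   scale with perimeter (~sqrt of area per triangle). Rough model only. */
#include <math.h>
#include <stdio.h>

static double edge_fraction(double area_px, int tri_count)
{
    double per_tri = area_px / tri_count;
    double edge_px = tri_count * 4.0 * sqrt(per_tri); /* ~total perimeter */
    return edge_px / area_px;
}

int main(void)
{
    printf("  1 tri,  10000 px: ~%2.0f%% edge pixels\n",
           100.0 * edge_fraction(10000.0, 1));   /* ~ 4% */
    printf("100 tris, 10000 px: ~%2.0f%% edge pixels\n",
           100.0 * edge_fraction(10000.0, 100)); /* ~40% */
    return 0;
}
[/code]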

- Andy.
 
MDolenc said:
Yes, overdraw increases when you are drawing 10*X BIG polys instead of X BIG polys. But why would you want to do that? To use the full potential of hardware T&L you need to draw many SMALL polys (and hardware is faster at drawing small polys anyway).

What hardware is that? A small poly will only be faster simply due to having fewer pixels, but the pixels/s rate of a small poly is pretty much guaranteed to be lower than that of a large polygon.
 
Above said:
By the time this wretched Unreal 2003 comes out, software T&L will be considerably faster than GeForce1 hardware T&L.

It's not software T&L that gets faster, but CPUs.
 