A bad trend?

Silly... ^^

See your original comment about complete absence of Transform and Lighting.

No, software T&L destroys performance in UT2003... but keep in mind that when we say software T&L was a viable alternative, we're pretty much referring to the 2000-2001 timeframe, when the only game in which it truly made a difference was MDK2...

That still doesn't change the fact that 3dmark2001 wasn't aimed at predicting game performance in 2000/1. A newer version of Futuremark's application came ~2 years later, and by that time real T&L-optimized games were available. From there it's up to everyone's own judgement of the real relevance of the application in question.

Actually, they always said that 3DMark03 isn't a fair judge of pre-DX9 hardware, and that 3DMark2001 should be used instead for older cards.

Randell already commented on the possible oxymoron that could build up with that logic.

Needless to say, with the advent of real T&L-optimized games, 3dmark2001 has become somewhat redundant, if you consider that you can now test an accelerator's abilities in real gaming conditions (the exception being, e.g., the Nature test).
 
kyleb said:
well i can't agree with you there Tagrineth, as futuremark's goal is to be representative of future gaming performance, hence the name and all. ;)


that said, a voodoo5 runs ut2003 reasonably well for as old a card as it is, and i doubt a geforce2 gts does a whole lot better. however, 3dmark2000 required hardware t&l just to run some tests, thereby ignoring the fact that t&l can be accomplished through software.

Later, 3dfx shipped drivers with software "hardware T&L" for Win9x.

3dmark and games would detect the presence of hardware T&L and use it (even though it was really software). It worked very well with DX7: 3dmark ran all its tests against the real power of the card/CPU combo (instead of omitting tests out of silliness), games requiring hardware T&L as a checkbox feature (not from a performance point of view) could work fine, and it ran a little (5-10%) faster than the classical software T&L implementation.
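To make it clear how the trick worked: apps only ever saw a caps bit, not the actual implementation. Here's a minimal sketch of the DX7-era check (my own illustration, not 3dfx's or Futuremark's actual code; the helper name is made up):

Code:
#include <d3d.h>

// The driver fills in D3DDEVICEDESC7, so a software T&L driver can simply
// report the D3DDEVCAPS_HWTRANSFORMANDLIGHT bit and apps will take the
// "hardware" path, with no way to tell where the transform work really runs.
bool HasHardwareTnL(const D3DDEVICEDESC7& desc)
{
    return (desc.dwDevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT) != 0;
}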

A V5 5500 could easily score 1000 to 2000 more 3DMarks, depending on the CPU. That wasn't so marginal, considering the timeframe.

Before that, some tests, despite being perfectly executable, were simply bypassed because the feature was "software".
Man, the majority of games used software T&L whenever it was present!
 
That software T&L 3dfx had was neat, too bad I think directx8 came out shortly afterwards and didn't work with it. I don't think 3dfx ever had windows xp drivers either.
3dfx also had hidden surface removal, but I think it only worked for quake 3, and while it did give fairly large performance gains on the more aggressive settings, it happened to remove a lot of the graphics as well.
 
Fox5 said:
3dfx also had hidden surface removal, but I think it only worked for quake 3, and while it did give fairly large performance gains on the more aggressive settings, it happened to remove a lot of the graphics as well.

You mean 3dfx invented cheating?
 
Well, it was intended as a fully functional feature eventually. (And I think nvidia had already been caught cheating prior to that.) It was a beta feature only (I think it required a registry modification to enable), and ideally it was supposed to give large performance gains with no effect on image quality. I guess it was an attempt to make use of Gigapixel's knowledge? However, don't the Radeons and GeForces now have hidden surface removal?
 
lol, hsr wasn't really cheating as it was a user controlled option and off by default. it wasn't static clip planes either; the removal of what should be seen was simply part of the buggy nature of its beta implementation. i'm pretty sure most modern drivers do a form of hidden surface removal as well.

also, Randell is right, and my memory on this old topic is obviously fading. the issue people had with 3dmark was not that 3dfx was excluded from the tests, but simply that 3dmark's software t&l was horribly inefficient, as shown by the 3dfx software implementation which Magic-Sim pointed out.
 
lol, hsr wasn't really cheating as it was a user controlled option and off by default. it wasn't static clip planes either; the removal of what should be seen was simply part of the buggy nature of its beta implementation.

Dropping geometry isn't that far from it, either. While software HSR is possible, even if one doesn't consider it a joke, it was nothing but a silly idea.

i'm pretty sure most modern drivers do a form of hidden surface removal as well.

HSR is as old as the Z-buffer. Any refinements beyond that, or alternatives, are usually done via real hardware support (TBDR, hierarchical Z, early Z, etc.).
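For the record, the core of what the Z-buffer does per pixel looks something like this rough sketch (all names made up, not any vendor's code); early Z and hierarchical Z are just hardware ways of doing the same rejection earlier in the pipeline, or for whole tiles at once:

Code:
#include <cstddef>
#include <limits>
#include <vector>

// Per-pixel Z-buffer hidden surface removal: a fragment survives only if
// it is nearer than whatever is already stored for that pixel.
struct DepthBuffer {
    int width, height;
    std::vector<float> depth; // one value per pixel, initialized to "far"

    DepthBuffer(int w, int h)
        : width(w), height(h),
          depth(static_cast<std::size_t>(w) * h,
                std::numeric_limits<float>::max()) {}

    // Returns false when the fragment is occluded; that rejection
    // is the hidden surface removal.
    bool TestAndWrite(int x, int y, float z) {
        float& stored = depth[static_cast<std::size_t>(y) * width + x];
        if (z >= stored) return false; // behind an existing surface: discard
        stored = z;                    // nearer: becomes the new visible surface
        return true;
    }
};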
 
Ailuros said:
lol, hsr wasn't really cheating as it was a user controlled option and off by default. it wasn't static clip planes either; the removal of what should be seen was simply part of the buggy nature of its beta implementation.

Dropping geometry isn't that far from it, either. While software HSR is possible, even if one doesn't consider it a joke, it was nothing but a silly idea.

i'm pretty sure most modern drivers do a form of hidden surface removal as well.

HSR is as old as the Z-buffer. Any refinements beyond that, or alternatives, are usually done via real hardware support (TBDR, hierarchical Z, early Z, etc.).

I thought the idea behind 3dfx's software HSR was to trade the power of the overly fast CPUs of the time (when matched with a Voodoo3, I suppose; I think it took a 1 GHz CPU before the Voodoo3 became 100% the bottleneck) for increased graphics performance. (Hey, on the most aggressive settings I could get like 150 fps in a Quake 3 botmatch, and I could see clear from one side of the level to the other since all the floors and walls disappeared!)
 
Hey, on the most aggressive settings I could get like 150 fps in a Quake 3 botmatch, and I could see clear from one side of the level to the other since all the floors and walls disappeared!

Must be a coincidence then, that I mentioned something about dropping geometry.... :rolleyes:
 
sure, but you could also run a lower setting, get a nice little boost in performance and have no visual anomalies at all.
 
kyleb said:
sure, but you could also run a lower setting, get a nice little boost in performance and have no visual anomalies at all.

only in Quake III with a 60fps cap iirc
 
Fox5 said:
That software T&L 3dfx had was neat, too bad I think directx8 came out shortly afterwards and didn't work with it. I don't think 3dfx ever had windows xp drivers either.
3dfx also had hidden surface removal, but I think it only worked for quake 3, and while it did give fairly large performance gains on the more aggressive settings, it happened to remove a lot of the graphics as well.
You are right (I remember being in the group which searched for proper driver pieces for XP).

However, HSR was tunable. A friend of mine found settings that allowed the most aggressive HSR under Quake3 with hardly any artifacts.
 
For the past few days I have had a huge need to post another "Reverend at The Pulpit" diatribe in the General Discussion that includes my honest thoughts (ones I have not adequately expressed thus far) on this very topic, as well as other things that have been bothering me: stuff that challenges me specifically about why Beyond3D should or should not be another website covering the 3D industry.
 
Reverend said:
For the past few days I have had a huge need to post another "Reverend at The Pulpit" diatribe in the General Discussion that includes my honest thoughts (ones I have not adequately expressed thus far) on this very topic, as well as other things that have been bothering me: stuff that challenges me specifically about why Beyond3D should or should not be another website covering the 3D industry.
What's stopping you Rev? Pulpit posts are always fun! :D
 
well, it's all about marketing, i.e. Nvidia selling an inferior product and claiming it's not...

NV3X might have some merits, but looking at it from a developer's perspective (which I am obviously not :) ): why should they spend $$$ coding for a card that delivers similar, albeit worse, IQ at the same FPS? They get their performance, DX8 IQ is acceptable to most gamers anyway, and they save $$$. I don't think it is good business sense to invest time and money in a part of the product that will not be appreciated enough to give a good return on investment. So they have only DX8 and DX9.0 level code, with no special FX paths.
Keep the performance, lose some IQ and cut expenses.
 
okay, here comes my take on all of this.

In the first message, Brent said he hoped to see graphics pushed forward rather than backward. So if that's not what Futuremark has been doing with their benchmarks, what is? I have been picking up dropped eyeballs on different forums ever since the 3dmark2000 launch, and I think 3DMark03 won't be the last one.

Again, as for 3DMark2000 and HW T&L support: Futuremark stated back then that more than 3 IHVs had said they would bring HW T&L-enabled accelerators to market within the next 6 months. Well, what happened? The Radeon got massively delayed, as did the Savage2000, which eventually never worked that well; Matrox even denied that anything called G800 was ever in the works. So only nVidia delivered on time, and now some of you are blaming Futuremark for dismissing the others.

on 3DMark2001, Futuremark tried to estimate how much graphics would move forward during its lifetime, and imho they succeeded pretty well, if you consider that it was in active use for 2 years. The reason for supporting PS 1.1/VS 1.1 in Game Test 4 was again based on IHVs' comments about how many of them would launch 1.1-1.3 compatible hardware during the next year. Again, 3DLabs delayed the P10 a lot, PowerVR / STM canned Kyro III, Bitboys first slipped and then canned Avalanche, and Matrox slipped way further than originally planned. ATI did deliver on time, but was the only one supporting 1.4 at the time the decisions on 2001 were made, against 5 others targeting 1.1-1.3 PS. Obviously many of us, too, would have decided to support 1.1 in the Game Test and leave 1.2-1.4 as an option in a special test in a possible update.

And when the update (SE) was done, 3DMark2001 had become the most used synthetic 3D benchmark ever; it would have been a bit too much to make modifications to tests that affect the final score.

and as for cheating in synthetic benchmarks, that has happened for as loooong as I can remember. even Matrox used to cheat on some old Winbenches by drawing nothing on screen and then publishing the results on their website as an ad for how fast their card was. So, this idea is a lot older than DX8.

sounds like a Futuremark defense speech? well, mostly it is. I am up to my armpits in whiners who are never happy. The best part is that these guys never think about how they would have reacted if they had been in control of FM during all these years.

just my ideas and thoughts. and if it matters, I have been watching the Finnish gaming industry VERY closely for 2 and a half years now.

REMEMBER, THINGS AREN'T AS SIMPLE AS THEY LOOK OUTSIDE.

Nappe1
from somewhere out here, almost middle of nowhere.
 
Brent said:
I was just reading some of this stuff and came across this: http://discuss.microsoft.com/SCRIPT...L=directxdev&D=1&F=&S=&P=7149

I am most disturbed about that response at the top.

Having to run the card in DX8 class just to get acceptable performance.

That makes me sad, I want to see graphics go forward, not backwards.

In my Personal Opinion

The thing that has got me most curious is how, all of a sudden, all the NV30's ARB2 problems were solved by the mighty Carmack???? :?:

Either that, or because NV has a new (good-performing) part out on the ARB2 path, Carmack decided to forsake all the NV30/NV35 adopters out there and ditch the special hand-coded NV30 path... Things that make you go Hmmmmmm.
 