Futuremark Announces Patch for 3DMark03

ROFLMFAO~~~~

Sorry, I'm just reading this thread and actually laughing out loud to myself at the quandary nVidia has gotten themselves into! :LOL:

Things are sooo bad for 'em that they've actually had to resort to the truth! :oops: ;) :LOL:
 
Exxtreme said:
AndY1 said:
http://www.xbitlabs.com/news/video/display/20031114041519.html

Since the compiler is still active with the new version of 3DMark03, there are currently no explanations for the performance drops of certain GeForce FX parts in the latest build 340 of the famous 3DMark03.
Sorry but this is really ridiculous. :LOL: :LOL: :LOL:

Not as ridiculous as the complete communications breakdown that's been exposed at nVidia.

I know Uttar's editorial pointed towards it, but it almost feels like nVidia read it and said 'Hey, that's not true - It's actually much worse than that! Let's go and prove it to discredit him!' :p
 
Hanners said:
Exxtreme said:
AndY1 said:
http://www.xbitlabs.com/news/video/display/20031114041519.html

Since the compiler is still active with the new version of 3DMark03, there are currently no explanations for the performance drops of certain GeForce FX parts in the latest build 340 of the famous 3DMark03.
Sorry but this is really ridiculous. :LOL: :LOL: :LOL:
It's actually much worse than that!
I know.

I have already written it at the 3DC forums: Nvidia has really huge management and PR problems.
The PR department doesn't know what the driver guys are doing, etc. And statements like these from Perez and Alibrandi damage the reputation of the company more and more. :( :(
 
PatrickL said:
Hehe, I just said the same thing in the news forum :)
I meant to say it over there, but good catch on that one, Patrick. I totally missed that they didn't mention the drivers used and just assumed they were the 52.16 set.

Did nVidia get a copy of the 333 patch? If so, when?
 
PatrickL said:
Same time as other beta members, I guess.
Are we talking days or weeks here? I really have no idea about the timetable for such things, but it keeps bugging me that nVidia had some advance knowledge that this was coming and their PR department still got caught seemingly with their trousers down over it. :|

That's the one thing that doesn't make sense to me about all this. Could the communication within nVidia be so bad that the people who got the patch never mentioned to PR that they were about to get bitchslapped and caught red-handed at app-detected optimizations again?
 
MuFu said:
FUAD
 
It wouldn't be that difficult to change the code to replace the shader, and they already have a hand-built substitute that should be mathematically the same, so they could just drop it in again and have it out in a jiffy.

So does 52.70 just replace the new shader code again? If it does, that can only add to the PR nightmare, or will they just defend it as a valid optimisation as they tried to do before?

WRT The Inquirer and their publication of results from the FX5700, what implications does this have for theInq if it was with the 52.70 drivers, as isn't there some problem with publishing results with drivers that haven't been given the thumbs up from FM?

Also, in 52.16, is the score with the 340 patch one with shaders compiled by the UC, or do the drivers detect that 3DMark03 is being run and disable the UC in anticipation of being able to substitute the hand-coded shaders? (A rough sketch of how that kind of detect-and-swap could work is below.)


Man, that's been a long read, 21 pages, sheesh. :oops:

Jim
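
For the curious, here's roughly what that kind of detect-and-swap could look like. This is a speculative sketch only: the fingerprinting scheme, the lookup table, and every name in it are made up for illustration, not taken from any actual driver.

```c
/* Hypothetical sketch of app-specific shader substitution.
 * Nothing here comes from a real driver; it just shows why changing
 * even one instruction in the benchmark's shader source (as build 340
 * apparently did) would make a hand-coded swap stop matching. */
#include <stddef.h>
#include <stdint.h>

/* FNV-1a hash used as a fingerprint of the shader text. */
static uint64_t fingerprint(const char *src)
{
    uint64_t h = 1469598103934665603ULL;
    while (*src) {
        h ^= (unsigned char)*src++;
        h *= 1099511628211ULL;
    }
    return h;
}

struct replacement {
    uint64_t    match;       /* fingerprint of the app's original shader */
    const char *hand_coded;  /* hand-tuned substitute to use instead */
};

/* Table built ahead of time against a known build of the app;
 * the hash value below is a placeholder, not a real fingerprint. */
static const struct replacement table[] = {
    { 0x0123456789abcdefULL, "/* hand-scheduled GT4 shader here */" },
};

const char *select_shader(const char *app_source)
{
    uint64_t h = fingerprint(app_source);
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].match == h)
            return table[i].hand_coded;  /* recognised: swap it in */
    return app_source;  /* unknown shader: fall through to the compiler */
}
```

Reorder a single instruction in the submitted shader and the fingerprint changes, the lookup misses, and the score falls back to whatever the real compiler can manage, which is exactly the behaviour build 340 seems to have exposed.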
 
Re: Extremely OTish

Dio said:
I'm glad you didn't mention algorithmic complexity on Tuesday. Something which worked perfectly reasonably for months turned out to have an O(2^n) pathological case. Ouch. Spent four hours debugging that assuming it was an infinite loop...
Hee Hee. Slightly OT but I had exactly the same problem a while back with the Dreamcast VQ compressor. I was using the bog-standard C library Quicksort routine. Ran really well on all the images until I tried compressing a 1k*1k picture of the earth from space. About 1/3 of it was pure black and suddenly QS hit its pathological case in a very unpleasant way.
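
For anyone who hasn't been bitten by this one: a minimal quicksort with the textbook Lomuto partition shows exactly how a long run of equal keys (say, a third of an image being pure black) turns quadratic. The sketch below is illustrative, not the actual C library qsort in question.

```c
/* On all-equal keys, every element compares <= the pivot, each
 * partition peels off only one element, and the sort does ~N^2/2
 * comparisons instead of ~N log N. */
#include <stdio.h>
#include <stdlib.h>

static long comparisons = 0;

static void quicksort(int *a, int lo, int hi)
{
    if (lo >= hi)
        return;
    int pivot = a[hi], i = lo;           /* last element as pivot */
    for (int j = lo; j < hi; j++) {
        comparisons++;
        if (a[j] <= pivot) {             /* equal keys all fall left... */
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    quicksort(a, lo, i - 1);             /* ...so this side is n-1 long */
    quicksort(a, i + 1, hi);
}

int main(void)
{
    enum { N = 20000 };
    int *black = malloc(N * sizeof *black);
    for (int i = 0; i < N; i++)
        black[i] = 0;                    /* all-black pixels: equal keys */
    quicksort(black, 0, N - 1);
    printf("%d equal keys -> %ld comparisons (N^2/2 = %ld)\n",
           N, comparisons, (long)N * N / 2);
    free(black);
    return 0;
}
```

Three-way ("fat pivot") partitioning or randomised pivot selection sidesteps the problem; the bog-standard library routines of the day often didn't bother.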
 
/me thinks Nvidia really needs some sort of Unified PR Technology, or we will soon be looking at something like this:

For Immediate Release... Errr, for delayed release. We think.

We at Nvidia (for the time being) really think our Unified Compiler sort of works. Or not. In some cases, we think it brings better speed and the same visual quality. In some other cases it doesn't work. But it works or doesn't work with all applications which are (or are not) aware of the compiler. Most of the time it does, at least.

In some cases our Unified Compiler Technology relies on a magical little dwarf making on-the-fly shader replacements. We know it doesn't sound "unified", but since you download the driver in one file it really is, we believe. If you don't feed the little dwarf the right shader, then the compiler sort of doesn't work. But it has nothing to do with 3DMark. Or maybe it has, we really wish someone would explain that to us, since it's kind of confusing sometimes.

We have maaaany users that love our technology, and we think those users really deserve good explanations of what our forward-looking futuristic unified technology can bring them. Or not.

If they think they are holding the performance crown, our competitors are really all smoking something halluchino... hallunocinegic... salugercicomic. Arnoldschwarzeneggeric... Majestuogenic... Supercalifrajalisticexpialidociousinic. In some way.

Nvidia may or may not be a global cheater in some kind of communication age and our goal is to do all kind of stuff to maaaany pixels.
 
DaveBaumann said:
Simon, read a few posts back - I think that is out the window since work on the compiler has stopped, i.e. it's already optimal for "mathematically equivalent" shaders. It's probable that the hand-coded shaders are now wholly mathematically equivalent.

Dave, I sincerely question that answer you got from Derek Perez regarding NVIDIA halting work on the compiler.
My understanding of the situation is that a few things are still going to be worked on, but not all of them. The reason for this is the unified nature of the compiler. If NVIDIA is telling the truth, the whole thing has been engineered with reuse in mind, and that means if the NV40 has certain similar problems, it'll be possible to use the general NV30 algorithms with perhaps a few different factors taken into account.

A good example of this is register usage. AFAIK, register pressure still exists, although to a lesser extent, on the NV4x. I believe the NV4x is also based around Xx2, so the idea of putting TEX instructions together is also a good one. And so on.

So making the algorithms for these things more efficient is a good idea, and I sincerely cannot believe they're already optimal...
Certain things will most likely never be "perfect" on the NV3x and won't be developed further than they are now, as the focus should be on the NV4x part: after all, we're just 3 or 4 months away from launch.


But yes, the idea of trying to make the compiler good enough to match the hand-coded shaders is insane. You'll never get those 900 points with the little work NVIDIA is likely to dedicate to the NV30 UC path.


Uttar
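
To make the TEX-grouping idea concrete, here is a toy reordering pass. Everything about it is invented for illustration: the instruction format, the dependency rules, and the pairing heuristic bear no relation to the real NV3x/NV4x ISA or to NVIDIA's actual compiler.

```c
/* Toy peephole pass: hoist a later, independent TEX instruction up
 * next to an earlier one, so a (hypothetical) x2 texture unit could
 * issue them as a pair. */
#include <stdio.h>
#include <string.h>

enum kind { ALU, TEX };

struct ins {
    enum kind   kind;
    char        dst;      /* register written, e.g. 'a' */
    char        src[3];   /* registers read, NUL-terminated */
    const char *text;     /* pretty-printed form */
};

/* Can `mover` hop over `fixed` without changing results? */
static int independent(const struct ins *mover, const struct ins *fixed)
{
    if (strchr(mover->src, fixed->dst)) return 0;  /* read-after-write  */
    if (strchr(fixed->src, mover->dst)) return 0;  /* write-after-read  */
    if (mover->dst == fixed->dst)       return 0;  /* write-after-write */
    return 1;
}

static void pair_tex(struct ins *code, int n)
{
    for (int i = 0; i + 1 < n; i++) {
        if (code[i].kind != TEX || code[i + 1].kind == TEX) continue;
        for (int j = i + 2; j < n; j++) {
            if (code[j].kind != TEX) continue;
            int ok = 1;
            for (int k = i + 1; k < j; k++)
                ok &= independent(&code[j], &code[k]);
            if (ok) {                    /* rotate code[j] up to i+1 */
                struct ins t = code[j];
                memmove(&code[i + 2], &code[i + 1],
                        (size_t)(j - i - 1) * sizeof *code);
                code[i + 1] = t;
            }
            break;  /* only try the nearest following TEX */
        }
    }
}

int main(void)
{
    struct ins code[] = {
        { TEX, 'a', "u", "tex  a, u" },
        { ALU, 'b', "a", "mul  b, a" },  /* depends on the TEX above */
        { ALU, 'c', "u", "add  c, u" },
        { TEX, 'd', "v", "tex  d, v" },  /* independent: gets hoisted */
    };
    pair_tex(code, (int)(sizeof code / sizeof code[0]));
    for (size_t i = 0; i < sizeof code / sizeof code[0]; i++)
        puts(code[i].text);
    return 0;
}
```

Run it and the second tex moves up to sit right behind the first while the dependent mul stays put, which is the whole game: same maths, friendlier issue order.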
 
Hanners said:
Exxtreme said:
AndY1 said:
http://www.xbitlabs.com/news/video/display/20031114041519.html

Since the compiler is still active with the new version of 3DMark03, there are currently no explanations for the performance drops of certain GeForce FX parts in the latest build 340 of the famous 3DMark03.
Sorry but this is really ridiculous. :LOL: :LOL: :LOL:

Not as ridiculous as the complete communications breakdown that's been exposed at nVidia.

I know Uttar's editorial pointed towards it, but it almost feels like nVidia read it and said 'Hey, that's not true - It's actually much worse than that! Let's go and prove it to discredit him!' :p

Hey, just imagine how Uttar would feel if it were proven that you'd touched on the truth ;)
 
Simon F said:
How about this answer -
(DISCLAIMER This is all speculation of course...)

#3 A hypothetical company currently has a poor quality compiler but it probably expects it to improve in future revisions. To make their system look better in the interim (until they can make the compiler more intelligent) they put in some customised, hand-coded examples. FM changed their code so it no longer matches those examples.
To be frank, a year and a half ago I would have considered this too, but I don't think NV deserves the benefit of the doubt, or of speculation, anymore, to put it nicely.

This whole soap opera appears to me as if a company produced a car that can only make left-hand turns due to a mistake during steering wheel development.
Then they declare the roads invalid (while other cars manage them just fine) and try to get the government to adjust all the roads accordingly.
Until then, buyers are advised to take three left turns, which gets them heading the same direction, or something.
Of course, the roads are never rectified and the steering wheel is never "patched" before the successor is on the market, slightly better but with similar errors.
Wash, rinse, repeat.

Yeah, I admit I'm biased, but who in their right frame of mind, with their ear to the ground, wouldn't be?

Cheers,
Mac
 