The Rydermark Thread (TM)

The Baron said:
You recall incorrectly. NV30's lackluster performance was primarily due to:

4. Ridiculous register usage penalties.

Isn't the whole point of _pp to reduce the register usage penalty you mention in #4?

We know NV has had three years to push around _pp. I wonder whether, with some extended effort over that time, it would be technically doable to come up with an automated algo in the driver that could reliably (>95%) pick out some subset of shaders that could safely be forced to _pp.
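To make that concrete, here's roughly the kind of conservative pass I have in mind -- purely hypothetical, the Shader/Instr structures and every name in it are invented for illustration, and I'm not claiming any actual driver does anything like this:

Code:
// Hypothetical sketch of a driver-side pass that only demotes a temp register
// to partial precision when nothing written to it touches texture addressing,
// depth, or other precision-sensitive math. Illustration only.
#include <vector>

enum class Op { Mov, Mul, Mad, Dp3, Texld, Rcp, Rsq };

struct Instr {
    Op   op;
    int  dest;                  // destination temp register index
    bool readsTexcoord;         // any source operand is a texture coordinate
    bool feedsDepthOrPosition;  // result contributes to depth output or position math
};

struct Shader {
    std::vector<Instr> code;
};

// True only if every instruction writing 'reg' looks safe at FP16:
// plain colour arithmetic, no texture addressing, no depth contribution.
bool SafeToDemoteRegister(const Shader& s, int reg) {
    for (const Instr& i : s.code) {
        if (i.dest != reg) continue;
        if (i.readsTexcoord || i.feedsDepthOrPosition) return false;
        if (i.op == Op::Texld || i.op == Op::Rcp || i.op == Op::Rsq)
            return false;       // address math and reciprocals lose too much at FP16
    }
    return true;
}

The whole point would be that it errs on the side of doing nothing: anything feeding texture addressing or depth stays at full precision, because that's exactly where FP16 visibly falls apart.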

geo, I can't believe you are actually taking this "article" and trying to formulate ways in which Nvidia could be cheating and forcing _pp. Apparently all it takes is the Inquirer posting some unfounded and easily debunked accusation to get people to start racking their brains - why provide evidence when readers will make it up for you! Brilliant! :LOL:
 
trinibwoy said:
geo, I can't believe you are actually taking this "article" and trying to formulate ways in which Nvidia could be cheating and forcing _pp. Apparently all it takes is the Inquirer posting some unfounded and easily debunked accusation to get people to start racking their brains - why provide evidence when readers will make it up for you! Brilliant! :LOL:

I said it was tangential to the thread, and more related to my own interest. Look around, I've had other threads on automating _pp.

And I didn't call it cheating either; I said if it was good enough it would be an opt(imization) in my book --haven't we had thread after thread after thread defending the idea that _pp is entirely appropriate and harmless to IQ if used wisely/judiciously (tho come to think of it, a lot of that would have been before your time here)?
 
I thought the general consensus was that _pp should be at the developer's sole discretion and forcing it discreetly was a no-no.
 
I always thought that having automatic _pp setting within an HLSL compiler or shader library would be a good thing. But I don't think it'd be a useful thing for runtime compilation.
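For the offline/compiler case, the stock D3DX compiler already exposes a flag for this. A minimal sketch of what a build-step opt-in might look like, assuming D3DX9 -- the helper name is made up and error handling is stripped down for illustration:

Code:
// Compile an HLSL pixel shader with computations forced to partial precision
// via D3DXSHADER_PARTIALPRECISION -- i.e. the _pp decision is made at build
// time by the developer, not silently at runtime by the driver.
#include <d3dx9.h>

bool CompileWithPartialPrecision(const char* srcFile, const char* entryPoint,
                                 LPD3DXBUFFER* bytecodeOut)
{
    LPD3DXBUFFER errors = NULL;
    HRESULT hr = D3DXCompileShaderFromFile(
        srcFile,                      // HLSL source on disk
        NULL, NULL,                   // no #defines, no include handler
        entryPoint,                   // e.g. "main"
        "ps_2_0",                     // SM2.0 pixel shader profile
        D3DXSHADER_PARTIALPRECISION,  // force partial precision for the whole shader
        bytecodeOut, &errors, NULL);  // constant table not needed here
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}

The difference, of course, is that here the developer is the one flipping the switch.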
 
trinibwoy said:
I thought the general consensus was that _pp should be at the developer's sole discretion and forcing it discreetly was a no-no.

Those conversations in the past have always been in the context of either universal forcing, which everyone would agree is a bad thing, or application detection of high-profile games/apps for overly aggressive shader replacement to nose ahead of the competition on that particular app.

What I'm pointing at would be a new thing. Could it be abused? Sure, very little couldn't be. Doesn't mean it would be, particularly if there's no app-detection involved (which removes the possibility/temptation to get just that little extra to nose ahead of the competition in 3DM, Oblivion, or whatever the particular app being detected happens to be) but instead a general purpose algorithm of high reliability in picking its spots.

Edit: What I mean by "general purpose" is that it does its thing just as much in "Barbie's FunHouse 2006" as in 3DM06, and doesn't care/know which one it is working in. That way there is no issue of creating a false impression for consumers of either IQ or performance by cherry-picking which apps are looked at. Though if such a thing existed (and, again, I'm not by any means saying it does), I'd still want to be able to turn it off so we could test its reliability.
 
I for one am taking this article with a grain of salt... but I honestly wouldn't be surprised if it were true given nVIDIA's track record.
 
For a long time now, my own pet nickname for Faud has been "Fraud"...;) When reading the few of the Inquirer's forays into perverse, verbose, and flowery prose that I sometimes skim, I see in my mind's eye the image of someone sitting behind a keyboard who is thoroughly bored and so is ransacking the remote corners of his tiny cranium for something "juicy" to write about to create some excitement in his otherwise apparently mundane and flat existence. --Heh...;)

This stuff reminds me of late 2002, right after ATi had shipped the 9700P, when nV was giving press conferences replete with robust catechisms like: "96-bits is not enough." In this way nV was reminding its audience that the 128-bits it professed to be shipping was indeed certainly enough--er, right before it came to light that the original drivers nV had released were supporting only FP16 at highest precision by default. Stung by these revelations that weren't-supposed-to-happen-because-they-aren't-smart-enough-to-figure-it-out, nV did indeed release drivers which supported FP32, at which time we discovered that "FP32 isn't nearly as fast as it needs to be to be useful." Thus ended the FP32 lesson for '02 and '03.

But much has happened since then, of which this Inquirer piece seems blissfully unaware. As it is, the piece does a remarkably poor job of proving its statements, but I think most everyone will agree with me that the usual Inquirer piece is long on allegation while incredibly short on corroboration. But, I think that from the Inquirer's viewpoint that's the whole idea, isn't it?...;)
 
WaltC said:
...to create some excitement in his otherwise apparently mundane and flat existence. --Heh...;)
Seems to have worked. Combine _pp, NVIDIA scandal, benchmarking, and theinq... surefire recipe for a 100p thread if there ever was one.
 
Ratchet said:
Seems to have worked. Combine _pp, NVIDIA scandal, benchmarking, and theinq... surefire recipe for a 100p thread if there ever was one.
Perhaps. But no matter his blatant lies and attempts at attention, I'll not click on those links.
 
Vysez said:
Yes, if you advertise full precision support, you need to support at least FP24 or FP32.

There are no caps bits for precision. It comes packed with the shader model. If you support shader model 2.0, you must support at least FP24.
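(For reference, and from memory of the spec: "full precision" under SM2.0 means at least FP24 (s16e7), the _pp hint allows the implementation to drop to FP16 (s10e5), and SM3.0 raises the full-precision floor to FP32. So a part that only ever ran FP16 couldn't honestly claim SM2.0 full-precision compliance.)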
 
It's no secret that Fuad has ties in this industry, and nVidia just happens to be on the opposite end of the spectrum. But it worries me that every time he does decide to go on his nVidia finger-pointing sprees, we have people jumping to the conclusion that it must hold at least some truth, fully disregarding his bias, incompetence, and overall ignorance.

There is too much inconsistency in the article, as outlined by previous members, and the fact that we're actually lending it any credence is ridiculous. It's like complaining about a GPU review done at Gamespot. :rolleyes:


In the coming weeks, we are going to be seeing far more conspiracy reports, mostly about G80; this I can already predict. So I suppose I'm just giving everyone a little heads-up on what we can expect from Fuad in the near future, saving people the time and energy wasted lending his nV finger-pointing legitimacy only to realize in the end that, yes, his articles are nothing but BS FUD. I think most of us have already established that, though.

Nelsieus
 
I'll probably be crucified for saying this, but I never quite understood why there is such vehemence towards NVIDIA for their forcing of precision hints. Yes, if this has to do with a benchmark that influences the sales of video cards, then there is a case, but we're not talking about 3DMark.

On my 21 inch CRT, if nobody told me NVIDIA was doing this I'd never have guessed, because I would not have noticed it. I mean, it's not as if NV cards were ignoring anisotropic filtering calls and just using trilinear, which would definitely be a clear cheat! Heck, I have failed to see one instance in any type of game where anything less than FP16 results in a clear rendering artifact. IMO, even FP24 is surplus right now.
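To put rough numbers on that (back-of-the-envelope, not a claim about any particular game): FP16 carries a 10-bit mantissa, so values near 1.0 are spaced about 2^-10 ≈ 0.001 apart, comfortably finer than the 1/255 ≈ 0.004 steps an 8-bit-per-channel framebuffer can actually display, which is why straight colour math at FP16 almost never shows a visible artifact. Where it bites is address-style math: stepping across a 2048-texel texture needs increments of 1/2048 ≈ 0.0005 in a [0,1] coordinate, which is right at the limit of FP16's resolution in the upper half of that range, so any extra offset or tiling arithmetic starts to snap and shimmer. That's the gap FP24/FP32 full precision is there to cover.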

On the other hand, if this is about morals and ethics then this thread surely belongs in another forum and I'll say cheating per se is wrong! :)
 
Nelsieus said:
In the coming weeks, we are going to be seeing far more conspiracy reports, mostly about G80. This I can already predict.

Conspiracy theories re G80? Like what? Certainly there will be spec rumor-mongering, as there always is. But "conspiracy theories"?

I hope if Faud doesn't have anything at all to back up this story that NV rips him a new one in public for gross irresponsibility, much like ATI ripped Sander Sassen a new one. I see a bright line difference between run of the mill spec rumor-mongering, as Inq typically does, and this kind of thing.
 
dizietsma said:
Fuad's piece failed to reach impartial precision and ended up with just partial precision.
That's why people visit their site. Their "news" is rarely flat-out lies or cooked-up stories, but they know a good news post when they see one.

Without a single word from NVIDIA or any kind of evidence, I would not be against the deletion of this thread.
 
Why? I don't boycott The Inq, quite the opposite. But I see the contents there mostly as cheap comedy nevertheless.
 
geo said:
I hope if Faud doesn't have anything at all to back up this story that NV rips him a new one in public for gross irresponsibility, much like ATI ripped Sander Sassen a new one. I see a bright line difference between run of the mill spec rumor-mongering, as Inq typically does, and this kind of thing.

No comparison to the Sassen article.
 