demalion said:
I have trouble seeing the association between what you talk about and what Dave said, WaltC. It is demonstrable that there is a tangible result of a lack of specific information released by ATI and a relative abundance of specific information (however PR-centric and technically distorted) released by nVidia, and that is what Dave is talking about. Your discussion of your personal take on whether such information is necessary for you doesn't change that, nor does it seem to make your opinion applicable to what Dave was addressing, AFAICS. He wasn't talking about your reaction to such information.
On that note, stevem's comment on 2-sided stencil seems to underscore what Wavey is saying fairly well.
DaveB said:
....in which ATI and NVIDIA make information that make them look favorable available. For instance, with the 52.16 drivers the first thing NVIDIA did was mail a whitepaper to their entire press list, so that information was readily available to them and it makes NVIDIA look good - now, for instance, how many journalists, or even review readers, know that ATI have already implemented a basic shader compiler optimiser since cat3.6? I knew, because one of the ATI guys here has referenced it, and some of you may have read it in my 256MB 9800 PRO review, but the press at large have no clue because ATI didn't tell us/you about it. Further to that - how many of you knew that ATI can actually do some instruction scheduling in hardware (i.e. it can automatically parallelise some texture and ALU ops to make efficient use of the separate pixel shader and texture address processor)? I'll bet not many - why the hell aren't ATI telling us about these things?
So, yes, I've already said to Josh that I think there have been too many assumptions in there, but the apparent disparity between the NVIDIA and ATI information in there is partly because ATI just don't inform people about a lot of things.
What I thought Wavey was talking about was conditions "...in which ATI and NVIDIA make information that make them look favorable available." (Emphasis mine.) My point was simply the obvious one, that ATi hasn't needed to provide people with a lot of information "making them look favorable" since their products do most of their "favorable" talking for them.
You make what seems to me a very odd distinction here between what nVidia releases as "information", which is "PR-centric", and "specific information", by which I assume you mean information you consider technically correct and PR-neutral. My question to you is simply this: how can you distinguish which is which?...
Certainly, the idea that something sounds plausible is no reason to confuse it with a fact.
Case in point: TR asked nVidia point blank a few months ago to clear up the confusion it had created around nV3x with the following question: "Does nv3x render 8 pixels per clock?" To which nVidia replied, "Yes, we do 8 ops per clock." (That was a line I doubt I'll ever forget.)
Considering that nVidia was unable to provide a straight, PR-neutral answer to such a simple, fundamental question about its architecture as "How many pixels per clock does it render?", I find it difficult to believe the company when it discusses much more complex and arcane "facts" about its architecture. Obviously, I do not possess your ability to separate the wheat from the chaff in this regard...
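To make that distinction concrete, here's a quick back-of-the-envelope sketch in Python. The figures are only the commonly circulated ones (4 color pixels per clock, 8 z/stencil-only ops per clock, a ~450MHz core)--my own assumptions for illustration, not anything nVidia confirmed--but they show how "8 ops per clock" can be technically true while "8 pixels per clock", in the sense readers care about, is not:

```python
# Illustrative only: assumed per-clock figures, not numbers confirmed by nVidia.
CLOCK_MHZ = 450              # assumed core clock, roughly a 5900 Ultra-class part
COLOR_PIXELS_PER_CLOCK = 4   # assumed: color pixels written per clock
Z_STENCIL_OPS_PER_CLOCK = 8  # assumed: z/stencil-only operations per clock

color_rate = CLOCK_MHZ * 1e6 * COLOR_PIXELS_PER_CLOCK
z_only_rate = CLOCK_MHZ * 1e6 * Z_STENCIL_OPS_PER_CLOCK

print(f"color pixels/sec:  {color_rate / 1e9:.2f} G")   # ~1.80 G
print(f"z/stencil ops/sec: {z_only_rate / 1e9:.2f} G")   # ~3.60 G
# "8 ops per clock" can be literally true while the honest answer to
# "how many pixels per clock does it render?" is still 4.
```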
I mean, I find it just ludicrous that anyone might have to be "told" by ATi that ATi does generic, global shader optimizations in its drivers before they can appreciate that ATi has in fact done this. Since the first 3d card shipped, IHVs have routinely "optimized" their drivers in a global sense to get more from their hardware. I think DaveB was being polite to Josh with these statements, but really, these are things that, if one doesn't understand them, one has no business writing "state of 3d" articles, to put it bluntly. It doesn't follow for me that ATi should have to "talk about it" as nVidia has "talked about it" in order for me to consider it has been done. It's clear from using R3x0 for the last year that such things have been done, routinely. Rather, this is the kind of thing I expect them not to spend an appreciable amount of time talking about, because it's just so evident in the performance and IQ of their products.
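For anyone who wants the flavor of the kind of thing Dave describes (the driver or hardware pairing an independent ALU op with a texture fetch so both units stay busy), here's a toy sketch in Python. It's purely my own illustration of the general co-issue idea, not ATi's driver code or actual scheduling rules:

```python
# Toy model of a driver-side scheduling pass: my own illustration, not ATi's code.
# It pairs a texture fetch with the next ALU op that doesn't depend on the
# fetched result, so the two can notionally issue in the same slot.

def co_issue(instrs):
    """Pair each 'tex' instruction with the next independent 'alu' instruction."""
    slots, used = [], set()
    for i, (kind, dst, srcs) in enumerate(instrs):
        if i in used:
            continue
        if kind == "tex":
            # look ahead for an ALU op that doesn't read the texture result
            for j in range(i + 1, len(instrs)):
                k2, d2, s2 = instrs[j]
                if j not in used and k2 == "alu" and dst not in s2:
                    slots.append([(kind, dst, srcs), (k2, d2, s2)])
                    used.update({i, j})
                    break
            else:
                slots.append([(kind, dst, srcs)])
                used.add(i)
        else:
            slots.append([(kind, dst, srcs)])
            used.add(i)
    return slots

prog = [
    ("tex", "r0", ["t0"]),        # fetch texture into r0
    ("alu", "r1", ["v0", "c0"]),  # independent of r0: can run alongside the fetch
    ("alu", "r2", ["r0", "r1"]),  # depends on the fetch, must wait
    ("alu", "oC0", ["r2", "c1"]),
]
print(len(co_issue(prog)), "issue slots for", len(prog), "instructions")  # 3 for 4
```

The point is just that the second instruction never touches r0, so it can go out alongside the fetch; that sort of thing happens in drivers and hardware whether or not the IHV writes a whitepaper about it.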
So why isn't ATi telling us about such things?
Because they are so self-evident, of course. At least, to me they always have been. I think it is illogical to assume that because nVidia talks about something, it is in fact what nVidia has actually done, or that because ATi has not talked about something, ATi has not done it. I have a problem with that approach. It's simply not required that ATi tell me about every little thing they've done in order for me to see that those things have been done, and indeed, when I contrast the deficiencies of nV3x with R3x0 I can understand why nVidia has done so much talking...
What I have seen, though, demonstrated many times over the last year, is that there is often a gulf between what nV3x actually does and what nVidia PR says it does. In line with this disparity, people postulate all kinds of artificial constructs in an attempt to bridge the gap--and often come up with ideas which, while sounding plausible, cannot be verified or demonstrated by anything nVidia has actually said. I suppose that people do this because they cannot accept the fact that nVidia is simply misrepresenting its products--so they rationalize in order to avoid that conclusion.
Let's take the whole, "DX9 is an ATi-Microsoft conspiracy with nVidia as its target," line of thought. Well, that's certainly one way to look at it...
A better way, I think, is to simply understand it in terms of ATi building and getting to market a much better chip. Then there's the "GPUs of the future are going to be so complex, along the lines of nV3x, that it will be routine to see compiler optimization require at least a year after product introduction before the product can become half as fast as simpler, more efficient traditional architectures" line of thought--Huh?...
I guess that's one very convoluted way to look at things--but I still prefer the simpler view that nV3x "sucks" in comparison with R3x0, and even great attention to compiler optimization cannot change the basic facts.
That last one reminds me of the old saw about two guys each bidding their rocket designs to boost a payload into orbit. One guy's design has 2 stages and costs $x, while the other's has 10 stages and costs $2.5x....
Does the guy with the most stages, whose design burns the most fuel and is the least efficient at getting the payload where it needs to be--is that guy going to win? Nope--he's going to lose. And this is what I see between nV3x and R3x0 as it has played out for the last year. Why have fx12, fp16, and fp32, when all you need is fp24? Why go to .13 microns to do a 4-pixel-per-clock chip when you can do an 8-pixel-per-clock chip at .15 microns, and get better yields and save money? Etc.
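To put rough numbers on that last question (the clocks here are just my approximations for a 5900 Ultra-class part and a 9800 PRO-class part, not official specs):

```python
# Rough theoretical fillrate arithmetic; clock values are approximate and illustrative.
def pixel_fillrate(pipes, clock_mhz):
    """Theoretical color pixels per second."""
    return pipes * clock_mhz * 1e6

nv3x_style = pixel_fillrate(pipes=4, clock_mhz=450)  # 4-pipe part on .13 microns
r3x0_style = pixel_fillrate(pipes=8, clock_mhz=380)  # 8-pipe part on .15 microns

print(f"4 pipes @ 450MHz: {nv3x_style / 1e9:.2f} Gpixels/s")  # ~1.80
print(f"8 pipes @ 380MHz: {r3x0_style / 1e9:.2f} Gpixels/s")  # ~3.04
# Even with the clock advantage bought by the smaller process, the 4-pipe
# design trails the 8-pipe design on raw theoretical fillrate.
```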
All this talking seems, therefore, very much beside the point to me. I don't think we need a "monkey see, monkey do" situation where, if nVidia talks about something it considers relevant, ATi must address the same subjects, or vice-versa. The companies are different, as are their products, and when we let the products do the talking we are left with some fairly clear and unambiguous answers.