State of 3D Editorial

^ Well, I just don't see how you can say Nvidia will possibly fall to #3 behind XGI. That seems quite a bit premature, wouldn't you say? What has XGI done to show they can even pull off the Volari launch with even mild success? And even if they do get it out the door, I don't see them overtaking Nvidia in the foreseeable future.

Time will tell...
 
maven said:
OpenGL guy said:
stevem said:
As has been mentioned many times, a key determinant of D3 performance will be accelerating shadow volumes. Nvidia have incorporated Carmack's zpass/zfail stenciled shadow volume techniques by including 2-sided stencil testing & depth clamping in HW.
ATI has the same in the R3x0 and RV3x0 parts.

Depth Clamping too? I was only aware of two-sided stencil...
I was referring to two sided stencil. I have no idea what depth clamping is.
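For reference, the single-pass setup being described maps onto OpenGL roughly like this. A minimal sketch of Carmack's z-fail counting through the EXT_stencil_two_side and EXT_stencil_wrap extensions, assuming the extension entry points have been loaded; drawShadowVolume() is a hypothetical helper that submits the closed volume mesh:

    #include <GL/gl.h>
    #include <GL/glext.h>

    void drawShadowVolume(void); /* hypothetical: submits the volume triangles */

    /* z-fail stencil shadow volumes in a single geometry pass */
    void stencilShadowPass(void)
    {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* no color writes */
        glDepthMask(GL_FALSE);                               /* no depth writes */
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_STENCIL_TEST);
        glEnable(GL_STENCIL_TEST_TWO_SIDE_EXT);
        glDisable(GL_CULL_FACE);          /* both face sets rasterized in one pass */

        glActiveStencilFaceEXT(GL_BACK);  /* back faces: increment where the depth test fails */
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glStencilOp(GL_KEEP, GL_INCR_WRAP_EXT, GL_KEEP);

        glActiveStencilFaceEXT(GL_FRONT); /* front faces: decrement where the depth test fails */
        glStencilFunc(GL_ALWAYS, 0, ~0u);
        glStencilOp(GL_KEEP, GL_DECR_WRAP_EXT, GL_KEEP);

        drawShadowVolume();               /* geometry submitted once, not twice */
    }

Without the extension you'd do the same thing in two passes, culling front faces in one and back faces in the other, which is the extra geometry submission Xmas mentions further down.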
 
I think it limits the maximum depth, thereby culling all triangles past a certain point from any calculations.
 
arrrse said:
If Doom 3 had been released in H2 2002 or H1 2003 as originally expected, the 5800 might have looked pretty good to the eye of "Mr Joe Consumer"...
Let's not forget that there is an NV3x path built into Doom3, with all the technical quality compromises that entails:
Texture lookups where ATI does mathematically correct calculations (which Carmack has stated to be clearly better than texture lookups), built-in lower precision where ATI is doing fp24 (and again, Carmack has said the higher precision looks better, if only slightly), & probably others, though those two are bad enough.
And that's without even considering what the drivers will do to it...

I just refuse to buy the idea that GeforceFXes will really be beating ATI at Doom3.
Maybe they'll get a few more fps, but Carmack himself has stated that the FX is doing less so who cares if NV3x gets those few fps more?

It's like if Valve cut down the effects load on the mixed mode of HL2 enough that the FX beats ATI in fps.
Who cares when we all know that the FX is doing less work & the ATI card renders it better?

I guess the answer is: Joe Consumer who doesn't realise that the FX is doing less work at lower quality... :rolleyes:

Joe Consumer probably wouldn't care even if he knew the FX was doing less work. And why should he? As long as the quality differences aren't that great, Joe Consumer won't be missing anything, nor will he understand why the differences are present. He'll probably go so far as to say that the FX is producing the right image...
 
OpenGL guy said:
bloodbob said:
I think it limits the maximum depth, thereby culling all triangles past a certain point from any calculations.
That's odd, I thought the far clipping plane did that ;)

Yes, but NVIDIA is already using so much "clipping" power elsewhere they can't use that here ;)
Seriously though, never heard of Depth Clamping.


Uttar
 
Saw a diagram of the damn thing somewhere after the FX came out and seriously, it looked like a far clipping plane for shadow calculations (the geometry is still used in everything else, so it's not quite a far clipping plane).

Okay, contacting my uber-secret Nvidia source, I found out about the UltraShadow details.

1. They can do shadow volumes in 1 pass rather than the 2 everyone else supposedly needed at that point in time (I'm surprised R300 couldn't :/)
2. They use some of the info to cull pixels.
3. Allows a DEVELOPER (i.e. it can't be done automatically on previous-gen games) to specify a z-min and z-max for that shadow, which then allows the card/drivers to cull some parts of the scene (man, this diagram I got from their website sucks; the "fill saving" looks to include the rendered scene rather than the stuff culled).


If you didn't guess, the uber-secret source is http://www.nvidia.com/object/LO_20030508_6927.html.
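Point 3 of that list is what surfaced in OpenGL as the EXT_depth_bounds_test extension. A minimal sketch of the developer's side, assuming the per-volume bounds are already known; computeShadowZBounds() is a made-up helper and drawShadowVolume() the same hypothetical routine as above:

    /* UltraShadow-style depth bounds test (EXT_depth_bounds_test).
       Fragments whose *stored* Z-buffer value lies outside [zmin, zmax]
       are rejected before any stencil work is done for them. */
    GLclampd zmin, zmax;
    computeShadowZBounds(&zmin, &zmax); /* made-up: window-space depth extent of this volume */

    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(zmin, zmax);
    drawShadowVolume();                 /* stencil updates skipped where the bounds test fails */
    glDisable(GL_DEPTH_BOUNDS_TEST_EXT);

Note that the application has to supply the bounds, which is exactly why it can't kick in automatically for previous-gen games.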
 
Xmas said:
Maybe stevem was thinking of depth bounds (UltraShadow) instead of depth clamp. Depth bounds is, btw, only supported by NV35+
Yes, UltraShadow was what I had in mind. Both the non-color write trick of NV3x + depth bounds increase performance. Depth clamping reduces depth precision loss.

OpenGL guy said:
I was referring to two sided stencil. I have no idea what depth clamping is.
Exposed via the NV_depth_clamp extension. Saw it in a paper by Kilgard; it removes the need for clipping primitives to the near/far planes of stenciled shadow volumes. Link. BTW, does R3x0 render stenciled shadow volumes in one pass? Does it matter, given R3x0 texture/pixel capability?

bloodbob said:
Saw a diagram of the damn thing somewhere...
Nvidia's GDC 2003 page has interesting info.
 
jiffylube1024 said:
^ Well, I just don't see how you can say Nvidia will possibly fall to #3 behind XGI. That seems quite a bit premature, wouldn't you say? What has XGI done to show they can even pull off the Volari launch with even mild success? And even if they do get it out the door, I don't see them overtaking Nvidia in the foreseeable future.

Time will tell...

I think that XGI could beat GeforceFX in Dx9 games if Nvidia is forced to do real Dx9. No mixed mode and no cheats.

It would probably be difficult to design a Dx9 gfx-card that didn't beat a Gf FX in that situation. :D

Personally, I doubt people will buy XGI gfx cards. There's no reason to if you can buy a much better ATI 9800 for the same money. And Nvidia, even if they are slow in Dx9, still has a good reputation among ordinary, ignorant people. People that will continue to buy Nvidia cards.
And the drivers could be a problem for XGI. Perhaps there will be problems with a lot of games; they are new, and that could be expected.

So my guess is that XGI will be #2 in Dx9 performance, but few will buy the cards.
 
from an overclockers.com poll, by Ed Stroligo - 11/3/03:

". . .Over 90% of those indicating they wanted a video card said they wanted an ATI card. There were a few who indicated nVidia, very few. Most who chose ATI didn't even mention nVidia as a factor. It's a wipeout for nVidia among this audience . . ."

" . . . nVidia is quite another matter. This audience has proven in the past to be a good leading indicator of future trends for the general computing population, what they think today is often what others think tomorrow.

I'd be scared excrementless if I were nVidia after seeing these responses. Right now, they aren't losing, they're not even playing, at least not according to this audience . . . "

I entirely agree with that view. If you look at polls on web sites, at least 90% of last-generation cards owned are ATI (both performance and value cards); Nvidia's CEO says that video cards are not the main interest of Nvidia now. "Market share" figures are just leftovers from the past; I think what is selling now is much different.
 
stevem said:
BTW, does R3x0 render stenciled shadow volumes in one pass? Does it matter, given R3x0 texture/pixel capability?
I think with that you mean two-sided stencil. Yes, R3x0 supports it. But how is that related to R3x0 texture/pixel capability?
Two-sided stencil doesn't save fillrate. It allows you to submit the geometry only once instead of twice, which saves bandwidth and VS performance.

As for depth clamp and depth bounds:
Depth clamp disables near and far plane clipping and allows you to define a range [zmin, zmax] to which the screen-space depth will be clamped. This way you don't have to worry about shadow volumes that extend past the near or far clipping planes. You don't have to generate extra geometry to "close" the shadow volume at the clipping planes.

Depth bounds (UltraShadow) is a way to save fillrate/bandwidth while rendering stencil shadows. It simply acts on the fact that a shadow volume can't affect fragments that are completely in front of it or behind it. So you define a range [zmin, zmax] in which the shadow volume fits completely. The chip can then discard any fragments where the depth value in the Z-buffer is outside this range. With a hierarchical Z-buffer, this rejection can happen per tile.
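To put the depth clamp half in the same terms: NV_depth_clamp has no dedicated entry point, just an enable, and the clamp range comes from the current glDepthRange. A minimal sketch, reusing the hypothetical drawShadowVolume() from above:

    /* With depth clamp enabled, near/far plane clipping is disabled and
       window-space depth is clamped to the glDepthRange interval, so the
       volume needs no extra capping geometry where it crosses the planes. */
    glEnable(GL_DEPTH_CLAMP_NV);
    drawShadowVolume();
    glDisable(GL_DEPTH_CLAMP_NV);

Depth clamp and depth bounds are orthogonal, so a renderer on NV35+ could enable both around the same volume pass.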
 
demalion said:
I have trouble seeing the association between what you talk about, and what Dave said, WaltC. It is demonstrable that there is a tangible result of a lack of specific information released by ATI and a relative abundance of specific information (however PR-centric and technically distorted) being released by nVidia, and that is what Dave is talking about...your discussion of your personal take on whether information is necessary for you doesn't say anything that changes that, or seem to succeed in making your opinion applicable to what Dave was addressing, AFAICS. He wasn't talking about your reaction to such information.

On that note, stevem's comment on 2-sided stencil seems to underscore what Wavey is saying fairly well.

DaveB said:
....in which ATI and NVIDIA make information that make them look favorable available. For instance, with the 52.16 drivers the first thing NVIDIA did was mail a whitepaper to their entire press list, so that information was readily available to them and it makes NVIDIA look good - now, for instance, how many journalists, or even review readers, know that ATI have already implemented a basic shader compiler optimiser since cat3.6? I knew, because one of the ATI guys here has referenced it, and some of you may have read it in my 256MB 9800 PRO review, but the press at large have no clue because ATI didn't tell us/you about it. Further to that - how many of you knew that ATI can actually do some instruction scheduling in hardware (i.e. it can automatically parallelise some texture and ALU ops to make efficient use of the separate pixel shader and texture address processor)? I'll bet not many - why the hell aren't ATI telling us about these things?

So, yes, I've already said to Josh that I think there have been too many assumptions in there, but the apparent disparity between the NVIDIA and ATI information in there is partly because ATI just don't inform people about a lot of things.

What I thought Wavey was talking about was conditions "...in which ATI and NVIDIA make information that make them look favorable available." (emphasis mine.) My point was simply the obvious one, that ATi hasn't needed to provide people with a lot of information "making them look favorable" since their products do most of their "favorable" talking for them.

You make what seems to me a very odd distinction here between what nVidia releases as "information" which is "PR-centric" and "specific information" which I assume you mean is information which you consider technically correct and PR-neutral. My question to you is simply how can you distinguish which is which?....:) Certainly, the idea that something sounds plausible is no reason to confuse it with a fact.

Case in point: TR asked nVidia point blank a few months ago to clear up the confusion it had created around nV3x with the following question: "Does nv3x render 8 pixels per clock?" To which nVidia replied, "Yes, we do 8 ops per clock." (That was a line I doubt I'll ever forget.)

Considering that nVidia was unable to provide a straight, PR-neutral answer to such a simple, fundamental question about its architecture as "How many pixels per clock does it render?", I find it difficult to believe the company when it discusses much more complex and arcane "facts" about its architecture. Obviously, I do not possess your ability to separate the wheat from the chaff in this regard...:)

I mean, I find it just ludicrous that anyone might have to be "told" by ATi that ATi does generic, global shader optimizations in its drivers, before they might appreciate that ATi has in fact done this. Since the first 3d card shipped IHVs have routinely "optimized" their drivers in a global sense to get more from their hardware. I think DaveB was being polite to Josh with these statements, but really these are things that if one doesn't understand one has no business writing "state of 3d" articles, to put it bluntly. It doesn't follow for me that ATi should have to "talk about it" as nVidia has "talked about it" in order for me to consider it has been done. It's clear from using R3x0 for the last year that such things have been done, routinely. Rather, this is the kind of thing I expect them not to spend an appreciable amount of time talking about because it's just so evident in the performance and IQ of their products.

So why isn't ATi telling us about such things?

Because they are so self-evident, of course. At least, to me they always have been. I think it is illogical to assume that because nVidia talks about something that it is in fact what nVidia has actually done, or that because ATi has not talked about something that ATi has not done it. I have a problem with that approach. It's simply not required that ATi tell me about every little thing they've done in order for me to see that those things have been done, and indeed, when I contrast the deficiencies of nV3x with R3x0 I can understand why nVidia has done so much talking...:)

What I have seen, though, demonstrated many times over the last year, is that there is often a gulf between what nV3x actually does and what nVidia PR says it does. In line with this disparity, people postulate all kinds of artificial constructs in an attempt to bridge the gap--and often come up with ideas which, while sounding plausible, cannot be verified or demonstrated by anything nVidia has actually said. I suppose that people do this because they cannot accept the fact that nVidia is simply misrepresenting its products--so they rationalize in order to avoid that conclusion.

Let's take the whole, "DX9 is an ATi-Microsoft conspiracy with nVidia as its target," line of thought. Well, that's certainly one way to look at it...:) A better way, I think, is to simply understand it in terms of ATi building and getting to market a much better chip. Then there's the "GPUs of the future are going to be so complex, along the lines of nV3x, that it will be routine in the future to see compiler optimization require at least a year after product introduction before the product can become half as fast as simpler, more efficient traditional architectures."--Huh?....:) I guess that's one very convoluted way to look at things--but I still prefer the simpler view that nV3x "sucks" in comparison with R3x0, and even great attention to compiler optimization cannot change the basic facts.

The last reminds me of the old saw about two guys each bidding their rocket designs to boost payload into orbit. One guy's design is 2 stages and costs $x, while the other's is 10 stages and costs $2.5x....:) Does the guy with the most stages, whose design burns the most fuel and is the least efficient at getting the payload where it needs to be--is that guy going to win? Nope--he's going to lose. And this is what I see between nV3x and R3x0 as it has played out for the last year. Why have fx12, fp16, and fp32, when all you need is fp24? Why go to .13 microns to do a 4-pixel-per-clock chip when you can do an 8-pixel-per-clock chip at .15, and get better yields and save money? Etc.

All talking seems therefore very much beside the point to me. I don't think we need a "monkey see, monkey do" situation where if nVidia talks about something it considers relevant ATi must address the same subjects, or vice-versa. The companies are different as are their products, and when we let the products do the talking we are left with some fairly clear and unambiguous answers.
 
Xmas said:
I think with that you mean two-sided stencil. Yes, R3x0 supports it. But how is that related to R3x0 texture/pixel capability?
Xmas, thanks. 2-sided stencil test is single pass by design as 2 passes for front/back geometry aren't done. I had also expected fillrate cost from projections.
 
SPCMW type of conversation

WaltC said:
...

What I thought Wavey was talking about was conditions "...in which ATI and NVIDIA make information that make them look favorable available." (emphasis mine.) My point was simply the obvious one, that ATi hasn't needed to provide people with a lot of information "making them look favorable" since their products do most of their "favorable" talking for them.
And you, again, completely missed the discussion I referred to...which wasn't talking about what you think/know about ATI, but what the media and some people besides yourself do.
You make what seems to me a very odd distinction here between what nVidia releases as "information" which is "PR-centric" and "specific information" which I assume you mean is information which you consider technically correct and PR-neutral.
Did you just want to say I was being odd? Let me know where I miscommunicated:
I said "specific information" in reference to both ATI and nVidia. I then referred to PR-centric and technically distorted in association with nVidia, because nVidia quite often seems to fit that with their information. Where is there a mystery of whether "specific information" is PR-centric or not...if it is, it seems for this discussion I'll be adding the description: "PR-centric" as applicable, as well as other descriptions that don't make "specific information" any more confusing.
Are you confusing yourself by simplifying the situation to "acting like your opinion of nVidia wrong-doing" and "acting unlike nVidia in any arbitrary particulars that suit you", such that specific information has to be like the former?

My question to you is simply how can you distinguish which is which?....:)
That seems like a pretty silly question, unless you've forgotten what forum you're in, or just don't pay attention to the testing and discussions that go on. Or maybe it is just rhetorical?

It is also beside the point, since the problem Wavey seems to be referring to is the large number of people who don't "distinguish which is which", especially those that publish. At least for the understanding I have of it, and agree with.

Certainly, the idea that something sounds plausible is no reason to confuse it with a fact.
Yes, it is a pretty obvious reason...facts sound plausible too. Telling facts from falsities takes knowledge to evaluate beyond "sounds plausible", yes?

My arguing that there is a reason doesn't mean that I'm arguing that it is a good reason to confuse "plausibility without factuality" with "factuality" (rather the opposite would be what I maintained...if we were to actually discuss the matter). It means that I'm suggesting ATI providing facts doesn't require more reason than what Dave mentions, because your viewing things as either "providing information like the bad information nVidia has provided" or "not providing information", doesn't mean things are actually that simple, even if you spend several paragraphs failing to relate that premise to logic.

...goes on to talk about how nVidia's information has been bad...
I mean, I find it just ludicrous that anyone might have to be "told" by ATi that ATi does generic, global shader optimizations in its drivers, before they might appreciate that ATi has in fact done this.
And how did you find out? Were you born with the knowledge? You've completely skipped over any discussion of how such knowledge could be obtained, by the simple expedient of proclaiming that it obtains itself. Further, you propose that it is ludicrous to propose that ATI act as if it could be otherwise.

Nevermind any mention of how "otherwise" is demonstrably the case, when you can take the opportunity to focus on how the issue is how they should avoid being nVidia instead? :-?

No, it is not obvious that ATI does generic global shader optimizations for your (lack of) stated logic, because it is obvious that generic global shader optimizations don't write themselves. A bit of logic: it would only require that ATI didn't write one. Given the 3dmark 03 GT 4 "valid optimization implemented other than generically and globally", not "knowing" that there was a generic and global optimizer doesn't seem too unreasonable.
This wouldn't mean it would be ATI's fault that someone echoes what their competitors say about ATI's hardware and drivers without checking with ATI, it just means ATI can do something to improve the situation resulting from some people displaying such behavior (those people willing to listen)...without having to be exactly like nVidia.

Since the first 3d card shipped IHVs have routinely "optimized" their drivers in a global sense to get more from their hardware. I think DaveB was being polite to Josh with these statements, but really these are things that if one doesn't understand one has no business writing "state of 3d" articles, to put it bluntly.
Eh? Please at least criticize for rational reasons. You're saying because it is obvious that IHVs optimize, that it is obvious that ATI's shader optimizations are generic and global at any given point in time. There is no logical connection between the two, outside of thinking such things happen automatically for ATI because they "should".
Of all the things done wrong, not being born with the knowledge of the specifics of ATI's optimization is not one of them. It appears to me that you are transforming "that is wrong for that reason" to "that is wrong for this reason", where "this reason" is a new simplification whose apparent merit, so far, is to make it more convenient for you to propose your preference as some sort of divine mandate independent of logic.

It doesn't follow for me that ATi should have to "talk about it" as nVidia has "talked about it" in order for me to consider it has been done.
Who is arguing that general optimization can only exist if ATI talks about them, or that ATI should act exactly like nVidia? The proposition simply seems to be: if you've accomplished something, toot your horn. If it isn't PR saturated technical distortion, it isn't like (the unsavory part of) the information nVidia has released.
So why isn't ATi telling us about such things?

Because they are so self-evident, of course. At least, to me they always have been.
...
I decided to stop responding here, where you seem to repeat the idea of "ATI shouldn't say anything because everyone should know", and continue to bypass the actual discussion I pointed out by saying it is self-evident that the discussion needn't occur.
I actually have to wonder if you're trying to sabotage the idea of criticizing Josh, because this sentiment seems more suited as a satire of the idea that a writer should have a sufficient set of standards for checking and evaluating information. Why should they bother, if they should simply know everything to begin with? :oops:
 
Re: SPCMW type of conversation

demalion said:
And you, again, completely missed the discussion I referred to...which wasn't talking about what you think/know about ATI, but what the media and some people besides yourself do.


My comments were also about the media, D... But, you asked me to explain my comments in connection with Dave's. In my first post, I quoted DaveB's comments verbatim, and in my response to your question, I quoted them again--just so there'd be no misunderstanding as to what I was responding to. Dave's comments that I quoted distinctly reference ATi's global shader optimization, and the comments I quoted do not mention stencils, which is why I did not talk about stencils. For some reason you keep wanting to overlook the comments I quoted and responded to, and make reference instead to comments I did not quote or respond to. I know that you think DaveB's comments didn't actually mean what they said as quoted, but I obviously disagree.

This is getting silly, don't you think? "What I think I know," is irrelevant--I was quoting what DaveB said, and simply offering my opinions on issues that I see directly relate to them. Please don't tell me I have to quote those remarks a third time before you understand that what I responded to and what you think I should have responded to are two different things...:)

Did you just want to say I was being odd? Let me know where I miscommunicated:...

and then I said:
My question to you is simply how can you distinguish which is which?....:)
That seems like a pretty silly question, unless you've forgotten what forum you're in, or just don't pay attention to the testing and discussions that go on. Or maybe it is just rhetorical?


My point was that in these very forums I've seen people take nVidia's PR descriptions of its products and attempt to reconcile them with objective testing done on those products, and in the process create pages of meaningless speculation. Remember the "zixels" discussions? That's a prime example of how being in these forums does not serve as inoculation against nVidia's PR-centric technical "information" such as, "Yes, we do 8 ops per clock," when the company was simply asked to disclose the number of *pixels* per clock nV3x does. "Information" such as that coming out of nVidia has prompted many a marathon speculation session around here--not to be confused with discussions of any facts in evidence...:)

It is also beside the point, since the problem Wavey seems to be referring to is the large number of people who don't "distinguish which is which", especially those that publish. At least for the understanding I have of it, and agree with.

I said in my initial response to you that I felt DaveB was being polite to Josh in saying, "Why doesn't ATi tell us such things?" I wasn't implying that DaveB was complaining personally because he himself felt left out. Yes, it may be true that some people have to have every little thing laid out for them--but to me the greater problem is the mass confusion caused by the nVidia PR machine incessantly spinning and distorting information to the degree that even people who should know better get fooled by it (again, as is evidenced in a few lengthy threads in these forums--which have had very little technical probity behind them.)


and I said:
Certainly, the idea that something sounds plausible is no reason to confuse it with a fact.
Yes, it is a pretty obvious reason...facts sound plausible too. Telling facts from falsities takes knowledge to evaluate beyond "sounds plausible", yes?


Yes--that was my point exactly--that it takes more than plausibility to establish a fact.

My arguing that there is a reason doesn't mean that I'm arguing that it is a good reason to confuse "plausibility without factuality" with "factuality" (rather the opposite would be what I maintained...if we were to actually discuss the matter). It means that I'm suggesting ATI providing facts doesn't require more reason than what Dave mentions, because your viewing things as either "providing information like the bad information nVidia has provided" or "not providing information", doesn't mean things are actually that simple, even if you spend several paragraphs failing to relate that premise to logic.
...goes on to talk about how nVidia's information has been bad...


Again, my point was that I have no complaints with the information ATi has provided thus far. I really think it's stretching probability to imagine that even if they did provide a longer list of factoids, those nuggets of information would either be completely understood or correctly placed in context. I don't think that they would be (obviously.) My point with reference to nVidia's "information" proves itself. Even when such "information" is provided it is more often than not moved completely out of context and/or completely misunderstood. And that happens because the basic assumption used in interpreting the information is flawed--that assumption being that the information is both accurate and correct.

Heh...:) It often strikes me that people take some of the more bizarre PR ramblings spewed by nVidia as sort of an "I.Q. test," as a puzzle which is put before them as a challenge or an enigma to be unwrapped. Again, the most obvious example of this is the "zixels" thread(s). The simpler explanation, of course, is that the "information" was meaningless to begin with--and so it's not surprising when meaningless threads evolve around meaningless information--when it's not clearly understood that the foundational information the thread is based on is itself meaningless. Such things as relating ops per clock to pixels per clock are merely PR devices the intent of which is to mislead. I don't classify that as "information."


and I said:
I mean, I find it just ludicrous that anyone might have to be "told" by ATi that ATi does generic, global shader optimizations in its drivers, before they might appreciate that ATi has in fact done this.
And how did you find out? Were you born with the knowledge? You've completely skipped over any discussion of how such knowledge could be obtained, by the simple expedient of proclaiming that it obtains itself. Further, you propose that it is ludicrous to propose that ATI act as if it could be otherwise.


OK, the context here is about people publishing articles on "the state of 3d", and what such people should know without having to have it spelled out for them. When people write such articles they place themselves in the role of being an "authority figure" on the subject--ergo, they should know a lot of things without expressly having to be told about them afresh whenever they occur. Although I don't publish such articles, I didn't have to be told that ATi was globally optimizing for its hardware (shaders are no less "hardware" than anything else in the chip) to know that they were simply through empirical use of their products and the observations I've made stemming from that use, coupled with years of experience in the use of similar products. I would think that anyone setting himself up as an authority figure on 3d hardware to the extent of publishing articles on "the state of 3d" would be no less qualified.

But to provide an example of what typically happens...

ATi could say: "We, of course, do global optimizations in our drivers for our hardware, including shaders and everything else"...

and some "authority figure" at a web site would undoubtedly interpret that as follows...

"Well, out of ATi's own mouth we have it confirmed that ATi does application-specific optimization of its shader code...!"

In fact, I think this has already happened more than once...:)

So the problem is not so much in the amount of "information" delivered; it's partly in whether the information the IHV provides is factual and accurate, but far more importantly it lies in the ability of those who hear it to understand it and to place it into meaningful context. And that's the real problem: some "authority figures" are illegitimate. If the information is false or highly misleading, they cannot see it; or if it is entirely factual, they are unable to understand it and place it into its proper context.


No, it is not obvious that ATI does generic global shader optimizations for your (lack of) stated logic, because it is obvious that generic global shader optimizations don't write themselves. A bit of logic: it would only require that ATI didn't write one. Given the 3dmark 03 GT 4 "valid optimization implemented other than generically and globally", not "knowing" that there was a generic and global optimizer doesn't seem too unreasonable.

Maybe it's not obvious to you, but it has been to me for a long time...:) FYI, 3d-card drivers don't write themselves, either. Anytime you have a major IHV which has been writing drivers for a specific architecture over a length of time, you may safely assume they are constantly optimizing their drivers for every aspect of their hardware on a global basis. You may not safely assume the opposite...:) (Unless the IHV in question is non-competitive in the 3d-gaming market segment.)

Optimizing is a good thing. It's too bad that nVidia has succeeded in butchering the word over the last year, but unfortunately some people now see "optimizing" as some sort of "negative" when in fact it is common practice in the industry and has been for years.

Secondly, I see no need to confuse application-specific optimization with global driver optimization, since they are entirely separate and I've not mentioned the former.

This wouldn't mean it would be ATI's fault that someone echoes what their competitors say about ATI's hardware and drivers without checking with ATI, it just means ATI can do something to improve the situation resulting from some people displaying such behavior (those people willing to listen)...without having to be exactly like nVidia.

The problem is that we don't have that situation. We have a situation which has devolved recently and materially into one where, if nVidia makes any statement whatever about what it claims to be doing or to have done and there is no corresponding statement from ATi on the same subject, some people equate "silence" with "we aren't doing anything similar."

That's of course nonsense...:) There are certain general practices that any IHV must do to remain competitive--whether they talk about it or not is irrelevant. Lack of verbosity on a given subject should never be equated with a lack of action on the part of an IHV. Conversely, excess verbosity on the part of an IHV shouldn't be interpreted any further than the actual performance/IQ its products support.

So, again, the problem is not with the "information" provided by the IHVs, be it true or false, the problem is in the "hearers" of that information, and in their ability to understand whether it is truth or fiction, and to put all such information into its proper context. IMO, of course.


Eh? Please at least criticize for rational reasons. You're saying because it is obvious that IHVs optimize, that it is obvious that ATI's shader optimizations are generic and global at any given point in time.


Nope, never said that. In fact, I haven't talked about application-specific optimization of any type. What I've said was that I didn't need to be told that ATi was doing global driver optimizations for its hardware, including its shader hardware, since this is what competitive IHVs *always* do, without question or exception. Discussing application-specific optimization is an entirely different subject, and I've not commented on it.

Who is arguing that general optimization can only exist if ATI talks about them, or that ATI should act exactly like nVidia? The proposition simply seems to be: if you've accomplished something, toot your horn. If it isn't PR saturated technical distortion, it isn't like (the unsavory part of) the information nVidia has released.

And, as I've said, the very best way to "toot your horn" is to let your products--drivers and hardware--do the tooting...:) All other horn blowing is but a pale shadow of that, because in the end that's the only horn that counts.

I decided to stop responding here, where you seem to repeat the idea of "ATI shouldn't say anything because everyone should know", and continue to bypass the actual discussion I pointed out by saying it is self-evident that the discussion needn't occur.

Again, what I actually said was that it was self-evident that ATi has been doing global driver optimization for R3x0 since the company began shipping it...:) If one has been using R3x0 products for the last year, as I have done, such a fact is indeed evident. The loudest voice I hear from ATi, bar none, is that of its hardware and drivers as I have known them. Everything else is just background chatter.

I actually have to wonder if you're trying to sabotage the idea of criticizing Josh, because this sentiment seems more suited as a satire of the idea that a writer should have a sufficient set of standards for checking and evaluating information. Why should they bother, if they should simply know everything to begin with? :oops:

You see what I mean about how the problem is most often with the hearer?...:D My comments actually criticized Josh--and certainly could not be construed as supporting his article as it stands. And I never said that people who write authoritative articles on the "state of 3d" should "know everything"--but they should certainly know that ATi has been optimizing and otherwise refining its drivers over the last year to get the most out of its architecture--including shader function and performance--which is honestly a comment I believe DaveB made simply out of politeness to Josh--since I'd bet DaveB didn't have to hear it from ATi to know it, either...:) ('Course, I could be wrong, but if I had to lay odds I'd guess that DaveB's been around the block enough to know what competitive IHVs always do with their drivers and hardware.)
 
Re: SPCMW type of conversation

WaltC said:
demalion said:
And you, again, completely missed the discussion I referred to...which wasn't talking about what you think/know about ATI, but what the media and some people besides yourself do.
My comments were also about the media, D... But, you asked me to explain my comments in connection with Dave's.

I understood your comments and your characterization of how the media should know what you decided they should, as I discussed, I simply don't see how that logically relates to what Dave was talking about. All you continued to talk about is how you always knew there was always a generic and global shader optimizer, and that this is the only way things can be, when all of that is in direct contradiction of what reality would seem to indicate.

In my first post, I quoted DaveB's comments verbatim, and in my response to your question, I quoted them again--just so there'd be no misunderstanding as to what I was responding to.

Yes, and then you talked about what you knew and considered obvious. You seem so wedded to the idea of your statements being established fact that you yet again completely miss discussion associated at all with how that premise might just be wrong.

Dave's comments that I quoted distinctly reference ATi's global shader optimization, and the comments I quoted do not mention stencils, which is why I did not talk about stencils.

WaltC, I can't help but finally ask: what the hell are you talking about? Stencils isn't the issue, information and people not knowing things is.

For some reason you keep wanting to overlook the comments I quoted and responded to, and make reference instead to comments I did not quote or respond to.

Because Wavey was talking about information, and so was I. That's what I presume when he said "information". He then gave an example, and so did I. That's what I presume he did when he mentioned the optimizer (note the "since 3.6", if you would), and what I did when I mentioned the double-sided stencil.

You, on the other hand, are talking about a fiction you made up of how ATI having a general and global shader optimizer is self-evident (again, please note that Wavey proposes that it was not the case for Cat 3.5 and earlier, and consider how non-omniscient people might have missed the change), as an excuse to argue against Wavey when he actually said "information". As I discussed, the idea is indeed a fiction, seemingly promoted by your ignorance of the possibility of how a "general and global shader instruction optimizer" is distinct from other optimizations IHVs may have already done, and continuing to miss how your basic premise is asserted to be completely incorrect by what you quoted.

I know that you think DaveB's comments didn't actually mean what they said as quoted, but I obviously disagree.

Well, if I'm arguing Wavey didn't mean what his comments obviously said...oh my, that is a bit self-fulfilling, isn't it? :oops:

This is getting silly, don't you think?

You started out silly, WaltC. Please read my first reply to you again.

"What I think I know," is irrelevant--I was quoting what DaveB said, and simply offering my opinions on issues that I see directly relate to them.

It is strange that you chose something "irrelevant" as the logical support for your opinion, then. :oops: Some might even say "silly"...see above.

Please don't tell me I have to quote those remarks a third time before you understand that what I responded to and what you think I should have responded to are two different things...:)

Who are you talking to? I'm the person who pointed out the irrelevance of your saying ATI shouldn't release information because you know it already. Note: long does not automatically mean logical, WaltC, you have to work at it throughout.

I look at the rest of your post, and I am (not so) amazed to note that you are also trying to dismiss a long history of discussions as useless because they bothered to discuss something you'd already made up your mind on (without any technical basis that seems evident). Strange, though, how you just proposed you were only talking about the shader optimizer a moment ago, and how this made my stencil example irrelevant. :-?

The "zixel" discussion wasn't meaningless, it was a discussion about some new information pertinent to NV3x performance, and how nVidia's PR was inaccurate in using "pixels". The very term "zixel" is a separation from nVidia PR, not something that follows it. I'm happy for you that you knew exactly the situation before the discussion, but, again, Wavey wasn't discussing your apparent omniscience.

It would be...interesting...to go over (somewhere else) the topics you have complaints with, and how exactly you propose your discussion illustrates more "technical probity". While we're bringing up past discussions, I'll mention that I actually find your discussions to consistently be the exact opposite, most often ignoring technical details or even being downright technically incorrect. I believe I've even pointed out to you exactly why on some occasions...though if you call those occasions "meaningless" I guess it's "obvious" that it doesn't matter?
...

There is...quite a bit more I could say to try and decipher the multi-layered illogic and filibuster you present and illustrate the problems with it, and I'm sure it could be entertaining to some, but I think I've covered everything actually remotely on topic about your commentary. I am amazed at how little relation your conversation has to anything remotely established outside of what you've already decided without any apparent adherence to objective rationale at all.
 
Re: SPCMW type of conversation

demalion said:
...lots-o-ranty-stuff...
There is...quite a bit more I could say to try and decipher the multi-layered illogic and filibuster you present and illustrate the problems with it, and I'm sure it could be entertaining to some, but I think I've covered everything actually remotely on topic about your commentary. I am amazed at how little relation your conversation has to anything remotely established outside of what you've already decided without any apparent adherence to objective rationale at all.
Before I respond to demalion a question for the mods: Do you ban people for kicking other members in the nuts hard around here? :|
 