State of 3D Editorial

First of all, wassup? I'm new here... been reading up on stuff from this website for quite some time now... I'm from a tiny community over at x-3Dfx.com. :p

Anyways...

I see all of you getting worked up over nothing. In case you haven't heard, nVidia has just released an XT line of video cards... yes, "XT". It is a lower-end FX5600 using ATi's new naming scheme.

It doesn't take a genius to figure out that nVidia is trying to confuse customers into buying their crappy cards over ATi cards. If you haven't read the market share reports, they go a little like this...

In the total graphics card market, the situation looks like this:
Intel: 35%
nVidia: 25%
ATi: 22%
VIA: 9%
SiS: 8%
Matrox: 1%

In the work environment, the situation looks like this:
nVidia: 62% (down 2%)
ATi: 32% (up 4%)

In the laptop environment:
ATi: 71% (up 3%)
nVidia: 21% (up 2%)

In total DX9 market share:
nVidia: 72% (thanks to the 5200)
ATi: 27% (the 9200 is not a DX9 card, although it outperforms the 5200).

High-performance DX9 cards, 9500 and up / 5600 and up (my area):
ATi: 68%
nVidia: 32%

nVidia, not used to having ONLY 32% of the high-performance GPU market, have released the FX5600 XT, which has a core clock of only 230MHz and a memory clock of 400MHz (200MHz DDR).
Speculation is (rather, it's quite evident) that nVidia want to confuse customers who are looking to buy 9600 XTs into buying 5600 XTs. Wow... just swap the 5 for a 9 and you've got the same name.
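For perspective, here's the back-of-the-envelope bandwidth math on that memory clock. I'm assuming the usual 128-bit bus for this class of card; the announcement doesn't actually state the bus width:

```c
/* Rough memory bandwidth for a 200MHz DDR memory clock. The 128-bit
 * bus width is my assumption (typical for this card class), not a
 * figure from nVidia's announcement. */
#include <stdio.h>

int main(void) {
    double clock_hz  = 200e6;          /* actual memory clock           */
    double transfers = clock_hz * 2.0; /* DDR: two transfers per clock  */
    double bus_bytes = 128.0 / 8.0;    /* assumed 128-bit bus, in bytes */
    printf("%.1f GB/s\n", transfers * bus_bytes / 1e9);  /* ~6.4 GB/s   */
    return 0;
}
```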

As for driver cheating, I've exposed quite a few cheats myself and have seen them first hand (I own a GeForceFX 5600 Ultra by Leadtek).

Some of you pointed out the Quake/Quack cheat from ATi. They lowered mipmap levels to achieve better performance in Quake III... they were caught and promised to never do it again... and they have yet to cheat since.
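To show how little it takes, here's a toy sketch of application detection (NOT ATI's actual driver code, purely illustrative). This is exactly why renaming quake3.exe to quack.exe made the speed-up disappear:

```c
/* Purely illustrative sketch of an application-detection "optimization":
 * if the running executable matches a known name, quietly bias mipmap
 * selection toward blurrier (faster) levels. Renaming the executable
 * breaks the match, exposing the cheat. */
#include <stdio.h>
#include <string.h>

static float mip_lod_bias(const char *exe_name) {
    if (strcmp(exe_name, "quake3.exe") == 0)
        return 1.5f;   /* push sampling toward lower-detail mip levels */
    return 0.0f;       /* any other app gets honest filtering */
}

int main(void) {
    printf("quake3.exe bias: %.1f\n", mip_lod_bias("quake3.exe"));
    printf("quack.exe  bias: %.1f\n", mip_lod_bias("quack.exe"));
    return 0;
}
```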

nVidia have never admitted to cheating and continue to do so. They even went as far as telling consumers that they had implemented a system to regulate what was a cheat and what was an optimisation, only to turn around and cheat again. Now I've lost all my trilinear filtering in my D3D apps with my FX. Is it just me, or isn't trilinear filtering a VERY old method of filtering which by now is basically not a bonus but a given? EVERY card must do trilinear... and should be able to do so.
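For reference, here's a minimal sketch of what trilinear filtering actually is: a blend of bilinear taps from the two nearest mip levels. The bilinear_sample here is a dummy stand-in for the hardware's bilinear filter:

```c
#include <stdio.h>

/* Hypothetical stand-in for the hardware's bilinear filter tap. */
static float bilinear_sample(int level, float u, float v) {
    (void)u; (void)v;
    return (float)level;  /* dummy value: higher = coarser mip, for demo */
}

static float trilinear_sample(float u, float v, float lod) {
    int   lo   = (int)lod;            /* nearer (sharper) mip level    */
    float frac = lod - (float)lo;     /* how far toward the next level */
    float a = bilinear_sample(lo,     u, v);
    float b = bilinear_sample(lo + 1, u, v);
    return a + frac * (b - a);        /* smooth blend: no mip banding  */
}

int main(void) {
    printf("lod 2.25 -> %.2f\n", trilinear_sample(0.5f, 0.5f, 2.25f));
    return 0;
}
```

The "reduced trilinear" trick nVidia was caught using effectively clamps that blend weight to 0 or 1 except in a narrow band around each mip transition: cheaper, but it reintroduces the banding trilinear exists to hide.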

I am angry. I used to always buy the latest products from both ATi and nVidia, only to find myself no longer willing to purchase the crap coming out of nVidia.

This puzzles me, as it has changed me into sort of a fanboy, I guess... because my constant search for the truth has given me a certain sense of pride in owning an ATi 3D accelerator... (since I keep revealing cheat after cheat after cheat in nVidia drivers).

To be fair, I have tested ATi drivers consistently as well... and have yet to find anything fishy.

To be fair to all consumers around the world, I believe it's time for nVidia to close its GPU doors for good... it will only get worse.

XGI is coming out with what might be a VERY competitive product, which may force nVidia to the number 3 spot (much the same fate 3Dfx faced when the GeForce2 GTS and the Radeon came out back in 2000).

If you don't agree with what I say, then you're probably one of those people who hug their nVidia plush doll each night before going to bed... since from where I stand, this is AS brutally honest as one can get...

Will I ever buy another nVidia product? HIGHLY unlikely...
Do I think nVidia has a chance to build another market-leading product? Hell yes... it's nVidia.
Then why won't I buy another nVidia card?

Because they lied, cheated, blackmailed, hurt and betrayed our community's trust. They completely and totally deceived us enthusiasts and gamers, and they don't seem to want to stop.

Peace.
ElMo
 
ElMoIsEviL said:
XGI is coming out with what might be a VERY competitive product, which may force nVidia to the number 3 spot (much the same fate 3Dfx faced when the GeForce2 GTS and the Radeon came out back in 2000).

They won't.
 
I just want to say (going back a way) that while what nVidia have done may well be the "nature of business", our not standing for it is the "nature of consumerism". Sure, nVidia may do X, Y and Z to improve sales, look good and impress the shareholders, but why on earth do we need to stand for it? Are reviewers working for nVidia? No, they're supposed to be guiding the consumer, so they should look at it from the consumer's perspective and unleash hell on any practice that does not benefit us.

Helping the consumer is the point of reviewers. Without that, they may as well not exist.

Now let us never hear how "it's all right because nVidia are a business" again. If their business is screwing the consumer, then I as a consumer intend not to stand for it... in this case my money has gone to ATI.
 
Wasn't nVidia the one pushing FP32 over FP24 because of its superior IQ? And didn't they also claim that they could do FP32 FASTER than ATi could do FP24?
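To put rough numbers on that precision gap, here's a quick sketch assuming the commonly cited bit layouts (FP32 with a 23-bit mantissa; R3x0-style FP24 with a 16-bit mantissa):

```c
/* Relative precision of FP32 vs FP24, assuming the usual layouts:
 * FP32 = 1 sign + 8 exponent + 23 mantissa bits,
 * FP24 (R3x0-style) = 1 sign + 7 exponent + 16 mantissa bits. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double eps_fp32 = pow(2.0, -23);  /* ~1.2e-7: smallest relative step */
    double eps_fp24 = pow(2.0, -16);  /* ~1.5e-5: about 128x coarser     */
    printf("FP32 eps: %g\nFP24 eps: %g\n", eps_fp32, eps_fp24);
    return 0;
}
```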

I never liked any of nVidia's business tactics, and I went from an S3 Savage2000 to a Radeon LE and have never looked back.
 
Bouncing Zabaglione Bros. said:
jiffylube1024 said:
^ I find this fascinating. I'm curious: how much of the R3x0 design would you say ATI owes to ArtX? If I remember correctly, ArtX was the company that won the Nintendo GameCube contract, which ATI was in the bidding for. Was the GameCube contract the only reason for ATI buying ArtX, or was it deeper than that?

It's not so much the design itself, but when Dave Orton of ArtX joined ATI in the role of running the company, he changed the aims, attitudes and expectations of ATI. He turned the company around into something that was willing and able to jump from building average mass-market cards to building the best graphics cards in the world, more than a year ahead of their competitors at Nvidia.

Ah, I never knew about Dave Orton coming in from ArtX and running the company. The results, however, are night and day. ATI before that era (i.e. before the Radeon 9700 Pro) was a company without much of a focused direction, IMO. A good market follower, but never a leader. From the Rage II cards to the Rage Pro to the first two Radeon generations, ATI seemed good at releasing acceptable cards, but never seemed capable of being a market leader.

However, since the Radeon 9700 Pro, they have done a lot to prove they are capable of being a market leader. They have been good students of Nvidia's climb to the top, if you will ;). And Nvidia has succumbed to their own hubris (for the time being). Hopefully both companies reload for a competitive '04!
 
XGI is coming out with what might be a VERY competitive product, which may force nVidia to the number 3 spot (much the same fate 3Dfx faced when the GeForce2 GTS and the Radeon came out back in 2000).

If you don't agree with what I say, then you're probably one of those people who hug their nVidia plush doll each night before going to bed... since from where I stand, this is AS brutally honest as one can get...

I am very doubtful that XGI will be competitive with ATI and Nvidia when they release their Volari lineup. If XGI is serious about competing in the video card market, then I can see them possibly having a competitive part in a generation or two, but for now they're definitely on a prove-it-to-me basis.

First of all, their cards are all numbers and paper specs right now. At least ATI and Nvidia have not only working silicon out there, but competitive parts for sale.

What makes XGI so different from ATI in the original Radeon days and before, i.e. always a half-step behind? XGI's dual-GPU Volari looks good on paper, but there are chinks in the armor. Their alleged 3DMark03 score of 5500 is already behind ATI's 9800XT.

Second, how can you be so sure XGI's drivers will be all that when they first come out? It takes a long time to get truly reliable, stable, compatible drivers for a series of video boards.

Third, I think you are seriously overreacting if you believe Nvidia is dead in the water. Not only are they in a much *much* stronger financial position than 3dfx ever was, but they have a much more diverse market. They have an excellent position in the AMD motherboard sector as well.

Moreover, we don't know how NV4x and R4x0 will perform, and it's all up in the air right now. Let's wait and see how NV4x does before we decide that XGI (of all companies!) will overtake them!
 
DaveBaumann said:
When I say there is a lack of public information, I mean there is a disparity in how easily ATI and NVIDIA make information that makes them look favorable available to journalists. For instance, with the 52.16 drivers, the first thing NVIDIA did was mail a whitepaper to their entire press list, so that information was readily available to them, and it makes NVIDIA look good. Now, for instance, how many journalists, or even review readers, know that ATI have already had a basic shader compiler optimiser implemented since Cat 3.6? I knew, because one of the ATI guys here has referenced it, and some of you may have read it in my 256MB 9800 PRO review, but the press at large have no clue because ATI didn't tell us/you about it. Further to that, how many of you knew that ATI can actually do some instruction scheduling in hardware (i.e. it can automatically parallelise some texture and ALU ops to make efficient use of the separate pixel shader and texture address processor)? I'll bet not many. Why the hell aren't ATI telling us about these things?

So, yes, I've already said to Josh that I think there have been too many assumptions in there, but the apparent disparity between the NVIDIA and ATI information in there is partly because ATI just don't inform people about a lot of things.
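(Aside: for anyone wondering what that hardware co-issue means, here's a toy model of the idea, pairing an independent ALU op with a texture fetch so both units are busy in the same cycle. The types and the single-register dependency check are made up for illustration; this is not ATI's actual scheduling logic.)

```c
/* Toy co-issue scheduler: pair an independent ALU op with a pending
 * texture fetch so both units retire work in the same cycle. */
#include <stdio.h>

typedef enum { TEX, ALU } OpKind;
typedef struct { OpKind kind; int dst, src; } Op;

/* 1 if 'b' does not read the register 'a' writes (toy RAW check only). */
static int independent(Op a, Op b) { return b.src != a.dst; }

int main(void) {
    Op prog[] = {
        { TEX, /*dst=*/0, /*src=*/9 },  /* r0 = tex(t9)              */
        { ALU, /*dst=*/1, /*src=*/2 },  /* r1 = f(r2): no dependency */
        { ALU, /*dst=*/3, /*src=*/0 },  /* r3 = f(r0): depends on r0 */
    };
    int n = (int)(sizeof prog / sizeof prog[0]), cycle = 0;
    for (int i = 0; i < n; ++i, ++cycle) {
        if (prog[i].kind == TEX && i + 1 < n && prog[i + 1].kind == ALU
            && independent(prog[i], prog[i + 1])) {
            printf("cycle %d: TEX r%d + ALU r%d co-issued\n",
                   cycle, prog[i].dst, prog[i + 1].dst);
            ++i;  /* both ops retired this cycle */
        } else {
            printf("cycle %d: %s r%d alone\n", cycle,
                   prog[i].kind == TEX ? "TEX" : "ALU", prog[i].dst);
        }
    }
    return 0;
}
```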

I haven't seen much of interest in nVidia's whitepapers all year long; at least, not information I judged sufficiently free of biased propaganda to be tempted to take it seriously. Too much of nVidia's "whitepaper" jargon seems little more than convenient marketing PR written to confuse issues which are, at their essence, pretty darn simple. Compared to R3x0, nV3x's shader hardware is simply lacking.

It actually matters little what software "optimizations" nVidia undertakes with nV3x, or where they undertake them; they are simply not enough to overcome the disparity between the R3x0 and nV3x architectures at the hardware level. This is the fact that nVidia wishes to obscure and confuse, if possible, and this is the intent of the "whitepapers" the company creates. There is no real intent to "educate" anybody; the intent is to explain away their performance deficits to anyone willing to listen to any excuse they can manufacture. The only thing that has surprised me about it this year is the people more than willing to lend such empty excuses an authority and weight which grossly exceed any possible "educational" value they might ever have. I mean, I find it odd in the extreme that people trumpet nVidia's latest tactic, that of blaming the deficit on poor compiler optimization (never mind that it was entirely nVidia's fault it was "poor" to begin with), and speak of the fact that nV3x shader engine performance is now only 200%-300% behind R3x0, when it was 300%-500% behind, as though this were some kind of validation of "what nVidia's been saying." Despite a solid year's worth of "optimizations" of all descriptions, nV3x still plays second string to R3x0. That is about the only solid fact nVidia has "validated" for me this year.

The fact is that unless nVidia starts publishing whitepapers which simply state that nV3x is not competitive with R3x0 from a purely architectural standpoint, nVidia simply has nothing to say that is worth reading, as what the company does manage to say is nothing but a "blame game" in which everybody is at fault except nVidia. I think someone would have to be pretty thick to accept nVidia's whitepapers as anything more than utterly self-serving and defensive.

And that's why ATi has needed to say very little about its optimizations, compared to the reams and reams of virtually useless and apologetic info coming from nVidia, intended to "clarify" that their performance deficit is due to software issues (which are moving targets and change constantly to fit whatever situation nVidia needs to "explain" away), instead of exposing that the root of nVidia's current performance deficits (and the deficits it has suffered all year) is the inferiority of its nV3x architecture when compared to R3x0. A factual "whitepaper" would explain why this is so... :)

In short, is it really "helpful" to anyone to know that nVidia has an "answer" to explain nV3x shortcomings? Such answers, in that they provide no relief of any kind, cannot possibly be helpful, it seems to me. Rather, I think what is important for everyone to realize and accept is that as long as nV3x remains the "best" that nVidia can field against R3x0, nothing is going to fundamentally change or shift, regardless of how many "whitepapers" nVidia publishes. Conversely, an increase in whitepapers from ATi would not broaden their lead one little bit. What nVidia needs is a competitive architecture, not further whitepapers, IMO. I know that some willingly confuse the "world as it ought to be according to nVidia" with the world as it is, but fortunately, I am not one of them... :D
 
If Doom 3 had been released in H2 2002 or H1 2003 as originally expected, the 5800 might have looked pretty good to the eye of "Mr Joe Consumer"...
Let's not forget that there is an NV3x path built into Doom 3, with all the technical quality compromises that entails:
texture lookups where ATI does mathematically correct calculations (which Carmack has stated to be clearly better than texture lookups), and built-in lower precision where ATI is doing FP24 (and again, Carmack has said the higher precision looks better, if only slightly), plus probably others, though those two are bad enough.
And that's without even considering what the drivers will do to it...
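To make the lookup-vs-math point concrete, here's a toy comparison: an exact specular power via pow() next to a small baked lookup table standing in for a texture fetch. The table size and exponent are made up for illustration, nothing to do with id's actual shaders:

```c
/* Exact per-pixel math vs a baked lookup table (the "texture" path).
 * Illustrative only: 32 entries and a power of 16 are arbitrary. */
#include <math.h>
#include <stdio.h>

#define LUT_SIZE 32
static float lut[LUT_SIZE];

int main(void) {
    for (int i = 0; i < LUT_SIZE; ++i)      /* bake pow(x, 16) once */
        lut[i] = powf((float)i / (LUT_SIZE - 1), 16.0f);

    float x = 0.83f;                        /* e.g. N.H for a highlight */
    float exact  = powf(x, 16.0f);          /* "math" path              */
    float approx = lut[(int)(x * (LUT_SIZE - 1) + 0.5f)]; /* "texture"  */
    printf("exact %.5f vs lookup %.5f\n", exact, approx);
    return 0;
}
```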

I just refuse to buy the idea that GeForce FXes will really be beating ATI at Doom 3.
Maybe they'll get a few more fps, but Carmack himself has stated that the FX is doing less, so who cares if NV3x gets those few fps more?

It's like if Valve cut down the effects load in the mixed mode of HL2 enough that the FX beat ATI in fps.
Who cares, when we all know that the FX is doing less work and the ATI card renders it better?

I guess the answer is: Joe Consumer, who doesn't realise that the FX is doing less work at lower quality... :rolleyes:


Anyway, if only there had been threads like this (as in, this much known for sure, rather than pure conjecture with little or no proof of the relative performance) way back when the FX line first came out and I was trying to figure out whether to buy ATI or NV...
It would have saved me all those months of reading between the lines of reviews, public statements etc. before I was sure that the R300 was an utterly fantastic chip and NV was in trouble.

The best thing about the R300 is that the design of the pipeline is so elegant that ATI had enough room within their transistor budget to go brute force too.

We've seen that the 4-pixel-pipeline 9600 Pro, even running at a lower clock speed than the 5900 Ultra, can beat that card in the real DX9 shader-heavy situations where the elegance of the pipeline design should be the main factor.
This proves outright that the ATI pipeline is the more elegant of the two.
 
As has been mentioned many times, a key determinant of D3 performance will be accelerating shadow volumes. Nvidia have incorporated Carmack's zpass/zfail stenciled shadow volume techniques by including 2-sided stencil testing & depth clamping in HW.
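In GL terms, the setup looks something like this. A sketch using the GL2-style separate stencil entry points; the vendor extensions of the day (EXT_stencil_two_side etc.) differed slightly, and entry points would be fetched at runtime:

```c
/* Sketch of a zfail ("Carmack's reverse") shadow-volume stencil pass
 * using two-sided stencil and depth clamp. Shape only, not engine code. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

void setup_shadow_volume_pass(void) {
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_DEPTH_CLAMP_NV);   /* depth clamp: no near/far clipping */
    glDepthMask(GL_FALSE);         /* volumes write stencil only...     */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* ...no color */

    /* zfail: back faces increment on depth fail, front faces decrement;
     * two-sided stencil lets both be drawn in one pass with culling off. */
    glDisable(GL_CULL_FACE);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
}
```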
 
I have trouble seeing the association between what you talk about and what Dave said, WaltC. It is demonstrable that there is a tangible result of the lack of specific information released by ATI and the relative abundance of specific information (however PR-centric and technically distorted) released by nVidia, and that is what Dave is talking about... your discussion of your personal take on whether the information is necessary for you doesn't say anything that changes that, or seem to succeed in making your opinion applicable to what Dave was addressing, AFAICS. He wasn't talking about your reaction to such information.

On that note, stevem's comment on 2-sided stencil seems to underscore what Wavey is saying fairly well.
 
arrrse said:
We've seen that the 4-pixel-pipeline 9600 Pro, even running at a lower clock speed than the 5900 Ultra, can beat that card in the real DX9 shader-heavy situations where the elegance of the pipeline design should be the main factor.
This proves outright that the ATI pipeline is the more elegant of the two.

Helloooo, I never said NVIDIA hardware was better than ATI's.

I said it looked better in Doom 3 to the eyes of Joe Consumer. That inclusion of "Joe Consumer" also meant I disregarded ALL IQ issues besides major ones, which simply do not exist in Doom 3 for NVIDIA hardware except in fanatics' fantasies.
Even Carmack said the difference wouldn't be visible unless you tried to find it, so if you want to quote your hero, at least quote him right ;)

Without wanting to be harsh, although I'm sincerely tempted to be due to your nickname, please do not bash people's opinions just because it's "fun". Thank you.


Uttar

EDIT: At Doom 3's time of release, I doubt the advantage will be anywhere near as big as NV30/R300 or NV35/R350. Which is why NVIDIA's bet hasn't paid off at all.

P.S.: ULE ETA: 48 hours.
 
stevem said:
As has been mentioned many times, a key determinant of D3 performance will be accelerating shadow volumes. Nvidia have incorporated Carmack's zpass/zfail stenciled shadow volume techniques by including 2-sided stencil testing & depth clamping in HW.
ATI has the same in the R3x0 and RV3x0 parts.
 
OpenGL guy said:
stevem said:
As has been mentioned many times, a key determinant of D3 performance will be accelerating shadow volumes. Nvidia have incorporated Carmack's zpass/zfail stenciled shadow volume techniques by including 2-sided stencil testing & depth clamping in HW.
ATI has the same in the R3x0 and RV3x0 parts.

Depth Clamping too? I was only aware of two-sided stencil...
 
OpenGL guy said:
stevem said:
As has been mentioned many times, a key determinant of D3 performance will be accelerating shadow volumes. Nvidia have incorporated Carmack's zpass/zfail stenciled shadow volume techniques by including 2-sided stencil testing & depth clamping in HW.
ATI has the same in the R3x0 and RV3x0 parts.
Maybe stevem was thinking of depth bounds (UltraShadow) instead of depth clamp. Depth bounds is, btw, only supported by NV35+.
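For reference, depth bounds is exposed through EXT_depth_bounds_test; usage is roughly as below (a sketch; the entry point is fetched at runtime in real code):

```c
/* Depth bounds test (EXT_depth_bounds_test): fragments whose stored
 * depth falls outside [zmin, zmax] are skipped, so shadow volumes only
 * touch the screen region the light can actually reach. */
#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

void limit_volume_to_light(GLclampd zmin, GLclampd zmax) {
    glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
    glDepthBoundsEXT(zmin, zmax);  /* window-space depth range of light */
}
```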
 
Third, I think you are seriously overreacting if you believe Nvidia is dead in the water. Not only are they in a much *much* stronger financial position than 3dfx ever was, but they have a much more diverse market. They have an excellent position in the AMD motherboard sector as well.

Oh, I do not believe I am overreacting at all, as I never stated that nVidia were, as you say, "dead in the water".

I am simply mixing a bit of philosophy, as well as the laws of nature, into these current-day events.
What I mean is, "they" say that history repeats itself. And by all means it has been doing so ever since NV3x was pitted against R3xx. The entire chain of events which led up to these two pieces of silicon, as well as the events we see today, can ALMOST all be matched up to events we saw between nVidia and 3Dfx.

I could possibly lay out my analogy (careful with analogies, though).


Back in the day, nVidia released a card which we all know as the GeForce 256. This was the first true GPU. Their main competitor, 3Dfx, were still stuck with their aging Voodoo3 lineup, which was completely outmatched by the GeForce.
Soon after, 3Dfx announced the VSA-100 line (Voodoo4 4500, Voodoo5 5500 and Voodoo5 6000). When those became publicly available, nVidia counterattacked with the release of the GeForce2 GTS... which again completely outmatched the VSA-100 and, at the time, 3Dfx's best offering, the Voodoo5 5500 (the 6000 was never publicly released).

Soon after (a little under a year later), 3Dfx went out of business because they did not have the financial backbone to sustain the company (among other business practices).

History repeats itself today.

ATi released the R300 (Radeon 9700 Pro), which completely outmatched nVidia's GeForce4 Ti lineup. When nVidia made available its counterattack in the form of the GeForceFX 5800 Ultra, ATi counterattacked with the Radeon 9800 Pro... and the rest, well, is history.

nVidia won't go bankrupt; I never said that... but to assume that they will regain the position of undisputed power they once held is illogical.

Logic suggests they may become the market leader once more, wearing the performance crown, within the next 2 product cycles... and their next product might VERY well come close to ATi's upcoming offering, the R420.

Add to this the hatred that has sprung up against nVidia as they continue to lie to and cheat their customers, and things do not bode well for the once-mighty king.

Only time will tell... ironically, I said these VERY same words when things looked grim for 3Dfx.
 