State of 3D Editorial

Saist said:
bloodbob said:
It wouldn't have to have FP16, only FP32.

Nvidia are trying to raise the minimum precision requirement to FP32 because it is VERY likely that the R420 will still only have FP24 precision, therefore Nvidia can advertise they are the only DX9.1 compliant product.

Aside from the fact that there is yet to be any proof of any kind that there will be a Microsoft DirectX 9.1, and counting the fact that Microsoft DirectX 9.0b already covers Pixel and Vertex Shaders 3.0, as well as covering 32-bit precision (although it is not a required part of the spec while 24-bit is), and tossing onto that the fact that the Microsoft DirectX team has stated publicly that there will be no updates to the DirectX standard until the time of Longhorn...


Makes me wonder, Bloodbob, where exactly you're pulling this out of?

Well, first things first: I never said there was going to be a DX9.1. I was replying to someone (I can't find the post now) who said that DX9.1 / PS3.0 was going to require FP32. I said that the obvious reason behind this, if it was true, is that we know Nvidia has been pushing for FP32 all along.

Now of course a company says there will be no more updates, but what happens if they find something that REALLY stuffed up the standard at a late stage? Would they leave the standard completely broken? I doubt it. And at least on a binary level DX9 is going to be updated, because DX9.0b does not yet support PS3.0 with HLSL.
 
My very first post in this forum :)

As a computer enthusiast with a weak technical background (which is to say I don't understand much of what you people are discussing), I also seem to be on a level with the author of this article (who, even though he might be wrong in a lot of his statements, did manage to catch my interest).

He really does make a very good point, and gave me an answer I've been looking for, for quite some time. Whether he has his facts wrong about 16, 24 or 32 bits doesn't really concern people like me (although I do understand what it means theoretically); we want a kickass gfx card at a low price :rolleyes:.

Anyway, you all say he's pro-nVidia, and that's the interesting part: what I got out of the article is the exact opposite. But maybe I'm looking at it objectively. Basically, what I got out of the article is that all current nVidia products will not be able to compete with ATi products in speed unless you introduce more advanced software, which would ultimately mean none of them would run with satisfactory results (though nVidia would suck slightly less).

So he says, basically, for now - buy ATi, and maybe next time nVidia, if they do perform better at that time.

I have a close friend (nVidia homeboy) who went and bought a brand new FX card when he burned his 4400 card. Now he's living on hope that future drivers and such will bring out the hidden power he seems to think that card possesses. Now according to that article, that isn't going to happen - hardware restrictions and a complicated yet stupid (slow) way to use current specifications.

I didn't see anyone argue this, so I'm assuming this is more or less correct?
 
JoshMST said:
I didn't like it when they did this, and I certainly didn't appreciate being lied to and misled. However, that is the nature of the business. I think we have all learned a significant lesson here. It is these companies' business to sell chips, and they will try to sell them any way they can. In the end such behavior is counterproductive, and it earns distrust from consumers. But these companies are betting that by the time everything comes out, they actually have real fixes and optimizations to give to the end user to help ease the pain. NVIDIA has done this to a degree, and they are still selling cards to users. And these users are in turn still buying these products.

Tell that to Kyle at [H] :rolleyes:
 
JoshMST said:
NVIDIA will not change the way they do business even though it is well known that they cheated. Yes, NVIDIA took this a step above what ATI did in the past with its 8500. ATI corrected their "optimizations" as NVIDIA is correcting theirs now. All I am saying is that nobody has their hands clean. SiS takes massive shortcuts and optimizations with their 3D products, but nobody talks about that (mostly because nobody really cares). It is the way it is. Not trying to make excuses, not trying to say it is ok, I am just saying that is the way it works. Do we have to like it? No. Do we have to deal with it? Yes. Companies have done this in the past, and they will do this again in the future. If they think they can slip this by users and reviewers, then they will.

If you call the DX9 wrapper a correction, well, so be it. The bottom line is that the workload is being reduced (not to mention precision and IQ) just to keep up in performance.

SiS... they admitted first-hand that their shader engine was 50% software. The mipmap level issue has been heavily discussed as well, and I never heard from them anything near the junk NVIDIA's PR has tossed around.

Lying is just plain wrong... there is no middle ground there. If their PR is so good, why haven't they put out an official statement on why the NV3x architecture is not fully DX9 compliant? That alone would explain the issues, and all those "cheats" would have been treated less harshly by everyone. I personally couldn't care less whether the hardware supports DX10 right off the bat, but the way a company markets a product is also very important, because I hate being lied to about anything.
 
atrox- said:
Anyway, you all say he's pro-nVidia, and that's the interesting part: what I got out of the article is the exact opposite. But maybe I'm looking at it objectively. Basically, what I got out of the article is that all current nVidia products will not be able to compete with ATi products in speed unless you introduce more advanced software, which would ultimately mean none of them would run with satisfactory results (though nVidia would suck slightly less).

I didn't see anyone argue this, so I'm assuming this is more or less correct?

I believe people were disputing that claim in this thread :?:
 
Here is my point of view, for atrox-.

Standards are made with a set minimum quality; the people who create those standards are happy with the results that are produced. Some people think those standards are set too low and others think you could get away with less, but the line has to be drawn somewhere, and that is why a standard is created: to satisfy at least most people. Now Nvidia believe that somehow 24 bits isn't enough, yet they feel it is necessary to replace shaders that use a minimum of 24 bits with ones that use 16 or less.

Nvidia has been systematically breaking those standards. Each time Nvidia drops the subjective quality of an image by, say, 1%, it doesn't matter all that much, but they do it again and again. Each time we are satisfied that the image doesn't look too different, but after a while what we are satisfied with is nowhere near what we should have expected. The sooner we take a stance, the sooner we get what we were advertised. If you know the truth and you don't mind paying for the product, go ahead and buy it; but when we are misled about a product and we buy it, I'm not happy.
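
To make this concrete, here is a minimal, purely hypothetical HLSL sketch (my own example, not code from any game or driver) of the substitution being described. Under PS2.0, the default float types must be carried at FP24 or better, while half explicitly requests partial precision, which can be as low as FP16; silently treating the first version as the second is exactly the kind of quiet quality drop I'm talking about.

// Hypothetical example: both entry points compute the same specular term.
// The first uses full precision (FP24 minimum under the PS2.0 spec); the
// second explicitly asks for partial precision (FP16 minimum) via 'half'.
float4 SpecFull(float3 n : TEXCOORD0, float3 h : TEXCOORD1) : COLOR
{
    float s = pow(saturate(dot(normalize(n), normalize(h))), 32.0);
    return float4(s, s, s, 1.0);   // full-precision result
}

half4 SpecHalf(half3 n : TEXCOORD0, half3 h : TEXCOORD1) : COLOR
{
    half s = pow(saturate(dot(normalize(n), normalize(h))), 32.0);
    return half4(s, s, s, 1.0);    // FP16 throughout: faster on NV3x, lossier
}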
 
atrox- said:
My very first post in this forum :)
Welcome. :)

Anyway, you all say he's pro-nVidia, and that's the interesting part: what I got out of the article is the exact opposite. But maybe I'm looking at it objectively. Basically, what I got out of the article is that all current nVidia products will not be able to compete with ATi products in speed unless you introduce more advanced software, which would ultimately mean none of them would run with satisfactory results (though nVidia would suck slightly less).
This is a sticking point for me. What do you mean by "advanced"--something that uses more of the FX's DX9"+" features? But how would this help nVidia, using longer and more complex floating-point shaders, when they can't even match ATi's performance with current limited DX9 shaders? Not to mention it appears (per ShaderMark and HL2) that nV still hasn't exposed as much DX9 functionality as ATi (FPRT). I don't view that back-handed compliment as favoring ATi at all, and I certainly don't consider concluding ATi is the best choice in the high- and mid-end segments as necessarily biased, based on aggregate review results.

Having kept up with the R300+ and NV30+ on B3D's forums since their releases, I thought Josh's article was peppered with inaccuracies and some explanations that flirted with apologies (despite his protest to the contrary). I don't think I'm perceiving Josh's article from a biased position, but rather a more informed one. I also want to make it clear I don't view Josh as biased at all, and I respect him and usually look forward to his "State of 3D" editorials. This time, however, I think the news he presented was both old and incorrect or misinformed. I note that he's willing to make the extra effort to update it with corrected info, though, and I expected nothing less of him.

Josh, I agree that the article could use some updated information, and I'm glad to see you're willing to make the effort. Showing up here for the abuse is going above and beyond, though, so for that I tip my hat to you. :) I look forward to your update.

BTW, I agree with andy that the tone here can get overly and unnecessarily hostile. I'm particularly baffled when it happens to other reviewers who actually take the time to register and post here. I hope andy's post resonates with some people, and we can have more civilized and constructive conversations, rather than self-indulgent borderline-flame-fests.
 
atrox- said:
My very first post in this forum :)
And you are very welcome.

atrox- said:
Basically, what I got out of the article is that all current nVidia products will not be able to compete with ATi products in speed unless you introduce more advanced software, which would ultimately mean none of them would run with satisfactory results (though nVidia would suck slightly less).
Right now, things seem to indicate the opposite: ATI cards are the cards performing well with next generation software and are 'only' highly competitive with older, DX7/8 class games.

Of course, I work for ATI so I would say that, but I think that this reputation of the R300-series is very well deserved.
 
geo said:
All this "what's wrong with NV?" stuff really boils down to about 80% of the answer being "ATI". I think NV never believed in their hearts that ATI could do something like that, and got so focused on benchmarking their future products against their past products. I feel 99% sure that when they first sat down to talk about what 5800 was going to have to be able to do, that there was *never* a moment when one NV guy turned to another and said "Of course it will have to be 3x as fast as the current generation in AA to be competitive".

I disagree.
If NVIDIA hadn't ****ed up so badly with the NV30, maybe it still wouldn't be competitive in certain respects such as AA speed, as I am not aware of ANY design which could beat the R300 in that area, but it would most certainly have been a winner in other categories where it currently loses.

What "killed" NVIDIA was their optimism in relation to their own part. Not their pessimism in relation to ATI's part.

Once again, I'm of the "NV30 is a badly designed and executed finesse design while the R300 is a stunningly well designed and executed brute force design" school - but feel free to disagree.

All AFAIK and IMO, of course :)


Uttar
 
Hey this is also my first post on this site.

I was actually sent here from another site's forum and can't really believe what I've been reading. We were trying to have a decent little discussion about just this same subject, so I figured that I would get some good info here.

Could you please point me to some hard data behind a lot of the conjecture you have been spouting? I mean, when I answer a question with a simple "Yes/No", I never expect to be taken seriously, but that's a lot of what I see in this thread: a lot of "It's like this", but no "because X=Y and Z/R=P". I have read that everyone thinks ATI should document more technical info on their products to the public, and I have to wonder: if those documents don't exist, then how can anyone prove or disprove anything stated about ATI's products? Are we just supposed to take the word of everyone who works at ATI?

Anyways, as a computer programmer I never try to confuse hardware with software. There are times that I have written good code that runs very well, but when I ported it to another architecture it didn't show the increase I wanted. I see that when I hear that Nvidia's product has all these new complex additions but doesn't run well at all. I have taken a $150,000 Sun server and brought it down below a P2-300 in performance with some well-placed crappy code and basic assumptions that didn't pan out. I had an excuse: I had never compiled or ported any code from x86 to that architecture; I was just messing with it. Nvidia doesn't have that excuse; they should have done some more research and possibly held back their product so they could make changes to the architecture to be more competitive. But then they wouldn't have been competitive at all and possibly would have stumbled so badly that they would be stuck with their motherboard sales and eventually could have stopped being profitable. I would hate to think that there was a chance that Nvidia could have been kicked out of the video card market. Ouch! I would hate to think what ATI would have done if they no longer had to compete with Nvidia.

Anyways, I'm tired and want to go to bed, but I hope I pissed a bunch of people off with this post and that it sparks some real technically backed-up responses instead of the drivel I've been seeing posted in this forum.

Laterz
 
There are a lot of technical responses on this forum - your post was not the best way to go about looking for them. I'd suggest you start with the search button and look for any PS2.0 shader test; there is a fairly obvious trend amongst all of them.

When you say "should we just take ATI's word on it" - no, of course you shouldn't, but much of what hasn't been said publicly has been proven independently, and they really should be communicating these things better. However, the problem is that many people are taking NVIDIA's word for it without much evidence to back it up. Assertions such as "the FX is designed for longer shaders" are taken at face value; however, longer shaders are just going to compound the FX's shader performance issues, not make them better.

We ran an article comparing NV30 to R300 prior to the release of NV30, based on the technical documentation that NVIDIA was sending to developers. Reading that documentation, it appeared NV30 would be the "all mother" of shader architectures. R300 was already available at that point, and we had a lot of input from ATI because the first draft was severely lacking in knowledge of R300 - and even after that, the article still favoured NV30. And yet look at the situation now: R300 has proven itself to be a far more capable architecture in a great range of shader situations.

Sure, you can always make good tech look bad with some fairly disastrous coding; however, with the fairly limited tasks (in comparison to CPUs) that DX9 is designed for, it does actually seem that R300 "just works", going by the majority of shader code we've seen so far. The window in which you can make the FX architecture "look bad" is much wider than that of the R300.
 
arken420 said:
Hey this is also my first post on this site.

[...]

Anyways, I'm tired and want to go to bed, but I hope I pissed a bunch of people off with this post and that it sparks some real technically backed-up responses instead of the drivel I've been seeing posted in this forum.

Laterz

:oops:

Errr, it isn't our fault if most of what we say is based on already discussed and argued facts in other threads :p
Plus, you've got to understand that a lot of this is based on reliability. Some people here, including Dave, have proved that when they say something non-public, it's often accurate.

But to respond to a few more precise points...
Are we just supposed to take the word of everyone who works at ATI?

No, but Dio, for example, is heavily involved in writing developer papers about improving shader performance on ATI hardware. OpenGL Guy works on the D3D drivers and the driver side of the AA implementation, IIRC. And so on.

Those people generally know WTF they're talking about, and they generally have no reason to lie to us.

Same thing goes for the NVIDIA personnel lurking on internet forums illicitly *grins* - too bad that since they can't officially speak for NVIDIA, nobody ever takes them seriously. A source of mine regularly posted *correct* information on forums, generally not very juicy stuff, and everyone always said he was lying or something.
If I had said the exact same thing, everyone would have said it made sense.

Just shows you how big a part reliability plays in trusting information.

I really wish NVIDIA personnel would finally receive authorization to post on forums. The way it currently works - a small paragraph in a barely related contract forbidding them to post - is pretty lame.

The lamest part, though, is that I know for a fact this has already been used at least once against employees.

Nvidia doesn't have that excuse; they should have done some more research and possibly held back their product so they could make changes to the architecture to be more competitive. But then they wouldn't have been competitive [...]

NVIDIA's original NV30 ETA: Spring 2002
Available on store shelves: Spring 2003

Trust me, it was delayed enough already.

I see that when I hear that Nvidia's product has all these new complex additions but doesn't run well at all. I have taken a $150,000 Sun server and brought it down below a P2-300 in performance with some well-placed crappy code and basic assumptions that didn't pan out. I had an excuse: I had never compiled or ported any code from x86 to that architecture; I was just messing with it.

I don't quite understand what you're trying to say there. Are you saying NVIDIA's architecture is very complex to program, but that if an ideal situation is given for a specific result, you can achieve good performance?
If that's the case, let me tell you right away that compared to ATI, even NVIDIA's ideal cases are generally very far behind in Pixel Shader programs.

Sure, cases where NVIDIA beats ATI in complex programs do exist, but they're significantly rarer than the opposite - and "significantly" might in fact be an understatement *grins*

and that it sparks some real technically backed-up responses instead of the drivel I've been seeing posted in this forum.

Once again, there are people on this forum who pretty much know the technical justifications behind what other people are saying. And that might partly be why joining this community isn't particularly easy, as with many other technical communities :)


Uttar
 
Wow, that's a much better response to an otherwise very uninformed post. I agree that I'm not the best person to be on this thread trying to prove a point, but I've got to start somewhere. Look at it this way: I'm an unbiased third party trying to make everyone look at these things in a new light. I can see that the performance between the two cards is grossly different, downright nasty in some programs. What does that mean? Clearly we are seeing that the Nvidia card is capable if the right software is used, correct? Take that at face value. If the FX card is only good at DX8 stuff then fine, but give it that much credit. Don't worry about what Nvidia is claiming, since all manufacturers lie about some things - maybe lie is harsh, bend the truth...

Anyways, my whole point here is that the architecture between the two cards is very different; they go about things very differently. We can see that, and everyone agrees. ATI, having had the benefit of inputting to M$ what DX9 should look and act like, has an upper hand in that respect. OpenGL is different though, in that it hasn't changed dramatically in a while, with the exception of the extensions. Also, does the current HLSL hamper the FX's chances to compete in DX9 games? From everything I have heard it does; it again is not well suited to the FX's architecture, correct? If that's the case - that DX9 is just not well suited to the FX - then what is? Has anyone looked at what this card can do in anything other than the game industry? With huge shader programs and higher-precision programs, can there be a good reason to use this card? I believe that to fully understand anything with computers you have to sit down and program it. That's really how I learned my way around when I was much younger, and I think it still holds true today. I'll put it this way: I won't say another word about this until I've had a chance to sit down with an ATI and an Nvidia card, write some DX and OpenGL stuff for them, and see how they perform.

Thanks for a wonderful reply. Most people I've seen talking about this have been downright rude when responding to posts like mine.

Laterz
 
arken420 said:
If that's the case - that DX9 is just not well suited to the FX - then what is?

More accurately, the FX is not well suited to DX9. Subtle, but key, difference.
 
arken420 said:
Also, does the current HLSL hamper the FX's chances to compete in DX9 games? From everything I have heard it does; it again is not well suited to the FX's architecture, correct? If that's the case - that DX9 is just not well suited to the FX - then what is? Has anyone looked at what this card can do in anything other than the game industry? With huge shader programs and higher-precision programs, can there be a good reason to use this card?

It is true that HLSL originally didn't favor NV3x cards at all - quite the opposite, in fact. But with the ps_2_a profile for HLSL, as introduced by DX9.0b, things have gotten much better IMO.
Furthermore, the Detonator 50s are relatively smart when it comes to reordering instructions and reducing register usage.
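
For illustration only - this is my own hypothetical snippet and command lines, not anything from Josh's article or from NVIDIA - the same HLSL source can be pointed at different compile targets, and the ps_2_a target is the one that produces NV3x-friendlier output:

// Hypothetical shader, for illustration. With the DX9 SDK compiler you can do:
//     fxc /T ps_2_0 /E main shader.fx    (baseline PS2.0 output)
//     fxc /T ps_2_a /E main shader.fx    (the extended, NV3x-oriented profile)
// The ps_2_a target lets the compiler use the extended caps and schedule the
// code so fewer temporary registers are live at once, which matters on NV3x.
float4 main(float3 n : TEXCOORD0, float3 l : TEXCOORD1, float3 v : TEXCOORD2) : COLOR
{
    float3 nn   = normalize(n);
    float3 hv   = normalize(l + v);
    float  diff = saturate(dot(nn, normalize(l)));
    float  spec = pow(saturate(dot(nn, hv)), 16.0);
    float  c    = diff + spec;
    return float4(c, c, c, 1.0);
}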

Performance-wise, even using the NV3x profiles and Cg, NV3x performance compared to the R300 using the ARB2 path is still pretty bad.

Actually, if you accept using FX12 nearly everywhere, the NV30 vs R300 battle we had back in the day was a mixed bag even in complex shaders, I believe. Problem is, the DX9 standard was FP24, and nothing lower than FP16 is tolerated in PS2.0.

The NV35 "fixed" that by reaching peak IPC numbers in FP16 mode, with FX12 not being faster except perhaps by 0.5% due to latency, and I'm not really sure of that either.
But the problem then is that the clock advantage NVIDIA had has greatly diminished: it was 500 vs 325, now it's 475 vs 412.
That was originally a 54% advantage; it's now a 15% advantage.

And average FP32 IPC is still not equal to the NV30's average FX12 IPC, while the register usage penalties (that is, the more registers are used, the lower the performance) are obviously bigger with FP32.

It is, however, true that the NV3x architecture can be very nice in certain niche workstation markets, since it's still more flexible than the R3xx (unless certain ATI driver developers get around to exposing the F-Buffer in OpenGL & Direct3D :p) - just look at the success of the Quadro FX 500, 1000, 2000 and 3000. But if ATI hardware had the extra instruction slots, or used its F-Buffer to emulate them, it would still be significantly faster.

Of course, NVIDIA has other minor advantages in niche markets requiring extreme flexibility, such as their dynamic branching support in the Vertex Shader. All of these are excellent reasons why NVIDIA is still the uncontested leader of the workstation market. (Although ATI's design win with SGI does seem to be a move in the right direction for them.)

Once again, considering how overly ambitious the original NV30 design was, and how badly they failed at executing that design, I hardly find the NV3x's problems surprising.


Uttar
 
arken420 said:
Don't worry about what Nvidia is claiming, since all manufacturers lie about some things - maybe lie is harsh, bend the truth...

I'll start with this, since it's actually what this discussion is about. Josh has admitted that he's had a lack of input from ATI and much input from NVIDIA - this thread is highlighting that the few who have had more input from ATI can see that this is quite obvious from Josh's article. Given the responses I've had from Josh privately, he has now gained much better contacts at ATI, both from this thread and through better PR contacts, which should give him a much more balanced perspective on what is occurring. Ultimately that may not do anything to change his opinions on what will actually happen, and that's fine, as long as he has had better input from both parties and he articulates why he feels that is the case (and knowing Josh, I'm sure he will).

Again, this is what’s trying to be addressed in this thread in particular.

But on to your other points…

What does that mean? Clearly we are seeing that the Nvidia card is capable if the right software is used, correct? Take that at face value. If the FX card is only good at DX8 stuff then fine, but give it that much credit.

And people do (although, even then, ATI's R300 architecture has been proven to outperform the FX architecture on numerous occasions, even with a clock speed deficit). However, people aren't saying this is all the FX is good at - obviously stencil performance is another big case that goes in the FX's favour, but the single key title that will highlight this just isn't here yet (no doubt, much to NVIDIA's chagrin). But then, perhaps there is also a case for saying that the R300 is just a little more balanced for the range of tasks required of it?

ATI, having had the benefit of inputting to M$ what DX9 should look and act like, has an upper hand in that respect.

Suggestions are taken from all IHVs. It's not the case that MS and ATI were closeted away and designed DX9 together.

Also, does the current HLSL hamper the FX's chances to compete in DX9 games? From everything I have heard it does; it again is not well suited to the FX's architecture, correct? If that's the case - that DX9 is just not well suited to the FX - then what is?

Arguably, in some cases it's not that suitable for the R300 either, but the R300 just handles these things better (as I highlighted with the texture ops earlier in this thread). There are other cases where HLSL compiles to macros that could be better handled by native instructions, but the driver compilers implemented by both NVIDIA and ATI are designed to spot these and handle them natively.
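
Purely as an illustration (my own toy shader, not code from either IHV's compiler): an intrinsic like normalize() is typically emitted as a short multi-instruction sequence in the PS2.0 assembly, and a driver-side shader compiler can recognise that pattern and substitute whatever the hardware does natively.

// Illustrative only: normalize() generally compiles to a dp3/rsq/mul sequence
// (or the nrm macro-op); a driver compiler that spots the pattern can map it
// onto a native normalise, if the hardware has one.
float4 main(float3 n : TEXCOORD0) : COLOR
{
    float3 nn = normalize(n);              // candidate for native replacement
    return float4(nn * 0.5 + 0.5, 1.0);    // remap [-1,1] into displayable [0,1]
}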

Has anyone looked at what this card can do in anything other than the game industry? With huge shader programs and higher-precision programs, can there be a good reason to use this card?

What about "horses for courses"? If your primary target market is games, then surely that's what you should design for? And that's predominantly what we are talking about here.
 
Uttar said:
Of course, NVIDIA has other minor advantages in niche markets requiring extreme flexibility, such as their dynamic branching support in the Vertex Shader. All of these are excellent reasons why NVIDIA is still the uncontested leader of the workstation market. (Although ATI's design win with SGI does seem to be a move in the right direction for them.)

Personally I would contest that. IMO, the workstation market is even slower to wake up to shaders than the game market is, due to its reliance on OpenGL. Even though Cg has been around to some extent, and OpenGL 1.5 has now ratified shader extensions and a shading language, I suspect that it won't be until OpenGL 2.0 is finalised and we see widespread hardware adoption of it that the workstation market will evolve a little more rapidly.
 
Clearly we are seeing that the Nvidia card is capable if the right software is used, correct?

What do you mean by "right software"? Tailored to the FX architecture, like the special code path in HL2? Or software not using too many DX9 features, or none at all? I remember Nvidia boasting about "moving the industry forward"...

If the FX card is only good at DX8 stuff then fine, but give it that much credit.

Well, the GF4200 was good at DX8 too, so the FX being good at DX8 is hardly something to be excited about... And who were the ones gloating about "the Dawn of cinematic rendering", and how 3DM2K3 was a bad benchmark since it included too many DX8 effects... Nvidia hyped the DX9 capabilities of its cards, and got burned badly.

ATI, having had the benefit of inputting to M$ what DX9 should look and act like, has an upper hand in that respect.

I wish people would stop implying that ATI somehow "unfairly" got an advantage by being favored by MS, or that they had better luck with a "snapshot". Companies don't start >$300 million R&D programs on a whim. They all have a plan, and it seems Nvidia's plan was to force-feed Cg to the industry and reintroduce a near-proprietary standard in 3D graphics. That's called hubris, and it backfired pretty badly on them. The irony is that ATI was so successful because it adopted exactly the same tactic as Nvidia had when battling 3dfx: stay away from proprietary stuff, and try to run the industry standard better than the competition.

Has anyone looked at what this card can do in anything other than the game industry? With huge shader programs and higher-precision programs, can there be a good reason to use this card?

Well, those qualities are probably very interesting for the workstation market (Quadro FX series), but the problem is that the GFFX family (from the 5200 all the way to the 5950) is marketed as gaming cards - and as cards suited for DX9 games at that. There is clearly a problem there.

Very long shaders (over the current DX9 limit) like those you mention are not currently possible in a real-time game, for either ATI's or Nvidia's offerings. For the kind of shaders we will see in this generation of hardware and games, the ATI offering is much, much better. So is a higher shader instruction limit an advantage for Nvidia (discounting the F-Buffer for the moment)? Sure, but not one which has any relevance to the gaming market, which is where those cards are sold.

OpenGL is different though, in that it hasn't changed dramatically in a while,

For the gaming market, OpenGL does not really matter. Carmack matters, which is a subtle but important difference. If Carmack were to start programming in DX9, then for all purposes the OpenGL gaming market would be dead (well, there's also Serious Sam). If you want to make a successful gaming card, you'd better make sure that it performs very well under Direct3D.

And unless I'm mistaken, OpenGL has changed a lot too.
 
arken420 said:
Wow, that's a much better response to an otherwise very uninformed post. I agree that I'm not the best person to be on this thread trying to prove a point, but I've got to start somewhere.

Common practice when "starting somewhere" is to do a lot of lurking and reading of threads to get a feel for the atmosphere of the forum. If you jump in with both feet as you have, you'll tend to get dismissed by those who have gone over this ground time and time again over the last few months.

That's probably why you are seeing abrupt responses that you consider rude. Some people probably consider that you are also being rude, by barging in and not reading around the place first.
 