Daliden said:
Well "excuse me very many" for not being a native speaker and therefore not knowing the correct terminology
. With elaborate programs (
not "elaborate shader instructions") I simply meant, for example, that nVidia supports branching in its shader programs, and that's not found from ATi, nor any other hardware that has shader support of some sort, now is it? That's where the "usual" came from. If by "sub-par" you mean "slow as hell", yeah, then it is sub-par. But you are also saying that it is actually not the case that you can write more elaborate (yes dammit, feel free to offer me a better word to use there) shader programs on nV3x? Of course they cannot be used in realtime games, and that should have been obvious to nVidia engineers from simulations. So, that really does beg the question "was there more behind choosing CineFX as the name than just empty marketing rhetoric?".
There are as many theories circulating about the nV3x architecture (e.g., "non-traditional," or "zixels," or--you name it) as most of us have fingers and toes...
What I'd like to know is this: for what purpose might you want to write a "more elaborate" shader program of the type being hypothesized?...
Is elaboration for elaboration's sake a good thing? I can see how it might be a slow thing, but whether it is a "good" thing is, I think, yet to be determined... I guess I'm just weary of hearing fanciful tales spun about an architecture no one really knows anything about, simply in order to justify it (I don't think you are doing that, btw.)
And what's with the condescension? As if the "Pixar-like rendering" meme hasn't been around for years. I haven't read nVidia's PR about Dawn; I watched the demo a couple of times once the ATi wrapper came out. Pretty enough, I guess, but it's basically just textures. I want light and shadows, dammit!
Actually, it was 3dfx that started the "cinematic, Toy Story" brand of PR associated with 3d gaming back in 1999--and you are exactly right, it's as old as the hills. I guess I'm weary of hearing it...
I must admit that I don't see your logic here. I specifically mentioned "2-in-1" so that it could be said that nVidia designed nV3x to be both a 3D gaming chip and a 3D workstation chip. To me, this seems the only rational explanation for the performance we're seeing. OK, so they guessed wrong, and nobody wanted to adopt Cg instead of RenderMan . . .
My logic is simple and factual--most of nVidia's chips in the last few years have been sold into both the "professional" segment and the "gaming" segment. nV3x is no departure. The only difference between the two versions has been the drivers and software packages included, as well as the price. ATi, for instance, sells R3x0 as FireGL (I believe)...and it is just as capable for the "professional" as nV3x is--but it's much faster at full-precision 3d *games* than nV3x. Ergo, it does not follow that a capable 3d "professional" chip *must be* slow at full-precision 3d gaming.
Some chips are, however, slow at 3d gaming--such as 3Dlabs' offerings--but 3Dlabs products are not sold into the 3d gaming markets, either. They are squarely aimed at the professional markets, as the company's pricing and marketing clearly indicate.
The rational explanation for nV3x's slow full-precision gaming performance is abundantly clear--it's a poorly designed DX9/ARB2 full-precision chip *in comparison to* R3x0. We need not invent fanciful tales about "off-line" rendering to explain it.
Perhaps I should have said "2-on-1" instead--2 chips on one board? ATi uses the same circuitry for everything, be it games or workstation use. But that would not exactly be the case with nV3x, would it? In gaming, the FP32 units would lie dormant and many of the shader features would remain unused. In workstation use, it would be the FX12 and FP16 units that would be left aside (all this speculation still relates to the rosy world of nVidia's dreams from a couple of years back).
In professional 3d rendering work, most especially in ray tracing, the nV3x fp32 units would be useless for final output. You need the CPU for that. The best you could say is that, for software that supports it, you might use fp32 for *preview* work--but not for final output.
Uh, of course they aren't marketing it as a DX8 chip. That would be nVidia from some parallel universe. But what does the current marketing have to do with what the design intentions were back then?
Back when...as in back when they found out about fp24 in DX9? The problem is that nobody put a gun to nVidia's head and said "You must include fp32 or else," did they? It was an elective choice they made, just as ATi made its choices. Nothing prevented nVidia from designing an fp24 chip, except nVidia.
Secondly, if nVidia had to depend on the "professional workstation" market for its high-end 3d chips, they'd be out of business rather quickly, or wind up being bought by Creative Labs...
Obviously, their intent was to design a chip for the 3d gaming markets which they could also market in a higher-priced package to the professional markets--just like they've always done with Quadro. There is nothing I can see that would possibly make me think anything different.
If we want to talk about marketing, let's talk about the marketing when the first FX cards were released. At least to me, it seemed an awful lot like "the few cards we can get available are selling like hotcakes to Hollywood studios, where they are used to make movies," or something like that. It had this image of an all-powerful card that would be almost ridiculous to use for mere gaming. Of course, when that didn't pan out at all, now they market it for gaming as much as possible.
OK, I see your problem...
You were blinded by the PR blitz about the "dawn of cinematic computing" that preceded the first nV30 product availability by several months (before it was cancelled)...
Man, and they say PR doesn't work...Honestly, I never, ever got out of their PR the message that you did...
What I got out of it was nV3x was supposed to bring "Hollywood" to 3d gaming. Heh...
I was very skeptical then, and with much justification, it turns out.
One more thing: I'm not advocating anything here; I'm just standing on a soapbox, trying to advance my own pet theory about what happened to nVidia.
That's fine--except it's simply not the most obvious explanation. Why does nV3x look so bad at DX9/ARB2 precision? Because of R3x0, of course, and nothing else whatsoever. If not for R3x0, no one would be discussing why it was "so slow" in those areas, because it would be the "fastest" thing going. In short, if not for R3x0, this discussion would not be taking place...
If the R3x0 is not considered by some to be as good on a "professional" basis as nV3x, the reason would have nothing to do with fp32--at all--since you can't use it to do ray-tracing--Heh...
--or any other of the incredibly resource-intensive things off-line special-effects production work entails. The reason would have to do with the specific nVidia-architecture optimizations that developers have built into their rendering software for nVidia's OpenGL drivers. It certainly would not be because of the hardware.