State of 3D Editorial

Discussion in 'Graphics and Semiconductor Industry' started by PaulS, Oct 30, 2003.

  1. PaulS

    Regular

    Joined:
    May 12, 2003
    Messages:
    481
    Likes Received:
    1
    Location:
    UK
    Saw this over at NvNews:

    State of 3D Editorial

    Nothing really new there, but seems like a reasonable overview of the situation. Interestingly, they mention that nVidia didn't attend the early DX9 meetings, and they also downplay the effect registers have on performance of GPUs (including the NV3x). Your thoughts?
     
  3. Hellbinder

    Banned

    Joined:
    Feb 8, 2002
    Messages:
    1,444
    Likes Received:
    12
    This page is even worse...

    http://www.penstarsys.com/editor/tech/graphics/nv_ati_dx9/nv_ati_dx9_3.htm

    This guy may well work for Nvidia. As far as I can tell it is not a very accurate overview of either technology, and it goes out of its way to gloss over and candy-coat issues dealing with Nvidia.

    Someone out there, I don't know who... perhaps Humus? Or Hanners... needs to write up one of these articles for ATi. Because about once a year this site or one of the others posts a vaguely disguised Nvidia apologist article.
     
  4. CorwinB

    Regular

    Joined:
    May 27, 2003
    Messages:
    274
    Likes Received:
    0
    My thoughts? Yet another attempt to rewrite history, and a funny rehashing of Nvidia's "But MS is baaad to us" arguments (using the term loosely here).

    Seems the author didn't even bother to check what the actual DirectX specs are (perhaps he got his information first-hand from Team DX?). There is no such thing as asking for FP32 with DirectX: there is standard precision (24 bits or more) and partial precision (the PP hint). The author is so busy making excuses for Nvidia's bad design that he doesn't even grasp the technical aspects.
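    The precision tiers being argued over can be illustrated numerically. A minimal sketch in Python: the mantissa widths below (10 for FP16, 16 for the DX9 FP24 minimum, 23 for FP32) are the real explicit-mantissa sizes, but the quantizer only mimics each format's precision, not its exponent range, so treat it as illustration rather than hardware emulation.

    ```python
    import struct

    def quantize(x, mantissa_bits):
        """Round a Python float (FP64) to the given number of explicit
        mantissa bits, keeping FP64's exponent range. Only the precision
        of FP16/FP24/FP32 is mimicked here, not their range limits."""
        if x == 0.0:
            return 0.0
        bits = struct.unpack('<Q', struct.pack('<d', x))[0]
        keep = 52 - mantissa_bits          # FP64 has 52 explicit mantissa bits
        half = 1 << (keep - 1)
        bits = (bits + half) & ~((1 << keep) - 1)  # round to nearest
        return struct.unpack('<d', struct.pack('<Q', bits))[0]

    x = 1.0 / 3.0
    e16 = abs(quantize(x, 10) - x)   # FP16: 10 explicit mantissa bits
    e24 = abs(quantize(x, 16) - x)   # DX9 FP24 minimum: 16 bits
    e32 = abs(quantize(x, 23) - x)   # FP32: 23 bits
    print(e16 > e24 > e32)           # each tier is strictly more precise
    ```

    The point of the DX9 rule is visible here: anything at 16 mantissa bits or better satisfies "standard precision", and the PP hint merely permits dropping to the FP16 tier.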

    All the rest of the "technical" content is the same, putting a positive spin of "flexibility" on the overly complicated NV3x design and saying it builds a good "foundation for the future" (I'm sure that people who have forked out $500+ for an NV3x are happy that they didn't buy a good-performing video card, but rather a "foundation for the future").



    Can anyone spell "revisionism" here ?

    So was Enron... I'm sure consumer fraud is something shareholders appreciate...

    Welcome back to 1998, when real-time trilinear was far away...

    The whole article looks like it was written by D. Perez, with the techno-babble from D. Kirk...
     
  5. olivier

    Newcomer

    Joined:
    Aug 8, 2002
    Messages:
    78
    Likes Received:
    0
    Location:
    Trois-Rivieres, Quebec
    This is a pro-Nvidia article. He makes some good points, but a lot of mistakes, like claiming Nvidia's pixel shader is half of ATI's but able to reach 75% of ATI's performance...

    An NV30 at 500 MHz with 4 pixel pipes is equivalent to an ATI R300 at 250 MHz with 8 pipes... and the 9700 non-Pro destroys every Nvidia card in pixel shaders.
     
  6. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    I've just been speaking to Josh about a few points (only really addressed the conclusion so far). Basically, there are a number of assumptions in there because there is a lack of public information from ATI.
     
  7. CorwinB

    Regular

    Joined:
    May 27, 2003
    Messages:
    274
    Likes Received:
    0
    Is it a lack of public information regarding the specifics of their architecture, or regarding supposed deals with MS?

    The article could be divided into 4 parts:
    1) Bad MS is out to get poor Nvidia
    2) The NV3x hardware is "elegant" and "flexible", ATI is "brute-force"
    3) Cheat-"optimizations" are good for you, you are too stupid to notice the degradation anyway, and the good people at Nvidia had to do this because of their nasty shareholders
    4) Nvidia has an edge for future designs

    1) is laughable. It sounds like the "snapshot" theory of Dr. D. Kirk, implying that companies make investments of hundreds of millions of dollars based on "ideas" and a "snapshot". A different theory says that Nvidia was feeling very powerful with their market share after having finally killed 3dfx, and thought they could bully MS into making DX what they wanted.

    2) Here the author has a point: the NV3x hardware looks like it's better suited for workstation rendering (better quality at the expense of speed, longer shaders...). The problem is that those good qualities don't help at all in real-time games, which is what the GFFX series is sold for.

    3) is downright insulting. I wish people like the author would understand that when I'm forking out $500+ on a video card and I ask it to perform at full quality, then I'm expecting full quality...

    4) The author may or may not have a point here. I've seen many similar theories in the past, some of which held true, some of which didn't.
     
  8. Hellbinder

    Banned

    Joined:
    Feb 8, 2002
    Messages:
    1,444
    Likes Received:
    12
    Dave, there is a "lack of public information from ATi"?? I could have sworn it was exactly the opposite: that ATi has been very open about their architecture, while Nvidia has not been open and has frankly misrepresented theirs on more than one occasion. Heck, Sireric has practically done an autopsy on the R300 pipelines in these forums for us.

    I specifically have much bigger issues with his information and opinions/conclusions about Nvidia. Of course he also has some issues with his ATi information as well, some of which are troubling, like his assumptions about what FP24 is qualified for and what it will run into problems with.

    Again, another example of someone who in one breath says "FP24 will run into limitations in the future" but in the next claims FP16 for Nvidia is a better forward-looking solution... with a better "foundation" for future games than ATi's. :roll:
     
  9. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    There are too many problems in the article for me to address...
     
  10. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    When I say there is a lack of public information, I mean there is a disparity in how easily ATI and NVIDIA make information that casts them favourably available to journalists. For instance, with the 52.16 drivers the first thing NVIDIA did was mail a whitepaper to their entire press list, so that information was readily available to them, and it makes NVIDIA look good. Now, for instance, how many journalists, or even review readers, know that ATI have already implemented a basic shader compiler optimiser since Cat 3.6? I knew, because one of the ATI guys here has referenced it, and some of you may have read it in my 256MB 9800 PRO review, but the press at large have no clue because ATI didn't tell us/you about it.

    Further to that: how many of you knew that ATI can actually do some instruction scheduling in hardware (i.e. it can automatically parallelise some texture and ALU ops to make efficient use of the separate pixel shader and texture address processors)? I'll bet not many. Why the hell aren't ATI telling us about these things?
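    The hardware co-issue being described (pairing independent texture and ALU ops into one slot) can be sketched with a toy scheduler. This is purely illustrative Python with a made-up instruction encoding, not ATI's actual mechanism:

    ```python
    def coissue(instrs):
        """Pair a texture fetch with an immediately following ALU op that
        does not read the fetch's result, so both can issue in one slot.

        instrs: list of (unit, dest, sources) tuples, unit in {'tex', 'alu'}.
        Returns a list of issue slots, each holding one or two instructions.
        """
        slots, i = [], 0
        while i < len(instrs):
            cur = instrs[i]
            nxt = instrs[i + 1] if i + 1 < len(instrs) else None
            if (cur[0] == 'tex' and nxt is not None and nxt[0] == 'alu'
                    and cur[1] not in nxt[2]):
                slots.append([cur, nxt])   # independent: co-issue both
                i += 2
            else:
                slots.append([cur])        # dependent or no partner
                i += 1
        return slots

    prog = [
        ('tex', 'r0', ['t0', 's0']),   # fetch into r0
        ('alu', 'r1', ['v0', 'v1']),   # does not read r0 -> pairs with fetch
        ('alu', 'r2', ['r0', 'r1']),   # consumes the fetch result
    ]
    print(len(coissue(prog)))          # 2 slots instead of 3
    ```

    Only adjacent, provably independent pairs are fused here, which keeps the toy scheduler trivially dependency-safe; the real point is just that an ALU op with no data dependence on an outstanding fetch costs nothing extra.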

    So, yes, I've already said to Josh that I think there have been too many assumptions in there, but the apparent disparity between the NVIDIA and ATI information is partly because ATI just don't inform people about a lot of things.
     
  11. Hellbinder

    Banned

    Joined:
    Feb 8, 2002
    Messages:
    1,444
    Likes Received:
    12
    Interesting..

    Of course there is also the problem of whether the information Nvidia is doling out is accurate. But I digress. Obviously, if you feel that ATi needs to do a much better job with public communication, then there can be no doubt that they do. :)
     
  12. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,079
    Likes Received:
    648
    Location:
    O Canada!
    I think I've fairly regularly told them their communication in PR and marketing stinks.
     
  13. JoshMST

    Regular

    Joined:
    Sep 2, 2002
    Messages:
    465
    Likes Received:
    18
    Ok, I am ready to be brutalized!

    This is Josh from PenStar, and please do not believe me to be an apologist for NVIDIA. I have been trying to get info for this article for quite some time, but I guess that I have been looking in the wrong places! Obviously I should have come here, because Dave tells me there are quite a few ATI engineers here who could have really helped me understand exactly what is going on with ATI's technology. I am honestly just looking for good information, and then trying to pass that information on to the readers in a way that is easy to understand. Not all of us are engineers and programmers, so some of the things we read and try to understand don't always stick well.

    That being said...

    I have emailed back and forth with Dave discussing some of the problems with the article (which I knew there were some, but because I wasn't getting answers, I had no real choice but to post it as is). Note the editors note at the beginning that asks for clarification from readers if they have any.

    My conclusions, while I do stand beside most of them, I think I made with some wrong information. I hope I made it clear that I do not think that NVIDIA WILL get back the performance crown, but rather that now that the DX9 VS/PS 3.0 specifications are well known, both companies are playing on a much more level field. My original thought was that NVIDIA would be better prepared due to the higher level of functionality of their current chips, but in speaking with Dave, it becomes a bit more apparent that the leveling comes from both sides having a solid understanding of the PS/VS 3.0 specifications while their chip designs are still in the early stages. I will continue to study everything I can about the two architectures, and I am very open to people writing me and letting me know what I did wrong/right/or was just confused about.

    As for Microsoft and NVIDIA: my point was not to say that MS screwed over NVIDIA, but rather that NVIDIA apparently took themselves out of the loop, and much of the groundwork that was laid for DX9 did not have a lot of input from NVIDIA. I think that is all NVIDIA's fault! It wasn't like MS excluded them from helping to flesh out the DX9 spec. NVIDIA didn't show up for their own reasons, and perhaps they were trying to pressure MS into accepting NVIDIA's own vision of what DX9 should be.

    So please, help me to understand where I am mistaken or confused. I have no problems updating the article, or even re-writing the entire thing if my information and conclusions are so mind-numbingly bad.
     
  14. TheTaz

    Newcomer

    Joined:
    Sep 17, 2003
    Messages:
    7
    Likes Received:
    0
    I'll just re-paste what I posted in the nvnews forums, regarding this article:

    -----------------------

    Though he tried to use words like "I'm not excusing nvidia for blah blah blah"... the whole article still seemed to wander in that direction.

    Are we supposed to feel sorry for nVidia trying to "bully" their own DX 9 version, and having that "backfire"?

    Are we supposed to say "You 'optimised' with questionable ethics, but that's ok, we understand you have to 'snow-job' your investors".

    Are we supposed to REALLY believe that NV3x is "future-proof" because it can do FP32 and ATi can't??? (If NV3x could do FP32 at a decent speed, that MIGHT be believable... but it can't).

    Anywho... it was still an interesting read.

    Taz
     
  15. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    Dave,

    The problems go far beyond that. Some examples:
    Hint: False.
    Hint: The R3x0 runs everything at FP24 so performance with FX12 or FP16 data is identical to performance with FP24 data.
    Hint: Why should the number of vertex shader registers matter at all when talking about pixel shader performance? Also, it's the number of internal registers that was in question, not the number of exposed registers. Anyway, say that the author is correct that the FX 5900 has 16 temps and the R3x0 has 32 temps, isn't that a 2x difference? The same factor that he said was wrong earlier? Why is it that test programs show that reducing register usage in the pixel shader is very critical for performance on GeForce FX boards?
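    Why internal register usage is so critical can be shown with a toy latency-hiding model: a fixed-size register file is shared by all pixels in flight, so heavier register use leaves fewer pixels available to cover texture-fetch latency. The numbers below are invented for illustration and are not real NV3x (or R3x0) parameters:

    ```python
    def pixels_in_flight(register_file_size, temps_per_pixel):
        """Toy model: with a shared register file, the number of pixels the
        chip can keep in flight (to hide memory latency) falls as each
        pixel's shader uses more temporary registers. All figures here are
        hypothetical, chosen only to show the shape of the trade-off."""
        return register_file_size // max(1, temps_per_pixel)

    REGISTER_FILE = 256  # hypothetical total temp-register capacity
    for temps in (2, 4, 8):
        print(temps, pixels_in_flight(REGISTER_FILE, temps))
    ```

    Halving register usage doubles the pixels available to hide latency in this model, which is consistent with the test programs mentioned above showing register count mattering far more than the exposed limits suggest.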

    The author also claims that Gunmetal 2 uses "PS1.1/2.0" when there isn't a single PS 2.0 shader in the game!

    I could go on for hours...
     
  16. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,894
    Likes Received:
    344
    Location:
    PA USA
    He said the conversion from FX12 to FP24. I think you should clarify whether the reason there is no penalty for the conversion is that it just takes the value and pretends it is in FP24, or something similar, i.e. that it is not a real conversion.

    Like how converting a number from base 10 to hex takes time.
     
  17. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    What's that saying about being half pregnant?
     
  18. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    All conversions are free because any cost is pipelined out if necessary. And, yes, they are real conversions. For example, a lookup into a 32-bit RGBA texture is converted to floating point before giving the value to the pixel shader.
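    The texture-lookup conversion described above is the standard unsigned-normalized-to-float mapping. A minimal sketch of what a texture unit does with one RGBA8 texel before the shader sees it (illustrative arithmetic only, not any vendor's hardware path):

    ```python
    def unorm8_to_float(c):
        """Map an 8-bit unsigned-normalized channel (0..255) onto
        [0.0, 1.0], as a texture unit does before handing the value
        to the pixel shader."""
        return c / 255.0

    texel = (255, 128, 0, 64)                        # one RGBA8 texel
    rgba = tuple(unorm8_to_float(c) for c in texel)
    print(rgba[0])   # 1.0: 255 maps exactly to full intensity
    ```

    Because the division is fixed-function work done per fetch, its cost can be hidden in the pipeline, which is the sense in which such a conversion is both "real" and "free".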
     
  19. Reverend

    Banned

    Joined:
    Jan 31, 2002
    Messages:
    3,266
    Likes Received:
    24
    Because, truth be told, these sorts of things fly over 99% of people's heads. By "people", I'm not talking about 3D enthusiasts like many here. By "people", I mean those that really matter: the folks that buy video cards.

    We can explore the technology behind every single chip as deeply as we are able, because that's what we (B3D) like to do, but the truth really is that whenever we publish a review/preview/article/etc., 99% of the folks that stumble upon our content will just look at the charts and the parts of our conclusion that they understand ("Is it good or better than the other card? That's all I want to know!").

    Of course, it wouldn't hurt ATI one bit to take that one further step in educating media outlets... but then again, most media outlets don't bother to understand 3D anyway and just regurgitate whatever is given to them.

    ps. I'm not in a particularly pleasant mood atm.
     
  20. Razor04

    Newcomer

    Joined:
    Oct 24, 2003
    Messages:
    121
    Likes Received:
    0
    That paragraph was the thing that caught me most off guard. I don't pretend to know anywhere near as much about graphics cards as other people here, but why would the R420 have to have FP32 and FP16? Where is the requirement that says FP16 is a must? I thought there was only a minimum precision, so if the R420 supports FP32 only, there would be no reason for FP16 if it runs well using FP32. Just thought I would chime in with my 2 cents before writing up my solid mechanics lab. :(
     