Carmack's comments on NV30 vs R300, DOOM developments

Discussion in 'Architecture and Products' started by boobs, Jan 30, 2003.

  1. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    I was expecting that but from Andy.

    It's not my fault I project well. It's 'leadership qualities' you know - i.e. I'm tall and have a very loud voice. :)
     
  2. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
In defense of Bioware, at the time, ATI only had their command-line shader extension. Now that they have added the ATI_text_fragment_shader extension, Bioware has a much nicer codepath available for supporting ATI graphics cards. I think ATI really screwed up with their original ATI_fragment_shader extension.
     
  3. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
GL_ATI_text_fragment_shader is only available on Mac platforms, so the situation hasn't changed. I sense that this extension has a very low priority and will probably never be supported on Windows.
     
  4. Hyp-X

    Hyp-X Irregular
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,170
    Likes Received:
    5
    Nit: It's command-based (or procedure-based). Command-line means something else. ;)
     
  5. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
Morrowind was released as a DX8 title and guess what... shiny water, no issues. A good example of why I'm starting to prefer DX titles.
     
  6. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
Interesting. I guess that changes the situation somewhat, but I can still see why somebody would have supported the NV shader extensions and not the ATI shader extensions (no reason now... but with the 8500's...).
     
  7. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
Really? GL_NV_register_combiners is much more confusing and inconvenient than GL_ATI_fragment_shader ever was. Smack on GL_NV_register_combiners2 + GL_NV_texture_shader{2|3} and it's even worse.
     
  8. antlers

    Regular

    Joined:
    Aug 14, 2002
    Messages:
    457
    Likes Received:
    0
I think Chalnoth might have been referring to other-than-technical reasons (such as market share) for Bioware to have preferred the NVidia extensions.
     
  9. g__day

    Regular

    Joined:
    Jun 22, 2002
    Messages:
    580
    Likes Received:
    2
    Location:
    Sydney Australia
Interesting reading the cover interview between Reverend and John Carmack on Beyond3D today. So JC's view, reading page 2, is that the NVidia drivers really aren't fully tuned by a long shot.

Does anyone have a guess how much longer it will take them to iron out the major performance bottlenecks - days, weeks or months?
     
  10. Sherlock

    Newcomer

    Joined:
    Jul 29, 2002
    Messages:
    25
    Likes Received:
    0
    Location:
    Colorado, USA
    What exactly does that mean for FX users?
Does anyone follow what JC's "significant improvements with future drivers" refers to in terms of ARB2 performance? Are massive 40-50% performance gains coming across the board, or 4-5%? Are we only talking about drivers not yet optimized for DX9? Or, like any other driver, do they just need tweaking here and there to ensure compatibility with all games/APIs? What's with the generic and almost misleading statements?
    :roll:
     
  11. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    Well, we've had this discussion earlier in the thread. Here is my attempt at a synopsis.
     
  12. THe_KELRaTH

    Regular

    Joined:
    Dec 9, 2002
    Messages:
    471
    Likes Received:
    0
    Location:
    Surrey Heath UK
    I would have thought he's just relaying what Nvidia have told him after he questioned the poor performance.
     
  13. Sherlock

    Newcomer

    Joined:
    Jul 29, 2002
    Messages:
    25
    Likes Received:
    0
    Location:
    Colorado, USA
Thanks demalion, that's a bit clearer. Carmack's statements still show nothing definitive. It's going to take a lot of work by Nvidia, and a lot of working with developers, before what Carmack is saying means anything at all. Is there a tangible guarantee we'll ever see what he's talking about in practice?
     
  14. Reverend

    Banned

    Joined:
    Jan 31, 2002
    Messages:
    3,266
    Likes Received:
    24
My opinion is that John had a whole lot of NV goodness during Quake3 development work. He probably got (most of) what he asked for from NV during that time, and NV probably responded the quickest. I believe NV delivered most of what they promised to John during that time. That is probably why John appears to continue (i.e. from GF1 to GF2 to GF3 to GF4 to GFFX) to give NV the benefit of the doubt wrt DOOM3 and the NV30. What I read from John's .plan and his answers to my questions is simply that, no more - giving NV the benefit of the doubt.

I honestly think all John wants is to have DOOM3 run the best, all things considered. I doubt he'd ignore the fact that the Radeon 9700 Pro became commercially available 6 months ago.
     
  15. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, Humus, that may be your opinion, but which one did you learn first? Often people tend to prefer the one that they program with first. And please note that this has purely to do with the programming interface, not the underlying assembly code.

    Anyway, I was going directly off of John Carmack's statements that ATI's programming path is messier than nVidia's (w/ respect to the GF3/R8500 extensions).

And no, I wasn't talking about market share at all. In particular, I find that it could be a fair bit more cumbersome to support both types of extensions than it would be if the extensions were much more similar in structure.
     
  16. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    From what he's said in the past, I really believe that he'd like to drop NV30-extension support if he has the chance (assuming he doesn't move to any HLSL, which seems unlikely at this juncture). He seems to prefer industry-standard extensions by quite a bit over proprietary extensions. For example, the NV30 vertex program extension obviously offers a fair bit more functionality than the ARB vertex program extension, so why doesn't he use the NV30 extension as well? I really don't think it has much to do with programming ease as far as he's concerned, but more of a "purist" standpoint to 3D coding.

    Anyway, if nVidia brings the ARB2 performance up to the NV30 fragment program extension performance, I really think that JC will drop the NV30 fragment program support (I doubt the performance can actually be identical, but it seems possible that it will get close).
     
  17. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Chalnoth...

    So, um, did you ask Carmack which path he coded for first, GF3 or Radeon 8500? Why do you apply this restriction to Humus, and not Carmack?

In any case, I don't believe Carmack criticized the ATI fragment extensions at all. If anything, it was the other way around. IIRC he favored the nVidia vertex program extensions over ATI's, was agnostic about the vertex array extensions, and favored the ATI fragment extensions.
     
  18. Bjorn

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,775
    Likes Received:
    1
    Location:
    Luleå, Sweden
    I think you hit the nail on the head there.

Carmack's .plan, Feb 11 2002:

    And for the fragment extensions:

     
  19. Thowllly

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    551
    Likes Received:
    4
    Location:
    Norway
    What he is talking about there is the hardware of the GF4 vs. the 8500 (the fragment shaders themselves), not the software interface to access the hardware (the fragment shader extension).

    He is saying that the fragment shader of the 8500 is better than the fragment shader of the GF4. He is not saying anything about the fragment extensions. The reason the GF4 can't sample more than 4 textures in a pass is not because NVIDIA made a worse fragment extension than ATI, it's because NVIDIA made less flexible hardware than ATI.
     
  20. Nagorak

    Regular

    Joined:
    Jun 20, 2002
    Messages:
    854
    Likes Received:
    0
    Why would it matter to him? He doesn't play any games and his "game" (and I use the term loosely) isn't out yet.
     