NVIDIA Stepping Back from Cg?

Discussion in 'Graphics and Semiconductor Industry' started by Dave Baumann, Aug 5, 2003.

  1. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    I wonder if MS finally agreed to pluggable backends to their HLSL compiler?

    That, and I wonder who's going to be spearheading common HLSL between DX and OGL?
     
  2. Ostsol

    Veteran

    Joined:
    Nov 19, 2002
    Messages:
    1,765
    Likes Received:
    0
    Location:
    Edmonton, Alberta, Canada
    I'd bet on a compiler that can compile DX9 HLSL to OpenGL shader assembler and OpenGL HLSL to DX9 shader assembler. Perhaps that'll eventually get into RenderMonkey (which hasn't been updated in a while).
     
  3. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    OpenGL doesn't use assembly as its high-level language, or as its low-level language, so you'd have to develop a "translator" from DX HLSL to OpenGL HLSL.
     
  4. Ostsol

    Veteran

    Joined:
    Nov 19, 2002
    Messages:
    1,765
    Likes Received:
    0
    Location:
    Edmonton, Alberta, Canada
    True, but it could still potentially be compiled down to an assembly language.

    On reflection, however, I think I might be missing your original point about an API-independent HLSL entirely. . . I'm guessing that you were implying an API-independent HLSL that does not compile down to any assembly code, but to the binary machine code that the GPU utilizes directly (like what OpenGL HLSL does and D3D HLSL can do).

    Is this correct? If so, then I'm guessing that it won't happen any time soon -- especially if OpenGL and DirectX shader capabilities start to differ significantly. An example would be if one API starts putting depth and stencil operations in the programmable pipeline, or unifies vertex and fragment shading, significantly before the other API does.
     
  5. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    I can't imagine them diverging architecturally too wildly, since the same hardware has to run both. Or at least it does in 99% of the hardware out there. (Excepting a few crazy super-high-end professional cards, everything needs to be DX-capable.)
     
  6. Socos

    Newcomer

    Joined:
    Feb 23, 2003
    Messages:
    48
    Likes Received:
    0
    IMHO the whole Glide thing is exactly what [N] was going for. 3DFX did well until they moved away from their specific API, but if 3DFX hadn't done it, the developers would have made them. Devs program for the widest audience, but here I am not telling you anything you didn't already know.
    The developers smacked 'em before they even got started this time. I just don't know how [N] can swing it with HLSL, given their PS 2.0 performance is so dismal. I hear the Vidiots saying how the Det 50s will work miracles with PS 2.0, whatever... You can't make up that much of a performance delta with drivers.

    /end Rant. :shock:
     
  7. Hanners

    Regular

    Joined:
    Jul 12, 2002
    Messages:
    816
    Likes Received:
    57
    Location:
    England
    Perhaps this means that NV40 is going to be a little bit less 'unique' (read 'hard') to program for? I always got the impression that one of the main reasons nVidia pushed Cg so hard was because it was pretty much the only way to wring decent performance out of NV3x cards.
     
  8. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    ARB_fragment_program is just about assembly-level...
     
  9. bloodbob

    bloodbob Trollipop
    Veteran

    Joined:
    May 23, 2003
    Messages:
    1,630
    Likes Received:
    27
    Location:
    Australia
    Yes, but in OpenGL 2.0, ARB_fragment_program will be legacy old stuff; it won't support the stuff that GLSlang will.

    Okay, now here is a question: does Cg support PS_3.0 and VS_3.0? If not, then the Cg back end would have to be upgraded for those render targets, and I doubt Cg in itself currently has all the features of PS_3.0 exposed, so new functions would have to be added, etc. I could be wrong, and then of course there is simply the fact that it's gonna perform like *#$@ when GLSlang is the render target.
     
  10. [maven]

    Regular

    Joined:
    Apr 3, 2003
    Messages:
    645
    Likes Received:
    16
    Location:
    DE
    There has been talk that the DX9-SDK Summer Update (Beta2 out now) has a new back-end profile which optimises for low register usage and allows ps_2_x (including predication etc.).

    Sound familiar?
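    The register-pressure idea behind such a profile can be shown with a toy scheduler. This is a minimal sketch under invented assumptions -- the instruction tuples and the `max_live` helper are not real ps_2_0 assembly, just an SSA-like model: evaluating an expression depth-first keeps fewer temporaries live than evaluating it breadth-first, which is exactly the kind of reordering a low-register-usage back end would favour.

```python
# Toy model of shader register pressure -- hypothetical instruction tuples,
# not real ps_2_0 assembly. Each instruction is (dest, (src, src)), and each
# temporary is written exactly once (SSA-like).

def max_live(instrs):
    """Peak number of temporaries live at once across the instruction list."""
    defs = {dest: i for i, (dest, _) in enumerate(instrs)}
    last_use = {}
    for i, (_, srcs) in enumerate(instrs):
        for s in srcs:
            last_use[s] = i
    peak = 0
    for i in range(len(instrs)):
        # A temporary is live just after instruction i if it was defined at or
        # before i and is still read after i.
        live = sum(1 for r, d in defs.items() if d <= i < last_use.get(r, d + 1))
        peak = max(peak, live)
    return peak

# The same computation, (a*a + b*b) + c*c, scheduled two ways:
breadth_first = [
    ("t0", ("a", "a")),
    ("t1", ("b", "b")),
    ("t2", ("c", "c")),
    ("t3", ("t0", "t1")),
    ("t4", ("t3", "t2")),
]
depth_first = [
    ("t0", ("a", "a")),
    ("t1", ("b", "b")),
    ("t2", ("t0", "t1")),
    ("t3", ("c", "c")),
    ("t4", ("t2", "t3")),
]

print(max_live(breadth_first))  # 3 temporaries live at the peak
print(max_live(depth_first))    # 2 -- same maths, lower register pressure
```

    NV3x-class hardware reportedly lost throughput as the number of live registers grew, which is why a profile that minimises pressure like this would matter to it in particular.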
     
  11. Mark

    Mark aka Ratchet
    Regular

    Joined:
    Apr 12, 2002
    Messages:
    604
    Likes Received:
    33
    Location:
    Newfoundland, Canada
    So that's it then. nVidia is willing to dump Cg because Microsoft is going to do what Cg was invented for: to overcome the shortcomings of the nV3x architecture.
     
  12. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,532
    Location:
    Winfield, IN USA
    Yup.

    <sigh>

    Well, it's better than making developers code two paths at least....

    <sigh>
     
  13. Ostsol

    Veteran

    Joined:
    Nov 19, 2002
    Messages:
    1,765
    Likes Received:
    0
    Location:
    Edmonton, Alberta, Canada
    Well, if developers pre-compile the HLSL code, then two paths will definitely be used. If, however, it is not precompiled, then all the devs have to do is make sure the low-register-usage profile is used for NVidia cards; otherwise, the HLSL code itself will remain the same for both cards. If the devs want to support PS 2.x as well, that's a reasonable argument for coding an additional render path. In that case it's more of a technology-level-specific path than a vendor-specific path.
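    That scenario needs no second shader path, only a different compile target chosen at load time. A minimal sketch, assuming a hypothetical vendor-detection helper (`ps_2_0` and `ps_2_a` are D3DX target strings for the baseline and extended 2.0 profiles; the detection logic here is invented for illustration):

```python
# Sketch of per-vendor profile selection at load time. The detection logic is
# a simplification for illustration; the HLSL source itself stays identical
# for both vendors -- only the compile target changes.

def pick_profile(adapter_description):
    """Return an HLSL compile target string for the detected GPU."""
    if "NVIDIA" in adapter_description.upper():
        return "ps_2_a"  # extended 2.0 target with NV3x-friendlier register use
    return "ps_2_0"      # baseline PS 2.0 target

print(pick_profile("NVIDIA GeForce FX 5900 Ultra"))  # ps_2_a
print(pick_profile("RADEON 9700 PRO"))               # ps_2_0
```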
     
  14. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    MS is doing its job - they want their code to run best across a wide range of hardware. It's not a good advert for HLSL if it runs exceptionally well on one architecture and very poorly on another.

    I have heard what maven is saying; however, I've heard that they have thus far done it in a fashion that doesn't impact R300 performance, but does make register usage more NV30-friendly.
     
  15. Myrmecophagavir

    Newcomer

    Joined:
    Dec 28, 2002
    Messages:
    136
    Likes Received:
    0
    Location:
    Oxford, UK
    You sound like you think that's bad. That's why we have high-level languages!
     
  16. Popnfresh

    Newcomer

    Joined:
    Mar 8, 2003
    Messages:
    19
    Likes Received:
    0
    History of Cg vs. glslang

    There was a discussion about this recently on the OpenGL.org boards. The OpenGL HLSL proposal predates Cg.

     
  17. Mark

    Mark aka Ratchet
    Regular

    Joined:
    Apr 12, 2002
    Messages:
    604
    Likes Received:
    33
    Location:
    Newfoundland, Canada
    no no, not bad. I just find it difficult to digest. Hardware should be fixed to meet specifications, not the other way around. That's why they're called specifications, after all.
     
  18. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Not to really delve into this again, but we have C compilers that compile down to different machine code for different processors.

    Why? Because there's lots of ways to skin a cat. This is for the betterment of all hardware--it will give IHVs more latitude to make better designs.
     
  19. bloodbob

    bloodbob Trollipop
    Veteran

    Joined:
    May 23, 2003
    Messages:
    1,630
    Likes Received:
    27
    Location:
    Australia
    The problem is that we are all compiling to the same machine code, because the HLSL compiles into pixel shader assembly. Now, it's all right if it optimises the HLSL into pixel shader assembly that runs better on a targeted platform, BUT HLSL IS OPTIONAL! If users choose to write their pixel shaders in assembly, that is where we get a problem, because it will greatly affect the speed in some cases on some platforms. Now, if NVIDIA made their cards run well on 90% of pixel shader instruction combinations, then users who choose to use assembly would be a lot better off, but Microsoft is making it easier for NVIDIA not to do this by producing a new back end for the HLSL compiler.
     
  20. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    The idea is that you don't compile into the arbitrary machine code that MS has devised, but something that fits your hardware more appropriately.

    The OpenGL HLSL mechanism does it this way, and I, for one, think it's the right thing to do.

    This, of course, doesn't alleviate the need for a DX9-assembly-to-whatever-micro-op compiler, but it's a great step forward in allowing creativity in design for the forward-looking case.

    Edit: Speaking speculatively, of course. I'm not sure whatever changes are in the works for DX9 allow this.
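    The two compilation models being contrasted here can be sketched side by side. Everything below is a stub with invented names (no real D3DX or GL entry points): the DX-style path funnels HLSL through a fixed intermediate assembly before the driver translates again, while the glslang-style path hands the high-level source straight to the driver.

```python
# Stub contrast of the two shader compilation models -- all names invented.

def hlsl_to_dx9_asm(source):
    """Stage 1 (DX model): lower HLSL to a fixed, vendor-neutral assembly."""
    return ["dx9_asm(" + line.strip() + ")" for line in source.splitlines()]

class ToyDriver:
    """Stand-in for an IHV driver back end."""

    def dx9_asm_to_native(self, asm):
        # Stage 2 (DX model): re-translate the fixed assembly into micro-ops,
        # trying to recover whatever the intermediate form obscured.
        return ["microop<" + ins + ">" for ins in asm]

    def glsl_to_native(self, source):
        # glslang model: one step from high-level source to native micro-ops,
        # free to target the hardware's real register file and instruction set.
        return ["microop<" + line.strip() + ">" for line in source.splitlines()]

shader = "r = a * b\no = r + c"
driver = ToyDriver()

two_stage = driver.dx9_asm_to_native(hlsl_to_dx9_asm(shader))
one_stage = driver.glsl_to_native(shader)

print(two_stage)  # micro-ops produced via the fixed DX9 assembly
print(one_stage)  # micro-ops produced directly from the source
```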
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.