MS, ATI, NVidia, DX .....

Discussion in 'Graphics and Semiconductor Industry' started by chavvdarrr, Oct 7, 2003.

  1. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83

    If you had to choose to do one of two jobs, and you knew one was going to take five times longer than the other, which one would you choose to do?

    If I offered you an un-named amount of cash, or five times that same amount, which would you choose to have? Is it still meaningless?

    In the context of what Valve was saying, you just need to know that it takes five times more effort (whatever that "effort" is) to program for NV35 than for a standard DX9 path - and NV35 ends up slower and with lower IQ.
     
  2. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,455
    Location:
    Budapest, Hungary
    I think we already know a lot about what the possible optimizations for NV3x were:
    - Going through the shader code and checking out which instructions could use a partial precision hint.
    - Then checking the result to see if it does not degrade the image quality too much.

    Now, if Valve really has 1200 shaders for HL2, then this adds up quite a lot...
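    As an illustrative aside on what that partial precision pass buys and costs: the hint just demotes fp32 values to fp16, and fp16 only keeps a 10-bit mantissa, so roughly three decimal digits survive. A rough stand-in using Python's stdlib half-float packing (not shader code, purely to show the rounding involved):

```python
import struct

def to_fp16(x: float) -> float:
    """Round a float to IEEE 754 half precision (fp16) and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Values exactly representable in fp16 survive the round trip;
# most others pick up a small rounding error.
print(to_fp16(0.5))   # 0.5 is exactly representable
print(to_fp16(0.2))   # ~0.199951, a small but real error
```

Whether errors of that size "degrade the image quality too much" is exactly the per-shader judgment call being described above.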
     
  3. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,344
    Likes Received:
    176
    Location:
    On the path to wisdom
    But this is not about choosing between two jobs, but about whether some work is worth the effort. If optimizing for ATI takes one minute and optimizing for NVidia takes 5 minutes, then there's no doubt both are worth the effort.

    So what's a standard DX9 path? When Valve started implementing their DX9 path, they only had ATI hardware. So their so-called "generic" path surely takes R300 performance characteristics into account. Maybe when the XGI Volari comes out we'll see how "generic" that path really is.

    NV35 ending up being slower (than R300, I suppose, not than "a standard DX9 path") doesn't mean it's hard to optimize for it. It rather just means that the chip can perform fewer operations per clock.
     
  4. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Have you been talking to Mr Kirk?
     
  5. K.I.L.E.R

    K.I.L.E.R Retarded moron
    Veteran

    Joined:
    Jun 17, 2002
    Messages:
    2,952
    Likes Received:
    50
    Location:
    Australia, Melbourne
    Are you talking about David Kirk or Captain Kirk?
    I'm not sure if you're trying to crack a joke there. :)

    Mr Kirk said that the NV3x's poor performance is due to software and not hardware.
     
  6. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    I think we can guess Valve were talking about larger units than minutes. :roll: Besides, Valve have already told us it wasn't worth the effort and they wish they had just left the NV3x doing the DX8 path.

    You mean like how everyone used to code to Nvidia because they had the best cards? Well if the shoe is on the other foot, then that's the way it should be. It was alright when it was working in Nvidia's favour, now it's working for another company with the best technology (shrug).

    We've also had no reason not to think that ATI has stuck far more closely to DX9 (no sub-DX9 partial precision or CG HLSL replacements), something we know Nvidia has not done with NV3x.


    You say that as if it's a good thing: "It's not harder to optimise for - it's just a slow card". :roll:

    We know from Valve and others that NV3x is harder to optimise for, as it needs all kinds of fiddling about to see where you can use lower precision and live with the IQ degradation, or where you can live with the low speed performance at a decent IQ. It simply has a more complex and fiddly architecture without the power to drive it fully, so you need to babysit everything it does. Developers hate that kind of stuff - they just want to write their code to the API and have the hardware run it properly - something NV3x is particularly bad at doing according to loads of developers.
     
  7. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,344
    Likes Received:
    176
    Location:
    On the path to wisdom
    I have no problem with that. I just find it a bit silly to say, 'hey, it took us five times longer to optimize for NV3x than optimizing our "generic" path for the hardware we developed it on right from the start.'

    What came first, the chip specs or DX9? Partial precision isn't sub-DX9, it's part of it.

    No, I actually say "it is a slower card (bar sin/cos ops and texture reads), but that's no indicator for whether it's hard to optimize for" :wink:
     
  8. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    FP12?

    That's a somewhat circular argument. Valve says NV3x takes five times longer to optimise for, you translate that as Valve meaning "the card is slower". Then you complete the circle by saying that because NV3x is slower, that doesn't necessarily mean it's harder to optimise for. :shock: :roll:

    We *know* the card is harder to optimise for because Valve and other developers say that you have to spend significantly more time and effort optimising NV3x, and at best get parity performance on DX8, and at worst are significantly slower on DX9. Isn't this obvious from all we know about NV3x, its benchmarks, its gaming performance, and what developers say about it?
     
  9. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,344
    Likes Received:
    176
    Location:
    On the path to wisdom
    You mean FX12? Partial Precision should be FP16.

    I think you got me wrong. You said "In the context of what Valve was saying, you just need to know that it takes five times more effort (whatever that "effort" is) to program for NV35 than a standard DX9 path - and NV35 end up being slower and with lower IQ." And I just wanted to point out that NV35 ending up being slower is not related to the effort.
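    For anyone mixing up the two formats: FX12 is (as far as I know) a 12-bit fixed-point register format covering roughly [-2, 2), while partial precision FP16 is a half float. A rough Python sketch of the quantization each implies (the FX12 range and step here are my assumptions, not something stated in this thread):

```python
import struct

FX12_STEP = 4.0 / 4096  # assumed: 12 bits spread uniformly over [-2, 2)

def quant_fx12(x: float) -> float:
    """Quantize to an assumed FX12-style fixed-point grid (clamped)."""
    x = max(-2.0, min(2.0 - FX12_STEP, x))
    return round(x / FX12_STEP) * FX12_STEP

def quant_fp16(x: float) -> float:
    """Round-trip through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Fixed point has uniform absolute error across its range; a float
# format has relative error, so it is far more accurate near zero.
print(abs(quant_fx12(0.001) - 0.001))  # within half an FX12 step
print(abs(quant_fp16(0.001) - 0.001))  # much smaller near zero
```

That difference near zero is one reason dark-scene banding shows up sooner in low-precision fixed-point formats.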
     
  10. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    The point is, what if every card came with a "twitchy" architecture and beta drivers? Then Valve would have to spend that same 5x time on every card. I think their complaint is as much about the FX's architecture as it is about the FX's underperforming drivers (at the time, though the 52.14 set seems to have improved things). If IHVs don't include good drivers at a card's release, then devs basically have to choose between spending time optimizing during game development, or hoping that the IHV sorts things out before the game is released, right?
     
  11. demalion

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,024
    Likes Received:
    1
    Location:
    CT
    A quick comment: Valve already specified what they did for the NV3x path.

    • Uses partial-precison registers when appropriate.
    • Trades off texture fetches for pixel shader instruction count. (You missed this one.)
    • Case-by-case shader restructuring.
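    The second bullet - trading pixel shader instructions for texture fetches - amounts to precomputing an expensive function into a lookup table and sampling that instead of evaluating the math per pixel. A hypothetical sketch (Python stand-in only; a real shader would bake the table into a 1D texture, and the specular exponent 8 here is made up for illustration):

```python
import math

N = 256  # hypothetical 1D "texture" resolution
# Precompute pow(x, 8) once, offline, into a table.
specular_lut = [(i / (N - 1)) ** 8.0 for i in range(N)]

def specular_alu(n_dot_h: float) -> float:
    """Evaluate pow() per pixel: costs ALU instructions."""
    return n_dot_h ** 8.0

def specular_tex(n_dot_h: float) -> float:
    """One table lookup instead: costs a texture fetch."""
    idx = min(int(n_dot_h * (N - 1)), N - 1)
    return specular_lut[idx]

# The two agree closely enough for lighting purposes.
print(abs(specular_alu(0.7) - specular_tex(0.7)))
```

Which side of the trade wins depends on whether the chip has spare texture bandwidth or spare ALU throughput, which is exactly why this tuning is per-architecture work.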

    Carmack has said something similar as far as the difference between the "NV30" and "ARB2" paths, correct?

    To the discussion in general:

    Valve also specified their problem with doing this: they got a poor return on their larger investment, in image quality and performance. These factors are directly related to why Valve has a problem with the effort, and why they recommend other developers code to the DX 8.1 standard instead (as nVidia now also seems to recommend in some statements). Come to think of it, I believe Carmack said something similar about not all developers being able to spend the effort he has on Doom 3 in a specific NV3x path, moving forward, for OpenGL as well. Do I misremember or misrepresent here?
     
  12. chavvdarrr

    Veteran

    Joined:
    Feb 25, 2003
    Messages:
    1,165
    Likes Received:
    34
    Location:
    Sofia, BG
    well, after the leak of the HL2 source .... it will be interesting to hear some opinions on how well-optimised Valve's shaders really are ...
     
  13. Mark

    Mark aka Ratchet
    Regular

    Joined:
    Apr 12, 2002
    Messages:
    604
    Likes Received:
    33
    Location:
    Newfoundland, Canada
    if anyone is interested, the original post is here.
     
  14. Socos

    Newcomer

    Joined:
    Feb 23, 2003
    Messages:
    48
    Likes Received:
    0
    Not to get off topic, but they (Microsoft) were actually paying people to post bad things about OS/2 in newsgroups around the net. Even to this day, OS/2 has better multi-tasking (IMO). OS/2 was a superior OS to Windows back in the day in nearly every way. PR got the best of Big Blue on that one and the rest is history.

    Of course it could be a precursor to what is happening today with big [N] and ATI!!
     
  15. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    Socos, my point was that regardless of motive M$ made the right decision.
     
  16. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,533
    Location:
    Winfield, IN USA
    Well Mr. Kirk got it half-way right at least... ;)
     
  17. dream caster

    Newcomer

    Joined:
    Jan 4, 2003
    Messages:
    40
    Likes Received:
    0
    Location:
    Chile
    True; he said so.
     
  18. YeuEmMaiMai

    Regular

    Joined:
    Sep 11, 2002
    Messages:
    579
    Likes Received:
    4
    since you love to question everything that makes nVidia look bad, will you now explain exactly how we, the consumers, will not have to pay more for games due to nVidia's crap product and even crappier management decisions.

     
  19. Sxotty

    Legend

    Joined:
    Dec 11, 2002
    Messages:
    5,497
    Likes Received:
    867
    Location:
    PA USA
    Valve also told us that HL2 would be released on Sep 30th :roll:


    edit: below not from BZB
    I agree with this, and I said it originally; everyone doesn't seem to follow the argument, though...
     
  20. jvd

    jvd
    Banned

    Joined:
    Feb 13, 2002
    Messages:
    12,724
    Likes Received:
    9
    Location:
    new jersey
    I think it was more a case of: look, NV3x performs like shit in this game. So we went and worked really hard on making it run better, but it still didn't.
     
