Asking Tim Sweeney about NVIDIA and more

Discussion in 'Beyond3D News' started by Reverend, Sep 29, 2003.

  1. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    Also keep in mind that ShaderMark is mostly just DX8-class effects run in DX9. There's still a rather massive performance difference when running really long (60-80 instruction), multi-pass (3-4 pass) shaders. But then there's also the problem of not even being able to compare the FX-class cards to the R3x0 cards in those cases, since FP render targets are STILL missing from the drivers (despite nV's dev department constantly promising to have them in "the next driver release").

    *grumble* *grumble* *grumble*
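
    For reference, a minimal C++ sketch of the D3D9 capability probe being described; the display-mode format and the helper itself are illustrative assumptions, not code from ShaderMark or the drivers:

        #include <d3d9.h>

        // Ask the runtime whether the driver exposes a floating-point format
        // as a render-target texture -- the support the post above says is
        // still missing from the FX drivers of this era.
        bool SupportsFpRenderTarget(IDirect3D9* d3d, D3DFORMAT fpFormat)
        {
            return SUCCEEDED(d3d->CheckDeviceFormat(
                D3DADAPTER_DEFAULT,
                D3DDEVTYPE_HAL,
                D3DFMT_X8R8G8B8,        // assumed display-mode format
                D3DUSAGE_RENDERTARGET,  // we want to render into it
                D3DRTYPE_TEXTURE,       // as a texture
                fpFormat));             // e.g. D3DFMT_A16B16G16R16F or D3DFMT_A32B32G32R32F
        }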
     
  2. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    There are two ways to influence games and benchmarks. One way is to have them play to your strengths. The other is to have them play to your competition's weaknesses. The scuttlebutt has been that nVidia has chosen the latter. ATI, IMHO, seems able to employ both concurrently simply by having code written to the standard API. :lol:
     
  3. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    Yes. Inquiring minds would like to know.
     
  4. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Exactly--that's the way I read it, too...:) But I think he also might have been saying, "If you don't want to take our word for things because we have marketing deals going with IHVs and you don't trust us, then just write some benchmark code yourself and find out...:)" It probably has to mean something like this, since Sweeney has made it clear in several responses to the Rev, which the Rev has reprinted, that it's very unwise to confuse marketing deals with game code.

    Another way of phrasing it might be: "Marketing deals don't make silk purses out of sow's ears." Heh...:)

    There's no doubt in my mind that Newell hand picked ATi as the IHV for the HL2 bundling deal because of ATi's hardware--because, had it been asked instead, nVidia would have eagerly accepted the deal and ponied up the cash just as well as ATi. In fact, it probably would have been worth more to nVidia than to ATi, since it would have helped sell nv3x-based products this year. But Valve chose ATi, and that's that...
     
  5. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    299
    Location:
    UK
    And saying I'm sitting on the real story here, but that I just can't say it, or at least not in its integrality...


    Uttar
     
  6. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Errr...what's integrality?
     
  7. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    I think he simply meant: everyone sucks! :lol: Why single out Nvidia? :roll:
     
  8. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Pardon the poor bolding in my previous post. This is what I meant to point out.
     
  9. Tim

    Tim
    Regular

    Joined:
    Mar 28, 2003
    Messages:
    875
    Likes Received:
    5
    Location:
    Denmark
    Has any benchmark ever shown the NV35 to be 2-3 times slower than the R350 when partial precision is used? 3DMark03 etc. used full precision. ShaderMark 2 even uses extended PS 2.0, saving some passes for the NV35.

    Yes, the R350 is only 1.3 times faster when partial precision is used, but it is still 1.9 times faster with full precision (I think that is pretty close to 2 times).

    Edit: In the worst case the NV35 is 2.6 times slower; in the best case it is slightly faster than the R350.
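
    For reference, a sketch of how a benchmark might toggle the two precisions at compile time via D3DX; the entry-point name and the use of D3DXCompileShader are assumptions for illustration, not ShaderMark's actual code:

        #include <d3dx9shader.h>

        // Compile one ps_2_0 entry point either normally or with the D3DX
        // flag that forces partial precision. On NV35, partial precision
        // selects FP16 instead of FP32; R350 computes at FP24 either way,
        // which is why the gap narrows only on the NVIDIA part.
        LPD3DXBUFFER CompilePs20(const char* hlslSrc, UINT srcLen,
                                 bool partialPrecision)
        {
            LPD3DXBUFFER code = NULL;
            LPD3DXBUFFER errors = NULL;
            DWORD flags = partialPrecision ? D3DXSHADER_PARTIALPRECISION : 0;
            HRESULT hr = D3DXCompileShader(hlslSrc, srcLen,
                                           NULL, NULL,       // no macros or includes
                                           "main", "ps_2_0", // hypothetical entry point
                                           flags, &code, &errors, NULL);
            if (errors) errors->Release();
            return SUCCEEDED(hr) ? code : NULL;
        }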
     
  10. Tim

    Tim
    Regular

    Joined:
    Mar 28, 2003
    Messages:
    875
    Likes Received:
    5
    Location:
    Denmark
    We are talking about ShaderMark 2; SM2 is DX9.
     
  11. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    I was talking about SM2 as well.
    [edit]

    Though on second inspection of the shaders they are actually of decent length.
     
  12. TMorgan

    Newcomer

    Joined:
    Aug 12, 2003
    Messages:
    15
    Likes Received:
    0
    This "Guest" guy has to be Brian Burke experiencing the absolute low point of his career... But to be objective about it, it may just as well be an extremely uninformed troll who genuinely hasn't got the slightest clue about anything that has happened this year. ;)

    (No, I can't figure out which one of those two options is more sad...)

    - Tom
     
  13. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Microsoft/Nvidia Fallout

    I believe this whole issue is based on the fact that Nvidia didn't want to let Microsoft push them around. Instead of following the DirectX 9 path, Nvidia created their own; a game labeled "The way it's meant to be played" means that Nvidia has had a chance to use their code path (the card's architecture is built for this path). Without using their "custom" code path the card doesn't function well. But who said that DirectX 9 was the bomb?

    Another rumor (possibly true) I have heard is that Nvidia requested that Microsoft incorporate their code path into DirectX 9; MS declined, and we are left with what we have now. I can see Nvidia changing its plans as we speak. I believe, though, that if ATI/Valve hadn't intervened, everyone would be goo-goo over the FX code path and not so focused on DirectX 9. But who am I to speak? I'm a sellout to Microsoft just like the majority of gamers.
     
  14. Rugor

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    221
    Likes Received:
    0
    Re: Microsoft/Nvidia Fallout

    What an interesting perspective: Nvidia as the poor, put-upon underdog. However, as best I understand it, Nvidia had hoped that DX9 would be friendlier to the GeForce FX architecture than it is, and did make suggestions as to how MS should code DX9.

    However, MS was not going to let Nvidia dictate to them, and so the spec ended up not following Nvidia's recommendations. Nvidia has a long history of attempting to get the entire industry to follow standards they define. Cg, anyone? MS chose a DX9 spec that could be applied to any hardware, rather than favoring Nvidia, and Nvidia chose not to follow it. They may have had good reasons for that decision, but it was their choice.

    Given that Nvidia's current architecture performs poorly in both DX9 pixel shaders and OpenGL fragment shaders when running the default paths, it appears the flaw lies with Nvidia, not MS. If it were MS-specific, we would not expect the issue to cross over to OpenGL as it does.
     
  15. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Re: Microsoft/Nvidia Fallout

    Developers want an API path--that's what APIs are for. The last thing they want is to have to code "special" paths for "special" hardware in addition to coding for the API like they have to do anyway. Generally, developers want to spend their time developing their game software, not promoting a specific IHV's hardware at a great expense of time and money tied up in internal vendor-specific code paths. Why? Because the goal of developers is to sell their software, not to sell an IHV's hardware. That's what the APIs were designed for in the first place--to make it easy on developers and to ensure a wide, level playing field for all the IHVs. nVidia should have absolutely no problem with this, unless the problem is that it objects to having to compete with other IHVs.

    Besides, nV3x does a swell job with standard DX8 coding--which is every bit as much a "M$ standard" as is DX9. And nV3x has the same problems under the OpenGL ARB2 path as it does with DX9. So nVidia's problem is not M$, it's not game developers, it's not benchmark companies--nVidia's problem is that its current 3d hardware is simply not nearly as good as its main competitor's. Pretty simple when you think about it. Maybe they'll get it right with nV4x, and maybe not. Either way, the ball's in their court.

    What was it, exactly, that you think allowed ATi to do such a good job with R3x0, but at the same time placed such an "unfair burden" on nVidia? Remember that nV3x only looks as poor as it does under DX9/ARB2 because R3x0 looks so much better. Conversely, nV3x is what makes R3x0 look so good...:)
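
    For reference, a sketch of the sort of vendor-specific branching being objected to, using the D3D9 adapter query; the PCI vendor ID is the well-known NVIDIA code, but the path names are made up for illustration:

        #include <d3d9.h>

        enum RenderPath { PATH_STANDARD_DX9, PATH_NV3X_MIXED };

        // Pick a render path from the adapter's PCI vendor ID -- the extra
        // branch developers would rather not have to write or maintain.
        RenderPath ChooseRenderPath(IDirect3D9* d3d)
        {
            D3DADAPTER_IDENTIFIER9 id;
            if (SUCCEEDED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id))
                && id.VendorId == 0x10DE)  // NVIDIA
            {
                return PATH_NV3X_MIXED;    // hand-tuned mixed-precision path
            }
            return PATH_STANDARD_DX9;      // code written to the standard API
        }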
     
  16. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Good point; a very easy game for ATI to play for the next two years.
     
  17. RussSchultz

    RussSchultz Professional Malcontent
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,855
    Likes Received:
    55
    Location:
    HTTP 404
    Re: Microsoft/Nvidia Fallout

    Having MS choose the SDK features that don't quite match your strengths when you're too far down your development path.

    Same thing happened to TDFX. (Remember PS1.0? Neither does anybody else)
     
  18. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    That's the point of the API in the first place--it's supposed to make it easy on everybody, especially developers. (However, I doubt it's easy for IHVs to design worthwhile 3d chips.) I can't believe this controversy has resulted in people questioning the APIs and misunderstanding their purpose--it's not like nVidia was cut out of the loop when the API specs were being firmed up. But it is, unfortunately, very much like what an IHV might be expected to do when faced with superior competition: attempting to warp and bend the API process to suit itself and its own needs, as opposed to the desires of the general market. The attempt hasn't worked very well for nVidia, has it?
     
  19. EntombeD

    Newcomer

    Joined:
    Sep 14, 2003
    Messages:
    62
    Likes Received:
    0
    They already exist:

    Here is one: http://www.rightmark3d.org/
     
  20. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Re: Microsoft/Nvidia Fallout

    Hmmmm...how does this tie in with nV3x having its greatest *comparative* utility as a DX8.x/ARB1 chip...? Wasn't nv25 designed to support those features? As well, nV3x does indeed support many DX9/ARB2 features, such as ps2.x and even full precision fp. The problem is that it simply isn't very good at those things in relation to R3x0 in terms of performance--not that it doesn't support them. So you're saying that after finishing up its nv25 designs and shipping them, DX9/ARB2 was a "surprise" to nVidia--who had anticipated that DX9/ARB2 would merely be clones of DX8.x, in terms of the feature support DX9 would require? I find that highly unlikely.

    Further, let's look at chronology. R300 shipped months before nV30 had been through its final tape out. In fact, R300 was shipping months before nVidia formally *announced* nV30 at Comdex (I was using a 9700P at the time.) With all the delays surrounding nV30, as compared to R300, you are saying that nVidia's problem was a lack of *advance notice*...? Heh....:) Come on, Russ...that's weak...:D Besides, it sure wasn't a lack of DX9 feature support that caused nVidia to cancel nV30 at the last minute so that it never made it to retail. I would think that of all possible excuses lack of advance notice would be by far the weakest. I think it far more likely that nVidia knew everything ATi knew--but that it just didn't care...nVidia was operating in its own orbit at the time.

    Further, don't you think that if there were a shred of proof out there that M$ had conspired in some fashion to rout nVidia as you've suggested, nVidia would not only know about it--it would be suing the pants off of both M$ and ATi at this very moment? Nope, I think this is the wrong direction completely. Much more probable: nVidia simply believed its course was the correct one for nVidia, and discounted what anyone else might do as irrelevant to nVidia's fortunes. (Further, I'll bet M$ can easily prove, beyond a doubt, what nVidia knew about DX9 and when it knew it.) nVidia just screwed up bigtime, Russ, that's all there is to it. "Human foibles" of judgement explain it all, IMO.
     