Jen-Hsun Talks NV4x

Discussion in 'Beyond3D News' started by Dave Baumann, Mar 3, 2004.

  1. Daliden

    Newcomer

    Joined:
    Sep 18, 2003
    Messages:
    89
    Likes Received:
    0
    Let's say NV40 comes out slightly before R420, and the early benchmarks are stupendous. A couple of years ago, people would have *flocked* to buy NV40 cards left and right.

    Now, it might be a bit different. People will be somewhat wary of the benchmarks and promises, and will at least think about waiting until the R420 comes out.

    I doubt the NV30 debacle will have effects as far-reaching as some predict, but it will have had *an* effect. It remains to be seen how big, though.
     
  2. StellaArtois

    Newcomer

    Joined:
    Jan 12, 2003
    Messages:
    57
    Likes Received:
    0
    Location:
    London, UK
    Exactly. The 'WOW' effect was there with the initial 3DFX products. Voodoo1 - an absolutely amazing step up from software rendering. Voodoo2 SLI - a superb performance jump. And I stuck with 3DFX, even when they became '3dfx', because of that.

    Basically, we all just want a product to come along and give us a V2 SLI-like jump in performance again. We're all waiting for that (even though it may never come), but we can all hope that NV40 and R420 come close to it... :)
     
  3. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    17,276
    Likes Received:
    1,788
    Location:
    Winfield, IN USA
    Yeah, but it's looking a lot more now like the R420 will be out well before the nV40...and I don't think paper-launching the nV40 and spreading FUD is going to help nVidia much this time, given the nV30 debacle. :(
     
  4. PaulS

    Regular

    Joined:
    May 12, 2003
    Messages:
    481
    Likes Received:
    1
    Location:
    UK
    The difference is that a) the NV40 should be more competitive, and b) if it does come out later, it still won't be six months behind ATi.

    But you're right, the same PR tactics won't have quite the same impact, simply because of people's jaded attitudes towards NV now.
     
  5. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Since nVidia won't initially be offering native PCIe x16 products, it won't be marketing PCIe x16, so I think it's fairly obvious that everyone will indeed see "AGP x16" instead. In short, there's absolutely no incentive for anyone picking up PCIe mboards to even consider nVidia until such time as nVidia begins offering native PCIe reference designs. About the best nVidia might hope for is to confuse its customers into thinking that its "AGP x16" support is "just as good" as a native PCIe x16 part would be, but I think that will be easier said than done.

    Right, which is why the sensible course would be to market native PCIe reference designs along with a bridge down to AGP x8 for those who want/require AGP x8.

    I don't consider stating hard facts to be "whining." I had nothing to do with the choices nVidia made in relation to promoting nV3x in the negative ways it chose to do so, and neither did anyone else, "fanbois" included.

    What proves your analogy wrong is the fact that it took about a year after R300-based products shipped before the market as a whole was willing to concede that ATI had surpassed nVidia, and I'm including board and system OEMs in the definition of "market." The reluctance towards ATi in the beginning was based on ATI's past reputation with respect to its drivers and other issues. Now, even with a decent nV40 on tap, if that is indeed what we see, nVidia will have negative inertia of its own to overcome. The door swings both ways. I have no inclination to grant nVidia a free pass as you seem to prefer--nVidia will have to impress me the old-fashioned way: they'll have to earn it...;)

    nV3x is a known quantity--what is known about nV4x? Since nothing is known about nV4x, it strikes me that talking about anything other than nV3x is, at this time, what is truly "irrelevant"...;)

    Well, AMD provides both a model number and a MHz figure for all of its cpus, because using MHz to compare cpus of differing architectures is a waste of time: unless you know something about work done per clock, the clock rate alone is meaningless. Compare the 2GHz Celeron to the 2GHz Northwood P4, for instance. Or, better yet, compare the 2GHz AXP to a 2GHz P4...;) In none of these examples does MHz provide meaningful comparative information with respect to performance. That might be why Intel has recently decided to go with model numbers of its own, to help its consumers understand that MHz isn't the number they should be looking at when choosing among Intel processors:

    http://news.com.com/2100-1006_3-5172938.html
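
    As a rough sketch of that point: effective throughput is roughly clock rate times work done per clock, so two "2GHz" parts can differ widely. The work-per-clock figures below are purely hypothetical, just to illustrate the arithmetic, and are not measurements of any real Celeron, P4, or Athlon XP:

        # Effective throughput depends on clock *and* work per clock.
        # The work_per_clock values are invented for illustration only.
        cpus = {
            "CPU A (2 GHz, low work/clock)":  {"clock_ghz": 2.0, "work_per_clock": 0.9},
            "CPU B (2 GHz, high work/clock)": {"clock_ghz": 2.0, "work_per_clock": 1.5},
        }

        for name, c in cpus.items():
            throughput = c["clock_ghz"] * c["work_per_clock"]
            print(f"{name}: ~{throughput:.2f} billion units of work per second")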

    I think everyone is well aware that when a gpu manufacturer provides a pixel-pipeline specification, that figure alone doesn't tell the "whole story" with regard to performance. However, when you do what nVidia did, which was to falsely state that nV3x had 8 pixel pipes when it never had more than 4, merely to avoid appearing behind ATi, there's absolutely no possibility whatever of arguing this was done to "benefit the consumer," since such misrepresentation categorically benefits no one on earth except nVidia (and only then if it isn't found out--which it has been.) What nVidia did was tantamount to AMD or Intel shipping a cpu at 2GHz but marketing it as 4GHz (regardless of model numbers.) Of course, neither AMD nor Intel would be that stupid...;) It's troubling, though, to think that nVidia actually was that stupid (most probably desperate, really.)
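
    For the same reason, the usual back-of-envelope number here is theoretical pixel fillrate, roughly pipelines times core clock--which is exactly why overstating the pipe count misleads. The pipe counts and clocks below are illustrative figures in the rough range of that generation, not quoted specs:

        # Theoretical single-textured pixel fillrate ~= pipelines * core clock (MHz).
        # Pipe counts and clocks are illustrative only, not quoted specifications.
        def fillrate_mpix_per_s(pipes, clock_mhz):
            return pipes * clock_mhz

        print(fillrate_mpix_per_s(8, 325))  # 8 pipes at 325 MHz -> 2600 MPix/s
        print(fillrate_mpix_per_s(4, 500))  # 4 pipes at 500 MHz -> 2000 MPix/s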

    I have no objection whatever to this characterization of marketing. However, for me "the best light possible" means, for instance, that you market your *4 pixel pipeline* gpu in the best light possible as a *4 pixel pipeline gpu.* To do what nVidia did and call it an 8 pixel pipeline gpu (and apparently is still doing!) is not marketing: that's lying, that's fraud. Big, big difference, to me. It's like an automobile manufacturer advertising a 4-cylinder engine as an 8-cylinder, for instance. The number of cylinders certainly doesn't tell the "whole story" with regard to engine performance, but nobody would ever think that was a valid excuse for misrepresenting the actual number of cylinders a given engine has. Once a manufacturer demonstrates such a massive disregard for the truth by brazenly falsifying the specifications it releases about its products, they've really blown it with me, and in the future I will be very skeptical of whatever specification information they release concerning their products.


    Not really at all...;) I'm talking about how much more ATi listened to market demand when it designed R3x0 than--obviously--nVidia did when it designed nV30. Whereas ATi well understood that the manufacturing process was totally subservient and secondary to performance and IQ, and indeed that prudent selection of the manufacturing process would directly impact yields, nVidia seems to have understood none of that in regard to nV30. From the beginning, long before nV30 shipped, the nVidia CEO proclaimed to the world that ".13 microns" was far more important to nV30 than any other consideration--be it performance, IQ, or yield. The result was a 4 pixel pipeline, .13 micron gpu so poor in every respect compared to ATi's 8 pixel pipeline, .15 micron R300--which shipped nearly six months before nV30 made it out in test samples--that nVidia cancelled nV30 before it ever went mainstream into retail distribution.

    Obviously, what nVidia counted on was a lack of gpu competition so profound and complete that no other company could keep up with it, regardless of how short-sighted nV30 turned out to be in terms of design. (An object lesson, I think, in what happens when you fixate on manufacturing process as a foundational element of gpu design at the expense of almost everything else.) Most of nVidia's assumptions about nV30 were wrong from the start, and the fact that nVidia *didn't listen* to its markets is reflected best, I think, in how much better ATi did with its DX9 hardware support in R3x0 than nVidia did in nV3x. As a result, nVidia has wasted most of its PR efforts over the last 18 months explaining why "3d gamers" really "don't need" the level of API hardware support ATi built into R3x0. Selling negatives as positives is without a doubt one of the worst PR strategies a company can endorse, and is generally never done in the absence of desperation.

    To me, there's no way to ever say with a straight face that the nV30 design was derived from nVidia listening to its markets--unless you want to say that nVidia production management, unlike ATi's, had delusional balls of cotton stuffed in its ears...;) I can easily imagine, however, nVidia reaching its own internal conclusion that 4 pixel pipes were fine since they'd get a clockspeed boost from .13 microns--after all, pre-R300, who could even keep up with their 4-pixel-pipe nV25?--and that there was no rush to support things like ps2.0 in hardware, since developers as a whole were more likely to follow nVidia's lead than M$'s, and so things like that just didn't matter. No, it's clear to me that such clear distinctions between R3x0 and nV3x exist because nVidia decided early on that the only competition it had was with itself. ATi's R3x0 designs, however, are so very different precisely because ATi had a very clear idea of who and what it was competing with, and had a much different take on what 3d gpu markets wanted to see. ATi simply "listened better," imo.


    Hopefully you don't really think that anyone is fooled by nVidia claiming that it doesn't compete with ATi...;) Talk about "far gone"--that's about as far out as you can get...;) The sad thing, I think, is that the people within nVidia who say these crazy things really believe that no one's the wiser. It's indeed sad, but it does explain so many of the things the company has publicly stated over the past 18 months, doesn't it? When you live in a delusional bubble, a characteristic of life there is that you tend not to be able to see outside of it very clearly.
     
  6. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
     
  7. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    17,276
    Likes Received:
    1,788
    Location:
    Winfield, IN USA
    Hubba-wa? What does Walt's IHV preference have to do with nVidia's need to prove their reputation after a year and a half of lies and deceit?
     
  8. Florin

    Florin Merrily dodgy
    Veteran Subscriber

    Joined:
    Aug 27, 2003
    Messages:
    1,648
    Likes Received:
    219
    Location:
    The colonies
    Hehe, he does say some weird things though. I mean, accusing Nvidia of being 'reluctant to admit ATi is taking market share from it generally' or being 'self-serving', and calling on them to 'acknowledge their competition'. I mean, do the McDonalds of the world spend time talking about how advanced Burger King is, or about how much market share it has? But what's more telling is that he has already decided for everyone that Nvidia's AGP-to-PCIe bridge means 'there's absolutely no incentive for anyone picking up PCIe mboards to even consider nVidia until such time as nVidia begins offering native PCIe reference designs', when we've yet to see whether, or to what degree, the bridge will hamper performance.

    I agree with Guest - brand loyalty is overrated. And if Nvidia's next chip happens to beat ATI then no amount of dodgy marketing is going to matter. Most enthusiasts will switch.
     
  9. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    17,276
    Likes Received:
    1,788
    Location:
    Winfield, IN USA
    If the nV40 can beat the R420 clean, mebbe...but then nVidia will be abandoning all their optimizations for the nV3x line and will be horking all the people who own them.

    M|22
     
  10. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    How do you reach that conclusion? Many of the optimizations developed for the compiler will still apply, and Nvidia has shown in the past that they don't leave owners of older cards in the lurch.
     
  11. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    70
    Right...they usually decrease performance of older cards as a side effect of the "optimizing" for the newer ones. ;)
     
  12. Rugor

    Newcomer

    Joined:
    May 27, 2003
    Messages:
    221
    Likes Received:
    0
    As an example, the best drivers for the Gf2MX are probably still the 30.82 Detonators. The 40 series Detonators broke Freedom Force on that chipset.

    Nvidia's newest drivers always end up dropping performance for the older cards, because when optimizations are mutually exclusive they will always favor the current line.

    Even if they do continue optimizing for the GfFX line, they won't devote as many resources to it as they do now. Most of their effort will move to the NV4x generation, and the next generation of drivers will reflect that. They may not break older optimizations, but as new games come out the drivers will be optimized for NV4x cards, not NV3x. I'm not saying they won't work, but the resources for the kind of fine-tuning the architecture needs will be aimed at newer cards. R3xx cards will probably do much better in later games than NV3x cards, because they don't need as much optimization, so ATI's inevitable focus shift to R4xx won't hurt them as much.
     
  13. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    No, that's only one instance. Back when I had a GF2, optimizations made to GF3 drivers increased the performance of my GF2. Ditto for the GF1: successive versions of GF1-era drivers did deliver some increases for my TNT2.

    Sometimes a driver is a performance regression, sometimes it isn't. Sometimes an optimization is architecture specific, and sometimes it isn't.

    Much of the work put into developing an optimizing compiler can be reused.
     
  14. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    Speaking of the TNT2, I noticed that it kept getting faster and faster with each driver release, even after the GeForce came out. Then I noticed that certain features weren't working anymore. For example, polygon offset stopped working in Half-Life, so I would get Z-fighting with all of the decal textures. Makes you wonder.
    Except that (good) compilers are very architecture specific.
     
  15. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Yes, but most of the work is in handling the data structures. For example, once you've written a register allocator, loop scheduler, or bottom-up rewrite system, you don't need to rewrite it; you just need to tweak the heuristics or configuration for new architectures.
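
    A minimal sketch of what that separation can look like, purely illustrative and not meant to describe any real driver's internals: the allocation machinery is generic, and only a small per-target configuration (here just a register count) changes between architectures.

        # Toy linear-scan register allocator: the algorithm is target-independent;
        # only TargetConfig changes from one architecture to the next.
        # All names and register counts are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class TargetConfig:
            name: str
            num_registers: int  # the architecture-specific knob

        def linear_scan(intervals, target):
            """intervals: list of (start, end, var) live ranges; returns (assignment, spills)."""
            free = list(range(target.num_registers))
            active = []          # (end, register, var) currently live
            assignment, spills = {}, []
            for start, end, var in sorted(intervals):
                # Free registers whose live ranges have already ended.
                for item in list(active):
                    if item[0] <= start:
                        active.remove(item)
                        free.append(item[1])
                if free:
                    reg = free.pop()
                    active.append((end, reg, var))
                    assignment[var] = reg
                else:
                    spills.append(var)  # out of registers: spill to memory
            return assignment, spills

        intervals = [(0, 5, "a"), (1, 3, "b"), (2, 8, "c"), (4, 9, "d")]
        for cfg in (TargetConfig("older_part", 2), TargetConfig("newer_part", 4)):
            print(cfg.name, linear_scan(intervals, cfg))

    The same allocation code runs for both configurations; a real shader compiler carries far more per-target detail, but the split between reusable machinery and per-architecture tuning is the same idea.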
     
  16. Anonymous

    Veteran

    Joined:
    May 12, 1978
    Messages:
    3,263
    Likes Received:
    0
    This didn't have anything to do with technology - MS just plain didn't want to do business with them anymore...
     
  17. Gunhead

    Regular

    Joined:
    Mar 13, 2002
    Messages:
    355
    Likes Received:
    0
    Location:
    a vertex
    I completely agree with the two of you, but I'll just remind you of the other kind of "Wow!", which for me was the Matrox G400's output quality, in this case after "quality time" with a Riva 128 ZX... Okay, I guess this has been a moot point since at least the Kyro, Radeon 8500, and NV20-based cards, and what with flat panels nowadays -- but I distinctly remember not believing my eyes (luckily I had a nice Trinitron) back then. :)
     
  18. Gunhead

    Regular

    Joined:
    Mar 13, 2002
    Messages:
    355
    Likes Received:
    0
    Location:
    a vertex
    I'm not really knowledgeable enough to participate in your "spirited exchange" of arguments, but one thing came to mind from the above.

    Yep, that is what the CEO proclaimed, but I guess those declarations are more or less meant to be taken with a grain of something hallucinogenic... I mean, maybe it isn't the whole truth, after all.

    What if what really took place was that NV30 was envisioned as just a first implementation of a forward-looking new platform, with ultimately the branchy shaders of SM 3.0 or the unified shaders of 4.0 in mind? I recall some guesstimates that a "pool of ALUs" might have advantages there over an "ALUs bolted onto pipelines" approach.

    What I'm suggesting is that maybe Nvidia really tried to be smarter and more far-sighted with NV30 than the CEO would have had us believe. ;-)

    (Most welcome to correct me if this is wrong. Enjoying all the information and insight in this thread.)

    [Third edit finally got it right... Now I'll leave this as is. ;-) ]
     