GeForce FX: 8x1 or 4x2?

Discussion in 'General 3D Technology' started by Dave Baumann, Feb 10, 2003.

  1. pocketmoon_

    Newcomer

    Joined:
    Nov 15, 2002
    Messages:
    117
    Likes Received:
    0
    I'm doing some shader tests at the moment; see:

    http://www.beyond3d.com/forum/viewtopic.php?t=4614

    I'll try some texture + integer shaders and post some results.
     
  2. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,493
    Likes Received:
    474
    Well, pong was very popular. Just like this thread.
     
  3. Arun

    Arun Unknown.
    Legend

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Yeah... I can already imagine one of nVidia's marketing points for the NV35:

    "2nd Generation Flexible Pipeline Architecture, based on the world's most talked about pipeline organisation EVER!"


    Uttar
     
  4. antlers

    Regular

    Joined:
    Aug 14, 2002
    Messages:
    457
    Likes Received:
    0
    If they'd operate like ATI, they'd just tell us what the hell the chip was doing. ATI was very clear: All ops are going through an FP24 pipeline, which can do a texture op and a vector op per cycle, and a bonus scalar op if the vector op only requires 3 components.
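    The co-issue rule described above can be sketched as a toy cycle estimator. This is only a rough model of the stated rule (one texture op plus one vector op per cycle, with a free scalar when the vector op uses at most 3 components); the function and its scheduling assumptions are illustrative, not ATI's actual scheduler:

    ```python
    def estimate_cycles(tex_ops, vec_ops, scalar_ops):
        """Rough R300-style cycle estimate for a pixel shader.

        tex_ops:    number of texture fetches
        vec_ops:    list of component counts (1-4), one per vector ALU op
        scalar_ops: number of standalone scalar ALU ops
        """
        # Each vector op using <= 3 components leaves a free scalar co-issue slot.
        free_slots = sum(1 for n in vec_ops if n <= 3)
        # Scalars beyond the free slots need an ALU cycle of their own.
        alu_cycles = len(vec_ops) + max(0, scalar_ops - free_slots)
        # Texture fetches issue in parallel with ALU work, one per cycle.
        return max(tex_ops, alu_cycles)
    ```

    Under this model a shader with two texture fetches, two vec3 ops, and two scalar ops would take two cycles: the scalars ride along in the co-issue slots, and the fetches pair with the vector ops.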
     
  5. Sabastian

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    991
    Likes Received:
    2
    Location:
    Canada
    Yeah someone is prolly having a good laugh somewhere.
     
  6. antlers

    Regular

    Joined:
    Aug 14, 2002
    Messages:
    457
    Likes Received:
    0
    BTW, that last Inquirer story should probably credit B3d as well.
     
  7. K.I.L.E.R

    K.I.L.E.R Retarded moron
    Veteran

    Joined:
    Jun 17, 2002
    Messages:
    2,952
    Likes Received:
    50
    Location:
    Australia, Melbourne
    I thought they did give B3D credit?
    (I still don't even know who runs the site LMFAO!!! I know it's sad :))
     
  8. kyleb

    Veteran

    Joined:
    Nov 21, 2002
    Messages:
    4,165
    Likes Received:
    52
    the editor in chief is the same guy from the register. not that it is hard to tell when comparing the two sites. ;)
     
  9. Luminescent

    Veteran

    Joined:
    Aug 4, 2002
    Messages:
    1,036
    Likes Received:
    0
    Location:
    Miami, Fl
  10. K.I.L.E.R

    K.I.L.E.R Retarded moron
    Veteran

    Joined:
    Jun 17, 2002
    Messages:
    2,952
    Likes Received:
    50
    Location:
    Australia, Melbourne
    Fact remains: This info has been released to us.
    Question: When will nVidia come 100% clean to us with the NV30's architecture?
     
  11. antlers

    Regular

    Joined:
    Aug 14, 2002
    Messages:
    457
    Likes Received:
    0
    Sometime after the NV35 is available :)
     
  12. MDolenc

    Regular

    Joined:
    May 26, 2002
    Messages:
    696
    Likes Received:
    446
    Location:
    Slovenia
    Nope:
     
  13. kyleb

    Veteran

    Joined:
    Nov 21, 2002
    Messages:
    4,165
    Likes Received:
    52
    i think sireric wins by default. :lol:
     
  14. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    I doubt nVidia will ever fully disclose the low-level aspects of the FX architecture. The only chip they've done that for in the past is the NV2A (the Xbox chip), and it made sense to do so there since it was made for a console, where developers are encouraged to do very low-level optimizations.

    Regardless, the actual low-level architectural specifics really aren't necessary for anybody outside of nVidia to know. They're just interesting for people like us. So while you may complain about it, you really have no reason to "cry foul" over nVidia not fully disclosing the low-level architecture, beyond your own curiosity.

    What is important for nVidia with respect to the FX:

    1. Dramatic performance improvements for PS 2.0 and ARB_fragment_program.

    2. White papers on "how to optimize" for the architecture.

    All of the relevant performance characteristics can be discovered through benchmarks. The theoretical figures are only useful in discovering how close real-world performance comes to the supposed "future potential" of the chip.
     
  15. Typedef Enum

    Regular

    Joined:
    Feb 10, 2002
    Messages:
    457
    Likes Received:
    0
    You've got it backwards... nVidia shouldn't require developers to sit there and read all these papers just to get acceptable performance out of an underperforming chip...

    No...nVidia should engineer the thing in such a way that developers do not need to spin cycles in such areas...so that they can concentrate on bigger things...like, er, the games themselves.
     
  16. Mulciber

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    413
    Likes Received:
    0
    Location:
    Houston
    from what I understood, nVidia's always been very good at helping devs optimize for their chips... much to many fanATIcs' chagrin.
     
  17. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    Yes, but I doubt that's what Typedef was saying.

    nVidia has a habit of doing things their way and forcing it down developers' throats. They've gotten away with it in the recent past because theirs were really the only cards you had to target, so developers would build their apps to work well on nVidia cards, and thus all the other minority IHVs had to develop their cards to meet the same requirements. Or in other words: nVidia is used to setting standards, not adopting them (which, if you'll remember, was exactly 3dfx's problem as well).

    Now the standards and nVidia's methodology are diverging. Targeting the standards for best performance means best (or at least really good) performance on all the other IHVs' cards, but to get optimal performance on nVidia's hardware you have to target their deviation.

    The point is, you shouldn't need white papers on "how to optimize for nVidia hardware", or ATi hardware, or SiS hardware, or any other hardware; writing optimally for the standards ( ARB & D3D9 ) should equate to writing optimally for any cards that adopt those standards.
     
  18. jpprod

    Newcomer

    Joined:
    Jun 2, 2002
    Messages:
    9
    Likes Received:
    0
    Location:
    Finland
    OK, I'll bite.

    If nVidia has a habit of doing things their way and forcing it down developers' throats, then you'll easily be able to name at least two concrete instances in which they've done this in the recent past, right? I thought so.
     
  19. K.I.L.E.R

    K.I.L.E.R Retarded moron
    Veteran

    Joined:
    Jun 17, 2002
    Messages:
    2,952
    Likes Received:
    50
    Location:
    Australia, Melbourne
    Exactly what I was going to suggest. :)
    Game devs should concentrate on using ARB and DX standards and not on using specific codepaths for only 1 or 2 chipsets.
    John Carmack has stated that the NV30's ARB2 path is much slower than the R300's ARB2 path; is that JC's fault or nVidia's?
    Not everyone is John Carmack and not everyone is going to write an NV30 codepath, hence the NV30 will suffer if their driver team doesn't pull its socks up.

    I have faith they will, that still doesn't explain if the NV30's architecture is faulty or not. In time we shall see.
     
  20. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
    - ps1.1 vs ps1.4 (ps1.4 performance is so bad that you have to resort to ps1.1 w/ multipass)
    - NV30 vs ARB2 extensions (ARB2 performance is so bad that you have to use NV30-specific extensions)
    - Generally having their ARB extensions perform significantly worse than their proprietary extensions, forcing developers to adopt the latter as opposed to the 'right' way.
    - Not supporting EMBM until it was dead (granted it was never a particularly nice feature)
    - Not supporting npatches
    - Dropping support for rt-patches
    - No true support for displacement mapping (this goes for both ATi and nVidia)
    - Cg vs HLSL

    To name a few (I could name a lot better examples if it weren't for this sleep deprivation problem). They do it much more quietly than saying "do it this way only". Normally it's done by simply not adopting features their competitors have created, making sure developers don't make too extensive use of those features (if at all); other times it's by focusing on their proprietary paths rather than the standard paths; still other times it's by creating a new standard and stubbornly sticking to it when the current standard is usually better (or at least just as good) anyway. Basically, if a standard wasn't created by nVidia, it either won't go into nVidia hardware or will be poorly implemented so that no one uses it. Sometimes this is legitimate (i.e. the feature sucks); most of the time it's just a way of silently killing the competition's tech at great expense to the industry as a whole.

    Obviously nVidia isn't the only one that does this (it's common practice in most industries, especially by the companies controlling the largest percentages of the market), they just seem to do it more so than the other companies.
     

