Is 24bit FP a recognized standard?

Discussion in 'Architecture and Products' started by Brent, Dec 22, 2003.

  1. X2

    X2
    Newcomer

    Joined:
    Dec 21, 2003
    Messages:
    23
    Likes Received:
    0
    There are lots of calculations where FP16 is sufficient, and there are lots of calculations where it is not. The spec calls for FP16+ where it is deemed sufficient.

    NV3x does obey the spec it was built to service. FP16 is acceptable because it is numerically sufficient for several calculations common to 3D hardware.
    On DX8 hardware, several calculations were acceptable in 8/9bit precision. Those are still acceptable in FP16. Not every surface is terribly complex.


    Sure, the shader-ridden DX9 desktop of Longhorn will bring NV3x chips to a crawl; it will certainly drop below 40 fps. Unplayable!
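    To make the "FP16 is sufficient for some calculations, not others" point concrete, here is an illustrative sketch (my example, not from the thread). Python's `struct` `'e'` format is IEEE half precision (s10e5), the same layout as the FP16 being discussed, so it can show one case where FP16 is numerically sufficient and one where it is not.

    ```python
    # Round a value through IEEE half precision (s10e5) using the stdlib.
    import struct

    def fp16(x):
        """Round a Python float to the nearest FP16 value and back."""
        return struct.unpack('<e', struct.pack('<e', x))[0]

    # Colour-style math: an 8-bit colour channel (0..255 scaled to 0..1)
    # survives a round trip through FP16 with error well under one 8-bit step,
    # so FP16 is plenty for this kind of work.
    c = 200 / 255
    assert abs(fp16(c) - c) < 1 / 512

    # Texture addressing: FP16 has only a 10-bit mantissa, so integers above
    # 2**11 = 2048 are no longer exactly representable -- texel 2049 in a
    # 4096-wide texture collapses to 2048, which is where FP16 breaks down.
    assert fp16(2049.0) == 2048.0
    ```

    The same helper is handy for checking any intermediate value you suspect is too fine-grained for a half-precision register.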
     
  2. AlphaWolf

    AlphaWolf Specious Misanthrope
    Legend

    Joined:
    May 28, 2003
    Messages:
    9,470
    Likes Received:
    1,686
    Location:
    Treading Water
    No. The spec calls for FP16 where it has been requested by the developer. There is a difference.
     
  3. X2

    X2
    Newcomer

    Joined:
    Dec 21, 2003
    Messages:
    23
    Likes Received:
    0
    The spec calls for FP16+ where it is deemed sufficient by the developer.

    I thought that was obvious... :)
     
  4. Bouncing Zabaglione Bros.

    Legend

    Joined:
    Jun 24, 2003
    Messages:
    6,363
    Likes Received:
    83
    Not to Nvidia. The developer gets whatever precision Nvidia has deigned to allow the developer to have - no matter what the developer requests, or what Nvidia pretends to be supplying.
     
  5. X2

    X2
    Newcomer

    Joined:
    Dec 21, 2003
    Messages:
    23
    Likes Received:
    0
    This may be true for a few selected developers/popular games and benchmarks. And I certainly don't like that.
     
  6. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    Wrong. nvidia themselves stated that D3D was their API in the TNT days.
    As has already been noted, STALKER is D3D.
    Morrowind runs fine for me. However, you can't blame the API for what developers do with it.
    Purely subjective.
     
  7. Razor04

    Newcomer

    Joined:
    Oct 24, 2003
    Messages:
    121
    Likes Received:
    0
    Wasn't the STALKER engine developed on a 9700 Pro? At least until NV bought them out...
     
  8. AlphaWolf

    AlphaWolf Specious Misanthrope
    Legend

    Joined:
    May 28, 2003
    Messages:
    9,470
    Likes Received:
    1,686
    Location:
    Treading Water
    I know it's nitpicking, but it's not the same thing. The developer has to choose whether to implement partial precision. It will not always be because he has 'deemed it sufficient'; he may have decided that certain hardware cannot run full precision fast enough and lower quality will have to do. I don't think that's the same as 'deemed sufficient'.
     
  9. Popnfresh

    Newcomer

    Joined:
    Mar 8, 2003
    Messages:
    19
    Likes Received:
    0
    Unreal was a Glide and SGL (PowerVR) game and other drivers came later. Homeworld is written purely in OpenGL; D3D support is just a wrapper dll that converts OpenGL calls to Direct3D. Half-Life uses a similar wrapper.
     
  10. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Yes, the default for DirectX is full precision; it is up to the developer to select something different from the default.
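    Dave's point can be sketched as a toy software model (my illustration, assuming nothing about real driver internals): full precision is the default path, and the reduced-precision path is taken only where the developer explicitly asks for it, like the `_pp` hint in ps_2_0 shader assembly.

    ```python
    # Toy model of default-vs-requested precision; not any real driver's code.
    import struct

    def fp16(x):
        """Round through IEEE half precision (s10e5)."""
        return struct.unpack('<e', struct.pack('<e', x))[0]

    def rcp(x, partial_precision=False):
        """Reciprocal; the coarser FP16 result only on explicit request."""
        r = 1.0 / x
        return fp16(r) if partial_precision else r

    full = rcp(9.0)                          # default: full precision
    pp = rcp(9.0, partial_precision=True)    # opt-in: FP16 result
    assert abs(full - 1/9) < 1e-15           # default path is exact here
    assert 1e-6 < abs(pp - 1/9) < 1e-3       # opt-in path is visibly coarser
    ```

    The design point is that the lower-quality result never happens unless the developer writes the request into the shader.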
     
  11. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
    Yes it was, my email exchange with them...

    Once this hit the net from this forum, 3DGPU interviewed them and they stated this:

    Then I'm sure someone from ATI talked to them, and they retracted some of their comments below:



    Now I do find it humorous that they complain about getting a DX9 card in October when DX9 wasn't even released yet, and that they didn't have ANY FX cards till after Xmas (remember, it was January before any developers had NV30-class boards, so the 9700 was approx. 4 months ahead of their 'early Nvidia hardware')...but money talks. I have many email exchanges since the one above where they try to convince me the FX cards are better, after developing their TWIMTBP game engine on a 9700...developers much like bridge-it are no better than used car salesmen selling lemons.

     
  12. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile

    I realize this makes some kind of sense to you, radar, but I have to tell you I find it quite incomprehensible...:) There's another way of looking at it that I'd like to suggest you consider.

    Most likely the "vast majority of issues" you say occur with nVidia D3d drivers as opposed to nVidia OpenGL drivers (just assuming your comment is valid) are the result of the fact that for every OpenGL-engine title shipped there are 5-10 D3d titles shipped. The ratio may be higher than that, actually--I confess to never having done a scientific study on the subject. So from a purely numerical standpoint you'd expect a much greater aggregate number of D3d bugs than OpenGL bugs, simply because so many more D3d titles get shipped.

    The problem with "bugs" is that they are game-specific, as opposed to API specific. What often occurs is that the way one developer supports a certain feature in his game under his interpretation of an API may be different somehow from the way another developer supports the same API feature in another game, and one may work well with an IHV's drivers while the other requires an IHV driver adjustment (or a patch from the game developer) to tweak or fix the problem. Bugs are "Equal Opportunity" about APIs when it comes to 3d games.

    Second, you can't look at an API feature set and expect it to provide you with a transistor blueprint for building a 3d chip to support it. What an IHV does is evaluate the hardware features he will need to support, and those he wants to support, in an upcoming chip, and he does this by ascertaining the feature support required by both APIs (or rather the particular versions of the APIs he believes pertinent to the lifespan of the architecture he is designing), and then he sets out to design a chip that will support all of those considerations in hardware. This is all done, of course, in collaboration with driver design people in a coordinated fashion, so that when all is said and done everybody's happy with the chip and it meets the IHV's expectations as well as those of its potential customers.

    IE, there's no such thing as an "OpenGL" chip or a "D3d" chip. Even though ATi and nVidia may support some of the same features under the same APIs, or between APIs, the underlying hardware and driver interface between the two product lines is much different. The way nVidia and ATi support "trilinear filtering," for instance, is much different in terms of the hardware and drivers found in each IHV's products which is employed to support that feature. Of course, trilinear support under OpenGL in nV3x is supported by the same hardware that provides trilinear support in NV3x under D3d. So you can see that chips are designed to support "whatever" API, as opposed to a chip being designed to support one API in preference to the other (which isn't a logical design direction for obvious reasons.)

    You might get confused about this from the fact that R3x0 is generally referred to as a "DX9" chip. But that interpretation is merely confusing an API with hardware. The limitation relative to R3x0 in OpenGL is not a hardware limitation--it's only limited by the fact that the OpenGL API does not as of yet have as much *standard functionality* built into it as DX9. When OpenGL supports the same standard functionality as DX9, then R3x0 could be said to be just as much an "OpenGL2.x" chip as it can be said to be a "DX9" chip. The term "DX9" with respect to R3x0 is simply a shortcut for describing the fact that R3x0 supports the DX9 feature set in hardware, that's all.

    OpenGL does allow for a mechanism not supported under D3d to get around the fact that its standard feature set is not as complete as what is supported under D3d (ironically, "extension" support was hacked into the OpenGL API years ago, because it was thought that the slow pace of D3d API development at the time would be counterproductive to IHV hardware development, and slow down 3d hardware innovation. Through the OpenGL extension, IHV's could innovate and support new hardware features even if D3d was very slow in picking up support for those features. Just about the opposite situation is true today, which is why the extensions have to go.) And that's the proverbial "OpenGL extension" we all know and love so much...:)

    In theory this means an IHV could design into his vpu an "elegant kitchen sink" or an "8-track tape player," or both, and could support them through extensions in his OpenGL driver immediately, and therefore it simply wouldn't matter a flip whether M$ supported the same features in D3d, and there was no waiting around on the "next version" of OpenGL to have to put up with prior to being able to support the feature in an IHV driver (as you'd have to do with D3d.) Sounds wonderfully optimistic, but there's a problem here, though...

    The purpose of an API is to make things *standard* for the game developer in terms of his game's feature support. Since D3d is a fixed API in terms of feature support, no extensions allowed, a game developer can support D3d features 1-10 (as an example), or any combination of those features, according to his convenience when developing his game--and he can be assured that those features will work with any IHV's hardware and drivers which meet the specifications for the version of D3d his game requires.

    With OpenGL, however, the "extensions" approach has actually complicated matters for the developer. Instead of a standard approach to supporting 3d features in a game, the OpenGL developer can be assured of a much smaller standard API-supported feature set that he can count on, and lots and lots of *different* extensions, IHV-specific extensions, for the support of many of the features standard under DX9 but not supported under OpenGL by anything other than extensions. This is not a good situation for the OpenGL API, frankly, as the added complication of this approach simply dilutes developer interest in using the OpenGL API. Hopefully, this disparity between the APIs will be rectified with an upcoming version of OpenGL which will meet or exceed the standard feature support found in DX9, but whatever will be will be...:)

    Yes, I think you are probably right in assuming that the stalker developers most likely made a conscious decision about which API they would build their game engine around....:) I'm glad you are "confident" about Doom3--I don't know what I think about it, yet.

    Heh...:) Yes, I'm sure M$ sat back and said, "Eureka! Let's design DX9 just to slow everybody down to the point of unplayability! What a stellar idea!"

    I hate to let you in on the secret, radar, but your particular point of view relative to "DX9" and "being slowed to the point of unplayability" only applies to you because you are running nV3x. And you can't say you didn't know better, as you've been hanging out here for awhile, so ignorance is no excuse, is it?...:)

    You really need to get some perspective about these things...First you need to check on the ratio between D3d and OpenGL titles shipped in the last few years, and then you need to try and appreciate the fact that just because R3x0 is so much faster at DX9 support than nV3x there is really no reason to expect much difference in the performance delta just because you run an OpenGL game...Well--it'll soon be a New Year--2004--hopefully nVidia can catch up so that you can stop apologizing for them, right?...:)
     
  13. Mark

    Mark aka Ratchet
    Regular

    Joined:
    Apr 12, 2002
    Messages:
    604
    Likes Received:
    33
    Location:
    Newfoundland, Canada
    I'd like to have demalion disagree with one of WaltC's essay-like posts just so I can see how long a combined WaltC+demalion point-by-point rebuttal post would be... I'm betting the resulting page length would crash most browsers...

    ;)
     
  14. radar1200gs

    Regular

    Joined:
    Nov 30, 2002
    Messages:
    900
    Likes Received:
    0
    nVidia's GPU's are developed first and foremost with the needs of professional 3D applications and users in mind. They are marketed to this segment as Quadro's. This market segment does not use DirectX, it uses OpenGL.

    The same GPU's are then adapted to mainstream use.
     
  15. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    That's exactly why Quadro cards come out months later than their mainstream counterparts, the GeForce 256 came out before anyone even thought of the Quadro, and more incredibly silly BS.
     
  16. YeuEmMaiMai

    Regular

    Joined:
    Sep 11, 2002
    Messages:
    579
    Likes Received:
    4
    nvidia better get their act together as the industry is not going to accept another lackluster card from them. Nv30 is difficult to program for and the DX9 performance is crappy at best......
     
  17. Heathen

    Regular

    Joined:
    Jul 6, 2002
    Messages:
    380
    Likes Received:
    0
    That's not a nice thing to think. I'm only just starting to wake up from the new year. :evil:
     
  18. Zengar

    Regular

    Joined:
    Dec 3, 2003
    Messages:
    288
    Likes Received:
    1
    It would be interesting to know how many exponent/mantissa bits ATI implements with its 24-bit format.

    As far as I know, nvidia's 32-bit format is a 23-bit mantissa and the rest exponent; 16-bit is a 10-bit mantissa and the rest exponent (the same precision as 12-bit fixed point).
    What would also be interesting: the partial precision of the RCP and RSQ instructions on ATI cards. On my GFFX 5600 I get RCP 9 = 1.111 with 32-bit precision and RCP 9 = 1.11 with 16-bit precision. All results are computed via rendering to a floating-point framebuffer.

    Could someone try this on ATI?
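    For what it's worth, R3x0's FP24 is commonly reported as a 16-bit mantissa plus 7-bit exponent plus sign (s16e7) -- treat that layout as a cited assumption, not something verified in this thread. Taking it at face value, a short sketch turns each format's mantissa width into the decimal precision it can hold, which lines up with the digit counts Zengar observed:

    ```python
    # Decimal precision implied by each format's mantissa width.
    # The FP24 s16e7 layout is an assumption (commonly reported for R3x0),
    # not something measured here.
    import math

    formats = {"FP16 (s10e5)": 10, "FP24 (s16e7)": 16, "FP32 (s23e8)": 23}
    for name, mant_bits in formats.items():
        # +1 for the implicit leading bit of a normalized float
        ulp = 2.0 ** -(mant_bits + 1)             # max relative rounding error
        digits = (mant_bits + 1) * math.log10(2)  # equivalent decimal digits
        print(f"{name}: ~{digits:.1f} decimal digits, rel. error <= {ulp:.2e}")
    ```

    That works out to roughly 3.3 digits for FP16, 5.1 for FP24, and 7.2 for FP32, which matches seeing about three reliable digits of RCP 9 at partial precision on the GFFX.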
     
  19. arjan de lumens

    Veteran

    Joined:
    Feb 10, 2002
    Messages:
    1,274
    Likes Received:
    50
    Location:
    gjethus, Norway
  20. X2

    X2
    Newcomer

    Joined:
    Dec 21, 2003
    Messages:
    23
    Likes Received:
    0
    Don't know about demalion, but I disagree with Walt on several points (especially the OpenGL extension issue). Though I'm not willing to write essays on it. It has been discussed too many times...
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.