Why was glide faster than opengl / direct3d?

Discussion in 'Architecture and Products' started by Commenter, Dec 18, 2013.

  1. Commenter

    Newcomer

    Joined:
    Jan 9, 2010
    Messages:
    234
    Likes Received:
    17
    I just wondered why a proprietary API like Glide was able to achieve faster rendering on 3dfx's hardware of the day than open APIs like D3D and OpenGL?
     
  2. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,514
    Likes Received:
    8,720
    Location:
    Cleveland
    CTM: closer to the metal.
     
  3. Colourless

    Colourless Monochrome wench
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,274
    Likes Received:
    30
    Location:
    Somewhere in outback South Australia
    Glide also didn't need to make calls into the kernel to perform operations, which was a massive performance advantage compared to D3D.
     
  4. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    8,457
    Likes Received:
    580
    Location:
    WI, USA
    D3D was also total garbage until version 5 or 6.

    OpenGL wasn't well suited to simple 90s GPUs meant for games.
     
  5. Pixel

    Regular

    Joined:
    Sep 16, 2013
    Messages:
    981
    Likes Received:
    437
    Aren't you also talking about a time when 3D rendering was more CPU-bound? Didn't the old 3D accelerator cards take less workload off the CPU compared to cards nowadays? Is that another reason the API weighing down the CPU became less and less of a factor?
     
  6. Exophase

    Veteran

    Joined:
    Mar 25, 2010
    Messages:
    2,406
    Likes Received:
    429
    Location:
    Cleveland, OH
    Yes, and the CPUs didn't have underutilized additional hardware threads to soak up some of the API overhead.

    The overhead for legacy OpenGL's immediate mode was really egregious: at least one, if not multiple, function calls per vertex, plus having to evaluate state for each one. I bet the overhead was on a similar level to the software T&L itself. I have to seriously wonder what they were thinking.
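    To illustrate the call-count gap described above, here is a minimal, hypothetical C sketch. The gl* functions below are stand-in stubs that only count invocations, not real driver entry points; in a real driver each of those calls also crossed into the API and re-checked state.

    ```c
    #include <stdio.h>

    static long calls = 0;

    /* Stubs standing in for API entry points. */
    static void glBegin(int mode)                      { (void)mode; calls++; }
    static void glVertex3f(float x, float y, float z)  { (void)x; (void)y; (void)z; calls++; }
    static void glEnd(void)                            { calls++; }
    static void glDrawArrays(int mode, int first, int count)
                                                       { (void)mode; (void)first; (void)count; calls++; }

    /* Immediate mode: glBegin + 3x glVertex3f + glEnd per triangle. */
    long count_immediate(int triangles) {
        calls = 0;
        for (int t = 0; t < triangles; ++t) {
            glBegin(0x0004 /* GL_TRIANGLES */);
            glVertex3f(0, 0, 0);
            glVertex3f(1, 0, 0);
            glVertex3f(0, 1, 0);
            glEnd();
        }
        return calls;
    }

    /* Vertex arrays: the same geometry in a single submission call. */
    long count_arrays(int triangles) {
        calls = 0;
        glDrawArrays(0x0004, 0, triangles * 3);
        return calls;
    }

    int main(void) {
        printf("immediate mode: %ld calls, vertex arrays: %ld call(s)\n",
               count_immediate(10000), count_arrays(10000));
        return 0;
    }
    ```

    For 10,000 triangles that is 50,000 API calls in immediate mode versus one with vertex arrays, which is why the per-vertex path was so costly next to the software T&L work itself.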
     
  7. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    8,457
    Likes Received:
    580
    Location:
    WI, USA
    If you wanted your K6-based CPU to be more tolerable, you certainly wanted Glide instead of D3D or OpenGL. Voodoo cards also tended to be the only cards that worked right on non-Intel AGP implementations of the time.

    Also, Voodoo Rush was the first 3dfx product with hardware triangle setup. Voodoo1 was thus extra CPU-dependent. This was a point that Rendition drove home against Voodoo1, because even the V1000 did triangle setup, and was also DMA-driven, so more efficient.
     
  8. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,893
    Likes Received:
    2,311
    Also, there were 3DNow! patches available, mainly for games that used 3dfx's MiniGL.
     
  9. backgroundpersona

    Banned

    Joined:
    Feb 19, 2014
    Messages:
    15
    Likes Received:
    0
    Glide was better than OpenGL and DirectX because it was "closer to the metal": it was a slimmed-down, OpenGL-like API for games, so less code was needed for the same result and less time was spent in the API before execution.

    Also, Glide was an exclusive API for 3dfx cards, so it used that hardware more efficiently. It was something akin to what Mantle is today: an exclusive API for the GPU.
     
  10. UniversalTruth

    Veteran

    Joined:
    Sep 5, 2010
    Messages:
    1,747
    Likes Received:
    22
    Okey-dokey, so why doesn't Nvidia introduce something like Glide in order to combat Mantle?
     
  11. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,288
    Location:
    Helsinki, Finland
    Nvidia has OpenGL extensions (such as bindless resources, multidraw and custom resource management) that provide performance boosts similar to Mantle's. Sometimes Mantle is better, but for engines designed to abuse Nvidia's bindless multidraw, it certainly beats Mantle in pure draw call count.
     
  12. linthat22

    Regular

    Joined:
    Feb 10, 2002
    Messages:
    308
    Likes Received:
    23
    Location:
    Michigan
    So would you say history is repeating itself?
     
  13. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Is anyone actually using that?
     
  14. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    Isn't that OpenGL 4.4, rather than Nvidia-specific stuff?
    All the new features, core and extensions, in GL 4.4 are named GL_ARB_something, and three of these new things are advertised as reducing CPU overhead.
    https://www.opengl.org/documentation/current_version/

    For now AMD and Intel only support up to OpenGL 4.3. I tried to google things, and could find this:
    http://www.dsogaming.com/news/amd-aims-to-give-opengl-a-big-boost-api-wont-be-the-bottleneck/

    It sort of implies AMD is working on OpenGL 4.4 support, but no version is named explicitly. OpenGL extensions are evoked, but without explicitly saying whose extensions they are.
     
  15. Groovounet

    Newcomer

    Joined:
    Dec 12, 2013
    Messages:
    9
    Likes Received:
    0
    The bindless API is interesting because it means that when the GPU is executing something it can access any resources as long as the resource is in GPU accessible memory.

    The bindless API is actually a less efficient API than the resource-binding API, but it provides that feature. Why is having access to an "infinite" number of resources interesting?

    Because of MultiDrawIndirect (OpenGL 4.3, supported by AMD, Intel and NVIDIA), which allows the GPU command processor to submit the draws itself, and for each draw we can index different resources on the GPU.

    What's better than low overhead? No overhead. This replaces CPU-side resource switching with GPU-based resource indexing.
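    The indirect path described above can be sketched through the per-draw record that glMultiDrawElementsIndirect consumes: the CPU (or a compute shader) fills an array of these once, and the GPU command processor walks it, so draw count no longer costs one CPU call per draw. The struct layout below follows ARB_multi_draw_indirect; the fill values are made up for illustration.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Per-draw record as laid out by ARB_multi_draw_indirect:
     * five tightly packed 32-bit unsigned integers. */
    typedef struct {
        uint32_t count;         /* indices in this draw             */
        uint32_t instanceCount; /* usually 1                        */
        uint32_t firstIndex;    /* offset into the index buffer     */
        uint32_t baseVertex;    /* added to each fetched index      */
        uint32_t baseInstance;  /* can carry a per-draw resource id */
    } DrawElementsIndirectCommand;

    enum { DRAWS = 1000 };
    static DrawElementsIndirectCommand cmds[DRAWS];

    int main(void) {
        /* Build 1000 draw records; in a real renderer this buffer would
         * be uploaded to the GPU and submitted with one
         * glMultiDrawElementsIndirect call. */
        for (uint32_t i = 0; i < DRAWS; ++i) {
            cmds[i].count         = 36;      /* e.g. one indexed cube   */
            cmds[i].instanceCount = 1;
            cmds[i].firstIndex    = 0;
            cmds[i].baseVertex    = i * 24;
            cmds[i].baseInstance  = i;       /* per-draw resource index */
        }
        /* The spec-mandated layout has no padding. */
        assert(sizeof(DrawElementsIndirectCommand) == 20);
        printf("%d draws prepared for one API call\n", DRAWS);
        return 0;
    }
    ```

    Combined with bindless or array-indexed resources, baseInstance (or gl_DrawID in GL 4.3+) is what lets each GPU-submitted draw pick its own textures and buffers without any CPU-side binding.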

    From my experiments with that stuff in a synthetic benchmark designed to hit the GPU command processor bottleneck, I could launch 800,000 draws per frame at 60 Hz on Kepler and 300,000 draws per frame at 60 Hz on Southern Islands.

    This "CTM: closer to the metal" idea is BS; it doesn't mean anything. What does it mean in practice? What code is it going to allow you to write? Those are the questions we need to ask ourselves when we think about graphics APIs.
     
  16. Groovounet

    Newcomer

    Joined:
    Dec 12, 2013
    Messages:
    9
    Likes Received:
    0
    If you are interested in the current state of the OpenGL implementations, I am keeping track of that here: http://www.g-truc.net/doc/OpenGL matrix 2014-02.pdf

    Intel OpenGL drivers used to be absolutely unusable crap back in 2007. These days Intel is 100% behind OpenGL and is doing an amazing job catching up. Reliability is pretty good and performance makes sense. So far the drivers support OpenGL 4.2 plus the most important OpenGL 4.3 and OpenGL 4.4 extensions. A year ago Intel supported only OpenGL 4.0 plus some not-so-important OpenGL 4.1 and OpenGL 4.2 extensions. This is astonishing work. http://www.g-truc.net/doc/OpenGL matrix 2013-02.pdf

    I have good hope the drivers will support OpenGL 4.3 by Siggraph 2014.
     
  17. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    I don't know why, but I'm not sure that is really a statement from him; it may be part of an article written when Mantle was released.

    Is it really your work, or are you just citing some source? On your second post, some here asked about the methodology of the test and raised many questions about it, but we have not seen any response. I would imagine someone so involved in development would be happy to respond to other developers here.

    So far, this gives me a strange feeling about your posts. (4 posts here, all about the same thing; if you were a developer, I would imagine you would already have taken a bit of time to respond to the questions on your December posts.)
     
    #17 lanek, Mar 5, 2014
    Last edited by a moderator: Mar 5, 2014
  18. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,288
    Location:
    Helsinki, Finland
    Yes, these extensions are now officially part of OpenGL. Nvidia used to have similar proprietary extensions before OpenGL 4.4.
    Yes :)
    That's pretty much what you can achieve with multidraw (because of command processing bottleneck).

    If you need to render more (say, up to 2 million objects per frame at 60 fps), you need to manually fetch (SoA layout) vertex data from raw buffers (a manual indirection for vertex data -- it is to meshes what virtual texturing is to textures). This way you can render an infinite amount of separate geometry with a single standard draw call. When combined with virtual texturing (and some other ingredients), this pretty much allows rendering of the whole scene with a single draw call.
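    The manual vertex fetch described above ("vertex pulling") can be sketched in plain C standing in for GLSL: instead of fixed-function vertex attributes, the shader derives a global vertex index from a per-draw table and reads position data from raw SoA buffers itself. The names pos_x, pos_y and first_vertex are hypothetical, and the two tiny meshes are made-up data.

    ```c
    #include <stdio.h>

    /* Raw SoA position streams for two concatenated meshes
     * (3 vertices each), as they would sit in a GPU buffer. */
    #define NUM_VERTS 6
    static const float pos_x[NUM_VERTS] = {0, 1, 0,  2, 3, 2};
    static const float pos_y[NUM_VERTS] = {0, 0, 1,  0, 0, 1};

    /* Per-draw table: where each mesh starts in the shared buffer. */
    static const unsigned first_vertex[2] = {0, 3};

    /* What a GLSL vertex shader would do with gl_DrawID/gl_VertexID:
     * compute the global index, then fetch from the raw buffers. */
    void fetch_position(unsigned draw_id, unsigned local_vid,
                        float *x, float *y) {
        unsigned vid = first_vertex[draw_id] + local_vid;
        *x = pos_x[vid];
        *y = pos_y[vid];
    }

    int main(void) {
        float x, y;
        /* Vertex 2 of mesh 1 -> global vertex 5. */
        fetch_position(1, 2, &x, &y);
        printf("mesh 1, vertex 2 -> (%.0f, %.0f)\n", x, y);
        return 0;
    }
    ```

    Because the indirection lives in the shader rather than in per-mesh vertex buffer bindings, any number of separate meshes can share one buffer and one draw call; the same trick is what virtual texturing does for textures.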
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.