Unigine DirectX 11 benchmark

Discussion in 'PC Gaming' started by Neb, Oct 23, 2009.

  1. Rodéric

    Rodéric a.k.a. Ingenu
    Moderator Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,957
    Likes Received:
    818
    Location:
    Planet Earth.
    Windows 7 32bit...
    That's making me depressed.
     
  2. Lonbjerg

    Newcomer

    Joined:
    Jan 8, 2010
    Messages:
    197
    Likes Received:
    0
    I hope they kill 32-bit with their next OS; IMHO, 32-bit Win 7 should never have been made.
     
  3. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,241
    Likes Received:
    332
    Location:
    Australia
    A lot of their server apps are 64-bit only (Exchange 2010, TMG, SCOM, etc.), so looking forward I would really expect the next OS to be 64-bit only.
     
  4. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,654
    Likes Received:
    1,270
    Here's some 64-bit for you ;)

    No AA: [screenshot]

    No AA with the Radeon 5870 clocked at 900MHz: [screenshot]

    4xAA with default clocks: [screenshot]
     
  5. Malo

    Malo YakTribe.games
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,682
    Likes Received:
    2,726
    Location:
    Pennsylvania
    Oh great, the next fucking drama. Just a guess: an optimized path for Fermi for the release? And if so, does that invalidate any comparisons?
     
  6. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    15,494
    Likes Received:
    4,410
    Interesting that your min FPS goes down when overclocked. I'm wondering if that's a result of thermal throttling at the points where the FPS takes a dive, lowering it even further, since those same points are also (theoretically) going to put the most stress on the video card.

    I'd be interested to see what happens to the min FPS when you underclock the core slightly. I'd test it myself but don't have much time right now. I haven't even been able to check in on B3D much; I'm soooo behind in so many threads. :(

    Regards,
    SB
     
  7. Ike Turner

    Veteran Regular

    Joined:
    Jul 30, 2005
    Messages:
    1,654
    Likes Received:
    1,270
    Yeah, it looks like the error detection/retry (EDC) on the GDDR5 bus kicks in when overclocking the memory, resulting in those FPS hiccups. Will try to underclock it later and run some benchmarks.
     
  8. ECH

    ECH
    Regular

    Joined:
    May 24, 2007
    Messages:
    682
    Likes Received:
    7
    Read his post here
     
  9. Malo

    Malo YakTribe.games
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,682
    Likes Received:
    2,726
    Location:
    Pennsylvania
    Yep, guess I was right. What a surprise.
     
  10. Mendel

    Mendel Mr. Upgrade
    Veteran

    Joined:
    Nov 28, 2003
    Messages:
    1,350
    Likes Received:
    17
    Location:
    Finland
    That's an issue for a separate topic really, so I'll make one...

    In a nutshell though, it's an update of an update and I'm not sure if I can further update it to 64 bit.
     
  11. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,787
    Likes Received:
    452
    Location:
    Torquay, UK
  12. Arnold Beckenbauer

    Veteran

    Joined:
    Oct 11, 2006
    Messages:
    1,399
    Likes Received:
    333
    Location:
    Germany
  13. Broken Hope

    Regular

    Joined:
    Jul 13, 2004
    Messages:
    483
    Likes Received:
    1
    Location:
    England
    He's also apparently using the OpenGL renderer, which doesn't even have tessellation support until version 4, as far as I know?
     
  14. Malo

    Malo YakTribe.games
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,682
    Likes Received:
    2,726
    Location:
    Pennsylvania
    Could you use a vendor-specific OpenGL extension?
     
  15. Arnold Beckenbauer

    Veteran

    Joined:
    Oct 11, 2006
    Messages:
    1,399
    Likes Received:
    333
    Location:
    Germany
    In the drivers there is an extension for the "old" tessellator, called GL_AMD_vertex_shader_tessellator, and his Radeon is an HD 4800. But I can't believe that Heaven 2.0 uses the old tessellator, because it's a different render pipeline (no DS, no HS). It's not impossible, though: maybe they used the old tessellator for prototyping or something.
    So the next question would be: why is there no support for DX9 (and DX10/10.1)?

    Edit: So it looks like it's true. With the leaked RC1 you get tessellation with the "old" tessellator and OpenGL.
    http://www.forum-3dcenter.org/vbulletin/showthread.php?p=7926141#post7926141
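
    A minimal sketch of probing for that extension at runtime (illustrative only; it assumes a compatibility-profile GL context, where the aggregate extension string is still exposed, and the helper name is hypothetical):

        #include <cstring>
        #include <GL/gl.h>

        // True if the driver advertises AMD's pre-DX11 tessellator extension.
        // strstr is the quick-and-dirty check; a strict parser would tokenize
        // the space-separated list to rule out substring false positives.
        bool hasAmdVertexShaderTessellator()
        {
            const char* ext =
                reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
            return ext && std::strstr(ext, "GL_AMD_vertex_shader_tessellator");
        }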
     
    #75 Arnold Beckenbauer, Mar 24, 2010
    Last edited by a moderator: Mar 24, 2010
  16. Malo

    Malo YakTribe.games
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    6,682
    Likes Received:
    2,726
    Location:
    Pennsylvania
    C2D @ 2.6Ghz 5770 1Gb 1680x1050 0xAA 4xAF - Score 533
    C2D @ 3.2Ghz 5770 1Gb 1680x1050 0xAA 4xAF - Score 595
    C2D @ 3.2Ghz 5770 1Gb @ 900Mhz 1680x1050 0xAA 4xAF - Score 611
    C2D @ 3.2Ghz 5770 1Gb @ 900Mhz 1680x1050 4xAA 4xAF - Score 471
     
  17. NoahDiamond

    Newcomer

    Joined:
    Mar 25, 2010
    Messages:
    6
    Likes Received:
    0
    Hi, I'm NoahDiamond, from the OCN forums. Yes, the engine uses CUDA calls. No, the engine does not support 64-bit. Yes, it is nVidia-optimized; no, it won't break any records. The Unigine engine is poorly written and depends on only a handful of custom libraries.

    Replication is a feature that can give a performance boost on graphics processors below DirectX 11. Some cards support it natively, and others do not. The HD 5000 series does, but it is not activated by default. You can activate it either in the .bat files, or by starting the benchmark, pressing the ~ (tilde) key, and entering the following command:

    d3d11_render_use_replication 1

    You can simply enter d3d11_render_use_replication on its own to see whether the feature is active.

    "PSSM shadow geometry replication by geometry shader (controlled by the folowing console variables: gl_render_use_geometry_replication, d3d10_render_use_replication, d3d11_render_use_replication)."

    It is a shortcut for rendering shadows, and in the current release of the Heaven 2.0 benchmark it can cause artifacts in shadowed areas on cards that cannot properly handle the feature, with shadows clipping in and out depending on the GPU's cache usage.

    The 5870/5970 cards handle it rather well thanks to their cache design, but it is really written for the Fermi cards, which have separate cache levels to store instructions.
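
    To make the replication idea concrete, here is a compilable toy sketch of what it buys, assuming a D3D10/11-style API with texture arrays; every function below is a hypothetical stand-in, not Unigine code:

        #include <cstdio>

        constexpr int kNumCascades = 4;  // a typical PSSM split count

        // Hypothetical stand-ins for real render-API calls.
        void bindShadowSlice(int i) { std::printf("bind cascade slice %d\n", i); }
        void bindShadowArray()      { std::printf("bind whole cascade array\n"); }
        void drawShadowCasters()    { std::printf("submit scene geometry\n"); }

        int main()
        {
            // Without replication: the scene is submitted once per cascade.
            for (int i = 0; i < kNumCascades; ++i) {
                bindShadowSlice(i);
                drawShadowCasters();
            }

            // With replication: geometry is submitted once, and a geometry
            // shader re-emits each triangle for every cascade, routing it to
            // a slice via SV_RenderTargetArrayIndex. That trades draw-call
            // and vertex work for extra geometry-shader output, which is
            // where per-GPU behavior (and the artifacts above) can differ.
            bindShadowArray();
            drawShadowCasters();
        }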

    As for tessellation... Voxels (volumetric pixels) are a more efficient method of rendering complex detail: they save an enormous amount of processing power and memory, and can run directly on dedicated tessellation units. Tessellation was a great idea, but it requires separate model and texture sets, because seaming is a problem when the two don't exist as a matched pair in the program's data. To avoid seam gaps, the models/textures need to overlap, and Unigine has not bothered with that issue.

    The tessellation seam-gap problem does not exist in Metro 2033 (4A Engine), Dirt 2, or Aliens vs. Predator 2010. The authors of those programs took the issue into account and worked with the way tessellation functions.
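
    The usual way engines avoid such cracks is to derive each edge's tessellation factor only from data the two adjacent patches share, so both sides compute an identical value. A small self-contained sketch of that rule (illustrative; not code from any of the engines named above):

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };

        static float dist(const Vec3& a, const Vec3& b)
        {
            const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        // The factor depends only on the shared edge's endpoints and the
        // camera, never on the rest of the patch, so the two patches
        // flanking the edge always agree and no gap can open between them.
        float edgeTessFactor(const Vec3& a, const Vec3& b, const Vec3& cam,
                             float detail)  // tuning constant (assumed)
        {
            const Vec3 mid{ (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f,
                            (a.z + b.z) * 0.5f };
            const float f = detail * dist(a, b) / dist(mid, cam);
            return std::clamp(f, 1.0f, 64.0f);  // 64 is the D3D11 max factor
        }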

    id Tech 5 took a different approach, and it is still an amazing-looking engine to this day. I don't want to sound like a John Carmack fanboy, but he does take his time writing his software, and the id Tech engines have always been at the top of the charts using existing technology.

    Fermi will ROCK in the Unigine Heaven benchmark, but it won't mean much in the real world, as the engine does not take into account that tessellation, PhysX, render, and shader processes all have to share the same CUDA cores.

    In short, the new benchmark is a TWIMTBP title: The Way It's Meant to Be Played. Many games released by THQ and EA have intentionally crippled ATI graphics cards, and several legal issues have come about because of it. Prime examples are Crysis and the Need for Speed games; they were totally crippled on ATI cards until patches were released later. Now the ATI cards dominate them.

    nVidia is in a marketing position now. They are taking a huge net loss, but they are inflating their market value by buying back stock from investors at more than they paid for it, and by omitting recurring losses from their financial statements. If you read between the lines and look at their position, they need a serious boost, and the only way to get it is to stop the marketing hype, stop bribing developers, and get back to what they were in the first place: a chip maker.

    TSMC is tired of working with them; board integrators don't want to deal with them; Apple has dropped them for the time being; many notebook manufacturers have stopped integrating their chipsets (though they do use their PCI Express cards); and Microsoft was upset that their poorly written software caused the majority of Vista crashes during its first two years of release, prompting another suit.

    Intel doesn't want to work with them much any more and has partnered with AMD to integrate ATI Radeon graphics into Intel chips. Intel and AMD/ATI are now working together, with an agreement that AMD stays in the low- to mid-range performance/price chips and Intel stays in the mid- to high-end performance/price chips. All Intel chips with dual PCI Express support CrossFireX, ATI uses Intel's license for Havok, Intel provides optimizations for AMD, and AMD provides dominant high-end integrated solutions for Intel.

    Meanwhile, nVidia has lost the right to produce new CPU sockets, has ceased producing chipsets, and is forced to license SLI to other board makers in order to sell its products.

    I could go on, but nVidia needs a very good product and needs to make money, and seeing as their new products are due to be in short supply and high-priced, it will be hard for them to recover in the current market. nVidia spent their money buying up smaller companies like Ageia and 3dfx and trying to push into the business sector with Tesla. Now they are losing money faster than they can recoup it.

    Fermi is what all the previous CUDA boards have been: high-powered emulators. They run DirectX in CUDA, tessellation in CUDA, OpenCL in CUDA, OpenGL in CUDA, and it is not helping.

    Here's a way to tell whether a feature on an nVidia card is being emulated: see if it is a CUDA program. If it is, it is not in hardware but in parallel software. nVidia may, and will, get away with running huge amounts of tessellation in benchmarks, but it will not hold up in games where other processes need to run.

    The new GTX 480 is basically two GTX 275 cores that have been shrunk down, upgraded, extended, and mated into a single GPU. This is why they draw so much power.

    In the automotive industry they used to say "there is no replacement for displacement," but we all know that is definitely no longer the case. It is the same for graphics cards. The future of graphics cards is dedicated hardware that can be used for multiple functions without restriction, with multiple SIMD inputs and out-of-order processing.

    Yeah, I make long posts... but sometimes it is hard to fit detailed information into a slogan.

    If you want a slogan, it's this: nVidia is dying. Their market share has fallen well below 50% of the industry, and they are in a very tight predicament. ATI has focused on developing a more advanced, specialized piece of hardware that can do more for less money, and they are well positioned, with support across the board.

    By the way... Does anyone here own a Tegra Phone?
     
  18. Daozang

    Regular

    Joined:
    Aug 6, 2007
    Messages:
    891
    Likes Received:
    134
    Location:
    Athens
    Heaven Benchmark v2.0
    FPS: 34.3
    Scores: 865
    Min FPS: 15.0
    Max FPS: 64.9


    Hardware

    Operating system: Windows 7 (build 7600) 64bit

    CPU model: Intel(R) Core(TM) i7 CPU 960 @ 3.20GHz

    GPU model: NVIDIA GeForce GTX 285 8.17.11.9621 1024Mb

    Settings

    Render: direct3d10
    Mode: 1920x1200 4xAA fullscreen
    Shaders: high
    Textures: high
    Filter: trilinear
    Anisotropy: 8x
    Occlusion: enabled
    Refraction: enabled
    Volumetric: enabled
    Replication: disabled
    Tessellation: disabled


    It's impressive, but I'm still waiting for something jaw-dropping... I haven't had that since the day I saw Tomb Raider running on Voodoo graphics...
     
  19. NoahDiamond

    Newcomer

    Joined:
    Mar 25, 2010
    Messages:
    6
    Likes Received:
    0
    I still say Pong had the best graphics. Nothing can touch that. Nothing.
     
  20. nutball

    Veteran Subscriber

    Joined:
    Jan 10, 2003
    Messages:
    2,069
    Likes Received:
    378
    Location:
    en.gb.uk
    tl;dr

    Great, that's all we need here, more trolling. Thanks for joining up.
     