Unigine DirectX 11 benchmark

A lot of their server apps are 64-bit only (Exchange 2010, TMG, SCOM, etc.), so looking forward I would really expect the next OS to be 64-bit only.
 
No AA with a Radeon 5870 clocked at 900MHz:


Interesting that your min FPS goes down when overclocked. I'm wondering if this is a result of thermal throttling at those points where the FPS takes a dive, lowering it even further, since those same points are also (theoretically) going to put the most stress on the video card.

I'd be interested to see what happens with min FPS when you underclock the core slightly. I'd test it myself but don't have much time right now. Haven't even been able to check in on B3D much. I'm soooo behind in so many threads. :(

Regards,
SB
 

Yeah, it looks like the GDDR5 error detection and retransmission is in full action when overclocking the memory, resulting in those FPS hiccups. Will try to underclock it later and run some benches.
 
Can anyone here verify whether Heaven 2.0 can really use the fixed-function tessellation hardware of the RV7xx series?

Just look at this:
http://www.xtremesystems.org/forums/showpost.php?p=4303197&postcount=98

 
He's also apparently using the OpenGL renderer, which doesn't even have tessellation support until version 4.0 as far as I know?

In the drivers there is an extension for the "old" tessellator called "GL_AMD_vertex_shader_tessellator", and his Radeon is an HD 4800, but I can't believe that Heaven 2.0 uses the old tessellator, because it's a different render pipeline (no DS, no HS). It's not impossible, though: maybe they used the old tessellator for prototyping or something.
So the next question would be: why is there no support for DX9 (and DX10/10.1)?

Edit: So, it looks like it's true. With the leaked RC1 you get tessellation with the "old" tessellator and OpenGL.
http://www.forum-3dcenter.org/vbulletin/showthread.php?p=7926141#post7926141
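
If anyone wants to check for themselves whether their driver exposes that extension, here is a minimal sketch. It assumes an OpenGL context has already been created and made current (e.g. via GLFW or SDL); only the extension name itself comes from the driver strings discussed above, everything else is just illustrative.

// Check whether the driver advertises the "old" AMD tessellator extension.
// Assumes a valid OpenGL context is current before hasAmdVsTessellator() runs.
// (On Windows, include <windows.h> before <GL/gl.h>.)
#include <GL/gl.h>
#include <cstdio>
#include <cstring>

bool hasAmdVsTessellator()
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != nullptr
        && std::strstr(ext, "GL_AMD_vertex_shader_tessellator") != nullptr;
}

int main()
{
    // ... create a window and GL context here before calling the check ...
    std::printf("old tessellator %s\n",
                hasAmdVsTessellator() ? "exposed" : "not exposed");
    return 0;
}

(On a 3.x+ core profile context glGetString(GL_EXTENSIONS) is deprecated and you would iterate glGetStringi(GL_EXTENSIONS, i) instead, but the compatibility contexts these HD 4800 drivers create still return the full extension list.)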
 
C2D @ 2.6GHz, 5770 1GB, 1680x1050 0xAA 4xAF - Score 533
C2D @ 3.2GHz, 5770 1GB, 1680x1050 0xAA 4xAF - Score 595
C2D @ 3.2GHz, 5770 1GB @ 900MHz, 1680x1050 0xAA 4xAF - Score 611
C2D @ 3.2GHz, 5770 1GB @ 900MHz, 1680x1050 4xAA 4xAF - Score 471
 
Hi. I'm NoahDiamond, from the OCN forums. Yes, the engine uses CUDA calls. No, the engine does not support 64-bit. Yes, it is nVidia-optimized; no, it won't break any records. Unigine is poorly written and depends on only a handful of custom libraries.

Replication is a feature for graphics processors below DirectX 11 that can provide a performance boost. Some cards support it natively, and others do not. The HD 5000 series does, but it is not enabled by default. You can activate it either in the .bat files, or by starting the benchmark, pressing the ~ (tilde) key, and entering the following command.

d3d11_render_use_replication 1

You can simply enter d3d11_render_use_replication to see if the feature is active.

"PSSM shadow geometry replication by geometry shader (controlled by the folowing console variables: gl_render_use_geometry_replication, d3d10_render_use_replication, d3d11_render_use_replication)."

It is a shortcut for rendering shadows, and in the current release of the Heaven 2.0 benchmark it can cause artifacts. On cards that cannot properly handle this feature, shadows can show artifacts and clip in and out, depending on the GPU's cache usage.

The 5870/5970 cards handle it rather well due to their cache use, but it is really written for the Fermi cards that have separate cache levels to store instructions.

As for tessellation... Voxels (volumetric pixels) are a more efficient method of rendering complex detail: they save an ENORMOUS amount of processing power and memory, and can be run directly on dedicated tessellation units. Tessellation was a great idea, but it requires separate model and texture sets, as seaming is a problem when they don't exist as a duality in the program's data sets. To avoid seam gaps, the models/textures need to overlap, and Unigine did not bother with that issue.

The tessellation seam gap problem is not present in Metro 2033 (the 4A Engine), Dirt 2, or Aliens vs. Predator 2010. The developers of those games took the issue into account and worked within what tessellation can do.

The id Tech 5 engine took a different approach, and is still an amazing looking engine to this day. I don't want to sound like a John Carmack fanboy, but he does take his time in writing his software, and the id Tech Engines have always been on top of the charts, using existing technology.

Fermi will ROCK in the Unigine Heaven benchmark, but it won't mean much in the real world, as the engine does not take into account that tessellation, PhysX, render and shader workloads all have to share the same CUDA cores.

In short, the new benchmark is a TWIMTBP game: The Way It's Meant To Be Played. Many games released by THQ and EA have intentionally crippled ATI graphics cards, and several legal issues have come about because of this. Prime examples are Crysis and the Need For Speed games. They were totally crippled on ATI cards until patches were released later. Now the ATI cards dominate them.

nVidia is in a marketing position now, and they are taking a HUGE net loss, but they are inflating their market value by buying back stock from investors at more than they paid, and by omitting recurring losses from their financial statements. If you read between the lines and look at their position, they need a serious boost, and the only way to do this is to stop the marketing hype, stop bribing developers, and get back to what they were in the first place: a chip maker.

TSMC is tired of working with them, board integrators don't want to deal with them, Apple has dropped their additions for the time being, many notebook manufacturers have stopped integrating their chipsets (though they do use their PCI Express cards), and Microsoft is upset with their poorly written software causing the majority of Vista crashes during its first two years of release, prompting another suit.

Intel don't want to work with them much any more, and have partnered with AMD to integrate ATI Radeon graphics into the Intel chips. Intel and AMD/ATI are now working together in the industry, with an agreement that AMD stays in the low to mid-range performance/price chips, and Intel stays in the mid to high end performance/price chips. All Intel chips with dual PCI express support CrossfireX, and ATI are using Intel's license for Havok. Intel is providing optimizations for AMD, and AMD is providing dominating high end integrated solutions for Intel.

Meanwhile, nVidia has lost their rights to produce new CPU sockets and has ceased to produce chipsets, and is forced to license SLI to other board makers in order to sell their products.

I could go on, but nVidia needs a very good product, and they need to make money, and seeing as their new products are due to be in short supply and high priced to work in the current market, it will be hard for them to recover. nVidia spent their money buying up smaller companies like Ageia and 3dfx and trying to push into the business sector with Tesla. Now they are losing money faster than they can recoup it.

Fermi is what all the previous CUDA boards have been... high-powered emulators. They run DirectX in CUDA, tessellation in CUDA, OpenCL in CUDA, OpenGL in CUDA, and it is not helping.

Here's a way to tell if a feature on an nVidia card is being emulated: see if it is a CUDA program. If it is, it is not in hardware but in parallel software. nVidia may, and will, get away with running huge amounts of tessellation in benchmarks, but it will not hold up in games when other processes need to run.

The new GTX 480 is basically two GTX 275 cores that have been shrunk down, upgraded, extended and mated into one single GPU. This is why they draw so much power.

In the automotive industry, they used to say "there is no replacement for displacement", but we all know this is definitely no longer the case. It is the same for graphics cards. The future of graphics cards is dedicated hardware functions that can be used for multiple purposes and are not restricted, with multiple SIMD inputs and out-of-order processing.

Yeah, I make long posts... but sometimes it is hard to fit detailed information into a slogan.

If you want a slogan, it's this: nVidia is dying, their market share is falling WELL below 50% of the industry, and they are in a very tight predicament. ATI has focused on developing a more advanced, specialized piece of hardware that can do more for less money, and they are well positioned, with support across the map for all market segments.

By the way... Does anyone here own a Tegra Phone?
 
Heaven Benchmark v2.0
FPS: 34.3
Scores: 865
Min FPS: 15.0
Max FPS: 64.9


Hardware

Operating system: Windows 7 (build 7600) 64bit

CPU model: Intel(R) Core(TM) i7 CPU 960 @ 3.20GHz

GPU model: NVIDIA GeForce GTX 285 8.17.11.9621 1024Mb

Settings

Render: direct3d10
Mode: 1920x1200 4xAA fullscreen
Shaders: high
Textures: high
Filter: trilinear
Anisotropy: 8x
Occlusion: enabled
Refraction: enabled
Volumetric: enabled
Replication: disabled
Tessellation: disabled


It's impressive, but I'm still waiting for something jaw dropping... Haven't had that since the day when I saw Tomb Raider running on Voodoo graphics...
 

I still say Pong had the best graphics. Nothing can touch that. Nothing.
 