NVIDIA GF100 & Friends speculation

That's quite a low cost, considering Heaven's tessellation can sink the framerate by around 90% at low res and still 70% at 1920 with no AA/AF (53/39% avg).

You can't compare the two applications, because Unigine uses tessellation for everything. That is not the case in Metro 2033. In a game the performance drop is very significant: you can't really play at 20/26 fps in 1680x1050 on a 5850.
 
But the performance loss is very high: the minimum fps is 75% higher and the average fps is 56% higher without tessellation.
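To flip those percentages around, here is a quick back-of-the-envelope conversion in C, assuming the quoted 20/26 fps min/avg on the 5850 are the with-tessellation figures:

```c
#include <stdio.h>

int main(void)
{
    /* Quoted 5850 figures with tessellation on: 20 min / 26 avg fps. */
    double min_on = 20.0, avg_on = 26.0;

    /* "75% / 56% higher without tessellation" -> reconstruct the tessellation-off fps. */
    double min_off = min_on * 1.75;   /* = 35 fps    */
    double avg_off = avg_on * 1.56;   /* ~= 40.6 fps */

    /* The same figures expressed as the drop seen when turning tessellation ON. */
    printf("min drop: %.0f%%\n", (1.0 - min_on / min_off) * 100.0);  /* ~43% */
    printf("avg drop: %.0f%%\n", (1.0 - avg_on / avg_off) * 100.0);  /* ~36% */
    return 0;
}
```

So "X% faster without tessellation" works out to a smaller number when expressed as a drop, which is worth keeping in mind when comparing against the Heaven figures above.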

Looking at the 1920x1200 score, the 5870 is 50% faster than the GTX 285.

What is really intriguing, though, is that the 5870 scores double what the 4890 does. At long last, the 5870 performs as it was supposed to!

Is there any reason we shouldn't expect the same from the GF100 cards?
 
With HD5770 out-pacing HD4890, which is a nice surprise, there won't be much excuse if GTX480 is considerably faster than HD5870.

The G92-owning horde will certainly want to upgrade as it's pretty miserable. Shame that GF100's looking like it's going to be harder to get than HD5870 was upon launch.

9600GT trounces HD3870, laying to rest any doubt about which is more future proof :LOL:

Jawed
 
You can't compare the two applications, because Unigine uses tessellation for everything. That is not the case in Metro 2033. In a game the performance drop is very significant: you can't really play at 20/26 fps in 1680x1050 on a 5850.
Hopefully.

But then, it'll be interesting to see the cost for both GPUs, assuming nothing has been done to hinder all GPUs except "the one".

One demo I'd like to see in GF100's reviews is AMD's own, published in the DX SDK, as it should show peak tessellation performance on any hardware. Presumably, GF100 should have an insane framerate on this one.
 
You can't compare the two applications, because Unigine uses tessellation for everything. That is not the case in Metro 2033. In a game the performance drop is very significant: you can't really play at 20/26 fps in 1680x1050 on a 5850.

You get almost the same performance drop just by turning on the DirectCompute depth of field as you do from tessellation :rolleyes:.
The strange thing is that with maxed DX10, which has its own kind of depth of field (just not through DirectCompute and not as extreme), it reaches 41/36 fps. With DX11 and just the depth of field enabled it gets 26.9/24 fps.
Weren't people saying that the DirectCompute post-processing effects were much faster than the pixel shader ones :?:
Maybe this DX11 DirectCompute advanced depth of field was made more for the GTX4xx than the Radeons :rolleyes:. I think we can expect this TWIMTBP title to be the Fermi showcase, with DX11 maxed out, on launch date.
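For what it's worth, converting those quoted fps pairs into frame times makes the cost of the DX11 DOF pass a bit clearer; this is only arithmetic on the numbers above, and the DX10 path still includes its own simpler DOF, so it's a rough sketch rather than an isolated measurement:

```c
#include <stdio.h>

int main(void)
{
    /* Quoted fps pairs: maxed DX10 (with its own DOF) vs. DX11 with only the
       DirectCompute depth of field enabled. */
    double dx10[2]     = { 41.0, 36.0 };
    double dx11_dof[2] = { 26.9, 24.0 };

    for (int i = 0; i < 2; ++i) {
        double extra_ms = 1000.0 / dx11_dof[i] - 1000.0 / dx10[i];
        double slower   = (dx10[i] / dx11_dof[i] - 1.0) * 100.0;
        printf("pair %d: DX11 DOF adds ~%.1f ms per frame (%.0f%% more frame time)\n",
               i + 1, extra_ms, slower);
    }
    return 0;
}
```

That puts the DirectCompute DOF at roughly 13-14 ms per frame on the quoted numbers, i.e. around half again as much frame time.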
 
I don't think AMD's compilers can do auto-vectorization yet, so they are at a significant disadvantage for scalar code.

Presumably, in the future the same thing will happen for DirectCompute in big games as already happens for shaders: they will do the work for the programmers and manually rewrite the shaders, swapping them in at runtime in the drivers.
 
I've found those high-fps tests to vary wildly with the platform. It seems like a Lynnfield with P55 suffers here compared to AMD, Bloomfield and even Core 2 systems.

--
edit:
WRT Metro 2033: there seem to be wild differences depending on how much advanced physics is going on.
http://translate.google.de/translat...GPU-PhysX/Action-Spiel/Test/Test/&sl=de&tl=en
(sorry, only google translation at the moment)
HD5k really… well, see for yourself. :)
This game is completely unplayable on the Radeon HD5000 series (5870/5850) using max details (DX11 + 4xAA), let alone the GTX200 series!
 
Weren't people saying that the DirectCompute post-processing effects were much faster than the pixel shader ones

Unless you know the same effect is being implemented, the comparison is meaningless. They're quite probably doing something more precise in their DX11 implementation.
 
I don't think AMD's compilers can do auto-vectorization yet, so they are at a significant disadvantage for scalar code.
Fortunately AMD's shaders are superscalar and not vec4, right?

On the other hand, the compiler still has much room to improve.
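As a purely illustrative sketch of what the auto-vectorization point is about (plain C, not actual shader or ISA code, and assuming the usual description of a wide VLIW-style ALU that relies on the compiler co-issuing independent operations):

```c
#include <stdio.h>

/* Dependent chain: every statement needs the previous result, so only one
 * lane's worth of work is available per step -- nothing for the compiler
 * to pack side by side. */
static float dependent_chain(float x)
{
    float r = x * 1.5f;
    r = r * r + 0.5f;    /* must wait for the line above */
    r = r * r + 0.25f;   /* and again */
    return r;
}

/* Independent work: four multiplies with no dependencies between them could,
 * in principle, be bundled into one wide instruction slot. */
static void independent_ops(const float in[4], float out[4])
{
    for (int i = 0; i < 4; ++i)
        out[i] = in[i] * 1.5f;
}

int main(void)
{
    const float in[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float out[4];
    independent_ops(in, out);
    printf("%f %f\n", dependent_chain(2.0f), out[3]);
    return 0;
}
```

Code that looks like the first function is exactly where a compiler that can't extract that independence would leave most of the lanes idle.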
 
I quote Neb from the CP Games thread on this title:

Also, yes, the graphics options should have had a custom tab. But 3072x3072 shadow maps with extensive filtering and edge-smoothing effects, 1:1 post-process effects, POM on most surfaces etc. is quite extreme. If you could pick the highest setting but change shadows to 2048x2048 then you would surely increase the framerate dramatically. I mean, ffs, Crysis has 1024x1024 shadow maps at the high/v.high setting! Forcing a 2048x2048 shadow map resolution in Crysis can reduce your framerate by up to 50% or more.

Btw, I read that tessellation and the improved DOF take some serious juice. Playing in DX10 mode with max settings should be a cakewalk at 1080p, as opposed to DX11 max settings with improved DOF and tessellation.

So it looks like pretty much the new Crysis, as Crysis was when it was first released: on maximum settings it's using features that GPUs are barely capable of handling, even the top end. Though of course we haven't seen how Fermi handles it yet, have we? :) If it is just VRAM-starved at these details, then I'd like to see Fermi vs. the 5870 E6 edition.
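Just to put the quoted shadow-map resolutions into numbers: this only counts texels and assumes a single 32-bit depth texel per map (the actual format and the filtering cost aren't stated), so treat it as a rough sketch:

```c
#include <stdio.h>

int main(void)
{
    /* Shadow-map resolutions mentioned in the quote. */
    const int res[3] = { 1024, 2048, 3072 };

    for (int i = 0; i < 3; ++i) {
        long long texels = (long long)res[i] * res[i];
        /* Assuming one 32-bit depth texel -- the real format isn't given. */
        double mib = texels * 4.0 / (1024.0 * 1024.0);
        printf("%4dx%-4d: %4.1f Mtexels, ~%4.1f MiB, %4.2fx the 1024^2 texel count\n",
               res[i], res[i], texels / 1e6, mib,
               (double)texels / (1024.0 * 1024.0));
    }
    return 0;
}
```

So the jump from Crysis' 1024x1024 maps to Metro's 3072x3072 is a 9x increase in texels per map, before you even count the "extensive filtering" the quote mentions.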
 
Unless you know the same effect is being implemented, the comparison is meaningless. They're quite probably doing something more precise in their DX11 implementation.
And there could be some driver issue too.

The STALKER CoP benchmark seems to have such an issue on its 4th scene: activating AA incurs a considerable drop when combined with CS Ultra HDAO, somewhat comparable to what Heaven shows when tessellation is turned on.
 
That's what the 5970 and CrossFired 5870s are there for. ;)
Yeah, the situation is even worse than Crysis! Not to mention how silly the idea is of requiring a CrossFire system to properly run a game that sails smoothly on an Xbox 360!

Maybe Fermi could change that! :D
 
Yeah, the situation is even worse than Crysis! Not to mention how silly the idea is of requiring a CrossFire system to properly run a game that sails smoothly on an Xbox 360!

You really think the Xbox 360 is running settings even remotely close to the PC? Let alone the resolution difference?
 
Yeah, the situation is even worse than Crysis! Not to mention how silly the idea is of requiring a CrossFire system to properly run a game that sails smoothly on an Xbox 360!

Maybe Fermi could change that! :D
Tune the detail down to Xbox 360 level and problem solved? Having a "very high" setting that didn't push modern cards would be much worse, IMHO.
 