NVIDIA GF100 & Friends speculation

I admit I neglected static power.

How high would that ratio be, in your opinion? How does leakage scale with voltage?

The optimal leakage for a microprocessor is roughly 30% of overall power.

GPUs will suffer from much worse gate leakage (no HKMG), and probably the same sub-threshold leakage.
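Since the question of how leakage scales with voltage came up above, here's a toy model of the sub-threshold term. The expression is the standard textbook form; the parameter values (i0, n, and the Vth figures) are purely illustrative assumptions, not measured GF100/Cypress data:

```python
import math

def subthreshold_leakage(vth, vdd, i0=1.0, n=1.5, vt_thermal=0.026):
    """Relative sub-threshold current for an 'off' transistor (Vgs = 0).

    Classic exponential dependence on threshold voltage; vt_thermal is
    kT/q at room temperature (~26 mV). All parameters here are
    illustrative, not fitted to any real process.
    """
    return i0 * math.exp(-vth / (n * vt_thermal)) * (1 - math.exp(-vdd / vt_thermal))

# Lowering Vth by just 100 mV multiplies sub-threshold leakage by ~13x
# with these parameters -- the exponential is why leakage balloons on
# fast, low-Vth processes.
ratio = subthreshold_leakage(0.30, 1.0) / subthreshold_leakage(0.40, 1.0)
print(f"{ratio:.1f}x")
```

The exponential sensitivity is the point: modest Vth or temperature shifts swing leakage far more than they swing dynamic power.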

David
 
WIC: GTX4x0 vs HD5870


Will the new ATI driver be renamed to Catalytic 10.3? Or did the Chinese just drink too much at the New Year celebrations?
 
Nice, I'd forgotten about that page.

How real is it, that GF100 loses only 25% in a "worst-case" scenario?
That page is the benchmark result, not the fps for the close-up of the dragon. So we can only approximate: HD5870 is 44% faster than GTX285 on the benchmark.

Since 44% faster for HD5870 is higher than we often see in other games (~35%), it makes for a puzzling question how this scene would run at ~56fps on GTX480.

As for the reason for the performance drop: until we get a look at the HS and DS shaders, and until someone does a more detailed analysis than B3D's article, we're stuck.

I wonder if displacement mapping is basically point fetching such a vast number of texels that the big L2 in GF100 is the win?

Jawed
 
Would the GTX285 frame look identical?

I've got no idea if Heaven is picture-equivalent across D3D10 and 11 with tessellation off.

Jawed

Now I think about it, B3D didn't quantify the amount of culling with tessellation off, so 60-80% doesn't mean much on its own.

Jawed

IME it's not picture equivalent between D3D10 and 11, there are some subtle differences with regards to shadow filtering and bloom.

As for the second question: averaged over the entire frame (all states), the culling rates are 76%/69% for untessellated/tessellated respectively, with 13.6 times more input primitives in the tessellated case. The basic idea was (and still is) that there are ample benefits to be had via more clever use of amplification rather than smashing it in. Doing extra vertex shading for some 200k primitives that get discarded is something that tends to get lost in the noise, but doing domain shading for the vertices of a few million primitives that eventually get discarded is a different kettle of fish, IMHO. We can guess that it's one of the areas where the updated Heaven demos will improve.
 
FWIW, in the only games I know of to date that include tessellation (Dirt 2 and AvP), there is very little performance hit (if any) to enabling it on the 5870.

From what AMD demoed at the 5870 launch, the performance hit seemed quite significant though. But then, it was only an empty level with a single alien in it, probably with a code base from the pre-optimization era and early drivers. :) Real gameplay should be affected quite differently.
 
That page is the benchmark result, not the fps for the close-up of the dragon. So we can only approximate: HD5870 is 44% faster than GTX285 on the benchmark.

I got numbers for 5870 with DX10.1: 71.
With DX11 w/o Tessellation: 73.
5870 with Tessellation: 19-21.
GTX285 at the same position with DX10: 46.
And GTX480: 56.

I think nVidia doesn't want to show the performance without tessellation. The difference between GTX285 and GTX480 is too low.
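The quoted frame rates make that point directly. A quick set of ratios (all fps figures from the post; I've taken the midpoint of the reported 19-21 for the tessellated 5870 number):

```python
# fps at the same camera position, per the post above
fps = {
    "HD5870 DX10.1":        71,
    "HD5870 DX11 no tess":  73,
    "HD5870 DX11 tess":     20,   # midpoint of the reported 19-21
    "GTX285 DX10":          46,
    "GTX480 (tess)":        56,
}

# GTX480 (with tessellation) vs GTX285 (DX10): only ~22% faster here
speedup = fps["GTX480 (tess)"] / fps["GTX285 DX10"]
print(f"GTX480 / GTX285: {speedup:.2f}x")

# Tessellation cost on HD5870 in this scene: ~73% of the frame rate
drop = 1 - fps["HD5870 DX11 tess"] / fps["HD5870 DX11 no tess"]
print(f"HD5870 tessellation hit: {drop:.0%}")
```

Note the comparison is apples-to-oranges (GTX480 is running tessellation, GTX285 isn't), but that's exactly the scenario NVIDIA is showing.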
 
Well, since we know that GF100 cards have four domains (Core, Text, Shader, Memory), it's almost certain that GPU-Z will not recognize their frequencies correctly. However, we can extrapolate some of the frequencies on the assumption that the program read some of them but categorized them incorrectly:

WTH is Text frequency :oops:, especially in context of GPUs?

AFAIK, GPU-z doesn't "detect" anything. It just looks up a database.
 
Do you happen to remember any of NV PR's spin at the time of Cypress's launch? :cool:

I remember the "we are so cool because we are faster in Batman (with PhysX)" and the "You can have it all" at the GTC.
I hope you don't think about the "DX11 doesn't matter". That was xbitlabs. ;)
 
Do you happen to remember any of NV PR's spin at the time of Cypress's launch? :cool:

Don't recall anything coming from Nvidia that was dismissive of DX11. There was one comment about "DX11 isn't everything" and of course the usual suspects took that innocuous statement and ran with it.

^ Why is everything 4x0? Would stating 470 or 480 clue Nvidia into who is leaking numbers? Samples must be really scarce :)
 