NVIDIA GF100 & Friends speculation

They don't take much area in themselves, but what's the spacing like?

If those two vias are for the same signal (which they are), you can probably cut it a bit fine.

I highly doubt it will cause too much bloat.
 
If he was talking about the 448 SP version, I might think that the 5% figure is barely plausible. But that claim seems very fishy to me unless he was testing at CPU limited resolutions.

Rumours built on top of rumours from "sources", with absolutely no bench numbers or games listed, only a vague "5%" total average(?) figure. That article sure seems extremely anti-Nvidia. At least there was a video with FC2 numbers for Fermi, right?

Those 5% numbers could just as well be skewed towards ATI hardware by cherry-picking which games and tests to bench, and were they even done on a 1:1 basis as far as settings and in-game values go?


Anyway, suppose the 5% average figure is right. That would put them roughly equal, except in games with tessellation, where Fermi supposedly pulls ahead greatly. Now, tessellation is the future, and several games already support it, with more to come: Dirt 2, Stalker: Call of Pripyat, Aliens vs. Predator, Metro 2033, and probably some more I forgot about. This would put Fermi ahead.

 
Tessellation won't save the 480, as real games put a strain on the shaders, which lack the power for both shading and tessellation. In real games the 5870 will walk all over Fermi in those situations - if you follow Charlie's logic.
 
Hitting those clocks on hand-picked cards, sold into a market that only consumes a handful of them, shouldn't be too difficult.

Jawed

The G80 and GT200 Tesla cards had lower shader clocks than the GeForce products - 1350 MHz vs. 1512 MHz and 1300 MHz vs. 1476 MHz.
And the lowest version of the Fermi Tesla card starts at 1250 MHz* (not 1200 MHz, my mistake).
That is not really "hand-picked".

* and 416 SP (13 SMs).
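For what it's worth, the Tesla downbins quoted above work out to roughly 11-12%. A quick sketch (using only the shader clocks listed in the post) makes the arithmetic explicit:

```python
# Shader clocks (MHz) quoted in the post: GeForce bin vs. Tesla bin.
# The percentages below are just arithmetic on those figures.
clock_pairs = {
    "G80":   (1512, 1350),  # GeForce vs. Tesla shader clock
    "GT200": (1476, 1300),
}

for chip, (geforce, tesla) in clock_pairs.items():
    downbin = (geforce - tesla) / geforce * 100
    print(f"{chip}: Tesla runs {downbin:.1f}% below the GeForce shader clock")
```

So the Tesla parts sit about 10.7% (G80) and 11.9% (GT200) below their GeForce counterparts - consistent bins, whatever the reason for them.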
 
Unfortunately for Nvidia, NOT!

The 64 TMUs and 48 ROPs @ 600-625 MHz seem to hold back the GF100. ATi has already learned that TMUs and ROPs are still important.

And you are convinced of this without seeing so much as a single independent, repeatable benchmark?
 
The G80 and GT200 Tesla cards had lower shader clocks than the GeForce products - 1350 MHz vs. 1512 MHz and 1300 MHz vs. 1476 MHz.
And the lowest version of the Fermi Tesla card starts at 1250 MHz (not 1200 MHz, my mistake).
That is not really "hand-picked".
Teslas need to be hand-picked, because NVidia's guarantees on them are far more stringent than those applying to consumer cards. It's part of why they are configured at lower clocks (that plus the extra memory hinders memory clocks), in order to ensure they meet those guarantees - and it's part of the justification for the cost.

So, when you're hand-picking you can be more precise about achieving those guarantees. Teslas are the cream of the crop.

Jawed
 
Teslas need to be hand-picked, because NVidia's guarantees on them are far more stringent than those applying to consumer cards. It's part of why they are configured at lower clocks (that plus the extra memory hinders memory clocks), in order to ensure they meet those guarantees - and it's part of the justification for the cost.

So, when you're hand-picking you can be more precise about achieving those guarantees. Teslas are the cream of the crop.

Jawed

But why would the Tesla cards not have a higher clock rate than the consumer products? nVidia needs chips which stay under 225 W. Wouldn't these chips be clocked lower simply because of the power consumption? Look at the GTX 275: the card comes with a full GT200b, has higher clocks than the Tesla card, and needs more power (nearly 40 W more).

AMD added lots and lots of TUs in RV770, or didn't you notice?

Jawed

And a lot of ALUs, too. The ALU:TMU ratio of R600 and RV770 is the same.
 
GF100 has 64 texture address units / 256 filtering units, compared to GT200's 80/80. So that's not exactly a step back, although the higher-clocked TUs in the GPC that they touted aren't happening at 600 MHz.
 