NVIDIA GF100 & Friends speculation

Apparently, after looking at the reviews, a GPU clocked at 772/1544/4008 is faster than one at 700/1401/3696. Perhaps a review is needed to see what the performance of the 580 vs. the 480 is when they are clocked the same, no?

3 to 5% is architectural improvement; the rest comes from units/clocks.
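
A quick back-of-the-envelope sketch of that split, using the commonly quoted specs (GTX 480: 480 cores at 700/1401/3696, GTX 580: 512 cores at 772/1544/4008); illustration only, not a benchmark:

[code]
// Back-of-envelope: how much of the GTX 580's gain is just units + clocks?
// Specs as commonly quoted: GTX 480 = 480 cores @ 700/1401/3696,
// GTX 580 = 512 cores @ 772/1544/4008.
#include <cstdio>

int main() {
    double shader = 1544.0 / 1401.0;   // hot clock ratio, ~1.10
    double units  = 512.0 / 480.0;     // core count ratio, ~1.07
    double memory = 4008.0 / 3696.0;   // memory clock ratio, ~1.08

    printf("ALU throughput (clocks x units): +%.1f%%\n",
           (shader * units - 1.0) * 100.0);
    printf("memory bandwidth:                +%.1f%%\n",
           (memory - 1.0) * 100.0);
    // So a purely bandwidth-bound game should gain ~8%, a purely ALU-bound
    // one ~18%. Reviews landing around 15-20% leave roughly 3-5% for the
    // actual architectural tweaks (FP16 filtering, Z-cull).
    return 0;
}
[/code]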
 
Main hardware improvements:

- Full-speed FP16 texture filtering
- Better Z-culling
And the ability to configure the 16K/48K cache split differently for pixel shaders, according to TechReport.
Nvidia wasn't lying when they said they focused on graphics, not the compute aspect, this time: none of the changes are interesting to compute at all. Speaking of that, no new compute cards announced? While the changes aren't interesting for compute, I'm sure those customers would be interested in the better perf/power ratio.
In fact, GF110 looks almost as efficient as GF104 for graphics, considering either perf/W or perf/area (well, I'm sure GF104 has to be a bit more efficient, otherwise there would have been no point in changing the SMs, but it doesn't look like a huge difference to me).

I agree the GTX580 is a refresh, but it's also true that the more correct code name would have been GF100b ;)
To be fair though, I think neither G92b nor GT200b had any functional changes. This is more similar to G80->G92 in terms of changes (though obviously that transition also cut the memory interface and included a shrink).
The reputation of GF100 was probably bad enough that they wanted a new name anyway :).
 
[attached chart: IMG0029904.gif]


http://www.hardware.fr/articles/806-4/dossier-nvidia-geforce-gtx-580-sli.html
 
I'm curious how a GTX570 with 480 cores would perform and, more importantly, about its power consumption. Could nVIDIA do a dual-GPU card with it?
 
I'm curious how a GTX570 with 480 cores would perform and, more importantly, about its power consumption. Could nVIDIA do a dual-GPU card with it?
I'm sure Nvidia will have a dual card in this product cycle. TDP should be manageable, and NV would absolutely love to finally have a shot at the fastest card. Still, IMO dual Caymans in the 6990 should be faster than a GTX570 X2, so maybe Nvidia would use binned, lower-power, hand-picked GTX580s? They don't have to launch a lot of them, nor is there huge demand for an X2 anyway. Just enough to (maybe) regain the fastest-card crown.
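
The power constraint is easy to sketch: the PCIe spec allows 75W from the slot plus 150W per 8-pin plug, so a dual 8-pin board tops out at 375W, and the GTX 580's 244W TDP makes naive doubling impossible. The 570 figure below is a pure guess, since the card isn't out:

[code]
// Why a dual-GPU card needs binned/downclocked chips: the PCIe power budget.
// Board limit = 75 W (slot) + 2 x 150 W (two 8-pin plugs) = 375 W.
// GTX 580 TDP is the official 244 W; the GTX 570 number is a guess.
#include <cstdio>

int main() {
    const double board_limit = 75.0 + 2.0 * 150.0;  // 375 W ceiling
    const double gtx580_tdp  = 244.0;
    const double gtx570_tdp  = 220.0;                // hypothetical

    printf("2 x GTX 580: %.0f W (%.0f W over budget)\n",
           2.0 * gtx580_tdp, 2.0 * gtx580_tdp - board_limit);
    printf("2 x GTX 570: %.0f W (%.0f W over budget)\n",
           2.0 * gtx570_tdp, 2.0 * gtx570_tdp - board_limit);
    // Leaving ~25 W for the bridge chip, fan and board losses puts the
    // realistic per-GPU budget around (375 - 25) / 2 = 175 W, well below
    // either stock TDP. Hence the hand-picked, binned chips.
    return 0;
}
[/code]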
 
Another thing to ask is: where is the broken architecture?
The GTX480 might have been a broken product, but it seems that Fermi is far from being a broken architecture.
 
Is it? Really? If you compare it with the HD5970, it has lower power consumption and is pretty close in performance, so it's on par with ATI's high-end power efficiency.

Are you talking about perf/mm²?
 
Is it? Really? If you compare it with the HD5970, it has lower power consumption and is pretty close in performance, so it's on par with ATI's high-end power efficiency.
I was being a bit trollish, as the ":arrow:" implied ;) Still, it's a crazy huge piece of silicon, twice as big as HD68xx, so I'm not impressed by the card at all.
 
I'm curious how a GTX570 with 480 cores would perform and, more importantly, about its power consumption. Could nVIDIA do a dual-GPU card with it?
Do we know that the GTX570 will have 480 cores? Does even Nvidia know yet?
Looks to me like Nvidia doesn't want to release another card until it knows what Cayman can do. With the GTX580 they had no room for adjustment, basically no matter how Cayman turns out: this is the best they can do within the given power envelope (and the 300W in Furmark, power limiter or not, is a pretty hard requirement). But there's a lot more choice with the cut-down version: clocks, SMs (they could cut 2 instead of just 1), and even the amount of memory/ROPs could be changed pretty easily.
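
To show how much room those levers give them, here are a few candidate configs relative to a full GTX 580; none of these are leaked specs, just invented arithmetic:

[code]
// Cut-down SKU levers: SM count, clocks, memory bus width. All configs
// below are invented for illustration; ALU % is relative to a full
// GTX 580 (16 SMs x 32 cores @ 772 MHz).
#include <cstdio>

struct Config { const char* name; int sms; double mhz; int bus_bits; };

int main() {
    const double full = 16 * 32 * 772.0;
    const Config candidates[] = {
        { "15 SMs, same clocks",   15, 772.0, 384 },  // GTX 480-style cut
        { "15 SMs, lower clocks",  15, 732.0, 320 },  // also drop a 64-bit channel
        { "14 SMs, higher clocks", 14, 800.0, 320 },
    };
    for (const Config& c : candidates) {
        double alu = (c.sms * 32 * c.mhz) / full;
        printf("%-24s ALU: %5.1f%%  bus: %d-bit\n",
               c.name, alu * 100.0, c.bus_bits);
    }
    return 0;
}
[/code]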
 
Do we know that the GTX570 will have 480 cores? Does even Nvidia know yet?

No... it's my speculation.

Looks to me like Nvidia doesn't want to release another card until it knows what Cayman can do. With the GTX580 they had no room for adjustment, basically no matter how Cayman turns out: this is the best they can do within the given power envelope (and the 300W in Furmark, power limiter or not, is a pretty hard requirement). But there's a lot more choice with the cut-down version: clocks, SMs (they could cut 2 instead of just 1), and even the amount of memory/ROPs could be changed pretty easily.

Which is a testament to Fermi's scalability. Not broken, not unscalable :p
 
Main hardware improvements:

- Full-speed FP16 texture filtering
- Better Z-culling
Apparently some internal buses were also rebuilt, since they shaved off 200M transistors without losing ECC or FP64 capability, as was rumoured.

And the ability to configure the 16K/48K cache split differently for pixel shaders, according to TechReport.
Good catch; that was previously only possible for compute stuff.
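
For reference, this is roughly how compute code already picked the split on GF100, through the CUDA runtime (cudaFuncSetCacheConfig is the real Fermi-era call; the kernel itself is just a placeholder). The new pixel-shader flexibility on GF110 is presumably a driver-internal decision; there's no equivalent knob exposed on the graphics side:

[code]
// Selecting the 16K/48K L1/shared split per kernel on Fermi-class compute.
// placeholderKernel is a toy; cudaFuncSetCacheConfig is the real API.
#include <cuda_runtime.h>

__global__ void placeholderKernel(float* data) {
    __shared__ float tile[256];                  // toy shared-memory use
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = data[i];
    __syncthreads();
    data[i] = tile[threadIdx.x] * 2.0f;
}

int main() {
    // Prefer 48K shared / 16K L1 for this kernel (good for tiled kernels);
    // swap to cudaFuncCachePreferL1 for irregular, pointer-chasing loads.
    cudaFuncSetCacheConfig(placeholderKernel, cudaFuncCachePreferShared);

    float* d = nullptr;
    cudaMalloc(&d, 1024 * sizeof(float));
    cudaMemset(d, 0, 1024 * sizeof(float));
    placeholderKernel<<<4, 256>>>(d);            // 4 blocks of 256 threads
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
[/code]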

Another thing to ask is: where is the broken architecture?
The GTX480 might have been a broken product, but it seems that Fermi is far from being a broken architecture.
See above. They saved that many transistors without dropping any functionality, only adding some.
 
See above. They saved that many transistors without dropping any functionality, only adding some.

Couldn't tell if you agreed or disagreed with my affirmation :p
If it's the latter, the underlying architecture stayed the same, didn't it? The GPCs and all that jazz... It was probably just badly implemented on the GTX480, though.
 
I've seen reviews where its power consumption was actually lower.

That's neither here nor there. It all depends on how the measurements were taken, and also on how they are presented. One site may provide consumption numbers for the entire PC (some use more sophisticated equipment, while others use a Kill-A-Watt), another will do the same but subtract the power consumption of the motherboard, and a third may actually have the equipment to measure the power consumption of the video card itself.

Therefore, you can't really cross-reference different reviews, because their methods may be different.
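
A toy example of how far apart three perfectly honest methodologies can land for the same card (all figures invented):

[code]
// Same hypothetical card, three common measurement methods.
// All figures are invented for illustration.
#include <cstdio>

int main() {
    const double card_dc    = 240.0;  // card-only, measured at slot + plugs
    const double rest_of_pc = 130.0;  // CPU, board, drives under the same load
    const double psu_eff    = 0.85;   // wall draw = DC draw / PSU efficiency

    double wall_total = (card_dc + rest_of_pc) / psu_eff;  // Kill-A-Watt style
    double wall_minus = wall_total - rest_of_pc;           // naive subtraction

    printf("card-only (DC):         %.0f W\n", card_dc);     // 240 W
    printf("whole PC at the wall:   %.0f W\n", wall_total);  // ~435 W
    printf("wall minus 'platform':  %.0f W\n", wall_minus);  // ~305 W
    // Three honest reviews, three very different numbers for one card,
    // which is exactly why cross-referencing reviews doesn't work.
    return 0;
}
[/code]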
 