NVIDIA GF100 & Friends speculation

He's talking about 3Dfx Voodoo Graphics/Voodoo2 SLI. Oh those were wonderful times.

No framebuffer effects and no post-processing to speak of, so that made 3dfx's life easy. But then along came modern rendering techniques. :(


P.S.:
that's only true if you ignore the fact that the 5970 was built for overclocking.
Now, if only that were true. No card is built for overclocking - not even the GTX 460, which Nvidia touted as an overclocker's dream. There are always reasons why cards are clocked the way they are. :)
 
But they already did that before ATI had even made their first dual-GPU card? :???:

Call it hibernation, then. I will wake in seasons where dual GPUs scale perfectly and don't cause trouble, and slip into a single-GPU trance in between. The in-between is very long. :)
 
Oh yeah, I didn't even remember the ill-fated MAXX, which never got drivers outside Win9x, if my memory serves right.

CarstenS, I'm not counting the beloved 3dfx miracles here; I was talking about the GF7 GX2s.
 
If ever there was a reason to eagerly await a performance-topping Nvidia dual-GPU card, this must be it: the spectacle of you slamming those damn AMD hypocrites!

I'll be quite happy to if I see the same thing happening. :) I'm an equal opportunity slammer. :D

I don't think Nvidia will be able to pull it off this generation however, so it'll have to wait. :p

Regards,
SB
 
I'm sceptical about future generations too, because power density inevitably goes up. At 28nm you probably won't need a huge die for card consumption to reach 300W (and by that I mean with the chip already operating near its optimal clock and voltage in terms of perf/W).
That's a problem AMD will certainly face too, though with the "small die" approach maybe they can still pull it off.
Of course, there's a solution to that - just raise the power limit to 375W (or more) - though it might not be very practical.
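
A rough way to see why a chip ends up clocked near its perf/W sweet spot: dynamic switching power scales roughly with C*V^2*f, so a clock bump that also needs a voltage bump costs disproportionately more power. A minimal sketch, with all numbers hypothetical rather than measurements of any real card:

Code:
def dynamic_power(base_power_w, clock_scale, voltage_scale):
    # Capacitance and activity factor assumed fixed; only f and V^2 scale.
    return base_power_w * clock_scale * voltage_scale ** 2

base = 250.0                            # hypothetical stock board power in watts
p_oc = dynamic_power(base, 1.10, 1.05)  # +10% clock that also needs +5% voltage
print(f"stock: {base:.0f} W, overclocked: {p_oc:.0f} W "
      f"(+{(p_oc / base - 1) * 100:.0f}% power for +10% clock)")

With those made-up numbers a 10% overclock already costs about 21% more power, which is why a card designed right up against a 300W budget leaves little headroom.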
 
Don't you also have a harder time getting 300 watts away from a die of, say, 300mm² than from one of 500mm²?
 
That too. I think it's more of an orthogonal problem, though, as power density is the same whether you have 2x150mm² with 150W each or 1x300mm² with 300W.
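
To put numbers on that (using only the hypothetical die sizes and wattages from the posts above), power density is just watts divided by area:

Code:
def power_density(power_w, area_mm2):
    return power_w / area_mm2

print(power_density(150, 150))  # one of two 150 mm^2 dies at 150 W each: 1.0 W/mm^2
print(power_density(300, 300))  # a single 300 mm^2 die at 300 W:         1.0 W/mm^2 (same)
print(power_density(300, 500))  # a single 500 mm^2 die at 300 W:         0.6 W/mm^2 (easier to cool)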
 
But the degree of lousiness would stay the same for the silicon in both example dies, wouldn't it?

Yes, but does the surface area matter anyway?

I might be wrong here, but is the Si/cooler surface area really the bottleneck? You just want a good, cold thermal conductor as close as possible to the heat source. Less silicon means less of a heat buffer.
 
It does, because the more surface area there is to transfer the same heat, the faster you can dissipate it. Of course there are limits to this; if the chip ran cool to begin with, it'd be better to save on die size etc.
 

Without having better data, I was thinking of my fictional 300W GPUs as generating heat evenly across the whole die surface.
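
Taking that even-distribution assumption at face value, a very simplified 1D conduction estimate (delta_T = Q*d / (k*A)) shows why the bigger die is easier to cool at the same 300W: the temperature drop across the interface to the cooler shrinks as the contact area grows. The interface thickness and conductivity below are placeholder values, not data for any real cooler:

Code:
def interface_delta_t(power_w, area_mm2, thickness_m=50e-6, k_w_per_mk=5.0):
    # Temperature drop in kelvin across a uniform interface layer, even heat flux assumed.
    area_m2 = area_mm2 * 1e-6
    return power_w * thickness_m / (k_w_per_mk * area_m2)

for area in (300, 500):
    print(f"{area} mm^2 die at 300 W: {interface_delta_t(300, area):.1f} K across the interface")

With these placeholder numbers the 300mm² die drops about 10 K across the interface versus about 6 K for the 500mm² die, which is the "harder time getting 300 watts away" in quantitative form.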
 