NVIDIA GF100 & Friends speculation

I lived in Amsterdam for some time but I hadn't heard it called that yet...
I call marijuana pot or happy smoke, and this was not happy smoke. I'm suffering from some mild carbon monoxide poisoning from the nasty exhaust from my snow blower when I went out to clear my walks; my head is still kind of woozy, but smoking some pot at least helped get rid of the nausea. ;)

Assuming you weren't just being facetious. :p
 
(My thinking on as-needed replication is that it will be as efficient as, or more efficient than, NUMA with caching. For dynamic textures reused across multiple frames, for instance, it is clearly superior, and it is certainly more efficient than doing full buffer replication all the time, since that introduces too much latency between rendering steps.)

Full vertex buffer replication seems to be going in the opposite direction from where you want to be; you wouldn't want to stream tessellator output over to the next board, for example. For other buffers, wouldn't you tag shared resources at the driver level and do replication in a push model instead of an on-demand pull? That seems like the best way to go, given that you know up front which buffers are shared.
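Purely to illustrate the push-vs-pull distinction (a made-up sketch, not anything resembling a real driver; every name in it is hypothetical), here is a minimal C++ example of tagging shared buffers at creation time and pushing them to the peer GPU on write, with on-demand pull shown for contrast:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Toy stand-in for one GPU's local memory: buffer name -> bytes.
struct GpuMemory {
    std::unordered_map<std::string, std::vector<uint8_t>> buffers;
};

// Hypothetical driver-side bookkeeping for a dual-GPU board. Buffers known
// to be shared are tagged up front and replicated with a push on every
// write; everything else is pulled on demand by the peer GPU.
class ResourceManager {
public:
    ResourceManager(GpuMemory& gpu0, GpuMemory& gpu1) : gpu0_(gpu0), gpu1_(gpu1) {}

    void create_buffer(const std::string& id, std::size_t size, bool shared) {
        shared_[id] = shared;
        gpu0_.buffers[id].assign(size, 0);
        if (shared) gpu1_.buffers[id].assign(size, 0);  // pre-allocate the replica
    }

    // Write lands on GPU0 (e.g. a dynamic texture rendered this frame).
    // Push model: a shared buffer is copied to GPU1 immediately, so GPU1
    // never stalls waiting for it in the middle of a later rendering step.
    void write_on_gpu0(const std::string& id, const std::vector<uint8_t>& data) {
        auto& dst = gpu0_.buffers[id];
        std::copy_n(data.begin(), std::min(dst.size(), data.size()), dst.begin());
        if (shared_[id]) gpu1_.buffers[id] = dst;  // stand-in for a DMA push
    }

    // Pull model, for contrast: GPU1 copies the buffer only when it first
    // reads it, paying the inter-board transfer latency at read time.
    const std::vector<uint8_t>& read_on_gpu1_pull(const std::string& id) {
        auto it = gpu1_.buffers.find(id);
        if (it == gpu1_.buffers.end())
            it = gpu1_.buffers.emplace(id, gpu0_.buffers[id]).first;  // on-demand copy
        return it->second;
    }

private:
    GpuMemory& gpu0_;
    GpuMemory& gpu1_;
    std::unordered_map<std::string, bool> shared_;
};

int main() {
    GpuMemory gpu0, gpu1;
    ResourceManager rm(gpu0, gpu1);
    rm.create_buffer("dynamic_texture", 4096, /*shared=*/true);  // reused across frames
    rm.write_on_gpu0("dynamic_texture", std::vector<uint8_t>(4096, 0xAB));
    // The replica is already resident on GPU1 when the next frame needs it.
    return 0;
}
```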
 
Sorry if this is obvious, but the Nvidia demo comes from this bit of internet history, doesn't it: http://www.darwinawards.com/darwin/darwin1995-04.html
"The Arizona Highway Patrol were mystified when they came upon a pile of smoldering wreckage embedded in the side of a cliff rising above the road at the apex of a curve. The metal debris resembled the site of an airplane crash, but it turned out to be the vaporized remains of an automobile. The make of the vehicle was unidentifiable at the scene."
 
Assuming for a moment that the double-GPU-solution noise is true ... how much of the delay is caused by introducing it at launch?
 
I don't think that a GF100-based dual-GPU product would cause the delay in any way, shape or form. Provided that [single-GPU] GF100 is anywhere remotely close in performance to the HD 5970, there would be no reason to delay launching GF100 just to give more time for the dual-GPU product to be ready.
 
But if yield issues were all that was holding them back, they could have cranked out a dual-GPU product while they were waiting, instead of twiddling their thumbs.
 
Zed_X said:
The Deep Dive meeting is over, and we already know how powerful GF100 is, and IT IS! Performance is better than expected; most people here will have to eat their own words and will be too shy to post their crap in this thread. We can only laugh now, because Nvidia did it! Performance in the Unigine DX11 benchmark is spectacular; the Radeon HD 5870 looks like a low-end toy for kids against GF100.

PS: Yeah, GF100 has 512 CCs, the GPUs are in mass production now, and plenty of manufacturers have their first GF100 cards in house.

From XS, http://www.xtremesystems.org/Forums/showthread.php?t=241120&page=31
 
Wow, but I don't think this was running on one card. With this level of tessellation and PhysX, GF100 would be faster than anything on the planet...

I asked him about three or four times quite explicitly and he remained adamant. Other guys also assured me that it was only running on one card, and I have reason to believe that (which I cannot talk about right now).
 
Can confirm, that's true. It will be ironic to see Nvidia chips be faster at games with tessellation sponsored by AMD :smile: So Raja didn't lie to us.

So you're saying that Fermi will be faster than a 5970 when it comes to tessellation? Which automatically means that it's going to be slower than it without tessellation.
 
Maybe a part of Dual-GF100?
Efficient multi-chip GPU - United States Patent 7616206
(According to page 11: 35-50% more performance than a usual AFR SLI setup, at 87.5% of its cost, thanks to the reduced/shared framebuffer.)
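A quick back-of-the-envelope on those quoted figures (my own arithmetic, assuming the numbers apply as stated; I haven't checked page 11 myself):

```latex
% Performance per unit cost, relative to a conventional AFR SLI setup
% taken as 1.0 performance at 1.0 cost:
\frac{1.35}{0.875} \approx 1.54 \qquad \frac{1.50}{0.875} \approx 1.71
% i.e. roughly 1.5x to 1.7x the performance per dollar, if the numbers hold.
```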

Could someone please explain to me how people can see "dual GF100" or "GF104 is a dual-chip GF100" as even a remotely possible scenario, when we know that a single GF100 card, with 1/8th of the cores disabled and presumably lower clocks than the GeForce variant, consumes up to 225W?
Sure, the Tesla has 3-4.5GB more memory than single-chip GF100 GeForces will boast, but memory shouldn't consume that much power anyway, IIRC.
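A quick sanity check on why that 225W figure makes a dual-chip part look so tight (assuming it is board power and the usual PCIe limits apply):

```latex
% PCIe power ceiling: 75 W (slot) + 75 W (6-pin) + 150 W (8-pin) = 300 W
2 \times 225\,\mathrm{W} = 450\,\mathrm{W} \gg 300\,\mathrm{W}
% Each GPU's share of a 300 W board would be at most about 150 W,
% i.e. roughly a third less than the cut-down single card above.
```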
 
There are no real measured numbers for Fermi's power consumption so far, only the power connector count and two Tesla numbers from a November PDF. So a dual-chip card is still possible.
 
But it has 1.5 times the transistor count of the HD 5870, which is rated at something like 180W itself. Do you think it's realistic that Fermi will be anything less than 240-250W?

Obviously a dual Fermi could be made possible by severely cutting the CC count and underclocking, but even then it can be a maximum of 300W, and it probably won't be much faster than a single Fermi because of the power limitations and the fact that it will have to run in SLI. Adding CC clusters surely doesn't increase performance linearly, but I have no reason to believe that SLI provides better scaling either.
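For what it's worth, naively scaling Cypress's figure by the transistor ratio mentioned above (a crude estimate that ignores clocks, voltage and the wider memory interface):

```latex
% HD 5870: roughly 180 W; GF100: about 1.5x the transistors
180\,\mathrm{W} \times 1.5 = 270\,\mathrm{W}
```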
 
Nvidia's target was power consumption near the GTX 285, according to Ailuros. So I don't think Nvidia will raise the GPU voltage and frequency to the maximum. Also consider that Nvidia has claimed Fermi has hardware support for overvolting, so there has to be a fair amount of headroom up to the maximum power consumption.

BTW, what is the real transistor count of Fermi now, 3 billion or 3.2 billion? I have read both numbers in the last few days.
 
I don't believe that's possible, for a number of reasons. First, all review sites will do their testing at factory clocks and allocate only one extra page to overclocking, and if Fermi is around the GTX 285's TDP (183W), which is also the 5870's TDP, its performance will only be on par with the 5870, but with a 50 percent bigger die.

That, and the people who sound like they know things we don't consistently indicate that the chip has huge power consumption.
 
GTX285 was 204W, as I recall.
 