"Maybe to counterbalance the fact that the GPUs shown are salvage parts (fewer CUDA cores active)..."

Emphasis mine. Is there confirmation that the demos were on non-512 SP parts?
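For a rough sense of what "salvage" would imply (illustrative numbers only; a 448-SP configuration is an assumption here, not a confirmed spec): a part with 448 of 512 SPs active would need roughly 14% higher clocks to match the full chip's shader throughput, all else being equal:

    512 / 448 ≈ 1.14, i.e. about +14% clock speed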
Not really. It is about absolute performance: in the enthusiast market, power efficiency is hardly at the top of the priority list. If they can deliver a single card that is quite a bit faster than the current performance "king" (the HD 5970), that is what they will deliver.
"no, optimizing for wireframe view"

Since when do you optimize "for this view"? Isn't that just another way of optimizing for rails?
"Am I wrong or do I see some intense stuttering when the wireframe is on?"

Wireframe shows massive overdraw; it stutters with Evergreen GPUs too.
With three GF100s under the hood?
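On the overdraw point, a minimal sketch of how a renderer typically toggles wireframe in OpenGL (the helper name is made up; this is not the demo's actual code):

    /* Rendering in wireframe does not reduce geometry work: every
       triangle is still transformed and rasterized, just as lines.
       With dense tessellation the edges overlap heavily on screen,
       which is the "massive overdraw" mentioned above. */
    #include <GL/gl.h>

    /* Hypothetical helper, not the demo's actual code. */
    void set_wireframe(int enabled)
    {
        glPolygonMode(GL_FRONT_AND_BACK, enabled ? GL_LINE : GL_FILL);
    }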
[Translated from French:] The engineer in charge also gave us Fermi's power consumption: 300 watts! That is the limit allowed by the PCI-Express 2.0 specification, and as a reminder, this Fermi has only a single GPU! A delayed launch, 300-watt consumption, significant heat output: all of this makes us say that, however much of a technological jewel Fermi may be, it already looks ill-fated. It is also whispered that the most powerful Fermi would only match a Radeon HD 5870, but not the 5970. To go beyond the 5870, the clocks would have to be raised, and that would exceed the 300-watt limit. Fermi's life could therefore be short, and the next generation could arrive much sooner...
Which says what?
Demo crashing could be the result of immature drivers. In my experience of NV product demos over the years, especially in the last few years, you shouldn't read anything into the cooling solution. That'll have been shown off for two reasons: to promote partner products (Maingear have been working with NV for years now) and simply because it's cool. The product managers for GeForce are all high-end PC gaming enthusiasts, and they love stuff like that.
Putting demo instability down to the cooling solution is folly. Do we even have confirmation it really crashed? I know it's fun to speculate, but this thread could do with returning to hard facts.
Automatic translation:
http://translate.google.be/translat...dissipation-thermique-tres-importante/468101/
According to them:
- Fermi TDP = 300 W
- GTX 380 ≈ HD 5870 in performance
- Very hot; certified cases needed to avoid problems with SLI
In other words: epic fail. I hope they are wrong.
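For reference, that 300 W figure is simply the PCI-Express 2.0 power budget ceiling, assuming a board fed by the slot plus one 6-pin and one 8-pin connector (the connector layout is an assumption, not a confirmed spec):

    75 W (slot) + 75 W (6-pin) + 150 W (8-pin) = 300 W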
"The engineer gave us Fermi's power draw: 300 watts!"

Which says what?
Sure, and why can you use the GTX 295 in Quad-SLI without certified cases?
'Cause it has a lower TDP and a better cooling system?
"We also heard the fastest variant would just equal a Radeon HD 5870, not an HD 5970. To go further, it would require higher frequencies, thus exceeding the 300-watt limit. Fermi could then be short-lived and a new generation could come sooner than expected."

Was this heard from anyone reputable? I think the most common expectation is that, on a single-chip basis, Fermi's top grade will be faster than a single Cypress.