NVIDIA Fermi: Architecture discussion

driver issues?
I'm sure when GF100 is launched, all these "static" benchmarks will be fully "optimized", i.e. the real code will be replaced by the driver with something hand-tuned.
 
Not really. It is about absolute performance. In the enthusiast market, power efficiency is not really at the top of the list of priorities. If they can deliver a single card that is quite a bit faster than the current "king" of performance (the HD 5970), that's what they will deliver.

Not at all what I said/meant. It's about architectural power efficiency at the 300 W limit: both dual cards will be 300 W (if they want the PCIe logo ;)), and thus the fastest card is the most power efficient.
So if a single 380 is 250 W, they'll have to downclock a lot (seems like too much to make sense, imho) to keep the dual card at 300 W, and even if the 250 W "380" is quite a bit faster than the 180 W 5870, it's not a given that the 300 W "395" is faster than the 300 W 5970.
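Just to put rough numbers on that trade-off, here's a back-of-the-envelope sketch (my own assumptions, not vendor figures): performance is taken to scale linearly with clock, power roughly with the cube of clock (clock plus the voltage reduction it permits), and the 250 W / 300 W figures are the ones from above.

```python
# Back-of-the-envelope sketch of dual-card downclocking (illustrative only).
# Assumptions: perf ~ clock, power ~ clock^3 (clock plus voltage scaling).
def downclock_for_budget(single_tdp_w, dual_budget_w=300.0, power_exp=3.0):
    per_gpu_budget = dual_budget_w / 2.0                  # each GPU's share of the dual card
    return (per_gpu_budget / single_tdp_w) ** (1.0 / power_exp)

clock_scale = downclock_for_budget(250.0)                 # hypothetical 250 W single card
perf_scale = clock_scale                                   # perf ~ clock (assumption)
print(f"~{clock_scale:.0%} clock per GPU, dual card ~{2 * perf_scale:.2f}x a full-speed GPU")
```

Under those (fairly generous) assumptions each GPU runs at about 84% clock, so the dual card tops out around 1.7x a full-speed single GPU before SLI scaling losses, which is one way to see why a 300 W "395" beating the 300 W 5970 isn't automatic.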
 
Since when do you optimize "for this view"? Isn't that just another way of optimizing for rails?
 
Am I wrong, or do I see some intense stuttering when the wireframe is on?
With three GF100s under the hood? :LOL:
Wireframe shows massive overdraw; it stutters on Evergreen GPUs too.

Hell, look closely at the Heaven scenes with tessellation + wireframe... some areas are just plain white, meaning some triangles are sub-pixel, and wireframe adds to that already high overdraw.

It reminds me of a statement Huang made in Oct/Nov, "From rasterizing and raytracing, only one will survive", which to me was indicative that they already knew Fermi was going to struggle with very high polygon counts, if not with any D3D load.

Also, note that whatever resolution you choose, tessellation generates exactly the same number of triangles, so the lower the resolution, the higher the overdraw. I've been playing with this bench a little, and the numbers speak for themselves: the HD5770 framerate during high-poly scenes is between 13 and 17 fps (the difference being the result of rendering being slower at higher res where there's no tessellation), the HD5870 is between 20 and 23 fps, and that's between 640x400 and 1920x1200.
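To illustrate (with a made-up triangle count, not one measured from Heaven): holding the tessellated triangle count fixed while shrinking the resolution makes the average triangle cover less and less of a pixel, which is one way to see the growing overdraw.

```python
# Toy illustration: fixed tessellated triangle count vs. render resolution.
# The triangle count is hypothetical, not taken from the benchmark.
TESSELLATED_TRIS = 1_600_000

for width, height in [(640, 400), (1280, 720), (1920, 1200)]:
    pixels = width * height
    print(f"{width}x{height}: {TESSELLATED_TRIS / pixels:.2f} triangles per pixel on average")
```

At the lowest resolution the average triangle covers only a fraction of a pixel, while at 1920x1200 it is roughly a pixel or more, so the low-res runs spend proportionally more of their time on per-triangle setup and raster overhead than on shading pixels.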
 
Emphasis mine. Is there confirmation that the demos were on non-512 SP parts?

Sorry, I should have written the sentence differently. ;)
To me it seems at least strange that a liquid-cooled system, in a certified case, crashes during an in-house demo, even if it's a heavy one.

Anyway, more clouds appear on the horizon (the site is the French version of PCWorld):

http://www.pcworld.fr/2010/01/08/ma...dissipation-thermique-tres-importante/468101/

The engineer in charge also gave us Fermi's power consumption: 300 watts! That is the limit allowed by the PCI-Express 2.0 standard and, as a reminder, Fermi has only a single GPU! Delays, 300-watt consumption, significant heat output: so many elements that lead us to say that, however much of a technological jewel Fermi may be, it already looks ill-born. It is also whispered that the most powerful Fermi would only match a Radeon HD 5870, but not the 5970. To go beyond the 5870, the frequencies would have to be raised and... the 300-watt limit would then be exceeded. Fermi's life could therefore be short, and the next generation would arrive noticeably sooner...
 
Demo crashing could be the result of immature drivers. In my experience of NV product demos over the years, especially in the last few, you shouldn't read anything into the cooling solution. That'll have been shown off for two reasons: to promote partner products (Maingear have been working with NV for years now) and simply because it's cool. The product managers for GeForce are all high-end PC gaming enthusiasts, and they love stuff like that.

Putting demo instability down to the cooling solution is folly. Do we even have confirmation it really crashed? I know it's fun to speculate, but this thread could do with returning to hard facts.
 

A driver problem with their own software? At least don't show it to the world, then. I sincerely miss the point of showing an in-house demo that crashes / is very slow in certain situations.
I mean, you are showing triple-SLI Fermi... It should be fast as hell, not lagging like a 400-buck PC...
NVIDIA should know that, given that they're so fond of marketing.
 
Which says what?
The engineer gave us Fermi's power draw: 300 watts!

It's the limit allowed for PCI-Express 2.0 certification and, as a reminder, Fermi is only one GPU!

Late, high power draw, hot: many things hinting that, although Fermi is a great piece of engineering, it already seems not to be very well born.

We also heard the fastest variant would just equal a Radeon HD5870, not an HD5970. To go further, it would require higher frequencies, thus exceeding the 300-watt limit. Fermi could then be short-lived and a new generation be coming faster than expected.

A free translation, since Google Translate often... translates into googlish; I hope I'm not as bad as that bot. At least I understand the article, since it's written in my native language, so even with some missing word translations it should be readable.
 
Sure, and why can you use a GTX 295 in Quad-SLI without a certified case?

'Cause it has a lower TDP and a better cooling system?
I don't know; they're reporting something overheard from case manufacturers. :LOL:
Anyway, PCWorld is not Fudzilla, I think. ;)
 
Putting demo instability down to the cooling solution is folly. Do we even have confirmation it really crashed? I know it's fun to speculate, but this thread could do with returning to hard facts.

Riiight...;) Not too many of them, tbh.
 
We also heard the fastest variant would just equal a Radeon HD5870, not an HD5970. To go further, it would require higher frequencies, thus exceeding the 300-watt limit. Fermi could then be short-lived and a new generation be coming faster than expected.
Was this heard from anyone reputable? I think the most common expectation is that, on a single-chip basis, Fermi's top-end part will be faster than a single Cypress.
 