NVIDIA Fermi: Architecture discussion

I was expecting this from NV, but the question remains: will they be able to support three monitors with only one card, or will you have to SLI them?

HardOCP just did extensive testing of Eyefinity at different resolutions across an array of cards and games, based on playable (30 to 50 fps) frame rates. Kyle's conclusion was the 5850 as a bare minimum, the 5870 as the most balanced, and the 5890 as the best with caveats, for real gaming.

http://www.hardocp.com/article/2010/01/05/amds_ati_radeon_eyefinity_performance_review

Fermi needs a minimum of 100 to 120 fps for a workable 3-monitor 3D set-up. Based on HardOCP's benchmarks, it's going to take substantially more than 5890 performance to retain anything near enthusiast/gamer settings.
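
To put some rough numbers on that (the resolutions, refresh rates and targets below are my own assumptions, not HardOCP's, and pixel throughput is only a crude proxy for total GPU load):

# Back-of-the-envelope look at why 3-monitor stereo 3D is so demanding.
# All figures are illustrative assumptions, not measured data.

def pixels(width, height, monitors=1):
    """Pixels rendered per displayed frame."""
    return width * height * monitors

single_1080p = pixels(1920, 1080)        # ~2.07 MP, a "normal" gaming load
eyefinity_3x = pixels(1920, 1080, 3)     # ~6.22 MP across three monitors

# HardOCP's "playable" Eyefinity target was roughly 30-50 fps; take the middle.
eyefinity_target_fps = 40

# Stereo 3D on 120 Hz panels renders two views per displayed frame,
# hence the 100-120 fps of rendering work quoted above.
stereo_target_fps = 60 * 2

baseline    = single_1080p * 60                      # single 1080p monitor at 60 fps
eyefinity   = eyefinity_3x * eyefinity_target_fps
surround_3d = eyefinity_3x * stereo_target_fps

print(f"Eyefinity vs single 1080p@60:     {eyefinity / baseline:.1f}x the pixel throughput")
print(f"3-monitor 3D vs single 1080p@60:  {surround_3d / baseline:.1f}x")
print(f"3-monitor 3D vs Eyefinity target: {surround_3d / eyefinity:.1f}x")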

But Nvidia are masters of the PR game; it will be fascinating to see what they put together.
 

HD 5890 performance? Are you really trying to compare Fermi-based GeForces of still unknown performance with a product that hasn't even been announced and that we don't know will ever exist? :LOL:
 
What I meant was that the GF104 codename was used to reference the Fermi-based GeForce X2 card.

I speculated in the past that if GF100 (the single-chip high end) was quite a bit faster than the HD 5870 (let's assume LegitReviews' number: 36%), NVIDIA doesn't really need a GF100 X2 to beat the HD 5970; a mid-range chip X2 should be more than enough.
Most considered that flawed speculation, because there was no news of a tape-out for such a chip.

Right now, GF104 either references the X2 card, or there is a mid-range chip that taped out without much hassle or news about it.
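
Just to make that arithmetic explicit, a quick sketch with everything normalised to an HD 5870 = 1.0; the 36% uplift is the LegitReviews rumour, and every other factor (CF/SLI scaling, mid-range chip size, downclocking) is purely an assumption of mine:

# Toy performance comparison, all relative to an HD 5870 = 1.0.
hd5870 = 1.00
gf100  = hd5870 * 1.36          # rumoured single-chip GF100 uplift (LegitReviews)

cf_scaling  = 0.75              # assumed average CrossFire scaling for the 2nd GPU
sli_scaling = 0.80              # assumed average SLI scaling for the 2nd GPU
hemlock_clocks = 0.90           # HD 5970 runs below full 5870 clocks to fit 300 W

hd5970 = hd5870 * hemlock_clocks * (1 + cf_scaling)

midrange    = gf100 * 0.70                      # hypothetical mid-range Fermi chip
midrange_x2 = midrange * (1 + sli_scaling)
gf100_x2    = gf100 * 0.85 * (1 + sli_scaling)  # assume downclocking to fit 300 W

for name, perf in [("GF100", gf100), ("HD 5970", hd5970),
                   ("mid-range X2", midrange_x2), ("GF100 X2", gf100_x2)]:
    print(f"{name:14s} ~{perf:.2f}x HD 5870")

With those (debatable) inputs a mid-range X2 already edges past the HD 5970, which is the whole point of the speculation.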

This would contradict every single generation of nVidia codenames since they started to have more than 1 chip per generation
 
Isn't that what I said ?

I didn't understand you to mean that GF104 was a binned GF100 part, no. I can see that that might be what you meant. :shrug:

I also meant to imply that it could fill both the single-chip "high-end" (<GF100) and the GX2 "uber-crazy-end" niches, as my assumption is that the binning would be grabbing GF100 parts with a few bad ALUs (as opposed to the very best performing, coolest GF100 parts). But I see I did a poor job of saying that :(

Kaotik said:
This would contradict every single generation of nVidia codenames since they started to have more than 1 chip per generation

This is nVidia you're talking about? Masters of unambiguous naming conventions....
 

What I was led to believe was GF100 = GF380/360 and GF104 = 350/340/320, with the possibility of a 355 X2.
 
"2:45PM He still didn't tell us when GF100 would ship, and it's over! What a tease. At least he's aware that people are curious, very sensitive of him."


I had that feeling. Thankfully Palm's presentation was so awesome that I don't regret staying up.
 
"2:43PM The new GF100 GPU is in production, it's "ramping very hard," and will be on display here at CES."

Well, I guess that ends any theories about a possible A4 stepping, if his statement is true. Although given Nvidia's history of twisting words, I guess you could also read it to mean that they're having difficulties and Fermi production is hard to get ramping. ;)
 
So if GF100 is ramping, what kind of numbers can we expect to see before the end of the quarter? If yields are still low, the price has to remain high due to supply and demand, so when is the practical release date from here? Will they simply release it in the last few weeks of Q1, with real supply coming later, by mid-Q2?
 
This would contradict every single generation of nVidia codenames since they started to have more than 1 chip per generation

But that's because we are assuming that this is the codename of a chip, when it might just be a codename for the card.

We'll see.
 
I didn't understand you to mean that GF104 was a binned GF100 part, no. I can see that that might be what you meant. :shrug:

I definitely didn't say a binned GF100, but a Fermi-based GeForce X2 card, which just means a card with two chips based on Fermi. Whether they use full Fermi chips (all units enabled) or not is another question.
Although they may go that route if GF100 really is right on the heels of Hemlock in terms of performance: they won't need an X2 card with two fully enabled Fermi chips to beat it.
 
GF104 as a dual-chip board is still based on that unverified 'source', a laid-off engineer. I wouldn't put too much effort into arguing over it.

And the truth is, a dual-chip card isn't necessary unless the performance of GF100 / the single chip is too close to the 5xxx equivalent for comfort and they want a dual chip to maintain the performance crown. If GF100 is all it's cracked up to be, I don't know how necessary that dual chip is (granted, it could be two cut-down cores).
 

A dual-Fermi would have the same limitation as the HD 5970: the 300 W PCI-E 2.0 spec TDP.
All in all, a dual-Fermi makes sense if Fermi (GF100) has better performance per watt than Cypress and if SLI scaling is (much) better than CrossFire. And, of course, if the single GPU accomplishes the task, a dual-GPU card isn't necessary.

PS: sorry for my (bad) English
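
A rough sketch of exactly that condition; the 300 W board limit and Cypress' 188 W TDP are real, but the overheads and scaling factors below are assumptions chosen just to show how the comparison swings on performance per watt:

# Does a 300 W dual-Fermi beat a 300 W Hemlock? In this toy model it comes down
# to the two factors above: performance per watt, and SLI vs. CrossFire scaling.
BOARD_LIMIT_W = 300.0                  # PCI-E spec ceiling for one board

def dual_card_perf(perf_per_watt, mgpu_scaling, board_overhead_w=40.0):
    """Relative performance of a dual-GPU board that must fit within 300 W."""
    gpu_budget = BOARD_LIMIT_W - board_overhead_w      # split across the two chips
    return perf_per_watt * (gpu_budget / 2) * (1 + mgpu_scaling)

cypress_ppw = 1.0 / 188.0              # normalise: one HD 5870 (188 W TDP) = 1.0
hd5970 = dual_card_perf(cypress_ppw, mgpu_scaling=0.75)      # CF scaling assumed

for ratio in (0.8, 1.0, 1.2):          # Fermi perf/W relative to Cypress (assumed)
    dual_fermi = dual_card_perf(cypress_ppw * ratio, mgpu_scaling=0.85)  # SLI assumed
    print(f"Fermi at {ratio:.1f}x Cypress perf/W -> dual-Fermi ~{dual_fermi / hd5970:.2f}x HD 5970")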
 

GF100 is not cracked up to beat Hemlock, so a dual chip card will always be necessary to retake the performance crown.
 
GF100 is not cracked up to beat Hemlock, so a dual chip card will always be necessary to retake the performance crown.

Sure, but if the '380' is 250 W, it may not make much sense to have a 300 W dual-GPU card.
It could even be that it couldn't beat the 5970, even if the 380 trounces the 180 W 5870; it'll be all about power efficiency (at 300 W) and not absolute performance.
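
Spelling that out with some assumed numbers (the 250 W figure is the rumour above; the board overhead, SLI scaling and the performance-vs-power exponent are mine):

# If a single-chip '380' is a 250 W part, each chip on a 300 W dual board only
# gets roughly half that power, so both chips have to be cut down or downclocked.
single_chip_tdp = 250.0                 # rumoured '380' board power
board_limit     = 300.0                 # PCI-E spec ceiling
board_overhead  = 40.0                  # memory, VRM, cooling, etc. (assumed)
per_chip_budget = (board_limit - board_overhead) / 2     # ~130 W per chip

power_ratio = per_chip_budget / single_chip_tdp          # ~0.52

# Assume performance scales roughly as power**alpha when clocks and voltage drop
# together; alpha < 1 means you keep more performance than power.
sli_scaling = 0.85                      # assumed
for alpha in (0.5, 0.7, 1.0):
    per_chip_perf = power_ratio ** alpha
    dual_perf = per_chip_perf * (1 + sli_scaling)
    print(f"alpha={alpha:.1f}: each chip ~{per_chip_perf:.0%} of a '380', "
          f"dual card ~{dual_perf:.2f}x one '380'")

So unless performance holds up very well as power comes down, the dual card is barely faster than (or even slower than) the single 250 W chip, which is the efficiency point above.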
 