NVIDIA Fermi: Architecture discussion

The most interesting thing so far is that no benches have leaked from Asia...should happen any day now...
 
The most interesting thing so far is that no benches have leaked from Asia...should happen any day now...
With a launch in March, I would not expect real numbers until February. There will be numbers 2-3 weeks before release; in other words, leaked NDA material.
 
The most interesting thing so far is that no benches have leaked from Asia...should happen any day now...

Why? There are no cards out to reviewers or AIBs. I don't think we will see real, independent numbers until February.
 
Why? There are no cards out to reviewers or AIBs. I don't think we will see real, independent numbers until February.

In the past there have been quite a few leaks of early, unauthorized benches about one to two weeks before AIBs got boards. Since the latest rumors are that AIBs will get boards within the next week or ten days, now is about the time for unauthorized leaks :)

You're right though, no way we'll see anything independent and believable until Feb.
 
In the past there have been quite a few leaks of early, unauthorized benches about one to two weeks before AIBs got boards. Since the latest rumors are that AIBs will get boards within the next week or ten days, now is about the time for unauthorized leaks :)

You're right though, no way we'll see anything independent and believable until Feb.

But how could you leak numbers when you don't have a card? :LOL:
I think we will only get the specification this month.
 
In fact, way more than "only one person" are inclined to think Fermi won't be a performance monster...

Anyone who was fooled by AMD before the launch of R600, or by NV before the launch of NV30, simply can't believe their promises without proof.

Kinda hard to look past the deafening silence and the 99% probability that if Fermi's hard numbers were THERE, Nvidia would have been leaking some figures to keep the high-end community from defecting en masse to AMD.

I've noticed a sea change over the last four months on the forums, particularly related to Charlie's predictions on Fermi. At first the sentiment was running heavily in favor of Nvidia and was derisive of Charlie, generally giving Nvidia the benefit of the doubt. That is now pretty much reversed: Nvidia scepticism prevails, and the fanboyism has grown sparse and mostly muted.

That, added to earlier blunders, puts Nvidia in the precarious position of courting a near-certain PR disaster if they pre-release bogus numbers or promises to keep people from jumping ship and the actual numbers then fall short when the card is released. Any site that tried to rationalize or downplay Fermi's numbers, or cover for Nvidia, would stir up a hornet's nest.

Nvidia simply cannot AFFORD to put out numbers or promises they aren't certain they can deliver when they release the board. What counts is not what Fermi can do eventually, but what it can do when it is released. That is when its reputation will be cemented in the public's mind, and if Fermi's numbers are marginal it's a Hobson's choice. Right now the best of those choices appears to be not pre-poisoning the waters any more than has already occurred, releasing the card as soon as feasible, and then letting the PR department go to work with what they have.

Picking the least bad outcome.
 
Hmm, all that and nothing worthwhile :!: How long do you think driver development would take? Let's see, hmm, August was the tape-out of A1? And they will be demoing the board at CES, not just showing benchmarks ;).

How could someone know that it would be pushed back if nV didn't even know how many respins it would take? Ridiculous!
 
A wider bus would require additional chip perimeter for the pads, which would most likely have added to die size (at which point, why not add even more ALUs for all the free space). Another option would be to remove other features that take up pad space, but those options would be very limited as memory pads are the dominant feature on the perimeter.

Well, do we know that Cypress couldn't support a wider bus at its current die size? The general question still stands, though, as to whether throwing more of a given unit at the problem actually results in tangible gains. Too bad it's not possible to disable individual SIMDs or ROPs like back in the day, to do some sort of analysis of which units are superfluous.
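Just to put rough numbers on what a wider bus would buy (a back-of-envelope sketch, nothing leaked: the 4.8 Gbps figure is the HD 5870's shipping GDDR5 data rate, and the 384-bit case is purely hypothetical for illustration):

```python
# Peak theoretical bandwidth = bus width in bytes x per-pin data rate.
# 4.8 Gbps GDDR5 matches the shipping HD 5870; the 384-bit case is hypothetical.

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gbps

print(peak_bandwidth_gb_s(256, 4.8))  # 153.6 GB/s -- Cypress as shipped
print(peak_bandwidth_gb_s(384, 4.8))  # 230.4 GB/s -- 1.5x bandwidth at the same data rate
```

Bandwidth scales linearly with bus width, but so does the number of memory pads along the perimeter, which is exactly the die-size trade-off raised above.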

Kinda hard to look past the deafening silence and the 99% probability that if Fermi's hard numbers were THERE, Nvidia would have been leaking some figures to keep the high-end community from defecting en masse to AMD.

Perhaps, but we're not there yet. Not only are Nvidia users holding fast, it seems many RV770 owners are happy with what they've got for now as well. I don't think the race is quite over yet.
NV30 and R600 didn't fare so poorly simply because they were late. It's that they were late and lackluster once they were available, and throughout their lifetimes. I don't know what AMD has in store for 2010, but given it's all 40nm for the next 12 months, it seems to me that Fermi will have lots of time to catch up. That's assuming it's not a disappointment on release, of course.
 
Perhaps, but we're not there yet. Not only are Nvidia users holding fast, it seems many RV770 owners are happy with what they've got for now as well. I don't think the race is quite over yet.
NV30 and R600 didn't fare so poorly simply because they were late. It's that they were late and lackluster once they were available, and throughout their lifetimes. I don't know what AMD has in store for 2010, but given it's all 40nm for the next 12 months, it seems to me that Fermi will have lots of time to catch up. That's assuming it's not a disappointment on release, of course.

True, but this generation is unlike any other in that the hardware has now surpassed the software. The 5870 already allows all settings to be maxed out in nearly every game presently out at 24" and below, and it's going to be a while before this changes. The only compelling reason, other than bragging rights, to even need something with substantially more performance is going to three screens, and AMD owns that niche. I'm thinking that will grow far faster than some first thought, considering the across-the-board and sometimes startling enthusiasm from the sites that have reviewed a three-screen set-up ... to SEE it in action apparently kindles an irresistible lust to have it, kind of thing. That means a shift from upgrading to the holy grail of a 30-inch screen to a three-screen set-up (and at substantially less cost to boot) ~ a shift from a focus on a Fermi-oriented solution (assuming it has a substantial performance edge) to an AMD-Xfire solution ... at least until Nvidia can come up with a multi-screen solution.

For those already heavily into SLI/Xfire who have a 30" screen, the logical next step is THREE 30" monitors; after all, they are by definition already way INTO the absolute best gaming experience/edge possible, and a three-screen set-up provides a substantial advantage in head-to-head gaming. That leads directly and only to AMD. And for those like Coleen Kelly at TWiT who already run three 30" monitors, the next step is almost certainly Xfired 5870s (now that Eyefinity supports Xfire), and there simply is no reason to buy a Fermi board. It doesn't GET you anything.
 
If you are already running with maxed-out settings at 60+ fps, what exactly do you need more performance FOR?


Not just graphics ;). And then you have new games coming out in 6 months; don't you think they will push these cards more? You know, when the G80 came out there was really no reason for it either: ATI's and nV's offerings were pretty good when you look at SLI and Xfire too. But really, what kind of reasoning is that, when most people don't think about SLI and Xfire?
 
Not just graphics ;). And then you have new games coming out in 6 months; don't you think they will push these cards more? You know, when the G80 came out there was really no reason for it either: ATI's and nV's offerings were pretty good when you look at SLI and Xfire too. But really, what kind of reasoning is that, when most people don't think about SLI and Xfire?

And if AMD releases a 5890 card when Fermi is released and it matches Fermi's performance?
And is cheaper?
And is quieter?
And runs cooler?
And uses less power?
And does Eyefinity?

If all the above were true, would you still buy a Fermi over a 5890?

Heck, who would?

Thus the conundrum Nvidia faces if Fermi does not meet performance expectations and cannot trounce the expected 5890 ... why WOULD anyone buy it over a 5870/5890 ... and if one DID do so just because it's an Nvidia, maintaining pride is going to be a tough slog when one's peers evince laughter and derision for being such a fanboy twit as to buy a second-rate solution just because ...

Fermi not besting a 5890 is a worst-case scenario, perhaps, but one well within the realm of the possible.
 
And if AMD releases a 5890 card when Fermi is released and it matches Fermi's performance?
And is cheaper?
And is quieter?
And runs cooler?
And uses less power?
And does Eyefinity?

If all the above were true, would you still buy a Fermi over a 5890?


And what does that have to do with what you were saying before? You were saying there is already too much performance; make up your mind. :D

I don't know how to respond now........
 
And if AMD releases a 5890 card when Fermi is released and it matches Fermi's performance?
And is cheaper?
And is quieter?
And runs cooler?
And uses less power?
And does Eyefinity?

If all the above were true, would you still buy a Fermi over a 5890?
Yes,
because I don't want to wait till I can play Borderlands with adequate performance (still waiting).
because I don't want to wait till ATI fixes Gothic 2 performance (broken for 3 years).
because I don't want to wait 3 months for a Crossfire profile for Risen (still waiting).
because I don't want to wait till Saboteur is fixed on ATI cards (still waiting; only a beta patch is out).
because I want to use driver profiles. App detection, adjust and forget. Still missing from ATI.

Btw, Eyefinity and less power don't go together. Three monitors and you still care about power consumption?!

And I think Nvidia will deliver all its features at release. ATI missed Crossfire support for Eyefinity and LOD switching for RGSSAA. CF for Eyefinity is still missing; it only works on the 5970, if I am right.

Drivers are ATI's main disadvantage. For the drivers alone, I would pay more for an Nvidia card of the same performance.
 
And if AMD releases a 5890 card when Fermi is released and it matches Fermi's performance?
And is cheaper?
And is quieter?
And runs cooler?
And uses less power?
And does Eyefinity?

If all the above were true, would you still buy a Fermi over a 5890?

Heck, who would?

I would, for its Folding@Home performance. F@H GPU client performance is about the only thing holding me back from buying a Radeon HD 5 series card to replace either my 8800 GTS 512 or GTX 285. That, and the current pricing, which places these cards above their initial MSRP.
 