NVIDIA Fermi: Architecture discussion

I can't wait to write "EPIC FAIL for NV" when that thing launches in... well, May?
If it does launch as late as May, it will be pretty damn fail for NV, that's true. AND fail for Fermi too if it performs as indicated by these early figures. However, I'd be sad and unhappy if that's the case, because in my heart I'm a tech geek, and Fermi is a radical and brave design with advances in many areas that will be very important for the future of computing.

Perhaps Fermi's premature in reaching so high already, but still, ya gotta admire Nvidia for trying, I think (even though I hate them for their crappy business practices).

Besides, I just plain hate waiting for anything... I am one of those who feel that if I have to wait a week, or even a day, for an NDA to expire, I'm waiting just too damn long! And May's quite a ways off... :(
 
Of course they are negotiated independently... I'm just asking what the results of those negotiations are. If they are at capacity and they give NVIDIA a larger discount per mm2, then any allocation to NVIDIA will lose them money in the short run. In the long run they want NVIDIA and AMD to stay competitive, of course, but it still loses them money in the short run.

So, how charitable are we to assume TSMC is?
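
To put the short-run point in concrete terms, here's a toy calculation in C++. Every figure in it (wafer area, price per mm2, discount) is purely hypothetical, picked only for illustration:

```cpp
// Hypothetical back-of-the-envelope: all numbers below are made up.
#include <cstdio>

int main() {
    const double wafer_area_mm2 = 70650.0; // ~300mm wafer, pi * 150^2
    const double list_price_mm2 = 0.05;    // hypothetical $/mm2 at list price
    const double nv_discount    = 0.15;    // hypothetical 15% discount for NVIDIA

    double list_revenue = wafer_area_mm2 * list_price_mm2;
    double nv_revenue   = list_revenue * (1.0 - nv_discount);

    // At full capacity, every wafer started for NVIDIA displaces a wafer
    // that could have been sold at list price, so the discount is pure
    // short-run opportunity cost for TSMC.
    printf("Revenue at list price:      $%.2f per wafer\n", list_revenue);
    printf("Revenue with discount:      $%.2f per wafer\n", nv_revenue);
    printf("Short-run opportunity cost: $%.2f per wafer\n",
           list_revenue - nv_revenue);
    return 0;
}
```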

I doubt very much that short-term thinking drives TSMC's business. Nvidia and AMD aren't really on the same level in terms of the amount of business they do with TSMC, and there's GloFo waiting in the wings to take over AMD's GPU production. Then there are the reported personal relationships between Huang and TSMC management. Considering TSMC will lose AMD no matter what they do, these are all reasons for Nvidia to have slightly more leverage in those conversations.
 
Heh, I was waiting to see when the "Nvidia pays TSMC to break AMD's parts" theory would surface. The source is expected; I'm just surprised it took this long. Because AMD selling GPUs for a few months is so horrible that it's worth risking the reputations and viability of both Nvidia and TSMC.

I looked at that article. And I'm no Charlie fan. But did he actually say that? I didn't see it. It'd be pretty absurd.
 
Heh, I was waiting to see when the "Nvidia pays TSMC to break AMD's parts" theory would surface. The source is expected; I'm just surprised it took this long. Because AMD selling GPUs for a few months is so horrible that it's worth risking the reputations and viability of both Nvidia and TSMC.

Besides that, I'm not willing to believe that TSMC is the convenient scapegoat for all possible problems either. If I were to point fingers, I'd rather point three of them, at TSMC, NVIDIA, and AMD, for varying degrees of mistakes that could have led to such low supplies.

The obvious mess right now does not sound to me like it's 100% TSMC's responsibility. Obviously I can't say or even estimate where each party went wrong, but I can't believe that easily that TSMC is the only one to blame here either.
 
I looked at that article. And I'm no Charlie fan. But did he actually say that? I didn't see it. It'd be pretty absurd.

Of course he didn't. Just look at the subtitle first and then:

For TSMC to screw up this badly for this long is not an understandable problem. It goes beyond error, and falls well into that nebulous zone where too many things went wrong for them to all be chance. You have a bring up error, a sudden tool miscalibration, and then two plus months of no one noticing something that should have set off alarm bells in an hour or two. This should have been tested for and caught within a wafer batch or two.

With that in mind, the sheer incompetence of this error, coupled with the sheer incompetence of the metrology, makes you wonder if it was actually by chance. From there, things get really odd.

Something meows on a hot tin roof and it's definitely not me ;)
 
I can't wait to write "EPIC FAIL for NV" when that thing launches in... well, May?

What makes you think it's going to fail? Consider how important this launch is for NV. They've certainly had lots of issues and made some mistakes (fake cards, excessive arrogance and the childish anti-Intel site) but I wouldn't make any assumption about the hardware's performance until we actually see it.
 
Late, slow or not, it mostly matters for games. It won't have any competition in GPGPU. But I'm willing to see AMD prove me wrong ;).

What I'm saying is that despite all these problems, NV has a chance to pull off quite a decent chip.


And about AMD's response to Fermi, I believe what was meant is that AMD was preparing an answer to Fermi (a respin) anyway and will probably be ready for spring. I don't think it was implied that they have running chips. Anyway, we should hear about a tape-out, shouldn't we?
 
Why should we expect that to be a real card?
Why wouldn't it be a real card? Engineering the final product doesn't stop just because the chip is late.

It's not like they'll have cards ready to go now just sitting around till May waiting to be sold :)
With boards rumoured to be in some third parties' hands, many wouldn't be sitting around...

Jawed
 
Slide here has the TDP: http://www.computerbase.de/bildstrecke/27497/23/

It's 225W. I don't suppose they can go too much higher for consumer GPUs. Maybe a little; IIRC the single-chip record was the GTX 280 at 236W.

Edit: Then again, consumer boards won't have 3GB on them... probably doesn't make a lot of difference though.

If memory serves, the C1070 was in the ~190W ballpark with 4GB of RAM (with a peak 1.44GHz ALU frequency), while they themselves claim 204W for the GTX285 with 1GB of RAM and a 1.476GHz ALU frequency. I don't recall what the memory frequency on the C1070 was, but on the C1060 it's 800MHz, which is quite a bit lower than the 1242MHz memory speed on the GTX285.
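
Putting the figures quoted in this thread side by side makes the memory-size point concrete (all numbers are the ballpark ones cited above, not datasheet values):

```cpp
#include <cstdio>

// Board figures as quoted in this thread -- treat them as rough ballparks.
struct Board { const char* name; double tdp_w; double mem_gb; };

int main() {
    const Board c1070  = { "Tesla C1070", 190.0, 4.0 };
    const Board gtx285 = { "GTX 285",     204.0, 1.0 };

    // The 4GB board actually draws ~14W *less* than the 1GB one, so memory
    // capacity alone clearly isn't what dominates board power; clocks matter
    // far more (the Tesla's memory also runs much slower: 800MHz vs 1242MHz).
    printf("%s: %.0fW with %.0fGB\n", c1070.name,  c1070.tdp_w,  c1070.mem_gb);
    printf("%s: %.0fW with %.0fGB\n", gtx285.name, gtx285.tdp_w, gtx285.mem_gb);
    printf("Delta: %.0fW despite %.0fGB more memory on the %s\n",
           c1070.tdp_w - gtx285.tdp_w, c1070.mem_gb - gtx285.mem_gb,
           c1070.name);
    return 0;
}
```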
 
It's certainly late. I don't recall seeing price or performance numbers anywhere. You'll have to wait a few months to tally his score.

Well, since those are just opinions anyway, he can be correct or incorrect depending on anyone's notion of what "slow" or "expensive" means.
 
If Nvidia can't hit their clocks, they may have driver issues and be slower than the competition. It would be ironic if Nvidia kickstarts the HPC/GPU market and AMD comes in and takes it all away with significantly cheaper and more powerful hardware.
 
That would take more than just flops, no? The software support and fancy-schmancy memory model are probably as important, or more so, to those guys. ECC is being pimped as a killer feature too, but I don't know if that's just Nvidia hype or not.
 
That would take more than just flops, no? The software support and fancy-schmancy memory model are probably as important, or more so, to those guys. ECC is being pimped as a killer feature too, but I don't know if that's just Nvidia hype or not.

It probably is, for the workstation/GPGPU market. From the gaming market's POV, I think it's more of a "who cares" situation.
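
On the ECC question: hype or not, on boards that do ship with it, software can at least see whether it's turned on. A minimal sketch using the CUDA runtime API (assumes an NVIDIA GPU and the CUDA toolkit; compile with nvcc):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: list each CUDA device and whether ECC is enabled on it.
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, ECC %s\n", i, prop.name,
               prop.ECCEnabled ? "enabled" : "disabled");
    }
    return 0;
}
```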
 