That is quite a lot of words to say ... nothing.
Ah, ye olde FUD "We might be late but we tried to make it bug free." Because, as you know, only NV was affected by the 40nm issues the last time around.
neliz said:
> Ah, ye olde FUD "We might be late but we tried to make it bug free." Because, as you know, only NV was affected by the 40nm issues the last time around.

It's disturbing to see how many people still believe 40nm was an Nvidia specific problem. The fact that you don't hear about others doesn't mean it wasn't there.

> It's disturbing to see how many people still believe 40nm was an Nvidia specific problem. The fact that you don't hear about others doesn't mean it wasn't there.

But it's also telling that right next to Nvidia, there was a company that was way ahead of them and had far fewer problems with the process. Which is perhaps a good reason to think nvidia botched something, even if they were standing on the shoulders of TSMC.

rpg.314 said:
> But it's also telling that right next to Nvidia, there was a company that was way ahead of them and had far fewer problems with the process. Which is perhaps a good reason to think nvidia botched something, even if they were standing on the shoulders of TSMC.

When you're dealing with issues that are clearly process related, the standard way is to flag them to the fab, make them fix it, and hope it happens before major production starts. It's unusual to change your design for it: the fab will normally already tweak some polygons for you before mask making. Making your die larger and doubling up is pretty much unheard of. In this case, AMD did so anyway and reaped the benefits initially. Nvidia paid for it, but probably mostly in low yielding GT21x products. (Does anybody here really care about those?)

rpg.314 said:
> AMD scored a home run by launching next to Win7, won lots of market / mind / dev share. I think that was worth it, especially if you consider that AMD was coming from a point of area efficiency advantage.

It was definitely worth it. They took a very unusual step by gambling on TSMC not getting their stuff fixed on time and won.

rpg.314 said:
> A large die is going to yield poorly. So that factors in obviously.

That factors in what?

> But beyond that, AMD bet on TSMC screwing up and won so everybody expects nv to have made the same bet.

If Nvidia didn't delay GF100 tape out to work around yield issues, one could say they made the best possible decision: after all, by the time they went to production, those issues were history.

> And gf100 had poor clocks and heat as well.

Uhm, yes, sure.

Speaking of selective Memories…
Both AMD and Nvidia had to find their respective ways around the problems of TSMC's 40nm process.
You don't need to take my word for it though, take Anand's!
http://www.anandtech.com/show/2937/9
"The problem with vias was easy (but costly) to get around. David Wang decided to double up on vias with the RV740. At any point in the design where there was a via that connected two metal layers, the RV740 called for two. It made the chip bigger, but it’s better than having chips that wouldn’t work. The issue of channel length variation however, had no immediate solution - it was a worry of theirs, but perhaps an irrational fear.
TSMC went off to fab the initial RV740s. When the chips came back, they were running hotter than ATI expected them to run. They were also leaking more current than ATI expected."
Now, if you're already near the reticle limit, you cannot simply make the chip bigger, so obviously Nvidia couldn't go the RV740 route.
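As an aside, the arithmetic behind via doubling is plain redundancy: a doubled via only fails if both copies fail, so even a fairly flaky via process can still yield well once everything is duplicated. Here's a minimal Python sketch of that trade, with completely made-up failure rates and via counts (illustrative assumptions, not TSMC 40nm data):

[code]
# Rough illustration of why doubling vias helps on a process with flaky vias.
# All numbers are made-up assumptions for illustration, not TSMC 40nm data.

def chip_via_yield(n_connections: int, p_via_fail: float, vias_per_connection: int) -> float:
    """Probability that every via connection on a die works.

    A connection with several parallel vias only fails if all of them fail.
    """
    p_connection_fail = p_via_fail ** vias_per_connection
    return (1.0 - p_connection_fail) ** n_connections

N_CONNECTIONS = 500_000_000  # assumed number of via connections on a big die
P_VIA_FAIL = 1e-9            # assumed per-via failure probability on a shaky process

print(f"single vias : {chip_via_yield(N_CONNECTIONS, P_VIA_FAIL, 1):.1%}")
print(f"doubled vias: {chip_via_yield(N_CONNECTIONS, P_VIA_FAIL, 2):.1%}")
[/code]

With those invented numbers, single vias lose you roughly 40% of your dies, while doubling makes the via-related loss negligible, which is the trade Anand describes: extra area in exchange for chips that actually work.
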
CarstenS said:
> Haven't heard of that before, and neither seems google - at least with a credible source instead of some random forum post, which indeed show up when googling "Fermi GF100 "double vias"".

Me neither. And how can GF100 be considered reticle size limited?

> That factors in what?

People's focus on yields. They see considerable disparity and latch onto it.

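And the die-size side of that disparity is easy to put numbers on: with the classic first-order Poisson defect model, yield falls off exponentially with area, Y = exp(-D*A). A minimal sketch, using an assumed defect density and only rough die areas (illustrative values, not actual foundry data):

[code]
import math

# Minimal sketch of the first-order Poisson yield model: Y = exp(-D * A).
# Defect density and die areas are rough assumed values for illustration only.

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies expected to come out with zero random defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

DEFECT_DENSITY = 0.5  # defects per cm^2, assumed for an immature process

for name, area_mm2 in [("~330 mm^2 die", 330.0), ("~530 mm^2 die", 530.0)]:
    print(f"{name}: {poisson_yield(area_mm2, DEFECT_DENSITY):.0%} yield")
[/code]

Same process and same assumed defect density, yet the bigger die comes out with well under half the yield, so raw yield comparisons tend to track die size at least as much as they track anyone's engineering.
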
> If Nvidia didn't delay GF100 tape out to work around yield issues, one could say they made the best possible decision: after all, by the time they went to production, those issues were history.

If they hadn't spun Bx series, I would have agreed. They needed a silicon spin to really fix fermi which came almost a year behind AMD. If TSMC had fixed the process by march 2010 and nv's engineering was all right, then why was a Bx spin necessary?

So making the same bet would have been even worse (for GF100). So do we agree that 'everybody' was wrong?
> Uhm, yes, sure.

An indicator of inadequate impedance matching between process and architecture, right?

But let's not forget the other side of the case: there is no question that TSMC completely fixed the issues in early 2010, and 40nm quickly became a very stable, high-yielding process without any via doubling monkey business.
Alexko said:
> Are you sure about that? I don't remember hearing about it.

The 40nm process is of exceptional quality right now, without monkey business. Yes, I am sure about that. As are tons of companies who have rolled out their 40nm chips.

rpg.314 said:
> If they hadn't spun Bx series, I would have agreed. They needed a silicon spin to really fix fermi which came almost a year behind AMD. If TSMC had fixed the process by march 2010 and nv's engineering was all right, then why was a Bx spin necessary?

The B part was faster and consumed less power. Nvidia said they used new lower leakage cells. Sounds like enough reasons to me, especially because the A version power requirements likely prevented them from productizing a full SM part.
