NVIDIA GF100 & Friends speculation

Nope, RAM, PCB, housing etc. are dirt cheap compared to the GPU. You can get 2 GB of DDR3 memory for $60 here; how much do you think GDDR5 costs at wholesale?

And here is where everyone points out that GDDR5 != DDR3. The economies of scale are significantly different. 1 GB of high-speed GDDR5 is in the realm of $80-100 by itself. In other words, GDDR5 commands a significant price premium above and beyond the HIGHEST-VOLUME PRODUCTION DRAM IN THE WORLD? Who would have thought?
 
The notion of "a wafer costs $xxxx so the chip costs $xx" is very short-sighted. There's a lot more that goes into chip costs even before it hits a board. leoneazzurro is certainly on the right track.

Yep, not many people understand the tradeoffs involved in things like how much you add to the cost of the die to reduce the cost of your testing. At-speed vs. functional. Various different tester types and tester speeds. For ultra-high-volume parts it's generally worth it to increase die cost by 25-50% to get test reductions of 15-30%.
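
To put that tradeoff in concrete terms, here is a minimal sketch of the arithmetic; the per-unit die/test cost splits and the 30%/25% figures are assumptions for illustration, not real GPU or tester numbers:

# Hypothetical die-cost vs. test-cost tradeoff, in the spirit of the post above.
# All dollar figures and percentages are made up for illustration.

def unit_cost(die_cost, test_cost, die_increase=0.0, test_reduction=0.0):
    """Per-unit cost after growing the die for DFT and shrinking test time."""
    return die_cost * (1 + die_increase) + test_cost * (1 - test_reduction)

# Case 1: test is a modest share of unit cost -> extra DFT area doesn't pay off.
print(unit_cost(40.0, 25.0), unit_cost(40.0, 25.0, 0.30, 0.25))   # 65.00 vs. 70.75

# Case 2: test dominates (long at-speed runs on expensive testers) -> it does.
print(unit_cost(20.0, 60.0), unit_cost(20.0, 60.0, 0.30, 0.25))   # 80.00 vs. 71.00

Whether the trade is worth it hinges almost entirely on tester time and volume, which is the point being made.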
 
They can launch, but they won't have any volume in the channel after launch; I should rephrase it. And anyway, it was stated they are at ~60% yields; nV did state that current Fermi yields are as good as the GTX 2xx line's.

Maybe the original GT200 at Week 0 of manufacturing... Come now, Razor, you should expect to have to provide proof of things like this by now.
 
Absolutely, Chalnoth. I was just trying to point out the fallacy of trying to link the direct materials cost of the main component to the final sale price. Heck, there are industries where you can lose money with a COGS multiplier of 10 thanks to NRE and low volumes.
Right. I tend to think that for most brand-new high-end GPUs, the materials costs are actually largely irrelevant. They only become highly relevant later in production, when prices start dropping significantly.

To see what I mean, let's take the toy example where company A has a per-unit cost of $300, while company B has a per-unit cost of $320. If they both sell their products at $600, then there is little difference: company A has a little bit of an advantage, but not much.

But as the production ramps up and the prices come down, these two parts might sell for $350 each. Suddenly company A is earning 66% more profit than company B.

So what this tells us, then, is that if we expect nVidia's parts to be more expensive to produce (almost certainly true), there is only one way they can survive a price war with ATI: their parts have to be good enough that people will pay more to buy them. It doesn't necessarily have to be that much more: in the toy example above, company B only needs to sell at a $370 price point to make as much money as company A, so they don't need to charge much more, as long as the cost difference between the two isn't that great.
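
For the curious, a quick sketch of that toy arithmetic (the costs and prices are just the made-up figures from the example):

# Toy margin example from the post above: per-unit cost $300 (A) vs. $320 (B).
cost_a, cost_b = 300.0, 320.0

for price in (600.0, 350.0):
    profit_a, profit_b = price - cost_a, price - cost_b
    print(f"at ${price:.0f}: A nets ${profit_a:.0f}/unit, B nets ${profit_b:.0f}/unit, "
          f"so A earns {profit_a / profit_b - 1:.0%} more per unit")

# Price at which B would match A's per-unit profit once A sells at $350:
print(f"B needs to sell at ${cost_b + (350.0 - cost_a):.0f} to match")   # $370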

And I don't expect the marginal difference will end up being that great, as the GPU itself is only one part of the whole, though obviously that's not a terribly easy thing to judge.
 
There's nothing there, except for this:



Which is vague at best. Then again, JHH also said that NV's current mainstream offering is "fabulous", so maybe "fabulous" is Nvidian for "mediocre".

I think you are more generous with your translation of fabulous than I am.
 
Is it surprising that NV is at A3 silicon already? The card's not even out yet and they've spun it 3 times; it sounds like they have been pushing themselves to the limit.
Occam's razor would say that you need to spin twice because of ordinary logic bugs. What makes you think that this is not the case here?

Yep, not many people understand the tradeoffs involved in things like how much you add to the cost of the die to reduce the cost of your testing. At-speed vs. functional. Various different tester types and tester speeds. For ultra-high-volume parts it's generally worth it to increase die cost by 25-50% to get test reductions of 15-30%.
By up to 50%??? Not saying there aren't such cases, but I just can't imagine any situation where this would be true. Even 25% is far above anything I've seen...

Can you elaborate?
 
By up to 50%??? Not saying there aren't such cases, but I just can't imagine any situation where this would be true. Even 25% is far above anything I've seen...

Can you elaborate?

Most designs already have a significant die overhead for DFx features. Once you add these all up, they generally do add 25+% to the die size. Just doing things like full scan vs. no scan adds a significant overhead.
 
Most designs already have a significant die overhead for DFx features. Once you add these all up, they generally do add 25+% to the die size. Just doing things like full scan vs. no scan adds a significant overhead.
I've done the inventory on a real design in the past: a scan FF vs. a non-scan FF adds maybe 25% to the FF area. Let's say 10% in overall gate area if you have an FF-rich design. Add a handful of observability points or FFs, but those really don't add many gates. MemBIST for bigger RAMs, scan BIST for small register files and a few IO BIST for interfaces. And logic BIST if you're really desperate. Let's say 20% of total area if you have a lot of RAMs.

But 50%?
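
For what it's worth, here is a rough sketch of how such percentages can compose; the FF share of logic area and the BIST/observability figures are assumptions picked to land in the range discussed, not measurements from any real chip:

# Rough DFT area-overhead estimate, using assumed fractions.
ff_share_of_logic   = 0.40   # assumed: fraction of logic area that is flip-flops (FF-rich design)
scan_ff_overhead    = 0.25   # scan FF vs. non-scan FF area penalty, per the post above
bist_overhead       = 0.08   # assumed: MemBIST / scan BIST / IO BIST wrappers
observability_extra = 0.02   # assumed: extra observability points and FFs

scan_part = ff_share_of_logic * scan_ff_overhead          # ~10% of logic area
total_dft = scan_part + bist_overhead + observability_extra

print(f"scan contribution: {scan_part:.0%}, total DFT overhead: {total_dft:.0%}")  # ~10%, ~20%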
 
That's true, but this is going to be a hard launch, at least for the 470; not sure about the 480 though.

You have low standards for a hard launch. As it looks now, the 480 won't even be shipped to all partners; you'd have to be elbow-deep in NV to get them.

The 470 won't start shipping in "Cypress launch quantities" until the middle or end of Q2 (calendar Q2, not NV's fiscal one)
 
I've done the inventory on a real design in the past: a scan FF vs. a non-scan FF adds maybe 25% to the FF area. Let's say 10% in overall gate area if you have an FF-rich design. Add a handful of observability points or FFs, but those really don't add many gates. MemBIST for bigger RAMs, scan BIST for small register files and a few IO BIST for interfaces. And logic BIST if you're really desperate. Let's say 20% of total area if you have a lot of RAMs.

But 50%?

OK, 50% is a bit extreme. There are some parts of designs where a 50% or even 100% increase for testability is realistic, but not for a design as a whole. Still, you get my point.
 
And here is where everyone points out that GDDR5 != DDR3. The economies of scale are significantly different. 1 GB of high-speed GDDR5 is in the realm of $80-100 by itself. In other words, GDDR5 commands a significant price premium above and beyond the HIGHEST-VOLUME PRODUCTION DRAM IN THE WORLD? Who would have thought?

You mean $80-100? Unless by "high-speed" you mean 6-7 GT/s, that can't be right: you can find HD 5670s with 1 GB of 4 GT/s GDDR5 for about $115 on Newegg.
 
So let me get this straight: Most of you think that NVIDIA is dumb and:

1) Wasn't expecting Cypress to be smaller than Fermi
2) Took absolutely no precautions to ensure that Fermi would be profitable
3) Doesn't know how to design chips, and despite the forward-looking architecture and its key elements, performance is barely 30% higher than the previous generation.
4) Will charge an arm and a leg for it, despite not having a good performance lead over the competition

If these points are to be believed/assumed, 3) alone would make Fermi's performance increase over GT200 the worst ever for an architecture change.

And some of those pointing these out want "realistic" assumptions/predictions? I mean... REALLY? :rolleyes:
 
So let me get this straight: Most of you think that NVIDIA is dumb and:

1) Wasn't expecting Cypress to be smaller than Fermi
2) Took absolutely no precautions to ensure that Fermi would be profitable
3) Doesn't know how to design chips, and despite the forward-looking architecture and its key elements, performance is barely 30% higher than the previous generation.
4) Will charge an arm and a leg for it, despite not having a good performance lead over the competition

If these points are to be believed/assumed, 3) alone would make Fermi's performance increase over GT200 the worst ever for an architecture change.

And some of those pointing these out want "realistic" assumptions/predictions? I mean... REALLY? :rolleyes:

Are we reading the same thread?

It seems to me that points 1-3 are just pulled out of thin air. I don't recognize those claims from this thread.

Point 4 is certainly a possible scenario though.
 
I've done the inventory on a real design in the past: a scan FF vs. a non-scan FF adds maybe 25% to the FF area. Let's say 10% in overall gate area if you have an FF-rich design. Add a handful of observability points or FFs, but those really don't add many gates. MemBIST for bigger RAMs, scan BIST for small register files and a few IO BIST for interfaces. And logic BIST if you're really desperate. Let's say 20% of total area if you have a lot of RAMs.

But 50%?


Agreed.

You have DFT covered. Maybe the rest of aron's overhead goes to DFM stuff. I am unfortunately totally unfamiliar with industrial (and not only industrial, actually ;) ) DFM practices.

Also at-speed test overhead?

Edit: OK, so the 50% was somewhat retracted. Anyway, it's nice to know the scan vs. no-scan overhead, so thanks :D
 
So let me get this straight: Most of you think that NVIDIA is dumb and:

1) Wasn't expecting Cypress to be smaller than Fermi

Why would they care about the size of Cypress? Why would nVidia care about anything that any other company does when they design their chips?

2) Took absolutely no precautions to ensure that Fermi would be profitable

Again, what can you do to ensure you sell your products at a profit? With the economy down, you might see that a lot of products can't be sold "at a profit" because no one will buy them.

3) Doesn't know how to design chips, and despite the forward-looking architecture and its key elements, performance is barely 30% higher than the previous generation.

They know very well how to do it, but aiming and shooting don't equal scoring (see NV30, see R600).
Heck, R600 A11 was a 500 MHz chip coupled with GDDR3; do you think GF100 looks bad in that light?

4) Will charge an arm and a leg for it, despite not having a good performance lead over the competition

NVIDIA is a software company (I think); they sell their hardware, and CUDA comes at a premium?
 