Nvidia BigK GK110 Kepler Speculation Thread

Well you can work out 10,000 units * 900 bucks = $9 million in revenues absolute max. That's how "important" this particular card is for the bottom line.

There will be ~110 die candidates per wafer but yields are a total mystery. It won't be great but I doubt it will be like Fermi was. Let's be generous and say 80 good die per wafer. The wafer will cost ~$3000 or so.

So 80 x $900 = $72K worth of chips per wafer (assuming all are Titans, which obviously isn't the case). As you can see the silicon cost isn't the problem; the real costs are in designing the chip. $900 consumer GPUs simply can't exist without the professional market to back them up.
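To make the back-of-the-envelope math explicit, here's a minimal sketch of the numbers above; the die-candidate, yield, and wafer-cost figures are the rough guesses from this post, not confirmed values.

```python
# Back-of-the-envelope wafer economics; all inputs are the post's rough guesses.
units_sold     = 10_000        # rumored total Titan volume
card_price     = 900           # USD per card
die_candidates = 110           # ~candidates per wafer for a ~550 mm^2 die
good_dies      = 80            # generous yield assumption
wafer_cost     = 3_000         # assumed 28nm wafer cost, USD

max_revenue          = units_sold * card_price
card_value_per_wafer = good_dies * card_price     # if every good die sold as a $900 Titan
silicon_cost_per_die = wafer_cost / good_dies

print(f"Max revenue:          ${max_revenue / 1e6:.1f}M")           # ~$9.0M
print(f"Card value per wafer: ${card_value_per_wafer / 1e3:.0f}K")  # ~$72K
print(f"Silicon cost per die: ${silicon_cost_per_die:.0f}")         # ~$38
```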
 
So you've never once owned a cut-down GPU? :???:
Yes I have...sort of. A 32MB ATI Rage128 Pro, back in the late 90s. It had a 64-bit memory bus which crippled its fillrate, not that I really noticed all that much, as the CPU in that base system was an AMD K6, which wasn't famed for its great performance in 3D apps. That system totally choked on the then-flagship game Return to Castle Wolfenstein; you couldn't get even theoretically interactive framerates with lightmaps enabled. So sad... :p

Of course, the Rage128 wasn't exactly a GPU though, so maybe it doesn't count. Anyhow, I am not terribly impressed by hugely overpriced video cards, and even less impressed by overpriced cards that have cut-down hardware soldered to them. Of course, the rumored price could end up being wrong, and too high. Also, NV will hopefully release a version of the card with all 15 clusters intact, as it should be...
 
Well you can work out 10,000 units * 900 bucks = $9 million in revenues absolute max. That's how "important" this particular card is for the bottom line.

There will be ~110 die candidates per wafer but yields are a total mystery. It won't be great but I doubt it will be like Fermi was. Let's be generous and say 80 good die per wafer. The wafer will cost ~$3000 or so.

So 80 x $900 = $72K worth of chips per wafer (assuming all are Titans, which obviously isn't the case). As you can see the silicon cost isn't the problem; the real costs are in designing the chip. $900 consumer GPUs simply can't exist without the professional market to back them up.
Your yield is too generous for a chip that size, and the wafer cost is likely way too low for 28nm. The chips likely cost between $100 and $200. Yes, I know that's a big range.
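For what it's worth, here's a rough sketch of how a $100-$200 per-die range can fall out of wafer cost divided by good dies; the wafer costs and yields below are purely illustrative guesses, not known TSMC numbers.

```python
# cost per good die = wafer cost / (die candidates * yield)
# Wafer costs and yields below are illustrative guesses, not known TSMC numbers.
die_candidates = 110

for wafer_cost in (4_000, 5_000, 6_000):        # assumed 28nm wafer cost, USD
    for sellable_yield in (0.30, 0.45, 0.60):   # fraction usable as 14/15-SMX or better
        good_dies = die_candidates * sellable_yield
        print(f"wafer ${wafer_cost}, yield {sellable_yield:.0%}: "
              f"${wafer_cost / good_dies:.0f} per good die")
```

At the lower yields and higher wafer costs, the $100-$200 range is easy to hit.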
 
Your yield is too generous for a chip that size, and the wafer cost is likely way too low for 28nm. The chips likely cost between $100 and $200. Yes, I know that's a big range.
That said, 10,000 units sounds like a launch quantity rather than a production-lifetime volume, even for a boutique product. The world is a big place with lots of rich people.

And the yield of a 14-out-of-15-SMX product won't be that much different from one with 8 out of 8.
 
A chip that big will be lucky to get half the dies on a wafer to work.
300mm2 vs. 500mm2, with the latter having redundancy and the former not, but shipping in high volume.

I don't know the yield of perfect GK104s, but I doubt the 14/15 GK110 is much lower. I don't care if it's higher than 50% or not; if perfect GK104s can be made in high volume, then so can 14/15 GK110s.
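To illustrate why one spare SMX buys back so much yield, here's a toy Poisson defect model; the defect density, die areas, and the assumption that a defect inside an SMX can simply be fenced off are all illustrative, not real foundry data.

```python
import math

# Toy Poisson yield model -- purely illustrative, not real foundry data.
D0 = 0.4                      # assumed defects per cm^2 on mature-ish 28nm

def perfect_yield(area_cm2, d0=D0):
    """Probability that a die of the given area has zero defects."""
    return math.exp(-d0 * area_cm2)

# Hypothetical GK110-like die: ~5.5 cm^2, of which we assume ~60% is the 15 SMXs.
die_area   = 5.5
smx_area   = 0.6 * die_area
other_area = die_area - smx_area

# Sellable as a 14/15-SMX part: the non-SMX area must be clean, and at most
# one defect may land in the SMX region (that SMX gets disabled).
lam_smx = D0 * smx_area
p_other_clean = perfect_yield(other_area)
p_at_most_one_smx_defect = math.exp(-lam_smx) * (1 + lam_smx)  # lower bound on P(<=1 SMX affected)

print(f"Perfect 15/15 GK110-like die:   {perfect_yield(die_area):.0%}")                  # ~11%
print(f"Sellable 14/15-or-better die:   {p_other_clean * p_at_most_one_smx_defect:.0%}") # ~26%
print(f"Perfect ~3 cm^2 GK104-like die: {perfect_yield(3.0):.0%}")                       # ~30%
```

Under those made-up numbers the 14/15 GK110 ends up in the same ballpark as a perfect GK104, which is the point being made above.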
 
In October we shipped Quadro K5000, our first Kepler based Quadro product in limited volume. This year, we will launch Kepler for Quadro in volume top to bottom into the professional market. We expect Kepler for both Quadro and Tesla to do very well.

Looks like Nvidia will be launching the GK110-based Quadro K6000 soon, possibly in March at GTC 2013, judging from the Q4 results earnings call transcript. More revenue will be derived from GK110 GPUs.
 
A Sweclockers report mentions rumors that claim additional GK110 GeForce releases later in the year, although it doesn't actually say they are lower-end compared to Titan.

Which would put hypothetical GK110 salvage parts where, exactly, compared to performance SKUs based on a GK104 successor chip?
 
Looks like Nvidia will be launching the GK110-based Quadro K6000 soon, possibly in March at GTC 2013, judging from the Q4 results earnings call transcript. More revenue will be derived from GK110 GPUs.

Did anyone expect otherwise? Quadros/Teslas are typically high-margin/low-volume. Woe betide NV if they didn't produce and sell as many GK110 chips as they can for those markets.
 
Which would put hypothetical GK110 salvage parts where, exactly, compared to performance SKUs based on a GK104 successor chip?

The GK110 salvage part could be a GTX 780 Ti for the next round. Our favorite pal over at SA claimed a while back that GK114 was only going to be 15% faster than GK104. If so, GK110 Titan will remain the top halo product, albeit low volume, and a 2496-core, 13/15-SMX salvage chip would likely be just fast enough to throw a "Ti" on it.
 
Which would put hypothetical GK110 salvage parts where, exactly, compared to performance SKUs based on a GK104 successor chip?

The normal +10-20%?
The salvage part is expected Q2.

The GK110 salvage part could be a GTX 780 Ti for the next round. Our favorite pal over at SA claimed a while back that GK114 was only going to be 15% faster than GK104. If so, GK110 Titan will remain the top halo product, albeit low volume, and a 2496-core, 13/15-SMX salvage chip would likely be just fast enough to throw a "Ti" on it.

They're artificially keeping volume low; there is no "set" number of cards they are planning on making, they will just constantly dribble out supply.

So what would a GTX 780 be then? At that point they would rename Titan...
 
The GK110 salvage part could be a GTX 780 Ti for the next round. Our favorite pal over at SA claimed a while back that GK114 was only going to be 15% faster than GK104. If so, GK110 Titan will remain the top halo product, albeit low volume, and a 2496-core, 13/15-SMX salvage chip would likely be just fast enough to throw a "Ti" on it.

That's assuming there's even going to be a "GK114" and the entire refresh line won't run under GK20x codenames. Now, if they don't release a "Titan with cheese" (15 SMXs + slightly higher clocks), and assuming an estimated average performance difference of 50% between Titan and the 680, deduct those hypothetical 15% for the GK104 refresh chip and the remaining gap for a Titan salvage part is relatively small. If the performance difference between Titan and the 680 is smaller on average, the whole speculative math gets even worse.
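Putting rough numbers on that reasoning (all of them rumored or assumed figures from this thread, not benchmarks):

```python
# Speculative performance stack, normalized to GTX 680 = 1.00.
# All figures are rumors/assumptions from the thread, not benchmarks.
gtx680        = 1.00
titan         = 1.50                 # assumed ~50% average lead over the GTX 680
gk104_refresh = 1.15                 # SA rumor: GK114 only ~15% faster than GK104
titan_salvage = titan * (13 / 15)    # crude: scale with enabled SMX count (ignores clocks/bandwidth)

print(f"Titan vs. GK104 refresh:   +{(titan / gk104_refresh - 1) * 100:.0f}%")          # ~+30%
print(f"Salvage vs. GK104 refresh: +{(titan_salvage / gk104_refresh - 1) * 100:.0f}%")  # ~+13%
print(f"Titan vs. salvage:         +{(titan / titan_salvage - 1) * 100:.0f}%")          # ~+15%
```

Under those assumptions a 13/15-SMX salvage part only clears the refresh chip by low double digits, which is why the positioning gets awkward.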
 
Which would put hypothetical GK110 salvage parts where, exactly, compared to performance SKUs based on a GK104 successor chip?
That's assuming there's even going to be a "GK114" and the entire refresh line won't run under GK20x codenames. Now, if they don't release a "Titan with cheese" (15 SMXs + slightly higher clocks), and assuming an estimated average performance difference of 50% between Titan and the 680, deduct those hypothetical 15% for the GK104 refresh chip and the remaining gap for a Titan salvage part is relatively small. If the performance difference between Titan and the 680 is smaller on average, the whole speculative math gets even worse.
I realize that bandwidth isn't everything, but a cut-down 320-bit 6 Gbps GK110 would have higher bandwidth than even a 256-bit 7 Gbps GK104 successor (although it isn't by much), so in any case there should be a bandwidth advantage for the GK110 part. I'm thinking that NVIDIA may want to keep parts with the "Titan" moniker clear above the GTX 6xx/7xx series in terms of performance. Also, I wouldn't necessarily count out parts like a hypothetical GTX 780 (OEM) using a heavily cut-down GK110, while keeping GK114/GK204 for the retail GTX 780, similar to what happened with the GTX 465 and GTX 560 Ti (OEM).
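A quick sanity check of that bandwidth claim, using the hypothetical bus widths and memory speeds from the post:

```python
# Memory bandwidth in GB/s = bus width (bits) / 8 * effective data rate (GT/s).
# Bus widths and memory speeds are the hypothetical figures from the post.
def bandwidth_gb_s(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

gk110_cut  = bandwidth_gb_s(320, 6.0)   # hypothetical 320-bit, 6 Gbps GK110 salvage part
gk104_next = bandwidth_gb_s(256, 7.0)   # hypothetical 256-bit, 7 Gbps GK104 successor

print(f"Cut-down GK110:  {gk110_cut:.0f} GB/s")                       # 240 GB/s
print(f"GK104 successor: {gk104_next:.0f} GB/s")                      # 224 GB/s
print(f"Advantage:       {(gk110_cut / gk104_next - 1) * 100:.0f}%")  # ~7%
```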
 
I realize that bandwidth isn't everything, but a cut-down 320-bit 6 Gbps GK110 would have higher bandwidth than even a 256-bit 7 Gbps GK104 successor (although it isn't by much), so in any case there should be a bandwidth advantage for the GK110 part. I'm thinking that NVIDIA may want to keep parts with the "Titan" moniker clear above the GTX 6xx/7xx series in terms of performance. Also, I wouldn't necessarily count out parts like a hypothetical GTX 780 (OEM) using a heavily cut-down GK110, while keeping GK114/GK204 for the retail GTX 780, similar to what happened with the GTX 465 and GTX 560 Ti (OEM).

That's a 7% difference in bandwidth, and no, I don't believe the GTX 680 successor will have GDDR5 clocked that high.

The real question is whether the GTX 680 successor will be able to battle Curacao as well as GK104 does against Tahiti. If the answer is yes, and neither IHV intends to seriously reduce 28nm prices any further, my question would be why NV would invest in extensive GK110 wafer runs if they can get the revenue they need from GK204/114 or whatever it's going to be called instead. It'll still be a far smaller chip with completely different yields and manufacturing costs.

If they go for limited GK110 wafer runs throughout its lifetime, they can always dump the salvage parts into the Quadro/workstation market with ease (which they've always done, though with limited wafer runs you also get far fewer salvage parts, and in the case of Quadros those sell at huge margins).

I'm not saying it'll turn out like that; I'm merely exploring scenarios, since I honestly expected NV to be able to produce GK110 in far more decent quantities roughly a year after its tape-out. Are you sure they never initially planned to release GK110 on the desktop within 2012? Since they obviously changed their mind last year, what exactly speaks against a slightly modified strategy for this year, if the odds are favourable enough to support such a scenario?
 
The real question is whether the GTX 680 successor will be able to battle Curacao as well as GK104 does against Tahiti. If the answer is yes, and neither IHV intends to seriously reduce 28nm prices any further, my question would be why NV would invest in extensive GK110 wafer runs if they can get the revenue they need from GK204/114 or whatever it's going to be called instead. It'll still be a far smaller chip with completely different yields and manufacturing costs.
I'm not sure that the answer would be yes. Before I say anything else, I'll state my assumptions about the GK11x/GK20x and Curacao/etc.:
  1. Similar to GF114, GK114/GK204 will have the same CC count and bus width as GK104 but with higher clocks on core and/or memory;
  2. Following last year's rumors and speculation (which may not be too accurate since they got the codenames wrong), Curacao will have 2560 SPs, a 384-bit bus, and 48 ROPs, with core and memory clocks not going down from Tahiti.
Last year we had Tahiti ≈ GK104 > Pitcairn ≈ GK106 (from 3DCenter's Perf. Index on the highest-end part of each chip). Since then, Tahiti has had larger performance increases from drivers than GK104 has, and it seems to be comparatively better at higher resolutions than GK104, which probably puts it at an advantage in the future. A GK114/GK204 as described above doesn't exactly solve GK104's lack of memory bandwidth, especially if the core clock goes up (I doubt core clock will go up much for this reason). Curacao on the other hand has more room to go upwards (more SPs and wider bus), and from what I've read, the 48 ROPs should help its performance considerably.

So unlike in 2012, I think in 2013/2014 the chips will be clearly and maybe evenly staggered, with GK110 > Curacao > GK114/GK204 > Hainan > GK116/GK206, assuming each chip exists (and if the 1792 SP rumor(s) for Hainan is correct). One possible reason for a GeForce GK110 might be so NVIDIA can claim the fastest single-GPU title, and they probably realized some time ago that they got lucky with GK104 vs. Tahiti, and that a year or so later, GK104's successor probably wouldn't be able to match or beat Tahiti's successor.
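For reference, here are the theoretical throughput numbers that fall out of those assumptions; the Curacao and GK114/GK204 entries are the rumored/hypothetical specs above (with clock bumps I've guessed at), the shipping-part figures are approximate, and raw FLOPS aren't directly comparable between the two architectures.

```python
# Theoretical throughput under the assumptions listed above.
# Rumored/hypothetical parts are marked; shipping-part figures are approximate.
def gflops(alus, core_mhz):
    return alus * 2 * core_mhz / 1000          # 2 FLOPs per ALU per clock (FMA)

def gb_s(bus_bits, mem_gtps):
    return bus_bits / 8 * mem_gtps

chips = {
    #                      ALUs, core MHz, bus bits, mem GT/s
    "Tahiti (7970 GHz)":   (2048, 1050, 384, 6.0),
    "Curacao (rumored)":   (2560, 1050, 384, 6.0),   # rumored 2560 SPs, 384-bit
    "GK104 (GTX 680)":     (1536, 1058, 256, 6.0),
    "GK114/204 (guessed)": (1536, 1150, 256, 6.5),   # same units, guessed clock bumps
    "GK110 Titan (14SMX)": (2688,  876, 384, 6.0),
}

for name, (alus, core, bus, mem) in chips.items():
    print(f"{name:21s} {gflops(alus, core):5.0f} GFLOPS  {gb_s(bus, mem):4.0f} GB/s")
```

Raw numbers aside, the point is that Curacao has obvious headroom over Tahiti, while a same-width GK114/GK204 mostly inherits GK104's bandwidth situation.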

If they go for limited GK110 wafer runs throughout its lifetime, they can always dump the salvage parts into the Quadro/workstation market with ease (which they've always done, though with limited wafer runs you also get far fewer salvage parts, and in the case of Quadros those sell at huge margins).

I'm not saying it'll turn out like that; I'm merely exploring scenarios, since I honestly expected NV to be able to produce GK110 in far more decent quantities roughly a year after its tape-out. Are you sure they never initially planned to release GK110 on the desktop within 2012? Since they obviously changed their mind last year, what exactly speaks against a slightly modified strategy for this year, if the odds are favourable enough to support such a scenario?
I forgot about Quadro as a possible dumping ground for GK110, so they could go that route instead of using an OEM part. I was under the impression that GK110 was planned to go into Tesla first, and given that the release date of K20 was announced as Q4 2012, I assumed there was no real chance of a GeForce GK110 anytime in 2012.

If GK114/GK204 could compete well with Curacao, I doubt they would release GK110 for the desktop unless for some reason they still had leftover parts after the Quadros. Maybe using GK110s is cheaper for them than expanding the GK104 to, say, 1920 CCs and a 384-bit bus to match/beat Tahiti and Curacao.
 
That's a 7% difference in bandwidth, and no, I don't believe the GTX 680 successor will have GDDR5 clocked that high.

The real question is whether the GTX 680 successor will be able to battle Curacao as well as GK104 does against Tahiti. If the answer is yes, and neither IHV intends to seriously reduce 28nm prices any further, my question would be why NV would invest in extensive GK110 wafer runs if they can get the revenue they need from GK204/114 or whatever it's going to be called instead. It'll still be a far smaller chip with completely different yields and manufacturing costs.

If they go for limited GK110 wafer runs throughout its lifetime, they can always dump the salvage parts into the Quadro/workstation market with ease (which they've always done, though with limited wafer runs you also get far fewer salvage parts, and in the case of Quadros those sell at huge margins).

Considering that GK110 can command higher prices in the workstation and professional markets, I'm speculating that only a limited number will be made available for desktop use, just enough to definitively hold the highest-performing single-GPU video card crown.

A high price will ensure that the part remains low volume while also not eroding margins as significantly. In fact, even if it doesn't keep demand low, it still works in Nvidia's favor by allowing it and its partners to raise the average selling price of the GPU and the associated video card.

I'd speculate that Nvidia wholly intends to keep GK110 a very low-volume part in the consumer desktop market (even lower than top-end enthusiast cards have historically been), while GK104 services the high-volume consumer desktop market (performance to enthusiast).

Regards,
SB
 
That's a 7% difference in bandwidth, and no, I don't believe the GTX 680 successor will have GDDR5 clocked that high.

The real question is whether the GTX 680 successor will be able to battle Curacao as well as GK104 does against Tahiti. If the answer is yes, and neither IHV intends to seriously reduce 28nm prices any further, my question would be why NV would invest in extensive GK110 wafer runs if they can get the revenue they need from GK204/114 or whatever it's going to be called instead. It'll still be a far smaller chip with completely different yields and manufacturing costs.

If they go for limited GK110 wafer runs throughout its lifetime, they can always dump the salvage parts into the Quadro/workstation market with ease (which they've always done, though with limited wafer runs you also get far fewer salvage parts, and in the case of Quadros those sell at huge margins).

I'm not saying it'll turn out like that; I'm merely exploring scenarios, since I honestly expected NV to be able to produce GK110 in far more decent quantities roughly a year after its tape-out. Are you sure they never initially planned to release GK110 on the desktop within 2012? Since they obviously changed their mind last year, what exactly speaks against a slightly modified strategy for this year, if the odds are favourable enough to support such a scenario?

I'm sure NV is hoping that a GK114 (204) can reach rough parity with Curacao; that way they can continue to dribble out $800-1000 GK110 Titan cards for uber-enthusiasts and still claim the fastest GPU. How good AMD's Curacao turns out to be will dictate how far NV's hand is forced on GK110 quantities and pricing.

I still remember when NV used to sell 500mm2 dies for dirt cheap. It must be a dream come true for them to actually be able to sell such a huge chip and command a huge price.
 