Anyhow, it'd be really disappointing if a $900 GPU were crippled from the outset with disabled cores and such. I won't buy something like that, that's for sure.
So you've never once owned a cut-down GPU?
Yes I have... sort of: a 32MB ATI Rage128 Pro, back in the late 90s. It had a 64-bit memory bus which crippled its fillrate, not that I really noticed all that much, as the CPU in that system was an AMD K6, which wasn't famed for its great performance in 3D apps. That system totally choked on the then-flagship game Return to Castle Wolfenstein; you couldn't get even theoretically interactive framerates with lightmaps enabled. So sad...
Well, you can work out 10,000 units * 900 bucks = $9 million in revenue, absolute max. That's how "important" this particular card is for the bottom line.
There will be ~110 die candidates per wafer but yields are a total mystery. It won't be great but I doubt it will be like Fermi was. Let's be generous and say 80 good die per wafer. The wafer will cost ~$3000 or so.
So 80 x $900 = $72K worth of chips per wafer (assuming all are sold as Titans, which obviously isn't the case). As you can see, the silicon cost isn't the problem; the real cost is in designing the chip. $900 consumer GPUs simply can't exist without the professional market to back them up.
That said, 10,000 units sounds like a launch quantity rather than a lifetime production volume, even for a boutique product. The world is a big place with lots of rich people.

Your yield is too generous for a chip that size, and the wafer cost is likely way too low for 28nm. The chips likely cost between $100 and $200. Yes, I know that's a big range.
300 mm² vs. 500 mm², with the latter having redundancy and the former not, but high volume.

A chip that big will be lucky to get half the dies on a wafer to work.
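The wafer math being tossed around here can be sanity-checked with a quick sketch. The ~561 mm² die size, ~$3000 wafer cost, and 80 good die per wafer are the thread's guesses, not confirmed figures:

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Common approximation: usable wafer area divided by die area,
    # minus a correction for partial dies lost at the wafer edge.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Assumed inputs from the thread (not confirmed numbers).
candidates = gross_die_per_wafer(300, 561)  # ~97 with this approximation;
                                            # the thread guesses ~110
wafer_cost = 3000                           # guessed 28nm wafer price
good_die = 80                               # optimistic yield guess

print(candidates)
print(wafer_cost / good_die)   # ~$37.50 raw silicon cost per good die
print(good_die * 900)          # $72,000 of $900 cards per wafer
```

The exact die-candidate count depends on which edge-loss approximation you use, which is presumably why estimates in the thread range from ~100 to ~110.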
In October we shipped Quadro K5000, our first Kepler based Quadro product in limited volume. This year, we will launch Kepler for Quadro in volume top to bottom into the professional market. We expect Kepler for both Quadro and Tesla to do very well.
A SweClockers report mentions rumors claiming additional GK110 GeForce releases later in the year, although it doesn't actually say whether they would be lower-end than Titan.
From the Q4 results earnings call transcript, it looks like Nvidia will be launching the GK110-based Quadro K6000 soon, possibly at GTC 2013 in March. More revenue will be derived from GK110 GPUs.
Which would put hypothetical GK110 salvage parts where exactly, compared to performance SKUs based on a GK104 successor chip?
The GK110 salvage part could be a GTX 780 Ti for the next round. Our favorite pal over at SA claimed a while back that GK114 would only be 15% faster than GK104. If so, the GK110 Titan will remain the top halo product, albeit low volume, and a 2496-core, 13-of-15-SMX salvage chip would likely be just fast enough to throw a "Ti" on it.
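The core counts in that speculation follow directly from Kepler's SMX arithmetic (192 CUDA cores per SMX; the 13-SMX salvage configuration is the guess above, not a confirmed SKU):

```python
CORES_PER_SMX = 192  # Kepler: 192 CUDA cores per SMX

full_gk110 = 15 * CORES_PER_SMX  # 2880: a fully enabled GK110 die
titan      = 14 * CORES_PER_SMX  # 2688: Titan ships with 14 of 15 SMXs
salvage    = 13 * CORES_PER_SMX  # 2496: the hypothetical 780 Ti salvage part

print(full_gk110, titan, salvage)  # 2880 2688 2496
```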
I realize that bandwidth isn't everything, but a cut-down 320-bit 6 Gbps GK110 would have higher bandwidth than even a 256-bit 7 Gbps GK104 successor (although not by much), so in any case there should be a bandwidth advantage for the GK110 part. I'm thinking that NVIDIA may want to keep parts with the "Titan" moniker clearly above the GTX 6xx/7xx series in terms of performance. Also, I wouldn't necessarily count out parts like a hypothetical GTX 780 (OEM) using a heavily cut-down GK110, while keeping GK114/GK204 for the retail GTX 780, similar to what happened with the GTX 465 and GTX 560 Ti (OEM).

That's if there's even going to be a "GK114" and the entire refresh line won't run under GK20x codenames. Now if they don't release a Titan with cheese (15 SMXs + slightly higher clocks), and assuming an estimated average performance difference of 50% between Titan and the 680, deduct those hypothetical 15% for the GK104 refresh chip and the remaining difference for a Titan salvage part is relatively small. If the performance difference between Titan and the 680 is smaller on average, the whole speculative math gets even worse.
The real question is whether the GTX 680's successor will be able to battle Curacao as well as GK104 does against Tahiti. If the answer should be yes, and both IHVs don't intend to seriously reduce 28nm prices any further, my question would be why NV would invest in extensive GK110 wafer runs if they can yield the revenues they need with GK204/GK114 or whatever it's going to be called instead. It'll still be a way smaller chip with completely different yields and manufacturing costs.

I'm not sure that the answer would be yes. Before I say anything else, I'll state my assumptions about the GK11x/GK20x and Curacao/etc.
If they'd go for limited GK110 wafer runs throughout its lifetime, they can always dump the salvage parts into the Quadro/workstation market with ease (which they always did, but with limited wafer runs you also gain way fewer salvage parts, and in the case of Quadros at huge margins).

I forgot about Quadro as a possible dumping ground for GK110, so they could go that route instead of using an OEM part as the dumping ground. I was under the impression that GK110 was planned to go into Tesla first, and given that the release date of K20 was announced as Q4 2012, I assumed there was no real chance of a GeForce GK110 anytime in 2012.
I'm not saying it'll turn out like that; I'm merely exploring scenarios, since I honestly expected NV to be able to produce GK110 in far more decent quantities roughly a year after its tape-out. Are you sure they never initially planned a desktop GK110 release within 2012? Since they obviously changed their mind last year, what exactly speaks against a slightly modified strategy for this year, if the odds are favourable enough to support such a scenario?
That's a 7% difference in bandwidth, and no, I don't believe the GTX 680's successor will have GDDR5 clocked that high.
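The 7% figure checks out if you run the numbers (both the 320-bit/6 Gbps cut-down GK110 and the 256-bit/7 Gbps GK104 successor are the hypothetical configurations discussed above, not announced products):

```python
def bandwidth_gbps(bus_width_bits, data_rate_gbps):
    # Memory bandwidth in GB/s: bus width in bytes times per-pin data rate.
    return bus_width_bits / 8 * data_rate_gbps

gk110_cut  = bandwidth_gbps(320, 6)  # 240.0 GB/s
gk104_next = bandwidth_gbps(256, 7)  # 224.0 GB/s

print(gk110_cut, gk104_next)
print(round((gk110_cut / gk104_next - 1) * 100))  # 7 (percent)
```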