NVIDIA Kepler speculation thread

I think they pushed the chip to its limits in order to make it look a lot better than it was, but that doesn't come cheap.

On the flipside they got to name 6 different cards and 3 different chips "560" and made more money out of the confusion. For me the actual 560 Ti was probably a loss leader.

What are you talking about, "pushed to its limits"?

GF114 was one of the best overclockers of the last generation. Considering that GF110 hit 772MHz at a much larger die size and still overclocked pretty well, it was easy for GF114 to hit 840MHz.

Half the GF114 cards were sold as overclocked editions, and some shipped at about 20% over stock clocks.
 
Sub-$150 cards sell far more than the rest combined. This really shouldn't be a surprise to anyone, as it's the same as with CPUs.

As far as I can tell, the 560 Ti dropped to $180 at its lowest and was generally over $200.

If you meant the 560 was the best-selling Fermi "card", however, then sure... but that's only because there were 6 of them spanning 3 different chips. :rolleyes:

You are right sir; I stand corrected. I should have clarified my argument to say that it was primarily based on add-in discrete desktop GPUs. And yes, I was encompassing all GF104/GF114-based cards in saying that GF104/GF114 was Nvidia's best-selling die (at the AIB level) on 40nm.
 
My understanding from reading the vr-zone article is that there are two clocks like in Fermi, but they aren't *always* bound to a 2:1 ratio. The GK104 core (uncore) clock ranges from 300MHz to 950MHz, defaulting to 705MHz, and the shader clock domain can supposedly reach as high as 1411MHz. The core can clock independently of the 2:1 shader:core ratio and "Turbo" from ~700MHz to 950MHz when needed and the TDP warrants it. GDDR5 is at 6000MHz QDR (higher-speed memory than on 7970 boards).

A GT640M notebook's Kepler clocks (image attachment)


http://vr-zone.com/articles/nvidia-...r-dynamic-clocking-2-and-4gb-gddr5/15148.html
http://www.forum-3dcenter.org/vbulletin/showpost.php?p=9202420&postcount=5723
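
To make the rumored scheme concrete, here's a toy sketch (my own illustration in Python, nothing from NVIDIA or vr-zone) of how a TDP-bounded turbo might step the core clock between the 705MHz default and the 950MHz ceiling while the shader domain sits on its own clock. The clock figures are from the rumor; the TDP value, step size and power model are invented for the example.

# Toy model of the rumored GK104 clocking scheme -- illustration only.
# Clock figures come from the vr-zone rumor; TDP, step size and the
# power model are invented for the sake of the example.
BASE_CORE_MHZ = 705
MAX_TURBO_MHZ = 950
MAX_SHADER_MHZ = 1411
STEP_MHZ = 13              # arbitrary step size for this sketch
TDP_WATTS = 195            # hypothetical board TDP

def estimated_power(core_mhz, shader_mhz):
    # Crude stand-in: fixed baseline plus a linear term per clock domain.
    return 40 + 0.10 * core_mhz + 0.05 * shader_mhz

def turbo_core_clock(shader_mhz):
    # Raise the core clock step by step while the (fake) power estimate
    # stays under TDP; note there is no 2:1 shader:core ratio enforced.
    core = BASE_CORE_MHZ
    while (core + STEP_MHZ <= MAX_TURBO_MHZ
           and estimated_power(core + STEP_MHZ, shader_mhz) <= TDP_WATTS):
        core += STEP_MHZ
    return core

print(turbo_core_clock(MAX_SHADER_MHZ))   # how far turbo gets at full shader clock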
 
I think they pushed the chip to its limits in order to make it look a lot better than it was, but that doesn't come cheap.
Why don't you first explain what you mean by "making it look a lot better than it was" and how "that doesn't come cheap"? All while keeping in mind that GF114 is still produced on an extremely mature 40nm process, which means low wafer cost, very low process spread and low defect densities, no surprises basically, and a very high yield for the top bin.

On the flipside they got to name 6 different cards and 3 different chips "560" and made more money out of the confusion. For me the actual 560 Ti was probably a loss leader.
You still haven't replied to my first question: what do you mean by 'loss'? I hope you're not saying that they are selling a die for less than what it costs to produce, because that would be more than ridiculous.

I also challenge you to find a single instance in the Nvidia and AMD conference calls that says something like "our gross margins in the consumer space dropped due to the product mix shifting towards higher-end GPUs." (No point challenging you to find the opposite statement: there are plenty of them.) News flash: it is much easier to make a good chunk of money on a $200 card than on a $50 card.

Finally: can you explain to me the concept of "loss leader" in the GPU space? I'm really curious about that. Does Nvidia sell ink cartridges to print out Nvidia logos? Did I miss out on this thriving market of Nvidia-branded razor blades? Will the sale of a GTX 560 encourage the buyer to buy a companion GTX 550? The GTX 560 products are all high runners: exactly how do you figure Nvidia will be able to recoup the losses made on the initial selling price?

(There was once a Jon Peddie or Mercury report on Ars Technica that broke down the costs and volumes of Nvidia and AMD GPUs. Anyone able to find it again?)
 
I wouldn't place any bets on that one if I were you.

Oh yeah... I wouldn't either ;)

It's GK110 and it's not going to appear all that soon it seems, at least not for the desktop.

True... the first lot of GK110 has apparently been booked for a supercomputer.

And it's not needed at this time as the GK104 will hold the high-end for now.

The definition of high end is purely about price at the moment. Going by past convention (since RV770), AMD's top chip has been upper midrange; the high end has been made up of Nvidia's top dog and the dual-GPU cards from both parties. The current positioning of Tahiti is an aberration, which will be corrected in time. Besides, do you think AMD won't introduce a refresh of Tahiti to counter GK104?

Nvidia will stockpile GK110 for HPC/Tesla, and when/if AMD releases a dual-GPU card, the single-GPU GK110 "GTX 690" will be there to regain the high end.

There's no question of if, AMD is most definitely going to release a dual GPU card based on Tahiti. And as indicated above, Nvidia is going to release a dual GPU card based on GK104 as well.

It's 5% behind 7970.

What interests me most though is the die size. Latest info suggests it's just a bit under 300mm^2.

Past info always suggested it, you just had to know where to look ;)
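
For what it's worth, a quick back-of-the-envelope using the standard gross-dies-per-wafer approximation, with ~295mm^2 as a stand-in for "a bit under 300mm^2" (this ignores yield entirely and says nothing about Kepler specifically):

import math

# Gross dies per 300mm wafer for a die "just a bit under 300mm^2", using the
# standard approximation pi*(d/2)^2/S - pi*d/sqrt(2*S). Ignores yield, scribe
# lines and edge exclusion, so treat it as a loose upper bound.
wafer_diameter_mm = 300.0
die_area_mm2 = 295.0      # stand-in for "a bit under 300mm^2"

gross_dies = (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
              - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
print(round(gross_dies))  # roughly 200 candidate dies per wafer, before yield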
 
Why don't you first explain what you mean by "making it look a lot better than it was" and how "that doesn't come cheap"? All while keeping in mind that GF114 is still produced on an extremely mature 40nm process, which means low wafer cost, very low process spread and low defect densities, no surprises basically, and a very high yield for the top bin.

Looking on newegg, half of them (13) have dual fan coolers and 9/26 are clocked over 900 MHz. That does not come cheap.

"Making it look a lot better than it was" is easy enough, just check the reviews that had overclocked cards up against AMD's stock cards. We even had toms increase the clocks on a stock card in a review vs Barts "because so many of them on newegg had overclocks."

Why didn't they just go with higher clocks?

You still haven't replied to my first question: what do you mean by 'loss'? I hope you're not saying that they are selling a die for less than what it costs to produce, because that would be more than ridiculous.
Why would that be ridiculous? Check out AMD's graphics segment revenues and you'll see they've been struggling to break even most of the year. This is with generally smaller chips. Why would Nvidia be any different? There is not a lot of money in consumer graphics cards when there is an ongoing price war.

If you consider the 6950 is selling for a bit more than the 560, and that AMD has barely made any profit in graphics most of the year, by your reckoning that must mean that they are losing money on all of their bottom end cards? I mean if you really believe that these $200+ cards are making them a fortune then there must be a loss elsewhere right?

Finally: can you explain to me the concept of "loss leader" in the GPU space? I'm really curious about that. Does Nvidia sell ink cartridges to print out Nvidia logos? Did I miss out on this thriving market of Nvidia-branded razor blades? Will the sale of a GTX 560 encourage the buyer to buy a companion GTX 550? The GTX 560 products are all high runners: exactly how do you figure Nvidia will be able to recoup the losses made on the initial selling price?
How many people do you think buy 560s believing they are 560 Tis? How many people do you think buy 560 Tis with single-fan coolers and cheap components thinking that they all hit 950 MHz easily?
 
Do we actually know if the Samaritan (UE3) demo was running on GK104? I see the Kepler bit, but people seem to be extrapolating the GK104 part - if this single card replaced a Tri-SLI GTX 580, I don't see that very likely being the 'upper-mid-range' GPU, I see that more likely being the top part, or GK110.
 
We are getting an increasing amount of agreement that there are 1500-ish cores and hot clocks. We're missing something important about those cores -- that's a huge growth in count, so something must have changed, no?

Exactly what I was thinking. Why are we getting what sounds like a 30-50% performance improvement from something that has 3x the shaders?
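
Putting rough numbers on it, on paper the gap is even bigger than that. Assuming the rumored 1536 cores at a ~1411MHz hot clock against the GTX 580's 512 cores at 1544MHz (2 single-precision FLOPs per core per clock in both cases):

# Theoretical single-precision throughput: cores * shader clock * 2 FLOPs (FMA).
gtx580_gflops = 512 * 1544 * 2 / 1000      # ~1581 GFLOPS (actual GTX 580 specs)
gk104_gflops = 1536 * 1411 * 2 / 1000      # ~4335 GFLOPS (rumored GK104 specs)

print(round(gk104_gflops / gtx580_gflops, 2))   # ~2.74x on paper vs the ~1.3-1.5x rumored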
 
Do we actually know if the Samaritan (UE3) demo was running on GK104? I see the Kepler bit, but people seem to be extrapolating the GK104 part - if this single card replaced a Tri-SLI GTX 580, I don't see that very likely being the 'upper-mid-range' GPU, I see that more likely being the top part, or GK110.

I think that while it wasn't confirmed to be running on GK104, it probably was, but at a lower resolution than what the GTX 580 setup was doing and with FXAA (instead of MSAA).
 
Because what dnavas wrote is not true? :???:

Possibly. Extremetech seems pretty confident on those specs though:

The GK104/GTX 680 will have 1536 CUDA cores and a 256-bit memory controller connected to 2GB of GDDR5 memory (4GB will be an option). The core will sit at 705MHz, while the shaders will be clocked at 1.4GHz. The memory will be clocked at 2GHz QDR (6GHz effective) and should be capable of 192GB/s. There’s no information on the GK107, but the GK110 is expected to have 2304 shader cores. It is rumored that the GTX 680 will be slightly faster than the HD 7970 in some tests, and slightly slower in others.

More here:

http://www.extremetech.com/computin...based-dynamic-turbo-boost-arriving-this-month
 
2GHz at Q(uad)DR == 6 GHz? This is quite curious...
The "2 GHz" QDR might be a typo, as all the other info in that report points to 1.5 GHz QDR: 6 GHz effective, 256-bit bus, 192 GB/s bandwidth. (I'm guessing there's no such thing as T(riple)DR memory?)

EDIT: Or maybe Dynamic clock works with memory too?
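
A quick sanity check, just arithmetic on the numbers quoted in the article:

# Sanity check on the quoted memory figures.
bus_width_bits = 256
effective_gbps_per_pin = 6.0               # "6GHz effective"

bandwidth_gb_s = bus_width_bits / 8 * effective_gbps_per_pin
print(bandwidth_gb_s)                      # 192.0 GB/s, matches the article

base_clock_if_qdr_ghz = effective_gbps_per_pin / 4
print(base_clock_if_qdr_ghz)               # 1.5 GHz, so "2GHz QDR" looks like a typo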
 
Looking on newegg, half of them (13) have dual fan coolers and 9/26 are clocked over 900 MHz. That does not come cheap.
The clocks are irrelevant. A fan is dirt cheap.

"Making it look a lot better than it was" is easy enough, just check the reviews that had overclocked cards up against AMD's stock cards. We even had toms increase the clocks on a stock card in a review vs Barts "because so many of them on newegg had overclocks."
I have no idea how this kind of 'reasoning' leads to the conclusion that GF114 is sold with negative gross margins.

Why didn't they just go with higher clocks?
Why didn't the 7970 go with higher clocks?

Why would that be ridiculous? Check out AMD's graphics segment revenues and you'll see they've been struggling to break even most of the year. This is with generally smaller chips. Why would Nvidia be any different? There is not a lot of money in consumer graphics cards when there is an ongoing price war.
When people don't use right terminology, it's often an indication that they don't really know what they are talking about either, and it's a drag to deal with the imprecision and resulting confusion.

I've worked for many, many years for fabless chip companies. Pretty much without exception, they never made a profit. Yet, also without exception, gross margins were in the 40% range. Some of the CEOs may not have been brilliant, but they were not total idiots either: nobody deliberately sells silicon for a lower price than needed.

The reason companies (or, in this case, a division) don't make a profit is the huge NRE involved in bringing a chip into existence. But once it's there, it's pretty cheap to produce, and you basically hope that you sell enough of them to recoup the NRE.
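
To put some entirely made-up numbers on that, just to show the shape of the problem (none of these figures are Nvidia's or anyone else's):

# Hypothetical illustration of gross margin vs. NRE -- every number is invented.
nre = 100_000_000           # design, masks, tape-outs, software, launch costs
asp = 120                   # average selling price per chip/kit
gross_margin = 0.40         # in line with the ~40% GMs mentioned above

gross_profit_per_unit = asp * gross_margin
units_to_break_even = nre / gross_profit_per_unit
print(int(units_to_break_even))   # ~2.08 million units before the NRE is recouped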

It is funny that you're using GF114 as your example, because SemiAccurate, of all places, did your exercise for GF104. This was discussed at length on Beyond3D, which is why it's better to move this discussion to PM.

First observation: even SemiAccurate admits that you can break even on a GF104-based card. GF114 is essentially the same die, so the same thing applies.

But we are now 20 months later: yields on 40nm are amazing and wafer cost has gone down. If you factor that in, even by their numbers, a GF114 die can be sold very profitably.

But here's the kicker: semiaccurate got it wrong, as usual. I've been involved in productization of consumer electronics gadgets:
- their PCB cost is a riot. A Chinese manufacturer can sell you a similar-size 10-layer PCB for $5, not $10. Nvidia can probably get it quite a bit lower.
- GDDR5 RAM does not cost $24. It's probably $15 or less.
- a dual fan heatsink? My guess is $7.
- packaging and accessories $10? Are you kidding me? How about $3?

If you ever have the misfortune to go to Shenzhen, you should go to the SEG Electronics Market and the surrounding shops, all on the same street. I did. It's where thousands of Chinese manufacturers sell their wares: one will only sell HDMI cables ($0.50?), another only fans, etc.

It's all dirt cheap, in volume.

Do the exercise: add the numbers. See how misguided your premise is.
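
Here's that exercise with the corrected figures above, plus my own guesses for the die and the remaining board components (clearly labeled, since neither of us has Nvidia's real numbers), against a ~$200 street price:

# Rough BOM sketch: the first four figures are the ones above; the die and
# VRM/misc numbers are my own guesses, not anything from Nvidia or its AIBs.
bom_usd = {
    "pcb_10_layer": 5,
    "gddr5": 15,
    "dual_fan_heatsink": 7,
    "packaging_and_accessories": 3,
    "gpu_die_and_package": 35,       # guess
    "vrm_and_misc_components": 20,   # guess
}

total_bom = sum(bom_usd.values())
street_price = 200
print(total_bom, street_price - total_bom)   # ~$85 BOM, leaving plenty of room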

If you consider the 6950 is selling for a bit more than the 560, and that AMD has barely made any profit in graphics most of the year, by your reckoning that must mean that they are losing money on all of their bottom end cards? I mean if you really believe that these $200+ cards are making them a fortune then there must be a loss elsewhere right?
The loss is in NRE, marketing, buildings, whatever. It's all in the open: just read their 10-K statement. What's without question is that the gross margins on high-end silicon are much higher than for low-end silicon. This has been stated again and again in conference calls.

How many people do you think buy 560s believing they are 560 Tis?
Not many.

How many people do you think buy 560 Tis with single-fan coolers and cheap components thinking that they all hit 950 MHz easily?
How many people do you think overclock in the first place?
 
...The reason companies (or, in this case, a division) don't make a profit is the huge NRE involved in bringing a chip into existence. But once it's there, it's pretty cheap to produce, and you basically hope that you sell enough of them to recoup the NRE...

If so, :???: why do they hurry to EOL products that could be produced for a few more months at lower cost and lower prices (with discounts too), and thus at higher volume?
For example: at the moment there is a single R6970 listed, at the ridiculous price of $410. :???:
 
The clocks are irrelevant. A fan is dirt cheap.

Fans, VRMs, PCBs -- it all adds up. If it didn't, every card would have the best available.

When people don't use right terminology, it's often an indication that they don't really know what they are talking about either, and it's a drag to deal with the imprecision and resulting confusion.
You seem to have forgotten that I was talking about profit (which I mentioned 3 times, btw), while you started talking about gross margin.

...<snip>
How do you explain AMD making a tiny profit on graphics all year with their perf/mm2 advantage, yet Nvidia making ~$500m? Nvidia's professional sector is shoring up their losses in consumer and has been for years - it really is that simple. What else could it be? These price wars might be great for us but they are not working for either company.
 