Nvidia BigK GK110 Kepler Speculation Thread

There could be yield problems, but it depends on what absolute values we're looking at. You can have bad yields for a desktop release ($500-600 per card) but still sell the chips in the professional market ($2000+ per card). After all, the GPU only accounts for about 25% of the card's sale price in the desktop segment. GF110 was about $120 and the card was sold for $500 - and that is including margins for the AIBs etc.
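Just to put rough numbers on that, here's a back-of-the-envelope sketch. Only the $500 desktop / $2000+ professional card prices come from the point above; the wafer cost and dies-per-wafer figures are assumptions I'm making up purely for illustration.

```python
# Toy calculation: how much of a card's sale price the GPU die eats up as yields drop.
# Wafer cost and dies per wafer are made-up illustrative numbers, not real data.
wafer_cost = 5000.0      # assumed cost of a 28nm wafer (not a confirmed figure)
dies_per_wafer = 100     # assumed candidate GK110-sized dies per wafer
card_price = {"desktop": 500.0, "professional": 2000.0}

for yield_rate in (0.6, 0.3, 0.1):
    good_die_cost = wafer_cost / (dies_per_wafer * yield_rate)
    desktop_share = good_die_cost / card_price["desktop"]
    pro_share = good_die_cost / card_price["professional"]
    print(f"yield {yield_rate:.0%}: die ~${good_die_cost:.0f} "
          f"-> {desktop_share:.0%} of a desktop card, {pro_share:.0%} of a pro card")
```

Even at the pessimistic end of that toy model, the die cost stays a small slice of a $2000+ professional card while it would wreck desktop margins, which is the whole point.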
 
The idiot writes:
... you can read a lot about yields into the choices Nvidia made for the K20. First is that they had to fuse off two full SMXs, so granularity of design and repair-ability is still not considered a good thing on one side of the San Tomas Expressway. That is something we can’t explain. Next up is that the yields are horrific.
I find it very amusing that the author is unable to detect granularity in a design with 15 identical units. (My sense of humor leaves much to be desired.)

Never mind that we don't know what kind of RAM repair functionality is in place: with 15 units you pretty much have all that's needed. The SMX units are large. If you disable 2 out of 15, you should be able to get tons of usable dies: there is no reason to assume a GK110 has a worse defect rate than GK104, and the latter has been produced without disabled SMXs for a long time now.
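To put a rough number on that intuition, here's a minimal sketch assuming each of the 15 SMXs fails independently with some defect probability p - a toy binomial model of my own, ignoring defects in the uncore and memory interface:

```python
from math import comb

def usable_fraction(p_defect, total_smx=15, max_disabled=2):
    """Fraction of dies with at most `max_disabled` defective SMXs
    (binomial model: each SMX independently defective with p_defect)."""
    return sum(
        comb(total_smx, k) * p_defect**k * (1 - p_defect)**(total_smx - k)
        for k in range(max_disabled + 1)
    )

for p in (0.05, 0.10, 0.20):
    full = usable_fraction(p, max_disabled=0)   # good enough for a full 15-SMX part
    k20 = usable_fraction(p, max_disabled=2)    # good enough for a 13-SMX K20
    print(f"p={p:.2f}: full die {full:.1%}, 13-of-15 salvage {k20:.1%}")
```

Even with a fairly ugly per-SMX defect rate, the 13-of-15 configuration salvages the large majority of dies in this model - which is exactly what 15-way granularity buys you.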

A second factor is obviously the competition and the prices people are willing to pay for the highest end GPU: if you can dominate the market and ask top dollar with a less than fully functional die, there's no need to go all out and take a hit on your margins. It also leaves the door open for a later refresh... Nvidia is in a very comfortable position right now: it can just sit back, relax, and wait for whatever product AMD comes to market with and adjust accordingly. (While making killer profits on their Tesla line.)
 
I find it very amusing that the author is unable to detect granularity in a design with 15 identical units.

What he means is that SMXs are large compared to AMD's CUs, so there are fewer of them and when one is defective, you lose 1/15 of the chip instead of 1/32 (Tahiti).

But there's a good reason for that: SMXs share more resources, notably cache and registers, which lets NVIDIA pack more resources into their designs and probably save a good bit of power too.
 
What he means is that SMXs are large compared to AMD's CUs, so there are fewer of them and when one is defective, you lose 1/15 of the chip instead of 1/32 (Tahiti).
1/15 vs 1/32 is 6% vs 3% performance loss worst case. Pretty much irrelevant in the grand scheme of things. If this is what he means then it's a real nice proof that he's nothing more than a troll.
 
1/15 vs 1/32 is 6% vs 3% performance loss worst case. Pretty much irrelevant in the grand scheme of things. If this is what he means then it's a real nice proof that he's nothing more than a troll.

That's what he means; he's made the same point before, but more explicitly.

And yes, it's way overblown.
 
Judging from the consumer line of products (dunno about FirePro), AMD has disabled GCN-CUs only in pairs per Rasterizer, i.e. 2 on Cape Verde model "50" and 4 on Tahiti and Pitcairn model "50".
 
Judging from the consumer line of products (dunno about FirePro), AMD has disabled GCN-CUs only in pairs per Rasterizer, i.e. 2 on Cape Verde model "50" and 4 on Tahiti and Pitcairn model "50".
Could be a deliberate choice though, not a necessity. On Cypress it was necessary if one didn't want to have an unbalanced load (the raster unit was tied to one half of the shader array). It may still be the same on Cayman as well as Tahiti, but I don't remember seeing something reliable regarding this (one could try to get information about this with some directed benchmarking, checking the raster pattern to try to establish a link between the screen coordinate of a pixel and the CU its pixel shader runs on - see the sketch after this post).
Some similar tests on Fermi/Kepler would also be nice to answer the question how the load balancing between GPCs is done (or if GPCs exist at all).
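If one ever managed to dump a per-pixel "which unit shaded me" map (how to obtain that is the hard part and entirely hypothetical here - e.g. a shader writing out some unit identifier, or timing-based inference), the analysis side is straightforward. A minimal sketch in Python, run on synthetic stand-in data:

```python
import numpy as np

# Synthetic stand-in: pretend the screen is split into 16x16 pixel tiles,
# alternating between two raster units in a checkerboard. A real unit_map
# would come from the (hypothetical) measurement described above.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
unit_map = ((xx // 16) + (yy // 16)) % 2

def tile_period(unit_map, axis):
    """Smallest shift along an axis that maps the unit assignment onto itself."""
    n = unit_map.shape[axis]
    for shift in range(1, n):
        if np.array_equal(np.roll(unit_map, shift, axis=axis), unit_map):
            return shift
    return None  # no exact periodicity found

print("period in x:", tile_period(unit_map, axis=1))  # -> 32 for this pattern
print("period in y:", tile_period(unit_map, axis=0))  # -> 32
```

The recovered period and phase of the pattern would tell you the tile size and how screen space is carved up between rasterizers/GPCs; whether the assignment is static or load balanced would show up as the pattern changing between frames or workloads.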
 
That's why I wrote "AMD has..." and not "GCN-CUs can only...". :)
WRT shader arrays/CU-blocks: Tri-Setup and Rasterizer are primarily assigned to one of the two blocks, but there are exchange possibilities. This was confirmed by AMD. I imagine those special paths would be quite costly, though, possibly going through some of the chip's global memories.
 
Seems like Tahiti LE will follow suit with another four CUs disabled.
But we don't know if a group of 4 CUs (sharing the instruction cache and the scalar L1 cache) got switched off, or just a single CU from 4 different groups, or whatever combination one can think of. All we know is that the possibilities depend on the engine and array configuration. :LOL:
 
Nvidia just released their third quarter results and they are pretty good. Tegra has really continued to grow, and Nvidia's graphics revenue was up 10% over last quarter and 14.7% year over year. The professional segment was also up (though not year over year, which is understandable because NV still needs to release Kepler-based pro solutions for the mass market). Seems really strange that AMD has shrunk so much compared to last year while Nvidia grows, especially since they had a lead going into everything this quarter: a complete top-to-bottom solution three months in, and a professional solution that is competitive for once.

I think part of this is because APUs are a double-edged sword. People no longer buy AMD low-end GPUs. In addition, I think it shows a weakness in the brand.
 
People no longer buy AMD low-end GPUs.

Oh, come on. The best choice would be a Radeon + Intel. If there are no such options, what should people buy? Ridiculously slow AMD CPUs, or Intel + Nvidia? :LOL:

In addition, I think it shows a weakness in the brand.

Weakness in managerial decisions at companies like MSI, ASUS, Acer, etc. If they negotiate, buy, and offer NV-based products only, then what exactly do you expect to happen with the quarterly results?

At least, AMD owns the entire next-generation console market. Over there we don't see "weakness" in the brand. I wonder why... :rolleyes:
 
Oh, come on.

Intel gained market share in CPUs from AMD and now it appears that Nvidia did the same in gaining GPU share against AMD.

Intel still has something to brag about. Intel’s share hit 83.3 percent, up from 80.6 percent sequentially. AMD’s share dropped to 16.1, down from 18.8 percent, while VIA garnered a 0.6 percent share.

http://www.fudzilla.com/home/item/29375-x86-shipments-plummet-in-q3
The truth is that there were more sales of Intel CPUs and Nvidia GPUs and fewer sales of AMD CPUs/APUs/GPUs.

Intel and Nvidia have a solid brand, AMD not so much.

With AMD you have no idea what they will have in the future as they constantly publish, revise, delay and sometimes cancel products listed on road maps. And with the uncertainty around lay-offs and canceled/delayed products it will only add to more share loss for AMD.
 
We will probably have a strong indication when Nvidia announces their quarterly results in two weeks.

Latest earnings show Nvidia's gross margin gained 1.1 points to a record 52.9%.

CFO Commentary on Third Quarter 2013 Results

Jen-Hsun Huang (Nvidia's CEO) stated in the CC that the Oak Ridge Titan (K20) was a very big revenue driver with high margins.

So direct from the CEO of the company to Charlie (the hack): gotcha
 
Oh, come on. The best choice would be a Radeon + Intel. If there are no such options, what should people buy? Ridiculously slow AMD CPUs, or Intel + Nvidia? :LOL:

At least, AMD owns the entire next-generation console market. Over there we don't see "weakness" in the brand. I wonder why... :rolleyes:

The problem is not that AMD has bad products, it's that AMD has no products. All of AMD's integrated solutions are basically their low-end GPUs (the APUs). Discrete products start at the 7750 and nothing is left below that. AMD accelerating the integrated-GPU race has backfired.

The problem is that the prices they sell APUs at are not enough to offset the lost sale of a low-end GPU. And the increased demand for AMD's APUs from integrating a superior GPU never happened. The undesirability of AMD's CPU performance has killed the potential sale of a low-end GPU.

AMD killed their low-end GPU line and generally helped make the market smaller in this area by forcing Intel to accelerate its integrated graphics. Intel doesn't give a damn, Nvidia does and still makes parts, and AMD made a gamble that hasn't been paying off financially.
 
What? Again, you are not basing anything on reality. Just because we decided not to move Cedar, Caicos and Turks into the "7000 Series" in the channel does not mean we have cancelled them and no longer offer them. They are available as 7000 Series for OEMs and you'll find them running in many consumer and commercial designs.
 