NVIDIA Maxwell Speculation Thread

Charlie's report on "Nvidia’s Maxwell process choice" is now fully available (it has been 30 days).

Main points:
  • Maxwell will follow "the big die strategy"
  • Maxwell on 28 nm would mean that NVIDIA doesn't think it can make a large chip on 20 nm right away with good yields, i.e. a yield call rather than an engineering decision
  • Apple is going to TSMC for 20 nm and will probably take up all of TSMC's initial 20 nm wafer capacity, so Maxwell would have to be on 28 nm in any case

Question:
Were/are Nvidia's yields as bad as Charlie writes, or does he exaggerate just to make Nvidia look bad?
Choosing not to build a big chip on 20 nm right away is an engineering decision in my mind, as it is simply not possible.
 
So that means the transistor budget will be almost the same as Kepler's. So, low performance improvements, if any are noticeable? :???:
My guess is that the high-end 28 nm Maxwell will be considerably bigger than GK104 (4xx mm^2, 384-bit bus), and that would by itself give considerable gaming performance improvements over GK104 (and presumably GK114 too, if it's not too different from GK104).

Against a (hypothetical) desktop GK110, the situation is more nuanced. If my speculative high-end 28 nm Maxwell is a "gaming" part, then the compute stuff of, say, GK110 may not be included, so there would be more transistors available for "gaming" purposes. So even without architectural improvements, a 4xx mm^2 high-end gaming 28 nm Maxwell might have around the same gaming performance as a larger GK110, depending on clocks.
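As a rough back-of-envelope check on that transistor budget (the GK104 figures are the published ~294 mm^2 / ~3.54B transistors; the 4xx mm^2 die size is pure speculation on my part), a quick sketch:

```python
# Back-of-envelope: transistor budget of a hypothetical 4xx mm^2 28 nm Maxwell,
# assuming roughly the same transistor density as GK104 (an assumption).

gk104_area_mm2 = 294.0        # published GK104 die size (approx.)
gk104_transistors = 3.54e9    # published GK104 transistor count (approx.)

density = gk104_transistors / gk104_area_mm2  # transistors per mm^2 at 28 nm

for area_mm2 in (400, 450):   # the speculated "4xx mm^2" range
    budget = density * area_mm2
    print(f"{area_mm2} mm^2 -> ~{budget / 1e9:.1f}B transistors "
          f"({budget / gk104_transistors:.2f}x GK104)")
```

At the same density, a 4xx mm^2 die carries roughly 1.4-1.5x the GK104 transistor budget, which is where the "considerable gaming performance improvement" guess comes from.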

Question:
Were/are Nvidia's yields as bad as Charlie writes, or does he exaggerate just to make Nvidia look bad?
I don't really know (hopefully someone knowledgeable about this stuff can comment), but at least with GF100, I would think yields had to have been bad (no idea if his sub-2% number is accurate) due to the lack of a full GF100 part.
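For intuition on why a big die is so much harder to yield than a small one, here's a minimal sketch using the textbook Poisson defect-yield model, Y = exp(-D0 * A). The defect densities below are illustrative assumptions, not TSMC data:

```python
# Minimal sketch of the textbook Poisson yield model: Y = exp(-D0 * A).
# D0 = defect density (defects per cm^2), A = die area (cm^2).
# The defect densities are illustrative guesses, not TSMC figures.
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of dies with zero fatal defects under a Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

for d0 in (0.5, 1.0, 2.0):                  # assumed defect densities
    small = poisson_yield(300, d0)          # GK104-class die (~300 mm^2)
    big = poisson_yield(550, d0)            # GF100/GK110-class die (~550 mm^2)
    print(f"D0 = {d0}/cm^2: 300 mm^2 -> {small:.1%}, 550 mm^2 -> {big:.1%}")
```

The exact numbers don't matter; the point is that yield falls off exponentially with area, so a defect density that is tolerable on a ~300 mm^2 die can plausibly push a ~550 mm^2 die into single digits, or worse.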
 
Question:
Were/are Nvidia's yields as bad as Charlie writes, or does he exaggerate just to make Nvidia look bad?
Choosing not to build a big chip on 20 nm right away is an engineering decision in my mind, as it is simply not possible.

Yields for Nvidia chips have never been really good, but that doesn't mean it costs them too much... Basically, AMD can't take the same approach as Nvidia here. (That's maybe one of AMD's main errors, when they should/could have taken some risk.)


Now, going after SemiAccurate on the Apple story: so far Apple only has a three-month deal with TSMC. It's a "trial" deal, and nobody knows yet whether Apple will really use TSMC instead of Samsung to build their chips. TSMC likes to point to it (it's a good thing for investors), but so far this three-month deal is the only contract between Apple and TSMC. (Samsung is already set up to produce ARM SoCs at 14 nm, so I think Apple doesn't care much about TSMC's 20 nm. Not to mention that 20 nm on a big x86 or GPU chip is not at all the same as 20 nm on an ARM SoC.)
 
Against a (hypothetical) desktop GK110, the situation is more nuanced. If my speculative high-end 28 nm Maxwell is a "gaming" part, then the compute stuff of, say, GK110 may not be included, so there would be more transistors available for "gaming" purposes. So even without architectural improvements, a 4xx mm^2 high-end gaming 28 nm Maxwell might have around the same gaming performance as a larger GK110, depending on clocks.

So it's not a GK110 anymore... That matches what I was speculating about something in between, though you seem to be saying it will bring even more performance to the table. Personally, I see lower transistor/shader counts but higher clocks.

(Really sorry, I pushed two different posts when I meant to update my first one.)
 
I don't think TSMC is going to give 20nm exclusively to Apple and cut out AMD, Qualcomm, and Nvidia simultaneously. I realize Apple will command a very large order, but I don't see TSMC pissing off all their other cutting-edge customers just to appease one.

So in other words, I still believe Maxwell will be on 20nm in 2014. It wouldn't make R&D sense to design an entire family of chips at 28nm for a short six-month run, then shrink them all down to 20nm. Especially when perf/watt has become the de facto standard for new chips big and small.
 
Now, going after SemiAccurate on the Apple story: so far Apple only has a three-month deal with TSMC. It's a "trial" deal, and nobody knows yet whether Apple will really use TSMC instead of Samsung to build their chips. TSMC likes to point to it (it's a good thing for investors), but so far this three-month deal is the only contract between Apple and TSMC. (Samsung is already set up to produce ARM SoCs at 14 nm, so I think Apple doesn't care much about TSMC's 20 nm. Not to mention that 20 nm on a big x86 or GPU chip is not at all the same as 20 nm on an ARM SoC.)

Relations are a bit strained between Apple and Samsung at the moment. Samsung significantly raised the cost of the chips they provide to Apple after Apple won its initial preliminary injunctions against certain Samsung products in the US.

Apple then tried to find another supplier, but when it couldn't, it was forced to agree to Samsung's increased cost. I'm sure Apple is still working on finding another source for the chips it needs that doesn't involve Samsung.

Regards,
SB
 
Yields for Nvidia chips have never been really good, ...
Are you saying they have been good but not really good? Can you quantify this?

See, I look at the financial numbers and I see results that are really quite good. I read the financial Q&A and I see a steady stream of comments about how good yields are. So you must be on to something here. Please, enlighten us!
 
It could just be that 20 nm is too expensive for Nvidia to migrate to in 2014. There was a slide floating around from Nvidia saying that new processes would be quite costly after 28 nm.

If true, this could set up an interesting dynamic where only Apple, maybe Qualcomm, and certainly Intel are on the latest process. Nvidia would suffer badly in the HPC and mobile domains. Maybe this will open doors for Intel in the mobile domain.
 
I just read the drivel of the village idiot and he truly stays in character.

He links to an SEC filing and somehow sees big validation of his earlier claims in it. The 'risk factors' section talks about how yields have at times hurt their business in the past and how this is always a risk factor for a fabless company.

Well, duh, it's no mystery that TSMC screwed up early on at 40nm. It's not hard to imagine that this must have hurt their GT216 silicon etc. But it's also well known that yields for the initial Fermi products started off pretty well and kept rising each quarter. He also seems to think that this particular part of the financial disclosures is to be taken seriously. It is not: no investor would bat an eye if a company warned of adverse financial effects should a meteor destroy their headquarters and abduct the CEO; this is par for the course in the 'risk factors' section. But given the idiot's disastrous track record of interpreting financial documents, one should not expect him to know that.

He's reporting an internal goal of an 80% performance increase (no context given), then starts spreading all kinds of doubts. Classic village idiot: pull a claim out of some very dark place, then blame them later when said claim is not met. Falsifiability potential: none. Can't lose on that one. BTW, I heard from my sources that Nvidia is targeting 99% yields. What incompetent idiots they must be if they don't meet that.

He talks about "especially in light of the claims for GK114 vs where it ended up", which is weird because none of us have seem GK114 yet. (Typo? Did he mean GK104, which he praised effusively before it was released?)

He speculates about more efficient scheduling. Allow me to put some question marks next to his competency on that matter, since he earlier speculated that Kepler would increase CPU load because the CPU had to schedule things individually.

He goes on a long tangent on how management is to blame for low yields, which are supposedly consistently disastrous. Look: my little shop doesn't come close to the expertise and manpower a powerhouse like Nvidia (a known early adopter of the latest and greatest tools) has, yet somehow we manage to get pretty decent results if we simply follow the TSMC design rules, like everybody else. The mere suggestion that an extremely successful fabless company with high gross margins does not have the internal corrective processes to fix earlier mistakes is absurd.

He points to almost all variants having units disabled as proof of bad yield, yet the GTX680 was full-featured right from the start with decent availability. The GTX580 wasn't too bad either in terms of volume, if I remember correctly (he claimed it'd be a paper launch weeks before).

He claims that a good-die-only contract is a magic construction that fixes all issues. It is not: if your yield stinks, your wafer allocation won't magically go up and you end up with nothing to sell. TSMC is very sensitive about dies that don't meet expected yields based on their models and will root-cause issues immediately. If it's a proven issue with the incoming tape, you don't start production. I don't remember GF104 and later ever having volume issues, and even GF100 was usually available.
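To put a rough number on why good-die-only pricing can't rescue bad yield, here's a quick sketch; the wafer size is real, but the packing factor and yield figures are made-up assumptions for illustration:

```python
# Rough sketch: sellable dies per 300 mm wafer for a large GPU die.
# Packing factor and yields below are illustrative assumptions, not real data.
import math

wafer_area_mm2 = math.pi * (300 / 2) ** 2        # ~70,700 mm^2 for a 300 mm wafer

die_area_mm2 = 550                               # GF100/GK110-class die (approx.)
gross_dies = int(wafer_area_mm2 / die_area_mm2 * 0.85)   # assume ~15% edge loss

for y in (0.60, 0.20, 0.02):   # healthy, mediocre, and "sub-2%" yield scenarios
    print(f"yield {y:.0%}: ~{int(gross_dies * y)} good dies per wafer")
```

With a fixed wafer allocation, the gap between 60% and 2% yield is the gap between roughly 65 and 2 sellable dies per wafer; a good-die-only price doesn't conjure up more wafers.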

He talks about conspiracies abound. Ok, he definitely has a point there.

We can't know the deal Apple made with TSMC, but then to suggest that Apple can dictate to TSMC who gets wafer allocations? Supposedly because Nvidia, which delivers GPUs to Apple, is more of an enemy than AMD? He even names Qualcomm as a threat; you know, that little outfit that makes iPhone modems.

You can't make this stuff up.

I feel for those who paid $50 for this.
 
He talks about "especially in light of the claims for GK114 vs where it ended up", which is weird because none of us have seem GK114 yet. (Typo? Did he mean GK104, which he praised effusively before it was released?)
He mentions a performance projection claim for GK114, +15% over GK104, in one of his earlier reports, but that is only one number, not the two which are the minimum needed to make that kind of comparison.
 
This. And if anything, that confirms GK110 for GeForce. Usually the performance increase for a new generation plus its refresh is about +100%. GK104 brought +35%; another +15% would land at +55% vs the GTX580. No one in their right mind would think that's it. The GTX480 alone was 65% faster than the GTX285, for example. It just doesn't track. GK114+15% is completely fine - as a GTX760 Ti.
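For reference, the percentages above compound multiplicatively, not additively; a quick check using the claimed figures (both are quoted projections, not measurements):

```python
# Quick check: generation + refresh gains compound multiplicatively.
# Both inputs are the claimed figures quoted above, not measurements.

gk104_vs_gtx580 = 1.35   # claimed GTX 680 gain over GTX 580 (+35%)
gk114_vs_gk104 = 1.15    # claimed GK114 gain over GK104 (+15%)

total = gk104_vs_gtx580 * gk114_vs_gk104
print(f"GK114 vs GTX 580: +{(total - 1) * 100:.0f}%")  # ~ +55%
```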
 
GK104 brought +35%

On top of GF110? What about on top of GF104/ GF114?

GK114+15% is completely fine - as a GTX760 Ti.

I think it would be next to impossible. I wouldn't expect 15% more than GTX 680 to be marketed as GTX 760 Ti but rather GTX 780. ;)

I feel for those who paid $50 for this.

I agree. This particular article is not very nice. But to call him an idiot because of his attitude towards NV (or maybe it's not because of that, but come on, in general it's not fair) is too much. :rolleyes:
 
On top of GF110? What about on top of GF104/ GF114?

Yes, of course. The 680 is 135% faster than the 460 and about 90% faster than the 560 Ti. Of course, it uses significantly more power than the 460 and a bit more than the 560 Ti.

I think it would be next to impossible. I wouldn't expect 15% more than GTX 680 to be marketed as GTX 760 Ti but rather GTX 780. ;)

My expectation:
GTX780 = GTX680 +45%
GTX770 = GTX680 +30%
GTX760 Ti = GTX680 +10-15%
GTX760 = GTX680

This would fit past developments well and make the 780 roughly twice as fast as the 580 (1.35 x 1.45 ≈ 1.96), as one would expect from a new architecture on a mature process. But the point is moot; marketing may go haywire and milk us customers again for all it is worth. Just saying the chip GK114 should be a mild improvement over its predecessor, which is fine considering there might be something else on top of it. I mean, when has a refresh ever brought more than a 10-20% improvement? GK110 is no refresh because there was no GK100, so that doesn't count ;)

I don't think GK114 could compete with an HD8970. Again 2GB vs 3GB, again 256-bit vs 384-bit, again significantly less raw shading power. That's not Nvidia's style - Nvidia goes for overkill (if they can) :D
 
Think again, boxleitnerb: the 7xx series will be 28nm just like the current 6xx series is. It'll be something like the 4xx > 5xx update was, with the possible addition of GK110.
 
Think again, boxleitnerb: the 7xx series will be 28nm just like the current 6xx series is. It'll be something like the 4xx > 5xx update was, with the possible addition of GK110.

I understand him, but the doubt is whether GK110 will ever be released as a top-dog gaming card.

GTX780 = GTX680 +45% -> GK110
GTX770 = GTX680 +30% -> GK110 salvage
GTX760 Ti = GTX680 +10-15% -> GK114
GTX760 = GTX680 -> GK114 salvage

It is reasonable to expect a 45% difference between GK104 and GK110. Everything else is more than clear. ;)
 
How much does it cost to develop and tape out a GPU like GK110?

Apparently, some people have been working on Kepler for 7 years, but only in the last 3 years did it become really manpower-intensive. Nvidia has around 7,000 employees; take 1,000 of them who have something to do with GK110 at a $60k/year salary, and that is $180M over those three years right there, and that is only the tip of the iceberg.

I don't think Nvidia can sell enough K20, K20X and Quadro cards to break even, let alone earn a significant amount of money for future R&D. The market just isn't there. How many professional GK110 cards would they sell through the end of 2013 altogether? 100,000?
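Spelling out that back-of-envelope math (every input here is the poster's guess, not a known figure):

```python
# Back-of-envelope R&D cost and break-even for GK110.
# All inputs are guesses from the post above, not known figures.

engineers = 1_000        # guessed headcount involved with GK110
salary = 60_000          # guessed cost per engineer per year ($)
years = 3                # the manpower-intensive phase

rnd_cost = engineers * salary * years
print(f"R&D estimate: ${rnd_cost / 1e6:.0f}M")          # $180M

units = 100_000          # guessed professional-card sales through end of 2013
margin_needed = rnd_cost / units
print(f"Gross margin needed per card to recoup: ${margin_needed:,.0f}")
```

Under those guesses, each professional card would need to contribute about $1,800 of gross margin just to cover the estimated R&D, which is why the market-size question matters.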
 
I have no idea; the idea of a pro-only GPU may well be bunk.
But it takes just one supercomputer to sell thousands of GPUs at high margin (Cray's Titan has over 18,000 Tesla K20X cards).
 