NVIDIA Kepler speculation thread

UniversalTruth said:
Charlie says so, and there is no reason not to believe him.
That's the same guy who wrote that the GTX480 only saw 10,000 pieces over its lifetime, right? With yields of 10%? :LOL: Well, at least that explains things. There's nothing more to discuss.

Impossible. And it sounds ridiculous in these times of serious and deep economic recession.
Leaving aside the fact that the NBER declared the US recession over sometime in 2009, we're looking at a product here with volumes that are 2 orders of magnitude lower than, say, an iPhone's. In a world of 7B people, there are plenty of consumers who can afford a $500 GPU, just like there are still plenty of consumers who can afford the latest and greatest iPad/iPhone/MacBook Air.

7970/680 are the first major new GPUs in 2 years and they are both good indeed. There's no question that there is significant demand out there.
 
That's the same guy who wrote that the GTX480 only saw 10,000 pieces over its lifetime, right? With yields of 10%? :LOL: Well, at least that explains things. There's nothing more to discuss.

No, there is: you have to prove that he wasn't right. There is no doubt Fermi was a disaster with low yields; it was hot and late. What else indeed should we discuss...

About the recession being over in 2009, I can only accept that with a sarcastic smile. :)
 
What they are doing is simply trying to justify the close-to-nonexistent availability. Their marketing machine is certainly working, but it would be much better if they directed those efforts at something worthwhile.

People don't graze grass. :D

It's just marketing... I have no doubt that over time the 680 will sell better than the 580...

But for different reasons. I can also see why the 580 didn't sell that much in its first 6 weeks... I would be more interested in a comparison between older models' sales and the 680's, just to see whether the (minor) availability problems in some countries had an impact. (At the same time, times change, and I see more and more people buying a high-end card instead of replacing their mid-range card every year.)

- High-end cards are not what sells the most...
- The GTX580 was the Fermi refresh, somewhere between a small fix and a performance bump (mostly due to clock speed; clock for clock, the difference vs the 480 was 1-2% in the best case, not even enough to attribute to the hardware itself).
- Fermi did not get good press, even though the card was not catastrophic at all...
- The 580 was released on November 9th, the 6970 one month later. That is not a period when people buy hardware; I'm sure the 580 sold a lot better after those 6 weeks. The 570 offered 480-level performance for less money...
- People who had bought a 5870/5850/5770 or a GTX480/470/460 were not going to jump on the GTX580 (some did, but they must have been a real minority in the first weeks after the 580 launch).

I have no doubt the 680 sells well, and will keep selling well, in its category anyway. But look at the period when the 580 launched, and what the 580 was (a troubled period for Nvidia too: they were late with the 480, got bad press, then came the 580, etc.; I won't write out the long story). Many people were waiting for AMD's 6970, since the 5870 had such a good reputation... At the same time, looking at the price of the 680 and 7970, I think many people think twice before buying one of those and go for the 670 or a lower model instead.

What surprises me the most is that Nvidia has never made this type of graph before. The graph is clearly there to reassure investors that things are going well, which is why it appears now, one week before the end of Q2 and the start of H2 (and hence the graph without numbers or units, too).
 
UniversalTruth said:
No, there is: you have to prove that he wasn't right. There is no doubt Fermi was a disaster with low yields; it was hot and late. What else indeed should we discuss...
It was hot and late. No argument there. But 40nm yields at TSMC were fixed by the time Fermi came to market. That is a very well-known industry fact and it was explicitly stated as such by Nvidia too. I'm sorry if you think otherwise.

About the recession being over in 2009, I can only accept that with a sarcastic smile. :)
Things are definitely not rosy and we may well be headed towards recession territory, but for a total market of, say, 2M high-end gaming GPUs per year, you should be able to find plenty of takers.
 
No, the funny thing is that when I want to buy a 680 from hardwareversand.de, it says for all available cards: delivery in more than 7 days. :)

Wait, let me get this straight. When I find an arbitrary shop that has, for example, no Matrox card in stock, does that mean Matrox has problems with their yields?

That 680s are in stock at other well-known shops like the following (yeah, I know, a coincidental point in time, right?) means nothing then, I understand (and probably suits your agenda as well).
http://www.computeruniverse.net/products/90456155/palit-geforce-gtx680-2gbd5.asp
http://www.snogard.de/?artikelId=VGAP20-PA6800
http://www.cyberport.de/?DEEP=2E06-01H&APID=14
http://www.arlt.com/index.php?cl=details&campaign=geizhals/GTX680/1022278&anid=1022278
http://www.csv-direct.de/artinfo.ph...yORIGhWevnf18XrSmFFQpDODk58yNzQYURGMM3vj38P8=
http://www.arlt.com/index.php?cl=details&campaign=geizhals/GTX680/1022267&anid=1022267
http://www.caseking.de/shop/catalog...DP-HDMI-DVI::18709.html?campaign=psm/geizhals
http://www.cyberport.de/?DEEP=2E13-18Z&APID=14
http://www.computeruniverse.net/products/90456308/zotac-geforce-gtx680-2gbd5.asp

Or try just the cheapest one, which is available from five+ different shops.
http://imgur.com/DbuEL
 
It was hot and late. No argument there. But 40nm yields at TSMC were fixed by the time Fermi came to market. That is a very well-known industry fact and it was explicitly stated as such by Nvidia too.

Fixed is a very strong, perhaps marketing, word (I would prefer "improved"). Being late is a direct consequence of poor yield: severe yield problems, which also translate into poor thermal characteristics.

@CarstenS: Charlie (and the moles behind him) is one of the best and most reliable industry news sources. I have no idea why you are all so mean and personal towards him.
I would rather believe him than random links with only Palits and Zotacs, and "one card left" in one of the links, too. :oops:
 
Zotac was chosen randomly, and Palit because it was the cheapest listing. I gave the complete link to the price search engine earlier, which you obviously chose to ignore. 680s don't grow on trees or under bushes, but they are definitely not as scarce as you try to picture them.

Your pointing at "coincidental points in time" or "not available at [insert random shop]" suggests to me that you're deliberately hunting for clues of an availability problem while ignoring the fact that a "market" comprises more than one manufacturer, distributor and reseller. *shrugs*

WRT Charlie: did I mention him at all in the last couple of posts? Your preference to believe (in?) him notwithstanding.
 
Fixed is a very strong, perhaps marketing, word (I would prefer "improved"). Being late is a direct consequence of poor yield: severe yield problems, which also translate into poor thermal characteristics.
Yes... Everything is a problem of yield. Just imagine how kickass and cool the GeForce FX would have been if it hadn't had poor yield!
 
DDR3 is very common on today's GTS 450 and even GTX 550 Ti cards.

The GTX 660M (835MHz base, 950MHz boost and 5Gbps GDDR5) was measured at a 3DMark 11 score of ~P2500: http://www.notebookcheck.net/Review-Asus-G55VW-S1020V-Notebook.74851.0.html
To reach HD 7770 level (~P3500) they would have to go over 1.15GHz.

btw. AMD clocked the reference HD 7770 up to 1.1GHz: http://www.pcgameshardware.de/aid,8...arte/News/bildergalerie/?iid=1796529&vollbild

Here's the GDDR5 version spec:
http://videocardz.com/33274/retail-geforce-gt-640-to-be-released-at-computex
 
UniversalTruth said:
Fixed is a very strong, perhaps marketing, word (I would prefer "improved").
There is no process in history that hasn't seen a steady improvement curve over time, so what's your point? The 40nm process, on the other hand, was fundamentally broken up to a certain point in time even if you followed the standard fab design rules. That is something that TSMC was able to fix. After that, the defect rate followed the usual steady improvement trajectory.

Being late is a direct consequence of poor yield: severe yield problems, ...
You and I can't know that for sure, but there are some very strong indications that this is not the case. First of all, in the case of 28nm: if yield is as bad as you think it is, and if that contributes directly to being late, then why is Nvidia only 2 months later with 28nm than AMD? 2 months is peanuts in the design cycle of a chip. Second, for Fermi: we know first silicon only came back early September 2009 (GTC) and that the MC was a brick. And that it was a brick not because of yield but because of a cell design problem. So add 2 months for a metal spin and you have first fully functional silicon sometime November? GTX480 was released in April? That's 6 months from full silicon to release. That sounds like a reasonable time for a chip of this complexity even if no yield problems were present.
So, no, unless you believe yield problems delayed the actual initial tape out (harhar), being late is seldom explained by poor yields.

...which also translate into poor thermal characteristics too.
That's truly fascinating. In the case of 40nm, we know from AMD (and I got confirmation from others) that the issue was related to a metal issue. I would love to hear fundamentals behind the effect where broken metal vias result in poor thermal characteristics. Did the sneaky vias migrate to the active transistor region and poison the doping level?

@CarstenS: Charlie (and the moles behind him) is one of the best and most reliable industry news sources. I have no idea why you are all so mean and personal towards him.
Charlie is the broken clock: once in a while he gets something spectacularly right, and somehow that absolves him for everything he gets wrong. That works when he's talking about stuff that requires no technical understanding, just parroting whatever his moles tell him. The moment technical understanding comes into the picture, he becomes that person who thinks he knows his stuff but doesn't at all; he calls that "analysis". (See the last paragraph: your thermal-vs-yield argument is exactly the kind of dumb shit he would write. It's what makes it so easy to expose him as a fraud.)

How's that AMD 7970 on the HPM process coming along, to be released end of Q3/11, btw? The thermal train wreck that is Kepler (for 2 years, until reversed in December)? The impossibility of reliably making something as trivial as a crossbar? I love me some hardware-accelerated PhysX cheats in the GTX680, at $300.

It would be nice to write an after the fact review of all his claims. I doubt he surpasses the 20% mark in truthiness.
 
I should mention that yield can't be all that bad, as the GTX 6xx cards now seem to have decent availability.
 
There is no process in history that hasn't seen a steady improvement curve over time, so what's your point? The 40nm process, on the other hand, was fundamentally broken up to a certain point in time even if you followed the standard fab design rules. That is something that TSMC was able to fix. After that, the defect rate followed the usual steady improvement trajectory.

Nobody argues about that; the idea is that this steady improvement curve lagged compared to the usual curves seen on older processes. For example, the initial yield for GF100 might have been something like 1%, and later it improved to a stunning 25-30%. Just an example to illustrate the idea, not necessarily absolutely accurate.
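For what it's worth, yield numbers like these follow directly from defect density and die area; here is a minimal sketch of the classic Poisson die-yield model, with purely illustrative D0 values and approximate die sizes (not actual TSMC data):

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Classic Poisson model: fraction of defect-free dies, Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * d0_per_cm2)

# Illustrative defect densities improving over time (assumed, defects/cm^2),
# applied to a large GF100-class die (~529 mm^2) and a smaller GK104-class
# die (~294 mm^2). Big dies are punished disproportionately at high D0.
for d0 in (1.0, 0.5, 0.2):
    print(f"D0={d0}: GF100-class {poisson_yield(529, d0):.1%}, "
          f"GK104-class {poisson_yield(294, d0):.1%}")
```

At a hypothetical D0 of 1 defect/cm^2 the big die lands below 1%, roughly the regime the GF100 rumors described, while the same D0 still leaves the smaller die an order of magnitude better off.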

then why is Nvidia only 2 months later with 28nm than AMD?

It's all very relative. I'm not saying GK104 yields are THAT bad; they are simply bad, but nowhere near as bad as Fermi's. And how can you know exactly when AMD decided to launch their parts?

That's truly fascinating. In the case of 40nm, we know from AMD (and I got confirmation from others) that the issue was related to a metal issue. I would love to hear fundamentals behind the effect where broken metal vias result in poor thermal characteristics. Did the sneaky vias migrate to the active transistor region and poison the doping level?

Maybe it's my fault, but my understanding is that poor yield somehow means a lower average ASIC quality across a given sample of chips. Lower ASIC quality would of course translate into chips with worse characteristics.

GTX680, at $300.

That's NV to blame, not Charlie; he is absolutely right to treat GK104 as a mainstream part with its real and fair price of $300. Like all sane people who value their efforts and hard-earned money.
I'm not saying that the engineers' efforts are cheap... but they have such high salaries that you don't have to worry about them. :D
 
Chalnoth:
Yields COULD be bad, because some claims/rumors have it that NV is the most favored customer at TSMC. If they get more wafer starts than everybody else, a proportionally greater number of working chips would be the end result...
*Edit: someone added a bigass post in the time it took me to type two sentences. :D
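The arithmetic behind that caveat is simple enough to spell out; a toy sketch in which the wafer starts, gross dies per wafer, and yield are all invented numbers:

```python
def good_dies(wafer_starts: int, dies_per_wafer: int, yield_frac: float) -> int:
    """Working chips out the door = wafer starts x gross dies x yield."""
    return int(wafer_starts * dies_per_wafer * yield_frac)

# Same mediocre 30% yield in both cases; the hypothetical "favored customer"
# simply gets twice the wafer allocation, and therefore twice the working
# chips. Shelves can look well stocked even if yield is poor.
baseline = good_dies(1000, 150, 0.30)
favored = good_dies(2000, 150, 0.30)
print(baseline, favored)
```

In other words, retail availability on its own tells you about wafer starts times yield, not about yield.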
 
Maybe it's my fault, but my understanding is that poor yield somehow means a lower average ASIC quality across a given sample of chips. Lower ASIC quality would of course translate into chips with worse characteristics.
You make 100 cakes. Each cake varies from the ideal in terms of amount of sugar. And each cake varies a bit in terms of the amount of butter. On average 95 cakes are within range wrt both sugar and butter.

Suddenly, a butter scale breaks down and the amount of butter is all over the map, resulting in only 20 good cakes.

Does this influence the amount of sugar in the cake? Not if they are using a different scale.

sugar -> transistor power characteristics
butter -> metal layers

They are completely separate things. The fact that the metal has a high D0 has no influence on transistor quality, and the spread of the thermal characteristics will be the same.
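The cake argument can be turned into a toy Monte Carlo sketch: draw each die's power characteristic and its metal pass/fail independently, and the power spread of the surviving dies barely moves even when the metal yield collapses. All distributions and probabilities below are invented for illustration:

```python
import random

def sample_surviving_dies(n: int, metal_fail_prob: float, seed: int = 0):
    """Return the power draw of simulated dies that survive metal screening."""
    rng = random.Random(seed)
    survivors = []
    for _ in range(n):
        power = rng.gauss(100.0, 5.0)              # "sugar": transistor power, W
        metal_ok = rng.random() > metal_fail_prob  # "butter": independent metal yield
        if metal_ok:
            survivors.append(power)
    return survivors

good = sample_surviving_dies(10_000, 0.05)  # healthy metal stack
bad = sample_surviving_dies(10_000, 0.80)   # broken vias, early-40nm style
for name, dies in (("good metal", good), ("bad metal", bad)):
    print(f"{name}: yield {len(dies) / 10_000:.0%}, "
          f"mean power {sum(dies) / len(dies):.1f} W")
```

Yield craters from roughly 95% to roughly 20%, but the mean power of the survivors stays put: a high metal D0 and the transistor quality are independent draws.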

That's NV to blame, not Charlie; he is absolutely right to treat GK104 as a mainstream part with its real and fair price of $300.
What you're saying is that it's Nvidia's fault if somebody makes all the wrong assumptions, declares them as fact, and shouts them from the rooftops? I would love to be subject to that kind of accountability. Did you notice that he didn't write "it is my personal opinion that the price SHOULD BE"? No, he writes "the price IS".

And, yes, I didn't expect you to refute my other examples. BTW: how's that 341mm2 GK104 die going? And GK117? Where is the full 28nm Fermi line-up? What's your opinion about Kepler not beating Moore's law? How about Kepler having only a 50% higher shader count than Fermi and terrible power consumption? Do you also think GK106 is the same die as GK107? Where's my GK104 with a 384-bit wide bus? And GK112 and a dual-GK104 GK110? What is your opinion on designing a CPU based on a GPU shader core? Can you explain how an interconnect can be a big runaway power concern on a GPU? What about Kepler running at lower clocks than AMD, and Kepler's unsuitable architecture? Isn't it fair to say that, up to January 2012, he has been wrong on every single technical aspect?
 
And, yes, I didn't expect you to refute my other examples. BTW: how's that 341mm2 GK104 die going?
That article did say that to arrive at that figure you have to make several assumptions. He never claimed it was a scientific measurement, and he did say he had expected it to have been smaller than that figure.

He was right about that chip, apart from the codename being GF117 rather than GK117:
http://www.notebookcheck.net/NVIDIA-GeForce-GT-620M.72198.0.html

Fair enough. Perhaps it was going to be Nvidia's strategy before they fully weighed the costs of the 28nm process and the advantages of the Fermi architecture, perhaps it was deliberate disinformation from Nvidia, or perhaps there is some other explanation.

That article was discussing DP performance. Nvidia still hasn't released big-Kepler.

Fair enough, that article used assumptions which are now known to be false.

Do you also think GK106 is the same die as GK107? Where's my GK104 with a 384-bit wide bus? And GK112 and a dual-GK104 GK110?
That can all be attributed to a fake roadmap published by 4Gamer. Fair enough, Semiaccurate should have been more skeptical of them.

I see no such claim in the article you referenced. The only comparison is to Transmeta, and Project Denver is still a fair way from release.
 
He was right about that chip, apart from the codename being GF117 rather than GK117:
http://www.notebookcheck.net/NVIDIA-GeForce-GT-620M.72198.0.html
He was right except for a minor detail: the chip architecture? :LOL:

... or perhaps there is some other explanation.
I can think of something.

That article was discussing DP performance. Nvidia still hasn't released big-Kepler.
If a new architecture has X Perf/W and Y Perf/mm2 improvements over its predecessor for single precision (GK104 vs GF104), that should be a good indicator for double precision too, I would think. But we can wait.

I see no such claim in the article you referenced. The only comparison is to Transmeta, and Project Denver is still a fair way from release.
Ah, well. We'll see what it comes out with.
 