"Hahaha that's funny on so many levels!"

UniversalTruth: No, the funny thing is that when I want to buy a 680 from hardwareversand.de, it says for all available cards: delivery in more than 7 days.
UniversalTruth said: Charlie says so, and there is no reason not to believe him.

That's the same guy who wrote that the GTX480 only saw 10,000 pieces over its lifetime, right? With yields of 10%? Well, at least that explains things. There's nothing more to discuss.
UniversalTruth said: Impossible. And sounds ridiculous in these times of serious and deep economic recession.

Leaving aside the fact that the NBER declared the US recession over sometime in 2009, we're looking at a product here with volumes that are two orders of magnitude lower than, say, an iPhone. In a world of 7B people, there are plenty of consumers who can afford a $500 GPU, just like there are still plenty of consumers who can afford the latest and greatest iPad/iPhone/MacBook Air.
What they are doing is simply trying to justify the close-to-nonexistent availability. Their marketing machine is certainly working, but it would be much better if they directed those efforts at something worthwhile.
People don't graze grass.
UniversalTruth said: No, there is: you have to prove that he wasn't right. There is no doubt Fermi was a disaster with low yields; it was hot and late. What else indeed should we discuss...

It was hot and late. No argument there. But 40nm yields at TSMC were fixed by the time Fermi came to market. That is a very well-known industry fact, and it was explicitly stated as such by Nvidia too. I'm sorry if you think otherwise.
"About the recession being over in 2009, I can only accept it with a sarcastic smile."

Things are definitely not rosy, and we may well be headed towards recession territory, but for a total market of, say, 2M high-end gaming GPUs per year, you should be able to find plenty of takers.
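A quick sanity check on that "two orders of magnitude" comparison (the iPhone volume is a ballpark assumption on my part, not a sourced figure):

import math

# Ballpark annual volumes: the iPhone number is an assumed round figure
# for that period; the GPU figure is the ~2M/year high-end market above.
iphone_units_per_year = 100e6
highend_gpus_per_year = 2e6

ratio = iphone_units_per_year / highend_gpus_per_year
print(f"ratio: {ratio:.0f}x, i.e. ~{math.log10(ratio):.1f} orders of magnitude")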
UniversalTruth said: "Fixed" is a very strong, perhaps marketing word (I would prefer "improved"). Being late is a direct consequence of poor yield, severe yield problems, which also translate into poor thermal characteristics too.

Yes... Everything is a problem of yield. Just imagine how kickass and cool the GeForce FX would have been if it hadn't had poor yield!
DDR3 is very common for today's GTS 450 and even GTX 550 Ti cards.
The GTX 660M (835MHz base, 950MHz boost, and 5Gbps GDDR5) was measured with a 3DMark 11 score of ~P2500: http://www.notebookcheck.net/Review-Asus-G55VW-S1020V-Notebook.74851.0.html
To reach the HD 7770 (~P3500), they would have to go over 1.15GHz.
BTW, AMD clocked the reference HD 7770 up to 1.1GHz: http://www.pcgameshardware.de/aid,8...arte/News/bildergalerie/?iid=1796529&vollbild
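A rough back-of-the-envelope supports that, assuming (optimistically) that the P-score scales linearly with core clock while memory bandwidth stays fixed:

# Estimate the core clock a GTX 660M would need to match an HD 7770's
# 3DMark 11 GPU score, assuming the score scales linearly with clock.
boost_clock_mhz = 950   # GTX 660M boost clock
score_660m = 2500       # ~P2500 measured
score_7770 = 3500       # ~P3500 target

required_mhz = boost_clock_mhz * score_7770 / score_660m
print(f"required core clock: ~{required_mhz:.0f} MHz")   # ~1330 MHz

# Memory bandwidth doesn't scale with core clock, so the real
# requirement would be higher still -- in any case well over 1.15GHz.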
UniversalTruth said: "Fixed" is a very strong, perhaps marketing word (I would prefer "improved")...

There is no process in history that hasn't seen a steady improvement curve over time, so what's your point? The 40nm process, on the other hand, was fundamentally broken up to a certain point in time, even if you followed the standard fab design rules. That is something that TSMC was able to fix. After that, the defect rate followed the usual steady improvement trajectory.
UniversalTruth said: ...being late is a direct consequence of poor yield, severe yield problems...

You and I can't know that for sure, but there are some very strong indications that this is not the case. First of all, in the case of 28nm: if yield is as bad as you think it is, and if that contributes directly to being late, then why is Nvidia only 2 months later with 28nm than AMD? 2 months is peanuts in the design cycle of a chip. Second, for Fermi: we know first silicon only came back in early September 2009 (GTC), and that the MC was a brick. And it was a brick not because of yield but because of a cell design problem. So add 2 months for a metal spin and you have the first fully functional silicon sometime in November. The GTX480 was released in April? That's roughly six months from working silicon to release, which sounds like a reasonable time for a chip of this complexity even if no yield problems were present.
UniversalTruth said: ...which also translate into poor thermal characteristics too.

That's truly fascinating. In the case of 40nm, we know from AMD (and I got confirmation from others) that the problem was related to metal vias. I would love to hear the fundamentals behind an effect where broken metal vias result in poor thermal characteristics. Did the sneaky vias migrate to the active transistor region and poison the doping levels?
UniversalTruth said: @CarstenS: Charlie (and the moles behind him) is one of the best and most reliable industry news sources. I have no idea why you are all so mean and personal towards him.

Charlie is the broken clock: once in a while he gets something spectacularly right, and somehow that absolves him for everything he gets wrong. That's when he's talking about stuff that requires no real technical understanding, parroting whatever his moles tell him. The moment technical understanding comes into the picture, he becomes that person who thinks he knows his stuff but doesn't at all; he calls that 'analysis'. (See the last paragraph: your thermal-vs-yield argument is exactly the kind of dumb shit he would write. It's what makes it so easy to expose him as a fraud.)
GTX680, at $300.
"Maybe it's my fault, but my understanding is that somehow poor yield means a lower average ASIC quality for a given sample of chips. Lower ASIC quality would of course translate into chips with worse characteristics."

You make 100 cakes. Each cake varies from the ideal in terms of the amount of sugar, and each cake varies a bit in terms of the amount of butter. On average, 95 cakes are within range with respect to both sugar and butter. Those 95 are your yield; the other 5 get thrown away rather than sold. Yield tells you how many chips land inside the spec window, not how good the average shipped chip is.
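If you want it more concrete than cakes, here's a toy simulation of parametric yield (the spec window and sigmas are made-up numbers, purely for illustration):

import random

random.seed(42)
N = 100_000

# Each die gets two process parameters (the "sugar" and the "butter"),
# normally distributed around their ideal value of 1.0.
dies = [(random.gauss(1.0, 0.05), random.gauss(1.0, 0.05)) for _ in range(N)]

# A die is good only if BOTH parameters land inside the spec window.
good = [d for d in dies if all(0.9 <= p <= 1.1 for p in d)]
print(f"yield = {len(good) / N:.1%}")

# Key point: the average quality of the shipped (good) dies barely moves,
# because out-of-spec dies are discarded, not sold.
print(f"avg param (all dies):  {sum(s for _, s in dies) / len(dies):.4f}")
print(f"avg param (good dies): {sum(s for _, s in good) / len(good):.4f}")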
"That's NV to blame, not Charlie; he is absolutely right to accept GK104 as a mainstream part with its real and fair price of $300."

What you're saying is that it's Nvidia's fault if somebody makes all the wrong assumptions, declares them as fact, and shouts them from the rooftops? I would love to be subject to that kind of accountability. Did you notice that he didn't write "it is my personal opinion that the price SHOULD BE"? No, he writes "the price IS".
"And, yes, I didn't expect you to refute my other examples. BTW: how's that 341mm2 GK104 die going?"

That article did say that to arrive at that figure you have to make several assumptions. He never claimed it was a scientific measurement, and he did say he had expected it to be smaller than that figure.
He was right about that chip, apart from the codename being GF117 rather than GK117: http://www.notebookcheck.net/NVIDIA-GeForce-GT-620M.72198.0.html
Fair enough. Perhaps it was going to be Nvidia's strategy before they fully weighed the costs of the 28nm process and the advantages of the Fermi architecture, perhaps it was deliberate disinformation from Nvidia, or perhaps there is some other explanation.
"What's your opinion about Kepler not beating Moore's law?"

That article was discussing DP performance. Nvidia still hasn't released big-Kepler.
"How about Kepler having only 50% higher shader count than Fermi and terrible power consumption?"

Fair enough, that article used assumptions which are now known to be false.
"Do you also think GK106 is the same die as GK107? Where's my GK104 with a 384-bit wide bus? And GK112 and a dual-GK104 GK110?"

That can all be attributed to a fake roadmap published by 4Gamer. Fair enough, Semiaccurate should have been more skeptical of them.
"What is your opinion on designing a CPU based on a GPU shader core?"

I see no such claim in the article you referenced. The only comparison is to Transmeta, and Project Denver is still a fair way from release.
"He was right about that chip, apart from the codename being GF117 rather than GK117: http://www.notebookcheck.net/NVIDIA-GeForce-GT-620M.72198.0.html"

He was right except about a minor detail: the chip architecture?
"...or perhaps there is some other explanation."

I can think of something.
"That article was discussing DP performance. Nvidia still hasn't released big-Kepler."

If a new architecture has X Perf/W and Y Perf/mm2 improvements over its predecessor for single precision (GK104 vs GF104), that should be a good indicator for double precision too, I would think. But we can wait.
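To spell out the kind of projection I mean, here's a sketch; every input is a placeholder assumption, not a spec, except the Tesla M2090 peak DP figure, which is public:

# Project big-Kepler DP throughput from Fermi's, assuming the SP
# efficiency gains carry over to DP. Placeholder assumptions throughout.
gf110_dp_gflops = 665.0     # Tesla M2090 peak DP (published Fermi spec)
perf_per_watt_gain = 2.0    # X: assumed GK104-vs-GF104 style SP perf/W gain
tdp_ratio = 1.0             # assume the successor keeps the same power budget

projected_dp = gf110_dp_gflops * perf_per_watt_gain * tdp_ratio
print(f"projected big-Kepler DP: ~{projected_dp:.0f} GFLOPS")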
"I see no such claim in the article you referenced. The only comparison is to Transmeta, and Project Denver is still a fair way from release."

Ah, well. We'll see what it comes out with.