You mean the GPU that was cancelled long ago?
What would that prove other than showing it can't retire kernels out of order? There can still be 16 running in parallel even if the chip hangs.
If that is true then it proves that it can't really run concurrent kernels. Everyone's first instinct to anything that a graphics company claims about their hardware at this point is that they are lying.
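For what it's worth, a test along these lines is easy to sketch (a minimal sketch, not the specific test being proposed in the thread; the kernel names, block sizes and iteration count are all made up): put a long kernel and a short kernel in different streams and check from the host whether the short one drains while the long one is still running.

```cuda
// Minimal sketch, assuming a Fermi-class (or newer) GPU. Launch a long-running
// kernel and a short kernel into different streams, then watch from the host
// which stream drains first.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void long_kernel(float *x, int iters) {
    float v = x[threadIdx.x];
    for (int i = 0; i < iters; ++i)      // dependent chain, deliberately slow
        v = v * 1.0000001f + 1.0f;
    x[threadIdx.x] = v;
}

__global__ void short_kernel(float *y) {
    y[threadIdx.x] += 1.0f;              // trivial amount of work
}

int main() {
    float *x, *y;
    cudaMalloc(&x, 32 * sizeof(float));
    cudaMalloc(&y, 32 * sizeof(float));
    cudaMemset(x, 0, 32 * sizeof(float));
    cudaMemset(y, 0, 32 * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    long_kernel<<<1, 32, 0, s1>>>(x, 1 << 28);   // keeps the GPU busy a while
    short_kernel<<<1, 32, 0, s2>>>(y);           // should slot in beside it

    // Wait for the short kernel, then see whether the long one is still busy.
    while (cudaStreamQuery(s2) == cudaErrorNotReady) { /* spin on the host */ }
    bool overlapped = (cudaStreamQuery(s1) == cudaErrorNotReady);
    printf("short kernel finished while long kernel was still running: %s\n",
           overlapped ? "yes" : "no");

    cudaDeviceSynchronize();
    cudaFree(x); cudaFree(y);
    cudaStreamDestroy(s1); cudaStreamDestroy(s2);
    return 0;
}
```

If they really overlap, the short kernel's stream goes idle while the long kernel's stream is still busy; if the hardware serializes them, the host only sees the short kernel finish after the long one. It says nothing about retiring kernels out of order, which is the point being argued above.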
I saw his post after I made my post last night. Pretty sweet, nice and simple. RecessionCone's macro seems to be the closest thing to it. There's no single instruction that accomplishes what PSCAN does. Also, I was wrong about syncthreads_count, as it returns a single value to all threads - the count of true predicate evaluations. At first I thought it was similar to rank() from the patent, but meh.
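Roughly the difference, for anyone following along (a sketch, not RecessionCone's macro and not whatever the patent's rank() is; the kernel and buffer names are made up): __syncthreads_count gives every thread the same block-wide total, while a PSCAN/rank-style result is per-thread and takes a couple of instructions, e.g. a ballot plus a popcount of the lower lanes.

```cuda
// Sketch of the distinction only; assumes the block size is a multiple of 32
// so every warp lane participates in the ballot.
__global__ void predicate_demo(const int *in, int *block_total, int *warp_rank)
{
    int tid  = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = (in[tid] > 0);                     // some per-thread predicate

    // __syncthreads_count: one instruction, but every thread in the block gets
    // the SAME value back - the total number of threads whose predicate was
    // non-zero.
    int total = __syncthreads_count(pred);
    if (threadIdx.x == 0)
        block_total[blockIdx.x] = total;

    // A per-thread rank (count of set predicates in the lanes below this one)
    // takes more than one instruction: ballot the predicate, then popcount the
    // lower lanes. This is the sort of thing a PSCAN-style macro wraps up.
    // (__ballot(pred) on Fermi-era toolkits, __ballot_sync on current ones.)
    unsigned ballot = __ballot_sync(0xffffffffu, pred);
    unsigned lane   = threadIdx.x & 31u;
    warp_rank[tid]  = __popc(ballot & ((1u << lane) - 1u));
}
```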
This would be real fun, because I read somewhere that GT200b is EOL.
So a wafer is 113 square inches. Average defect counts were 34-45 per wafer, and are now down to 11-34 per wafer.

Although TSMC recently said the defect density of its 40nm technology has already dropped from 0.3-0.4 per square inch to 0.1-0.3, the sources pointed out that the improvement in overall yield still needs more time before catching up with market demand.
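For anyone checking, the arithmetic is just the quoted density times the area of a 12-inch wafer (roughly 113 square inches):

```latex
113\,\mathrm{in}^2 \times 0.3\,\mathrm{defects/in}^2 \approx 34, \qquad 113 \times 0.4 \approx 45 \\
113\,\mathrm{in}^2 \times 0.1 \approx 11, \qquad 113 \times 0.3 \approx 34
```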
Per square INCH? Not square centimeter? Couldn't it be a typo? Because e.g. 0.2 per sq. inch (0.031 per sq. cm) would mean >90% yields even for RV870... It wouldn't make sense to produce HD5830/HD5850/HD5870 in a 1:1:1 ratio...
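Rough arithmetic behind that >90% figure, assuming a ~334 mm² (3.34 cm²) Cypress/RV870 die and a plain Poisson yield model (my assumption; the Bose-Einstein model discussed further down gives a much lower number):

```latex
D_0 = \frac{0.2}{6.4516}\,\mathrm{cm}^{-2} \approx 0.031\,\mathrm{cm}^{-2}, \qquad
Y \approx e^{-A D_0} = e^{-3.34 \times 0.031} \approx 0.90
```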
Now this article is talking about "immersion-induced" defects. I'm not sure what other kinds of defect mechanism apply to TSMC's current 40nm lines. If there are other defect mechanisms, then of course defect counts would increase.

Immersion lithography systems use water, or a similar clear liquid, as an image-coupling medium. By placing water between the lithographic lens and the semiconductor, engineers can preserve higher-resolution light from the lens, enabling smaller, more densely-packed devices.
But liquid mediums present their own challenges, including defects such as bubbles, watermarks, particles, particle-induced printing defects, and resist residue. TSMC's R&D researchers resolved these issues by developing a proprietary defect-reduction technique that, on initial tests, produced less than seven immersion-induced defects on many 12-inch wafers, a defect density of 0.014/cm2. Some wafers have yielded defects as low as three per wafer, or 0.006/cm2. This compares to several hundred thousand defects produced by a prototype immersion scanner without these proprietary techniques and significantly better than published champion data in double digits.
TSMC's immersion lithography technology is targeted at TSMC's 45nm manufacturing process.
That's it. I remember quotations of numbers like 0.4 and 0.2 per square centimeter in 2009... 0.2 defects per square centimeter means 54% fully working RV870s per wafer. That's more believable than 91%...

Like no-X, it should be per square centimeter?! I saw a presentation last week that they were aiming at 0.1 defects per square centimeter for 2010/2011.
Meaty article, off to read it...

The graph illustrates yield-versus-chip-area for Do = 0.16, 0.22, and 0.28 defects per square inch (see the Figure).
We don't know what N is for 40nm. N was quoted as being 11.5 and 15.5 for TSMC processes that I can't discern. The Semiconductor article indicates that N is the number of critical layers. The formula assumes the same defect density at each level, which is not the case.

Bose-Einstein: Y = 1/(1 + A·Do)^N
where Y = yield, A = die area, and Do = defect density per unit area. For the Bose-Einstein model, N = process-complexity factor.
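To make the formula concrete, here is one point worked through by hand, using the Cypress numbers Jawed plugs in below (A = 0.51 in², N = 15.5, Do = 0.2 per square inch):

```latex
Y = \frac{1}{(1 + A D_0)^N} = \frac{1}{(1 + 0.51 \times 0.2)^{15.5}} = \frac{1}{1.102^{15.5}} \approx \frac{1}{4.5} \approx 22\%
```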
AlexV: Thanks. But that seems to be quite misleading. At least for me and other common users :smile:
Plugging in some numbers for Cypress (334mm² = 0.51in²), using 15.5 for N, for various defect densities per square inch:

- 0.4 = 5.6%
- 0.3 = 11%
- 0.2 = 22.2%
- 0.1 = 46.3%

Jawed

Assuming 580mm² for GF100 (0.89in²):

- 0.4 = 0.9%
- 0.3 = 2.6%
- 0.2 = 8%
- 0.1 = 26.9%

Jawed
Wouldn't defects tend to be more clustered, as they are not all random? So one die may end up with more than its fair share of defects. In addition to this, are the percentages the rough number of 'perfect' dies which can be fully enabled?

Well, even if they're uncorrelated, purely by chance a significant fraction of them will be clustered.
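To put a number on "purely by chance": take, say, a 3.34 cm² die at 0.2 defects/cm² (figures from earlier in the thread, used purely for illustration) and a plain Poisson model with no spatial correlation at all. A die then averages λ ≈ 0.67 defects, and a noticeable share of dies still collect two or more:

```latex
\lambda = A D_0 = 3.34 \times 0.2 \approx 0.67, \qquad
P(k \ge 2) = 1 - e^{-\lambda}(1 + \lambda) \approx 1 - 0.51 \times 1.67 \approx 0.15
```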
If that is true then it proves that it can't really run concurrent kernels. Everyone's first instinct to anything that a graphics company claims about their hardware at this point is that they are lying.

Maybe we have a different definition of concurrent. To me it means that multiple kernels can execute simultaneously. Even if they can't finish out of order it wouldn't mean they are lying. Also, I want to make it clear I'm not saying Fermi can't pass your test. I was just trying to ensure I understood its purpose.