R4xx will break Moore's law

Dio said:
If defects were random across the whole wafer, doubling the size of each die doubles the number of bad chips,

Does it really?
Statistics is quite a weak point of mine, but as I see it, if the number of defects remains constant and their distribution is random, the number of bad chips per wafer actually decreases: the probability that one die is hit by more than one defect goes up with die size, while the maximum number of dies that can theoretically be affected (= the number of defects) doesn't rise.
Does this make some sense or am I completely off track here?
 
You are right and wrong. It depends on your starting point. If things really are random, I would think it goes like this:

Consider a wafer with 1000 dies on it, and 10 defects. The chance of two of those 10 defects hitting the same die is pretty small, so you almost certainly have 10 bad chips. That leaves 990 working chips, i.e. 99%.

Consider a wafer with 100 dies on it, and 10 defects. It's still pretty unlikely that the defects will hit the same die twice, so you probably have 10 bad chips, maybe 9. So you have 90 or 91 working chips, i.e. 90%.

If the defect rate is high, the reverse is true. If you have 100 dies and 100 defects, not all the dies will be affected. But I can't be bothered working out the yield (I'm not sure my maths is up to it anymore anyway). If it goes to 200 defects, then as you say many of the bad dies will get hit again rather than the good ones. As I said, it's a non-linear function.

As I said before, I know nothing about silicon foundries though so I've no idea if this model is valid. (The closest I've got to one is the wafer that was nailed to the wall of my cube a few years ago...)
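
With D dies and N randomly-placed defects, the expected yield works out to (1 - 1/D)^N. Here's a minimal Monte Carlo sketch of that model (Python; purely illustrative, all names my own) that reproduces the figures above, plus the non-linear high-defect regime:

```python
import random

def simulate_yield(num_dies, num_defects, trials=10_000):
    """Fraction of good dies when num_defects land uniformly at random."""
    good = 0
    for _ in range(trials):
        hit = {random.randrange(num_dies) for _ in range(num_defects)}
        good += num_dies - len(hit)   # dies with zero defects survive
    return good / (trials * num_dies)

# The two low-defect cases from the post, then the high-defect regime:
for dies, defects in [(1000, 10), (100, 10), (100, 100), (100, 200)]:
    print(f"{dies} dies, {defects} defects -> {simulate_yield(dies, defects):.1%} yield")
```

For 100 dies and 100 defects this gives about 37% yield, and about 13% at 200 defects - non-linear, as claimed.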
 
zidane1strife said:
But Moore's "Law" deals more with process technologies than it deals with processors themselves. Processors cannot "break" this imaginary law.
Well, 1B+ transistor processors could be here by early 2005 (about 20 months from now...). Would that break it, or would it be in line with what's expected?
Only if the transistor density was the main thing that increased. If there is indeed a 1B+ transistor processor available by early 2005, then it will, most likely, be a very large processor, or some sort of multicore approach (either way, large area of silicon).

Or, another possibility is that we're just talking about lots of cache, or other comparatively high-density transistors. This is nothing special. The important increase would be in logic density, not transistor density alone.

Lastly, I'll reiterate: it would be the advanced process technology that would "break" Moore's "Law," not the processor itself.

And Moore's "Law" cannot continue for more than about 4.5 to 5 years. With continual doubling of density roughly every 18 months, in that time we'll be down to transistors operating with a single electron. You obviously can't go any smaller, and the behavior of the transistors changes dramatically at that small scale (current transistor theories depend on statistical processes, which means lots and lots of electrons).

This means that silicon processes cannot continue to evolve at the rate they have for the past few decades for much longer. Either computational power will accelerate (through new technologies, or through different ways of increasing processing, such as making processors less planar and adding a third dimension with which to increase density), or it will stagnate until new technologies emerge.
 
Chalnoth said:
And Moore's "Law" cannot continue for more than about 4.5 to 5 years.
They've been saying that for the last 20 years.

With continual doubling of density roughly every 18 months, in that time we'll be down to transistors operating with a single electron.
The bond length for Si-Si (according to webelements.com) is 235.2pm - i.e. about 1/550 the size of a 0.13 micron feature.

If this bond is repeated in 2D (which it isn't) then a 0.13 micron feature would contain around 300,000 atoms. log2 of 300,000 is about 18 - so we have 18 generations, around 27 years, before features reach a single atom.... of course, we might expect things to fall apart at ten atoms, maybe, which chops a few generations off. But it's certainly not 'a single electron'.
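
The back-of-the-envelope sum, as a throwaway Python sketch (same assumptions as above: square features, atoms sitting one bond length apart):

```python
import math

BOND_PM = 235.2        # Si-Si bond length (webelements.com), in picometres
FEATURE_PM = 0.13e6    # 0.13 micron expressed in picometres

atoms_per_side = FEATURE_PM / BOND_PM   # ~553 bond lengths across the feature
atoms_2d = atoms_per_side ** 2          # ~305,000 atoms if naively tiled in 2D

generations = math.log2(atoms_2d)       # atom count halves each density doubling
years = generations * 1.5               # one generation per 18 months

print(f"~{atoms_2d:,.0f} atoms, {generations:.0f} generations, ~{years:.0f} years")
```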

This is all really approximate hand-waving, of course. (And might be / probably is wrong, really approximate hand-waving).

I wouldn't necessarily expect someone to solve the problem. But my observation is that the further away from 'deadlines' you are, the less gets done. As the deadline approaches, problems tend to get solved. You already listed a bunch of ideas that might solve it.

Ten years is a long time in silicon.
 
DaveBaumann said:
( 3 RV350s bolted together - for the pixel pipelines, that is, not for much other stuff )

:rolleyes:

You have no idea what you're talking about...

Something I have believed for quite some time. Maybe Uttar should stick with Nvidia speculation from now on ;)
 
Just to make it clear: if the defects are random and the original yield is Y, then the yield of the double-size chip is Y^2.
That is 60%->36% or 50%->25% or 40%->16%.

Yield Y means that the defect probability is P = 1-Y. A double-size core fails if either half does, so its defect probability is P' = 1 - (1-P)^2 = 2*P - P^2 = 2*(1-Y) - (1-Y)^2 = 1 - Y^2. That makes the yield Y^2.
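
A trivial numerical check of that algebra (Python):

```python
# Doubling die size under independent random defects squares the yield.
for y in (0.60, 0.50, 0.40):
    p = 1 - y                  # defect probability of the original die
    p2 = 2 * p - p * p         # double-size die fails if either half fails
    print(f"{y:.0%} -> {1 - p2:.0%}")   # 60% -> 36%, 50% -> 25%, 40% -> 16%
```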
 
Dio said:
If this bond is repeated in 2D (which it isn't) then a 0.13 micron feature would contain around 300,000 atoms. log2 of 300,000 is about 18 - so we have 18 generations, around 27 years, before features reach a single atom.... of course, we might expect things to fall apart at ten atoms, maybe, which chops a few generations off. But it's certainly not 'a single electron'.
The problem with your simple calculation is that the "single electron" I was talking about has nothing to do with a single atom. The "single electron" is from a dopant atom. That is, if silicon is the semiconductor, then there will be a very small concentration of Aluminum or Phosphorus. Aluminum, having one fewer valence electron than silicon, will create a "hole," while Phosphorus will create an extra "free" electron. The concentration of dopants relative to the base semiconductor must be very small for the crystal structure to remain the same.

This is the primary limitation of silicon. It relies upon doping to achieve conduction (fyi, the only other way for a semiconductor to achieve conduction is through temperature, but depending on a specific temperature for proper operation is clearly not a good idea. Doping is used to take as much of the temperature-dependence out as possible).

To go to a single-atom transistor, we'll need fundamentally new technology. I, for one, doubt that it's possible. Single-molecule logical elements may be possible, but not single-atom (note that the idea of a "transistor" is tied to semiconductor technologies. Transistors will, in all likelihood, not exist with the emergence of other computing technologies, such as quantum computing).
 
Dio said:
The bond length for Si-Si (according to webelements.com) is 235.2pm - i.e. about 1/550 the size of a 0.13 micron feature.

If this bond is repeated in 2D (which it isn't) then a 0.13 micron feature would contain around 300,000 atoms.

That's the bond length, but IIRC atoms of most elements are a few nanometers in diameter. Some might begin using 65nm by 2004....
 
zidane1strife said:
That's the bond length, but IIRC atoms of most elements are a few nanometers in diameter. Some might begin using 65nm by 2004....
Atomic diameters are (very) roughly 1-2 angstroms. Within solids, the atoms typically overlap slightly.

But just fyi, there are 7.50*10^22 Silicon atoms per cubic centimeter in a pure silicon crystal.

So, that makes 1.78*10^15 Si atoms for a square centimeter surface of silicon (btw, this is a rough estimate: the actual number depends on the crystal direction taken. I didn't bother to take this into account, as I just took the density per cubic centimeter and took it to the 2/3rds power).

This makes roughly 300,000 Si atoms in a .13 micron square feature.
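
The same estimate as a quick Python sketch (using the post's figures and the same rough 2/3-power shortcut, so the crystal-direction caveat applies):

```python
ATOMS_PER_CM3 = 7.50e22      # Si atoms per cm^3 of pure crystal (figure above)

atoms_per_cm2 = ATOMS_PER_CM3 ** (2 / 3)   # rough areal density, ~1.78e15 /cm^2

feature_cm = 0.13e-4         # 0.13 micron in centimetres
atoms_in_feature = atoms_per_cm2 * feature_cm ** 2

print(f"{atoms_per_cm2:.2e} atoms/cm^2, ~{atoms_in_feature:,.0f} per feature")
```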
 
arhra said:
WaltC said:
Could be wrong, but I always thought that Moore's law dealt with computational power, and that it has since been modified in the retelling to mean "number of transistors"...?
you've got that backwards ;)

Gordon Moore made his famous observation in 1965, just four years after the first planar integrated circuit was discovered. The press called it "Moore's Law" and the name has stuck. In his original paper, Moore observed an exponential growth in the number of transistors per integrated circuit and predicted that this trend would continue.

source

Chalnoth said:
No, Moore's "Law" (how in the world can you call an observation of the progression of engineering a Law? Bah...) originally dealt with an increase in transistor density (doubling about every 18 months). Its meaning has since been modified to mean, colloquially, that computational power will increase at about the same rate.

Heh...;) Thanks, fellas....it does indeed seem that I remembered it backwards...Thanks for the refresher...! (Now I can forget it all over again...)
 
Uttar said:
Wouldn't be surprised if low-end NV4x could double zixel output like the NV3x though, since they most likely will be more focused on 4x FSAA, maybe even 2x FSAA for the extreme low-end.

What the heck is a zixel?
 
reever said:
What the heck is a zixel?

Lol.... :p

We coined the term (well, OK, it was just me...which is why you never heard of it) to mean a "z-only" pixel. That is, a pixel that does not contain any color values...only z or stencil.

It's used as a shortcut to try and explain NV30/35's pixel writing capabilities. (It can write 4 pixels per clock, OR 8 zixels per clock.)
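
To put numbers on that (Python; the 450MHz clock here is just an illustrative figure, not a quoted spec):

```python
# Peak write rates for a part emitting 4 colour pixels OR 8 zixels per clock.
# 450 MHz is an illustrative clock speed, not a quoted NV35 spec.
clock_mhz = 450
print(f"colour fill: {clock_mhz * 4 / 1000:.1f} Gpixels/s")
print(f"z/stencil fill: {clock_mhz * 8 / 1000:.1f} Gzixels/s")
```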
 
It's been said before that the video industry has for a long time not followed Moore's law, and some company rep (I think it was an nv one) said the reason for this is that, unlike CPUs, GPUs/VPUs/whatever can quite easily extend the number of pipelines, as the pipelines are generally independent of each other (well, at least up until recent times).
 
Gotta admit an 8x1x2 (8 pipes, 1 texture unit, 2 pixel shading units) config for the R420 makes a lot of sense to me, especially from a historical perspective.

Whack in a few other goodies (Smartshader 3, Hyper-Z 4, Truform 3, etc.) and you'd have quite a nice piece of kit.
 
One thing that may work even better is a 16x1x1 architecture where each pipeline performs both vertex and fragment shading functions.
 
gkar1 said:
DaveBaumann said:
( 3 RV350s bolted together - for the pixel pipelines, that is, not for much other stuff )

:rolleyes:

You have no idea what you're talking about...

Something i have believed for quite some time. Maybe Uttar should stick with Nvidia speculation from now on ;)

Hehe. That's an idea. But no matter what, I'm not losing much at all by saying that - see, if I hadn't said it, you wouldn't have said it was incorrect. So it's a nifty way of double-checking stuff ;)
Although now that I've said that, I'm not betting on being able to try it again...

You know, if I were smart, I wouldn't speculate only on nVidia. I wouldn't speculate at all, and I'd quit everything related to GPUs so I could work on certain things. It's just taking me too much time considering what it's worth.

And don't worry, I'll do so one of these days - in fact, I hope before the end of the year.


Uttar
 
One final addendum re: the definition of Moore's Law.

You guys have got it quite right that Moore's Law refers to a doubling of transistor counts every period (~12 months when originally formulated, ~18 months for the last couple decades). But "a doubling of transistor counts" doesn't mean anything except in reference to some independent variable that is kept constant. (Certainly it is not a doubling of the maximum possible transistor count, which can pretty much be as large as you want if you're willing to waste the money on it.)

One might assume, then, that Moore was referring to a doubled transistor count with cost held constant, or yield held constant, or double the number of transistors per silicon area. Instead, the point of reference is a bit more complex, and a bit more interesting.

Thing is, for any given semiconductor process, there is a transistor count that is the most efficient for that process, i.e. has the lowest cost/transistor. (Note that we're talking cost per packaged and tested good IC.) Cost/transistor starts going up at high transistor counts for obvious reasons--yield declines as die area increases. But cost/transistor also goes up at low transistor counts, because testing costs are roughly constant per chip, and because packaging costs don't scale down as transistor counts decrease. (At quite low transistor counts this is due to being "pad-limited"; that is, there is a certain minimum die size required to accommodate the pads for however many pins a chip needs, so lowering transistor counts beyond what would fill that die size is pointless, because you have to expend the silicon anyway.)

These two effects combine to give cost/transistor vs. transistor count a sort of U-shaped curve. The minimum of this curve is the most efficient transistor count for that particular process node. And that value is what Moore's Law predicts will double every period.
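
To make the U-shape concrete, here's a toy model in Python - every constant is made up purely for illustration: die cost scales with area, yield falls off exponentially with area (a simple Poisson defect model), and test/packaging adds a fixed per-chip cost:

```python
import math

# Toy cost-per-transistor model; all constants invented for illustration only.
WAFER_COST_PER_MM2 = 0.10    # $/mm^2 of silicon
DEFECTS_PER_MM2 = 0.005      # defect density
MM2_PER_MTRANS = 1.0         # area per million transistors
TEST_AND_PACKAGE = 2.00      # fixed $/chip for test and packaging

def cost_per_transistor(mtrans):
    area = mtrans * MM2_PER_MTRANS
    yld = math.exp(-DEFECTS_PER_MM2 * area)           # Poisson yield model
    chip = (WAFER_COST_PER_MM2 * area + TEST_AND_PACKAGE) / yld
    return chip / (mtrans * 1e6)                      # per good transistor

for m in (5, 20, 80, 140, 320, 1280):
    print(f"{m:5d} Mtrans: ${cost_per_transistor(m):.2e}/transistor")
```

Sweep the count and cost/transistor falls, bottoms out, then climbs as yield collapses; that minimum is the "most efficient" count, and it's what Moore's Law tracks from process to process.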

A very important thing to note is that if, to enable the functionality you seek, you require much above this most efficient transistor count, eventually it becomes more efficient to use two (or more) ICs instead of one. That is, Moore's Law is primarily a measure of levels of integration. Much more directly than it explains increasing CPU performance (much less clock speed), Moore's Law explains things like integrated FPUs, integrated geometry/T&L units, integrated video chipsets and SoCs. It's also key to explaining why digital cameras will have gone from essentially none of the consumer camera market to essentially all of it in the space of 5 or 6 years or so.

The amazing thing is that these are exactly the sorts of predictions Moore makes in his paper (and, again, not anything to do with increasing clock speed or even performance): in the future, levels of integration will rise to the point where more and more functionality will be integrated onto fewer ICs, and more devices (particularly mobile devices) will become commercially feasible as a result.

Ars Technica did a great article on Moore's Law some months back that basically covers the above with some pictures, etc. Or you could just read the darn thing yourself. I highly recommend it. To me it's perhaps the most prescient document in computer science, with the possible exception of Turing's paper on AI. Of course, I'm a sucker for that sort of thing, so YMMV.
 
Chalnoth said:
Atomic diameters are (very) roughly 1-2 angstroms. Within solids, the atoms typically overlap slightly.

But just fyi, there are 7.50*10^22 Silicon atoms per cubic centimeter in a pure silicon crystal.

So, that makes 1.78*10^15 Si atoms for a square centimeter surface of silicon (btw, this is a rough estimate: the actual number depends on the crystal direction taken. I didn't bother to take this into account, as I just took the density per cubic centimeter and took it to the 2/3rds power).

This makes roughly 300,000 Si atoms in a .13 micron square feature.

Thanks for the correction, I was probably recalling the length for something else.... err, nope, I think I forgot the decimal point. So as you said, atoms are .1-.2nm in diameter... not that far off...

 
Well, 30nm could be available by 2005-2006; at this rate we'll be at 10nm or below by the end of the decade. That'd be approx. 100 atoms in length... hmmm....
 