What complacency? LoL. Thank you so very much for illustrating my point. This complacency leads to idleness and a catastrophic failure of innovation. I hope you're not one of those engineers!
All transistors leak.
Any device that maintains different voltage levels will leak.
You can make a device out of anything, and the materials it is made of will always exhibit non-ideal behavior.
There is no magical "frequency" that makes a transistor stop leaking, save the case where no power is fed to it at all.
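For reference, the textbook first-order subthreshold expression makes the point (this is a sketch for a simple long-channel MOSFET, ignoring gate and junction leakage):

I_sub ≈ I_0 * exp((V_gs - V_th) / (n * V_T)) * (1 - exp(-V_ds / V_T)), where V_T = kT/q

Pulling V_gs below V_th shrinks the current exponentially, but it never reaches zero while the rails are powered.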
You've just plainly stated "don't use it, and it won't leak". You've also said I'm not using the right term, because it might confuse people. lol.
I was restating your argument, or at the very least the implications of an argument I do not believe you have fully thought through.
Using words in ways they are not used when discussing this topic leads to confusion, because nobody can tell which meaning you intend.
I couldn't read your mind if we were talking in person, and I certainly can't read it when it's text over the internet.
If we don't know what terms the other is using, we can't have a meaningful conversation.
I've given you examples, and I've pointed out that what you want is the removal of the very mechanism that makes semiconductor devices work.
You seem pretty defensive over this. I don't get it. Have a great day.
I've pointed out that your reasoning about the usefulness of going multichip to combat leakage is incorrect.
To get this back on topic, I'm going to lay out a list of some pros and cons of a multichip design:
Pro:
1) Die size is no longer a hard limit on the number of transistors that can be used in a design.
2) As a result, a design can contain more features and more units while the cost of making the product stays closer to a linear multiple of a chip produced at a manufacturing sweet spot. Larger chips scale superlinearly in manufacturing cost (see the rough yield sketch after this list).
3) This approach is more flexible when it comes to amortizing design costs of the chip over multiple price segments.
4) Device variation and defects can be selected around with more granularity.
Less silicon overall has to be tossed out because one component of the die falls out of spec.
This is the one area where multichip can help mitigate the impact of variation in device leakage on the binning of processors. Individual chips with poor leakage characteristics can be selected out or mixed and matched, instead of larger monolithic cores tending to do either very well or very badly.
This does not solve leakage; it just reduces the number of individual products that cannot reach some minimum spec.
5) Simpler chips can be less picky about power management than a single monolithic chip. Instead of complex arbiter logic for individual sectors, a multichip card can just throttle or gate an entire chip. This may or may not be a win, depending on just how far down the complexity scale each die is.
The rise in complexity in other areas can counteract this advantage.
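To put a rough number on the superlinear cost claim in Pro 2, here is a back-of-the-envelope sketch using the classic Poisson yield approximation. The wafer cost, usable area, and defect density below are made-up illustrative figures, not real foundry data:

```python
import math

# Back-of-the-envelope die cost using the classic Poisson yield
# approximation: yield = exp(-defect_density * die_area).
# All figures below are made-up illustrative values, not foundry data.

WAFER_COST = 5000.0      # dollars per wafer (assumed)
WAFER_AREA = 70000.0     # usable mm^2 per wafer (rough)
DEFECT_DENSITY = 0.005   # defects per mm^2 (assumed)

def cost_per_good_die(die_area_mm2):
    dies_per_wafer = WAFER_AREA / die_area_mm2             # ignores edge loss
    die_yield = math.exp(-DEFECT_DENSITY * die_area_mm2)   # Poisson model
    return WAFER_COST / (dies_per_wafer * die_yield)

for area in (100, 200, 400):
    print(f"{area} mm^2 die: ${cost_per_good_die(area):.2f} per good die")
```

Doubling the die area halves the candidate dies per wafer but shrinks yield exponentially, so with these numbers the cost per good die goes from about $12 at 100mm^2 to about $39 at 200mm^2 to about $211 at 400mm^2.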
Con:
1) Multichip is more complex to manufacture. There is a balance between the defect rates of individual chips and the defect rates from packaging them together.
The primary tipping point will be based on the overall cost of discarded products and the distribution of dies amongst higher margin price segments.
Large dies scale worse than linearly in cost, while multichip modules have a more complex set of parameters, including the number of chips, the amount of interconnect on the package, and the pinout of the package (a rough cost comparison is sketched after this list).
2) Multichip, with all else being equal, is not a performance win per transistor. Any off-chip communication, even if on package, has higher latency and is possibly more prone to error. Additional buffers, signal drivers, and control logic are necessary, whereas a single monolithic core would simply not need them.
This also has the side effect of possibly worsening per-die yield rates, as control and communications logic must take up a larger portion of the die, and that logic is the hardest to make redundant.
3) Multichip, with all else being equal, will likely scale better than multi-board GPUs. However, since GPUs are already so well distributed internally, the overall factor of improvement from going multichip will not be as high as multichip or multicore is for CPUs; GPUs have already gathered a lot of the low-hanging fruit in this arena. On the other hand, GPUs scale so well on current workloads anyway that it's more of a wash.
4) Multichip, aside from keeping some units from being discarded due to binning, does not improve power consumption or leakage. The hefty IO requirements for an increasing number of chips and a more complex package will worsen performance per watt.
At small chip counts, this penalty should be small. It will become more significant as the package and pinout grow more complex.
5) On the software side, multichip is more complex. Naively treating a multichip module as a single chip is a fast way to run into performance problems. Communications overhead will be less than that of multi-board, but more than that of a single chip.
How access to video memory will be handled as chip counts rise will be an interesting thing to watch.
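To make the tipping point in Con 1 concrete, the same toy yield model can be extended to compare one large die against a package of smaller dies carrying the same total logic. The packaging yields and per-die IO overhead are, again, assumptions for illustration only:

```python
import math

# Toy monolithic-vs-multichip cost comparison, reusing the Poisson
# yield sketch above. All figures are illustrative assumptions.

WAFER_COST = 5000.0       # dollars per wafer (assumed)
WAFER_AREA = 70000.0      # usable mm^2 per wafer (rough)
DEFECT_DENSITY = 0.005    # defects per mm^2 (assumed)
IO_OVERHEAD_MM2 = 20.0    # extra buffers/drivers per die when split (assumed)
PACKAGING_YIELD = {1: 1.00, 2: 0.98, 4: 0.95}  # assumed assembly yields

def cost_per_good_die(area_mm2):
    dies_per_wafer = WAFER_AREA / area_mm2            # ignores edge loss
    die_yield = math.exp(-DEFECT_DENSITY * area_mm2)  # Poisson model
    return WAFER_COST / (dies_per_wafer * die_yield)

def module_cost(total_logic_mm2, n_chips):
    # Each die carries its share of the logic, plus IO overhead if split.
    overhead = IO_OVERHEAD_MM2 if n_chips > 1 else 0.0
    die_area = total_logic_mm2 / n_chips + overhead
    # Packages discarded at assembly scale the cost of all dies in them.
    return n_chips * cost_per_good_die(die_area) / PACKAGING_YIELD[n_chips]

for n in (1, 2, 4):
    print(f"{n} chip(s): ${module_cost(400, n):.2f} per good module")
```

With these made-up numbers the four-chip module comes out cheapest despite the IO overhead and packaging losses, but worse packaging yields or heavier per-die overhead push the tipping point back toward monolithic, which is exactly the balance Con 1 describes.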
The moral of the story, as it is for CPUs, is that monolithic is the way to go unless it becomes impractical.
It seems that this point is approaching rapidly for both graphics IHVs.
One question I have is whether they will both hit this point at the same time and at the same segments.
Whichever IHV is able to keep monolithic GPUs longer and is able to keep them at higher market segments will likely have a much simpler time with performance and software issues.