NVIDIA may announce NV 40 tomorrow

rwolf said:
The GPU/VPU would cost too much with that many transistors.

Plus, I challenge you to get such a beast at clock frequencies anywhere near as high as 500 MHz. In the end, it might not be that much faster than the "real" design of these chips.


Uttar
 
@Uttar
The number of transistors has no significant influence on the possible core clock of a chip, as long as you can manage the cooling.
But the yields depend on both ;)

So the "fat R300" could reach 325 MHz at 150 nm and the R50/380 ~400 MHz... it's a matter of design, not of transistor count (the P4 EE can be clocked as high as the P4 Northwood despite having significantly more transistors).
 
The number of transistors has no significant influence on the possible core clock of a chip, as long as you can manage the cooling.

That single conditional is what makes him actually have a point.
 
Robbitop said:
@Uttar
The number of transistors has no significant influence on the possible core clock of a chip, as long as you can manage the cooling.
Heat is already a major factor. In fact, I'd say it's one major reason why GPUs aren't currently designed to run in the multi-gigahertz range. After all, since most of the GPU is active at all times during 3D rendering (which will probably be even more the case if the pipelines are unified), the same number of transistors at the same clock speed would tend to produce quite a bit more heat on a GPU than on a CPU.

Another major reason is, of course, that you would need incredible pipelining to clock a GPU higher, so the number of functional units would have to decrease in order to increase the clock speed.
 
Robbitop said:
@Uttar
(the P4 EE can be clocked as high as the P4 Northwood despite having significantly more transistors)

That's true, but afaik the high transistor count comes from increased L2 cache, not from increased core complexity. Shouldn't this matter?
 
Increased transistor count may not lower the clock frequency of a design; however, if the transistor count doubles, the defect rate grows by a factor of four. Not only that, but fewer chips fit on a wafer. Going from 150M to 175M transistors reduces the number of possible chips by ~17 percent and raises the defect rate by ~36%. As you can see, the cost of each working chip increases, and one way to recover those costs is to reduce the clock speed slightly so that more chips work. Either way, keeping the transistor count to a minimum is best.
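A quick back-of-the-envelope sketch (Python, purely illustrative) of where the ~17% and ~36% figures come from, under this post's own assumption that the defect rate scales with the square of die area; later replies dispute that scaling:

```python
# Back-of-the-envelope for the figures above, under the poster's own assumptions:
#   - die area grows roughly linearly with transistor count
#   - defect rate grows with the square of die area (disputed later in the thread)
# The numbers are illustrative, not real process data.

base = 150e6     # transistors in the baseline design
bigger = 175e6   # transistors in the fatter design

area_ratio = bigger / base              # ~1.17x the area

chips_lost = area_ratio - 1             # ~17% fewer candidate dies per wafer
                                        # (strictly 1 - 1/area_ratio ~ 14%, but the
                                        #  rounder approximation matches the post)
defect_growth = area_ratio ** 2 - 1     # ~36% higher defect rate under area^2 scaling

print(f"candidate dies per wafer: roughly {chips_lost:.0%} fewer")
print(f"defect rate: roughly {defect_growth:.0%} higher")
```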
 
rzr919 said:
That's true, but afaik the high transistor count comes from increased L2 cache, not from increased core complexity. Shouldn't this matter?
Certainly. Cache definitely won't be used at all times, and so won't produce as much heat as transistors used for logic.
 
The P4 EE consumes much more power than the Northwood. It needs 91A MOSFETs... the P4 NW only needs 70A ones...


Of course, there is an indirect influence on the clocks: you can lower the clock rate so that yields increase. That is a way to compensate for the growing chip size by trading clock speed for yield.

But directly, the number of transistors has no influence on the possible clock rate.
 
Xmas said:
lost said:
however, if the transistor count doubles, the defect rate grows by a factor of four.
Not true.
Indeed. AFAICS it's an exponential function so it can be both better and worse.

If the area of a chip is A, and the probability that a particular unit area of the chip is manufactured without errors is E, then the probability of the whole chip working is going to be E^A.

The cost per chip (which is the bigger issue), however, is going to be a different matter, as the number of produced chips per wafer is going to be worse than inversely proportional to the size of the chips (larger dies waste more of the wafer around its edges). When you factor in the increasing failure rate, things could become expensive very quickly.

Of course, some sections of chips can be made tolerant to a certain number of errors (e.g. RAM sections), but not all can be treated in this manner.
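A minimal sketch of that yield-and-cost intuition in Python; the wafer area, wafer cost, and defect density below are made-up illustrative values, not real process data:

```python
import math

# Illustrative yield/cost model for the post above.
# Yield of a die of area A, when each unit area is defect-free with probability E,
# is E**A -- equivalently exp(-D*A) for an average defect density D.
# All constants below are assumptions chosen only to make the trend visible.

WAFER_AREA_MM2 = 70_000     # rough usable area of a 300 mm wafer (assumed)
WAFER_COST = 5_000          # cost to process one wafer, arbitrary units (assumed)
DEFECT_DENSITY = 0.002      # average defects per mm^2 (assumed)

def good_dies_per_wafer(die_area_mm2):
    # Gross dies per wafer; real layouts lose extra dies at the wafer edge,
    # which is why this simple division is optimistic for large dies.
    gross = WAFER_AREA_MM2 // die_area_mm2
    # Simple exponential yield model: P(die works) = exp(-D * A)
    yield_rate = math.exp(-DEFECT_DENSITY * die_area_mm2)
    return gross * yield_rate

for area in (150, 175, 300):
    good = good_dies_per_wafer(area)
    print(f"{area} mm^2 die: ~{good:.0f} good dies per wafer, "
          f"~{WAFER_COST / good:.1f} per good die")
```

Running it shows the cost per good die rising considerably faster than the die area itself, which is the point being made about large chips getting expensive very quickly.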
 
Do current ASIC manufacturers duplicate functional units to reduce the effect of process errors on yield?
 
991060 said:
Do current ASIC manufacturers duplicate functional units to reduce the effect of process errors on yield?
I think what you see instead is the selling of "crippled" chips. For example, the original Radeon 9500 was simply a Radeon 9700 with half of its pipelines disabled. This allowed many of the 9700s with errors to simply be labeled as 9500s. We've seen similar situations a few times over the years.
 