Will video cards soon be seeing new transistors?

K.I.L.E.R

http://news.zdnet.com/2100-9584_22-5570137.html

While processors used in today's computers can be made larger to support more transistors, what about video cards?
I for one don't see Ati or nVidia using anything smaller than 70nm with current silicon-based chips.
The article states that in about a decade's time microchips will use hybrid materials.

Will that be too long for Ati/nVidia?
With the current trend we are seeing, I believe both IHVs could move to integrated multi-chip boards: two chips integrated together and presented as a single chip.
An alternative to this idea would be for both IHVs to move to hybrid silicon chips earlier, say within the next six years.

I believe the latter is the more feasible approach, because integrating multiple (Rxxx or nVxx) chips under one roof more cheaply than just sticking two chips on a board would cost the R&D teams far more than researching how to blend other elements into silicon for further performance increases.

What do you guys think?
 
K.I.L.E.R said:
http://news.zdnet.com/2100-9584_22-5570137.html

While processors used in today's computers can be made larger to support more transistors, what about video cards?
I for one don't see Ati or nVidia using anything smaller than 70nm with current silicon-based chips.

So do you see Ati or nVidia building fabs in the future? If not, I think the more pertinent discussion may be along the lines of what the different fabs are going to be offering when that time rolls around.

With the current trend we are seeing

With respect to the current trend of SLI, I really think it has far more to do with marketing than with any sort of performance wall. All it does is give a leg up, in marketing speak, to whoever has it: you can then convince people that there is a viable upgrade route in the future. What will happen instead is that when ATI/Nvidia go to 90nm you'll see double the pipelines, faster memory, and performance improving roughly as one would expect, which is more than SLI manages in most circumstances. They will then push those chips, and their SLI counterparts. By the same token there won't be problems with render-to-texture, or with having to double the amount of memory while still having the same amount usable.

That's why I doubt you'll see SLI for anything other than the high-end boards; it's simply not cost-effective, and in many cases the performance return doesn't match the price increase.
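Just to put toy numbers on that comparison (everything below is an assumption for illustration, including the ~1.6x SLI scaling factor, the mirrored framebuffer, and the prices):

```python
# Toy comparison of the two upgrade paths, with made-up numbers.
# Assumptions: SLI scales ~1.6x in typical cases and mirrors the
# framebuffer (so usable memory stays flat), while a doubled-pipeline
# chip on a smaller node scales close to 2x and uses all its memory.

def sli(perf, price, mem_mb, scaling=1.6):
    """Two boards: double the price, sub-linear speedup, mirrored memory."""
    return {"perf": perf * scaling, "price": price * 2, "usable_mb": mem_mb}

def doubled_pipelines(perf, price, mem_mb, premium=1.3):
    """One new chip with twice the pipelines at an assumed price premium."""
    return {"perf": perf * 2.0, "price": price * premium, "usable_mb": mem_mb}

base = dict(perf=100.0, price=400.0, mem_mb=256)
for name, r in (("SLI", sli(**base)), ("2x pipelines", doubled_pipelines(**base))):
    print(f"{name:>12}: perf {r['perf']:.0f}, price ${r['price']:.0f}, "
          f"{r['usable_mb']} MB usable, {r['perf'] / r['price']:.3f} perf/$")
```

With these assumed numbers the single doubled-pipeline part wins on performance, price, and usable memory at once, which is the gist of the argument above.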

I believe both IHVs could move to integrated multi-chip boards: two chips integrated together and presented as a single chip.
An alternative to this idea would be for both IHVs to move to hybrid silicon chips earlier, say within the next six years.

I'm a little confused as to why you think that multicore is dependent on current silicon, and that hybrid silicon or the other techniques would change that.

Though in actuality I guess I'm a little more confused as to why you think that multicore is the way of the future, when it would be more conducive to performance, transistor count, and R&D costs to increase the number of pipelines.

I believe the latter is the more feasible approach, because integrating multiple (Rxxx or nVxx) chips under one roof more cheaply than just sticking two chips on a board would cost the R&D teams far more than researching how to blend other elements into silicon for further performance increases.

What do you guys think?

Like I said above, increasing the number of pipelines works much better on all fronts for performance and cost than multicore does, at least in the world of GPUs. CPUs, on the other hand, are a totally different story.

And I guess I would like a little more clarification on how you're trying to tie together manufacturing process and multicore, just in case I'm not understanding something quite right, because the two are pretty much independent other than the number of gates available for a given die size.
 
Just about anything you do in a silicon chip will use up transistors. More pipelines mean more transistors.
Soon, getting more performance out of a smaller die will not be feasible with the current materials.

Silicon-based chips will be forced to use new materials sooner than 10 years from now, and if that isn't feasible then a multi-chip solution is the only way to go.

Hybrid silicon will allow more shrinkage with lower leakage, which is currently a problem and will become a larger one after the next generation of chips.

I just don't see any other alternatives after the upcoming video cards.
Unless of course they break Moore's law, with faster video cards coming out every 3-4 years rather than every two.
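For a rough sense of what those process shrinks buy (a minimal sketch; the base node and transistor count are assumptions, not actual Ati/nVidia figures), transistor density grows roughly with the inverse square of the feature size:

```python
# Rough scaling sketch: for a fixed die area, the transistor budget
# grows roughly as the inverse square of the process node. The base
# node and count below are assumptions, not real vendor figures.

BASE_NODE_NM = 130
BASE_TRANSISTORS_M = 160  # millions of transistors at the base node

for node_nm in (130, 110, 90, 70):
    budget = BASE_TRANSISTORS_M * (BASE_NODE_NM / node_nm) ** 2
    print(f"{node_nm:>3} nm: ~{budget:.0f}M transistors in the same die area")
```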
 
Well, multiple chips within the same packaging may be a good idea for graphics moving into the future, as it would allow for much less waste of silicon than current strategies of disabling inactive units. There are obvious challenges with getting this off the ground, naturally, but it seems like a plausible direction for the technology to move (though it would probably only be a viable option for high-end designs with low yields).
 
"Will video cards soon be seeing new transistors?"

They're always new. Do you think they prise them out of old redundant chips to assemble new ones? :p :p :p
 
Chalnoth said:
Well, multiple chips within the same packaging may be a good idea for graphics moving into the future, as it would allow for much less waste of silicon than current strategies of disabling inactive units. There are obvious challenges with getting this off the ground, naturally, but it seems like a plausible direction for the technology to move (though it would probably only be a viable option for high-end designs with low yields).

I'd figure it would become a consideration when scaling the number of units per chip gets critical.

I have the impression, though, that the purpose of disabling units on a chip is primarily to still be able to sell a working chip while disabling its possibly malfunctioning parts. That's always better, IMO, than it ending up in a trash can. I don't see why there would be a reason to abandon that kind of strategy for single-chip-per-package solutions (I figure that multi-chip-per-package would theoretically only be considered with fully functional chips).
 
Ailuros said:
I have the impression, though, that the purpose of disabling units on a chip is primarily to still be able to sell a working chip while disabling its possibly malfunctioning parts. That's always better, IMO, than it ending up in a trash can. I don't see why there would be a reason to abandon that kind of strategy for single-chip-per-package solutions (I figure that multi-chip-per-package would theoretically only be considered with fully functional chips).
Well, the problem with disabling units on a chip is that the number of chips that fail is never going to match the number you want to sell in that price range. So, you end up wasting lots of silicon in the end.

If the mechanism for linking together multiple chips on a package becomes efficient enough, no, there would be no reason to disable portions of the chip as you'd just make each one very small.
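The usual back-of-the-envelope yield model makes this concrete: yield falls off exponentially with die area, so several small, individually-tested dies waste far less silicon than one big die. A minimal sketch, assuming a Poisson defect model and made-up defect density and areas:

```python
import math

# Classic Poisson yield model: Y = exp(-D * A), where D is the defect
# density and A the die area. All numbers are assumptions for illustration.

D = 0.5  # defects per cm^2 (assumed)

def die_yield(area_cm2):
    return math.exp(-D * area_cm2)

big_die = 3.0    # cm^2, one monolithic GPU (assumed)
small_die = 0.8  # cm^2 per linked die, including some link overhead (assumed)

# Small dies are tested before packaging, so only the bad ones get
# discarded; with a monolithic die one defect can sink the whole thing.
print(f"monolithic {big_die} cm^2 die: {die_yield(big_die):.1%} good")
print(f"linked     {small_die} cm^2 die: {die_yield(small_die):.1%} good")
```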
 
Chalnoth said:
Ailuros said:
I have the impression, though, that the purpose of disabling units on a chip is primarily to still be able to sell a working chip while disabling its possibly malfunctioning parts. That's always better, IMO, than it ending up in a trash can. I don't see why there would be a reason to abandon that kind of strategy for single-chip-per-package solutions (I figure that multi-chip-per-package would theoretically only be considered with fully functional chips).
Well, the problem with disabling units on a chip is that the number of chips that fail is never going to match the number you want to sell in that price range.

But shouldn't the price range be set by the number of chips available (as well as demand)?

So, you end up wasting lots of silicon in the end.

If the mechanism for linking together multiple chips on a package becomes efficient enough, no, there would be no reason to disable portions of the chip as you'd just make each one very small.

Problem is that any mechanism used to efficiently link the chips is also going to waste a lot of silicon for one of the two configurations.
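To illustrate that objection with a toy area budget (all figures below are assumed mm^2 values, purely for illustration): the once-only blocks and the link get stamped onto every die, so one configuration or the other always carries dead silicon.

```python
# Toy area accounting for the "wasted silicon" objection. The once-only
# blocks (bus interface, display logic, etc.) and the inter-die link sit
# on every die, so some configuration always carries silicon it never uses.
# All areas are assumed values in mm^2, purely for illustration.

ONCE_ONLY = 40  # needed once per board, duplicated on every extra die
LINK      = 15  # inter-die link, dead weight in a single-die product
SHADERS   = 60  # the part that actually scales with die count

def board(num_dies):
    total = num_dies * (ONCE_ONLY + LINK + SHADERS)
    used = ONCE_ONLY + num_dies * SHADERS + (num_dies * LINK if num_dies > 1 else 0)
    return total, total - used

for n in (1, 2, 4):
    total, wasted = board(n)
    print(f"{n} die(s): {total} mm^2 total, ~{wasted} mm^2 idle or duplicated")
```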
 
Killer-Kris said:
But shouldn't the price range be set by the number of chips available(as well as demand)?
Many fully-functional chips are sold as cut-down designs. This is evidenced by the fact that many people find they can unlock the disabled units with no adverse effects. This is wasted silicon for the manufacturer.

Problem is that any mechanism used to efficiently link the chips is also going to waste a lot of silicon for one of the two configurations.
How? With a cut-down configuration you just won't use as much silicon.
 
Chalnoth said:
How? With a cut-down configuration you just won't use as much silicon.
You waste space on parts that you need only once, and on the link itself.

Do you mean linking two identical dies (dice?), or having one "interface" and attaching multiple "shader units" to it?
 
Well, I would suspect that what you'd do is make it so that your multiple dies could sort of connect right up to each other (perhaps with some appropriate machining to get things right). Now, I'm really not going to be able to get any more specific than that, as this sort of an architecture would really require one hell of a lot of problem solving to get to work. But, on the surface it seems like it could be more efficient than disabling pieces of chips.
 
Chalnoth said:
Ailuros said:
I have the impression, though, that the purpose of disabling units on a chip is primarily to still be able to sell a working chip while disabling its possibly malfunctioning parts. That's always better, IMO, than it ending up in a trash can. I don't see why there would be a reason to abandon that kind of strategy for single-chip-per-package solutions (I figure that multi-chip-per-package would theoretically only be considered with fully functional chips).
Well, the problem with disabling units on a chip is that the number of chips that fail is never going to match the number you want to sell in that price range. So, you end up wasting lots of silicon in the end.

If the mechanism for linking together multiple chips on a package becomes efficient enough, no, there would be no reason to disable portions of the chip as you'd just make each one very small.

Not in the case where you have, for the same model, both a number of chips with disabled units and the same thing built from the ground up with fewer units, in parallel. I could easily imagine such a scenario with NV41/NV40 for 6800nonU's. In that case unlocking will obviously be worthless, since the disabled units would be damaged anyway, but that isn't a consideration for the IHV. That way chips with small defects can be saved and no silicon gets wasted to meet the demand.

If anything, throwing away a 222M-transistor chip isn't ideal either, unless it were possible to have no faulty chips at all; as complexity rises even further, that doesn't sound feasible to me either.

By the way, I severely doubt that NV is losing any money on cut-back 6800nonU's. Not with a single-slot, 128MB DDR1 @ 350MHz solution at least.

***edit: how "small" anyway? By today's standards it couldn't be smaller than one quad, which sounds too small to me and my limited knowledge. If I assume two quads, it still sounds too complicated, since future solutions (WGF2.0 and beyond) obviously aren't going to be limited to 4 or 6 quads in the end.
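A minimal sketch of that salvage arithmetic, assuming a Poisson defect model and made-up figures for die area, defect density, and per-quad area (nothing here is real NV40/NV41 data): if a die's only defect lands inside one quad, the chip can still ship as a cut-down part instead of being scrapped.

```python
import math

# Salvage-binning sketch: a die whose single defect falls inside one quad
# can still be sold as a cut-down SKU. Defect density, die area, and the
# per-quad area share are all assumptions, not real NV40/NV41 figures.

D = 0.5            # defects per cm^2 (assumed)
AREA = 2.9         # die area in cm^2 (assumed)
QUAD_SHARE = 0.15  # fraction of die area in one quad (assumed)
NUM_QUADS = 4

lam = D * AREA
p_full = math.exp(-lam)                      # zero defects: full-spec part
p_one = lam * math.exp(-lam)                 # exactly one defect (Poisson)
p_salvage = p_one * NUM_QUADS * QUAD_SHARE   # that defect hit some quad

print(f"full-spec dies:   {p_full:.1%}")
print(f"salvageable dies: {p_salvage:.1%} (sold as cut-down parts)")
print(f"scrap:            {1 - p_full - p_salvage:.1%}")
```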
 