The future complexity of chips and its side effects

K.I.L.E.R

Okay, I am going to try not to get caught up in the current NV30 events.

I will just say this:

The more complex our chips get, the less headroom is left in the current silicon process. You would think that would be the worst of our problems?

Nope! We also have to deal with optimising the cards themselves and being able to take advantage of them.
Considering that the NV30 has numerous problems associated with its architecture, you would think they were strictly the result of bad engineering.

I don't believe it is strictly an engineering problem; I believe some of the problems come from working with such a complex architecture.

This will not get any easier: the more transistors we push in, the more complex the design will get, and a more complex design will be much harder to optimise.

So how do we overcome the problem of complex architectures?
I am pretty sure I am only scratching the surface of the problems here. I am sure heat dissipation will also become more of an issue. I won't be surprised if the NV50/R500 are running at 400MHz. :LOL:
 
Let me give you a few points:

1. GPUs are all about having multiple units work in parallel. That means that with 8 pipe designs, you really only have to design one pipe and replicate it 8 times ( it's slightly more complicated than that, but you get the point; see the sketch below this list )
2. To minimize the time required, it might make sense to actually use higher-level languages. The requirement, then, is more efficient compilers and synthesis tools, in order to get code as efficient as, or more efficient than, it would be otherwise. VHDL is already quite high-level, though, isn't it?
3. The NV30 is a bad example - nVidia themselves admitted the NV30 was under-resourced because of the XBox ( see recent conference call )
4. It's all about the mythical man-month.
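
To make point 1 concrete, here's a minimal sketch in Python. It's purely illustrative - a real pipe would be described in an HDL, and the shading math and fragment values here are invented for the example:

```python
# Minimal sketch of point 1: design one pixel pipe once, then replicate it.
# Purely illustrative - a real pipe would be described in an HDL, and the
# shading math and fragment values here are invented for the example.

def pixel_pipe(fragment):
    """The work of a single pipe: shade one fragment (toy placeholder math)."""
    r, g, b = fragment
    return (min(r * 2, 255), min(g * 2, 255), min(b * 2, 255))

NUM_PIPES = 8  # an "8 pipe" design: the same unit, instantiated 8 times

def shade_batch(fragments):
    """Each cycle, up to NUM_PIPES fragments are handled in parallel."""
    return [pixel_pipe(f) for f in fragments[:NUM_PIPES]]  # conceptually simultaneous

print(shade_batch([(10, 20, 30)] * NUM_PIPES))
```

The point is that the design effort goes into pixel_pipe once; the replication itself is nearly free, which is why wide parallel designs scale so well.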


Uttar
 
Uttar said:
Let me give you a few points:

1. GPUs are all about having multiple units work in parallel. That means that with 8 pipe designs, you really only have to design one pipe and replicate it 8 times ( it's slightly more complicated than that, but you get the point )
2. To minimize the time required, it might make sense to actually use higher-level languages. The requirement, then, is more efficient compilers and synthesis tools, in order to get code as efficient as, or more efficient than, it would be otherwise. VHDL is already quite high-level, though, isn't it?
3. The NV30 is a bad example - nVidia themselves admitted the NV30 was under-resourced because of the XBox ( see recent conference call )
4. It's all about the mythical man-month.


Uttar

Keep in mind that it is far more complicated than that. I do realise the basics of GPUs, and believe me, they will get far more complex; as they do, they will run into problems whichever way you look at it.

Yes, the NV30 was a "bad" card, but I was only using it as an example. Maybe I shouldn't have, but I couldn't think of anything better at the time.

Do you really think silicon will last forever and that GPUs will follow current designs for eternity? I don't think so. I am looking into the future, not at the present. Things change drastically with time.

Take a look at what the first computer was and how it was used, and compare it with today's PCs. Massive difference, isn't there?
 
Elroy: Nope, I don't remember anyone posting a summary.
Some of the key points:
- Jen-Hsun is being much more prudent about the XBox 2, one of the reasons being that the XBox is what caused the under-resourcing of the NV30 ( which caused some features to be left out )
Sony being very aggressive means that the XBox 2 would have to be a project as big as the XBox 1, if not bigger. This may result in the same problems as before, and thus prudence is crucial here.

- nVidia currently have 25% of the AMD chipset market, and are entering the AMD workstation market. Their internal goal is to have 50% of the AMD chipset market by year end. They are not currently trying to get an Intel license, but think that once they pretty much hit the ceiling of the AMD market ( = 50% ), they'll have to reconsider it. They also note that Intel chipsets are always used in things like Dells, HPs, ... and never VIA - it might thus be harder to gain share in that market.
They hope that once they have established themselves as the market leader of the AMD chipset segment, Intel will accept a slightly lower license cost.

- They're going to manufacture an undisclosed part at IBM as soon as July 2003.


Note: there were no details about whether this meant mass production or limited test production. So it might be the NV40, or it might not be.


K.I.L.E.R: Eternity is a very strange way to look at things, IMO.
Let's face it: in 10 years, Moore's Law is likely to no longer be truly applicable. Process transitions are gonna get slower. And product cycles, too.
Heck, if you put more time into the design of a GPU, what's the problem? It's really all about man-months. So while cycles *will* get longer, and prices maybe higher, I believe no dramatic changes are gonna happen, at least not in the next 15 years.
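
To put rough numbers on that, here's a back-of-the-envelope sketch. The ~125 million transistor count for the NV30 is the commonly quoted figure; the doubling periods are illustrative assumptions, not anything from the call:

```python
# Back-of-the-envelope Moore's Law projection from the NV30's commonly
# quoted ~125M transistors. The doubling periods are illustrative assumptions.

NV30_TRANSISTORS = 125e6

def projected(years, doubling_years):
    """Transistor count after `years`, doubling every `doubling_years`."""
    return NV30_TRANSISTORS * 2 ** (years / doubling_years)

for doubling in (1.5, 2.0, 3.0):  # shorter vs. longer product cycles
    print(f"10 years, doubling every {doubling}y: ~{projected(10, doubling) / 1e9:.1f}B transistors")
```

Even if the doubling period stretches from 1.5 to 3 years, you still go from ~12.7 billion down to ~1.3 billion transistors after a decade: an order of magnitude difference in the endpoint, but growth either way.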


Uttar
 
K.I.L.E.R: Eternity is a very strange way to look at things, IMO.
Let's face it: in 10 years, Moore's Law is likely to no longer be truly applicable. Process transitions are gonna get slower. And product cycles, too.
Heck, if you put more time into the design of a GPU, what's the problem? It's really all about man-months. So while cycles *will* get longer, and prices maybe higher, I believe no dramatic changes are gonna happen, at least not in the next 15 years.

Not literally eternity. :LOL:

Though you do realise the time frame I am looking at is a very long way into the future, when silicon is obsolete and GPU/CPU architectures are completely different.

I would like to know: what alternatives to silicon are there?
I have heard so many alternatives that my head spins. Also, I don't trust anyone outside Beyond3D, as everyone else either lacks the knowledge or is talking out of their ass.

Actually, there are maybe a very few people I do trust outside the B3D community. But when people tell me that photons are the way of the future, I find out that they have absolutely no idea what they are talking about. They are just repeating the same old rhetoric as some old guy in Time magazine, or they are smoking something and think they know it all.

I keep asking people questions, and they all lead to a brick wall.

This is why I wanted to get this question out in the open over here.

Just imagine what will come out in 15 years' time, or even in 100 years' time. I can't even begin to imagine, and I certainly won't be alive then. All I can do is ask what current alternatives to silicon are planned.

Thanks
 
Uttar said:
I believe no dramatic changes are gonna happen, at least not in the next 15 years.
Uttar
Uttar, you know what you just said? They will always keep on improving and always find new ways of doing things, so YOU NEVER KNOW. Anyway, Uttar, are you waiting to buy an NV35, skipping it for the NV40, or even getting an ATI card? Just curious, that's all :)
 
The main alternatives to silicon CMOS logic that I am aware of:
  • Silicon BJT logic: About 2-4x faster than CMOS, but each gate is large and draws so much more power than a CMOS gate that it's essentially useless in logic designs as large as GPUs. Mainly used for high-speed inter-chip interfaces.
  • Gallium Arsenide: About 5-10x faster than CMOS, but also draws a lot of power per gate, making it mostly useless except for very fast inter-chip interface circuits. Most commonly used for optical<->electrical signal conversion and multi-gigahertz signal lines.
  • Other semiconductors, such as SiGe (silicon-germanium) and InP (indium phosphide) have their own problems as well, limiting them to niche applications.
  • Diamond and silicon carbide have been mentioned as potentially good semiconductors as well, given that they can resist extremely high temperatures, conduct heat very well, and could potentially be faster than silicon. But AFAIK, no functioning logic gate has been demonstrated with either material yet.
  • Superconductors: The RSFQ superconductor logic family has AFAIK been demonstrated to work at 770 GHz ( see the quick sketch after this list ), but superconductors that don't require liquid nitrogen cooling seem to be rather far off still. Look here for a company that actually makes a living out of manufacturing superconductor chips today.
  • All-optical circuits: Look here for an introduction to optical circuits and how they actually work. This technology hasn't come very far, with basic AND/OR/XOR/NOT gates being made to work only very recently. So far, it seems that gates must be made physically very large (to the point where you can see the individual gates) to work correctly, so it is unknown if this technology can ever reach the degrees of integration that silicon chips have reached.
  • Carbon nanotubes: Allow for some extremely small and fast transistors to be built, and can apparently be useful for on-chip optical interconnect. Still in its early stages - structures more complex than a few transistors haven't been built yet.
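
A quick physical sanity check on the 770 GHz figure above (this is the sketch mentioned in the superconductor bullet): even at the speed of light, a signal crosses well under a millimetre in one 770 GHz cycle, so a chip-sized design could never be clocked that fast end to end. The 400 MHz and 3 GHz rates are just reference points I've added for comparison:

```python
# Upper bound on how far a signal can travel in one clock cycle (speed of
# light in vacuum; real on-chip signals are considerably slower). The 400 MHz
# and 3 GHz rates are just reference points alongside the 770 GHz RSFQ demo.

C = 3.0e8  # speed of light, m/s

for clock_hz in (400e6, 3e9, 770e9):
    distance_mm = C / clock_hz * 1000  # metres -> millimetres
    print(f"{clock_hz / 1e9:g} GHz: at most {distance_mm:.2f} mm per cycle")
```

That works out to roughly 0.39 mm per cycle at 770 GHz, which is why such demos are small circuits rather than full chips.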
edit: added carbon nanotubes to the list.
 
borntosoul said:
Uttar said:
I believe no dramatic changes are gonna happen, at least not in the next 15 years.
Uttar
Uttar, you know what you just said? They will always keep on improving and always find new ways of doing things, so YOU NEVER KNOW. Anyway, Uttar, are you waiting to buy an NV35, skipping it for the NV40, or even getting an ATI card? Just curious, that's all :)

Well, what I meant was no *dramatic changes in the way of increasing speed*.
Of course I never know - that's why I said "I believe" :)

I'm getting an NV35, probably an Ultra. I've had enough of my Ti4200 64MB ( mine doesn't even overclock that amazingly... ) - just can't wait anymore :) The NV40 does look good though, but I'm wayyy too impatient.


Uttar
 
Oh, BTW, I forgot another key point from the conference call:

- nVidia expects the IBM/TSMC ratio to be about 10/90 initially, but they expect it to become around 50/50 in the next 3 or 4 years.

I think that really says a lot about how insufficient nVidia thinks TSMC is for their high-end. And considering the 50/50 number, I'd guess they consider it will become insufficient for their mid-end too.


Uttar
 
Perhaps TSMC don't have the capacity to supply the number of chips that nVidia require, rather than being insufficient/incompetent? Having more than one supplier can only be a good thing, IMO. BTW, thanks for the brief CC summary, Uttar. I'll be looking forward to the CC held on the 8th :).
 
Uttar said:
Oh, BTW, I forgot another key point from the conference call:

- nVidia expects the IBM/TSMC ratio to be about 10/90 initially, but they expect it to become around 50/50 in the next 3 or 4 years.

I think that really says a lot about how insufficient nVidia thinks TSMC is for their high-end. And considering the 50/50 number, I'd guess they consider it will become insufficient for their mid-end too.


Uttar
I think it's wise anyway to have 2 suppliers. I've been hearing some say that the NV35 will be made at IBM. Maybe there will be 3 chips: an NV35* at a lower clock, the NV35, and an NV35* with 256 megs, and it might be the 256MB version that will be made at IBM. (*I'm not sure what numbers they will be.)
 
Uttar said:
borntosoul said:
Uttar said:
I believe no dramatic changes are gonna happen, at least not in the next 15 years.
Uttar
Uttar, you know what you just said? They will always keep on improving and always find new ways of doing things, so YOU NEVER KNOW. Anyway, Uttar, are you waiting to buy an NV35, skipping it for the NV40, or even getting an ATI card? Just curious, that's all :)

Well, what I meant was no *dramatic changes in the way of increasing speed*.
Of course I never know - that's why I said "I believe" :)

I'm getting an NV35, probably an Ultra. I've had enough of my Ti4200 64MB ( mine doesn't even overclock that amazingly... ) - just can't wait anymore :) The NV40 does look good though, but I'm wayyy too impatient.


Uttar

Unnecessary name calling and foul language edited by JRR

I don't think the .13 NV35 will be a lot faster than the .15 R350, and if they get a .13 R350, say bye-bye to your NV35's crap performance lead (if it leads at all!). Do you even care about image quality?

Anyway, I hope nVidia comes back strong, but it won't be with the NV35, you can bet your hairy ass on it. :rolleyes: I'm upgrading my GF4 Ti4200 to a 9800 Pro this summer; too bad you don't buy/support the best.

I have a problem with fanboys: they don't promote good engineering. They buy a company's product even if it sucks. Sorry, no respect for you. Not that you care, but I don't care either; I just felt like saying stuff.
 
IceKnight, you have no right to attack Uttar and crap all over my thread.

Uttar can state his opinion whether you like it or not.
You will receive a warning from a moderator soon.

Can we PLEASE get back on track?

Topic: Alternatives to silicon.

So diamond and optical systems are the best bet?

So can anyone please explain how much it will cost to make diamond-based chips?
 
micron said:
K.I.L.E.R.......you're a pretty smart kid for being 20 or under :)

Oh thanks.

/me slaps Micron's bottom

I'm actually 18. :)

Though it would be great if I wasn't an idiot. :LOL:
 