What's next?

Oh, we're going to continue to push the envelope. But doing so is going to require fundamentally new technology, not just a continuation of current tech. I mean, this should be blatantly obvious if you just look at the slowing pace of die shrinks: this slowdown is only going to get worse.
 
True, but coming up with new technologies is something a lot of people seem to be doing lately. ;)

When I was a kid I read a lot of sci-fi, and there were sooo many things/technologies that I never thought would happen that are just taken for granted today.

I've got a lot of faith in the cleverness of people to overcome technical challenges.
 
1. The development cycle has to and will slow way down.

2. The push on current levels of IC technology can continue for another 3 full generations before the limits are really hit. Especially considering wafer yields, which is where the pinch will really start to be felt.

3. There will likely be a lull in the market, depending on how fast new IC techniques are made production-ready, starting about 5-6 years from now.

4. There likely will not be a shift to technologies like optical, quantum or moulded chips for at least 15 years, putting the dark zone starting around the turn of the decade.
 
boltneck said:
1. The development cycle has to and will slow way down.

Talking new architectural generations, yes. But we will still get our 6-month refresh cycles regardless, methinks...
 
Well, for one thing, there's still a whole heck of a lot that can be done in terms of software and in terms of more efficient hardware design to continue improving performance even after silicon process technologies have ground to a halt.

One easy example is multicore: harder to program for, but much higher potential performance for the same die space.
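A toy sketch of that partitioning idea (names like `shade_span` are invented for illustration, and Python threads won't actually give a CPU speedup here; the point is the work-splitting structure a multicore part forces on the programmer):

```python
from concurrent.futures import ThreadPoolExecutor

def shade_span(span):
    """Stand-in for per-pixel work: sum of squares over a pixel range."""
    lo, hi = span
    return sum(x * x for x in range(lo, hi))

def render(n_pixels, n_cores):
    # Carve the frame into one contiguous span per "core"; on real
    # multicore hardware each span would run on its own core.
    step = n_pixels // n_cores
    spans = [(i * step, (i + 1) * step) for i in range(n_cores)]
    spans[-1] = (spans[-1][0], n_pixels)  # last span absorbs the remainder
    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        return sum(pool.map(shade_span, spans))
```

The hard part in practice is exactly what this glosses over: keeping the spans balanced and the cores from stepping on shared state.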
 
I just wonder what kind of performance we could get if a GPU were made with the care that an AMD or Intel CPU is. You know, optimising it for multi-GHz clock speeds and such. Perhaps the longer development time would be worth it in the end. Although it would be weird: every time a new line of cards came out, the performance would solidly double. I wonder how the market would react?
 
DudeMiester said:
I just wonder what kind of performance we could get if a GPU were made with the care that an AMD or Intel CPU is. You know, optimising it for multi-GHz clock speeds and such. Perhaps the longer development time would be worth it in the end. Although it would be weird: every time a new line of cards came out, the performance would solidly double. I wonder how the market would react?

Well, if GPUs cut a couple hundred million transistors they could run at 3 GHz as well. ;)

The 3.0 GHz P4 EE has about 125 million transistors.

Well, it's really not that either. CPUs are high-latency, high-redundancy designs with a high number of pipeline stages and a low number of pipelines (lots of error-correction logic).

GPUs are low-latency, low-redundancy designs with a low number of pipeline stages and a large number of "pipelines".
 
While you could conceivably cut power usage and area with careful transistor tweaking, multi-GHz GPUs would likely suffer severe heat buildup. Going from, say, 600 MHz to 3 GHz allows much more performance from a given chip area (about 2-3x), but also greatly increases the amount of heat per rendered pixel (about 2-3x) due to the proliferation of pipeline stages, so you end up with a chip with 5-10x the power/heat density of today's GPUs.
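The 5-10x figure is just the two effects multiplied: heat per mm² is (pixels/s per mm²) × (joules per pixel). A trivial sanity check on the ranges quoted above:

```python
def power_density_multiplier(perf_per_area_gain, energy_per_pixel_gain):
    # heat per mm^2 = (pixels/s per mm^2) * (joules per pixel),
    # so the two gains compound multiplicatively.
    return perf_per_area_gain * energy_per_pixel_gain

power_density_multiplier(2, 2)  # low end: 4x
power_density_multiplier(3, 3)  # high end: 9x, i.e. roughly the 5-10x claimed
```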

And the argument that it can be done in a CPU isn't particularly valid either; a modern CPU is mostly cache (which draws fairly little power), and even its non-cache sections tend to see about an order of magnitude less logic utilization than what a modern GPU is usually doing.
 
We won't be reaching the limits anytime soon. At least not from a software point of view.

My current research project will happily destroy my Radeon 9800 for performance, and give the latest-n-greatest a good run for their money. I've only implemented the core swapchain parts and it's chewing up the better part of ~90 MB of VRAM.

A developer will always be able to make the GPUs cry and create a market for a bigger, badder and all-round faster GPU :devilish:
 
arjan de lumens said:
Any numbers on approximately how expensive? It's not like the mask sets used today are cheap, at around $1M per tapeout at 90nm.

Just read the article again. It doesn't say anything about the cost, just that making the VERY small mould to make the chips from is very complicated. But of course, you only make one (final) mould...
 
Yes, there are a few hard limits. The maximum attainable speed has reached its peak (for air cooling), and there is definitely a hard limit at or just below 45 nm for the current process technology. And, as Chalnoth said, you're entering the terrain of quantum physics around there as well, or a bit lower (about 30 nm and less, depending on the process resolution).

We can likely make 45 nm work and even go a bit lower, by using extreme ultraviolet light (13 nm). But you need really special mirrors and a preposterous radiation source to make that work. And below 30 nm, things become tricky anyway, as quantum tunneling starts to become reasonably likely. Chips as we know them will end around 15-20 nm or thereabouts.
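The reason tunneling bites so suddenly is that the transmission probability falls off exponentially with barrier thickness. A rough WKB-style estimate (textbook rectangular barrier; the ~3.1 eV value is a generic SiO2-like barrier height assumed for illustration, not a process number):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # 1 eV in joules

def tunnel_probability(thickness_m, barrier_ev=3.1):
    # T ~ exp(-2*kappa*d), with kappa = sqrt(2*m*phi)/hbar
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * thickness_m)

# Thinning an oxide from 2 nm to 1 nm raises the leakage probability
# by roughly seven orders of magnitude:
tunnel_probability(1e-9) / tunnel_probability(2e-9)
```

That exponential is why "a bit below 30 nm" flips from a nuisance to a wall so quickly.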

So, I agree that bigger, multicore chips are the way to go. Many small cores/pipelines are best, so you can disable them when defective. And multi-chip carriers (packages) are in the picture again as well, even stacked and liquid-cooled. They should be anyway, to keep the path lengths as short as possible.
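The "disable defective cores" point can be made concrete with a toy yield model (Poisson defect statistics; the defect density and core areas below are invented for illustration):

```python
from math import comb, exp

def die_yield(area_mm2, defects_per_mm2):
    # Classic Poisson yield: a monolithic die must be entirely defect-free.
    return exp(-defects_per_mm2 * area_mm2)

def redundant_yield(n_cores, core_area_mm2, defects_per_mm2, min_good):
    # A many-core die ships as long as at least `min_good` cores are clean.
    p = die_yield(core_area_mm2, defects_per_mm2)  # one core surviving
    return sum(comb(n_cores, k) * p**k * (1 - p)**(n_cores - k)
               for k in range(min_good, n_cores + 1))

die_yield(200, 0.01)                 # monolithic 200 mm^2 die: ~14%
redundant_yield(16, 12.5, 0.01, 15)  # 16 cores, one spare allowed: ~42%
```

Same silicon area, same defect density, roughly triple the sellable chips, just from tolerating one dead core.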
 
Also, I agree about the artwork as well. Making the resources needed for a current game is starting to take so long that the technology used is many generations old by the time the game hits the store. And it has become more expensive to make a good computer game than a blockbuster movie.
 
DiGuru said:
Yes, there are a few hard limits. The maximum attainable speed has reached it's peak (for air cooling), and there is definitely a hard limit at or just below 45 nm for the current process technology. And, as Chalnot said, you're entering the terrain of quantum physics around there as well, or a bit lower (about 30 nm and less, depending on the process resolution).
Actually, what I'm talking about has very little to do with the fabrication technology, but rather the behavior of objects that are that small. Put simply, a transistor that is built at 30nm or smaller will no longer behave quite like a transistor at 90nm.
 
Chalnoth said:
Actually, what I'm talking about has very little to do with the fabrication technology, but rather the behavior of objects that are that small. Put simply, a transistor that is built at 30nm or smaller will no longer behave quite like a transistor at 90nm.
Yes, I know. But it depends on the tolerances. Too thin == trouble. With larger tolerances, you hit that spot sooner.
 
Well, the problem I'm talking about has actually more to do with statistical mechanics.

Specifically, transistor operation depends upon electron diffusion, which is a statistical phenomenon dependent upon many mobile electrons.

When you get down to 10 nm features, you'll only have a single mobile electron in the circuit. That totally destroys the operation of the transistor, because electron diffusion is no longer a factor. You have to look at quantum mechanics for details as to how this new type of transistor behaves.
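The scale of that statistics problem is easy to check with a back-of-envelope carrier count (the 1e18 per cm³ doping level is a generic textbook figure assumed here, not tied to any particular process):

```python
def carriers_in_cube(feature_nm, doping_per_cm3=1e18):
    # Expected number of dopant-supplied carriers in a cube of the
    # given feature size, at the assumed doping concentration.
    side_cm = feature_nm * 1e-7   # 1 nm = 1e-7 cm
    return doping_per_cm3 * side_cm ** 3

carriers_in_cube(90)  # ~729 carriers at 90 nm: averages still hold
carriers_in_cube(10)  # ~1 carrier at 10 nm: no ensemble left to average over
```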

As far as I'm concerned, with fabrication technologies, there's always something around the corner that can potentially solve any problems that prevent companies from going smaller.
 
The 45 nm process is two full generations away, which will provide up to four new products per vendor. That - with the additional generation of products on 90 nm from ATi/NV - will provide new boards / refreshes through 2010.

Additionally, through the concurrent progress made in driver development for dual/quad SLI and CrossFire technologies, we will have multi-processor, billion-transistor GPUs working in conjunction with 2nd-generation physics processors, along with highly optimized drivers for multi-processing CPU environments.

These will be connected to Ultra HD displays (typically in the 30" variety) that will provide an immersive experience and give future game developers a platform that will facilitate visually breathtaking games (although I still wonder if there will be better plots).

All that to say, the common mantra in 2011 will be:

"When will computers/consoles have graphics that match Toy Story 4?"
 
Chalnoth said:
Well, the problem I'm talking about has actually more to do with statistical mechanics.

Specifically, transistor operation depends upon electron diffusion, which is a statistical phenomenon dependent upon many mobile electrons.

When you get down to 10 nm features, you'll only have a single mobile electron in the circuit. That totally destroys the operation of the transistor, because electron diffusion is no longer a factor. You have to look at quantum mechanics for details as to how this new type of transistor behaves.

As far as I'm concerned, with fabrication technologies, there's always something around the corner that can potentially solve any problems that prevent companies from going smaller.
Yes. But that goes for the smallest structures. Larger tolerances mean that the variance in the size of the structures is larger, so you will encounter that quantum tunneling effect sooner. With current process technology, you're even likely to hit 30 nm sized structures (or even smaller) when making a chip at 45 nm, just because the resolution is much too coarse at that size. You'll probably even see interference effects in the geometry of the surfaces, like sine waves in the structure walls.
 
DiGuru said:
Yes. But that goes for the smallest structures. Larger tolerances mean that the variance in the size of the structures is larger, so you will encounter that quantum tunneling effect sooner. With current process technology, you're even likely to hit 30 nm sized structures (or even smaller) when making a chip at 45 nm, just because the resolution is much too coarse at that size. You'll probably even see interference effects in the geometry of the surfaces, like sine waves in the structure walls.
Ah, tolerances in that regard. I'm not so sure that the tolerances are actually larger, just that they're not as much smaller as the process shrink. The only reason I could think of that would cause the tolerances to increase would be if the wavelength of light that was used was too large, and the mask was too far from the silicon.

Anyway, all I'm saying is that while I don't doubt that this is a significant problem (aside: one thing that may help with poor tolerances in fabrication would be asynchronous designs), I'd be rather surprised if the problem weren't solvable by being more clever about the design.
 
Chalnoth said:
Well, for one thing, there's still a whole heck of a lot that can be done in terms of software and in terms of more efficient hardware design to continue improving performance even after silicon process technologies have ground to a halt.

One easy example is multicore: harder to program for, but much higher potential performance for the same die space.
I think that a heck of a lot can be done in terms of making software tools to design more efficient hardware. :) Think dynamic logic and asynchronous design, or maybe even analog. (Guaranteed bit-for-bit accurate results may be a *bit* difficult.)

On the transistor side, I read a paper some while ago that basically calculated that the limit for a statistically working transistor is somewhere around 29 nm, after which performance will actually get worse. The main reason, as far as I can remember, was power usage and leakage. But going forward, does anybody know the latest news on SETs (single-electron transistors)? I remember a lot of news about working examples a few years ago.
 