Diminishing returns with advancing 3D hardware?

Dennard scaling failed around 2003-ish. A lot of the reason it feels like image quality improves so slowly is that graphics hardware is improving so slowly. In the late 90's, when 3D acceleration was a brand new thing, performance clean doubled every year (quadrupled if you count Voodoo to Voodoo 2 SLI). Then things stabilized around 40-60%/year until 2003, while a lot of exciting new features got added: hardware T&L, per-pixel fixed-function lighting, pixel shading, anisotropic filtering, and MSAA cheap enough to actually use. Now it has tapered off to about 20% per year.
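
To put those rates in perspective, a quick back-of-the-envelope sketch; the ten-year horizon is just an illustrative assumption:

    # What the quoted annual growth rates compound to over a decade.
    # The 10-year horizon is an assumption for illustration only.
    for rate in (1.00, 0.50, 0.20):
        factor = (1 + rate) ** 10
        print(f"{rate:.0%}/year for 10 years -> ~{factor:,.0f}x")
    # 100%/year -> ~1,024x; 50%/year -> ~58x; 20%/year -> ~6x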

My old metric for when it was worth buying a new CPU or new graphics card, from the times of Dennard scaling, was when performance had clean tripled. That was just a couple of years in between. Pentium MMX 233 to 2 GHz Athlon 64 was like 5 years. Voodoo 2 to Radeon 9700 Pro was a little over 4 years. The Voodoo 2 had no shaders, no hardware T&L, no anisotropic filtering, and 90 Mpixels/s and 180 Mtexels/s of fillrate. The 9700 Pro had all those things, a 2700 Mpixels/s fillrate, and enough memory bandwidth to do it justice. That's where that Quake 2 to Far Cry image quality leap comes from.
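
The time to triple follows directly from the growth rate; a minimal sketch, treating the rates above as rough era averages:

    import math

    # Years for performance to "clean triple" at a steady annual growth rate:
    # years = ln(3) / ln(1 + r). The rates are the rough figures quoted above.
    for rate in (0.60, 0.40, 0.20):
        years = math.log(3) / math.log(1 + rate)
        print(f"{rate:.0%}/year -> triples in ~{years:.1f} years")
    # 60% -> ~2.3 years, 40% -> ~3.3 years, 20% -> ~6.0 years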

A 3x increase in performance is really not that much. That's just one step up in resolution (e.g. 1080p to 1440p) with a small bump in image quality. Anything much less than that is barely noticeable.
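
The arithmetic, assuming the standard 1920x1080 and 2560x1440 pixel counts:

    # Pixel-count ratio of the resolution step above, and how much of a
    # hypothetical 3x performance budget is left over for image quality.
    px_1080p = 1920 * 1080
    px_1440p = 2560 * 1440
    ratio = px_1440p / px_1080p
    print(f"1440p has ~{ratio:.2f}x the pixels of 1080p")   # ~1.78x
    print(f"left over from a 3x budget: ~{3 / ratio:.1f}x") # ~1.7x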
 
I wonder. Did artists stop producing great art when papyrus technology stopped evolving quickly around the Egyptian times?
 
My old metric for when it was worth buying a new CPU or new graphics card, from the times of Dennard scaling, was when performance had clean tripled. That was just a couple of years in between.
Exactly the same experience.
I used to feel that 30% was needed to get a (barely) noticeable change, and 3 times the speed was required to actually change how or what anything was done, such as running calculations overnight as opposed to over the weekend. Those kinds of changes are still possible in graphics (though they take way longer), and in embarrassingly parallel applications with extremely high data locality. For general purpose computing though, I'll be amazed if a decade is enough to triple our throughput from where we are.
 
I wonder. Did artists stop producing great art when papyrus technology stopped evolving quickly around the Egyptian times?
I do think that art history provides very interesting parallels here, though it's probably better to look a bit later in time.

During the Renaissance era, art tended to follow a trend of increasing realism. Painters sought to make their works vibrant and alive by making them as close to reality as they could get it. After a while, though, that ceased to be of interest, and painters diverged in other directions, such as impressionism. Realism can be neat, but it's often quite boring. These days you still get a few artists who go for hyper-realism, but mostly artists go in entirely different directions.

We've started to see a very similar divergence in games, and I expect it will accelerate.
 
I do think that art history provides very interesting parallels here, though it's probably better to look a bit later in time.

During the Renaissance era, art tended to follow a trend of increasing realism. Painters sought to make their works vibrant and alive by making them as close to reality as they could get it. After a while, though, that ceased to be of interest, and painters diverged in other directions, such as impressionism. Realism can be neat, but it's often quite boring. These days you still get a few artists who go for hyper-realism, but mostly artists go in entirely different directions.

We've started to see a very similar divergence in games, and I expect it will accelerate.
For as long as I can remember, "photo realism" has been a target in computer graphics rendering. And of course, as long as it is unachievable, it sets a goal and thus drives investigation, understanding and development.
But for the most part photography is irrelevant to what you would like to do in computer graphics. For certain titles you might say that realism helps immersion, but the reality of the situation, that you are sitting on your butt staring at a screen, is so far removed from what is being displayed that the finer nuances of rendering arguably matter little for immersion by now. (VR being a much more fundamental immersive step.)

Diverging from reality is OK even when alternate-world immersion is the goal. Whether it is by making the marines unrealistically muscular and bald, or by cel shading as in Zelda, neither seems to be a detriment to game immersion compared to more realism. So not only are we up against diminishing returns in terms of realism vs rendering effort, we may also be facing diminishing returns as far as realism vs immersion goes.

It’s time to get creative.
 
Whether it is by making the marines unrealistically muscular and bald, or by cel shading as in Zelda, neither seems to be a detriment to game immersion compared to more realism.
In the past we had impressionism, surrealism, pointillism etc. Now we have muscle dudes and chicks with big ass titties. Progress I can get behind.
 
A fun exercise is to just look at clock speed. We had 50%/year growth during the 90s on average (performance growth wasn't much higher, about 60%/year, with most architectural improvements being simply to remove roadblocks to higher clocks and to hide the resulting latency). Clocks increased faster than Dennard scaling allowed, thus increasing power density close to where it is today, blowing past the humble hotplate with the Pentium Pro and rapidly approaching nuclear fuel pellets (light water reactor).

Had this continued until the present, just as a crazy exponential extrapolation, you would have a ~1500 GHz single-core CPU with a core voltage of ~0.1 V and a ~1500 W TDP. Power density would have reached that of a rocket nozzle.
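
A minimal sketch of that extrapolation; the ~3.8 GHz starting point (roughly the mid-2000s clock plateau) and the 15-year horizon are assumptions, the point being only that 50%/year compounding lands in that ~1500 GHz ballpark:

    # Naive exponential extrapolation of 90s-style clock growth.
    # Starting point (~3.8 GHz around the mid-2000s plateau) and the 15-year
    # horizon are assumptions; 50%/year is the growth rate quoted above.
    base_ghz, growth, years = 3.8, 0.50, 15
    print(f"~{base_ghz * (1 + growth) ** years:,.0f} GHz")  # ~1,664 GHz with these assumptions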

Trees don't grow to the sky. To get back on that kind of exceptional growth trend, CMOS must be replaced, if that is possible at all. That's at least 20 years out, because there are no (at least publicly) known candidates that are a clear win in all dimensions. They all have nasty trade-offs. Take quantum computing, which in general will be slower, but for a small subset of problems with a large enough problem size gives a speedup greater than that of going from an abacus to a modern computer. Silicon CMOS was really quite exceptional.
 
A fun exercise is to just look at clock speed. We had 50%/year growth during the 90s on average (performance growth wasn't much higher, about 60%/year, with most architectural improvements being simply to remove roadblocks to higher clocks and to hide the resulting latency). Clocks increased faster than Dennard scaling allowed, thus increasing power density close to where it is today, blowing past the humble hotplate with the Pentium Pro and rapidly approaching nuclear fuel pellets (light water reactor).

Had this continued until the present, just as a crazy exponential extrapolation, you would have a ~1500 GHz single-core CPU with a core voltage of ~0.1 V and a ~1500 W TDP. Power density would have reached that of a rocket nozzle.
I think power consumption would start to get extremely non-linear if you went too far above today's clock speeds. Once a circuit's voltage oscillates fast enough that the wavelength of light at that frequency is comparable to the circuit size, the circuit will start to radiate EM energy very efficiently. So at some point you just hit a wall based on the physical dimensions of the chip. If we assume a smallish circuit (1 cm), the limit is around 30 GHz.
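
Roughly the calculation behind that number, assuming a ~1 cm structure radiating efficiently once the free-space wavelength matches its size:

    # Frequency at which the free-space wavelength equals the circuit size.
    # The ~1 cm circuit dimension is the assumption from the text above.
    c = 3.0e8            # speed of light, m/s
    size_m = 0.01        # ~1 cm
    print(f"~{c / size_m / 1e9:.0f} GHz")  # ~30 GHz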

I don't honestly know what breaking the chip up into stages does to this effect, but either way it's unlikely to ever be possible to push reasonable-sized chips into the tens of GHz or higher.
 
I think power consumption would start to get extremely non-linear if you went too far above today's clock speeds. Once a circuit's voltage oscillates fast enough that the wavelength of light at that frequency is comparable to the circuit size, the circuit will start to radiate EM energy very efficiently. So at some point you just hit a wall based on the physical dimensions of the chip. If we assume a smallish circuit (1 cm), the limit is around 30 GHz.

I don't honestly know what breaking the chip up into stages does to this effect, but either way it's unlikely to ever be possible to push reasonable-sized chips into the tens of GHz or higher.

That's true of electromagnetic devices, and not at all a given for beyond-CMOS technologies. The point was more that improvements in the 90's were so breathtakingly, stupendously fast that a naive extrapolation at the same rate gets you into some very silly territory. The rate was exceptional even compared to before the 90's (if you do the same extrapolation backwards from 1990 to the original 4004 in 1971, it "ought" to have been an 11 kHz chip rather than 740 kHz).
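
Running that backwards extrapolation explicitly; the ~25 MHz figure for a 1990 CPU is an assumption (486-class):

    # Same 50%/year growth extrapolated backwards from 1990 to 1971.
    # The ~25 MHz 1990 clock is an assumption (486-class); the real Intel
    # 4004 ran at 740 kHz.
    clock_1990_hz = 25e6
    years_back = 1990 - 1971
    print(f"~{clock_1990_hz / 1.5 ** years_back / 1e3:.0f} kHz")  # ~11 kHz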

Would diminishing graphical returns really be an issue if hardware still advanced like that? It's certainly possible: the scope and depth of games seem to have shrunk back as the cost of creating content has ballooned out of control. But the extra performance could also have gone into real-time global illumination and other things which don't necessarily mean higher development costs.
 
That's true of electromagnetic devices, and not at all a given for beyond-CMOS technologies.
It's always hard to make very concrete statements about unknown technologies. But if the tech uses electric current, it'll have this limitation. The easiest way to avoid this problem is massive parallelism, which doesn't always help with a given problem.

The point was more that improvements in the 90's were so breathtakingly, stupendously fast that a naive extrapolation at the same rate gets you into some very silly territory. The rate was exceptional even compared to before the 90's (if you do the same extrapolation backwards from 1990 to the original 4004 in 1971, it "ought" to have been an 11 kHz chip rather than 740 kHz).
There's a difference, though: computer technology in the 90s did not have any significant competition. This meant they didn't have a very high bar they had to reach to be useful. Any new computing paradigm will have to compete with decades of innovation on transistor-based designs. Heck, even alternative transistor-based designs such as asynchronous processing have a large uphill climb despite their superiority on paper to current designs.

New designs would have to out-compete existing designs in some area to be sold. That means that a lot of R&D needs to be done before the products even make it to market. And by then the new system may be much closer to its potential than transistor-based designs were 60 years ago. Furthermore, there's no reason to believe that the scaling of an entirely new technology will be similar to the scaling of transistor-based designs. Any new technology will have its own scaling based upon its own physical properties.

For example, with quantum computing, one of the fundamental limitations that engineers will have to grapple with is decoherence: as the quantum system interacts with its environment, its quantum behavior goes away. And if the quantum behavior goes away, its usefulness as a quantum computer evaporates. That means quantum computers need to be isolated from their environment. The larger the quantum computer, the harder it will be to keep isolated.

There are two general ways for a quantum system to remain isolated from its environment:
1. Make sure the degrees of freedom at hand just don't interact very well with their environment. This will necessarily make it more difficult to set up/measure the quantum state later. But maybe there's a clever work-around.
2. Keep the temperature really, really low. Low temperature means fewer photons bouncing around. These photons are one of the primary ways for quantum systems to interact with their environments. Low temperature solutions will keep quantum computing out of reach in the consumer sector permanently.

This quantum limitation is inherently non-linear, which means that if we ever get a useful quantum computer, there won't be any guarantee of anything approaching exponential growth.
 
There's a difference, though: computer technology in the 90s did not have any significant competition. This meant they didn't have a very high bar they had to reach to be useful. Any new computing paradigm will have to compete with decades of innovation on transistor-based designs. Heck, even alternative transistor-based designs such as asynchronous processing have a large uphill climb despite their superiority on paper to current designs.

This is always true and was true for silicon CMOS.

This is true on the small scale. 90 nm competed against the mature and entrenched 130 nm, etc. The difference was more than skin deep; almost every generation had to make major changes like copper metal layers instead of aluminium, low-k dielectrics, immersion lithography, multi-patterning, ad nauseam.

Before CMOS there was PMOS (such as the 4004) and NMOS (such as the original 8086). Before FETs there was RTL logic with bipolar junction transistors packaged on small ICs. Before ICs there were discrete transistors. Before transistors there were vacuum tubes. Before vacuum tubes there were relays.

On every level you have new technologies that start out slow and unable to compete, grow exponentially and replace the existing paradigm, and eventually saturate and get replaced by something better. CMOS is probably not the end; the physical limits on reversible computing allow something ridiculously, stupendously more power efficient than silicon CMOS is today, but a clear successor is not known, so we'll probably have stagnation for a good 20 years.
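
To put a rough number on that headroom, here is a sketch comparing the room-temperature Landauer bound for erasing one bit (reversible logic can in principle undercut even that) against an assumed ~1 fJ per switched gate for today's CMOS; the CMOS figure is an order-of-magnitude guess, not a measured value:

    import math

    # Landauer bound k*T*ln(2) at 300 K vs an assumed ~1 fJ per CMOS gate
    # switch. Both are order-of-magnitude illustrations only.
    k = 1.380649e-23                     # Boltzmann constant, J/K
    landauer = k * 300 * math.log(2)     # ~2.9e-21 J per erased bit
    cmos_switch = 1e-15                  # ~1 fJ, assumed
    print(f"Landauer @ 300 K: {landauer:.1e} J")
    print(f"headroom vs assumed CMOS: ~{cmos_switch / landauer:.0e}x")  # ~3e5x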

Furthermore, there's no reason to believe that the scaling of an entirely new technology will be similar to the scaling of transistor-based designs. Any new technology will have its own scaling based upon its own physical properties.

There's no reason to believe it won't, either. Even discrete transistors and vacuum tubes scaled in a similar fashion to silicon CMOS until they peaked and were replaced by something better.
 
This is always true and was true for silicon CMOS.

This is true on the small scale. 90 nm competed against the mature and entrenched 130 nm, etc. The difference was more than skin deep; almost every generation had to make major changes like copper metal layers instead of aluminium, low-k dielectrics, immersion lithography, multi-patterning, ad nauseam.

Before CMOS there was PMOS (such as the 4004) and NMOS (such as the original 8086). Before FETs there was RTL logic with bipolar junction transistors packaged on small ICs. Before ICs there were discrete transistors. Before transistors there were vacuum tubes. Before vacuum tubes there were relays.

On every level you have new technologies that start out slow and unable to compete, grow exponentially and replace the existing paradigm, and eventually saturate and get replaced by something better. CMOS is probably not the end; the physical limits on reversible computing allow something ridiculously, stupendously more power efficient than silicon CMOS is today, but a clear successor is not known, so we'll probably have stagnation for a good 20 years.
That's a ridiculous argument. Building a silicon-based microchip with 130 nm lithography is not very different from building one with 90 nm lithography. All of the same basic rules still hold. It's just that the variables (such as leakage and capacitance) have different values. Designers and manufacturers can still use the same basic toolkit with minor changes at every step.

This is, fundamentally, why the advancements in computing have been so rapid and so regular over the last half century or so. Every step was a small tweak over the previous one.

This is absolutely not the case when comparing existing designs to a different computing paradigm.

For instance, let's take the simple example of asynchronous circuits. Asynchronous circuits are theoretically superior to synchronous circuits in virtually every way, with the difference between synchronous and asynchronous growing dramatically as lithography sizes shrink. But nearly all of the industry's expertise in designing processors is focused on synchronous designs, so in practice it's genuinely difficult to build an asynchronous design that has the remotest chance of outcompeting a real, modern synchronous design.

That gap in experience is likely down to history rather than any physical limitations. If early microchip developers had selected asynchronous designs instead, today's computers might be much more efficient. But they didn't, so here we are. And it's almost impossible to get out of this trap, because so much money would have to be sunk into asynchronous chips before they had a snowball's chance in hell of being competitive. These chips aren't any harder to physically manufacture, but we don't have the institutional knowledge that would let us design them well enough to compete.

Eventually the slowdown in computer processing advancements may allow designs like asynchronous processors enough time to find a niche in the market, and then grow. But these processors don't really change the fundamental, physical limitations: they're just a more efficient system. So we might get a factor-of-a-few speedup overall from the switch, but we won't blow open a whole new realm of computing advancements.

There are computing paradigms that have the possibility of opening our horizons, such as quantum computing or biological computing. But at this point they're all hypothetical. And the longer it takes, the stiffer their competition will be. Getting away from silicon-based designs is going to be incredibly difficult.
 