Will PC graphics hit a dead end?

chaphack

Veteran
I am not sure, but I do hear a lot of griping about how the PC's age-old x86 legacy architecture is holding back the endless possibilities of future tech, but as time goes on, I see that PC graphics are improving without a doubt.

So what do you all think? :oops:
 
With the advent of 64-bit desktop processors, x86 architecture should start to fade away slowly.
 
Actually, judging by what happened in the past... with the advent of x86-64, other 64-bit architectures should start to fade away slowly instead :)
 
Everything in the PC will hit a dead end sometime. During 2010-2030 there may be a very long period when we can't make processors and GPUs any smaller, simply because they are just too small, and we might have to wait years for engineers to create a new way of making things faster using methods other than the usual clock speed advancements.
 
reever said:
Everything in the PC will hit a dead end sometime. During 2010-2030 there may be a very long period when we can't make processors and GPUs any smaller, simply because they are just too small, and we might have to wait years for engineers to create a new way of making things faster using methods other than the usual clock speed advancements.

I firmly believe that as we approach the limit of our current chip manufacturing process, engineers will find 1) other ways to get chips smaller and 2) other ways to improve performance... with more parallelism, multi-core products, smaller but more generalized processing cores that can be dynamically linked (think Cell), and things like that.

We will reach a limit where shrinking for the added speed/transistor budget will have diminishing returns, but we've got a while before that comes about, and there are a lot of R&D years left to explore other performance avenues.
 
MfA said:
You can always make the processors larger.

And put them in the freezer :).

Remember that the problem we are more likely to hit even with current technology is power density.
 
I think graphics have already hit a dead end. They're well into the region of diminishing returns. A 40 polygon character looked twice as good as a 20 polygon character, but a 4000 polygon character looks much the same as a 2000 polygon character.
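
To put a toy number on those diminishing returns (using a circle as a stand-in for a character silhouette, which is purely an illustrative simplification of mine): the maximum gap between a circle and an inscribed regular N-gon falls off roughly as 1/N², so every doubling of the segment count only buys back a quarter of an already tiny error.

```python
import math

def ngon_error(n, radius=1.0):
    """Max gap (sagitta) between a circle and an inscribed regular n-gon."""
    return radius * (1 - math.cos(math.pi / n))

for n in (20, 40, 2000, 4000):
    print(f"{n:5d} segments: max error = {ngon_error(n):.2e} * radius")
# 20 -> ~1.2e-02, 40 -> ~3.1e-03, 2000 -> ~1.2e-06, 4000 -> ~3.1e-07:
# the 20-to-40 jump is visible, the 2000-to-4000 jump is far below a pixel.
```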

Nothing impressive is going to happen graphically until characters (for example) become more than 6 inches tall, and become something other than flat.

Hopefully, this will leave developers kicking their heels to such an extent that they get round to taking game design forward, rather than perpetually feeding hardware which colours in ever more triangles.
 
RoOoBo said:
Remember that the problem we are more likely to hit even with current technology is power density.

You can keep power density low and still improve computing power by making the processor larger. To make them a little more efficient we could also just integrate them into the central heating unit of homes :)
 
Alistair said:
I think graphics have already hit a dead end. They're well into the region of diminishing returns. A 40 polygon character looked twice as good as a 20 polygon character, but a 4000 polygon character looks much the same as a 2000 polygon character.

No, that is not necessarily true. Once you get close up to a character, you start to notice things don't look very good. Fine details are either crudely modelled (ears, for example) and/or simulated using textures (necklaces, other jewellery, pockets, etc). Take your typical Half-Life scientist: the pens in the front pocket of his lab smock are part of the texture. A policeman's belt with all the stuff he carries on it is typically NOT modelled very precisely. Etc etc. It's all in the fine details.

It's okay as long as all you do is run around and shoot in a "twitch-reaction" FPS like Unreal Tournament, but other games could use LOTS more detail than 4000 polys per char. I want things modelled at least down to the holes the shoelaces pass through in a character's sneakers. I wanna see the hairs on people's legs! :)

*G*
 
I've kind of been wondering if we'll start to run into dead ends not from a hardware design point of view but from a software design point of view. It seems to me that games in particular are going to become increasingly complex, requiring more detailed artwork, more intricate planning, and more time to complete than ever before. Are development costs going to grow so large that it will be financially unfeasible to create more complex games? I think it's already next to impossible for people to create games in their basement, something that used to occasionally produce a hit. Is there anyone else who thinks that this will be the primary limit to graphics development in the upcoming years and/or decades?
 
I'm with Clashman here.
We have also already hit the line where stupid backward compatibility limits the game designers' choices. Or rather, we hit this limit long ago: a current PC with a high-end graphics card easily beats even the best of consoles. But what do we still hear? Whining that the consoles have all the delicious graphics. (Well, that's not true, but they get to see the good graphics more!)

I think flight simulators have even died because of this commercial block: they became very, very detailed, needing huge amounts of 3D modelling, map bases, and not least of all the flight and instrument models. Today's flight sims are either erratic eastern products (IL2+) or forever-projects where nobody knows when, or if, they will ever finish.

With the already very detailed levels in 3D shooters, I have been wondering how long it will be before game designers can no longer realistically keep delivering that kind of improvement?!

Graphics card capabilities can keep improving: faster poly rates, better shaders, more simultaneous lights, "cinematic effects"... even easier APIs for them. But do we see benefits from these? When will we see them?

I still have a grudge that Evolva is about the only game where bump mapping was used effectively (oh yes, it is here, now, but I still don't see it in too many places!), and where are the pixel-shaded games?? They are waiting for the TNT2 graphics cards to die away...

So I think we are already limited.
 
I would have thought that tools have to evolve, to avoid the problem of creating more and more detailed content by hand. In the same way I can direct a film without designing the humans I'm filming, developers will presumably need access to libraries which provide 'standard' stuff. People for example. Or jeeps. Tea-cups. Whatever.
 
With respect to Moore's Law:

This is the most important issue, so let's deal with it first. Moore's Law shows no signs of slowing. Yes there are unsolved issues looming on the horizon--there always are. Once they're solved, and the solution can be ramped up to allow for low-cost mass-production, then we shrink process geometries and move another step along Moore's Law. The fact that this process has proceeded at such a remarkably steady pace for four decades now should tell you something about the pace of innovation.

Right now the current bogeyman is power consumption. With current techniques, leakage current increases as geometries shrink, threatening to eventually swamp the heat/performance gains that normally come with smaller geometries. It's a decent problem. But there are already partial solutions in the fabs (like SOI) and better ones in the labs. Power consumption as an issue and potential limiting factor is probably here to stay (although, was it ever really gone?). But it doesn't seem to present anything like a brick wall.

Next up are probably the limits of photolithography--eventually, chip features get smaller than the wavelengths of light used to etch them, forcing you to shift further and further up the electromagnetic spectrum. Of course, we've already been defying those limits (or what once seemed to be limits) for years. But the problems getting today's ultraviolet photolithography to work certainly seem much more tame than those of getting x-ray lithography going. But who's to say that x-ray lithography can't be cracked? Or electron lithography? And if not, advances in nanotechnology seem likely to pick up the slack from a new direction. Photolithography has been the basis for much of our Moore's Law progression, but there's no reason we can't move forward under different techniques.
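
For a rough sense of how far below the exposure wavelength today's lithography already prints, the standard Rayleigh-criterion estimate is CD ≈ k1·λ/NA. The 193 nm ArF wavelength and the NA/k1 values below are era-typical ballpark assumptions of mine, not figures from this post.

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1):
    """Rayleigh-criterion estimate of the smallest printable feature size."""
    return k1 * wavelength_nm / numerical_aperture

# Assumed values: 193 nm ArF light, NA ~0.75, aggressive k1 ~0.4
print(f"{min_feature_nm(193, 0.75, 0.4):.0f} nm")  # ~103 nm, well under the 193 nm wavelength
```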

So, in general, we have no reason to think Moore's Law will be ending any time soon, or even necessarily hitting a slow patch. On the other hand, it's important to realize how central Moore's Law (which, BTW, refers only to the transistor density of cost-effectively produced ICs, and does not directly refer to clock speed, performance, etc.) is to continuing improvements. Probably 95% of the rate of performance improvement is due to Moore's Law, and only 5% or so to improvements in knowledge and design technique. So keeping Moore's Law going is the most important thing.
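
To make "keeping Moore's Law going" concrete, here is a quick back-of-the-envelope projection. The doubling period (~24 months) and the ~50 million transistor starting point for a 2003 desktop CPU are assumptions of mine, used as a stand-in for the density trend the law actually describes.

```python
def projected_transistors(start_count, start_year, target_year, doubling_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

for year in (2005, 2008, 2013):
    count = projected_transistors(50e6, 2003, year)
    print(f"{year}: ~{count / 1e6:.0f} million transistors")
# 2005: ~100M, 2008: ~283M, 2013: ~1600M -- if the doubling cadence holds.
```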
 
With respect to general-purpose microprocessors:

Progress in MPUs is not slowing down. If you think it is, you've probably been spending too much time watching AMD.

Seriously, a few years ago the x86 space saw performance increasing at an unsustainable rate--that is, faster than underlying Moore's Law advances. In late 1999 and most of 2000 in particular you had the situation where the Athlon was providing a huge competitive jump for AMD and Intel still had enough headroom in the PIII to pretend they could compete. And on into 2001, the Athlon design hit its stride and AMD continued to milk it for all it was worth.

Unfortunately, the dynamics of the x86 market make it almost impossible to really compete with Intel for the high end, day in and day out. Product cycles for modern high-performance MPUs stretch to 5 or 6 years, and AMD certainly doesn't have the design resources to try to speed that up. But they also don't have the market power that Intel does to come to market with an ahead-of-its-time core that won't hit its stride until a couple of years after introduction (like the P4). Instead, their new cores need to hit the ground running, as the Athlon did. Unfortunately, this inevitably means they will run out of steam sooner, in about 3 years instead of 5 (like the Athlon did).

K8 is a nice stop-gap, but it remains to be seen if the fundamental core has enough of a redesign to scale along with the P4 (P5?) long enough for K9 to come to the rescue. But meanwhile, the rest of the general-purpose MPU industry is showing plenty of signs of life.

IBM's Power4 is a solid performer, and while the hype about Power5 is completely ridiculous, it should nonetheless be a strong chip. The PPC 970 isn't the world-beater Apple claims, but it, too, is a solid desktop CPU, able to put Apple in the same performance ballpark as a >$500 PC, something that hasn't been the case for years.

And not to undersell K8--Opteron in particular is a world-beating part for the low-end x86 server space. Athlon64 I'm less sure about, but it should at least keep things interesting for a while.

And the architecture-formerly-known-as-Itanic is putting up some amazing performance numbers, and proving that many interesting things still lie ahead for high-end MPUs, even with the demise of the EV8. Hell, even the things SUN has on their roadmap look moderately interesting.

There's plenty of new stuff around, and plenty of room for improvement. CMP and SMT are still in their infancy, where OOO was roughly a decade ago. MPUs will be a field worth watching for quite some time.
 
Clashman said:
I've kind of been wondering if we'll start to run into dead ends not from a hardware design point of view but from a software design point of view. It seems to me that games in particular are going to become increasingly complex, requiring more detailed artwork, more intricate planning, and more time to complete than ever before. Are development costs going to grow so large that it will be financially unfeasible to create more complex games? I think it's already next to impossible for people to create games in their basement, something that used to occasionally produce a hit. Is there anyone else who thinks that this will be the primary limit to graphics development in the upcoming years and/or decades?

We're actually nowhere near the point where software development costs are going to be the limiting factor in graphics (though they certainly will become a limiting factor). The average game today costs only a few million or so to make, AFAIK, compared to something like 50-150 million for a Hollywood movie. However, unlike movies, there won't be any room for small, independent game makers with budgets under $1M, unfortunately, like you said. Although this may greatly reduce the number of games made, the overall market will still stay viable. One thing that may happen, as another poster has noted here, is that the amount of resources available to the developer will take on major importance.
 
Aside from the CPU, how about the available x86 bandwidth and such? What type of improvements do you see in the future? Will PCI-X really be able to open up the small pipes?

About DirectX: looking at how DX9 is, where do you think DX10 will be heading? How about the GPU hardware -- what other areas have NV and ATI yet to explore? The current DX9 demos do look pretty cool.
 
RoOoBo said:
And put them in the freezer :).

Remember that the problem we are more likely to hit even with current technology is power density.
Actually, power density should be a relatively simple one to solve.

Firstly, current technologies will break down before about 0.01 microns, so we're not going to get much more than around five years before transistor densities start to slow their increase dramatically.

But moving to alternative semiconductors can hugely affect the amount of power it takes to operate the chips. Gallium arsenide, for example, has an electron mobility (which translates almost directly to conductivity) of about five times that of silicon. I think this, or a similar step, will be the first one taken to continue advancing processors. It will probably be used in conjunction with smarter chip design.
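
As a first-order sketch of why mobility matters: in the textbook long-channel MOSFET model, saturation drive current scales linearly with carrier mobility, and gate delay goes roughly as C·V/I, so a 5x mobility advantage would mean roughly a 5x lower intrinsic gate delay with everything else held equal. The formula and the "everything else equal" framing are my simplification (the reply below about GaAs power draw and yields is the real-world caveat).

```python
def saturation_current(mobility, cox, w_over_l, v_gs, v_t):
    """Long-channel MOSFET saturation current: I_D = (mu*Cox/2)*(W/L)*(Vgs-Vt)^2."""
    return 0.5 * mobility * cox * w_over_l * (v_gs - v_t) ** 2

def gate_delay(load_capacitance, v_dd, drive_current):
    """First-order gate delay estimate: t ~ C*V / I."""
    return load_capacitance * v_dd / drive_current

# Hold every parameter fixed and scale only mobility by the quoted 5x factor.
base = dict(cox=1.0, w_over_l=10, v_gs=1.2, v_t=0.4)
i_si = saturation_current(mobility=1.0, **base)
i_other = saturation_current(mobility=5.0, **base)
print(gate_delay(1.0, 1.2, i_si) / gate_delay(1.0, 1.2, i_other))  # -> 5.0x faster gate
```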

After that, there may be movements to, say, analog processing or other data compression techniques (do more with fewer transistors), possibly with a move to make many smaller, higher-frequency cores on a single chip (possibly with slightly offset frequencies to prevent radiation or interference).

Some time later, there will likely be a large move to some other processing method. Quantum computing is one possibility, but I'm not yet convinced that it is widely-applicable. Some algorithms will work many times faster on a quantum computer, others will not. So, we'll have to see. At any rate, it will be highly intriguing.
 
AFAIK, Gallium Arsenide is not really useful except for niche products - it draws something like ten times as much power as silicon and has awful yields once you get more than about a million transistors per chip. Something similar applies to most of the alternative semiconductors as well.
 
And with respect to realtime graphics ASICs:

First off, the notion that we are anywhere close to "good enough" is silly. Can you still tell the difference between best-of-breed game graphics and high-end offline CG (e.g. Finding Nemo or, gag-worthy as it was, The Hulk) displayed on your computer monitor? And can you still tell the difference between The Hulk and a film of a theoretical real-life Hulk? And, for that matter, can you still tell the difference between a film of actors (projected in a theater) and people in real life?

We have a long, long way to go.

With that said, one of the most notable features of consumer 3d ASICs is their extraordinary rate of performance gains all the way back to when they first hit in 1995. The rate of performance gains has arguably been faster for longer than that of any other category of IC. Will progress continue to be that rapid? To figure that out, we need to look at the sources of the steady performance gains:

1) Increasing size of the market. As 3d graphics have become more and more compelling, and 3d-capable consumer GPUs have become ever cheaper, the size of the market has grown immensely, from nil in 1995 to near saturation in the desktop space now. There are still significant market-size gains to be made as 3d becomes more common in laptops--with near-total engineering overlap between laptop and desktop parts. And everyone is betting on large gains from diverse markets such as set-top boxes and PDAs, although GPUs targeted at those markets will necessarily require significantly different designs than desktop parts. And there might still be a decent bump from Longhorn.

But in general, this trick is played out: the fact of huge money on the demand side is now quite well matched by huge resources on the supply side at ATI and Nvidia.

2) An embarrassingly parallel problem. General-purpose CPUs have a very difficult time converting the increased transistor budget given them by Moore's Law into corresponding performance gains. That's because the limiting factor in general-purpose performance is extracting parallelism from inherently serial instruction streams, rather than putting up the resources to actually execute the instructions. The problem of putting an image on the screen--at least the way 3d rendering is done today--is, on the other hand, inherently parallel: each pixel's color value is calculated independently of any other's.

This isn't going to change. However, the workload involved in calculating each individual pixel's value is becoming more serialized and taking on some of the characteristics of general-purpose computing. This seems to be inevitable for us to continue to reap increases in realism. And increases in raw pixel throughput are worthless, as we can already output relatively simple pixels faster than monitors can display them.
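
A minimal sketch of what "embarrassingly parallel" means here: each pixel's value depends only on that pixel's own inputs, so the loop below could be split across any number of pipelines with no coordination between them. The toy shading function is mine, purely for illustration.

```python
def shade_pixel(x, y, width, height):
    """Toy per-pixel 'shader': a color gradient that needs only this pixel's coordinates."""
    return (x / (width - 1), y / (height - 1), 0.5)

width, height = 8, 4
# Every iteration is independent of every other -- the defining property of an
# embarrassingly parallel workload, and why adding pixel pipes scales so well.
framebuffer = [[shade_pixel(x, y, width, height) for x in range(width)]
               for y in range(height)]
```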

Future hardware will have to make the tradeoff between (just to take the pixel shader pipeline as an example) a smaller number of more capable pixel pipes and a larger number with fewer resources per pipe. At the moment, this tradeoff is rather meaningless, since the pixel pipeline is so simple it can pretty well be described simply by counting its functional units. Once shader workloads become more like general-purpose computing--dominated by control flow rather than execution resources--shader pipelines will start to look more like the datapath of a general-purpose CPU, and then this tradeoff will become much more important.
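
A toy cost model of that tradeoff (the transistor budget, per-pipe costs, and utilization figures are invented purely for illustration, not a description of any real part): with a fixed budget, more capable pipes means fewer of them, and the design only pays off once control-flow-heavy shaders start wasting the simpler pipes' cycles.

```python
def pixels_per_clock(transistor_budget, transistors_per_pipe, utilization):
    """Toy model: throughput = number of pipes * fraction of useful cycles."""
    return (transistor_budget // transistors_per_pipe) * utilization

BUDGET = 64_000_000  # hypothetical transistor budget for the pixel pipeline

# Simple, fixed-function-style shaders: both designs stay busy, so many cheap pipes win.
print(pixels_per_clock(BUDGET, 4_000_000, 0.95),   # 16 simple pipes -> 15.2
      pixels_per_clock(BUDGET, 8_000_000, 0.95))   #  8 capable pipes -> 7.6

# Branch-heavy shaders: assume the simple pipes stall far more often.
print(pixels_per_clock(BUDGET, 4_000_000, 0.40),   # -> 6.4
      pixels_per_clock(BUDGET, 8_000_000, 0.85))   # -> 6.8
```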

3) One-time algorithmic increases. Perhaps the most significant reason GPUs have reaped performance gains at a faster rate than Moore's Law proceeds is that increasing transistor budgets have allowed not just for more parallelism in the hardware design, but for new algorithmic techniques that substantially improve efficiency. A perfect example is multisampling plus anisotropic filtering vs. supersampling. Each achieves the same ends (more or less), but MS + AF is an extraordinary efficiency gain over SS. But MS + AF also takes more logic to implement, particularly when you include the color compression that makes MS + AF significantly more bandwidth efficient than SS. Other examples are hierarchical Z, early Z reject, and Z compression. Yet another example is tile-based deferred rendering. Another is texture compression.
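
A rough arithmetic sketch of why MS + AF is such an efficiency win over SS (the sample count is illustrative and the compression ratio is an assumed average, not a measured figure): 4x supersampling shades and writes four full samples per pixel, while 4x multisampling shades each pixel once and only the per-sample color/Z storage grows, most of which compresses well.

```python
def supersample_cost(pixels, samples):
    """Shader invocations and uncompressed color samples written for naive SSAA."""
    return {"shader_invocations": pixels * samples,
            "color_samples_written": pixels * samples}

def multisample_cost(pixels, samples, compression_ratio=0.5):
    """Same counts for MSAA; compression_ratio is an assumed average for color compression."""
    return {"shader_invocations": pixels,  # one shading result shared by all covered samples
            "color_samples_written": pixels * samples * compression_ratio}

pixels = 1024 * 768
print(supersample_cost(pixels, 4))   # 4x the shading work and 4x the raw writes
print(multisample_cost(pixels, 4))   # 1x shading, and the extra samples compress
```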

Greater transistor budgets allow designers to dedicate the resources necessary to implement more efficient algorithms in hardware. The resulting performance is often much greater than if the transistors had been used instead to provide more of the same naive functional units. The question is how many such leaps await us in the future, and how much of the low-hanging fruit has been picked. One such algorithmic improvement we can probably expect in the next couple years is analytical AA techniques like Z3. (Matrox's FAA is similar, but trades a simpler implementation for probably unacceptable artifacts in the form of missed edges.)

But how many others are there on the horizon? And will they enable us to sustain the benefits reaped in the last few years from MS + AF, compression, and early Z?

Beats me.

In the medium term, I'd say this is the biggest factor determining whether GPUs will continue to scale in performance at their remarkable historic rate.
 