How will GPUs evolve in coming years?

The way I see it is this: today we have 16 half-flexible pipelines running at around 500 MHz, and all of it does rasterisation.

To get to interactive RenderMan quality, we need more of what we have now, but also stuff we don't have.

Can 256 pipelines, 5 GHz, and fully programmable hardware do the job?

It seems we need some other things too.

- How do we get better AA?
- Primitive processor?
- Should texture memory be totally virtualised and addressable?
- Does it make sense to use the same hardware block for pixel and vertex units? (I asked John Montrym about this and he said there's very little difference between PS and VS 'capabilities', but he was neither for nor against the idea of unifying the hardware.)

How will GPUs evolve to close the gap?
 
Probably a stupid layman idea on my part, but would something like an SoC stand a chance in the not-so-distant future?
 
I don't think it's specifically a function of pipelines, clock rate, or any other particular physical implementation detail.

The day the quality of graphics is exclusively the direct result of generalized computational power (with no real distinction between CPU and GPU) is the day the development tools will have to be equally evolved.
 
This is certainly interesting food for thought. Just seeing 256 pipelines @ 5GHz and imagining it gives me chills. There is one thing this leaves out that I think is crucial, so let me add it in so we can see an immediate constraint to balance the dream those other specifications send us into.

Let's assume that the current hardware is balanced in the same manner that future hardware will be. That is to say, this hypothetical future device works much in the same way as contemporary hardware, just faster. Let's also assume that the 6800 Ultra, for reference, is properly balanced at a 425 MHz core clock with 600 MHz DDR memory (1.2 GHz effective). Some quick computation:

256 pipelines / 16 pipelines = 16 times the number of pipelines
5,000 MHz / 425 MHz = 11.76 times the core speed

This means the burden on the memory sub-system to feed this hypothetical core logic is 16*11.76 = 188 times the memory bandwidth requirement of today.

This is just to keep it happy in the way contemporary hardware operates. This means an effective data rate of 188 × 1,200 ≈ 225,792 MHz, or 112,896 MHz DDR. Let's say you quadruple the width of the memory crossbar (a 1024-bit memory interface) and you still need 28,224 MHz DDR. Almost needless to say, you would probably need to quadruple the memory interface width one more time to get to numbers that look right.
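
In case anyone wants to poke at my arithmetic, here it is as a quick Python sketch. The baseline figures (16 pipes, 425 MHz core, 1,200 MHz effective memory, 256-bit bus) and the linear-scaling premise are just the naive assumptions above, nothing official, and rounding will shift the last few digits:

```python
# Naive scaling of the hypothetical 256-pipe, 5 GHz part against a
# 6800 Ultra baseline, assuming memory bandwidth must scale linearly
# with compute (the same naive premise as above).
base_pipes, base_core_mhz = 16, 425
base_mem_mhz, base_bus_bits = 1200, 256          # effective clock, bus width
future_pipes, future_core_mhz = 256, 5000

pipe_factor = future_pipes / base_pipes          # 16x
clock_factor = future_core_mhz / base_core_mhz   # ~11.76x
compute_factor = pipe_factor * clock_factor      # ~188x
needed_mem_mhz = base_mem_mhz * compute_factor   # ~225,900 MHz effective

print(f"compute factor: ~{compute_factor:.0f}x")
print(f"needed memory clock: {needed_mem_mhz:,.0f} MHz effective "
      f"({needed_mem_mhz / 2:,.0f} MHz DDR)")

# Widening the bus trades clock for width: every 4x widening of the
# 256-bit interface cuts the required DDR clock by 4x.
for bus_bits in (256, 1024, 4096):
    ddr_mhz = (needed_mem_mhz / 2) * base_bus_bits / bus_bits
    print(f"{bus_bits:>4}-bit bus -> {ddr_mhz:,.0f} MHz DDR")
```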

Looking at it this way, the hypothetical specifications look ludicrous and are best reserved for a manufacturing plant in our dreams. The size of the memory controller alone would be huge, and these are all transistors that need to be running at the proposed 5GHz. We are talking huge and hot. Then again, I am sure this is exactly what was said about ICs in the 70s, and what we have today breaks all the rules, including reality.

Then again, a 5GHz part with 256 pipelines would offer 188 times the computational power of what we have today. That's quite a bit. Doubling performance every 18 months would put this part in 2017 or so. Maybe there is still time for a miracle :D

Do we really need something 188 times as powerful as what we have in the 6800 Ultra right now to make photorealistic VR a reality?

EDIT:

Just to realign my naive computations with memory development, I computed where memory speed would be after 8 doubling cycles, and it actually works out nicely: 1,200 MHz × 2^8 = 307,200 MHz (or about 307 GHz). Now, for some reason I don't think that's quite realistic, but it would imply that a 256-bit bus holds up.
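
Showing my work on that one; the 18-month doubling cadence and the "8 cycles from roughly now" starting point are my own assumptions:

```python
import math

# Memory clock after 8 doubling cycles, starting from the 6800 Ultra's
# 1,200 MHz effective rate.
base_mem_mhz = 1200
print(base_mem_mhz * 2 ** 8)             # 307,200 MHz

# How far out ~188x more compute is if performance doubles every
# 18 months: 8 doublings (2^8 = 256) comfortably covers it.
doublings = math.ceil(math.log2(188))    # = 8
print(doublings * 1.5)                   # 12.0 years out, i.e. the "2017 or so" above
```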

Disclaimer: I did all this really quickly, rough & dirty, all my numbers could be wrong.
 
I had one more thought, and I include it here as a separate post so as not to confuse others and, especially, myself.

This future rendering would obviously need higher quality than we are used to seeing today. When thinking about it, it may be useful to assume that everything is done with HDR in mind. Think of the performance cost of HDR today and scale that forward.

I only bring this up because I fear someone will soon compute what this imaginary device will post in Quake III timedemos :p Instead we need to think "everything HDR" and "everything shaded...twice and then twice again" :D
 
wireframe,

The reason you're running into ridiculous bandwidth requirements is that you're working under the premise that bandwidth should scale linearly with computing power.

This premise may or may not hold, but for graphics I think it is too restrictive: we won't need a linear increase in bandwidth to keep up with the GPU.

Doing better graphics means increased algorithmic complexity. This means more math on existing data rather than more data with the same math. Of course you need more detailed textures and models, but not linearly. And there's always the option of dynamically generated content.

If pipelines are virtualised and one rendered pixel requires X bytes of data traveling through 256 pipes at 5GHz, then we don't have much of a bandwidth problem. :)
 
What if we hit a thermal wall like we did with CPUs? Wouldn't it be more realistic to think of 256 pipelines running at 500 MHz?
 
phenix said:
What if we hit a thermal wall like we did with CPUs? Wouldn't it be more realistic to think of 256 pipelines running at 500 MHz?

I was going to answer Aidan's second post, but I think I was pretty clear that I was making naive assumptions based on contemporary hardware, so I think we can skip that. However, I did want to point out something that relates to your post:

An NV40 running at 5GHz would probably drop a substantial number of jaws around here. With 16 times the number of pipelines, let's be kind and only multiply the transistor count by 8; this would mean a GPU consisting of roughly 1.8 billion transistors.
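
For the curious, the 1.8 billion figure comes straight from this; NV40 at roughly 222 million transistors is from memory, so treat it as an approximation:

```python
# Rough transistor count for the hypothetical part: take NV40
# (~222 million transistors) and, being kind, scale by only 8x
# rather than the full 16x increase in pipelines.
nv40_transistors = 222e6
lenient_scaling = 8
print(f"{nv40_transistors * lenient_scaling / 1e9:.1f} billion")   # ~1.8
```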

Thermal implications aside, just consider the power requirements. Let's not forget this thing would need lots of memory because you would need very good base textures to work with (add geometry and working space, etc).
 
I'm going to go with what I call the "Carmack Theory" and say that better graphics means better lighting and shadowing above all else. If you add Pixar-quality lighting and shadowing, then you won't need anywhere near the same ratio of computing power to memory bandwidth.

What we need is high-quality, real-time global illumination with all of the other fancy things included. I believe people notice the fine details of the way light interacts with objects much more than the fine details of an object's shape and inherent colouring.
 
In order to advance into the holodeck realm I firmly believe we are going to have to scrap everything we currently know and start from scratch.
 
wireframe said:
256 pipelines / 16 pipelines = 16 times the number of pipelines
5,000 MHz / 425 MHz = 11.76 times the core speed
Such a configuration is unlikely, to say the least. 5GHz is essentially going to be unattainable except on rather small processors (smaller than today's GPUs), and transistor densities have an absolute limit.

I think it's highly likely that we'll have started to move towards some other technology (other than silicon-based transistors) before we have a silicon-based part with the above specifications.

As a side comment on your other calculations, though: once densities rise past a certain point, it will give higher performance to have fewer pipelines and instead spend the transistors on large amounts of on-die memory.
 
I say all gfx chips will soon be based on some kind of deferred rendering, regardless of lighting/shadowing (which will become better, for sure).
 
I don't think in 2017 they'll use textures anymore. Everything will be shaders or other stuff they come up with.
 
We are already entering the time where graphics quality is determined more by the quality of the artists than by the speed of the hardware. There are already games that aren't technologically very impressive but ARE impressive artistically, and then there are games with all sorts of shaders and special effects that, in the end, don't look very good at all.
 
maniac said:
I don't think in 2017 they'll use textures anymore. Everything will be shaders or other stuff they come up with.
Absolutely, except that those shaders will use a big table of constants :) :p
 
And then you have to store that generated stuff somewhere, which brings us back to the beginning...
 
While I do think we'll see a lot more shaders, I just don't see textures going away. You can't procedurally generate a Coca-Cola logo, or a magazine cover, or a human face. Too much game content is about human environments, and those have to be made explicitly.

Guys, what about hardware? Where do you see GPU hardware heading?
 