Tim's thoughts

A lot of Beyond3D members contributed questions for an interview with Tim Sweeney not so long ago, and while I can't just reproduce the answers here, I did want to post some feedback. :)

He does not expect to rewrite the rendering engine from scratch as it'll take too long.

The 'fuzzy shadow' demo shown at NV30 will be used in this next-gen engine. It was only running on one character then, but he says it's scaling well now. I'm not sure of the implementation details, but he mentioned shadow Z-buffers.

The engine will focus on this technology and high-quality encoded textures (geometry-based normal maps).

Other thoughts:
He doesn't think RenderMan-style micro-polygons will happen on the PC. He thinks the visual quality improvement from better encoded textures is greater than from more geometry.

Tim still holds the view that CPUs will make GPUs redundant in the future, but that high-end machines will still ship with graphics accelerators. I mentioned that bilinear filtering was much faster on a Voodoo than on a P4. He says some may choose to do pixel-shader-based filtering once the hardware is fast enough, and that CPUs will become more efficient once the ops all become floating point.

He's not as optimistic as John Carmack on 'only one more engine to write'. He sees the problems of real-time radiosity, and then real-time dynamic radiosity (for specular lighting), as taking many more decades to solve.

And on CELL, he emphasised the need for unified memory access: "You can't just send DMA packets everywhere." ;) He said 16 cores is very reasonable and that we're likely to see that from Intel/AMD in the future. But I don't think Mfa's question was answered word for word.

Sorry for the rush, must go away now. No internet access in the outback for 5 days. Will be back by Saturday. Have fun!
 
JF_Aidan_Pryde said:
He's not as optimistic as John Carmack on 'only one more engine to write'. He sees the problems of real-time radiosity, and then real-time dynamic radiosity (for specular lighting), as taking many more decades to solve.
I don't think JC ever said that there was only "one more engine to write." I seem to remember him stating something along the lines that he personally was going to write at least one more engine, which says more about him than about the industry as a whole.
 
JF_Aidan_Pryde said:
I'm not sure of the implementation details, but he mentioned shadow Z-buffers.

I don't know for sure what they're doing, but there are tons of very quick soft shadowing algorithms, similar to the one a lot of people are going for here:
http://graphics.csail.mit.edu/~ericchan/papers/smoothie/

It's slower than normal shadow buffers, but still a lot faster than normal stencil shadow volumes, and it's soft. Win, win. It's not actually accurate, though, since the softness extends outward as the area of the light increases, rather than inward. But I really doubt any gamers are going to care, or even notice.
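
For anyone curious what a shadow-buffer lookup with soft edges looks like in practice, here's a bare-bones percentage-closer filtering loop in C++. To be clear, this is not the smoothie algorithm from that paper, and all the names and the data layout are invented for illustration; it's just about the simplest way to get a soft transition out of depth comparisons against a shadow Z-buffer:

```cpp
#include <vector>

struct ShadowMap {
    int size;                 // square map, size x size
    std::vector<float> depth; // light-space depth per texel
};

// Percentage-closer filtering: compare the receiver's light-space depth
// against a small neighbourhood of shadow-map texels and average the
// pass/fail results, giving a soft transition instead of a hard edge.
float shadowFactorPCF(const ShadowMap& sm, float u, float v,
                      float receiverDepth, int radius = 1,
                      float bias = 0.002f) {
    int cx = int(u * sm.size);
    int cy = int(v * sm.size);
    float lit = 0.0f;
    int samples = 0;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int x = cx + dx, y = cy + dy;
            if (x < 0 || y < 0 || x >= sm.size || y >= sm.size) continue;
            float stored = sm.depth[y * sm.size + x];
            // This texel counts as lit if the stored occluder depth is not
            // closer to the light than the surface being shaded.
            lit += (receiverDepth - bias <= stored) ? 1.0f : 0.0f;
            ++samples;
        }
    }
    return samples ? lit / samples : 1.0f;  // 1 = fully lit, 0 = fully shadowed
}
```

A wider radius gives softer (and more expensive) edges, which is why this sort of thing is so much happier on hardware with fast dedicated texture lookups.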

Chalnoth said:
I don't think JC ever said that there was only "one more engine to write." I seem to remember him stating something along the lines that he personally was going to write at least one more engine, which says more about him than about the industry as a whole.

We are very, very quickly approaching the point of diminishing returns for real-time computer graphics, as offline CG did a long time ago, where the people actually buying the games won't see enough of a difference to care anymore (e.g. very few people I talk to outside of computer graphics seem to notice any real difference between Toy Story's graphics and Finding Nemo's). By all accounts, an engine with the R300's feature set as the absolute minimum baseline should be close to that, and if not, DX Next's feature set will get there. Though if you ask a bunch of PS2 owners on the street, they'd tell you that this transition came years ago. :)
 
JF_Aidan_Pryde said:
Tim still holds the view that CPUs will make GPUs redundant in the future, but that high-end machines will still ship with graphics accelerators. I mentioned that bilinear filtering was much faster on a Voodoo than on a P4. He says some may choose to do pixel-shader-based filtering once the hardware is fast enough, and that CPUs will become more efficient once the ops all become floating point.

And what if GPUs turn, in the foreseeable future, into specialized graphics-function "CPUs"? I have a hard time understanding how a general-function unit can turn out to be more adequate than dedicated hardware, but then again, what the heck do I know...
 
Well, I don't believe Tim's vision will come true in the reasonably close future, or even in my lifetime, unless there is some major technological breakthrough that makes CPUs infinitely fast or something like that. Until then, dedicated hardware will always be faster.
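
To put some flesh on the bilinear filtering example Tim was asked about: below is roughly what a single bilinear sample costs in plain scalar C++. The texture layout and names are invented for illustration, and a real software renderer would use SIMD and fixed point, but a dedicated texture unit does all of this in hardwired logic, essentially per clock per pipeline:

```cpp
#include <cstdint>
#include <cmath>

// Hypothetical RGBA8 texture; layout and names are made up for this sketch.
struct Texture {
    int width, height;
    const uint32_t* texels;  // row-major RGBA8
};

// Fetch one texel with clamp-to-edge addressing; channels returned as floats.
static void fetchTexel(const Texture& t, int x, int y, float rgba[4]) {
    x = x < 0 ? 0 : (x >= t.width  ? t.width  - 1 : x);
    y = y < 0 ? 0 : (y >= t.height ? t.height - 1 : y);
    uint32_t p = t.texels[y * t.width + x];
    rgba[0] = float( p        & 0xFF);
    rgba[1] = float((p >> 8)  & 0xFF);
    rgba[2] = float((p >> 16) & 0xFF);
    rgba[3] = float((p >> 24) & 0xFF);
}

// One bilinear sample: four texel fetches plus three lerps per channel.
void sampleBilinear(const Texture& t, float u, float v, float out[4]) {
    float x = u * t.width  - 0.5f;
    float y = v * t.height - 0.5f;
    int   x0 = int(std::floor(x)), y0 = int(std::floor(y));
    float fx = x - x0, fy = y - y0;

    float c00[4], c10[4], c01[4], c11[4];
    fetchTexel(t, x0,     y0,     c00);
    fetchTexel(t, x0 + 1, y0,     c10);
    fetchTexel(t, x0,     y0 + 1, c01);
    fetchTexel(t, x0 + 1, y0 + 1, c11);

    for (int i = 0; i < 4; ++i) {
        float top    = c00[i] + (c10[i] - c00[i]) * fx;  // lerp along x, top row
        float bottom = c01[i] + (c11[i] - c01[i]) * fx;  // lerp along x, bottom row
        out[i]       = top    + (bottom - top)    * fy;  // lerp along y
    }
}
```

Multiply that by every texture layer of every pixel on screen and you can see why a general-purpose CPU has a mountain to climb before it catches a texture unit at its own game.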
 
One problem I see with the idea of the CPU taking over the duties of the GPU is that it's another step back towards "one big server," where a single processor handles (and controls) more and more functions. The fact of the matter is that a more decentralised system, much like the ones we have now, is going to provide more total capability. It's a lot more efficient to have dedicated processors for video, audio, general processing, and even I/O operations than to have everything run through one processor.

However, I wouldn't be surprised to see his prediction taking an ever larger part of the budget market. It's relatively common even now to find a fairly cheap system with a 2.6 or 2.8GHz CPU, onboard software sound, and onboard graphics where the CPU is doing all the processing. Those systems are going to be all too common in the future, simply because of the way numbers sell, and having an engine that will work on such systems isn't that bad an idea, no matter how much weaker they are than a properly balanced system.
 
Rugor said:
However, I wouldn't be surprised to see his prediction taking an ever larger part of the budget market. It's relatively common even now to find a fairly cheap system with a 2.6 or 2.8GHz CPU, onboard software sound, and onboard graphics where the CPU is doing all the processing. Those systems are going to be all too common in the future, simply because of the way numbers sell, and having an engine that will work on such systems isn't that bad an idea, no matter how much weaker they are than a properly balanced system.

Even today, I don't see a 3.2GHz system with integrated graphics being able to cope with recent games. Intel has such a high market share in the graphics market for a reason.

I don't doubt that CPU computational power and efficiency will increase in the future more than I can imagine, yet I don't expect in-game AI and physics to stand still either.
 
A big part of Intel's graphics market share is due to the fact that the majority of people base their computer purchases on two factors: CPU speed and price. The easiest way to maximise CPU speed (in terms of raw clocks) while minimising price is to go with an Intel CPU and Extreme Slideshow graphics onboard.

While I wouldn't call such a system balanced, or even particularly able to perform well in current games, it's common, and if you can offload things to the CPU so the game performs better on those systems, it's likely to improve your sales. That's exactly why Unreal has gone back to adding a software mode.
 
While I wouldn't call such a system balanced, or even particularly able to perform well in current games, it's common, and if you can offload things to the CPU so the game performs better on those systems, it's likely to improve your sales. That's exactly why Unreal has gone back to adding a software mode.

I was under the impression that early Unreal development was in fact entirely based on software rendering. I recall it being present in all their engines, not as an afterthought.

Game sales are an entirely different beast; it still doesn't explain how generic hardware, however powerful it gets and with the appropriate software on top, will be able to render dedicated hardware redundant.

The purpose of having dedicated hardware is to offload the CPU; I don't see how that gap can be closed so easily, nor have I seen, so far, CPU processing power increase at the same rate that GPU processing power has since the advent of 3D.
 
In some ways I agree with Tim. As graphics cards move towards being general-purpose pixel processors, and devs move away from straightforward texture layering, it should be possible for general CPUs to start to close the gap again.
It may take a while... but imagine a 4-way hyperthreaded P4 handling a graphics process: laying down the iterators for a triangle, one by one, into streams; then checking visibility and storing the results in a parameter buffer; and finally running pixel shaders stage by stage from textures held in cache as long as possible. The efficiency may not match the top-of-the-range graphics cards, but brute force may just be enough (just like everyone takes the brute-force Z-buffer for granted today).
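
To make that concrete, here's a minimal sketch of the kind of two-pass software pipeline being described: visibility resolved into a parameter buffer first, shading second. Everything in it is invented for illustration, with no SIMD, no threading and no perspective correction, so treat it as the shape of the idea rather than anything performance-worthy:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };
struct Tri  { Vec2 v[3]; float z[3]; uint32_t color; };

struct Fragment {            // one entry per pixel in the parameter buffer
    float depth = 1e30f;     // nearest depth seen so far
    int   tri   = -1;        // index of the triangle that won the depth test
    float b0 = 0, b1 = 0, b2 = 0;  // barycentrics for attribute interpolation
};

static float edgeFn(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Pass 1: walk each triangle's bounding box, depth-test, and store
// interpolation parameters instead of shading immediately.
void visibilityPass(const std::vector<Tri>& tris, int w, int h,
                    std::vector<Fragment>& pbuf) {
    pbuf.assign(size_t(w) * h, Fragment{});
    for (size_t t = 0; t < tris.size(); ++t) {
        const Tri& tr = tris[t];
        float area = edgeFn(tr.v[0], tr.v[1], tr.v[2]);
        if (std::fabs(area) < 1e-6f) continue;  // degenerate triangle
        int x0 = std::max(0,     int(std::floor(std::min({tr.v[0].x, tr.v[1].x, tr.v[2].x}))));
        int x1 = std::min(w - 1, int(std::ceil (std::max({tr.v[0].x, tr.v[1].x, tr.v[2].x}))));
        int y0 = std::max(0,     int(std::floor(std::min({tr.v[0].y, tr.v[1].y, tr.v[2].y}))));
        int y1 = std::min(h - 1, int(std::ceil (std::max({tr.v[0].y, tr.v[1].y, tr.v[2].y}))));
        for (int y = y0; y <= y1; ++y) {
            for (int x = x0; x <= x1; ++x) {
                Vec2 p{x + 0.5f, y + 0.5f};
                float b0 = edgeFn(tr.v[1], tr.v[2], p) / area;
                float b1 = edgeFn(tr.v[2], tr.v[0], p) / area;
                float b2 = edgeFn(tr.v[0], tr.v[1], p) / area;
                if (b0 < 0 || b1 < 0 || b2 < 0) continue;  // pixel outside triangle
                float z = b0 * tr.z[0] + b1 * tr.z[1] + b2 * tr.z[2];
                Fragment& f = pbuf[size_t(y) * w + x];
                if (z < f.depth) {
                    f.depth = z; f.tri = int(t);
                    f.b0 = b0; f.b1 = b1; f.b2 = b2;
                }
            }
        }
    }
}

// Pass 2: each covered pixel is shaded exactly once from the parameter
// buffer; the stand-in "pixel shader" just emits the triangle's flat colour.
void shadingPass(const std::vector<Tri>& tris, const std::vector<Fragment>& pbuf,
                 std::vector<uint32_t>& framebuffer) {
    framebuffer.assign(pbuf.size(), 0u);
    for (size_t i = 0; i < pbuf.size(); ++i)
        if (pbuf[i].tri >= 0)
            framebuffer[i] = tris[size_t(pbuf[i].tri)].color;
}
```

The deferred split is what makes the "textures held in cache as long as possible" part plausible: shading work is done once per pixel, after visibility, rather than per overdrawn fragment.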
 
I don't know about you, but I for one will always want something faster and better. I don't foresee dedicated graphics cards going anywhere within the next 10 years. All the cycles used by the processor can be better spent on other important things, such as AI and physics... Why steal cycles away from what is typically the weakest part of any game?
 
Well, if Sweeney turns out to be right, I'd have every reason to jump up and down at the thought of all the possible options. Think of all the transistors a VPU could dedicate to IQ-improving features such as anti-aliasing. :LOL:

It may take a while... but imagine a 4-way hyperthreaded P4 handling a graphics process: laying down the iterators for a triangle, one by one, into streams; then checking visibility and storing the results in a parameter buffer.

No prediction of what a graphics accelerator could look like by then? As I said, GPUs have scaled in processing power through the years by a lot more than CPUs, and anything a future CPU could do would be possible on a GPU too, just with much higher efficiency, since it's dedicated hardware and can be a very good companion to the CPU.

Graphics hardware already uses multithreading in one form or another. So do the recent, more sophisticated DSPs on sound chips...
 
Personally, I've been wondering for a while whether it's not so much performance as cost that will move dedicated graphics processors even further out of the mainstream.

When I spec out a gaming system for someone, I usually budget about the same amount for the video card as I do for the CPU plus motherboard. Admittedly that's more often with AMD than Intel hardware, but the principle is still there, just with smaller margins. With Intel systems I'd probably budget about the same for the CPU and graphics.

A lot of people want to cut costs, and that's the easiest way to do it, so slowly but surely those systems will advance to take ever more of the market. I don't think it's a good trend, quite the opposite, but I'm coming to think that's more and more the way the market is heading.
 
Rugor said:
One problem I see with the idea of the CPU taking over the duties of the GPU is that it's another step back towards "one big server," where a single processor handles (and controls) more and more functions. The fact of the matter is that a more decentralised system, much like the ones we have now, is going to provide more total capability. It's a lot more efficient to have dedicated processors for video, audio, general processing, and even I/O operations than to have everything run through one processor.

However, I wouldn't be surprised to see his prediction taking an ever larger part of the budget market. It's relatively common even now to find a fairly cheap system with a 2.6 or 2.8GHz CPU, onboard software sound, and onboard graphics where the CPU is doing all the processing. Those systems are going to be all too common in the future, simply because of the way numbers sell, and having an engine that will work on such systems isn't that bad an idea, no matter how much weaker they are than a properly balanced system.

Agreed. It used to be that you had to have a hardware sound card and a hardware modem. These days (and it's not such a new development) most people can make do with software implementations. When I see something like Nick's software pixel shaders, I feel that it may indeed be possible to have CPU rendering in the future. It'd be slower than dedicated hardware, but may be good enough for most people. After all, a lot of the power of current graphics cards goes to providing decent performance with AA and AF, and I imagine that most people won't really feel the need for that, just like they don't feel the need for EAX, 5.1, or whatever sound technology.
 
Ailuros said:
No prediction of what a graphics accelerator could look like by then? As I said, GPUs have scaled in processing power through the years by a lot more than CPUs, and anything a future CPU could do would be possible on a GPU too, just with much higher efficiency, since it's dedicated hardware and can be a very good companion to the CPU.

IMHO the "dedicated hardware" mantra has run its course, and the sooner this concept dies the better off we'll all be. Fundamentally, the basic microarchitectural constructs are much the same in CPUs and GPUs, as both are carrying out analogous ops at the lowest levels. What has enabled graphical ICs to outperform CPUs historically has been the ability of IHVs to build massive concurrency into their designs, which for operations such as filtering or AA has enabled a speed-up that's orders of magnitude greater.

Yet two basic tenets (in my mind) have recently emerged that point to Tim seeing deeper and farther than many of you give him credit for. These being:

  • The shift (DX8/9) to shaders/microprograms and its intrinsic demand for programmability and architectural flexibility, which is steadily eroding many of the traditional "fixed" aspects of a dedicated IC. Already we're hearing about unified shaders and topology processing being adopted; why stop there?
  • The ability to keep pace with Moore's Law is seeing the creation of 100-million-plus gate devices on sub-100nm processes. What we'll see is the birth of large-scale concurrency in the CPU marketplace, much as predicted by Intel and IBM, who are actively presenting dual- and quad-core, SMT-enabled ICs at symposiums. This concurrency is, in fact, the only logical way to utilize the influx of logic and extract meaningful performance gains, as the well has otherwise dried up.
Together, these two convergent trends have set the stage for CPUs (or CPU-like ICs) to have a future much like the one Mr. Sweeney is discussing. And this isn't even touching projects like the Broadband Engine/STI Cell, which is difficult to classify; IMHO it's a disruptive force and will turn heads when the public sees what it is.
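
To make the concurrency point concrete, here's a toy sketch of the sort of embarrassingly parallel per-pixel work that scales with core count, using standard C++ threads. The shadePixel function and everything else here is a made-up stand-in, not any real shader or engine code:

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Stand-in per-pixel work: just packs the coordinates into a colour.
static uint32_t shadePixel(int x, int y) {
    return 0xFF000000u | uint32_t((x & 0xFF) | ((y & 0xFF) << 8));
}

// Split a purely data-parallel job (filling a framebuffer) across however
// many hardware threads the CPU exposes.
void shadeFrame(std::vector<uint32_t>& framebuffer, int width, int height) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&framebuffer, width, height, t, n] {
            // Interleave rows across threads: pixels are independent, so the
            // work scales with core count until memory bandwidth gives out.
            for (int y = int(t); y < height; y += int(n))
                for (int x = 0; x < width; ++x)
                    framebuffer[size_t(y) * width + x] = shadePixel(x, y);
        });
    }
    for (auto& w : workers) w.join();
}

int main() {
    const int w = 640, h = 480;
    std::vector<uint32_t> fb(size_t(w) * h);
    shadeFrame(fb, w, h);
    return 0;
}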

EDIT: my 'I' Key is dirty, I apologize if I missed any.
 
ET said:
Rugor said:
One problem I see with the idea of the CPU taking over the duties of the GPU is that it's another step back towards "one big server," where a single processor handles (and controls) more and more functions. The fact of the matter is that a more decentralised system, much like the ones we have now, is going to provide more total capability. It's a lot more efficient to have dedicated processors for video, audio, general processing, and even I/O operations than to have everything run through one processor.

However, I wouldn't be surprised to see his prediction taking an ever larger part of the budget market. It's relatively common even now to find a fairly cheap system with a 2.6 or 2.8GHz CPU, onboard software sound, and onboard graphics where the CPU is doing all the processing. Those systems are going to be all too common in the future, simply because of the way numbers sell, and having an engine that will work on such systems isn't that bad an idea, no matter how much weaker they are than a properly balanced system.

Agreed. It used to be that you had to have a hardware sound card and a hardware modem. These days (and it's not such a new development) most people can make do with software implementations. When I see something like Nick's software pixel shaders, I feel that it may indeed be possible to have CPU rendering in the future. It'd be slower than dedicated hardware, but may be good enough for most people. After all, a lot of the power of current graphics cards goes to providing decent performance with AA and AF, and I imagine that most people won't really feel the need for that, just like they don't feel the need for EAX, 5.1, or whatever sound technology.

No offense, but I'm sorry, this is just a stupid analogy. Modems and sound cards don't scale the way graphics need to. A 56K modem from yesteryear didn't keep getting faster and faster, nor did it need to. Same with sound cards, with the exception of Nvidia's DICE chip, which is still done in dedicated hardware. Graphics will continue to NEED faster and faster GPUs; that's not going to change, so the analogy is flawed.
 
PC-Engine said:
No offense, but I'm sorry, this is just a stupid analogy. Modems and sound cards don't scale the way graphics need to. A 56K modem from yesteryear didn't keep getting faster and faster, nor did it need to. Same with sound cards, with the exception of Nvidia's DICE chip, which is still done in dedicated hardware. Graphics will continue to NEED faster and faster GPUs; that's not going to change, so the analogy is flawed.

Perhaps instead of calling it stupid, you should ponder it a bit longer. These two tasks were both capable of being performed by the host CPU, but due to their inherent nature (open to a speed-up through parallel processing implemented in dedicated logic) they were early to be exploited this way, once commodity IC viability rose and costs fell. Yet both are less intensive operations than 3D, and both have fallen back to software implementations to a large extent in the very recent past.

I'd consider this a precursor to a time when CPUs have the concurrency and speed to take on tasks as intensive as 3D, rather than just the sound and communications tasks they have already re-assimilated.
 
PC-Engine said:
No offense, but I'm sorry, this is just a stupid analogy. Modems and sound cards don't scale the way graphics need to. A 56K modem from yesteryear didn't keep getting faster and faster, nor did it need to. Same with sound cards, with the exception of Nvidia's DICE chip, which is still done in dedicated hardware. Graphics will continue to NEED faster and faster GPUs; that's not going to change, so the analogy is flawed.

Current high-end cards are overkill for most games unless AA is used. It used to be that 640x480 at 30 FPS was considered great, and only the best cards managed that. Now, in many cases, if you run at 1024x768 with no AA, $100 cards don't perform all that differently from $400 ones.

Once it was necessary to buy an expensive system to play games. You needed a decently fast CPU and a decently expensive graphics card to get good performance. Now anyone who's not a power gamer can buy the least expensive CPU and an inexpensive graphics card, and still be able to run games decently.

Will graphics NEED faster and faster GPUs? Not "need" with capital letters, probably. For eye candy, yes, but it's just like EAX and other sound technologies, which are ear candy, and most people can do without them. In that respect, I think that the analogy is sound (although I agree that the modem analogy isn't that good).
 