Sweeney and Richards on the future of 3D

Started watching the first... have these guys never heard of a tripod?
 
Not having watched the videos I'd have to say... software renderer for Tim.

@Simon: Semi Accurate... positioning. It's in the name!™
 
They talk about a lot of stuff.

They talk about how DirectX is a dead end due to how it fundamentally works. Tim thinks that DirectX is holding everything back and is the cause of diminishing returns, despite hardware being many times more powerful than it was years ago. They want to get rid of the usual rasterization artifacts (texture and polygon aliasing). Tim says that if they'd continued working on UE1's software renderer, it would probably have fewer artifacts today than DirectX does. There were aspects of the software renderer that were better than what they could do with 3DFX hardware (due to its limitations). Software innovation makes fixed-function hardware unimpressive and limiting too quickly.

There is some chat about how hardware architectures aren't around long enough to be fully explored before they are obsolete.

They talk about software rasterization on custom hardware or on GPGPU. Also, current GPGPU is pretty useless from both the software and hardware angles. Separate GPU and CPU chips are bad due to communication problems: the two are good at different things and don't work well alone, so they need to communicate quickly, and separation means unusable latency. Cell is interesting but not all that great (lots of things need to change), yet they are stuck dealing with Cell for years to come. They also talk about the viability of the economics for the various parties involved in the hardware and the games, who would be interested in breaking the mold, who has the power to do so, etc.

There is chat about multicore CPUs and their chicken-and-egg issue of not having many multithreaded apps around while the CPUs are being designed. Compiler info. Epic made big investments into developing for multicore when it became clear that it was the future. Chat about the major challenge of heavily threading games: they need to find ways to go even more fine-grained to utilize more and more threads, and it isn't getting easier. Cue more chat of GPU/CPU "fusion". The current trend of ever wider general-purpose CPUs is not particularly useful in the long term.
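
One possible reading of "more fine-grained", sketched below purely as an illustration (nothing here is code Epic described): instead of parking each subsystem on its own thread, the per-frame work is cut into many small chunks that a pool of workers pulls from, so the same code keeps scaling as core counts grow. The entity update, counts and chunk size are all made up.

```cpp
// Sketch only: fine-grained tasking vs. one-thread-per-subsystem.
// updateEntity, kEntities and kChunk are invented for the example.
#include <algorithm>
#include <atomic>
#include <cmath>
#include <thread>
#include <vector>

static void updateEntity(float& state) {
    state = state + 0.5f * std::sin(state);   // stand-in for per-entity game logic
}

int main() {
    constexpr int kEntities = 100000;
    constexpr int kChunk    = 1024;           // small tasks spread over any core count
    std::vector<float> state(kEntities, 1.0f);

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<int> next{0};
    std::vector<std::thread> pool;

    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&] {
            // Each worker keeps grabbing the next small chunk until none remain,
            // so the same code scales from 2 cores to 32 without hand-balancing.
            for (;;) {
                const int begin = next.fetch_add(kChunk);
                if (begin >= kEntities) break;
                const int end = std::min(begin + kChunk, kEntities);
                for (int i = begin; i < end; ++i) updateEntity(state[i]);
            }
        });
    }
    for (auto& th : pool) th.join();
    return 0;
}
```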

Power consumption comments. Andrew thinks that the next consoles can't exceed the power usage of the current ones.
 
They talk about how DirectX is a dead end due to how it fundamentally works. Tim thinks that DirectX is holding everything back and is the cause of diminishing returns, despite hardware being many times more powerful than it was years ago. They want to get rid of the usual rasterization artifacts (texture and polygon aliasing). Tim says that if they'd continued working on UE1's software renderer, it would probably have fewer artifacts today than DirectX does. There were aspects of the software renderer that were better than what they could do with 3DFX hardware (due to its limitations). Software innovation makes fixed-function hardware unimpressive and limiting too quickly.

Quelle surprise! :sleep:
 
Here is the transcript of the conversation (thanks harison for pointing it out to us ;) )

I'm really grateful to the people who wrote this down, because between my bad English and the noisy vids I barely understood one word in two...

By the way I missed a pretty interesting part:
CD: A CPU backed by a GPU, passing stuff over a bus: will that satisfy the flexibility needs plus fixed function? And do you think moving that onto a single die will help, or will it not be good enough and get left behind by more flexibility?
*
TS: Well, you've touched on a key thing that's broken with CUDA, right? So you have a lot of high-level code that makes control flow decisions and high-level decisions, and that's running on the CPU, and then when it wants to instigate a vector computation it hands that work off to the GPU, the GPU can then be used to run it, and it hands the results back. So this workload is continually ping-ponging back and forth between the CPU and the GPU, and the problem is there's a million clock cycles of latency between the two, and that's what's broken. What you need is at least the computing and the graphics side done on the same chip, so that the communication latency is minimal. Even preferable to that is a single unified core architecture which supports both scalar and vector computing, like Larrabee, so you can run all of this computation together without having to switch cores or switch caches or anything.
*
AR: I'd just like to say, I agree with Tim totally about this issue of latency between the CPU and GPU; I think that's a really, really, really serious issue. So I do think it makes sense to have the CPU and the GPU on the same chip, for exactly the same reason: a million cycles of latency really messes things up.
*
CD: Does that fix the problem, or does that minimize the problem? Or does it just help a little?
*
AR: I think it makes a lot of difference if you can just say 'right, this bit of code here, you know it's making lots of decisions, so it should be on this core', right, 'this bit of code here is like really wide vectors, so it should be on this core'. But, as Tim says, you need to be able to switch from one to the other very quickly, and that requires you to be on the same chip. So the interesting thing is that at the moment we have the CPU on one chip and we have the GPU on another chip, and I think, you know, that division makes no sense; we should have the CPU and GPU on one chip, and if you're going to have two chips you should have another CPU+GPU, because actually the CPU+CPU com... (Charlie cuts in)
I find this POV interesting, especially after reading this thread.
It's a pretty fresh take, as comparing a fusion chip (obviously one chip) to a dedicated CPU + dedicated GPU (obviously two chips) is the basis of most conversations I read here and there.
Dual-socket mobos to strike back?
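
For anyone who hasn't touched CUDA, here is a minimal sketch of the ping-pong pattern Sweeney describes above. It is my own construction, not code from the talk, and the loop, sizes and host-side "decision" are invented; the point is simply that every iteration pays a host-to-device copy, a GPU call and a device-to-host copy before the CPU can make its next branchy decision, and that round trip is where the latency he complains about piles up.

```cpp
// Sketch of the CPU<->GPU "ping-pong": branchy decisions stay on the host,
// each batch of vector work is shipped across the bus, and the host stalls
// on the results before deciding what to do next.
// Plain C++ using the CUDA runtime + cuBLAS (link with -lcudart -lcublas).
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cublasHandle_t blas;
    cublasCreate(&blas);

    double sum = 0.0;
    for (int step = 0; step < 10; ++step) {
        // Leg 1 of the round trip: push the data to the GPU.
        cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        // The "wide vector" part of the work, done on the GPU: x <- 1.01 * x.
        const float factor = 1.01f;
        cublasSscal(blas, n, &factor, dev, 1);

        // Leg 2: pull the results back; this blocks until the GPU is done.
        cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);

        // Scalar, branchy decision on the CPU before the next batch: the part
        // that keeps forcing the round trips and their latency.
        sum = 0.0;
        for (int i = 0; i < n; ++i) sum += host[i];
        if (sum > 2.0 * n) break;
    }
    printf("final sum after ping-ponging: %f\n", sum);

    cublasDestroy(blas);
    cudaFree(dev);
    return 0;
}
```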
 
Yes, he has accomplished a lot, but it is simply absurd how Sweeney insists that a software rendering pipeline is the certain future of game rendering.

Maybe there will be some cases where an alternative to classic rasterization could lead to a new pipeline standard. But why in hell would this not be implemented in (a) hardware (standard)? He has completely lost touch with reality by suggesting that even a marginal number of people in the industry are interested in creating a complete render pipeline from the ground up.
 
Well, I would not say absurd. When fixed-function GPUs overtook software, the hardware that would have allowed software rendering to survive didn't exist, and more importantly it still doesn't exist. As AR stated, one could try (and some actually tried) to push such a chip and fail for whatever reason: power consumption, missed clock speeds, etc. Even if one had succeeded, the point is that history took another direction; does that make his statement or belief absurd? Honestly, I don't think so.
On the other hand, A. Richards' points and statements are more "down to earth" and somewhat more interesting, as they take the world as it is into account.
 
Maybe there will be some cases where an alternative to classic rasterization could lead to a new pipeline standard. But why in hell would this not be implemented in (a) hardware (standard)? He has completely lost touch with reality by suggesting that even a marginal number of people in the industry are interested in creating a complete render pipeline from the ground up.
The implication of the software-only approach is that everyone not willing to build their own software platform and ecosystem would buy a software engine from a company that builds software engines.
 
The implication of the software-only approach is that everyone not willing to build their own software platform and ecosystem would buy a software engine from a company that builds software engines.

And I'm sure Epic and ID would be right there to help you out with that, for a phenomenal fee ;)
 
His software rendering rhetoric is annoying to me. He *could* have developed a high-performance software renderer if he really thought it was better than hardware. Why would he continue to develop hardware renderers if he didn't think that was the way to go? And if he still doesn't think it's the way to go, why does he keep doing it?

The major issue with a software renderer is that the CPU just doesn't have anywhere near enough memory bandwidth to support a software renderer that gets anywhere near the performance of a GPU. A Cell-like architecture might be able to produce a quick deferred renderer by rendering to local store, but an x86 CPU would be shithouse. I almost think that he is talking about what software rendering would be like if CPU advancement had gone exactly the way he wanted since 1997, and it hasn't.
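
To put a rough number on the bandwidth point, here is a back-of-envelope sketch; every figure in it (bytes per pixel touch, overdraw, the CPU and GPU bandwidth ballparks) is an assumption of mine, not something from the post or the talk.

```cpp
// Back-of-envelope framebuffer + texture traffic for a hypothetical 1080p60
// software renderer. All figures are rough assumptions for illustration only.
#include <cstdio>

int main() {
    const double pixels          = 1920.0 * 1080.0;
    const double bytes_per_touch = 4.0   // colour write
                                 + 8.0   // depth read + write
                                 + 16.0; // one bilinear texture fetch (4 texels)
    const double overdraw        = 3.0;  // assumed average touches per screen pixel
    const double fps             = 60.0;

    const double traffic_gbps = pixels * bytes_per_touch * overdraw * fps / 1e9;

    const double cpu_bw = 20.0;   // ballpark desktop CPU memory bandwidth, GB/s
    const double gpu_bw = 150.0;  // ballpark high-end GPU memory bandwidth, GB/s

    printf("framebuffer + texture traffic: ~%.1f GB/s\n", traffic_gbps);
    printf("share of assumed CPU bandwidth: ~%.0f%%\n", 100.0 * traffic_gbps / cpu_bw);
    printf("share of assumed GPU bandwidth: ~%.0f%%\n", 100.0 * traffic_gbps / gpu_bw);
    return 0;
}
```

Rendering tiles into a local store, as suggested above for Cell, is one way to keep most of those touches off the external memory bus.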
 
To be fair, both sides are right and wrong :smile: For now, a hardware implementation is the way to go because it delivers much faster performance with acceptable quality within manageable power consumption. DirectX, while not perfect, unifies the market and simplifies the creation of new games. So for the foreseeable future, the hardware approach + DirectX is the way to go.

However, Tim is right that devs need more flexibility and a more programmable approach, and that's exactly where the industry is moving: DirectX and the new generation of video cards are more flexible than ever, just in baby steps. Larrabee would be what Sweeney is praying for, but IMO it will take a few more years until Intel releases drivers for gamers (the chips will be out to the mass market as soon as next year); give it several more years to mature, and you get exactly that: fully programmable chips, if you want them.

The rest of the market will stick with DirectX for many years to come, regardless of what HW they run it on: AMD, Intel or NV. And that's good. As much as I respect Sweeney and Carmack, I would hate it if the market depended entirely on a fully software approach; it would only make them richer, and it would also make for a very diverse market of different engines. That could fix a few DirectX issues, but it would introduce a truckload of other issues and reduce the overall quality of games. Not everyone is as talented or has as many resources as Sweeney/Carmack, and if you think DirectX is buggy/limited, watch out for loads of half-baked software engines, filled with bugs.

For now, I'm happy with HW + DirectX ;)
 
I think while programmability of GPUs will continue to grow, some amount of ff hw (like the rasterizer) will always be there.
 
He is not talking about making the existing render pipeline more programmable. He is saying that hardware should become completely general-purpose, so that it is not designed around particular render pipelines.

I don't understand why there are still people here who say that things like a software rasterizer/raycaster/whatever will turn out all right in the future, considering that even Intel now openly says that this idea is ridiculous.

It has been PROVEN with Larrabee that hardware tailored to a certain render pipeline is at least two times more efficient than general-purpose hardware (Larrabee even had texture units). No sane company would pay that price for the freedom to write its own, COMPLETE render pipeline. And the worst part is that the vast majority doesn't even see that goal as a benefit in the first place.

Yes, we might see some completely new render pipelines in the future, but it would be insane not to make these a standard and not mold them into specialized hardware.
 
Who knows, if someone were to spend a lot of time on Larrabee, they could prove Sweeney right.
I mean, they might end up with lower fps and lower resolution, but also with "better" pixels, as Sweeney put it.
A huge problem with Larrabee, no matter what "absolute" performance the chip achieved in the end, is that it had to play a game it was not really designed to play.

But Sweeney is indeed asking a lot and is a bit dishonest. From my outsider POV, real-time rendering looks like an adventure built by people and companies, some of which have disappeared. Sweeney would have wanted a stable language, but after so many man-years invested, that language still doesn't exist. It's kind of a stretch: not only did the hardware not exist when software rendering died, but neither did the software language.
Actually, I wonder if he would not have been howling with the wolves if, back then, a company had come up with hardware that allowed software rendering to persist. Say four tiny, simplistic VLIW cores; I can hear him and others screaming like they screamed some years ago (when multi-core CPUs happened)... we don't want multi-core, we want more serial performance, which languages are we supposed to use to make the most of such a chip, etc.
Sweeney has to be French, he is always complaining :LOL:
 
I'm interested in why Sweeney would say that DirectX is the reason we are seeing diminishing returns in graphics... I thought the reason for that would just be our physical reality, requiring exponentially more horsepower for every unit increase in detail. Does anybody have any other thoughts on that? How could DirectX be producing these diminishing returns?
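
As a toy illustration of that "physical reality" point (the numbers are mine, not anyone's in the thread): one visible step in detail, say halving the on-screen edge length of your triangles, roughly quadruples the triangle count, so equal-looking improvements cost geometrically more horsepower.

```cpp
// Toy illustration: each halving of triangle edge length (one rough "unit" of
// visible detail) multiplies the triangle count, and thus the workload, by ~4.
#include <cstdio>

int main() {
    double triangles = 10000.0;              // arbitrary starting budget
    for (int step = 0; step <= 5; ++step) {
        printf("detail step %d: ~%.0f triangles per frame\n", step, triangles);
        triangles *= 4.0;                    // halve edge length => ~4x triangles
    }
    return 0;
}
```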

As for the whole software rendering thing, I doubt it, just because any algorithm can be accelerated in hardware and hardware is always faster.
 
As for the whole software rendering thing, I doubt it, just because any algorithm can be accelerated in hardware and hardware is always faster.

Unfortunately this is not true. Otherwise you would see accelerator cards for all major offline renderers. Some attempts have been made, and all of them failed miserably.

Current real time 3D graphics are constrained by what the GPUs can do.
 
Unfortunately this is not true. Otherwise you would see accelerator cards for all major offline renderers. Some attempts have been made, and all of them failed miserably.

Current real time 3D graphics are constrained by what the GPUs can do.

It's also a question of what pays off and what doesn't. Is there really a market for making dedicated hardware for every software iteration of every major offline renderer? How much would such hardware cost to design, and how much would you be able to sell it for?
 