Is DirectX 10 the last DirectX graphics API that is truly relevant to developers?

Farid

Artist formely known as Vysez
I acknowledge that there is already a thread on this particular interview, but its subject has a slightly different scope and the OP doesn't really start any debate (nor did one start on its own in the replies).

Tim Sweeney, local real-time 3D celebrity and chief architect at Epic, answered some questions on the future of gaming/rendering on the PC platform in an interview with Tom's Hardware. Nothing new, or nothing we haven't already read Sweeney say in earlier interviews, really caught my interest, save for one particular claim that I'm quoting in its full context:

I think Microsoft is doing the right thing for the graphics API. There are many developers who always want to program through the API - either through DirectX these days or a software renderer in the past. That will always be the right solution for them. It makes things easier to get stuff being rendered on-screen. If you know your resource allocation, you'll be just fine. But realistically, I think that DirectX 10 is the last DirectX graphics API that is truly relevant to developers. In the future, developers will tend to write their own renderers that will use both the CPU and the GPU - using graphics processor programming language rather than DirectX. I think we're going to get there pretty quickly.

I expect that by the time of the release of the next generation of consoles, around 2012 when Microsoft comes out with the successor of the Xbox 360 and Sony comes out with the successor of the PlayStation 3, games will be running 100% based on software pipelines. Yes, some developers will still use DirectX, but at some point, DirectX just becomes a software library on top of ... you know.

And that particular claim sounds extremely interesting to me, if only for the fact that it falls in line with other folks' opinions that graphics APIs will lose their appeal, or more exactly their stronghold, in the PC rendering scene.

Now, obviously, everybody I've read talking about that had their own opinion on why and when they expected it to happen. I'll throw a few keywords in here; they might remind you of some pro/con arguments you B3Ders wanted to make but never got around to making:
  • GPGPU flexibility with general programming languages
  • CUDA, Brook, CTM
  • API segmentation
  • Software Rendering
  • Ray-Tracing
  • CPU integration of graphical functions

Now, my questions would be:
  • Do you happen to think that graphics APIs will lose their importance in the PC space as far as large-scale game projects are concerned?
If you do think so, what are the reasons behind this idea of yours, and when do you see that happening?
Now, if you disagree with the idea, or consider it a non-issue for the foreseeable future, feel free to chime in with your own arguments as to why.
 
Well, if GPUs really evolve into general-purpose stream processing units with a unified stream processing API, I can see this happening. Otherwise, no. You would need to write different codepaths for different companies (CUDA/CTM/some Intel API/etc.). Also, if the hardware has some graphics-specific features that are not exposed through an API like CUDA or CTM (depth buffering, anyone?), it would be a waste not to use them. I really like Sweeney for his "forward" thinking, and I agree with many of his thoughts (like functional programming/garbage collection), but this one is a bit too hasty in my opinion.
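
To put the codepath problem in concrete terms, this is roughly what you would be maintaining; every name below is made up, it's only a sketch:

    // Hypothetical sketch: without a unified stream-processing API, a renderer
    // ends up carrying one hand-maintained backend per vendor toolchain.
    #include <cstdio>
    #include <memory>
    #include <stdexcept>

    class StreamBackend {                        // the abstraction we would have to invent
    public:
        virtual ~StreamBackend() = default;
        virtual void renderFrame() = 0;          // each vendor path implements this
    };

    class CudaBackend : public StreamBackend {   // NVIDIA-only path
        void renderFrame() override { std::puts("render via CUDA path"); }
    };
    class CtmBackend : public StreamBackend {    // AMD-only path
        void renderFrame() override { std::puts("render via CTM path"); }
    };
    class IntelBackend : public StreamBackend {  // some future Intel API
        void renderFrame() override { std::puts("render via Intel path"); }
    };

    enum class Vendor { Nvidia, Amd, Intel };

    std::unique_ptr<StreamBackend> createBackend(Vendor v) {
        switch (v) {                             // three codepaths to write, test,
        case Vendor::Nvidia: return std::make_unique<CudaBackend>();   // and keep
        case Vendor::Amd:    return std::make_unique<CtmBackend>();    // working on
        case Vendor::Intel:  return std::make_unique<IntelBackend>();  // every chip
        }
        throw std::runtime_error("unknown GPU vendor");
    }

    int main() {
        createBackend(Vendor::Nvidia)->renderFrame();
    }

And that's before any of those backends has to care about individual chip revisions.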
 
So we're going to walk away from a standardized API, and go back to programming to the metal? Standards are what make a single app work on multiple types and brands of hardware.

How are you going to code a high-end game to the metal and simultaneously support the R600/RV670 series, G80/G92 series, S3 400 series and Intel's 4x00 series? What then about the still "good" but older hardware, like R5x0 and G7x?

Sure, build yourself some form of hardware abstraction layer -- find all the hardware quirks for every major GPU silicon revision ever made. Let me know how fast you're able to ship something like that, especially with new revisions and new chips coming out every 12-18 months...

Standard APIs are what make it even remotely possible to build a game in a meaningful timeframe without ignoring 92% of your likely user base. If that means you aim at DX8 functionality, then so be it.
 
I'd hardly call C or C++ "programming to the metal." Although, granted, given today's fetish for "safe languages" it might as well be...

Sure, build yourself some form of hardware abstraction layer -- find all the hardware quirks for every major GPU silicon revision ever made

Hasn't GCC already been doing that for a few decades now?
 
Sure, build yourself some form of hardware abstraction layer -- find all the hardware quirks for every major GPU silicon revision ever made

Anyone else noticing a bit of a devolution going on as of late? Seems like all of a sudden everyone's back to writing assembly code and now we're actually asking for the days of DOS game programming to come back?

OK, that's definitely a bit of an overdramatization, but, back to the original topic, I somewhat agree, though for totally different reasons. I don't think we're going to have another API that allows such a broad range of possibilities as DX10 did (relative to the previous generation) for quite some time, if ever, and in that sense it may possibly be considered "the last relevant big step" for developers.

Beyond that, if we're going to move to exclusively CUDA-style GPU programming, we at least need some common platform on which to write that code and a common interface for communicating with that platform and the CPU. Personally, I'd still consider that an 'API' of sorts, so in that regard, no, I don't think this will be the last relevant API.


Even if that were the direction the industry took, I'd still like to have a graphics specific API that kinda prohibited me from writing ungodly slow code or, at least, steered me away from it.
 
I agree with the statement. An API is what you use to control a hardware peripheral like current-generation graphics adapters. The CPU is still the 'host' controlling the peripheral. However, I believe the graphics adapter, as a standalone piece of hardware you control through an API, is going out the door.

What will be coming through the door next (perhaps in 2012) is *platforms* with heterogeneous processing units. Some units are good at SIMD computation (current GPU cores), and then you have powerful CPU cores for creating complex data structures on the fly. Look at the PS3 and Xbox 360 with their very high-bandwidth CPU/GPU interconnectivity; this is where PC hardware is going. Think shared memory with multiple 'host' processors using their own unique performance advantages to operate on data structures in different ways.

Trying to program graphics on these platforms becomes more of a parallel programming problem, with shared data structures and coherence issues between the caches/DMA buffers of these heterogeneous units.

I really think different upcoming advanced graphics effects/rendering techniques are going to have their own *unique* share of these problems at the platform level, and having a software rendering pipeline gives you the flexibility to deal with these issues and program the platform correctly.
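
To make that concrete, here's a toy sketch with plain C++ threads standing in for the heterogeneous units (all names made up): a 'CPU' producer builds a frame's worth of data and publishes it through shared memory to a 'SIMD-style' consumer. std::atomic gives you coherence for free on a PC; on the kind of platform I'm describing, that hand-off becomes explicit cache flushes and DMA-buffer management, which is exactly where the headaches come in.

    // Toy sketch: a "CPU" producer builds scene data, a "SIMD-style" consumer
    // processes it. std::atomic handles coherence here; on a heterogeneous
    // platform you would be managing caches/DMA buffers yourself.
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct Frame { std::vector<float> vertices; };

    std::atomic<bool> frameReady{false};
    Frame sharedFrame;                        // lives in memory both units can see

    void cpuProducer() {
        for (int f = 0; f < 3; ++f) {
            Frame frame;
            for (int i = 0; i < 1024; ++i)    // "complex data structure on the fly"
                frame.vertices.push_back(static_cast<float>(i * f));
            while (frameReady.load(std::memory_order_acquire)) { /* wait for consumer */ }
            sharedFrame = std::move(frame);
            frameReady.store(true, std::memory_order_release);   // publish the frame
        }
    }

    void simdConsumer() {
        for (int f = 0; f < 3; ++f) {
            while (!frameReady.load(std::memory_order_acquire)) { /* wait for producer */ }
            float sum = 0.0f;
            for (float v : sharedFrame.vertices) sum += v;       // stand-in for SIMD work
            std::printf("frame %d processed, checksum %.0f\n", f, sum);
            frameReady.store(false, std::memory_order_release);  // hand the buffer back
        }
    }

    int main() {
        std::thread producer(cpuProducer), consumer(simdConsumer);
        producer.join();
        consumer.join();
    }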
 
Sweeney is always dreaming of the “good old days” when he could write his own render code. Funnily, in every interview that touches on this, he predicts that those days will come back in around 5 years. So far this has not been a self-fulfilling prophecy.

To make a GPU API obsolete, GPUs would need to use a common ISA. I can't see that happening anytime soon. As a developer I would not take the risk of writing (or compiling) for a bunch of current GPUs, only for the code to stop working on the next generation because the ISA changed. That may work for closed systems like consoles, but not for the PC. Therefore the PC needs some kind of abstraction layer to talk to a GPU. For Microsoft this layer has always been called Direct3D. IMHO, if upcoming GPUs allow more programmability, like writing your own rasteriser function, Direct3D 10+x will offer you a way to use it. Therefore I don't see GPU APIs becoming obsolete. They will change, and maybe Microsoft will finally give Direct3D a new name. It is even possible that future versions may allow using additional CPU cores in the same way you can use a GPU.
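
For anyone wondering what "write your own rasteriser function" means in practice, here is a rough, unoptimised sketch of the classic edge-function approach in plain C++ (no clipping, no depth, no perspective; purely illustrative):

    // Minimal software rasteriser sketch: fill one triangle into a framebuffer
    // by testing each pixel centre against the triangle's three edge functions.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Vec2 { float x, y; };

    // Signed area of the parallelogram spanned by (b - a) and (c - a).
    static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    void rasteriseTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                           std::uint32_t color, int width, int height,
                           std::vector<std::uint32_t>& framebuffer) {
        // Bounding box of the triangle, clamped to the screen.
        const int minX = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
        const int maxX = std::min(width - 1, (int)std::max({v0.x, v1.x, v2.x}));
        const int minY = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
        const int maxY = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));

        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                const Vec2 p{x + 0.5f, y + 0.5f};
                const float w0 = edge(v1, v2, p);
                const float w1 = edge(v2, v0, p);
                const float w2 = edge(v0, v1, p);
                // Inside if the pixel centre lies on the same side of all three
                // edges (the sign test keeps this independent of winding order).
                const bool allNonNeg = (w0 >= 0 && w1 >= 0 && w2 >= 0);
                const bool allNonPos = (w0 <= 0 && w1 <= 0 && w2 <= 0);
                if (allNonNeg || allNonPos)
                    framebuffer[y * width + x] = color;
            }
        }
    }

    int main() {
        const int w = 32, h = 16;
        std::vector<std::uint32_t> fb(w * h, 0);
        rasteriseTriangle({2.0f, 2.0f}, {30.0f, 4.0f}, {10.0f, 14.0f}, 1, w, h, fb);
        for (int y = 0; y < h; ++y) {              // crude ASCII dump of the result
            for (int x = 0; x < w; ++x) std::putchar(fb[y * w + x] ? '#' : '.');
            std::putchar('\n');
        }
    }

Getting something like this to run fast enough to compete with fixed-function hardware is, of course, the entire problem.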
 
Sweeney is always dreaming of the “good old days” when he could write his own render code.

Am I the only one tempted by a more cynical view?

Imagine we did somehow get a common ISA / programming model, and all implementations had similar performance characteristics. Essentially developers only had to target a single platform.

Even then, I wouldn't expect the majority of game developers to write their own renderers. A significant minority perhaps, but most developers will *still* license a middleware rendering engine so they can focus their effort on making their game fun and unique rather than reinventing the wheel at such a low level (unless they think reinventing the wheel will help them sell more copies of the game.)

Now, who would be providing such middleware?

I think it's interesting that the person who's most excited about removing abstractions of the hardware and turning rendering into a pure software problem is also someone who's well positioned to profit from such a direction. Writing all this low-level code for one hardware target is harder than writing to D3D, much less writing it for multiple targets.

No disrespect for Tim Sweeney meant -- he's a brilliant guy and has thoroughly earned the respect he gets. And he may well be right about where things are going, regardless of who stands to gain from it. I just have to wonder if he's a bit more optimistic about it than he would be if he made his money from game sales rather than engine licenses.
 
To make a GPU API obsolete, GPUs would need to use a common ISA. I can't see that happening anytime soon. As a developer I would not take the risk of writing (or compiling) for a bunch of current GPUs, only for the code to stop working on the next generation because the ISA changed. That may work for closed systems like consoles, but not for the PC. Therefore the PC needs some kind of abstraction layer to talk to a GPU. For Microsoft this layer has always been called Direct3D.

This is where I was going, but wasn't able to express it the same way. Hell, even WITH a standard API we have problems with certain cards not having certain features, or features that are non-performant so you have to step around them. It's only going to get worse if you're programming per-GPU at an even lower level...
 
With all respect for his accomplishments, Sweeney has consistently been wrong in the past about where the industry is heading. My prediction is that he'll continue to be wrong.
 
Agreed. At least he's consistent in that.
 
With all respect for his accomplishments, Sweeney has consistently been wrong in the past about where the industry is heading. My prediction is that he'll continue to be wrong.

You are not the only one to say this, but you put it so starkly. So I am curious what you and others think he has been so wrong about.

To my recollection he accurately predicted the convergence of CPUs and GPUs early in this decade - apparently before Intel knew it was headed in that direction.
 
He's been spouting the same thing for nearly a decade -- ever since the 3dfx Voodoo 1 days. He's always said it with a target of a year or two out. If you keep predicting something long enough it might just happen, but it doesn't mean you were right. That's especially true when you originally touted it as happening soon.

The other prediction of his I clearly remember, which he always spouted off about on The Motley Fool forums, was that Nvidia's NV30 would wipe the floor with the competition.

Here's my prediction, which is just as true as his: "We will travel to Mars in a couple of years."
 
I seem to remember him saying, maybe in 2002, that GPUs and CPUs would converge in 8 years or so. If my recollection is correct, that is pretty spot on. Moreover, he frequently expresses a personal desire for a return to software rendering and believes it will happen when GPUs and CPUs converge. So to say he has always been saying it will happen tomorrow is probably not accurate.

It seems pretty clear this is the direction he is going with Unreal 4, and it will probably be an amazing engine with the ability to rasterize and ray trace. Just a guess there.
 
Tongue-in-cheek reply: if MS releases a DX11 supporting general-purpose computing, then yeah, I can see DX10 becoming irrelevant.

More seriously now, I really do not believe that game developers are going to spend time and effort basically reinventing the wheel just because they want a single new feature.

It was Tim himself who said he'd gladly trade 10% performance for a 10% increase in development efficiency. Heck, many developers cite ease of programming as one primary reason for switching to consoles, because they get a coherent API that is (basically) immutable. I just don't see these people going for a custom layer to leverage CPU + GPU computation.

Even Carmack's comments about doing some ray tracing algos for geometry in id Tech 6 should be viewed with the proper timeframe in mind: in 5 or more years. So that's a hybrid system still using a graphics API. I'd be surprised if DX is dumped within the next 10 years.

If MS gets physics (and other general-purpose computing) into the API soon-ish (say, within 2 years) and the documentation is solid, I just don't see many advantages in leaving that programming model.

Or to put it another way, what Tim is implying (unknowingly, of course) is that developers also shouldn't license an engine/middleware, because there's always more flexibility in writing your own.

Also, where is this CPU-GPU convergence that I keep reading about? I see GPUs becoming more general (CUDA, etc.), but I'm not seeing CPUs advancing quite as much into the graphics realm. And until we see what Larrabee is actually capable of, I'm remaining skeptical. (In this situation Tim's in a position to know a lot more.)
 
Or to put it another way, what Tim is implying (unknowingly, of course) is that developers also shouldn't license an engine/middleware, because there's always more flexibility in writing your own.

It seems he is writing his own and other developers will benefit from his engine as per usual.

He is not saying he favors a CPUish product versus a GPUish product. But there is some clue in that he is talking about working on CUDA now. I mean it supports existing hardware that is only getting better and works on a growing number of platforms. And I thought I saw some slide somewhere about C++ coming to CUDA shortly.
 
But CUDA only works on the newest generation of NV cards last I checked; people with a 7900GTX aren't going to benefit, people with ANY sort of ATI card won't benefit. So then he's going to build a whole new rasterization pipe for CTM? Fine and dandy, but the newest CTM doesn't support anything older than the RV670, so now people with 2900XT's are also out of luck.

Do you see where this is going? If not, then there's no convincing you.

Until a common programming language exists across multiple platforms, what he's suggesting development houses do is write even MORE code-paths for MORE platforms. And since about (guessing) 5% of the populace has hardware compatible with what he's asking for, he's also asking them to castrate nearly all of their potential user base.

APIs may not be the favorite, and the complaints about the WIDE disparity in performance are certainly valid, but there's no other viable solution at this time.
 
You are not the only one to say this, but you put it so starkly. So I am curious what you and others think he has been so wrong about.

Sweeney has always been a CPU guy. He views things from a software point of view, but doesn't seem to understand (or want to understand) the hardware point of view. He's been predicting that we'll return to software rendering for almost a decade now. His reasoning has been along the lines of "CPUs keep getting faster, so eventually they'll be fast enough for software rendering". The problem is that GPUs are also getting faster, and much faster than CPUs, so the gap between GPUs and CPUs has actually gotten wider, making software rendering increasingly less attractive. And with hardware consuming ever-increasing amounts of power, I don't see CPUs replacing GPUs anytime soon, since specialized hardware will always be not just faster, but also a lot more power-efficient per flop.
 
Here is a Sweeney quote from the Tom's interview:

"But realistically, I think that DirectX 10 is the last DirectX graphics API that is truly relevant to developers. In the future, developers will tend to write their own renderers that will use both the CPU and the GPU - using graphics processor programming language rather than DirectX. I think we're going to get there pretty quickly."

Sweeney to me seems very supportive of GPUs. I'd say what he does not like is the API model. And it seems like GPGPU may finally enable him to realize his dreams of writing his own renderer. It seems like that will be the case for Unreal 4. Considering Epic's history it's hard to imagine them coming to market with an engine that underperforms.
 
First, I don't think developers will write their own rendering code. I'm sorry, but that's really a waste of time, as NVIDIA and ATI/AMD seem to be able to deliver much better "rendering code" than most people can come up with. Remember, it's not just about getting pretty effects, it's also about performance, and that's the hard part.

I also remember that Tim Sweeney once said that in 10 years everyone would be using software renderers because CPUs would be so fast! That was probably about 10 years ago.
 