Tim Sweeney Interview at Ars

Rufus

Newcomer
There's a good interview up with TS over at Ars Technica: http://arstechnica.com/articles/paedia/gpu-sweeney-interview.ars

There's the expected bit from him about the death of the current APIs and how everything will go back to software rendering (how long has he been saying that? maybe it'll be true this time). However, he's quite down to earth in this bit about architectures, and how current DX/OGL games are what will really matter for the next few years (2? 3? 5?):
JS: So to follow up with that, I hear that Larrabee will be more general-purpose than whatever NVIDIA has out at the time, because NVIDIA is still gonna have some hardware blocks that support whatever parts of the standard rasterization pipeline.

TS: That's kind of irrelevant, right? If you have a completely programmable GPU core, the fact that you have some fixed-function stuff off to the side doesn't hurt you. Even if you're not utilizing it at all in a 100 percent software-based renderer, there are economic arguments that say it might be worthwhile to have that hardware even if it goes unused during a lot of the game, for instance, if it consumes far less power when you're running old DirectX applications, or if it can perform better for legacy usage cases.

Because, one important thing in moving to future hardware models is that they can't afford to suddenly lose all the current benchmarks. So DirectX remains relevant even after the majority of games shipping are using 100 percent software-based rendering techniques, just because those benchmarks can't be ignored.

So I think you'll see some degree of fixed-function hardware in everybody's architectures for the foreseeable future, and it doesn't matter. And as long as the hardware is sufficiently programmable, we're fine with that.
 
There's the expected bit from him about the death of the current APIs and how everything will go back to software rendering (how long has he been saying that? maybe it'll be true this time).

Yeah, I think he has a point; he's just been expressing it in a very poor way for years.
I suppose what he was trying to get at was that GPUs are becoming more and more generic and more and more programmable, to the point where eventually you can write your own rasterizing rules again, or even use an entirely different technique, such as raytracing.
But then it's a pretty obvious point anyway (and more like 'evolving' back to software rendering rather than 'going back').

The way he expresses it, it sounds more like we're going to throw out our GPUs, go back in time, and write trifillers on the CPU again.
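(For anyone who has happily forgotten what that looks like: below is a minimal, purely illustrative C++ sketch of such a trifiller, flat shading and edge-function tests only; the framebuffer layout and types are my own assumptions, not anything Sweeney describes.)

```cpp
// Minimal flat-shaded CPU "trifiller" sketch (edge-function / half-space test).
// Purely illustrative; no clipping, no sub-pixel precision, no perspective.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Signed area test: which side of edge a->b does point p lie on?
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void fillTriangle(std::vector<uint32_t>& fb, int width, int height,
                  Vec2 v0, Vec2 v1, Vec2 v2, uint32_t color) {
    // Bounding box of the triangle, clamped to the framebuffer.
    int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int maxX = std::min(width - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
    int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int maxY = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p{ x + 0.5f, y + 0.5f };        // sample at the pixel centre
            // Inside all three (counter-clockwise) edges -> inside the triangle.
            if (edge(v1, v2, p) >= 0 && edge(v2, v0, p) >= 0 && edge(v0, v1, p) >= 0)
                fb[y * width + x] = color;
        }
    }
}
```

The interesting part isn't this loop itself, of course, but that on a fully programmable part you'd be free to change the fill rules, the interpolation, or drop triangles altogether.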
 
I highly doubt DirectX is going to die. Like Tim says, we're evolving towards having a C++-like language, but someone has to specify it, and I'm sure Microsoft will be in the front row. Furthermore, if the majority will be using it to render triangles anyway, DirectX can still offer the building blocks. Heck, it could almost be like an OS running on the GPU: you have access to the underlying hardware, but there's also a wide variety of runtime libraries, compiler tools, APIs, etc. D3DX is already a first step in that direction; compute shaders might be the next.

The thing is, not everyone is looking forward to writing a software renderer again. Some game developers want the highest possible abstraction, some even want to create a game without writing code. Others, like Tim Sweeney, want/need to constantly innovate their engines and need the lowest possible access to the hardware. In my opinion DirectX can still address both...
 
I agree to some extent - we'll just see more levels of services offered by first and third parties. Now we have Unreal Engine, for instance, as game-engine middleware, and Havok for physics; OpenGL, CUDA and DirectX will become similar 'middleware' services. We'll see which one becomes the most popular eventually - or, who knows, maybe something new altogether.

However, what is important for a company like Epic is that now they can try to improve their Unreal Engine by going lower level and improving the graphics rendering pipeline. For some purposes that may actually be less work than trying to achieve things using existing graphics APIs, and it certainly offers them a new means of standing out and making their services valuable. So if any party will make good use of the new freedom, I think Epic would be one, and id definitely another - judging from some of their papers, they look like they've already started significant work. But it will also be interesting for graphics card manufacturers to use their extensive programming expertise towards creating a software layer that optimises their hardware as we enter a new phase of more 'free-form' competition.

I'm looking forward to the new creativity possible - I think there are some real opportunities for amazing graphics advancements once we are more 'free'. It may take a while for this to happen though, for various reasons (cost, training of people, etc.). But I'm sure some third parties will start offering new and interesting stuff once the hardware allows them to.

I think the first opportunity for anything like this to come to fruition may be the next generation of console hardware. The economics align at that point, and there is enough time to prepare. It will be very interesting to see what happens here, and what roles Sony and Microsoft will play, if any. This aspect of software is going to be more important than last time, and certainly Sony knows this by now.
 
I don't think that OpenGL/D3D will literally disappear, but chances are that they will be hidden and/or irrelevant to most developers (they will continue to be relevant for many years to come, for support of legacy applications).
Namely, if people aren't interested in writing a software renderer, they're probably not interested in writing a renderer, period. Even with hardware rendering it takes a whole lot of time and effort to get from OGL/D3D to the point where you can render animated geometry with decent shading, interaction and all that... Adding a software renderer to the list of things to implement isn't going to be the deal-breaker. In fact, if you develop your own renderer, you can save time on certain other aspects, because you'll have a more seamless integration with your animation system, material management and all that.
So I guess that this is the market that depends on middleware (as we already know it today).

This middleware can either continue to use OGL/D3D, because for many games these 'legacy' engines will do a fine job and are proven technology (the video card driver will be implementing a software renderer to support them anyway)... or it could switch to its own renderers, but that would be transparent to the end user.

I hope he's right about great new opportunities for developers with software rendering experience. I could use a career boost ;)
 
"JS: Well, NVIDIA has suggested, at least to me, that they'll have something that's comparably programmable in the Larrabee timeframe, i.e., what they're gonna have isn't going to look as different from Larrabee as the current generation of GPUs look from Larrabee, in the sense of programmability."

So what do we expect in terms of future programmability from NVIDIA?

This is wild speculation, but perhaps in the near term NVIDIA ends up providing ROP/OM access programmatically into a high-latency, per-microprocessor write+readable cache (something like the missing CUDA PTX .surf functionality). I'd guess this would be without cross-microprocessor concurrency, instead relying on atomic global memory access for concurrency between microprocessors?
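To make the "atomics instead of fixed-function ROP" part concrete, here's a rough CPU-side C++ sketch of what a software output merger could look like: a compare-exchange retry loop doing a saturating-add blend on a 32-bit pixel. On a GPU it would be the equivalent atomic-CAS loop on a framebuffer in global memory; the pixel format and blend rule here are just assumptions for illustration.

```cpp
// Sketch of a "programmable ROP": blend via a compare-exchange retry loop
// instead of fixed-function blending hardware. std::atomic stands in for an
// atomic CAS on GPU global memory.
#include <algorithm>
#include <atomic>
#include <cstdint>

void blendPixelAddSat(std::atomic<uint32_t>& pixel, uint32_t src) {
    uint32_t oldVal = pixel.load(std::memory_order_relaxed);
    uint32_t newVal;
    do {
        // The "programmable" part: any blend rule you like.
        // Here: per-channel saturating add of four 8-bit channels.
        newVal = 0;
        for (int shift = 0; shift < 32; shift += 8) {
            uint32_t a = (oldVal >> shift) & 0xFFu;
            uint32_t b = (src    >> shift) & 0xFFu;
            newVal |= std::min(a + b, 255u) << shift;
        }
        // Retry if another thread blended into this pixel in the meantime.
    } while (!pixel.compare_exchange_weak(oldVal, newVal,
                                          std::memory_order_relaxed));
}
```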
 
Thanks for the link! I think Tim contradicts himself a bit when he says renderers being freed from API restrictions is a good thing and then, at the end, says developers will never embrace something if it's going to escalate their difficulty/budget. It's not really a contradiction for Epic, though: if rendering requires more work, a lot more people are going to be licensing engines, but licensing will also increase the budget for those companies.

In principle, his idea of a single general processor that can run everything using a single language AND developers only having to write simple code is, of course, every programmer's dream. There are, however, a lot of open questions: can a general-purpose processor run fast rendering code for games? Can it be easy to use? Can it be cheaper than today's GPU + CPU? I don't think it will happen for the next generation. We're fast approaching the 3rd anniversary of the 360 and I guess Microsoft will want to jump the gun again. I expect the first next-gen console to hit by 2011, so that leaves just three years to get hardware that doesn't suck and is affordable, to make the development platform easy enough that developers don't continuously hit performance sink-holes, and to gain enough experience with the new capabilities that someone can credibly argue those new games look much better than if they were built for CPU + GPU.

My programming duties (sadly) do not include making games, but I'm always looking for ways to save time. Right now I only use managed code, because the performance threshold has risen enough that the cost/benefit of C++ just isn't worth it. More and more I want frameworks rather than just APIs, because I don't want to reinvent the wheel. I don't want to be bogged down by small implementation details when I'm trying to think of the big picture. Tim talking about using bare-bones C++ to create my own, particular rasteriser just doesn't make sense for me. But like I said, I don't code game engines, so I could be way wrong here.
 
This is wild speculation, but perhaps in the near term NVIDIA ends up providing ROP/OM access programmatically into a high-latency, per-microprocessor write+readable cache (something like the missing CUDA PTX .surf functionality). I'd guess this would be without cross-microprocessor concurrency, instead relying on atomic global memory access for concurrency between microprocessors?
This is going to be enough 'only' if each core always works on different screen tiles (which I think is the case on G80); otherwise you'd need a coherent cache.
 
This is going to be enough 'only' if each core always works on different screen tiles (which I think is the case on G80); otherwise you'd need a coherent cache.

Seems to me that there are some very good reasons not to add the complexity of tiles living on anything other than a fixed microprocessor, both to maintain draw ordering and the Z-buffer without synchronization, and to ensure scalability. I'd probably do this even if I were writing a new parallel software renderer.
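Just to illustrate the point (a sketch under my own assumptions about tile size, resolution and the ownership rule, not anything G80 actually does): if every worker owns a fixed, disjoint set of tiles, depth testing and draw ordering inside a tile need no atomics or locks at all.

```cpp
// Sketch: screen tiles pinned to a fixed worker, so per-tile Z and colour
// writes need no synchronization, and triangle submission order is preserved
// simply by walking the shared triangle list in order within each tile.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

constexpr int kTile = 32;                 // 32x32-pixel tiles (assumed)

struct Tile {
    float    depth[kTile * kTile];
    uint32_t color[kTile * kTile];
};

// Worker 'id' touches only the tiles it owns (round-robin ownership),
// so all accesses to those tiles are plain, non-atomic reads and writes.
void renderOwnedTiles(std::vector<Tile>& tiles, int id, int workers
                      /*, const std::vector<Triangle>& tris */) {
    for (size_t t = id; t < tiles.size(); t += workers) {
        Tile& tile = tiles[t];
        // ... bin triangles against tile t, then rasterize into
        // tile.depth / tile.color in submission order ...
        (void)tile;
    }
}

int main() {
    const int width = 1280, height = 720;             // assumed resolution
    std::vector<Tile> tiles((width / kTile) * (height / kTile));

    int workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back(renderOwnedTiles, std::ref(tiles), w, workers);
    for (auto& th : pool) th.join();
    // ... resolve the finished tiles into a linear framebuffer here ...
}
```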

BTW, worth a read if you haven't already, http://www.icare3d.org/GPU/CN08.
 
"So DirectX remains relevant even after the majority of games shipping are using 100 percent software-based rendering techniques,"

Is he expecting this to happen?
 
Interesting read. I still remember starting up Unreal for the first time and being amazed at how good the software only rendering was. And it wasn't dog slow either at the resolution it was meant to play at.

I'd certainly like to see this direction he's indicating. Voxels, for example, had a lot of promise, but the rise of triangle-based 3D acceleration hardware pretty much killed them off. Novalogic did some good work with that in DOS and even tried a hybrid voxel/triangle engine that didn't do so well. View distances in their games were stunningly impressive. Resolution of objects, however, not so much.
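(For the curious: the Novalogic-style 'voxel' terrain was really a heightmap raycast per screen column, which is why it got those view distances so cheaply. A rough, purely illustrative C++ sketch, with the map layout, camera model and scale factors all being my own assumptions:)

```cpp
// Heightmap "voxel" terrain sketch in the spirit of the old Novalogic engines:
// one ray per screen column, marched front to back, drawing the newly visible
// vertical span at each step. No rotation, fog or LOD; illustration only.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Terrain {
    int size;                        // power-of-two map size, e.g. 1024 (assumed)
    std::vector<uint8_t>  height;    // size*size height samples
    std::vector<uint32_t> color;     // size*size colour samples
};

void renderTerrain(std::vector<uint32_t>& fb, int screenW, int screenH,
                   const Terrain& t, float camX, float camY, float camZ,
                   float horizon, float heightScale, float maxDist) {
    for (int col = 0; col < screenW; ++col) {
        // Ground-plane direction of this column's ray (camera looks down +Y).
        float dx = (col - screenW * 0.5f) / (screenW * 0.5f);
        int yTop = screenH;                          // lowest still-unwritten row
        for (float dist = 1.0f; dist < maxDist && yTop > 0; dist += 1.0f) {
            int mx = (int)(camX + dx * dist) & (t.size - 1);   // wrap around map
            int my = (int)(camY + dist)      & (t.size - 1);
            float h = (float)t.height[my * t.size + mx];
            // Project the terrain height at this distance to a screen row.
            int row = std::max(0, (int)((camZ - h) / dist * heightScale + horizon));
            // Fill the part of the column that just became visible.
            for (int y = row; y < yTop; ++y)
                fb[y * screenW + col] = t.color[my * t.size + mx];
            yTop = std::min(yTop, row);
        }
    }
}
```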

I wonder if full programmability will herald another explosion of indie game devs as was experienced in the early to mid 90's.

Regards,
SB
 
Interesting read. I still remember starting up Unreal for the first time and being amazed at how good the software only rendering was. And it wasn't dog slow either at the resolution it was meant to play at.

I'd certainly like to see this direction he's indicating. Voxels, for example, had a lot of promise, but the rise of triangle-based 3D acceleration hardware pretty much killed them off. Novalogic did some good work with that in DOS and even tried a hybrid voxel/triangle engine that didn't do so well. View distances in their games were stunningly impressive. Resolution of objects, however, not so much.

I wonder if full programmability will herald another explosion of indie game devs as was experienced in the early to mid 90's.

Regards,
SB
Probably not. Content is very expensive to create these days :(
 
If the game industry had any sense, they would create an asset repository and all donate to it. Not trademarked stuff (you couldn't expect Bungie to donate a Master Chief model), but generic stuff like tables, chairs, crates (mustn't forget crates), wall textures, grass textures, etc., that they could all make use of.
The man-hours spent doing this stuff over and over again must be huge...
 
Most games have a very specific-looking art style; I doubt anyone but hobby game developers* would use the generic chairs, boxes, etc.

*Not that I wouldn't want them to - art is hard, and if you can express your new idea with those generic objects, the chances of getting funding and a real team together are much better.
 
Well, hopefully if the idea took off there would be a huge range, and they could be customised slightly to give a unique look.

It's sort of already being done (but using commercial assets). Remember the short-lived controversy about STALKER using Doom 3 assets? It turned out they both used a commercial library:
http://www.shacknews.com/onearticle.x/46449
 
Well with Carmack and Sweeney talking about voxels maybe something will actually happen in that regard. It seems like it could be a great thing to me.
 
Well with Carmack and Sweeney talking about voxels maybe something will actually happen in that regard. It seems like it could be a great thing to me.

Didn't AMD/ATI use Voxels on their 48x0 demos?
 
Didn't AMD/ATI use Voxels on their 48x0 demos?

Well, the company that did the demos used voxels, yes. Which is one reason they made me so excited. :)

The extent to which the voxels are accelerated (if at all) isn't exactly revealed, however. But just to see them used brings a grin to my face. :)

Regards,
SB
 