Part 1: Unreal creator Tim Sweeney: "PCs are good for anything, just not games"
http://www.tgdaily.com/content/view/36390/118/
Part 2: Tim Sweeney: "DirectX 10 is the last relevant graphics API"
http://www.tgdaily.com/content/view/36410/118/
Part 3: Tim Sweeney: Unreal Engine 4.0 aims at next-gen console war
http://www.tgdaily.com/content/view/36436/118/
TG Daily: You have to admit, the margin is obviously there.
Sweeney: Agreed. But it is very important not to leave the masses behind - and that is what is happening, which is unfortunate, because PCs are more popular than ever. Everyone has a PC. Even those who could not afford a PC in the past can now, and they use it for Facebook, MySpace, pirating music or whatever. Yesterday's PCs belonged to people who worked on them and later played games. Even if those games were lower-end ones, there will always be a market for casual games and online games like World of Warcraft. World of Warcraft has DirectX 7-class graphics and can run on any computer. But at the end of the day, consoles have definitely left PC games behind.
TG Daily: In other words: Too big?
Sweeney: Yes, that is a huge difference. If we go back 10 years, the difference between the high end and the low end may have been a factor of 10. We could scale games between those two. For example, with the first version of Unreal, a resolution of 320x200 was good for software rendering, and we were able to scale that up to 1024x768 if you had the GPU power. There is no way we can scale a game down by a factor of 100; we would just have to design two completely different games: one for the low end and one for the high end.
That is actually happening on PCs: You have really low-end games with minimal hardware requirements, like Maple Story. That is a $100 million-a-year business. Kids are addicted to those games, and they pay real money to buy [virtual] items within the game.
TG Daily: Broken down, that means today’s mainstream PCs aren’t suitable for gaming?
Sweeney: Exactly. PCs are good for anything, just not games.
TG Daily: Can that scenario change?
Sweeney: Yes, it actually might. If you look at how CPUs have evolved, CPU makers are learning more and more how to take advantage of GPU-like architectures. Internally, CPUs accept wider data and have wider vector units, and they went from single-threaded products to multiple cores. And who knows, we might find a way to bring software rendering back into fashion.
Then every PC, even the lowest-performing one, will have an excellent CPU. If we could get software rendering going again, that might be just the solution we all need. Intel's integrated graphics just don't work. I don't think they ever will.
Part 2: Tim Sweeney: "DirectX 10 is the last relevant graphics API"
http://www.tgdaily.com/content/view/36410/118/
TG Daily: In the first part of our interview you implied that software rendering might be coming back. Daniel Pohl, who rewrote Quake 3 and Quake 4 using ray-tracing [and is now working as Intel's research scientist] recently showed ray-tracing on a Sony UMPC, an ultraportable device equipped with a single-core processor. True, the resolution was much lower than on PCs of today, but it looked impressive. What are your thoughts on ray-tracing? How will 3D develop in the next months and years?
Sweeney: Ray-tracing is a cool direction for future rendering techniques. Besides ray-tracing, there is the Reyes scheme of dividing the scene into micro-polygons, and there is voxel rendering. There are around five to ten different techniques, and they are all very interesting for the next generation of rendering.
Rendering can be done on the CPU. As soon as we have enough CPU cores and better vector support, these schemes might become more practical for games. And as GPUs become more general, you will have the possibility of writing a rendering engine that runs directly on the GPU, bypassing DirectX and the fixed graphics pipeline. For example, you can write a renderer in CUDA and run it on Nvidia hardware, bypassing all of their rasterization and everything else.
All a software renderer really does is take some scene data as input - the positions of objects, texture maps and things like that - while the output is just a rectangular grid of pixels. You can use different techniques to generate that grid; you don't have to use the GPU rasterizer to achieve this goal.
TG Daily: What kind of advantage can be gained from avoiding the API? Most developers just utilize DirectX or OpenGL and that's about it. How does the Unreal Engine differ from the conventional approach?
Sweeney: There are significant advantages in doing it yourself, avoiding all the graphics API calls and their overhead. With a direct approach, we can use techniques that require a wider frame buffer - things that DirectX just doesn't support. At Epic, we're using the GPU for general computation with pixel shaders. There is a lot we can do there, just by bypassing the graphics pipeline completely.
TG Daily: What is the role of DirectX these days? DirectX 10 and the Vista-everything model promised things like more effects and a more direct approach to the hardware, claiming that lots of new built-in technologies would enable a console-like experience. DirectX 10.0 has been on the market for some time and the arrival of DirectX 10.1 is just ahead. What went right, and what went wrong?
Sweeney: I don't think anything unusual happened there. DirectX 10 is a fine API. When Vista first shipped, DirectX 10 applications tended to be slower than DirectX 9 ones, but that was to be expected: the hardware guys had had many years and hundreds of man-years to optimize their DirectX 9 drivers, while with DirectX 10 they had to start from scratch. In recent weeks and months, we have seen DX10 drivers catching up to DX9 drivers in performance and starting to surpass them.
I think the roadmap was sound, but DirectX 10 was just a small incremental improvement over DX9. The big news with DirectX 9 was pixel and vertex shaders: you could write arbitrary code. DX10 just takes that to a new level, offering geometry shaders and numerous new features and modes. It doesn't fundamentally change graphics the way DX9 did - DX9 was a giant step ahead of DirectX 7 and DirectX 8.
TG Daily: Since you are a member of Microsoft's advisory board for DirectX, you probably have a good idea what we will see next in DirectX. What can we expect and do you see a potential for a segmentation of APIs - all over again?
Sweeney: I think Microsoft is doing the right thing with the graphics API. There are many developers who will always want to program through an API - through DirectX these days, or a software renderer in the past. That will always be the right solution for them: it makes it easier to get things rendered on-screen, and if you know your resource allocation, you'll be just fine. But realistically, I think that DirectX 10 is the last DirectX graphics API that is truly relevant to developers. In the future, developers will tend to write their own renderers that use both the CPU and the GPU - using a GPU programming language rather than DirectX. I think we're going to get there pretty quickly.
I expect that by the release of the next generation of consoles, around 2012, when Microsoft comes out with the successor to the Xbox 360 and Sony comes out with the successor to the PlayStation 3, games will be running 100% on software-based pipelines. Yes, some developers will still use DirectX, but at some point, DirectX just becomes a software library on top of ... you know.
TG Daily: Hardware?
Sweeney: GPU hardware. And you can implement DirectX entirely in software, on the CPU. DirectX software rendering has always been there.
Microsoft writes the reference rasterizer, which is a factor of 100 slower than what you really need. But it is there, and it shows that you can run an entire graphics pipeline in software. I think we're only a few years away from that approach being faster than the conventional API approach - and we will be able to ship games that way. Just think of Pixomatic's software renderer.
Part 3: Tim Sweeney: Unreal Engine 4.0 aims at next-gen console war
http://www.tgdaily.com/content/view/36436/118/
TG Daily: Throughout GDC, several companies have been presenting game controllers that are so-called brain-computer interfaces. A while ago, I used OCZ's NIA device in Unreal Tournament 2004 and was blown away by its usability. How do you see the interface between us humans and the computer evolving, given that Nintendo has seen such huge success with its Wiimote?
Sweeney: I think the key challenge here is to look at all the cool things engineers are developing and identify which ones are just gimmicks, which ones are cool ideas that might benefit one part of the market but aren't fundamental changes, and which ones really change the way we work with computing interfaces. I still think that motion controllers, such as the Wii controller, have a limited purpose - they're sort of a gimmicky thing. Standing there holding a bunch of devices and moving them around wildly is great for party games, but I don't think it will fundamentally change the way people interact with computers. Humans are tactile beings, so things such as touchscreens fundamentally improve the user interface.
TG Daily: That brings us back to the iPhone, which we talked about earlier. Apple appears to have made a lot of progress in this area.
Sweeney: I agree. You are not just bringing up the map on the screen - you move it with your fingers, you zoom in and out. It's incredible that nobody thought of that earlier. For 3D editing tools, the touchscreen approach would be an excellent thing: you could grab vertices and drag them in different directions. A touchscreen could really improve and change things, and I think we might see that technology migrate to Maya. It is hard to tell how exactly that will pan out, but I see it as a very big improvement in computing versus the motion stuff, which is just a neat toy.
The other big direction is head tracking - cameras built into consoles that watch you and detect, for example, your arm movements. It is just more natural, because it is somewhat annoying to hold a bunch of wired plastic doodads, wireless things you have to pick up and recharge every once in a while. To me, it's more compelling to just use free-form movement and have computers recognize your gestures.
Sweeney: Within the first five years of the personal computer's development, we exhausted all the major ideas such as the keyboard, mouse, joystick and gamepad. But then you see something like Apple's multi-touch, or that YouTube video of a big-screen-based interface where people walk around and just start manipulating objects projected there. That is entirely new stuff. No one has really done that before, and it is clear that there are still a lot of major ideas that haven't surfaced yet. As the technology improves, one thing is certain: as you increase the complexity of the user interface, you need more processing power.
TG Daily: Let’s talk about your game visions for the future and the next Unreal Engine. Where is Epic going with Unreal Engine 3.5 and 4.0?
Sweeney: The Unreal Engine is really tied to the console cycle. We will continue to improve Unreal Engine 3 and add significant new features through the end of this console cycle, so it is normal to expect that we will add new stuff in 2011 and 2012. We're shipping Gears of War now, and we're just showing the next bunch of major tech upgrades, such as soft-body physics, destructible environments and crowds. There is a long life ahead for Unreal Engine 3. Version 4 will exclusively target the next console generation: Microsoft's successor to the Xbox 360, Sony's successor to the PlayStation 3 - and if Nintendo ships a machine with similar hardware specs, then that as well. PCs will follow after that.
Also, we continuously work on transitions as we go through large portions of the engine: we completely throw out some parts and build large subsystems from the ground up, while reusing the things that are still valid.
TG Daily: Like ...?
Sweeney: The networking code, for example. In five years, Internet bandwidth isn't going to be more than 5-6 times higher than it is today, so the network code we have in the engine now will stay the same. Our tools are still valid, too, but we will rewrite large sections of the engine around them as the new hardware develops.
TG Daily: What part of the engine will need a completely new development?
Sweeney: Our biggest challenge will be scaling to lots and lots of cores. UE3 uses functional subdivision into threads: we have the rendering thread that handles all in-game rendering, the gameplay thread that handles all gameplay logic and AI, and some helper threads for physics. We scale very well from dual-core to quad-core, and you can actually see a significant performance increase when you run UT3 on a quad-core compared to a dual-core system.
Down the road, we will have tens of processing cores to deal with, and we will need much, much finer-grained task parallelism in order to avoid being limited by single-threaded code. That, of course, requires us to rewrite very large portions of the engine. We are replacing our scripting system with something completely new, a highly threadable system. We're also replacing the rendering engine with something that can scale to much smaller rendering tasks that can run in and out of order across threads. There is a lot of work to do.
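[Editor's note: the shift Sweeney describes - from a few functional threads to many fine-grained tasks - can be sketched as follows. This is a hypothetical illustration, not Epic's scheduler: work is chopped into many small tasks that any number of worker threads pull from a shared atomic counter, so throughput keeps scaling as cores are added instead of being capped by one rendering thread and one gameplay thread.]

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Run `numTasks` small independent tasks across `numThreads` workers.
// Each worker repeatedly claims the next task index from a shared
// atomic counter until all tasks are exhausted - the essence of
// fine-grained task parallelism, as opposed to functional subdivision
// (one fixed thread per engine subsystem).
void parallelFor(int numTasks, int numThreads,
                 const std::function<void(int)>& task) {
    std::atomic<int> next{0};               // next unclaimed task index
    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([&] {
            for (int i = next.fetch_add(1); i < numTasks;
                 i = next.fetch_add(1)) {
                task(i);                    // execute one small task
            }
        });
    }
    for (auto& w : workers) w.join();
}
```

Because tasks are claimed dynamically rather than assigned to fixed threads, the same code runs unchanged on two cores or twenty; a production scheduler would add work stealing and dependencies, but the scaling principle is the same.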
TG Daily: You already have started working on Unreal Engine 4.0?
Sweeney: We have a small research and development effort dedicated to Unreal Engine 4. Basically, it is just me, but the team will ramp up to three or four engineers by the end of this year - and even more a year after that. In some ways, we resemble a hardware company with our generational development of technology. We are going to have a team developing Unreal Engine 3 for years to come and a team ramping up on Unreal Engine 4. Then, as the next-gen transition begins, we will move everybody over to that. We actually do parallel development for multiple generations concurrently.
TG Daily: Stepping back, what do you see as the most significant technology trends these days?
Sweeney: When it comes to the PC, Intel will implement lots of extensions in the CPU and Nvidia will integrate many extensions in the GPU by the time next-gen consoles begin to surface. We are going to see some CPU cores dealing with gameplay logic, some GPU stuff running general computation... and two different compilers, one for the GPU and one for the CPU. The result will be a reduction in our dependence on bloated middleware that slows things down and shields the real functionality of the devices.
It would be great to be able to write code for one massively multi-core device that does both general and graphics computation in the system. One programming language, one set of tools, one development environment - just one paradigm for the whole thing: large-scale multi-core computing. If you extrapolate Moore's Law from the number of cores Microsoft put in the Xbox 360, it is clear that around 2010 - at the beginning of the next decade - you will be able to put tens of CPU cores on one processor chip, and you will have a perfectly usable, uniform computing environment. That will be an interesting time for graphics as well.
At that time, we will have a physics engine that runs on a general computing device, and we will have a software renderer that can do far more than you can do in DirectX, as a result of having general computation functionality. I think that will really change the world. It can happen as soon as the next console transition begins, and it brings a lot of economic benefits, especially if you look at the world of consoles or the world of handhelds. You have one non-commodity computing chip, hooked up directly to memory. We have an opportunity to economize the system and provide entirely new levels of computing performance and capabilities.