Tim Sweeney interview on the state of multicore programming on consoles + engine stuff

A developer on the PC/360 game Prey responded to this presentation:

"Sure you can just about get away with bad code now on the 360"

Hmm, not sure I'd agree with that. I've seen bad code run slower on a 360 core than on the original Xbox 1's CPU! Bad code can easily bring a 360 core to its knees.
 
Your definition is inappropriate for the context, and there's nothing silly about Tim Sweeney's or anyone else's comment about physics being a parallel task.

It's silly, because not all physics solvers are even remotely parallel.
It's silly, because you need to do a lot of work to make them parallel.
It's silly, because most PDE solvers, for example, are much slower in their parallel form.
Etc.

And in fact lighting using a physics model is also extremely parallel. You can trace light rays per pixel in parallel.

No, with a more or less correct physics model it's extremely non-parallel (see radiosity).
You need to fake it with a lot of clever tricks to make it parallel.
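
To make the contrast concrete, here's a minimal sketch (my own illustration, nothing from the thread; trace() is a hypothetical stand-in shader) of why per-pixel ray casting is embarrassingly parallel, while radiosity, which couples every surface patch to every other patch, has to be solved as one global system:

```cpp
#include <cstdio>
#include <thread>
#include <vector>

struct Color { float r, g, b; };

// Hypothetical stand-in shader; a real tracer would cast a ray through
// pixel (x, y) and shade the nearest hit.
Color trace(int x, int y) {
    return Color{x / 640.0f, y / 480.0f, 0.5f};
}

// Each thread shades its own interleaved rows. No pixel depends on any
// other pixel, so there are no locks and no shared writes.
void render(std::vector<Color>& frame, int w, int h, int n_threads) {
    std::vector<std::thread> pool;
    for (int t = 0; t < n_threads; ++t)
        pool.emplace_back([&frame, w, h, t, n_threads] {
            for (int y = t; y < h; y += n_threads)
                for (int x = 0; x < w; ++x)
                    frame[y * w + x] = trace(x, y);
        });
    for (auto& th : pool) th.join();
}

int main() {
    std::vector<Color> frame(640 * 480);
    render(frame, 640, 480, 4);
    std::printf("center r=%.2f\n", frame[240 * 640 + 320].r);
}
```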
 
I'd go for less rather than more when describing the physical correctness of radiosity :)

Do the big physics engines directly solve sets of PDEs BTW? Or do they linearize them?
 
I'd go for less rather than more when describing the physical correctness of radiosity :)

At least it's more "correct" than Phong.

Do the big physics engines directly solve sets of PDEs BTW? Or do they linearize them?

Linearize. Parallelize. Solve. :)
There was a good article about SPU collision solvers at the top of the thread. For example, Gauss-Seidel is a linear solver, but you can linearize some ODEs and solve them with GS at acceptable speeds.
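
For anyone who hasn't met Gauss-Seidel, here's a minimal sketch of the iteration (my own, not the SPU article's; the 3x3 system, tolerance and iteration cap are illustrative). Note how each update consumes the freshest values of the other unknowns immediately - exactly the data dependence that makes a naive sweep serial, which is why parallel variants reorder the unknowns (e.g. red-black coloring) so independent subsets can be updated together:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Iteratively solve Ax = b. Each x[i] update uses the freshest values of
// the other components straight away, which is what makes a naive sweep
// inherently serial.
void gauss_seidel(const std::vector<std::vector<double>>& A,
                  const std::vector<double>& b,
                  std::vector<double>& x,
                  int max_iters = 100, double tol = 1e-9)
{
    const std::size_t n = b.size();
    for (int it = 0; it < max_iters; ++it) {
        double max_delta = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            double sum = b[i];
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) sum -= A[i][j] * x[j];
            const double xi = sum / A[i][i];
            max_delta = std::max(max_delta, std::fabs(xi - x[i]));
            x[i] = xi;
        }
        if (max_delta < tol) break;  // converged
    }
}

int main() {
    // A diagonally dominant 3x3 system, so convergence is guaranteed.
    const std::vector<std::vector<double>> A = {{4, 1, 0},
                                                {1, 4, 1},
                                                {0, 1, 4}};
    const std::vector<double> b = {5, 6, 5};
    std::vector<double> x(3, 0.0);
    gauss_seidel(A, b, x);
    std::printf("x = %.4f %.4f %.4f\n", x[0], x[1], x[2]);  // ~1 1 1
}
```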
 

DeepBrown, in the future it would be appreciated if you didn't excise all context from a transplanted quote. Frankly, I think it's worth obvious mention that the people you're quoting are forum members and that this discussion took place in a B3D thread; as it stands, for someone unfamiliar with the thread in which it originated, the quotes you provided lack serious contextual background. If you were quoted somewhere else on this forum, would you want your introduction reduced to "some guy on some forum?" ;)

(For those who have not seen it, the source thread is Insomniac's GDC presentation: http://forum.beyond3d.com/showthread.php?t=47057 )
 
I think the main issue most programmers switching over to parallel programming are having is that it's just new to them. It requires a completely different mindset than serial programming, with a whole host of considerations you have to keep in mind that most people just aren't used to thinking about. They write a bunch of code they think should work in parallel, and then when they get data corruption, deadlocks, race conditions et al they chalk it up to "random unpredictable errors" and tear their hair out spending hours debugging the simplest code trying to find fixes (or 'workarounds' for what they consider to be inherent platform bugs that are really just their own errors in logic).

When you've been working with this type of thing for a very long time, these bugs become a rare case, not the common, everyday case they are for people new to parallelism. They are also fairly obvious and immediately fixed by people with experience: all they have to do is look at what's happening, quickly reduce the list of possible suspects to a handful of options, and have it fixed in about the same amount of time any other programming bug would take.
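
To put a face on those "random unpredictable errors", here's a minimal sketch (my own illustration) of the classic lost-update data race and its atomic fix:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int unsafe_count = 0;             // plain int: increments can interleave
std::atomic<int> safe_count{0};   // atomic: each increment is indivisible

void worker() {
    for (int i = 0; i < 100000; ++i) {
        ++unsafe_count;  // data race: lost updates look "random"
        ++safe_count;    // fix: always totals exactly 200000
    }
}

int main() {
    std::thread a(worker), b(worker);
    a.join();
    b.join();
    // unsafe_count is usually wrong (and technically undefined behaviour);
    // safe_count is deterministic.
    std::printf("unsafe=%d safe=%d\n", unsafe_count, safe_count.load());
}
```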

I think that's exactly it; a lot of legacy "old school" thinking resident in a lot of these teams, and a lot of inertia in terms of changing practices and viewpoints. It's no surprise, really, that Sweeney and his team, coming from a very serialized background - one in which they were extremely successful - view the present shift with not a lot of warmth. Sweeney himself certainly doesn't seem to think it worth his own time to relearn best practices, and likely the majority of Epic's top people are in the same boat. This attitude can be found throughout the development space, notably among longtime PC devs, and it harkens back to the comments of yesteryear equating the XeCPU to 'Celeron-esque' performance. Which of course is total nonsense.

Well, one way or another these attitudes will begin to change over time. It's just a shame that the likes of Sweeney, Carmack, and Newell seem so antagonistic to this shift; IMO it points to a new set of "star" developers that will take the reins of game development in the future, though I would never diminish the contributions of their respective teams up until now.
 
As someone who has been working with parallel programming for much longer than consumer multi-chip solutions have been available, I actually take a good bit of offense at this section of the interview (or at any statement about anything being 'hard', really, as I usually adopt the mindset of "it's not hard, you just don't understand it well enough to realize how easy it is!"). For the longest time, when I've had to develop single-threaded applications, I've always run into sections of code that I immediately recognize as being easy to split into separate, independent tasks that naturally suggest parallelism, and I've always had to force my brain to ignore this and write serial code instead.
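
As an illustration of that kind of "obviously independent" split (a sketch of my own, with made-up sizes), two halves of a read-only sum share no writable state and can run as separate tasks:

```cpp
#include <cstddef>
#include <cstdio>
#include <future>
#include <numeric>
#include <vector>

// Sum a half-open range [lo, hi); read-only, so no shared mutable state.
long sum_range(const std::vector<long>& v, std::size_t lo, std::size_t hi) {
    return std::accumulate(v.begin() + lo, v.begin() + hi, 0L);
}

int main() {
    const std::vector<long> data(1000000, 1);
    const std::size_t mid = data.size() / 2;
    // The two halves are independent, so they can run as concurrent tasks
    // with no locks and no ordering constraints.
    auto low  = std::async(std::launch::async, sum_range, std::cref(data),
                           std::size_t{0}, mid);
    auto high = std::async(std::launch::async, sum_range, std::cref(data),
                           mid, data.size());
    std::printf("total = %ld\n", low.get() + high.get());  // 1000000
}
```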

I would like to know what you think of something I wrote earlier today.

The saving grace for development right now is SDK 3.0 (with an April 07 miracle called Runtime 2.1) and a fix for fatal stalls.
As well as new and improved developer tools, plus real university taught education programs.
(PlaystationEdge, PhyreEngine, MIT, CIT, GeorgiaTech, USC, NC State, etc, etc)
Quite frankly, parallel processing was too advanced/new an idea for most gaming studios to develop for.
Even on the PC, only the very largest studios were, in 2007, just beginning to use multi-threaded engines.
So it was critical to get into the educational institutions and teach multi-core architecture, ISAs, and programming structures.
That way developers would be able to work for themselves and not rely on reworking other games' engines.
Now that is happening, and developers again have hope that the following development cycles will be much better.

Some of this seems to fit with what you have said.
 
The reason that multithreaded games aren't the norm on PC has nothing to do with technical prowess or desire; it has to do with supporting a minspec platform that has a single CPU. Since you have to provide an adequate experience on your minspec platform, you're usually reduced to figuring out what superficial features you can jam onto the other CPUs, as opposed to how to best exploit them.
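
A rough sketch of that minspec pattern (my own, not the poster's): the core frame always runs on the single-CPU path, and purely additive extras only get spun up when spare hardware threads actually exist.

```cpp
#include <cstdio>
#include <thread>

void run_core_frame()      { /* gameplay and rendering: must run everywhere */ }
void run_extra_particles() { /* superficial eye candy: optional */ }

int main() {
    // hardware_concurrency() may return 0 when the count is unknown.
    const unsigned cores = std::thread::hardware_concurrency();
    const bool offload = cores > 1;
    std::printf("cores=%u, offloading extras: %s\n",
                cores, offload ? "yes" : "no");

    for (int frame = 0; frame < 3; ++frame) {
        std::thread extras;
        if (offload)
            extras = std::thread(run_extra_particles);  // spare CPUs only
        run_core_frame();               // the minspec path always runs this
        if (extras.joinable())
            extras.join();
    }
}
```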

The concept that parallel programming is just hard because it's new, and that some magic solution to make it as easy as sequential programming will come along or programmers will eventually just get it, is IMO misguided. It's hard because of the way the development complexity scales with code size and complexity.

Most good game programmers could write a 1,000-line parallel program and do a good job of it. A large team writing and debugging 1,000,000+ lines of code, various parts of which are significantly parallel, is a different problem.

Now I will say that at these scales we're all pretty new to the problem, and there is an element of re-establishing best practices, both architecturally and in how the team works with the code. I know my viewpoint on how to approach some classes of problem has changed during my current development process. But I don't believe that there is going to be some magical breakthrough where this stuff becomes easy.
 
DeepBrown, in the future it would be appreciated if you didn't excise all context from a transplanted quote. Frankly, I think it's worth obvious mention that the people you're quoting are forum members and that this discussion took place in a B3D thread; as it stands, for someone unfamiliar with the thread in which it originated, the quotes you provided lack serious contextual background. If you were quoted somewhere else on this forum, would you want your introduction reduced to "some guy on some forum?" ;)

(For those who have not seen it, the source thread is Insomniac's GDC presentation: http://forum.beyond3d.com/showthread.php?t=47057 )

Are we talking about the final comment? I was unaware it came from B3D - I sourced it straight from MikeB's NeoGAF thread, and he also did not give a source - so how was I meant to know it originated from here? Well, I don't know. I also gave the Insomniac presentation as a source.
 
Are we talking about the final comment? I was unaware it came from B3D - I sourced it straight from MikeB's NeoGAF thread, and he also did not give a source - so how was I meant to know it originated from here? Well, I don't know. I also gave the Insomniac presentation as a source.

Well, I found the thread in question on NeoGAF, and it looks like it's been edited to include the link for context (so thanks, Mike B). You can understand why I would of course have thought you sourced it from here originally. ;)
 
Well, I found the thread in question on NeoGAF, and it looks like it's been edited to include the link for context (so thanks, Mike B). You can understand why I would of course have thought you sourced it from here originally. ;)

Of course...but I am a member on a number of forums. Anyway, sorry, I should have linked to Mike B's thread anyway! :oops:
 
The reason that multithreaded games aren't the norm on PC has nothing to do with technical prowess or desire; it has to do with supporting a minspec platform that has a single CPU. Since you have to provide an adequate experience on your minspec platform, you're usually reduced to figuring out what superficial features you can jam onto the other CPUs, as opposed to how to best exploit them.

The concept that parallel programming is just hard because it's new, and that some magic solution to make it as easy as sequential programming will come along or programmers will eventually just get it, is IMO misguided. It's hard because of the way the development complexity scales with code size and complexity.

Most good game programmers could write a 1,000-line parallel program and do a good job of it. A large team writing and debugging 1,000,000+ lines of code, various parts of which are significantly parallel, is a different problem.

Now I will say that at these scales we're all pretty new to the problem, and there is an element of re-establishing best practices, both architecturally and in how the team works with the code. I know my viewpoint on how to approach some classes of problem has changed during my current development process. But I don't believe that there is going to be some magical breakthrough where this stuff becomes easy.

Yet games like Crysis require GPUs that cost more than two dual-core CPUs, and more than a new quad-core? :p

Sincerely though, thank you for the multiple insights above.
Getting the perspective of someone who does this stuff for real is very helpful.
 
As someone who has been working with parallel programming for much longer than consumer multi-chip solutions have been available, I actually take a good bit of offense at this section of the interview (or at any statement about anything being 'hard', really, as I usually adopt the mindset of "it's not hard, you just don't understand it well enough to realize how easy it is!"). For the longest time, when I've had to develop single-threaded applications, I've always run into sections of code that I immediately recognize as being easy to split into separate, independent tasks that naturally suggest parallelism, and I've always had to force my brain to ignore this and write serial code instead.

I think the main issue most programmers switching over to parallel programming are having is that it's just new to them. It requires a completely different mindset than serial programming, with a whole host of considerations you have to keep in mind that most people just aren't used to thinking about. They write a bunch of code they think should work in parallel, and then when they get data corruption, deadlocks, race conditions et al they chalk it up to "random unpredictable errors" and tear their hair out spending hours debugging the simplest code trying to find fixes (or 'workarounds' for what they consider to be inherent platform bugs that are really just their own errors in logic).

When you've been working with this type of thing for a very long time, these bugs become a rare case, not the common, everyday case they are for people new to parallelism. They are also fairly obvious and immediately fixed by people with experience: all they have to do is look at what's happening, quickly reduce the list of possible suspects to a handful of options, and have it fixed in about the same amount of time any other programming bug would take.

I've gotten to the point where I'm so used to thinking in parallel that I don't personally feel I spend a noticeably longer amount of time developing parallel applications than I would doing the same in serial. I'm sure under the surface it requires slightly more effort than purely serial programming, but I certainly wouldn't say it's twice as much. After a while it just seems to come naturally... but maybe that's just me.
That's kind of missing the point.

It doesn't matter whether it is "hard" or not to write multithreaded code. The fact is that it is certainly harder than writing serial code. Given a limited amount of resources, you need to choose where to place your priorities. This is what Sweeney was getting at - he'd very much rather spend the time and development resources on something else. Multithreaded programming is here to stay, but that doesn't mean it's ever going to be any easier than writing serial code. Which is why so many game developers dislike the notion.
 
Given a limited amount of resources, you need to choose where to place your priorities. This is what Sweeney was getting at - he'd very much rather spend the time and development resources on something else.
At some point they'll notice that bitching about it is a waste of their time too ;)
 
It doesn't matter whether it is "hard" or not to write multithreaded code. The fact is that it is certainly harder than writing serial code. Given a limited amount of resources, you need to choose where to place your priorities. This is what Sweeney was getting at - he'd very much rather spend the time and development resources on something else. Multithreaded programming is here to stay, but that doesn't mean it's ever going to be any easier than writing serial code. Which is why so many game developers dislike the notion.

It's not the idea of preferring to spend time and resources on task X vs problem Y that is irksome, though, but rather the broad generalizations he's willing to apply in his analysis. To say that 'multicore' is 2x the effort, Cell 5x the effort, and GPGPU 10x the effort... don't these figures strike you as overly arbitrary? And I don't think it'd be brought up as a point of contention at all if people didn't feel that those estimates were wrong to boot.

So ultimately, in this instance, it comes down to the methods and skill-sets of the team and the state of the tools they rely on. It would be nice if he mentioned those aspects rather than making it seem like an inherent cost differential associated with the architectures themselves. It is what it is, of course, and one can't expect the industry to change overnight in terms of practices, but it does get a little perplexing hearing some of these guys pooh-poohing the multithreaded apocalypse to come, with the answer primarily being to ask the middleware providers to hurry up with some tools.

His answer on the IBM vs Intel aspect of the consoles given in the interview I found a little aggravating as well; the primary reasons each of the three console makers went with IBM - different in the case of each - are almost completely separate from the premises he offers. Granted, for MS part of the move was definitely a reaction to last gen's sour situation in terms of both CPU and chipset sourcing, but Nintendo and Sony were going to do their chips regardless of what Intel was offering and at what price. Even for Microsoft, I think the XeCPU pursued a course philosophically more aligned with what the 360 was trying to achieve vs an Intel chip; while some aspects of its implementation could of course have been better, I think what it comes down to in Sweeney's view is basically OOE vs IOE (out-of-order vs in-order execution) - who has it and who doesn't. And that's certainly a subject that's made the rounds on these forums a number of times. :)

With the above noted, I'll say though that I thought the interview as a whole was fairly constructive - even some of the answers I disagree with - but it's of course the points of disagreement that get discussed.
 
Now I will say that at these scales we're all pretty new to the problem, and there is an element of re-establishing best practices, both architecturally and in how the team works with the code. I know my viewpoint on how to approach some classes of problem has changed during my current development process. But I don't believe that there is going to be some magical breakthrough where this stuff becomes easy.

I think that's more of an issue with proper project management and architecture design than anything else. I don't care if you're working on a 10,000 line serial project or a million+ line parallel project, in a properly designed and managed project new code segments should not break other sections and one programmer's work shouldn't break another's already properly working code. Now I know as well as anyone that in the real world things don't always work out this way but that's true for both serial and parallel programs.

Parallel systems aren't exactly new - they date back 40-50+ years - they're just 'new' to the mainstream. There have been numerous multi-million-line projects throughout history; those people obviously learned how to deal with all the differences at some point.

I never said (nor meant to imply) that there's some sort of magical point in a programmer's life when all of a sudden, out of some divine understanding, they start to write flawless parallel (or serial) programs. I do believe that things certainly get a lot easier as you get more experience (as is true for everything in life), but more than that, I think the primary thing holding a lot of people back is at the architectural design level. The Unreal Engine in particular is a very large project with a largely serial-oriented design that's now being modified into a parallel program. That's certainly going to increase the amount of effort you have to apply to any given problem, because the framework just isn't there to naturally facilitate you.

There's also the fact that they have a huge licensee base and a pretty large amount of income coming in on their current design, which is naturally going to lead to some rather large inertia - they can't exactly change everything overnight.
 
I think that's more of an issue with proper project management and architecture design than anything else. I don't care if you're working on a 10,000 line serial project or a million+ line parallel project, in a properly designed and managed project new code segments should not break other sections and one programmer's work shouldn't break another's already properly working code. Now I know as well as anyone that in the real world things don't always work out this way but that's true for both serial and parallel programs.
In theory. However, the complexities of getting people to design and work with flawless systems aren't inconsequential, especially in something as complex as a computer game, where the pursuit of performance can encourage short-cuts that break proper code structure. I'm one of those trumpeting the new regime and thinking the devs who are complaining aren't approaching it the right way, but at the same time I wouldn't expect developers - not even a notable percentage of developers - to write game code where changes in one system never have an impact on other systems.
 
Personally, I would assume his statements regarding the difficulties of parallel program development were focused on the aggregate and not on an individual basis.

I'm pretty sure someone like Tim Sweeney wouldn't have a problem sitting down and building a massively parallel software architecture; however, in the world of practical games development, he wouldn't get the chance to write most of it.

I think the largest difficulties come from managing teams of mostly moderately experienced programmers who may or may not have an academic background in multi-threading, many of whom may never even have seen multi-threaded code before, let alone written it.

It's hard to manage a team where the VAST majority are *learning on the job* and there aren't strong enough established practices and guidelines for that particular field accessible internally.

So in that respect I'd say multi-core games development isn't inherently "hard" or even "harder" per se; rather, the process of developing said software is where the increased investment lies.
 
I'm pretty sure someone like Tim Sweeney wouldn't have a problem sitting down and building a massively parallel software architecture

And why are you so sure?
If he doesn't have a clue about programming physics solvers, why should he have expertise in multithreading?
 