John Carmack on PS3 Video

Legend said:
for us slow pokes (dialup-ers) can someone post the key points mentioned, please?

thanks.

It's just the usual:
* Sony picked an asymmetric design for Cell
* MS picked a symmetric one
* Better peak theoretical performance on Cell
* Nothing extremely bad about it, but it will take a little more work, which studios closer to Sony will do but id won't
 
but he said the PS3 has more peak performance...

and how are 3 cores symmetrical actually?
 
"identical" or "triplets" is probably what he meant. :p

Anyone know where I can get the full interview?
 
hey69 said:
but he said the PS3 has more peak performance...

and how are 3 cores symmetrical actually?
Symmetrical in this case means the cores are all the same. Cell is asymmetric because it mixes different types of processors, though to be honest it's only two types, not 7 different processors. Devs already know how to write for the PPE. The problem Carmack really faces is learning how to use the SPEs effectively, as they are different. If Cell consisted of one PPC core and 2 Intel cores, it'd still be asymmetric, but he wouldn't be complaining so much ;)

What he did say, which I don't think is right, was that writing for the SPEs needs different tools and a different compiler than writing for the PPE, and he even called the PPEs Cells.
 
Shifty Geezer said:
What he did say, which I don't think is right, was that writing for the SPEs needs different tools and a different compiler than writing for the PPE, and he even called the PPEs Cells.

No, he's right, the compiler obviously is different.
The issue with the SPUs is their limited access to main memory; it's hard to work around for a lot of obviously parallel tasks.
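
To make the constraint concrete, here is a minimal, hypothetical sketch of the kind of streaming loop SPU code tends to become: data has to be staged from main memory into the 256 KB local store in chunks, processed there, and written back. The dma_get/dma_put helpers and the chunk size below are illustrative stand-ins, not the actual SDK intrinsics.

```cpp
// Hypothetical sketch of the SPU-style streaming pattern: the SPU can't
// dereference main memory directly, so data moves through the small local
// store in fixed-size chunks. dma_get/dma_put are memcpy stand-ins here so
// the sketch runs anywhere; on real hardware they would be DMA transfers
// and the pointers would be effective addresses in main memory.
#include <cstring>
#include <cstddef>
#include <vector>
#include <cstdio>

constexpr std::size_t CHUNK = 1024;                 // floats per transfer (illustrative)
static float local_in[CHUNK];                       // buffers that would live in local store
static float local_out[CHUNK];

void dma_get(float* local, const float* main_mem, std::size_t n) { std::memcpy(local, main_mem, n * sizeof(float)); }
void dma_put(const float* local, float* main_mem, std::size_t n) { std::memcpy(main_mem, local, n * sizeof(float)); }

// Process a large array that lives in "main memory", one chunk at a time.
void process_stream(const float* src, float* dst, std::size_t count)
{
    for (std::size_t off = 0; off < count; off += CHUNK)
    {
        const std::size_t n = (count - off < CHUNK) ? count - off : CHUNK;
        dma_get(local_in, src + off, n);            // pull a chunk into local store
        for (std::size_t i = 0; i < n; ++i)
            local_out[i] = local_in[i] * 2.0f;      // work only on local data
        dma_put(local_out, dst + off, n);           // push results back out
    }
}

int main()
{
    std::vector<float> src(5000, 1.5f), dst(5000);
    process_stream(src.data(), dst.data(), src.size());
    std::printf("dst[4999] = %f\n", dst[4999]);     // prints 3.0
}
```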
 
Ooo. So SPE apulets are compiled separately and built into the main program? Doesn't the IDE provide SPE and PPE development within the same program?
 
Bah, that's no big deal... At least you have a compiler for the SPEs... It's not like the PS2, where you've got 2 different compiler targets, and none for the VUs (actually there were a couple of compilers, but nothing great), along with a build chain having to deal with 3 different sets of vector opcodes... In this regard the PS3 is FAR simpler and cleaner...
 
archie4oz said:
Bah, that's no big deal... At least you have a compiler for the SPEs... It's not like the PS2, where you've got 2 different compiler targets, and none for the VUs (actually there were a couple of compilers, but nothing great), along with a build chain having to deal with 3 different sets of vector opcodes... In this regard the PS3 is FAR simpler and cleaner...

I think that's the point: id didn't make any PS2 game either. I think the thing about Carmack is that he likes to try new stuff and not have to rethink ways to get simple things done.
 
pegisys said:
I think that's the point: id didn't make any PS2 game either.
id themselves didn't develop anything for today's consoles, be it the PS2, the GC, the Xbox or even the DC.

Quake III was ported to the PS2 by EA, and to the DC by Raster.
Doom III on Xbox is the work of Activision's Vicarious Visions.

The last game id developed on a console was Doom for the Atari Jaguar...
 
ERP said:
No, he's right, the compiler obviously is different.
The issue with the SPUs is their limited access to main memory; it's hard to work around for a lot of obviously parallel tasks.

Yes, but for obvious, embarrassingly parallel tasks, one doesn't want a huge shared memory pool. One wants local memory to avoid concurrency issues and the need to use concurrent data structures, or locks.

Yes, a UMA and SMP architecture presents an easier model to code for, but it does not necessarily provide optimal performance. To get optimal performance, one might need to implement local memory areas anyway, to get rid of lock contention and to allow the compiler to make optimizations that it may not be able to do if the default assumption is conservative (aliasing, concurrency).

I think when it comes down to it, if you want high performance on tasks which can be parallelized, you need to write your code in a way that takes advantage of the inherent data-parallel nature of your problem, rather than thinking you can get away with a solution that is a serial algorithm with a couple of OpenMP annotations launched in multiple threads.
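
As an illustration of that kind of restructuring (a hedged sketch, not anything from the interview): instead of every thread hammering one shared, locked container, each worker gets its own disjoint slice of the input and its own private accumulator, and results are merged once at the end. The function and type names are made up for the example.

```cpp
// Sketch of the data-parallel style: each worker owns a slice of the input
// and a private output slot, so no locks or shared mutable state are needed
// in the hot loop. Names are illustrative, not from any SDK.
#include <vector>
#include <thread>
#include <numeric>
#include <algorithm>
#include <cstddef>
#include <cstdio>

float score(float x) { return x * x; }          // stand-in per-element work

float parallel_sum(const std::vector<float>& data, unsigned workers)
{
    std::vector<float> partial(workers, 0.0f);  // one private accumulator per worker
    std::vector<std::thread> pool;
    const std::size_t chunk = (data.size() + workers - 1) / workers;

    for (unsigned w = 0; w < workers; ++w)
    {
        pool.emplace_back([&, w] {
            const std::size_t begin = w * chunk;
            const std::size_t end = std::min(data.size(), begin + chunk);
            float local = 0.0f;                 // thread-local, no contention
            for (std::size_t i = begin; i < end; ++i)
                local += score(data[i]);
            partial[w] = local;                 // each worker writes only its own slot
        });
    }
    for (auto& t : pool) t.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0f);  // single merge step
}

int main()
{
    std::vector<float> data(100000, 0.5f);
    std::printf("sum = %f\n", parallel_sum(data, 4));
}
```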
 
DemoCoder said:
Yes, a UMA and SMP architecture presents an easier model to code for, but it does not necessarily provide optimal performance.
Which appears to be Carmack's point - he doesn't think the extra performance is worth the extra effort, it seems. It's interesting to see the different takes on these machines. You also get some devs saying PS3 is a challenge, but an exciting one with lots of potential. Everyone has their tastes. I guess for the games it depends where the average lies as to whether the exotic hardware gets used properly or not.
 
If you want a friendly coding environment that badly, I'm sure it can be provided via a software layer. In fact, I wouldn't doubt that Sony will make such a library available.
 
Shifty Geezer said:
Which appears to be Carmack's point - he doesn't think the extra performance is worth the extra effort, it seems. It's interesting to see the different takes on these machines. You also get some devs saying PS3 is a challenge, but an exciting one with lots of potential. Everyone has their tastes. I guess for the games it depends where the average lies as to whether the exotic hardware gets used properly or not.

That depends entirely on what you do with the extra performance. By his definition he doesn't think it's worth it, and maybe it isn't for the kinds of problems he is trying to solve (MegaTexture). Carmack seems to want a high texture:ALU ratio right now, for example, which is the opposite of what other devs are pushing (low tex:ALU).

But other devs might have novel uses for SPUs on algorithms that Carmack has no interest in. I do think Carmack is one of the best game programmers out there, but Doom, for example, represents one particular vision of where games can go. Each engine imparts its own special set of restrictions depending on what it is optimized for.

I think one can definitely say that Xenos is probably easier to code for, and that it's easier to avoid penalties and approach optimal performance on it. But XGPU vs. Cell is a different story, and I don't think one can conclude that ease of use for a fixed platform target, which may lead to lower peak throughput, is necessarily ideal.

Ease of use is a factor if you want massive adoption at all levels. For a console, I may not want shovelware developers. I may want top-tier developers who can wring maximal performance from my system and come up with innovative designs. I want my top-tier developers to have ease of use and good tools, but one shouldn't design a system where the hands of those developers will be tied.

For example, one could abstract away the details of everything and force devs to go through a virtual machine layer like .NET with a built-in scene-graph language. That may make for rapid development, fewer bugs, and ease of use, but it will lower the maximum achievable performance.

Rapid Application Development and maximum-performance, high-quality games may not go together. RAD means development productivity, but you have to be careful that the extra productivity doesn't give up too much in trade.
 
DemoCoder said:
Yes, but for obvious, embarrassingly parallel tasks, one doesn't want a huge shared memory pool. One wants local memory to avoid concurrency issues and the need to use concurrent data structures, or locks.

Sure, but how many embarrassingly parallel tasks are there in your average game?
And how many can you add that actually add value to the game?

Here's my current thought on this, having not gone through an entire dev cycle, so I'm likely to change my mind. I am pretty positive the limiting factor will be the speed of the PPU; we'll offload all the easy stuff, and probably add stuff that can be easily offloaded. But I don't believe that at their core most game engines will be particularly parallel.

I think there is a real desire to scale gameplay. I want to put 100 people in the streets of a city to populate it, instead of the 20 or so in GTA. I don't want them to disappear when I turn around. I'd like them to exhibit reasonable behavior in response to what's happening, and I'd rather they didn't run into each other all the time and look stupid. All this stuff will likely increase the load on the main game thread.

Most of the game engines I've seen don't even attempt to do things like batch physics queries, or deal with the fact that the results might be asynchronous. It's very much: do a raycast here and change state in response to it.
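
To illustrate the difference (a hypothetical sketch, not any particular engine's API): instead of issuing a blocking raycast and branching on the result in place, the game code submits a batch of queries, does other work, and reads the results back later. That is the shape of interface that can actually be farmed out to another core or an SPU. All names below are made up.

```cpp
// Hypothetical batched/asynchronous raycast interface, as opposed to the usual
// "raycast here, change state immediately" pattern. The physics system is free
// to process the whole batch on another core between submit() and resolve().
// Everything here is illustrative, not a real engine API.
#include <vector>
#include <cstddef>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct RayQuery  { Vec3 origin, dir; float max_dist; };
struct RayResult { bool hit; float dist; };

class RaycastBatch {
public:
    // Queue a query; returns a handle used to fetch the result later.
    std::size_t submit(const RayQuery& q) { queries.push_back(q); return queries.size() - 1; }

    // In a real engine this would kick the batch off to a worker and the caller
    // would do other work before asking for results. Here it just runs a dummy
    // ground-plane (y = 0) test in place so the sketch is self-contained.
    void resolve()
    {
        results.resize(queries.size());
        for (std::size_t i = 0; i < queries.size(); ++i) {
            const RayQuery& q = queries[i];
            const bool down = q.dir.y < 0.0f;
            const float dist = down ? q.origin.y / -q.dir.y : 0.0f;
            results[i] = { down && dist <= q.max_dist, dist };
        }
    }

    const RayResult& result(std::size_t handle) const { return results[handle]; }

private:
    std::vector<RayQuery> queries;
    std::vector<RayResult> results;
};

int main()
{
    RaycastBatch batch;
    std::size_t h = batch.submit({ {0.0f, 2.0f, 0.0f}, {0.0f, -1.0f, 0.0f}, 10.0f });
    // ...game code would submit many more queries and do other work here...
    batch.resolve();
    std::printf("hit=%d dist=%f\n", batch.result(h).hit, batch.result(h).dist);
}
```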

Parallelism isn't trivial in general, and it's easy to actually reduce performance if you're not careful. Even something as seemingly trivial as creating an object asynchronously has subtle gotchas that will screw you over. And unfortunately a lot of things appear to work until they don't.

IMO it's going to be a long, slow transition to effective parallelism in game engines.
 
Wow, the 3rd time around with this info was the charm :D

I think it is interesting to see how different developers react to the systems, and as Demo said, that may be due to the different problems they want to solve.

Carmack has said he wished the consoles had been single-core this generation and had moved to multi-core next gen. While this recent interview is more "PS3 vs. 360", in the past he has stated he was not very happy with either CPU, just more unhappy with Cell. Then again, what solution was there? CPU makers hit the wall 3 years ago; there were not many options. It seems to be more an industry problem, and we should have been looking at parallelization in hardware, tools, and software years ago. Then again, he is pretty up front that he thinks both consoles are not bad (from a dev perspective), unlike last gen. So his comments weren't all negative.

John and id Software seem to focus on "smaller" developer teams, and as he has noted in the past, the issue is not necessarily him but the other developers he is concerned with. They seem to be pretty close with Raven, Splash Damage, etc.

If I understand him correctly, his general philosophy is that the platforms should enable him to spend as much time as possible working on the game and not fighting hardware. Dev time and budget are increasing, and game complexity is increasing. This frequently means your team size increases, and with it the risk of more problems. Multi-core processors just increase the stress of going from one generation to the next. Further adding to the complexity with asymmetric cores is just another thing to worry about.

Of course Carmack does not speak for everyone in the industry (PS2 devs are probably shaking their heads!), but his concerns seem pretty reasonable for a developer coming from a PC perspective and as the owner of an independent game studio. This is one reason I don't see MS and Sony getting together on a console. As similar as they are, they also have different philosophies, approaches, and vested interests.

What I would like to ask John is: "What would have been your preferred multi-core solution?" My guess would be a dual-core OOOe x86. Of course size, heat, and cost issues are there, as well as IP ownership and floating-point performance.
 
Simple breakdown: John Carmack isn't happy about changing his habits. While there is much to gain by doing it the PS3 way, it's a lot of hard work, and he was doing very well doing what he did, thank you very much. The Xbox 360 is very much how he is used to things: Visual Studio and all the common classes and objects.

Cynical? Yes. True? Yes. The higher people are elevated, the less they like having to start over and prove themselves once again. When you're considered a demi-god, you're in no hurry to change territory and run the risk of getting your ass kicked by the inhabitants.
 