JC Keynote talks consoles

mckmas8808 said:
I guess you missed the whole physics interaction with the mud in Motorstorm. Killzone has at least 20 to 30 guys on screen with terrific A.I. in the video. All the terrific post-processing effects going on in these next-gen games that I named are not happening on the PC scene today. It's more than just graphics. But I know you recognize that.

You must be referring to the imitation of physics in animated movies. ;)

I do agree that we will see games Next Gen that look *similar* to those movies.

But I doubt they will have the cinema-like perfect timing and reactions of the other soldiers/enemies or that level of animation.
 
blakjedi said:
i think he prefers the xbox 1 not the x360...

I don't know about that.

Gamespy said:
Carmack raved about the relative ease of developing for Xbox 360.

But the Xbox 360 was designed to have a very thin API layer. In Carmack's words, he can "basically talk directly to the hardware ... doing exactly what I want."

Here Carmack heaped praise on the decisions that Microsoft has made with the Xbox 360. "It's the best development environment I've seen on a console," he says. Microsoft has taken a very developer-centric approach, creating a system that's both powerful and easy to code for. This is in contrast to Nintendo, Sony, and (formerly) Sega, who generally focused on the hardware.

http://www.gamespy.com/articles/641/641662p2.html
 
jvd said:
The PS3 has 2 different cores. Then you have to master using the 7 of them together.
You program both cores in the same programming language though (C++ or whatever the compiler offers), so the fact that they run different code at the hardware level isn't anything the programmer is ever going to notice, or needs to worry about.
 
Guden Oden said:
You program both cores in the same programming language though (C++ or whatever the compiler offers), so the fact that they run different code at the hardware level isn't anything the programmer is ever going to notice, or needs to worry about.

Both cores are going to excel in different areas. Knowing which one is best at what, and how best to squeeze out the power, is going to take time. More time than learning a single core.
 
Man, you're stretching. The SPUs are what do the gruntwork in the PS3; that's where all the heavy code is going to go anyway.
 
Guden Oden said:
Man, you're stretching. The SPUs are what do the gruntwork in the PS3; that's where all the heavy code is going to go anyway.

I disagree. Both are going to be very important, and both need a lot of work put into them to get the best out of them (same with the Xbox 360).

What makes me laugh is I said the same thing about the Xbox 360, yet you came to the defense of the PS3. Very telling, very telling.
 
Again you're stretching! This time into fantasyland, I might add. I'm not coming to the defense of anything by saying it's obvious to pretty much everyone that the SPEs are where the gruntwork in the PS3 will be carried out, considering they account for 90% or so of the chip's computing resources.
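Rough peak-numbers arithmetic, for what it's worth (my figures, assuming the commonly quoted 3.2GHz clock): each SPE can issue a 4-wide fused multiply-add per cycle, which is about 25.6 GFLOPS single precision, so 7 SPEs come to roughly 179 GFLOPS against ~25.6 GFLOPS for the PPE's VMX unit - call it 85-90% of the chip's peak.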

And "very telling" of WHAT exactly, considering I'm going to preorder x360 as soon as it's even possible? :LOL: Don't be a silly person!
 
I'm not stretching at all. It's you that's stretching, unless you believe that developers just use the force to know what code will excel on which chip, how to push them to their utmost potential, and how to optimize the code to go over 7 cores or 3 cores.

As for "very telling": it's telling that you jump to the PS3's defense yet say nothing about the Xbox 360. If I was wrong you should be correcting me about both, not just about your favorite system, which is obvious from your postings.
 
Well.. from the furthest reaches of space, the 360's triple core is more similar in concept to the PC's (current) dual core than PS3's one core + 7 SPEs. *shrug*
 
Guden Oden said:
You program both cores in the same programming language though (C++ or whatever the compiler offers), so that they run different code on a hardware level isn't anything the programmer's ever going to notice, or needs to worry about.

If you're doing any serious performance work, the PPU and SPU are completely alien. People often focus on things like the ISA, which IMHO is largely irrelevant; as Knuth said, the only thing that matters is memory access, and the PPU and SPU are like chalk and cheese there.
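To make the "chalk and cheese" bit concrete: on the PPU you just dereference a pointer and let the cache hierarchy sort it out, while on an SPU nothing is touchable until you've explicitly DMA'd it into the 256KB local store. A very rough sketch using the SDK's MFC intrinsics - illustrative only, no double-buffering, no splitting of transfers over the 16KB DMA limit:

[code]
#include <cstdint>
#include <spu_mfcio.h>   // SPU-side only; provides the MFC DMA intrinsics

// PPU side: ordinary C++, the cache hierarchy handles memory behind your back.
float sum_ppu(const float* data, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += data[i];              // may miss all the way out to main memory, the code doesn't care
    return sum;
}

// SPU side: the same loop, but the data has to be pulled from main memory
// into the 256KB local store with an explicit DMA transfer first.
static float local_buf[4096] __attribute__((aligned(128)));   // DMA wants aligned buffers

float sum_spu(uint64_t ea, unsigned int n) {   // ea = effective address in main memory
    mfc_get(local_buf, ea, n * sizeof(float), 0, 0, 0);  // kick off DMA into local store, tag 0
    mfc_write_tag_mask(1 << 0);
    mfc_read_tag_status_all();                 // stall until the transfer completes
    float sum = 0.0f;
    for (unsigned int i = 0; i < n; ++i)
        sum += local_buf[i];                   // now it's a local-store access
    return sum;
}
[/code]

The data movement is part of the algorithm on the SPU, which is exactly why the two feel so alien to each other the moment you care about performance.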
 
pso said:
Well, if he's running code that was written for OoO processors on an in-order processor (Xenon), there's your problem. Of course OoO code is going to run faster on an OoO processor than on an in-order one.

Don't tell me Carmack is pulling an Anandtech....

John Carmack said:
"If you just take code designed for an x86 that's running on a pentium or an athlon or something, and you run it on either of the PPCs for these new consoles, it'll run at about half the speed of a modern state-of-the-art system, and that's because they're in-order processors; they're not out-of-order execution or speculative..."


man... I just don't have time to do a full transcript these days. :(
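To put the in-order point in that quote into code terms: typical PC-style code has loads feeding the very next instruction, and an Athlon or P4 hides those stalls by executing around them out of order, whereas an in-order PPC core just sits and waits unless you (or the compiler) arrange independent work to fill the gap. A contrived sketch of the kind of restructuring that implies - my example, not Carmack's:

[code]
#include <cstddef>

struct Node { float value; Node* next; };

// "PC-style" code: every iteration is a chain of dependent loads. An
// out-of-order core overlaps the cache misses; an in-order core pays
// each load-use stall in full.
float sum_list(const Node* n) {
    float sum = 0.0f;
    while (n) {
        sum += n->value;    // load, then immediately use it
        n = n->next;        // another dependent load before anything else can start
    }
    return sum;
}

// In-order-friendly version: flat data and several independent accumulators,
// so there is always other work to issue while a load is outstanding -
// the scheduling an OoO core would otherwise have found for you.
float sum_array(const float* v, std::size_t count) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    std::size_t i = 0;
    for (; i + 4 <= count; i += 4) {
        s0 += v[i + 0];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < count; ++i) s0 += v[i];
    return (s0 + s1) + (s2 + s3);
}
[/code]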
 
I don't always agree with JC's choices or opinions, but they are always valid and well thought out.

Nobody can deny that in-order multi-core programming will make games more expensive and more difficult to make. JC is just saying that maybe we jumped one generation too soon. From a technical point of view I disagree, but from a production point of view I'm not so sure. The problem isn't whether we can get better theoretical performance from multi-core architectures; it's whether we have the tools and staff to get near that theoretical performance.

It's fine for the JCs of the world - we can lap this stuff up in our sleep - but if we are only 5-10% of the programming staff, how good is the overall code base going to be?

JC isn't a lone coder anymore; he's the lead of a team, and that's his concern, not his personal skill set. I'm sure JC could sit down and write the most awesome game on PS3 without blinking an eye, but nobody buys games written by a single guy anymore.
 
These are great points, but we've been here before. The outcry with the PS2 was worse than this.

The thing is, people are still going to be coding on Cell and the like for PS3 in 4 or 5 years' time. When they're building the box, Sony has to decide whether to go for something conventional and "approachable", or something with almost purely performance in mind. They've always traded off "approachability" for performance, but I think that's an understandable strategy when you consider it's going into a box that has to last at least 5 years. Putting a regular Intel or AMD in there would date it very quickly, but as it is, Cell offers exceptional capability over them in some areas and will continue to for some time (Cell may have some shortcomings vs Intel/AMD, but the opposite is also very true, and arguably in areas more pertinent to games; in those areas, Cell will hold its own against evolving PCs for a lot longer than it would if they went with a regular Intel/AMD). Sony knows that developer experience will accumulate over time, and from title to title - the guys in teams who are not so good with Cell now... where will they be in 3 or 4 years' time? Possibly getting better bang out of the chip than they would have with a conventional, "approachable" one? That's the bet Sony is taking. And of course, some teams will hit the ground running better than others - I think your outlook on this depends very much on your background (as said before, polling PS2-centric devs may yield a sunnier picture). Unfortunately for John Carmack, very few in the console arena are playing the tune he's used to hearing anymore (at least not MS or Sony).

Carmack recognises this, I think. He says Sony is making hardware for the best devs, basically, for the devs that are going to make the investment to make their design justifiable. And this is true. But if that means the best devs aren't held back in terms of performance in order to "make things easier" for others, then that's an appreciable position to hold. And the "others" will develop experience over time, and the average will rise. This is the exact same strategy as with PS2, except more aggressive on one level, I think, but better executed at a nuts-and-bolts level - Cell, I don't think, is as scary to get started with as the EE was, yet relatively there's more headroom. And there won't be an 18-month-newer OoOE chip in a console coming along to show it up ;)

Some of his other comments wrap interestingly into this too - at least from what I'm reading; I'm still downloading the video. But his comments on physics/AI seem almost like a general white flag on CPUs, and are rather disappointing. A Q4/D3 level of AI/physics certainly isn't "enough". But graphics is Carmack's forte, so I guess it's understandable if he wants to try and keep the focus on that - it is his strong suit, and it probably doesn't pay him to have other areas become as important as graphics. But other devs will demonstrate the usefulness of those things beyond what id is doing, whether Carmack likes it or not (really, it already happened some time ago with games like HL2), and I think with next-gen systems some devs will put id to shame in those areas if they stand still. Funnily enough, these other areas - physics/AI - are of CPU interest ;)

edit - having started to watch the video, it seems as if Carmack will be developing his next project with consoles also in mind from the start, though not purely so. That may make things easier for him, versus porting a PC codebase as with something like Q4. Maybe he'll have different things to say in the next couple of years, once he stops having to fight the hardware with code that wasn't made for it. Then again, having to accommodate the PC, he may naturally approach things such that they fit ideally with PCs, and still not so ideally with consoles.
 
Titanio said:
He says Sony is making hardware for the best devs, basically, for the devs that are going to make the investment to make their design justifiable. And this is true. But if that means the best devs aren't held back in terms of performance in order to "make things easier" for others, then that's an appreciable position to hold. And the "others" will develop experience over time, and the average will rise.

I think a lot of them will just license UE3, which is quite sad because in 3 years' time that engine will be outdated. Hopefully by then enough developers will have come up with their own proprietary engines to tap into both consoles' resources, 'cause I'm not sure UE3 will be the best thing compared to proprietary engines built specifically for the two platforms.
 
london-boy said:
I think a lot of them will just license UE3, which is quite sad because in 3 years' time that engine will be outdated. Hopefully by then enough developers will have come up with their own proprietary engines to tap into both consoles' resources, 'cause I'm not sure UE3 will be the best thing compared to proprietary engines built specifically for the two platforms.

On the other hand, though, middleware can help disseminate the experience of the "best devs" and help bring that "average" up. UE3 certainly doesn't seem a bad way to start off on the next-gen systems, although I agree it will date. But I'm sure we'll see more middleware, hopefully. It may make sense for someone like Sony to get their best first-party devs to share tech, and perhaps to distribute an engine or engines from such devs. Then again, making a game engine and making a game engine that is usable more generally by others are two different things.
 
Please

jvd said:
As for "very telling": it's telling that you jump to the PS3's defense yet say nothing about the Xbox 360. If I was wrong you should be correcting me about both, not just about your favorite system, which is obvious from your postings.
Now, that was uncalled for, jvd. Useless and uncalled for, really...
 
DeanoC said:
JC isn't a lone coder anymore; he's the lead of a team, and that's his concern, not his personal skill set. I'm sure JC could sit down and write the most awesome game on PS3 without blinking an eye, but nobody buys games written by a single guy anymore.
Who is that nobody? The publisher or the consumer?
If it's the consumer, why should they care how many people worked on the code as long as the game is the "most awesome game on PS3"?

And personally, I have my doubts that JC would be the best person around to get the best results out of an embedded system like the PS3/X360. But that's just me.
 
london-boy said:
I think a lot of them will just license UE3, which is quite sad because in 3 years' time that engine will be outdated. Hopefully by then enough developers will have come up with their own proprietary engines to tap into both consoles' resources, 'cause I'm not sure UE3 will be the best thing compared to proprietary engines built specifically for the two platforms.

Epic will keep updating it, though. Look at the good games that have been made with the UE2/2.5 engine even at the end of its life. It still has a lot of functionality that can't be used because the hardware can't run it at a reasonable speed.

Massively reducing engine development work will enable devs to concentrate on making better games, especially given today's lead times for a triple-A title.
 
After listening to the video, I don't think a lot of the chopped quotes and so forth floating around fully or accurately represent what he was saying. I think it's better to have the full quotes, so here's the part of the speech centered on multicore console processors and physics/AI that seems to have generated some controversy. It seems a little more agreeable when you just listen to everything he's saying (and have all the "and there's some truth to that" qualifications kept in ;)):

Parallel programming when you do it like this is more difficult. And anything that makes the game development process more difficult is not a terribly good thing. So the decision that has to be made there is, is the performance benefit you get out of this worth the extra development time? And there's sort of an inclination to believe that, and there's some truth to it - Sony sort of takes this position where, "OK, so it's going to be difficult, maybe it's going to suck to do this, but the really good game developers will just suck it up and make it work." And there's some truth to that. There will be the developers that go ahead and have a miserable time and do get good performance out of some of these multi-core approaches - and Cell is worse than others in some respects here. But I do somewhat question whether we might have been better off in this generation having an OoO main processor rather than splitting it all up into these multicore processor systems on here. It's probably a good thing for us to be getting with the programme now. The first generation games for both platforms will not be anywhere close to taking advantage of all this extra capability. But maybe by the time the next generation consoles roll around, the developers will be a little bit more comfortable with all this and be able to get more benefit out of it. But it's not a problem that I actually think is going to have a solution; I think it's going to stay hard. I don't think there's going to be a silver bullet for parallel programming. There have been a lot of very smart people - researchers and so on - that have been working this problem for 20 years, and it doesn't really look any more promising than it was before.

So that was one thing I was pretty surprised by when talking to some of the IBM developers of the Cell processor. I think that they made, to some degree, a misstep in their analysis of what the performance would actually be good for, where one of them explicitly said, basically, "now that graphics is essentially done, what we have to be using this for is physics and AI." Those are the two poster children for how we're going to use more CPU power. But the contention that graphics is essentially done, I really think is way off base. First of all, you can just look at it from the standpoint of "are we delivering everything a graphics designer could possibly want to put into a game, with as high a quality as they could possibly want?" And the answer is no. We'd like to be able to do Lord of the Rings quality rendering in realtime. We've got orders of magnitude of performance that we can actually soak up in doing all of this. What I'm finding personally in my development now is that, with the interfaces we've got to the hardware and the level of programmability we've got, you can do really pretty close to whatever you want as a graphics programmer. But what you find, more so now than before, is that you get a clever idea for a graphics algorithm that will look really awesome and make a cool new feature for a game, you can go ahead and code it up and make it work, make it run on the graphics hardware, but too often I'm finding that, well, this works great, but it's half the speed that it needs to be, or a quarter of the speed, or I start thinking about something - "well, this would be really great, but that's going to be one tenth the speed of what we'd really like to have there." So I'm looking forward to another order of magnitude or two in graphics performance, because I'm absolutely confident we can use it. We can actually suck that performance up and do something that will deliver a better experience for people.

Which is, if you say, "well, here's 8 cores, or later it's going to be 64 cores or whatever - do some physics with this that's going to make a game better," or even worse, "do some AI that'll make the game better." The problem with both of those is that both fields have been much more bleeding edge than graphics has been, and to some degree that's exciting, where people in the games industry are doing very much cutting-edge work in many cases - it is THE industrial application for a lot of that research that goes on - but it's been tough to actually sit down and think about how we'll turn this into a real benefit for the game. Let's go ahead: how do we use these however many gigaflops of processing performance to try and do some clever AI that, you know, winds up using it fruitfully? And especially in AI, it's one of those cases where most of the stuff that happens, especially in single-player games, is much more of a director's view of things. It's not a matter of getting your enemies to think for themselves, it's a matter of getting them to do what the director wants, and putting the player in the situation you are envisaging in the game. Multiplayer-focussed games do have much more of a case - you do want better bot intelligence, which is more of a classic AI problem - but with the bulk of games still being single-player, it's not at all clear how you use incredible amounts of processing power to make a character do something that's going to make the gameplay experience better. I mean, I keep coming back to examples from the really early days of Doom, where we would have characters doing this incredibly crude logic that fits inside a page of C code or something, and characters are just kind of bobbing around doing stuff, and you get people playing the game believing that they have devious plans, and they're sneaking up on you, and they're lying in wait, and this is all just people taking these minor, minor cues and incorporating them in their heads into what they think is happening in the game. And the sad thing is, you could write incredibly complex code that does have monsters sneaking up on you, hiding behind corners, and it's not at all clear that that makes the gameplay better compared to some of these sort of happenstance things that happen with emergent behaviour. So until you get into cases where you think of games like The Sims or MMO games, where you really do want these sorts of autonomous-agent AIs running around doing things - but then that's not really even a client problem, that's more of a server problem, and that's not really where the multicore consumer CPUs are going to be a big help.

Now, physics is sort of the other poster child of what we're going to do with all this CPU power, and there's some truth to that. Certainly some of the things we've been doing on CPUs for the physics stuff have gotten a lot more intensive on the CPU, where we find that things like ragdoll physics and all these different objects moving around - which is one of these "raise the bar" issues, every game now has to do this - take a lot of power. And it makes balancing some of the game things more difficult when we're trying to crunch things to get our performance up, because the problem with physics is, it's not scaleable with levels of detail the way graphics are. Fundamentally, when you're rendering an image of a scene, you don't have to render everything to the same level. It'd be like forward texture mapping, which some old systems did manage to do, but essentially what we have in graphics is a nice situation where there's a large number of techniques we can fall back on to degrade gracefully. Physics doesn't give you that situation in the general case. If you're trying to do physical objects that affect gameplay, you need to simulate pretty much all of them all the time. You can't have cases where you start knocking some things over and you turn your back on it, and you stop updating the physics or even drop to some lower fidelity, where you get situations where you know that if you hit this and turn around and run away, they'll land a certain way, and if you watch them they'll land a different way. And that's a bad thing for game development. And this problem is fairly fundamental. If you try to use physics for a simulation that's going to impact the gameplay - things that are going to block passage and things like that - it's difficult to see how we're going to be able to add a level of richness to the physical simulation of the world like we have for graphics, without adding a whole lot more processing power. And it tends to reduce the robustness of the game, and bring on some other problems. So what winds up happening in the demos and things you'll tend to see on PS3 and the physics accelerator hardware is that you'll wind up seeing a lot of stuff that is effectively non-interactive physics. That's the safe, robust thing to do, but it's a little bit disappointing when people think about "I want to have this physical simulation of the world." It makes good graphics when you can do things like, instead of the smoke clouds clipping into the floor that we've seen for ages, you get smoke that pours around all the obstructions, you get liquid water that actually splashes and bounces out of pools and reflects on the ground - this is neat stuff, but it remains kind of non-core to the game experience. An argument can be made that we've essentially done that with graphics, where all of it is polish on top of a core game, and that's probably what will happen with the physics, but I don't expect any really radical changes in the gameplay experience from this. And I'm not really a physics simulation guy, so that's one of those things where a lot of people are like "damn this software for making us spend all this extra time on graphics," and I'm one of those people who's like "damn all this software for making us spend all this extra time on here."
But I realise things like the basic boxes falling down, knocking things off, bouncing around the world, ragdolls - that's all good stuff for the games. But I do think it's a mistake for people to go overboard and try to do a real simulation of the world, because it's a really hard problem, and you're not going to get really that much benefit to the actual gameplay out of it. You'll tend to make a game that may be fragile, may be slow, and you'd better have done some really, really neat things with your physics to make it worth all of that pain and suffering. And I know there are going to be some people looking at the processing stuff with the Cells and the multicore stuff and saying "well, this is what we've gotta do, the power is there, we should try and use it for this," but I think that we're probably going to be better served trying to just make sure all of the gameplay elements that we want to do, we can accomplish at a rapid rate, with respectably low variance in a lot of ways. Personally, I would rather see our next generation run at 60 frames per second on a console rather than add a bunch more physics stuff. I actually don't think we'll make it; I think we will be at 30fps on the consoles for most of what we're doing. Anyway, we're going to be soaking up a lot of CPU just for the normal housekeeping type of things we'll be doing.

I think he's been taken out of context and misrepresented with some of the reports and quotes going around. Some quick thoughts:

1) He did not say that IBM made a misstep with the Cell design, as is being reported by some; he takes issue with IBM's contention of how that power should be used. They say it should be used for physics and AI since graphics is "done". Carmack obviously disagrees. And if Carmack wanted to use any CPU's power for graphics, I think he'd be better off with Cell regardless. But I don't think he's saying he wants to do that. He was simply using that comment as a jumping-off point to assert the primary importance of graphics.

2) He also did not say that physics is unimportant or unnecessary for games, at least not in the way it was being portrayed. He's saying that if you take a pure simulation route, you're going to find it much more difficult to control what happens, and thus to ensure a good game experience. Some people earlier were making the point that physics can contribute to the eye candy, so why would Carmack think it wasn't important if he thinks graphics and presentation are important? He actually does say it can be used in that manner to make things look better. Physically based visualisation doesn't have to upset the apple cart as far as game design is concerned, and he points to that - liquid water physics, smoke that behaves realistically, etc. So tying physics to visuals is useful as far as he's concerned. But he does tend to make it seem less important than graphics alone, which is flatly contradictory IMO. He talks physics down to a degree as being mostly relegated to that - unless you're feeling lucky/ambitious - but graphics is "just" about presentation too, and he seems keen on that. He concedes that point, although he glosses over it quickly, and he does admit he's not a physical simulation guy. He also admits that this is something that requires a lot of power regardless of whether you go a pure simulation route or not. And of course, even if Carmack doesn't feel comfortable making physics a lynchpin of the gameplay, others may (and others arguably already have).
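On the simulation point, the constraint he describes - anything that can affect gameplay has to be stepped every tick whether or not anyone is looking at it, otherwise the outcome depends on where the camera happened to be pointing - is exactly why physics doesn't degrade gracefully the way rendering does. A minimal fixed-timestep sketch of that constraint (my illustration of the argument, nothing more):

[code]
#include <vector>

struct Body {
    float pos[3], vel[3];
    bool gameplay_relevant;   // can block a corridor, trip a trigger, etc.
    bool visible;             // in the view frustum this frame
};

static void integrate(Body& b, float dt) {     // trivial Euler step, stand-in for a real solver
    for (int i = 0; i < 3; ++i) b.pos[i] += b.vel[i] * dt;
}

// Rendering can skip or simplify whatever isn't visible; the simulation can't,
// or the world ends up in a different state depending on whether the player
// was watching - the "turn your back on it" problem.
void step_world(std::vector<Body>& bodies, float dt) {
    for (Body& b : bodies) {
        if (b.gameplay_relevant)
            integrate(b, dt);        // must always run, at full fidelity
        else if (b.visible)
            integrate(b, dt);        // purely cosmetic: smoke, splashes, debris
        // else: safe to freeze only because nothing in the game reads it back
    }
}
[/code]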

3) With regard to AI, I think he's right in that it's as much about what the player perceives as what the characters are actually doing. But purely directorial approaches just don't work, or at least his own examples don't. Doom3's "directed" AI was horrible IMO. Maybe he thinks most people don't notice, but I do, and I'm sure I'm not alone.
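For anyone who hasn't seen it spelled out, the "page of C code" style of monster logic he mentions in the transcript really is about this crude - a handful of states and a couple of cheap checks, and players read intent into it. A toy sketch (obviously not id's actual code):

[code]
#include <cmath>

enum class MonsterState { Idle, Chase, Attack };

struct Vec3 { float x, y, z; };

struct Monster {
    Vec3 pos{};
    MonsterState state = MonsterState::Idle;
};

static float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One "think" per tick. has_line_of_sight is whatever cheap visibility test
// the engine already runs; everything else is a few comparisons.
void monster_think(Monster& m, const Vec3& player, bool has_line_of_sight, float dt) {
    switch (m.state) {
    case MonsterState::Idle:
        if (has_line_of_sight) m.state = MonsterState::Chase;   // reads as "it spotted me!"
        break;
    case MonsterState::Chase: {
        float d = dist(m.pos, player);
        if (d > 0.01f) {                          // shamble straight at the player
            m.pos.x += (player.x - m.pos.x) / d * 3.0f * dt;
            m.pos.z += (player.z - m.pos.z) / d * 3.0f * dt;
        }
        if (d < 2.0f) m.state = MonsterState::Attack;
        break;
    }
    case MonsterState::Attack:
        // swing; if the player backs off, go back to chasing - reads as "it's hunting me"
        if (dist(m.pos, player) > 3.0f) m.state = MonsterState::Chase;
        break;
    }
}
[/code]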

4) His comment about perhaps being in a better position next-gen with multi-core etc. is of course true, but if the current systems were all OoO as he ponders, that wouldn't be the case. You gotta start sometime.
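And "getting with the programme", as he puts it, mostly means in practice carving frame work into independent chunks by hand and making sure they don't share data. Something like the sketch below - plain std::thread here just to show the shape of it; on the actual consoles you'd be handing jobs to hardware threads or SPEs instead:

[code]
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Update one slice of objects. The hard part isn't this function - it's
// guaranteeing that slices never touch each other's data, or anything
// shared, without synchronisation.
void update_slice(std::vector<float>& pos, const std::vector<float>& vel,
                  std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        pos[i] += vel[i] * dt;
}

void update_all(std::vector<float>& pos, const std::vector<float>& vel,
                float dt, unsigned num_cores) {
    std::vector<std::thread> workers;
    const std::size_t n = pos.size();
    const std::size_t chunk = (n + num_cores - 1) / num_cores;

    for (unsigned c = 0; c < num_cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end = std::min(n, begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(update_slice, std::ref(pos), std::cref(vel), begin, end, dt);
    }
    for (auto& t : workers) t.join();   // sync point: the frame waits for every slice
}
[/code]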

I also thought his comments on HD were interesting. He flat-out said that while enforced minimum resolutions may be OK for now, with Quake 4 etc., with his next-gen rendering tech he'd prefer to do more complex per-pixel rendering at a lower resolution rather than having to cut that back to hit a higher resolution. It'll be interesting to see whether Sony enforces minimum resolutions or not. Nintendo might also get some credibility out of that as well ;)
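Rough numbers on that trade-off, for context (mine, not his): 1280x720 is about 921k pixels versus roughly 307k at 640x480, so rendering at the lower resolution buys roughly 3x the per-pixel shading budget for the same fill cost - which is the headroom he'd rather spend on the fancier per-pixel work.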
 