Carmack's Hands On Impressions of Xbox 360 & PS3

I don't think I agree with that completely... I look at it as more of an engineering time & budget problem. A Core 2 Solo wouldn't be very big if it existed, but IBM never created a CPU core with an IPC comparable to that, and if you look at the direction they're taking with the Power6, possibly never will.
That's not the issue; the issue is: how do I fit, on 200 mm² of 90 nm silicon, something that can genuinely approach 200 gigaflop/s (or integer ops, for what it's worth)?
IPC on the SPUs is very good (approaching 2) on optimized code, and the key factor here is: this stuff is not going to change for 5 or more years, so devs will optimize the hell out of it.
We could have got a blazingly fast Intel CPU with 2 cores, or even 4, and then we would have been able to code for it without going crazy and we would all be happy... and we would be outperformed by next year's PC.
Try to guess how many Core 2 Solo cores you would need to implement a modern game's post-processing rendering pipeline (and how much cache bandwidth you would need while the other cores are supposedly trying to run some other code at more than an instruction-per-minute rate ;) ), and how many SPUs you would need to do the same job.
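To make the "try to guess" concrete, here is a back-of-envelope sketch of what a CPU-side post-processing pipeline costs. Every concrete figure here (720p, 60 fps, 4 passes, 50 ops and 16 bytes per pixel per pass) is an illustrative assumption, not a measurement from any real engine:

```cpp
#include <cstdio>

// ALU cost: pixels per second, times passes, times assumed ops per pixel.
double gflops_needed(double w, double h, double fps, double passes,
                     double ops_per_pixel) {
    return w * h * fps * passes * ops_per_pixel / 1e9;
}

// Bandwidth cost: pixels per second, times passes, times assumed bytes
// moved per pixel (read + write of an RGBA target).
double gbytes_needed(double w, double h, double fps, double passes,
                     double bytes_per_pixel) {
    return w * h * fps * passes * bytes_per_pixel / 1e9;
}
```

With the assumed numbers this works out to roughly 11 GFLOP/s of ALU and 3.5 GB/s of bandwidth just for post-processing, which gives a feel for why a handful of general-purpose cores gets eaten quickly while a few SPUs (each with its own local store) fit the job more comfortably.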
I'm sure a lot of devs would answer something like: "but we don't do post-processing effects on the CPU, or we don't tessellate geometry on the CPU, or we don't trim every single non-visible primitive at sub-pixel level on the CPU, or we don't implement software rasterizers to compute dynamically occluded geometry on the CPU, or we don't generate IBL data or do ambient occlusion at run time with the CPU, etc. etc. ...and for sure we don't do all these things at the same time because we don't have the bandwidth and/or the computational resources..." even in a closed-system environment.
In the end it's all a matter of tradeoffs. If you sell a console, you need to sell it for several years to make a profit, and if you think other platforms making yours look outdated will decrease your profits, you have to do something about it. This is a game that has to be played for 5, 6 or even more years, and you need to be competitive for a long, long time.
A lot of assumptions here, many of them probably wrong (see Wii..), so yeah... I should probably shut up :) (linking complete, back to more productive work ;) )
 
The thing is that the waxy look jumps right out and blows any kind of illusion whatsoever, whilst something less "specular-ish" (think of what HL2 tried to do; that was fairly good looking, though simple, and the illusion was better IMHO than either D3's or Far Cry's Madame Tussauds approach) is considerably better.

Skin is highly specular; it's one of its main properties (after translucency). Getting the size, intensity and color of the highlight right is not easy, and it also requires bump mapping with the proper intensity and patterns to scatter the highlight. Also, translucency softens the bump mapping effect on the diffuse component by bleeding light into shaded areas, while the speculars remain strongly scattered. Remove translucency and the correct amount of bump looks too strong.
HDR reflections should help, too, some games may make use of that.
It's also important how you light the characters; a strong light from the camera direction tends to make people look ugly in real life too (think about using a flash with your camera up close to someone).

There are also some titles that will approximate (well, fake) SSS, and that should provide some improvement.
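One common, cheap way to fake SSS on the diffuse term is "wrap lighting": instead of clamping N·L at zero, the falloff is wrapped past the terminator so light appears to bleed into shaded areas, which is exactly the translucency softening described above. This is a generic technique sketch, not what any specific title mentioned here actually ships:

```cpp
#include <algorithm>

// Wrap lighting: a cheap approximation of subsurface scattering on the
// diffuse term. 'wrap' is in [0,1]: 0 gives standard clamped Lambert,
// higher values give a softer, waxier falloff past the terminator.
double wrapped_diffuse(double n_dot_l, double wrap) {
    return std::max(0.0, (n_dot_l + wrap) / (1.0 + wrap));
}
```

With `wrap = 0` a surface facing away from the light (N·L < 0) gets nothing; with `wrap = 0.5` it still receives some diffuse light, softening the shadow edge the way real skin does.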

Yeah, SSS is still kinda slow, especially if you want to avoid noise and flickering artifacts. I'd expect the next gen consoles to handle it to an extent, though.

By the way id's characters looked quite good to me actually, and I'm sure they'll continue to tweak them until the release. I'd say it's the lack of SSS that disturbs people here, which is a part of the diffuse component.

(HL2 faces didn't look so good to me, with noisy, blurry photo-textures and a total lack of specularity and bump mapping... far too smooth.)
 
The 84 MB OS memory reservation is not entirely used by the OS; only a portion of it is. The rest is unused, reserved for future uses and optimizations.
Sony decided to keep overhead in the OS memory budget should they decide to run more features concurrently with games. Right now there are not many programs running in the background.
If not many features are introduced in the future, the OS reservation will be reduced and future games will utilize it.
The 360's OS memory reservation is short-sighted by comparison. They cannot introduce more features once the 32 MB fills up later in the console's life, and if they increase the size of the OS memory, older games won't work.
One apparent problem the 360 has right now is that when you want to access the blades while in game, the game stutters a lot before the blades appear.
You do realize, right, that about 15 years ago you could run the whole of Windows 3.11 + Microsoft Word, a development environment, and mail _at the same time_ in 4MB of RAM and 4 MB of swap (for 8MB total). Please. They could keep adding features until the cows come home, and as long as they're not heavily texture based, there'll be no problems with memory.

The reason the UI is a little chuggy in the 360 has nothing to do with the memory, it has to do with the fact that they only allocate 5% of cores 2 and 3 to the system. Audio mixing and encoding, background downloading, and all in game system UI happens in that 3.33% of total system power. Of course, they could have allocated more, but then that power would not be available for games.

Sony allocates an entire SPE, and I believe a percentage of the PPE. That's 14% of the available SPE power, plus some unknown amount of PPE power unavailable to games, just to make the UI a little smoother.
(Edit: 2nd SPE allocation is apparently incorrect, sorry, irrespective, it's a fair chunk of overall system power)
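The percentages being thrown around here can be sanity-checked with simple arithmetic (assuming 3 Xenon cores and 7 usable SPEs on a retail PS3, as discussed in this thread):

```cpp
// 5% of cores 2 and 3 reserved, out of 3 cores total.
double x360_reserved_fraction() {
    const double cores = 3.0;
    const double reserved = 2.0 * 0.05;   // 0.10 core-equivalents
    return reserved / cores;              // ~3.33% of total CPU
}

// One SPE reserved for the OS, out of 7 usable SPEs.
double ps3_spe_reserved_fraction() {
    const double spes = 7.0;
    return 1.0 / spes;                    // ~14.3% of SPE power
}
```

That recovers both the "3.33%" figure for the 360 and the "14%" figure for the PS3 quoted above.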
 
You do realize, right, that about 15 years ago you could run the whole of Windows 3.11 + Microsoft Word, a development environment, and mail _at the same time_ in 4MB of RAM and 4 MB of swap (for 8MB total). Please. They could keep adding features until the cows come home, and as long as they're not heavily texture based, there'll be no problems with memory.

The reason the UI is a little chuggy in the 360 has nothing to do with the memory, it has to do with the fact that they only allocate 5% of cores 2 and 3 to the system. Audio mixing and encoding, background downloading, and all in game system UI happens in that 3.33% of total system power. Of course, they could have allocated more, but then that power would not be available for games.

...and GEOS ran on a 64 KB C64 and you could do Word-like documents with that as well, without even using a hard disk - but that's not the point; welcome to the "multi-media" age.

Sony allocates an entire SPE, has options on a second SPE, and I believe a percentage of the PPE. That's 14% of the available SPE power, plus some unknown amount of PPE power unavailable to games, and another 14% that could be made unavailable at any time, just to make the UI a little smoother.

They cannot allocate a 2nd SPE (nor the PPU); that's total nonsense. Otherwise no "old" game would run anymore, and they simply cannot afford that. They made a pretty smart move here, same as with the PSP: keeping options to free up more processing power in the future. That's the way to look at it.
 
I really cannot understand what all the fuss is about...
The man said, (at least as far as I can tell) that the PS3 is harder to develop for.
Isn't that something that everybody generally agrees upon?

My opinion is that John Carmack will say exactly what comes first into his mind, without caring how it will sound or how it will be interpreted... And that is because he can afford to do it.
Although I own both consoles, I have to admit that I favour the PS3, maybe because some of my favourite games were on the playstation brand.
But to say that Carmack is on M$'s payroll would be really far-fetched...
 
Yes, the Wii is outselling everything out there, but do we really care about it, again, from a technological standpoint? I don't, most of you don't either (given that this is a technology-oriented forum), and neither does Carmack.

Hate to look like I'm arguing for the argument's sake, but... :) He actually said that he likes the I/O innovation part of it a lot. Let's face it, people have been developing for the same keyboard/mouse combo or controller for almost two decades now. And even the analog sticks came from Nintendo as I remember. And on the other end we still have static, flat 2D displays..
 
all the argument really comes down to is whether Carmack is prepared to develop for the Cell.

Carmack wants to develop higher-level solutions to interesting graphical problems, like texture and geometry virtualization in this case. It is a very different kind of programming from low-level bit hunting; that may have its magic too, but he did enough of it a decade ago on Quake and other projects.
Personally, I'd rather have him work on high-level problems instead of parallelizing the hell out of some average engine code...
 
But let's moderate that statement: we know that we won't have any incredibly fast single-core machine. But what about being OoOE or not? Isn't this also a significant part of this? That console CPUs are in-order, and the AMD/Intel CPUs are not?

OoOE circuitry takes up more than half of a typical modern CPU. Consoles traded this space for more execution units and left programmers responsible for writing efficient code. Adding OoOE to the PS3 or X360 would cut their potential in half, in exchange for easier development...
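A small illustration of the kind of work this trade pushes onto the programmer. A dependent chain stalls an in-order core on every step, while an OoO core would reorder around the stalls automatically; the manual fix is to interleave independent work by hand:

```cpp
// Each add depends on the previous one: on an in-order core the pipeline
// stalls waiting for every result. An OoO core hides this automatically.
double sum_naive(const double* a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += a[i];
    return s;
}

// Manual scheduling for an in-order core: two independent accumulator
// chains let the second add issue while the first is still in flight.
double sum_interleaved(const double* a, int n) {
    double s0 = 0.0, s1 = 0.0;
    int i = 0;
    for (; i + 1 < n; i += 2) {
        s0 += a[i];
        s1 += a[i + 1];
    }
    if (i < n) s0 += a[i];   // odd leftover element
    return s0 + s1;
}
```

Both functions compute the same sum; the second is simply shaped so an in-order pipeline can keep busy. Multiply this sort of hand-scheduling across a whole engine and you get a feel for the "programmers are responsible for efficiency" trade.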
 
Carmack's statement was something like this:

from memory said:
The PS3 version will be equivalent to the others, in some regards better, but it will have consumed much more effort than the others.

Now, I think no one in their sane mind would question the second part - although this thread shows that there are many people, even on this forum, who flush their sane minds down the toilet the moment they see a SONY nametag on a piece of equipment.

What I'm curious about is in what ways the PS3 version will be better, given what we know about his tech. The obvious and trivial thing of having one disc instead of many is a given; what use could his megatexturing setup find for the greater CPU power?
 
It takes more than just a "worker thread" style to optimize for PS3. The memory layout, who (SPU/PPU) does what and how much, the way data is encoded and grouped to hide latency manually, etc., will all be affected non-trivially.

If JC is "only" prepared to look at worker threads, then there is still much room for improvement. Then again, code and opinions do change. We may need to give JC and Olick some more time to sort things out. There is only so much one person can do after the fact. A PS3-optimized version will take time and extra resources regardless of whether JC is for or against Cell's design principles.
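To illustrate the "hide latency manually" point: SPU code typically double-buffers its working set, so the DMA engine fetches the next chunk of data while the current one is being processed. This is a generic sketch of the pattern only; `dma_get`/`dma_wait` are hypothetical stand-ins for the real Cell DMA intrinsics (here they just copy synchronously so the sketch runs), and the kernel is a placeholder:

```cpp
#include <cstddef>

// HYPOTHETICAL stand-ins for real DMA intrinsics: a synchronous copy and
// a no-op wait, so the double-buffering structure can run anywhere.
static void dma_get(float* dst, const float* src, std::size_t n, int /*tag*/) {
    for (std::size_t i = 0; i < n; ++i) dst[i] = src[i];
}
static void dma_wait(int /*tag*/) {}  // real code blocks on the tag here

const std::size_t CHUNK = 1024;       // fits easily in a 256 KB local store

static void process(float* buf, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) buf[i] *= 2.0f;  // placeholder kernel
}

// Double buffering: kick off the transfer for chunk c+1, then process
// chunk c. On a real SPU the transfer and the compute overlap, hiding
// main-memory latency behind the work.
void run(const float* input, float* output, std::size_t total) {
    float local[2][CHUNK];
    std::size_t chunks = total / CHUNK;
    dma_get(local[0], input, CHUNK, 0);            // prefetch first chunk
    for (std::size_t c = 0; c < chunks; ++c) {
        if (c + 1 < chunks)
            dma_get(local[(c + 1) & 1], input + (c + 1) * CHUNK,
                    CHUNK, static_cast<int>((c + 1) & 1));
        dma_wait(static_cast<int>(c & 1));         // wait for current chunk
        process(local[c & 1], CHUNK);
        for (std::size_t i = 0; i < CHUNK; ++i)    // write-back (also DMA'd
            output[c * CHUNK + i] = local[c & 1][i]; // on a real SPU)
    }
}
```

The point of the sketch is structural: the data layout, chunk size, and transfer scheduling are all the programmer's problem, which is exactly why porting an engine to this model is more than swapping in worker threads.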
Well, he hasn't really gone into any detail on that. So we really don't know what they're doing, either way.
 
Well, Carmack himself has stated many a time that he prefers developing for the 360, and is not a fan of the choices Sony's made. So I don't think you'll find anyone disagreeing with you there. And with your acknowledgment that Carmack could in fact program effectively for the Cell, then all the argument really comes down to is whether Carmack is prepared to develop for the Cell. So you're questioning his commitment, and I don't see what grounds you have to do that.

My acknowledgment as to whether Carmack could "in fact program effectively for the Cell" is irrelevant, because it was never a question to begin with. It would seem you came up with this line of questioning all by yourself. I merely said he couldn't care less for the Cell architecture. And to lower your batting average even further, I'm also going to disagree that I was questioning his commitment to developing for the Cell. After all, he's so committed that he even hired one of the more talented PS3 developers available - I consider that a commitment in line with the best interests of id.

The issue is whether he contemplated doing it (not whether he actually did it), and in his keynote he did just that. I mean all we have are Carmack's actual words, so what grounds are there for superimposing your beliefs over what Carmack actually said?

Yes, I'm sure he really contemplated rewriting his entire engine to make it more data-centric just so it would run well on the PS3. Let's be realistic here - there was no way he was ever going to radically rewrite his engine code. If there was any contemplation, it was about the reasons for not doing it rather than doing it. Once again, I'm not superimposing my beliefs, but rather looking at this herculean task from a realistic perspective.


To get the PS3 up-to-par with the other two platforms, running at 60 FPS? Carmack has told us that a lot more effort has been put into the PS3 port, but yes we don't know how they're managing the SPEs and what role Carmack played in this and what the PS3 specialist is being used for, and that means that any claim about Carmack's role is pure speculation. He's told people that he's prepared to look at a 'worker thread' style system for the PS3, so it seems rather arbitrary to me to decide that actually he isn't, or actually he doesn't want to put the required work in, or actually he'd never commit to the required programming model.

Well, anytime you need to re-architect an entire graphics engine it's going to take a lot more effort - this is pretty obvious. But it's something that's done once, if done correctly, and optimized thereafter. As for speculation on how the engine was modified: since the PS3 developer was an EDGE contributor, I would hazard a guess that the same techniques, or a variation thereof, were applied to the PS3 id Tech 5 engine. I'm not trying to discredit John Carmack, but his expertise and time are probably best spent elsewhere rather than on PS3 optimization.
 
OoOE circuitry takes up more than half of a typical modern CPU. Consoles traded this space for more execution units and left programmers responsible for writing efficient code. Adding OoOE to the PS3 or X360 would cut their potential in half, in exchange for easier development...


Is it really half - including cache? When comparing to similar-sized (transistor-wise) x86 processors, I didn't think Xenon had close to twice the execution units. A single-core Athlon 64 with 1 MB cache is about the most comparable.

Also, surely OoOE adds something to performance and isn't there purely for ease of programming. It seems to me that even with heavy optimisation, there are some things you simply can't do at coding time on an in-order core that an OoOE core can do automatically at execution time to get your code running more efficiently.
 
It's been a while since I've got down to the details of it, but as far as I can tell the actual figure is about half the die, and probably even more than that. Reordering instructions, renaming registers, predicting branches, etc. - there's a lot of logic there.
 
Uh, you need to go back and read what Carmack said again. He's most frustrated by the PS3's split memory (and how much memory is used by the OS), not Cell. Your whole point is completely off track (as Jawed also pointed out).
Actually, Carmack has been criticizing multi-core architectures in general. He clearly said that he prefers a fast single processor.
 
Well, he hasn't really gone into any detail on that. So we really don't know what they're doing, either way.

:) You seem to be saying this thread is pointless since we don't have enough context to weigh his feedback either way to begin with ? e.g., He could have done X but didn't do Y, or he may not have started on X even, or he may have accomplished everything he wanted but couldn't achieve the results he desired, etc. etc.

I'm outta here. I'm going to wait for the final release and more concrete info.
 
But let's moderate that statement: we know that we won't have any incredibly fast single-core machine. But what about being OoOE or not? Isn't this also a significant part of this?
If you are talking about Carmack's dislike of multi-core, it's certainly not significant, if it plays any part at all. Unlike multiple cores or dedicated local memory, OoOE almost never affects the algorithm.
That console CPUs are in-order, and the AMD/Intel CPUs are not?
True, for execution.
A bit off topic, but do you think we would have been better off with a quad-Conroe than a Cell?
Carmack would ;)

But to say that Carmack is on M$'s payroll would be really far-fetched...
Whoever said that is not familiar with open source.
People don't seem to realize how much id is losing by insisting on open-sourcing their work.
 
Personally, I'd rather have him work on high-level problems instead of parallelizing the hell out of some average engine code...
Some average engine code does not need 7 SPUs to run, believe me, even if your SPU code is as crappy as the crappiest code ever written :)
I'm sure there are plenty of high-level problems that could be solved/addressed using a lot of computational power in some smart way, and that computational power is available right now if you're willing to write specific code for it.
At the same time, if your code has to run on 4 platforms then yeah, I agree with him - who gives a damn about asymmetric cores? I wouldn't... because I would never be able to use that computational power in the first place, and it would be sitting there, idle, making my life a lot harder just to achieve what the other platforms can do without requiring the same effort.
Rage, for example, looks remarkably good for a 60 fps multiplatform title, but I'd like to remind everyone that years ago id was stunning us with things we had never seen before in real time; now there are people who think Rage doesn't even look that good compared to what we've already seen in other multiplatform games (COD4?).
 
One of the presentation videos has the id guy talking about the post-processing stuff and how it was implemented only a very short time ago as an "overnight" feature. I guess Tech5 is far from complete, especially if you factor in the year they spent on a completely different game project that they scrapped.

In some ways this suggests that both the engine and the content will mature before release; but it will also face more serious competition in that time, if we already have COD4 releasing this year.
That game still has to be the surprise of the year... just where did it come from?
 
I don't know any developer who does not share Carmack's point of view; we would all prefer to work on an incredibly fast single-core machine. That said, these incredibly fast single-core machines don't exist and won't for the foreseeable future, and it seems to me that a lot of devs got this straight in their minds a long, long time ago.
Replace the 360 and PS3 CPUs with an Intel or AMD OoOE core and we would now have last year's PC as a next-gen console; saying that the transition was made too early is not, IMHO, an extremely smart thing to say, at least from a technological standpoint.
Yes, the Wii is outselling everything out there, but do we really care about it, again, from a technological standpoint? I don't, most of you don't either (given that this is a technology-oriented forum), and neither does Carmack.

This is really what has bothered me about Carmack's (and Newell's) comments over the last couple of years. Parallelism is hard, it's annoying, it's fraught with peril. What use is there in complaining about it, though? It's the fastest way forward, and in some ways I think it's a good thing: it forces you to think about your problem differently, and you'll be a better programmer because of it. I think long term this will benefit the industry, even if it means spending the first few years relearning how to solve the same problems. I suppose that is little comfort when trying to develop the next blockbuster game, though. Everyone on the team will have to be good, not just the lead.

Nite_Hawk
 
They cannot allocate a 2nd SPE (nor the PPU); that's total nonsense. Otherwise no "old" game would run anymore, and they simply cannot afford that. They made a pretty smart move here, same as with the PSP: keeping options to free up more processing power in the future. That's the way to look at it.
I stand corrected on the second SPE. However, I find it hard to believe they could run the system code purely on an SPE without any PPU involvement, so I still firmly believe they reserve some percentage of the PPU.
 