JC Keynote talks consoles

Acert93 said:
Note how it was questioned how much work/experience he has with consoles and non-x86 hardware (like PPC/Mac). Not only did I give a lot of information on this, but note that the keynotes contain the same info for the most part!:



Yet people make silly comments THAT ARE ANSWERED IN THE KEYNOTES and ask questions about his keynotes without first knowing what is fully in them. And if he was not clear enough, JC believes that the PS3 and Xbox 360 are:

You keep repeating this, and yet it still isn't true. John Carmack has not made any console game engine himself in the last 10 years. How can his opinion on their performance have much weight?

Even more, when he states that when looking at these new CPUs you have to divide their power by half because they do not perform well with native OoO code, something has got to be wrong here.

I will wait to see what devs that make games like MGS, Gran Turismo, Ninja Gaiden, Resident Evil, Halo, etc. can do before I put limitations on what's going to happen and what's not going to happen. I think that's fair.
 
There is a reason that there is no explicit form of OOe code, or rather no method of creating it...if it did exist it would go against the very purpose of OOe.

However, there is a method or rather a set of practices one should keep in mind when dealing with in-order processors. Code must be structured so that in-order execution's limitations can be minimized or avoided altogether.

With this said, code that would run just fine on an OOe processor (because OOe is meant to handle all sorts of code by virtue of dynamic execution) can and most likely will not run so well on an IO processor, because it is unlikely, as a matter of chance, that the code will just avoid the pitfalls that are detrimental to the performance of an IO processor.

This is why IMO it is still significant to note the difference between typical x86 code, where OOe processors are relied upon (intentionally or unintentionally) to extract ILP and generally keep throughput high at the instruction level, running unchanged on an IO processor where no mechanisms are in place to do this, as contrasted with code that is optimized for IO execution.

The fact of the matter is that IO processors require special care, but with that "necessary" care they can perform exceptionally.
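To make that "necessary care" a bit more concrete, here's one such practice as a minimal, purely illustrative sketch (the function and its names are made up, not from any real codebase): hoist the next iteration's load by hand so an in-order core has useful work to overlap with the outstanding load instead of stalling on it.

Code:
/* Illustrative only: software-pipelining a trivial loop by hand.
 * 'scale', 'dst', 'src', etc. are made-up names for this sketch. */
void scale(float *dst, const float *src, float k, int n)
{
    float cur = src[0];                 /* assumes n >= 1 */
    for (int i = 0; i < n - 1; ++i) {
        float next = src[i + 1];        /* start the next load early...         */
        dst[i] = cur * k;               /* ...and do useful work while it lands */
        cur = next;
    }
    dst[n - 1] = cur * k;
}

An OOe core would find that overlap on its own at runtime; on an IO core either the compiler or the programmer has to spell it out like this.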

Now...

I feel some think along the same lines as I do and feel JC really doesn't frame these CPUs in the right light. Saying that these CPUs are about half as fast with x86 code as x86 processors with OOe seemed like stating the obvious. So one is compelled to ask why THIS was noted. Look...JC could've simply wanted to show these processors still won't cook you breakfast in the morning...and this I wouldn't disagree with. JC may have wanted to fight the good fight for the PC realm in the face of so many claims that these consoles are simply going to OWN PCs. JC could also have been slyly providing support for Gabe Newell's comments about tossing away your code...as you would at least have to rework PC-centric code for these IO processors.

Just a personal thought...I would imagine console devs have to toss code just about every time the next round of consoles lands, but since it's only been like what...the GCN and Xbox1 with OOe processors...I'd imagine things are not so bleak for the console folk, as they are accustomed to, and accomplished at, having to deal with these issues. I used the word bleak intentionally.

Given the omission or absence of the things these processors will be good at, or potentially what they could do with optimized code...I suppose he did mention in passing that "miserable" devs will make these processors do wonderful things....I think it's fair for some, like me, to feel JC was down on the next gen systems' CPUs a bit. I mean...JC was rather verbose with his minor quibbles...quite.

Please believe I can see how, and respect that, others get a totally different vibe from JC's keynote address. I wrote all this to say that it's not like there is no justification for disagreement with JC's comments. If anyone is wondering, I disagree with a lot of what JC said, but behind a lot of those little quibbles there were valid points that neither I nor most would disagree with at all.

...wait....

Ok I am prepared...you may fire when ready.
 
Almasy said:
You keep repeating this, and yet it still isn´t true. John Carmack has not made any console game engine himself in the last 10 years. How can his opinion on their performance have much weight?

Go back and read my quote of JC from his keynotes. He specifically notes working on the SNES and numerous consoles since then. I already mentioned numerous games from id that appeared on the Xbox, PS2, PS, N64, Dreamcast, 32X, etc., plus game engines for a lot of titles like CoD, MoH, etc., plus porting his software to the Mac (PPC... someone was questioning if he had any PPC experience). I think it is safe to say that at some point in time he had to look at the consoles. And in fact, that is what he stated (which I already quoted). Typically a company does not just drop an engine on someone with no support--especially when it is their IP at stake.

Either John was lying about working on numerous consoles since the time he did stuff on the SNES, or you are wrong. Anyhow, he claims to have worked on them and that is enough for most of us to accept it at face value.

And as other developers here in this thread noted, his obsession with working at the lowest level on hardware puts him in a good position to comment on the actual power of the hardware. Overall he is positive and notes that those who spend the time and have the skill can do some great things.

So far ERP, Shoot, Gabe, and a lot of other devs are backing him up.

Yet you are saying he is spreading FUD, is lazy/inept, and has become complacent. Is this true of these devs as well?

Btw, what makes you qualified to make such an assessment of these men? I asked earlier why you can say he is unqualified, yet you have engaged in console debates here (in this very thread!) with strong positions on the consoles!! Are you a programmer working on a next gen console? Are you basing your judgements of John based on Sony/MS PR?

I very sincerely doubt both Sony and MS would bother with multicore CPUs if they could be dwarfed by a current single core CPU. The laziness (or ineptitude) is on someone's side, and I doubt it's on the console makers'.

You create an "Either / Or" construct but that is not the reality. How about "Neither"?

You are assuming that MS/Sony chose CPUs based on what was best for developers, not necessarily what was best from a hardware/PR perspective. As Hannibal noted, the CPUs were chosen in a way that puts more work on the DEVELOPER. It was a conscious decision on MS's part (Ars Technica's Xenon article). MS saves money on the hardware (but passes it on to the devs). And as John noted, things that make development harder are not good for devs. You forget that costs are going up already; toss in more art assets plus now dealing with stripped down in-order-execution parallel designs, and you have asked devs to do a lot.

From the MS perspective there were other factors:

1. The x86 IP would not be licensable for integration, and therefore would cost a lot of money in the long term yet again.

2. Going with a single, high-frequency chip would be a problem due to heat/power limitations and the fact that 4GHz is about the current wall (they are already clocked at 3.2GHz).

3. They had to go multi-core at some point. It is the only option in the long term. Why not now?

4. MS knew they would need more floating point performance to not only compete with Sony but also to accomplish tasks like procedural synthesis of geometry (one of the specific 360 design features).

And there were other reasons, but we can see that MS's decisions were financial (IP licensing), heat/power related, and driven by the need to give up some features (like OoO) for others (like beefier FP units). Not all those reasons are dev oriented--unlike Xbox1.

But as John noted the tools/skillset is not there yet for working well with parallel architectures. This is a problem they have been working on for decades and there is no easy solution.

Anyhow, if you did not miss it, overall John LIKES the MS design. Just because he has some reservations and complaints does not mean he did not like it. He considers it on par with a high end PC, called it "wonderful", praised the tools, and is even building his next engine on it. While he did not like MS's more closed platform approach or the fact they are demanding 720p, overall he said it himself: He was happy with MS's console, so happy id's next game is being developed on it.

I am still not sure how a couple of fair criticisms, in his opinion (and validated by other devs in this thread), turn into this massive "bash John" thread.

What, are we going to bash ERP, Game, and Shoot because they agree with much of what he said?
 
good post scificube


I like the way you look at this specifically...


scificube said:
snip...

... So one is compelled to ask why THIS was noted. Look...JC could've simply wanted to show these processors still won't cook you breakfast in the morning...and this I wouldn't disagree with. JC may have wanted to fight the good fight for the PC realm in the face of so many claims that these consoles are simply going to OWN PCs. JC could also have been slyly providing support for Gabe Newell's comments about tossing away your code...as you would at least have to rework PC-centric code for these IO processors. ....snip.
 
Acert93 said:
Please believe I can see how, and respect that, others get a totally different vibe from JC's keynote address. I wrote all this to say that it's not like there is no justification for disagreement with JC's comments. If anyone is wondering, I disagree with a lot of what JC said, but behind a lot of those little quibbles there were valid points that neither I nor most would disagree with at all.

I have felt this way for the last couple of days. I feel 100% like what you quote above. That's what I've been trying to say the whole time. I disagree with some of the things that he says, but understand that he made plenty of valid points.
 
mckmas8808 said:
I have felt this way for the last couple of days. I feel 100% like what you quote above. That's what I've been trying to say the whole time. I disagree with some of the things that he says, but understand that he made plenty of valid points.

I mean unless you're totally dense or just want to give JC a stab or two you have to recognize that there are some very valid points in his address. I just can't go with everything he said.
 
Acert93 said:
Go back and read my quote of JC from his keynotes. He specifically notes working on the SNES and numerous consoles since then. I already mentioned numerous games from id that appeared on the Xbox, PS2, PS, N64, Dreamcast, 32X, etc., plus game engines for a lot of titles like CoD, MoH, etc., plus porting his software to the Mac (PPC... someone was questioning if he had any PPC experience). I think it is safe to say that at some point in time he had to look at the consoles. And in fact, that is what he stated (which I already quoted). Typically a company does not just drop an engine on someone with no support--especially when it is their IP at stake.

Either John was lying about working on numerous consoles since the time he did stuff on the SNES, or you are wrong. Anyhow, he claims to have worked on them and that is enough for most of us to accept it at face value.

Yes, yes, I read the quote. You still missed my point. His involvement with consoles for the last 10 years has been minimal. Yes, he has worked on them, but to what extent? All I've seen are a couple of ports of the Quake 3 engine done by entirely different teams. Tech support, from what I understand, is quite different from making a specific engine and making it perform suitably for consoles. I repeat, has he made an engine for a specific console, to properly gauge its performance potential, any time recently?

And as other developers here in this thread noted, his obsession with working at the lowest level on hardware puts him in a good position to comment on the actual power of the hardware. Overall he is positive and notes that those who spend the time and have the skill can do some great things.

His obsession with working at the lowest level of hardware? Hasn't he been working on PC and OpenGL calls for the better part of his career? If he were so obsessed with obtaining as much performance from a machine as possible he would be working on consoles, wouldn't he?

And come on, didn't he dismiss the PS2 because "there's very nasty stuff in there"? Look at the pitiful performance of the few Quake 3 engine games on it versus the greatest looking games on the platform. If he were so obsessed with low level hardware, wouldn't he have made sure his engine worked and looked better than the rest?

So far ERP, Shoot, Gabe, and a lot of other devs are backing him up.

One would have to be pretty dumb not to agree with the notion that different CPUs need a different way of thinking, don't you agree? And yet I haven't seen them saying they'd be better off with an x86 design and that the CELL/X360 CPUs perform terribly.

Yet you are saying he is spreading FUD, is lazy/inept, and has become complacent. Is this true of these devs as well?

Well, when he's comparing straight OoO code on in-order CPUs then yeah, he's pretty lazy and complacent.

Btw, what makes you qualified to make such an assessment of these men? I asked earlier why you can say he is unqualified, yet you have engaged in console debates here (in this very thread!) with strong positions on the consoles!! Are you a programmer working on a next gen console? Are you basing your judgements of John based on Sony/MS PR?

The comments about great performance do not come from me; they come from what I've read from developers. I'm not doing any analysis, nor am I basing them on dreadful PR. This is the first time I've seen the next gen CPUs' performance equated to half the performance of current CPUs. Do all of the developers here agree with this? That we would be MUCH better off with an off the shelf design in terms of both ease of programming AND performance?

That'd be a pretty depressing fact for both MS and Sony if this were true, don't you think? So much money thrown away when they could have just thrown a P4 in there.

You create an "Either / Or" construct but that is not the reality. How about "Neither"?

You are assuming that MS/Sony chose CPUs based on what was best for developers, not necessarily what was best from a hardware/PR perspective. As Hannibal noted, the CPUs were chosen in a way that puts more work on the DEVELOPER. It was a conscious decision on MS's part (Ars Technica's Xenon article). MS saves money on the hardware (but passes it on to the devs). And as John noted, things that make development harder are not good for devs. You forget that costs are going up already; toss in more art assets plus now dealing with stripped down in-order-execution parallel designs, and you have asked devs to do a lot.

From what I understand, the decision to go with multicore CPUs was about performance, integration and cost. Both of these CPUs were what Sony/MS came up with when pondering these factors. If x86 were more suitable for the work, I'm sure either one of them would have made sure to make a similar CPU.

But you say it yourself, MS went with the tricore design in order to be able to compete with Sony in terms of performance. My point is this: if a conventional x86 derivative or conventional PPC core would have been able to offer the performance offered by these two multicore CPUs, then why even go with them? And both companies making such a terrible mistake at the same time? I just don't believe that.

But as John noted the tools/skillset is not there yet for working well with parallel architectures. This is a problem they have been working on for decades and there is no easy solution.

Well, isn't it obvious that there are new challenges for a new kind of CPU? Problems that will have to be solved to some extent to extract its potential performance?

Anyhow, if you did not miss it, overall John LIKES the MS design. Just because he has some reservations and complaints does not mean he did not like it. He considers it on par with a high end PC, called it "wonderful", praised the tools, and is even building his next engine on it. While he did not like MS's more closed platform approach or the fact they are demanding 720p, overall he said it himself: He was happy with MS's console, so happy id's next game is being developed on it.

I am still not sure how a couple of fair criticisms, in his opinion (and validated by other devs in this thread), turn into this massive "bash John" thread.

Well, if he doesn't like an ATI card with performance that is at worst on par with the very best currently offered on the market, he is a pretty damn dumb person.:p

I'm not bashing anyone. I'm merely responding to some parts of the discussion I disagree with strongly. What he doesn't like are the CPUs, and the justification is that some OoO code works terribly on in-order CPUs. Don't you agree that kind of comparison is inaccurate? Shouldn't a programmer work with the hardware instead of against it in order to do a proper performance comparison? Shouldn't his attitude be forward looking instead of being sorry that he just won't be able to copy & paste his existing job? After all, everyone is going multicore at some point.
 
scificube said:
There is a reason that there is no explicit form of OOe code, or rather no method of creating it...if it did exist it would go against the very purpose of OOe.

However, there is a method or rather a set of practices one should keep in mind when dealing with in-order processors. Code must be structured so that in-order execution's limitations can be minimized or avoided altogether.

Set of practices? Be specific, man.

The difference is that on an in-order CPU, the compiler, or worse, the developer, has to do the scheduling. And even though it can look at a very big window at any one time, it can only use static dependency information. This has implications!

1.) Loop unrolling is used to remove false dependencies.
2.) Inlining is used to let the compiler schedule instructions across function calls

Both of these bloat code, and you have to make up for it by having more instruction cache (or local store) and more architected registers (large contexts in multithreaded environments make sense, no? No!). So the advantage in silicon real estate is a lot lower than at first anticipated, and with guaranteed lower performance.
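A toy example of what the unrolling buys and what it costs (made-up loop, purely illustrative): reusing one temporary every iteration forces the static schedule to serialise on that register, while unrolling by four gives the compiler four independent temporaries to interleave loads, multiplies and stores across iterations. The price is exactly what I said above: a loop body four times the size and four live registers instead of one.

Code:
/* Illustrative only: unrolled-by-four scaling loop. Each tN maps to its
 * own architected register, so the static schedule can overlap work from
 * four iterations; the rolled version would keep reusing one register.
 * n is assumed to be a multiple of 4 to keep the sketch short. */
void scale4(float *dst, const float *src, float k, int n)
{
    for (int i = 0; i < n; i += 4) {
        float t0 = src[i]     * k;
        float t1 = src[i + 1] * k;
        float t2 = src[i + 2] * k;
        float t3 = src[i + 3] * k;
        dst[i]     = t0;
        dst[i + 1] = t1;
        dst[i + 2] = t2;
        dst[i + 3] = t3;
    }
}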

scificube said:
With this said, code that would run just fine on an OOe processor (because OOe is meant to handle all sorts of code by virtue of dynamic execution) can and most likely will not run so well on an IO processor, because it is unlikely, as a matter of chance, that the code will just avoid the pitfalls that are detrimental to the performance of an IO processor.

Like I said, OOE handles inter-operation dependencies at runtime; in-order handles them at compile time. The fact that your in-order CPU has no knowledge at runtime of dependencies (other than knowing it must stall on one) is not a pitfall that can be avoided, it is a fact of life for in-order CPUs.


scificube said:
This is why IMO it is still significant to note the difference between typical x86 code, where OOe processors are relied upon (intentionally or unintentionally) to extract ILP and generally keep throughput high at the instruction level, running unchanged on an IO processor where no mechanisms are in place to do this, as contrasted with code that is optimized for IO execution.

You don't optimize code for in-order execution. Your compiler schedules the code in the most optimal way (or you, the developer, do it). Scheduling instructions is not what I consider coding.

scificube said:
The fact of the matter is that IO processors require special care, but with that "necessary" care they can perform exceptionally.
Outside of easily-predicted-memory-access workloads, in general and compared to OOOE, no.

Go look up the performance of current CPU cores. The only in-order CPU that is close is Itanium, and it has twice (or more) the die size of Opterons/P4s. For shits and giggles compare Sun's in-order UltraSparc IIIe to Fujitsu's out-of-order Sparc V, and Sparc V is hardly state of the art OOO.

Cheers
Gubbi
 
Gubbi said:
Like I said, OOE handles inter-operation dependencies at runtime; in-order handles them at compile time. The fact that your in-order CPU has no knowledge at runtime of dependencies (other than knowing it must stall on one) is not a pitfall that can be avoided, it is a fact of life for in-order CPUs.
Couldn't a development environment use run-time dependency evaluation to make some 'runtime usage' compilation improvements? That is, run the code on an OOO CPU, record the OOO activities, and compile the IO solution using those same optimisations where appropriate? Only where there's obvious repeated patterns and so forth would this work but it'd certainly offer some insight into where to make improvements.
 
Shifty Geezer said:
Couldn't a development environment use run-time dependency evaluation to make some 'runtime usage' compilation improvements? That is, run the code on an OOO CPU, record the OOO activities, and compile the IO solution using those same optimisations where appropriate? Only where there's obvious repeated patterns and so forth would this work but it'd certainly offer some insight into where to make improvements.

No.

Dependencies are there to stay, that is what constitutes a program after all, stringing producers and consumers together to get work done.

The best you could do is make a simple RAT (register alias table) device. This could remove false dependencies in loops and hence limit the need for loop unrolling. The rotating register file found in Itanium (IPF) would qualify as such.

Cheers
Gubbi
 
Yes, yes, I read the quote. You still missed my point. His involvement with consoles for the last 10 years has been minimal. Yes, he has worked on them, but to what extent?
Many developers get free devkits specifically for evaluation purposes from time to time. And with someone as influential as Carmack, he's probably gotten at least one of everything. And he's probably gotten intimate with every one of them and looked down to the metal and played with every last little quirk if for no other reason than to satisfy his geek drive.

His obsession with working at the lowest level of hardware? Hasn't he been working on PC and OpenGL calls for the better part of his career? If he were so obsessed with obtaining as much performance from a machine as possible he would be working on consoles, wouldn't he?
What makes you think that writing a PC OpenGL engine doesn't involve working at the lowest possible level? I mean, low-level code doesn't even dominate a PS2 codebase -- it's saved for key points, and the same can be said of a PC engine. Part of what you want to do with any engine, whether it's Xbox, PS2, or PC, is keep the pushing of data to the GPU to a minimum. Why you'd expect it to be magically different on the PC is beyond me.
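(A crude sketch of what "keep the pushing of data to a minimum" means in GL terms -- made-up function names, and it assumes a GL 1.5-level header/driver for buffer objects: upload the static geometry once, then each frame you only bind and draw from the resident buffer instead of re-sending the vertices.)

Code:
#include <GL/gl.h>

/* One-time upload of static geometry into a buffer object. */
GLuint make_static_mesh(const float *verts, int num_verts)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, num_verts * 3 * sizeof(float),
                 verts, GL_STATIC_DRAW);
    return vbo;
}

/* Per-frame path: no vertex data crosses the bus, just the draw call. */
void draw_static_mesh(GLuint vbo, int num_verts)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glDrawArrays(GL_TRIANGLES, 0, num_verts);
    glDisableClientState(GL_VERTEX_ARRAY);
}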

Besides which, he was doing stuff on the PC long before there was such a thing as OpenGL. When you do software rasterizers on a 386, it's a whole other ball game that gives you that deep understanding of what goes on in a graphics pipeline.

And come on, didn't he dismiss the PS2 because "there's very nasty stuff in there"?
Uh-huh... and he's absolutely right. Are you saying that proves he didn't do anything meaningful with it? The same thing he was trying to say with this talk about next-gen can be said of the PS2: the people who put in the time and money to develop on it are the ones who got some good results.
 
Gubbi said:
Set of practices? Be specific, man.

Do I have to...this is Beyond3D...many here know full well, better than I do, what I'm talking about.

Gubbi said:
The difference is that on an in-order CPU, the compiler, or worse, the developer, has to do the scheduling. And even though it can look at a very big window at any one time, it can only use static dependency information. This has implications!


1.) Loop unrolling is used to remove false dependencies.
2.) Inlining is used to let the compiler schedule instructions across function calls

Both of these bloat code, and you have to make up for it by having more instruction cache (or local store) and more architected registers (large contexts in multithreaded environments make sense, no? No!). So the advantage in silicon real estate is a lot lower than at first anticipated, and with guaranteed lower performance.

Ok....I don't think I implied this wasn't the case. In the case of the PS3, at least, we know there is quite a bit of HW real estate, with 128 128-bit registers per SPE and so forth. I would agree that switches would be costly, but then switches are always costly and to be avoided as much as possible in all cases.

...oh, I just have not paid as much attention to the X360's CPU...I'm no Sony ******...I just find Cell to be more interesting.

Gubbi said:
Like I said, OOE handles inter-operation dependencies at runtime; in-order handles them at compile time. The fact that your in-order CPU has no knowledge at runtime of dependencies (other than knowing it must stall on one) is not a pitfall that can be avoided, it is a fact of life for in-order CPUs.

That is why these architectures are multi-core, trying to leverage TLP. When such stalls do arise, everything still doesn't come to a halt, as other processor elements should be moving right along. I realize it is most ideal to leverage both ILP and TLP, but it is understandable why that would have been a tall task...and it's been gone over several times on these boards, I think. IPC can still be good per core if the compiler is very good at scheduling things, and in the case that it isn't, devs must then go through the pains of handling the task themselves. I feel things cannot be reduced to scheduling alone, but that's just me.
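As a rough sketch of what I mean by leaning on TLP (made-up example, plain pthreads rather than any console-specific API): split independent work across cores so that a stall in one thread doesn't leave the rest of the chip idle.

Code:
#include <pthread.h>

/* Illustrative only: names and the two-way split are made up. */
struct job { const float *data; int begin, end; float partial; };

static void *sum_range(void *arg)
{
    struct job *j = arg;
    float s = 0.0f;
    for (int i = j->begin; i < j->end; ++i)
        s += j->data[i];
    j->partial = s;
    return 0;
}

float parallel_sum(const float *data, int n)
{
    struct job jobs[2] = {
        { data, 0,     n / 2, 0.0f },
        { data, n / 2, n,     0.0f },
    };
    pthread_t t;
    pthread_create(&t, 0, sum_range, &jobs[0]);  /* first half on another core */
    sum_range(&jobs[1]);                         /* second half on this core   */
    pthread_join(t, 0);
    return jobs[0].partial + jobs[1].partial;
}

If one half stalls on memory, the other keeps issuing; nothing about that requires OOe hardware.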

I'm not here to test my might or anything. I was just giving my opinion.

Gubbi said:
You don't optimize code for in-order execution. Your compiler schedules the code in the most optimal way (or you, the developer, do it). Scheduling instructions is not what I consider coding.

Outside of easily-predicted-memory-access workloads, in general and compared to OOOE, no.

Hey...if that's your opinion that's fine by me although I feel there are other instances to consider than IO bound jobs.

Gubbi said:
Go look up the performance of current CPU cores. The only in-order CPU that is close is Itanium, and it has twice (or more) the die size of Opterons/P4s. For shits and giggles compare Sun's in-order UltraSparc IIIe to Fujitsu's out-of-order Sparc V, and Sparc V is hardly state of the art OOO.

Ok...I think I'll do that...just what performance metric should I be making note of?
 
His obsession with working at the lowest level of hardware? Hasn't he been working on PC and OpenGL calls for the better part of his career? If he were so obsessed with obtaining as much performance from a machine as possible he would be working on consoles, wouldn't he?

Okay, so we're talking about the guy who, with Michael Abrash, wrote a software 3D renderer that basically fit into the cache of the Pentium processor? And as far as I know he had ideas like using BSPs to process the game levels' geometry, using lightmaps for static lighting, or even things like smooth scrolling on a PC with Commander Keen, a fast raycasting-based 2.5D engine, and so on... So, is this the same John Carmack we're talking about? Because then I'm probably interpreting things a bit differently...
 
Almasy said:
Well, when he's comparing straight OoO code on in-order CPUs then yeah, he's pretty lazy and complacent.

There is NO SUCH THING AS OoO code. Next person who brings it up gets spanked, sent to the corner and wears the cone hat!

Aaron Spink
speaking for myself inc.
 
Shifty Geezer said:
Couldn't a development environment use run-time dependency evaluation to make some 'runtime usage' compilation improvements? That is, run the code on an OOO CPU, record the OOO activities, and compile the IO solution using those same optimisations where appropriate? Only where there's obvious repeated patterns and so forth would this work but it'd certainly offer some insight into where to make improvements.

No. What part of dynamic do you need explained? FDO is already part of most compile loops that care about performance and gets about an equal speedup on both IO and OoO uArchs.
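(For anyone unfamiliar, an FDO loop is roughly the sketch below -- GCC shown for concreteness, file names made up.)

Code:
/* Sketch of a feedback-directed optimisation (FDO/PGO) build loop:
 *
 *   gcc -O2 -fprofile-generate hot.c -o hot    (instrumented build)
 *   ./hot training.dat                         (run on representative input)
 *   gcc -O2 -fprofile-use hot.c -o hot         (rebuild using the profile)
 *
 * The recorded branch frequencies let the compiler lay out and schedule
 * the hot path statically, which helps an in-order core about as much
 * as an out-of-order one. */
int classify(int x)
{
    if (x < 0)              /* rarely taken in the training run */
        return -1;
    return x * 2 + 1;       /* hot path the compiler will favour */
}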

Aaron Spink
speaking for myself inc.
 
ShootMyMonkey said:
Many developers get free devkits specifically for evaluation purposes from time to time. And with someone as influential as Carmack, he's probably gotten at least one of everything. And he's probably gotten intimate with every one of them and looked down to the metal and played with every last little quirk if for no other reason than to satisfy his geek drive.

This doesn't go beyond speculation, even if likely. Also, I don't believe that, with the many responsibilities he must have at id, he had the time to create a well performing engine and acquire a lot of experience with a single platform.

What makes you think that writing a PC OpenGL engine doesn't involve working at the lowest possible level? I mean, low-level code doesn't even dominate a PS2 codebase -- it's saved for key points, and the same can be said of a PC engine. Part of what you want to do with any engine, whether it's Xbox, PS2, or PC, is keep the pushing of data to the GPU to a minimum. Why you'd expect it to be magically different on the PC is beyond me.

Well, I was under the impression that consoles provide the benefit of being able to exploit a platform's strengths, while PCs are dominated by software layers such as OpenGL that provide compatibility but reduce efficiency considerably. Is this not true?

Besides which, he was doing stuff on the PC long before there was such a thing as OpenGL. When you do software rasterizers on a 386, it's a whole other ball game that gives you that deep understanding of what goes on in a graphics pipeline.

Yes, I do know of his work, and why a lot of people worship the guy, but that was a long time ago. Responsibilities, focus and even skills change. From what I know, he has not done any significant work on consoles for a long time.

Uh-huh... and he's absolutely right. Are you saying that proves he didn't do anything meaningful with it? The same thing he was trying to say with this talk about next-gen can be said of the PS2: the people who put in the time and money to develop on it are the ones who got some good results.

Well, I am not dumb enough to say that quote proves he didn't do anything meaningful with it. I'd say the products using his engine that ran terribly, didn't look particularly good and had to be heavily reworked by other people to run somewhat decently do prove that.

Look, I've read what he had to say, and many of his points are indeed very interesting. I know of his status among many people, but that still doesn't mean that I'm not going to question what I disagree with him on. His gameplay comments I do not share, due to the fact that his latest product shows he does not have many interesting ideas to bring to the table.

I am also terribly curious to know if his claim that the new CPUs are terrible to work with and do not perform well at all is true. Because I'd say a CPU that represented billions in investment and only manages to perform at half of a P4 is a failure. So is it? Is slapping a P4 onto a motherboard with an Nvidia card a much better solution? Did Sony/MS waste their money?
 
aaronspink said:
There is NO SUCH THING AS OoO code. Next person who brings it up gets spanked, sent to the corner and wears the cone hat!

Aaron Spink
speaking for myself inc.

So, is there no possible way to optimize code for in-order CPUs? That's what I was trying to refer to. I wonder why Sony/MS went for in-order CPUs if their performance is so far below conventional ones.

Okay, so we're talking about the guy who, with Michael Abrash, wrote a software 3D renderer that basically fit into the cache of the Pentium processor? And as far as I know he had ideas like using BSPs to process the game levels' geometry, using lightmaps for static lighting, or even things like smooth scrolling on a PC with Commander Keen, a fast raycasting-based 2.5D engine, and so on... So, is this the same John Carmack we're talking about? Because then I'm probably interpreting things a bit differently...

I admit my ignorance on some specifics of what he has done. While they are fantastic, they do seem to be from a long time ago. I'm trying to learn, but I can only speak from what I've seen. And for a significant amount of time, I have not seen him contribute much, if at all, to the development of graphics on consoles. That is what I tried to say, and I'm probably wrong, but I must ask to know, not just nod my head and accept what he says as if he were a demigod.

For example, what ShootMyMonkey posted got me wondering: is it really true there are no differences in programming consoles? Is the benefit of optimizing code for a specific platform and knowing how it works real? If it is, to what extent?
 