gurgi said:You're just mad he took a stab at the Saturn.
Me? Nah, God spare me from being a Sega fan. :shivers:
Laa-Yosh said:Okay, so we're talking about the guy who - with Michael Abrash - wrote a software 3D renderer that basically fit into the cache of the Pentium processor? And as far as I know he had ideas like using BSP trees to process the game levels' geometry, using lightmaps for static lighting, or even things like smooth scrolling on a PC with Commander Keen, a fast raycasting-based 2.5D engine, and so on... So, is this the same John Carmack we're talking about? Because then I'm probably interpreting things a bit differently...
Nite_Hawk said:What I do doubt is his authority on parallel computing given that his only brief stint with it seems to be the lackluster implementation (which was later removed) in Quake 3.
IIRC, Quake 3 got a 40% speedup from going to a dual-CPU machine. That's close to the expected peak for making an arbitrary app dual-threaded. The rule of thumb I've heard is the square root of the number of processors: so 2 processors should be 1.41x faster, 4 processors 2x faster, etc.
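(As a sanity check, sqrt(2) ≈ 1.41 lines up with that 40% figure.) Here is a minimal sketch of that rule of thumb next to Amdahl's law for comparison; the 60% parallel fraction is a made-up illustrative number, not anything measured from Quake 3:

```c
/* Minimal sketch: the "square root of the number of processors" rule
 * of thumb quoted above, printed next to Amdahl's law. The parallel
 * fraction p = 0.6 is hypothetical, purely for illustration. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double p = 0.6; /* hypothetical parallelizable fraction */
    for (int n = 1; n <= 8; n *= 2) {
        double rule = sqrt((double)n);             /* sqrt-of-CPUs rule */
        double amdahl = 1.0 / ((1.0 - p) + p / n); /* Amdahl's law bound */
        printf("%d CPUs: rule of thumb %.2fx, Amdahl %.2fx\n", n, rule, amdahl);
    }
    return 0; /* sqrt(2) = 1.41x matches the ~40% Quake 3 figure */
}
```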
Nite_Hawk said:I don't want you to take away from this message that I think John is incapable of writing multithreaded code, or that he's incompetent. It is just that what John says basically dismisses the work of hundreds of very, very smart engineers at Toshiba/IBM/Sony that have spent their entire careers designing and working on multithreaded processor designs and algorithms.
Whoah, there. Are you comparing hardware engineers to software engineers? Because I don't think the bulk of the engineers at those companies have actually tried to do what Carmack did. They're working on chips, not debugging race conditions.
And I'm pretty sure Carmack is not dismissing their work. I didn't get that sense from the video, anyway.
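For anyone wondering what "debugging race conditions" looks like in practice, here's a minimal illustrative sketch (not from any id Software code) of the classic lost-update race, with a mutex as one fix:

```c
/* Minimal sketch of the kind of race that makes multithreaded code
 * hard to debug: two threads increment a shared counter. Without the
 * mutex, updates are lost nondeterministically. Illustrative only. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* remove these two lines and the  */
        counter++;                   /* final count silently comes up   */
        pthread_mutex_unlock(&lock); /* short on most machines          */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```

The nasty part is that the broken version often passes: the bug only shows up under particular timings, which is exactly why this skill set differs from arranging transistors on silicon.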
Almasy said:Well, I was under the impression that consoles provide the benefit of being able to exploit a platform's strengths while PCs are dominated by software layers such as OpenGL that provide compatibility but reduce efficiency considerably. Is this not true?
Well, it's not always true. Xbox certainly hid things from you, and you had no choice but to work behind an API layer. Though I never did anything with PS1 (I was in middle school at the time), it was supposedly the case there as well. But the thing in either case was that you had enough hardware power to get by without having to worry too much about it. Still, it's up to you to really decide how you want to use it. When do you push data across, how do you want to group your packets, optimizing everything that goes on in software as opposed to just the graphics pipe... keeping everything that costs a lot down to a minimum. That part of optimization and getting "down to the metal" is the same no matter the platform. Granted, what costs a little and what costs a lot will be different from platform to platform, but the point of actually having to deal with it is what's the same.
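To make the "group your packets" point concrete, here's a minimal sketch of batching small pushes into one bulk transfer; all names here (Cmd, push_to_gpu, submit) are hypothetical, not any real console API:

```c
/* Minimal sketch: instead of pushing each command across the bus on
 * its own, commands accumulate in a buffer and get flushed in bulk,
 * amortizing the per-transfer cost. Hypothetical names throughout. */
#include <stddef.h>
#include <stdio.h>

typedef struct { int type; float data[4]; } Cmd;

#define BATCH_MAX 256
static Cmd batch[BATCH_MAX];
static size_t batch_len = 0;
static size_t transfers = 0; /* count of expensive bus transfers */

static void push_to_gpu(const Cmd *cmds, size_t n) {
    (void)cmds; (void)n;
    transfers++; /* stand-in for one costly trip across the bus */
}

static void submit(const Cmd *c) {
    batch[batch_len++] = *c;
    if (batch_len == BATCH_MAX) { /* flush only when the batch fills */
        push_to_gpu(batch, batch_len);
        batch_len = 0;
    }
}

int main(void) {
    Cmd c = { 0, { 0, 0, 0, 0 } };
    for (int i = 0; i < 1000; i++)
        submit(&c);
    if (batch_len) push_to_gpu(batch, batch_len); /* end-of-frame flush */
    printf("1000 commands sent in %zu transfers\n", transfers); /* 4, not 1000 */
    return 0;
}
```

What counts as "a lot" per transfer differs by platform, but the discipline of deciding when to flush is the part that stays the same.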
Almasy said:I wonder why Sony/MS went for in-order CPUs if the performance is so below conventional ones.
Again, it's not just in-order. It's in-order with poor branch prediction, long pipelines, small caches, and high-latency memory, but with high theoretical throughput. Fundamentally, it's just that you could take a high-clocked dual-core A64 and get something quite powerful, and you'll have a $1200 console. Next-gen Neo-Geo, anybody? And even then, the raw throughput would have been, what, 10 GFLOPS? If we can figure out dual-threading, I think we can hit a wall at 10 GFLOPS soon enough, considering that current-gen hardware happens to include a CPU triplet that approaches a theoretical limit of 6 GFLOPS. Using simple hardware means more functional hardware in place for less cost.
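To illustrate why in-order execution hurts and what programmers can do about it, here's a minimal illustrative sketch (names and data are made up): the single dependency chain in sum_naive stalls an in-order pipeline, while splitting it into independent accumulators exposes the parallelism an out-of-order core would have found by itself:

```c
/* Minimal sketch of hand-optimizing for an in-order core. In
 * sum_naive, each add must wait for the previous one; an out-of-order
 * CPU hides this, an in-order CPU cannot. sum_unrolled splits the
 * work into four independent chains the pipeline can overlap. */
#include <stdio.h>

float sum_naive(const float *v, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += v[i]; /* serial dependency: stalls an in-order pipeline */
    return s;
}

float sum_unrolled(const float *v, int n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4) { /* four independent accumulators */
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < n; i++)
        s0 += v[i]; /* leftover elements */
    return (s0 + s1) + (s2 + s3);
}

int main(void) {
    float v[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    printf("%f %f\n", sum_naive(v, 8), sum_unrolled(v, 8));
    return 0;
}
```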
Nite_Hawk said:In this specific case, it just seems as though there are other people in the field that may have a more qualified opinion than John does.
I didn't get any impression that he was talking about how parallel programming works -- it was more like he was addressing the issues with game code in parallel programming. And he's certainly qualified to talk about game code.
Almasy said:So, is there no possible way to optimize code for in-order CPUs? That's what I was trying to refer to. I wonder why Sony/MS went for in-order CPUs if the performance is so below conventional ones.
Inane_Dork said:IIRC, Quake 3 got a 40% speedup from going to a dual-CPU machine. That's close to the expected peak for making an arbitrary app dual-threaded. The rule of thumb I've heard is the square root of the number of processors: so 2 processors should be 1.41x faster, 4 processors 2x faster, etc.
And it was removed because OS and hardware issues made the benefit not work on several setups. It wasn't that he wasn't getting good enough performance from it.
Nite_Hawk said:What specific OS/hardware issues are you thinking of? Are you just basing this off what he said in the keynote?
Yeah, it's from the keynote. It was the classic "it worked on my machine" problem.
Nite_Hawk said:I think when you are talking about a field as specific as parallel computing you are going to end up with a lot of knowledge about both hardware and software in the same company.
I really don't think that's reasonable. The skills it takes to synchronize caches across a bus and arrange transistors on silicon are not that comparable to the skills it takes to design an effective parallel system and debug it. At least, those two skill sets seem very different to me.
Nite_Hawk said:Certainly John seems to feel comfortable commenting on both the software and hardware aspects of these systems. I don't think it unreasonable to hold his opinions next to those of both the hardware and software engineers working on Cell.
Cell engineers have a vested interest in promoting one thing. Carmack does not. Plus, to me anyway, he only seems to comment on hardware as it affects his ability to program.
Nite_Hawk said:Carmack doesn't outright dismiss anything, but he certainly seems to downplay any significance there is. The feeling I get from him is that he thinks of both Cell and the Xbox 360 CPU as just being hard-to-program incremental improvements on what we had before. Personally, that seems like dismissing their work to me. The Cell and, to a lesser extent, the Xenon processor seem a lot more revolutionary than evolutionary.
I think what he's pointing out is that he's going to have to work significantly harder to attain the same level of performance on Cell & XeCPU that he can get on a high-end Athlon or P4. And that's reasonable, IMO. It may be downplaying Cell and XeCPU, but maybe they should be downplayed.
Nite_Hawk said:Whether or not they will fulfill their potential is up for contention, but so long as Carmack is complaining about them, I doubt it will be him that leads any revolutions on these new processors.
Carmack is not complaining. I did not get that sense at all.
Inane_Dork said:And that's reasonable, IMO. It may be downplaying Cell and XeCPU, but maybe they should be downplayed.
MfA said:There are things you can do on Cell which you simply wouldn't have the horsepower for on a current x86 ... consoles have a long lifetime, some developers will suck it up and get good performance out of it eventually, just as he said.
Saem said:These discussions are absolutely ridiculous. The technical people on this board have basically corroborated John's premises and generally agree with his conclusions, yet we have people on the other side bringing up all sorts of oddball concerns about how John's attacking something.
What's ludicrous is the assertion that he's somehow disrespecting anyone's work; he's not. He's merely saying that, given what has been delivered, this is what he expects to happen. The "huge" gains that the marketing departments droned on about just aren't feasible in many situations.
To sum it up, there are areas where we've taken a step back, and those areas mean that the programmers, not the hardware guys, have the onus on them for hitting performance marks, more so than ever. This isn't a case of programmers being lazy; in fact, I'd put it on the hardware guys.
Saem said:This isn't a case of programmers being lazy; in fact, I'd put it on the hardware guys.
Or, more specifically, the business guys. What hardware designer wouldn't want the best of both worlds: fast single-threaded performance and multiple threads? Cost and schedules get in the way of these things sometimes.
mckmas8808 said:What things are you talking about?
Anything which requires more than the peak performance of a current x86 processor.
mckmas8808 said:Can you tell me why they should be downplayed? What have they (they being Sony and MS) done to deserve this?
I must question what you've seen and heard about Cell and XeCPU. If it's the same as what I've seen and heard, I'm not sure what's disagreeable with my comment.