Future console CPUs: will they go back to OoOE, and other questions.

And a question to Faf or other senior devs on teams.

What percentage of your current team would you trust to write good thread-safe code?

I've been writing threaded code for years and I recently added asynchronicity to an existing system. I managed to construct two potential deadlocks, because of re-entrancy paths I hadn't expected. Now I caught them by code inspection, but I wouldn't expect a lot of the junior engineers on my team to have done so.
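To give a flavour of the kind of re-entrancy trap I mean, here's a minimal hypothetical reduction in C++ (not our actual code): a non-recursive mutex held while firing callbacks deadlocks the moment a callback re-enters the same object.

#include <functional>
#include <mutex>
#include <vector>

class EventSource {
    std::mutex lock_;                             // non-recursive
    std::vector<std::function<void()>> subs_;
public:
    void subscribe(std::function<void()> f) {
        std::lock_guard<std::mutex> g(lock_);     // takes lock_
        subs_.push_back(std::move(f));
    }
    void fire() {
        std::lock_guard<std::mutex> g(lock_);     // holds lock_ across callbacks
        for (auto& f : subs_) f();                // a callback that calls
    }                                             // subscribe() re-enters lock_
};

int main() {
    EventSource ev;
    ev.subscribe([&] { ev.subscribe([] {}); });   // the unexpected re-entrancy path
    ev.fire();                                    // self-deadlock here
}

Spotting the equivalent of that by inspection in a large codebase is exactly what I wouldn't expect most juniors to manage.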
 
@ ERP: Carmack made a comment that he thought MS and Sony were 1 gen too early. How do you (and other game developers here) feel about that? Would you have preferred, in the 2005/2006 timeframe:

• A top-of-the-line single-core OoO chip (like the AMD FX)
• A mid-range dual-core OoO chip (like the Athlon 64 X2 3800+, i.e. about what MS originally aimed for)
• A Xenon/Cell-like processor

Would this opinion change in 3 or 4 years? (I.e. do you feel you will be getting more out of a Xenon/Cell processor in 3 or 4 years, in terms of end product in the game per man-hour, than you would have with the other options?)
 
What percentage of your current team would you trust to write good thread-safe code?
Is that a trick question? ;)

None, of course. I wouldn't even trust myself to get it right, even though I do know what to watch out for. There is only one other developer in our company who would know as well, and he's mostly a manager nowadays.

And you need totally different unit tests, with different kinds of units.
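To sketch what I mean by a different kind of unit (hypothetical, minimal): the thing under test becomes an invariant under many interleavings, so the test hammers the shared state from several threads rather than checking a single return value. And a pass still proves very little, of course.

#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    const int kThreads = 8, kIters = 100000;
    long counter = 0;
    std::mutex m;

    std::vector<std::thread> pool;
    for (int t = 0; t < kThreads; ++t)
        pool.emplace_back([&] {
            for (int i = 0; i < kIters; ++i) {
                std::lock_guard<std::mutex> g(m); // remove this guard and the
                ++counter;                        // assert below can fail -
            }                                     // but only sometimes
        });
    for (auto& th : pool) th.join();

    assert(counter == long(kThreads) * kIters);   // the invariant is the "unit"
}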
 
I guess not me ... Just a month or so ago I tried our main Office application, written in Visual Studio .NET 2003, in the 2005 environment, and it pointed out some unsafe control access (calls made without the proper Invoke marshalling).

And that's just your basic ... well ... basic. ;)

But the other threads were OK. I like the way you can do threads in .NET, just two lines: create a thread object that points to your subroutine, and invoke its Start method. Everyone can do it. ;) We're getting our first Dual Core desktops next week, so it's going to pay off more to think about threads properly.
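For what it's worth, the general pattern behind that Invoke stuff is simple enough; here's a simplified sketch in C++ rather than .NET (a made-up queue, not any framework's API): workers never touch the UI directly, they post closures to a queue that only the UI thread drains.

#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

std::mutex qlock;
std::queue<std::function<void()>> uiQueue;

void postToUiThread(std::function<void()> f) {    // safe from any worker thread
    std::lock_guard<std::mutex> g(qlock);
    uiQueue.push(std::move(f));
}

void uiPump() {                                   // called from the UI thread's loop
    std::queue<std::function<void()>> batch;
    {
        std::lock_guard<std::mutex> g(qlock);
        std::swap(batch, uiQueue);                // don't hold the lock while
    }                                             // callbacks run (re-entrancy!)
    while (!batch.empty()) {
        batch.front()();                          // runs on the UI thread
        batch.pop();
    }
}

int main() {
    std::thread worker([] {                       // the two-line spawn-and-start
        postToUiThread([] { /* update the control here, safely */ });
    });
    worker.join();
    uiPump();
}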

But this is off-topic, I guess - I am starting to get more interested, though, in threads and multitasking at a lower level for the 3D Browser / Engine project we have going on. I'm going to ask for opinions shortly on what would be an efficient way to set that up.
 

Does that even account for one whole developer? ;)

ERP said:
What percentage of your current team would you trust to write good thread-safe code?

And then this addresses only a part of the problem, as thread-safety per se is a necessary but not a sufficient condition for good threaded code. Given that the parallelisation of the code was meant to be beneficial in the first place, the thread-safety should not break that. I've seen way too much 'thread-safe' code riddled with so many false-concurrency issues that the whole parallelisation exercise lost its meaning.
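To make that concrete, a caricature (hypothetical code, but I've seen its real-world equivalent plenty): perfectly thread-safe, and perfectly serial.

#include <mutex>
#include <thread>
#include <vector>

std::mutex big_lock;                              // one lock, "for safety"
float data[1 << 20];

void worker(int begin, int end) {
    for (int i = begin; i < end; ++i) {
        std::lock_guard<std::mutex> g(big_lock);  // every iteration serialises
        data[i] = data[i] * 0.5f + 1.0f;          // on the same lock, although
    }                                             // the iterations are independent
}

int main() {
    const int n = 1 << 20, threads = 4;
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back(worker, t * n / threads, (t + 1) * n / threads);
    for (auto& th : pool) th.join();              // four threads, roughly one
}                                                 // thread's worth of useful work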
 
ERP said:
As I've said before I don't think that X360 or Cell are bad processor designs; they were clearly extremely heavily optimized for one thing at the expense of good general performance.
Well, that was the whole point of them after all - to shift the emphasis towards "media centric" rather than "general purpose" computing at a given time and cost.
The questions that still haven't been answered are:
* Did they achieve their goal?
* If they succeeded, will they fail anyway due to the capabilities remaining untapped?

Only application developers can really answer those questions. And even for developers it may be difficult to make a full evaluation until another couple of years or so has passed. But the questions are both interesting in their own right, and of course critical to predict what the next generation will bring.
 
Well, that was the whole point of them after all - to shift the emphasis towards "media centric" rather than "general purpose" computing at a given time and cost.

I would rather know what is considered "media centric" and what is considered "general purpose".

Seems to me like 90% of your average benchmarking suite at any given site could be considered "media centric" (encoding, decoding, (de)compression, gaming, file conversion) with pretty much only your "office productivity" tests being considered "general purpose", at least in the sense that general purpose is everything that isn't "media centric".

But if that's the case, I'm finding it hard to believe that AMD/Intel spent so much effort speeding up the part of the CPU which is already clearly fast enough, at the relative expense (compared to what they could be doing) of the parts which are mostly used to judge their speed these days.

If anything I would have thought Intel and AMD would focus most on gaming performance, since that's what gets the most focus from the media. Hell, the P4 and A64 were fairly even outside of gaming except for the last 2-3 processor speed bumps, but the whole world seemed to focus on the A64 being the far superior CPU purely because it thrashed the P4 in games.
 
Well, that was the whole point of them after all - to shift the emphasis towards "media centric" rather than "general purpose" computing at a given time and cost.
The questions that still haven't been answered are:
* Did they achieve their goal?
* If they succeeded, will they fail anyway due to the capabilities remaining untapped?

Only application developers can really answer those questions. And even for developers it may be difficult to make a full evaluation until another couple of years or so has passed. But the questions are both interesting in their own right, and of course critical to predict what the next generation will bring.

What they are doing, I think, is redefining general purpose. They are looking at the demands of their target group, and have distributed power accordingly. In my opinion, very rightly so. Soon at work we are going to get our first Dual Core desktops, and another generation of desktops will be phased out. I usually get one of the desktops that are phased out to play with at home, and have a few of those lying around already.

The funny thing is, one of the first I ever got is still our favorite machine to use. It's an old 400MHz device, and I only increased the memory to 256MB. My girlfriend uses it for everything (Windows XP and mostly Opera and Office 2003, Messenger, Winamp for Internet radio) and it's more than fast enough for nearly all of it. Best of all, it makes no noise whatsoever, unlike all our modern desktops (when the CPU actually has to do something). Same story for an old Dell laptop I got, with very similar specs (Celeron @ 400MHz) - I have worked on that laptop in the train for a year already, without problems, doing just about everything I have to do at work (Visual Studio .NET, compiling, debugging and running a decent-size VB.NET application, and writing documentation using Word, and I've also made most of my PSP projects with that one).

Most telling, then, is where these machines are slow: encoding video or decoding hi-res video, 3D graphics, etc. ... You get what I'm getting at.

But I don't know. The truth often lies somewhere in the middle.
 
He doesn't have a negative attitude toward effort. But I think the key is that ERP and others are game developers, who want to make games. At some point the hardware becomes an abstraction, and the only thing that truly matters at the end of the day is the game.
Isn't that quite a sweeping generalisation? To make a game needs lots of disciplines, and who likes one might not like another. Consider making a movie. You could have a lighting technician, photographer, director and actors. All of them are film-makers. Do they all wish the job of creating a film could be reduced down to a mind-reading holographic movie-creating engine? The photographer has to deal with challenging situations to capture mood and detail, and though he has bad days, it's the challenge he likes. The lighting artist has to set that mood and work with the cameraman to provide light where needed. The director wants everyone to do what the director wants. The actors want to explore emotions and play characters. All are film-makers. All have challenges, but the challenges make their particular job. The result at the end of the day is the film, but the quickest, easiest way isn't the ideal solution. The day a director can push a button and create a movie is a sad day for many many film-makers, not a happy one.

As a game creator, you'll have designers, engine builders, artists etc. Game makers may want the game to be made as cheaply and quickly as possible, matching their vision, as an overall desire. But you'll have people who like the challenge of creating the engine, and who want performance to leverage to let the other team-members be less restricted. I would say that someone who wants to make games but doesn't want to write low-level code is probably in the wrong job as a low-level engine creator, and may prefer a different role. Just like a person who wants to work in movies but has trouble working with the lights shouldn't be a lighting technician, and should try their hand at another discipline. Speaking personally, I have ideas for software but don't enjoy writing software much. I would love a super-easy software writing method. But at the end of the day, I shouldn't be writing software. I should be a designer, and leave it to those who enjoy writing software to handle that part of the process.
 
Well, that was the whole point of them after all - to shift the emphasis towards "media centric" rather than "general purpose" computing at a given time and cost.

Mediacentric isn't the term I'd have used.
They are clearly optimised for raw FPU performance, which is not necessarily the same as "media centric".
 
I want to be clear here.

I stated earlier in the thread that I consider technology to be support for the people making the games (and I absolutely do). That, however, does not preclude technical excellence; it's merely an observation.

10-15 years ago technology was the game and content was an afterthought, or something brought in to complete a game; now games are primarily content built on technology.

It's like the old gameplay vs. graphics debate: in no way does good gameplay preclude good graphics.

Someone mentioned Wii earlier; personally I think Nintendo bailed on the technology curve too early. The hardware is too constrained to let designers really express themselves freely. IMO Wii is more about pushing gaming at a different market than it is about pushing gaming over technology. And I really hope Nintendo succeed.
 
Mediacentric isn't the term I'd have used.
They are clearly optimised for raw FPU performance, which is not necessarily the same as "media centric".

Surely the raw FPU performance is just a part of something that has been designed to handle streaming data more efficiently? One of the Cell's early demos was decoding a pile of video streams, for instance. The thing's full name is Cell Broadband Engine ...
 
Surely the raw FPU performance is just a part of something that has been designed to handle streaming data more efficiently? One of the Cell's early demos was decoding a pile of video streams, for instance. The thing's full name is Cell Broadband Engine ...

I guess that it's a question of the definition of mediacentric.

I don't believe that decoding video streams is a good indicator of the workload in a game, which is the primary function of Cell.

If decoding video or doing FFTs were a good indicator of game workload, why aren't we just running glorified DSPs?

I do think that some areas of a game are heavily FPU-limited on current-gen consoles, but I think that for the most part the larger issue is memory access. Most games spend more time copying data around than they do computing things, and as games get more complex I think this tends to become more dominant.
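To make the copying point concrete with a generic sketch (not any particular engine's code): the same three multiply-adds per entity cost very different amounts of memory traffic depending on data layout.

#include <cstddef>

// Array-of-structures: updating positions drags each entity's whole
// 64-byte record through the cache for the 24 bytes actually used.
struct Entity {
    float px, py, pz;                             // position (read + written)
    float vx, vy, vz;                             // velocity (read)
    char  other[40];                              // AI state, flags, etc.
};

void updateAoS(Entity* e, std::size_t n, float dt) {
    for (std::size_t i = 0; i < n; ++i) {
        e[i].px += e[i].vx * dt;
        e[i].py += e[i].vy * dt;
        e[i].pz += e[i].vz * dt;
    }
}

// Structure-of-arrays: the same arithmetic, but the loop streams only
// the bytes it computes on - this version is bandwidth-friendly.
void updateSoA(float* px, float* py, float* pz,
               const float* vx, const float* vy, const float* vz,
               std::size_t n, float dt) {
    for (std::size_t i = 0; i < n; ++i) {
        px[i] += vx[i] * dt;
        py[i] += vy[i] * dt;
        pz[i] += vz[i] * dt;
    }
}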
 
What they are doing, I think, is redefining general purpose. They are looking at the demands of their target group, and have distributed power accordingly. In my opinion, very rightly so. Soon at work we are going to get our first Dual Core desktops, and another generation of desktops will be phased out. I usually get one of the desktops that are phased out to play with at home, and have a few of those lying around already.

The funny thing is, one of the first I ever got is still our favorite machine to use. It's an old 400MHz device, and I only increased the memory to 256MB. My girlfriend uses it for everything (Windows XP and mostly Opera and Office 2003, Messenger, Winamp for Internet radio) and it's more than fast enough for nearly all of it. Best of all, it makes no noise whatsoever, unlike all our modern desktops (when the CPU actually has to do something). Same story for an old Dell laptop I got, with very similar specs (Celeron @ 400MHz) - I have worked on that laptop in the train for a year already, without problems, doing just about everything I have to do at work (Visual Studio .NET, compiling, debugging and running a decent-size VB.NET application, and writing documentation using Word, and I've also made most of my PSP projects with that one).

Most telling, then, is where these machines are slow: encoding video or decoding hi-res video, 3D graphics, etc. ... You get what I'm getting at.

But I don't know. The truth often lies somewhere in the middle.
Good post!
IMHO most SOHO applications are light. Sometimes it gets a little heavy because of bad code and too much C++ or other things (some say that at least half of the newest GetRight code is self-protection). Today most of my docs (Word, Excel, PowerPoint, PDF, e-mails) are really small. Maybe PDF could get some optimization using a stream media processor. Skype is light. The CAD/CAE tools I occasionally use don't need much processing power.

I still miss my old Tualatin 1.13GHz with 512K cache. Cool, quiet, fast. Just turn off some nonsense XP options like window animation smoothness and you are OK. And if you have time, trim the multiple services in XP to keep it small.

The only things pushing my PC needs are media (HD video and audio) and games.

If people are serious about the need for OoOE then maybe in the future we could have chips with AMP (Asymmetric Multiprocessing). All cores sharing the same ISA, but a few with a more GP-oriented implementation (more OoOE performance) and others with a more stream-oriented implementation (more FP performance). But the programmer will have to identify the kind of code he/she developed to the OS.

Let's say you have a 16-core chip in the future: maybe 4 more OoOE-oriented and 12 stream-oriented ;)
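Purely to illustrate the programmer's side of that (a made-up hint API, not any real OS interface):

#include <thread>

// Hypothetical: a scheduler hint saying which flavour of core in an
// asymmetric (AMP) chip this thread wants. Not a real syscall.
enum class CoreKind { GeneralPurpose, Stream };

void set_core_kind_hint(CoreKind k) { (void)k; }  // stub standing in for the OS

void gameLogic() {
    set_core_kind_hint(CoreKind::GeneralPurpose); // branchy, pointer-chasing code
    // ... AI, scripting, gameplay ...
}

void particleKernel() {
    set_core_kind_hint(CoreKind::Stream);         // straight-line FP math
    // ... simulate and transform particle batches ...
}

int main() {
    std::thread logic(gameLogic);
    std::thread particles(particleKernel);
    logic.join();
    particles.join();
}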
 
I do think that some areas of a game are heavily FPU-limited on current-gen consoles, but I think that for the most part the larger issue is memory access. Most games spend more time copying data around than they do computing things, and as games get more complex I think this tends to become more dominant.
During the '90s we had the same problem with VLSI CAD tools running on large mainframes.
Then we asked the compiler programmers for support for some acrobatics with data (multiple dynamic heaps) and especially data disposal (just discard the entire heap, etc.). They loved the idea.
Imagine trying to run a Design Rule Checker on a state-of-the-art 16Mbit DRAM design with a 128MB mainframe, while other people are using it for other important things too.
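That heap trick is essentially what people now call an arena (or region) allocator; a minimal sketch of the idea:

#include <cstddef>
#include <cstdlib>
#include <new>

// A minimal arena: allocation is a pointer bump, and "data disposal"
// is discarding the entire heap in one call, as described above.
class Arena {
    char*       base_;
    std::size_t size_, used_ = 0;
public:
    explicit Arena(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes) {
        if (!base_) throw std::bad_alloc();
    }
    ~Arena() { std::free(base_); }

    void* alloc(std::size_t bytes) {
        bytes = (bytes + 15) & ~std::size_t(15);  // keep 16-byte alignment
        if (used_ + bytes > size_) throw std::bad_alloc();
        void* p = base_ + used_;
        used_ += bytes;
        return p;
    }
    void discardAll() { used_ = 0; }              // free everything at once
};

int main() {
    Arena pass(1 << 20);                          // one heap per checker pass
    float* scratch = static_cast<float*>(pass.alloc(4096 * sizeof(float)));
    scratch[0] = 1.0f;                            // ... run the pass ...
    pass.discardAll();                            // then throw the heap away
}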
 
@ ERP: Carmack made a comment that he thought MS and Sony were 1 gen too early.

Carmack's absolutely right. MS and Sony jumped on the multi-processor bandwagon a bit too early, even before devs got a chance to play with it on PCs. We're just now barely getting dual-core PCs. I'm really excited about it, but I really wish I had the time to sit down and play with it for a while before I need to show production code.
For PS3/Xbox360 I personally would've preferred a Core 2 Duo with 8 SPEs. You get stellar performance off the bat and when time permits you can move stuff to SPEs.
 
I am not certain that conventional symmetrical multi-core processors with more cores are the future for desktop PCs. For file and database servers they are probably the right way to go, but for desktop PCs, what is it exactly that you need to go faster? Does anybody really need MS Word code to run ten times faster? Running Word code ten times faster won't help if the thing that limits speed is the time it takes to redraw the screen, or load or save a file or page virtual memory to disk, or if Word's speed is limited by the fact that the user can respond/type only so fast.

The speed-limiting bits that slow PC applications are graphics, multimedia, loading programs from disk, and paging virtual memory to disk. These are the things that need to be accelerated. I think the future for desktop PCs lies in a CPU closer to Cell than the current multi-core PC - many media- and FP-accelerating asymmetric cores attached to a conventional OoOE core, an ultra-fast hard disk maybe with multiple independent heads working off the same platter or maybe an ultra-fast solid-state disk, a huge amount of RAM to reduce the need for virtual memory paging, and ever more powerful GPUs.

My idea of a fast general-purpose desktop PC isn't dual quad-core Conroes; it is a PC with 4-disk RAID 0 or RAID 10, a powerful GPU, and a powerful media accelerator.

Re: dual-core Conroe plus 8 SPEs, I think that would cost far too much for a console. If somebody produces a cheap Cell on a PCIe card - for example if Ageia drops its proprietary PPU and uses Cell instead to reduce cost - you always have that option on a PC.
 
I don't think symmetric is the plan for the big chip vendors out there. Intel certainly is looking to bring in specialized cores, and AMD's already starting to test the waters by allowing specialized vendor chips to use coherent HyperTransport.

It's more of a question of when one can expect expertise in general to match the challenges.

Right now, the low-hanging performance fruit is still with symmetric setups up to quad core, apparently.
 
Carmack's absolutely right. MS and Sony jumped on the multi-processor bandwagon a bit too early, even before devs got a chance to play with it on PCs. We're just now barely getting dual-core PCs. I'm really excited about it, but I really wish I had the time to sit down and play with it for a while before I need to show production code.
For PS3/Xbox360 I personally would've preferred a Core 2 Duo with 8 SPEs. You get stellar performance off the bat and when time permits you can move stuff to SPEs.

The idea with consoles is that the owner invests in a certain piece of technology, and any subsequent upgrades consist solely of programmers learning to better harness the power of said technology. If you create a console that is basically nothing more than current PC hardware, in less than 2 years, consoles would just be rather slow PCs.

Instead, a console should be a set of hardware fine-tuned to the demands of gaming, with a certain amount of future-proofing. Playstations tend to last at least 5 years, and look how the PS2 is still getting a number of great (and still great-looking) games in its sixth year (God of War 2, Final Fantasy XII, Okami, and lots of multi-platform games). Basically, if you got a PS2 in its first or second year, you got incredible value, even if you paid 1000 USD for it at the time (and most people have paid much, much less).

Basically, what Carmack has said is that consoles were 1 gen early, from his perspective as a PC-centric developer. If, instead, you'd ask any developer who's been focusing primarily on PS2 development, I'm fairly certain you'd get a different answer.
 