Is PS3 hard to develop for, because of CELL or ...

And despite the Xbox being basically a PC in a box, it was just peachy and could do just about everything the PS2 could and more, as a console.

As mentioned already, it came out later, right? If you reverse the situation and the Xbox had been released before the PS2, then the PS2 would probably have been more powerful.

So having a PC heritage in no way cripples a console as a console.

Agreed, but neither gives it an instant advantage.

The choice of the IBM PPC was more to do with theoretical processing power per dollar spent. I'm sure they would have been perfectly happy with a Core 2 Duo or Core 2 Quad. But that would have been at higher overall cost.

I'm pretty sure it also had to do with control, as they own the IP for the design. Would Intel (or AMD) be willing to fully license a Core2Duo-style CPU? I think that's probably the main reason although of course you need good performance/dollar.

In hindsight it may have been the better choice, as there are some things about PPC that you need to code around / work around.

This is the part I don't get. I've programmed on PowerPC before and that ISA is as elegant as it gets. Maybe what happens is that devs who think the world is a PC with an x86 CPU need to learn a different arch, but that's another story.

It'll be interesting if they go for another semi-custom CPU for the next Xbox.

I'm pretty sure they will, for the reasons cited above.

As to the PS3, as noted before, in a world where Playstation dominates sales, the esoteric and basically proprietary nature of the PS3 wouldn't have been a problem. You either code for it and make lots of money (PS2) or code for the "lesser" machine and make less money (Xbox 1).

So, the key question here is: Do they make it esoteric and complex because they can, or because they think that's the best design possible? Or, to put it another way, accepting the fact that programming for the Cell (and the PS2 before it) is hard, do devs get the best performance/results possible even though it takes them a while to master the system, or would it be better to have a platform that's very easy to program for, even if it was not so powerful?

I'm not a dev (not professionally anyway) but I'd love to hear from people who write games for these consoles what their thoughts are.

And with its ease of development (since many devs are familiar with PC development), devs standardized on developing for it first, then porting to other consoles.

Of course, for us PC fans, this has the unfortunate side effect of accelerating the move of PC devs from PC first (or only) to console first (or only). :( And, of course, this migration also helped to cement the X360 as the preferred console to develop on first for multiplatform devs.

Rampant piracy and poor sales might have something to do with it as well. If developing for Xbox and then porting to PC is straightforward, you should still get pretty decent games on PC as long as the assets are ready for higher res/more powerful hardware (thinking shaders here).

I'm afraid the golden age of PC gaming is pretty much gone.

As to the specific weaknesses, I can't pull any off the top of my head; I'm just aware of them from reading the various threads in the console tech subforum, and of how devs have mostly figured out how to work around the differences from a traditional PC CPU.

Is a PC CPU the ideal choice for games? Maybe not.

Cheers.
 
A box with PC heritage is no worse as a console than a box with esoteric parts that are different just to be different.
Maybe as technology progresses this is becoming more the reality, but traditionally consoles used custom parts because standard parts just couldn't do the job. PS2 was holding its own in some areas against PC titles years after PS2's tech was technically superseded. If PS2 had been built around PC components available at the turn of the millennium, we wouldn't have had CON or GT3 or GOW and whatever else has impressed people.

This generation, the potential in PS3 exceeds a moderate PC from its creation (remember PCs have a potentially unlimited budget, so the top end can never be matched!), but the business of creating to PS3's potential limits what is achieved, as teams don't have the time to really delve into the machine, unlike PS2, where it so dominated that it was more worthwhile to dig deep and create a stand-out title. Plus the tools sucked! As such, designing a system with 'hidden potential' makes less sense, when 'readily available potential' will actually see more use. Herein lies the case that the console companies should move away from non-standard hardware and just offer something decidedly PC-like. Perhaps in support of that, the promises of this gen that required all that processing power never happened. We don't have procedural synthesis creating living, varied worlds. We just have standard gameplay with fancy graphics on the whole - something possible with a reasonable CPU and a monster GPU a la PC.
 
I mean, are Sega's various PC-based arcade systems any less arcade machines because they use off-the-shelf PC parts?

Regards,
SB

I agree, most arcade machines are basically a PC behind a curtain

Mod: You'll get editing rights after a few posts. Please note my capitalisation. Capital I's are more important than correctly spelt curtains!
 
Maybe as technology progresses this is becoming more the reality, but traditionally consoles used custom parts because standard parts just couldn't do the job. PS2 was holding its own in some areas against PC titles years after PS2's tech was technically superseded. If PS2 had been built around PC components available at the turn of the millennium, we wouldn't have had CON or GT3 or GOW and whatever else has impressed people.

This generation, the potential in PS3 exceeds a moderate PC from its creation (remember PCs have a potentially unlimited budget, so the top end can never be matched!), but the business of creating to PS3's potential limits what is achieved, as teams don't have the time to really delve into the machine, unlike PS2, where it so dominated that it was more worthwhile to dig deep and create a stand-out title. Plus the tools sucked! As such, designing a system with 'hidden potential' makes less sense, when 'readily available potential' will actually see more use. Herein lies the case that the console companies should move away from non-standard hardware and just offer something decidedly PC-like. Perhaps in support of that, the promises of this gen that required all that processing power never happened. We don't have procedural synthesis creating living, varied worlds. We just have standard gameplay with fancy graphics on the whole - something possible with a reasonable CPU and a monster GPU a la PC.

I fully agree that prior to the year 2000, making a console based on PC parts didn't make a lot of sense; I even provided some reasoning behind why PS2 was better off with a custom CPU vs. a standard PC CPU. I can even see an argument for the custom GPU, especially considering that development for it started in the '90s.

But it could also be argued that a PC-based graphics solution would have been just peachy. The Dreamcast didn't suffer graphically compared to the PS2 with what was basically a PC-based graphics chip, for example. Additionally, I would be willing to bet that the Dreamcast GPU was less costly to develop and implement than the custom graphics system in the PS2. However, you won't hear anything about that, as the PS2 was such a success that it didn't really matter how much was spent on R&D specifically for the console. Likewise, if the PS3 had continued to dominate the console space, there wouldn't be any discussion about how much it cost to R&D and manufacture.

But starting sometime between the original Xbox and the X360, leveraging or slightly modifying a PC part started to reduce R&D and manufacturing costs. Leveraging a large pool of new programmers already relatively familiar with the architecture is an added bonus over having to relearn virtually everything related to hardware all over again.

Going forward, unless one of the big players seriously stumbles again (could happen), cost is going to remain a large factor, as is time spent on R&D (theoretically less time for something similar to a PC versus more custom hardware), and ease of development. Not only that, but it's going to cost a LOT of money to make a custom piece of hardware more performant and cost-effective than a standard PC component nowadays. Which wasn't necessarily the case in the years prior to 2000.

But yes, no argument from me that the Atari 2600, NES, Super NES, PS1, etc. were better off with custom hardware pre-2000. From 2000 on, however, I see custom hardware making less and less sense, other than as a means to reduce cost/power consumption or retain IP rights.

Regards,
SB
 
The consoles get specialized hardware because it, in theory, allows them to get the best hardware for the application per cost. They can cut out useless extras. They can get more control over its manufacture and the tech itself. Xbox 1 was a massive cost failure and friction with NV over the Xbox chipset's cost may be why MS didn't go with NV this time.

I always was amused that the Gamecube could essentially match what the Xbox could do most of the time. Nintendo made money on every Cube. MS sure as hell didn't make money on the Xbox. Cube was a very cheap, very efficient little piece of hardware. I think it showed that Nintendo knows how to engineer a game machine and MS was a money throwing noob rich kid. :) I see 360's design as MS learning from their Xbox 1 screw ups by cutting costs everywhere and increasing control of the hardware IP.

I don't think custom hardware is going away. They don't need to start from scratch you know. RSX is custom hardware even if it is basically a GF 7600/7900 hybrid. NV2A was not exactly a GF3 or a GF4. Xbox 360's GPU isn't completely separate technology from the PC space either, although it is certainly somewhat unique. The graphics companies just throw together their technology into whatever the client wants and it's all fueled by R&D from the PC space. All of the 3D consoles have been based on tech that originated in 3D workstation/PC hardware.
 
The consoles get specialized hardware because it, in theory, allows them to get the best hardware for the application per dollar. They can cut out useless extras. They can get more control over its manufacture and the tech itself. Xbox 1 was a massive cost failure and friction with NV over the Xbox chipset's cost may be why MS didn't go with NV this time.

Yes, I already addressed the CPU aspect previously; in the past that was the only place where you might have needed a different set of functional units than a PC. But as said, with consoles taking on more and more PC-centric duties, it starts to make more sense to keep some of that, especially if it ends up being cheaper to go with a slightly modified PC core than a relatively custom core. GPU features will mostly be needed regardless.

Xbox 1 had a lot of cost-increasing features. But much of it had to do with MS wanting to get something out before it was rendered totally irrelevant. Being able to use off-the-shelf PC parts made that easier, but costlier, since MS couldn't control manufacturing of key parts, i.e. everything that couldn't be sourced from multiple vendors was under someone else's manufacturing control. That not only added an additional layer of cost, but also meant MS couldn't actively pursue cost-cutting measures.

As a result X360 is basically taking everything MS learned from Xbox 1 and applying it to a PC-ish box.

While the PS3, other than the GPU, was basically continuing the trend of what had worked previously for Sony.

That said, I think X360 will continue to use a non-pure-x86 CPU (I doubt Intel would want to give MS any significant IP rights for a CPU core). And since Cell has now been around a while (lessening R&D costs), it's possible they may stick with that. But I think it's just as likely (50/50 chance) that they'll go with something more PC-like.

Regards,
SB
 
What is the weakness of the PPC?
I mean, I know of weaknesses of the 360 PPC, like the in-order execution, but what weaknesses does PPC have in general? It's just an ISA. The 360 CPU is what it is because Microsoft wanted it this way.
For the ISA per se there is no problem, but for the specific implementation MS and IBM came up with, some issues come to mind ;)
LHS (load-hit-store), for example, which cannot always be avoided.
Overall Xenon is pretty "weak" as a CPU if you compare it to x86 or Cell, and it's far from being the best PPC implementation to date (understatement of the year :LOL: ).
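
For anyone who hasn't run into it: LHS happens when a load has to wait on a recently issued store to the same address, which the in-order cores in Xenon and the Cell PPU handle badly. A minimal sketch of the pattern, in illustrative C++ rather than anything from a real engine:

Code:
// Illustrative only: the classic load-hit-store (LHS) shape on an in-order
// PPC core (Xenon / the Cell PPU). Names and numbers are made up.

// Bad: every iteration stores the accumulator to memory and immediately
// reloads it, so each load stalls until the in-flight store drains.
void sum_bad(const int* src, int count, int* total)
{
    *total = 0;
    for (int i = 0; i < count; ++i)
        *total += src[i];   // load *total, add, store *total -> LHS each pass
}

// Better: keep the running sum in a register and write it back once.
void sum_better(const int* src, int count, int* total)
{
    int acc = 0;            // stays in a register, no store/load round trip
    for (int i = 0; i < count; ++i)
        acc += src[i];
    *total = acc;
}

The same stall shows up whenever a value bounces through memory between register files (int/float conversions are the usual culprit), which is part of why it can't always be avoided.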
 
For the ISA per se there is no problem, but for the specific implementation MS and IBM came up with, some issues come to mind ;)
LHS (load-hit-store), for example, which cannot always be avoided.

That's what I said (or tried to) in my post. :)

Overall Xenon is pretty "weak" as a CPU if you compare it to x86 or Cell, and it's far from being the best PPC implementation to date (understatement of the year).

Sure, but as I said, they got what they paid for; they could for example have gotten a PPC970, but they decided against it.
I read somewhere (I think it was in "The Race for a New Game Machine") that Microsoft initially wanted OOE (out-of-order execution) for Xenon, but they couldn't fit 3 cores inside their size budget, so they decided against it.
 
Is the original PS3 CELL that bad for developers that they have warned Sony a riot will break out if SCE went back to CELL...? Intriguing...

Gongo, don't over-analyze; the PS4 processor decision and the forces guiding it have been hashed out a hundred times around here. Yes, PS3 was hard, and yes, because of Cell... but that's all over now for all intents and purposes. Cell v2 would have (and may still have) a fine shot for PS4, but the processor decision will be based on the cost/benefit and judged alongside competing solutions. I'd be a fan of a v2 myself. As for advance notice for devs, I think with the tools that will be in place regardless of architecture, Sony intends for the transition to be a smooth one. That said, devs are still firmly entrenched in the current gen, lest we forget and get ahead of ourselves.
 
I was not sure where to post this, but I hope I am not way out of bounds with this thread.

I looked through this presentation and most of it is way beyond my ability to fathom, but I did some assembly in high school on 6502 and 6510 CPUs, so I grasped a bit.

http://www.infoq.com/presentations/click-crash-course-modern-hardware

Still, what I found interesting was that at around 48:20 it says to think data and not code, just like Insomniac (Acton) / Sony keep chanting, and that the way to get performance is to avoid cache misses. To avoid those you need to think data and not code. Then, thinking back to the missing branch prediction on the SPUs and no OoO on the PPU (I think), etc., I got a feel for why Cell is so different and "harder" to write optimized programs for, especially when converting from x86 or even the Xbox Xenon.

My conclusion, however faulty it might be, is that they just removed the training wheels, put the programmers back 40 years, and basically said: we do it this way now, but you need to crank out projects better than before in less time.
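
To make the 'think data, not code' point concrete, here's a rough sketch (C++, field names and the example invented, not taken from the talk) of the same particle update in a cache-hostile layout and a cache-friendly one:

Code:
// Array-of-structs: every particle drags its colour, lifetime and mass
// through the cache even though the update never touches them.
struct ParticleAoS {
    float pos[3];
    float vel[3];
    float color[4];
    float lifetime, mass;
};

void update_aos(ParticleAoS* p, int n, float dt)
{
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < 3; ++k)
            p[i].pos[k] += p[i].vel[k] * dt;
}

// Struct-of-arrays: positions and velocities stream through the cache as
// contiguous runs (and happen to be exactly what you'd DMA to an SPU).
struct ParticlesSoA {
    float *pos_x, *pos_y, *pos_z;
    float *vel_x, *vel_y, *vel_z;
};

void update_soa(ParticlesSoA& p, int n, float dt)
{
    for (int i = 0; i < n; ++i) {
        p.pos_x[i] += p.vel_x[i] * dt;
        p.pos_y[i] += p.vel_y[i] * dt;
        p.pos_z[i] += p.vel_z[i] * dt;
    }
}

Same algorithm, same instruction count, very different cache behaviour - which is the whole "data, not code" argument in one example.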
 
Well, it would help to have an actual developer talk about it...

Regarding cost, there's more to a machine than processors. IIRC, the first PS3 had a bigger HDD than the 360, more ports and slots, more expensive RAM, extra processors for BC with the PS2, and a controller with built-in rechargeable battery and tilt. Not to mention that Sony's engineers seemed to have a better handle on the heat transfer problem, and the console seemed to be overall better manufactured (MS wouldn't be getting all those RRODs early on unless they cut corners somewhere between the final design specs and manufacturing).

On topic, I dunno...how could you even tell? KZ2 throws a lot of crap on the screen and has excellent animations. Could the 360 do the same game? I have no idea, to be honest.
 
It's very difficult to quantify the cost of developing for Cell; the most reliable numbers (in my opinion) are Epic's rough ones, which run something like this:

Compared to standard code,
Multithreaded code is 2x as expensive,
Playstation 3 code is 5x as expensive,
GPGPU code is 10x+ as expensive :mrgreen:

I would argue that to get 'optimal' performance out of either console, it's 5x for each platform.
The key difference is that a much larger proportion of PS3 code needs this attention.
You can very approximately show this by comparing the PPU and the Xenon cores: Xenon has three such cores to Cell's single PPU, so assuming performance parity between cores, the implication is that to reach parity, two thirds of your performance-critical code must run on SPUs in parallel.

Of course that's glossing over a lot of the subtleties involved. There is a case to argue it may be 3/4 or more.

There is no doubt that the SPUs are exceptionally fast.
The problem being it's not a trivial thing to split a game's workload over 8 threads and a GPU, let alone when 6 of the threads have to do their own memory management.
Which is exactly why - as mentioned - you need to treat it as a data problem, not a code problem.
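
To give a flavour of what 'their own memory management' means: SPU code typically streams data through the 256 KB local store in chunks, double-buffering so the next transfer overlaps the current work. A very rough sketch in plain C++, with memcpy standing in for asynchronous DMA and all sizes/names invented:

Code:
#include <algorithm>
#include <cstddef>
#include <cstring>

constexpr std::size_t CHUNK = 1024;   // pretend per-buffer local-store budget

// The actual work; it only ever touches data already in the local buffer.
static void process(float* buf, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        buf[i] *= 2.0f;
}

// Stream 'count' floats from main memory through two local buffers.
// On a real SPU the memcpy calls would be asynchronous DMA kicks, and the
// point of having two buffers is that the next fetch overlaps process().
void run_job(float* main_mem, std::size_t count)
{
    float local[2][CHUNK];
    std::size_t done = 0;
    int cur = 0;

    std::memcpy(local[cur], main_mem,
                std::min(count, CHUNK) * sizeof(float));             // "DMA in" first chunk

    while (done < count) {
        const std::size_t n = std::min(count - done, CHUNK);

        const std::size_t next = done + n;                           // prefetch next chunk
        if (next < count)
            std::memcpy(local[cur ^ 1], main_mem + next,
                        std::min(count - next, CHUNK) * sizeof(float));

        process(local[cur], n);                                      // compute on this chunk
        std::memcpy(main_mem + done, local[cur], n * sizeof(float)); // "DMA out" results

        done += n;
        cur ^= 1;
    }
}

None of that is rocket science for one array of floats; the pain is doing it for every system in a game and keeping six of these pipelines fed at once.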
 
It's very difficult to quantify the cost of developing for Cell; the most reliable numbers (in my opinion) are Epic's rough ones, which run something like this:

Compared to standard code,
Multithreaded code is 2x as expensive,
Playstation 3 code is 5x as expensive,
GPGPU code is 10x+ as expensive :mrgreen:

I would argue that to get 'optimal' performance out of either console, it's 5x for each platform.
The key difference is that a much larger proportion of PS3 code needs this attention.
You can very approximately show this by comparing the PPU and the Xenon cores: Xenon has three such cores to Cell's single PPU, so assuming performance parity between cores, the implication is that to reach parity, two thirds of your performance-critical code must run on SPUs in parallel.

Of course that's glossing over a lot of the subtleties involved. There is a case to argue it may be 3/4 or more.

There is no doubt that the SPUs are exceptionally fast.
The problem being it's not a trivial thing to split a game's workload over 8 threads and a GPU, let alone when 6 of the threads have to do their own memory management.
Which is exactly why - as mentioned - you need to treat it as a data problem, not a code problem.

Hmmm, Epic forgot one for those who were early adopters:
trying to get support from Epic for xx3 on the xx3 is xxx more expensive

Hypothetically, of course...
 
That said, I think X360 will continue to use a non-pure-x86 CPU (I doubt Intel would want to give MS any significant IP rights for a CPU core). And since Cell has now been around a while (lessening R&D costs), it's possible they may stick with that. But I think it's just as likely (50/50 chance) that they'll go with something more PC-like.

Regards,
SB

Why x86 for the next generation, since the 360 is PPC? I personally would expect (and hope) to see x86 game development wind down due to PC piracy and lackluster sales, and everything move on to console + GPGPU. I don't like the thousands of x86 instructions that need die space to be decoded when it's not needed for console BC, unlike PC BC, where it is needed.
 
It's very difficult to quantify the cost of developing for Cell; the most reliable numbers (in my opinion) are Epic's rough ones, which run something like this:

Compared to standard code,
Multithreaded code is 2x as expensive,
Playstation 3 code is 5x as expensive,
GPGPU code is 10x+ as expensive :mrgreen:
Note the explanation there is that these multipliers are for writing an algorithm: if a single-threaded algorithm costs X, the multithreaded version costs 2X, and so on. A whole game doesn't cost 2x/5x as much. And once the experience is there, the cost of developing algorithms should reduce to something very similar to what we have now. There'll always be the parallel data to worry about, but directing those many cores to process data should be no harder to design for. As I see it, back in the day few people had the mindset to create efficient algorithms, but once they did, mainstream programmers could just adopt them. Now the standard has changed to massively parallel code, and once again we'll be dependent on the specific thinkers to solve the design issues, and then everyone will be able to make use of the path they've forged.

And there never was, and never will be, a situation where the hardware paradigm changes and the developers' way of thinking changes along with it smoothly enough to make the transition seamless.
 
Note the explanation there is that these multipliers are for writing an algorithm: if a single-threaded algorithm costs X, the multithreaded version costs 2X, and so on. A whole game doesn't cost 2x/5x as much. And once the experience is there, the cost of developing algorithms should reduce to something very similar to what we have now. There'll always be the parallel data to worry about, but directing those many cores to process data should be no harder to design for. As I see it, back in the day few people had the mindset to create efficient algorithms, but once they did, mainstream programmers could just adopt them. Now the standard has changed to massively parallel code, and once again we'll be dependent on the specific thinkers to solve the design issues, and then everyone will be able to make use of the path they've forged.

And there never was, and never will be, a situation where the hardware paradigm changes and the developers' way of thinking changes along with it smoothly enough to make the transition seamless.

But isn't this whole Epic estimation PR bogus?!
I mean it highly depends on the algorithm you look at...so even the 2x cost for multithreading feels meaningless, and the same goes for the CELL numbers...
 
But isn't this whole Epic estimation PR bogus?!
I mean it highly depends on the algorithm you look at...so even the 2x cost for multithreading feels meaningless, and the same goes for the CELL numbers...

This is not an "Epic PR estimation", it's a quote from a Tim Sweeney lecture about programming languages; no wonder it doesn't include art production, marketing etc.

Trivial algorithms aren't interesting, the worst case is.

It's trivial to sum two float arrays of 1 million elements each into a third one (even though the 1x-2x-5x-10x thing holds true there too, as anyone trying to set up an OpenCL/CUDA environment, or ship an application using OpenMP can attest), but few jobs in programming are so easy.
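
For reference, the 'trivial' case really is a couple of lines each way; here's a sketch of the 1x and 2x versions in C++ (the OpenCL/CUDA variants add device setup, buffer management and a kernel on top of the same loop):

Code:
#include <cstddef>

// 1x: single-threaded sum of two float arrays into a third.
void add_serial(const float* a, const float* b, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// "2x": the same loop with OpenMP (compile with -fopenmp or /openmp).
// Every iteration is independent, so this is as easy as threading ever gets.
void add_parallel(const float* a, const float* b, float* out, std::size_t n)
{
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(n); ++i)
        out[i] = a[i] + b[i];
}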
 
This is not an "Epic PR estimation", it's a quote from a Tim Sweeney lecture about programming languages; no wonder it doesn't include art production, marketing etc.

Trivial algorithms aren't interesting, the worst case is.

It's trivial to sum two float arrays of 1 million elements each into a third one (even though the 1x-2x-5x-10x thing holds true there too, as anyone trying to set up an OpenCL/CUDA environment, or ship an application using OpenMP can attest), but few jobs in programming are so easy.

Do I understand this estimation correctly (only the multithreading one)?

Say I need time = 10 to develop the single-threaded algorithm.
Does the estimation work like this:
I need time = 2*10 = 20 to turn this single-threaded algorithm into a multithreaded one??
Summing up, the whole development needs time = 30, i.e. 2/3 of this time is spent on parallelization?

Then in my opinion this is hard to believe, especially if we are looking at complex algorithms and especially if we consider shared-memory parallelization!
Because it really takes a lot of time to set up a complex algorithm**, and if I additionally need double that amount of time for the shared-memory parallelization... that does not compute in my head and is definitely not what I am experiencing (scientific computing community).

If we are talking about MPI parallelization, we have to discuss again, but I don't think that MPI parallelization applies to computer graphics...

Regarding GPGPU computing... I suppose this lecture was held some time ago, because this is a very fast-developing area with lots of new methods/tools (the keyword in this case is 'PyCUDA': http://mathema.tician.de/software/pycuda)

**I am not talking about just implementing an existing algorithm, but about developing the algorithm and then implementing it... it would be unreasonable to talk only about implementing single-threaded code without taking the dev time (needed to design the algorithm) into account.
 
Coming up with new algorithms is something so extremely rare, it can be safely ignored for all intents and purposes. Most of the programming work is in implementing well-known ideas, trying different combinations and configurations; I think the numbers quoted by Tim Sweeney are for that: implementation.

In many ways, coming from an MPI mindset will be more useful for Cell work than SMP experience.

PyCUDA is just a Python wrapper for CUDA? Hardly anything revolutionary that will save the industry from the Rising Evil Force of parallelization.
 
Coming up with new algorithms is something so extremely rare
Ah okay, so that is the difference.

, it can be safely ignored for all intents and purposes. Most of the programming work is in implementing well-known ideas, trying different combinations and configurations; I think the numbers quoted by Tim Sweeney are for that, implementation.
Okay, it makes sense now.

PyCUDA is a Python wrapper for CUDA?
It is (it includes, for instance, some fancy tools for GPU noobs to help you with the memory layout...)

save the industry from the Rising Evil Force of parallelization.
I agree.
 