Yoshida confirms SCE working on new hardware

The problem there is that all the really good-looking 1st party titles take quite a few years + a lot of cash investment to develop. Unless, of course, you're already leveraging work done previously (KZ3 leveraging work done on KZ2, for example).

SB

That's really more an issue of poor planning on Sony's part. I mean, if MS can launch with a high-caliber game like Gears of War, why can't Sony?
 
There are tasks where an algorithm tweaked to work on a "see all memory" type of architecture can run faster than an algorithm tweaked to run on a "see only a sliver of memory at a time" architecture. If you are talking about any type of task that sequentially churns through data, then Cell will always win. But there are other cases, such as when you are randomly dealing with very large data sets, where being able to see all of that data at all times lets you write an algorithm tailored to that strength that may run faster on a traditional CPU than on an SPU.
Any task that needs to access large data sets randomly isn't going to run fast anywhere and will probably be bottlenecked by I/O, but what are some such tasks? In all the CS problems that I've seen, the search space is so large that it's infeasible to look at the entire solution set and pick the best (min or max) result; that's why we have a ton of algorithms to solve NP-complete problems by approximation.
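To make that streaming-vs-random-access point concrete, here's a minimal C++ sketch (purely illustrative, not taken from any real engine or from the code discussed above). The first loop churns through the data in order, which maps naturally onto the fixed-size chunks an SPU would DMA into its local store; the second chases scattered indices across a large array, the kind of access pattern where a "see all memory" CPU with a big cache tends to have the edge.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sequential pass: the access pattern is perfectly predictable, so the data
// can be processed in fixed-size chunks that fit a small local store
// (SPU-style) or that prefetch/cache well on a conventional CPU.
uint64_t sum_sequential(const std::vector<uint32_t>& data)
{
    uint64_t sum = 0;
    for (uint32_t v : data)
        sum += v;
    return sum;
}

// Random access: each iteration may touch any part of the data set, so there
// is no small working set to stage into a ~256 KB local store; a CPU that can
// see (and cache) the whole array has the advantage here.
uint64_t sum_random(const std::vector<uint32_t>& data,
                    const std::vector<std::size_t>& indices)
{
    uint64_t sum = 0;
    for (std::size_t i : indices)
        sum += data[i];
    return sum;
}
```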
 
That's really more an issue of poor planning on Sony's part. I mean, if MS can launch with a high-caliber game like Gears of War, why can't Sony?

Didn't Gears of War come out a year after the 360 launched?

Launching with a killer app is difficult and a big risk. MS managed it with Halo, but that game had been in development for a long time and was an almost freakish rarity - and even that had the traits of a desperately rushed launch title (repetition of huge chunks of levels).

It's only Nintendo that launch reliably with killer apps, but they base their console releases around having first party killer apps.

To use one of my favourite examples (for everything), Sega started work on Shenmue for the DC about 2 years before the DC came out, before they even had prototype hardware. Consequently, development was more difficult and inefficient than it would have otherwise been, and some of the assets suffered in quality (being pre-first gen and old by the time they were used up to four years later).
 
Any task that needs to access large data sets randomly isn't going to run fast anywhere and will probably be bottlenecked by I/O, but what are some such tasks? In all the CS problems that I've seen, the search space is so large that it's infeasible to look at the entire solution set and pick the best (min or max) result; that's why we have a ton of algorithms to solve NP-complete problems by approximation.

We had AI code at an old firm I was at whose dependencies and data set were too difficult to make work on the SPUs without changing how it determined AI. It did stuff like keeping many frames of data around, keeping history/patterns of player inputs, keeping a history of success/fail in AI decisions, etc., with much of that data spanning not just the current game but other games in the player's past as well. If you have enough SPU time to spare then you can just let them constantly shuffle data in/out from main memory non-stop, but we didn't. On an SPU you only have around ~110 KB to use, more or less. Given enough time most tasks can probably be reworked to fit that model in the way they are currently coded, but not always. I think you would agree that AI is one of those areas of gaming that really hasn't made much advancement; CPU characters are as dumb as ever, so it's one of those fields where approximation just isn't working well enough. Note as well the "in the way they are currently coded" part that I wrote: being forced to think in ~110 KB chunks is fine for decoding video and cutting through verts, but it can be a barrier when thinking about/designing logic as fluid as AI.
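For readers who haven't worked with the local-store model, here is a rough, hypothetical sketch of the "constantly shuffle data in/out" pattern mentioned above: double-buffered streaming of a large main-memory buffer through a small local store. The dma_get/dma_wait calls are placeholders (implemented here as plain memcpys so the sketch compiles anywhere), not real SDK functions. The pattern is a natural fit for streamable data like audio, video or vertex work, and much less so when the working set is a web of histories and cross-references like the AI described above.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Stand-ins for an asynchronous DMA interface. On real hardware these would
// start and wait on tagged local-store <-> main-memory transfers; here they
// are synchronous memcpys so the sketch runs on any machine.
static void dma_get(void* local_dst, const uint8_t* main_mem_src,
                    std::size_t bytes, int /*tag*/)
{
    std::memcpy(local_dst, main_mem_src, bytes);
}
static void dma_wait(int /*tag*/) {}

constexpr std::size_t CHUNK = 16 * 1024;  // sized to leave room for code/stack in a small LS
static uint8_t buffer[2][CHUNK];          // double buffer: process one half, fetch the other

// Placeholder for the actual per-chunk work (here: a trivial checksum).
static uint64_t process_chunk(const uint8_t* data, std::size_t bytes)
{
    uint64_t sum = 0;
    for (std::size_t i = 0; i < bytes; ++i)
        sum += data[i];
    return sum;
}

// Stream a large main-memory array through the small local store in fixed-size
// chunks. On real hardware the fetch of the next chunk would overlap the
// processing of the current one.
uint64_t stream_job(const uint8_t* src, std::size_t total_bytes)
{
    uint64_t result = 0;
    int cur = 0;
    std::size_t first = total_bytes < CHUNK ? total_bytes : CHUNK;
    dma_get(buffer[cur], src, first, cur);                    // prime the pipeline
    for (std::size_t off = 0; off < total_bytes; off += CHUNK)
    {
        std::size_t len = total_bytes - off < CHUNK ? total_bytes - off : CHUNK;
        int next = cur ^ 1;
        if (off + len < total_bytes)                          // start fetching the next chunk
        {
            std::size_t next_len = total_bytes - (off + len);
            if (next_len > CHUNK) next_len = CHUNK;
            dma_get(buffer[next], src + off + len, next_len, next);
        }
        dma_wait(cur);                                        // current chunk is ready
        result += process_chunk(buffer[cur], len);
        cur = next;
    }
    return result;
}
```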
 
One place it has a benefit is with the younger audience, who are dependent on their parents buying them the console. A very common question a parent will ask in such cases is "does it play our old games?", so there is some benefit there.

I think that is a pretty accurate analysis. It should be noted that the mass market does not camp outside the shops when a new console is released. Many people go and buy a new console when their old one breaks, hence BC can be a selling point worth nurturing.
 
I'll ask the dumb questions then :)

I would assume that a Cell2 with a souped-up PPU (or multiple PPUs) and more SPUs would be easier for current developers to take advantage of, based on their knowledge of the current Cell.
And libraries for Cell would probably be easier to port to Cell2 than to rewrite or port for a new CPU.

So the question then is how much "worse" that Cell2 would be compared to another CPU (heterogeneous multicore?) that would fit into a next-generation console, i.e. the "Xbox 720". By worse I mean performance and "ease" of development.

Also, I keep reading that developers are having/had a hard time doing "proper" multicore development, that the wacky Cell architecture helps push developers into doing it "right", and that these new multicore skills also benefit X360 development.
So assuming that is correct, would it be unreasonable to expect the X720 to demand more "correct" multicore development practices, i.e. that it gets closer to the development approach needed for Cell anyway?
 
We had AI code at an old firm I was at whose dependencies and data set were too difficult to make work on the SPUs without changing how it determined AI. It did stuff like keeping many frames of data around, keeping history/patterns of player inputs, keeping a history of success/fail in AI decisions, etc., with much of that data spanning not just the current game but other games in the player's past as well. If you have enough SPU time to spare then you can just let them constantly shuffle data in/out from main memory non-stop, but we didn't. On an SPU you only have around ~110 KB to use, more or less. Given enough time most tasks can probably be reworked to fit that model in the way they are currently coded, but not always. I think you would agree that AI is one of those areas of gaming that really hasn't made much advancement; CPU characters are as dumb as ever, so it's one of those fields where approximation just isn't working well enough. Note as well the "in the way they are currently coded" part that I wrote: being forced to think in ~110 KB chunks is fine for decoding video and cutting through verts, but it can be a barrier when thinking about/designing logic as fluid as AI.

Thank you for the insight. Would having 512 KB of LS help, or would you need megabytes of it? AI is just plain hard on any kind of architecture, which is why everyone resorts to scripting. It's just a hard problem to solve, and Halo 3's AI seems to suit gamers just fine.
 
Really, what task that's not related to running an OS is better on Xenon? Can Xenon do MLAA? Can it do Folding@home? Can it decode H.264 streams higher than 10 Mbit/s?

The tasks an x86 or Xenon can do better than Cell are not very important or relevant in a game console; I'd love to see you try to run MW2 on a 7900GTX and a 1.86 GHz C2D.

The only point you have is that Cell requires programming effort, but that effort has been largely spent and there are now frameworks each dev uses.

Games have come out that run just as well or better on a PC with said specs than on a PS3, and games not running well on that PC would be held back by the GPU, not the CPU.
 
You've got to remember that the CPU in the PS3 isn't the Cell Sony had hoped for (or their original vision).

The so-called Broadband Engine was supposed to be 4 Cells @ 4 GHz with 64 MB of off-chip L3 cache. Algorithms that aren't suitable for the SPUs were supposed to run on the PowerPC cores; there were supposed to be four of them, able to access the L3 cache as well as main memory without having to break things up into small chunks for the SPUs.

What you get in the PS3 is a severely compromised version of their original vision. They just didn't have enough PPU in the PS3 to cater to the problems they already knew about; that is, not all algorithms suit the SPUs. Frankly, if they were planning on a 7-SPU Cell, they might as well have gone for a 4-6 core PowerPC CPU and called it a day. The Cell in the PS3 doesn't have enough peak performance to be worth the extra work (read: more development dollars) for the gain it offers.

Moving forward they would have to cram something like 16+ Cells into the PS4; I think there are cheaper alternatives that can offer a better solution.
 
That's really more an issue of poor planning on Sony's part. I mean, if MS can launch with a high-caliber game like Gears of War, why can't Sony?

If the PS3 had launched with an easy-to-program-for CPU using established graphics programming libraries (DirectX), they probably could have.

It took years just to get solid common libraries to use on the PS3. The quality of multiplatform games on the PS3 tracks fairly well with the development of programming libraries for Cell.

None of that stuff existed in anything other than very rudimentary form when the console was launched.

The major benefit of an easy-to-program-for platform is that there's much less time involved in getting your initial games up and running. You just leverage stuff you already know (DirectX is a fairly well-understood API). With the PS3 you not only have to learn what is possible and what isn't, you have to learn how to do things its way. Then you have to create the libraries and everything else.

That's a very time-intensive task that can't realistically be shortened. Moving to the PS4, if they re-use Cell, they'll be able to leverage that in a similar way to how devs could leverage DirectX and the more familiar CPU architecture on the X360. Then again, if development of Cell slows, you risk being underpowered compared to a competitor that goes with a CPU that still receives strong R&D spending. For example, I wouldn't be at all surprised if, for the next Xbox, MS went with either an AMD or Intel CPU. By then it's quite possible that a 6-core AMD CPU will be fairly cost-effective.

Regards,
SB
 
The 360's launch library wasn't incredibly strong, and it certainly didn't include Gears. So it's not a completely nonsensical comparison; you probably want to talk about the PS3's first year vs. the 360's instead.
 
The 360's launch library wasn't incredibly strong, and it certainly didn't include Gears. So it's not a completely nonsensical comparison; you probably want to talk about the PS3's first year vs. the 360's instead.

Yes, I was referring to the first year or so of each console's lifetime. Around the 3rd and 4th years of the PS3's lifetime in particular is when multiplatform titles started getting closer to parity, as libraries were developed and shared and developers came to grips with the PS3 architecture.

KZ2 was certainly quite impressive, launching around 2 years and 3 months after the PS3's launch, and showed what the PS3 was capable of with 4+ years of development.

Had the system been easier to program for, with fairly standard APIs, I don't think it'd be a stretch to say it could have come out within 1 year of launch, similar to Gears of War (which wasn't even a first-party title with the same level of funding). Presumably, being a first-party dev, the KZ2 team had far more intimate knowledge of the PS3 during its R&D cycle, which should have given it a leg up.

Regards,
SB
 
2008 wasn't that bad, either. GTA was probably the biggest disparity out of the games people cared about (and the biggest game that year).
 
If the PS3 had launched with an easy-to-program-for CPU using established graphics programming libraries (DirectX), they probably could have.

It took years just to get solid common libraries to use on the PS3. The quality of multiplatform games on the PS3 tracks fairly well with the development of programming libraries for Cell.

None of that stuff existed in anything other than very rudimentary form when the console was launched.

That, along with the high price, really screwed the PS3, yet it's still only ~5 million behind the 360 worldwide.

I doubt Sony will repeat those same mistakes with the PS4; that's why it can launch a year later and still be in great shape against MS.
 
Games have come out that run just as well or better on a PC with said specs than on a PS3, and games not running well on that PC would be held back by the GPU, not the CPU.
Which games would those be? And make that a 7900GTX with a 128-bit memory interface and 256 MB of video memory...
The Cell is just fine today, and an improved one with more PPU and SPU cores would work fine for the next PS4, or even configurable cores where you could use them as PPUs (with the LS becoming cache, at some speed penalty) or as SPUs, handling all memory transfers manually to unlock more speed when you have the coding resources available. It's the GPU side that needs work.
 
The point I was (doing a bad job of) making was that there hasn't been, and still isn't, a special "Cell advantage" for multiplatform games. The reasons may have changed, but the fact that 360 stuff still looks and runs better hasn't.

Exclusive games don't matter - if you hire the talent and spend the time and money making an exclusive game it'll look good or perhaps even great, and then you can easily market it to your fanbase as being technically impossible on the other guy's machine (because it lacks Cell or Blu-ray or eDRAM or Blast Processing or whatever else it is that the core gameplay and art style could easily survive without).

If Sony truly are going to be last entering into the next generation as they're suggesting then all they really need from their next gen CPU is to be able to run quick ports of big games at least as well as their competition. That's it. Any more power would be overkill (which is fine if it's cheap or free), and any extra processing power that came at the expense of multiplatform games would be a mistake. Graphics, at least in the early stages of the generation, are a different matter though.

I'm not sure that a bunch of Cell processors will be as usable as an 8-core x86 or Power processor even by next generation.

It is really embarrassing to read someone or anyone comparing "Blast Processing" to Cell and Blu-ray; it just shows how screwed up the mentality around technology is to this day.

But your other argument is just as ridiculous. Care to explain why multiplatform devs DID NOT BOTHER to make meaningful distinctions in gameplay, AI, physics and GRAPHICS back when the PS2 and Xbox 1 represented an extreme difference in supposed specs? I wonder what excuse can be used: the Xbox 1 offered a higher-clocked CPU, a 2001-level GPU and well over TWICE the RAM, as well as the supposedly easier-to-develop-for DirectX API tool set.

On the current gen, a first-party game like MLB The Show graphically embarrasses the multiplatform efforts; it's too bad there are no first-party football, soccer, hockey or cricket games anymore...

BTW, it makes more sense to mention the DirectX API or XNA alongside Blast Processing, because they are all registered marketing trademarks; only the latter is not owned by Microsoft.

I'll try and put it as a question:

What's the value in an awesome supercomputer CPU in a console if at first (during the most crucial period) it holds the system back then, later, it sits there underutilised because multiplatform games aren't targeting it?

Cell has to be some of the most expensive (in terms of $$ and silicon) culling and anti-aliasing hardware ever made. ;)



I don't know how expensive re-engineering Cell would be, or if anyone would want to do it. Maybe Cell could easily deal with next-gen engines primarily targeting more complex processor cores, but I wouldn't want to bet on it!

Bobcat could get around many of the x86 pricing issues, and if ATI could integrate Bobcat cores with a powerful GPU maybe that could be the best of both worlds...

Again, look at the Xbox 1's technological lead over the PS2. What you are saying is that Microsoft should never have bothered making that console, especially considering that only one game managed to rise above the sea of mediocre mess; by current standards the Xbox 1 was basically a failure then, and it wasn't Microsoft's constant investing and marketing that did anything to maintain that console.

Also, Cell software development is now complete enough that it should technically be considered "easy to develop for"; however, to be a programmer you have to study books and such, so if you don't put in any effort to study you are not getting anywhere.

The only way to go with Cell is to evolve it; it's not that hard really, there is already PowerXCell. Assuming that x86 or ATI should be considered for Sony's next PlayStation is basically going to cause the system to collapse. This thread is about new hardware, but it's not really clear what hardware it is.

Didn't Gears of War come out a year after the 360 launched?

Launching with a killer app is difficult and a big risk. MS managed it with Halo, but that game had been in development for a long time and was an almost freakish rarity - and even that had the traits of a desperately rushed launch title (repetition of huge chunks of levels).

It's only Nintendo that launch reliably with killer apps, but they base their console releases around having first party killer apps.

To use one of my favourite examples (for everything), Sega started work on Shenmue for the DC about 2 years before the DC came out, before they even had prototype hardware. Consequently, development was more difficult and inefficient than it would have otherwise been, and some of the assets suffered in quality (being pre-first gen and old by the time they were used up to four years later).

Halo 1 also had a very slow acceptance that grew only because magazine articles kept being written about the old game and forum boards kept talking about it... then again, G4 TV also kept giving the game a lot of free advertising, and it wasn't until 2004 that it became clear that Halo was a real franchise.

SEGA-AM2 under Yu Suzuki had a prototype of Shenmue/Project Berkeley running on Saturn hardware, with its much anti-hyped "nightmare to program for" dual-SH2 setup. Also, when the first rumors of Dural were showing up in 1997, the specs pointed to dual SH4s before that changed.

You've got to remember that the CPU in the PS3 isn't the Cell Sony had hoped for (or their original vision).

The so-called Broadband Engine was supposed to be 4 Cells @ 4 GHz with 64 MB of off-chip L3 cache. Algorithms that aren't suitable for the SPUs were supposed to run on the PowerPC cores; there were supposed to be four of them, able to access the L3 cache as well as main memory without having to break things up into small chunks for the SPUs.

What you get in the PS3 is a severely compromised version of their original vision. They just didn't have enough PPU in the PS3 to cater to the problems they already knew about; that is, not all algorithms suit the SPUs. Frankly, if they were planning on a 7-SPU Cell, they might as well have gone for a 4-6 core PowerPC CPU and called it a day. The Cell in the PS3 doesn't have enough peak performance to be worth the extra work (read: more development dollars) for the gain it offers.

Moving forward they would have to cram something like 16+ Cells into the PS4; I think there are cheaper alternatives that can offer a better solution.

That 64 MB of off-chip L3 cache sounds a bit ridiculous (transistors are not free); the 4 GHz Cell was Sony's engineers' target on a 65nm process that was clearly two years away back in 2005.

Microsoft forcing a next gen in 2005 clearly made Sony panic so as not to let them get a lead, as is the nature of competition. As a matter of fact, I suspect that since the PS2 was officially announced in March 1999, well after the launch of Sega's Dreamcast, Sony had no clue that there would be a Microsoft console coming so soon, and by the time rumors started to come out in late 1999 it was too late to change anything on the PS2, since it was really complete technology waiting for a die shrink to 0.18 micron.

Cell was revealed to be in development in 2001, but it was far-off technology. If Microsoft had not forced a console launch in 2005 and had waited, you can bet your sweet butt that would have given Sony the extra time to work on their chip process for a later launch. Basically, the current gen launched too soon for all parties, regardless of Nintendo taking the crown, which by the way Microsoft bows down to.

If the PS3 had launched with an easy-to-program-for CPU using established graphics programming libraries (DirectX), they probably could have.

It took years just to get solid common libraries to use on the PS3. The quality of multiplatform games on the PS3 tracks fairly well with the development of programming libraries for Cell.

None of that stuff existed in anything other than very rudimentary form when the console was launched.

The major benefit of an easy-to-program-for platform is that there's much less time involved in getting your initial games up and running. You just leverage stuff you already know (DirectX is a fairly well-understood API). With the PS3 you not only have to learn what is possible and what isn't, you have to learn how to do things its way. Then you have to create the libraries and everything else.

That's a very time-intensive task that can't realistically be shortened. Moving to the PS4, if they re-use Cell, they'll be able to leverage that in a similar way to how devs could leverage DirectX and the more familiar CPU architecture on the X360. Then again, if development of Cell slows, you risk being underpowered compared to a competitor that goes with a CPU that still receives strong R&D spending. For example, I wouldn't be at all surprised if, for the next Xbox, MS went with either an AMD or Intel CPU. By then it's quite possible that a 6-core AMD CPU will be fairly cost-effective.

Regards,
SB

But they could not. DirectX is a registered trademark and proprietary Microsoft API software (DX is not hardware); how the hell do you even think that Microsoft would give their ENEMY the same sword or gun they are using??

Both the Nintendo Wii and the Sony PlayStation 1/2/3 use NON-Microsoft software, while the Xbox 1 used DirectX-based platform software tools to give their developers an easy gate into the next-gen Xbox 360 back in 2005. It's pretty obvious that Microsoft does not want Sony or Nintendo to have the same advantage, or they risk losing the market.

And please, the personal computer market is a Microsoft monopoly; Linux distros and Mac OSes are NOT supposed to run any DirectX software or any Microsoft software unless Microsoft actually bothers to make something for them. That's why Linux and Mac use OpenGL.

That "easy to program for platform" you keep referring to is Microsoft monopoly.

I don't think that you can explain the performance of the PS3 as some huge conspiracy or justify a lack of performance against other consoles which managed to release far more compelling content on day one. Sony screwed up and then people turned against them. It's the same thing they took advantage of against the N64; it's just history turning full circle again, nothing more.

The N64 was a completely different beast; there were many major differences, which is why it just does not make sense to even compare it to the PS3...

Maybe if Sony had announced that they were delaying the PS3 until two years after their competition launched, and had then shipped with the same number of games or fewer, then it would make sense. But the fact is that Sony cut the PS3's timeline short rather than allow 3rd parties to get all cozy making games like GTA IV (the leading sales IP of the last decade), which showed up as a tattooed promise on a rival console maker's company rep.

That Microsoft has been desperately paying off franchises that became famous on PlayStation so as to hurt that core audience's mindshare is a whole other story.

Yet you had hundreds of anti-Sony articles, a major blow-up over any game that had any type of flaw, and worst of all the "no games" label. The "Xbox 360 has more games" marketing by these gaming sites and magazines was ridiculous, unless they were being paid off or bought off with free cake and Kool-Aid.
 
You're thinking in terms of single-player, offline games.

Halo 2 was the most popular game on Xbox Live for a long time. It was a killer app for Xbox Live even when people were buying 360s. Allowing people to keep playing Halo 2 removed a barrier to transitioning to the 360.

And did it use BC? Nope, it used a recompile from source. The 360 doesn't have a BC solution; for things that MS thinks will be popular, they recompile.
 
Even then, though, when you look at the stuff that causes the most CPU usage, like encoding, decoding, compression, decompression, even XML parsing, an SPE is very good at exactly those kinds of things. It took quite a while, for instance, for my PCs to catch up on ripping music to my PS3's hard drive, and then the bottleneck was for a large part simply the DVD drive. Take that to gaming, and you have an environment that is much more predictable, and the Cell design can be put to even better use.

SPE and XML? Surely you jest. Music ripping? That's been limited by drive speed for over a decade! Encoding/decoding? PCs from the same era were doing it just fine.

We can discuss this endlessly, but in the end the CPU is probably one of the last problems that the PS3 had. Imagine that the PS3 had had a 3-core PPE-based processor like the 360, and nothing else changed. Would it have been that much better off, you think?

It would have had more games, better games, faster. They might not have had such a broken system architecture. Etc.
 
Well, you don't know what I think. ;) Besides the technical issues, such as translating GS code to shader code, you have all the legal issues with copyrighted material: game IP, songs, art, etc. that may need to be re-negotiated if you want to put out a new binary.

For new binaries there aren't any licensing issues. Shader code parsing/changing is pretty easy. The game IP is the same; you are just using a different binary, no issues there.

With emulation you bypass all that trouble, and Sony can go to third parties like SEGA and say: hey, we have this PSN thing with X million subscribers, we think some percentage may consider re-buying some of your old PS2 games, do you want to make some money with very little hassle?
Edit: It should be noted there will still be legal issues for digital distribution that have to be solved.

There are more legal issues with emulation than there are with recompilation (which has none).




I take that as you accepting my bet. :smile:

I don't care much for BC myself and still have my PS2, but at another place, so I sold my 60 GB PS3 when the Slim was released. I put my PS3 up for sale just below the price of the Slim. I got mail-bombed and had to take it off the net within hours. So there were many people who preferred a more-than-two-year-old 60 GB PS3 with PS2 BC and no warranty over a brand new 120 GB PS3 with a full two-year warranty at basically the same price.

Well, there are always suckers. The reality, though, is that they were unwilling to pay the extra cost for BC when they could get it.

But the vast majority of the workload will be games code; the SPUs are designed to deliver predictable real-time behaviour for that purpose, and people at IBM were quite outspoken about this. It's not designed for database transactions or Linux multitasking or whatever benchmark workload you were thinking about.
Many developers have now grown used to dividing tasks into jobs to distribute over multiple cores; that skill will not go away, as the benefits are numerous and there are now game engines designed around this. If you can stick more cores on the die they will just be happy.

People at IBM have been wrong before and will be wrong again. Task-based programming has nothing to do with SPUs; it is equally applicable to conventional multicore.
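As a rough illustration of that point, here is a minimal, generic job-queue sketch in portable C++ (the names are invented for illustration; a real engine scheduler would add job dependencies, priorities and per-frame batching). The same decomposition into independent jobs can be dispatched to SPUs, Xenon hardware threads, or any conventional multicore.

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A very small job system: each "job" is an independent chunk of work, and
// worker threads pull jobs until the queue is empty.
class JobQueue {
public:
    void push(std::function<void()> job) {
        std::lock_guard<std::mutex> lock(mutex_);
        jobs_.push(std::move(job));
    }

    // Run all queued jobs on worker_count threads and wait for completion.
    void run(unsigned worker_count) {
        std::vector<std::thread> workers;
        for (unsigned i = 0; i < worker_count; ++i)
            workers.emplace_back([this] {
                while (auto job = pop())  // an empty std::function ends the loop
                    job();
            });
        for (auto& w : workers)
            w.join();
    }

private:
    std::function<void()> pop() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (jobs_.empty())
            return nullptr;               // signals "no more work"
        auto job = std::move(jobs_.front());
        jobs_.pop();
        return job;
    }

    std::mutex mutex_;
    std::queue<std::function<void()>> jobs_;
};
```

Usage would be along the lines of pushing one job per animation batch, particle system or AI group, then calling run(std::thread::hardware_concurrency()); nothing in that model is specific to the SPUs.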


I am intrigued. Do you care to elaborate?
Edit: Are you into Quantum computers?

Bah, you don't need quantum computers for an infinite FP machine, just limited workloads. Honestly, I could make an infinite FP machine with under 1M transistors.
 