Yoshida confirms SCE working on new hardware

Really, what task that's not related to running an OS is better on the Xenon? Can Xenon do MLAA? Can it do Folding@home? Can it decode H.264 streams higher than 10 Mbit/s?

Lots of scientific and FP code. And yes, it can decode H.264 streams higher than 10 Mbit/s; PCs have been able to do that for a while. And if they are well designed, they do it with less power.

The tasks an x86 or Xenon can do better than Cell are not very important or relevant in a game console. I'd love to see you try to run MW2 on a 7900GTX and a 1.86 GHz C2D.

You mean like AI? Yeah, game consoles tend to have pretty crappy AI.

The only point you have is that Cell requires programming effort, but that effort has been largely spent and there are now frameworks each dev uses.

It requires continuous programming effort even with the frameworks, well beyond what is required by a better-designed architecture.
 
Do you know of any multiplatform developer complaining about Cell any more? Even Gabe Newell says the best console version of Portal 2 will be on the PS3. I don't think it's that hard any more to do a quick 360/PC port to run on Cell; it's the RSX that makes them look/run like crap. Or at least, that's my impression, but if you have some examples, I'd like to hear them.

Um, Portal 2 will likely look better and play better on both 360 and PC. The only reason he said that is that Sony is supporting Steam on PS3, which is good and cool and needed because Sony is still so far behind in online, but it really doesn't have much to do with the game itself at release.

Going with a redesigned/fixed Cell for next gen wouldn't really be as much of an issue as it was at PS3 launch any more; most devs are familiar with it. I'd wager that as long as the GPU doesn't lag behind the competition, the PS3 ports would run just as well and look just as good. The x86 pricing makes it impossible to have one in a console that is supposed to sell for a profit, so that's not going to happen.

It doesn't matter how familiar they are with it; it's not a familiarity issue. It is more complex and time-consuming to program and get efficiency out of.
 
I don't know how expensive re-engineering Cell would be, or if anyone would want to do it. Maybe Cell could easily deal with next-gen engines primarily targeting more complex processor cores, but I wouldn't want to bet on it!

Considering all engineering work around Cell has been dead for a while now...
 
This is far too broad a statement. Four years into the generation BC is much less attractive, but early on the story is quite different.

If the game is important, they can recompile the binary.


What are you talking about? The PS3 launched with full BC and the 360 had BC for its most popular titles. The PS2 had full BC. The Wii has full BC. If you want to use the market as an argument, shouldn't you have the facts behind you?

360 DID NOT have BC. It had recompiled binaries.
PS3 has sold more consoles without BC than with.
 
We had AI code at an old firm I was at whose dependencies and data set were too difficult to make work on the SPUs without changing how it determined AI...
Surely for these sorts of workloads, the solution is coupling SPUs with a decent conventional core or three, and you share the workloads between cores as is appropriate, no different to sharing work between CPU and GPU. The problem with Cell in this regard was the feeble PPE, but the architecture could accommodate a couple of worthwhile cores going forwards.

Bah, don't need quantum computers for an infinite FP machine, just limited workloads. Honestly I can make an infinite FP machine with under 1M transistors.
No you can't (if by infinite FP machine you mean infinite FLOPS). By definition you'd need either an infinite number of transistors or an infinite clock, and an infinite power draw. Maybe you could design a useless processor that can churn out a million petaflops of pointless calculations to prove your point, but that's so far from infinite (infinitely far) that it's by comparison no processing performance at all.
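A back-of-the-envelope way to put the same point (my formulation, assuming nothing more exotic than a conventional synchronous design):

\[ \text{peak FLOPS} = N_{\text{FP units}} \times \text{ops per unit per cycle} \times f_{\text{clock}} \]

A finite transistor budget means every factor on the right is finite, so the product is finite too; no choice of workload changes that.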
 
This is really embarrassing, to read someone or anyone comparing "blast processing" to Cell and Blu-ray, and it just shows how screwed up the mentality around technology is to this day.

I'm sorry for your embarrassment. Please don't blush.

...if you hire the talent and spend the time and money making an exclusive game it'll look good or perhaps even great, and then you can easily market it to your fanbase as being technically impossible on the other guy's machine (because it lacks Cell or BluRay or eDram or Blast Processing or whatever else it is that the core gameplay and art style could easily survive without).

I'm comparing the way "hardware advantages" can be used to claim game X is impossible on your competitor's hardware. Take a wild guess as to why I threw Blast Processing in there.

But your other argument is just as ridiculous. Care to explain why multiplatform devs DID NOT BOTHER to make meaningful distinctions in gameplay, AI, physics and GRAPHICS back when the PS2 and Xbox 1 represented an extreme difference in supposed specs... I wonder what excuse can be used; the Xbox 1 offered a higher-clocked CPU, a 2001-level GPU and well over TWICE the RAM, as well as the supposedly easier-to-develop-for DirectX API tool set.

Maybe my comprehension isn't so great either, because I can't work out what you're getting at. Maybe all those NEEDLESSLY capitalised WORDS are CONFUSING ME.

I'm trying to make the point that a faster CPU does you little good if it's underutilised because it's only used to run stuff targeted at less powerful hardware. I'm not sure how what you're saying counters that.

On the current gen a first-party game like MLB The Show graphically embarrasses the multiplatform efforts; it's too bad there are no first-party football, soccer, hockey or cricket games anymore...

You seem to find a lot of things embarrassing. No need to be so bashful son, it's only a forum.

BTW, it makes more sense to mention the DirectX API and XNA alongside Blast Processing because they are all registered marketing trademarks; only the latter is not owned by Microsoft.

No, if I'm talking about how "hardware advantages" have been used to big-up exclusive titles to the general public (and vice versa), it makes no sense to mention Blast Processing alongside DirectX and XNA.

Again, look at the Xbox 1's technological lead over the PS2. What you are saying is that Microsoft should never have bothered making that console, especially considering that only one game managed to rise above the sea of mediocre mess. By current standards the Xbox 1 is basically a failure then, and it wasn't Microsoft's constant investing and marketing that did anything to maintain that console.

Xbox 1 was expensive and late and multiplatform titles consistently failed to make the most of the system. Only a handful of exclusives did anything really special, but most of these were sales disappointments.

Also, Cell software development is complete enough that it should technically be considered "easy to develop for". However, to be a programmer you have to study books and such, so if you don't put any effort into studying you are not getting anywhere.

Hooray! "Lazy devs!"

Halo 1 also had a very slow acceptance that grew only because magazine articles kept being written on the old game and forum boards kept talking about it... then again, G4 TV also kept giving the game a lot of free advertising, and it wasn't until 2004 that it became clear that Halo was a real franchise.

Well, until 2004 there was only one Halo game so it'd be tough to call any standalone game a franchise.

Halo grew because it was an outstanding game. Games with far more hype have failed to do what it did.
 
For new binaries there aren't any licensing issues. Shader code parsing/changing is pretty easy. Game IP is the same; you are just using a different binary, no issues there.
So adding a new target platform to your game by "re-compiling" it does not have any implications at all?

There are more legal issues with emulation than there are with re-compilation (which is zero).
What kind of legal issues did Sony go through to have the complete library of PS1 and PS2 running on the PS3?

aaronspink said:
Well there are always suckers. The reality is though that they were unwilling to pay the extra cost for BC when they could get it.
Maybe there are enough suckers to warrant the presence of BC?

aaronspink said:
People at IBM have been wrong before and will be wrong again. Task-based programming has nothing to do with SPUs; it is equally applicable to MC.
Still, the major complaint about the Cell at introduction was that tasks had to be divided into manageable jobs for the SPUs. Now you have to use the same technique to get good performance out of many-core homogeneous CPUs. Seems like Cell was just ahead of its time with regard to this, and the development community has now caught up.
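To make the "manageable jobs" point concrete, here's a minimal sketch of the pattern (plain C++11 threads; the names are made up, not from any real engine). The same fixed-size chunking that would feed an SPU's local store via DMA is what you want on a homogeneous multi-core anyway, because each chunk fits comfortably in cache:

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

// Work is split into small fixed-size chunks; each chunk is an independent job.
// On Cell a chunk would be DMA'd into an SPU's local store; on a conventional
// multi-core CPU the same chunk simply stays resident in L1/L2 while processed.
static const std::size_t kChunkElems = 16 * 1024; // 64 KB of floats, roughly local-store / L1-L2 sized

void process_chunk(float* data, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i)
        data[i] *= 2.0f; // stand-in for the real per-element work
}

void run_jobs(std::vector<float>& data, unsigned workers) {
    std::atomic<std::size_t> next(0);            // shared job counter
    const std::size_t total = data.size();
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            for (;;) {
                std::size_t begin = next.fetch_add(kChunkElems); // grab the next chunk
                if (begin >= total) break;
                process_chunk(&data[begin], std::min(kChunkElems, total - begin));
            }
        });
    }
    for (std::thread& t : pool) t.join();
}

Once work is expressed as small self-contained jobs like this, whether they land on SPUs or on cache-backed cores is largely a scheduling detail.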

aaronspink said:
Bah, don't need quantum computers for an infinite FP machine, just limited workloads. Honestly I can make an infinite FP machine with under 1M transistors.
If your FP machine involves transistors I seriously doubt it has infinite FLOPS, but you are welcome to prove me wrong. Edit: Shifty beat me to this; I also think Mr Spink is making promises he can't keep. :)
 
That 64MB of off-chip L3 cache sounds a bit ridiculous (transistors are not free). The 4 GHz Cell was Sony's engineers' target on a 65nm process that was clearly two years away back in 2005.

Well, it was going to be eDRAM, and remember Sony was investing lots of money into eDRAM as well as 65nm and 45nm. The eDRAM amounted to nothing; they did get to 45nm eventually, though.

Microsoft forcing a new generation in 2005 clearly forced Sony to panic rather than allow MS to get a lead; that's the nature of competition. As a matter of fact, I suspect that since the PS2 was officially announced in March 1999, well after the launch of Sega's Dreamcast, Sony had no clue that a Microsoft console was coming so soon, and by the time rumors started to come out in late 1999 it was too late to change anything on the PS2, since it was really complete technology waiting for a die shrink to 0.18 micron.

It wasn't just MS; I'm sure the Blu-ray group was pressuring Sony to release the PS3. The PS3 was released into a format war. Console launches always have that air of uncertainty about them; that's why backward compatibility helps ease that uncertainty. Even if most people know it's pretty useless, a stable platform where your investment is "safe" is what people want at launch. In the end, even though the PS3 won the format war, the brand name was left bruised by it.

Cell was revealed to be in development in 2001, but it was far-off technology. If Microsoft had not forced a console launch in 2005 and they had waited, you can bet your sweet butt that would have given Sony the extra time to work on their chip process for a later launch. Basically the current gen launched too soon for all parties, regardless of Nintendo taking the crown, which by the way Microsoft bows down to.

Still, my point was that Cell inside the PS3 was a poor choice given their circumstances.

Sony was funny: many devs had been crying over the difficulty of programming the Emotion Engine, and yet Sony asked the same two Toshiba engineers responsible for the Emotion Engine, who were adamant about their use of local store, to design the SPU. I think if IBM alone had gotten the contract, it would have been all PowerPC cores in there. As efficient as Cell is, going forward it is a dead end as an architecture.

If Sony had managed to get 65nm ready and produced the Broadband Engine they envisioned, then the investment in Cell would have been worthwhile. As it is, it's just a waste of money. Worse, while they were busy with Cell and the pursuit of FLOPS, Nintendo stole their idea from under them and ran with it. Now PlayStation Move will forever be known as a Wiimote clone, despite SCE demoing the technology so long ago that people had forgotten about it.
 
And did it use BC? Nope, it used a recompile from source. The 360 doesn't have a BC solution; for titles that MS thinks will be popular, they recompile.

I dunno, this guy describes it as backwards compatibility:
http://www.qbrundage.com/michaelb/resume/michael_brundage_resume.pdf
http://www.qbrundage.com/michaelb/pubs/essays/xbox360.html

I don't think he's just using the term loosely as another term for recompiling the original code. If you look at the things he's talking about doing, it just doesn't fit with simply running native 360 code.

Emulation Ninja
Xbox Console & Consumer Software
Nov, 2005 – Apr, 2007
Microsoft Corporation

• Responsible for the Xbox 360 kernel and operating system.
• Led the virtual Xbox performance team.
• Shipped three system updates and four updates to backwards compatibility.

Software Design Engineer
Xbox Software Team
May, 2004 – Nov, 2005
Microsoft Corporation

• Responsible for major parts of Xbox backwards compatibility, including file systems,
networking, Xbox Live, and performance.
• Shipped the Xbox 360.
• Created the Xbox backwards compatibility beta program.
• Wrote the FAQ, customer support articles, and other pages on xbox.com.


But emulation is a difficult challenge any time the emulator isn't several orders of magnitude faster than what it's emulating. So a few people who understand how emulators work look at these numbers, impressive as they are, and conclude that Xbox backwards compatibility will not work. (And then when they see backwards compatibility working, they realize the Xbox 360 is even more impressive than they thought!)
 
360 DID NOT have BC. It had recompiled binaries.
PS3 has sold more consoles without BC than with.

The 360 DID HAVE BC. The way it implemented it is irrelevant. And the PS3 only abandoned BC a few years into its lifecycle. In both cases they only forgot about BC's importance when the platform's library was robust enough. But that doesn't address the fallacy of 'it hasn't hurt them any'. You can't make that statement, not when in this generation and the last the most popular, market-leading consoles have had full BC.

Also, are you kidding me? If Sony goes for a more traditional architecture next time around you really think that they'd get all these games tuned for Cell to work by 'recompiling the binary'?
 
The backward compatibility for the 360 is a little complicated, but I do seem to recall that it does mostly involve emulation. Didn't they basically release emulation code for each game individually, sometimes just changing a few parameters, but at other times recompiling whole parts of the game? (as I think they did with Halo 2?)

If true, you both may be right. It would be really nice if you both didn't argue as if everything you say is the absolute truth and you can't possibly be wrong. ;) I know it is hard work, from personal experience, but at least look like you're trying. ;)
 
The 360 BC has already been covered in this thread, but here it is again. 1up.

Last week, ATI European Developer Relations Manager Richard Huddy provided the first hint in an interview with Bit-Tech. "Emulating the CPU [of the original Xbox] isn't really a difficult task. They have three 3GHz cores, so emulating one 733MHz chip is pretty easy," he said. "The real bottlenecks in the emulation are GPU calls - calls made specifically by games to the Nvidia hardware in a certain way."

In order to overcome the problem of games calling upon chip specific features within the Nvidia hardware, Microsoft needed to emulate the Xbox's GPU. Without an official agreement with Nvidia, however, Microsoft would have to reverse engineer the technology, which would have been a "legal nightmare" for the company, said our source.

Also all PS3s have PS1 BC.

edit: I also believe there is game-specific emulation code as Arwin suggests. I don't think any games need completely new binaries; as stated above, the CPU part is not a major bottleneck.
 
If Sony goes for a more traditional architecture next time around you really think that they'd get all these games tuned for Cell to work by 'recompiling the binary'?
This is what I wonder about. If most PS3 games use a task-based programming model that "sees only a small slice of memory at a time", and that "small slice of memory" can fit in the caches of a more traditional CPU, then it seems like PS3 code could run just fine on a conventional multicore CPU. Then again, I am probably grossly oversimplifying things.
 
This is what I wonder about. If most PS3 games use a task-based programming model that "sees only a small slice of memory at a time", and that "small slice of memory" can fit in the caches of a more traditional CPU, then it seems like PS3 code could run just fine on a conventional multicore CPU. Then again, I am probably grossly oversimplifying things.

My concern is about how much of the stuff is coded directly to the metal, as well as stuff designed around the PS3's weirdness. If coding to a traditional CPU, do you really want to do all those 'CELL as GPU-helper' shenanigans? The point is it's hardly just a recompile, at least when you're dealing with shifting oddball architectures. (Edit: and this is supposing that this traditional CPU would be enough of a leap forward to do the job of the SPUs without breaking a sweat.)
 
My concern is about how much of the stuff is coded directly to the metal, as well as stuff designed around the PS3's weirdness. If coding to a traditional CPU, do you really want to do all those 'CELL as GPU-helper' shenanigans? The point is it's hardly just a recompile, at least when you're dealing with shifting oddball architectures.

Current traditional CPUs can't keep up with some of the tasks the Cell's SPEs perform today, so don't expect that to be easily emulated in the future. If Sony is going to be serious about PS3 game emulation on the PS4, then they're either going to have to include a Cell processor alongside whatever other chip they're considering, or do it the way Microsoft did Xbox 1 emulation on the 360 (of which, if you think about it, the HD re-release bundles of God of War and the recently announced Sly Cooper are good examples).

On the other end of the scale are the PhyreEngine games and the Minis platform, which are abstracted platforms/libraries that can be coded against on a higher level and can probably be rewritten to use a current higher end GPU today (if I remember correctly, there was an intention to be able to code for PhyreEngine on the PC and develop games for both PC and PS3 using it).

Microsoft has a similar setup for this with XNA - everything that runs on that will obviously run on the next Xbox as well.
 
It would have had more games, better games, faster. They might not have had such a broken system architecture. Etc.
BS... all the games would be far inferior, since there would be no way to make up for the RSX's weaknesses. The PS3 is bottlenecked by its GPU, not the Cell.

Gaming code besides the graphics components is really not that complicated; the GOW3 main executable was around 5MB, for example. A multiplayer game might be a little more, but no one is complaining about programming effort for Cell any more. Even PS Move shovelware developers are able to program using the Cell, so it seems that it's always you who's complaining about the programming effort while studios seem to churn out PS3 games left and right without much problem. The problems, when they happen, are because of RSX, not Cell.
 
aaronspink said:
It would have had more games, better games, faster. They might not have had such a broken system architecture. Etc.

BS... all the games would be far inferior, since there would be no way to make up for the RSX's weaknesses. The PS3 is bottlenecked by its GPU, not the Cell.

More games, probably. But better games require much more than a good CPU and GPU (e.g., Nintendo can make great games with an inferior CPU and GPU). It depends on a whole slew of other factors. I think someone surveyed the earlier PS3 games; collectively they reviewed well.

If the tech platform is easier to develop for, though, developers will have more time to work on the gameplay. So they do have to address programmability moving forward.
 
A multiplayer game might be a little more, but no one is complaining about programming effort for Cell any more.
Just because no-one's voicing their complaints doesn't mean they're happy with Cell, nor that Cell is an ideal architecture. Developers just see it as a necessary evil and soldier on (potentially, or maybe they all love it ;)). Also, more studios creating simple content doesn't mean they've all mastered Cell either. Sony make PhyreEngine available to all developers, and there are various middleware options. Managing your memory use is a pain that developers would rather not have to worry about; no-one willingly takes on more work just for the sake of it! And for those developers who enjoy poking around with ultra-efficient memory management, a cache system enables that without forcing all developers to have to work at that low level.
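To illustrate what "managing your memory use" actually means in practice, here's a rough sketch (hypothetical C++ standing in for SPU-style code, not from any real SDK): on a local-store design every job has to stage its data in and out explicitly, whereas a cached CPU lets you skip that bookkeeping entirely.

#include <cstddef>
#include <cstring>

// Hypothetical stand-ins for explicit local-store transfers (think SPU DMA).
// On a cache-based CPU you'd just index main memory directly and let the
// hardware stage the data; here the staging is the programmer's job.
static const std::size_t kLocalBytes = 64 * 1024;     // size of the per-core scratch area

void dma_get(void* local, const void* main_mem, std::size_t bytes) { std::memcpy(local, main_mem, bytes); }
void dma_put(void* main_mem, const void* local, std::size_t bytes) { std::memcpy(main_mem, local, bytes); }

void brighten(unsigned char* pixels, std::size_t count) {
    static unsigned char scratch[kLocalBytes];         // the "local store" (static to keep it off the stack)
    for (std::size_t off = 0; off < count; off += kLocalBytes) {
        std::size_t n = (count - off < kLocalBytes) ? count - off : kLocalBytes;
        dma_get(scratch, pixels + off, n);             // explicit transfer in
        for (std::size_t i = 0; i < n; ++i)            // the actual work
            scratch[i] = (scratch[i] > 245) ? 255 : scratch[i] + 10;
        dma_put(pixels + off, scratch, n);             // explicit transfer out
    }
}

The cache-friendly equivalent is just the inner loop over pixels directly; the two transfers and the buffer sizing are the extra work a local store pushes onto every developer, middleware or not.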
 
Just because no-one's voicing their complaints doesn't mean they're happy with Cell, nor that Cell is an ideal architecture. Developers just see it as a necessary evil and soldier on (potentially, or maybe they all love it ;)). Also, more studios creating simple content doesn't mean they've all mastered Cell either. Sony make PhyreEngine available to all developers, and there are various middleware options. Managing your memory use is a pain that developers would rather not have to worry about; no-one willingly takes on more work just for the sake of it! And for those developers who enjoy poking around with ultra-efficient memory management, a cache system enables that without forcing all developers to have to work at that low level.

It will always remain a matter of context as well, though. While it may certainly be true that some coders would prefer to work with something like the 360 CPU instead, or better yet a PC CPU, the reality is that in the current landscape of the 100-man team making a multi-platform game there will be 5 programmers writing the code leading on PC or 360, and typically just one or maybe two doing the PS3 version of that code. This is why developers who code with both architectures in mind from day one will achieve better results - not just because it forces them to think better about strategies that work on both/all three systems, but because on average there are more programmers working on the PS3 version.

And yes, I am also strongly convinced that the state of play has changed, not just because programmers have more hours on the Cell under their belt, but because programming PCs and particularly modern GPUs has become more and more like programming the Cell. You can see some of that in the comments that D.I.C.E. make about programming the Cell in their slides. The concepts in Cell that were new and ahead of the curve are now becoming much more mainstream.

The main lesson, above anything else, for Sony should be that it is important to cater for two types of developers: those that like to code at the low level, and those that like to code at the high level. Particularly at the start of a new generation of hardware, you should be ready to give support for high-level coding, as there will be only a very small group ready to do the low-level stuff. Microsoft got that right, Sony didn't (not for the main development tools, not for ease of porting to PC, and not for SDK-type support for network functions and the like), and it was a very important distinction, much more so than the choice for Cell per se. I don't consider it very likely that they will make the same mistake again (though it's just one mistake of several that Sony made, so it will be hard work to get them all right for next-gen). However, they will have some big catching up to do, and will have to work hard if they want to skip some of Microsoft's mistakes and get to par or better - it is particularly in this area that I'm hoping 'bring the developers in on PS4 development early' is going to help out.
 