End of Cell for IBM

http://www.andriasang.com/e/blog/2009/12/25/ps4_cell/

It's a work in progress, so plans may not be cast in stone yet. Might want to treat it like a regular rumor for now.

How has the design of the Cell proved to be "wrong" in regards to the PS3? If we take away the "let's start over again..." approach and the high cost of building new development tools, we are left with developers who today (I hope, at least) know their way around the Cell's PPE and SPUs.

So what would be wrong in using a Cell 2 in a PS4 when all the bad stuff is "fixed"? You have the tools, you are not starting from scratch, and the developers would not hit the ground with broken legs (like they did with the PS3) but running. The new part would be the GPU, and that goes for all consoles, but it could be an Nvidia part.

How much would Sony benefit from going with an x86 architecture, in regards to tools and experience?
 
Where would this spare power be? The 360 has three PPU cores and three VMX units, so some SPUs have to be spent to account for that. The 360's GPU is a generation ahead of the PS3's, so some more SPUs have to be spent to account for that as well, which is sometimes possible and sometimes impossible. So where is this spare SPU power?
Well, I found this quote.

You can't just slap more cores on that CPU given that they share the L2 cache; it would scale badly.
BTW, SPUs can execute up to two instructions per clock cycle, and on decently written code a single SPU runs circles around a XeCPU core at any time of the day.

A core includes a VMX unit, correct? That leaves 3 free SPUs, right? BTW, what is the performance difference between the RSX and Xenos on average? 10% to 20%? According to some developer slides, 1 to 2 SPEs can speed up some large graphical jobs by 200% to 300% (based on the "Deferred Lighting and Post Processing on PS3" presentation).
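For context, the technique in those slides boils down (as I understand it) to cutting the frame into tiles small enough to fit in an SPU's 256 KB local store and handing the tiles out as independent jobs. Here's a minimal sketch of that idea; the tile size, job struct, and submit_spu_job() are hypothetical illustrations, not the presentation's actual code:

[code]
/* Hedged sketch: splitting a deferred-lighting pass into screen tiles
 * for SPUs. Everything here (tile size, job layout, submit_spu_job)
 * is illustrative, not code from the presentation. */
#include <stdint.h>

#define TILE_W   64            /* 1280/64 = 20 tiles across            */
#define TILE_H   16            /* 720/16  = 45 tiles down; one tile's  */
#define SCREEN_W 1280          /* G-buffer data must fit in the 256 KB */
#define SCREEN_H 720           /* local store alongside the code       */

typedef struct {
    uint32_t x, y, w, h;       /* screen-space tile rectangle          */
    uint64_t gbuffer_ea;       /* effective address of G-buffer in RAM */
    uint64_t output_ea;        /* where the lit tile gets DMA'd back   */
} LightingJob;

/* Hypothetical: pushes a job onto a queue that idle SPUs pull from. */
extern void submit_spu_job(const LightingJob *job);

void kick_deferred_lighting(uint64_t gbuffer_ea, uint64_t output_ea)
{
    /* Each tile is independent, so 1-2 (or more) SPUs can chew through
     * the frame in parallel while the GPU does other work. */
    for (uint32_t y = 0; y < SCREEN_H; y += TILE_H)
        for (uint32_t x = 0; x < SCREEN_W; x += TILE_W) {
            LightingJob job = { x, y, TILE_W, TILE_H,
                                gbuffer_ea, output_ea };
            submit_spu_job(&job);
        }
}
[/code]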
 
Back on topic, I think the STI cooperation ended a while ago? Could we actually be hearing of their last research before the project was canned?

Anyway, for me, until I see a new roadmap from IBM, the architecture is dead. I don't really care whether that is good news or not; it's clearly not the end of 'exotic' architectures, as new 'heterogeneous' designs are on their way (AMD Fusion chips), and they are no longer 'new' since Cell got there first in some way.
Homogeneous designs would look more radical/exotic to me now, whether they originate from the CPU or the GPU realm.
 
So what would be wrong in using a Cell 2 in a PS4 when all the bad stuff is "fixed"?

I am sure this is one of the many questions Sony asks itself going into the future. There are pros and cons to every approach. The question has to be balanced against long-term economic, development, competitive-advantage, and support concerns.

Most of the complaints I have seen are aimed at the GPU vertex input limit, insufficient memory due to the large GameOS footprint, and the weak PPU. OTOH, Sony and partners have an extensive SPU software library and toolchain now. I am curious whether they can partner with Toshiba to reuse SPU technologies in other spaces as well.

On paper, a "regular" CPU + SpursEngine sounds like a possible route, assuming they can make the SpursEngine cheaply. That's what Toshiba had in mind all along. It's useful for assorted network, security, compression/decompression, sensor-based, physics, codec, AI (*-recognition), and pre- and post-processing tasks. With something like OpenCL, the use cases may actually increase. However, there are probably cheaper and faster solutions for these specialized tasks.

[size=-2]BTW, IBM and the Cell roadmap may be out of the picture if that regular CPU is an Intel or other flavor. Just a random thought. ^_^ I have no insider info. If they have time, it's probably best for Sony to review all options, as Lazy8s suggested.[/size]
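To illustrate the OpenCL point above: the attraction is that one piece of host code can target whatever accelerator the platform exposes, be it a CPU, a GPU, or something SpursEngine-like if a vendor shipped a driver for it. A minimal, hypothetical sketch; the kernel is a toy, and only the host calls are standard OpenCL 1.x:

[code]
/* Hedged sketch: offloading a trivial post-processing task via OpenCL.
 * The kernel and buffer sizes are toys; the host API calls are real. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void brighten(__global float *px) {\n"
    "    size_t i = get_global_id(0);\n"
    "    px[i] = px[i] * 1.2f;\n"
    "}\n";

int main(void)
{
    float data[4096];
    for (int i = 0; i < 4096; ++i) data[i] = (float)i / 4096.0f;

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* CL_DEVICE_TYPE_DEFAULT: take whatever device the platform offers;
     * the same host code runs against a CPU, GPU, or other accelerator. */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "brighten", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    clSetKernelArg(k, 0, sizeof(buf), &buf);

    size_t global = 4096;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("px[100] = %f\n", data[100]);   /* expect input * 1.2 */
    return 0;
}
[/code]

Whether that buys Sony anything depends on drivers and on how well the SPE/SpursEngine model maps onto OpenCL's, which is an open question.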
 
Apparently not by those who actually buy games.

Not really a claim you can make, unless you can establish that people buy games for graphics. The claim you're responding to isn't really better supported, but at least you can take a sampling from the boards to see that it is a popular opinion, even if not the most popular one. (Of course, considering people are making claims about the graphics of games they haven't played, it's not like this sort of discussion will ever get anywhere worthwhile.)
 
This thread is getting all kinds of off-topic. Expect further posts about best graphics, XB360 RAM amounts, yada yada to be brutally removed. I want to be reading about what Cell has done well and badly, and what it may do in the future!
 
The irony for the Cell is that if the PPE were more robust, we probably wouldn't hear so many complaints, but maybe the forced learning with the SPEs wouldn't have taken place - a la the Emotion Engine vector units.

We still would have had to rewrite lots of stuff to be local store compatible, so there would still have been complaints early on. Code had to be converted to be multi-threaded on either box one way or the other, but it was far easier to do with the traditional cores on the 360 since you didn't have to rewrite everything to support a new data model.

This was very advantageous in the early days, because you could just identify the two most egregious performance offenders and simply toss them onto their own cores. They ran horrifically inefficiently (VMX was largely unused, tons of load-hit-store issues, etc.), but it didn't matter at that time, since it still let us get quick and dirty performance in time for launch, and let us improve code over time while still being able to make games that looked and performed reasonably well very early on.

That was impossible to do on Cell since it had just one traditional core. Some stuff was easy to rewrite to be Cell friendly, but other systems were a nightmare. Plus, in some cases we were using code from either a sister company or a middleware company that we were not intimately familiar with. On 360 it didn't matter: dump it all onto a core, get it to compile, and just stall that thread each frame to stay in sync with everything else. Ugly and primitive, but it worked.
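To make the "new data model" point concrete: a loop that just walks an array on a traditional core has to become, on an SPU, explicit DMA in and out of the 256 KB local store, usually double-buffered so transfers overlap compute. A minimal sketch using the Cell SDK's MFC intrinsics; the chunk size and process_chunk() are illustrative, not code from any real title:

[code]
/* Hedged sketch: SPU-side double-buffered streaming over a large array
 * living in main memory. Chunk size and process_chunk() are made up. */
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 4096   /* bytes per DMA; each transfer must be <= 16 KB */

static char buf[2][CHUNK] __attribute__((aligned(128)));

extern void process_chunk(char *data, unsigned size);  /* hypothetical */

void process_stream(uint64_t ea, unsigned nchunks)
{
    unsigned cur = 0;

    /* Prime the pipe: kick off the first DMA before the loop. */
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

    for (unsigned i = 0; i < nchunks; ++i) {
        unsigned next = cur ^ 1;

        /* Start fetching chunk i+1 while we wait on chunk i. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK,
                    CHUNK, next, 0, 0);

        /* Block until the current chunk's DMA tag completes. */
        mfc_write_tag_mask(1 << cur);
        mfc_read_tag_status_all();

        process_chunk(buf[cur], CHUNK);  /* work entirely out of LS */
        cur = next;
    }
}
[/code]

None of this restructuring exists on a cache-based core, which is exactly the gap that made early ports so painful.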


While I don't think making things unnecessarily complex is a good idea, you can only hide complexity so much. I think the Cell is a good design and, once you master it, the results can be very good. A Cell2 with a good-performing GPU (AMD?) would deliver very good IQ, if we take into account what devs like Guerrilla and Naughty Dog have been able to do with a mediocre Nvidia part.

I'm totally fine with a Cell2 for PS4. What matters most is cost, heat, support, familiarity, and backwards compatibility. We're all familiar with Cell, so migrating our code to Cell2 should be cake. We need excellent support from day 1, and with Cell2 one would presume that all existing tools would migrate over relatively intact. They should be able to make Cell2 run fairly cool and cost effective as well; given that both consoles had way too many heat-related failures at launch, I think it's safe to say that both ran too hot. I think heat alone will kill off many other CPU choices.

Finally, backwards compatibility is far more important next gen than it was this gen. It will be really hard to get people to migrate to a new machine if they lose all their purchases on PSN. One would think that Cell2 would make the backwards compatibility task easier. I'm sure they could get compatibility working with other architectures eventually, but whoever has compatibility working sooner will be at a big advantage.

In the end, I still think the GPU matters more, so get a CPU in there that is cost effective, friendly to devs, and backwards compatible, and then go nuts and spend the watts on the GPU.



patsu said:
I remember earlier on you mentioned that the PS3 had problems keeping up with 60fps at 720p in a baseball game. In the end, MLB The Show 2009 became the best baseball game today (60fps, 1080p, realistic lighting, and mo-capped animation). They probably did a lot of tricks to avoid bottlenecking the architecture.

Well... they went mostly 2D on their crowd, whereas I got it all working in 3D. I insisted on full 3D because I wanted the lighting to be correct everywhere. If you watch the lighting on the crowd in The Show, you'll see that sometimes it's totally wrong. Hence the performance disparity, although I did get it all working at 60fps on 360. As far as why The Show took over as the best game, that's because of business choices. Half the MLB 2K team was vaporized a few months before the end of one of the projects, which meant disaster for that title. The MLB 2K titles after that went to an entirely different team in a different city that had little experience with the MLB code base, and less time to work with it. So it's no surprise that The Show pulled ahead. If they had stuck with the original team, then we would have smoked The Show :)


Carl B said:
His 'career arc'

For the record, my last PS3 contract ended a week or so ago and I'm now focusing on my personal side business which has really taken off. If a good gaming gig comes along I'll consider it, but for now you can consider me a civilian. At least until I start delving more into iPhone games and XBLive indie games, which I now have a lot of time to play with :)
 
Well... they went mostly 2D on their crowd, whereas I got it all working in 3D.

That wouldn't entirely account for 2X the pixels in The Show, among other things, in comparison :p I completely understand and respect the fact that you have a strong preference for the 360 as a developer; hell, that goes for most developers out there too. They didn't sign up to do a jigsaw puzzle with some hardware design choices.

I'm not a developer, but as a consumer and a tech enthusiast (I almost went pro on the hardware side of things), it seems that both systems are at just about the same level performance-wise. The difference is that PS3 development requires a few trips to hell and back to get something decent out of it.

At any rate, good luck to you on your future endeavors and hopefully you won't stop contributing to the discussions here ;)
 

In your opinion/experience, would a Cell2 beside a strong GPU be as useful as the Cell is with the RSX?
This level of programming is, for me, a nerdy hobby, but I recall that what happens on the CPU is mostly integer work, and then single-precision float, so what would be the point of increasing the float performance of a CPU sitting next to a GPU that doesn't need any help?
 
We still would have had to rewrite lots of stuff to be local store compatible, so there would still have been complaints early on. Code had to be converted to be multi-threaded on either box one way or the other, but it was far easier to do with the traditional cores on the 360 since you didn't have to rewrite everything to support a new data model.

...I think it was because I was unclear as to the hypothetical situation I was laying out. It wasn't about Cell receiving a 'better' PPE in the context of the 360 also receiving said core(s), but rather in a void - think a dual-core OOE processor of decent ability with threading support. Now... if this were the 'GP' core in Cell, the SPEs a bonus so to speak, and the 360 processor the same as at present... I could envision the SPEs generally simply going unused, save for the occasional side project and first-party software. My point being that with Cell as it was, a hard break towards parallelism and improved coding practices was necessitated by the lack of 'give' in the default ecosystem. Yet if said give had actually been present, the actual strength of the architecture might have gone underutilized in its own right, which is where I tied into the legacy of the EE vector units.

Granted, robust tools at launch would generally ameliorate both issues: in the former case, by easing use of a difficult architecture requiring a different approach, and in the latter, by promoting utilization of latent potential that doesn't require the extra effort to reach a 'good enough' result. When you were posting about the allocation of resources within SCE and the money spent on game production vs tools production, believe me, I have felt and thought the same many a time. A lot of the tool tech to come out for Sony has come out of internal developer efforts rather than a dedicated tool warchest per se, though I do have to plug the PSSG/Phyre team as being just such an entity internal to Sony, focused on this goal for the benefit of PSN developers. If the year were 2006 again, $30 million or more thrown towards a top-notch set of tool guys might have yielded a good bit more economically for the company over the long run than whatever a given exclusive title would have netted in mindshare/etc. Certainly I would retrospectively have offered LAIR up for just such a sacrifice.
 
That wouldn't entirely account for 2X the pixels in The Show, among other things, in comparison :p

I can't speak for the newest versions since I have nothing to do with those. But back in MLB 2K7 we had a full 24-hour day/night cycle: you could start a game in daylight and watch the sun set over the course of the game, with the lighting all done dynamically. You also had a mix of 4xMSAA and 2xMSAA (360 version). Plus, watch replays and zoom into the grass: you'll see volumetric grass made up of millions of individual blades that the ball and players' feet sink into (360 version). Add in depth of field, HDR, ambient occlusion, etc., and it wasn't too shabby for something we did back in 2006! The demo doesn't do the game justice, alas, but if you have the actual MLB 2K7 game on 360 then try it and see how it compares to The Show. Set a 5pm or so start time and pick a stadium with lower walls to really get the full feel of the lighting as the game progresses.


If the year were 2006 again, $30 million or more thrown towards a top-notch set of tool guys might have yielded a good bit more economically for the company over the long run than whatever a given exclusive title would have netted in mindshare/etc. Certainly I would retrospectively have offered LAIR up for just such a sacrifice.

You know, I don't think it would even have taken that much money. Take 20 engineers at, say, 100k salaries each, and dedicate them to tools and the like for 2 years. That's just 4 million bucks, but it would have been money well spent!


In your opinion/experience, would a Cell2 beside a strong GPU be as useful as the Cell is with the RSX?

It depends, but I'm guessing that the effect won't be as dramatic. On PS3 you have a ridiculously powerful CPU paired with an antiquated GPU, so Cell's effect on graphics is dramatic. Presumably PS4 will offer a dramatically more powerful GPU, whereas a Cell2 would not be comparatively as much of a jump in CPU power. So while you could always use Cell2 to help graphics tasks on PS4, I'm guessing that the benefit would not be as pronounced as it is this gen.
 
It depends, but I'm guessing that the effect won't be as dramatic. On PS3 you have a ridiculously powerful CPU paired with an antiquated GPU, so Cell's effect on graphics is dramatic. Presumably PS4 will offer a dramatically more powerful GPU, whereas a Cell2 would not be comparatively as much of a jump in CPU power. So while you could always use Cell2 to help graphics tasks on PS4, I'm guessing that the benefit would not be as pronounced as it is this gen.

My point is: if, with a powerful GPU, even the current Cell would be underutilized, why go with a Cell2 rather than a more versatile CPU capable of emulating Cell for PSN backwards compatibility?
I ask this because you have direct experience in it, and you were positive about reiterating the architecture.
 
Thread trimmed of some of yesterday's off-topic banter, and 'best looking' posts spun off into a new thread. Cell discussion is for here.

My point is: if, with a powerful GPU, even the current Cell would be underutilized, why go with a Cell2 rather than a more versatile CPU capable of emulating Cell for PSN backwards compatibility?
I ask this because you have direct experience in it, and you were positive about reiterating the architecture.

Cell will be a very difficult architecture to emulate if the processor replacing it is not similar in the least. 'Versatility' would be all well and good, but if you want solid B/C you would need to properly address/emulate the LS memory model of the Cell. The easiest route would obviously be an improved Cell. If Sony were to actually go commodity x86, for example, I don't even see how it could be achieved. So... hopefully it is not commodity x86, even if they do go 'standard' multicore.
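To illustrate why the LS memory model is the sticking point: a naive emulator has to model each SPU's local store as its own buffer and turn every asynchronous DMA into a copy plus tag bookkeeping. The copies are the easy part; faithfully reproducing the timing of six SPUs with many DMAs in flight against shared memory is what gets ugly on a conventional cached architecture. A rough, purely illustrative sketch (all names and sizes are made up):

[code]
/* Hedged sketch: the state a naive Cell emulator must model for the
 * local store + DMA scheme. Everything here is illustrative. */
#include <stdint.h>
#include <string.h>
#include <stdlib.h>

#define LS_SIZE  (256 * 1024)   /* each SPU's software-managed memory */
#define NUM_SPUS 6              /* SPUs available to games on PS3     */

typedef struct {
    uint8_t  ls[LS_SIZE];       /* emulated local store               */
    uint32_t tag_complete;      /* bit n set = DMA tag n has retired  */
} EmulatedSPU;

static EmulatedSPU spu[NUM_SPUS];
static uint8_t    *main_ram;    /* emulated main memory               */

void emu_init(void)
{
    main_ram = malloc(256 * 1024 * 1024);
}

/* Emulate an mfc_get (main memory -> local store). On real hardware
 * this is asynchronous, with multiple transfers queued per SPU;
 * completing it instantly here already diverges from hardware timing,
 * which is exactly where timing-sensitive code starts to break. */
void emu_mfc_get(int n, uint32_t ls_addr, uint64_t ea,
                 uint32_t size, uint32_t tag)
{
    memcpy(&spu[n].ls[ls_addr], &main_ram[ea], size);
    spu[n].tag_complete |= 1u << tag;
}

/* Emulate polling a tag group: trivially satisfied here, but a real
 * emulator must decide how much "time" each SPU has advanced. */
uint32_t emu_tag_status(int n, uint32_t mask)
{
    return spu[n].tag_complete & mask;
}
[/code]

An improved Cell sidesteps all of that, since the LS code just runs natively.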
 
My point is: if, with a powerful GPU, even the current Cell would be underutilized, why go with a Cell2 rather than a more versatile CPU capable of emulating Cell for PSN backwards compatibility?
I ask this because you have direct experience in it, and you were positive about reiterating the architecture.

What is that?

Even with a powerful GPU, it would not be all-powerful. Cell can still augment it.
 
What is that?

Even with a powerful GPU, it would not be all-powerful. Cell can still augment it.

Let's assume they go with an almighty GPU that can handle anything that gets thrown at it for the next 15 years :p Would this not free up the Cell to do other things, e.g. physics etc.? Which means that it might not be underutilized, but rather used for tasks other than helping the GPU?
 
I'd think that, with the current level of physics and AI etc. in games, it might still be underutilized. At least you can use any cycles you have left for enhancing graphics (which people can never have enough of, I think). The Cell can already be used for "other tasks" as well, so no problem.
 
Let's assume they go with an almighty GPU that can handle anything that gets thrown at it for the next 15 years :p Would this not free up the Cell to do other things, e.g. physics etc.? Which means that it might not be underutilized, but rather used for tasks other than helping the GPU?
I think the current expectation is that an almighty GPU will also be able to run the physics etc. (more of a massively parallel engine than a straight graphics processor), leaving very little heavy lifting for the CPU to do. If our next-gen consoles have a silicon budget of 200 mm^2 (illustrative figure only), it could be divvied up as a 180 mm^2 super GPU plus a 20 mm^2 standard, lightweight CPU, or as a 100 mm^2 middleweight GPU plus a 100 mm^2 powerful CPU that'll need to do the tasks the massive GPU would handle in the other option.

Both are viable solutions with pros and cons. I think it all depends on what workloads these processors will actually be doing (Cell is capable of jobs that aren't making it into games much, so despite being an excellent processor design, it's ended up more as a graphics support processor) and on the cost and ease options. We don't yet know how versatile GPUs will be, nor how effective their tools will be, nor how much developers would benefit from BC with their codebase. I feel there's a good argument for a system with a code-compatible Cell processor and a lesser GPU, depending on how comfortable devs are. If they still hate the system, a 'conventional' approach, which will still see a new programming paradigm on GPUs, will be the better choice. But a terribly dull one, with the same basic hardware in every box. :(
 
Well... they went mostly 2D on their crowd, whereas I got it all working in 3D. I insisted on full 3D because I wanted the lighting to be correct everywhere. If you watch the lighting on the crowd in The Show, you'll see that sometimes it's totally wrong. Hence the performance disparity, although I did get it all working at 60fps on 360. As far as why The Show took over as the best game, that's because of business choices. Half the MLB 2K team was vaporized a few months before the end of one of the projects, which meant disaster for that title. The MLB 2K titles after that went to an entirely different team in a different city that had little experience with the MLB code base, and less time to work with it. So it's no surprise that The Show pulled ahead. If they had stuck with the original team, then we would have smoked The Show :)

Ah, then it's a resource allocation issue. The answer to your "Where is the extra SPU power?" may be "Spend it on the right things". I don't think people care as much about the crowd compared to the action. In the end they were able to achieve the highest specs possible with good gameplay, plus some SPU bells and whistles.

I look forward to seeing MLB 2K next year, lest the MLB The Show team become complacent.
 
I think the current expectation is that an almighty GPU will also be able to run the physics etc. (more of a massively parallel engine than a straight graphics processor), leaving very little heavy lifting for the CPU to do. If our next-gen consoles have a silicon budget of 200 mm^2 (illustrative figure only), it could be divvied up as a 180 mm^2 super GPU plus a 20 mm^2 standard, lightweight CPU, or as a 100 mm^2 middleweight GPU plus a 100 mm^2 powerful CPU that'll need to do the tasks the massive GPU would handle in the other option.

Does there really have to be a compromise, and was there any compromise of that kind in this gen?
 