End of Cell for IBM

Discussion in 'Console Industry' started by Butta, Nov 20, 2009.

  1. upnorthsox

    Veteran

    Joined:
    May 7, 2008
    Messages:
    1,909
    Likes Received:
    232
    Well, the biggest effect of IBM dropping out of Cell development is that the chances for a Power7-derived PPU in a Cell2 are slim at best. Xenon rode the coattails of the Cell1 PPU development; not being able to do that again means they may have to foot the bill alone for any Power7 derivative, and that's if IBM is even interested in doing one.

    That said, with its modularized smorgasbord of functional units, high-capacity eDRAM, and pick-a-core-count design, it certainly appears IBM has gone with a design that lends itself to a quick console derivative. In fact, it wouldn't surprise me at all if both MS and Sony ended up with essentially the same 4-core Power7 CPU in their next consoles. That would then leave the GPU as the differentiating factor between the two.
     
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,686
    Likes Received:
    11,134
    Location:
    Under my bridge
    Indeed, they'll need to reassure owners and potential buyers that there's a future in their investments. I for one wouldn't want to buy and develop Cell software now if I didn't think there'd be any upgrade path in a couple of years.
     
  3. AzBat

    AzBat Agent of the Bat
    Legend Veteran

    Joined:
    Apr 1, 2002
    Messages:
    5,942
    Likes Received:
    1,739
    Location:
    Alma, AR
    Go back and read. I did not say they were abandoning Xenon. Since the Xenon CPU is based on a portion of Cell (the PPE), it might have some interesting side effects, like Microsoft choosing to go a different route with their next console CPU.

    Tommy McClain
     
  4. patsu

    Legend

    Joined:
    Jun 25, 2005
    Messages:
    27,614
    Likes Received:
    60
    While there are many uncertainties, I think the programming model question may have already been addressed: OpenCL and OpenGL.

    They released OpenCL for Cell. That may be needed for people to port their Cell-specific programs over.
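
    As a rough illustration of why that matters for portability, here's a minimal host-side sketch in C (assuming any OpenCL 1.0 SDK is installed, such as IBM's Cell release; the array sizes are arbitrary). The same enumeration code runs whether the device underneath is Cell, a GPU, or a multi-core CPU, which is the whole point of moving Cell-specific programs to OpenCL.

```c
/* Minimal OpenCL host sketch: list every platform and device the runtime
 * exposes. Link with -lOpenCL. Array sizes are arbitrary for brevity. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8) num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
        if (num_devices > 8) num_devices = 8;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```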
     
  5. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Oh come on, MS is not going to abandon PPC in the Xbox 720 just because the CPU investment this time won't be subsidized by Sony, if I understand you correctly here.
     
  6. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    I think those chances have always been slim, if you mean a Cell2 that would have been used in the next PlayStation. The Power7 PPU probably spanks the Cell PPU in all benchmarks, but I think it loses badly on performance per transistor, which was the driving force behind both Cell and Xenon having in-order execution.
     
  7. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    The way I see it, Cell2 in PS4 is toast. But the next gen will certainly see dramatic leaps in SW/HW architecture across the board. This gen, we only had Cell pushing the envelope. Not to mention HCI.
     
  8. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    I think it has always been toast. I am a bit surprised there are still so many pursuing the idea that the next-generation consoles will have a discrete GPU. I think the Cell was a forerunner in that sense, given that a lot of graphics calculations have been offloaded onto it.

    Just tweak the Cell in the direction of LRB with a few dedicated graphics function units and you have only one chip to take through die shrinks, negotiate production costs for, and all that shit. Makes life simpler and the hardware cheaper.

    The Wii HD may be the last console with a discrete GPU, but I wouldn't be surprised if Nintendo is the first one to abandon the standalone GPU. I've always been impressed by Nintendo's hardware with regard to how it is designed to keep manufacturing costs down.
     
  9. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,686
    Likes Received:
    11,134
    Location:
    Under my bridge
    Why surprised? Have you seen lots of effective all-in-one solutions showing that single multifunction cores are the best bang-for-buck option? You'll have to point me to those Larrabee reviews that prove how wonderful it is. :p

    At the moment, the all-in-one solutions look good on paper, but they're unproven, just like they were with the Cell+Rasterizer design. If they're due out so close to the next consoles (something we don't know yet), then it's a huge leap of faith to say that no discrete GPU is the obvious choice.
     
  10. AzBat

    AzBat Agent of the Bat
    Legend Veteran

    Joined:
    Apr 1, 2002
    Messages:
    5,942
    Likes Received:
    1,739
    Location:
    Alma, AR
    Nope, I was not suggesting anything nefarious like that. Just that _IF_ IBM is abandoning the Cell, where do the PPE and Xenon CPUs stand for future implementations? Can Microsoft continue to evolve their Xenon CPU core without IBM working on future Cell implementations? Does their patent on "System and method for parallel execution of data generation tasks" preclude them from using the Xenon CPU core or any derivatives? Nothing about riding the coat-tails of Sony's Cell development. They already got all the advantages they were going to get from that. ;)

    Tommy McClain
     
  11. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    Yeah, maybe LRB isn't the best role model, but Intel is building on the x86 legacy because they are more or less forced to, and the jury is still out on how successful it will be.

    I don't think we should point the finger at the current GPGPU solutions and say they are not fit to be the CPU in consoles, because they are obviously not designed with that in mind. But how much would it take to make them fit? I think it is pretty obvious that generic SIMD processing units are where most of the benefits lie when scaling processing power, and if those are shareable between game logic and graphics calculations it will allow very high utilisation. If you truly need some old-fashioned, bog-standard CPU core, just slap one or two PPU cores (or whatever) in one corner of the die and be done with it.
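
    To make the shareable-SIMD idea concrete, here's a tiny C sketch with SSE intrinsics (purely illustrative; a console part would expose VMX/SPU-style vectors instead, and the function names are made up). The same 4-wide vector unit services both a physics-style update and a colour blend:

```c
/* Illustrative only: the same 4-wide SIMD hardware handles "game" math
 * and "graphics" math. SSE intrinsics stand in for whatever vector ISA
 * the chip actually exposes. */
#include <xmmintrin.h>

/* physics-flavoured: v += (-k * x) * dt, four bodies at once */
static __m128 integrate(__m128 v, __m128 x, float k, float dt)
{
    __m128 force = _mm_mul_ps(_mm_set1_ps(-k), x);
    return _mm_add_ps(v, _mm_mul_ps(force, _mm_set1_ps(dt)));
}

/* graphics-flavoured: lerp between two RGBA colours */
static __m128 blend(__m128 a, __m128 b, float t)
{
    __m128 vt = _mm_set1_ps(t);
    return _mm_add_ps(a, _mm_mul_ps(vt, _mm_sub_ps(b, a)));
}
```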

    If we look at the past there is a clear trend: we once had discrete floating-point units (x87, anyone?) and discrete memory controllers, so the next in line for integration is obviously the GPU. Or is it the CPU? Who is eating whom?
     
  12. Mobius1aic

    Mobius1aic Quo vadis?
    Veteran

    Joined:
    Oct 30, 2007
    Messages:
    1,649
    Likes Received:
    244
    What is the feasibility of a Larrabee-only system? Could games be "highly multithreaded" to operate across a dozen cores, or is Larrabee's PIII-derived core design really aimed at just being incredibly parallel? Also, would Larrabee need (assuming it doesn't have one yet) a fairly large L3 cache to do general-purpose work efficiently while balancing the graphics load? Whatever the case, Larrabee is an interesting idea, being a large homogeneous chip that can handle pretty much anything, but I can see devs complaining about having to make their games run across possibly dozens of threads just to orchestrate themselves. Then you have the load-balancing issue.
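
    For a sense of what "highly multithreaded across a dozen cores" could look like, here's a minimal job-queue sketch in C with pthreads (all names and sizes are made up for illustration; compile with -pthread). Workers pull entity updates off a shared counter until the frame's work runs out, which is also the crudest possible form of load balancing:

```c
/* Hypothetical frame-update job queue: a fixed pool of workers drains a
 * shared counter of jobs. Names and sizes are illustrative only. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 12      /* "a dozen cores" */
#define NUM_JOBS    4096    /* e.g. entities to update this frame */

static int next_job = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void update_entity(int id) { (void)id; /* game/graphics work goes here */ }

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int job = next_job < NUM_JOBS ? next_job++ : -1;
        pthread_mutex_unlock(&lock);
        if (job < 0) break;          /* queue drained: crude load balancing */
        update_entity(job);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; ++i)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; ++i)
        pthread_join(threads[i], NULL);
    puts("frame done");
    return 0;
}
```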
     
  13. upnorthsox

    Veteran

    Joined:
    May 7, 2008
    Messages:
    1,909
    Likes Received:
    232
    Whatever shortcomings Power7 ends up having, performance per transistor is not going to be one of them. It's the same as or higher than Nehalem's, but at half the transistors and a third of the mm². Oh, and it's out-of-order too.
     
  14. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,686
    Likes Received:
    11,134
    Location:
    Under my bridge
    That's true, and I expect that eventually we'll have a single-core, versatile architecture. However, I'm not convinced we're there yet. Cost issues mean you'll likely get more available transistors from two dies than from one mammoth one, and more transistors means more performance. If everyone aims to go cheap next gen, a single core may make sense. Otherwise a discrete 'GPU' coupled with a normal processor will offer better performance. Of course 'GPU' won't be accurate, as that processor will be doing all sorts of work, but it will have hardware for dealing with graphics workloads that the 'CPU' side of the system won't have.
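
    A back-of-envelope with the simple Poisson defect model (defect density D, die area A, usable wafer area W; the symbols are just for illustration) shows why two smaller dies tend to net more working silicon per wafer than one mammoth die:

```latex
% A die of area A is defect-free with probability e^{-AD}.
\[
\text{one monolithic die of area } 2A:\quad \frac{W}{2A}\,e^{-2AD}
\qquad
\text{two dies of area } A\text{, paired after test:}\quad \frac{W}{2A}\,e^{-AD}
\]
\[
\frac{\text{split}}{\text{monolithic}} = \frac{e^{-AD}}{e^{-2AD}} = e^{AD} > 1
\]
```

    The gap grows with die area and defect density, which is exactly the mammoth-die cost problem.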
     
  15. Mobius1aic

    Mobius1aic Quo vadis?
    Veteran

    Joined:
    Oct 30, 2007
    Messages:
    1,649
    Likes Received:
    244
    Power7 is a freaking beast. 8 cores??? I think I just **** my pants.
     
  16. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    I was not talking about Nehalem. Please also keep in mind that when looking at transistor numbers from Intel and IBM, they rarely account honestly for cache transistors. A lot of the Power7 performance also comes from some well-designed, mammoth-sized caches. I think the Power7 configuration uses an off-die L3 cache of 32 MB! You'll have more than one billion extra transistors just there if they use 6T SRAM.
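
    For what it's worth, a quick 6T-SRAM back-of-envelope (ignoring tags, ECC and peripheral logic) backs up the "more than one billion" figure:

```latex
\[
32\,\text{MB} \times 8\,\tfrac{\text{bits}}{\text{byte}} \times 6\,\tfrac{\text{transistors}}{\text{bit}}
= 32 \times 2^{20} \times 48 \approx 1.6 \times 10^{9}\ \text{transistors}
\]
```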
     
  17. Weaste

    Newcomer

    Joined:
    Nov 13, 2007
    Messages:
    175
    Likes Received:
    0
    Location:
    Castellon de la Plana
    How accurate this is, I don't know, but...

    http://realworldtech.com/page.cfm?ArticleID=RWT081209143650&p=2

    Now, replace the VMX unit in each core with an SPU, and... :wink:
     
  18. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    If any lesson can be learned from this generation, I think it is this: going cheap is not a bad idea.

    Maybe they can have two discrete chips of the same type in a first design to help yield, to be merged at a smaller process. But I seriously don't think that's an attractive alternative, as it is probably easier to just add redundant processing units to the original design to help yield and keep the number of components down.

    Maybe it is too early right now for mainstream CPU-GPU integration; by 2012 I think not. By then we will have seen AMD Fusion in the flesh, and by the specs it looks like a pretty capable device. It should be able to satisfy a lot of current-gen PC games, which means it is way overspecced for most normal PC users of today.
     
  19. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    I apologise, I was thinking of the Power6 architecture and did some sloppy googling. But replace the external L3 SRAM cache with an on-die L3 eDRAM cache and my point still stands: a lot of the performance comes from that cache.

    Wouldn't it be awesome if IBM's eDRAM technology were freely available and working on a bog-standard CMOS process? I do hope a comparable technology will be available sometime in the future; the benefits are obvious. The engineers at IBM are really awesome. :smile:
     
  20. Tahir2

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    2,978
    Likes Received:
    86
    Location:
    Earth
    Hasn't it been proven that one mammoth die is costlier than two smaller ones?
     