Cell/CPU architectures as a GPU (Frostbite Spinoff)

Discussion in 'Console Technology' started by Arwin, Mar 10, 2011.

  1. i_am_interested

    Newcomer

    Joined:
    Dec 30, 2009
    Messages:
    28
    Likes Received:
    0

    I've seen no concrete evidence, so it's kind of confusing.

    Early rumors about the PS3 indicated a dedicated chip for graphics, which may or may not have been a modified Cell.

    However, glancing over my notes shows that the earliest reports of Cell that contained specs indicated 72 cores; if that were the case, there wouldn't need to be a GPU.

    Seems to me that Cell was designed to do everything.
     
  2. patsu

    Legend

    Joined:
    Jun 25, 2005
    Messages:
    27,709
    Likes Received:
    145
    Yap! From helping the GPU and motion gaming, to high-performance computing and running servers in the datacenter.
     
  3. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    I agree with this.
    Agreed too, hence why I believe they missed an opportunity: with Larrabee they acknowledged that data-parallel processing is becoming relevant to their business, but they took the wrong route again. They were too obsessed with x86; maybe they got cold feet after the Itanium experience?
    Agreed again. But to be clear, my POV is that Intel needed to push something more like a Cell than a Larrabee, aiming not at graphics but at data-parallel processing / number crunching.
    When I say tackling the market from the low end, I mean low-end laptops/desktops (not the mobile market), where 3D performance is really irrelevant; I'm not sure some vector processor cores would be at that much of a disadvantage versus tiny GPUs, ceteris paribus. On the other hand, Intel could have been shipping "one chip" systems, which would have been a clear competitive advantage. Again, I'm speaking of Intel doing its own Cell, but a more convenient Cell.
    Well, that's about Larrabee, not really the point I tried to make: Intel could have made better choices.
    In the low-end market, proper vector/throughput-oriented cores (or, as I said, narrow vector processors) could have been a huge competitive advantage, and I don't think Intel would have failed to deliver. By not aiming at plain 3D rendering, software would have been less of a problem: what is accelerated now by GPUs using CUDA/OpenCL/DX Compute would have been written in a standard language like C (see the sketch below). Clearly I don't believe Intel cares much about the graphics market; they want to prevent GPUs from biting into their CPU revenue, and going straight at GPUs has clearly proven the wrong strategy. Intel had a huge opportunity: for some years now, the average cheap PC/laptop could have shipped without a proper GPU, with one chip. Moreover, most consumers don't need 4 cores; they would have had a better return from 2 cores plus some throughput-oriented cores. Even now, GPUs are completely irrelevant to most of the PC market, i.e. I'm not sure people who buy an i3-something would care whether there was a proper GPU in the computer, as long as it fits their needs (which don't imply 3D games).
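    To make that contrast concrete, here is a minimal sketch (the saxpy routines and all names are purely illustrative, not anything Intel shipped): the same computation written once as plain, auto-vectorizable C of the kind such throughput cores could run, and once as the OpenCL C kernel a GPU requires today.

    /* Plain C: a vectorizing compiler can map this loop directly onto
       wide vector units; no separate runtime or kernel language needed. */
    void saxpy_c(int n, float a, const float *x, float *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* OpenCL C: the same math, but it lives in a separate kernel
       language and needs a host-side runtime and explicit buffers. */
    __kernel void saxpy_cl(float a, __global const float *x, __global float *y)
    {
        int i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }

    The math is identical; the difference is the programming model, which is the whole point here.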

    Not really my point. Intel is huge, and people (most of the market, actually) don't care about 3D performance. A Cell-like model may have made a lot more sense for the needs of their customers and could have been a better defense against the (relative) trend of GPGPU. More importantly, I don't think Intel would have failed to reach such goals. Developers (for games or not) would not have passed on using the cores.
    Not what I meant either; I'm not talking about fighting Nvidia/ATI with Flash games. Social gaming (and related activities) is on the rise, and there is a huge market there: those people don't invest in potent GPUs, but publishers may be willing to push stuff that looks a bit better. Keeping in mind Intel's impressive market share, they could have started using these generic vector/throughput-oriented cores to push more sophisticated games, keeping in mind too that my wife (just an example; I could have said her dad playing FarmVille) doesn't have the same requirements as me/you. Such cores would have other advantages too: you would code in a pretty standard language plus some optimized libraries, the ISA is "almost" set in stone, and investments made in coding are perennial.
    Keep in mind too that I'm starting to diverge from the (fair) remark made to Nick about why "such chips" don't exist. Intel's best way to fight GPUs could have been: you don't need a GPU, since "core" gamers are such a tiny part of the spectrum of their customers.
     
    #163 liolio, Apr 11, 2011
    Last edited by a moderator: Apr 11, 2011
  4. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    There are aspects of Larrabee that are in the same vein as Cell. The most obvious graphics-specific parts of Larrabee were isolated in the TMU blocks. Aside from some number of graphics-related instructions for the x86 cores themselves, Larrabee was a data-parallel and throughput-oriented design.
    It had a more straightforward instruction cache and relied on the traditional coherent memory space model.
    Cell has been relegated to a very narrow niche, so I'm not certain copying it more than was absolutely necessary would have been a good idea.

    Larrabee was a huge chip, cost-wise it would have been a non-starter. Outside of graphics, Larrabee would have had problems establishing itself. It would have been unable to run most software made since the introduction of the Pentium, and even if it could it would have run all existing software terribly.

    3D graphics was one of the few areas where people would have paid money for a non-standard architecture, since they already do so for GPUs.

    Intel did go through a phase where it was going to market Larrabee for the mainstream graphics market. Then again, it went from max performance to mainstream to max to power efficient to cancelled through its years of delays.
     
  5. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    I agree with that; that's why it could have been an option to design vector processors from scratch.
    Cell's SPUs took pretty extreme design choices. Intel could have pulled off something more convenient to use (while sacrificing some perf per mm²). From a non-technical POV, Intel and IBM don't serve the same markets: Intel CPUs are used for plenty of things, and I believe quite a few of those markets would have benefited from the extra juice provided by some vector processors (not Larrabee 1 cores). There is room for balancing too; Intel need not have chosen a 1/8 ratio between CPUs and VPUs. But from my POV the main thing is still volume: if Intel had announced, say along with the Nehalem release, that every one of their processors was to include some VPU cores, the impact would have been on another scale than IBM launching the Cell. Software developers would not have passed on it.
    I agree.
    I don't agree, as I believe Intel had a shot at including some VPUs in its CPUs; they could have passed on "fusion" chips and been on the market earlier. My belief is that it makes more sense for most consumers to buy dual cores backed by some vector processors than quad cores, and even for gamers, Intel's market share would have ensured that devs used these new resources.
    Indeed, they lacked vision: they wanted x86 everywhere, even where it makes little sense, and they were reluctant to adopt heterogeneous computing.
     
  6. jonabbey

    Regular

    Joined:
    Oct 12, 2006
    Messages:
    809
    Likes Received:
    1
    Location:
    Austin, TX
    Actually, Intel adopted an internal high-speed ring bus in the Sandy Bridge CPUs. One could argue that they were aping Cell with that.
     
  7. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    Well, I would not credit Cell with the paternity of the ring bus.
    Anyway, it's a bit late now. On the x86 market in isolation they could still make the move, but now the mobile market is big, and languages like OpenCL are available on many systems, so it makes sense for developers to consider it.
     
  8. patsu

    Legend

    Joined:
    Jun 25, 2005
    Messages:
    27,709
    Likes Received:
    145
    Part of the reason is that the PPU is too weaksauce. What could we do with the PPU's space if we were to replace it with today's tech?

    Not saying this is all they need to improve Cell. In its current form, it's difficult to go far.
     
  9. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    A non-Power derivative would be good for starters. :p
     
  10. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    A better CPU would have helped, that's for sure. Actually, given the usage made of the AltiVec unit (you gave info on that recently), I would go for a pretty narrow OoO PPC core with SMT and without SIMD.
    Actually, I'm not sure the whole concept of the SPUs is valid. The SPUs do nothing themselves to hide latency; they rely on low-latency LS access and on double buffering, with coders taking care to properly feed them (see the sketch below). They offer impressive perf per watt and per mm², but if adoption of the tech is any clue, it's too much of a bother.
    To take it further would require a complete shift in design philosophy.
    If I were trying to picture what Intel could have pushed instead of Larrabee (and not for the same use, by the way), I'd think of narrow vector processors (4-wide like the SPUs, but handling DP, as in the "Cell v2") supporting a cheap form of multithreading (barrel, or round robin), maybe a cheap form of OoO, good prefetching capability, a coherent memory space, and a weird cache hierarchy: an L0 data cache, L1 I$ and D$, no L2, and a dedicated slice of the L3 (as in Intel's SnB). Something dedicated to short vectors and data manipulation, able on its own to hide quite some latency.
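    For anyone who hasn't written SPU code, the double-buffering burden mentioned above looks roughly like this. It's a rough sketch using the MFC DMA intrinsics from spu_mfcio.h; compute(), the chunk size, and the data layout are placeholders, not code from any shipped title.

    #include <spu_mfcio.h>

    #define CHUNK 4096   /* bytes per DMA transfer (MFC limit is 16 KB) */

    /* Two local-store buffers; DMA targets must be 16-byte aligned,
       128 preferred for full-bandwidth transfers. */
    static char buf[2][CHUNK] __attribute__((aligned(128)));

    extern void compute(char *chunk);   /* hypothetical per-chunk work */

    static void wait_tag(unsigned int tag)
    {
        mfc_write_tag_mask(1 << tag);   /* select this tag group */
        mfc_read_tag_status_all();      /* block until its DMAs finish */
    }

    void process_stream(unsigned long long ea, int nchunks)
    {
        int cur = 0;
        /* Prime the pipeline: start pulling chunk 0 into buffer 0. */
        mfc_get(buf[0], ea, CHUNK, 0, 0, 0);
        for (int i = 0; i < nchunks; i++) {
            int nxt = cur ^ 1;
            /* Kick off the DMA for the next chunk before touching the
               current one, so transfer overlaps with compute; this is
               the manual latency hiding the SPU model demands. */
            if (i + 1 < nchunks)
                mfc_get(buf[nxt], ea + (unsigned long long)(i + 1) * CHUNK,
                        CHUNK, nxt, 0, 0);
            wait_tag(cur);      /* wait only on the current buffer's tag */
            compute(buf[cur]);
            cur = nxt;
        }
    }

    Nothing in the hardware does this for you: drop the prefetch and the SPU simply stalls on the next mfc_get, which is exactly the "coders taking care to properly feed them" burden described above.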
     