Kutaragi talks CELL technology @ TGS2002

Discussion in 'Console Technology' started by Guest, Sep 23, 2002.

  1. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    I have no idea if the basic CELL would have enough computational power to pull off real-time raytracing and/or radiosity in games of medium complexity.

    I doubt it.

    Perhaps though, the Playstation3-specific version of CELL, which is rumored to have many CELLs together (and thus a huge number of cores), might be sufficient for some form of RT or radiosity.

    I would imagine that PS4 will have the kind of processing muscle required for full-on raytracing in very complex games at high resolutions, but that's about 8 years away (if PS3 is 3 years away, add another 5 years for PS4).

    With PS4, we might be into the multi PetaFlop range since PS3 is aiming for multi Teraflop performance.

    all highly speculative though.
     
  2. Vince

    Veteran

    Joined:
    Apr 9, 2002
    Messages:
    2,158
    Likes Received:
    7
    Mega;

    Cell is the name of the architecture. I was referring to the idea behind using a cellular architecture; it was originally proposed at IBM as a raytracer. People are calling different things a 'cell', but don't. No offense, but stop speculating. You're not getting it... yet.
     
  3. Panajev2001a

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,187
    Likes Received:
    8
    that might be why it didn't end up as a raytracer, went back into the shop, and started making public appearances again as Blue Gene's basic processors ;)
     
  4. randycat99

    Veteran

    Joined:
    Jul 24, 2002
    Messages:
    1,772
    Likes Received:
    12
    Location:
    turn around...
    marconelly! just cited that realtime raytracing was on the verge of possibility on a mere 1 GHz PIII machine. I think that makes it fairly plausible that a hypothetical PS3 could do it in realtime with a TeraFlop of performance, or even some sub-TeraFlop amount (should the claims be a bit, err, overambitious ;) ). Ultimately, I'm not sure it's worth it to piss away all of that power just to do more realistic lighting in a game, if something similar can be simulated (as we do now). Use the power for more crazy effects and animation, but maybe I'm just not seeing the full potential of realtime raytracing.
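    For anyone wondering what "realtime raytracing" actually has to compute per pixel, here's a minimal toy sketch (my own illustration, not anyone's actual engine) of the ray-sphere intersection test a raytracer evaluates for every ray it casts:

    ```python
    import math

    def intersect_sphere(origin, direction, center, radius):
        """Return the distance t along the ray to the nearest hit, or None.

        Solves |origin + t*direction - center|^2 = radius^2, a quadratic
        in t; direction is assumed to be normalized.
        """
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None  # ray misses the sphere entirely
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

    # A camera at the origin looking down +z at a unit sphere 5 units away:
    t = intersect_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0)
    # t == 4.0: the ray hits the near surface of the sphere
    ```

    That's a couple dozen FLOPs per test, and it runs once per ray per object (before any spatial acceleration), which is why resolution and rays-per-pixel dominate the cost.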
     
  5. PC-Engine

    Banned

    Joined:
    Feb 7, 2002
    Messages:
    6,799
    Likes Received:
    12
    If you look at that simple scene I posted, you'll understand the significance of raytracing. To do it right with radiosity, photon mapping, caustics, etc. at 60 fps requires more than a mere 1 GHz CPU, especially for highly complex scenes. That little demo looks nothing like that global illumination render, and that render only contains a sphere and cube. Though, looking at the actual rendering times for that shot, it seems a 1 TFLOPS CPU could do it.

    I guess the ultimate goal in CGI is to be able to model the real world in realtime. With realtime raytracing and billions of polygons around the corner, there's really no need for texture mapping.
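    As a rough sanity check on that 1 TFLOPS figure, here's my own back-of-envelope arithmetic (the rays-per-pixel counts are just assumptions for illustration) for how many floating-point ops such a chip would leave per ray at 60 fps:

    ```python
    def flops_per_ray(width, height, rays_per_pixel, fps=60, chip_flops=1e12):
        """Floating-point ops available per ray on a hypothetical 1 TFLOPS
        chip, given resolution, frame rate, and rays cast per pixel."""
        rays_per_second = width * height * rays_per_pixel * fps
        return chip_flops / rays_per_second

    # 640x480 with a single primary ray per pixel leaves ~54,000 FLOPs/ray;
    # 16 rays per pixel (shadows, bounces, GI samples) cuts that to ~3,400.
    for rpp in (1, 16):
        print(f"{rpp:2d} rays/px: {flops_per_ray(640, 480, rpp):,.0f} FLOPs per ray")
    ```

    A few thousand FLOPs per ray is tight once you add photon mapping or caustics on top, which is why the simple sphere-and-cube case and "highly complex scenes" are such different propositions.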
     
  6. fbg1

    Newcomer

    Joined:
    Oct 8, 2002
    Messages:
    135
    Likes Received:
    0
    Location:
    Ithaca, NY USA
    A few questions:

    Will PS3 Cell actually be multiple processors on a single die, each with multiple cores, or a single processor with multiple cores? Assuming each has the same transistor count and die size, what would the differences be? On-die memory and bus architecture? Or is enough info even known to speculate on that? Anyway, the former sounds too complex & expensive for a $300 console.

    Addressing some of the earlier thoughts in the forum about grid gaming and farming out world simulation tasks to network nodes: the naysayers claim that bandwidth is not sufficient or networks are too laggy for this. However, PS3 is due out in 2005, so perhaps Sony is counting on Japan's high-speed network infrastructure - Terabit Highway, I think it's called - due around 2005 as well.

    Maybe that will allow PS3 to accomplish some of Sony's claims for it.

    Finally, since PS2 has the Emotion Engine and Graphics Synthesizer, won't PS3 therefore have the EE2 and GS2, not the EE3 and GS3?
     
  7. clem64

    Newcomer

    Joined:
    Jun 29, 2002
    Messages:
    170
    Likes Received:
    1
    EE2 and GS2 are only upgraded versions of the original components. GS2 has 32MB E-VRAM. I don't know how different the EE2 is. They are used in some sort of high-end graphics workstation (GSCube??).
     
  8. Vince

    Veteran

    Joined:
    Apr 9, 2002
    Messages:
    2,158
    Likes Received:
    7
    What I've been thinking of lately is: does anyone else see a resemblance to the former PixelFusion design? It was a massive SIMD array with local eDRAM that, oddly, went from a 3D solution to a network processor, IIRC.

    Granted, it was a fully SIMD array (not cellular) and promoted a few years before its time - lithography couldn't support the die size needed for good performance - but I keep thinking of this idea when looking at Cell.

    Anyone even remember this?
     
  9. Vince

    Veteran

    Joined:
    Apr 9, 2002
    Messages:
    2,158
    Likes Received:
    7
    Um..er.. ok.. I'll bite

    The basic idea behind Cell, or any cellular computing design, is to design a compact core that has a highly efficient transistor-to-performance ratio, with the bare minimum instruction set and other features (prediction, etc.). You then put a bunch of these cores [present talk is of 16-24 - which is what Blue Gene, Cell's predecessor, used] together on one die with some local RAM, control logic, an inter-core communication fabric, etc.

    At the present time, with present architectures, the uniprocessor approach - where you have one large core with many instructions (even RISC would be large by comparison) - is reaching a point of diminishing returns when looking at performance versus transistor count/die size. Research done by IBM and several universities shows that a cellular model is much more efficient and can extract more performance per die area than a uniprocessor. Perhaps a departure from the current approach and adoption of models like Slipstreaming or other ideas will boost the uniprocessor approach, but at present it sucks looking forward.
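    The decomposition that makes a cellular design work - many small identical cores, each handed its own chunk of the problem with only local data - can be sketched in ordinary Python (a toy illustration on my part, not the actual Cell hardware; the 16-core count is borrowed from the 16-24 figure above):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    N_CELLS = 16  # matches the low end of the 16-24 core figure

    def shade_strip(strip):
        # Stand-in for one cell's workload: shade a strip of scanlines
        # using only local data -- no communication with other cells.
        return [(y, (y * 7) % 256) for y in strip]

    def render(height):
        # Control logic: carve the frame into N_CELLS interleaved strips,
        # hand one strip to each "cell", then gather the results --
        # playing the role of the inter-core communication fabric.
        strips = [range(i, height, N_CELLS) for i in range(N_CELLS)]
        with ThreadPoolExecutor(max_workers=N_CELLS) as pool:
            parts = pool.map(shade_strip, strips)
        frame = dict(pair for part in parts for pair in part)
        return [frame[y] for y in range(height)]

    frame = render(480)  # 480 scanlines, shaded 16 strips at a time
    ```

    The point is that each cell's work is embarrassingly independent, so performance scales with core count rather than with the complexity of any single core - exactly the transistor-to-performance trade described above.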

    A lot of noise and speculation. But, basically, the price per die is dependent upon lithography and how far they decided to push it. And, unlike others, I don't believe they'll use multiple "cells" as some propose.. heh

    The naming scheme dates back to ~'99 when Sony outlined their roadmap to PS3. It may or may not have changed with time and advances, I'm not sure, but the plan was as follows: the Emotion Engine was in PS2. EE2 was built on the EE architecture, designed by Toshiba and to be produced in 2002 for the reasons Clem64 stated. EE3 was to be a new architecture, several hundred million transistors IIRC, and be the centerpiece of PS3.

    Sony R&D seems to be big on proof-of-concept and phased development. GSCube (in Okamoto's words) was proof of parallelised rasterization, PS2 Linux (as shown by the slide Ben posted for me) was to test Linux on a real-time distributed network... etc.

    Would anyone like to add?

    PS. I posted this first, but edited it with the post above.... arghh
     
  10. fbg1

    Newcomer

    Joined:
    Oct 8, 2002
    Messages:
    135
    Likes Received:
    0
    Location:
    Ithaca, NY USA
    Interesting. What is the performance part of that ratio measured in?
     