Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. Mobius1aic

    Mobius1aic Quo vadis?
    Veteran

    Joined:
    Oct 30, 2007
    Messages:
    1,715
    Likes Received:
    293
    Well, it was to Sony's advantage until the PS3, with the PS1 and PS2 being the best-selling systems in history. You can either love Sony's approach because it's new and allows for some great, extremely advanced results during the system's life, or it can be a real pain for developers, as with Cell. Cell is a very interesting case of this. We can either hate Sony for making a chip that requires some very specialized ways of programming, or we can thank Sony for force-feeding parallel computing down the throats of developers, who need to learn it anyway. Plus Cell is a freaking GFLOPS beast :D
     
  2. WhiteCrane

    Newcomer

    Joined:
    Dec 10, 2007
    Messages:
    36
    Likes Received:
    0
    Well, PowerPC is dev-friendly and stronger than Intel's x86.

    I just don't want the same CPU in my PC as in my consoles. They should have something more specialized.
     
  3. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
    Can't see it happening. There are technical obstacles, and even bigger business ones.

    Intel won't allow anybody else to use their IP, and Microsoft won't repeat the mistake of the first Xbox by being dependent on a single external source of components; it'll want to own the IP to all the central ICs in its system.

    Cheers
     
  4. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    If IBM can handle the IP issue, why can't Intel?

    If Intel really wants a piece of the action, maybe they will change their way of thinking.
     
  5. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,661
    Likes Received:
    1,114
    IBM offers a foundry service because their Fishkill fab would otherwise run at very low utilization levels. To help customers build designs for their foundry, they have a design center that assists them in developing IP.

    Despite all its efforts, IBM's microelectronics division still runs at a loss every year. Meanwhile Intel, the world's largest microelectronics producer, is busy making loads and loads of money selling actual silicon.

    So I think Intel feels confident that their way of doing things is the right one.

    Intel only wants a piece of the action if there are decent margins on the devices. That is exactly the opposite of what Microsoft wants.

    The only way Microsoft would end up using Larrabee would be if it was so mind-bogglingly good that they would have to, and there's really little to indicate that it is, other than a few marketing PowerPoint slides.

    Cheers
     
  6. Crossbar

    Veteran

    Joined:
    Feb 8, 2006
    Messages:
    1,821
    Likes Received:
    12
    I didn't know that. I thought they did pretty well, but maybe that was the design division and not the production facilities.

    Maybe if Intel would like some help getting fast and broad acceptance of a new architecture, they would be able to accept lower margins for a certain piece of the market, but I can see there are a lot of conflicting interests here.

    I think I see what you mean.
     
  7. Color me Dan

    Regular

    Joined:
    May 19, 2007
    Messages:
    300
    Likes Received:
    1
    Location:
    Sweden
    Considering how much Intel sells right now and is bound to sell, I don't see how they could support an entirely new branch of products. If Larrabee isn't "just right" for MS, I don't think they'll change the design to accommodate them. Of course it is worth trying, but I don't think Intel would get depressed and commit suicide should they not get it in the next Xbox :smile:

    IBM, on the other hand, has both the manpower and production facilities to accommodate MS, and both being very shrewd businesses, they would no doubt find a very nice solution. Hypothetically, IBM could let MS have a go at designing a chip themselves with their PowerPC core as a base, then co-own the design and produce it. Or MS could just let IBM do all the work and then buy it.

    Even though I'm not a coder myself, I would imagine this constant change in coding practices takes its toll every generation, and with costs rising I would think it prudent of MS to keep the basics similar or the same and then build on top of that. Free backwards compatibility (er, well... more or less in this case) and more grunt in the machine = happy devs and green grass?
     
  8. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    I agree with both of you (Color me Dan and Gubbi). Negotiation between MS and Intel could (edit: will is more likely :lol:) be tough.

    From a technical point of view, do you (both of you, others welcome) think that if Larrabee acts as the CPU, four hardware threads per core would be useful?

    On the other hand, I think IBM can provide something close.
    The most interesting feature in Larrabee, in comparison with say Cell, is the cache architecture.
    In the PowerPoint slides, Intel says they will provide low-latency L1 and L2 caches, which is one of the main advantages of the LS in Cell.
    The SPEs seem to perform very well on anything that fits in their LS.

    If IBM manages to provide the kind of latency advertised by Intel, MS could consider a design mixing what makes Xenon more easygoing with what makes Cell so efficient.
    I remember the discussion "future of console CPUs, will they go OoOE, etc." but I can't manage to find that old thread... In fact it would be quite interesting to raise that thread from the dead now that we have more clues about what Intel will offer.
    Nick was speaking of mini x86 cores; maybe MS could do with a bunch of PPC mini cores (closer to SPEs but with a different ISA... PPC obviously :lol:). Could they do well enough without SMT or OoOE if they are granted low latency to L1 AND a pretty huge and fast L2?
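    [The LS advantage described above comes down to software-managed double buffering: the SPU crunches one tile of data while the DMA engine streams in the next, hiding main-memory latency. A minimal sketch of that buffering pattern in plain C; this is not Cell SDK code, memcpy stands in for the asynchronous DMA, and the tile size is an arbitrary assumption:]

```c
#include <string.h>

#define TILE 256  /* elements per "local store" tile; arbitrary for illustration */

/* Process one tile resident in local store: here, just sum it. */
static long process_tile(const int *tile, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += tile[i];
    return sum;
}

/* Double-buffered streaming: fetch tile k+1 while computing on tile k.
 * On a real SPE the memcpy would be an asynchronous DMA transfer that
 * overlaps with compute; this scalar version only shows the control flow. */
long stream_sum(const int *src, int n) {
    int buf[2][TILE];
    long total = 0;
    int cur = 0, done = 0;
    int first = n < TILE ? n : TILE;
    memcpy(buf[cur], src, (size_t)first * sizeof(int)); /* prefetch tile 0 */
    while (done < n) {
        int have = (n - done < TILE) ? n - done : TILE;
        int next_off = done + have;
        if (next_off < n) {                             /* "DMA" the next tile */
            int want = (n - next_off < TILE) ? n - next_off : TILE;
            memcpy(buf[cur ^ 1], src + next_off, (size_t)want * sizeof(int));
        }
        total += process_tile(buf[cur], have);          /* compute on current */
        done = next_off;
        cur ^= 1;
    }
    return total;
}
```

    [On real hardware the payoff comes from the transfer running concurrently with process_tile, which is why data that fits the LS tiles runs so well.]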
     
    #148 liolio, Dec 14, 2007
    Last edited by a moderator: Dec 14, 2007
  9. WhiteCrane

    Newcomer

    Joined:
    Dec 10, 2007
    Messages:
    36
    Likes Received:
    0
    Where is AMD in this discussion? It's certainly possible they could surpass Intel. Any business can do anything. If Sony went to AMD and guaranteed them contracts in exchange for beating Intel, rest assured, AMD would beat Intel if the price was right.
     
  10. zifnab

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    57
    Likes Received:
    0
    I'm just wondering... is it worth beefing up the CPU considerably? If Havok, for example, develops dedicated physics hardware at a reasonable price and performance for that time, wouldn't it be better to focus on the GPU and dedicated physics processing, while going with a moderate increase in a CPU intended for more general processing such as AI, sound, OS, and compression?
     
  11. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    [modhat]As the Larrabee thread evolved into general future CPU discussion, I felt all the discussion should be kept in the one thread.[/modhat]
     
  12. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    Great initiative, Shifty!

    I tried to find the old thread "future of console CPUs, will they go OoOE etc." but I've been unsuccessful so far.
    Do you know if this thread still exists?
    There were a lot of insightful comments from 3dilettante, Nick, DemoCoder, and some others I've forgotten. I would like to read it again.
     
  13. ADEX

    Newcomer

    Joined:
    Sep 11, 2005
    Messages:
    231
    Likes Received:
    10
    Location:
    Here
    That's essentially what Cell is, and to a lesser degree Xenon also. Intel's new chip is not a traditional processor either.
     
  14. ADEX

    Newcomer

    Joined:
    Sep 11, 2005
    Messages:
    231
    Likes Received:
    10
    Location:
    Here
    For 65nm it's not just an IBM fab, it's owned by a consortium and operated by IBM.

    Are you sure that's true? They did make a loss for a while, but they made a healthy profit in at least part of last year. That said, their business is quite different from Intel's, so they're not directly comparable; it depends on how you count things. Intel just churns out chips, all their effort goes into that, and it's easy to count because you just count the chips.

    IBM sells all sorts of silicon-related services and IP, as well as chips, to themselves and others. Should the profits made on POWER6 systems be counted as systems or as part of microelectronics? If they were counted as microelectronics, it'd be making huge profits.
     
  15. zifnab

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    57
    Likes Received:
    0
    I understand that Cell is good at things like compression and physics, but it doesn't seem well suited to general-purpose processing like AI and certain OS tasks. So essentially I'm wondering whether it might be more suitable to split those two aspects into, say, a Havok processor + a CPU suited to general processing.
     
  16. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    Two points :

    1) Cell not being suitable for some tasks like AI seems to be a common misconception. It's a bad fit for lots of algorithms, but when you use (new) algorithms that fit the hardware to do the same job, it does very well. GPGPU is an excellent reference for this. Lots of 'bad' memory problems like searches have been re-engineered to fit Cell's architecture to excellent effect. At this point in time it's not clear which problems can't be matched successfully to Cell's design.

    2) OS tasks are lightweight. Look at the CPU usage on your PC when not running much and you'll see it's barely used. I've just opened a doc in Word and done a bit of editing, and it hovers around 7% on an Athlon 2500. The CPU only gets taxed on a real workload, processing data, and that's when you want the most possible power. So a lightweight CPU to handle the OS, with optimized routines to use the maths grunt when needed, and a truckload of processing power to churn through the data-hungry applications makes absolute sense. Which is why CPU manufacturers are going that way! That's why Intel is producing Larrabee, and STI produced Cell: to target the modern applications that are all data mashers.
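    [The point about re-engineering "bad" memory problems can be made concrete with search. A textbook binary search branches unpredictably on every step, which stalls an in-order core; the same job over a sorted array can be written branch-light so the compiler emits conditional moves instead. A sketch in plain C, as an illustrative rewrite rather than code from any actual Cell title:]

```c
/* Branch-light lower_bound over a sorted array (n >= 1): the ternary
 * "base += half or 0" pattern compiles to a conditional move rather than
 * a hard-to-predict branch, which suits in-order cores like the SPE that
 * pay heavily for mispredicts. Returns the index of the first element
 * that is >= key (n if all elements are smaller). */
static int lower_bound(const int *a, int n, int key) {
    const int *base = a;
    while (n > 1) {
        int half = n / 2;
        base += (base[half - 1] < key) ? half : 0;  /* no data-dependent branch */
        n -= half;
    }
    return (int)(base - a) + (base[0] < key);
}
```

    [The access pattern is also fully predictable from the array bounds, so the tiles being searched can be streamed into local store ahead of time, unlike a pointer-chasing tree.]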
     
  17. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    With respect to CPU speed, do we really need to have >3.2GHz :?: Suppose they go with more of the same cores in the next gen; what sorts of operations would be limited by CPU clock? (Obviously ignoring any IPC improvements or other enhancements.) More physics interactions? I guess what I'm getting at is: what can they not do now that could be achieved with a faster clock?
     
  18. pascal

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,968
    Likes Received:
    221
    Location:
    Brasil
    Maybe AI.

    My guess is that developers are dividing the game engine into modules, with each module (AI, physics, etc.) running on one SPE.

    Maybe the next Cell should have a faster clock and a higher IPC.
    Maybe at 32nm it could be possible to have 4 improved PPEs (double the cache) and 32 OoOE SPEs, all at 9GHz and with double the IPC for SPEs in scalar operations. The trick is to not have a highly optimized OoOE, and my guess is they can double the IPC without much of a space increase. Also, maybe doubling the local memory (512k) could help.

    Also, more instructions could be included. What about dot products? And double precision with a smaller penalty?

    My guesstimate is that at 32nm each SPE with the characteristics above could be only 3mm². And each PPE 7mm² :cool:

    At least the target resolutions for games in 2011 will be 720p and 1080p, the same as now. Maybe a future stream-based GPU could be used for more extreme levels of parallelism.
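    [For reference, a dot-product instruction would fuse a chain of dependent operations into one. Here is what such a single hypothetical op (call it dp4; the name is made up for illustration) would have to replace, sketched in scalar C:]

```c
/* 4-wide dot product: four independent multiplies followed by three
 * dependent adds. On a SIMD unit without a dot-product instruction,
 * the reduction is a cross-lane latency chain that software pipelining
 * must hide; a dedicated dp4 op would collapse that chain. */
static float dp4(const float a[4], const float b[4]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}
```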
     
  19. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,106
    Likes Received:
    16,898
    Location:
    Under my bridge
    OoOE on the SPE makes little sense given the nature of its workload. Tailor-made algorithms will have far higher performance per transistor and per watt than OoOE.
    Why add more instructions when the same workload can be done in the same amount of time with the limited instruction set? That was the whole principle of RISC. Double precision with a lower penalty is already in there for the next-gen Cell. They've got 50% DP performance, and IIRC they are looking at much closer to 100% DP performance, although this is pretty irrelevant for games. 9GHz isn't going to happen! Not without liquid nitrogen cooling or a fundamental shift in technology.
    GPUs are already stream processors. And they don't have OoOE either ;)
     
  20. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    What is the split between 1080p displays and 720p displays being sold :?: Personally, I'm not convinced 1080p should be targeted by devs (not including arcadey games :p) .

    Any thoughts on what type of AA should be used :?: CFAA seems to require a bit of tweaking to get optimal results, and somehow it seems like a concept to which devs would pay the least attention (all considering). CSAA seems to be "set-it-and-forget-it-and-looks-bloody-good".
     