Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. Jugix

    Newcomer

    Joined:
    Mar 4, 2008
    Messages:
    102
    Likes Received:
    2
    An idea: PS4 comes equipped with a stock PowerXCell 8i (improved FP performance over the Cell in PS3), but memory, GPU and bandwidth are boosted to 2011-2012 levels. Would anything good come from this concept?
     
  2. cbarcus

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    92
    Likes Received:
    6
    I am unaware of anything that would make it impossible for Microsoft to develop and release a significant "upgrade" of their current console that would retail for $300 by the end of 2010. If Microsoft does this, then Sony is encouraged to do the same (offering substantially more performance for a 30% premium). As far as the technology is concerned, I doubt that Microsoft has anything to gain by waiting another year or two. In fact, I suspect they are hard at work thinking about how to come up with a more scalable architecture to keep up with Sony, and you can probably expect that that will take them more than a couple of years.

    http://www.eetimes.com/showArticle.jhtml?articleID=216500010

    Furthermore, it is quite possible that the step to the 8th generation of consoles will be somewhat less dramatic than the previous step: full backward compatibility by keeping with the same architecture; pretty much the same techniques, the same tools, etc. The economic situation is certainly affecting the industry, but instead of extending the console cycle, I believe it will only diminish the performance jump.

    I'm not so sure about this either. For both Microsoft and Sony, the most significant technological change involves motion control, and of course we've heard a lot about that. Performance improvements at this point can probably be kept quiet, with perhaps only a few first-parties aware of what to expect.

    This discourse is probably little more than mental masturbation, but it is intriguing to try to predict the near future of digital entertainment: to see how far one can extend from what we know right now to something we can't quite see. When things finally come to pass, some of us will look back realizing how obvious everything was. I am hoping, perhaps entirely in vain, to be one of those!
     
  3. cbarcus

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    92
    Likes Received:
    6
    There are several attractions of XDR2:

    1) high bandwidth-per-pin, which lowers manufacturing costs

    2) sustained throughput for data-hungry processors like a hypothetical Cell2

    3) reduced data fetch size, which improves efficiency

    4) overall higher efficiency than alternatives
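
    Point 1 can be made concrete with a back-of-the-envelope comparison. The per-pin rates below are illustrative assumptions (XDR2 was announced at up to 12.8 Gb/s per pin; the GDDR5 figure is a rough contemporary value), not a spec sheet:

    ```python
    # Rough illustration of bandwidth-per-pin (illustrative numbers, not official specs).
    def bandwidth_gbps(data_rate_gbps_per_pin, pins):
        """Total bandwidth in GB/s, given per-pin data rate (Gb/s) and data pin count."""
        return data_rate_gbps_per_pin * pins / 8  # 8 bits per byte

    # Hypothetical XDR2 at 12.8 Gb/s per pin vs. GDDR5 at 5 Gb/s per pin,
    # both over the same narrow 32-bit bus:
    xdr2  = bandwidth_gbps(12.8, 32)   # ~51.2 GB/s
    gddr5 = bandwidth_gbps(5.0, 32)    # ~20.0 GB/s
    print(xdr2, gddr5)
    ```

    The point being that for a target bandwidth, a higher per-pin rate means fewer pins, a narrower bus, and a cheaper package and board.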

    It would appear that 45nm represents a stage on the semiconductor manufacturing roadmap, while 40nm would be Toshiba's actual process. Toshiba, Sony, and IBM are developing a 28nm process that is compared to 32nm on the industry roadmap.

    http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=211600040

    Anyway, the PS3 Cell and the hypothetical Cell2 would be different designs optimized respectively for low power and performance. As for what Cell2 would look like in terms of PPEs and SPEs, I would expect something that is both regular and optimized to take advantage of available resources. The design is supposedly scalable, so multiple Cells on a die with interconnects (ring bus or crossbar) and shared interfaces to memory and the GPU (through FlexIO) seems reasonable and conservative. I do not expect developers will have a difficult time trying to figure out what to do with all those cores considering all the things that can be done with regard to collision, geometry culling, animation, lighting, and what have you. Beyond this, I have no idea.
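
    As a rough flavour of how such work might be spread across many cores, here is a hypothetical job-queue sketch; the task names and worker count are my own invention, and nothing here is Cell-specific:

    ```python
    # Hypothetical sketch: farming per-frame tasks out to a pool of worker cores,
    # the way an SPE-style job queue might be organized. Purely illustrative.
    from concurrent.futures import ThreadPoolExecutor

    FRAME_TASKS = ["collision", "geometry culling", "animation", "lighting"]

    def run_task(name):
        # Placeholder for the real per-system work.
        return f"{name} done"

    with ThreadPoolExecutor(max_workers=8) as pool:   # pretend 8 SPE-like cores
        results = list(pool.map(run_task, FRAME_TASKS))
    print(results)
    ```

    The design question for developers is less "what to run" than how to partition and schedule these jobs so all the cores stay busy.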

    I do not find this compelling enough to suggest that Sony would go with another GPU designer.

    I see EDRAM as a lower-cost substitute for SDRAM that can reduce fetches to main system memory when some limited dataset needs to be repeatedly processed. I can see why it wouldn't make a lot of sense within the PC architecture where system memory is farther away and much less available, so GPUs have their own dedicated memory. I do not design GPUs, so there may be more significant trade-offs involved of which I am completely ignorant. The following is about IBM's recent advances regarding EDRAM, which might have some bearing on the next generation:

    http://arstechnica.com/old/content/2007/02/8842.ars

    http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=197006869

    A lengthy IBM article on the history and merits of EDRAM:
    http://findarticles.com/p/articles/mi_qa3751/is_200501/ai_n9521086/?tag=content;col1
     
    #2443 cbarcus, Jun 29, 2009
    Last edited by a moderator: Jun 29, 2009
  4. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    Very interestingly, Ars Technica hints at eDRAM being used by IBM in CPUs.
    Thus, in a PS4 based on Cell2 and a GT300 derivative, eDRAM might be used in the CPU (especially as SPE local storage) rather than the GPU. (I'm not fond of eDRAM in or alongside the GPU.)
     
  5. grandmaster

    Veteran

    Joined:
    Feb 6, 2007
    Messages:
    1,159
    Likes Received:
    0
    So they're launching in 15 months and not telling anyone about it! So not only are they developing a new console when they have no need to whatsoever, it's not going to launch with any third-party games?!
     
  6. obonicus

    Veteran

    Joined:
    May 1, 2008
    Messages:
    4,939
    Likes Received:
    0
    Maybe with all the Sega talk lately, Sony and MS are going to crib a play from the Saturn playbook. :eek:
     
  7. damienw

    Regular

    Joined:
    Sep 29, 2008
    Messages:
    513
    Likes Received:
    61
    Location:
    LA
    Maybe Sony and MS will combine to make one machine that runs on happy thoughts and pixie dust?

    And Sega and Nintendo will combine and come out with a machine that does full 1080p @ 240Hz and is controlled by thought?
     
  8. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    That would allow awesome-looking games with a terrible framerate. That Cell revision brings good FP64 performance, which helps extend its potential market. A fair amount of FP number crunching can be moved to the CPU, but I feel the PPE would be much of a bottleneck. It already looked weak even next to a Pentium 4, and will look very weak in 2012.
     
  9. milan616

    Newcomer

    Joined:
    Feb 25, 2009
    Messages:
    17
    Likes Received:
    0
    Yes, nothing is stopping them from releasing a new console from a technology standpoint. They do have something to gain by waiting, though, and that is a lot of money. Even an evolutionary upgrade from the 360 will cost them billions in R&D, manufacturing and marketing. Why do that now when they can put it off a few years while raking in cash? They know Sony won't do a new launch any time soon; there is just no reason.

    I guess my question is, why do you so adamantly believe MS will launch a new console next year when almost no one in the industry believes so?
     
  10. cbarcus

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    92
    Likes Received:
    6
    I seem to recall that Microsoft officially announced the 360 about the middle of 2005 (May?) and launched later that year. A spec sheet leaked out about a year before the official announcement, and some details changed by launch (CPU speed & memory). The console was (still?) plagued by quality issues indicative of a rushed design. When did the industry know that Microsoft was launching in 2005, only 4 years after the launch of their original console? I do not know, maybe you do?

    That was a major upgrade from their original console, with 8 times the memory, 20x the theoretical performance for the CPU, 10x(?) the performance for the GPU, etc. I am not expecting their next system to have a CPU with 500+ GFlops of theoretical performance, 4 GB of GDDR5 memory, etc. As a consequence of low-balling the performance of the next iteration (others are expecting something far more powerful later), an earlier launch would fare better against the competition, especially 5 years after the 360 has been on the market. I would not expect the system to launch without 3rd-party support, but with a more modest upgrade, late support might not be a big problem, especially if you consider that the major change involves motion control.

    If Sony is concerned about their market position, then they are probably going to respond to an early Microsoft launch with one of their own (major assumption).

    My guess as far as what Sony is capable of:

    a) By 2010: 32-core Cell @ 40nm (~1 TFlop SP FP?)
    b) By 2011: 64-core Cell @ 28nm (~2 TFlops SP FP?)

    If these implementations incorporate certain changes that IBM included in their PowerXCell 8i variant, performance will be perhaps a bit less than half of that for double-precision (which may be useful in games - I don't know).
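
    For what it's worth, the ~1 TFlop guess in (a) is consistent with simple SPE arithmetic, assuming PS3-like SPEs (4-wide SIMD fused multiply-add, i.e. 8 flops per cycle) and a clock a bit above the PS3's 3.2 GHz:

    ```python
    # Back-of-the-envelope single-precision peak for a hypothetical 32-SPE Cell.
    # All numbers assumed, extrapolated from the PS3's Cell.
    flops_per_cycle = 8          # 4 SIMD lanes x (multiply + add)
    clock_ghz       = 4.0        # assumed clock, a bit above the PS3's 3.2 GHz
    spes            = 32

    peak_gflops = flops_per_cycle * clock_ghz * spes
    print(peak_gflops)           # 1024 GFlops, i.e. ~1 TFlop SP
    ```

    Doubling the SPE count at the same clock, as in (b), doubles the figure to ~2 TFlops.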

    If you were Microsoft considering your current architecture, would you rather compare yourself to (a) or (b)? Even if you had GPGPU capability?


    I am making many assumptions here, any of which could be wrong because I am not in the industry, and I hope no one takes my speculation too seriously!
     
  11. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    They must have known maybe a year before that; here are the launch games:
    Even if they received the final dev kit 3 months before launch, publishers had to know a while ago; I would bet close to a year.
    But MS clearly made a fast move, and fortunately the system was quite standard, with, I guess, decent tools, APIs, etc. In regard to the Xbox "dumping", I guess publishers were not surprised either; even a company as big as MS can't support that level of bleeding for years.
    I don't see why MS would be in a different situation than Sony or Nintendo when it comes to their next system. They will make their choices based on what they want to achieve and what can be done hardware-wise with regard to costs, power consumption and heat dissipation. Most likely by fall 2012 Sony or MS will use a 32nm process for the CPU and 28nm for the GPU. By the way, I don't expect 500 GFlops from their next CPU, but it should be clearer now, after the PR mess we faced once again this gen, that GFlops alone are a poor indicator of CPU performance. If MS sticks with something close to Xenon, they would do better to improve it than to try to augment its throughput at all costs. I don't see in which way a modest update would help late support; basically I expect the CPU to be a pretty standard SMP CPU attached to a pretty standard/slightly modified DirectX 12 GPU. It will be accessible, and it's usual for launch games to not push the system. Say devs were unprepared to deal with 6 cores instead of 3: they would simply make do with three cores. Same thing if they don't know what to do with some extra GPU capability: they will use it as a standard part.

    I don't think that would be a smart move. If Sony sticks with the Cell, they should assume that IBM was right when they wanted fewer SPUs and thus a smaller, more profitable chip (I read that IBM wanted 6 SPUs).
    2 improved PPUs and 12 SPUs should be more than enough muscle.
    Actually, I think Sony will also have a GPU able to run some general-purpose calculations.
    Basically MS would have to balance their CPU/GPU budget depending on how well things run on the CPU or the GPU. I think they would do this within the boundaries they fix for themselves with regard to respective die sizes. Actually, you should consider things the other way around. An optical shrink allows packing in more transistors, but power consumption doesn't go down as fast; thus such a Cell (roughly as big as the original Cell at its launch) would most likely be consistently more power hungry. That would put constraints on the rest of the system. Not to mention that it would cost more to produce and thus affect how much Sony is willing to spend on the GPU. The GPU is important and must not be overlooked, and Sony, I think, will be careful not to price themselves out of the competition this time around.
    Imagine Sony does what you say, and MS comes with a tiny CPU (say they pack four fixed/improved Xenon cores along with 2MB of cache): they have a lot of budget left for the GPU and the RAM, and they could have a much neater package. While a four-core CPU may be enough, Sony may have to spend quite a lot on its GPU to keep up in this regard; along with its supposed Cell2, the system would be consistently more expensive, power hungry, etc.
    Sony has to balance its silicon budget properly or they could be at a graphical disadvantage, a cost disadvantage, or both.
    Nothing wrong with speculation :) (I've done a lot here myself... truth be told).
     
    #2451 liolio, Jun 30, 2009
    Last edited by a moderator: Jun 30, 2009
  12. AzBat

    AzBat Agent of the Bat
    Legend

    Joined:
    Apr 1, 2002
    Messages:
    7,749
    Likes Received:
    4,847
    Location:
    Alma, AR
  13. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    PS3 was already hyped as a cloud computing device (even though that term wasn't known back then and, if I recall correctly, wasn't used).

    Here, that's only a claim to get attention.
     
  14. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,791
    Likes Received:
    1,596
    But obviously that was just marketing hyperbole, far far from reality.
     
  15. cbarcus

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    92
    Likes Received:
    6
    I do not know anything about the original claims, but Folding@home is an interesting and useful application of grid computing, which is related to cloud computing in concept (scalable, distributed computing). But if one considers what the likes of Google's App Engine, Amazon's AWS, and Microsoft's Azure actually do, then yes, the PS3 does nothing like this.
     
  16. JasonLD

    Regular

    Joined:
    Apr 3, 2004
    Messages:
    463
    Likes Received:
    105
    I think it will all depend on how OnLive, Gaikai, and OTOY do in maybe the next two or three years. If they work as advertised and become successful, then we might see a complete transformation of the industry... if not, then we will see at least one more generation of so-called traditional consoles.
     
  17. fehu

    Veteran

    Joined:
    Nov 15, 2006
    Messages:
    2,068
    Likes Received:
    992
    Location:
    Somewhere over the ocean
    At the time, Kutaragi said something like "PS3 will be a connected console, so that when you play GT5, the more PS3s are connected, the more gorgeous the graphics rendered will be!"
    This was back in 2002 or so, back when the PS2 dominated the planet.
    Nobody believed him for a single instant, except for the fanboys and the cellrulezwillchangetheworld people, but in the end the idea of a connected PS3 remained.
     
  18. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,237
    Likes Received:
    4,260
    Location:
    Guess...
    Yeah, but it's not PS3 exclusive. In fact it was already available on the PC (and, in the form of SETI@home, for a long time) before the PS3 launched.
     
  19. ShaidarHaran

    ShaidarHaran hardware monkey
    Veteran

    Joined:
    Mar 31, 2007
    Messages:
    4,027
    Likes Received:
    90
    It is exclusive to the PS3 within PS3's market (consoles). No other console can run F@H.
     
  20. geeQ

    Newcomer

    Joined:
    Feb 22, 2007
    Messages:
    32
    Likes Received:
    0
    Actually I thought grid computing and cloud computing were quite different?

    As has been said, Folding@home would be a classic example of grid computing, whereby the end user has a system connected via software to a central server. The server sends out data to be processed on the end user's hardware, and the results come back to the central server once the job is finished. Alternatively, a user's hardware can be designated the central server; it sends out the jobs, which go to other users' hardware and servers, etc.

    I thought a cloud setup was basically when a central server does ALL the heavy lifting and sends the results to a simple receiver device with software in the user's home. The end user does almost no processing.
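
    The grid round trip described above can be sketched like this; it is purely illustrative, not the real Folding@home protocol:

    ```python
    # Illustrative grid-computing round trip: a server hands out work units,
    # clients compute locally on their own hardware and return the results.
    # Names and data are made up; this is not the actual F@H protocol.

    def server_dispatch(work_units, clients):
        """Assign each work unit to a client and collect the processed results."""
        results = {}
        for unit, client in zip(work_units, clients):
            results[unit] = client(unit)   # the client does the heavy lifting locally
        return results

    # Two pretend clients that "fold" a work unit on their own hardware.
    client_a = lambda unit: f"{unit}:processed"
    client_b = lambda unit: f"{unit}:processed"

    print(server_dispatch(["wu-1", "wu-2"], [client_a, client_b]))
    ```

    In a cloud setup the roles would be reversed: the server would do the processing and the clients would only display results.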

    I think Kutaragi's vision was based on grid computing, not cloud computing.

    Perhaps someone with more knowledge than me can help answer this question.
     