Predict: The Next Generation Console Tech

Discussion in 'Console Technology' started by Acert93, Jun 12, 2006.

Thread Status:
Not open for further replies.
  1. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    Cloud computing is vague, deliberately so, like the term itself; there's not necessarily a central server doing all the work. Processing is done "somewhere", and likewise data might be stored anywhere.
    A grid is a good example of cloud computing. This very board could be a mundane one. Wikipedia is another, with dozens of cache, web, database and other servers; you have no idea where or how your edit is processed, but it gets done, and you access it from any web browser at yourlanguage.wikipedia.org
     
  2. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    Sorta. In broad terms...

    Grid computing takes a bunch of disparate computing resources and ties them together so that they can work on one task.

    Cloud computing uses a central computing resource to serve the needs of many "access" points, replacing traditional computers with dumb terminals.

    Again, REALLY broad definitions.

    Grid computing can be used to power computing resources for cloud computing, but it doesn't work the other way around.

    Grid computing is more about processing data. Cloud computing is more about serving up data (and computing resources).

    Regards,
    SB
     
  3. geeQ

    Newcomer

    Joined:
    Feb 22, 2007
    Messages:
    32
    Likes Received:
    0
    Ah, I think I see. I was thinking the two were exclusive and not necessarily compatible with each other, but now I see that a grid setup, multiple servers/systems all working on a common goal, could in fact be the central setup that powers the cloud. Once the grid has done its task, the result gets sent to the end user, who simply has the 'dumb' terminal sitting at home.

    The only problem is that adopting a universal cloud setup would take away resources the grid could use in the first place... If we all simply had 'dumb' terminals at home, then the grid could only rely on the hardware that remains. Folding@home, for example, would not work if none of us actually had any hardware sitting in our homes.
     
  4. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    Real dumb terminals have P2-class hardware and only run a VNC or similar session. Here, a computer running mainly a browser needs more CPU power (it's pretty hungry), but the CPU usage is usually spiky, and there's some to spare once pages have loaded and you are mostly scrolling through text.

    So assuming a decent CPU in a light home appliance (such as a dual-core Atom, VIA Nano or Sempron), you can fold; likewise a slow DX11 IGP can be used. Not as fast as a gaming PC, but lighter on the power bill, and with great performance per watt. Some people might be willing to donate 10W or 20W, not so much 100W or 200W.

    But we could be arguing that on the Google Chrome OS thread, or have a generic cloud computing discussion (dunno in which section :))
     
  5. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    Going from PS3's Cell (1 PPE : 7 SPEs, ~1/5 of a TFLOP)
    to 4 PPEs : 32 SPEs providing 1+ TFLOP would be a nice improvement.
    8 PPEs : 64 SPEs providing 2+ TFLOPs would also be nice, given the timeframe.
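    As a rough sketch, the arithmetic behind those TFLOP figures, assuming the Cell's 3.2 GHz clock and 8 single-precision flops per cycle for each PPE and SPE (which reproduces the commonly cited ~205 GFLOPS for the PS3's 1:7 part):

```python
# Back-of-envelope peak single-precision FLOPS for hypothetical Cell configs.
# Assumptions: 3.2 GHz clock, 8 flops/cycle per unit (4-wide SIMD
# fused multiply-add), counted the same for PPEs and SPEs. Illustrative only.
CLOCK_HZ = 3.2e9
FLOPS_PER_CYCLE = 8  # 4-wide single-precision FMA

def peak_gflops(ppes, spes):
    units = ppes + spes
    return units * CLOCK_HZ * FLOPS_PER_CYCLE / 1e9

for ppes, spes in [(1, 7), (4, 32), (8, 64)]:
    print(f"{ppes}:{spes} Cell -> ~{peak_gflops(ppes, spes):.1f} GFLOPS")
# 1:7  -> ~204.8 GFLOPS (about 1/5 TFLOP)
# 4:32 -> ~921.6 GFLOPS (about 1 TFLOP)
# 8:64 -> ~1843.2 GFLOPS (about 2 TFLOPs)
```

    Peak numbers only, of course; sustained throughput depends on keeping the SPEs' local stores fed.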


    You might recall, one reported option for the PS3 back in 2003 or so was an 8:64 configuration, all on a single chip (even though that would have been totally unrealistic for 2006).

    By 2012-2013 such a configuration of Cell, 8:64, should be possible, given that IBM was planning a 4:32 config for 2010. I'm not saying PS4 will use an 8:64 config at all. Sony might do something as modest as 2:12 or 2:16 and instead sink more money into the GPU.

    I'd rather see PS4 with a GPU that's not as outdated at the time of its release as the PS3's RSX was. RSX was so disappointing for a console that was meant to be cutting edge. IMHO the RSX was the least impressive, for its time, of any PlayStation graphics chip. The PS1's graphics chip and PS2's GS were more advanced for their time than PS3's RSX. On the other hand, RSX was the first industry-standard GPU put into a PlayStation system, and it didn't have any of the serious shortcomings of Sony's in-house graphics. So in that respect RSX was really good; it was just a low-end GPU by the time PS3 launched.
     
    #2465 Megadrive1988, Jul 25, 2009
    Last edited by a moderator: Jul 25, 2009
  6. liolio

    liolio Aquoiboniste
    Legend

    Joined:
    Jun 28, 2005
    Messages:
    5,724
    Likes Received:
    195
    Location:
    Stateless
    I don't put much faith in that early report; those were unrealistically high expectations. They speak of 8 Cells: that's 2000mm², not to mention the complexity of the mobo. I still believe, however much it has been dismissed by a lot of people here, that Sony at some point seriously considered going with a "GPU-less" design, so multiple Cells would have made sense. Anyway, 8 is... way too much.

    Overall I'm less and less confident in the Cell roadmap. 2010 is less than 6 months away and we haven't heard anything about it, from either Sony or IBM. I think that, if anything, IBM is focusing on its POWER7, supposed to deliver a consistent amount of flops. They know Larrabee is coming and they don't want to lose too much market share in the HPC / supercomputer market.

    I wouldn't be surprised if the three current manufacturers end up with pretty similar PowerPC designs next gen. The only case where I don't see this happening is if Larrabee lands in one of the next-gen systems; the manufacturer that made that choice could have an x86 CPU, if negotiations with Intel have been tough enough and the latter decided to cut a lot from their margins.

    After reading rumors (based on old rumors... :lol: ) about what AMD's Bulldozer could be, and about what AMD calls cluster-based multithreading, I've come to think that MS and IBM could consider this approach to boost Xenon's throughput. Basically, attach two VMX128 units to one core and rework the cache architecture (along with fixing Xenon's main flaws). It may end up a lot less bothersome than going beyond 4 cores.
    Xenon does not support multithreading for its VMX units, and given the number and size of the registers, it could prove pretty costly to implement.

    I had also been wondering if cloud-based rendering/computing could be part of the equation. Basically, offload operations where latency has no significant impact on the game. I'm especially thinking of AI. For example, think of a huge open world where your actions have a real, significant impact on the population inhabiting it. That could be a lot of data to update, and it would eat considerable memory space.
    As for rendering, I don't know how cloud rendering could be mixed in.
    Interestingly, it could be an addition made on a per-game basis; not all games would need it.
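    A quick latency-budget sketch of why AI offloading is plausible while per-frame rendering isn't (the round-trip and refresh figures below are illustrative assumptions, not measurements):

```python
# Compare a cloud round-trip against a rendering frame budget versus an
# AI-planning refresh interval. All numbers are assumptions for illustration.
FRAME_MS = 33.3        # 30 fps frame budget
CLOUD_RTT_MS = 100.0   # assumed round-trip to a data centre
AI_REFRESH_MS = 2000.0 # assumed interval between AI/world-simulation updates

frames_of_lag = CLOUD_RTT_MS / FRAME_MS
print(f"A cloud result arrives ~{frames_of_lag:.1f} frames late")
# Rendering can't absorb a multi-frame stall every frame, but the same
# round-trip is a small fraction of an AI update interval:
print(f"RTT is {CLOUD_RTT_MS / AI_REFRESH_MS:.0%} of a 2 s AI refresh")
```

    So anything refreshed on a seconds scale (population simulation, faction AI) tolerates the round-trip easily; anything on the frame clock doesn't.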
     
  7. Prophecy2k

    Veteran

    Joined:
    Dec 17, 2007
    Messages:
    2,468
    Likes Received:
    379
    Location:
    The land that time forgot
    Wouldn't such an approach then assume that all users would have to be connected to the internet to even play the game? What about folk with slow connections? How would that affect things?

    Also, one has to think about the added cost of running your cloud computing servers on the dev side. Would the added cost be paid by the consumer? Then again, such games could be distributed as XBLA/PSN titles. I dunno :???:
     
  8. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    I have no faith in anything like cloud computing, web-browser-based computing, server-based computing, distributed computing, etc. as far as real-time games are concerned. It won't happen in time for the next generation of consoles, Xbox3/PS4, assuming those consoles are coming in the 2011-2013 timeframe, 2-4 years from now. By 2020 or so, when it's time for Xbox4 and PS5, perhaps the technology will be ready.

    Alternatively, if Xbox3/PS4 aren't coming until 2015-2016, then perhaps some model of cloud computing might be ready, but just barely.

    As I see it, next-gen console games are still going to depend on the CPUs and GPUs that can be put into a box that sits in everyone's home. The advancement is going to come from moving from multi-core to massively multi-core aka manycore, as well as the advances in software/tools/programming models. But still based on the silicon that can be manufactured, not remote data centers.
     
  9. grandmaster

    Veteran

    Joined:
    Feb 6, 2007
    Messages:
    1,159
    Likes Received:
    0
    Until hyper-fast broadband is as easily available as mains power, Cloud won't and can't be the focus for the major platform holders.
     
  10. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    19,426
    Likes Received:
    10,320
    It just depends on how much of the public a manufacturer wants to exclude. There will ALWAYS be someone without fast broadband. Hell, I'd argue that for the next 25-50 years if not more, there will always be someone without broadband.

    However, that doesn't preclude any company from deciding that the population without fast enough broadband is small enough that it doesn't matter or composed of a demographic that wouldn't be purchasing consoles for personal use anyways.

    That said, I still think it's at least 5-10 years away (if not more) before any of the major players could even think of implementing a console focused on cloud gaming.

    Online-only distribution would be the first step, since with that you could still have offline methods of distribution through secure memory cards or something.

    Regards,
    SB
     
  11. damienw

    Regular

    Joined:
    Sep 29, 2008
    Messages:
    513
    Likes Received:
    61
    Location:
    LA
    I have a totally hypothetical question.

    What would be the pros/cons of using a setup like this (PS4 used just to illustrate point):

    Cell 4/32 as CPU/General Purpose Unit (or Cell descendant)
    2x a midrange GPU at launch (like a built in SLI/crossfire; maybe evolutionary descendant of RSX)
    2 (or 4) GB RAM; split a la PS3
    4x (or 8x) Blu-ray
    built in A/B/G/N wireless and ethernet

    I'm just thinking that with the economic climate, all 3 current console manufacturers might really take evolutionary steps and try not to pour a cubic butt-ton more money into R&D on new/unproven tech.

    I just find it hard to believe that, after all the grief Sony has taken, they wouldn't be reluctant to go through that again by using Larrabee this early.
     
  12. holsty101

    Regular

    Joined:
    Dec 22, 2003
    Messages:
    289
    Likes Received:
    17
    Wouldn't it be easier/more efficient to have one more powerful GPU rather than two lower-powered ones?
     
  13. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,245
    Likes Received:
    4,465
    Location:
    Finland
    Yes; the only positive effect of having two smaller ones would be better yields.
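    A toy Poisson defect-yield model sketches why (the defect density and die areas below are made-up illustrative numbers, not real process data):

```python
import math

# Toy Poisson defect model: P(good die) = exp(-D * A).
# Compare raw silicon consumed per console for one big GPU die versus
# two half-size dies. All parameters are illustrative assumptions.
D = 0.5      # defects per cm^2 (assumed)
A_BIG = 4.0  # one 400 mm^2 die, in cm^2
A_SMALL = 2.0  # each of two 200 mm^2 dies

def cm2_per_console(area_cm2, dies_needed, defect_density):
    """Wafer area consumed per good die, times dies needed per console."""
    good_fraction = math.exp(-defect_density * area_cm2)
    return dies_needed * area_cm2 / good_fraction

big = cm2_per_console(A_BIG, 1, D)
small = cm2_per_console(A_SMALL, 2, D)
print(f"one big die:    ~{big:.1f} cm^2 of wafer per console")
print(f"two small dies: ~{small:.1f} cm^2 of wafer per console")
```

    The small-die option wins because a defect kills only half as much silicon at a time; against that you weigh the interconnect, board and memory costs the posts above mention.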
     
  14. Blazkowicz

    Legend

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    Even then, you can disable part of the GPU, as is already done on Cell and RSX (which has 28 pipes, 24 of them enabled).

    A dual GPU adds cost and eats too much power; the power budget can't increase any more in consoles. You also need to design a new dual-GPU mechanism, probably an MCM with very-high-speed interconnects, or else you have to duplicate all assets, wasting a huge quantity of memory: a usually scarce resource on consoles.

    A point was already made against a 4:32 Cell.
    I can imagine a 2:12 Cell with a midrange GPU (the RV740 kind of midrange GPU: very fast already) and 2GB of rather fast unified memory.
     
  15. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    No reason for dual midrange GPUs. The best thing to do is take the basics of a new upcoming high-end GPU and customize it, working hard on increasing its efficiency. By the time the console comes out, the GPU in it is near the state of the art of what you can buy separately for PC, maybe a little more or a little less. Xbox1 in 2001 was this way: it was a GeForce 3.5, almost a GF4, at a time when the GeForce 3 Ti500 was out for PCs. The Ti500 had a fillrate and bandwidth advantage over NV2A, but NV2A had more geometry power, and perhaps more pixel shader performance, although they both shared the same basic 4:2 design. NV2A was not outclassed by Ti500 the way RSX was outclassed by G80.

    I'd like to see PS4's GPU be a highly custom, well-thought-out work by Nvidia and SCEI.
     
  16. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    It would be interesting to see a study / series of interviews with developers of various walks and look at hardware and software, look at what they do well now, what they don't do well, areas that can be improved and areas that won't see much improvement. Likewise a discussion of diminishing returns.

    For example, at what point do we say, "It really doesn't impact the gameplay or the general impression of the product"? Take poly edges and a "wheel." A 5-poly wheel is going to look poor, but at about 20 it looks respectable for some games. At 40 you wonder if going to 80 is worth it (or if tessellating the 20 would be better); not to mention, once you add in some detail, normal maps, etc., how many people will notice? How much does it impact gameplay?
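    One way to put numbers on that diminishing return: the worst-case gap between a regular n-gon "wheel" and a true circle is r·(1 − cos(π/n)). A quick sketch (the 100-pixel radius is an assumed figure):

```python
import math

# Worst-case gap (sagitta) between a chord of a regular n-gon and the
# circle it approximates: r * (1 - cos(pi/n)). Shows how quickly extra
# sides stop buying visible smoothness.
def max_deviation(radius, n_sides):
    return radius * (1 - math.cos(math.pi / n_sides))

r = 100.0  # wheel radius in pixels (assumed)
for n in (5, 20, 40, 80):
    print(f"{n:3d}-sided wheel: worst gap ~{max_deviation(r, n):.2f} px")
# 5 sides leaves a ~19 px gap; 20 sides ~1.2 px; 40 sides ~0.3 px;
# 80 sides ~0.08 px -- well under a pixel, so doubling again buys nothing.
```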


    [image: wheel rendered at increasing polygon counts]


    The same applies to resolution.

    [image: asphalt texture at 200x200, 140x140 and 100x100]

    So here's a 200x200 asphalt texture reduced to 140x140 and 100x100 and stretched back out (very poorly, btw; I should have used better filters, so these are worst-case scenarios). The original had a lot of fine detail and contrast, but when you consider the distance users sit from the screen and add in various layers (diffuse+normal+specular), maybe toss in a noise filter or a detail map or multiple diffuse layers or other appealing tricks, how many people are going to notice a ~50% (140x140) or 75% (100x100) reduction in texture pixels? Toss in something more subtle, like a 20% reduction on a fixed pixel budget (assuming new consoles don't really stretch beyond 1080p), and it raises the question of whether pure power and a 20% performance increase is going to be meaningful.
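    The pixel arithmetic behind those percentages, as a quick sketch: shrinking each dimension cuts the pixel count by the square of the ratio.

```python
# Pixel-budget arithmetic for the texture comparison above: a texture
# reduced from orig x orig to reduced x reduced loses pixels by the
# square of the linear ratio.
def pixel_reduction(orig, reduced):
    return 1 - (reduced * reduced) / (orig * orig)

for size in (140, 100):
    saved = pixel_reduction(200, size)
    print(f"200x200 -> {size}x{size}: {saved:.0%} fewer pixels")
# 200x200 -> 140x140: 51% fewer pixels (the "~50%" case)
# 200x200 -> 100x100: 75% fewer pixels
```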

    All this is to say that, while I am sure console makers will go for the PR marks (20 Teraflops!), with diminishing returns a console that is, in real-world metrics, 20% slower isn't going to "kill" it out of the gates. Sure, your competitor can do 200x200 cloth meshes while you do 180x180... is that a gameplay changer? In many ways it seems we have moved beyond the "make or break" point where one platform can have 8-poly wheels and the other 16-poly wheels with 2x or 4x the texture resolution on top of that.

    As a business, producing hardware that gets good results out of the gate and is highly productive in terms of time schedules as well as utilization (both hardware AND staff), hardware that supports the iterative process and gets software out on time and on budget with the best results possible, seems like a smart approach--smarter than trying to pack an extra 20-30% performance into the box at the expense of accessibility and performance extraction. I doubt there are many simple answers in a multi-core world, but a platform that is development-centric seems like a smart approach. As I posted in the Rage thread, you hear guys like Gabe Newell and Carmack talk a bit about developing content over taming architectures (probably because they get paid for having good content).

    It will also be interesting to see how much dev budgets increase. E.g. Epic talks about scenes with 200M polygons and how they use these for their normal maps. While the tools may change, if companies do get much faster hardware and can use better techniques (e.g. displacement maps, or exotics like voxels), why would that scene have to explode to 4B polys? I am not sure it does. New rendering techniques that improve IQ (shadowing, shaders, lighting, etc.) with the same 200M art source will look night-and-day better.

    Just running 1080p with 8xAF and 4xAA, cleaning up shadows (higher resolution and better filtering) and increasing lighting precision (evolutionary stuff) will go a long way toward making this gen's assets look MUCH better. But do we need 4xAA at 1080p? Will 2xAA at 1080p be "good enough" for most? Are hardware makers better off focusing on real shortcomings (e.g. the lack of grass in games! :oops: )?

    Ultimately I think it will come down to what the IHVs can offer at set costs, but I don't think sheer hardware "in the box" is going to differentiate the platforms. User input, services and interfaces, social interaction, and software are going to be huge. Devs are going to need a framework to quickly deploy the services and software consumers want. Looking at high-selling games like Halo 3 and CoD4, consumers are willing to forgive technical shortcomings (e.g. resolution--most don't notice; most are on SDTVs!) if the games provide the experience and services they want. With an increased focus on services, might we actually see a platform with a big jump in memory to compensate for bigger game worlds as well as background applications running at the same time?
     


  17. monkeyx

    Newcomer

    Joined:
    Aug 6, 2009
    Messages:
    3
    Likes Received:
    0
    Actually he wasn't that far off. :)

    I get your point though. I was really excited at what I thought was an awesome concept: not so much graphics capability, but PS3s talking to each other and sharing computing time to affect what happens in multiplayer co-op games. Oh well, maybe next gen.
     
  18. Acert93

    Acert93 Artist formerly known as Acert93
    Legend

    Joined:
    Dec 9, 2004
    Messages:
    7,782
    Likes Received:
    162
    Location:
    Seattle
    If we are going to count that, then we would need to count FM2 doing 3840x720, or MS Flight Sim using handfuls of monitors long before that.

    The idea of cloud computing would have potential once user bandwidth increases, although this appears to be aimed at centralized facilities rather than peer-based ones.
     
  19. Squilliam

    Squilliam Beyond3d isn't defined yet
    Veteran

    Joined:
    Jan 11, 2008
    Messages:
    3,495
    Likes Received:
    114
    Location:
    New Zealand
    Perhaps the question is really about filling up the worlds that game developers create with content. So instead of having a wasteland you have grass and trees, and instead of a barely populated city you have a bustling metropolis.

    Maybe for the next generation, what's in order is to expand the scope of the rendered world and fill it with content, both outside the rendered structures and inside them. Just as GTA IV had separate episodes within the same city, why not have an expensive pre-created city made for the strict purpose of letting games use it as their set-piece?

    It makes sense, to me at least, for consoles to have a 'Gotham City' type setting where any number of stories and games can unfold. All developers would have to do is take the assets and use them as they see fit. That way you can have an open-world shooter, an RPG, an adventure game, etc., all using the same places. So for example, in an adventure game you may visit the tallest tower in the city and explore it in search of an answer to a mystery, and in another game, say an action sandbox, you may level it to the ground! New York has been blown up, invaded, you've found out what women want, etc., all in the same place.
     
  20. TheChefO

    Banned

    Joined:
    Jul 29, 2005
    Messages:
    4,656
    Likes Received:
    32
    Location:
    Tampa, FL
    Funny you should say that.

    I thought of something like this back in 1994. The idea was to use a common library for everyday objects.

    Cars, Trees, houses, boats, planes, etc. Things that don't change and yet are recreated over and over again.

    Once the common library was created (to your example a fully fleshed out city) it would be stored locally and added upon as needed. This system would allow games to be extremely small and even downloadable (a crazy idea back then!)

    The other idea I had, from about 1998, was to take this library to the city level (your example) and then have multiple games and genres playing out at the exact same time in the exact same arena (city): racing games playing out in the same region as an FPS, which is also hosting Ace Combat-type games and water sports (lakes, or bay areas), etc.

    The technology wasn't ready then and doesn't seem to be now, but it's close. MAG is the closest I've seen to the scale, and BF1942 is the closest as far as having a mass of chaos, but the players are all in the same genre.
     