Predict: The Next Generation Console Tech

Cloud computing is a deliberately vague term; there isn't necessarily a central server doing all the work. Processing is done "somewhere", and likewise data might be stored anywhere.
A grid is a good example of cloud computing. This very board could be a mundane example. Wikipedia is another one, with dozens of cache, web, database and other servers; you have no idea where or how your edit gets processed, but it gets done, and you access it from any web browser at yourlanguage.wikipedia.org
 
Sorta. In broad terms...

Grid computing takes a bunch of disparate computing resources and ties them together so that they can work on one task.

Cloud computing uses a central computing resource to serve the needs of many "access" points, replacing traditional computers with dumb terminals.

Again REALLY broad definitions.

Grid computing can be used to power the computing resources behind cloud computing, but it doesn't work the other way around.

Grid computing is more about processing data. Cloud computing is more about serving up data (and computing resources).

Regards,
SB
 

Ah, I think I see. I was thinking that the two were exclusive and not necessarily compatible with each other, but now I see that a grid setup (multiple servers/systems all working on a common goal) could in fact be the central setup that powers the cloud. Once the grid has done its task, the result gets sent to the end user, who simply has the 'dumb' terminal sitting at home.

The only problem is that adopting a universal cloud setup would take away resources that the grid could use in the first instance... If we all simply had 'dumb' terminals at home, then the grid could only rely on the hardware that remains. Folding@home, for example, would not work if none of us actually had any hardware sitting in our homes.
 
Real dumb terminals have P2-class hardware and only run a VNC or similar session. Here, a computer running mainly a browser needs more CPU power (browsers are pretty hungry), but the CPU usage is usually spiky and there's some to spare once pages have loaded and you're mostly scrolling through text.

So assuming a decent CPU in a light home appliance (such as a dual-core Atom, VIA Nano or Sempron), you can fold, and likewise a slow DX11 IGP can be used. Not as fast as a gaming PC, but lighter on the power bill, and with great performance per watt. Some people might be willing to donate 10W or 20W, not so much 100W or 200W.

But we could be arguing that on the Google Chrome OS thread, or have a generic cloud computing discussion (dunno in which section :))
 
Going from the PS3's Cell (1 PPE : 7 SPEs, roughly 1/5 of a TFLOP) to 4 PPEs : 32 SPEs providing 1 TFLOP or more would be a nice improvement.
8 PPEs : 64 SPEs providing 2 TFLOPs or more would also be nice, given the timeframe.
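For what it's worth, a quick back-of-the-envelope sketch of where those numbers come from, assuming 3.2 GHz and a peak of 8 single-precision flops per cycle (4-wide SIMD with FMA) for the PPE's VMX unit and for each SPE; sustained throughput would of course be lower:

Code:
# Rough peak single-precision GFLOPS for various Cell configs, assuming
# 3.2 GHz and 8 flops/cycle (4-wide SIMD FMA) for PPEs (VMX) and SPEs alike.
CLOCK_GHZ = 3.2
FLOPS_PER_CYCLE = 8

def peak_gflops(ppes, spes):
    return (ppes + spes) * CLOCK_GHZ * FLOPS_PER_CYCLE

for ppes, spes in [(1, 7), (2, 12), (4, 32), (8, 64)]:
    print(f"{ppes} PPE : {spes} SPE -> ~{peak_gflops(ppes, spes):.0f} GFLOPS")

# 1:7  -> ~205 GFLOPS (about 1/5 of a TFLOP, i.e. the PS3's Cell)
# 4:32 -> ~922 GFLOPS (roughly 1 TFLOP)
# 8:64 -> ~1843 GFLOPS (roughly 2 TFLOPs)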


You might recall, one reported option for the PS3 back in 2003 or so was an 8:64 configuration, all on a single chip (even though that would be totally unrealistic for 2006):

Sony chip could transform video-game industry
TECHNOLOGY ENVISIONS ALL-IN-ONE BOX FOR HOME
By Dean Takahashi
Mercury News

Sony's next-generation video-game console, due in just two years, will feature a revolutionary architecture that will allow it to pack the processing power of a hundred of today's personal computers on a single chip and tap the resources of additional computers using high-speed network connections.

If key technical hurdles are overcome, the ``cell microprocessor'' technology, described in a patent Sony quietly secured in September, could help the Japanese electronics giant achieve the industry's holy grail: a cheap, all-in-one box for the home that can record television shows, surf the Net in 3-D, play music and run movie-like video games.

Besides the PlayStation 3 game console, Sony and its partners, IBM and Toshiba, hope to use the same basic chip design -- which organizes small groups of microprocessors to work together like bees in a hive -- for a range of computing devices, from tiny handheld personal digital assistants to the largest corporate servers.

If the partners succeed in crafting such a modular, all-purpose chip, it would challenge the dominance of Intel and other chip makers that make specialized chips for each kind of electronic device.

``This is a new class of beast,'' said Richard Doherty, an analyst at the Envisioneering Group in Seaford, N.Y. ``There is nothing like this project when it comes to how far-reaching it will be.''

Game industry insiders became aware of Sony's patent in the past few weeks, and the technology is expected to be a hot topic at the Game Developers Conference in San Jose this week. Since it can take a couple of years to write a game for a new system, developers will be pressing Sony and its rivals for technical details of their upcoming boxes, which are scheduled to debut in 2005.

Ken Kutaragi, head of Sony's game division and mastermind of the company's last two game boxes, is betting that in an era of networked devices, many distributed processors working together will be able to outperform a single processor, such as the Pentium chip at the heart of most PCs.

With the PS 3, Sony will apparently put 72 processors on a single chip: eight PowerPC microprocessors, each of which controls eight auxiliary processors.

Using sophisticated software to manage the workload, the PowerPC processors will divide complicated problems into smaller tasks and tap as many of the auxiliary processors as necessary to tackle them.

``The cell processors won't work alone,'' Doherty said. ``They will work in teams to handle the tasks at hand, no matter whether it is processing a video game or communications.''

As soon as each processor or team finishes its job, it will be immediately redeployed to do something else.

Such complex, on-the-fly coordination is a technical challenge, and not just for Sony. Game developers warn that the cell chips do so many things at once that it could be a nightmare writing programs for them -- the same complaint they originally had about the PlayStation 2, Sony's current game console.

Tim Sweeney, chief executive of Epic Games in Raleigh, N.C., said that programming games for the PS 3 will be far more complicated than for the PS 2 because the programmer will have to keep track of all the tasks being performed by dozens of processors.


``I can't imagine how you will actually program it,'' he said. ``You do all these tasks in parallel, but the results of one task may affect the results of another task.''

But Sony and its partners believe that if they can coordinate those processors at maximum efficiency, the PS 3 will be able to process a trillion math operations per second -- the equivalent of 100 Intel Pentium 4 chips and 1,000 times faster than processing power of the PS 2.

That kind of power would likely enable the PS 3 to simultaneously handle a wide range of electronic tasks in the home. For example, the kids might be able to race each other in a Grand Prix video game while Dad records an episode of ``The Simpsons.''

``The home server and the PS 3 may be the same thing,'' said Kunitake Ando, president and chief operating officer of Sony, at a recent dinner in Las Vegas.

Sony officials said that one key feature of the cell design is that if a device doesn't have enough processing power itself to handle everything, it can reach out to unused processors across the Internet and tap them for help.

Peter Glaskowsky, editor of the Microprocessor Report, said Sony is ``being too ambitious'' with the networked aspect of the cell design because even the fastest Internet connections are usually way too slow to coordinate tasks efficiently.

The cell chips are due to begin production in 2004, and the PS 3 console is expected to be ready at the same time that Nintendo and Microsoft launch their next-generation-game consoles in 2005.

Nintendo will likely focus on making a pure game box, but Microsoft, like Sony, envisions its next game console as a universal digital box.

A big risk for Sony and its allies is that in their quest to create a universal cell-based chip, they might compromise the PS 3's core video-game functionality. Chips suitable for a handheld, for example, might not be powerful enough to handle gaming tasks.

Sony has tried to address this problem by making the cell design modular; it can add more processors for a server, or use fewer of them in a handheld device.

``We plan to use the cell chips in other things besides the PlayStation 3,'' Ando said. ``IBM will use it in servers, and Toshiba will use it in consumer devices. You'd be surprised how much we are working on it now.''

But observers remain skeptical. ``It's very hard to use a special-purpose design across a lot of products, and this sounds like a very special-purpose chip,'' Glaskowsky said.

The processors will be primed for operation in a broadband, Net-connected environment and will be connected by a next-generation high-speed technology developed by Rambus of Los Altos.

Nintendo and Microsoft say they won't lag behind Sony on technology, nor will they be late in deploying their own next-generation systems.

While the outcome is murky now, analyst Doherty said that a few things are clear: ``Games are the engine of the next big wave of computing. Kutaragi is the dance master, and Sony is calling the shots.''

By 2012-2013 such a configuration of Cell, 8:64, should be possible, given that IBM was planning to have a 4:32 config in 2010. I'm not saying PS4 will use an 8:64 config at all. Sony might do something as modest as a 2:12 or 2:16 and instead sink more money into the GPU.

I'd rather see PS4 with a GPU that's not as outdated at the time of its release as the PS3's RSX was. RSX was so disappointing for a console that was meant to be cutting edge. IMHO the RSX was the least impressive for its time of any PlayStation. The PS1's graphics chip and the PS2's GS were more advanced for their time than the PS3's RSX. On the other hand, RSX was the first industry-standard GPU put into a PlayStation system, and it didn't have any serious shortcomings compared to Sony's in-house graphics. So in that respect RSX was really good; it was just a low-end GPU by the time PS3 launched.
 
I don't put much faith in this early report; those were unrealistically high expectations. They speak of 8 Cells; that's 2000mm² of silicon, not to mention the complexity of the motherboard. I still believe, even though it has been dismissed by a lot of people here, that Sony seriously considered at some point going with a "GPU-less" design, in which case multiple Cells would have made sense. Anyway, 8 is... way too much.

Overall I'm less and less confident in the Cell roadmap. 2010 is less than 6 months away and we haven't heard anything about it from either Sony or IBM. I think that if anything IBM is focusing on its POWER7, which is supposed to deliver a considerable amount of FLOPS. They know Larrabee is coming and they don't want to lose too much market share in the HPC/supercomputer market.

I wouldn't be surprised if the three current manufacturers end up with pretty similar PowerPC CPUs next gen. The only case where I don't see this happening is if Larrabee lands in one of the next-gen systems; the manufacturer that made that choice could then have an x86 CPU, assuming negotiations with Intel had been tough enough and the latter decided to cut a lot into their margins.

After reading rumors (based on old rumors... :LOL:) about what AMD's Bulldozer could be, and also what AMD calls cluster-based multithreading, I've come to think that MS and IBM could consider this approach to boost Xenon's throughput: basically attach two VMX128 units to one core and rework the cache architecture (along with fixing Xenon's main flaws). It may end up a lot less bothersome than going beyond 4 cores. Xenon does not support multithreading for its VMX units, and given the number and size of the registers it could prove pretty costly to implement.

I had also been wondering if cloud-based rendering/computing could be part of the equation: basically offload operations where latency has no significant impact on the game. I'm especially thinking of AI. For example, think about a huge open world where your actions have a real, significant impact on the population living in it. That could be a lot of data to update, and it would eat a considerable amount of memory.
As for rendering, I don't know how it could be mixed in.
Interestingly, it could be an addition done on a per-game basis; not every game would need it.
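Just to make that concrete, here's a toy sketch (purely hypothetical, not any real console SDK) of how latency-tolerant work such as off-screen world-simulation/AI updates could be queued up asynchronously while the frame loop keeps running; the "cloud" side is stubbed with a local thread:

Code:
import threading
import queue
import time

# Hypothetical latency-tolerant offload: the game queues up world-simulation
# jobs (e.g. updating a city's population off-screen) and keeps rendering;
# results get applied whenever they come back. The "remote" worker is a local
# stub standing in for whatever cloud service a platform might offer.
jobs = queue.Queue()
results = queue.Queue()

def fake_cloud_worker():
    while True:
        region = jobs.get()
        if region is None:
            break
        time.sleep(0.5)  # pretend network + server-side simulation latency
        results.put((region, f"updated population state for {region}"))

threading.Thread(target=fake_cloud_worker, daemon=True).start()

# Game side: submit work for regions the player can't currently see...
for region in ["district_a", "district_b", "district_c"]:
    jobs.put(region)

# ...and keep running the frame loop, applying results only when available.
for frame in range(120):
    try:
        region, state = results.get_nowait()
        print(f"frame {frame}: applied {state}")
    except queue.Empty:
        pass
    time.sleep(1 / 60)  # simulate a 60 fps frame

jobs.put(None)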
 

Wouldn't such an approach then assume that all game users would have to be connected to the internet to even play the game? What about the folk with slow connections? How would that affect things?

Also one has to think about the added cost of running your cloud computing server on the dev side. Would the added cost be paid for by the product consumer? Then again such games could be distributed as XBLA/PSN titles. I dunno:???:
 
I have no faith in anything like cloud computing, web-browser-based computing, server-based computing, distributed computing, etc. as far as real-time games are concerned. This won't happen in time for the next generation of consoles, Xbox3/PS4, assuming those consoles are coming in the 2011-2013 timeframe, 2-4 years from now. By 2020 or so, when it's time for Xbox4 and PS5, perhaps the technology will be ready.

Alternatively, if Xbox3/PS4 aren't coming until 2015-2016, then perhaps some model of cloud computing might be ready, but just barely.

As I see it, next-gen console games are still going to depend on the CPUs and GPUs that can be put into a box that sits in everyone's home. The advancement is going to come from moving from multi-core to massively multi-core aka manycore, as well as the advances in software/tools/programming models. But still based on the silicon that can be manufactured, not remote data centers.
 
It just depends on how much of the public a manufacturer wants to exclude. There will ALWAYS be someone without fast broadband. Hell, I'd argue that for the next 25-50 years if not more, there will always be someone without broadband.

However, that doesn't preclude any company from deciding that the population without fast enough broadband is small enough that it doesn't matter or composed of a demographic that wouldn't be purchasing consoles for personal use anyways.

That said, I still think it's at least 5-10 years away (if not more) before any of the major players could even think of implementing a console focused on cloud gaming.

Online-only distribution would be the first step, since even with that you could still have offline methods of distribution through secure memory cards or something.

Regards,
SB
 
I have a totally hypothetical question.

What would be the pros/cons of using a setup like this (PS4 used just to illustrate point):

Cell 4/32 as CPU/General Purpose Unit (or Cell descendant)
2x a midrange GPU at launch (like built-in SLI/CrossFire; maybe an evolutionary descendant of RSX)
2 (or 4) GB RAM; split a la PS3
4x (or 8x) Blu-ray
built in A/B/G/N wireless and ethernet

I'm just thinking that with the economic climate, all three current console manufacturers might really take evolutionary steps and try not to have to pour a cubic butt-ton more money into R&D on new/unproven tech.

I just find it hard to believe that, after all the grief Sony has taken, they wouldn't be somewhat reluctant to go through that again by using Larrabee this early.
 
Even then you can disable part of the GPU, as is already done on Cell and RSX (which has 28 pipes, 24 of them enabled).

Dual GPUs add cost and eat too much power; the power budget can't increase any more in consoles. You also need to design a new dual-GPU mechanism, probably an MCM with very high-speed interconnects, or else you have to duplicate all assets, wasting a huge quantity of memory: always a scarce resource on consoles.

A point was already made against a 4:32 Cell.
I can imagine a 2:12 Cell with a midrange GPU (the RV740 kind of midrange, which is already very fast) and 2GB of rather fast unified memory.
 
No reason for dual midrange GPUs. The best thing to do is take the basics of a new upcoming high-end GPU and customize it, working hard on increasing its efficiency. By the time the console comes out, the GPU in it is near the state of the art of what you can buy separately for PC, maybe a little more or a little less. Xbox1 in 2001 was this way: it was a GeForce 3.5, almost a GF4, at a time when the GeForce 3 Ti500 was out for PCs. The Ti500 had a fillrate and bandwidth advantage over NV2A, but NV2A had more geometry power, and perhaps more pixel shader performance, although they both shared the same basic 4:2 design. NV2A was not outclassed by the Ti500 the way the RSX was outclassed by the G80.

I'd like to see PS4's GPU a highly custom well-thought-out work by Nvidia and SCEI.
 
It would be interesting to see a study / series of interviews with developers of various walks and look at hardware and software, look at what they do well now, what they don't do well, areas that can be improved and areas that won't see much improvement. Likewise a discussion of diminishing returns.

For example, at what point do we say, "It really doesn't impact the gameplay or the general impression of the product"? Take poly edges and a "wheel." A 5-poly wheel is going to look poor, but at about 20 it looks respectable for some games. At 40 you wonder if going to 80 is worth it (or if tessellating the 20 would be better), not to mention that once you add in some detail, normal maps, etc., how many people will notice? How much does it impact gameplay?
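To put rough numbers on that (my own quick geometry, not tied to any engine): the worst-case silhouette error of a regular n-sided "wheel" of radius r against a true circle is r * (1 - cos(pi/n)), so the visible benefit shrinks fast as the edge count climbs:

Code:
import math

# Worst-case radial deviation of a regular n-gon from the circle it
# approximates: r * (1 - cos(pi / n)). A 50-pixel on-screen wheel is
# chosen arbitrarily to make the numbers concrete.
radius_px = 50.0

for sides in [5, 10, 20, 40, 80]:
    error = radius_px * (1.0 - math.cos(math.pi / sides))
    print(f"{sides:3d}-sided wheel: max silhouette error ~{error:.2f} px")

# 5 sides  -> ~9.5 px  (clearly faceted)
# 20 sides -> ~0.6 px  (already close to sub-pixel at this size)
# 40 sides -> ~0.15 px
# 80 sides -> ~0.04 px (far below what anyone will notice)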


[Image: edges.gif, wheel polygon edge comparison]



The same applies to resolution.

[Image: resolution2.jpg, texture resolution comparison]


So here a 200x200 asphalt texture is reduced to 140x140 and 100x100 and stretched back out (very poorly, by the way; I should have used better filters, so these are worst-case scenarios). The original had a lot of fine detail and contrast, but when you consider the distance users sit from the screen and add in various layers (diffuse + normal + specular), maybe toss in a noise filter or a detail map or multiple diffuse layers or other appealing tricks, how many people are going to notice a 50% (140x140) or 75% (100x100) reduction in texture resolution? Toss in something more subtle, like a 20% reduction on a fixed pixel budget (assuming new consoles don't really stretch out beyond 1080p), and it raises the question of whether pure power and a 20% performance increase are going to be meaningful.
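If anyone wants to reproduce that comparison, a minimal sketch along these lines would do it (using Pillow, a synthetic noise texture in place of the asphalt asset, and a deliberately cheap nearest-neighbour filter to match the worst-case scaling above):

Code:
from PIL import Image

# Recreate the texture-resolution comparison: take a detailed 200x200
# texture, shrink it to 140x140 (~50% fewer texels) and 100x100 (75% fewer),
# then stretch both back to 200x200 with a cheap nearest-neighbour filter.
# A synthetic grayscale noise texture stands in for the asphalt source.
original = Image.effect_noise((200, 200), 64)
original.save("texture_200_original.png")

for size in (140, 100):
    reduced = original.resize((size, size), Image.NEAREST)
    stretched = reduced.resize((200, 200), Image.NEAREST)
    stretched.save(f"texture_{size}_stretched.png")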

All this is to say that, while I am sure console makers will go for the PR marks (20 teraflops!), with diminishing returns a console that is, in real-world metrics, 20% slower isn't going to "kill" it out of the gate. Sure, your competitor can do 200x200 cloth meshes while you do 180x180... is that a gameplay changer? In many ways it seems we have moved beyond the "make or break" point where one platform can have 8-poly wheels and the other 16-poly with 2x or 4x the texture resolution on top of that.

As a business, producing hardware that gets good results out of the gate and is highly productive in terms of time schedules as well as utilization (both hardware AND staff), hardware that focuses on the iterative process and on getting software out on time and on budget with the best results possible, seems like a smart approach; smarter than trying to pack an extra 20-30% performance into the box at the expense of accessibility and performance extraction. I doubt there are many simple answers in a multi-core world, but a platform that is development-centric seems like a smart approach. As I posted in the Rage thread, you hear guys like Gabe Newell and Carmack talk a bit about developing content over taming architectures (probably because they get paid for having good content).

It will also be interesting to see how much dev budgets increase. E.g. Epic talks about scenes with 200M polygons and how they use this for their normal maps. While the tools may change, if companies get a lot faster hardware and can use better techniques (e.g. displacement maps, or exotics like voxels), why would that scene have to explode to 4B polys? I am not sure it does. New rendering techniques that improve IQ (shadowing, shaders, lighting, etc.) with the same 200M art source will look night-and-day better.

Just running 1080p with 8xAF and 4xAA, cleaning up shadows (higher resolution and better filtering) and increasing lighting precision (evolutionary stuff) will go a long way toward making this gen's assets look MUCH better. But do we need 4xAA at 1080p? Will 2xAA at 1080p be "good enough" for most? Are hardware makers better off focusing on real shortcomings (e.g. the lack of grass in games! :oops: )?

Ultimately I think it will come down to what the IHVs can offer at set costs, but I don't think sheer hardware "in the box" is going to differentiate the platforms. User input, services and interfaces, social interaction, and software are going to be huge. Devs are going to need a framework to quickly deploy the services and software consumers want. Looking at high-selling games like Halo 3 and CoD4, consumers are willing to forgive technical shortcomings (e.g. resolution; most don't notice, and most are on SDTVs!) if the games provide the experience and services they want. With an increased focus on services, may we actually see a platform with a big jump in memory to compensate for bigger game worlds as well as background applications running at the same time?
 

At the time Kutaragi said something like "PS3 will be a connected console, so when you play GT5, the more PS3s are connected, the more gorgeous the rendered graphics will be!"
This was back around 2002, when the PS2 dominated the planet.
Nobody believed him for a single instant, except for the fanboys and the cell-rules-will-change-the-world people, but in the end the idea of a connected PS3 remained.

Actually he wasn't that far off. :)

I get your point though; I was really excited about what I thought was an awesome concept. Not so much in graphics capability, but PS3s talking to each other and sharing computing time to affect what happens in multiplayer co-op games. Oh well, maybe next gen.
 

If we are going to count that then we would need to count FM2 doing 3840x720 or MS Flight Sim using handfuls of monitors long before that.

The idea of cloud computing would have potential once user bandwidth increases, although this appears to be aimed at centralized facilities rather than being peer-based.
 
Perhaps the question is really about filling up the worlds that game developers create with content. So instead of having a wasteland, you have grass and trees, and instead of having a barely populated city, you have a bustling metropolis.

Maybe for the next generation, what's in order is to expand the scope of the rendered world and fill it with content, both outside the rendered structures and inside them. Just as GTA IV had separate episodes within the same city they created, why not have an expensive pre-created city made for the strict purpose of having games use it as their set piece?

It makes sense to me, at least for consoles, to have a 'Gotham City' type setting where any number of stories and games can unfold. All developers would have to do is take the assets and use them as they see fit. That way you can have an open-world shooter, an RPG, an adventure game, etc., all using the same places. So, for example, in an adventure game you may visit the tallest tower in the city and explore it in search of an answer to a mystery, and in another game, say an action sandbox game, you may level it to the ground! New York has been blown up, invaded, and you've found out what women want, etc., all in the same place.
 

Funny you should say that.

I thought of something like this back in 1994. The idea was to use a common library for everyday objects.

Cars, trees, houses, boats, planes, etc.: things that don't change and yet are recreated over and over again.

Once the common library was created (to use your example, a fully fleshed-out city), it would be stored locally and added to as needed. This system would allow games to be extremely small and even downloadable (a crazy idea back then!).

The other idea I had, from about 1998, was to take this library to the city level (your example) and then have multiple games and genres playing out at the exact same time in the exact same arena (city): racing games playing out in the same region as an FPS, which is also hosting Ace Combat-type games and water sports (lakes, or bay areas), etc.

The technology wasn't ready then and doesn't seem to be now, but it's close. MAG is the closest I've seen to the scale, and BF1942 is the closest as far as having a mass of chaos, but the players are all in the same genre.
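A toy sketch of what that could look like from a game's side (all names hypothetical, and the download is stubbed out): a local cache keyed by asset ID, filled on first request and reused by every title that asks for the same thing.

Code:
import os

# Hypothetical shared asset library: many games reference common assets
# ("gotham/downtown_block_12", "vehicles/taxi", ...) by ID. The first request
# fetches and caches the asset locally; later games reuse the cached copy,
# so each title only ships its unique content. The fetch is a stub here.
CACHE_DIR = "shared_asset_cache"

def fetch_from_library(asset_id: str) -> bytes:
    # Stand-in for downloading from a platform-wide asset service.
    return f"mesh+textures for {asset_id}".encode()

def get_asset(asset_id: str) -> str:
    """Return a local path for the asset, downloading it only once."""
    path = os.path.join(CACHE_DIR, asset_id.replace("/", "_") + ".pak")
    if not os.path.exists(path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(path, "wb") as f:
            f.write(fetch_from_library(asset_id))
        print(f"downloaded {asset_id}")
    else:
        print(f"reused cached {asset_id}")
    return path

# Two different "games" requesting the same city block:
get_asset("gotham/downtown_block_12")   # racing game: downloads it
get_asset("gotham/downtown_block_12")   # FPS set in the same city: cache hit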
 