Predict: The Next Generation Console Tech

As AI gets more realistic it will become easier to parallelize ... a large amount of dependency in agent execution at the moment is completely unrealistic. Instantaneous communication, reasoning and reaction is not something organisms are capable of in the real world.

As soon as you move away from that and let AI react purely by observation with realistic reaction times, you can move to pure incremental timestep simulation, at which point every agent's actions are completely independent except when there is a collision (and even that doesn't really require the agents to communicate instantaneously; our responses to bumping into something aren't generally instantaneous either ... handling collisions as they happen is more appropriately done by physics/IK code, and the AI can respond later and independently).
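A minimal sketch of what that could look like, assuming a fixed timestep, a hypothetical AgentState struct, and perception that only reads last frame's snapshot (so no agent needs to talk to another within a step); the reaction-timer value is an illustrative guess, not a recommendation:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Each agent reads a read-only copy of last frame's world (stale, latency-
// laden perception) and writes only its own state, so updates within a
// timestep are fully independent. Collisions are left to the physics/IK
// pass; the AI reacts to them on a later frame.
struct AgentState { float x, y, vx, vy, reactionTimer; };

struct WorldSnapshot {
    std::vector<AgentState> agents;   // copy taken at the end of the previous frame
};

void updateAgent(AgentState& self, const WorldSnapshot& lastFrame, float dt) {
    self.reactionTimer -= dt;
    if (self.reactionTimer <= 0.0f) {
        // ... decide a new velocity from the observed (stale) positions ...
        self.reactionTimer = 0.2f;    // ~200 ms human-like reaction delay (assumed)
    }
    self.x += self.vx * dt;           // pure incremental integration
    self.y += self.vy * dt;
}

void updateAllAgents(std::vector<AgentState>& agents,
                     const WorldSnapshot& lastFrame, float dt, unsigned workers) {
    if (workers == 0) workers = 1;
    std::vector<std::thread> pool;
    const size_t chunk = (agents.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            const size_t begin = w * chunk;
            const size_t end = std::min(agents.size(), begin + chunk);
            for (size_t i = begin; i < end; ++i)
                updateAgent(agents[i], lastFrame, dt);   // no shared writes
        });
    }
    for (auto& t : pool) t.join();
}
```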
 
I'm not sure how much more realistic we can really expect AI to become outside university labs and maybe simulation games. Most of the apparent improvement will be in terms of animation and physics, as you mentioned. But when that collision happens, it's still a dice roll on a decision tree to determine the outcome. The other area of improvement is scale, i.e. more NPCs or monsters being simulated on screen.

Most of the improvements I've seen in the past few years have all been around animation. Things like hiding routing slots or reservation slots better, or improving routing by recalculating the path every frame, etc. I don't see a big push to make AI agents out-think human players, certainly not in a game where a player is asked to kill hundreds or thousands of AI opponents.
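For what it's worth, that 'dice roll on a decision tree' boils down to something as small as a weighted random pick among canned reactions; a hypothetical sketch (names and weights invented for illustration):

```cpp
#include <random>
#include <string>
#include <vector>

// Illustrative only: once the tree has narrowed the options, the "dice roll"
// is just a weighted random selection over the remaining outcomes.
struct Outcome { std::string action; float weight; };

std::string rollOutcome(const std::vector<Outcome>& outcomes, std::mt19937& rng) {
    std::vector<float> weights;
    for (const auto& o : outcomes) weights.push_back(o.weight);
    std::discrete_distribution<size_t> pick(weights.begin(), weights.end());
    return outcomes[pick(rng)].action;
}

// e.g. rollOutcome({{"stagger", 0.6f}, {"fall", 0.3f}, {"ignore", 0.1f}}, rng);
```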
 
But is that main branch still going to exist in its current easy form? Looking at the alternatives, Larrabee in PS4 makes less sense than Cell2, as it's an unknown, whereas Cell2 is 'more of the same'. Which makes the only option standard multicore, which is processing bottlenecked. Unless games really don't progress much and next gen is nothing more than a graphical facelift, fast processing throughput will be the order of the day, needing lots of vector units. The PC space is talking about getting that performance from GPUs and CUDA etc., which AFAIK is a bigger pain in the butt than Cell development!

I think next gen, a Cell2 system will be the easiest to transition to, comparing the new gen of console to the previous gen. Going from XB360 to XB4000 will possibly be as problematic as PS3 development is now. Lots of us were saying this gen that Sony were making the difficult change now, but that it'd pay off next gen. What are PC+XB360 devs who shy away from Cell going to have to work with next gen? Will they be safe and comfortable in their development environments and able to achieve good results thanks to automagical parallelisation tools that Cell can't use, or will those development environments have the same problems that Cell development has now, with the devs having to learn new skills?

That's what it all comes down to. Conventional processing cannot provide the processing throughput needed for data-heavy 'media' tasks. Fat, parallel float processing is necessary. The only way to get that is lots of small, simple cores, which needs new programming paradigms, which needs skills to be learnt. Reluctant developers may be able to hold onto simplified systems, but not forever, unless technology is going to hit a brick wall and get nowhere. And if you're going to learn new development paradigms for Cell, transitioning those to Cell2 should be as easy as following x86 development from its early days to the P4.

I'd imagine the transition from Xenon to a 6-, 8-, 12-, or more-core processor for Xbox Future will be easier than the transition PlayStation devs had from PS2 to PS3. Likewise, there's a chance it could be an Intel CPU, which means hyperthreading for double the number of threads.

I'm not sure Cell2 will have as much of an advantage in the area of multiple parallel threads as Cell1 may have over the current Xenon.

Likewise, the transition to Xbox Future (and presumably from Cell1 to Cell2 for PS4) is just an extension of current practices with more threads available. Add to that additional physics processing capabilities.

Xbox Future, however, would STILL have a significant development advantage in that they can still leverage PC experience.

Depending on how PC development starts to embrace multi-threading, it's possible that in 3+ years you could see advancements in the area of 4-8 cores with 8-16 threads on PC, which could then be directly translated to Xbox Future with minimal work.
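In practice that 'translates with minimal work' claim rests on engines sizing their worker pools off the hardware rather than a hard-coded core count; a minimal sketch (the fallback value is an arbitrary assumption):

```cpp
#include <cstdio>
#include <thread>

// Size the worker pool from whatever the hardware reports, so the same code
// scales from a 6-thread Xenon-class CPU to an 8-16 thread PC part.
int main() {
    unsigned hw = std::thread::hardware_concurrency();  // may legitimately return 0
    unsigned workers = hw ? hw : 4;                      // arbitrary fallback
    std::printf("spawning %u worker threads\n", workers);
    // ... hand 'workers' to the engine's job system ...
}
```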

Cell2 devs will still only be able to leverage experience of Cell1 devs.

Likewise, while the PC/Xbox Future transition will still be relatively trivial, PS4 with Cell2 will remain the outlier.

I.e., I can well believe that many 3rd party devs are hoping and praying that Sony drops Cell for PS4. And whether Sony listens to their 1st party devs or their 3rd party devs will determine whether they drop Cell or not.

Regards,
SB
 
Xbox Future, however, would STILL have a significant development advantage in that they can still leverage PC experience.

I think that such a PC experience, as far as dealing with 6+ cores (and more threads) is concerned, is basically very compatible with the lessons and best practices a less forgiving architecture like CELL teaches you compared to a more forgiving processor like Xenon.

IMHO devs who mastered CELL well will be prepared the best to tackle LRB and other many-core architectures, whether in the PC or in the console space.
 
I do absolutely agree with that, but the fact that you connect programming practices on CELL to Larrabee supports my underlying point that at some point you will be doing any largely parallel work on the GPU (Larrabee et al. going forward) and will still want something in reserve that can handle the most serial code as quickly as possible?

Or do we imagine a future where there really is no difference between any of the cores and it becomes a matrix of cores where one of them runs an OS, the others render graphics etc?
 
I think that such a PC experience, as far as dealing with 6+ cores (and more threads) is concerned, is basically very compatible with the lessons and best practices a less forgiving architecture like CELL teaches you compared to a more forgiving processor like Xenon.

IMHO devs who mastered CELL well will be prepared the best to tackle LRB and other many-core architectures, whether in the PC or in the console space.

That may be true, but it in no way helps you expand your development base if people predominantly have experience programming and working in a PC-centric world.

If you recruit someone out of college or out of the PC dev world, it's a small step to then develop for X360 or its likely successor, which will still probably use a dev-friendly and relatively easy-to-develop-for base.

Recruiting for development on PS4 and Cell, you're still drawing on the same pool, but now there's a steeper learning curve for incoming people.

As such, I still see X360/Xbox Next dominating multiplatform games and generally being the lead dev console unless marketing convinces the masses otherwise.

It's why even though 3rd parties have gotten a better grasp on Cell, they still wish Cell had never existed and hope Sony will drop it in favor of a more friendly architecture.

Then again, it'd be difficult for Sony to scrap Cell and start all over again.

Regards,
SB
 
I do absolutely agree with that, but the fact that you connect programming practices on CELL to Larrabee supports my underlying point that at some point you will be doing any largely parallel work on the GPU (Larrabee et al. going forward)
Written in what language?! This theory assumes that in the future, writing for GPUs will be a lot easier than writing for Cell, but that requires faith that there will be a dramatic improvement in GPGPU development that leapfrogs Cell development. Larrabee doesn't count as a GPU because it's more a Cell-like CPU with an x86 instruction set, just with GPU features to facilitate that aspect. You'll still need Intel to provide some amazing tools though, and you'll still be faced with the same sorts of problems as writing for Cell, only starting from a zero code base versus 5+ years of Cell development and reusable code.



So we have the options of :
  1. Basic PC CPU and hefty GPU for heavy processing
    Requires complex GPGPU code, and standard PC code with the problems of parallelising tasks.
  2. Cell2 and GPU
    Uses existing Cell know-how, existing standard GPU use, with the issues being how you parallelise your tasks.
  3. Larrabee only
    Requires a whole new, unused development method, with solid roots in x86, and an unknown set of tools.
  4. Basic PC CPU and Larrabee as a GPU
    A combination of 1 and 3, making the 'GPGPU' aspect easier but still Larrabee development unknowns and general parallelisation task problems.
All of these have the issue of developing parallel code, and unless one system can offer a magic bullet, that's not a deciding factor for any of them. Only one of these options says 'take everything you know now and carry on with it, with a progressive advancement', whereas the other options all have some element of 'learn something completely new'. Now if a developer is faced with a console having not developed on Cell at all, then the option to work with what they know will be more appealing, but I dare say it's no easier. Being able to write x86 instead of SPU code is no advantage as no-one goes that low-level any more. Not having to worry about memory management is a plus, but if you're writing GPGPU code you have all sorts of other concerns.

I think if people drew up an honest, comprehensive assessment of what different issues would be faced by the different hardware setups, such as exactly what is involved in writing heavy-lifting code for GPUs, they'd see no easy choices. But one choice has years of solid experience behind it. Those who shied away from getting that experience this gen will be faced with it, unavoidably, next gen. There's really no point trying to resist change! When it happens, get in quick!

If you recruit someone out of college or out of the PC dev world, it's a small step to then develop for X360 or its likely successor, which will still probably use a dev-friendly and relatively easy-to-develop-for base.
What you're talking about here is high-level development. Anyone who can come out of college and write low-level, high-performance engine code for multicore x86 CPUs and GPUs is going to have what it takes to learn Cell. What we think of as PC code is just throwing any old thing at the CPU and having it turn that into something fairly quick, but this does not get the best performance from the silicon. The only way that approach will work next gen is if efficiency goes out the window and all devs care about is cost effectiveness, which may be the case. The end result would be a bit like getting current-gen performance, but really easily. That is, back in the 16-bit days you'd need to optimise to betsy to get a high-performance platformer that now could be knocked up by a teenager in a couple of days in high-level languages; where there's so much processing overhead, a lot of it can be given over to easy development. Developers will be able to release KZ2-quality games using off-the-shelf engines, just filling in the blocks. And as Wii demonstrates, that may be enough. However, if you want to advance things and get better-looking games, where NFL looks like EA's CG trailer instead of what we have now, you'll need to hit the metal, and that's something that's more intrinsic to the coder than anything taught.

It's why even though 3rd parties have gotten a better grasp on Cell, they still wish Cell had never existed and hope Sony will drop it in favor of a more friendly architecture.
Which is the more friendly architecture, and how? Larrabee, that great architecture that no-one has used, with unknown tools and all the parallelisation problems of Cell? GPUs, with their funky languages and limited data structures to fit their access patterns? None of the options are friendly!
 
ban25 said:
but let's be serious...writing to SPUs is not like writing threads.
Right - on SPUs fine-grain parallelism is actually feasible (and indeed desired). On Windows PCs it's not.
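A rough sketch of what 'fine-grain' means here, assuming a hypothetical job list and a handful of persistent workers pulling items through an atomic cursor (on an SPU each item would be DMA'd into local store; on a desktop OS the same pattern only pays off because no thread or lock is created per tiny item):

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Many small, independent work items consumed by a few persistent workers.
// The per-item cost is one atomic increment, which is what makes items this
// fine-grained viable at all.
void runJobs(const std::vector<std::function<void()>>& jobs, unsigned workers) {
    std::atomic<size_t> next{0};
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            for (size_t i = next.fetch_add(1); i < jobs.size(); i = next.fetch_add(1))
                jobs[i]();                     // each job is small and independent
        });
    }
    for (auto& t : pool) t.join();
}
```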

Panajev said:
Do you think that pushing a quad-core CPU (with SMT so we are talking about 8 HW threads) will be trivial?
As above, it probably wouldn't even be attempted. But then again the question is will it be necessary?
 
Would a Larrabee work out cheaper than a Cell 2 + GPU combo? I ask as I still firmly believe that cost will be a defining next gen factor and PS4 simply must be cheaper than PS3 was pound for pound, dollar for dollar.
 
Single Larrabee versus dual chips, probably, especially if Intel do a deal to get Larrabee out there. I guess ultimately it comes down to silicon budget. If you spend 300 mm^2 on Larrabee and 200 mm^2 shared between Cell 2 and a GPU, the latter would be cheaper, and vice versa, all things being equal. Of course, Intel may well have the advantage on lithography and be able to provide equivalent performance at a smaller size and so lower cost. But also for Sony, their first-party development will likely be cheaper targeting Cell+GPU than having to start from scratch with Larrabee.

Then the other issue with straight cost is cost reduction, which may be a bigger factor. The budget would really be for transistors, not area.
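For a sense of how that area split plays out, here is a back-of-the-envelope comparison using the textbook dies-per-wafer approximation; the 300 mm wafer and the example die areas are assumptions taken from the figures quoted above, and yield is ignored entirely:

```cpp
#include <cmath>
#include <cstdio>

// Rough dies-per-wafer estimate: usable wafer area divided by die area,
// minus an edge-loss term. Yield (which also favours smaller dies) is ignored.
double diesPerWafer(double waferDiameterMm, double dieAreaMm2) {
    const double kPi = 3.14159265358979;
    const double r = waferDiameterMm / 2.0;
    return (kPi * r * r) / dieAreaMm2
         - (kPi * waferDiameterMm) / std::sqrt(2.0 * dieAreaMm2);
}

int main() {
    const double wafer = 300.0;   // mm
    std::printf("one 300 mm^2 die        : ~%.0f per wafer\n", diesPerWafer(wafer, 300.0));
    std::printf("120 mm^2 + 80 mm^2 split: ~%.0f and ~%.0f per wafer\n",
                diesPerWafer(wafer, 120.0), diesPerWafer(wafer, 80.0));
    // Smaller dies also shrink (and yield) better over time, which is the
    // cost-reduction angle mentioned above.
}
```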
 
my prediction:

PS4
processor: Cell2 (25 teraflop or something)
gpu: nvidia shader model 5.0 hardware
memory: 4 gigabyte unified 10ghz 1024bit rambus XDR memory (25 Gigabyte/sec bandwidth)
storage: BD-ROM (8 layer/ 200GB) 8 speed.
hdd: 128GB SSD
controller: dual shock 4 with li-ion built in
output: hdmi 1.4a spec
bluetooth 3.0 interface
wifi-n built in

Xbox 720
processor: Intel quad core
gpu: ati something
memory: 4 gigabyte unified 2ghz 256bit DDR4 memory
storage: DVD-ROM 32 speed.
hdd: 100 GB 7200rpm
controller: xbox with aa-batteries
output: hdmi 1.2 spec
proprietary wireless controller/peripheral interface
wifi-n sold separately
1gbit lan

wii next:
processor
cpu:1.2 ghz broadway
gpu:600 mhz hollywood
storage: dvd
backwards compatible with wii-mote and the wii fit
wireless built in.
 
25 Teraflops for the next Cell processor? You expect the next Cell processor to be 125x faster than the current Cell processor in the PlayStation 3? To give you a comparison, the PowerXCell 8i version used by IBM in their top-of-the-line supercomputer doesn't reach 400 gigaflops. You're expecting a little too much there.
 
Would a Larrabee work out cheaper than a Cell 2 + GPU combo? I ask as I still firmly believe that cost will be a defining next gen factor and PS4 simply must be cheaper than PS3 was pound for pound, dollar for dollar.

Larrabee is what I would call a very 'grand' solution. So no, I don't think it will be cheap to match the raw performance of Cell 2 or a next-gen GPU from NV or AMD.

If developers don't like Cell, why do people think they will like Larrabee? To take advantage of Larrabee, it needs to be programmed differently compared to current GPUs. Except for texturing operations, Larrabee will be all software. It'll be like the good old days of writing a software renderer on the 8086 architecture, but with beefy vector performance and bigger, better caches.

If developers are going to limit themselves to the DirectX 11 feature set, I don't think Larrabee is a good idea, because both NV and AMD are going to offer better performance for the same cost.
 
I expect the next Xbox to have a reasonably fast BD-ROM, just like its PS4 counterpart. I certainly do not expect the PS4 to be way ahead of the next Xbox as both will be built to the same price, and will come out at the same time.

I have a suspicion that both will come with proprietary peripherals as it is a useful additional revenue stream. Microsoft must have made a fortune from its hard disks and wireless dongles.
 
If one manufacturer is about to use Larrabee in its next system, I would not say that they will have to start from scratch. I don't think we can compare Larrabee to Cell in regard to tools/environment.
Intel won't come empty-handed in regard to software.
The next systems' release date is at least 2 years from now, and Intel is already actively working on various tools and libraries; Larrabee will in no way arrive as naked on the software side as Cell was at launch.
In fact I wouldn't be surprised if the tools and libraries available for Larrabee by 2012 are at least as good as (my bet would be better than) their Cell counterparts.
Intel may offer the best compiler and profiling tools in the business, Threading Building Blocks, optimized libraries, Havok (and likely a very well optimized rendition of it).
It won't be like devs will have to come up with their own solutions for how to spread their work across so many cores; both Intel and Microsoft agree that the approach that offers the best scaling is task-based (finer-grained than threads) with work-stealing schedulers (I read it's easier to implement efficiently on a system with caches).
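Since Threading Building Blocks is named above, here is a minimal sketch of the task-based, work-stealing style being described: tbb::parallel_for splits the range into tasks and idle workers steal chunks from busy ones, so the programmer never assigns work to specific cores (the function and data here are invented for illustration):

```cpp
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

// The range is recursively split into tasks; TBB's scheduler load-balances
// them across however many hardware threads exist via work stealing.
void scaleAll(std::vector<float>& data, float k) {
    tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                data[i] *= k;            // each task touches only its own chunk
        });
}
```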

And Larrabee will have a graphics ISA; it's not like everything will have to be programmed by hand. I can't see why Larrabee's ISA would be any more bothersome than the low-level ISAs used on the PS3/360, and since it's all software you can make your own changes, but it's not mandatory.
 
I find it very unlikely Sony will opt for Larrabee when they have Cell. As for Microsoft...maybe, but I doubt they'd want to deal with Intel again after the first Xbox.

What would be interesting, though, is that if Sony and Microsoft both turn Larrabee down, might it be possible for another player to enter the game? Apple could be a very good candidate... or perhaps even Intel themselves (slim-to-none chance of that, though). Who would want all that R&D to go to waste? But I suppose, given the current state of the economy, it's hard to predict such things...
 
Does anyone want to hazard a guess at the power consumption per 100mm^2 of GPU/CPU one might expect on the 32nm process?

It may be one of the most important considerations, because the silicon budgets didn't increase at all between this gen and last gen, and yet the power consumption still went up severalfold.
 
my prediction:

PS4
processor: Cell2 (25 teraflop or something)

memory: 4 gigabyte unified 10ghz 1024bit rambus XDR memory (25 Gigabyte/sec bandwidth)

output: hdmi 1.4a spec

Xbox 720

storage: DVD-ROM 32 speed.

output: hdmi 1.2 spec

wii next:
processor
cpu:1.2 ghz broadway
gpu:600 mhz hollywood
storage: dvd
backwards compatible with wii-mote and the wii fit
wireless built in.

How did you come up with that 25 GB bandwidth, for example? That's pretty low :) 25 teraflop is ridiculous, and that HDMI spec thing is as well. Why would MS use the 1.2 spec in 2011?! I see what you did with the Wii Next there and... well, I doubt that's going to happen.

In general I don't post too often in the console technology section of the forum as my knowledge is not very high here, and looking at the quoted post and the one about the Gears of War pics, I suggest you do the same, for similar reasons...
 
I took the previous generation (2000-2001) and compared it to the current one (Wii/360/PS3), then projected that onto the future.
For example:
PS2 is 6.2 gigaflops, PS3 is 1 teraflop (single precision), so PS4 should (could) be about 125 teraflops, give or take.
Memory went from 32 MB to 256 MB of RAM, so PS4 should have 4 GB of (main) RAM, but I made it unified because XDR is faster than the other type.
DVD is 4.7 GB, PS3's Blu-ray is 50 GB, so PS4 should have 500 GB, but the spec only goes to 8 layers, so yeah, I took 200 GB.
As for the HDMI spec, 1.3 only goes up to 25xx resolution on a single screen, so I guess they'd put in 1.4, as we don't know what resolutions it will have to support if it is to last 10 years; so in 2021 they should use 1.4 ;)
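Spelling out that extrapolation (the inputs are the post's own figures, including the marketing-style '1 teraflop' for PS3, not verified specs, and the post rounds the results):

```cpp
#include <cstdio>

// Take the ratio from the previous generation to the current one and apply
// it again. All figures are the ones quoted in the post above.
int main() {
    const double ps2Gflops = 6.2, ps3Gflops = 1000.0;     // "1 teraflop" marketing number
    const double flopsRatio = ps3Gflops / ps2Gflops;      // ~161x per generation
    std::printf("projected PS4: ~%.0f TFLOPS (post rounds to 'about 125')\n",
                ps3Gflops * flopsRatio / 1000.0);

    const double dvdGB = 4.7, bdGB = 50.0;                // ~10.6x per generation
    std::printf("projected PS4 disc: ~%.0f GB (post rounds to 500 GB)\n",
                bdGB * (bdGB / dvdGB));
}
```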

As for the Xbox, Microsoft said they want to give people the choice to include options, so yeah, the wifi can be bought separately.

edit: the 25 GB bandwidth should be 250GB, that was a typo.
 