How could SPEs be used in console RPGs?

I'm thinking that turn-based RPGs must be among the least CPU-intensive games out there, with little in the way of AI complexity, so in a game like this most of the game code runs on the PPE. What would be the most likely use for SPEs in this scenario?

Extension of the GPU? Lighting? Vertex processing? Any thoughts?
 
I said least CPU intensive. So I guess my question would also apply to realtime cutscenes. There's not much CPU work involved, so in that mode do SPEs get switched over to help with rendering?
 
seismologist said:
I said least CPU intensive. So I guess my question would also apply to realtime cutscenes. There's not much CPU work involved, so in that mode do SPEs get switched over to help with rendering?
Particles maybe...:???: RPG's are very particle heavy. Oh animations!
 
Do a simulation of the character's facial muscles, and adaptively modify the mesh according to the underlying structure. If done right you could have very convincing lip-synched talking-while-showing-emotions, etc. Sure, it's questionable whether that would be much better than just using traditional vertex/bone animation, but it should be more flexible if done right and perhaps even easier to handle from a content-creation perspective.

If they'd make something like that available as middleware for the SPEs they could even call it - wait for it - emotion engine!

Another thing I want in next-gen console RPGs is also pure eyecandy, but probably a good fit for the SPEs: more realistic cloth.
 
Particles as already stated, and full-body physics (i.e. realistic musculature taking hits and whatnot). Also, procedurally generated background details and such. There is work for SPEs to do if they look for it. :)
 
Stuff like weather and atmospheric fx (particles), cloth and water simulation are probably good candidates for those resources to be used on.
 
seismologist said:
What would be the most likely use for SPEs in this scenario?

Extension of the GPU? Lighting? Vertex processing? Any thoughts?
Most J-RPGs I know are very poor in the graphics department. I would like to see realistic animations (Xenosaga 2 is terrible), physics (remember Aeris' clothes from the FFVII tech demo?) and more interactivity with the environment while walking around waiting for endless random encounters.

Or of course they could dump the stupid turn-based system and go realtime like FFXII, but I guess I'm asking too much then. :cry:
 
Just insane graphics and scenery. Maybe raytrace the entire thing for a laugh. Or not bother, leave a lot of Cell idle, and save yourself the development costs.
 
Shifty Geezer said:
Just insane graphics and scenery. Maybe raytrace the entire thing for a laugh. Or not bother, leave a lot of Cell idle, and save yourself the development costs.

Yeah, what he said - if you don't *need* the power, why go to the trouble of contriving a use for it just to give yourself more work?

It's like saying "what can I get the SPUs to do in my 2D coloured-block puzzle game?"

I dunno, get them doing folding@home or whatever :)
 
seismologist said:
I said least CPU intensive. So I guess my question would also apply to realtime cutscenes. There's not much CPU work involved, so in that mode do SPEs get switched over to help with rendering?
Doh!
Maybe they could have real-time damage to the players and environment? Maybe make it play out semi-turn-based, so that your reaction times and animations are affected by jumping off broken bridges or doing quick "matrix slo-mo" jumps with a sprained ankle to hit multiple targets. Basically, they could make it a heck of a lot more hectic.;)

MrWibble said:
Yeah, what he said - if you don't *need* the power, why go to the trouble of contriving a use for it just to give yourself more work?

It's like saying "what can I get the SPUs to do in my 2D coloured-block puzzle game?"

I dunno, get them doing folding@home or whatever :)
For the sake of using your imagination....
 
nintenho said:
For the sake of using your imagination....

Ok, well if we rephrase the question to be less PS3 centric (it almost reads like another "SPUs aren't general purpose/useful" thread at the moment), and simply ask:

"How could RPGs be improved with more processing power?"

Then there are many answers, most of which could be applied to any other genre of game too:

- Pimp the graphics (obvious)
- Pimp the sound
- Procedural content generation

One thing I think would mostly benefit an RPG or adventure-style title would be speech generation and/or recognition. The latter would free us from kludgy control mechanics and limited conversation engines, and the former would allow much more freedom in dialog for not only the main characters, but also huge numbers of varied NPCs. Right now everything has to be scripted and chosen from a limited menu, and many NPCs will sound exactly the same and say the same things.
 
MrWibble said:
Ok, well if we rephrase the question to be less PS3 centric (it almost reads like another "SPUs aren't general purpose/useful" thread at the moment), and simply ask:

"How could RPGs be improved with more processing power?"

Then there are many answers, most of which could be applied to any other genre of game too:

- Pimp the graphics (obvious)
- Pimp the sound
- Procedural content generation

One thing I think would mostly benefit an RPG or adventure-style title would be speech generation and/or recognition. The latter would free us from kludgy control mechanics and limited conversation engines, and the former would allow much more freedom in dialog for not only the main characters, but also huge numbers of varied NPCs. Right now everything has to be scripted and chosen from a limited menu, and many NPCs will sound exactly the same and say the same things.
Technically, you couldn't call it a role-playing game if you have speech generation and/or recognition. Nothing should be organized and presented in the story without it being intended (obviously, but just saying).

Edit: I just got an idea for how you could improve the sound. I'm not sure if this would be CPU intensive, but you could make it so that the closer the source of a sound effect is to the ground or anything solid, the more bass there is. It could definitely make the game feel more visceral...
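A toy sketch of how that proximity-to-bass idea might work (all names and numbers here are invented for illustration, not from any real audio engine): scale the low-frequency gain of a sound effect by how close its source is to the nearest solid surface.

```python
def bass_gain(height_above_surface, max_height=5.0, floor_gain=1.0, boost=2.0):
    """Return a low-band EQ multiplier for a sound effect.

    height_above_surface: distance (metres) from the sound source to the
    ground or nearest solid geometry. Closer to the surface = more bass.
    """
    # Clamp the height to [0, max_height], then interpolate linearly.
    t = max(0.0, min(height_above_surface, max_height)) / max_height
    return floor_gain + boost * (1.0 - t)

# An explosion at ground level gets 3x bass; one high in the air gets 1x.
print(bass_gain(0.0))  # 3.0
print(bass_gain(5.0))  # 1.0
```

In a real mixer this multiplier would feed the low shelf of a per-voice EQ, which is cheap per sound and could easily run on a spare core.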
 
nintenho said:
Technically, you couldn't call it a role-playing-game if you have speech generation and/or recognition. Nothing should be organized and presented into the story without it being intended (obviously, but just saying).
Huh? RPGs are supposed to be open-ended: do what you want, say what you want. Forcing people to a particular set of speech options limits the scope of the role you get to play. Some RPGs are different of course, and in something like FF speech options should follow the story. But for players to be able to choose their own interaction with people would be great.

Speech synthesis is a good idea. Instead of using a dozen actors for a dozen voices, and every person you meet having the same voice as a twelfth of the population, you'd create voices on the fly, with intonation, accents, etc. Also, instead of spending hours recording speech to load off disc, you create it from text. That would allow users to type in text and have characters say it (though if you're doing that, you may as well do speech recognition!). And of course, there are moments where sentence generation would be good. Having to script and record hours of small talk is no fun. Simple conversations could be created on the fly, translated into voices, to add ambient and relevant discussion that adapts to the player's actions.

This would be great, but to date all speech synthesis sounds very robotic, and attempts to create conversation by computer haven't been successful either. Maybe instead take a few basic voices and process them to make them sound different? Perhaps a phonetic construction system with advanced blending so words don't sound like a cut-and-paste job? Also, that wouldn't cover the latencies during turn-based combat, which I think was the main crux of the thread. As a player spends billions of processor cycles deciding whether to use a 'Neo Katana Thrust' or 'Lightning Starsong of Pearlescent Might', what should those cycles be doing?

One idea to expend money pointlessly on unnecessary development: how about acoustically modelled, procedurally created orchestral scores? Wave synthesis of an orchestra that takes primary musical phrases and constructs audio on the fly using fractal variations and orchestral tonality to set the mood? You could bring any processor to its knees trying to do this, so a library could be developed for all turn-based JRPGs to needlessly fill whatever free processor cycles on whatever platform, and no one would have to worry about their processors sitting idle ever again.
 
AI will be the most important advance in RPGs.

seismologist said:
I'm thinking that turn-based RPGs must be among the least CPU-intensive games out there, with little in the way of AI complexity, so in a game like this most of the game code runs on the PPE. What would be the most likely use for SPEs in this scenario?

Extension of the GPU? lighting? vertex processing? any thoughts?

Why do you think more complex AI would not be beneficial to RPGs? I think that improved AI will be the most important advance in RPGs in future, rather than graphics or physics improvements.

Current games use primitive code-branch based, top-down pseudo-AI, because of the procedural, single-threaded nature of conventional CPUs. The AI is simple and predictable because it consists of traversing a fixed set of states (mirrored in code branches) depending on decisions taken. This is predictable and has a limited number of states, because it has to be programmed in explicitly by the programmer. It is also limited because it is inefficient: the logic state is stored in the program counter and requires long branches (forcing the instruction cache to be flushed), and it hogs a whole execution thread. It is pseudo-AI because it isn't really AI at all, just something that has the appearance of intelligence. It is comparable to the LISP-based chat programs of the 80s, which would take text you typed in and try to give a response by identifying grammar and switching verbs and word order to pose a question. It seems intelligent for a very short time and then becomes infuriating because of its limited response capability.

There is another way of doing AI, one which the SPE is ideally suited to: neural-network type AI. Examples of this type of AI have recently been demonstrated in the crowd behaviour demos.

Neural network type AI involves treating the logic status as data in a logic table and processing it as data with boolean operations, rather than representing it by the position of the program counter. This has a number of advantages. You are not limited to one logic state per execution thread as you are with procedural AI - you can have thousands of logic states by creating thousands of logic status datasets, each associated with a different object. You can process the logic datasets using vector processing to some extent by using branchless boolean algorithms. You can load and stream-process the logic states of objects in turn, or distribute them to a number of SPEs to process them in parallel. Branches are short and can be limited to a large extent by using branchless algorithms that store the results of what would otherwise be branches as boolean flags in the logic tables for later action. The SPE is perfect for this - it could have been built with this type of AI in mind.

Also for complex AI, neural network type AI is much easier to program, because the programmer just has to program the (relatively simple) rules for each object, and then turn them loose to interact with the environment and with each other in complex ways, rather than having to program explicitly for all outcomes in the entire environment. Unlike procedural AI, this is real intelligence. Take the crowd behaviour demo for example: the programmer has defined how each person in the crowd should behave, and he/she can put each person in a particular place, but unlike procedural AI, there is no way he/she can predict where they will end up - the darn things have a mind of their own, and there is an almost infinite number of ways things can end up.

What neural network type AI really is, is a realtime simulation of a complex network of interacting objects based on rules specified for the objects by the programmer. You may of course have more than one AI object per physical object. For example, the human mind may have an emotion AI object, a conscious AI object, a sub-conscious AI object, a desire AI object, and a conscience AI object, all interacting with each other and with the environment. Another possibility is interaction of AI objects over the Internet in a truly massive environment.
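A minimal sketch of the logic-table idea in Python (the flags and rules here are invented toy examples): each agent's state lives in a bitmask, and the update is pure boolean arithmetic with no per-agent branching. A real SPE version would pack many agents into SIMD vectors, but the branchless flag update is the same idea.

```python
# Each agent's logic state is data in a table, not a position in the code.
# Flags are packed into one integer per agent and updated with boolean
# operations only - no if/else per agent, which is what maps well onto
# SIMD hardware.

HUNGRY, SEES_FOOD, EATING = 1 << 0, 1 << 1, 1 << 2

def step(states):
    """Advance every agent one tick using branchless boolean rules."""
    out = []
    for s in states:
        hungry    = (s >> 0) & 1
        sees_food = (s >> 1) & 1
        # Rule: an agent eats iff it is hungry AND sees food.
        eating = hungry & sees_food
        # Eating clears hunger for the next tick.
        new_hungry = hungry & ~eating & 1
        out.append(new_hungry | (sees_food << 1) | (eating << 2))
    return out

agents = [HUNGRY | SEES_FOOD, HUNGRY, SEES_FOOD]
print(step(agents))  # [6, 1, 2]
```

The branch that would normally decide "should this agent eat?" has become a flag stored back into the table, to be acted on later, exactly as described above.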

In Cell, you would probably run procedural AI on the PPE to do overall game AI, but the SPEs would handle the AI of individual objects in the environment.

As far as RPGs are concerned, instead of the boring and predictable navigation through a finite set of possibilities, the neural network type of AI could allow far more immersive and complex RPGs than we have now. It would drive the "cheat" bloggers mad though.

There were some old discussions on this in the IBM Cell forums:
http://www-128.ibm.com/developerworks/forums/dw_thread.jsp?forum=739&thread=93069&cat=46
http://www-128.ibm.com/developerworks/forums/dw_thread.jsp?forum=739&thread=103263&cat=46
 
SPM said:
Also for complex AI, neural network type AI is much easier to program, because the programmer just has to program the (relatively simple) rules for each object, and then turn them loose to interact with the environment and with each other in complex ways, rather than having to program explicitly for all outcomes in the entire environment.

This has its own ups and downs, though. It might seem simpler, but coming up with rules that result in the desired behaviour in the first place can be tricky. There may be lots of subtleties to those rules that result in behaviour you couldn't have predicted - which may indeed be a pleasant surprise, or not, depending on what your AI ends up doing.

I guess this is true of procedural AI too, although the whole aspect of 'learning' would make things less predictable.

This isn't a knock on learning algos or neural nets at all, of course! I think it could be quite cool to see them being used more versus hard-coded stuff.
 
Titanio said:
It might seem simpler, but coming up with rules that result in the desired behaviour in the first place can be tricky.

There may be lots of subtleties to those rules that result in behaviour you couldn't have predicted - which may indeed be a pleasant surprise, or not, depending on what your AI ends up doing.

Just try programming the crowd behaviour algorithm for several hundred people using procedural AI and you will see just how much easier the neural network AI approach is for certain things. The procedural AI approach is of course simpler for overall game logic.

Regarding the unpredictable subtleties that arise, isn't this the very definition of intelligence - for an object to use what experience it has gained from its environment and interactions to come up with actions which could not have been predicted by its programmer? Or to put this in Star Trek speak: for an artificial creation to go beyond its original programming.
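The crowd-behaviour point can be illustrated with classic flocking rules: each agent follows only a few simple local rules (cohesion, separation, alignment), yet where the group ends up is emergent rather than scripted. This is a minimal Python sketch with arbitrary coefficients, not how any actual demo was implemented:

```python
# Boids-style crowd sketch: per-agent local rules, emergent group behaviour.

def step(agents, dt=0.1, radius=5.0):
    """agents: list of [x, y, vx, vy]. Returns the next state of the crowd."""
    nxt = []
    for i, (x, y, vx, vy) in enumerate(agents):
        cx = cy = ax = ay = sx = sy = 0.0
        n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(agents):
            if i == j:
                continue
            dx, dy = ox - x, oy - y
            if dx * dx + dy * dy < radius * radius:   # only nearby agents matter
                n += 1
                cx += ox; cy += oy        # cohesion: steer toward neighbours' centre
                ax += ovx; ay += ovy      # alignment: match neighbours' velocity
                sx -= dx; sy -= dy        # separation: steer away from close neighbours
        if n:
            vx += 0.01 * (cx / n - x) + 0.05 * (ax / n - vx) + 0.05 * sx
            vy += 0.01 * (cy / n - y) + 0.05 * (ay / n - vy) + 0.05 * sy
        nxt.append([x + vx * dt, y + vy * dt, vx, vy])
    return nxt
```

Nothing in the code says where the crowd will go; run it for a few hundred ticks and the paths depend entirely on the interactions, which is the unpredictability being discussed above.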
 
PeterT said:
Do a simulation of the character's facial muscles, and adaptively modify the mesh according to the underlying structure. If done right you could have very convincing lip-synched talking-while-showing-emotions, etc. Sure, it's questionable wether that would be much better than just using traditional vertex/bone animation, but it should be more flexible if done right and perhaps even easier to handle from a content-creation perspective.
I think the part of facial animation that we now need to get a hold of is nuance in movement. Source has very convincing facial articulation, but it's still a long way off realistic. Tiny things, like muscular twitches, and small head and eye movements (your eyes actually make tiny adjustments something like three times a second on whatever you're staring at).

Would a method like the one you propose make any of this stuff more achievable than the traditional method? Or more difficult, because it means more calculations?
 
liverkick said:
Not your favorite genre is it? :)
Just saying that in a turn-based RPG that is by its nature less demanding of the processor, why make work for yourself? If you design a good game that only uses 10% of resources, why do you need to invent things to use up another 90% of resources if they don't contribute to the game? Take a Tetris game, for example. That game doesn't need 6 SPEs of processing. If you can create a quality Tetris title in 1 month with fantastic 3D graphics using RSX, and it only uses 10% of resources, what's the point in rewriting it to raytrace the whole thing and use 100% of resources if it doesn't improve the game?

The premise of the thread was 'in turn-based RPGs, how can you use SPEs to make them better', and my response is basically: you can't. The nature of the turn-based system makes it undemanding of the console. That doesn't mean it's a bad game. You don't measure how good a game is by how little idle time there is. If the game happens to run on less than maximum resources, the devs should be happy to pocket the savings, rather than drive their costs up adding unnecessary extras just to fill out processor cycles.
 