I don't know what changed with the PS3 architecture by then but I presume something did.
It went from a beta kit consisting of a 2.8 GHz Cell CPU with 512 MB of XDR RAM and a 512 MB 7800 GTX GPU, to a dev kit with 512 MB of RAM and the RSX (and a 3.2 GHz Cell).
You forgot FlexIO.
Like Ac...Joshua and others have said, bad animations - especially character animations.
Too bad this is a problem that isn't likely to be solved with a groundbreaking new algorithm, or so it seems to me at least.
My hope is that middleware/engines will provide better and better character animation packages that take as much work off the animators as possible without restraining flexibility or expressive power too much. Still, many good animators (and thus a lot of money) will be needed for animation for a long time to come, I'm afraid.
I disagree...
I think the best solution would be much more intelligent animation systems, developed by programmers, to aid animators in getting in-game animation to work realistically. The biggest problem with animation isn't the animations themselves (what could be more realistic than mocap?) but the systems used to blend animations smoothly or dynamically from one to the next.
Well, with mocap, you don't really have any choice but to even it out after you get it, or use it only as a reference, because all mocap is noisy to start with and has occasional blips or errors where the line of sight to a marker was obscured or something, and in turn some extrapolated guesses come out wrong. A lot of outsourced mocap studios might do some cleanup work before delivery, but even that is still only a starting point, because the cleanup they do is more or less aimed at getting you an error-free, but otherwise direct, mocap result - not a "production quality" animation.

There must be a reason why so much in movie CGI - especially facial animation - is still keyframed, with mocap only used as a reference (if used at all).
I disagree...
I think the best solution would be much more intelligent animation systems, developed by programmers, to aid animators in getting in-game animation to work realistically. The biggest problem with animation isn't the animations themselves (what could be more realistic than mocap?) but the systems used to blend animations smoothly or dynamically from one to the next.
The best solution would be context/AI-driven animation systems which could be integrated into the engine and the animation tools, allowing the animators to define the properties of the contexts/AI themselves in order to get them working well enough.

That way it would just be a case of setting up the pre-canned anims and creating descriptions of the context of the animations, and then the engine could dynamically alter the animations in real time, letting them change and adapt depending on the situation.

This kind of technology is where I see the future of animation going.
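A minimal sketch of what such a context-driven selection layer could look like. All the names and the tag-matching scheme here are my own illustration, not any shipping system: the animator authors context tags on each pre-canned clip, and the engine scores clips against the current situation.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical: each pre-canned animation carries animator-authored
// context tags ("terrain=flat", "mood=alarmed", ...). At runtime the
// engine scores clips against the current situation and picks the best
// match, so behaviour can be tuned purely through the tag data.
struct AnimClip {
    std::string name;
    std::map<std::string, std::string> context;  // animator-authored tags
};

// Score a clip: one point per context key matching the current situation.
int ScoreClip(const AnimClip& clip,
              const std::map<std::string, std::string>& situation) {
    int score = 0;
    for (const auto& kv : clip.context) {
        auto it = situation.find(kv.first);
        if (it != situation.end() && it->second == kv.second) ++score;
    }
    return score;
}

// Pick the clip whose tags best describe the current situation.
const AnimClip* PickClip(const std::vector<AnimClip>& clips,
                         const std::map<std::string, std::string>& situation) {
    const AnimClip* best = nullptr;
    int bestScore = -1;
    for (const auto& c : clips) {
        int s = ScoreClip(c, situation);
        if (s > bestScore) { bestScore = s; best = &c; }
    }
    return best;
}
```

A real system would of course blend into the chosen clip rather than hard-switch, but the data-driven selection is the part the animators would own.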
What you're talking about already exists: Soul Calibur 2/3 used motion tweening to blend animations from one to the next, using some form of linear/parabolic interpolation.
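As an illustration of the basic idea only - the actual SC2/3 implementation isn't public, so the easing curve below is just a guess at what "parabolic" interpolation might mean (a smoothstep-style ease-in/out weight):

```cpp
#include <cassert>

// Minimal sketch of motion tweening between two keyed poses, reduced to a
// single joint angle per pose for brevity.
float Lerp(float a, float b, float t) { return a + (b - a) * t; }

// Ease-in/out: a parabola-like remapping of t in [0,1] (smoothstep).
float EaseInOut(float t) { return t * t * (3.0f - 2.0f * t); }

// Blend a joint angle from one pose to the next, linearly or eased.
float BlendJointAngle(float fromPose, float toPose, float t, bool eased) {
    float w = eased ? EaseInOut(t) : t;
    return Lerp(fromPose, toPose, w);
}
```

A full blender would do the same per joint, using quaternion slerp for rotations instead of scalar lerp, but the transition logic is the same.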
The problem is that more pre-canned animations won't help entities in games move more realistically; it will just give them more movement variation.

I think the problem is that most games just don't have enough mocap animations to connect the different possible postures. For a fighting game it's feasible, because the level is small and static, so a lot of the animations can be permanently stored in RAM. For a game with wide open areas, like an FPS, storing a bunch of mocap animations for a bunch of on-screen characters uses up quite a bit of memory. Of course streaming the level will help free up some memory, but I don't think anyone has found a viable solution yet. Then there is animation instancing, which could be a good method, since Project Offset will be doing it.
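A rough back-of-envelope sketch of the memory cost being described. The figures assumed here (60 bones, an uncompressed 4-float quaternion per bone, 30 Hz sampling) are mine for illustration, not from the thread:

```cpp
#include <cassert>
#include <cstddef>

// Uncompressed size of one mocap clip:
// bones * bytes-per-bone-per-frame * frames.
std::size_t ClipBytes(std::size_t bones, double seconds,
                      std::size_t hz = 30, std::size_t bytesPerBone = 16) {
    return bones * bytesPerBone * static_cast<std::size_t>(seconds * hz);
}
```

At those assumed rates a single two-second clip is ~56 KB, so a library of a hundred clips already passes 5 MB uncompressed - which is why compression, streaming, or instancing matters on consoles with a few hundred MB of RAM.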
None of my Atari 2600 games had screen tearing or framerate drops!
So I guess that you guys are talking about something else, because like I said, this is already being worked on in games currently in development.
But I think until the first gameplay videos of Uncharted are released people won't really understand what implications it has for games.
Nothing to do with hardware - I'm quite certain MGS4's first showing used soft-filtered shadow volumes; the artifacts were exactly the same as in the myriad of PS2 games that have used the same technique over the last 6 years (including those I've worked on myself).

Ether_Snake said:
I don't know what changed with the PS3 architecture by then but I presume something did.
Shadow maps, in spite of all their visual shortcomings, are just way more practical on the new generation of hardware.
I disagree.

Panajev said:
you have tons of bandwidth and fill-rate
Well, with mocap, you don't really have any choice but to even it out after you get it or use it as a reference because all mocap is noisy to start and has the occasional blips or errors[..]. A lot of outsource mocap studios might do some cleanup work before delivery, but even that still only becomes a starting point because the cleanup they do is more or less towards the goal of getting you an error-free, but otherwise direct mocap result, not really a "production quality" animation.
And that step, from "direct mocap result" to "production quality animation", is exactly where I see the problem.
The stuff arch is talking about is definitely something to look forward to, but it's not the area of my concern. "Incomplete" movement animations are not something that bothers me too much, and surely something that can be made better by more intelligent animation systems.
What I'm talking about are high quality character animations, i.e. gestures and facial expressions. Characters in dialogue, in close-up. That's where an intelligent animation system won't be the ultimate saviour, I'm afraid...
And if badly executed, it's quite often one of the main causes of breaking the immersion for me. (E.g. KotOR: Really great game - but boy, that would have rocked so much more with good character animations...)
PCF (Percentage Closer Filtering) is used by a lot of games (HS, for example, takes up to 12 PCF samples per pixel). It doesn't fix aliasing problems, but it can improve quality a lot, as it replaces hard-edged low-res shadows with fuzzier/smoother edges.

As far as shadow aliasing is concerned, does this technique proposed by Insomniac solve the problem (page 14)?
http://www.insomniacgames.com/tech/articles/1007/files/shadows_in_rcf.pdf
If so, is there a negative impact of using this technique (performance, quality, difficulty to implement, etc.)? I would love to think that this pet peeve of mine might be resolved, as I find it to be the most irritating of all (it breaks the illusion). I hope that MGS4 uses such a technique, as I have seen a lot of aliasing in this otherwise perfect-looking game.
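For what it's worth, the core of PCF is simple enough to sketch on the CPU. This toy version (map size and kernel width are arbitrary illustration values) just averages several binary depth comparisons instead of taking one, which is what turns a hard shadow edge into a gradient:

```cpp
#include <cassert>

const int kMapSize = 8;  // toy shadow-map resolution

// Percentage-closer filtering: average the 0/1 results of a 3x3
// neighbourhood of depth comparisons, returning the fraction of taps
// that consider the fragment lit (0 = fully shadowed, 1 = fully lit).
float PCFShadow(const float depthMap[kMapSize][kMapSize],
                int x, int y, float fragmentDepth) {
    float lit = 0.0f;
    int samples = 0;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = x + dx, sy = y + dy;
            if (sx < 0 || sy < 0 || sx >= kMapSize || sy >= kMapSize)
                continue;  // skip taps outside the map
            lit += (fragmentDepth <= depthMap[sy][sx]) ? 1.0f : 0.0f;
            ++samples;
        }
    }
    return samples ? lit / samples : 1.0f;
}
```

Note this is exactly why PCF doesn't fix aliasing: the underlying comparisons are still against the same low-res map, it only smooths the transition between them.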
I disagree.
Bandwidth is irrelevant in this case, and fill-rate isn't even close to keeping pace with the resolution and scene-complexity increases (both of which balloon the fill-rate demands) compared to last gen.

It's much less practical than last gen (especially if you want to add accurate soft shadowing to volumes), and the alternatives are much more practical than they used to be (especially on PS2). Not a contest, really.
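The resolution part of that argument is simple arithmetic. A hedged sketch (the overdraw figure is purely illustrative - shadow-volume fill cost depends on the actual silhouettes being extruded):

```cpp
#include <cassert>

// Stencil shadow volumes rasterize large screen-space polygons, so
// their cost scales with resolution times average volume overdraw.
unsigned long long VolumeFillCost(unsigned width, unsigned height,
                                  unsigned avgOverdraw) {
    return 1ULL * width * height * avgOverdraw;
}
```

Just going from 640x480 to 1280x720 triples the pixels filled for the same volumes, before accounting for this generation's more complex scenes pushing overdraw up as well.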