london-boy said:
As I said, of course Waverace had a primitive way of doing things compared to Motorstorm, but that's down to obvious technical differences.
I'm quite sure that either Waverace or some other water-scooter racing game had racers that did create waves
I wasn't sure about this, but was referring just to the 64 version. But as it stands, if you have an engine that can spawn waves from a given point, which I'd think is fairly easy to do, spawning them in the wake of players isn't so difficult. You don't subsequently have to worry about what happens to them any more than any other waves, and I don't think you're actually deforming any existing waves as such (but maybe blending some new ones in with others?). And these games weren't exactly using complex fluid dynamics or the like.. ; )
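That point-spawned wave idea can be sketched roughly as below (purely my own illustration with made-up constants, not anything from the actual games): each wake event just adds a decaying radial ripple, and the water height anywhere is the sum of all live ripples, so spawned waves need no special handling afterwards.

```python
import math

def water_height(x, y, t, ripples, speed=2.0, decay=1.5, wavelength=1.0):
    """Height field as a sum of independent radial ripples.
    Each ripple is (sx, sy, st): spawn position and spawn time."""
    h = 0.0
    for sx, sy, st in ripples:
        age = t - st
        if age <= 0.0:
            continue  # ripple not spawned yet at time t
        r = math.hypot(x - sx, y - sy)
        front = speed * age                       # expanding wavefront radius
        envelope = math.exp(-decay * age)         # ripple fades over time
        envelope *= math.exp(-(r - front) ** 2)   # localised around the front
        h += envelope * math.cos(2.0 * math.pi * (r - front) / wavelength)
    return h

# Spawn a ripple at the player's position every few frames to fake a wake:
wake = [(0.0, 0.0, 0.0), (0.4, 0.0, 0.2)]
h = water_height(1.0, 0.0, 0.6, wake)
```

Because every ripple is independent, blending a new one "into" existing waves is just adding another term to the sum.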
One imagines dealing with 'mud' is rather more complex: mud that can dry out, either splattered on cars or as more persistent grooves in the track itself, with even the potential for getting stuck in it, etc. We haven't seen an (off-road) track racer with that level of environment interaction, as far as I'm aware?
Laa-Yosh said:
Well, to composite the clouds, you need depth info on the scene. It makes more sense to let RSX render the frame and then read the framebuffer and Z-buffer data in small blocks into the SPUs. There's probably some more optimizing involved though, as it'd need quite a few hundred megs of data per frame, especially with 4x AA (4 times the Z samples).
Then again, the cloud data only requires a depth input and produces a colour output, so if they let RSX do the final blending, they won't have to read the framebuffer into the SPUs, only the Z-buffer. So the SPUs could render into a texture that RSX would simply read from XDR RAM and add on top of the framebuffer. Still, GDDR bandwidth is probably scarce enough already with 4x AA, so maybe they can do some trickery to minimize the Z-reads too...
Hadn't thought about the first way before.. in some instances it might be better. With the second, though, I doubt the bandwidth RSX would need for the final blend would really be all that much?
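The second scheme can be sketched roughly like this (an illustrative Python toy, not anyone's actual code — the tile size, fog-style alpha maths and function names are all my assumptions): a worker standing in for an SPU job consumes the Z-buffer one small block at a time and writes a cloud colour + alpha texture, so the GPU-side step is just a cheap alpha blend.

```python
import math

def cloud_pass(zbuffer, width, height, tile=16, cloud_z=0.5):
    """Stand-in for the SPU job: read depth only, in small blocks,
    and produce (luminance, alpha) per pixel for a cloud layer at cloud_z."""
    out = [(0.0, 0.0)] * (width * height)
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):          # one 'DMA block' at a time
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    z = zbuffer[y * width + x]
                    if z > cloud_z:               # scene pixel lies behind the clouds
                        alpha = 1.0 - math.exp(-4.0 * (z - cloud_z))
                        out[y * width + x] = (0.9, alpha)
    return out

def final_blend(framebuffer, clouds):
    """The cheap GPU-side step: alpha-blend the cloud texture over the frame."""
    return [fb * (1.0 - a) + c * a for fb, (c, a) in zip(framebuffer, clouds)]
```

The point being that `cloud_pass` never touches the framebuffer, only depth, so only the small cloud texture has to travel back for blending.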
Laa-Yosh said:
Now, about the raytracing part: why don't they call it raymarching, since AFAIK that's the proper name?
I've seen ray marching referred to in the context of cloud and atmospherics rendering before, so it's quite possible this is what they're doing. edit - though after a quick Google, is ray marching perhaps not plain ray tracing, but rather stepping along the ray to account for light scattering..? Does that let you introduce some assumptions that simplify things a bit?
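For what it's worth, a minimal ray-marching loop looks something like this (my own sketch, assuming a constant white source and simple Beer-Lambert absorption — nothing from the actual game): you step along the ray, sample a density function, and accumulate transmittance and in-scattered light, which is exactly where the scattering assumptions come in.

```python
import math

def march(density, ray_len=1.0, steps=32):
    """Step along a ray through a participating medium, accumulating
    radiance and transmittance (Beer-Lambert per step, unit white source)."""
    dt = ray_len / steps
    transmittance = 1.0
    radiance = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        sigma = density(t)                  # extinction coefficient at this sample
        absorb = math.exp(-sigma * dt)      # fraction of light surviving the step
        radiance += transmittance * (1.0 - absorb)  # light scattered toward the eye
        transmittance *= absorb
    return radiance, transmittance
```

With a unit source the two outputs always sum to one, which makes the loop easy to sanity-check.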
I remember reading a small bit about Heavenly Sword in a Ninja Theory profile in Edge some time ago, and they were talking about using Cell for, amongst other things, clouds. I wonder if they're doing anything quite as ambitious as this?
(Yeah yeah, I'm fishing for info.. for all I know they might 'just' be generating some Perlin noise on the CPU to feed to the GPU.)
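(For the record, even that humble option looks something like the sketch below — though note this is value noise, a simpler cousin of proper Perlin gradient noise, and everything in it is my own illustrative assumption.)

```python
import math
import random

def value_noise_1d(x, seed=0):
    """Smoothly interpolate random values placed at integer lattice points
    (value noise -- a simpler cousin of Perlin's gradient noise)."""
    def lattice(i):
        # deterministic pseudo-random value in [0, 1) per lattice point
        return random.Random(i * 374761393 + seed * 668265263).random()
    i = math.floor(x)
    f = x - i
    u = f * f * (3.0 - 2.0 * f)   # smoothstep fade curve, as in Perlin's paper
    return lattice(i) * (1.0 - u) + lattice(i + 1) * u
```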
edit - side note, but harking back to the argument of "what can physics do for my game?", this is a really neat comparison of GRAW on a regular CPU versus a PhysX-enabled PC:
http://physx.ageia.com/footage.html (<-- footage there)
That's 'just' effects physics. Though I'm sure it's not exactly pushing the hardware to its limit, the PhysX-enabled version clearly looks an awful lot better than the regular PC version, and it's a fairly clear demonstration of how physics can help visually too. There'll be a lot more where this came from, and a lot more sophistication going forward, I think.
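"Effects physics" of that sort boils down to something like this toy sketch (my own example, not anything from GRAW or the PhysX SDK): debris particles that get integrated and drawn but never fed back into gameplay, so the sim can be scaled up or down freely with hardware.

```python
def step_debris(particles, dt, gravity=-9.81):
    """Advance visual-only debris one frame; drop particles that hit the ground.
    Each particle is (x, y, vx, vy); nothing here touches game state."""
    alive = []
    for x, y, vx, vy in particles:
        vy += gravity * dt      # semi-implicit Euler: velocity first
        x += vx * dt
        y += vy * dt
        if y > 0.0:             # discard debris once it lands
            alive.append((x, y, vx, vy))
    return alive

# explosion: spawn some debris, then step it each frame until it all lands
debris = [(0.0, 1.0, 1.0, 3.0), (0.0, 1.0, -1.0, 2.0)]
for _ in range(100):
    debris = step_debris(debris, 1.0 / 60.0)
```

Because nothing downstream depends on the particles, a faster machine can simply simulate more of them — which is roughly what the two GRAW clips show.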