Next-gen graphical effects

Shifty Geezer

Seen a couple of real improvements that'll hopefully make it over to the new consoles, and felt it worth making a thread where anyone can post a new graphical effect we could see next gen.

Volumetric smoke: Finally! Big puffy, lingering smoke clouds. I hope it can be extended to incorporate wind and add air pollution (fog) to affect gameplay, but this is definitely a start.

Proper hair: AMD's latest creation is debuting in Tomb Raider, and it looks mighty fine. I imagine the compute requirements mean one character out of 100 will have hair and the rest will have to wear hats, head-scarves, or have crew cuts, but it's a pretty impressive leap forward. With LOD, maybe a few characters up close will be able to sport realistic hair?
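
For a sense of why it's so expensive: even a toy per-strand integrator (Verlet plus length constraints, nothing like AMD's actual technique, all numbers made up) has to touch every segment of every strand every frame, before any collision or shading happens:

```python
# Toy per-strand hair integration (Verlet + fixed-length constraints).
# Purely illustrative; not AMD's technique, and all numbers are made up.
import numpy as np

STRANDS, SEGMENTS = 2000, 16          # 2000 strands of 16 segments each
REST_LEN, DT = 0.01, 1.0 / 60.0
GRAVITY = np.array([0.0, -9.8, 0.0])

pos = np.zeros((STRANDS, SEGMENTS, 3))
pos[:, :, 1] = -np.arange(SEGMENTS) * REST_LEN     # strands hang downward
pos[:, :, 0] = np.random.rand(STRANDS, 1) * 0.2    # spread roots out a little
prev = pos.copy()

def step(pos, prev):
    # Verlet integration under gravity, with a bit of damping.
    new = pos + (pos - prev) * 0.99 + GRAVITY * DT * DT
    new[:, 0] = pos[:, 0]                          # roots stay pinned to the scalp
    # A few relaxation passes keep segment lengths roughly constant.
    for _ in range(4):
        d = new[:, 1:] - new[:, :-1]
        length = np.linalg.norm(d, axis=-1, keepdims=True) + 1e-9
        corr = d * (1.0 - REST_LEN / length) * 0.5
        new[:, 1:] -= corr
        new[:, :-1] += corr
        new[:, 0] = pos[:, 0]                      # re-pin roots after each pass
    return new, pos

for _ in range(10):
    pos, prev = step(pos, prev)
print(pos.shape)   # (2000, 16, 3), and that's before collision, self-shadowing or AA
```

Strands times segments times constraint iterations, every frame, so LOD that drops distant heads to cards or caps really is the obvious way to afford it.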
 
The original Warhawk for PS3 had volumetric clouds, years ago, computed on the cell's SPUs. They weren't perfect, but the effect looked pretty cool and effective regardless...

I would really like to see more like that, especially since no other game I know of ever did anything similar. Good ole Quake 3 had a pretty nice volumetric fog effect, but that wasn't comparable in any way. It just used the fixed-function features of 3D graphics processors of that era (pre-GPU) in a better way.

Don't really care about accurate hair, there are much more important things to waste computing cycles on IMO. I'd like to see some proper use of tessellation for starters.
 
The original Warhawk for PS3 had volumetric clouds, years ago, computed on the cell's SPUs. They weren't perfect, but the effect looked pretty cool and effective regardless...
Yeah, but they were static. Knack's smoke is an implementation of the GPGPU fluid dynamics that were tech-demo'd years ago.
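
In toy form, the core of that kind of solver is just an advect/diffuse/project loop over a grid every frame. Here's a minimal semi-Lagrangian advection step on a 2D density field, as a sketch of the general technique only (definitely not what Knack actually runs):

```python
# Minimal semi-Lagrangian advection of a 2D smoke density field,
# the core step of a "stable fluids" style solver. Toy sketch only.
import numpy as np

N = 128                                  # grid resolution
density = np.zeros((N, N))
density[8:16, N//2-4:N//2+4] = 1.0       # small puff low in the grid
u = np.zeros((N, N))                     # x velocity
v = np.full((N, N), 20.0)                # constant upward draft along +y

def advect(field, u, v, dt):
    # Trace each cell centre backwards along the velocity and sample there.
    ys, xs = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    xb = np.clip(xs - dt * u, 0, N - 1.001)
    yb = np.clip(ys - dt * v, 0, N - 1.001)
    x0, y0 = xb.astype(int), yb.astype(int)
    tx, ty = xb - x0, yb - y0
    # Bilinear interpolation at the back-traced position.
    return ((1-tx)*(1-ty)*field[y0, x0] + tx*(1-ty)*field[y0, x0+1] +
            (1-tx)*ty*field[y0+1, x0]   + tx*ty*field[y0+1, x0+1])

for _ in range(60):                      # one second at 60 Hz
    density = advect(density, u, v, 1.0/60.0)
print("density total after advection:", density.sum())
```

A real solver adds diffusion, a pressure projection to keep the velocity divergence-free, and a 3D grid, which is exactly where the GPGPU part earns its keep.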
 
Yeah, but they were static. Knack's smoke is an implementation of the GPGPU fluid dynamics that were tech-demo'd years ago.
Hellgate: London had an implementation of a simple case.
http://www.youtube.com/watch?v=1BOn8Dwag2c

Now that scene voxelization seems to be a big thing, it could very well be that we'll see more fluid simulations and other effects (voxel-based GI, reflections, and all sorts of particle systems interacting with the environment).
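
The appeal is that the voxel grid becomes a common currency: splat the scene into it once, then GI gathers, reflections and particle collisions can all query the same structure cheaply. A rough sketch of the idea (made-up resolutions, not any particular engine):

```python
# Toy scene voxelization: bin surface sample points into a coarse occupancy
# grid that other effects can query. Illustrative only, not a real pipeline.
import numpy as np

GRID = 64                                   # 64^3 voxels over a unit-cube scene
points = np.random.rand(100_000, 3)         # stand-in for surface samples

idx = np.clip((points * GRID).astype(int), 0, GRID - 1)
occupancy = np.zeros((GRID, GRID, GRID), dtype=bool)
occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = True

def hits_scene(p):
    # Cheap particle-vs-scene test: is this position inside an occupied voxel?
    i = np.clip((p * GRID).astype(int), 0, GRID - 1)
    return occupancy[i[0], i[1], i[2]]

print(occupancy.sum(), "occupied voxels;", hits_scene(points[0]))
```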
 
Yeah, but they were static. Knack's smoke is an implementation of the GPGPU fluid dynamics that were tech-demo'd years ago.

really? they look canned:
[screenshot: side-by-side capture of the two smoke plumes]


the plumes are identical
 
Not technically "next-gen," since we saw it in ~20% of games this gen, but FP RGB buffers look to be a permanent fixture in next-gen graphics engines, much like normal maps between last and this gen. Being able to use an accurate lighting spectrum without significant sacrifices elsewhere will be the most immediate, visible difference IMO.
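
The point in toy form: keep the lighting in linear floating point all the way through the frame, and only squash it down to 8 bits at the very end. A rough sketch with a Reinhard-style tonemap and illustrative numbers:

```python
# HDR float buffer -> 8-bit display, via a simple Reinhard-style tonemap.
# Illustrative sketch only; exposure and gamma values are made up.
import numpy as np

hdr = np.random.rand(720, 1280, 3).astype(np.float32) * 16.0   # radiance up to 16x "white"

def tonemap(hdr, exposure=0.5):
    x = hdr * exposure
    ldr = x / (1.0 + x)                  # compresses highlights smoothly instead of clipping
    return (np.clip(ldr, 0.0, 1.0) ** (1/2.2) * 255).astype(np.uint8)  # gamma encode

img = tonemap(hdr)
print(img.dtype, img.min(), img.max())   # highlight detail survives here that a plain
                                         # 8-bit target would have clipped at 1.0
```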
 
Would definitely like to see some developers implement SVO. Maybe in the second or third wave of games.

In general, everything CryEngine 3, Frostbite 2 and UE4 were built to do. A lot of next-gen features have been implemented in limited form on current consoles, but I expect them to fully stretch their wings this gen (thinking of SSS, particle effects and the like).
 
really? they look canned:

the plumes are identical
1) They're not identical. The ones in your capture are similar, but clearly not identical (though they could just be offset in time). 2) The main point is that they are volumetric. Even if pre-computed, that's a step up from the sprays of 2D sprites of yesteryear. I reckon they're realtime myself, but I may be proven wrong on that.
 
1) They're not identical. The ones in your capture are similar, but clearly not identical (though they could just be offset in time). 2) The main point is that they are volumetric. Even if pre-computed, that's a step up from the sprays of 2D sprites of yesteryear. I reckon they're realtime myself, but I may be proven wrong on that.

Perhaps their algorithm is rather deterministic, given it has too few variables? I can see similar sources, with no independent wind to interfere with them, lending themselves to very similar, but independently computed, volumetric clouds.

I'm still waiting for realistic, real time, volumetric fire too.
 
Volumetric stuff consumes insane amounts of memory, so it makes sense to re-use precalculated data as much as possible. Also notice the relatively small and enclosed bounding box, which also helps to conserve RAM.

For reference, we've spent some 400 gigabytes of disk cache just on a couple of flintlock muzzle flashes. Six sets of chimney smoke took up a terabyte. And these were all relatively small on screen.
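
To put rough numbers on why it blows up so fast (purely illustrative assumptions, not actual asset specs):

```python
# Back-of-envelope for an animated volumetric effect. All numbers are assumptions.
res = 256                    # 256^3 voxels
channels = 2                 # e.g. density + temperature
bytes_per_channel = 2        # half-float
frames = 240                 # a 4-second loop at 60 fps

per_frame = res**3 * channels * bytes_per_channel
total = per_frame * frames
print(f"{per_frame / 2**20:.0f} MiB per frame, {total / 2**30:.1f} GiB for the sequence")
# -> 64 MiB per frame, 15.0 GiB for the sequence, before shadows or mipmaps
```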

Fire would be easier in that it's smaller, so there's either less memory used or the same amount allows for greater detail; but it's also a little more complex in shape, and there's a lot less tolerance for a 'fuzzy' look. I'd expect image-based approaches to still be quite viable, as fire doesn't require shading and thus doesn't have to be real 3D volumetric stuff.
 
Even if they are canned, it requires huge memory bandwidth, as you would typically ray-cast into the volumetric data. Calculating those shadows needs quite a lot of bandwidth again, some of which can be saved by precomputation.

This has been done a lot in movies recently, but I'm skeptical about whether it can be done in games. My OpenCL-based volume renderer crawls at 25 fps for a static 256x256x256 volume on a 7970 GHz Edition (using an accurate light-scattering model and real-time shadow computation).
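
The bandwidth problem is easy to see even in a toy CPU version: every pixel ray takes dozens of samples out of a big 3D texture, and shadow rays multiply that again. A rough sketch of plain front-to-back ray-marching (nothing like my actual OpenCL kernel, sizes made up):

```python
# Front-to-back ray-march through a density volume; shows where the fetches go.
# Toy CPU sketch with made-up sizes, not an optimised renderer.
import numpy as np

N = 64
vol = np.random.rand(N, N, N).astype(np.float32) * 0.05   # stand-in density field
W = H = 64
STEPS = 64

image = np.zeros((H, W))
for y in range(H):
    for x in range(W):
        transmittance, radiance = 1.0, 0.0
        for s in range(STEPS):                             # march along +z
            px = int(x / W * (N - 1))
            py = int(y / H * (N - 1))
            pz = int(s / STEPS * (N - 1))
            sigma = vol[pz, py, px]                        # one volume fetch per step
            absorbed = 1.0 - np.exp(-sigma)
            radiance += transmittance * absorbed           # emission tied to absorption here
            transmittance *= 1.0 - absorbed
            if transmittance < 0.01:                       # early-out saves fetches
                break
        image[y, x] = radiance

print("volume fetches this frame, worst case:", W * H * STEPS)   # before any shadow rays
```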
 
What Laa-Yosh said sounds about right. I was doing flow over a 24-degree corner for my dissertation. A single frame was around 1 or 2 GB IIRC, and took around 0.25 s to process on 250 cores.

Of course, there are lots of tricks you can use for graphics that look more-or-less right but aren't technically physically correct and consume about one bajillionth as much data. I've seen a lot of impressive results using coarse Lagrangian particles.
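
Something like this is the cheap end of that spectrum: a few thousand Lagrangian particles pushed around by noise and splatted as soft blobs reads as smoke from a distance and stores no volume data at all. Toy sketch, made-up numbers:

```python
# Coarse Lagrangian particles advected with noise: cheap, plausible-looking,
# nowhere near physically correct. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
P = 5000
pos = rng.normal(scale=0.05, size=(P, 3))        # start as a tight puff
vel = np.tile([0.0, 0.0, 1.0], (P, 1))           # buoyant drift along +z
DT = 1.0 / 60.0

for _ in range(120):                             # two seconds of motion
    swirl = rng.normal(scale=0.3, size=(P, 3))   # crude stand-in for curl noise
    vel = 0.98 * vel + swirl * DT
    pos += vel * DT

print("state size:", pos.nbytes + vel.nbytes, "bytes for", P, "particles")
```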
 