Capcom's "Panta Rhei" engine

New details!

http://www.dualshockers.com/2013/07...clusive-or-not-more-info-on-panta-rhei-shared

Some of those questions were answered by Capcom’s Deputy General Manager of Technology Research and Development Masaru Ijuin and by programmers Daisuke Shimizu and Hitoshi Mishima as part of a very extensive interview on the Japanese website Game Watch.

The development of Panta Rhei started in the summer of 2011 to address problems with Capcom’s proprietary MT Framework engine that had led to deteriorating development efficiency.

The Deep Down demo ran in real time at the PS4 presentation and displayed between 10 and 20 million polygons per frame. It had a variable frame rate above 30 frames per second and a total texture capacity of 2 GB. It ran 30 different shaders at the same time.

The demo can run on a PC with an NVIDIA GeForce GTX 570, 8 GB of RAM, and an Intel Core i7 CPU. The development machines used at Capcom have GeForce GTX 680 and GTX 590 GPUs.

The peak performance of the PS4 is theoretically lower than that of a high-end PC, but due to the ease of development and the streamlined architecture there are areas in which it can be superior. The same can be said about the Xbox One, which has a similar architecture and potential.

The engine can use Tessellation and, in certain areas, Approximating Catmull-Clark Subdivision Surfaces, which is unfortunately too heavy to be used systematically for every element. It also uses Dynamic Level of Detail (DLOD) to avoid pop-in. Thanks to the PS4's high memory capacity it's possible to achieve stable performance while still avoiding LOD pop-in.
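
As a rough illustration of how continuous LOD avoids pop-in (a generic sketch, not Panta Rhei code; all names and thresholds below are made up), the tessellation factor for a patch can be derived from camera distance so detail ramps up smoothly instead of snapping between discrete meshes:

```cpp
#include <algorithm>
#include <cmath>

// Picks a continuous tessellation factor from camera distance so detail
// fades in smoothly instead of popping between discrete LOD meshes.
float TessellationFactor(float distanceToCamera,
                         float nearDistance,   // full detail at or below this range
                         float farDistance,    // minimum detail at or beyond this range
                         float maxFactor)      // e.g. 16 subdivisions per patch edge
{
    // Normalize distance into [0, 1] and invert so near == 1, far == 0.
    float t = (distanceToCamera - nearDistance) / (farDistance - nearDistance);
    t = std::clamp(t, 0.0f, 1.0f);
    float detail = 1.0f - t;

    // The factor varies continuously frame to frame, so geometry never jumps
    // a whole LOD step at once -- which is what hides pop-in when there is
    // enough memory to keep the high-detail data resident.
    return 1.0f + detail * (maxFactor - 1.0f);
}
```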

The rendering pipeline has been redesigned. It can use Tile-Based Deferred Rendering combined with Forward Rendering for special and semi-transparent materials. Tile-Based Deferred Rendering grants very high performance, but Forward Rendering can be used when you want to use a particular Bidirectional Reflectance Distribution Function (a four-dimensional function that defines how light is reflected at an opaque surface) or render translucent materials.

Rendering of materials is done by combining a diffuse map, a specular map, a normal map and a roughness map.
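
To make the split concrete, here is a minimal sketch of how such a hybrid pipeline is commonly organised; the structs and names are illustrative assumptions, not Capcom's actual data layout. Opaque surfaces write the four maps into a G-buffer and are lit by the tiled deferred pass, while translucent or special-BRDF materials take the forward path:

```cpp
// Per-pixel data an opaque material writes into the G-buffer.
struct GBufferSample {
    float albedo[3];     // from the diffuse map
    float specular[3];   // from the specular map
    float normal[3];     // from the normal map
    float roughness;     // from the roughness map
};

enum class ShadingPath { TiledDeferred, Forward };

struct Material {
    bool translucent = false;  // e.g. glass, smoke
    bool customBRDF  = false;  // needs a BRDF the G-buffer can't encode
};

ShadingPath ChoosePath(const Material& m)
{
    // Deferred handles the common opaque case cheaply (one lighting pass
    // over screen tiles); forward is the escape hatch for everything the
    // G-buffer can't represent.
    return (m.translucent || m.customBRDF) ? ShadingPath::Forward
                                           : ShadingPath::TiledDeferred;
}
```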

Shimizu-san tried to implement the indirect lighting technology named Sparse Voxel Octree Based Real-Time Global Illumination (SVO-GI) for the flames of the dragon; however, the process was rather heavy on resources, so he decided to take advantage of simpler Voxel Cone Tracing for the Deep Down demo.
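
For readers unfamiliar with the technique, here is a minimal, generic sketch of voxel cone tracing against a mipmapped 3D voxel texture (the simpler scheme the article alludes to). It is not Capcom's implementation; `SampleVoxelMip` is a stand-in for the real texture fetch:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3     { float x, y, z; };
struct Radiance { float r, g, b, a; };

// Stand-in for a trilinear fetch from mip `level` of the voxel 3D texture;
// a real implementation would sample the GPU texture here.
Radiance SampleVoxelMip(Vec3 /*pos*/, float /*level*/)
{
    return {0.0f, 0.0f, 0.0f, 0.05f};  // dummy data so the sketch is self-contained
}

Radiance ConeTrace(Vec3 origin, Vec3 dir, float coneAngle, float maxDist)
{
    Radiance accum{0.0f, 0.0f, 0.0f, 0.0f};
    float dist = 1.0f;  // start a little off the surface to avoid self-lighting
    while (dist < maxDist && accum.a < 1.0f) {
        // The cone widens with distance, so sample a coarser mip further out.
        float radius = std::max(dist * std::tan(coneAngle), 1.0f);
        float level  = std::log2(radius);
        Vec3 p{origin.x + dir.x * dist,
               origin.y + dir.y * dist,
               origin.z + dir.z * dist};
        Radiance s = SampleVoxelMip(p, level);

        // Front-to-back compositing: nearby emissive voxels both add light
        // and occlude whatever lies behind them.
        float w  = (1.0f - accum.a) * s.a;
        accum.r += w * s.r;
        accum.g += w * s.g;
        accum.b += w * s.b;
        accum.a += w;

        dist += radius;  // step size grows with the cone footprint
    }
    return accum;
}
```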

It’s possible to use Partially Resident Textures (PRT) on both PS4 and Xbox One. It’s the same technology as the MegaTexture used in Rage by id Software, and the DirectX 11.2 Tiled Resources technology shown by Microsoft at BUILD 2013 is essentially the same thing. Panta Rhei can do it as well.
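
Conceptually, PRT/Tiled Resources let a huge virtual texture exist while only the tiles actually sampled get physical memory. A toy bookkeeping sketch, purely illustrative (the real page mapping is done by the GPU and driver, and the names here are made up):

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>

constexpr uint32_t kTileBytes = 64 * 1024;  // typical hardware page/tile size

struct TileKey {
    uint32_t mip, x, y;
    bool operator==(const TileKey& o) const {
        return mip == o.mip && x == o.x && y == o.y;
    }
};

struct TileKeyHash {
    std::size_t operator()(const TileKey& k) const {
        return (std::size_t(k.mip) * 73856093u) ^
               (std::size_t(k.x)   * 19349663u) ^
               (std::size_t(k.y)   * 83492791u);
    }
};

class SparseTexture {
public:
    // Called for the tiles the renderer actually touched this frame; only
    // those are backed by physical memory, even though the whole texture
    // exists in virtual address space.
    void MakeResident(const TileKey& tile) {
        if (resident_.insert(tile).second) physicalBytes_ += kTileBytes;
    }
    void Evict(const TileKey& tile) {
        if (resident_.erase(tile) != 0) physicalBytes_ -= kTileBytes;
    }
    uint64_t PhysicalBytes() const { return physicalBytes_; }

private:
    std::unordered_set<TileKey, TileKeyHash> resident_;
    uint64_t physicalBytes_ = 0;
};
```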

The original Game Watch article in Japanese is worth a look too, for the slides and probably more details that weren't translated.
http://game.watch.impress.co.jp/docs/series/3dcg/20130731_608483.html

Stolen from gofreak
 
Voxels for lighting live on :)

So voxel cone tracing is applied to emissive materials, like the dragon fire, which act as emitters. And it's view-dependent, with lower-resolution voxels the further away you get from the camera.

Also, 3D textures are used to look up the voxels rather than the traditional octree traversal in SVOGI, which makes it faster somehow. And with potentially huge 3D textures representing the scene, they can use PRT to help.
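
One common way to get "lower resolution further from the camera" with plain 3D textures is a set of nested volumes (cascades/clipmaps) centred on the camera. Whether Capcom does exactly this isn't stated, so treat the following as a generic sketch with made-up numbers:

```cpp
struct Cascade {
    float halfExtent;   // world-space half size of this volume
    float voxelSize;    // world units per voxel
};

// e.g. four cascades, each the same voxel count: fine detail near the
// camera, coarse voxels far away, roughly constant memory overall.
constexpr Cascade kCascades[] = {
    { 8.0f, 0.25f }, { 16.0f, 0.5f }, { 32.0f, 1.0f }, { 64.0f, 2.0f },
};

int PickCascade(float distFromCamera)
{
    for (int i = 0; i < 4; ++i)
        if (distFromCamera <= kCascades[i].halfExtent)
            return i;
    return 3;  // clamp to the coarsest volume
}
```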

Not sure if this is all known stuff, but I thought this part of the article was the most interesting. Maybe someone with some Japanese skills can parse the original text better.
 
So is that the same kind of stuff that Epic was doing with UE4?

They actually started with the exact same thing, but creating and updating the voxel octree proved too costly, and they decided to use voxels in a 3D texture instead.

That's a lot faster for creating the voxel structures, but it also has its share of problems:

- Since you are storing a volume, even empty space inside that volume costs memory. They avoided that by using far fewer, bigger voxels to represent the scene than Epic was using in the UE4 demo (which means the lighting would lack finer detail); see the rough memory arithmetic sketched after this list. They also said that the PRT capabilities of GCN could be useful to tile a 3D mega texture and so bring the memory usage down a lot (MS also showed a tiled shadow map demo at BUILD. How does mega texture work for real-time generated assets? Do you just discard the unused parts?)

- Searching a volume is a lot slower than searching a tree, so I guess they also limited the number of light bounces (I have no idea if they even have indirect lighting, or just used it for direct lighting by having the entire fire as an "emissive object").
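
A quick back-of-the-envelope on the memory point above, with illustrative numbers rather than Deep Down's actual settings, showing how fast a dense volume grows and why fewer, bigger voxels (or PRT tiling) help:

```cpp
#include <cstdio>

int main()
{
    const long long res           = 256;   // voxels per axis (illustrative)
    const long long bytesPerVoxel = 8;     // e.g. RGBA16F radiance
    long long base     = res * res * res * bytesPerVoxel;  // 256^3 * 8 B = 128 MiB
    long long withMips = base * 8 / 7;                     // full mip chain adds ~1/7

    std::printf("dense volume: %lld MiB, with mips: %lld MiB\n",
                base >> 20, withMips >> 20);

    // Halving the resolution per axis (bigger voxels, as the post suggests)
    // cuts this by 8x -- the trade-off being coarser lighting detail.
    return 0;
}
```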

I was actually surprised that creating and updating the voxel octree is more expensive than cone tracing it... I wonder how a vector-capable CPU would fare at that... Perhaps SVOGI might make a return later this gen once developers get their hands on the HSA capabilities of those machines...
 
It appears they preferred doing severely jittered cone tracing over a full-res buffer, rather than good per-pixel sampling on a half-res buffer. The actual cone-tracing cost can be the same, but it takes a buffer 4x as big. Depending on how good their final resolve is, though, it might produce more accurate results than a discontinuity-aware upsampling of half-res lighting.
 
Wow, I never realized how much visual "cheating" goes on in games.

It's not so much cheating as doing as little as possible in order to present the visual illusion in the most convincing way possible. With finite resources, the less you need to do for a given effect, the more things you can do. As long as there isn't some huge corner case that breaks your illusion, even better.

For example, smoke is often just a flat texture (sometimes animated, sometimes not). It has a huge gaping hole of a corner case, though, in that the texture either has to rotate with the player's view (which looks odd) or not rotate, so the player will see it edge-on (which also looks odd). But it's far cheaper than volumetric smoke comprised of particles. The illusion is good enough for most developers and most gamers, and performance can better be spent on other things which have a greater impact on the game's presentation.
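
For what it's worth, the "flat texture that rotates with the view" trick described above is just a billboard; here is a tiny generic sketch of the per-frame re-orientation (illustrative math, not taken from any particular engine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 Cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
Vec3 Normalize(Vec3 v)     { float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
                             return {v.x/l, v.y/l, v.z/l}; }

// Returns the right/up axes for a quad at `pos` so it always faces `camera`.
// (Degenerate when the camera is directly above/below the quad; a real
// implementation would handle that case.)
void BillboardAxes(Vec3 pos, Vec3 camera, Vec3& right, Vec3& up)
{
    Vec3 toCamera = Normalize(Sub(camera, pos));
    Vec3 worldUp  = {0.0f, 1.0f, 0.0f};
    right = Normalize(Cross(worldUp, toCamera));
    up    = Cross(toCamera, right);
}
```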

As time goes on, hardware gets more powerful, which allows you to cut fewer corners and/or find a new way to present a more believable illusion with less obvious corner-case breakdowns at minimally increased resource usage. Or you can now use a resource that was once extremely limited but now isn't (going from 512 MB of memory to 5 GB of memory, for example) even though the other resources haven't improved as significantly.

Game development is sort of like Magic. It's all about fooling the viewer into thinking something is happening that isn't actually happening. That grass you see? That isn't really modeled grass, it's just a flat texture. Looks great head on, looks like arse when viewed from the side. Just like magic tricks, looks great if seen from the right perspective, looks like arse if it isn't or you know what is going on. That nicely POM'd cobblestone street? Looks great at the right distance and angle. View it too close or at a bad angle and the illusion falls apart.

Regards,
SB
 
Hence the quotes. I know it's just a limitation of the power available. I don't fault developers for it. I'll be interested to see if these graphical effects are just eye candy or have actual gameplay implications as well.
 

Kind of OT, but doesn't one of the powers in inFamous involve smoke? Blow stuff up to make volumetric smoke that the character can start moves from inside?
 