1st practical Realtime GI Engine aimed at next-gen consoles/pc...

ROG27

This is a pretty big deal *if* true and *if* practical. The new Fantasy Engine by Fantasy Lab is purported to do a staggeringly accurate (for realtime) single-pass GI solution, supporting both subdivision surfaces and displacement mapping. The engine runs mostly on the GPU. On an NVidia GeForce Go 7900 GTX, performance is claimed to be as follows:

"An NVIDIA® GeForce™ Go 7900 GTX calculates the global illumination solution (light bouncing to convergence) for the scene in these clips above in about 3.3 milliseconds (300 frames per second) per frame treating all surfaces as dynamic. Static geometry can be handled much faster."

The guy who invented the method/engine is posting in the forums linked below. He has also contributed to several books on CG, including GPU Gems 2, and is supposedly well known in the field. His name is Michael Bunnell, if I'm not mistaken. He's claiming he'll post realtime demos soon (he says the QuickTime videos are dumps of realtime runs) and has submitted the engine to NVidia for content creation. He also says the engine is still a work in progress.

What does everyone think? I'm quite skeptical myself, but you can decide for yourself...

http://www.fantasylab.com/newPages/FantasyEngineFeatures.html

http://forums.cgsociety.org/showthread.php?t=372242&page=2&pp=15

http://forums.e-mpire.com/showthread.php?t=58211
 
I can understand the skepticism. The idea of a realtime GI solution doesn't make sense on this hardware when you understand the problem. For this particular demo, the lack of other objects is going to reduce memory access a great deal, I'd expect, so even if it is a true GI solver, in a real game situation it'll be of no use (except maybe a one-on-one fighter). Still, whatever it is, the demo footage looks fantastic, like a stop-motion movie. Even if it's a total hack and not GI at all, if games can look this good I won't complain ;)
 
I have doubts as well - what he claims requires several orders of magnitude of speedup, knowing the computing capacity and memory systems of GPUs and the computational requirements of offline rendering solutions.
We've done a lot of GI stuff on Mental Ray - ambient occlusion, reflection occlusion and bent normal passes - using many subdiv surface characters with lots of heavy displacement mapping and it took some time.

The only clue I have is that he might be bending the rules a bit; after all, GI is not a strictly defined term, and he might be doing it without any raytracing at all.
 
After reading the website, I see he states that it's neither ambient occlusion (it hasn't looked like it either) nor raytracing, so it might be something like the technique in the first MotoGP.
 
Laa-Yosh said:
After reading the website, I see he states that it's neither ambient occlusion (it hasn't looked like it either) nor raytracing, so it might be something like the technique in the first MotoGP.

Laa-Yosh, their example of displacement mapping doesn't add any 'significant' geometry, i.e. horns, ears, fingers, etc. are all real geometry; the DM is used for stuff like ribs, muscle ripples, large scales, etc.

Are there any significant drawbacks to using DM for filling in detail in realtime (like a game)? Collision detection would not really be a problem. Any other drawbacks to displacement mapping in general (not necessarily their implementation) other than performance when used this way? I think LOD was mentioned at one time; are there ways around this (like MIP maps)? DM looks substantially better than just using normal mapping, and it would be nice to get this sort of effect in games, especially since character models still tend to be decidedly low poly and normal maps (while effective in many areas) do tend to leave characters looking flat.
 
After reading the website, I see he states that it's neither ambient occlusion (it hasn't looked like it either) nor raytracing, so it might be something like the technique in the first MotoGP.
That was an image-based technique as well. The website also claims no image-based lighting. So far, it doesn't say much other than to make it sound as if it is an exhaustive computation. My best guess is a series of samples that are used to generate some point lights, but that will hit memory access limitations real fast when you start scaling up to several objects and scenery that's a little more complex than... the ground and the sky.
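To illustrate what I mean by that guess, here's a toy Python sketch of a sample-to-point-light scheme (entirely my own speculation - the function names and the math are mine, not anything from Fantasy Lab):

import math

def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)

def make_vpls(samples, light_pos, light_intensity):
    # samples: (position, normal, albedo, area) tuples scattered over the geometry
    vpls = []
    for pos, nrm, albedo, area in samples:
        to_l = sub(light_pos, pos)
        r2 = dot(to_l, to_l) + 1e-6
        cos_l = max(0.0, dot(nrm, scale(to_l, 1.0 / math.sqrt(r2))))
        power = light_intensity * cos_l * albedo * area / r2  # light the sample re-emits
        if power > 0.0:
            vpls.append((pos, nrm, power))
    return vpls

def indirect(point, normal, vpls):
    # One bounce of indirect light, with NO visibility test between each
    # virtual light and the receiver, so occluders are simply ignored.
    total = 0.0
    for vpos, vnrm, power in vpls:
        d = sub(point, vpos)
        r2 = dot(d, d) + 1e-4                      # bias so r = 0 doesn't blow up
        w = scale(d, 1.0 / math.sqrt(r2))
        cos_e = max(0.0,  dot(vnrm, w))            # emitter-side cosine
        cos_r = max(0.0, -dot(normal, w))          # receiver-side cosine
        total += power * cos_e * cos_r / (math.pi * r2)
    return total

Every receiver has to touch every virtual light, and that sample data won't stay cache-friendly once the scene is more than a character on a ground plane - which is exactly where the memory access limitations come in.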
 
This guy has two chapters in GPU Gems 2, one on "Adaptive Tessellation of Subdivision Surfaces with Displacement Mapping" and one on "Dynamic Ambient Occlusion and Indirect Lighting". According to the blurb in the book, he's worked at Silicon Graphics, Gigapixel, 3dfx and nVIDIA, so he's got a fair amount of experience in the 3D hardware field by the look of things.

Here's the introductory blurb from the chapter in GPU Gems 2 about Dynamic Ambient Occlusion:

In this chapter we describe a new technique for computing diffuse light transfer and show how it can be used to compute global illumination for animated scenes. Our technique is efficient enough when implemented on a fast GPU to calculate ambient occlusion and indirect lighting data on the fly for each rendered frame. It does not have the limitations of precomputed radiance transfer (PRT) or precomputed ambient occlusion techniques, which are limited to rigid objects that do not move relative to one another (Sloan 2002). Figure 14-1 illustrates how ambient occlusion and indirect lighting enhance environment lighting.

Our technique works by treating polygon meshes as a set of surface elements that can emit, transmit, or reflect light and that can shadow each other. This method is so efficient because it works without calculating the visibility of one element to another. Instead, it uses a much simpler and faster technique based on approximate shadowing to account for occluding (blocking) geometry.

The technique does a lot of approximation but the results they show in the book are quite good. I'll have to read the article in detail to fully understand what they're doing.
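From skimming it, the core trick seems to be a point-to-disc form factor approximation used as a cheap "shadow" term. A toy Python version of that idea (my own paraphrase - the book's actual shader formula and its multi-pass double-counting correction differ in the details):

import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def disc_shadow(recv_pos, recv_nrm, emit_pos, emit_nrm, emit_area):
    # Approximate form factor of an occluder/emitter disc as seen from a
    # receiver point: no rays, no exact visibility, just distance and two cosines.
    v = tuple(e - r for e, r in zip(emit_pos, recv_pos))
    r2 = dot(v, v) + 1e-9
    v = tuple(x / math.sqrt(r2) for x in v)
    cos_e = max(0.0, -dot(emit_nrm, v))   # how much the disc faces the receiver
    cos_r = max(0.0,  dot(recv_nrm, v))   # how much the receiver faces the disc
    return emit_area * cos_e * cos_r / (math.pi * r2 + emit_area)

def accessibility(recv_pos, recv_nrm, elements):
    # 1.0 = fully open to the environment, 0.0 = fully blocked.  Summing the
    # per-element shadows double-counts overlapping occluders, which is why
    # the chapter iterates the calculation a couple of times.
    occlusion = sum(disc_shadow(recv_pos, recv_nrm, p, n, a)
                    for p, n, a in elements)
    return max(0.0, 1.0 - occlusion)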
 
If you check out the e-mpire discussion, you have cpiasmic there talking about using the guy's other techniques rather ineffectively. I don't know how much of a graphics guru the guy is.

Looking at the vids, especially the 360-degree rotation (textured + non-textured), you can see it's definitely not a full solution. The top of the shoulder should be much darker under the shoulder pad than it is, yet it appears to be lit by the skybox with no regard for occlusion by the armour.
 
Acert93 said:
Laa-Yosh, their example of displacement mapping doesn't add any 'significant' geometry, i.e. horns, ears, fingers, etc. are all real geometry; the DM is used for stuff like ribs, muscle ripples, large scales, etc.


There are many possible issues with heavy displacement mapping.
For example, if you want to create a large horn from a smooth surface, you'll only have a few vertices to push out, and it will get you long, thin polygons - and heavy stretching of the UV coordinates and thus the textures. Also, if you decrease the subdivision of the base mesh, some displaced details could simply vanish because there won't be any vertices to use.
So you'll have a rough-looking horn with blurry textures, and it's better to add a few extra polygons to the mesh and model the horn, and then only add details with the displacement.

Are there any significant drawbacks to using DM for filling in detail in realtime (like a game)?


The general problem is that you need to have at least a 1:1 ratio of texels to vertices to get all the detail from the map; but as the texture can be filtered, you could actually use even more vertices per texel and get better results. Even a 512*512 texture has about 262,000 texels' worth of information, and we all know how rough that looks up close. Now most Unreal Engine 3 characters use 2K textures, and that's 4 million polygons - no current hardware could work with that much geometry in real time.
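A quick back-of-the-envelope check of those numbers (trivial, but it makes the point):

for edge in (512, 1024, 2048):
    texels = edge * edge
    print(f"{edge}x{edge} map: {texels:,} texels -> {texels:,} vertices at a 1:1 ratio")
# 512x512   ->   262,144
# 2048x2048 -> 4,194,304, i.e. the ~4 million vertices per character mentioned above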
And keep in mind that you would have to use this huge amount of polygons for shadow map calculations as well! Most VFX studios using PRMan go as far as saving all shadow maps for each frame to disk to speed up rendering (again, VFX work is heavily iterative; most scenes are rendered 10 to 100 times during a production, so it makes sense to cache as much as we can). Stencil shadows would suffer an even greater penalty with high poly counts. And I'm not sure about this, but I'd guess any deferred rendering approach using many rendering passes, or Xenos' tile-based rendering, would also suffer.
You can theoretically cheat and calculate shadows without displacement, but that would ruin self-shadowing. A system that separates shadow maps for the characters and the environment could work, but then you'd have to avoid displacement in the environment geometry.

If you consider that high-quality normal maps can capture a similar amount of shading detail and usually only lack the ability to produce silhouette detail, while using far less geometry, then the choice is very clear. Until there's robust hardware support for on-chip tessellation and displacement, there's not going to be much use for it. And keep in mind that tessellated geometry eats memory like candy; that is why PRMan has to render in 32-64 pixel buckets, as the micropolygon point lists would not fit into RAM. So current GPU hardware would need significant rearchitecting for this.

The only thing I can imagine for the current gen is to create models very carefully, so that even a low amount of subdivision provides enough geometry in key areas for some displacement. Then combine a high-res (say 2K) normal map for shading detail with a low-res 256 or 512 texture for displacement-based silhouette detail. That would keep memory use at acceptable levels as well (and I think you can guess what the problem is with using animated displacements for muscle motion, facial animation, etc...).
As far as I know only the X360 has hardware support for tessellation; even this tech demo is probably relying on the CPU to handle that. The PS3's SPEs could offer a lot of help here, though.
 
Laa-Yosh said:
The general problem is that you need to have at least a 1:1 ratio of texels to vertices to get all the detail from the map ... Now most Unreal Engine 3 characters use 2K textures, and that's 4 million polygons - no current hardware could work with that much geometry in real time.
Do you really HAVE to use that many vertices though? 2K texture maps - there aren't even that many pixels on the screen. If there were 2K-squared vertices per texture map per character, there would be far, far more geometry than could ever be seen, even at 1080p, and that's as high as video screens will go for the foreseeable future. Seems to me there's plenty of headroom for some rendering cheatin' there! :LOL:
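For the record, the raw numbers behind that (my arithmetic):

screen_pixels = 1920 * 1080   # 2,073,600 pixels at 1080p
map_vertices  = 2048 * 2048   # 4,194,304 vertices at 1:1 with a 2K map
print(map_vertices / screen_pixels)   # ~2 vertices per screen pixel, for ONE character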

You can theoretically cheat and calculate shadows without displacement, but that would ruin self-shadowing.
Would it really? You mentioned large-scale displacement would look weird for various reasons, so if only small-scale displacement is used instead, would you really notice the self-shadows looking a bit off?

I assume the shadows would still cover the displaced geometry, but wouldn't reflect the changed shape of the displaced geometry - just the original shape of the base geometry. Am I right or wrong? :)

A system that separates shadow maps for the characters and the environment could work, but then you'd have to avoid displacement in the environment geometry.
Could you please explain why this is the case? I don't know enough about stuff like this to figure it out on my own...

Anyway, great post. Thanks!
 
Guden Oden said:
Do you really HAVE to use that many vertices though? 2K texture maps - there aren't even that many pixels on the screen. If there were 2K-squared vertices per texture map per character, there would be far, far more geometry than could ever be seen, even at 1080p, and that's as high as video screens will go for the foreseeable future. Seems to me there's plenty of headroom for some rendering cheatin' there! :LOL:
The question is how? If you have a 4-megavertex source object to accommodate a 2K^2 displacement texture, you'd need to scale that down to screen resolution, perhaps a 50k model, on the fly. You'd still need to process 4 million vertices to create a lower-resolution model every time the model changed. Alternatively, you create the model on the fly, creating vertices for key points on the displacement texture. Say your displacement map has a couple of ridges around a beetle's abdomen. The beetle's abdomen is 500 vertices. The displacement map is 256x256. Perhaps the displacement map can be summed up as 25 extra vertices along the length of the beetle. You would need to add those 25 (or rather at least 50, for both sides of each bump) to the beetle model and create an optimized mesh from the output. I don't know if the technology is there for that yet. Normal subdivision is uniform on the whole, so you'd subdivide the whole surface rather than add points arbitrarily.
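In 1D, the kind of detail-driven refinement I mean might look something like this (a toy Python sketch; the threshold and the beetle-ridge profile are made up, and real adaptive tessellation over a 2D surface is a much harder problem):

import math

def refine(height, x0, x1, tol, depth=0, max_depth=8):
    # Put a vertex between x0 and x1 only if the displacement map deviates
    # from the straight line the coarse mesh would give across that span.
    xm = 0.5 * (x0 + x1)
    coarse = 0.5 * (height(x0) + height(x1))
    if depth >= max_depth or abs(height(xm) - coarse) < tol:
        return [x0]                      # flat enough: keep the coarse edge as-is
    return (refine(height, x0, xm, tol, depth + 1, max_depth) +
            refine(height, xm, x1, tol, depth + 1, max_depth))

# Toy displacement profile: two sharp ridges on an otherwise flat abdomen.
ridges = lambda x: math.exp(-((x - 0.3) * 40) ** 2) + math.exp(-((x - 0.6) * 40) ** 2)

verts = refine(ridges, 0.0, 1.0, tol=0.01) + [1.0]
print(len(verts), "vertices, clustered around the two ridges")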
 
Shifty Geezer said:
If you check out the e-mpire discussion, you have cpiasmic there talking about using the guy's other techniques rather ineffectively. I don't know how much of a graphics guru the guy is.

Looking at the vids, especially the 360-degree rotation (textured + non-textured), you can see it's definitely not a full solution. The top of the shoulder should be much darker under the shoulder pad than it is, yet it appears to be lit by the skybox with no regard for occlusion by the armour.

He claims the engine is a work in progress. Even with its inaccuracies and lack of precision, as you point out, it would still be great if its results are visually more believable than other solutions while still maintaining scalable performance. What I don't get is how he is doing a single-pass method without massive computational power or precomputed approximations/samples. He claims to have 'invented' something, and that he needs to 'protect' it. If he actually did invent something, I don't blame him - it would be copied 100 times over ASAP. He may be sitting on something big if his solution is a scalable performer. The fact that NVidia is toying with his engine now lends some credence to him, IMO.
 
You don't have to render displaced polygons.

I already pointed to this guy (more than two times, actually):
http://www.divideconcept.net/index.php?page=news.php
His technique rasterises sub-pixel displacement without physically adding any polygons. The pixel shader does all the work. It works on silhouettes, like true displacement mapping.

He has been contacted by Crytek, Guerrilla, Sweeney (Epic), EA and many others...
I saw the demo (he came to where I work); it works, but exactly how is his well-kept secret...
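For anyone wondering how that family of techniques usually works (relief / parallax-occlusion style mapping - which may or may not be what he's actually doing; this sketch is mine, not his): the pixel shader marches the view ray through a height field instead of adding geometry. A toy Python version of the inner loop:

import math

def march_heightfield(height, entry_uv, view_dir, height_scale=0.05, steps=32):
    # Step a tangent-space view ray down through a height field and return
    # the uv where it first drops below the surface: displacement-like detail
    # computed per pixel, with no extra vertices.  height(u, v) is in [0, 1];
    # view_dir = (x, y, z) with z < 0, i.e. looking down into the surface.
    u, v = entry_uv
    ray_h = 1.0                                  # ray enters at the top of the volume
    dh = 1.0 / steps                             # height descended per step
    du = view_dir[0] / -view_dir[2] * height_scale * dh
    dv = view_dir[1] / -view_dir[2] * height_scale * dh
    for _ in range(steps):
        if ray_h <= height(u, v):                # ray has gone under the surface
            return (u, v)                        # shade with the texel found here
        u, v, ray_h = u + du, v + dv, ray_h - dh
    return (u, v)                                # ray exited without hitting

bumps = lambda u, v: 0.5 + 0.5 * math.sin(40.0 * u) * math.sin(40.0 * v)
print(march_heightfield(bumps, (0.25, 0.25), (0.5, 0.0, -0.6)))

Getting correct silhouettes on top of that normally needs extra work (shell or prism geometry around the base mesh), which is presumably where his secret sauce lies.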
 
_phil_ said:
You don't have to render displaced polygons.

I already pointed to this guy (more than two times, actually):
http://www.divideconcept.net/index.php?page=news.php
His technique rasterises sub-pixel displacement without physically adding any polygons. The pixel shader does all the work. It works on silhouettes, like true displacement mapping.

He has been contacted by Crytek, Guerrilla, Sweeney (Epic), EA and many others...
I saw the demo (he came to where I work); it works, but exactly how is his well-kept secret...
WOW

But how was the performance in that demo he showed you?
 
_phil_ said:
You don't have to render displaced polygons.

I already pointed to this guy (more than two times, actually):
http://www.divideconcept.net/index.php?page=news.php
His technique rasterises sub-pixel displacement without physically adding any polygons. The pixel shader does all the work. It works on silhouettes, like true displacement mapping.

He has been contacted by Crytek, Guerrilla, Sweeney (Epic), EA and many others...
I saw the demo (he came to where I work); it works, but exactly how is his well-kept secret...

Holy crap! In due time, my friends, in due time we will see something about this. I'm certain of it, by the sounds of it.
 
Looks great, especially the last movie. However, it's dated July 2007, and there's been no update and no news since. What's happening with it, and why isn't it featured in any products?
 
If you check out the e-mpire discussion, you have cpiasmic there talking about using the guy's other techniques rather ineffectively. I don't know how much of a graphics guru the guy is.
It's me, by the way -- that's just the name I use there :p. If you look at the throughput figures he gets in the paper, it's around half a million verts per second. I got around 800,000 per second... which, when you get down to it, is pretty useless. 800,000 per frame would be good, but not per second.
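Just to put that throughput in per-frame terms:

verts_per_second = 800_000
for fps in (30, 60):
    print(f"{fps} fps -> {verts_per_second // fps:,} verts per frame")
# 30 fps -> 26,666; 60 fps -> 13,333 -- nowhere near a full character's worth
# of geometry, which is why per-second rather than per-frame is pretty useless.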

Looking at the vids, especially the 360-degree rotation (textured + non-textured), you can see it's definitely not a full solution. The top of the shoulder should be much darker under the shoulder pad than it is, yet it appears to be lit by the skybox with no regard for occlusion by the armour.
You can also see it in the glowing-club video. Makes me think that shadow tests are not part of the indirect lighting portion of the solution, which is just as well. The AO paper was, in a general sense, built around the idea of making extra lights out of other surfaces. Of course, using the classic radiosity form factor computation (in a simplified form) was part of the trick to making sure things behaved properly. It does scale poorly for a variety of reasons, including, of course, lots of incoherent memory accesses.

I have no clue what he might be doing here (maybe just better chosen samples)... but I have my doubts it will scale up as well as he claims and still remain feasible for an in-game render scheme. Rendering in game, by itself, technically needs to be a hell of a lot faster than the actual framerate of the game (it's not as if it's the only thing eating up time).
 
I downloaded a video called engine-arbiturary-model-divx.avi.
It has z-fighting issues - is there something else that I should view?
Hmm, looks like I didn't read the text; the z-fighting was meant to be there.
 