PS3Land interviews Fantasy Lab

ElStupido

Newcomer
Found it on NeoGAF.



Fantasy Lab is one of the newest next-gen development studios, bringing to life an engine for the PS3 with the power of Global Illumination and impressive displaced subdivision surface technology.

Recently PS3Land.com had a chance to talk with Michael Bunnell, the President of Fantasy Lab. Here is what we discussed:

PS3Land: How long has Fantasy Lab been around?

Fantasy Lab was founded about a year ago.

PS3Land: How long has the Fantasy Engine been in the works?

The Fantasy Engine was begun when the company was founded a year ago.

PS3Land: What will the Fantasy Engine include? Anything new that has not been seen in any other game engine besides Global Illumination?

I think our combination of subdivision surfaces with displacement mapping is unique. It lets us render very complex geometry in a realistic way from any viewpoint. Computer graphics is about creating an illusion. If you see an object up close and it is obvious that it is rendered with normal-mapped polygons then you lose the illusion. Thereafter the viewer knows that the object is just some normal-mapped polygons and it no longer seems as real.
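
(For readers who haven't run into displaced subdivision surfaces before, here is a rough sketch of the general idea, not Fantasy Lab's implementation: refine a coarse mesh, then push the refined vertices along their normals by a height value. A real engine would use Catmull-Clark or Loop subdivision and sample a displacement texture; the simple midpoint split and procedural height function below are purely illustrative.)

```python
import numpy as np

def subdivide(vertices, faces):
    """One level of 1-to-4 midpoint subdivision of a triangle mesh.
    Real subdivision schemes (Catmull-Clark, Loop) also smooth the
    vertices toward a limit surface; that step is omitted here."""
    verts = list(map(tuple, vertices))
    index = {v: i for i, v in enumerate(verts)}
    new_faces = []

    def midpoint(a, b):
        m = tuple((np.asarray(verts[a]) + np.asarray(verts[b])) / 2.0)
        if m not in index:
            index[m] = len(verts)
            verts.append(m)
        return index[m]

    for i0, i1, i2 in faces:
        m01, m12, m20 = midpoint(i0, i1), midpoint(i1, i2), midpoint(i2, i0)
        new_faces += [(i0, m01, m20), (i1, m12, m01), (i2, m20, m12), (m01, m12, m20)]
    return np.array(verts), new_faces

def vertex_normals(vertices, faces):
    """Area-weighted vertex normals accumulated from face normals."""
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        n = np.cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0])
        for i in (i0, i1, i2):
            normals[i] += n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1.0)

def displace(vertices, normals, height_fn, scale=0.1):
    """Push each vertex along its normal by a scalar height -- the essence of
    displacement mapping (a real engine samples a texture instead)."""
    heights = np.array([height_fn(v) for v in vertices])
    return vertices + scale * heights[:, None] * normals

# Example: refine a tetrahedron twice, then add procedural surface detail.
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 3, 2)]
for _ in range(2):
    verts, faces = subdivide(verts, faces)
verts = displace(verts, vertex_normals(verts, faces), lambda p: np.sin(8 * p[0]))
```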

PS3Land: I hear you currently have a PS3 title in the works. Now I understand that restrictions on releasing details about the title are strict, but will it be PS3 exclusive, or are there any details you are allowed to provide?

Yes, we are working on a PS3 title, but we are not providing any more information other than it will have an "E" rating.

PS3Land: Global Illumination, what is it?

Global Illumination is a method of lighting geometry in computer graphics in which the light that reaches a surface is calculated using both direct light from light sources and indirect light reflecting off other surfaces. The result is more realistic and natural-looking images. Global illumination is complex to compute since light can come from any surface in the virtual world, and any surface can occlude (block) light.
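
(As a rough illustration of "direct plus indirect" -- and emphatically not Fantasy Lab's actual method -- here is a toy gather over a set of surface patches. It ignores occlusion entirely, which is exactly the part identified above as expensive, and it costs O(n²) per bounce.)

```python
import numpy as np

def form_factor(p_i, n_i, p_j, n_j, area_j):
    """Point-to-disc style approximation of how much of patch j is 'seen' by
    point i. No visibility term: blockers are ignored for simplicity."""
    d = p_j - p_i
    r2 = float(np.dot(d, d)) + 1e-8
    d_hat = d / np.sqrt(r2)
    cos_i = max(float(np.dot(n_i, d_hat)), 0.0)
    cos_j = max(float(np.dot(n_j, -d_hat)), 0.0)
    return cos_i * cos_j * area_j / (np.pi * r2 + area_j)

def gather_indirect(pos, nrm, area, albedo, radiosity):
    """One bounce: light arriving at each patch from every other patch,
    scaled by the receiving patch's albedo."""
    n = len(pos)
    out = np.zeros_like(radiosity)
    for i in range(n):
        for j in range(n):
            if i != j:
                out[i] += form_factor(pos[i], nrm[i], pos[j], nrm[j], area[j]) * radiosity[j]
        out[i] *= albedo[i]
    return out

def global_illumination(pos, nrm, area, albedo, direct, bounces=3):
    """Direct lighting plus a few bounces of indirect lighting."""
    radiosity = direct.copy()
    for _ in range(bounces):
        radiosity = direct + gather_indirect(pos, nrm, area, albedo, radiosity)
    return radiosity
```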

PS3Land: What makes Global Illumination any better than the shading techniques developers are using for today's games? Are there any noticeable differences?

The most noticeable difference between global illumination renders and typical lighting techniques used for games is the diffuse inter-reflections. They allow for soft, natural-looking lighting that makes images look realistic and objects really "pop" no matter what materials are rendered.

As I see it there are three alternatives to global illumination.
One approach is to use point and directional lights only and ignore inter-reflections. The result is harsh, unnatural lighting as in the Doom 3 engine.

Another approach is to use lots of area lights. This is the approach taken by Pixar in many of their earlier films and by DreamWorks in the movie Shrek. The results can look very good as long as the area lights are properly shadowed. However, it requires placing a lot of lights, and area lights take a lot more computation power to use than point lights, so this technique is not used for real-time applications. DreamWorks added a bounce of indirect lighting in Shrek 2 so they did not need to place so many lights. New films, like Monster House, are using full global illumination.
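
(To see where the extra cost of area lights comes from, here is a hedged sketch: a point light needs one shading computation and one shadow test per shaded point, while an area light needs many samples spread across its surface -- which is also where the soft shadows come from. Visibility tests are omitted and all names are illustrative.)

```python
import numpy as np

def point_light(p, n, light_pos, intensity):
    """One sample per shaded point (shadow test omitted)."""
    d = light_pos - p
    r2 = float(np.dot(d, d))
    wi = d / np.sqrt(r2)
    return intensity * max(float(np.dot(n, wi)), 0.0) / r2

def rect_area_light(p, n, corner, edge_u, edge_v, radiance, samples=64,
                    rng=np.random.default_rng(0)):
    """Monte Carlo estimate over a rectangular light: 'samples' shading
    computations (and, in a real renderer, shadow tests) per shaded point."""
    cross = np.cross(edge_u, edge_v)
    area = float(np.linalg.norm(cross))
    light_normal = cross / area
    total = 0.0
    for _ in range(samples):
        q = corner + rng.random() * edge_u + rng.random() * edge_v  # random point on the light
        d = q - p
        r2 = float(np.dot(d, d))
        wi = d / np.sqrt(r2)
        cos_p = max(float(np.dot(n, wi)), 0.0)               # surface facing the sample
        cos_l = max(float(np.dot(light_normal, -wi)), 0.0)   # light facing the surface
        total += radiance * cos_p * cos_l / r2               # shadow test omitted
    return total * area / samples
```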

A third approach is environment lighting, using environment maps or spherical harmonic lighting. Environment lighting is a natural choice for real-time rendering since its predecessor, environment mapping, has been used for years to render shiny materials in real-time. However, diffuse materials need self-shadowing or they look flat and seem to glow in all the wrong places (like in the Jimmy Neutron TV show). Calculating the shadowing (and ideally, diffuse inter-reflections) requires basically the same amount of work as global illumination and is often computed using the same techniques. Therefore, environment lighting is typically only used for real-time applications if radiance transfer information is pre-computed, which is only practical for static scenes.

Another problem with environment lighting is that it is only correct at a single point. It takes a lot of environment maps rendered at different locations (or groups of spherical harmonic coefficients computed from different locations) to properly render objects in a complex scene or game level. Note that all the environment lighting information needs to be re-computed each frame for dynamic environments.
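
(For context, this is roughly what spherical-harmonic environment lighting looks like in code, using the standard nine-coefficient irradiance formulation. The environment function below is made up, and, as noted above, there is no self-shadowing: every surface point sees the whole environment.)

```python
import numpy as np

def sh9(d):
    """First nine real spherical harmonic basis functions for a unit direction."""
    x, y, z = d
    return np.array([
        0.282095,                    # Y(0, 0)
        0.488603 * y,                # Y(1,-1)
        0.488603 * z,                # Y(1, 0)
        0.488603 * x,                # Y(1, 1)
        1.092548 * x * y,            # Y(2,-2)
        1.092548 * y * z,            # Y(2,-1)
        0.315392 * (3 * z * z - 1),  # Y(2, 0)
        1.092548 * x * z,            # Y(2, 1)
        0.546274 * (x * x - y * y),  # Y(2, 2)
    ])

def project_environment(radiance_fn, samples=4096, rng=np.random.default_rng(0)):
    """Monte Carlo projection of an RGB environment onto nine SH coefficients."""
    coeffs = np.zeros((9, 3))
    for _ in range(samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                   # uniform direction on the sphere
        coeffs += np.outer(sh9(d), radiance_fn(d))
    return coeffs * (4 * np.pi / samples)        # divide by the pdf 1/(4*pi)

# Convolving with the clamped-cosine kernel turns radiance into irradiance.
A = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)

def diffuse_irradiance(coeffs, normal):
    """Irradiance at a surface with the given unit normal (no self-shadowing)."""
    return sh9(normal) @ (coeffs * A[:, None])

# Example: a made-up environment -- bluish sky above, dark ground below.
sky = lambda d: np.array([0.3, 0.5, 1.0]) if d[2] > 0 else np.array([0.05, 0.04, 0.03])
coeffs = project_environment(sky)
print(diffuse_irradiance(coeffs, np.array([0.0, 0.0, 1.0])))  # normal facing the sky
```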

PS3Land: How will the PS3 be able to handle something like GI?

We are well aware that conventional methods for computing global illumination can take hours for typical scenes, even on hardware as powerful as the PS3. However, our technique is extremely efficient and ideal for running on parallel computing hardware. The PS3 is plenty powerful enough to render complex scenes in real-time with global illumination using our technique.

PS3Land: Any personal comments about the Cell processor or RSX graphics Card?

I love the Cell processor. The SPUs are ideal for running our global illumination code, which can also run on the RSX. Having both Cell and RSX gives us a lot of power and flexibility.

PS3Land: Will Fantasy Lab license the Fantasy Engine or Global Illumination technology to other game studios, or perhaps to Sony to use as a middleware program in their lineup of dev tools?

We would love to offer our global illumination technology as middleware on the PS3. We are actively looking into that.

PS3Land: In relation to #8, to what extent will global illumination go on the PS3 hardware? Would something like a high-end PC or perhaps the Xbox 360 run GI?

I can say that the PS3 is very attractive to us because it is the most powerful hardware that the target audience for our game is likely to own.

PS3Land: Any other words you would like to add, whether it's about your company, the Fantasy Engine, GI, or the PS3?

We are very excited about the combination of the PlayStation 3 and our rendering technology. Our artists are able to create game characters and environments that match, and even exceed, their vision of what the character should look like. What we have shown so far is just the tip of the iceberg. Just wait until you see our game!

Videos
http://www.ps3land.com/images/fantasylab/Rhinork_FantasyLab_PS3Land.wmv
http://www.fantasylab.com/newPages/turn.mov
http://www.fantasylab.com/newPages/rhinorkSun.mov
http://www.fantasylab.com/newPages/glowingClub.mov
 
The videos actually look pretty decent, but I'm no GI expert. They gotta be cutting corners all over the place, I imagine.
 
That looks very nice, but the scene is too simple. Is that because it is just a demo, or for other reasons?
 
Well, he actually came out and said
The PS3 is plenty powerful enough to render complex scenes in real-time with global illumination using our technique.
I look forward to seeing them! I'm also somewhat skeptical of the game. It sounds like they had an idea for GI and decided to make it the basis of a new game? Not the best foundation for entertainment. Hopefully, for their sake, it has a lot more substance to it than just looking pretty.
 
I think it's a next-gen game, not just PS3 despite the concentration on that in the article. Though I don't remember it being announced as any sort of game before the Realtime GI link above.
 
Shifty Geezer said:
Well, he actually came out and said
The PS3 is plenty powerful enough to render complex scenes in real-time with global illumination using our technique.
I look forward to seeing them! I'm also somewhat skeptical of the game. It sounds like they had an idea for GI and decided to make it the basis of a new game? Not the best foundation for entertainment. Hopefully, for their sake, it has a lot more substance to it than just looking pretty.


And KZ2 is supposed to be a render target, after Sony heard too many complaints about it not being real time. I'm not saying they are lying too, but they say "complex" and then show this...
 
I'm not saying I believe him, but he has gone on record as saying it works with complex scenes, whereas if he hadn't mentioned that at all there'd be more reason to believe it can't handle anything but the simplest of scenes.
 
New interview: http://interviews.teamxbox.com/xbox/1665/Fantasy-Lab-Interview/p1/

Before you ask, the following movies are captured from a real-time Fantasy Engine demo. In the clips below, an nVIDIA GeForce Go 7900 GTX calculated the global illumination solution (light bouncing to convergence) for the scene in about 3.3 milliseconds (300 frames per second) per frame, treating all surfaces as dynamic.

We had the chance to talk with Michael Bunnell, President of Fantasy Lab, Inc., to discuss the new global illumination technique that the company has developed, as well as their Fantasy Engine.

...

We understand that Fantasy Lab has pioneered a method to implement, in real-time graphics, a technique that was previously found only in pre-calculated graphics, such as those seen in CGI movies like Shrek. Is this the type of rendering that SplutterFish’s Brazil plug-in provides for 3DSMax, but in real-time?

Michael Bunnell: Yes, our method is different, but the results are basically the same. There are some effects, like caustics, that are not supported, but fortunately most of those effects can be faked convincingly with texture mapping.

...

Does that imply that by implementing your solution, environments could support a higher degree of deformable surfaces, and therefore, more interactivity?

Michael Bunnell: Yes, both the geometry and the lighting can be dynamic. Not only can objects be deformed, but so can area lights, which can be any shape.

Does your global illumination technique have any consequences for post-processing effects like motion blur, depth-of-field, or things like particles and environmental effects? If so, does it improve the implementation of those or, on the contrary, make it more difficult to have additional effects due to the resources this new GI algorithm consumes?

Michael Bunnell: No, the technique does not require any changes to post-processing and can be applied to things like particles. As to the resources the GI algorithm consumes, we find that it allows us to use fewer traditional lights. So, in the end, the cost of GI is not that high.

In your Rhinork demo, you have a single model in a rather simple environment. What can we expect from this technology when implemented in a real-life scenario, such as a game engine? Will a 720p visual be possible at playable framerates?

Michael Bunnell: All I can say is that we are targeting 60 frames per second at 720p for our own game.

A current trend in game engines is to build in-game assets from two meshes; the developer uses a highly-detailed model that is raytraced to obtain a normal map, which is then applied as a texture to a simpler model that is the one rendered in the game. The trick is to have a low poly mesh that looks like it has all the details of the high poly model. Does your GI algorithm change this paradigm in any form?

Michael Bunnell: No. We use the same paradigm. The engine supports treating the low-resolution mesh as a subdivision surface, but that is optional, as is displacement mapping.
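
(For readers unfamiliar with the two-mesh paradigm described in the question, here is a sketch of the shading half: the low-poly mesh supplies an interpolated tangent frame, and the baked normal map perturbs the shading normal so the surface reacts to light as if it still had the high-poly detail. All names here are illustrative, not from any particular engine.)

```python
import numpy as np

def decode_normal(rgb):
    """Map an 8-bit normal-map texel from [0, 255] into a unit vector in [-1, 1]^3."""
    n = np.asarray(rgb, dtype=float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

def tangent_to_world(n_ts, tangent, bitangent, normal):
    """Rotate a tangent-space normal into world space using the TBN basis
    interpolated from the low-poly mesh."""
    tbn = np.column_stack([tangent, bitangent, normal])
    return tbn @ n_ts

def diffuse_lighting(texel_rgb, tangent, bitangent, normal, light_dir, light_color):
    """Lambert shading with the perturbed normal."""
    n = tangent_to_world(decode_normal(texel_rgb), tangent, bitangent, normal)
    return max(float(np.dot(n, light_dir)), 0.0) * np.asarray(light_color)

# Example: the texel (128, 128, 255) encodes (approximately) the unperturbed normal.
print(diffuse_lighting((128, 128, 255),
                       tangent=np.array([1.0, 0.0, 0.0]),
                       bitangent=np.array([0.0, 1.0, 0.0]),
                       normal=np.array([0.0, 0.0, 1.0]),
                       light_dir=np.array([0.0, 0.0, 1.0]),
                       light_color=(1.0, 1.0, 1.0)))
```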

...

Aside from the Fantasy Engine, do you have plans to create video games? If so, would you focus on a specific platform at first?

Michael Bunnell: Fantasy Lab is currently working on a video game. As to the platform, I can tell you that there will likely be a console version first because of our target market. We have not made the final decision as to which console will be first.

...

And:

As someone who undoubtedly knows a lot about real-time graphics, what is your opinion on the Xbox 360 architecture? What about the PlayStation 3? Do you take a side in the current discussion of ATI’s unified shader architecture versus nVIDIA’s traditional approach?

Michael Bunnell: I’m not going to touch this one with a ten foot pole.

:LOL:

Edit - can the thread title be edited to 'Fantasy Engine' or something? Cheers.
 
Sounds confident. We haven't even seen any mockups from Fantasy Lab, so we have no idea what the complexity is going to be. Maybe you play a Rhino walking around the desert...

This was a silly metric:
In the clips below, an nVIDIA GeForce Go 7900 GTX calculated the global illumination solution (light bouncing to convergence) for the scene in about 3.3 milliseconds (300 frames per second) per frame
You won't measure it as 300 FPS, as you have to do more to render a frame than calculate GI. The proper metric is that the GI takes 3.3 milliseconds, which is about 10% of a 30 FPS frame and 20% of a 60 FPS frame. That lets you know how much GPU time you have left for rendering other things.
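
Spelled out (taking the quoted 3.3 ms figure at face value):

```python
# Frame-budget arithmetic for a fixed 3.3 ms GI cost.
gi_ms = 3.3
for fps in (30, 60):
    frame_ms = 1000.0 / fps
    print(f"{fps} fps: frame budget {frame_ms:.1f} ms, "
          f"GI uses {100 * gi_ms / frame_ms:.0f}%, "
          f"{frame_ms - gi_ms:.1f} ms left for everything else")
```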

I'd also suggest this method (if it actually works!) would be ideal on a unified architecture. Whether it's vertex shader based or pixel shader based, Xenos can turn all the shader pipes to the task, whereas a traditional GPU will have some resources sitting idle.
 
Shifty Geezer said:
if it actually works!) would be ideal on a unified architecture. Whether it's vertex shader based or pixel shader based, Xenos can turn all the shader pipes to the task, whereas a traditional GPU will have some resources sitting idle.

Though it doesn't matter much if those idle resources are just as powerful, or more so :)
 
london-boy said:
Am i the only one who can't get the videos to work??
Probably. I think I used QT the first time they were linked.

[rant]Why the blazes have Apple forced an iTunes install with QT?! How rude! Wanting to watch free movies is not the same as wanting to buy their downloadable music. It just adds waste to the download and to my time uninstalling iTunes once QT is in.[/rant]
 
[rant]Why the blazes have Apple forced an iTunes install with QT?! How rude! Wanting to watch free movies is not the same as wanting to buy their downloadable music. It just adds waste to the download and to my time uninstalling iTunes once QT is in.[/rant]
http://www.apple.com/quicktime/download/standalone.html

They do still have a standalone QuickTime installer; it's just that they deliberately make it more difficult to find. I find, though, that QuickTime disagrees with enough of the things I've worked with in the past that I pretty much stick to VLC.
 