Epic Says This Is What Next-Gen Should Look Like

Textures and geometry will definitely need more memory, and deferred rendering and various other techniques will also eat a lot of it.

The problem is that optical storage is very very slow and it'll take very long minutes to load all the data. Even an HDD install wouldn't help too much here. So this will be a pretty important issue that the hw architects of the nextgen consoles have to solve somehow...
 
Yeah, that's only a part of the pipeline though :)

They're basically using spherical harmonics to store the light contributions in the scene in 3D space. This is especially important in the night scenes where every plant and creature is a light source thanks to the bioluminescence. Of course it also has to be dynamic as characters and leaves are all animated throughout the sequences and so the lighting has to keep on changing, too.
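
To make that concrete, here's a minimal sketch of the general second-order spherical harmonics idea (9 coefficients per probe position in space). This is just an illustration of the technique, not Weta's actual code; the names, weights and the single-probe setup are my own.

```cpp
// Minimal sketch of second-order (9-coefficient) spherical harmonics probes,
// illustrating the general technique only -- not Weta's implementation.
#include <array>
#include <cstdio>

using SH9 = std::array<float, 9>;

// Evaluate the 9 real SH basis functions for a unit direction (x, y, z).
SH9 shBasis(float x, float y, float z) {
    return {
        0.282095f,                        // l=0
        0.488603f * y,                    // l=1, m=-1
        0.488603f * z,                    // l=1, m= 0
        0.488603f * x,                    // l=1, m=+1
        1.092548f * x * y,                // l=2, m=-2
        1.092548f * y * z,                // l=2, m=-1
        0.315392f * (3.0f * z * z - 1),   // l=2, m= 0
        1.092548f * x * z,                // l=2, m=+1
        0.546274f * (x * x - y * y)       // l=2, m=+2
    };
}

// Accumulate a light sample arriving from 'dir' with intensity 'radiance'
// into one probe (one probe = one point in the 3D grid over the scene).
void addSample(SH9& probe, const float dir[3], float radiance, float weight) {
    SH9 b = shBasis(dir[0], dir[1], dir[2]);
    for (int i = 0; i < 9; ++i) probe[i] += b[i] * radiance * weight;
}

// Reconstruct the approximate light arriving from direction 'dir'.
float evaluate(const SH9& probe, const float dir[3]) {
    SH9 b = shBasis(dir[0], dir[1], dir[2]);
    float sum = 0.0f;
    for (int i = 0; i < 9; ++i) sum += probe[i] * b[i];
    return sum;
}

int main() {
    SH9 probe{};                          // a single probe somewhere in the scene
    float up[3] = {0, 0, 1};
    addSample(probe, up, 5.0f, 1.0f);     // e.g. a glowing plant above the probe
    std::printf("light toward +Z: %f\n", evaluate(probe, up));
}
```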

As far as I know they use point clouds to represent the scene geometry, relatively sparsely spaced points but it's enough for the generally soft lighting. Of course it's all derived from the actual scene geometry and even though they use displacement and tessellation (thanks to PRMan) it's still a LOT of geometry... maybe that's what they can speed up with GPU calculations, to generate the point clouds?
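
Here's a rough, hypothetical sketch of how you might scatter such a sparse point cloud over already-tessellated, already-displaced geometry (area-weighted sampling of triangles). It's purely illustrative of the idea, not their actual tooling; every name and parameter here is mine.

```cpp
// Rough sketch: turn dense tessellated geometry into a sparse point cloud by
// scattering roughly 'density' points per unit area on each triangle.
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct SurfacePoint { Vec3 position, normal; float radius; };  // radius: disc size for later ray tests

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

void samplePoints(Vec3 a, Vec3 b, Vec3 c, float density,
                  std::vector<SurfacePoint>& out, std::mt19937& rng) {
    Vec3 n = cross(sub(b, a), sub(c, a));
    float doubleArea = length(n);
    if (doubleArea <= 0.0f) return;                          // skip degenerate triangles
    float area = 0.5f * doubleArea;
    Vec3 unitN = {n.x / doubleArea, n.y / doubleArea, n.z / doubleArea};
    int count = std::max(1, int(area * density));
    float radius = std::sqrt(area / count);                  // discs roughly tile the triangle
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    for (int i = 0; i < count; ++i) {
        float u = uni(rng), v = uni(rng);
        if (u + v > 1.0f) { u = 1.0f - u; v = 1.0f - v; }    // fold the sample into the triangle
        Vec3 p = {a.x + u * (b.x - a.x) + v * (c.x - a.x),
                  a.y + u * (b.y - a.y) + v * (c.y - a.y),
                  a.z + u * (b.z - a.z) + v * (c.z - a.z)};
        out.push_back({p, unitN, radius});
    }
}
```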

Then once they have the lighting baked, they use that precalculated data through the normal rendering pipe using PRMan, which is still REYES based and has the best damn antialiasing and texture filtering I've ever seen in any renderer. Probably full of customized shaders and other extensions, though... I don't know what kind of SSS they use nowadays, for example, I recall they raytraced into shadow maps back in 2002-2003 with LOTR but that's quite probably obsolete by now.

I don't know, but it seems they didn't save on processing (with displacement mapping simulating geometry) to handle the 3D/stereo rendering more efficiently, and they still had 800 different models for each character.

Maybe it could be accelerated by more programmable GPUs like Fermi (CUDA etc.), but I think most of it was the commonly used "old fashioned" processing on the CPU (NURBS or whatever) through tools and custom applications. If I'm not mistaken, it has been said that much of what we saw in Avatar was developed from scratch (for the 3D universe) or was heavy customization of existing tools, similar to what happened with Toy Story in 1995 and even Final Fantasy in 2001.

About ray tracing (like in the CG of Lord of the Rings...), maybe they used it, but with 300 rays per pixel (much heavier to process)?

SSS... some interesting feedback here
http://www.cgfeedback.com/cgfeedback/showthread.php?t=1296


Maybe it's just marketing, but there might be some interesting information here:

http://software.intel.com/sites/billboard/va-magazine/issue-05/articles/dreamworks-pipeline/


Another link with the same info:
http://3dvision-blog.com/some-of-the-challenges-behind-the-making-of-the-3d-movie-avatar/

From wiki:

" The lead visual effects company was Weta Digital in Wellington, New Zealand, at one point employing 900 people to work on the film.[101] Because of the huge amount of data which needed to be stored, cataloged and available for everybody involved, even on the other side of the world, a new cloud computing and Digital Asset Management (DAM) system named Gaia was created by Microsoft especially for Avatar, which allowed the crews to keep track of and coordinate all stages in the digital processing.[102] To render Avatar, Weta used a 10,000 sq ft (930 m2) server farm making use of 4,000 Hewlett-Packard servers with 35,000 processor cores running Ubuntu Linux and the Grid Engine cluster manager.[103][104][105] The render farm occupies the 193rd to 197th spots in the TOP500 list of the world's most powerful supercomputers. A new texturing and paint software system called Mari, was developed by The Foundry in cooperation with Weta.[106][107] Creating the Na'vi characters and the virtual world of Pandora required over a petabyte of digital storage,[108] and each minute of the final footage for Avatar occupies 17.28 gigabytes of storage.[109] To help finish preparing the special effects sequences on time, a number of other companies were brought on board, including Industrial Light & Magic, which worked alongside Weta Digital to create the battle sequences. ILM was responsible for the visual effects for many of the film's specialized vehicles and devised a new way to make CGI explosions.[110] Joe Letteri was the film's visual effects general supervisor.[111]"

http://en.wikipedia.org/wiki/Avatar_(2009_film)
 
I don't think you get the point here.

PRMan's standard rendering method is like this (a toy code sketch follows the list):
- load all the objects in the scene
- split them up into smaller pieces until they're smaller than a preset limit (the smallest primitive is usually a quadratic b-spline patch, PRMan converts even polygons to these)
- when they're small enough, check if they're visible and throw them away if not

- for actual rendering, tessellate each primitive until every micropolygon is smaller than a pixel (the exact size is set by the shading rate)
- apply displacement when used
- shade the vertices of the resulting grid (this includes lighting, shadows etc)
- recombine using stochastic sampling
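
Here's the toy sketch mentioned above: a grossly simplified but runnable illustration of the bound/split/dice/shade/sample flow (displacement omitted, everything done directly in screen space). All the names and numbers are mine; this is nothing like how PRMan is actually written.

```cpp
// Toy REYES-style flow: bound & cull, split until small, dice to ~1 sample per
// pixel, "shade", and stochastically sample into a tiny ASCII framebuffer.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

constexpr int W = 32, H = 16;             // tiny framebuffer
float framebuffer[H][W] = {};

struct Patch { float x0, y0, x1, y1; };   // an axis-aligned screen-space "patch"

bool tooBig(const Patch& p) { return (p.x1 - p.x0) > 4.0f || (p.y1 - p.y0) > 4.0f; }

// Split a patch in half along its longer axis (the "split until small" stage).
void split(const Patch& p, std::vector<Patch>& out) {
    if ((p.x1 - p.x0) >= (p.y1 - p.y0)) {
        float mx = 0.5f * (p.x0 + p.x1);
        out.push_back({p.x0, p.y0, mx, p.y1});
        out.push_back({mx, p.y0, p.x1, p.y1});
    } else {
        float my = 0.5f * (p.y0 + p.y1);
        out.push_back({p.x0, p.y0, p.x1, my});
        out.push_back({p.x0, my, p.x1, p.y1});
    }
}

int main() {
    std::mt19937 rng(1);
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);

    std::vector<Patch> work = {{2, 2, 30, 14}};          // the whole "scene": one patch
    while (!work.empty()) {
        Patch p = work.back(); work.pop_back();
        // Bound & cull: throw away anything outside the screen.
        if (p.x1 < 0 || p.y1 < 0 || p.x0 > W || p.y0 > H) continue;
        // Split until small enough.
        if (tooBig(p)) { split(p, work); continue; }
        // Dice into a grid of roughly pixel-sized micropolygons (shading rate 1).
        int nx = int(std::ceil(p.x1 - p.x0)), ny = int(std::ceil(p.y1 - p.y0));
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                // Jittered (stochastic) sample position inside the micropolygon.
                float sx = p.x0 + (i + jitter(rng)) * (p.x1 - p.x0) / nx;
                float sy = p.y0 + (j + jitter(rng)) * (p.y1 - p.y0) / ny;
                // "Shade": a fake procedural pattern standing in for lighting/textures.
                float shade = 0.5f + 0.5f * std::sin(sx * 0.7f) * std::cos(sy * 0.9f);
                int px = int(sx), py = int(sy);
                if (px >= 0 && px < W && py >= 0 && py < H)
                    framebuffer[py][px] = shade;
            }
    }
    for (int y = 0; y < H; ++y) {                        // crude ASCII dump of the result
        for (int x = 0; x < W; ++x)
            std::putchar(" .:-=+*#"[int(framebuffer[y][x] * 7.99f)]);
        std::putchar('\n');
    }
}
```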


The problem is that if you start using raytracing, every bounced ray will require PRMan to repeat this same procedure. If an object is visible in a reflection it'll be loaded, bounded and split until it's small enough, tessellated, displaced, shaded and so on. This makes the process incredibly slow, especially when using multiple bounces. Think about global illumination with multiple bounces, or subsurface scattering.

The solution is to replace objects with a simplified version, which is the point cloud I've mentioned. They're basically sampling the object using a spatial grid and storing just these points, complete with color info, and this is what's loaded and used for raytracing calculations. It's a lot less data and of course it's an approximation, but it's more than enough for GI, SSS and such stuff.

You basically never load anything that's not visible to the camera, you use the point cloud as a simplified representation of the scene. Every point is treated as a disc facing the ray that you're tracing so that there won't be any holes and such. I think it's even good enough for glossy (blurred) reflections.
The problem is of course that you need to update the point cloud for every frame of animation if there are moving or deforming objects in the scene, which is pretty much guaranteed with action movies.
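
For illustration, here's a minimal sketch of the "point treated as a disc facing the ray" occlusion test. A real point-based GI implementation would use an octree with solid-angle clustering and would gather the baked colors rather than just a hit/no-hit answer, so treat this brute-force loop as a guess at the idea only; all names are mine.

```cpp
// Occlusion test of a ray against a point cloud, treating each point as a disc
// rotated to face the incoming ray so there are no gaps between neighbours.
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One point of the cloud: position, disc radius, and baked color/irradiance
// (the color is what GI and color-bleeding calculations would actually gather).
struct CloudPoint { Vec3 pos; float radius; float color[3]; };

// Returns true if the ray (origin, unit-length dir) hits any point-disc within maxDist.
bool occluded(Vec3 origin, Vec3 dir, const std::vector<CloudPoint>& cloud, float maxDist) {
    for (const CloudPoint& p : cloud) {
        Vec3 toP = sub(p.pos, origin);
        float t = dot(toP, dir);                    // distance along the ray to the disc plane
        if (t <= 0.0f || t >= maxDist) continue;    // behind the origin or too far away
        Vec3 offset = sub(toP, {dir.x * t, dir.y * t, dir.z * t});
        if (dot(offset, offset) < p.radius * p.radius) return true;   // inside the disc
    }
    return false;
}
```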

Now this is only a guess but I think the guys at Weta are using GPU computing to calculate these point clouds for all the objects in the scene.


As for LOTR and raytracing, it was a very ugly hack. Basically you put in a hundred spotlights and render shadow maps for them, which gives you a crude 3D representation of the scene in those 100 shadow (depth) maps. You can then raytrace using this data structure and it'll be faster - but less accurate - than using full blown raytracing in PRMan. Back in 2002-2003 raytracing wasn't optimized at all and it was even slower than it is today.
The downside was that this data had no color info so it could only be used for SSS and ambient occlusion.
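
A hedged guess at what such a depth-map "raytrace" might look like in code: march sample points along the ray and ask each map whether the point lies behind the surface that map recorded. Everything here (the orthographic DepthMap layout, the step count, the bias) is made up for illustration; it's not Weta's actual implementation.

```cpp
// Trace a ray against a set of depth maps acting as a crude scene representation.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// An orthographic depth map: depths stored on a res x res grid spanning
// [-extent, extent] along the light's right/up axes, measured along 'forward'.
struct DepthMap {
    Vec3 origin, right, up, forward;     // orthonormal light frame
    float extent;
    int res;
    std::vector<float> depth;            // res * res stored depths

    // Project a world-space point into the map; false if it falls outside.
    bool lookup(Vec3 p, float& dAlongLight, float& stored) const {
        Vec3 rel = sub(p, origin);
        float x = dot(rel, right), y = dot(rel, up);
        if (std::fabs(x) >= extent || std::fabs(y) >= extent) return false;
        int u = int((x / extent * 0.5f + 0.5f) * res);
        int v = int((y / extent * 0.5f + 0.5f) * res);
        if (u >= res) u = res - 1;
        if (v >= res) v = res - 1;
        dAlongLight = dot(rel, forward);
        stored = depth[v * res + u];
        return true;
    }
};

// March along the ray in fixed steps; a sample point counts as a hit if some
// depth map says it lies (just) behind geometry that map recorded.
bool traceIntoDepthMaps(Vec3 origin, Vec3 dir, float maxDist,
                        const std::vector<DepthMap>& maps, float bias = 0.01f) {
    const int steps = 64;
    for (int i = 1; i <= steps; ++i) {
        float t = maxDist * i / steps;
        Vec3 p = {origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        for (const DepthMap& m : maps) {
            float d, stored;
            if (m.lookup(p, d, stored) && d > stored + bias) return true;
        }
    }
    return false;
}
```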


It's worth noting that traditional raytracing renderers are getting a lot of R&D and practical use nowadays. The Arnold renderer is used on all Sony Imageworks productions (we use it too :) ) and it has a very different approach compared to PRMan - no need to precalculate point clouds and shadow maps and such, so it requires far less artist time, but render times are somewhat longer. It seems that eventually offline CG is going to move to traditional raytracing, although there are still some significant advantages with PRMan and REYES.

* PRMan is Pixar's RenderMan, in case that's not clear to someone
 

Impressive range and accuracy of information; I can only ask that you share with us the knowledge you use to develop your CGI engine. ;)

Forgive me, I am very old and not used to the more current methods of rendering (PRMan).

(I haven't written code since... forget about it...)
 
Serious question, do you really need 4GB of RAM in a console?
If I remember correctly, each character in Avatar has tens of GBs of texture data; to display such a character on current computers you need a good virtual texturing system.

So yes, we need as much as we can get, especially if any developers are trying to go for persistent worlds.
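
To illustrate what a virtual texturing lookup roughly does: only a small cache of texture pages is resident in RAM, a page table maps virtual pages to cache slots, and a lookup falls back to a coarser mip level when the requested page isn't loaded yet. This is a minimal sketch of the general idea; all names and sizes are made up.

```cpp
// Minimal virtual-texture page table: resolve a texel request to whatever
// page is currently resident, falling back up the mip chain on a miss.
#include <cstddef>
#include <cstdint>
#include <unordered_map>

constexpr uint32_t PAGE_SIZE = 128;                // texels per page side

struct PageKey {
    uint32_t mip, pageX, pageY;
    bool operator==(const PageKey& o) const {
        return mip == o.mip && pageX == o.pageX && pageY == o.pageY;
    }
};
struct PageKeyHash {
    std::size_t operator()(const PageKey& k) const {
        return std::size_t(k.mip * 73856093u ^ k.pageX * 19349663u ^ k.pageY * 83492791u);
    }
};

struct VirtualTexture {
    uint32_t widthTexels = 1u << 17;               // e.g. a 128K x 128K virtual texture
    std::unordered_map<PageKey, int, PageKeyHash> resident;   // page -> physical cache slot

    // Walk up the mip chain until a loaded page is found (so something can
    // always be drawn, just blurrier). Returns -1 if nothing usable is
    // resident; a real system would queue that miss for async loading from disk.
    int resolve(uint32_t x, uint32_t y, uint32_t mip) const {
        for (uint32_t m = mip; m < 32 && (widthTexels >> m) >= PAGE_SIZE; ++m) {
            PageKey key{m, (x >> m) / PAGE_SIZE, (y >> m) / PAGE_SIZE};
            auto it = resident.find(key);
            if (it != resident.end()) return it->second;
        }
        return -1;
    }
};
```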
 
Heh, Renderman's main pipeline is actually 15-20 years old ;) the actual implementation has been constantly upgraded though...
 
If I remember correctly, each character in Avatar has tens of GBs of texture data; to display such a character on current computers you need a good virtual texturing system.

So yes, we need as much as we can get, especially if any developers are trying to go for persistent worlds.

Agreed, headshots are for little girls, we need special hairshots to differentiate the pros from the newbies :LOL:.

Joking aside, in gfx terms, other than realism, what kinds of visual stuff would be possible?

I mean, I do really like what Bioshock and Zelda look like, and other examples like cel shading also give a great look IMO.

What could next gen bring other than realistic visuals?
 
Heh, Renderman's main pipeline is actually 15-20 years old ;) the actual implementation has been constantly upgraded though...

Believe me, I'm even older than that ;), but sorry, I wasn't clear: I'm talking about Pixar and the shader-based CG used today. I come from a world where only ray tracing was mainstream for CGI (I used very early 3D Studio Max, CAD... Intel 386, 486... waiting a million years for one picture).
 
Textures and geometry will definitely need more memory, and deferred rendering and various other techniques will also eat a lot of it.

The problem is that optical storage is very very slow and it'll take very long minutes to load all the data. Even an HDD install wouldn't help too much here. So this will be a pretty important issue that the hw architects of the nextgen consoles have to solve somehow...

Maybe the solution, to reduce streaming from the drive, is to use very highly compressed data, so load times are shorter and storage needs are reduced, and to have specialized transistors to decompress the data on the fly. Or procedural generation, or maybe both.
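
As an illustration of the "decompress on the fly while streaming" idea, here's a small sketch using zlib on the CPU (a console could do the same job with dedicated hardware). The path and buffer sizes are just placeholders.

```cpp
// Read a zlib-compressed asset in small chunks and hand decompressed blocks to
// a consumer as they become available, instead of loading everything up front.
#include <cstddef>
#include <cstdio>
#include <vector>
#include <zlib.h>

bool streamDecompress(const char* path,
                      void (*consume)(const unsigned char*, std::size_t)) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;

    z_stream zs{};                                  // zero-init: zalloc/zfree/opaque = null
    if (inflateInit(&zs) != Z_OK) { std::fclose(f); return false; }

    std::vector<unsigned char> in(64 * 1024), out(256 * 1024);
    int ret = Z_OK;
    while (ret != Z_STREAM_END) {
        zs.avail_in = (uInt)std::fread(in.data(), 1, in.size(), f);
        if (zs.avail_in == 0) break;                // truncated or empty input
        zs.next_in = in.data();
        do {                                        // drain everything this chunk produces
            zs.avail_out = (uInt)out.size();
            zs.next_out = out.data();
            ret = inflate(&zs, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END) {
                inflateEnd(&zs); std::fclose(f); return false;
            }
            consume(out.data(), out.size() - zs.avail_out);   // bytes ready right now
        } while (zs.avail_out == 0 && ret != Z_STREAM_END);
    }
    inflateEnd(&zs);
    std::fclose(f);
    return ret == Z_STREAM_END;
}
```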
 
As far as I know everything is already compressed as much as possible...

Procedural stuff will never look as good as hand made.
 
I'm confused how HDD installs wouldn't be much better than loading the data off the disc. Don't get me wrong, I don't care for installs on my game consoles, but I'm curious to know why you think this is the case. Aren't HDD speeds much faster than current disc speeds?
 
It is faster mostly due to vastly better seek times. Peak throughput-wise, I think they are maybe around 2-4x faster at most.
 
Current consoles are using very slow HDDs, just take a look at DigitalFoundry's tests to see how badly they perform...
 
What about flash memory (like Wii U will have)?
They should generally have pretty good latency vs HDDs, but I would imagine throughput-wise the one in Wii probably lags behind the HDDs in other consoles. Though that's just a theory based on what was available back when Wii launched and how cheaply they managed to sell it :)

I'd definitely not expect to see those 400MB/s+ SSDs in consoles any time soon. Definitely not in the upcoming generation. They could add a few GBs at 100-ish MB/s, but I would think it would cost about as much to add 1-2GB to system RAM, and that would usually help more.
 
Plenty of pop-in in that game actually, even on PC with max settings.

Yeah, tons. That was the one thing I was looking to see if I could change in the settings/ini files, as it's really glaring in some places. Sometimes you can shoot at something before it even appears, and then it pops up. It's the one really annoying thing about an otherwise great sandbox environment.
 