Carmack's next engine... how can it do offline Pixar quality?

I was doing some reading on the specs of RenderMan, and I listened to Carmack's speech, and I'm wondering... how could scenes that are indistinguishable from Pixar's stuff be done, even offline, with Carmack's renderer? I mean, he didn't mention implementing any global illumination techniques, radiosity, photon mapping, etc.?

I mean, his engine will be good and all, but how could it do Pixar-quality stuff without the techniques mentioned above?
 
It was Carmack's latest QuakeCon keynote address. And there's one thing that you missed: he said it would equal the "scanline" rendering quality of current offline renderers. That excludes any radiosity, caustics, etc.
 
squarewithin: QC 2004


StratoMaster: he was probably referring to Pixar-quality shadows -> high-resolution shadow maps with shadow buffers
 
Re: Carmack's next engine... how can it do offline Pixar quality?

XxStratoMasterXx said:
I mean, he didn't mention implementing any global illumination techniques, radiosity, photon mapping, etc.?

I mean, his engine will be good and all, but how could it do Pixar-quality stuff without the techniques mentioned above?

Look at the specs of RenderMan again. The official Pixar one, PRMan, isn't a raytracer or a GI renderer. It's REYES-based. It's basically scanline rendering where, instead of a z-buffer, you have a linked list of partly transparent pixels plus the closest opaque one, so you can handle the blending properly. That aside, Exluna's BMRT, now nVidia's Gelato, was a raytracing one.
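
For the curious, here's a minimal CPU-side sketch of the structure described above, assuming a per-pixel, depth-sorted list of partly transparent fragments that gets truncated at the nearest opaque one and composited front to back with the "over" operator. The Fragment/Pixel names are invented for illustration, not anything from PRMan:

```cpp
#include <vector>
#include <algorithm>

// Hypothetical fragment record: depth from the eye, coverage/alpha, colour.
struct Fragment {
    float depth;
    float alpha;     // 1.0f = fully opaque
    float rgb[3];
};

struct Pixel {
    std::vector<Fragment> frags;  // kept sorted front-to-back

    void insert(const Fragment& f) {
        // Anything behind an opaque fragment we already have is invisible.
        if (!frags.empty() && frags.back().alpha >= 1.0f &&
            f.depth >= frags.back().depth)
            return;
        auto it = std::lower_bound(frags.begin(), frags.end(), f,
            [](const Fragment& a, const Fragment& b) { return a.depth < b.depth; });
        frags.insert(it, f);
        // Truncate the list at the nearest opaque fragment.
        for (size_t i = 0; i < frags.size(); ++i)
            if (frags[i].alpha >= 1.0f) { frags.resize(i + 1); break; }
    }

    // Composite front-to-back with the "over" operator.
    void resolve(float out[3]) const {
        float transmit = 1.0f;  // light still passing through
        out[0] = out[1] = out[2] = 0.0f;
        for (const Fragment& f : frags) {
            for (int c = 0; c < 3; ++c)
                out[c] += transmit * f.alpha * f.rgb[c];
            transmit *= 1.0f - f.alpha;
        }
    }
};
```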

I don't think Pixar used photon mapping in any of their movies besides the last one. There was a presentation by one of their guys at EGSR last year on all the stuff they had to do to accelerate it enough to make it practical for their scene sizes.

Though I'm still tempted to call BS on Carmack's renderer doing that. Even without GI or anything, PRMan still has significantly higher-quality filtering and anti-aliasing than any hardware I've seen so far. Maybe he knows something that is 2-3 years off that I don't. But given the current to near-future state of things, I'm quite skeptical.
 
squarewithin:

You want the text ones I wrote up last August, or do you want the videos of JC talking for over an hour (over at Gamespy)? ;)
 
Alstrong said:
squarewithin:

You want the text ones I wrote up last August, or do you want the videos of JC talking for over an hour (over at Gamespy)? ;)

Both. I might as well listen to him blab while I work.
 
Alstrong said:
squarewithin: QC 2004


StratoMaster: he was probably referring to Pixar-quality shadows -> high-resolution shadow maps with shadow buffers

No, he said that with his next engine, if we don't render in real time, we can get Pixar-quality scenes if we throw the right data at it.

Now, if his engine can't do raytracing, radiosity, GI, photon mapping, etc., then why is he getting bored?
 
PRMan's raytracing implementation is still very slow, so much so that most of the raytracing techniques used for ambient occlusion and subsurface scattering in ILM and Weta movies are hacks that use dozens of shadow maps to do the work from shaders.
The actual lighting and shadowing is done using very conventional stuff: point lights, shadow maps, and clever compositing (for example, to create soft shadows).
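
To make the "dozens of shadow maps" trick concrete, here's a hedged sketch: treat occlusion as the fraction of many directional depth renders in which the point is hidden. DepthView and its project() lookup are hypothetical stand-ins for the real shadow-map machinery, not anything from the studios' actual shaders:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical: one depth render of the scene from one of many directions.
struct DepthView {
    int w, h;
    std::vector<float> depth;  // nearest-occluder depth per texel

    // Hypothetical helper: project p into this view, returning the texel
    // index and p's own depth along the view direction.
    void project(const Vec3& p, int& texel, float& d) const;
};

// Ambient occlusion approximated as the fraction of views in which some
// occluder lies in front of p (0 = fully open, 1 = fully blocked).
float ambientOcclusion(const Vec3& p, const std::vector<DepthView>& views,
                       float bias = 1e-3f) {
    int blocked = 0;
    for (const DepthView& v : views) {
        int texel; float d;
        v.project(p, texel, d);
        if (d > v.depth[texel] + bias)
            ++blocked;
    }
    return views.empty() ? 0.0f : float(blocked) / float(views.size());
}
```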

Dreamworks implemented a sort of GI for Shrek 2, where they use a reduced-complexity version of their sets to generate a bounced-lighting pass. They still don't raytrace the displaced micropolygons either.
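
In that spirit, a very rough sketch of what such a bounce pass could look like, assuming the reduced-complexity set is a list of proxy patches that already hold their directly lit colour from a first pass; a shading point then gathers one bounce with a Lambertian form-factor-style weight. Visibility between points is ignored for brevity, and every name here is invented, not Dreamworks' actual pipeline:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

// Hypothetical proxy patch from the reduced-complexity set.
struct Patch {
    Vec3 pos, normal;
    float area;
    float direct;  // directly lit radiosity from the first pass
};

// One-bounce gather at point p with surface normal n.
float gatherBounce(const Vec3& p, const Vec3& n,
                   const std::vector<Patch>& proxies) {
    const float kPi = 3.14159265f;
    float indirect = 0.0f;
    for (const Patch& q : proxies) {
        Vec3 d = sub(q.pos, p);
        float r2 = dot(d, d);
        if (r2 < 1e-6f) continue;          // skip the point itself
        float r = std::sqrt(r2);
        Vec3 w = { d.x / r, d.y / r, d.z / r };
        float cosP = dot(n, w);            // cosine at the receiver
        float cosQ = -dot(q.normal, w);    // cosine at the sender
        if (cosP <= 0.0f || cosQ <= 0.0f) continue;
        // Differential form-factor weight for a small patch.
        indirect += q.direct * q.area * cosP * cosQ / (kPi * r2);
    }
    return indirect;
}
```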


PRMan's texture filtering and antialiasing quality is still unmatched. It's probably because of the stochastic sampling stuff...
 
Going back through those videos, it makes sense. He (John) was only talking about the new renderer having the potential to produce images that were indistinguishable from "some" images produced via an offline renderer. He does not, however, say that the new renderer coupled with GPU/PC technology from the same time period would produce those images at interactive frame rates.

Later


You guys type too fast for me sometimes... :oops: On the "raytracing" issue, I don't think JC is too worried about implementing that in the engine, because it can be done perfectly fine (cinema-grade, even) through extremely clever hacks. Hacks that can certainly be done in a game engine in a few years (maybe not interactively at the time, but hardware will catch up).
 
Laa-Yosh said:
PRMan's texture filtering and antialiasing quality is still unmatched. It's probably because of the stochastic sampling stuff...

It's more than that. Mostly better sampling and reconstruction techniques. You can get just as crappy results with stochastic sampling if you use a uniform grid and a box filter. There are tomes of literature on low-discrepancy sampling distributions and complex reconstruction filters such as Lanczos, Mitchell, and EWA. Unless Carmack is planning on having not only programmable texture sampling but also (I think) some control over raster positions, I still see it lacking compared to PRMan.
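
For reference, the Mitchell filter he names is short to write down. Below is a hedged sketch of reconstructing one pixel from stochastic samples with the Mitchell-Netravali kernel (B = C = 1/3, 2-pixel support); the Sample layout with offsets in pixel units is made up for illustration:

```cpp
#include <cmath>

// Mitchell-Netravali reconstruction kernel; non-zero for |x| < 2.
float mitchell(float x, float B = 1.0f / 3.0f, float C = 1.0f / 3.0f) {
    x = std::fabs(x);
    if (x < 1.0f)
        return ((12 - 9*B - 6*C) * x*x*x +
                (-18 + 12*B + 6*C) * x*x +
                (6 - 2*B)) / 6.0f;
    if (x < 2.0f)
        return ((-B - 6*C) * x*x*x +
                (6*B + 30*C) * x*x +
                (-12*B - 48*C) * x +
                (8*B + 24*C)) / 6.0f;
    return 0.0f;
}

// Hypothetical stochastic sample: offset from the pixel centre plus a value.
struct Sample { float dx, dy, value; };

// Weight each sample by the separable kernel and normalise; contrast this
// with a box filter, which would weight every sample equally.
float reconstruct(const Sample* samples, int n) {
    float sum = 0.0f, wsum = 0.0f;
    for (int i = 0; i < n; ++i) {
        float w = mitchell(samples[i].dx) * mitchell(samples[i].dy);
        sum += w * samples[i].value;
        wsum += w;
    }
    return wsum > 0.0f ? sum / wsum : 0.0f;
}
```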
 
sunscar said:
Going back through those videos, it makes sense. He (John) was only talking about the new renderer having the potential to produce images that were indistinguishable from "some" images produced via an offline renderer. He does not, however, say that the new renderer coupled with GPU/PC technology from the same time period would produce those images at interactive frame rates.

Later


You guys type too fast for me sometimes... :oops: On the "raytracing" issue, I don't think JC is too worried about implementing that in the engine, because it can be done perfectly fine (cinema-grade, even) through extremely clever hacks. Hacks that can certainly be done in a game engine in a few years (maybe not interactively at the time, but hardware will catch up).

Yeah, but shouldn't Carmack put this type of stuff in, since the game based on this engine won't be out for a while? He did say he had to predict where the hardware is going for the next few years...
 
You know, if people want to go and have something render, and they don't mind that it's running a few frames a second, you can get, literally, film-quality shadowing effects out of this, by just changing out the number of samples that are going on in there. This does wind up being very close to the algorithm that Pixar has used in a great many of the RenderMan-based movies, and it's just running in the GPUs now in real time at lower sample levels.

So reading Alstrong's lovely transcripts (even if they are Word documents), Carmack appears to just be saying that they can oversample their shadow buffers to compensate for the lack of quality hardware. Nvidia already has a demo of that, using adaptive sampling for shadow buffers. Nowhere did I see him state that the engine can match PRMan on quality, just that the shadow-buffer sampling algorithm is the same and you can get similar-quality shadows if you up the sampling.
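
The Pixar depth-map shadow algorithm being referenced is percentage-closer filtering (Reeves, Salesin, and Cook, 1987): jitter a number of depth comparisons around the projected point and return the fraction that pass, so the sample count is exactly the quality knob Carmack describes. A minimal sketch, with depthAt() as a hypothetical shadow-map fetch:

```cpp
#include <cstdlib>

// Hypothetical shadow-map depth fetch at buffer coordinates (u, v).
float depthAt(float u, float v);

// Percentage-closer filtering: n jittered depth comparisons in a square
// footprint of the given radius. Returns 1 = fully lit, 0 = fully shadowed;
// raising n trades speed for smoother, higher-quality penumbrae.
float pcfShadow(float u, float v, float pointDepth,
                int n, float radius, float bias = 1e-3f) {
    int lit = 0;
    for (int i = 0; i < n; ++i) {
        // Jitter the lookup position around (u, v).
        float ju = u + radius * (2.0f * std::rand() / RAND_MAX - 1.0f);
        float jv = v + radius * (2.0f * std::rand() / RAND_MAX - 1.0f);
        if (pointDepth <= depthAt(ju, jv) + bias)
            ++lit;
    }
    return float(lit) / float(n);
}
```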
 
squarewithin said:
You know, if people want to go and have something render, and they don't mind that it's running a few frames a second, you can get, literally, film-quality shadowing effects out of this, by just changing out the number of samples that are going on in there. This does wind up being very close to the algorithm that Pixar has used in a great many of the RenderMan-based movies, and it's just running in the GPUs now in real time at lower sample levels.

So reading Alstrong's lovely transcripts (even if they are Word documents), Carmack appears to just be saying that they can oversample their shadow buffers to compensate for the lack of quality hardware. Nvidia already has a demo of that, using adaptive sampling for shadow buffers. Nowhere did I see him state that the engine can match PRMan on quality, just that the shadow-buffer sampling algorithm is the same and you can get similar-quality shadows if you up the sampling.

"With the next engine you're not going to have absolutely every capability you'll have for an offline
renderer, but you will be able to produce scenes that are effectively indistinguishable from a typical
offline scanline renderer if you throw the appropriate data at it, and avoid some of the things
that it's just not going to do as well."

It's not just the shadows, but entire scenes.
 
XxStratoMasterXx said:
"With the next engine you're not going to have absolutely every capability you'll have for an offline renderer, but you will be able to produce scenes that are effectively indistinguishable from a typical
offline scanline renderer if you throw the appropriate data at it, and avoid some of the things that it's just not going to do as well."

It's not just the shadows, but entire scenes.

For contrived scenes, but I highly doubt anywhere near the general case. Shading hardware is nowhere near the complexity of RenderMan shaders.
 
squarewithin said:
XxStratoMasterXx said:
"With the next engine you're not going to have absolutely every capability you'll have for an offline renderer, but you will be able to produce scenes that are effectively indistinguishable from a typical
offline scanline renderer if you throw the appropriate data at it, and avoid some of the things that it's just not going to do as well."

It's not just the shadows, but entire scenes.

For contrived scenes, but I highly doubt anywhere near the general case. Shading hardware is nowhere near the complexity of RenderMan shaders.

Today's isn't, but the next Id engine will be in use a few years from now, so obviously it'll have some newer HW stuff in it.
 
XxStratoMasterXx said:
Today's isn't, but the next Id engine will be in use a few years from now, so obviously it'll have some newer HW stuff in it.

I understand that, but having talked to Larry Gritz (nVidia's head of Digital Film) and several other people, they say that things can be accelerated using hardware, but they'll still be doing certain parts of the rendering process on the CPU for years to come because GPU hardware just isn't there yet.

Carmack may get 99% of the way there, but that last 1% is the real bitch and the movie folks demand it.
 
Well, considering Carmack is making a game engine, it's pretty damn impressive to get 99% of the way there :)

Most devs are still stuck in old Quake-level technology ;)
 