Real-time render when?

The big issue with the hair is physics. If it's entirely static, there's probably a bunch of shortcuts and other hacks that could significantly reduce the workload and geometry density without significantly affecting quality.
 
Ostsol said:
The big issue with the hair is physics. If it's entirely static, there's probably a bunch of shortcuts and other hacks that could significantly reduce the workload and geometry density without significantly affecting quality.
Well, judging from those pictures, I seriously doubt it could be anything but each strand modelled individually. That is to say, what you stated may be true for some hairstyles, but not all.

Edit:
I'd also like to comment that the person who made this picture really doesn't know proper human proportions. The mouth and the end of the nose are both too low on the face. The eyes are also set a tiny bit too wide.
 
I would also say that hair is the biggest problem.

I remember how the Dawn demo impressed me back then. This image is not *such* a big improvement over Dawn; it simply needs more horsepower.

Actually, I won't be surprised if the GeForce 6 or R420 could render this image in real time. However, speaking of games, this will happen much later. I mean, the Dawn demo ran very smoothly on my old GeForce FX 5600, but we still have no game that implements such quality - it's overkill.

Look at the self-shadowing paper from GPU Gems 2 (developers.nvidia.org). The technique is very nice and simple, but can you imagine using it in a real game? I can't.
 
Btw, I'm 99% sure the hair in those two images is photoshopped in at post; it definitely is in the ark game one, anyway. I would link you to the CGtalk thread on it, but I can't be bothered to dig up the link. I've yet to try rendering hair myself, but it seems like most artists have problems with the massive setup times and annoyance factor that simulated hair causes. It's pretty hard to get it to look good and bug-free.

edit: I just read the article, and it says that the dark-haired girl uses transparency-mapped planes. I still think the image has been tweaked a lot in post; also, the excessive blur around the edges is probably there for a reason ;).
 
Chalnoth said:
I rather doubt it, Sage. There's a hell of a lot of detail in that hair.

This thing can render much higher-resolution images than that in real time. Now, get this: my GeForce4 with 64MB of RAM... yeah, that thing can easily handle images at 8x the resolution.
 
Ostsol said:
The big issue with the hair is physics. If it's entirely static, there's probably a bunch of shortcuts and other hacks that could significantly reduce the workload and geometry density without significantly affecting quality.

I don't know about you people, but I'm not seeing anything dynamic in that image at all. They're just JPEGs; why would you have to do any physics or geometry??? The only things that should affect quality are the viewer's eyes, monitor, RAMDAC, decompression codec, and level of compression. :?
 
Sage said:
I don't know about you people, but I'm not seeing anything dynamic in that image at all. They're just JPEGs; why would you have to do any physics or geometry??? The only things that should affect quality are the viewer's eyes, monitor, RAMDAC, decompression codec, and level of compression. :?

Everyone seems to be talking about a real-time 3D render here, but you... I'm sure you know that, so I must be missing the humor :p
 
hughJ said:
Everyone seems to be talking about a real-time 3D render here, but you... I'm sure you know that, so I must be missing the humor :p

Yes, I know that... but the original post said "render this picture in real-time", and I'll be damned if my computer isn't doing that right now! Just picking on semantics :p
 
Post-processing: while you can actually go in and manually paint into the image, it's usually not that common to do anything that could not be automated and applied to a sequence of frames to create an animation. Color and brightness/contrast adjustments, glow and other effects, fake shadows of various kinds, element edge blurs, etc. are the more usual manipulations, but you can also render world-space normal passes and re-light a whole scene in a 2D compositing package if you want to.
Note that the CG industry does this for flexibility - if you can do something in comp, you don't have to re-render, which is always a lot more time-consuming. Thus many of these operations are not necessary, and many are already imitated in 3D games (like blooms and glows). Some tweaks, however, require the artist to animate parameters, track points in the image through the sequence, and so on - so these are not that easy to imitate.
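The comp chain described above can be sketched in a few lines. This is a minimal pure-Python illustration under assumed conventions (grayscale frames as nested lists of 0..1 floats; all function names are made up), not any real compositing package's API:

```python
# A minimal sketch of per-frame comp operations: brightness/contrast,
# then a crude box-blur "glow" added back on top. Hypothetical names;
# a real package would use proper kernels and color management.

def brightness_contrast(img, brightness=0.0, contrast=1.0):
    """Scale values around mid-grey, offset, clamp to [0, 1]."""
    return [[min(1.0, max(0.0, (p - 0.5) * contrast + 0.5 + brightness))
             for p in row] for row in img]

def box_blur(img):
    """3x3 box blur with edge clamping, standing in for a real glow kernel."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = n = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            row.append(acc / n)
        out.append(row)
    return out

def glow(img, threshold=0.8, strength=0.5):
    """Blur the bright areas and add them back: a poor man's bloom."""
    bright = [[p if p > threshold else 0.0 for p in row] for row in img]
    blurred = box_blur(bright)
    return [[min(1.0, p + strength * b)
             for p, b in zip(row, brow)]
            for row, brow in zip(img, blurred)]

def comp_frame(img):
    """The same chain runs unchanged over every frame of a sequence."""
    return glow(brightness_contrast(img, brightness=0.05, contrast=1.1))
```

Because nothing here depends on a specific frame, the whole chain is exactly the kind of operation that can be batch-applied to an animation, which is the point being made above.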


Regarding real-time rendering... I believe that the AA and texture filtering quality of offline CGI will not be reached; people will always use the processing power for more detail, whereas in offline CGI you can afford the extra rendering time to avoid ALL the artifacts. Not to mention the time to calculate dynamic simulations for cloth, hair and such... So offline will always be a few steps ahead, at least in quality.
 
Well, you can surely render this on NV40, but it won't be real time. So the only thing that keeps us from real time here is the performance of today's GPUs.

Another thing to consider is what exactly do you mean by "real time". Is it only "this head with flat shaded background made of two polygons" :) or is it "this kind of quality models in real games"? The second option depends not only on performance but on artists and budgets too.
 
Chalnoth said:
Edit:
I'd also like to comment that the person who made this picture really doesn't know proper human proportions. The mouth and the end of the nose are both too low on the face. The eyes are also set a tiny bit too wide.
Either you're joking or you really do think there's some set facial proportion that every single human being follows. There's nothing wrong with that picture.
 
If you are willing to settle for exactly that picture but you can rotate the camera and the light (no panning or zooming): We could do it today. 30 frames per second on a 6800.

You would basically be condensing everything from Nalu into just a head with a few changes: severely limit the animation so that you can use SH-PRT to do the lighting in a dozen instructions. Use textured polygonal ribbons instead of lines for the hair. Statically pre-sort the hair polys to be ordered from inside->out so that they blend correctly (like Ruby). Sure, it's cheating, but all that matters is how it looks in the end.
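As a rough illustration of that static inside->out pre-sort (the data layout and head-centre constant here are assumptions for the sketch, not taken from the Nalu or Ruby demos):

```python
# Sketch: order translucent hair ribbon polys by distance from the head
# centre, innermost first, so a single fixed draw order approximates
# back-to-front blending from most camera angles. Done once, offline.
import math

HEAD_CENTRE = (0.0, 1.6, 0.0)  # assumed scalp origin, illustrative only

def centroid(verts):
    """Average of a poly's (x, y, z) vertices."""
    n = len(verts)
    return tuple(sum(v[i] for v in verts) / n for i in range(3))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def presort_hair(polys, centre=HEAD_CENTRE):
    """Sort once at export time; at runtime just draw in this order
    with alpha blending enabled and depth writes off."""
    return sorted(polys, key=lambda p: dist(centroid(p), centre))
```

At runtime the sorted list is submitted unchanged, so there is no per-frame sorting cost; the ordering is only approximate, but for a roughly shell-like hairstyle it holds up well enough, which is why it counts as cheating that looks right.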

Of course, if what you want is a character that looks that good in a full game like Half-Life 2, you are going to have to wait a few years.
 