real time render when?

GRAKS

Newcomer
When can we get a graphics card that can render those pictures in real time? Like the ATI Ruby demo, only better.

http://media01.cgchannel.com/images/news/3567/wright_01a.jpg

wright_01b.jpg


http://www.cgchannel.com/news/viewfeature.jsp?newsid=3567&pageid=2
 
You have to be careful with what you assume now. Are you saying render exactly that, or are you saying you want that level of detail in a feature-rich environment like current games? (i.e., I want this level of detail in HL2)

I don't think rendering that particular image will take very long, but it will occupy all resources and you won't be rendering anything else. What I am saying is that I would not be surprised if something like that could be rendered with current hardware, but she won't be running around beating up ninjas then, that's for sure.

It also depends on how you define real-time. Are you saying 30 fps, and we can move this head about freely (in a blank environment background, as suggested in the pictures), and quality need not be preserved at closer zoom?

Re-reading your post, I will interpret "like ATI demo only better" as meaning the whole enchilada. That still leaves out the complexity of the environment and the need to increase physics to accompany the higher model fidelity. Gonna need huge memory and processing. The pre-programmed path may help a bit, though, so I will go with 4 years (and that is with severe "cheating" and dumb models).

The lifelike qualities of the still pictures cannot suggest what it will or should look like in motion. Just consider the eyes and that close-up. That particular look would be interesting for about 60 frames, and then something needs to happen. The question is what, and how accurate are those reflections in the eyes... will she blink? Will blinking change the dynamics (moisture) of her eyes... and so on.

The still looks good and suggests reality, but reality in motion is completely different and much more complex.
 
Considering that there's usually some level of post-processing that's done in Photoshop, you could argue that right now you can't even offline render (in 3D, anyway) a lot of the better CG art.
 
Which end of "real-time" are you talking about? 1fps? 10fps? 100fps? They're all real-time, just different types of real.
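Just to put numbers on that: "real-time" only pins down a per-frame budget of 1000 ms divided by the frame rate. A trivial back-of-envelope sketch (Python) of what each of those rates actually buys you:

Code:
# Milliseconds of render budget per frame at various "real-time" rates.
for fps in (1, 10, 30, 60, 100):
    print(f"{fps:3d} fps -> {1000.0 / fps:6.1f} ms per frame")

Rendering that head at 1 fps and rendering it at 100 fps are two very different hardware questions.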
 
hughJ said:
Considering that there's usually some level of post-processing that's done in Photoshop, you could argue that right now you can't even offline render (in 3D, anyway) a lot of the better CG art.
Why can't those same post-processing effects in Photoshop be done as post-processing effects when you render?
 
OpenGL guy said:
hughJ said:
Considering that there's usually some level of post-processing that's done in Photoshop, you could argue that right now you can't even offline render (in 3D, anyway) a lot of the better CG art.
Why can't those same post-processing effects in Photoshop be done as post-processing effects when you render?

I was always under the impression it was the same type of touch-up work that is done for glamour shots, just in this case using human perception to find and fix the parts of the image that look fake. So it's not something you could have a computer do easily.
 
OpenGL guy said:
hughJ said:
Considering that there's usually some level of post-processing that's done in Photoshop, you could argue that right now you can't even offline render (in 3D, anyway) a lot of the better CG art.
Why can't those same post-processing effects in Photoshop be done as post-processing effects when you render?

It definitely could be, assuming we're not talking about a touch-up job that's done by eye/hand, right?

Main difference, I guess, would be that touch-ups and post-processing in Photoshop take something that's in 3D space and cherry-pick a specific viewpoint, tweaking it to the point where everything syncs up to look photorealistic. Different angles, different lighting, etc. would likely require different sorts of tweaking, and I'm not sure that could lend itself to a completely dynamic online render?

Just thinking out loud.
 
wireframe said:
You have to be careful with what you assume now. Are you saying render exactly that, or are you saying you want that level of detail in a feature-rich environment like current games? (i.e., I want this level of detail in HL2)

I don't think rendering that particular image will take very long, but it will occupy all resources and you won't be rendering anything else. What I am saying is that I would not be surprised if something like that could be rendered with current hardware, but she won't be running around beating up ninjas then, that's for sure.

It also depends on how you define real-time. Are you saying 30 fps, and we can move this head about freely (in a blank environment background, as suggested in the pictures), and quality need not be preserved at closer zoom?

Re-reading your post, I will interpret "like ATI demo only better" as meaning the whole enchilada. That still leaves out the complexity of the environment and the need to increase physics to accompany the higher model fidelity. Gonna need huge memory and processing. The pre-programmed path may help a bit, though, so I will go with 4 years (and that is with severe "cheating" and dumb models).

The lifelike qualities of the still pictures cannot suggest what it will or should look like in motion. Just consider the eyes and that close-up. That particular look would be interesting for about 60 frames, and then something needs to happen. The question is what, and how accurate are those reflections in the eyes... will she blink? Will blinking change the dynamics (moisture) of her eyes... and so on.

The still looks good and suggests reality, but reality in motion is completely different and much more complex.
Good points, wireframe.

Going with the same criteria where you said "4 years", I'm gonna go with 5-7 years.
 
hughJ said:
It definitely could be, assuming we're not talking about a touch-up job that's done by eye/hand, right?

Main difference, I guess, would be that touch-ups and post-processing in Photoshop take something that's in 3D space and cherry-pick a specific viewpoint, tweaking it to the point where everything syncs up to look photorealistic. Different angles, different lighting, etc. would likely require different sorts of tweaking, and I'm not sure that could lend itself to a completely dynamic online render?

Just thinking out loud.
I think that's the point. The post-processing in Photoshop is done by hand, and isn't just some generalized macro. It's basically a way of being a bit lazy about the 3D rendering: render it till it looks good, then Photoshop it until it looks perfect.
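For contrast, the automatable kind of post-processing is exactly a generalized macro: a fixed function applied to every frame, no human eye in the loop. A minimal sketch (Python/NumPy; the function name and the constants are made up, just to illustrate the idea) of a color-grade-plus-vignette pass a renderer could run on its own output:

Code:
import numpy as np

def post_process(frame):
    """Apply a fixed, automatable post pass to a rendered frame.

    frame: float32 RGB array in [0, 1], shape (height, width, 3).
    The constants here are arbitrary, purely illustrative.
    """
    h, w, _ = frame.shape

    # Simple gamma-style color grade: lift the mid-tones slightly.
    graded = np.power(frame, 0.9)

    # Radial vignette: darken pixels toward the corners.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt(((ys - cy) / cy) ** 2 + ((xs - cx) / cx) ** 2)
    vignette = np.clip(1.0 - 0.3 * dist, 0.0, 1.0)

    return np.clip(graded * vignette[..., None], 0.0, 1.0)

A hand touch-up has no formula like that: it's a person staring at one specific frame and painting over whatever looks fake.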
 
phenix: Such a comparison is a bit hard, since 6 years ago we didn't have a lot of the technology we have today (pixel/vertex shaders, for example).

To answer the topic, I'd be the optimist and say 1 year in theory, 3 in practice.
 
wtf is wrong with you people!? The graphics card on this machine (Trident 96xx) can render that in real time!

Of course, if we're talking about rendering a 3D scene that comes out looking like that, then it's an entirely different story...
 