When will we be able to render reality in real time?

We'll be able to render a scene that looks "lifelike" within a decade or so, but interactivity and everything else that goes into making an environment "real" will take much, much longer.

If you're just talking about a non-playable flythrough, it's really only a small step away.
 
We don't use quantum electrodynamics to design a skyscraper, and there is no reason why we need to go down to the lowest levels of physics to achieve something macroscopically convincing.

...but skyscrapers don't animate and aren't organic, so they only need to appear real. A CGI character will only animate realistically if the physiology and physics of the human body are modeled accurately. Hand animation can only go so far. Of course there's motion capture, but that's all scripted, and with no physics involved, collisions with objects become just another scripted event, e.g. a CGI character getting hit by a moving car.
 
PC-Engine said:
...but skyscrapers don't animate and aren't organic, so they only need to appear real. A CGI character will only animate realistically if the physiology and physics of the human body are modeled accurately. Hand animation can only go so far. Of course there's motion capture, but that's all scripted, and with no physics involved, collisions with objects become just another scripted event, e.g. a CGI character getting hit by a moving car.

What he's saying is that to create an image that a human consciousness would perceive as "real" (i.e. reality as judged against the everyday perceptions, occurrences and happenings it can be compared to), you don't need to simulate the world down to a fundamental level (e.g. QED, M-theory, etc.) to get results that are macroscopically correct and based on simpler classical mechanics. In essence, you can 'cull' a tremendous amount of the information and calculation present in the "real world" while still yielding results that are accurate to what we experience as a person.

Human anatomy and physiology, as you're speaking of it, is perceived as purely classical, although there are much lower-level quantum effects at work that aren't seen in the macroscopic world (and thus don't need to be accounted for, as he said). Although, we never finished that debate. ;)
 
Vince said:
PC-Engine said:
...but skyscrapers don't animate and aren't organic, so they only need to appear real. A CGI character will only animate realistically if the physiology and physics of the human body are modeled accurately. Hand animation can only go so far. Of course there's motion capture, but that's all scripted, and with no physics involved, collisions with objects become just another scripted event, e.g. a CGI character getting hit by a moving car.

What he's saying is that to present an image that a human consciousness would perceive as "real" (i.e. reality as judged against the everyday perceptions, occurrences and happenings it can be compared to), you don't need to simulate the world down to a fundamental level (e.g. QED, M-theory, etc.) to get results that are macroscopically correct and based on simpler classical mechanics. In essence, you can 'cull' a tremendous amount of the information and calculation present at very low levels in the "real world" while still yielding results that are accurate to what we experience as a person.

I understood what he was saying. However, like I said, static structures will appear real regardless of the complexity of the physics involved, whereas a living human being will not appear real enough without detailed physics: bone, muscle, hair, gravity, behavior, etc.

What he's saying is that something like Massive is real enough for most people, which I agree with. What I'm talking about is dynamic character animation, which someone brought up as being too artificial at present because it's either motion captured (scripted) or the physiological aspects aren't modeled accurately enough for dynamic situations. Even cloth animation looks artificial with today's technology. Imagine an interactive game where the characters wear different types of clothes, each modeled correctly according to the fabric it's made of; and that's just the physics for the clothes, let alone the physiological and psychological aspects of the CGI characters. Non-dynamic cutscenes are a different story, of course.
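To give a feel for why "modeled correctly according to the fabric" is expensive, here is a minimal mass-spring cloth sketch in Python. All the constants (grid size, stiffness, damping) are illustrative assumptions, not values from any real engine; production cloth solvers also need shear and bend springs, self-collision, and far more robust integration.

```python
# Minimal mass-spring cloth: an N x N grid of unit-mass particles joined
# by structural springs, stepped with semi-implicit Euler. "Fabric type"
# is reduced to a (stiffness, damping) pair here -- purely illustrative.

N = 8            # cloth is an N x N grid of particles
REST = 0.1       # rest length of each spring (metres, assumed)
STIFF = 500.0    # spring stiffness: a stand-in "fabric" parameter
DAMP = 0.98      # per-step velocity damping: another "fabric" parameter
GRAVITY = -9.81
DT = 0.005

def make_cloth():
    pos = {(i, j): [i * REST, -j * REST] for i in range(N) for j in range(N)}
    vel = {k: [0.0, 0.0] for k in pos}
    springs = []
    for i in range(N):
        for j in range(N):
            if i + 1 < N: springs.append(((i, j), (i + 1, j)))
            if j + 1 < N: springs.append(((i, j), (i, j + 1)))
    return pos, vel, springs

def step(pos, vel, springs):
    force = {k: [0.0, GRAVITY] for k in pos}
    for a, b in springs:
        dx = pos[b][0] - pos[a][0]
        dy = pos[b][1] - pos[a][1]
        length = (dx * dx + dy * dy) ** 0.5
        f = STIFF * (length - REST)          # Hooke's law along the spring
        fx, fy = f * dx / length, f * dy / length
        force[a][0] += fx; force[a][1] += fy
        force[b][0] -= fx; force[b][1] -= fy
    for k in pos:
        if k[1] == 0:                        # pin the top row of the cloth
            continue
        vel[k][0] = (vel[k][0] + force[k][0] * DT) * DAMP
        vel[k][1] = (vel[k][1] + force[k][1] * DT) * DAMP
        pos[k][0] += vel[k][0] * DT
        pos[k][1] += vel[k][1] * DT

pos, vel, springs = make_cloth()
for _ in range(200):                         # simulate one second
    step(pos, vel, springs)
```

Even this toy version needs hundreds of sub-steps per second to stay stable, which is the heart of the complaint above: doing it properly, per garment, per character, in real time, is a huge cost.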
 
Has anyone forgotten that we are part of a huge supercomputer simulating life? Of course, mice are controlling all this, and the primary target is resolving the most important question of the universe... at least according to Douglas Adams. ;) Too bad the aliens will blow up the whole thing 10 minutes before the result is ready. :LOL:

So, as another cartoon quote goes: "Never say never..." -Fievel, An American Tail :)

But I don't think it will happen any time soon (by which I mean most likely more than a few hundred years). Plus, there's the detail paradox, which means there's always room for improvement in detail: you can always take the rendering to a new level by modeling smaller and smaller parts.
 
As somebody said (Coleridge?), it's the "suspension of disbelief" that counts more than any other element. The most "realistic" movie is, of course, entirely contrived. It's our conscious willingness to be temporarily deceived, to believe the story presented is real while watching it, that enables us to enjoy even the best, most "realistic" movies. Without that interesting psychological ability inherent in most of the species, movies, games, etc., would never have been possible.

It's the same with "computer reality," regardless of the "realism" factor present in the graphics. Often it's the interactivity of computer simulations that enhances the suspension of disbelief in a way not possible for even the best movies and books. I think this demonstrates that there are ways to convincingly portray "reality" aside from a merely cut-and-dried approach to "photorealistic" 3D graphics. I guess the point is that even if you had photorealistic 3D at present, such a game would still require many other elements in order for us to temporarily forget it is contrived and become submerged in the story.
 
Without a context for what you're looking at, asking when reality will be achieved/beaten is difficult.

If your reality consists of the Cornell box, then reality can be simulated now, or at least very soon. If your reality consists of the space near the event horizon of a black hole, then we may be looking at hundreds of years before real-time computation can be achieved.

Rendering is an approximation to electromagnetic fields and their interaction with a variety of materials. Some materials have very simple approximations (some surfaces are reasonably approximated by BSSRDFs); others are so complex we can't currently simulate them with any amount of computing power.

While I used the example of black holes, there are lots of 'mundane' things that are so expensive to calculate in any way close to reality that we choose to cheat, and in many ways the cheat fails to look like the real thing. Think of turbulence (smoke, clouds, etc.), photoelectric effects (UV, a.k.a. black lights) or diffraction effects (CDs) and how badly we render them even with off-line renderers.
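To make the "rendering is an approximation" point concrete, here is the simplest such approximation: Lambert's cosine law for a diffuse surface, sketched in Python. The albedo and intensity values are arbitrary illustration, and this one-line reflectance model is exactly the kind of cheap stand-in for the full electromagnetic interaction that the post describes.

```python
import math

# Lambertian diffuse shading: reflected radiance is proportional to the
# cosine of the angle between the surface normal and the light direction.
# A one-term stand-in for the full light/matter interaction.

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambert(normal, light_dir, albedo=0.8, intensity=1.0):
    n = normalize(normal)
    l = normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * intensity * cos_theta

# Light overhead, at 45 degrees, and behind the surface:
print(lambert((0, 0, 1), (0, 0, 1)))   # head-on: full albedo, 0.8
print(lambert((0, 0, 1), (1, 0, 1)))   # 45 degrees: 0.8 * cos(45)
print(lambert((0, 0, 1), (0, 0, -1)))  # light behind surface: 0.0
```

Everything the post lists (turbulence, fluorescence under UV, diffraction off a CD) is precisely what falls outside a simple surface model like this one.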
 
What good photorealistic graphics need is REAL-TIME PHOTON MAPPING AND CAUSTICS (not like the tenes quake caustics) as well as GI (global illumination) or something similar.

This still won't be realistic, but it will be close enough to trick people in most cases. Some things won't be done for a long time, e.g. photon mapping over different frequencies, which is needed to reproduce the good old triangular prism splitting white light into colours.
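The prism example comes down to dispersion: glass's refractive index varies with wavelength, so each colour obeys Snell's law at a slightly different angle, which an RGB-only renderer cannot capture. A small Python sketch, using Cauchy's empirical equation with rough BK7-like coefficients (assumed values, good enough for illustration):

```python
import math

# Cauchy's equation n(lambda) = A + B / lambda^2 gives a per-wavelength
# refractive index; Snell's law then bends each colour differently.
# A and B are approximate BK7-glass values (illustrative assumption).

A, B = 1.5046, 4200.0          # B in nm^2, wavelength in nm

def refractive_index(wavelength_nm):
    return A + B / wavelength_nm ** 2

def refraction_angle(incidence_deg, wavelength_nm):
    """Angle inside the glass, from Snell's law: sin(t) = sin(i) / n."""
    n = refractive_index(wavelength_nm)
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

for name, lam in [("red", 700), ("green", 550), ("violet", 400)]:
    print(name, round(refraction_angle(45.0, lam), 3))
```

Violet (higher index) bends more than red, which is the whole rainbow effect; spectral photon mapping traces photons per wavelength band so this comes out automatically.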
 
DeanoC said:
Without a context for what you're looking at, asking when reality will be achieved/beaten is difficult.
Let's modify the question and constrain the environment to an average living room with the doors locked.
The room contains a TV, but no broadband; as follows from a popular theorem, simulating the entire internet would require too many simulated monkeys at typewriters.
So how long before you can't distinguish an operator-held video camera feed from real-time-rendered CG?

I'd say about five years. That is, without simulating humans. Simulating humans depends a lot on the intelligence of the individual being simulated :p

EDIT: which reminds me, I was once asked to "write a program that does what this guy in his cubicle does every day". Although a relatively simple MS Office macro would have done it, I wasn't interested, and the guy still works there :p
 
no_way said:
DeanoC said:
Without a context for what you're looking at, asking when reality will be achieved/beaten is difficult.
Let's modify the question and constrain the environment to an average living room with the doors locked.
The room contains a TV, but no broadband; as follows from a popular theorem, simulating the entire internet would require too many simulated monkeys at typewriters.
So how long before you can't distinguish an operator-held video camera feed from real-time-rendered CG?

I'd say about five years. That is, without simulating humans. Simulating humans depends a lot on the intelligence of the individual being simulated :p

If you limit the environment to inanimate objects (buildings, vehicles, terrain), I would say flight simulators with collimated displays and calligraphic projectors can render some fairly realistic-looking night scenes. The scenes don't look as real in daytime.

More and more military flight simulators are generating environments using satellite imagery (DFAD level 2) and photographic images that look pretty good, especially at night with the aforementioned display systems. The daytime images also look much better than Toy Story, IMO.
 
You don't have to simulate reality, only our visual perception of it. You don't need infinite refresh rate and resolution either, just high enough to match the capabilities of the human eye. Considering how low a resolution movies are played back at, I would think 4-8x that would do the trick on the desktop, maybe even overkill. Refresh rate issues are more about constant snapshots of action and blur: you just want it fine-grained enough to even out the differences in processing time required per frame, and the view changes should be as fine-grained as possible. Imagine games that can rotate the view in 1/100th of a degree and still appear to be moving at 20 km/h.
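A quick back-of-envelope check on "high enough to match the human eye": normal visual acuity resolves roughly one arcminute (a commonly cited figure, taken here as an assumption). From that, the pixels needed across a given field of view fall out directly:

```python
import math

# If one pixel should subtend no more than ~1 arcminute (1/60 degree),
# the pixel count across a field of view is just fov / acuity.
# The 1-arcminute acuity figure is an assumption, not a hard spec.

ACUITY_DEG = 1.0 / 60.0   # ~1 arcminute per resolvable pixel

def pixels_needed(fov_degrees):
    return math.ceil(fov_degrees / ACUITY_DEG)

# A monitor filling a 40-degree horizontal field of view:
print(pixels_needed(40))   # 2400 pixels across
# The full ~180-degree horizontal human field of view:
print(pixels_needed(180))  # 10800 pixels across
```

So for a desktop monitor a few thousand pixels across is in the right ballpark, consistent with the "4-8x movie resolution" guess above, while matching full peripheral vision demands an order of magnitude more.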

I think in 3-5 years' time games will look a lot better than they do now; they will probably all be using PS 3.0 or better by then. It all depends on how cheaply the hardware can be made.
 
What if a newborn baby were fitted with a head-mounted display and fed an artificial view of a world at 640x480 at 30 fps using TNT/Voodoo2-class hardware, and it never saw anything except through this device?

How real would it look to the baby!?!!! :)
 
Interesting topic. Since we're limiting our perception of "reality" to our visual sense, I see real-time imaging approaching "reality" within a few years. It will have to be reality as seen through a window, as the image will be limited to the edges of the display rather than covering our peripheral vision. It will also be highly dependent on how the content was created in the first place.

I think the real problem here will not be whether technology can fool our eyes, but rather, will it be practical for content makers (game developers, etc.) to create content up to the level of technology in a cost-effective and timely manner...enough to make money at it.

When will Lord of the Rings or Matrix-style productions be required to make a video game "real-looking"? I don't know, but I'm seeing a time coming soon where content production methods will have to change radically to keep costs down and believability high.

The "reality" of the image may not always remain limited by the rendering tech.
 
bertroid said:
Interesting topic. Since we're limiting our perception of "reality" to our visual sense, I see real-time imaging approaching "reality" within a few years. It will have to be reality as seen through a window, as the image will be limited to the edges of the display rather than covering our peripheral vision. It will also be highly dependent on how the content was created in the first place.

I think the real problem here will not be whether technology can fool our eyes, but rather, will it be practical for content makers (game developers, etc.) to create content up to the level of technology in a cost-effective and timely manner...enough to make money at it.

When will Lord of the Rings or Matrix-style productions be required to make a video game "real-looking"? I don't know, but I'm seeing a time coming soon where content production methods will have to change radically to keep costs down and believability high.

The "reality" of the image may not always remain limited by the rendering tech.

Awesome post!!! Welcome to the board :)
 
PC-Engine said:
bertroid said:
Interesting topic. Since we're limiting our perception of "reality" to our visual sense, I see real-time imaging approaching "reality" within a few years. It will have to be reality as seen through a window, as the image will be limited to the edges of the display rather than covering our peripheral vision. It will also be highly dependent on how the content was created in the first place.

I think the real problem here will not be whether technology can fool our eyes, but rather, will it be practical for content makers (game developers, etc.) to create content up to the level of technology in a cost-effective and timely manner...enough to make money at it.

When will Lord of the Rings or Matrix-style productions be required to make a video game "real-looking"? I don't know, but I'm seeing a time coming soon where content production methods will have to change radically to keep costs down and believability high.

The "reality" of the image may not always remain limited by the rendering tech.

Awesome post!!! Welcome to the board :)

Thx... I've been a silent reader here for a couple of years now. For some reason I felt like registering today.
 
I'd like to indulge a bit here:

I can see "reality" being easier to produce if it's based on the real world. What I mean is that we'll likely see vastly improved methods of capturing what we see every day. Video is a good example; film is another. Both have their inherent distortions, but they suffice to fool us most of the time. There are other methods of capturing the real world that exist today, and new ones will no doubt be invented.

Using these and other methods, we will have to be able to capture "reality" in a form consistent with rendering tech and convert it into a 3D environment we can interact with and move about in. This I can see as being reasonably doable over the next several years.

But what about sci-fi? How would we use the above techniques to create 3D environments that don't exist but must be presented as "reality"? Do we take real-world captures and mess with them in post-production? Or is there another way to synthesize something that doesn't, or couldn't, exist?

Today, games are almost entirely synthesized, which works because the level of believability does not have to be that high considering the state of our rendering tech. Some photo-based textures have been used to help the illusion, but for the most part the world is make-believe in every way.

We (the 3D junkies) accept many distortions today, but will we still accept them as rendering tech makes real-world 3D environments doable? Will we get spoiled by games that use real-world captures (via future methods I can't conceive of yet) in their production (e.g., tactical sims)?

In contrast, will we be satisfied with sci-fi 3D environments, which may look amateurish by comparison, but where we can run at 60 mph, jump 80 feet in a low-grav world orbiting a red giant, inhabited by impossible creatures with 2000 teeth who can fire rocket-propelled photonic grenades out of their butts? Will those games be less fun because they don't look real?

My point is that it may be more difficult to make "believable" sci-fi/fantasy content using present and future techniques that work fine for real-world content. Today it's really not a problem, as the rendering tech is the limiting factor. What will happen when it's not?
 