when will we be able to render reality in real-time

Josiah

OK, maybe not reality, but a reasonable facsimile thereof. Twenty years ago real-time 3D graphics was very, very primitive; I think by 2023 real-time graphics will have surpassed the offline rendering of today. I'm betting rendering reality will be possible within our lifetimes.
 
To render reality you will need a device no smaller than the reality you are going to render.

The simple answer is no; there are many things that just won't be able to be done, e.g. fluorescence, phosphorescence, and other such effects.

Also, I doubt we will really want to do the kind of rendering we do offline, because we don't need those sorts of resolutions (i.e. for film).

Much of the offline rendering done these days still isn't using global illumination or other similar techniques, but I would hope that by 2023 we will have good GI and photon mapping on consumer-level devices.
 
I expect really "real-looking" demos in two or three years. Of course not everything can be rendered "real", but I believe we will see some photo-realistic landscapes and so on in a couple of years.
 
Reality - never. You just can't simulate gazillions of electromagnetic waves/particles, all doing funky interactions, and all simultaneously... What's more, no one _really_ knows what's happening "in reality" - there are just a bunch of theories :)
Now, _good-looking_ graphics certainly is/will be possible, but that of course depends on what your "good enough" criteria are...
 
I honestly do not think it possible for real-time rendering to surpass cinematic CG rendering in terms of quality. After all, cinematic CG rendering looks the way it does as a result of a balance between render time and quality. As processors (CPUs and GPUs) become more powerful the quality of cinematic CG will also be able to increase without a similar increase in render time. It will use the capabilities of real-time rendering and go a step further, a step that would be impractical for real-time rendering. As such, real-time rendering can only seek to catch up to cinematic CG -- and that will only happen when cinematic CG cannot get any better.

EDIT: Oops! You meant the real-time rendering of the future will surpass the offline rendering of the present. :oops: Nevermind, then. . . :D
 
bloodbob said:
To render reality you will need a device no smaller than the reality you are going to render.
I agree with that, but I think what the OP wanted to ask is "when will we exceed what we can perceive?"

Unfortunately, no one really knows exactly what it is that we can perceive. <shrug>
 
I can foresee a time fairly soon when the problem is with the display device...
Monitors haven't advanced much in the past 10-15 years...
 
You guys suck.

You get hung up on discussing semantics (the definition of "real"), instead of focusing on the actual issue of the question, which was when we can expect graphics that LOOK real (within the limits of a computer display device, obviously).

Now try again guys. First attempts you messed up bad.


*G*
 
Ostsol said:
I honestly do not think it possible for real-time rendering to surpass cinematic CG rendering in terms of quality. After all, cinematic CG rendering looks the way it does as a result of a balance between render time and quality. As processors (CPUs and GPUs) become more powerful the quality of cinematic CG will also be able to increase without a similar increase in render time. It will use the capabilities of real-time rendering and go a step further, a step that would be impractical for real-time rendering. As such, real-time rendering can only seek to catch up to cinematic CG -- and that will only happen when cinematic CG cannot get any better.

EDIT: Oops! You meant the real-time rendering of the future will surpass the offline rendering of the present. :oops: Nevermind, then. . . :D


In the early 90's I was using Amiga/Toaster render farms to pump out some really cool stuff...had a lot of fun. It was frustrating because a single frame could take hours to render, depending on resolution and effects and how much "pure" ray tracing you invoked. Basically, the consensus at that time was that what current 3D cards are doing was...impossible. Simply impossible...Heh...;) So, I don't make predictions like that anymore...Heck, back in the 80's I can recall thinking that 640x480x24-bit integer 2D animation at ~30 fps was "highly unlikely"....;)

As such, I now think that as the 3d-chip industry matures it will catch CG fairly soon--but not completely, for the reasons you point out. But...time is money, as they say...and as such I see CG steadily incorporating more elements of 3d-"realtime" as we move ahead. This is congruent with the pattern of the last few years, which demonstrates clearly that the divide between CG and 3d is much narrower today than it was a decade ago. I anticipate the gap will continue to narrow, but I also agree that both technologies are likely to remain separate (both in terms of function and cost) for the foreseeable future.

But heck...as far as realism goes--when we start seeing 3d game engines which really take advantage of the dynamic color ranges possible with fp precision, that in itself will seem pretty dramatic. IMO, of course...
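
To make that "fp dynamic range" point concrete, here is a minimal illustrative sketch (mine, not the poster's; the function name and numbers are made up): a floating-point HDR buffer, where values can far exceed 1.0, compressed to a displayable 0-1 range with the simple Reinhard tone-mapping operator.

Code:
import numpy as np

def reinhard_tonemap(hdr, exposure=1.0):
    # hdr: (H, W, 3) array of linear radiance values; may greatly exceed 1.0.
    scaled = hdr * exposure
    ldr = scaled / (1.0 + scaled)                 # compress the unbounded range into [0, 1)
    return np.clip(ldr ** (1.0 / 2.2), 0.0, 1.0)  # rough gamma for an 8-bit display

# A very bright pixel and a very dim one both keep visible detail:
hdr = np.array([[[50.0, 45.0, 40.0], [0.05, 0.04, 0.03]]])
print(reinhard_tonemap(hdr))

With an 8-bit integer framebuffer those two pixels would simply clamp or crush; a floating-point pipeline is what lets the tone-mapping step decide how to spend the display's limited range.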
 
I can foresee a time fairly soon when the problem is with the display device...
Monitors haven't advanced much in the past 10-15 years...

Not true at all. A monitor capable of displaying upwards of 1920x1080 (is that the res?) can definitely produce an image that looks "real" enough. In fact, one of the stores around where I live set up an HDTV on a wall...put drapes around it, then took a high-definition camera outside, hooked it up to the TV, and showed how a high-definition TV produces an image that looks as nice as looking out of a window.

Man you should watch the Discovery Channel in high definition :oops: :oops: :oops: Holy shit that channel is like better than real :)

As for when we'll be able to render something close to the quality of reality in real time? It depends on the developers. Based solely on hardware, I could say we're 10-15 years from it. But given how unbearably slow developers are to catch up with hardware, we're probably closer to 15-20 years.
 
I, for one, have a hard time with any predictions that will take longer than about 3-5 years.

I say that within two years, we'll have PCs capable of near movie-quality animation in games (at lower processing precision, with simplified shaders, lower resolution, and fewer polys...but close enough that people won't see a whole lot of difference). Beyond that, there will need to be a large number of software improvements to actually leverage this computing power.

But beyond about five years, the future of computing becomes very uncertain indeed. The scaling of transistor densities in silicon will have dramatically slowed by then. I don't think anybody knows what new technologies will be made available for computing at that time, let alone whether or not those new technologies will allow for the kind of nicely smooth and continuous improvement we've seen with silicon transistor-based microprocessors. Computing power may accelerate beyond our wildest imaginations, or may stagnate enough that PCs start to get bigger and bigger (more and more chips within the same PC...) in order to increase processing power.
 
surfhurleydude said:
I can foresee a time fairly soon when the problem is with the display device...
Monitors haven't advanced much in the past 10-15 years...

Not true at all. A monitor capable of displaying upwards of 1920x1080 (is that the res?) can definitely produce an image that looks "real" enough.
Funny you should say that... I attended two lectures while at Siggraph/Graphics Hardware that said pretty much the opposite...
 
For a "real" looking image, content matters more than sheer computing power used to generate it. At least with todays rendering methods. And IMO, this balance is increasingly shifting towards content.
And thus, i firmly believe that procedural content generation methods will have to become much more common, before we get signigicant leaps of perceived image quality in 3D environments again.
Meaning, the artist creating the game level must be able to just say: "i want a red brick wall here. No, not brand new, like weather-charred. A bit more. Ok" and continue. So that the wall is generated to intricate detail, with chips of stone broken loose, appropriate textures, normal maps &| displacement maps generated for him.
Otherwise all the increases in sheer computing power will have no effect to end user, because they just wont have enough content to render.

Yes, there are quite a few ways realtime rendering could still go, like using full global illumination solutions. But you won't get the Courtyard House without Mr. van der Rohe's help.
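
To make the "red brick wall" example concrete, here is a toy sketch (mine, not the poster's; all names and numbers are illustrative) of parameterised content generation: a brick-pattern heightmap where a single weathering knob chips random damage out of the bricks, the kind of dial an artist could turn instead of painting the damage by hand.

Code:
import numpy as np

def brick_wall_heightmap(width=256, height=256, brick_w=32, brick_h=16,
                         mortar=2, weathering=0.3, seed=0):
    # Heightmap for a brick wall; weathering in [0, 1] controls how much
    # random damage is chipped out of the brick faces.
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:height, 0:width]
    row = y // brick_h
    x_off = x + (row % 2) * (brick_w // 2)        # offset every other course of bricks
    in_mortar = ((x_off % brick_w) < mortar) | ((y % brick_h) < mortar)
    hmap = np.where(in_mortar, 0.2, 1.0)          # mortar recessed, brick faces raised

    noise = rng.random((height, width))
    chips = (noise < weathering * 0.2) & ~in_mortar
    hmap[chips] -= rng.uniform(0.3, 0.7, chips.sum())  # knock chips out of the bricks
    return np.clip(hmap, 0.0, 1.0)

# "No, not brand new... a bit more weathered. OK" is just a bigger parameter:
wall = brick_wall_heightmap(weathering=0.6)

A real system would layer noise octaves and derive normal/displacement maps from the result, but the point is the same: the artist supplies intent and parameters, and the machine supplies the detail.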
 
SvP said:
I think the next big problem is character animation, not rendering by itself.

I'm gonna have to agree with that. When I'm watching a movie, the biggest tip-off that a scene is done using special effects (CGI or traditional, but CGI in particular) is the often horrible animation. And this is all stuff that's animated (or at least tweaked) by hand, rather than algorithmically as would have to be done in any open-ended virtual world. It's much more noticeable than any flaws in lighting, texturing, modeling/art, etc. (Although, things being as they are, if the animation problems could suddenly be fixed, these other ways in which CGI falls short of reality would become more glaring as a result.)

In principle, the solution to the animation problem is obvious: better and finer-grained physics and modeling of physical characteristics, better incorporation of physiology, etc. To mention a small example of what needs to be done, note the work Valve has done on bringing more realistic eye-contact to HL2. As for when (or, frankly, if) it will be feasible to bring all of this together to the point where it can reliably fool another human, I have no idea, but I doubt it will be terribly soon.
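
As a toy illustration of what "physics-driven rather than hand-keyed" means at the smallest scale (my sketch, not anything from the thread; the names and constants are made up), here is a spring-damper (PD) controller pulling a single joint angle toward a target pose. Stacking thousands of these, plus real physiology, is the hard part.

Code:
def pd_joint_step(angle, velocity, target, dt, stiffness=400.0, damping=40.0):
    # One semi-implicit Euler step of a spring-damper driving a joint angle
    # toward `target`; the simplest building block of physically based animation.
    accel = stiffness * (target - angle) - damping * velocity
    velocity += accel * dt
    angle += velocity * dt
    return angle, velocity

# Swing an "elbow" from 0 toward 1.2 radians over half a second of simulation:
angle, vel = 0.0, 0.0
for _ in range(30):
    angle, vel = pd_joint_step(angle, vel, target=1.2, dt=1 / 60)
print(round(angle, 3))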
 
surfhurleydude said:
Not true at all. A monitor capable of displaying upwards of 1920x1080 (is that the res?) can definitely produce an image that looks "real" enough.
Um, I'm gonna have to say: wrong, for a couple of reasons.
Refresh rate: a limit on refresh rate means that rapid movement will look wrong - temporal AA can help to a degree, but not totally.
Resolution: a limit on resolution is another big flaw. AA can help, but it does not fix the problem. Detail is limited by resolution. Look at the resolution of pro-level digital cameras: it keeps going up and up and up...
 
bloodbob said:
To render reality you will need a device no smaller than the reality you are going to render.


The only problem with this is that it sounds like a truism, but it is unknown, and probably false (see the recent discussion in the General Forum on the Scientific American article about the Holographic Universe). First of all, in terms of information density, the universe can be vastly compressed. The upper bound on the amount of information that can be stored in a given area is exponentially larger than the amount of information required to represent the state space of the particles (see the article). The unknown is the computation required.
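
(For reference, my addition rather than the poster's: the bound that article rests on, the holographic bound, says the maximum entropy, and hence information, of a region scales with the area of its boundary in Planck units rather than with its volume:

    S_{\max} = \frac{k_B\,A}{4\,\ell_P^{2}}, \qquad \ell_P = \sqrt{\hbar G / c^{3}} \approx 1.6 \times 10^{-35}\ \mathrm{m}

which is the sense in which a region's contents could in principle be encoded far more compactly than "one bit per particle per point in space" naively suggests.)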


In his book The Physics of Immortality, Frank Tipler derives mathematics to show how an infinitesimally small point can achieve an infinity of computation in an infinitesimally small time.


However, to achieve "the matrix", we need not simulate physics at the lowest level. The goal is to convince humans that something is real, not a science experiment, and in that regard, there are many short cuts that can be taken.

We don't use quantum electrodynamics to design a skyscraper, and there is no reason why we need to go down to the lowest levels of physics to achieve something macroscopically convincing.
 
Graphics that resemble a live broadcast should be here in 5-10 years.

In principle, the solution to the animation problem is obvious: better and finer-grained physics and modeling of physical characteristics, better incorporation of physiology, etc.

I agree.
 
Well, I wasn't asking when The Matrix will be possible. Someone mentioned the Discovery Channel; I can watch that and realize it's not real because it's presented on a TV screen. But when will we be able to render in real-time an image that's (practically) indistinguishable from that? My guess is only that it will be done in our lifetimes.

I suppose the next step from there would be something like The Matrix, where a CG flower is indistinguishable from a real one. But then you're not just dealing with the visual sense, but all five senses. I don't think this will happen in our lifetimes; I'm still waiting for a good consumer-level VR headset to be developed...
 