mckmas8808 said:
If you had a 360 you would see that the lighting looks very similar to PGR3, so I think we are definitely moving in the right direction already.

That looks great, mckmas8808, very natural.
Shifty Geezer said:
For your camera there are really three types of exposure selection. First you've got your overall light, often with some analysis going on: for example, my Canon SLR measures the light in 36 regions and then decides what to set the exposure to. This is a more complex process to apply in realtime. The second approach is a region reading, using the average intensity of that region. And the third is a spot reading, measuring light from a particular point on a surface.

The first method, analytical scene-based metering, produces results dependent on the method employed; reviews of cameras tell you when they have got it right or under- or overexposed. My Canon does a great job and I'm generally free to just take photos without having to worry much. The third method is also very effective and straightforward, and I think it could be easily incorporated into realtime, as I think I mentioned earlier. The game bases the exposure point, halfway between min and max output intensities, on a point in the scene chosen to suit the game style. For an FPS this could be where the player is targeting, whereas an RPG could take the centre spot or a scripted point on an NPC's face.

There could be issues with certain objects, such as a black character's skin, where exposing for that at the midpoint would leave the image overall overexposed. You could give objects an exposure correction setting to counter that, so for a dark-skinned character you'd aim the exposure at minus 3 stops, say. Of course, between exposure transitions you'd need a delay and a gradual change, so moving an FPS target from a dark cave, across the sky, and onto a dark tower doesn't instantly change settings, but adjusts appropriately depending on where you're looking.
I can't see any reason why this wouldn't work effectively and it should be very simple to implement.
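As a rough illustration of the spot-metering scheme described above (every name and constant here is invented for the example, not taken from any engine), the whole thing fits in a few lines:

```cpp
// A minimal sketch of the spot-metering idea above (names illustrative).
// meteredLuminance is the scene-referred luminance at the chosen point:
// the crosshair target in an FPS, a scripted point on an NPC's face in an
// RPG. exposureBiasStops is the per-object correction mentioned above
// (e.g. -3 stops for a very dark-skinned character).
#include <cmath>

struct ExposureState {
    float exposure = 1.0f;   // linear scale applied before tone mapping
};

void updateExposure(ExposureState& state, float meteredLuminance,
                    float exposureBiasStops, float dt)
{
    // Expose so the metered point lands mid-range (0.5), then apply the
    // bias: one photographic stop is a factor of two.
    float target = 0.5f * std::exp2(exposureBiasStops) / meteredLuminance;

    // No instant jumps: converge gradually, iris-style. Blending in log
    // space keeps the adaptation rate even across big brightness changes.
    const float ratePerSecond = 1.5f;
    float t = 1.0f - std::exp(-ratePerSecond * dt);
    state.exposure = std::exp((1.0f - t) * std::log(state.exposure)
                            + t * std::log(target));
}
```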
MrWibble said:
Also, although people state "photo realism" as an aim, really I think they're after "realism" - which is less a case of simulating how a camera works and more a case (IMO) of simulating how the eye would perceive an image. And if you don't agree, we're going to have to start putting lens flares back in.

Shifty Geezer said:
Lens flare was funny. An artefact lens makers took great measures to try to reduce, CG went throwing in with abundance. Personally I think photorealism is a better target than... ocular realism. Ocular realism doesn't have much by way of DOF for starters, whereas that's an important aspect of artistic imagery. Ultimately it doesn't matter which approach is taken, as long as what the games chuck out looks as good as the TV programmes and movies we watch on the same display hardware!
MrWibble said:
I would have to say you're missing a fairly big part of the problem here.
With film, you have someone stood behind the camera, choosing where to point it and how to expose the image. Then you have post-production and editing where more people look at the picture and fiddle with it to make it look right.
With a graphical image, you don't have that - you need to find an analytical way of approximating how a photographer would treat the scene, no matter if the player decides to look under a rock or stare at the sun.
Mintmaster said:
Like I said earlier, you have the previous frame's average luminance for that. Neither a camera nor an eye adapts instantaneously.
...
As for the human element of post-production and editing, I fail to see how a higher precision format solves that problem. You can use the above method for spot averages instead of full-scene averages. Like I said, you do know the raw values, provided the instantaneous contrast isn't really high, and even then you have more than enough information to decide how to tone map it.
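For concreteness, here is a minimal sketch of the previous-frame scheme. The log-average (geometric mean) is a common choice because a few very bright pixels barely move it; the 0.25 mid-grey target is an illustrative value, not one from the thread:

```cpp
// Use the previous frame's average to set this frame's scale.
#include <cmath>
#include <vector>

float logAverageLuminance(const std::vector<float>& luminance)
{
    if (luminance.empty()) return 1.0f;
    double sum = 0.0;
    for (float l : luminance)
        sum += std::log(1e-6 + l);          // epsilon guards log(0)
    return static_cast<float>(std::exp(sum / luminance.size()));
}

// Scale factor applied when writing the *next* frame into the framebuffer,
// so that the average lands at 'key' (e.g. 0.25) of the representable range.
float nextFrameScale(const std::vector<float>& prevFrameLuminance,
                     float key = 0.25f)
{
    return key / logAverageLuminance(prevFrameLuminance);
}
```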
mckmas8808 said:
Edited: Why do most next-gen games today not have that natural-looking lighting that this Getaway video is displaying? Is it technical, or is it artistic skill that's not being shown?
A176 said:
Artistic; for example, the very reason people compare the Doom 3 engine to Source.
But as a matter of personal preference, I'd rather play a game with style and content than something where it's obvious the devs spent 90% of their time trying to get the lighting right.
Don't get me wrong though: all technicalities aside, HDR libraries are probably saving devs a lot of development time, and that I can't argue with... no matter how bad it might look in the end.
Mintmaster said:

MrWibble said:
(I don't suggest a human element can be replaced with more precision - the two are separate issues for me. However *without* enough precision in the image, I feel we'll lose quality in the image after mapping it, and we don't quite have enough information to map it correctly in the first place.)

In laboratory testing, a range of 10,000:1 is a lot greater than the active domain of a film's/CCD's response curve. (8 bits is enough for the range of the curve, but we don't have a choice there anyway due to monitor technology.) For photorealism, it's enough. Period.
MrWibble said:
You seem to suggest that the range has both an upper and lower clamp - but if that's the case, you need to both offset *and* scale the range to get it to fit... how would you know which is necessary when you've thrown most of the distribution information away?

You'll notice that the clamping I'm talking about is in terms of "order of magnitude"; mathematically speaking, it's a logarithmic clamp, and a shift on a logarithmic scale is a scale in the linear world. The clamping itself happens automatically when you force a value into a finite-precision format. I'm just talking about dynamic range, and it's no different whether you're in graphics, instrumentation, electronics, or whatever: saturation value divided by noise floor.
MrWibble said:
Generally speaking you'd assume a lower bound of zero and only worry about the scale, but if you're seriously suggesting just linearly scaling to fit, that's going to look horrible. The optical system doesn't work that way, and neither does film, AFAIK.

Umm, and how do you think an optical system works? The iris scales the light your camera lets in. The shutter can be open for different amounts of time - again, just a linear scale. The light hitting the film's surface is directly proportional to the light emitted by a point in the scene. Period. If it were non-linear, or even had an offset, you'd win the Nobel Prize.
MrWibble said:
Furthermore, you're going to have a hell of a problem if your brightest spot is indeed something like the sun - your method will see a very high average, and even after adjusting you're going to get a bright oversaturated area where the sun roughly is, and black on the rest of the image.

Nope, you got it backwards. The sun will be clamped to an effective value of 32/scale, so it's going to give you a lower average luminance. If 1% of the previous example had a luminance of 1B instead of 10M, it would get clamped to 32 in the framebuffer, translating into a value of 10M. The computed average doesn't change, and so dark details retain their contrast. What will happen is that if the sun is more than 100 times the average luminance, it won't affect the scale factor, so the rest of your scene retains its detail.
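A toy calculation makes the feedback argument concrete (the 1e5/1e9 split and the scale are invented numbers in the spirit of the example above):

```cpp
// Toy check: 99% of the scene at luminance 1e5, 1% "sun" at 1e9, and a
// buffer that saturates at 32. With scale = 3.2e-6, anything above
// 32/scale = 1e7 clamps.
#include <algorithm>
#include <cstdio>

int main()
{
    const double scale    = 3.2e-6;
    const double clampLum = 32.0 / scale;                    // 1e7
    const double trueAvg  = 0.99 * 1e5 + 0.01 * 1e9;         // ~10.1M
    const double meterAvg = 0.99 * 1e5
                          + 0.01 * std::min(1e9, clampLum);  // ~0.2M

    std::printf("true average %.3g, average the meter sees %.3g\n",
                trueAvg, meterAvg);
    // The clamped ("metered") average is far *lower* than the true one, so
    // the inverse feedback raises the scale rather than crushing the rest
    // of the scene to black - the sun stops influencing the exposure.
    return 0;
}
```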
MrWibble said:
But I'm confused - you say a range of "a few thousand is enough", but that's a lot more than we have in a traditional LDR buffer. Are you just saying we should use integer 16-bit instead of FP16? We certainly could do that, but again I don't see the advantage.

No, I was just talking about Xenos' FP10 format. I don't think 8-bit is enough, though zeckensack does. 10-bit integer (1000:1) may get you decent results, but it's borderline IMO. I was really disappointed when I found out ATI's X1K series didn't have FP10.
MrWibble said:
I think this kind of simplistic tone-mapping has already been well explored and discarded as producing unrealistic results which don't look good. Certainly my own tests show that you really do need to spend a bit of time getting the mapping right, and it's pretty essential to be doing more than mapping the visible values linearly to the output range.

Okay, now I know for sure you don't know what I'm talking about. All this talk has nothing to do with tone mapping. For tone mapping, the only suggestion I made was to look at the film response curve, and that most certainly isn't linear. You still have to tonemap this framebuffer into the final result you see on screen.
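For reference, two standard non-linear display mappings of the sort being contrasted with a plain linear scale here; neither is claimed to be anyone's actual curve, the second just has a film-like toe and shoulder:

```cpp
// l is scene luminance already multiplied by the exposure scale.
#include <cmath>

// Reinhard global operator: the shoulder rolls highlights off smoothly
// instead of clipping them.
float tonemapReinhard(float l)
{
    return l / (1.0f + l);            // maps [0, inf) smoothly into [0, 1)
}

// The well-known Hejl/Burgess-Dawson filmic approximation: an S-curve with
// a toe that crushes shadows slightly; output has gamma baked in.
float tonemapFilmic(float l)
{
    float x = std::fmax(0.0f, l - 0.004f);
    return (x * (6.2f * x + 0.5f)) / (x * (6.2f * x + 1.7f) + 0.06f);
}
```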
Mintmaster said:
As an example of why we need this change of thought: I heard nAo talking about pushing the limits of FP16/NAO32, where he sees dithering/banding in the darkest scenes of Heavenly Sword. There's no reason to render the absolute color value in the framebuffer. The previous frame's exposure is a good estimate of the next frame's exposure, so use this value to assign more reasonable framebuffer values for the scene.

DeanoC said:
That's a very interesting idea that I hadn't really thought about.
Essentially predictive compression of the framebuffer format, based on the last N frames' average luminosity.
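One plausible reading of DeanoC's "predictive compression", sketched with invented names (nothing here is from Heavenly Sword's actual renderer): don't store absolute luminance, store it pre-scaled by an exposure predicted from recent frames, and fold the same factor back in at tone-map time.

```cpp
#include <cmath>
#include <deque>

struct PredictiveExposure {
    std::deque<float> history;   // log of average luminance, last N frames
    float scale = 1.0f;

    // Call at end of frame with log(average luminance) of the frame just drawn.
    void endFrame(float frameLogAvgLum, float key = 0.25f, size_t N = 4)
    {
        history.push_back(frameLogAvgLum);
        if (history.size() > N) history.pop_front();
        float sum = 0.0f;
        for (float v : history) sum += v;
        // Predicted exposure for the *next* frame: banding now lives in a
        // part of the range the format can actually afford.
        scale = key / std::exp(sum / history.size());
    }
};

// Written into the framebuffer during shading:
inline float toBuffer(float lum, const PredictiveExposure& e) { return lum * e.scale; }
// Absolute value recovered (if needed) at tone-map time:
inline float fromBuffer(float v, const PredictiveExposure& e) { return v / e.scale; }
```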
Mintmaster said:
MrWibble, a lot of your arguments don't make sense at all, and you're completely avoiding the factual evidence I have that my method works: a real-life camera. You also need to understand that one still has to tonemap the output (i.e. develop the film).
...
Another reason is that I don't think we need to move towards 64-bpp rendering. 32-bpp is enough if the format is right. FP10 is great, and shared exponent would be even better. FP16 is overkill for a framebuffer.
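"Shared exponent" here is the idea behind Greg Ward's RGBE encoding from the Radiance .hdr format: one 8-bit exponent shared by three 8-bit mantissas, giving huge range in 32 bits at the cost of some precision in dim channels. A compact sketch of that encoding (assuming non-negative inputs):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

void encodeRGBE(float r, float g, float b, uint8_t out[4])
{
    float maxc = std::max(r, std::max(g, b));
    if (maxc < 1e-32f) { out[0] = out[1] = out[2] = out[3] = 0; return; }
    int e;
    float m = std::frexp(maxc, &e);     // maxc = m * 2^e, m in [0.5, 1)
    float s = m * 256.0f / maxc;        // scales maxc to just under 256
    out[0] = static_cast<uint8_t>(r * s);
    out[1] = static_cast<uint8_t>(g * s);
    out[2] = static_cast<uint8_t>(b * s);
    out[3] = static_cast<uint8_t>(e + 128);   // biased shared exponent
}

void decodeRGBE(const uint8_t in[4], float& r, float& g, float& b)
{
    if (in[3] == 0) { r = g = b = 0.0f; return; }
    float f = std::ldexp(1.0f, int(in[3]) - 128 - 8);   // undo bias and the *256
    r = in[0] * f; g = in[1] * f; b = in[2] * f;
}
```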
Mintmaster said:
No need for a long explanation there, MrWibble, as I understood after the first sentence. That's one of the extreme cases which does show clamping errors, and it will give an incorrect output. (zeckensack: this is why more than 8 bits are needed. Even if you knew the perfect scale factor, you can't render a bright object through a tinted surface correctly.)
I think there are a few things that could mitigate any errors:
1) 8,192:1 does still have some room to play with. Going back to my example, you can target a lower average value (say 0.1 instead of 0.25) to increase the room on the upper end. 320x the average is a lot, and your tone map shouldn't need that full range.
2) HL2, which can't do any HDR at all through a window, manages okay. This will be much less of a problem.
3) You can special-case this with the alpha buffer. Even though you only have 2 bits in a 10-10-10-2 format, you can flag where super-bright objects are; the flag can stand for an additional factor of 100 or so for the sun. In practice, I suspect only a few super-intense sources would need this (see the sketch after this list).
4) I think it's a hard artifact to visually identify as incorrect. #1 and #3 should take care of most of it.
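Point 3 could look something like this; the bitfield layout and the 1/10/100/1000 multiplier table are purely illustrative, and real hardware of the era wouldn't let you blend through such a format:

```cpp
#include <algorithm>
#include <cstdint>

// Each 2-bit flag value selects an extra multiplier, so a handful of
// super-bright sources (the sun, flares) can escape the normal clamp.
static const float kRangeMul[4] = { 1.0f, 10.0f, 100.0f, 1000.0f };

struct Pixel1010102 { uint32_t r : 10, g : 10, b : 10, flag : 2; };

Pixel1010102 encode(float r, float g, float b)
{
    float m = std::max(r, std::max(g, b));
    uint32_t flag = 0;
    while (flag < 3 && m / kRangeMul[flag] > 1.0f) ++flag;   // smallest fitting range
    float s = 1023.0f / kRangeMul[flag];
    Pixel1010102 p;
    p.r = uint32_t(std::min(r * s, 1023.0f));
    p.g = uint32_t(std::min(g * s, 1023.0f));
    p.b = uint32_t(std::min(b * s, 1023.0f));
    p.flag = flag;
    return p;
}

float decodeChannel(uint32_t v, uint32_t flag)
{
    return (v / 1023.0f) * kRangeMul[flag];
}
```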
One more thing:

MrWibble said:
Because the result is actually now *different* to the first attempt (not more or less precise - actually quite different in distribution), you'll get a new average - which will be even lower, because the clamp means the brightest values got thrown away, and everything will get gradually worse until you can't actually see the light through the surface at all...

Your line of reasoning here is similar to the previous post, where you said everything would be dark if there was a bright light source. You should remember that the calculated average has an inverse feedback to the scale factor: if the average is lower than your target, the scale becomes larger, thus brightening the image.
MrWibble said:
However, if we're talking about where things are going in the near future, I really hope we'll just have standard support for HDR pixels and texels.

Shifty Geezer said:
Speaking of which, is there direct support for HDR textures on modern GPUs, or are they still 8bpp only, with FP16 etc. only for framebuffers? If there is, what HDR image formats are supported?

All DX9 GPUs support FP16 and FP32 textures, though NV3x only in a limited fashion, and only NV4x/G7x support filtering of FP16 textures; anything else is point sampling only.
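For completeness, the D3D9-era way to ask a GPU that question is IDirect3D9::CheckDeviceFormat; querying with D3DUSAGE_QUERY_FILTER distinguishes "can sample it" from "can filter it":

```cpp
#include <d3d9.h>

// Returns true if an FP16 (A16B16G16R16F) texture is both creatable and
// bilinearly filterable on the default adapter (NV4x/G7x: yes; other
// DX9-era parts: creatable, but point sampling only).
bool fp16TextureFilteringSupported(IDirect3D9* d3d)
{
    HRESULT plain  = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        0, D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);
    HRESULT filter = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_QUERY_FILTER, D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);
    return SUCCEEDED(plain) && SUCCEEDED(filter);
}
```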