Any way to disable depth of field effects in 3DMark03?

DemoCoder

Don't get me wrong, they are cool and all, but they also detract from image quality in the demo. Even if I force on 6X FSAA and 16xAF, I see horrible aliasing everywhere. I'd love to see what the demos look like with it switched off.
 
DemoCoder said:
Don't get me wrong, they are cool and all, but they also detract from image quality in the demo. Even if I force on 6X FSAA and 16xAF, I see horrible aliasing everywhere. I'd love to see what the demos look like with it switched off.
Go to benchmark settings and set pixel processing to "None".
 
Nagorak said:
What the hell is a "depth of field" effect? :?:
:oops:

DemoCoder said:
Don't get me wrong, they are cool and all, but they also detract from image quality in the demo. Even if I force on 6X FSAA and 16xAF, I see horrible aliasing everywhere. I'd love to see what the demos look like with it switched off.
There is no way to disable the post effect in the demo, but the default benchmark has the post processing off. The post effect and FSAA do not work on top of each other (no, I didn't mean THAT). It's an issue in DX9.0.
 
Nagorak said:
What the hell is a "depth of field" effect? :?:

Were you not around when the Voodoo5 was being hyped?! :) 3Dfx didn't introduce the concept, but it was a big part of the marketing. Anyway, it's when the view is focused on a given object in the scene and the rest is out of focus, like when you turn the focusing ring on a camera and objects in the scene fade in and out of focus.
 
Well, I have a depth of field demo myself, but it sucks: ugly blur and crap. ATi's depth of field in, for instance, their HDR demo is pretty nice though. They haven't done any ugly mipmap bias trick but implemented a Gaussian blur, which works much better.
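As a rough illustration (a sketch only, not ATi's actual implementation), a separable Gaussian blur of this kind just weights neighbouring samples by a normalized bell curve, applied once horizontally and once vertically over the framebuffer; the kernel radius and sigma below are made-up example values:

```python
import math

def gaussian_weights(radius, sigma):
    """Normalized 1D kernel for a separable Gaussian blur pass
    (applied once horizontally, once vertically)."""
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(w)
    return [v / total for v in w]

# Example: a 9-tap kernel; radius and sigma are illustrative only.
print(gaussian_weights(4, 2.0))
```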
 
Because it uses a bell curve, which by another name is a Gaussian curve, Gaussian distribution, normal distribution, etc.
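For reference, that bell curve is the normal (Gaussian) density,

$$G(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/(2\sigma^2)},$$

and a Gaussian blur weights each neighbouring pixel by this curve evaluated at its distance from the centre tap.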
 
Humus said:
Well, I have a depth of field demo myself, but it sucks: ugly blur and crap. ATi's depth of field in, for instance, their HDR demo is pretty nice though. They haven't done any ugly mipmap bias trick but implemented a Gaussian blur, which works much better.

It's better, but it is not good.
Depth of field is not Gaussian in distribution around the focal plane.
This site has the equations.
http://www.dof.pcraft.com/dof.html

Plug them into Excel or whatever, and draw the graphs to get a feeling for how the circle of confusion depends on distances, focal lengths and apertures. The most instructive one is to plot the circle of confusion vs. front and back distance from the focal plane. I used to tote around a programmable calculator for doing these on location. :)
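In lieu of Excel, here is a quick way to draw that plot; the thin-lens circle-of-confusion formula used here is the standard one behind the equations on the linked page (not copied from it), and the focal lengths, f-number and focus distance are just example values:

```python
import numpy as np
import matplotlib.pyplot as plt

def coc_diameter(s, f, N, s_focus):
    """Circle-of-confusion diameter (same units as f) for a point at
    distance s, given focal length f, f-number N, focus distance s_focus."""
    return (f * f / N) * np.abs(s - s_focus) / (s * (s_focus - f))

s = np.linspace(500.0, 20000.0, 1000)            # object distance in mm
for f in (28.0, 50.0, 135.0):                    # example focal lengths in mm
    plt.plot(s, coc_diameter(s, f, N=2.8, s_focus=3000.0), label=f"{f:.0f} mm")
plt.xlabel("object distance (mm)")
plt.ylabel("circle of confusion (mm)")
plt.legend()
plt.show()
```

Note how much faster the blur grows in front of the focal plane than behind it.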

David Jacobssen's excellent tutorial page (mostly gathered from pre-web rec.photo discussions) seems to be off-line atm.

If there are problems getting to grips with the concepts (some photographic experience/theory helps a lot - preferably large format :)), I can volunteer some help.


Entropy

PS. If you want to get ridiculous, you should consider that the above equations are based on, for instance, the assumption that the aperture is circular. Real apertures are not; they are formed by blades that contract. Note that more expensive lenses use more blades, because this is considered to give "more beautiful" blur, and not-so-ugly aperture images when shooting, for instance, water reflections or the direct sun. The point is, even small details matter to people in the field. Taking such niggles into account would be going totally overboard for rendering, particularly in realtime.
 
Entropy said:
Depth of field is not Gaussian in distribution around the focal plane.

No one said that it is.

What we'd need is the image lightness distribution of a single point projected onto the image plane.
The site you talk about does not calculate this, only the outer boundary of this "circle of confusion".

Of course the fact that it has an outer boundary proves that this is not a Gaussian distribution... but what is it?
 
Hyp-X said:
Entropy said:
Depth of field is not Gaussian in distribution around the focal plane.

No one said that it is.

What we'd need is the image lightness distribution of a single point projected onto the image plane.
The site you talk about does not calculate this, only the outer boundary of this "circle of confusion".

Of course the fact that it has an outer boundary proves that this is not a Gaussian distribution... but what is it?
I should have been clearer.

You have to reformulate the equations to calculate the circle of confusion as a function of distance rather than the other way around, which is what photographers are interested in. They decide on a maximum circle of confusion (blur) depending on how much they are going to enlarge their negative, and then use the equation to calculate what aperture they need in order to get acceptable front and back sharpness. Often they simply can't get that, which is why portraits shot in large format in natural light often show people with fuzzy noses and ears. :)

But as I said, in order to get a feeling for the proper distribution, reformulate the equations to calculate the circle of confusion as a function of distance, plug them into a graphing program, and plot away with different focal lengths. The last point is important - the shorter the focal length, the more drastic the front/back asymmetry around the focal plane.
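For reference, under the usual thin-lens approximation that reformulation gives the blur-circle diameter directly as a function of object distance $s$, for focal length $f$, f-number $N$ and focus distance $s_f$:

$$c(s) = \frac{f^2}{N\,(s_f - f)} \cdot \frac{|s - s_f|}{s}$$

Behind the focal plane $c(s)$ levels off toward the finite limit $f^2/(N(s_f - f))$ as $s \to \infty$, while in front of it the blur keeps growing as $s$ shrinks - which is the asymmetry the plots bring out.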

Entropy
 
Entropy said:
It's better, but it is not good.

The motto in graphics is: if it looks good, then it is good. The depth of field ATi uses looks pretty good to me, so I would say it is good, even though it most likely isn't close to how things work in the physical world. But that's true for pretty much everything in graphics. ATi's method is pretty convincing to me, so how much of an improvement would it be to use those formulas?
 
Humus said:
Entropy said:
It's better, but it is not good.

The motto in graphics is: if it looks good, then it is good. The depth of field ATi uses looks pretty good to me, so I would say it is good, even though it most likely isn't close to how things work in the physical world. But that's true for pretty much everything in graphics. ATi's method is pretty convincing to me, so how much of an improvement would it be to use those formulas?

Depends. Of course.

It's always good to know the way stuff really works, so that you have a decent grasp of just how much of a hack it is you are using. C'mon, this is simple. Just look at it. Then you can happily neglect it, and be satisfied with what you got, but you'll be wiser for it.

Where the differences between front and back DOF get dramatic is with short focal lengths, particularly at short object distances. Trained eyes note these things, just as you can show pictures to a photographer and they will be able to tell roughly what focal length lens you took the image with.

Personally, I feel Depth of Field is completely inappropriate in virtual reality applications - it is a property of an optical recording system, and we do not perceive depth of field in reality. Mimicking film is another matter, of course - then it may well be appropriate to emulate the limitations of film cameras.


Entropy
 
I was about to say the same thing.

Graphics is all about hacks. The best hacks are the ones in which you can't tell the difference between what's real and what's not.

To do real depth of field, you couldn't use a post-processing filter, because it arises from a camera that has a real lens rather than a pinhole. You'd have to shift the viewpoint around and combine the images together with some distribution function.

The improvement just isn't worth it.
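For what it's worth, a rough sketch of that shift-and-combine idea (the classic accumulation-buffer approach); `render_from` is a hypothetical callback standing in for whatever renders the scene with the eye offset on the lens plane but still aimed at the focal point:

```python
import numpy as np

def accumulate_dof(render_from, num_samples, aperture_radius):
    """Average many renders taken from random points on a disc-shaped
    aperture. Points on the focal plane project to the same pixels in
    every render and stay sharp; everything else lands in slightly
    different places each time and blurs out.

    render_from(dx, dy) is assumed to return an H x W x 3 float array
    rendered with the eye offset by (dx, dy) on the lens plane."""
    accum = None
    for _ in range(num_samples):
        r = aperture_radius * np.sqrt(np.random.rand())  # uniform over the disc
        theta = 2.0 * np.pi * np.random.rand()
        img = render_from(r * np.cos(theta), r * np.sin(theta))
        accum = img if accum is None else accum + img
    return accum / num_samples
```

Dozens of full scene renders per frame is exactly why it isn't worth it in realtime.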
 
Entropy said:
Personally, I feel Depth of Field is completely inappropriate in virtual reality applications - it is a property of an optical recording system, and we do not perceive depth of field in reality. Mimicking film is another matter, of course - then it may well be appropriate to emulate the limitations of film cameras.

I don't quite agree. We totally perceive depth of field in reality. If we didn't, I wouldn't need glasses :) Put your finger in front of your eye while reading text on the computer screen. The finger gets blurry. It's just not as dramatic as some implementations make it out to be.

The problem is that the computer has no way of knowing where you are looking, so it's really only useful for realtime cinematic sequences, or when you don't need to see the entire world on demand. However, it would be interesting to get some of that eye-movement tracking in a virtual reality helmet, like Canon has in some of their cameras. That would allow the computer to figure out where you are looking, and would be very cool.
 
Mintmaster said:
I don't quite agree. We totally perceive depth of field in reality. If we didn't, I wouldn't need glasses :) Put your finger in front of your eye while reading text on the computer screen. The finger gets blurry. It's just not as dramatic as some implementations make it out to be.

LOL, you nearsighted too? Well, the reason we don't perceive DOF in reality is mainly that we refocus (plus we have 25 mm lenses with small apertures), and if you're nearsighted you can't do that beyond a certain distance - 20-25 cm in my case. Good thing my wife can help me find my glasses in the morning. In fact, I suspect she moves them around just so she can enjoy watching my blood pressure rise. ;)

Entropy
 
We do perceive DOF in reality; it's just that it's never there when we're looking at something, because we're focusing on that object. But focus on the wall or something a few meters away, then hold something up near your eye but not blocking your view, and look at it without... erm... actually looking at it :LOL: If you keep focused at a distance, the near object will be blurred.

Of course, for this to be appropriate on a computer screen, it would have to track where our eyes are looking, and adjust the DOF to focus on that object...or have the "it's a camera" cop-out... ;)
 
Hmm... just walking around and doing normal tasks every day, I find depth of field to be a constant part of my perception of things. I can't imagine what the world would look like if I didn't perceive depth of field.
 