Draw Distance

fearsomepirate

What is it exactly that makes large draw distances so historically challenging to accomplish? I suppose it's no problem for the latest hardware, but it seems like it was never trivial to accomplish. Games with big view distances, like Unreal, Shadow Man, Serious Sam, etc, always seemed to be the exception rather than the rule, even well into the SM 1.0 generation.

So what's the limitation on computer hardware that programmers have always had to work around in order to get a decent view distance? It seemed to me initially like all that you need is to keep your on-screen geometry under control, but that clearly isn't the case. Is it that polygons farther away have to be transformed at a different precision than polygons close by? Or is there something else going on?
 
Because the amount you have to draw scales as a function of the square of the distance.
Just think of a quadrant of a circle and how the area increases as you make the radius larger.

LOD is employed to help, but there is a limit to what you can do.
 
IMHO the real reason is the one described by ERP.

There are numerous ways to work around Z-precision issues (of course assuming there are issues). For example, you can use a floating point Z (with near=1 and far=0), or you can divide space into different regions/buckets and scale the data in each bucket.
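For the floating-point option, a minimal sketch of what the near=1/far=0 trick can look like with modern OpenGL (this assumes a loader exposing GL 4.5's glClipControl; the function and matrix are illustrative, not from any particular engine):

[code]
// Sketch only: reversed floating-point Z (near = 1, far = 0).
#include <cmath>
#include <glad/glad.h>   // assumption: any loader exposing GL 4.5 entry points

void setupReversedZ(float fovY, float aspect, float zNear, float* m /*16 floats*/)
{
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE); // [0,1] clip volume, not [-1,1]
    glDepthFunc(GL_GREATER);                      // nearer fragments now win
    glClearDepth(0.0);                            // "far" becomes 0

    // Infinite reversed-Z perspective: depth = zNear / -zEye, i.e. 1 at the
    // near plane, approaching 0 at infinity (column-major, OpenGL convention).
    const float f = 1.0f / std::tan(fovY * 0.5f);
    const float p[16] = {
        f / aspect, 0.0f, 0.0f,   0.0f,
        0.0f,       f,    0.0f,   0.0f,
        0.0f,       0.0f, 0.0f,  -1.0f,
        0.0f,       0.0f, zNear,  0.0f,
    };
    for (int i = 0; i < 16; ++i) m[i] = p[i];
}
[/code]

The win comes from the float's exponent: values near 0 (the far end, where hyperbolic depth wastes precision) get ever-smaller ulps, which largely cancels out the hyperbolic distribution.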

I know, as I have suggested something like this to a person who writes a driving simulator. But this will kill the “It just works” property of the depth buffer. I also remember that it was a common suggestion in the past to limit the overall depth with fog to reduce depth artifacts. But I must confess that IIRC this was in the 16-bit depth buffer timeframe.
 
Because the amount you have to draw scales as a function of the square of the distance.

OK, this is a little confusing to me, because geometry in a video game isn't measured in "feet" or anything like this. Does the problem come at texturing, since a very large polygon will have a lot more texels on it than a small one, assuming the same texture is used on both? Because it seems that, as far as the computer is concerned, 1000 vertices should be 1000 vertices no matter what you're doing, unless the space coordinates of those vertices are so widely scattered that your machine precision can't handle them.
 
Because the amount you have to draw scales as a function of the square of the distance.
Just think of a quadrant of a circle and how the area increases as you make the radius larger.

In a 3D realm, wouldn't the number of things that could possibly be rendered scale with the volume of the cone of vision?

That would scale things as a cubic function, unless other considerations limit what is being drawn (at least in the theoretical worst-case).
 
OK, this is a little confusing to me, because geometry in a video game isn't measured in "feet" or anything like this. Does the problem come at texturing, since a very large polygon will have a lot more texels on it than a small one, assuming the same texture is used on both? Because it seems that, as far as the computer is concerned, 1000 vertices should be 1000 vertices no matter what you're doing, unless the space coordinates of those vertices are so widely scattered that your machine precision can't handle them.

I believe his point is that the geometry, too, scales with the square of the distance. Say you have even just crappy terrain with 100 polygons per mile of view distance. Each "mile" you have 100 polygons, but since it's 3D space, don't you want polygons in both directions, plus upward (though this is curtailed significantly by the relatively thin nature of the visible earth and relatively empty sky)? Thus, the number of drawn polygons per mile is a lot more than 100, and it increases rapidly the more distance you draw. And what if you throw in foliage, characters, AI (pathfinding over huge distances), lighting, etc.?

I am really uninformed compared to a lot of B3Ders, but it seems like, in the modern era, draw distances are limited mainly by the push for eye candy. After that, they're limited by an apparent lack of sophisticated yet fast methods for implementing LOD nicely over millions of polygons. It seems to a layman like myself that these would be a high priority, and relatively easy to implement, but perhaps sorting and filtering so many objects over such huge distances just takes too much time, when your first priority as a graphics programmer seems to be running sophisticated shaders in the near distance in a handful of passes.
 
OK, this is a little confusing to me, because geometry in a video game isn't measured in "feet" or anything like this.
When talking about large view distances people usually refer to games with huge outdoor worlds like Oblivion or Gothic. Such games use methods like dividing the world into square cells and process only those cells that could potentially be visible from the player's point of view. Thus if you increase the view distance, many more of these cells need to be processed.
Additionally, there are cases where the AI is coupled to view distance, so NPCs that are outside the view distance will have very simple AI calculations as the player can't see them anyway.


In games that are set on the surface of a planet, the number of visible objects likely scales roughly with the square of the viewing distance, at least for outdoor scenes. For indoor scenes view distance is not important. For space games it might be a cubic scaling.
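A rough sketch of the cell scheme (all names hypothetical, not from Oblivion or Gothic): bucket objects into a square grid and walk only the cells within the view radius. The loop makes the squared scaling explicit, since the number of cells touched grows as (viewDist / cellSize)^2.

[code]
// Sketch of square-cell culling; all identifiers are illustrative.
#include <cmath>
#include <vector>

struct Cell { std::vector<int> objects; };    // object handles bucketed per cell

struct World {
    int   width, height;                      // grid size in cells
    float cellSize;                           // world units per cell
    std::vector<Cell> cells;                  // width * height cells, row-major

    void gatherVisible(float camX, float camZ, float viewDist,
                       std::vector<int>& out) const
    {
        const int r  = static_cast<int>(std::ceil(viewDist / cellSize));
        const int cx = static_cast<int>(camX / cellSize);
        const int cz = static_cast<int>(camZ / cellSize);
        // (2r+1)^2 cells are visited: double the view distance, quadruple the work.
        for (int z = cz - r; z <= cz + r; ++z)
            for (int x = cx - r; x <= cx + r; ++x) {
                if (x < 0 || z < 0 || x >= width || z >= height) continue;
                const Cell& c = cells[z * width + x];
                out.insert(out.end(), c.objects.begin(), c.objects.end());
            }
    }
};
[/code]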
 
In a 3D realm, wouldn't the number of things that could possibly be rendered scale with the volume of the cone of vision?

That would scale things as a cubic function, unless other considerations limit what is being drawn (at least in the theoretical worst-case).

True, but most 3D games take place on flat planes, or close to them. So your distribution is largely on a plane intersecting the cone. But generally yes, it's slightly worse than squared.
 
I know, as I have suggested something like this to a person who writes a driving simulator. But this will kill the “It just works” property of the depth buffer.

Both techniques suggested by Simon would 'just work' with truly minimal extra effort, but the depth partitioning one would (likely) impose a hefty hit, as it requires redrawing your scene for each partition - could be heavy if carried out naively.
 
Both techniques suggested by Simon would 'just work' with truly minimal extra effort, but the depth partitioning one would (likely) impose a hefty hit, as it requires redrawing your scene for each partition - could be heavy if carried out naively.

True, but the technique was intended for things that have truly enormous ranges of scale and may also have discontinuities. A space simulation, for example, could trivially use this trick.

I suspect that even ordinary FPS applications could use the technique by using user clip planes to handle where the scaling changes.
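A sketch of the partitioned variant (setProjection and drawScene are hypothetical hooks, not a real API): render far-to-near in a few depth slices, clearing Z between them, so each slice spends the full buffer precision on a much smaller far/near ratio. Note that it redraws the scene once per slice, which is exactly the hit mentioned above.

[code]
// Sketch: depth partitioning into slices, drawn far to near.
const float slices[][2] = {           // { near, far } per partition
    { 1000.0f, 100000.0f },
    {   10.0f,   1000.0f },
    {    0.02f,    10.0f },
};
for (const auto& s : slices) {
    setProjection(s[0], s[1]);        // small far/near ratio per slice
    glClear(GL_DEPTH_BUFFER_BIT);     // fresh Z buffer for this slice
    drawScene(s[0], s[1]);            // hypothetical: draw only this slice's geometry
}
[/code]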
 
"Games with big view distances, like Unreal, Shadow Man, Serious Sam"

They don't have big view distances - try some flight simulators like LOMAC, FSX, etc., with 25+ mile view distances.
 
"Games with big view distances, like Unreal, Shadow Man, Serious Sam"

They don't have big view distances - try some flight simulators like LOMAC, FSX, etc., with 25+ mile view distances.

As the units are arbitrary and not to real-world scale, I don't see how the number of "miles" is relevant at all. I can easily render a single polygon and claim that it is 10^80 metres long.

It's the far clip/near clip ratio that matters for the precision of the Z-buffer, AFAIK. And for performance it's definitely about the amount of junk that is rendered, resource cache, draw calls, etc.
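To make that concrete (my own derivation, not from the post above): the standard [0,1] depth mapping is d(z) = (zFar / (zFar - zNear)) * (1 - zNear / z), and multiplying z, zNear, and zFar by the same constant leaves d(z) unchanged. So the distribution of depth values depends only on the ratio zFar / zNear, never on the units you call them.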
 
What is it exactly that makes large draw distances so historically challenging to accomplish? I suppose it's no problem for the latest hardware, but it seems like it was never trivial to accomplish. Games with big view distances, like Unreal, Shadow Man, Serious Sam, etc, always seemed to be the exception rather than the rule, even well into the SM 1.0 generation.

So what's the limitation on computer hardware that programmers have always had to work around in order to get a decent view distance? It seemed to me initially like all that you need is to keep your on-screen geometry under control, but that clearly isn't the case. Is it that polygons farther away have to be transformed at a different precision than polygons close by? Or is there something else going on?

memory
 
Depth buffer precision is a huge issue.
There is horrible z-fighting, for example, if you draw a tree 1 km away from zNear and the billboards of the leaves are a dozen centimeters away from each other.
You absolutely cannot tweak zNear if your eye-camera is allowed to get close to other geometry.
Things get even worse if you add binoculars or whatever.

I am not aware of any *practical* solution for this.
You can see that a lot of games have serious problems with this and have to make major cuts in art content.

So I absolutely cannot understand why there is so much hardware effort going into HDR lighting, etc., while the precision for geometry (which is the basis of *everything*) has been the same cheap crap for the last decade!
 
So I absolutely cannot understand why there is so much hardware effort going into HDR lighting, etc., while the precision for geometry (which is the basis of *everything*) has been the same cheap crap for the last decade!
Quite simply because geometric processing precision, depth buffer precision, attribute read rate, etc. doesn't sell GPUs. Being able to achieve nifty per-pixel effects and simulate all sorts of illumination phenomena with massive textures at high resolutions sells. As long as you can throw pretty pictures and buzzwords and drop trademark names like "TrueHDR" and "HyperAniso" about how things are generated, that's all that matters.
 
If my math isn't completely wrong, an fp32 Z-buffer should result in a maximum relative error of somewhere around 2.4e-7 with a far clip plane at "infinity". That's 2 cm at 100 km distance.
Of course you already lose precision in the vertex shader, so you never get to use all that precision. But even fp24 Z is enough to get 3 cm at 1 km distance.
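A quick empirical check of those numbers (my own back-of-envelope, assuming a reversed float Z that stores depth = zNear / z): step one float ulp at the stored depth and see how far apart the two corresponding distances are.

[code]
#include <cmath>
#include <cstdio>
#include <initializer_list>

int main()
{
    const float zNear = 0.02f;                        // 2 cm near plane, as below
    for (float z : { 100.0f, 1000.0f, 100000.0f }) {
        const float  d     = zNear / z;               // stored reversed-Z depth
        const float  dNext = std::nextafter(d, 0.0f); // one ulp toward "far"
        const double zNext = (double)zNear / (double)dNext;
        std::printf("z = %8.0f m -> resolution ~ %g m\n", z, zNext - (double)z);
    }
}
[/code]

This lands below a centimeter even at 100 km, the same ballpark as the figure above.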
 
What zNear are you assuming?
For a game with a first-person-style camera, you need a zNear of at most 0.02 m.
With a 24-bit depth buffer (there is no 32-bit depth buffer support on any consumer hardware I know about), you get a lousy resolution of three meters at 1000 m!
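The three-meter figure follows from the usual hyperbolic mapping: with zFar effectively at infinity, d(z) = 1 - zNear / z, so one of the 2^24 uniform steps near distance z spans roughly z^2 / (zNear * 2^24). A quick check of the arithmetic (my own, assuming exactly those numbers):

[code]
#include <cstdio>
#include <initializer_list>

int main()
{
    const double zNear = 0.02;              // 2 cm near plane, as above
    const double steps = 16777216.0;        // 2^24 depth values
    for (double z : { 10.0, 100.0, 1000.0 }) {
        const double dz = z * z / (zNear * steps);   // world size of one step
        std::printf("z = %6.0f m -> one depth step ~ %.3f m\n", z, dz);
    }
}
[/code]

At 1000 m this prints about 2.98 m, i.e. the "three meters" above.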

Then there are still really bad precision errors for meshes with larger triangles.
ATI still has nothing implemented (in OGL) to minimize artifacts caused by T-junctions.

It seems like everyone already got used to horrible pixel bleeding and z-fighting that you can see in every game.
 