Irreverent to the Reverend. Warning - contains hard data.

Discussion in 'General 3D Technology' started by Entropy, Jun 27, 2002.

  1. KnightBreed

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    203
    Likes Received:
    0
    I think you're missing Doomtrooper's point. Some games sync network packets with framerate. This was rather apparent in the old Half-Life netcode. I could never play Half-Life online with my Voodoo2 because the framerate would flood my horrible dial-up connection (I averaged 500-700ms latency). It would freeze up every 5-10 seconds without fail. Lowering my maximum framerate or switching to software rendering would always solve the issue. The revamped netcode introduced in 1.0.1.0 severed the link between framerate and network packets.

    I'm not sure how many games on the market are set up this way, but there are some.
     
  2. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    In an optimal situation, the frame that the CPU is working on should be the one that is currently being written to the backbuffer.

    Now, at 60Hz, we're talking about 16.7 ms between frames, and thus roughly 33ms delay from input to output with double buffering in an optimal situation.
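     That latency arithmetic can be sketched in a few lines (hypothetical helper name; assumes input is sampled at the start of a frame and each buffered frame adds one full refresh interval, with no other pipeline delay):

```python
def input_to_display_latency_ms(refresh_hz: float, buffered_frames: int = 2) -> float:
    """Approximate input-to-output delay: each buffered frame costs one refresh interval."""
    frame_ms = 1000.0 / refresh_hz
    return frame_ms * buffered_frames

# 60 Hz with double buffering: ~33.3 ms from input to display.
print(round(input_to_display_latency_ms(60.0), 1))  # 33.3
```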

    Human reaction times are usually in the 100ms-200ms range. To test your own, try the meterstick test. Have somebody else hold a meterstick, while you have your fingers right at the zero mark. Have that person drop the meterstick without any warning, and you close your fingers around the meterstick (Well, any rigid measuring implement will do) as quickly as you can after it begins falling.

    Find your reaction time by dividing the number of centimeters the meterstick moved before you grabbed it by 490, and then taking the square root. That is your reaction time in seconds (multiply by a thousand for milliseconds).
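     The formula follows from free fall, d = ½gt², and can be checked with a few lines of Python (hypothetical function name; uses g ≈ 980 cm/s², so t = √(d/490)):

```python
import math

def reaction_time_ms(drop_cm: float) -> float:
    """Free fall: d = 0.5 * g * t^2 with g ~ 980 cm/s^2, so t = sqrt(d / 490)."""
    return math.sqrt(drop_cm / 490.0) * 1000.0

# A drop of about 13.5 cm corresponds to roughly 166 ms.
print(round(reaction_time_ms(13.5)))  # 166
```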

    I did the test myself and got 166ms (of course, I only did it once...for a good test, you should take a few and average).

    While I won't argue that 33ms delay is not noticeable in some situations, in most, I'm certain it is not. So yes, I do agree that in highly-competitive situations, you'd want to have framerates in excess of 120fps to have as much of an edge as is humanly possible, but it won't make a difference in most cases.
     
  3. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Impossible to do for all situations, or even most situations. Let's take a rather simple one: a 360 degree rotation that happens in one second.

    At a 640x480 display with a fov of 90 degrees, a 360 degree rotation will effectively move objects across four screens, or 2560 pixels in one second. In order for no object to have moved more than one pixel (moving any more than one pixel would introduce a gap between frames) obviously requires a framerate of 2560 fps.

    Bump the resolution up to 1600x1200, you should easily see the problem.
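     The arithmetic above can be sketched directly (hypothetical helper; it treats pixels-per-degree as uniform across the screen, which a real perspective projection does not, but it matches the back-of-envelope estimate):

```python
def min_fps_for_one_pixel_step(width_px: int, fov_deg: float, deg_per_sec: float) -> float:
    """Framerate needed so nothing moves more than one pixel between frames."""
    px_per_deg = width_px / fov_deg          # rough pixels per degree of rotation
    return px_per_deg * deg_per_sec          # pixels swept per second = required fps

print(min_fps_for_one_pixel_step(640, 90.0, 360.0))   # 2560.0
print(min_fps_for_one_pixel_step(1600, 90.0, 360.0))  # 6400.0
```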
     
  4. Freon

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    38
    Likes Received:
    0
    Vsync adds a frame of lag to your mouse input. I honestly can't play with vsync on; it's too distracting.
     
  5. Dave B(TotalVR)

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    491
    Likes Received:
    3
    Location:
    Essex, UK (not far from IMGTEC:)
    For complete smoothness the time taken for one frame to render must be less than the time it takes any given object in the 3D scene to move by more than 1 pixel.

    Some games that means 60 FPS and some games that means 150.
     
  6. SA

    SA
    Newcomer

    Joined:
    Feb 9, 2002
    Messages:
    100
    Likes Received:
    2
    The term "motion blur" refers to antialiasing in the temporal dimension. All high-quality CGI feature film work performs this; otherwise CGI imagery would look very unrealistic (especially at 24 fps). Not fully understanding temporal antialiasing was a major problem in early film special effects such as hand-animated miniatures.

    The methods of temporal antialiasing (motion blur) are the same as for the spatial dimensions. The most straightforward is to supersample at a higher temporal resolution (frame rate) and then downsample, using some filter kernel (often just a box filter, meaning all the in-between frames are simply averaged), to the desired frame rate. You get better quality by increasing the number of samples and by using a better filter shape (such as a Gaussian).
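     The box-filter version of that downsampling can be sketched in a few lines (hypothetical function name; frames are plain lists of pixel intensities):

```python
def box_filter_downsample(subframes, factor):
    """Average each consecutive group of `factor` subframes into one output frame.

    Each subframe is a list of per-pixel intensities; the box filter is just
    a per-pixel mean over the group (the 'all in-between frames are averaged' case).
    """
    out = []
    for i in range(0, len(subframes), factor):
        group = subframes[i:i + factor]
        out.append([sum(px) / len(group) for px in zip(*group)])
    return out

# A bright pixel moving one position per subframe smears into a dim streak:
moving = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(box_filter_downsample(moving, 4))  # [[0.25, 0.25, 0.25, 0.25]]
```

A Gaussian kernel would weight the central subframes more heavily instead of averaging uniformly.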

    Aliasing gets its name from the fact that frequencies higher than the display frequency alias (take on another identity) as frequencies below the display frequency.

    This means that spatial frequencies (rapid changes in image contrast) at resolutions higher than the display resolution alias to lower spatial frequencies and become visible. This is the same phenomenon as the beat frequency from two separately oscillating strings, tuning forks, etc. In that case, the two frequencies interfere to cause a third, low-frequency beat. In the case of images, the spatial display frequency of the CRT screen (say, 1280 pixels across) interferes with the spatial frequencies in the image (spatial patterns in textures, spatial patterns formed by triangle edges, etc.) and causes artifact "beat" frequencies. Thus, increasing the display resolution can never eliminate aliasing; it only raises the frequency of the artifacts. You must antialias to remove aliasing. In the temporal case, raising the frame rate can never remove temporal aliasing. Again, it only increases the frequency of the temporal artifacts. You must antialias in the temporal dimension (use correct motion blur) to remove temporal aliasing.

    Note that correct temporal antialiasing does not truly "blur" the image any more than correct spatial antialiasing blurs the image. In fact, the image actually appears much sharper, with more resolution than is actually being displayed. In the temporal case, this means a correctly motion-blurred image actually appears to be running at a much higher frame rate than it actually is, with no flicker or jerkiness.

    It does not depend on where the eyes are looking, any more than spatial antialiasing. It is applied to the whole image on each frame.
     
  7. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    For those that didn't get this, one example is when you see a car wheel or other rotating object (fan, propeller, whatever) in a movie that appears to be rotating backwards, at a much lower speed than the actual rotation of the object.

    If you paid attention, you may have noticed that this mostly concerns oscillatory effects: any rotating object, or any object moving back and forth with a frequency that isn't changing very rapidly. It is oscillating objects that make aliasing the most apparent; it's not as easy to see aliasing in more irregular movements.

    But, that doesn't mean it won't look better once the temporal anti-aliasing (motion blur) is applied!

    One quick thing to note: Temporal aliasing as described in the paragraph above is often used to judge the frequency of oscillatory effects. Tom's Hardware posted an article some time ago that used a strobe and that same temporal aliasing effect to find the speed of computer fans.
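     The "wheel going backwards" frequency can be computed directly from standard sampling theory (hypothetical helper name; it folds the true rotation rate into the range (-fs/2, fs/2] around the nearest multiple of the frame rate):

```python
def apparent_rotation_hz(true_hz: float, frame_rate_hz: float) -> float:
    """Aliased frequency seen after sampling a rotation at `frame_rate_hz` fps.

    Negative result = the object appears to rotate backwards.
    """
    fs = frame_rate_hz
    return true_hz - round(true_hz / fs) * fs

# A wheel spinning at 23 rev/s filmed at 24 fps appears to creep backwards at 1 rev/s:
print(apparent_rotation_hz(23.0, 24.0))  # -1.0
print(apparent_rotation_hz(25.0, 24.0))  # 1.0
```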
     
  8. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    Any sweeping camera movements or fast action (like sports) and you can easily see the difference with and without temporal AA.

    One of the reasons film looks better than video (besides all the other advantages and better cinematography) is the temporal aliasing you see in low-quality video, especially digital video. For example, on my Sony TRV900, if I want to achieve a professional look, I have to film at 60fps and post-process down to 24fps, or adjust exposure times to simulate a film camera.

    Otherwise, any fast motion or sweeping camera movement leaves you with that "lost frame" feeling where you see objects jump.
     
  9. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,563
    Likes Received:
    171
    Location:
    In the Island of Sodor, where the steam trains lie
    It will look better with temporal anti-aliasing. The (main) reason you get the "wheels going backwards" aliasing effects in movies is that the camera shutter is only open for about half the frame time, due to (no doubt) physical constraints. This is of course much better than the instantaneous time sampling you get in computer games, but it could still be much better.
     
  10. Nagorak

    Regular

    Joined:
    Jun 20, 2002
    Messages:
    854
    Likes Received:
    0
    Re: Irreverent to the Reverend. Warning - contains hard data

    First of all, movies look "OK" at 24 fps. I don't know why people say they look "great". Even with fabled motion blur you can see choppiness if you look closely, and especially if you look into a scene's background. Movies are "stuck" at 24 fps since that's the standard, but they would definitely look better with more fps. Still, they are passable, so we'll let it be.

    Secondly, your statement that the human eye is "8 fps + motion blur" seems ridiculous to me. I don't know what scientific evidence you are basing that on, but it's obvious people can see a lot more than 8 fps, and it's not just motion blur that you see in excess of 8 fps.

    Finally, if you have a high enough FPS, motion blur will take care of itself. It will just happen. In order to achieve what you are suggesting, you couldn't just have a "motion blur trail" or something either; you'd actually have to blend each frame together with the one coming after it. This would be a performance nightmare, and I don't know about you, but it would be the absolute first thing I would disable upon installing a new graphics card. Maybe we'll see something like this 5 years down the line, but not anytime soon (and I highly question whether it's worth the effort).

    I don't understand how people can stand a 60 Hz refresh rate. Even at 100 Hz the monitor refresh is annoying to me, so go figure (and now that I've started paying attention to it, it's really bothering me, LOL).

    I agree that there is no "cap" on acceptable FPS, but it does vary a lot depending on what you're used to. As a matter of fact you can play just fine at a solid 60 fps (no major ups and downs). I'm not even sure you are at a disadvantage with 60 fps as opposed to 120 fps. There's a slight difference, but I'd say luck would factor about equally into the equation (read: not that much). Still, more fps is always better.

    I do think that some people concentrate excessively on FPS, however.
     
  11. Nagorak

    Regular

    Joined:
    Jun 20, 2002
    Messages:
    854
    Likes Received:
    0
    No offense, but you're being kind of anal-retentive about this. Your eye cannot recognize a movement of 1 pixel, especially not at 1600 resolution. The idea that everything needs to move by only a single pixel per frame for proper motion blur is just ridiculous. In fact, if you are getting high FPS, things already motion blur to some extent; there's no need for 2560 fps, which is clearly overkill. The individual pixel movements are just not important; the raw fps is more of a factor. At 200 fps, for example, the amount of time that has passed since the previous frame is so minuscule as to be effectively non-existent.
     
  12. Brutal Deluxe

    Newcomer

    Joined:
    Jun 3, 2002
    Messages:
    23
    Likes Received:
    0
    Comparing MMORPGs with normal multiplayer games is a mistake. Quake3 uses connection-oriented communication, whereas MMORPGs use connectionless communication.
     
  13. Nappe1

    Nappe1 lp0 On Fire!
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,532
    Likes Received:
    11
    Location:
    South east finland
    guys... how about playing something that doesn't give you a chance to see the fps?

    I can guarantee that "what somebody doesn't know, he doesn't need" works just fine here too.

    When the FPS counter is visible, I need 30-40 fps to get a fluid experience. If I don't know the real fps, sometimes even 20 to 24 fps is enough.
     
  14. Dio

    Dio
    Veteran

    Joined:
    Jul 1, 2002
    Messages:
    1,758
    Likes Received:
    8
    Location:
    UK
    What is actually required for motion blur may in fact be even more restrictive than rendering once across each pixel: you have to achieve better than per-pixel resolution of your subsamples, i.e. 'time-based antialiasing'. Or it may be less restrictive. It depends on the application and the rate of movement, and in the same way that the benefit of FSAA diminishes as resolution increases, as your sampling rate increases the necessity for motion blurring decreases.

    It's easy to see that 60fps point sampled (in time) is insufficient, as there are several examples on TV that show these artifacts, usually at sporting events. Many of these events use CCD cameras with extremely high virtual shutter speeds (often 1/1000s) so slow-motion replays can show point sampled (in time) images rather than blurs. These images look distinctly odd on television. I remember finding the diving at the Barcelona Olympics particularly disturbing for this reason. To alleviate this factor, sporting events are deploying special replay cameras and recording systems that record for replay at the frame rate of the camera for slow-mo but accumulate the images for realtime display.

    Computer systems suffer the problem that they aren't analogue at any level and so have 'infinitely fast' shutter speeds, so you would need to sum an infinite number of frames to produce the equivalent solution in the most limiting case. Probably an accuracy so the largest displacement on screen is approximately 1/4 of a pixel between frames would be sufficient (by analogy to spatial antialiasing).

    The best way to examine how many samples are 'necessary' is to consider the appearance at the edges of an object. If the object moves 400 pixels on screen, but is only sampled 10 times, then there will be sets of 40 pixels which share a single level, creating the obvious 'shadow' effect seen in the 'accumulate frame' antialiasing method that used to be frequently used (Magic Carpet was one game that showed this, if I remember right). Therefore, to achieve a motion-blur effect good enough for stills you need to move by a small enough amount that this effect isn't noticeable, which in the limiting case is about 1/4 of a pixel. If you're not talking about stills you can relax the restriction depending on how fast you are rendering, but I'd still say much more than a few pixels would be noticeable.
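     The banding arithmetic in that example can be sketched directly (hypothetical helper name; it lists the positions at which a moving edge is actually drawn within one displayed frame):

```python
def sample_positions(travel_px: int, samples: int):
    """Positions at which a moving edge is rendered when `travel_px` of motion
    is captured with only `samples` temporal samples."""
    step = travel_px / samples
    return [round(i * step) for i in range(samples)]

# 400 px of travel with 10 samples: the edge appears at 40 px intervals,
# producing the visible 'shadow' bands of accumulate-frame motion blur.
print(sample_positions(400, 10))  # [0, 40, 80, 120, 160, 200, 240, 280, 320, 360]
```

Shrinking the step toward the quarter-pixel figure mentioned above would require roughly 1600 samples for the same motion.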

    Motion blur is a difficult problem to solve currently - an awful lot of the render time of the cinematic CG goes into it. There are other techniques that can provide good results - in the same way as multisample + aniso provides very similar IQ to supersample at a fraction of the performance hit.

    The result of all of this is that computing motion blur isn't done much in the consumer space; so a good solution IS to render as fast as possible and see how well the eye copes.

    What you want is a monitor + video that can run at 120fps at the highest possible image quality (probably 12x10 4xAA + aniso - even my VM Pro 510 won't do 120fps at 16x12) and run the game locked at 120fps. There's no point at all in rendering faster than monitor refresh except to reduce latency (well, unless you have a physics engine with resonances). Having a video card that can get 350fps on Q3 benchmarks is therefore still valuable, because in-game it will make sure it stays at the 120fps even in the most complex scenes, and never stutters.
     
  15. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Sure it can. As I said, at high framerates where, say, an object jumps 3-5 pixels at a time, the object will show a different kind of artifact than what we see at very low framerates. Instead of looking jerky, the object will look like a series of objects. And yes, you would need thousands and thousands of discrete fps to eliminate this artifact, and even then, you could manufacture an in-game situation that would still show it.

    One example that's easy to see exists near the beginning of Half-Life, the single-player game. Remember when you have to "start the rotors" before all hell breaks loose? If you have a slow computer, or run at very high resolution/FSAA/aniso, you will have a hard time seeing which way the rotor is spinning once it gets going, because it will jump around.

    If you have a fast computer, it still won't look perfect. You will see many ghosted images of the rotor as it spins.
     
  16. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,360
    Likes Received:
    1,377
    I had intended to write something a bit more explanatory, (the joys of vacation :)) but got busy with other stuff (the joys of having a spouse ;-)).

    You make a mistake (or engage in rhetoric) here when you connect the effect of temporal antialiasing with that of spatial antialiasing, implying that the results and problems will be similar. There are obvious conceptual similarities (sample higher -> downsample for display), and both attempt to alleviate the problem of insufficient resolution (temporal or spatial) in the output. But whereas the difference between the observer and the "camera" of the rendering isn't an issue for spatial antialiasing, it does matter for the temporal case.

    Just because graphics literature (what I've seen of it) is notoriously bad at making a distinction between "camera" and "human observer" doesn't mean there isn't one.

    I'll try to explain with an example. I'll have more of a rendering slant to it this time around. I'll also use a slightly more stringent language since it seems difficult to be taken seriously without it.

    Say that you render a car on a track, from the viewpoint of the stands. The car has its sponsors written on it (for instance). Assume that the scene is rendered at a resolution of, say, 1024x768, that we render four temporal samples per displayed frame, and that we output 50 frames per second to the monitor - the specific numbers matter little. The car takes a second to travel across the screen. As you render the temporal samples, the car moves an average of five pixels per sample, so at the filter-to-frame stage you average, for each pixel, the values you got in those four temporal samples. For the track and the rest of the static scenery, the four samples will be identical. The car, however, will have moved an average of five pixels per temporal sample, so the pixels where the car has been will differ in their values, and we compute an average value for the final display pixels from our four samples. It is easy to see that our car (and the background it passes over) will be "blurred", and that the text will be rendered beyond legibility, since we take samples that end up at random locations in the row of letters. This is how straightforward temporal oversampling works.
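     A toy 1-D version of the car example (hypothetical helper; a sharp two-pixel "letter stroke" shifted five pixels per temporal sample, then box-averaged as described above):

```python
def temporal_average(signal, shift_px, samples):
    """Average `samples` copies of a 1-D pixel row, each shifted `shift_px`
    further along (zero elsewhere), mimicking box-filtered temporal
    supersampling of an object moving across a static background."""
    width = len(signal) + shift_px * (samples - 1)
    acc = [0.0] * width
    for s in range(samples):
        off = s * shift_px
        for i, v in enumerate(signal):
            acc[off + i] += v / samples
    return acc

# The crisp stroke becomes four faint quarter-intensity copies 5 px apart:
print(temporal_average([1.0, 1.0], 5, 4))
```

The sharp feature is not smoothly smeared; it is replicated at reduced intensity, which is why fine text becomes illegible rather than softly blurred.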

    Unfortunately, not only does it carry a similar rendering overhead as supersampling spatial antialiasing, but there are concrete reasons why temporal antialiasing won't be as effective filling its intended purpose. I'll touch on that below.

    But first lets look at the consequences of the camera=viewer fallacy.

    The distance between the temporal samples of the car above will be proportional to its angular velocity within the camera's field of view, and the temporal samples of the background will superimpose perfectly. Now, assume that instead of staring stupidly straight ahead, the human observer does what is natural and follows the car with his eyes/head. The car is now fixed in the viewer's field of view, whereas the track and background move. Note that since the track and background temporal samples overlapped, the viewer will now perceive them moving in exactly discrete 50 Hz steps within his field of view, without their having been affected at all by our temporal antialiasing. Instead, what was affected was the car that he keeps centered in his field of view. The image of the car is now formed from four superimposed temporal samples, which is not necessarily particularly natural looking. The legibility of the line of text that was sampled at four different positions and then averaged together is an example.(*)

    The above should be crystal clear.
    In my opinion the consequences of the camera=viewer fallacy is enough, in and of itself, to disqualify temporal antialiasing for interactive rendering purposes.

    But there are other problems as well.
    Temporal antialiasing is promoted as a way of getting rid of the sharp "ghost images" that are caused by fast movement or turns. Let's take an artificial example first: a pixel-wide bar of light moving across our 1024x768 screen in one second. If we render our 200 temporal samples directly to the screen, we will get 200 lines drawn five pixels apart. Now instead average four of these samples together, and display these averages at 50 fps. You will get exactly the same 200 lines on screen, only displayed four at a time at 50 Hz, at one fourth the intensity of our original rendering. Not exactly much of an improvement.
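     The bar-of-light argument can be verified with a small sketch (hypothetical helper; each subframe draws a one-pixel line that has advanced five pixels, and groups of four subframes are averaged into displayed frames):

```python
def rendered_lines(total_samples, px_per_sample, group):
    """Return one {position: intensity} dict per displayed frame after
    box-averaging `group` subframes at a time. Each subframe contributes a
    1-px line whose intensity is divided by the group size by the averaging."""
    frames = []
    for f in range(total_samples // group):
        frame = {}
        for s in range(group):
            pos = (f * group + s) * px_per_sample
            frame[pos] = 1.0 / group  # averaging scales intensity by 1/group
        frames.append(frame)
    return frames

frames = rendered_lines(200, 5, 4)
print(len(frames))   # 50 displayed frames
print(frames[0])     # {0: 0.25, 5: 0.25, 10: 0.25, 15: 0.25}
```

All 200 line positions still appear on screen; they are merely shown four at a time at quarter intensity, exactly as argued above.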
    As usual, the effect is most apparent in high-contrast areas. Consider rendering a scene where we are driving a car along a road lit by street lights. The street lights continuously whoosh by to the sides, and they will exhibit exactly the same behaviour as the single-line example above. The same is true for any light/dark edge. Temporal antialiasing works better when the movement is small relative to the object size, or when the contrast between object and background is small. But that was not where we needed it in the first place: temporal supersampling is least effective in the areas where a solution is most needed. (Nor does it solve Simon's propeller blade example. Whether you sample at 200Hz and display the average at 50, or display your samples directly at 200 Hz, doesn't really affect the phenomenon.)

    Furthermore, it is sometimes claimed that temporal antialiasing would allow a significantly lower displayed framerate, but I'm sceptical. The eye-motor loop is more sensitive to responsiveness than the eye alone. Control is perceived as choppy when framerates are low, and "grainy", for lack of a better term, when framerates are higher but still not ideal. I fail to see how (apart from its other problems) temporal antialiasing would make perceived control all that much better.

    Summing up, we pay a huge rendering price to get rid of a pest in our rendering, but only manage to ease the less painful of the symptoms and get afflicted with cholera in the process. And that is still assuming that it doesn't come at a cost in actual displayed fps, which seems impossible for the foreseeable future.

    It just doesn't make sense, no matter how in vogue "Cinematic Rendering" is.

    Entropy

    (*) That the eye-brain combo may be able to make better sense of it than could be expected given sufficient time to gather data is beside the point. And remember - this is all about fast moving object where there is little time for such integration.

    PS. The situation is somewhat different in actual rendering for cinematic display since there your framerate is locked at a very low value. You don't have to worry about control aspects, and the director decides what's important, and leads the eye of the viewer.
    Blurring fast moving objects helps reduce the "artificial" impression of computer generated content and artifacts of traditional film can help create a more "real" impression. Cinematic rendering is a different ballgame.
     
  17. ddlink

    Newcomer

    Joined:
    Jun 28, 2002
    Messages:
    3
    Likes Received:
    0
    Correction: it adds up to one frame of lag to your mouse input and removes tearing. I guess some people prefer one or the other.
     
  18. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,563
    Likes Received:
    171
    Location:
    In the Island of Sodor, where the steam trains lie
    Entropy
    I think you are making a couple of bad assumptions here:

    1) You are confusing a (rather poor) method of temporal antialiasing, one that is not sampling anywhere near the Nyquist limit, with temporal AA in general.

    2) WRT the "viewer rather than the camera" following an object, you are mixing in interface/feedback issues. In an ideal situation, the camera is the viewer. Then there would be no problem with the right objects being 'blurred'. Perhaps one way to work around the problem would be to use a narrower field of view. In this situation the camera would have to track the moving target or else it would disappear off-screen :)
     
  19. hughJ

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    861
    Likes Received:
    417
    seeing helicopter rotors made of a spinning texture definitely opens one's eyes..

    I suppose eventually we'll have to use some form of advanced motion blurring.. since our monitors certainly can't keep up with the increasing resolutions and maintain even higher refresh rates..

    similar to how 5 years ago we may have assumed that increasing screen resolution would simply be enough to make a less pixelated image.. now we have a healthy mix of "high" resolutions as well as assisted filtering..

    wonder how far off it'll be before we'd see something like this?
     
  20. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,360
    Likes Received:
    1,377
    Well, I had to take an example, didn't I. So I took one which was close to how supersampling spatial antialiasing works so that it would be easy for people to understand the principle. Never said it was the only way, nor particularly good. In fact I slammed it pretty hard. ;)

    :) You are describing TV.
    But I'm not confusing anything, I'm afraid.

    The ideal situation doesn't exist. Period.
    I'll even make the claim that having your head in a vice and your eyes staring unblinkingly straight ahead, just to comply with the rendering camera, isn't a reasonable ideal in the first place.

    Take my example above, but use 5 displayed fps and 100 temporal samples per frame, and the problem should be even more obvious to your inner eye. I just chose numbers people would feel comfortable with. The problem is there no matter what numbers you choose (or method). If you have a higher displayed fps, it's just less obvious that you would have been just as well or better off without temporal AA in the first place.

    Temporal antialiasing is so similar to the spatial kind conceptually that it takes a step back to see that the visual consequences are not. That the camera is distinct from the viewer is not the only difference, but it is a killer one, IMO, and since everyone thinks only about the camera when rendering, from their first textbook steps into gfx and onwards, it sits right in the rendering tradition's blind spot.

    Entropy
     