Can 20 fps be smoothly produced on interlaced displays? *spawn

No. 3:2 pulldown shows frames for different durations: frame 1 is shown for 3/60ths of a second, and frame 2 for 2/60ths. That's forcing 24 fps out of 60 Hz intervals. 20 fps is achieved by showing every frame for 3/60ths of a second. Compositing two different images into the two fields of a frame leads to interlacing artefacts, not motion judder, and you have that problem with 60 fps material too. 20 fps has interlacing artefacts like 60 fps, and a lower framerate, but no motion issues or other visual problems. The end result is uniform motion and as clean an image as you get with 60 fps.
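To make the cadence difference concrete, here's a quick Python sketch (my own illustration, nothing from any actual hardware) that prints how many 60 Hz fields each source frame occupies under the two schemes:

[code]
# Sketch: field cadence of 3:2 pulldown (24 fps) vs. an even 3:3 cadence
# (20 fps) on a 60 Hz field rate. Purely illustrative.

def cadence(pattern, frames=6):
    """Yield (frame_index, fields_shown) following a repeating cadence pattern."""
    for i in range(frames):
        yield i, pattern[i % len(pattern)]

print("24 fps via 3:2 pulldown (uneven):")
for frame, fields in cadence([3, 2]):
    print(f"  frame {frame}: {fields} fields = {fields}/60 s")

print("20 fps, integer cadence (even):")
for frame, fields in cadence([3]):
    print(f"  frame {frame}: {fields} fields = {fields}/60 s")
[/code]

Every frame in the 20 fps case gets exactly 3/60ths of a second; the pulldown case alternates 3/60 and 2/60, which is where the judder comes from.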

Yup. Fields and frame rate are much misunderstood.

3:2 pulldown is horror. The horror is that once you notice it, you can't unsee it. Like frameskipping on poor PAL conversions (e.g. a 60 fps game skipping every sixth frame, leading to rapid-fire micro-jerks!).
 
At 20 fps, you see the first half of the first frame, then the other half of the first frame, then the first half of the first frame again, and then the second half of the next frame.

The first two fields are from the same framebuffer data (i.e. a still, coherent image); the next two are from the first frame and some different data interlaced (a frame with interlace "motion blur").

That's how I mean it's like 3:2 pulldown. Not exactly like it, but similar in nature.
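Here's roughly what I mean as a Python sketch (purely illustrative; it assumes three fields per frame and ignores which field carries the even or odd lines first):

[code]
# Sketch: which framebuffer each 60 Hz field samples at 20 fps (3 fields per
# frame), and whether each pair of fields on screen is "clean" or mixed.

FIELDS = 12
for pair in range(FIELDS // 2):
    even_field, odd_field = 2 * pair, 2 * pair + 1
    even_buf = even_field // 3   # framebuffer the even field came from
    odd_buf = odd_field // 3     # framebuffer the odd field came from
    kind = "clean" if even_buf == odd_buf else "mixed (combing)"
    print(f"display pass {pair}: even lines from frame {even_buf}, "
          f"odd lines from frame {odd_buf} -> {kind}")
[/code]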

edit: Added a clarifying word.
 
Not really. And if you feel interlaced frames aren't a smooth image, then 60 fps never was, because no single displayed 'frame' comes from one framebuffer. That'd leave only 30 fps (or 15 fps) as a 'smooth image'. It's really wrong to think of interlaced displays as needing two passes per image. Think of it as 60 individual frames. On a CRT you only ever saw one field at a time (actually only a fraction of it), and it was persistence of vision that made it look like a full image. NTSC was 60 frames per second, each frame 240 pixels high, alternating the vertical offset one line at a time. As such, 20 fps on an interlaced screen was no different to 20 fps on a progressive-scan 60 fps screen.
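As a throwaway Python sketch of that mental model (the numbers are illustrative, not exact NTSC timing):

[code]
# Sketch: each 60 Hz "frame" is 240 visible lines, with alternate fields
# offset vertically by one line. Persistence of vision does the rest.

for field in range(4):
    offset = field % 2                          # even fields: lines 0,2,4...; odd: 1,3,5...
    lines = [offset + 2 * n for n in range(3)]  # first few scanlines of this field
    print(f"field {field}: draws lines {lines} ... (240 lines total)")
[/code]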
 
Yeah, I know all that. My main point is that jumping from one technique or style of showing an animated picture to another, every other complete frame, is something the human visual system is very well attuned to noticing.
I'd much rather have a solid 15 than what you describe as "20".

How sure are you, anyway, about the 20 fps of OoT?
It's very easy to get a circle going of people who reference each other on the web, with the origin being some uncertain stray remark, or something else entirely.
The 20 fps may turn out to be just some internal cap in the rendering engine, put there for different purposes and noticed by people perusing the code or running an emulator.
 
What I balked at was the "any" part of "any integer".
But it's true. You can divide the 60 Hz refresh rate into evenly paced frames with any integer divisor. It might not be smoothly animated, because 600 is an integer after all, but if each update comes at the same interval as the previous one, you have consistently paced refreshes.
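In sketch form (Python, just to illustrate the arithmetic):

[code]
# Any integer divisor of the 60 Hz field rate gives evenly paced updates.
# Divisor 3 -> 20 fps; divisor 600 -> 0.1 fps. Both perfectly regular,
# only one of them smooth.

for divisor in (1, 2, 3, 4, 600):
    fps = 60 / divisor
    print(f"divide by {divisor}: a new frame every {divisor}/60 s = {fps} fps")
[/code]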
The first two fields are from the same framebuffer data (i.e. a still, coherent image); the next two are from the first frame and some different data interlaced (a frame with interlace "motion blur").

That's how I mean it's like 3:2 pulldown. Not exactly like it, but similar in nature.

edit: Added a clarifying word.
It isn't at all. 3:2 pulldown adds frames to lower-framerate video at a non-integer rate inconsistent with the source, to make it consistent with the display refresh. What we are talking about here is displaying the image from a single framebuffer for multiple screen refreshes, in integer multiples, to conform with the display refresh.

On a 60 Hz field-updated display like an NTSC TV, every field update is always only half the image anyway, so every "frame" the way you describe it is affected by the motion blur you are describing. Think of it like this: at 60 fps every field refresh is from a new frame, so every two consecutive field refreshes are constructed from two different framebuffers, and you get what you are describing as motion blur on every refresh. At 30 fps every two field refreshes are constructed from one set of framebuffer data, but then the next field update is from a new framebuffer, so one out of every two refreshes has this motion blur. At 20 fps the same framebuffer is used for three field updates and the fourth update pulls from a new buffer, so one out of every three refreshes has it.

The point is, the motion blur you are saying is a problem is less common at lower framerates, because more fields are pulled from a single buffer.
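A little Python sketch of that counting, purely illustrative:

[code]
# Mark the field refreshes where a new framebuffer first appears (the
# transition where the combing/"motion blur" shows up). Game at 60/n fps.

for n, fps in ((1, 60), (2, 30), (3, 20), (4, 15)):
    marks = "".join("N" if f % n == 0 else "." for f in range(12))
    print(f"{fps:>2} fps: fields 0-11 -> {marks}  (new buffer on 1 in every {n} refreshes)")
[/code]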
 
I'd much rather have a solid 15 than what you describe as "20".

That would bring no benefit in terms of the way an interlaced TV displayed it, only an even more jerky, even more gameplay-hurting frame rate.

You would still be going from one field displaying the old frame to the next field displaying the new one when the frame changes. You cannot get away from that. Only now, when the frame changes, the difference between fields is even greater.
 
Yeah, I know all that. My main point is that jumping from one technique or style of showing an animated picture to another, every other complete frame, is something the human visual system is very well attuned to noticing.
I'd much rather have a solid 15 than what you describe as "20".
It still would happen at 15: four fields from one buffer, and the fifth field refresh would be from a new one, so when combined with the fourth you'd have your issue.

Edit- Function beat me to it. More functional and faster.
 
The first two fields are from the same framebuffer data (i.e. a still, coherent image); the next two are from the first frame and some different data interlaced (a frame with interlace "motion blur").
So basically it has the same problems as when pulldown-converting 24 fps film to NTSC (some frames are static images and others are composites of two interlaced fields).
On an interlaced display, the data from one frame to the next will interlace no matter what. Having a balanced even-odd distribution of fields in a frame has no effect on this.

If you play 30fps video on a 60Hz interlace display, half of the time a field from one frame will come immediately following a field from the same frame. But the other half of the time, a field will follow a field from the previous frame. The eye can still see the fading lines from the previous field as the field from the new frame is being drawn, and the misalignment between the two images causes a "combing" effect.

Combing happens no matter what the framerate is. At 60fps, where every field is a new frame, you get 17ms-misaligned combs between every field. At 30fps, you get 33ms-misaligned combs between every other field. At 20fps, you get 50ms-misaligned combs between every third field. At 15fps, you get 67ms-misaligned combs between every fourth field. And so on.

The delta-T of the combs tends to have a more dramatic visual impact than how frequently they occur. 60fps video tends to look much cleaner on an interlaced display than lower framerates, because the combs are so small that they're usually not obvious.
Having the framerate be a power-of-2 division of the refresh rate doesn't help; combing artifacts aren't all that subtle in 30fps games.
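To put numbers on it, here's a quick Python illustration of the delta-T arithmetic above:

[code]
# Temporal misalignment between adjacent fields that straddle a frame
# change ("comb delta-T"), per framerate, on a 60 Hz field rate.

FIELD_PERIOD_MS = 1000 / 60

for n, fps in ((1, 60), (2, 30), (3, 20), (4, 15)):
    delta_t = n * FIELD_PERIOD_MS   # time between source frames
    print(f"{fps:>2} fps: comb every {n} field(s), "
          f"misaligned by {delta_t:.0f} ms of motion")
[/code]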
 
But it's true. You can divide the 60 Hz refresh rate into evenly paced frames with any integer divisor. It might not be smoothly animated, because 600 is an integer after all, but if each update comes at the same interval as the previous one, you have consistently paced refreshes.

But that was the whole discussion to start with.
Sure you can "do anything" with video if you don't care about quality.

It isn't at all. 3:2 pulldown adds frames to lower-framerate video at a non-integer rate inconsistent with the source, to make it consistent with the display refresh. What we are talking about here is displaying the image from a single framebuffer for multiple screen refreshes, in integer multiples, to conform with the display refresh.

It is, in the sense that it gives undue weight to some frames, while others are feathered out with interlacing.

On a 60 Hz field-updated display like an NTSC TV, every field update is always only half the image anyway, so every "frame" the way you describe it is affected by the motion blur you are describing. Think of it like this: at 60 fps every field refresh is from a new frame, so every two consecutive field refreshes are constructed from two different framebuffers, and you get what you are describing as motion blur on every refresh. At 30 fps every two field refreshes are constructed from one set of framebuffer data, but then the next field update is from a new framebuffer, so one out of every two refreshes has this motion blur. At 20 fps the same framebuffer is used for three field updates and the fourth update pulls from a new buffer, so one out of every three refreshes has it.

With 30 fps you will never have interlacing artifacts if you are going with odd and even fields.
It is of course not the interlacing artifacts I'm against as such. They are fine, and a calculated compromise: horizontal interlacing was chosen because most camera moves are horizontal, and combing artifacts will just "simulate" strong motion blur.
It's the intermittent presence of combing every other frame that will look wonky.

The point is, the motion blur you are saying is a problem is less common at lower framerates, because more fields are pulled from a single buffer.

If the speed is 15 or 30 there will be no interlace combing patterns. Which actually is not what you'd want for that low a framerate. But then you could use the trick of copying the previous frame alpha-blended on top of the new one, which will give you uniform blur in any direction.
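Something like this, as a rough Python sketch (frames as flat lists of brightness values; nothing to do with any real console's blending hardware):

[code]
# The blur trick: alpha-blend the previous frame into the new one before
# scanout, giving uniform motion blur in any direction of movement.

def blend_with_previous(new_frame, prev_frame, alpha=0.5):
    """Return new_frame with prev_frame blended in at the given alpha."""
    return [alpha * p + (1 - alpha) * n for p, n in zip(prev_frame, new_frame)]

prev = [0.0, 0.0, 1.0, 0.0]   # a bright pixel was here last frame
new  = [0.0, 0.0, 0.0, 1.0]   # ...and has moved one pixel this frame
print(blend_with_previous(new, prev))   # -> [0.0, 0.0, 0.5, 0.5]
[/code]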
 
It's the intermittent presence of combing every other frame that will look wonky.
That's the first time you've really made it about the interlacing. This line of discussion started with you saying you couldn't reach 20 fps.

Any integer? I think you need to go back to the fifties and explain your scheme to the guys who first attempted telecining film. ;-)
How would you make those 20 fps fit within a second of NTSC? How many sequential frames, or even fields, would you display the single frame for to make it fit without tearing?
Do you acknowledge now that 20 regularly paced frames is completely possible without tearing?
 
Of course I could have been clearer, if I had known how this discussion would play out. But hindsight is twenty-twenty.
I did mention 'sequential', which implies something (as in different frames and fields) changing over time in a regular sequence.
I do think I made myself very clear quickly though.
And mentioning fields is also making it "about interlacing".
 
I don't know that the artefacts would be particularly noticeable. I mean, it's not a problem with 60 fps material, so why would it be a problem at 20? It was a generation with dithering and flickering colours for things like shadows. A little interlaced fringing, somewhat smeared over by the CRT, wouldn't be at all out of place next to what gamers were experiencing elsewhere with their graphics.
 
That would bring no benefit in terms of the way an interlaced TV displayed it, only an even more jerky, even more gameplay-hurting frame rate.

You would still be going from one field displaying the old frame to the next field displaying the new one when the frame changes. You cannot get away from that. Only now, when the frame changes, the difference between fields is even greater.

With 15 fps you display one video game frame over four fields, or two NTSC frames. No interlacing artifacts.

I don't know that the artefacts would be particularly noticeable. I mean, it's not a problem with 60 fps material, so why would it be a problem at 20? It was a generation with dithering and flickering colours for things like shadows. A little interlaced fringing, somewhat smeared over by the CRT, wouldn't be at all out of place next to what gamers were experiencing elsewhere with their graphics.

The difference between shows recorded on film (24 fps) versus on tape (60 fields per second) was and is very noticeable at a glance on old CRTs.
It wasn't just the better colours and dynamic range of film, but also the radically different way in which it moved (it's the reason people accuse 60/48 fps movies of looking like soap operas).
Imagine alternating between the two. It is analogous to how 3:2 pulldown looks choppy.
As said, it's not the interlace artifacts as such I object to; it's the quality of motion they intermittently give to the moving image in this case.
 
If the speed is 15 or 30 there will be no interlace combing patterns.

The visible effect of two subsequent fields, generated from different frames, appearing "combed" is there at 60, 30, 20, 15 etc. It is highly noticeable at 30 fps if you're very close to a big enough screen with a moving camera.

It's caused by two subsequent fields from different frames. There's nothing magic about 30 or 15 fps that doesn't happen at 20.
 
And again, despite all that's been said about interlacing, it's irrelevant to Zelda or the N64, as those games' signals forced progressive.
What?
You mean on HD CRTs?
OoT is from before any meaningful penetration of progressive-scan-capable TV screens in the home.
 
OoT is from before any meaningful penetration of progressive-scan-capable TV screens in the home.
It is trivial to run a CRT as progressive. It's just a question of resetting the vertical retrace so you don't skew half a line per field.
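Roughly, in numbers (an illustrative Python snippet using the standard NTSC line count):

[code]
# 525/2 = 262.5 lines per field is what makes the fields interleave.
# Retrace after a whole number of lines instead, and every field lands
# on the same scanlines: progressive "240p".

interlaced = 525 / 2   # 262.5 lines/field -> half-line skew interleaves fields
progressive = 262      # whole number -> no skew, same lines every field

print(f"interlaced: {interlaced} lines/field (fields offset half a line)")
print(f"progressive 240p: {progressive} lines/field (every field overwrites the same lines)")
[/code]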

Cheers
 
The visible effect of two subsequent fields, generated from different frames, appearing "combed" is there at 60, 30, 20, 15 etc. It is highly noticeable at 30 fps if you're very close to a big enough screen with a moving camera.

It's caused by two subsequent fields from different frames. There's nothing magic about 30 or 15 fps that doesn't happen at 20.

Sure, you will have combing when a new field is drawn, but that is regularly spaced, coinciding with any change in the image.
Not like the 20 fps example, where there are two fields with no movement between them and then two with.
 
The difference between shows recorded on film (24 fps) versus on tape (60 fields per second) was and is very noticeable at a glance on old CRTs.
It wasn't just the better colours and dynamic range of film, but also the radically different way in which it moved (it's the reason people accuse 60/48 fps movies of looking like soap operas).
Imagine alternating between the two. It is analogous to how 3:2 pulldown looks choppy.
As said, it's not the interlace artifacts as such I object to; it's the quality of motion they intermittently give to the moving image in this case.
At this point I'm going to have to drop out of the conversation, because I don't understand what you're talking about. I'd have to see it in action for myself. All I know is I played plenty of games on CRTs and none of them had anything particularly more or less jarring. 50/60 fps was smoother in motion. Some games had inconsistent framerates. I can see nothing about evenly spaced frame changes being visually problematic, and apparently neither can anyone else in this discussion (;)), which isn't doing anything at all to further your original enquiry!
 