The Power Of Refresh Rates

Probably because CRTs, and until recently most low-rent LCD panels, used analog connections that made them act like CRTs. Not sure if DVI helps here or what, but yeah, the basic idea of having to wait for a repeat display of a single frame because the game is running slower than the refresh rate is annoying to us geeks.. lol

As for multiples of 10ms in your example, I was imagining that a NULL frame, flagged with a special vsync signal (or whatever indicates the beginning of a frame in DVI mode), wouldn't incur any delay time; the whole point being that the data doesn't need to be updated, so there's no point in waiting for it.
 
Interesting idea. A few concepts follow logically once you have decoupled screen refresh from a fixed frequency.

The primary one is that if the pixels are not locked to a forced refresh, then there wouldn't seem to be any reason for them to be locked together either. i.e., you shouldn't have to refresh all pixels at once, should you?

If not, then more ideas follow. To further relieve tearing, how about a buffer that stores frames and refreshes pixels in something like a sparse sampled or jittered pattern? Wouldn't that act as some form of temporal anti-aliasing? It might have the effect of blurring geometry and texture edges, but with the benefit of reducing the appearance of discrete steps or "jerks" (as I see them) of the screen.
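Something like this rough sketch is what I have in mind; the Frame layout and the one-in-N update ratio are just made up to illustrate the idea:

```cpp
// Hypothetical sketch: update only a jittered subset of pixels per refresh,
// pulling values from the most recently rendered frame. The names and the
// 1-in-stride update ratio are illustrative, not from any real driver or panel.
#include <cstdint>
#include <random>
#include <vector>

struct Frame { int w, h; std::vector<uint32_t> px; };

// Copy roughly 1 pixel in `stride` from the latest rendered frame into the
// displayed frame, choosing the subset pseudo-randomly each call so that
// every pixel gets refreshed over a few display periods.
void jitteredRefresh(const Frame& latest, Frame& displayed,
                     int stride, std::mt19937& rng)
{
    std::uniform_int_distribution<int> pick(0, stride - 1);
    for (size_t i = 0; i < displayed.px.size(); ++i) {
        if (pick(rng) == 0)               // ~1/stride of pixels this pass
            displayed.px[i] = latest.px[i];
    }
}
```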

I also have to ask a dumb question here... I haven't followed or thought about video card technology in a while. Is there any current form of time-based geometry compression being used, wherein if geometry does not change from one frame to another then certain steps of the rendering cycle are bypassed? Say the geometry is the same from frame 1 to 2, but texturing or lighting changes. Is triangle setup bypassed, with information pulled out of a buffer to be used again? I'm thinking no, because I can't off-hand think of an easy way to check whether geometry is unchanged, or at the least it might take more time than just setting up the geometry again. But what if both geometry and texturing/shading/lighting were the same for the two frames? Is the frame re-rendered, or do current GPUs detect that all information for that frame is identical and just resend the frame for the next refresh?

I'm thinking of a hypothetical example where there is a lot of complexity on screen, with most of it static and a small part dynamic. Say a billion triangles (or some other arbitrarily high number) with complex shading that create a static environment, and a relatively simple dynamic character like a rat crawling on the floor. If it takes 0.1 seconds to render that frame with current GPUs, would the framerate be 10fps regardless of whether the "camera" viewpoint was static (rat crawling at 10fps) or not (screen panning at 10fps)?

That would be similar to MPEG-style time compression, where pixels that are the same or similar from one frame to the next are not stored in duplicate, but only with a flag that says "draw for 10 frames" or similar, right? Well, if that were combined with LCD screens where refresh was decoupled, you could create a system in which the video card only output pixels that needed to be refreshed. The output rate could be, say, 180Hz or more (probably a multiple of the maximum screen refresh), with a buffer system that prevented any pixel from attempting to update more than the screen maximum of 60Hz or so. Perhaps even a blending operation for all pixel values rendered within a single refresh timeframe (two or three, or whatever).
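A very rough sketch of what that buffer system might look like; the types, the per-pixel timestamps, and the update-list approach are purely hypothetical:

```cpp
// Hypothetical sketch of "send only what changed, never faster than the panel
// can show it": diff against the last transmitted frame and skip pixels that
// were refreshed within the panel's minimum refresh interval. The PixelUpdate
// struct and the bookkeeping arrays are made up for illustration.
#include <cstdint>
#include <vector>

struct PixelUpdate { int x, y; uint32_t color; };

std::vector<PixelUpdate> collectUpdates(const std::vector<uint32_t>& current,
                                        std::vector<uint32_t>& lastSent,
                                        std::vector<double>& lastSentTime,
                                        int width, double now,
                                        double minInterval /* e.g. 1.0 / 60 */)
{
    std::vector<PixelUpdate> out;
    for (size_t i = 0; i < current.size(); ++i) {
        bool changed = current[i] != lastSent[i];
        bool tooSoon = (now - lastSentTime[i]) < minInterval;
        if (changed && !tooSoon) {
            out.push_back({int(i % width), int(i / width), current[i]});
            lastSent[i]     = current[i];
            lastSentTime[i] = now;
        }
    }
    return out;   // only these pixels would go over the link this cycle
}
```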

Anyway, just some rambling. Seems like a good opportunity to extend the "draw only what you need to draw" philosophy that works for hidden surface removal, MSAA, and the like.
 
While it would seem to me to be entirely possible with TFTs to only refresh a 'region of interest' rather than the whole frame, I would expect that such functionality would be more useful when rendering 2D GUIs than full-screen 3D applications.

I would suspect that the kind of stochastic/jittered update you suggest would result in disturbingly grainy rather than blurry images; pixels are much more sharply defined on TFTs than on CRTs.

AFAIK, there are not very many attempts made by modern drivers to do time-based geometry compression; some drivers will generate bounding boxes for static geometry in order to be able to reject large amounts of static geometry fast, or steer data arrays in or out of GPU memory based on their memory access/usage statistics, but that's about it. Detecting in the driver that the rendering calls for a current frame are exactly the same as for a previous frame AFAIK adds too much overhead to be considered worthwhile.

For the example with the rat, I suspect that you will get the same framerate whether you move the camera or the rat. I guess you could copy a rat-free version of the frame to a texture and then render that texture into subsequent frames, or modify APIs/drivers so that you can tell them that a region of the framebuffer shall be kept unchanged (which in turn could be used to keep that region from being refreshed on the TFT), but I don't expect that 3d drivers will ever make an effort to detect unchanging regions without some level of cooperation with the application.
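As a rough illustration of that caching idea (the Renderer interface here is invented; a real implementation would go through the API's render-to-texture path with a saved depth buffer):

```cpp
// Hypothetical sketch of caching the static scene: render the expensive,
// unchanging geometry once into an offscreen color+depth target, then each
// frame blit that and draw only the dynamic object on top. The Renderer
// interface is made up for illustration, not a real driver or API.
struct Renderer {
    virtual void renderStaticSceneTo(int cacheId) = 0;   // billion-triangle room
    virtual void blitCacheToBackbuffer(int cacheId) = 0; // restore color + depth
    virtual void renderDynamicObjects() = 0;             // the rat
    virtual void present() = 0;
    virtual ~Renderer() = default;
};

void frameLoop(Renderer& r, bool cameraMoved, bool& cacheValid, int cacheId)
{
    if (cameraMoved || !cacheValid) {
        r.renderStaticSceneTo(cacheId);   // pay the full cost only when needed
        cacheValid = true;
    }
    r.blitCacheToBackbuffer(cacheId);     // cheap when the camera is still
    r.renderDynamicObjects();             // depth-tested against the cached depth
    r.present();
}
```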
 
Bigus Dickus said:
If not, then more ideas follow. To further relieve tearing, how about a buffer that stores frames and refreshes pixels in something like a sparse sampled or jittered pattern? Wouldn't that act as some form of temporal anti-aliasing? It might have the effect of blurring geometry and texture edges, but with the benefit of reducing the appearance of discrete steps or "jerks" (as I see them) of the screen.
This is basically the same as a method of temporal antialiasing that I've seen described previously:
Render 10 subsequent frames.
Randomly compose a single frame out of 10% of each of the component frames.
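In rough code, the composition step could look something like this (just a sketch; the frame storage layout is assumed):

```cpp
// Minimal sketch of the composition step described above: build one output
// frame by taking each pixel from one of N recently rendered frames, chosen
// at random, so each source frame contributes roughly 1/N of the pixels.
#include <cstdint>
#include <random>
#include <vector>

std::vector<uint32_t> composeTemporal(
    const std::vector<std::vector<uint32_t>>& frames, std::mt19937& rng)
{
    std::uniform_int_distribution<size_t> pick(0, frames.size() - 1);
    std::vector<uint32_t> out(frames.front().size());
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = frames[pick(rng)][i];    // pixel i from a random source frame
    return out;
}
```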

So yeah, it should work, but it'd only look good with really high framerates. I somewhat doubt any current LCD could do this, though.
 
Interesting... thanks for the comments. After thinking about it more, I suppose the result would be rather grainy as opposed to blurry. Temporal AA would work as Chalnoth describes, but that's blending multiple frames to produce one frame. Refreshing pixels in a sparse pattern would just spread the existing judder, so you'd have fuzz instead of jerks. Not much better I guess! :)

Time-based compression seems like it should be explored, but then again, all of the current cases where it works well are ones where the compression is done offline, after the data is generated, to enhance playback speeds or reduce storage requirements, etc. I'm not sure how much application support it would require, but at the least I think the CPU would have to function as a real-time compressor, sending the new compressed geometry/lighting/shading/etc. information to the GPU. Whether the compression enhanced or reduced framerates would probably depend a lot on whether the game was CPU or GPU bound. If there was abundant CPU power, perhaps compression would be useful. Not sure how much static scenery occurs in modern games, though, so maybe that's why there hasn't been much development of those techniques.
 
Bigus Dickus said:
Interesting... thanks for the comments. After thinking about it more, I suppose the result would be rather grainy as opposed to blurry. Temporal AA would work as Chalnoth describes, but that's blending multiple frames to produce one frame. Refreshing pixels in a sparse pattern would just spread the existing judder, so you'd have fuzz instead of jerks. Not much better I guess! :)
Not if the framerate was much higher than the pixel delay.
 
it would be temporal dithering, and can look quite nice.. i used it in some raytracing apps to get faster animations (5fps or so, but it looked better to have 20fps with only about every 4th pixel updated than to have only 5 distinct frames.. definitely).


yeah, there is never any need to have more frames per second than screen refreshes. what _does_ make sense is to then give the user a choice:

idle
rerender


idle is great for notebook users. after one frame is drawn, the system idles until it can process the next one, thus using less battery while gaming.


rerender is great for PCs, as it can rerender the frame with new information and blend the results together, resulting in motion blur, or, if you want, higher quality antialiasing (jitter the existing camera position to get more samples per pixel), or a combination of both..

just imagine how q3 would look today if it had supported that feature. what's the max today? 700fps? on a tft, at 60 refreshes per second, this would mean about 11-12 renders per frame, which would allow for nice motion blur _and_ supersampled antialiasing, thus extremely high-quality images.

so if you want to code an engine that should scale in quality over the next ten or more years, as render power keeps growing, implementing such a feature would definitely help.
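roughly something like this, just as a sketch (the types and accumulate() are made up, not from any real engine):

```cpp
// Minimal sketch of the idea above: if the engine can render ~12 times per
// 60 Hz refresh, average those jittered sub-frames into the one frame that is
// actually displayed. The SubFrame layout is an assumption for illustration.
#include <vector>

struct SubFrame { std::vector<float> rgb; };  // linear color, 3 floats per pixel

// Average N sub-frames (each rendered at a slightly jittered camera position
// and simulation time) into the frame shown for one refresh.
SubFrame accumulate(const std::vector<SubFrame>& subs)
{
    SubFrame out{std::vector<float>(subs.front().rgb.size(), 0.0f)};
    for (const SubFrame& s : subs)
        for (size_t i = 0; i < s.rgb.size(); ++i)
            out.rgb[i] += s.rgb[i] / float(subs.size());
    return out;   // combined motion blur + supersampling for one refresh
}
```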
 
LCDs that refresh when told, rather than expecting a signal every 1/60th of a second, already exist. Generally they are in the mobile area where power consumption is a priority, as only sending the data when it changes is a big saving, especially on something like a phone where the display will not be changing for much of the time.

CC
 
davepermen said:
just imagine how q3 would look today if it had supported that feature. what's the max today? 700fps? on a tft, at 60 refreshes per second, this would mean about 11-12 renders per frame, which would allow for nice motion blur _and_ supersampled antialiasing, thus extremely high-quality images.

so if you want to code an engine that should scale in quality over the next ten or more years, as render power keeps growing, implementing such a feature would definitely help.
I don't really think so. I mean, when the game is designed, it doesn't make sense for any system to be capable of displaying motion-blurred frames (yet), since there are much more pressing concerns when it comes to the game's looks.

Edit: Well, at least not supersampling-like motion blur algorithms. Even temporal dithering can be horribly expensive: even if your rendering algorithm doesn't mind having only a few neighboring pixels rendered, you'll be recalculating the geometry many more times per displayed frame.

However, other methods of motion blur have already been described that use more analytic techniques for motion blur, though I've only ever seen such things used for cinematic effect. It'd be fun to see somebody implement an analytic approximation to motion blur to a full screen, to see how it looks.
 
i'm talking about implementing such systems instead of just rendering too many frames (the 700fps q3 gets today is useless, as there is no screen with such a refresh rate), and instead implementing a system that uses the additional time to do supersampling in time and space (antialiasing, motion blur).

of course, the day q3 went on sale, the effect wouldn't have been visible. but today, those systems could be usable (in all q3-engine-based games, actually), and result in high-fidelity images today.

it's just a better way to make an engine scale up with the higher-end hw of tomorrow.
 
Well, and I'm not seeing why it makes sense. I mean, it'd be fun as sort of a "proof of concept," but not useful.
 
why not?

games like doom3 got released with resource requirements that meant you had to play at 640x480 or similar on an ordinary system to be.. more or less smooth.

if you have one of the newest gpu's, you can play at high res.

does playing at higher res make sense? yes, because the image looks better. does adding more aa make sense? yes, because the image looks better. does adding motion blur (NOT the blurring when you get hit, but samples between the frames) make sense? yes, because it makes the image smoother without (!!) losing sharpness and richness.

doom3 can scale in resolution and with aa. not with motion blur. instead, in some years, it will just spit out 500-1000 fps. useful? not at all.. if it would instead USE those 500-1000 frames one day in the future, it could use them for even higher quality rendering (motion blur, higher aa, etc..).

it would definitely be a much more useful way to use the power of tomorrow on a game of today. it would mean that even today, a newer gpu would enhance the graphics of good old q3. why not? i'd prefer to play q3 at 60fps at 1280x1024 on my tft at home, with precise motion blur and say 32x aa (combined multi- and supersampling from engine and hw), than to still play it with 6x aa and dropped frames in between..

not that useful.
 
the idea is to make an engine still enhance the image quality of your game AFTER it got maxed out by the gpu with highest res, highest settings, everything..


i'd love to have outcast, for example, be much more scalable.. on today's high-end cpus, it would definitely rock at high res.. but that's another story..
 
davepermen said:
does playing at higher res make sense? yes, because the image looks better. does adding more aa make sense? yes, because the image looks better. does adding motion blur (NOT the blurring when you get hit, but samples between the frames) make sense? yes, because it makes the image smoother without (!!) losing sharpness and richness.
Which is exactly why I say it's not necessary. Specifically, we already have scaling in other areas that make more of a difference. And, what's more, framerate always has been, and will continue to be, mostly CPU-bound. That is to say, you won't get 300 FPS unless your CPU allows it.

That said, CPUs are going to progress much more slowly in single-threaded performance than they have in the past. Instead, parallelism is going to increase. This won't increase framerates for old games. Newer games that make use of parallelism will also likely only target a limited number of CPUs, so when the number of cores per die increases again, those games will again get left behind. So I just don't see framerates increasing in the future by the dramatic amounts that, say, Quake3's has.
 
anyways, q3 can run on today's hw at up to 700 fps (on an overclocked fx55 or 57, it was.. :D).. and it _could_ use that horsepower to get highest-quality 60fps instead.

oh, and.. if they are cpu bound.. how would a loop that rerenders frames to get higher quality possibly be useful then? exactly: to use the otherwise wasted gpu power!

hm..

is higher quality useless? no, but computations that never get to a result are. and dropped frames (if you play q3 now, you drop hundreds of frames per second!), rendered for NOTHING, are useless.

oh, and, in theory, q3 has multicore support built in.. it just never got enabled, or something... so it could scale even more..



the idea is just to use every resource that can, one day, be big enough to get useful.


i don't see that as useless. just as i don't see it as useless to, on the other side, have support for not rendering more than the screen refresh rate and idling the rest of the time. this would help notebooks a great deal.

but letting frames per second grow to infinity just to.. drop them away again anyways, _THAT_ is simply useless. now whether you put it into the engine or into the driver, to get more out of the x00 frames of old games, that's up to whoever writes the code.. motion blur from oversampled frames could be implemented in the driver, too.. ultra-high antialiasing, too..

you can scale at a lot more points in an engine, and make some features variable instead of fixed.. that way games can scale beyond what is imaginable in quality.. even if, one day, a better solution possibly exists..

say soft shadows.. implement a simple jitter method to generate soft shadows. yes, it runs at 1fps today.. but wait some years, and it possibly runs smooth.. there'll still be a lot of games around based on the engine, and people will like to see the graphics scale on their newest cards.
 
davepermen said:
oh, and.. if they are cpu bound.. how would a loop that rerenders frames to get higher quality possibly be useful then? exactly: to use the otherwise wasted gpu power!
Rerendering the same frames won't give you motion blur.

And I never said useless. I'm just saying there are better things to do with the available performance. Now, once those things are taken for granted (long shaders, high resolution, high-quality AA for both edges and complex shaders, high-quality physics and AI), then developers should start thinking about making good, high-quality motion blur. But I'm not sure temporal supersampling or temporal dithering would be the way to go.
 
Chalnoth said:
davepermen said:
oh, and.. if they are cpu bound.. how would a loop that rerenders frames to get higher quality possibly be useful then? exactly: to use the otherwise wasted gpu power!
Rerendering the same frames won't give you motion blur.

And I never said useless. I'm just saying there are better things to do with the available performance. Now, once those things are taken for granted (long shaders, high resolution, high-quality AA for both edges and complex shaders, high-quality physics and AI), then developers should start thinking about making good, high-quality motion blur. But I'm not sure temporal supersampling or temporal dithering would be the way to go.

i haven't said the SAME frames.. :D i know how motionblur works... :rolleyes:

what i'm talking about is that you can make the biggest high-end games for TODAY'S hardware, with big shaders and highly detailed geometry, that work at lowest settings TODAY on a gf7. that just means that in some years we play them at maxed-out settings. and some years later? there will NOT be any gain in quality with an ever higher end gpu, just more useless frames.

the idea is to build an engine instead that even then can still scale up.


what use do huge shaders, hdr, and all the fuss have for all the games that were programmed years ago? NONE, as they can't use them.
but if they had a loop that did both spatial and temporal antialiasing, i could even today crank the quality of those features up higher, and get an even better image.

shaders, textures, meshes, and all, get fixed at release time. the quality of those is fixed, and limited. calculation power only gets fixed at the time you play.

a gf7 is useless for playing q3. if q3 could scale up, supporting up-to-infinite antialiasing samples and up-to-infinite motion-blur samples, a gf7 would provide a little visual gain in q3 even today.



sure, it's not necessary. but it would definitely be better than nothing at all (which is what all games of today have). they all max out at a certain configuration, and at that point, going higher end simply increases framerate, which is useless. it would be interesting to have an engine that can scale image quality even higher to match even higher hw.



you know, for a little background, i'm working on some raytracing software. i know that on my lonely cpu, i can't expect much.. low res, low quality, far too few samples. still, i put in all those features, make it scale to higher resolutions, to finer detail, scale in time for motion blur, scale for more correct gi, etc... i know that all those features are useless on my cpu.

but there is that back end that can render on a cpu network. once it's written out, and once .NET 2.0 is out of beta, i can distribute it over the company's network and render over dozens of clients at the same time.

suddenly, all those features aren't useless anymore. suddenly, i just get a much better image at much higher quality.

that's what a good engine should be capable of, too. no matter how fast the components are, if i can get something even faster, image quality should get even higher (even if it's just a bit).

motion blur, aa, resolution, soft-shadow samples, relief-map iteration max count and step size, dynamic texture resolutions (reflection cubemaps etc.), all variables that should be dynamically scalable depending on performance.

why shouldn't they?
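very roughly, something like this (the knob names and thresholds are just examples, not from any particular engine):

```cpp
// Hypothetical sketch of scaling those knobs at runtime: measure how far the
// last frame was under the refresh budget and raise or lower sample counts to
// soak up the headroom. The QualityKnobs fields mirror the list above and are
// invented names.
struct QualityKnobs {
    int motionBlurSamples = 1;
    int aaSamples         = 1;
    int softShadowSamples = 1;
};

void autoScale(QualityKnobs& q, double lastFrameMs, double refreshMs /* ~16.7 */)
{
    if (lastFrameMs < 0.5 * refreshMs) {          // lots of headroom: spend it
        q.motionBlurSamples += 1;
        q.aaSamples         += 1;
        q.softShadowSamples += 1;
    } else if (lastFrameMs > 0.9 * refreshMs) {   // close to missing a refresh
        if (q.motionBlurSamples > 1) q.motionBlurSamples -= 1;
        if (q.aaSamples         > 1) q.aaSamples         -= 1;
        if (q.softShadowSamples > 1) q.softShadowSamples -= 1;
    }
}
```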

oh, and, no, it's definitely NOT hard to implement motion blur that automatically accumulates frames till the next screen refresh. based on the amount of code needed, it should be standard in EVERY engine.
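a rough sketch of that accumulate-till-the-next-refresh loop (renderSubFrame and present are placeholders for whatever the engine actually does):

```cpp
// Sketch of "accumulate frames till the next screen refresh": keep rendering
// jittered sub-frames and summing them until the refresh deadline, then divide
// by the count and present. The callbacks and buffer layout are assumptions.
#include <algorithm>
#include <chrono>
#include <vector>

void renderOneRefresh(std::vector<float>& accum,        // running RGB sum
                      void (*renderSubFrame)(std::vector<float>&),
                      void (*present)(const std::vector<float>&),
                      double refreshSeconds /* e.g. 1.0 / 60.0 */)
{
    using clock = std::chrono::steady_clock;
    auto deadline = clock::now() + std::chrono::duration<double>(refreshSeconds);

    std::fill(accum.begin(), accum.end(), 0.0f);
    std::vector<float> sub(accum.size());
    int count = 0;
    do {                                    // always render at least once
        renderSubFrame(sub);
        for (size_t i = 0; i < accum.size(); ++i) accum[i] += sub[i];
        ++count;
    } while (clock::now() < deadline);

    for (float& v : accum) v /= count;      // average = motion blur + AA
    present(accum);
}
```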
 
davepermen said:
i haven't said the SAME frames.. :D i know how motionblur works... :rolleyes:
Which is why you have to have faster CPUs for motion blur to make sense. And it'll now take even longer for CPUs to make performance jumps similar to those of the past.....

what i'm talking about is that you can make the biggest high-end games for TODAY'S hardware, with big shaders and highly detailed geometry, that work at lowest settings TODAY on a gf7. that just means that in some years we play them at maxed-out settings. and some years later? there will NOT be any gain in quality with an ever higher end gpu, just more useless frames.
At release, most games today that really push technology just don't run at good framerates at the highest resolutions and settings with the very best hardware money can buy. In a few years' time, who would want to further improve visual quality (via motion blur) for that old game anyway, except as an academic exercise?
 
when mentioning the cpu bottleneck and rendering more frames, i was actually thinking about the antialiasing :rolleyes: sorry, must've been confusing.

still, if you want a more complex way to solve it, just lerp between the states of two frames.. this isn't that much more work for the cpu (no additional physics and ai involved), and it lets you render the motion-blur frames easily.
as a lot of games have a fixed physics and ai rate anyways, they should support the lerping anyways, to allow rendering more frames.. so then again, motion blur can be done easily and is about free on the cpu side. not completely free, as supersampling aa would be, of course.
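roughly like this, as a sketch (the state layout is a made-up minimal example):

```cpp
// Sketch of the lerp idea: the simulation produces states at a fixed rate,
// and the renderer interpolates object positions between the two most recent
// states for each motion-blur sub-frame, so no extra physics/AI work is done.
// The State/ObjectState layout is invented for illustration.
#include <vector>

struct ObjectState { float x, y, z; };
struct State { std::vector<ObjectState> objects; };

// t in [0,1]: 0 = previous simulation tick, 1 = latest simulation tick.
State lerpStates(const State& prev, const State& next, float t)
{
    State out;
    out.objects.resize(prev.objects.size());
    for (size_t i = 0; i < prev.objects.size(); ++i) {
        out.objects[i].x = prev.objects[i].x + t * (next.objects[i].x - prev.objects[i].x);
        out.objects[i].y = prev.objects[i].y + t * (next.objects[i].y - prev.objects[i].y);
        out.objects[i].z = prev.objects[i].z + t * (next.objects[i].z - prev.objects[i].z);
    }
    return out;   // render this interpolated state as one sub-frame
}
```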
 
Chalnoth said:
At release, most games today that really push technology just don't run at good framerates at the highest resolutions and settings with the very best hardware money can buy. In a few years' time, who would want to further improve visual quality (via motion blur) for that old game anyway, except as an academic exercise?

I would like to.. i'd prefer to play q3 with motion blur and the highest aa possible at 60fps on my 1280x1024 tft. more useful than the 6x aa i have now, with some hundred fps, and no motion blur.

I play quite a few old games, and every bit of enhancement would make them look just a bit better. Would anyone say no to this? It wouldn't HURT, at least.

one thing that comes to mind is good old cs.. the friends around here had everything cranked up, played all night, and still had far too many frames. that could have been put to better use.

it isn't much work to build it into an engine, and it wouldn't hurt anyone.

and with today's view of SLI, it could mean it would be possible to render at highest settings, say, on a 4x SLI of next-gen cards in one year, with motion blur and such. just for the kick, for those who can pay for that :D
 