Chalnoth said:
davepermen said:
oh, and.. if they are cpu bound.. how would a loop that rerenders frames to get higher quality possibly be useful then? exactly: to use the otherwise wasted gpu power!
Rerendering the same frames won't give you motion blur.
And I never said useless. I'm just saying there are better things to do with the available performance. Now, once those things are taken for granted (long shaders, high resolution, high-quality AA for both edges and complex shaders, high-quality physics and AI), then developers should start thinking about making good, high-quality motion blur. But I'm not sure temporal supersampling or temporal dithering would be the way to go.
i didn't say the SAME frames..
i know how motion blur works...
what i'm talking about is that you can build the biggest high-end game for TODAY'S hardware, with big shaders and highly detailed geometry, that runs at lowest settings TODAY on a gf7. then in a few years we play it at maxed-out settings. and a few years after that? there will NOT be any further gain in quality from an ever higher-end gpu, just more useless frames.
the idea is instead to build an engine that even then can still scale up.
what use are huge shaders, hdr, and all the fuss to all the games that were programmed years ago? NOTHING, because they can't use any of it.
but if they had a loop that did both spatial and temporal antialiasing, i could crank the quality of those features up even today, and get an even better image.
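something like this is all i mean: one outer loop that jitters the camera sub-pixel and steps the scene time, where the sample count is the only thing the hardware decides. a minimal sketch in C++, assuming a hypothetical renderScene() hook (not any real engine api):

```cpp
#include <cstdlib>
#include <vector>

struct Color { float r = 0, g = 0, b = 0; };

// hypothetical engine hook: renders the scene at time t with a
// sub-pixel camera offset (jx, jy) in [0,1), one Color per pixel.
void renderScene(float jx, float jy, float t, std::vector<Color>& out);

std::vector<Color> renderAccumulated(int width, int height,
                                     float frameStart, float shutter,
                                     int samples)            // the only knob
{
    std::vector<Color> accum(width * height);
    std::vector<Color> pass(width * height);

    for (int s = 0; s < samples; ++s) {
        float jx = std::rand() / (RAND_MAX + 1.0f);   // spatial: sub-pixel jitter
        float jy = std::rand() / (RAND_MAX + 1.0f);
        float t  = frameStart + shutter * (s + 0.5f) / samples;  // temporal step
        renderScene(jx, jy, t, pass);
        for (std::size_t i = 0; i < accum.size(); ++i) {
            accum[i].r += pass[i].r / samples;        // running average
            accum[i].g += pass[i].g / samples;
            accum[i].b += pass[i].b / samples;
        }
    }
    return accum;  // edge AA + shader AA + motion blur from one loop
}
```

the point: nothing in the assets changes, only `samples` grows with the hardware.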
shaders, textures, meshes, and all the rest get fixed at release time; their quality is fixed, and limited. the available computing power only gets fixed at play time.
a gf7 is wasted on q3. if q3 could scale up, with support for up-to-infinite antialiasing samples and up-to-infinite motion-blur samples, a gf7 would still give a small visual gain in q3 even today.
sure, it's not necessary. but it would definitely be better than nothing at all (which is what all games of today have). they all max out at a certain configuration, and past that point, going higher-end simply increases the framerate, which is useless. it would be interesting to have an engine that can keep scaling image quality up to match even higher hw.
you know, for a little background, i'm working on some raytracing software. i know that on my lone cpu i can't expect much.. low res, low quality, far too few samples. still, i put in all those features: it scales to higher resolutions, to finer detail, in time for motion blur, to more correct gi, etc... i know all those features are useless on my cpu.
but there is a back end that can render on a network of cpus. once it's written out, and once .NET 2.0 is out of beta, i can distribute it over the company's network and render on dozens of clients at the same time.
suddenly, all those features aren't useless anymore. suddenly, i just get a much better image at much higher quality.
that's what a good engine should be capable of, too. no matter how fast the components are: if i can get something even faster, image quality should get even higher (even if it's just a bit).
motion blur, aa, resolution, soft-shadow samples, relief-map max iteration count and step size, dynamic texture resolutions (reflection cubemaps etc.): all variables that should scale dynamically depending on performance (see the sketch below).
why shouldn't they?
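to show how little it takes, a rough sketch: one quality scalar, nudged every frame by the measured frame time, and every knob derived from it. all names and tuning curves here are made up for illustration:

```cpp
// all names and numbers are invented for illustration only;
// a real engine would tune each curve per feature.
struct Quality {
    int aaSamples;         // spatial AA samples
    int blurSamples;       // temporal / motion-blur samples
    int shadowSamples;     // soft-shadow samples
    int reliefMaxSteps;    // relief-map iteration cap
    int cubemapSize;       // reflection cubemap resolution
};

// one scalar q in [0,1] drives everything
Quality fromScale(float q)
{
    Quality s;
    s.aaSamples      = 1 + static_cast<int>(15 * q);   // 1..16
    s.blurSamples    = 1 + static_cast<int>(15 * q);
    s.shadowSamples  = 1 + static_cast<int>(31 * q);   // 1..32
    s.reliefMaxSteps = 8 + static_cast<int>(56 * q);   // 8..64
    s.cubemapSize    = 64 << static_cast<int>(4 * q);  // 64..1024
    return s;
}

// called once per frame: nudge q toward the frame budget, so faster
// hardware converges to higher quality automatically.
float adaptScale(float q, float frameMs, float targetMs = 16.7f)
{
    if (frameMs > targetMs * 1.1f) q -= 0.02f;  // too slow: back off
    if (frameMs < targetMs * 0.9f) q += 0.01f;  // headroom: scale up
    return q < 0.0f ? 0.0f : (q > 1.0f ? 1.0f : q);
}
```

the structure stays this small no matter how many knobs you hang off the scalar.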
oh, and no, it's definitely NOT hard to implement motion blur that automatically accumulates frames until the next screen refresh. given the amount of code needed, it should be standard in EVERY engine.
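for illustration, a sketch of that accumulate-until-refresh loop, assuming hypothetical renderSubFrame()/presentAverage() hooks into an accumulation buffer; the timing logic really is the whole trick:

```cpp
#include <chrono>
#include <cstdlib>

// hypothetical engine hooks:
void renderSubFrame(float t);   // adds one pass into an accumulation
                                // buffer at scene time t
void presentAverage(int n);     // divides the buffer by n and swaps

void frameLoop(float frameStart, float frameDt, float refreshMs)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now()
        + std::chrono::microseconds(
              static_cast<long long>(refreshMs * 1000.0f));

    int n = 0;
    do {
        // pick a random time inside the shutter interval, since the
        // total sample count isn't known up front
        float t = frameStart + frameDt * (std::rand() / (RAND_MAX + 1.0f));
        renderSubFrame(t);
        ++n;
    } while (clock::now() < deadline);  // more gpu = more sub-frames

    presentAverage(n);  // hardware-scaled motion blur
}
```

more gpu headroom automatically means more sub-frames per refresh, i.e. smoother blur, with no asset changes at all.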