Using FFT to determine game sharpness as measured by high frequency detail *spawn

I agree with you, but what you say holds for every single technical aspect of game graphics... the perception of the final result is subjective.

But this is not the point of the console technology subforum, I guess... here only the tech counts.

I think it is an interesting attempt at finding a way to judge the IQ of games.

I am wondering if it is better to look at the 1D spectrum of the FFT, i.e. convert the 2D plot into a 1D line plot... this is actually what people do to investigate the scales in a turbulent flow field.

I hope I find time this weekend to try this out...
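
A minimal sketch of that 1D reduction, in MATLAB to match the code used later in the thread (the file name is a placeholder and the binning choices are arbitrary): shells of roughly constant radius in the 2D spectrum are averaged into a single line plot, the same trick used for turbulence energy spectra.

Code:
% Radially averaged power spectrum of a screenshot (illustrative sketch).
img = im2double(rgb2gray(imread('screenshot.png')));   % placeholder RGB image
F   = fftshift(fft2(img));                             % centre the zero frequency
P   = abs(F).^2;                                       % 2D power spectrum

[h, w] = size(P);
[X, Y] = meshgrid(1:w, 1:h);
cx = floor(w/2) + 1;  cy = floor(h/2) + 1;
R  = round(sqrt((X - cx).^2 + (Y - cy).^2));           % radial frequency bin per pixel

maxR   = min(cx, cy) - 1;
mask   = R <= maxR;
radial = accumarray(R(mask) + 1, P(mask), [maxR + 1, 1], @mean);  % bin 1 = DC

loglog(1:maxR, radial(2:end));                         % 1D spectrum, DC dropped
xlabel('spatial frequency (cycles per image)');
ylabel('mean power');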
 
I think your test is even more flawed than Durante's. A sphere with only one color gradient applied? How is that representative of any game?

Do you have any idea what is going on in that image? Let's put it this way: yes, the geometry is simple, but the rendering is more advanced than anything a game can do, and not just now but for generations.

The sphere test is using a more physically plausible shader, a real area light, an environment sky, penumbra-adjusting shadows, and indirect global illumination. And it's all being path-traced, so noise is inherently introduced into the scene by default due to the Monte Carlo sampling.

I created it as a means of controlling the scene precisely.
 
NOTE: These are camera samples, but they propagate to all indirect and light samples as well. That's why increasing them cleans up the shading noise as well as the spatial aliasing.
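
A toy illustration (not the actual renderer, just made-up numbers) of why pushing the sample count up cleans the image: averaging n^2 jittered samples per pixel shrinks the noise standard deviation roughly as 1/n.

Code:
% Toy Monte Carlo convergence: estimate a constant pixel value with n^2 noisy
% samples and watch the standard error fall as 1/n. Values are illustrative only.
trueVal = 0.5;                       % hypothetical converged pixel value
sigma   = 0.2;                       % hypothetical per-sample noise
for n = [2 3 4 6 10]                 % camera samples, as in the images below
    spp = n^2;                       % n camera samples -> n^2 samples per pixel
    est = mean(trueVal + sigma * randn(spp, 10000), 1);   % 10000 independent pixels
    fprintf('%2d samples (%3d spp): measured std %.4f, expected %.4f\n', ...
            n, spp, std(est), sigma / n);
end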

2 samples (i.e. 4 actual samples per pixel)
sphere_test21.png

fft_2_samples1.png


3 samples (9 samples/pixel)
sphere_test3.png

fft_3_samples1.png


4 samples (16 samples/pixel)
sphere_test4.png

fft_4_samples1.png


6 samples (36 samples/pixel)
sphere_test6.png

fft_6_samples1.png


10 samples (100 samples/pixel)
sphere_test10.png

fft_10_samples.png
 
Do you have any idea what is going on in that image? Let's put it this way: yes, the geometry is simple, but the rendering is more advanced than anything a game can do, and not just now but for generations.

The sphere test is using a more physically plausible shader, a real area light, an environment sky, penumbra-adjusting shadows, and indirect global illumination. And it's all being path-traced, so noise is inherently introduced into the scene by default due to the Monte Carlo sampling.

I created it as a means of controlling the scene precisely.

He's just pointing out how a sphere on a plane isn't at all representative of typical game environments. The fact that you're using an offline path tracer only strengthens his point: the artifacts that you get from high variance in a path tracer are going to be very different from the sort of noise that you would get from a grain filter (grain is much more uniform), and even more different from the regular patterns that you get from undersampling in a rasterizer.
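
For what it's worth, here is a rough sketch of that spectral difference ('source.png' is a placeholder, and the decimation step is only a stand-in for rasterizer undersampling): uniform grain adds a broadband noise floor across the whole spectrum, while unfiltered downsampling produces structured spectral replicas.

Code:
% Illustrative only: compare log-magnitude spectra of the same image with
% uniform grain vs. naive 2x undersampling.
img = im2double(rgb2gray(imread('source.png')));          % placeholder RGB image

grain   = img + 0.05 * randn(size(img));                  % uniform, grain-like noise
aliased = imresize(img(1:2:end, 1:2:end), 2, 'nearest');  % decimate without filtering, then upscale

spec = @(x) log(1 + abs(fftshift(fft2(x))));
subplot(1, 3, 1); imshow(spec(img),     []); title('original');
subplot(1, 3, 2); imshow(spec(grain),   []); title('grain: broadband noise floor');
subplot(1, 3, 3); imshow(spec(aliased), []); title('undersampled: structured replicas');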
 
Any chance you can provide The Order images with and without post for analysis? :D

Actually, any screenshot sans postFX can be used to compare.
 
I have a better idea: we could do our own screenshots and comparisons if the game had an option to disable the grain filter...

I swear it's for science's sake only! :yep2:
 
He's just pointing out how a sphere on a plane isn't at all representative of typical game environments. The fact that you're using an offline path tracer only strengthens his point: the artifacts that you get from high variance in a path tracer are going to be very different from the sort of noise that you would get from a grain filter (grain is much more uniform), and even more different from the regular patterns that you get from undersampling in a rasterizer.

I hear ya.. I only posted to verify the algorithm really. I just wanted to see it work in a very controlled environment. It wasn't meant to represent games. But I have added the source code if anyone wants to do their own analysis.
 
Uhm, I have a couple of doubts here.

Both Durante's and VFX_Veteran's plots look a bit weird compared to the Fourier transforms I usually generate. More precisely, they have a strange color mapping that seems to clamp higher values to 1 (hence that red blotch at low frequencies, and sometimes at higher frequencies too).
For example, for the image provided by VFX_Veteran

sphere_test10.png


The plot as I would generate it looks more like this:
1uxBNTv.png


(Note that, as is usually done, I have represented the log of the magnitudes here for visualization purposes.)

To me this looks much more informative and tells much more about an image.
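
For reference, a plot like that can be produced along these lines in MATLAB (a sketch; the exact scaling of the posted image may differ):

Code:
img = im2double(rgb2gray(imread('sphere_test10.png')));
F   = fftshift(fft2(img));
imshow(log(1 + abs(F)), []);   % log compresses the huge low-frequency peak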


Am I wrong?

Secondly, I can't see this test as a rigorous way to compare two different games' post-processing blur/grain/etc. There are so many things other than those effects that can give an image high or low frequency content. This starts to make more sense, IMHO, when the images are incredibly alike, or if you want to compare the effects of different sets of post-processing on the same picture.


I am pretty much a newbie in the field, so I've probably made some terrible mistakes in the above; please correct me if I am wrong :)
 

I hadn't seen VFX_Veteran's code before. The values are obviously correct, but the visualization (imshow) is done wrongly. In particular, the magnitude values are doubles, and in that case imshow assumes they're in the [0 ... 1] range, while they aren't. The quickest way to fix it is by using

Code:
imshow(F2, []);

Or by normalizing the magnitudes right before visualization.
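
For example, the normalization route might look like this (assuming F2 already holds the magnitude values):

Code:
imshow(F2 / max(F2(:)));   % scale magnitudes into [0, 1] before display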
 
Yeah, I didn't really put a lot of time into this. In any case, it's there for people to mess around with, optimize, or whatever they want to do. I just wanted to show that, given the same scene and renderer, increasing AA does indeed bring the result closer to a cleaner image, based on reading the DFT.
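
One way to make that reading of the DFT quantitative (a sketch, assuming the posted images are saved locally under the file names shown above, and with an arbitrary cutoff radius) is to reduce each spectrum to the fraction of energy at high frequencies and watch it settle as the sample count rises:

Code:
% Sketch: fraction of spectral energy beyond a low-frequency disc, per image.
files = {'sphere_test21.png', 'sphere_test3.png', 'sphere_test4.png', ...
         'sphere_test6.png', 'sphere_test10.png'};
for k = 1:numel(files)
    img = im2double(rgb2gray(imread(files{k})));
    P   = abs(fftshift(fft2(img))).^2;
    [h, w] = size(P);
    [X, Y] = meshgrid(1:w, 1:h);
    R  = sqrt((X - floor(w/2) - 1).^2 + (Y - floor(h/2) - 1).^2);
    hf = sum(P(R > min(h, w) / 8)) / sum(P(:));   % cutoff radius is arbitrary
    fprintf('%s: high-frequency energy fraction = %.4f\n', files{k}, hf);
end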
 
BTW MJP, congratulations on The Order - I recall you're at RAD, right? - amazing work!
 
I can certainly ask, but I think it's pretty unlikely. Perhaps I can see about including a comparison as part of a future tech presentation.

Great job! I'm enjoying your game and its great graphics and sound design! Now I need to platinum it.
 
He's just pointing out how a sphere on a plane isn't at all representative of typical game environments. The fact that you're using an offline path tracer only strengthens his point: the artifacts that you get from high variance in a path tracer are going to be very different from the sort of noise that you would get from a grain filter (grain is much more uniform), and even more different from the regular patterns that you get from undersampling in a rasterizer.

Oh, I absolutely agree. I wasn't trying to relate the two. I was just providing a proof of concept in my own environment to verify that the DFT is working, and also providing the code, which Durante didn't post on NeoGAF.

Congrats on the game, btw! I'm hearing excellent things about the graphics from the artists at our studio!

I see you know Adam Martinez, Zap and Christophe Hery... :)
 
Thank you everyone for the kind words! Please don't let me derail the thread. :)

Haha, yes indeed! Zap's talk about mental ray a few years ago at SIGGRAPH was amazing; I was so glad I was there to see it in person.
 
Sharpness as defined by Imatest? Then there would be a need to find good Modulation Transfer Functions for each render step a game uses. Even so, "-ness" qualities are psychological effects... Maybe search for the SchubinCafe talks?
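
For reference, the simplest form of that MTF idea looks something like the sketch below (not Imatest's slanted-edge method; 'edge_crop.png' is a hypothetical crop of a straight, sharp vertical edge from a screenshot):

Code:
% Edge spread function -> line spread function -> MTF, on a straight-edge crop.
crop = im2double(rgb2gray(imread('edge_crop.png')));
esf  = mean(crop, 1);                  % average rows across the edge
lsf  = diff(esf);                      % derivative of the edge profile
mtf  = abs(fft(lsf));
mtf  = mtf(1:floor(end/2)) / mtf(1);   % keep one side, normalize MTF(0) = 1
plot(mtf); xlabel('spatial frequency bin'); ylabel('MTF');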
 