Using FFT to determine game sharpness as measured by high frequency detail *spawn

The test measures high-frequency detail. That's what a full screen noise filter is.

No. The test plots the amplitude(color encoded) for each frequency.

If I understand his plot correctly (he did not label the axes), the axes represent the wavenumbers in the x and y directions. Thus, the closer to the zero coordinate, the lower the frequencies.

Thus, red in the center of the plot indicates that most of the energy sits at lower frequencies. The further the red blob extends toward the outer parts of the axes, the more high frequencies are significant in the plot.

Thus, a noise filter with significant impact would have red all over the place and also in the high frequency (outer) parts.

Noise is a low-amplitude, high-frequency signal, and thus has minor impact on this plot.

(If I understand his axes correctly)
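To make that axis reading concrete, here's a minimal stdlib-only Python sketch (a naive DFT, not Durante's script or MATLAB's fft) showing that a slow one-cycle cosine produces energy in the bins right next to zero frequency, and that an fftshift-style reordering moves those bins to the centre of the spectrum, where the red blob would be:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2), stdlib only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fftshift(X):
    """Move the zero-frequency bin to the centre, like MATLAB's fftshift."""
    half = (len(X) + 1) // 2
    return X[half:] + X[:half]

# A slowly varying signal: exactly one cosine cycle across the window.
N = 8
signal = [math.cos(2 * math.pi * n / N) for n in range(N)]
mags = [abs(v) for v in dft(signal)]

# All the energy lands in bins 1 and N-1 (the lowest non-zero frequencies);
# the Nyquist bin N/2 (the highest frequency) is empty.
shifted = fftshift(mags)
# After the shift, those low-frequency bins sit adjacent to the centre
# (index N/2), which is why low frequencies plot near the middle.
```

So a blurry image concentrates its magnitudes near the middle of the shifted plot, while sharp detail and aliasing push energy toward the edges.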
 
Without his exact method published we can't be sure about the effects of noise on his tests. What we can be sure of is that TO1886 is most definitely blurrier than the PC version of Ryse.
 
Yea, I was wondering if he would release his script code. I've got MatLab here at home but really don't feel like trying to remake his stuff..
 
You forget the noise filter in The Order which renders that test useless.
It impacts it. Billy suggests the results also factor in amplitude.

Another issue is The Order's filtering. Deliberate reduction of specular aliasing written into the material shaders will reduce high-frequency detail. Edit: He mentions as much with regard to post-AA. Anything that filters out high-frequency aliasing will soften the results of the analysis.

There's not enough detail on the methodology. It's an interesting idea, though, trying to produce an objective measure. The natural companion study would be to show the images to people, see which they prefer, and compare that to the frequency analyses. Although that's probably going to produce the same mixed results as traditional empirical testing.
 
Judging by the numbers on the axes, that might just be a magnitude plot of the discrete Fourier transform. Although I haven't personally delved into the multidimensional variants much, and at the moment I don't have any MATLAB available.

It impacts it. Billy suggests the results also factor in amplitude.
It definitely should, if it's what it sounds like. Fourier transforms are linear: a bigger signal component means bigger frequency-domain components.
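That linearity is easy to check numerically. A quick stdlib-only sketch with a naive DFT (illustrative values, nothing from the actual screenshots): doubling the signal's amplitude doubles every frequency-domain magnitude by exactly the same factor.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, O(N^2), stdlib only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Arbitrary sample values standing in for a row of pixel intensities.
signal = [0.1, 0.9, 0.3, 0.7, 0.5, 0.2, 0.8, 0.4]
boosted = [2 * s for s in signal]  # same content, twice the amplitude

X = dft(signal)
Y = dft(boosted)

# Every non-negligible frequency component scales by the same factor of 2.
ratios = [abs(Y[k]) / abs(X[k]) for k in range(len(X)) if abs(X[k]) > 1e-12]
```

So amplitude shows up directly in the plot: a higher-contrast image produces proportionally stronger frequency components everywhere.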
 
Well, I *think* I mimicked Durante's script in MATLAB, but I have my reservations. I'll post the code here in case someone wants to try and play around with it. I think the aspect ratio of the image and of the output data must be the same, as I was getting some crazy results otherwise.

As a test I rendered an image of a sphere and plane in Arnold using Maya:

sphere_test2.png


2 camera AA samples:

fft_2_samples.png


3 camera AA samples:

fft_3_samples.png


4 camera AA samples:

fft_4_samples.png


6 camera AA samples:

fft_6_samples.png


So by Durante's logic, the 2 camera AA samples image is 'sharper' (i.e. has more noise in it). As I increase the samples to get rid of the noise, my DFT is showing more 'smoothness' (i.e. less noise).

Matlab code:

function img = fft_view(filename, ext)

% Read the image
A = imread(filename, ext);

% Convert to grayscale
A = rgb2gray(A);

% Normalise from the 0:255 integer range to the 0:1 range
A = double(A);
A = A - min(A(:));
A = A / max(A(:));

% Take the 2-D FFT, sized to match the image's aspect ratio
% (here 560/420 = 1.333)
F = fft2(A, 420, 560);

% Shift the spectrum so the zero-frequency bin is in the middle
F = fftshift(F);

% Log-compress the magnitudes so the data is visible
F2 = log(abs(F));

% Create a new figure and show the spectrum, scaled to its own range
figure;
imshow(F2, []);
colormap(jet); colorbar;
img = 1;
end
 
Thanks! Can you explain please what the 2 camera, 4 cam and 6 cam means?



What is flawed imo in Durante's test is that the screenshots are obviously different, and I think that this has much more impact on the result than anything else. Watching a plain blue sky in Ryse and then a detailed indoor scene should give very different results.

That is why I like your test. What happens when you add a noise filter to the 6 cam figure after AA...similar to the noise filter in The Order? Does it then look like the 2 cam figure or does it still look roughly the same?
This is exactly what Scofield and I were discussing.


Could you maybe also post the actual figures to see what they look like?
 
So by Durante's logic, the 2 camera AA samples image is 'sharper' (i.e. has more noise in it). As I increase the samples to get rid of the noise, my DFT is showing more 'smoothness' (i.e. less noise).

Matlab code:

I think your test is even more flawed than Durante's. A sphere applied with only one color gradient? How is that representative of any game?

Well maybe for the first 3D games of the SNES (like Starfox) it would be a good test...

One should do this test not on one particular scene but a whole video (like the first level of each game).
 
Well, I *think* I mimic'd Durante's script in MatLab but I have my reservations.
Good job! Can you provide the 2x and 4x images as well so people can correlate the results with the difference in noise?

So by Durante's logic, the 2 camera AA samples image is 'sharper' (i.e. has more noise in it). As I increase the samples to get rid of the noise, my DFT is showing more 'smoothness' (i.e. less noise).
He pointed out the limitation of this test with his Oblivion example and its 0 AA and heavy aliasing. I think it points to the idea generally being bunk, at least in this first draft. ;) Super crisp images will have aliasing reduced and thus extreme high frequencies suppressed. I do think there's something that can be done to extract a sharpness factor though. Might require a histogram normalisation on the image to adapt for low-key, low-contrast images.
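The histogram-normalisation idea can be sketched in a few lines of stdlib Python. This is the classic histogram-equalisation remapping applied to a tiny made-up grey-level grid (hypothetical values, not an actual screenshot); it stretches a low-contrast image over the full 0..255 range before any frequency analysis:

```python
def equalize(img, levels=256):
    """Basic histogram equalisation for a 2-D list of integer grey levels."""
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram and cumulative distribution of grey levels.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)
    # Classic remapping: stretch the CDF over the full output range.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A low-contrast 'image': values crowded into a narrow band around mid-grey.
img = [[100, 102, 104], [106, 108, 110], [100, 104, 110]]
out = equalize(img)
# After equalisation the values span the full 0..255 range,
# so two games with different exposure levels compare on equal footing.
```

That would at least stop a dark, low-contrast scene from scoring as "blurry" purely because its amplitudes are small.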

Thanks! Can you explain please what the 2 camera, 4 cam and 6 cam means?
Samples per pixel, equivalent to 2x, 3x, 4x and 6x supersampling filtering the noisy area light.

That is why I like your test. What happens when you add a noise filter to the 6 cam figure after AA...similar to the noise filter in The Order?
The image already has noise. Lots at 2x, less at 6x. The noise amplitude is greatly reduced relative to the average intensity as samples increases, and that's coupled with a focussing of the red area in the results.
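Why the noise shrinks as samples go up can be sketched with a tiny stdlib-only Monte Carlo (made-up numbers, not VFX_Veteran's renderer): averaging N noisy light samples per pixel cuts the residual noise roughly by a factor of sqrt(N).

```python
import random

random.seed(0)  # deterministic for the demo

def render_pixel(true_value, samples):
    """Average `samples` noisy light samples, mimicking supersampled AA."""
    shots = [true_value + random.gauss(0, 0.2) for _ in range(samples)]
    return sum(shots) / samples

def residual_noise(samples, pixels=2000, true_value=0.5):
    """Standard deviation of the rendered pixels around their mean."""
    vals = [render_pixel(true_value, samples) for _ in range(pixels)]
    mean = sum(vals) / pixels
    return (sum((v - mean) ** 2 for v in vals) / pixels) ** 0.5

noise_2x = residual_noise(2)
noise_6x = residual_noise(6)
# noise_6x is markedly smaller: the std falls roughly as 1/sqrt(samples)
```

That lower-amplitude residual noise is exactly what shows up as the red area focussing toward the centre in the 6x figure.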

I think your test is even more flawed than Durante's. A sphere applied with only one color gradient? How is that representative of any game?
It's just a test of the Matlib script! VFX_Veteran used an image he could control produced from his area of expertise to test how the results of the script compare to what's seen in the graph results. Someone else with experience poking around with PC game settings would like produce images from a game with various AA and filtering settings applied.
 
The image already has noise. Lots at 2x, less at 6x. The noise amplitude is greatly reduced relative to the average intensity as samples increases, and that's coupled with a focussing of the red area in the results.

But isn't there a difference between noise from aliasing and noise as a post processing filter?!

Noise from aliasing can have much larger amplitudes in an FFT (as this is what aliasing is all about: large-amplitude errors), whereas I am curious whether a de-aliased picture with a noise filter on top of it (as post-processing) looks like an aliased screenshot, or quite similar to the de-aliased screenshot, as the change in actual amplitude is much smaller with the noise filter.

We refer to noise when there are small amplitude variations on top of the actual solution...thus I am wondering if it even has an impact on the analysis.
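That intuition can be checked directly with a naive stdlib DFT (illustrative 1-D values, not real screenshots): a low-amplitude post dither and a large aliasing-style error both land in the same highest-frequency bin, but differ enormously in magnitude, so the subtle filter barely registers while the aliasing dominates.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, O(N^2), stdlib only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 8
base = [0.5] * N                                    # flat, noise-free signal
post_noise = [0.02 * (-1) ** n for n in range(N)]   # subtle post-process dither
alias_err = [0.5 * (-1) ** n for n in range(N)]     # large aliasing-style error

nyq = N // 2  # highest-frequency (Nyquist) bin
mag_post = abs(dft([b + p for b, p in zip(base, post_noise)])[nyq])
mag_alias = abs(dft([b + a for b, a in zip(base, alias_err)])[nyq])
# Same frequency bin, but the aliasing component is 25x larger here.
```

So in a log-magnitude plot like Durante's, a subtle noise filter adds only faint high-frequency fuzz, whereas heavy aliasing lights up the outer region strongly.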


Another problem I see with using FFT is that the FFT assumes a periodic function. Thus, when the function values are not periodic over the domain (it is not easy to find a screenshot with lots of stuff and detail but periodic color values), the Gibbs phenomenon occurs, which again introduces artefacts (wiggles) at certain frequencies...these can also have a huge impact, depending on the situation.

There are techniques available which introduce a continuation of a given function such that it is periodic on a larger artificial domain. This then helps to reduce the artefacts when using an FFT...I never thought about using this for image processing. This might actually be quite interesting as a Bachelor thesis topic :)
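The standard cheap workaround for the same leakage problem (related to, though simpler than, the periodic-continuation techniques mentioned above) is to apply a window before the FFT. A stdlib-only sketch: a tone with a non-integer number of cycles over the window leaks energy into far-away bins under a plain DFT, and a Hann window suppresses most of that far leakage.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2), stdlib only."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 32
# A tone with 2.5 cycles over the window: NOT periodic on the domain.
tone = [math.cos(2 * math.pi * 2.5 * n / N) for n in range(N)]
# Hann window: tapers the signal to zero at both edges of the window.
hann = [0.5 * (1 - math.cos(2 * math.pi * n / N)) for n in range(N)]

mag_rect = [abs(v) for v in dft(tone)]
mag_hann = [abs(v) for v in dft([t * w for t, w in zip(tone, hann)])]

# Leakage: magnitude far away from the tone's true frequency (bins 8..24).
leak_rect = sum(mag_rect[8:25])
leak_hann = sum(mag_hann[8:25])
# leak_hann comes out far smaller than leak_rect
```

Without some such windowing or extension step, part of the "high-frequency detail" a screenshot analysis measures is just the artificial discontinuity at the image borders.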
 
But isn't there a difference between noise from aliasing and noise as a post processing filter?!
I'm not sure what you're asking. Going back to your original question about adding a noise filter, that already exists. Here's my own mock-up.

Image2.png

The first image is 'perfect' (120 light samples per pixel). The next has 6 samples per pixel, then 3, then 1. The results have considerable noise versus the perfect version. This would appear in the graphic analysis, with the single sample having red fuzz everywhere, and the 6 samples being a more focused red blob in the centre. The 6 sample image is basically the same as the perfect image with a subtle post-process noise added. Okay, it's not exactly the same as a post noise, but the results in the frequency analysis should be pretty much the same.

So I consider VFX_Veteran's 6 sample image akin to a perfectly clean render with subtle post effect noise added. You can see the noise in the image.

Other than that, yes, there's a difference between noise from aliasing and post noise, but those will also feature in the MATLAB code. The original Oblivion example is all high-frequency noise from aliasing. The results from The Order should show very little impact from the added noise because it's low amplitude, just as VFX_Veteran's results show massive fuzz for the high noise of 2 samples and focussed red for the low noise of 6 samples. It would be nice to have a control of a perfect, noise-free image!
 
This is kind of neat, I guess, but what's the point? You know what's really good at telling if an image is blurry? Your eyes. On the dev side, I think comparing images would be the best option. What exactly is the point of this, and when would it be relevant?
 
Was Durante taking out the black bars and just running the test on the 1920x800 image? If he did not, what would that do to the test?

@Scott_Arm - Bah -- It is fun that is why! Although, I agree that humans tend to differ from each other and strangely carry different opinions that make no sense to me at all.:runaway:
 
This is kind of neat, I guess, but what's the point? You know what's really good at telling if an image is blurry? Your eyes. On the dev side, I think comparing images would be the best option. What exactly is the point of this, and when would it be relevant?
With enough transforms, we'll find a mathematical pattern in the galactic cycles of species development and preferences.
 
This is kind of neat, I guess, but what's the point?
I agree with you. However, it does bring a quantitative metric to image analysis, or at least that's the idea. When you can ascribe a number to things, you can prove something's true or not, which is what some folk who have trouble with subjective values prefer to do. When Game X gets a qualitative image index of 79.4 and Game Y gets a qualitative image index of 80.1, that proves Game Y is better than Game X even if people say they think Game X looks better. And remember, the internet only exists so people can prove others wrong on an international scale... :yep2:

It's kinda cool to have a novel way to measure things, and it might lead onto something else too.
 
I agree with you. However, it does bring a quantitative metric to image analysis, or at least that's the idea. When you can ascribe a number to things, you can prove something's true or not, which is what some folk who have trouble with subjective values prefer to do. When Game X gets a qualitative image index of 79.4 and Game Y gets a qualitative image index of 80.1, that proves Game Y is better than Game X even if people say they think Game X looks better. And remember, the internet only exists so people can prove others wrong on an international scale... :yep2:

It's kinda cool to have a novel way to measure things, and it might lead onto something else too.

But, what are you proving, and why? You're proving one game has more high-frequency detail than another, which means absolutely nothing on its own. It doesn't tell you how good a game looks (subjective). It just helps you compare how much high-frequency detail games have relative to each other, which is completely useless. It's basically just a way for people to "objectively"(not really) settle versus wars.
 
Let me put it this way, say you're playing a game and you think the graphics look pretty clean and sharp, and you think overall the visuals are great. Someone uses Matlab to determine the game is relatively lacking in high-frequency detail. Relative to other games, it could be considered a little bit blurry. Well, who cares? Why does that even matter? Your perception of the game is important. If you think the game is nice to look at, that's all that's important when you're playing it. Doing this kind of test isn't informative in any way. It's not really a tech breakdown where you can learn something. It's just a scale with no purpose. The same could be true if you think a game looks a bit blurry, and it's not. Why does that matter? To you it looks blurry.
 