Sony PlayStation 5 Pro

This guy is the future of game analysis. He is the best with real-time frame-time, FPS, resolution, and min/max/average stats.

EDIT: He also shows power consumption in his latest videos! The PS5 Pro consumes up to 252W (and often around 240W) in F1 24, and a maximum of 233W in the Dynasty Warriors: Origins demo.
I do quite like the way they have their graphing system laid out, as well as the real-time dynamic resolution scaling readout. However, they're not really analyzing anything or presenting an informed opinion on why the game is doing what it's doing; they're just showing the data. Which, on one hand, some people might actually prefer. :)

I think this is an obvious next step for Digital Foundry. They've got the right people to analyze games and visuals and give informed opinions on the tech side; now they just need to adopt some more modern ways to visually present that data to people.
 
This guy is the future of game analysis. He is the best with real-time frame-time, FPS, resolution, and min/max/average stats.

EDIT: He also shows power consumption in his latest videos! The PS5 Pro consumes up to 252W (and often around 240W) in F1 24, and a maximum of 233W in the Dynasty Warriors: Origins demo. I was expecting 250W max (the launch PS5 consumes 230W max); we could see games consuming 260W later.
This only works for some games. They can’t apply this technique to titles that are properly upscaled.

So definitely not the future. These basic upscaled images will all be gone in the near future. That’s why DF doesn’t use this method.
 
This only works for some games. They can’t apply this technique to titles that are properly upscaled.

So definitely not the future. These basic upscaled images will all be gone in the near future. That’s why DF doesn’t use this method.
If humans can find the native resolution somehow, AI tools will be able to do it as well... and very soon they will do a much better job of it. In 10 years, few people will still do this by hand.
 
How does he measure resolution? What's that noisy rectangle in the top-left?

It’s the technique I used and documented here.


Pretty positive this is 99.99% the same technique; it was lifted from this site.

Once AMD moves on from FSR2, this is no longer going to work.
 
How does he measure resolution? What's that noisy rectangle in the top-left?

Actually, I know this guy. He is a reader of my website, and I've had several talks with him about his tools, including a video call where he presented his tools to me and we talked about how he could improve them.
The noisy rectangle is the result of the automation he does for checking the native resolution in games. He applies a Discrete Cosine Transform to the images, converting them into the frequency domain; the noisy screen is the result of that transform.
In that image you will find a vertical and a horizontal black line. That is the zone where the Discrete Cosine Transform results are less noisy, and their placement corresponds to the native resolution of the image (a rough sketch of the idea follows after this post).
He is a pioneer in this; understanding the results of the Discrete Cosine Transform, and above all removing the false positives, was his main problem.
The tool is working fine now.
I promised him I would not divulge details about his tool, but he has already made the method he uses public, so I'm not breaking my word. I will tell him he is being mentioned here!
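For readers curious how a check like this can work in practice, here is a minimal sketch of the general idea, assuming a frame that was spatially upscaled from a lower internal resolution. It is not the author's actual tool (which has not been published); the function name, file path handling, and threshold are illustrative only.

```python
# Minimal sketch of a DCT-based native-resolution check (illustrative only;
# not the tool discussed above). Assumes the frame was spatially upscaled
# from a lower internal resolution.
import numpy as np
from PIL import Image
from scipy.fft import dctn

def estimate_native_resolution(path, threshold_db=-60.0):
    # Load the frame as grayscale in [0, 1].
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64) / 255.0

    # 2D DCT; this log-magnitude spectrum is the "noisy rectangle" overlay.
    mag = 20.0 * np.log10(np.abs(dctn(img, norm="ortho")) + 1e-12)

    # An upscaled image has almost no energy at frequencies its internal
    # resolution could not represent, which shows up as a dark horizontal
    # and a dark vertical band; the band's index approximates the native size.
    row_peak = mag.max(axis=1)   # strongest coefficient per vertical frequency
    col_peak = mag.max(axis=0)   # strongest coefficient per horizontal frequency

    quiet_rows = np.nonzero(row_peak < threshold_db)[0]
    quiet_cols = np.nonzero(col_peak < threshold_db)[0]
    native_h = int(quiet_rows[0]) if quiet_rows.size else img.shape[0]
    native_w = int(quiet_cols[0]) if quiet_cols.size else img.shape[1]
    return native_w, native_h

# e.g. a 3840x2160 capture rendered internally at ~1440p would come out
# around (2560, 1440), depending on the scaler and the threshold chosen.
```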
 
It’s the technique I used and documented here.


Pretty positive this is 99.99% the same technique; it was lifted from this site.

Once AMD moves on from FSR2, this is no longer going to work.
Actually, he did mention that someone here had talked about the technique he was using. So I believe you can add that extra 0.01%. ;)
 
I think this is an obvious next step for Digital Foundry. They've got the right people to analyze games and visuals and give informed opinions on the tech side; now they just need to adopt some more modern ways to visually present that data to people.
For PC we are upgrading our graphs to show more data, as we can actually gather that data reliably thanks to the open nature of the PC. For consoles that is a bit harder, as we have not found a foolproof way to measure internal resolution. The errors we have found in these techniques through blind testing have shown them to not be reliable enough, especially if a game uses a form of TAAU (FSR, DLSS, etc.); they tend to have big errors there. If a game does not use upsampling, then the accuracy is higher. (A toy example of how such a blind test might be scored is sketched after this post.)

We are not too comfortable with showing graphs or data on screen presented as 100% ground truth when it may be inaccurate.

Yes, @iroboto points it out.

If this person's work manages to be accurate for DLSS + DRS, then I would consider it applicable without concern.
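To make the blind-testing point concrete, here is a hypothetical sketch of how such a test might be scored: the tool's per-frame resolution estimates are compared against known ground-truth render heights, and the hit rate within a tolerance is reported. The function name, data, and 5% tolerance are made up for illustration; this is not DF's actual methodology.

```python
# Hypothetical scoring of a blind resolution test: compare a tool's
# per-frame estimates against known internal render heights and report
# how often it lands within a relative tolerance. Data is illustrative.

def score_blind_test(estimates, ground_truth, tolerance=0.05):
    """Fraction of frames whose estimate is within `tolerance` of the truth."""
    hits = sum(
        abs(est - true) / true <= tolerance
        for est, true in zip(estimates, ground_truth)
    )
    return hits / len(ground_truth)

# A TAAU/FSR title where the tool sometimes reports the output resolution
# (2160p) instead of the internal one -- the failure mode described above.
estimated = [2160, 1440, 1512, 2160, 1620]
actual    = [1440, 1440, 1512, 1584, 1620]
print(f"accuracy within 5%: {score_blind_test(estimated, actual):.0%}")  # 60%
```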
 
Actually, he did mention that someone here had talked about the technique he was using. So I believe you can add that extra 0.01%. ;)
I appreciate that he did it. It’s nice to see it works.

But like Alex said, I couldn’t precisely determine resolution when it got complex with upscaling algorithms, particularly as more temporal and motion-vector information was added. So after a few attempts at image quality analysis, work ceased. I started a new job that took up my time, way too much unfortunately, and I guess my only regret was not seeing it finished, to see what it would have looked like.

So I’m happy to see it done. Richard told me about this channel a few months ago; I ran the analysis across some b-roll PSSR footage, and the DCT or Fourier transform didn’t show anything I could work with, so we stopped again lol.
 
I appreciate that he did it. It’s nice to see it works.

But like Alex said, I couldn’t precisely determine resolution when it got complex with upscaling algorithms, particularly as more temporal and motion-vector information was added. So after a few attempts at image quality analysis, work ceased. I started a new job that took up my time, way too much unfortunately, and I guess my only regret was not seeing it finished, to see what it would have looked like.

So I’m happy to see it done. Richard told me about this channel a few months ago; I ran the analysis across some b-roll PSSR footage, and the DCT or Fourier transform didn’t show anything I could work with, so we stopped again lol.

He is to be congratulated on what he did... He really put in the time and racked his brains to create those tools! PSSR is not a problem, and neither is single-axis rescaling like in Killzone, where the rescaling is done on the X axis only.
 
He is to be congratulated on what he did... He really put in the time and racked his brains to create those tools! PSSR is not a problem, and neither is single-axis rescaling like in Killzone, where the rescaling is done on the X axis only.
I’d be interested to see it if it works. The artifacts are not present in every frame, so I’m not really sure what to count in those situations.

As NNs become better at their job, I see an issue where these artifacts will become too hard to find.

But perhaps the more relevant question is whether the base resolution even matters after a certain point. If the output is indistinguishable for most people, is there any point?

Case in point:

If this guy can’t see a difference given the massive gap between the PS5 Pro and the PS5, there was no way in hell anyone would be able to tell the difference between the XSX and the PS5.

I would just be complicit in console wars.

And that’s where, going back to the topic, if image quality can be solved by things like NN algorithms, then the power should be used to solve rendering challenges we couldn’t solve before, which brings us back to next-gen techniques: RT, GI, micro-geometry, etc.
 
It’s the technique I used and documented here.


Pretty positive this is 99.99% the same technique; it was lifted from this site.

Once AMD moves on from FSR2, this is no longer going to work.
It seems to still work quite well with PSSR, and DLSS is not going to be in home console gaming anytime soon.

And it seems you underestimate the power of NNs for such a task. If a human can find the native resolution of an image in, say, 10 hours of work, an NN will be able to do it in a fraction of a second.
 
I’d be interested to see it if it works. The artifacts are not present in every frame, so I’m not really sure what to count in those situations.

As NNs become better at their job, I see an issue where these artifacts will become too hard to find.

But perhaps the more relevant question is whether the base resolution even matters after a certain point. If the output is indistinguishable for most people, is there any point?

Case in point:

If this guy can’t see a difference given the massive gap between the PS5 Pro and the PS5, there was no way in hell anyone would be able to tell the difference between the XSX and the PS5.

I would just be complicit in console wars.

And that’s where, going back to the topic, if image quality can be solved by things like NN algorithms, then the power should be used to solve rendering challenges we couldn’t solve before, which brings us back to next-gen techniques: RT, GI, micro-geometry, etc.
Funny, I watched the video (in the post, not full screen) and could tell the Pro was the second one. The RT reflections on the windows are much higher res.

But I get your point...
 
It seems to still work quite well with PSSR, and DLSS is not going to be in home console gaming anytime soon.

And it seems you underestimate the power of NNs for such a task. If a human can find the native resolution of an image in, say, 10 hours of work, an NN will be able to do it in a fraction of a second.
I was a data scientist; I did the discovery work to make the pixel counting possible. This guy automated the counting of the output, which is not easy in its own right, but please don’t insult me by suggesting I don’t understand the power of neural networks. My job now is to implement and sell AI-based solutions; I no longer develop them.

I did not see the value in exploring this solution any further.

There is loose guesswork involved when frames appear that don’t have the artifact that indicates the base resolution. That problem isn’t consistently solved.
 
And it seems you underestimate the power of NNs for such a task. If a human can find the native resolution of an image in, say, 10 hours of work, an NN will be able to do it in a fraction of a second.
What would cause it to take 10 hours for a person? If it's a case of having to interpret a new set of data from a new algorithm, to actually think about what's going on and how a new technique is being employed, then the NN would never be able to solve it. Imagine an NN that could pixel count far faster than a person back in the PS3 era, and then along comes checkerboard rendering...

Once people have found a method and process to derive the necessary values, an ML model can indeed be set to the task of automating it, but if the modes of upscaling keep changing, I doubt a general-purpose NN can be constructed that can work out rendering resolution from any arbitrary input.
 
PSSR is turning out to be far from the slam dunk win that it seemed to be based on early promo shots and a limited selection of games where it performed well. I'm sure that it will improve as the model is tuned, but currently it really needs to be a user option.

I may have missed it, but has there been confirmation that the PS5 Pro is simply using int operations on shaders to run PSSR, like XeSS on non-Intel hardware (or Intel with integrated graphics)? Intel's XeSS can deliver surprisingly good results on low-powered GPUs so long as they support DP4a (a small sketch of what DP4a computes follows this post), but it's a long way behind the more complex XeSS model used with XMX units (and Nvidia's DLSS), and the results aren't always great and are sometimes worse than using FSR.

The development of PSSR over the coming months and years is going to be interesting to follow. I can understand why some Pro owners - who've paid a fair chunk for a premium device with premium upscaling - are frustrated with the results in some games. Taking a perceptual step back (as it appears to be in some games, or in parts of games) is going to be frustrating, especially when you can't turn it off.
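As a side note on what DP4a actually buys you: it is a single GPU instruction that multiplies four packed 8-bit values from each of two 32-bit registers and accumulates the sum, which is why it speeds up int8 neural-network inference on ordinary shaders without dedicated matrix units. Below is a tiny plain-Python emulation of that semantics; the function names are illustrative and this is not tied to any specific GPU's implementation.

```python
# Plain-Python emulation of what a DP4a-style instruction computes: a dot
# product of four packed signed 8-bit lanes, folded into one accumulate.
# Real hardware does this in a single instruction.

def _i8_lanes(word: int):
    """Split a 32-bit word into its four signed 8-bit lanes."""
    lanes = []
    for i in range(4):
        b = (word >> (8 * i)) & 0xFF
        lanes.append(b - 256 if b >= 128 else b)
    return lanes

def dp4a(a: int, b: int, acc: int) -> int:
    """acc + dot(int8x4(a), int8x4(b)) -- the int8 building block that
    lets DP4a-capable GPUs run quantized NN inference on plain shaders."""
    return acc + sum(x * y for x, y in zip(_i8_lanes(a), _i8_lanes(b)))

# Example: packed vectors [1, 2, 3, 4] and [5, 6, 7, 8].
va = 0x04030201
vb = 0x08070605
print(dp4a(va, vb, 0))   # 1*5 + 2*6 + 3*7 + 4*8 = 70
```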
 

This to me is indicative of the fact that a lot of gamers can't recognize sharpening filters, and PSSR should probably include one (or better, offer it as an in-game option), or these types of criticisms will be common.

Looks like a bunch of devs are sadly using an old version of it and not updating.

Do we have evidence of games that have updated their version with significant results, though? There's no point in comparing a game that has a 1440p+ input resolution and no ray-tracing denoising requirements to a game that's running internally below 1080p and has tons of lower-res post-processing inputs.

I have little doubt PSSR will continue to iterate and improve, but we need to compare games with the relative same quality of input.
 