Digital Foundry Article Technical Discussion [2025]

I think that at some point PSSR and FSR 4 will become almost the same thing, but Sony will still call it PSSR. It will just be Sony's implementation for PlayStation platforms. And that's not a bad thing; the only way AMD and Sony can remain competitive is by collaborating.
PSSR has a very limited shelf life. Sony has one more Playstation before it becomes unfeasible to produce them.
 

0:00:00 Introduction
0:01:04 News 1: GTA 5 slated for PC upgrade
0:15:32 News 2: Microsoft announces game generating AI
0:32:49 News 3: Cyberpunk 2 job listing suggests ultra realistic crowds
0:44:53 News 4: 5070 Ti launches to price hikes and scalping
1:06:35 News 5: Nvidia deprecates 32-bit PhysX on 50 series GPUs
1:19:16 Supporter Q1: Couldn’t frame extrapolation solve input lag for PC games?
1:24:20 Supporter Q2: Why are so many people underwhelmed with current-gen graphics?
1:36:32 Supporter Q3: If Microsoft is giving up on console market share, where does Sony’s primary competition come from in the future?
1:42:45 Supporter Q4: With FSR 4 delivering good results, why should Sony continue to develop PSSR?
1:47:53 Supporter Q5: How can we prevent digital licensing issues when PSN is offline?

Alex goes off on the PhysX 32-bit boondoggle:
👏🏻👏🏻👏🏻👏🏻👏🏻👏🏻👏🏻

If anyone can get something positive out of nVidia, it's people like Alex or the community showing their discontent. These are truly damned times. nVidia steps in and launches GPUs that, had they come from a new company, would have spelled that company's 'bankruptcy' (though, of course, nVidia wouldn't have made it this far without a certain level of quality control over its designs, and support from third-party manufacturers).

But this is dangerous, and nVidia should tread carefully. If the AI bubble bursts, as so often happens with booming, revolutionary industries in their early days, the sector will need to mature to stay relevant. That means many companies that rely solely on investment will disappear. And if their products are of poor quality, the fall will be an even more "emotional" and terrible ordeal than the one Intel, my long-time favorite company, has been enduring.
 
No ?

The current PSSR implementation relies on an entirely custom set of 44 3x3 convolution shader instructions, and if Sony wants to impose a full BC requirement for the next generation, there's potentially less room for evolution compared to the more opaque, black-box nature FSR 4 appears to be shaping up to have?
On a PS6, with Project Amethyst and all of that, UDNA and the next console will probably have similar hardware to accelerate AI. On PS5 Pro, PSSR will remain custom.
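
For context on what that kind of layer actually computes, here's a minimal sketch (plain C++, entirely my own toy code, not Sony's; the edge handling and weights are placeholders) of a single 3x3 convolution over one feature-map channel, the basic operation a CNN-based upscaler like PSSR chains many times per frame:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy single-channel 3x3 convolution over an H x W feature map.
// A real upscaler network runs many such layers over many channels,
// which is why dedicated convolution hardware matters so much.
std::vector<float> conv3x3(const std::vector<float>& in, int H, int W,
                           const float k[3][3])
{
    std::vector<float> out(static_cast<size_t>(H) * W, 0.0f);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float acc = 0.0f;
            for (int ky = -1; ky <= 1; ++ky) {
                for (int kx = -1; kx <= 1; ++kx) {
                    // Clamp samples at the image border.
                    int sy = std::clamp(y + ky, 0, H - 1);
                    int sx = std::clamp(x + kx, 0, W - 1);
                    acc += in[static_cast<size_t>(sy) * W + sx] * k[ky + 1][kx + 1];
                }
            }
            out[static_cast<size_t>(y) * W + x] = acc;
        }
    }
    return out;
}
```

Each output pixel is nine multiply-accumulates per input channel, so at upscaler feature-map sizes the cost scales quickly, which is why how those layers map onto next-generation hardware matters for any BC requirement.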
 
PSSR has a very limited shelf life. Sony has one more Playstation before it becomes unfeasible to produce them.
Sony isn't abandoning Playstation any time soon. It would be like Apple not producing iPhones because it can't innovate much anymore. It's their biggest product, they will try selling it until nobody wants it and they have to give up.

On PSSR: PlayStation Spectral is the branding umbrella for all PlayStation AI technologies. They aren't going to abandon it, even if just for branding reasons.
 
CPU version, which is junk. GPU PhysX is proprietary and locked up.
Huh, the press release says "gpu accelerated".
Free, Open-Source, GPU-Accelerated
PhysX will now be the only free, open-source physics solution that takes advantage of GPU acceleration and can handle large virtual environments. It will be available as open source starting Monday, Dec. 3, under the simple BSD-3 license. PhysX solves some serious challenges.
Though, I guess the PhysX code could be open source but rely on closed-source CUDA dependencies. I guess that explains why AMD and Intel never released a PhysX driver.
 
Though, I guess the PhysX code could be open source but rely on closed-source CUDA dependencies. I guess that explains why AMD and Intel never released a PhysX driver.

It's not just that. PhysX just wasn't very successful. It wasn't implemented by many games back then.
The main problem with PhysX (or any other GPU-accelerated physics engine) is that in many situations you still need to access that data on the CPU side. If all you need to do is show something without much interaction, such as fireworks made of particles or the waves of a water surface, it works fine. However, in most cases you need to be able to interact with the aftermath of these effects, such as a wall falling down or something, so the CPU has to be able to access the results of those calculations. Back then, data transfer between the CPU and GPU was slow. Furthermore, games needed to support devices with no GPU acceleration, so they ended up only doing physics effects with little or no interaction, to avoid giving players without such a GPU a different gameplay experience.
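
To make that readback problem concrete, here's a hedged sketch (plain C++ with invented function names, not the actual PhysX API) of why "eye candy only" GPU physics was the path of least resistance: the moment gameplay needs the simulated results, the CPU has to wait for a copy back from the GPU, which over the buses of that era could eat a meaningful slice of the frame budget:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// --- Hypothetical GPU simulation interface (placeholder stubs) ---
void dispatchDebrisSim()
{
    // Stand-in for an asynchronous GPU dispatch; cheap to issue.
}

std::vector<Vec3> readBackPositions()
{
    // Stand-in for a blocking GPU -> CPU copy; in a real engine this is
    // where the frame stalls waiting on the bus.
    return std::vector<Vec3>(1024, Vec3{0.0f, 0.0f, 0.0f});
}

void frameUpdate(bool gameplayNeedsDebris)
{
    dispatchDebrisSim();

    if (!gameplayNeedsDebris) {
        // Pure eye candy: the GPU consumes its own results for rendering and
        // nothing crosses back to the CPU. This is the cheap path, hence all
        // the sparks, fog and cloth that never affect gameplay.
        return;
    }

    // Interactive physics: AI, collision and gameplay logic need the
    // simulated positions, so the CPU must synchronise and copy them back
    // before the rest of the frame can proceed.
    std::vector<Vec3> positions = readBackPositions();
    // ... run CPU-side gameplay logic against `positions` ...
    (void)positions;
}
```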
 
It's not just that. PhysX just wasn't very successful. It wasn't implemented by many games back then.
The main problem with PhysX (or any other GPU-accelerated physics engine) is that in many situations you still need to access that data on the CPU side. If all you need to do is show something without much interaction, such as fireworks made of particles or the waves of a water surface, it works fine. However, in most cases you need to be able to interact with the aftermath of these effects, such as a wall falling down or something, so the CPU has to be able to access the results of those calculations. Back then, data transfer between the CPU and GPU was slow. Furthermore, games needed to support devices with no GPU acceleration, so they ended up only doing physics effects with little or no interaction, to avoid giving players without such a GPU a different gameplay experience.

Yep, and the idea was that GPU PhysX is a dead end and 8/12/16-core CPUs would unlock scalable interactive physics. That hasn't happened. Maybe there's still not enough CPU horsepower to drive particle simulation. Surely there are enough free threads to run a decent cloth sim on the CPU, though.
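
As a rough illustration of how a modest cloth sim maps onto spare CPU threads, here's a hedged sketch (a toy Verlet mass-spring integrator of my own, not any shipping middleware) that splits the per-particle update across worker threads:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Toy cloth particle: current and previous position for Verlet integration.
struct Particle { float px, py, pz, ox, oy, oz; };

static void integrateRange(std::vector<Particle>& p, size_t begin, size_t end, float dt)
{
    const float g = -9.81f * dt * dt; // gravity term folded into the step
    for (size_t i = begin; i < end; ++i) {
        float nx = 2.0f * p[i].px - p[i].ox;
        float ny = 2.0f * p[i].py - p[i].oy + g;
        float nz = 2.0f * p[i].pz - p[i].oz;
        p[i].ox = p[i].px; p[i].oy = p[i].py; p[i].oz = p[i].pz;
        p[i].px = nx;      p[i].py = ny;      p[i].pz = nz;
    }
}

// Split the particle update across however many threads are idle this frame.
void integrateCloth(std::vector<Particle>& p, float dt, unsigned threads)
{
    threads = std::max(1u, threads);
    std::vector<std::thread> pool;
    const size_t chunk = (p.size() + threads - 1) / threads;
    for (unsigned t = 0; t < threads; ++t) {
        size_t b = t * chunk;
        size_t e = std::min(p.size(), b + chunk);
        if (b >= e) break;
        pool.emplace_back(integrateRange, std::ref(p), b, e, dt);
    }
    for (auto& th : pool) th.join();
    // Spring/constraint relaxation would follow, usually partitioned by rows
    // or columns so neighbouring particles aren't written concurrently.
}
```

The integration step is embarrassingly parallel; the constraint pass is the part that needs care, but it still partitions reasonably well, so a few otherwise idle cores can carry a respectable cloth grid.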
 
Hence all the PhysX showcases were just bouncing sparks and gusting leaves. ;)
Actually no, there were lots of interactive water simulations, fluid simulations, interactive rigid body simulations, cloth simulations, interactive smoke and fog, interactive force fields, particles, debris .. these things caused frame pacing issues when spawned in high numbers because they involved the CPU.

Water: CryoStasis, Alice Madness Returns
Fluid: Borderlands 2, Borderlands Pre Sequel
Cloth: Mafia 2, X-Com The Bureau, Mirror's Edge
Smoke: Assassin's Creed 4, Call of Duty Ghosts
Fog: Batman Arkham Asylum, Batman Arkham City, Batman Arkham Origins
Force Fields: Borderlands 2, Borderlands Pre Sequel, X-Com The Bureau
Particles/debris: Metro 2033, Metro Last Light, Borderlands 2, Borderlands Pre Sequel, X-Com The Bureau, Mafia 2, Mirror's Edge, CryoStasis, Alice Madness Returns
 
Yep, and the idea was that GPU PhysX is a dead end and 8/12/16-core CPUs would unlock scalable interactive physics. That hasn't happened. Maybe there's still not enough CPU horsepower to drive particle simulation. Surely there are enough free threads to run a decent cloth sim on the CPU, though.

I don't think it's a technical reason per se, but more of a design and business one.

We do have games, even on the consoles with weaker CPUs, that have physics. The problem then is: with a fixed resource pool, how do developers choose to allocate it? Well, not to physics.

Now, you say PCs have excess CPU resources, but how many developers are going to choose to develop entire subsystems for a small segment of PC users (bear in mind not all PC users have that many cores)?

Now, you say some developers have done the above, but have they done so without IHV assistance (however you want to term it)? The PC IHV with by far the most resources to assist developers also doesn't make a CPU. The other issue is that, true, a CPU-based system would be CPU-hardware agnostic (at least far more so than a GPU one), so even if a CPU IHV made one, it wouldn't necessarily sell more of their CPUs.
 

900p with FSR1 in performance mode...
It's like they understood the assignment to improve it from the beta:

• from 40-50 fps to 60
• solved the clarity bug
• added hit stop

But then they forgot the last step: getting FSR 2 in there.

Just upscale from 900p to 1440p and it's good (not great, but better than FSR 1)

Reduce the draw distance, foliage density and other settings to get that millisecond you need to compute FSR 2.
 
Now, you say PCs have excess CPU resources, but how many developers are going to choose to develop entire subsystems for a small segment of PC users (bear in mind not all PC users have that many cores)?

I wouldn’t expect games to roll fully custom cloth physics just for high end PCs. Cloth simulations are inherently scalable though. Why not just scale up the mesh resolution and update rate and use lower settings for consoles?
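
A sketch of the "just scale it" idea: pick a cloth grid resolution and substep count per platform tier so the same solver runs everywhere and only the workload changes (the tier values below are invented for illustration, not from any shipping game):

```cpp
#include <cstdio>

struct ClothQuality {
    int gridX, gridY;   // particle grid resolution
    int substeps;       // solver iterations per frame
};

// Invented example tiers; real numbers would come from profiling each target.
ClothQuality pickClothQuality(bool isConsole, int spareCpuThreads)
{
    if (isConsole)            return {32,  32,  2};  // baseline for everyone
    if (spareCpuThreads >= 8) return {128, 128, 6};  // high-end desktop CPU
    return {64, 64, 4};                              // mid-range PC
}

int main()
{
    ClothQuality q = pickClothQuality(false, 12);
    // Cost grows roughly with particles * substeps, so the high tier here is
    // about 48x the console workload while running identical solver code.
    std::printf("grid %dx%d, %d substeps\n", q.gridX, q.gridY, q.substeps);
    return 0;
}
```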

Now, you say some developers have done the above, but have they done so without IHV assistance (however you want to term it)? The PC IHV with by far the most resources to assist developers also doesn't make a CPU. The other issue is that, true, a CPU-based system would be CPU-hardware agnostic (at least far more so than a GPU one), so even if a CPU IHV made one, it wouldn't necessarily sell more of their CPUs.

Yep would be a great advertisement for many core CPUs. Too bad Intel and AMD aren’t interested.
 

900p with FSR1 in performance mode...
A 2080 still doesn't get 60 fps even at low settings and DLSS Performance at 1080p. The console version performs vastly better than equivalent PC specs. Probably the worst-optimized PC port of all time; where does all the performance go?
 
A 2080 still doesn't get 60 fps even at low settings and DLSS Performance at 1080p. The console version performs vastly better than equivalent PC specs. Probably the worst-optimized PC port of all time; where does all the performance go?
That would surprise me, unless there are obvious CPU issues like with DD2. That or VRAM requirements are very high.
 
I wouldn’t expect games to roll fully custom cloth physics just for high end PCs. Cloth simulations are inherently scalable though. Why not just scale up the mesh resolution and update rate and use lower settings for consoles?

With the consoles working on a fixed performance target my general impression is the developers haven't felt that adding those physics is worthwhile compared to the budget being spent elsewhere.

And if that functionality doesn't make sense for the consoles (or isn't optimal), then it would need to be a unique add-on feature for the PC. In general we don't see developers undertaking this without any IHV assistance.

Yep would be a great advertisement for many core CPUs. Too bad Intel and AMD aren’t interested.

It's likely been tried. Intel at one point purchased Havok but then sold it to Microsoft eight years later. I'd guess they couldn't find a way to leverage it business-wise. I'm guessing that during that time, from 2007 to 2015, no one even associated Havok physics with Intel, much less felt the need to buy an Intel CPU because of it.

A problem here is that with CPUs you can't really build a platform/ecosystem the way Nvidia did with its PhysX implementation. Intel (or AMD) spending resources on developing a CPU physics system and pushing content might just as well end up selling more AMD (or Intel) CPUs rather than their own.

Somewhat related to this, Intel was actually the original pusher of post-processing AA methods (MLAA, FXAA, SMAA, etc.) with MLAA, and tried to sell the idea of the CPU being important to gaming (at that time PC CPUs were way ahead, and Intel was way ahead in PC CPUs). That, as we all know, didn't really go anywhere for them.
 
2080 still doesn't get 60 FPS even at low and DLSS Performance@1080p. This performs vastly better than equivalent PC specs. Probably the worst optimized PC port of all time, where does all the performance go?
Was there any uplift on consoles between the beta and release? If so, maybe the PC version will also benefit. I know I saw people on all platforms during the beta talking about how poorly it ran and looked.
 
Was there any uplift on consoles between the beta and release? If so, maybe the PC version will also benefit. I know I saw people on all platforms during the beta talking about how poorly it ran and looked.
The beta had lows of 35 fps on PS5 while looking so bad that I believed it was running at 720p (there was a visual bug making it look worse than it should have).

The final game runs at 900p at mostly 60 fps with minor drops. Not sure if they fixed the bug and lowered the resolution to get to 60, or if they actually optimized the game somewhat. Still missing upsampling in 2025.
 