Sony PlayStation 5 Pro

BVH4 is just broken in my opinion: 64 bytes per BVH4 node with 128-byte cachelines doesn't make sense, since you're still fetching the second set of 64 bytes you often won't need. The intersection HW isn't as expensive as memory hierarchy bandwidth, so BVH8 is basically a "free"(-ish) improvement, and it makes complete sense to do it in a single TMU of a single CU.

It feels like an easy improvement and doesn't say much about AMD's long-term plans for raytracing. It'll be interesting to see whether they do stick to adding as little specialised hardware as possible, as you say; I guess it depends how much better they can make it with this kind of incremental improvement.
 
Problem is, stacking multipliers gets out of control quickly. For example, say A is 1.5x faster and B is 2x faster gen on gen. After 2 gens that's 2.25x vs 4x (almost a 1.8x gap), after 3 gens 3.38x vs 8x (a 2.37x gap), and after 4 gens the gap is >3x. Obviously CU/SM counts, clock speeds etc. can change to make up ground, but at a certain point you can't keep relying on that to equalise things; you don't gain ground by improving less than your competitors.

Although if they manage a significant per-unit improvement plus CU count and core speed increases, I won't be complaining.
 
Improving hardware performance and allowing programmer control isn't necessarily a tradeoff. They could add a dedicated L1$ to the RT unit, then have user/driver-assigned tags for keeping/flushing $ entries. The hardware shouldn't care whether it's caching a BVH tree, SDFs, or anything else, but it's still a hardware improvement. There's a good amount of room for that.

Besides, after the initial burst of hardware improvements, a given overall design tends to asymptote towards zero improvement gen over gen. Each gen is likely to bring less improvement relative to the last. We can see that with stuff like Apple's CPU cores, where the days of year-over-year double-digit improvements have been gone a good while now.
 
Let’s wait until June to see if that claim holds. It’s not quite clear what FSR 3.1 is.

But decoupling frame gen from FSR 3.1 means it could be used with PSSR etc.

Has frame gen been used in any console titles yet?
Immortals of Aveum will support FSR 3 frame generation...

On all consoles!


.... we're assured by the Ascendant that frame-gen is in for the console builds.
 
Maybe some aspects of FSR3 could work better on console. For example, a game will have a much narrower performance window than across the diverse range of PCs, so you might be able to tune some parameters a little more tightly.

Also, a controller is a waaaaay slower device for moving a first person camera than a mouse with high sensitivity and so some types of artefact are probably a lot less likely to manifest.

If PS5 Pro upscaling doesn't support frame gen, perhaps we'll see a combination of PSSR and FSR3 frame gen for 120Hz in some Pro games.
 
These are all facts.
 
The only question is: if FSR 3.1 (oh man... it could have had a better name) really turns out to be as good as it was presented, then what was the need to develop PSSR, a PS-specific image enhancement technique?
 

Sony probably started in on PSSR well over a year ago.
 
Is there a chance FSR 3.1 might find its way onto the base PS5?
Don't know why it shouldn't be possible.

Would be interesting to see DF take a deep dive into how well frame gen (Nvidia and FSR3) actually works at 30fps, rather than just making assumptions about it.
 
Assumptions about it? We tested 30 fps DLSS 3 at launch in many titles and found it not good at all, except for something like "controller-only play in a flight simulator."

We found a 40 fps internal frame-rate for DLSS 3 with m+kb to be the point where it starts looking and feeling convincing. With FSR 3 it is even worse.

For me, the biggest issue with FSR 3 on console is that consoles do not employ a Reflex-like thing yet. So devs will have to roll their own automated frame-rate and utilisation limiter, or live with the added latency. FSR 3 has a good deal more latency than DLSS 3, which everyone online seems to just categorically ignore for some reason.
 
Isn't Xbox's DLI (Dynamic Latency Input) kind of a Nvidia Reflex counterpart?
 