Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

Seriously, you are calling me low-grade; at least try to hide your bias.

You are rude and condescending. To moderate, you do not need to wear the cap the way you are, or gatekeep discussion in a technical forum that is driven by actual data points, facts and results.

I am shocked at your display here; sadly, I was already warned this would happen.
Hello! I was responding to the discussion, which always criticises NXG's contributions rather than discussing technical points. I've not watched your content and have no personal opinion, so I can hardly be biased! I'm only biased in favour of genuine technical discussion held at a competent level which is fact-based and can discuss methods and datapoints.

IF NXG's tech breakdowns are as low-grade as everyone here states (my reading comes from skimming the discussion rather than being involved in it in depth), they shouldn't be posted here. There is no gate-keeping of info on the internet, which is as it should be, but that doesn't make everything out there worthy of discussion. No-one should be posting anything that they consider to be poor data. People should only paste stuff in this thread if they believe it to be valuable content, at which point the discussion should be about the content. Ideally there'd be consensus on what content is worthwhile and what isn't, so discussions don't constantly get bogged down in arguments about reputation and biases.

I respect your appearance here and hopefully you can talk about methods and results in a way that contributes to the discussion. I will pay more attention to this thread for the time being to ensure the discussion is healthy and productive.

Edit: One suggestion, based on how people have spoken of NXG videos in the past, coupled with your confusion here: I wonder if there's a communication issue, and the information you are presenting is being interpreted in a different way than intended? Perhaps people should ask what precise information you meant to share versus what they heard? NX Gamer is here to talk to directly and to ask questions about his positions, so no assumptions are needed. The discussion floor is open...
 
A final comparison on the VRAM thing.

[screenshot]


With low textures, my 3070 gets a 56 fps average with matched PS5 settings in this scene. This is what you expect from a 3070, being much better than the PS5 in both rasterization and ray tracing: a clear-cut 30% framerate difference.
[screenshot]

With very high textures, it drops to 41 fps.

[screenshot]


A clear and pronounced VRAM bottleneck, simple as that. The PS5 is not performing like a 3070; instead, the 3070 is unable to perform like it should.

Swap the 3070 for a 2080 Ti in this scene and it would get a 56 fps average with very high textures. I have no idea why some people attribute this to architectural differences. It is clear that the VRAM is not up to the task, and that is because it does not match what the game requests on PS5.

So when the dude says the 3070 would perform like a PS5 in that scene, yeah, it sadly does. But it happens because of the huge VRAM constraint. Having this kind of enormous VRAM-related performance drop while the card sits at 5.7 GB of VRAM usage is not cool at all. But that is for Nixxes to figure out.
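
Just to sanity-check the percentages above (a quick sketch; the 56 and 41 fps averages are the ones quoted in this post, while the PS5 figure is only an illustrative assumption):

```python
# Quick arithmetic on the averages quoted above (a sketch; the fps values
# come from this post and the PS5 figure is a hypothetical placeholder).
fps_3070_low_tex = 56.0      # 3070, PS5-matched settings, low textures
fps_3070_vhigh_tex = 41.0    # 3070, PS5-matched settings, very high textures

drop = (fps_3070_low_tex - fps_3070_vhigh_tex) / fps_3070_low_tex
print(f"Drop from switching low -> very high textures: {drop:.1%}")   # ~26.8%

# Assuming the PS5 averages ~43 fps in the same scene (illustrative only),
# the 3070's advantage at low textures would be:
fps_ps5 = 43.0
advantage = (fps_3070_low_tex - fps_ps5) / fps_ps5
print(f"3070 advantage over PS5 at low textures: {advantage:.1%}")    # ~30%
```
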
I actually warned NXG not to come to the wrong conclusions on August 16th via Twitter. I made this discovery mere days after the release of the game.

[screenshot]

Naturally he ignored it and went with his own deductions, thinking it was caused by "architectural" differences.

No, it is just a VRAM bottleneck. I experienced this back in 2017 with AC Origins. I had a 2 GB GPU; using medium textures instead of low textures tanked my framerate from 40 to 20. My 2 GB was failing because the PS4 had a 4 GB budget for the game, and I simply didn't have such a budget. What does this have to do with architecture? Nothing. You either have enough VRAM budget for those nice, clean textures, or you lose half your frames, stalling your GPU.

This is the exact same thing happening, but with a more modern GPU that tries to tackle something it is not designed for (native 4K, very high textures). This is very basic knowledge that almost every PC gamer has: when your framerate tanks because of textures, you do not put it down to "architectural stuff, I guess". You just know that you ran out of VRAM and now your GPU stalls, waiting for the VRAM to do its thing. It is never wanted, it is never a good thing. The solution is reducing texture quality or playing at a lower resolution.
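
To put rough numbers on why spilling out of VRAM stalls the GPU so badly (ballpark peak bandwidth figures; real sustained throughput is lower in every case):

```python
# Ballpark bandwidth comparison between local VRAM and the "spill" path to
# system RAM (approximate peak figures, for illustration only).
gddr6_bw  = 448.0   # GB/s: 14 Gbps GDDR6 on a 256-bit bus (both 2070 and 3070)
pcie3_x16 = 16.0    # GB/s: roughly what a 2070 has towards system RAM
pcie4_x16 = 32.0    # GB/s: a 3070 on a PCIe 4.0 platform
ddr4_3200 = 51.2    # GB/s: dual-channel DDR4-3200 peak

# Texture data that doesn't fit in VRAM is pulled from system RAM over PCIe,
# so the spill path is limited by the slowest link in that chain.
for name, pcie in (("2070 / PCIe 3.0", pcie3_x16), ("3070 / PCIe 4.0", pcie4_x16)):
    spill = min(pcie, ddr4_3200)
    print(f"{name}: local VRAM is ~{gddr6_bw / spill:.0f}x faster than the spill path")
```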

As long as he ignores that the 3070/2070 is being heavily and unnaturally handicapped by VRAM and instead insists on saying "this is caused by differences in architecture", this discussion will go nowhere. PS5-equivalent settings probably use 9-10 GB of VRAM on PS5 and demand just as much VRAM on PC. The 2080 Ti, 3060 and similar non-VRAM-constrained GPUs are not showing huge performance degradation, which shows this is a usual case of VRAM bottleneck and not a case of "architectural differences in play".
 
By the way, this is not the first time this has happened either. Godfall exhibited the exact same situation. I'm actually surprised that NX Gamer did not pounce on that game for his PS5 promotion.


You can see that the VRAM constraint buckles the 3070 and 3060 Ti to half of what the 2080 Ti is capable of.
At 1440p, they're much closer to each other.
But yeah, it has to be some weird architectural thing, sure.

[benchmark charts]



I mean, it is crystal clear: 8 GB was already stressed to its maximum without ray tracing involved. RT on top of 4K and super high textures simply destroys these cards. This is becoming common knowledge: it also happened with RE Village. Nothing surprising here.

If he wants decent PS5 to desktop GPU comparisons, he should choose the RTX 3060 / 6700 XT instead of hugely VRAM-constrained cards that are being pushed out of their designated spec. Yes, the 3070 is not targeted at 4K; it never has been. It can be a medium/entry-level 4K GPU for past titles, but it won't cut it for next-gen titles.
 
I actually warned NXG not to come to the wrong conclusions on August 16th via Twitter. I made this discovery mere days after the release of the game.
Please don't talk about NXG in third person now, seeing as he's posting here as @Michael Thompson. Raise any issues directly.

Caveat - it's not actually confirmed Michael Thompson is NXG at this point, but I take it on good faith for the time being. ;)
 
By the way, this is not the first time this has happened either. Godfall exhibited the exact same situation. I'm actually surprised that NX Gamer did not pounce on that game for his PS5 promotion.


You can see that the VRAM constraint buckles the 3070 and 3060 Ti to half of what the 2080 Ti is capable of.
At 1440p, they're much closer to each other.
But yeah, it has to be some weird architectural thing, sure.




I mean, it is crystal clear: 8 GB was already stressed to its maximum without ray tracing involved. RT on top of 4K and super high textures simply destroys these cards. This is becoming common knowledge: it also happened with RE Village. Nothing surprising here.

If he wants decent PS5 to desktop GPU comparisons, he should choose the RTX 3060 / 6700 XT instead of hugely VRAM-constrained cards that are being pushed out of their designated spec. Yes, the 3070 is not targeted at 4K; it never has been. It can be a medium/entry-level 4K GPU for past titles, but it won't cut it for next-gen titles.

While I get your point, I don't think I would be posting 4K benchmarks for a 3070 and a 3060 Ti, as they're not 4K GPUs.

I have a 3060 Ti and don't really see it as a 1440p GPU, never mind a 4K one.

And while I've not played every game with ray tracing, the only game I have played that hit the 8 GB limit at 1080p was Doom Eternal... but performance was solid.
 
[screenshot]

A usual case where you lose half your framerate. It literally uses 4.5 GB worth of normal RAM as a substitute for VRAM. The fact that dedicated + shared GPU memory always amounts to something like 10 GB is evidence that PS5-equivalent settings simply need/require a 10 GB total budget.

Naturally, using this much normal RAM as substitute VRAM tanks the card's performance, just as it does in Godfall, and just as it did back in 2017 with AC Origins. This is a widely known thing: if your GPU starts spilling into normal RAM, you lose huge amounts of frames. Compute power goes to waste because the GPU actively waits for slow DDR4 memory to catch up. If you have, as I said, enough VRAM budget, this does not happen, and you simply get your compute's worth of performance, whatever GPU you have.

DDR4 RAM is not as speedy as GDDR6 VRAM. Maybe DDR5-6400 would lessen the tanking effect, but at that point, if you can afford a DDR5 / 12th-gen Intel platform, you wouldn't be gaming on an 8 GB 2070/3070 either...

So if you have a 3070/2070, you do not play with tanked framerates where almost 50% of your compute power is wasted on stalling. You instead lower the resolution to 1440p, or suck it up and use High textures. That way you get the full power of your GPU, and in the case of 1440p you will even have higher framerates, upwards of 70+ fps instead of hovering around 40 fps or so.
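
If you want to log how close you are to the dedicated VRAM limit while playing, rather than eyeballing Task Manager, something like this works on NVIDIA cards (a sketch using the nvidia-ml-py / pynvml bindings; it only reports on-board VRAM, not the "shared GPU memory" spillover that Task Manager shows):

```python
# Minimal dedicated-VRAM logger for NVIDIA GPUs (sketch; install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        used_gb = mem.used / 1024**3
        total_gb = mem.total / 1024**3
        print(f"dedicated VRAM: {used_gb:.2f} / {total_gb:.2f} GB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```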

@davis.anthony

That's my point. The PS5's VRR Fidelity mode is able to do very high textures at native 4K and get upwards of 40 fps, which is respectable. The 2070 would be able to do so too if it had enough budget, but that did not stop @Michael Thompson using it as a comparison. I can actually get 55+ fps at native 4K with my 3070 with RT enabled and High textures all the time. Using Very High textures, however, tanks the performance a lot. 4K DLSS Quality gets me upwards of 70 fps, but I'm still unable to use Very High textures, sadly, without it tanking the performance. I have literally linked a video where a 3060 gets a 36+ fps average at native 4K with ray tracing. Not quite a match for the PS5, but still respectable, and it shows the 2070 is hugely underperforming due to VRAM constraints (yes, I'm being a parrot, but I have to get my point across).

The 2070/3060 Ti, and to some extent the 3070, are seen as highly capable 1440p cards. I wouldn't assign the 3060 Ti to 1080p; even in this game at native 1440p it is able to get upwards of 70 fps with RT enabled. Unless you chase 1440p/144, these cards are a perfect match for 1440p. However, I expect these VRAM issues to start creeping in at 1440p too, so 8 GB seems like a value that will be required at 1080p for true next-gen titles 2-3 years from now. The 3060 Ti at 1080p is a safe bet for the next 3-4 years, but the 8 GB 3070 Ti will really run into a lot of "enough grunt but not enough memory" situations.

Now, @Michael Thompson, you claim that no one can know whether the PS5 would perform better with more memory. To test this, you can look at the 24 GB RTX 3090 and the 12 GB 3080 Ti. The fact that both have enough VRAM and perform the same shows that the game does not request more than ~10 GB of memory. If it did scale and perform better with even more memory, as you suggest, the 3090 would take the ball and run away with it. No, the game is specifically designed around the ~10 GB budget the PS5 can allocate from its total 16 GB pool. You know this very well, better than I do, as a matter of fact. So trying to compare 8 GB GPUs, effectively handicapped to a ~6.4 GB pool, against the fully allocated ~10 GB pool of the PS5 will naturally see your 2070 tank to half of what the PS5 is capable of. Congratulations, you made a great discovery about VRAM bottlenecks. And no, a 36 GB PS5 would not render the game faster; it simply has the budget the game needs, because the game is specifically engineered for that budget. My example of the 3090 not outperforming the 3080 Ti in any meaningful way shows this.
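
For what it's worth, the budget arithmetic behind that argument looks roughly like this (the PS5 split and the Windows overhead figure are assumptions for illustration, not official numbers):

```python
# Rough memory-budget arithmetic behind the argument above (the PS5 split and
# the Windows/driver overhead are assumed values, not official figures).
ps5_total_gb    = 16.0   # unified GDDR6 pool
ps5_game_gpu_gb = 10.0   # assumed share used as "VRAM" at these settings

pc_card_vram_gb = 8.0    # RTX 2070 / 3070
pc_overhead_gb  = 1.6    # assumed Windows desktop + background apps resident in VRAM
pc_usable_gb    = pc_card_vram_gb - pc_overhead_gb

shortfall = ps5_game_gpu_gb - pc_usable_gb
print(f"Usable VRAM on the 8 GB card: ~{pc_usable_gb:.1f} GB")                  # ~6.4 GB
print(f"Shortfall vs the assumed PS5 budget: ~{shortfall:.1f} GB spills to system RAM")
```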
 
The 2070/3060 Ti, and to some extent the 3070, are seen as highly capable 1440p cards. I wouldn't assign the 3060 Ti to 1080p,

There are more games with RT than just Spider-Man. From an RT point of view, the 3060 Ti and 3070 are 1080p cards if you want more than one RT effect enabled.

They're 1440p cards with one RT effect enabled.

If, like me, you want as many RT effects enabled as possible and still want to get to 60fps, you need to use these cards at 1080p.

In 2022 the 2070 is, in my opinion, strictly a 1080p card with one RT effect enabled, as it gets beaten by a regular non-Ti 3060.

The 3080 Ti/3090/3090 Ti are strong 1440p cards with more than one RT effect enabled.

I believe we don't yet have a strong 4K card for more than one RT effect enabled (the RTX 4090 should change that).
 
A final comparison on the VRAM thing.


With low textures, my 3070 gets a 56 fps average with matched PS5 settings in this scene. This is what you expect from a 3070, being much better than the PS5 in both rasterization and ray tracing: a clear-cut 30% framerate difference.
With very high textures, it drops to 41 fps.
A clear and pronounced VRAM bottleneck, simple as that. The PS5 is not performing like a 3070; instead, the 3070 is unable to perform like it should.

Swap the 3070 for a 2080 Ti in this scene and it would get a 56 fps average with very high textures. I have no idea why some people attribute this to architectural differences. It is clear that the VRAM is not up to the task, and that is because it does not match what the game requests on PS5.

So when the dude says the 3070 would perform like a PS5 in that scene, yeah, it sadly does. But it happens because of the huge VRAM constraint. Having this kind of enormous VRAM-related performance drop while the card sits at 5.7 GB of VRAM usage is not cool at all. But that is for Nixxes to figure out.
I appreciate the research and testing, and you posting all your work here. It's really good and valuable information for readers. And I have to say, stick around; we could use more posters like you with the passion to investigate what's going on. It's pretty convincing to me that this is a VRAM allocation issue, once you see how much more VRAM is allocated on a 12 GB 3060, and the 3090 and 3080 Ti comparisons. This is pretty thorough. It's pretty clear the 8 GB cards are choking on VRAM limitations.

Good work. Keep it up.
 
There are more games with RT than just Spider-Man. From an RT point of view, the 3060 Ti and 3070 are 1080p cards with a good amount of RT enabled.

They're 1440p cards with some form of RT enabled.

If, like me, you want as much RT enabled as possible and still want to get to 60fps, you need to use these cards at 1080p.

In 2022 the 2070 is, in my opinion, strictly a 1080p card with minimal RT enabled.
That's an entirely different discussion, though. I would partly agree and partly disagree; even going on a game-by-game basis, I've actually been very happy with the 4K/RT performance of the 3070.

Cyberpunk, even with pure rasterization, barely pushes native 1440p/60 fps on a 3070, so putting RT on top of it really buckles the performance; yes, you're correct. Only at 1080p/DLSS Quality am I able to get a locked 60 with RT reflections/shadows and GI. Without GI, at 1440p/DLSS Quality, I'm getting 45-50 fps, and with VRR it is playable.

Dying Light 2 and Cyberpunk truly stress these cards and even foil them at 1440p, that is true. But running these games without RT at all is also a challenge, even at 1440p.

For other games, however, the experience can be quite different. Doom Eternal can practically be run at 4K/DLSS Balanced with ray tracing and get upwards of 90+ fps, a very smooth experience on a 144 Hz screen.
Guardians of the Galaxy at 4K/DLSS Performance with RT reflections (the only RT present in that game, though) also gets me a locked 60.

Metro Exodus EE with its full RT GI at 4K/DLSS Balanced, with RT set to High and RT reflections, also gave me a locked 60 fps experience. Control is also very scalable, but I haven't experimented with it yet. I think you also get super high framerates in RE Village.

So actually, with "some form of RT", 4K is very playable on a 3070, especially with DLSS. To me, even DLSS Performance looks fantastic, and Balanced/Quality are really excellent, especially for the performance they claw back. I agree that RT is not that transformative in GotG, Far Cry 6 and Doom Eternal. But still, you get cool reflections here and there, and it is nice to have great performance alongside it, honestly. Metro Exodus, though, is truly an outstanding exception to this rule.
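
As a reminder of why "4K DLSS Balanced/Performance" is so much cheaper than native 4K, here are the internal resolutions for the standard DLSS modes (the per-axis scale factors are the commonly documented defaults; some games override them, so treat these as approximate):

```python
# Internal render resolutions for the standard DLSS modes at a 4K output
# (commonly documented per-axis scale factors; approximate, games can override).
output = (3840, 2160)
modes = {
    "Quality":           2 / 3,   # ~0.667 per axis
    "Balanced":          0.58,
    "Performance":       0.50,
    "Ultra Performance": 1 / 3,
}

for name, scale in modes.items():
    w, h = int(output[0] * scale), int(output[1] * scale)
    pixel_share = scale ** 2
    print(f"{name:>17}: {w}x{h} internal (~{pixel_share:.0%} of native 4K pixels)")
```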

What I think is that, with how next-gen consoles are engineered, full-blown RT will never be used as a complete substitute for rasterization. So having some form of RT with good performance is also good enough in my book. I actually have a controversial opinion here: I believe Cyberpunk at 1080p with ray tracing looks marginally worse than 4K without ray tracing. Both DL2 and Cyberpunk look so blurry at 1080p that you question whether the full RT effects are worth the drop.

So 1440p with some form of RT and high framerates is going to be a great use case for the 3060 Ti/3070. Even 4K is possible with that mindset; I've played lots of games that way!
 
Again, it is NOT. The test is for the RTX 2070 at PS5 settings. I cannot, for the life of me, fathom how the collective here is just saying the test is flawed due to VRAM. The card ONLY has 8GB.

By this logic, any tests on PC with a 12900K CPU are equally "flawed", as the consoles do not have such a CPU and thus could be CPU-bound where the PC would not be.

This test is for people with that machine spec and this game running on it, end of. The results are valid, repeatable and reflective of the real world. If the 2070 had 12GB of VRAM it would do better, but if the PS5 had more VRAM it might too; neither does, so where is the logic?

There are two issues here:

1. You state that you're simply comparing how your specific machine (a common combination of CPU and GPU) performs vs the PS5 at PS5-like settings. That would be completely fair IMO if you actually presented it that way in the videos. But you don't. You show whole reels of footage (RT performance mode) of the 2070 underperforming in CPU-limited scenarios while providing commentary on how many percent faster the PS5 is than that GPU. That's simply wrong, because you are not measuring GPU performance there, yet you are giving the impression of the GPU itself being heavily outperformed by the PS5. I'll grant you do also discuss CPU limitations during those scenes, but you repeatedly flip-flop the messaging back to direct GPU performance comparisons. At one point you even criticize "other" content producers for using high-end CPUs (specifically the 12900K, so no prizes for guessing who that criticism was targeted at) in their GPU comparison pieces, which is in fact exactly what they should be doing, and what you should be doing if you want to make direct GPU-to-GPU performance comparisons. You isolate the tested component's performance by ensuring no other component acts as a bottleneck (a rough way to check this from frame-time captures is sketched after point 2 below).

I know you also added a whole section which is likely not CPU-limited in the Fidelity matched settings comparison, but that entire comparison is seemingly invalidated by the VRAM limitation, which I'll give you the benefit of the doubt on and assume you simply didn't realize at the time of putting out the video. Sure, you can argue that you're still showing the relative performance of the systems, because that's simply how much VRAM the component you're testing has. But if you're going to do that then you need to make it very clear to the viewer that the 2070 isn't being fully utilised at these particular settings because it's VRAM-limited. But you don't present it that way. In fact you never once mention VRAM limiting the GPU's frame rate there, and instead frame it as the PS5 GPU simply being more powerful and/or more efficient due to running in a console environment.

2. If you're truly only wanting to show the relative experience possible on two specific systems, as opposed to a deeper architectural analysis of those systems' performance potential, then the basis of the comparison is unfair to begin with. If you want to show how the experience on your machine compares in Spider-Man vs the PS5, then limiting the testing to PS5-matched settings which are suboptimal for that machine and ideal for the PS5 is skewing the result. The VRAM limitation which has been demonstrated spectacularly in this thread is a perfect example of that. It's a fact that the 2070 lags the PS5 in VRAM capacity, but it's also (generally speaking) matching or exceeding it in RT capability and has tensor cores to provide superior upscaling quality at a given internal resolution. So while showing the matched PS5 settings is valid, it should be balanced by showing what experience is possible at more PC-favoring settings. In this case that may have been something like High textures with very high ray tracing resolution and geometry, 16xAF and DLSS Performance, which should provide a much better balance of graphics and performance, better suited to the 2070's strengths, and compare far more favorably with the PS5 experience.
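
To illustrate what I mean by isolating the GPU, here's a rough way to classify frames as CPU- or GPU-bound from a frame-time capture (a sketch; the CSV name and the "frame_time_ms" / "gpu_busy_ms" columns are hypothetical, so map them to whatever your capture tool actually exports):

```python
# Rough CPU-bound vs GPU-bound classifier for a frame-time capture (sketch;
# column names are placeholders for whatever your capture tool writes out).
import csv

def classify(path: str, threshold: float = 0.90) -> None:
    cpu_bound = gpu_bound = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            frame_ms = float(row["frame_time_ms"])
            gpu_ms = float(row["gpu_busy_ms"])
            # If the GPU was busy for nearly the whole frame, the GPU is the
            # limiter; if it sat idle for a large chunk, something upstream
            # (usually the CPU) was.
            if gpu_ms / frame_ms >= threshold:
                gpu_bound += 1
            else:
                cpu_bound += 1
    total = cpu_bound + gpu_bound
    print(f"GPU-bound frames: {gpu_bound / total:.0%}, CPU-bound-ish: {cpu_bound / total:.0%}")

# classify("spiderman_capture.csv")
```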

Some basic facts which are (intentionally) being missed here:-

- The results now are how this card and others (as evidenced here by many of you) will perform and display Mips, thus if you play on PC you get this experience and if you play on PS5 you get this experience - Fact 1

So call it out as a bug specific to the PC version. Give the PS5 version all the kudos you want for not having that bug, but don't frame that bug as some kind of general architectural deficiency of the PC platform which "requires a $500-$600 CPU to address".

- Changes have come and will continue to come that may help reduce this and remove it on certain cards (I even state that in the video); that does not change the results as they are now and how all will play - Fact 2

That's fair enough, but see above. It's not that fact that you call attention to the difference, it's how you frame it as a platform architecture advantage rather than what it is - a software bug.

- Reducing the VRAM usage can and does help reduce the Mip issues due to the fact the allocated pool of RAM can store the higher mips, shown in the video and repeatable - Fact 3

But framing this as the solution to the problem (which it isn't, because VRAM often has no impact on the issue), thus reinforcing the false argument that this is simply an architectural deficiency that needs to be brute-forced past on the PC... is wrong.

I am dizzy from all the non-technical discussion and hoop-jumping, merry-go-round, gatekeeping and attempts to undermine me here with no success or credence. We have gone from VRAM, to the CPU affecting GPU-bound scenarios, to the game being a poor port, to wait till they fix it, to the test not being fair. I am so very confused and did not expect this sort of behaviour and lack of logic on such a forum.

VRAM has been shown incontrovertibly within this thread to be a limitation on the 2070 at the settings you are using.

I don't think anyone has claimed that the CPU is a bottleneck in GPU-bound scenarios, only pointed out that you make GPU performance comparisons in CPU-bound scenarios.

Personally I think the game is a great port, but it was also a mammoth undertaking and is open to much closer scrutiny than the vast majority of ports, so inevitably a few fairly serious bugs have been identified. But as you state, Nixxes seems to be quick to fix them; hopefully that will be the case with the VRAM under-allocation and mip loading issues. Until then, the onus is on testers like you to ensure the public is properly informed where these bugs impact testing, and not to use them to make potentially misleading claims about platform architecture advantages or raw performance.

A usual case where you lose half your framerate. It literally uses 4.5 GB worth of normal RAM as a substitute for VRAM. The fact that dedicated + shared GPU memory always amounts to something like 10 GB is evidence that PS5-equivalent settings simply need/require a 10 GB total budget.

That's a really good find. I didn't even know that column existed in Task Manager, but I'll be using it in future! iroboto beat me to it, but I just wanted to second his statements about your excellent contributions in this thread. It's certainly changed the way I think about VRAM. Very informative!
 
A final comparison on the VRAM thing.

[screenshot]


With low textures, my 3070 gets a 56 fps average with matched PS5 settings in this scene. This is what you expect from a 3070, being much better than the PS5 in both rasterization and ray tracing: a clear-cut 30% framerate difference.
[screenshot]

With very high textures, it drops to 41 fps.

[screenshot]


A clear and pronounced VRAM bottleneck, simple as that. The PS5 is not performing like a 3070; instead, the 3070 is unable to perform like it should.

Swap the 3070 for a 2080 Ti in this scene and it would get a 56 fps average with very high textures. I have no idea why some people attribute this to architectural differences. It is clear that the VRAM is not up to the task, and that is because it does not match what the game requests on PS5.

So when the dude says the 3070 would perform like a PS5 in that scene, yeah, it sadly does. But it happens because of the huge VRAM constraint. Having this kind of enormous VRAM-related performance drop while the card sits at 5.7 GB of VRAM usage is not cool at all. But that is for Nixxes to figure out.
The PS5 does use IGTIAA in Fidelity Mode 4K VRR, correct? It's also dynamic, so I'm not sure how close this is, but here is how it runs on my 2080 Ti at native 4K/TAA with Fidelity Mode settings in that scene. My 2080 Ti is paired with an i9-9900K.

[screenshot: Spider-Man, native 4K + TAA]

In that scene, it fluctuates between 61-68fps.

Here is how it runs at Native 4K/ITGI Ultra Quality

[screenshot: Spider-Man, native 4K + ITGI Ultra Quality]

Once again, in that scene, it fluctuates between 75-81fps. There is no DSR in any of those examples. Whatever the case, I think that the PS5 being equivalent to a 3070/2080 Ti is a gross exaggeration. The 2080 Ti is A LOT faster in those scenes.
 
2. If you're truly only wanting to show the relative experience possible on two specific systems, as opposed to a deeper architectural analysis of those systems' performance potential, then the basis of the comparison is unfair to begin with. If you want to show how the experience on your machine compares in Spider-Man vs the PS5, then limiting the testing to PS5-matched settings which are suboptimal for that machine and ideal for the PS5 is skewing the result. The VRAM limitation which has been demonstrated spectacularly in this thread is a perfect example of that. It's a fact that the 2070 lags the PS5 in VRAM capacity, but it's also (generally speaking) matching or exceeding it in RT capability and has tensor cores to provide superior upscaling quality at a given internal resolution. So while showing the matched PS5 settings is valid, it should be balanced by showing what experience is possible at more PC-favoring settings. In this case that may have been something like High textures with very high ray tracing resolution and geometry, 16xAF and DLSS Performance, which should provide a much better balance of graphics and performance, better suited to the 2070's strengths, and compare far more favorably with the PS5 experience.

All good points, but I may slightly quibble with this atm, as it's not as clear-cut as it is with other titles, such as when you're comparing DLSS to checkerboarding and it's an obvious win for DLSS. As I've pointed out before, DLSS can have some pretty glaring issues with specular highlights when combined with DoF effects, which happen quite often in this game, especially when using DLSS Performance.

Certainly in more favorable scenes/lighting conditions (and when comparing it to the PS5 RT Performance mode, where the internal resolution is lower) DLSS can definitely resolve more detail; in screenshots you could reasonably come to the conclusion that DLSS is far superior - but in motion, with common actions, not necessarily so. Nixxes have spoken previously about how they had some trouble with DLSS in this game, as it's highly tuned to Insomniac's own reconstruction tech, so who knows if they can even address this - the PS5 does exhibit these artifacts too, but to a far lesser degree. Something is going on with how these effects combine with DLSS in general - it's not simply a case of DLSS's lower internal resolution, as they're far less pronounced when using no reconstruction tech at all, so who knows what the specific cause is.

Hopefully Nixxes can improve this; if they can wrangle these artifacts then DLSS in this game would actually end up being an excellent implementation, but they stand out to me too much atm.
 
- Reducing the VRAM usage can and does help reduce the Mip issues due to the fact the allocated pool of RAM can store the higher mips, shown in the video and repeatable - Fact 3
That is not a fact. And that's the problem I have with you... you present your theories as facts when they are NOT. When are you going to get it through your head? The same EXACT settings that you're running can be run on GPUs with FAR MORE VRAM... and the issue persists... EVEN AT 1080p...

Your response to that is "well that's not my GPU and not my test".... handwaving away all the other evidence of it happening that's being brought forward... while demanding people accept everything you say.

This issue is 100% reproducible.
 
The PS5 does use IGTIAA in Fidelity Mode 4K VRR, correct? It's also dynamic, so I'm not sure how close this is, but here is how it runs on my 2080 Ti at native 4K/TAA with Fidelity Mode settings in that scene. My 2080 Ti is paired with an i9-9900K.

[screenshot: Spider-Man, native 4K + TAA]

In that scene, it fluctuates between 61-68fps.

Here is how it runs at Native 4K/ITGI Ultra Quality

[screenshot: Spider-Man, native 4K + ITGI Ultra Quality]

Once again, in that scene, it fluctuates between 75-81fps. There is no DSR in any of those examples. Whatever the case, I think that the PS5 being equivalent to a 3070/2080 Ti is a gross exaggeration. The 2080 Ti is A LOT faster in those scenes.

Joined today; thanks for the quality technical discussion :) Seems the bad-faith video on Spider-Man actually spawned new quality B3D forum members :p
You too, welcome.
 
I don't know the exact details; I thought the PS5 runs 40fps locked at native 4K in this Fidelity mode?

If someone can list and explain to us all the modes available for the PS5 in Spider-Man, it would be of tremendous help.

30fps locked with a DRS range from 4K down to 1512p, according to DF. But there is an option to increase the framerate cap to 50fps in VRR mode, as I understand it, so that we can effectively see what the unrestrained performance is. I believe that's what NXG is doing.

So without pixel counting it's impossible to know what resolution the PS5 is running at, other than that it's no higher than 4K and probably no lower than 1512p. If the 2080 Ti is faster at native 4K, as shown here, then it's definitely faster than the PS5 in that scene. But if the PS5 is running at less than 4K in that scene, the gap may be even wider.
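
For scale, the pixel counts at the two ends of that DRS range (assuming "1512p" means a 16:9 frame, i.e. 2688x1512):

```python
# Pixel-count comparison across the reported DRS range (assumes 1512p = 2688x1512).
res_4k    = 3840 * 2160   # 8,294,400 pixels
res_1512p = 2688 * 1512   # 4,064,256 pixels

print(f"1512p renders ~{res_1512p / res_4k:.0%} of the pixels of native 4K")  # ~49%
```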
 
Better than nothing.

If Sony is serious about PC gaming, we can expect many more of these ports, so I think it's important to know what PC hardware performs equivalently to the PS5 when actually playing games.

For example, if the performance difference between equivalent hardware and the PS5 remains this way in current-gen-only ports, then I will have to buy a new laptop soon, as my i7-9750H is on par with an i5-10400, which is similar to a 3600, and that gets outperformed dramatically by the PS5 CPU in Spider-Man. So if these added layers of decompression, API etc. are here to stay, then my laptop won't be able to keep up once cross-gen is over, as CPU performance is hard to scale and PS5-level CPU performance might be the minimum requirement in the future. Likewise, for beefier PCs out there, it is also important to know what CPU can achieve double the performance of the PS5, to get 60 FPS in a CPU-limited 30 FPS current-gen-only game in the future.

So yes, I really need those comparisons at unlocked framerates to gauge the performance of my machine against the common denominator for true current-gen-only games.

True. However, there won't be as many Sony ports to PC as multiplatform games that target current gen, I hope. For a system to play ports coming from another system they were originally designed for, you usually have to overcompensate with more capable hardware, even if a quality studio like Nixxes is behind the port.

Either way, I wouldn't want to be on a 6-core CPU going forward anyway. Your 9750H is an older-gen Intel 6-core CPU; the PS5's CPU should outrun it in most instances, by quite a lot.

I feel confident my 5800H/3070 mobile laptop will tag along just fine at PS5-equivalent settings even with ports, probably with somewhat higher settings here and there.
 
I don't know the exact details; I thought the PS5 runs 40fps locked at native 4K in this Fidelity mode?

If someone can list and explain to us all the modes available for the PS5 in Spider-Man, it would be of tremendous help.
Also related: I'm unclear on what happens in the performance mode on the PS5 with the framerate unlocked - what happens with the dynamic res? I'm not sure how that even works with an unlocked framerate. Wouldn't it just sit at its lowest possible bound all the time if there's no specific framerate target?

Since @Dampf brings this up: I also talked about this early on, with the comparisons not being done with an unlocked framerate on the PS5, but I didn't really think about how comparing GPU performance works when dynamic res is involved - wouldn't it potentially involve pixel counting at every measurement point...? I'm not sure how it really works, but the refrain I often see in YouTube videos showing this is "Now you see how much consoles are held back when they target a fixed framerate!" - that doesn't necessarily make sense, though, if it just means the res is dropping to achieve framerates over 60.

Now for comparing CPU bottlenecks, sure, it makes sense to use every unlocked mode on the PS5 to compare as long as the settings are equalized. GPU-wise though it's not as simple, at least in the performance modes.

If Fidelity mode is 4K fixed, and that remains so unlocked, then at least that mode can be used to draw some conclusions there.
 
Also related: I'm unclear on what happens in the performance mode on the PS5 with the framerate unlocked - what happens with the dynamic res? I'm not sure how that even works with an unlocked framerate. Wouldn't it just sit at its lowest possible bound all the time if there's no specific framerate target?

Since @Dampf brings this up: I also talked about this early on, with the comparisons not being done with an unlocked framerate on the PS5, but I didn't really think about how comparing GPU performance works when dynamic res is involved - wouldn't it potentially involve pixel counting at every measurement point...? I'm not sure how it really works, but the refrain I often see in YouTube videos showing this is "Now you see how much consoles are held back when they target a fixed framerate!" - that doesn't necessarily make sense, though, if it just means the res is dropping to achieve framerates over 60.

Now for comparing CPU bottlenecks, sure, it makes sense to use every unlocked mode on the PS5 to compare as long as the settings are equalized. GPU-wise though it's not as simple, at least in the performance modes.

If Fidelity mode is 4K fixed, and that remains so unlocked, then at least that mode can be used to draw some conclusions there.

@Michael Thompson provides a clue to this in his video. I don't know whether it's just an assumption or he had it confirmed somehow, but he briefly mentions that the unlocked Fidelity mode retains a 30fps target. I would assume that means the game sets the resolution (from a limited set of just 3 or 4 options, if I recall @Dictator correctly) to ensure the framerate doesn't dip below 30fps for a given scene, but that may still result in a higher-than-30fps frame rate while the game is still unable to move up to the next resolution step, which would cause dips below 30.
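
Conceptually that would look something like the sketch below; the resolution steps and the cost model are made up for illustration based on the description above, not Insomniac's actual logic.

```python
# Conceptual stepped-DRS heuristic as described above: pick the highest
# resolution step whose estimated GPU cost still fits a 30 fps budget, then
# run uncapped at that step. Steps and timings are illustrative only.
FRAME_BUDGET_MS = 1000.0 / 30.0
STEPS = [(3840, 2160), (3264, 1836), (2880, 1620), (2688, 1512)]  # high -> low

def pick_step(gpu_ms_at_4k: float) -> tuple[int, int]:
    """Scale the measured 4K cost by pixel count and return the highest
    resolution step that fits within the 30 fps frame budget."""
    pixels_4k = 3840 * 2160
    for w, h in STEPS:
        estimated_ms = gpu_ms_at_4k * (w * h) / pixels_4k
        if estimated_ms <= FRAME_BUDGET_MS:
            return (w, h)
    return STEPS[-1]  # fall back to the lowest step

# A scene costing 45 ms at native 4K settles on a lower step, then runs
# uncapped (possibly well above 30 fps) at that resolution.
print(pick_step(45.0))
```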
 
Also related: I'm unclear on what happens in the performance mode on the PS5 with the framerate unlocked - what happens with the dynamic res? I'm not sure how that even works with an unlocked framerate. Wouldn't it just sit at its lowest possible bound all the time if there's no specific framerate target?

Since @Dampf brings this up: I also talked about this early on, with the comparisons not being done with an unlocked framerate on the PS5, but I didn't really think about how comparing GPU performance works when dynamic res is involved - wouldn't it potentially involve pixel counting at every measurement point...? I'm not sure how it really works, but the refrain I often see in YouTube videos showing this is "Now you see how much consoles are held back when they target a fixed framerate!" - that doesn't necessarily make sense, though, if it just means the res is dropping to achieve framerates over 60.

Now for comparing CPU bottlenecks, sure, it makes sense to use every unlocked mode on the PS5 to compare as long as the settings are equalized. GPU-wise though it's not as simple, at least in the performance modes.

If Fidelity mode is 4K fixed, and that remains so unlocked, then at least that mode can be used to draw some conclusions there.
There is a way to do this. I've seen a final output of it with Watch Dogs Legion.
It's still with Rich, and he has to determine if it's worthwhile to present results in that way. But I've seen it, and it's pretty neat. I don't think it works everywhere, it's a bit troublesome to explain the graph, and I think he wants to avoid being in a situation where people misinterpret the values. But hopefully one day we see it happen. I've left him all my tools to do it, so we know it works, and he's written results back over the videos so it looks like a DF graph; hopefully he finds something he's comfortable with. Or he can just ignore it entirely if it's going to muddy the waters unnecessarily.

The challenge, however, would be acquiring that precise frame rate with VRR. The issue here is the moving frame time; it makes comparison between two pieces of footage very challenging, because you don't have the same frame times, and then I'd have to compare resolution on what is likely not the same frame.
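
One way around the mismatched frame times (a sketch, assuming you already have per-frame durations for each capture) is to resample both runs onto a common time axis and compare instantaneous frame rates rather than individual frames:

```python
# Resample two variable-frame-time captures onto a shared time axis so their
# instantaneous frame rates can be compared even though individual frames
# never line up (sketch; frame times are per-frame durations in ms).
import numpy as np

def fps_on_common_axis(frame_times_ms, axis_s):
    times_s = np.cumsum(frame_times_ms) / 1000.0    # timestamp of each frame
    frame_index = np.arange(1, len(times_s) + 1)    # cumulative frame count
    # Interpolate cumulative frames onto the shared axis, then differentiate
    # to get frames per second at each sample point.
    cumulative = np.interp(axis_s, times_s, frame_index)
    return np.gradient(cumulative, axis_s)

# Hypothetical captures: one steady ~60 fps run, one fluctuating run.
capture_a = [16.7] * 600
capture_b = [20.0, 25.0, 18.0] * 200
axis = np.linspace(0.5, 9.5, 200)                   # shared 0.5 s - 9.5 s window
fps_a, fps_b = (fps_on_common_axis(c, axis) for c in (capture_a, capture_b))
print(f"mean fps A: {fps_a.mean():.1f}, mean fps B: {fps_b.mean():.1f}")
```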
 