Digital Foundry Article Technical Discussion [2022]

First post on this forum after quite a long time lurking ;)
So, what I think about the Spider-Man port to PC is that, contrary to the overall opinion in this thread, the PC - not any individual high-end user, mind you, but the PC as a platform of gamers - is in trouble. Consoles offer much better performance per watt than PCs; they are engineered that way.
You can always make things run more smoothly if you go from general purpose (PC) to something built specifically to render game code as efficiently as it possibly can.
Console APIs especially (and here Sony really shines) are much leaner and have every layer removed that is not needed.

That was and still is true for the PS4. The latest video from Richard Leadbetter shows this - the PS4 cannot be matched on PC with similar hardware. He runs the game at 900p and it still performs worse, while the base PS4 game renders at 1080p almost all of the time, with a hard-locked 30fps. And that is with Richard Leadbetter using a CPU that runs circles around the PS4's Jaguar cores.
The performance edge that consoles have did not manifest as much last generation, mainly because of the extraordinary edge the PC community had with their CPUs over the Jaguar cores. The console performance and API edge over PC was in a way "sacrificed" by using a CPU that would, under normal (PC) circumstances, be totally unfit for gaming.

Only the lower-friction environment on consoles (hardware- and software-wise) made it possible to build a whole generation of still very good-looking games on a netbook CPU, at a time when much more performant PC CPUs were in common use.
This time, however, everything is different!
This time a very good CPU is used, but it still sits in a lean, frictionless console environment. Even more so on PS5, where the dedicated hardware decompression block frees up even more CPU time. Something the PC simply cannot copy 1:1.
DirectStorage will help, but it will come at a cost.
I'll skip over a few points now and address my take on the question of whether an RTX 3070 is sufficient to play future PS5 ports at the same (or even better) detail settings.
My answer is of course NO.
We can all see what it takes, CPU- and GPU-wise, to play a mere PS4 game uplifted by a PS5 patch on PC at the same settings. Gone are the times when far weaker PC hardware (read: GTX 750) could bring something to the table.

I saw, a page or so earlier in this thread, the idea that people with RTX 30 series or even RTX 20 series GPUs would be fine simply because RTX IO is here to save the day. It was acknowledged that RTX IO on the RTX 20 and 30 series does not even come with dedicated decompression hardware; Nvidia simply wants to use under-utilised GPU resources. Someone stated that a mere 2 TF sacrifice would be enough to match the PS5's I/O throughput. Also wrong - RTX 30 cards from the 3070 upwards enjoy a rasterisation edge over the PS5, but it is not as if they are in the next league. For RTX 20 cards? Forget it. I maintain that when the first second-wave PS5 exclusive arrives, which should lean hard on the PS5's I/O block, the RTX 20 cards are not going to cut it anymore. And there is a real chance that an RTX 3070 could not match a PS5.
Heck, I want to see a Ratchet and Clank: Rift Apart port to PC right now. Given how a PS4 game with a PS5 patch stresses PC hardware right now, does anyone believe that Rift Apart would run very well on the same range of today's hardware? The game relies heavily on real-time decompression of several GB/s with every camera move - the "what is not in sight is not in memory" approach that Cerny forecast in his Road to PS5 talk. And Rift Apart does not even make the PS5 break a sweat - we know that because of the swap-out tests with external SSDs which are technically below Sony's recommendation for an SSD upgrade.
And please do not read this post as a stab against PC in general. I game on one as well - but things need to be addressed in all fairness.

First, I think this PS5 (console) vs PC debate is better suited to a system wars section or something. Anyway, a 7870/i5 setup from 2013 still does fine at console-equivalent (base console) settings, even today. The largest hurdle would be VRAM, if anything. The GTX 750 didn't age very well, but that's down to Kepler's architecture.
The notion that an RTX 3070 is no match for the PS5 is kinda... out of this world. The non-Ti 3060 actually runs Spiderman, and practically everything else, just as well - mostly better whenever RT is involved - versus the PS5 versions. I shared a YT video here before: a 3700X/3060 combo outperforms the PS5 version of Spiderman, at higher settings.

There's no indication anywhere that you need 'more hardware to make up for the evil, bad PC hardware'; in almost all titles, both PS4 and PS5, you can get by with ballpark-matching hardware.
I also doubt that NV was lying when they said the performance impact will be minimal, to the point you won't notice when the GPU is decompressing. The thing is, I think GPU decompression is the way forward; GPUs are extremely efficient at doing parallel decompression work. With today's drives being capable of 7 GB/s and faster before decompression, I see no reason for concern that GPUs would lose any meaningful performance doing on-GPU decompression. GPUs could easily scale way beyond what the consoles are doing in this regard.

Gone are the days when you needed more brute-force hardware to compete with the consoles (pre-PS4 generation); these days you roughly need matching specs to get by. Probably because consoles have moved over to x86/AMD hardware, and Windows, APIs and engines have made strides since. Heck, the latest DF video shared today shows the 750 Ti holding up very, very well. And that's for a remake/PS5 title.

The saying 'PC as a platform is in trouble' has been used in platform warring since forever, yet the platform never died. The wattage notion isn't true either; it depends on what you want. A laptop sporting a 5800H/3070M does outperform the PS5 yet draws fewer watts from the wall. Power efficiency, right? Note that laptops have become the most popular choice among gamers these days (with numbers growing). Also, a 5600X/3060 setup isn't drawing crazy numbers either.
I can turn this around and point to the consoles 'being in trouble': we're clearly moving away from singular platforms, everything is going multiplatform these days - see Sony extending to the PC and mobile markets as well as streaming and services. I think it will be a while, but the PC will probably exist longer as a platform, though perhaps with gaming in decline. Still, gaming on the go/mobile will probably always exist, hence laptops in PC form will exist way beyond the desktop and the console box under the TV.
Though to be fair, I think both PC and console will co-exist for a long time to come. No need to be concerned, really.

Rift Apart will do well on PC if you're equipped with the right hardware: a PCIe 4 NVMe drive (already at 13 GB/s raw numbers today), a 3700X or better and a 3060/6600 XT or better, and you're probably looking at a better experience. Even with a fast PCIe 3 NVMe.
 
First Post on this Forum after quite a lurking time ;)

Welcome, and congratulations on the first post, it's certainly an interesting one!

So, what I think about the Spider-Man port to PC is that, contrary to the overall opinion in this thread, the PC - not any individual high-end user, mind you, but the PC as a platform of gamers - is in trouble. Consoles offer much better performance per watt than PCs; they are engineered that way.
You can always make things run more smoothly if you go from general purpose (PC) to something built specifically to render game code as efficiently as it possibly can.
Console APIs especially (and here Sony really shines) are much leaner and have every layer removed that is not needed.

I'm not sure why this would spell trouble for the PC as a platform. Consoles have always been dedicated games machines versus the PC's more general-purpose design. However, that gap has narrowed significantly over the last 2-3 generations, with PCs becoming increasingly console-like in their gaming focus - thinner APIs, an ever-increasing gaming focus in Windows, DirectStorage (the first gaming-focussed storage API) - while the consoles become more general-purpose and PC-like. By this stage there isn't a lot separating the two in terms of gaming efficiency, especially once DirectStorage gets its full release. We're really just talking about ultra-thin console APIs versus thin PC APIs at that point (and PC APIs will continue to evolve in that respect, whereas console APIs have likely already got as low-level as they can).

In terms of actual performance per watt, I disagree that the consoles are better. In fact PC parts, particularly in the upcoming generation, will offer considerably more performance per watt than the consoles; that should go without saying since they are newer, more efficient architectures on better nodes from the same vendors. What you're interpreting as poor performance per watt in the PC space is merely the high-end components being pushed well past the efficiency sweet spot on the power curve to extract the highest possible performance, in a PC environment that allows such things.

Look at laptops though and they tell a very different story. There it's possible to get greater than console performance at a much lower power draw.

That was and still is true for the PS4. The latest video from Richard Leadbetter shows this - the PS4 cannot be matched on PC with similar hardware. He runs the game at 900p and it still performs worse, while the base PS4 game renders at 1080p almost all of the time, with a hard-locked 30fps. And that is with Richard Leadbetter using a CPU that runs circles around the PS4's Jaguar cores.

Richard makes it clear in his comparison that this is not a like-for-like test. The PC is running the remastered version of the game while the PS4 is running the last-gen version. The remastered version has many enhancements over the last-gen version which you can't turn off on PC for comparison purposes. As Richard said, the comparison was simply for the lolz and not to be taken as some kind of relative performance benchmark. Add to that that both of those GPUs are so old that they long ago stopped receiving driver support and have almost certainly not been optimised for by Nixxes.

The performance edge that consoles have did not manifest as much last generation, mainly because of the extraordinary edge the PC community had with their CPUs over the Jaguar cores. The console performance and API edge over PC was in a way "sacrificed" by using a CPU that would, under normal (PC) circumstances, be totally unfit for gaming.
Only the lower-friction environment on consoles (hardware- and software-wise) made it possible to build a whole generation of still very good-looking games on a netbook CPU, at a time when much more performant PC CPUs were in common use.


But almost all games last generation were capable of running perfectly well on a dual-core CPU, which despite having roughly 4x more performance per core than the console Jaguars also had 4x fewer cores, so the overall performance is roughly equivalent. So it's not as if PCs were throwing enormously greater amounts of CPU power at the problem to overcome this performance edge you speak of. Naturally a little more CPU power will always be needed on the PC side to overcome the thicker APIs.

This time, however, everything is different!
This time a very good CPU is used, but it still sits in a lean, frictionless console environment.

It's not really that different at all. If anything, this generation could play more towards the PC because we now have CPUs of very similar architecture in the consoles, with similar core counts and IPC. So optimisations done for those console CPUs should carry over quite naturally to PC. Last gen, optimisations made for the 8-core Jaguars would have been much harder to translate to 2- and 4-core PC CPUs with much wider cores.

Even more so on PS5, where the dedicated hardware decompression block frees up even more CPU time. Something the PC simply cannot copy 1:1.
DirectStorage will help, but it will come at a cost.

What cost specifically? DirectStorage will remove the majority of the decompression load from the CPU. There is no CPU cost to this, it's a massive CPU saving. The GPU will be doing the work but by all reports the performance impact there will be negligible, or in Nvidia's words - barely measurable. It's not as if the spare GPU power isn't available to absorb this negligible impact.

I'll skip over a few points now and address my take on the question of whether an RTX 3070 is sufficient to play future PS5 ports at the same (or even better) detail settings.
My answer is of course NO.
We can all see what it takes, CPU- and GPU-wise, to play a mere PS4 game uplifted by a PS5 patch on PC at the same settings. Gone are the times when far weaker PC hardware (read: GTX 750) could bring something to the table.

As above, this is a false comparison, Richard was quite clear about that in his video and I'm sure would be disappointed to see it being twisted in this way.
 
Okay this is a first, I've had to break this post into two parts because I apparently broke the character limit of a single post!

I saw, a page or so earlier in this thread, the idea that people with RTX 30 series or even RTX 20 series GPUs would be fine simply because RTX IO is here to save the day. It was acknowledged that RTX IO on the RTX 20 and 30 series does not even come with dedicated decompression hardware; Nvidia simply wants to use under-utilised GPU resources. Someone stated that a mere 2 TF sacrifice would be enough to match the PS5's I/O throughput. Also wrong - RTX 30 cards from the 3070 upwards enjoy a rasterisation edge over the PS5, but it is not as if they are in the next league. For RTX 20 cards? Forget it. I maintain that when the first second-wave PS5 exclusive arrives, which should lean hard on the PS5's I/O block, the RTX 20 cards are not going to cut it anymore. And there is a real chance that an RTX 3070 could not match a PS5.

This is a pretty confusing paragraph which seems to wrongly conflate IO performance with GPU performance. Let's try and break this down a bit.

RTX IO, or more generally GPU-based decompression, is there to free up CPU cycles by moving that activity onto the GPU. Nvidia have stated that the impact of that on the GPU is "barely measurable", which stands to reason given that GPUs have orders of magnitude more compute power than CPUs. The RTX 3070, for example, has roughly double the compute performance of the PS5 (20 TF vs 10 TF). That doesn't translate directly into game performance, but if GPU decompression is going to use that compute, then there is more than enough on tap with any Ampere card to spare for this.
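Just to make that scaling argument concrete, here's a rough back-of-envelope sketch in Python. To be clear, the decompression throughput per TFLOP used below is an assumed placeholder, not a vendor figure - the point is only how the cost scales with streaming rate and GPU compute.

Code:
# Purely illustrative back-of-envelope, NOT a benchmark. The decompression
# throughput per TFLOP is an assumed number; real RTX IO / GDeflate figures
# would have to come from vendor or developer benchmarks.

def decompression_cost_fraction(gpu_tflops, stream_gb_s, gb_per_s_per_tflop=2.0):
    """Rough fraction of a GPU's compute notionally tied up by decompression."""
    return (stream_gb_s / gb_per_s_per_tflop) / gpu_tflops

for name, tflops in [("RTX 3070 (~20 TF)", 20.0), ("RTX 2070 (~7.5 TF)", 7.5)]:
    for rate in (2.0, 5.0):  # compressed GB/s streamed, illustrative values
        frac = decompression_cost_fraction(tflops, rate)
        print(f"{name}, {rate} GB/s stream -> ~{frac:.1%} of compute (assumed rate)")

Even if the assumed rate is off by a large factor either way, the takeaway is the same: the cost grows with streaming rate and shrinks as GPU compute grows, so it hits a 20 TF card far less than a 7.5 TF one.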

Now putting the IO point aside and talking about raw performance, a 3070 is roughly equivalent to a 2080 Ti. All benchmarks we've seen to date put PS5 performance at between 2060 Super and 2080 level, depending largely on the use of RT, with more RT pushing the PS5 down that stack. RT will obviously increase as the generation goes on and we see more current-gen exclusives, so that performance advantage is likely to solidify over time. Spiderman is just the latest in a long line of examples of this, with the 3060 (2070 level) offering an equivalent or better experience than the PS5.

Heck, I want to see a Ratchet and Clank: Rift Apart port to PC right now.

You're not the only one!

Given how a PS4 game with a PS5 patch stresses PC hardware right now, does anyone believe that Rift Apart would run very well on the same range of today's hardware?

Absolutely, there's no reason to assume it wouldn't. Spiderman Remastered is a PS4 game that's been remastered for the PS5, and it stresses the PS5 (down to 1080p in performance RT mode). Just because it originated on the PS4 doesn't mean it's easier going on the hardware, especially given it implements RT, which is very heavy on both the CPU and GPU.

The game relies heavily on real-time decompression of several GB/s with every camera move - the "what is not in sight is not in memory" approach that Cerny forecast in his Road to PS5 talk. And Rift Apart does not even make the PS5 break a sweat - we know that because of the swap-out tests with external SSDs which are technically below Sony's recommendation for an SSD upgrade.

That's been addressed many times here. The game is only about 66GB uncompressed. It's inconceivable that it's loading several GB/s worth of new data with every camera move, or else it would churn through the entire install in a handful of seconds.
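A quick sanity check on those numbers (trivial Python, using only the ~66GB figure above):

Code:
# If "several GB/s with every camera move" were all unique data, a ~66 GB
# install would be exhausted almost immediately. In practice the same assets
# get streamed in and out of memory over and over.

install_gb = 66  # approximate uncompressed install size cited above
for rate_gb_s in (2, 5, 8):
    print(f"At {rate_gb_s} GB/s of unique data, {install_gb} GB lasts ~{install_gb / rate_gb_s:.0f} s")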

The game does load new areas very fast when a portal is used, but that's simply a loading-speed challenge - something that could well be a little slower on PC at the moment, until DirectStorage GPU decompression is available. But if that equates to 2-second portal transitions versus the current 1 second, then that's not exactly a deal breaker, and it could very likely be mitigated with additional pre-caching into the PC's larger memory pool. Outside of those portal transitions there's no reason to believe the game wouldn't perform similarly to Spiderman, insofar as a 3060-class GPU being capable of providing an equivalent experience. I would actually expect the game to be more forgiving on the CPU side due to the much slower world traversal, which likely requires more modest streaming and certainly far less frequent/significant BVH updates - the main cause of the high CPU requirements in Spiderman.
 
We can all see what it takes, CPU- and GPU-wise, to play a mere PS4 game uplifted by a PS5 patch

To be clear, Spiderman on PS5 isn't a 'patch'. It's a significantly re-architected, fully native PS5 version. You can argue that as a cross-platform title it isn't 'taking advantage' of the PS5 to the extent that something like Ratchet and Clank: Rift Apart is, I guess, but to imply it's an upjumped PS4 version like, say, the God of War PS5 patch is erroneous. It had significant work done to take advantage of the platform from the outset and has consistently received major patches since its introduction to further modernize the engine, well beyond simple bugfixes. Even compared to other PS4 games that got 'native' PS5 versions - like Death Stranding Director's Cut and the Resident Evil series - it has noticeably more profound changes from the PS4/Pro versions.

Gone are the times when far weaker PC hardware (read: GTX 750) could bring something to the table.

That was a very brief window around the PS4 launch though. The 750 Ti certainly did not hold up for most titles released around halfway into the Xbone/PS4's lifespan, and so what? It was quickly forgotten as far faster cards were released in short order. This was nothing new for any generation; it's interesting as an academic exercise when talking about 'optimizations' but largely irrelevant to actual PC gamers, who recognize there's an increased cost to the flexibility of the platform. You pay more, yes, but your entire library immediately gets the benefit without having to patch-beg developers - that's why the "cost per $" arguments don't always fit.

However, there is a wrinkle here - and that is the near-eradication of the sub-$300 GPU market as a performance-per-$ proposition (or hell, with current prices, more like sub-$400). We're right on the cusp of a new generation, mind you, but even then the 4060/4060 Ti replacements aren't expected until 2023. So yes, this is a somewhat new situation for the PC gaming space - approaching two years into a new console's lifespan, usually something in the $300 range would already deliver a far superior experience. So this is new and concerning. There are many confounding factors driving this of course; crypto and the pandemic are two huge, huge wrenches thrown into the works, and the first one may (at last) stop gumming up the engine before much longer.

But invariably, we still have a problem: as the cost of process shrinks keeps increasing, bespoke solutions will tend to win out further in terms of delivering tangible benefits to end users at a reasonable price point, when you can't just double the transistor count for the same cost every two years. Closed-platform systems are ideal environments for these - for one, because the parent company can place huge orders at once to secure favorable price deals for parts that would be far more esoteric in a more open hardware market, and also to garner developer support for these custom solutions.

So yes, that is a concern for the PC gaming space as fab costs keep rising and the brute-force year-on-year gains of the past come less frequently.

And Rift Apart does not even make the PS5 break a sweat - we know that because of the swap-out tests with external SSDs which are technically below Sony's recommendation for an SSD upgrade.

As was mentioned earlier in this thread, really think about what it means if a game actually needed to stream in 8+ GB a second constantly. I, for one, do not welcome our new 3TB game install overlords. This also seems at odds with what you said just before this - that it demonstrates how powerful the PS5 is because it's swapping out 3+ GB of textures with every camera move. But then it's also not stressing the PS5, because you can run it on a SATA SSD? Which is it?
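For scale, the arithmetic (decimal TB, and assuming the stream were all unique data, which of course it wouldn't be):

Code:
# Sustained *unique* streaming at 8 GB/s implies absurd amounts of on-disk data.
rate_gb_s = 8
for minutes in (1, 10, 60):
    print(f"{minutes} min of unique data at {rate_gb_s} GB/s = {rate_gb_s * minutes * 60 / 1000:.1f} TB")

Even a few minutes of purely unique data outstrips any realistic install size, which is the whole point: the streaming is mostly re-fetching the same assets, not pulling endless new ones.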


(Welcome to posting btw, but in future I'd suggest taking a little more time with your posts in the sense of having a clearer direction; that was kind of rambling and difficult to parse, with the formatting to boot.)
 
The 750 Ti certainly did not hold up for most titles released around halfway into the Xbone/PS4's lifespan

That had a lot to do with Kepler being an architecture that didn't go where things ended up going (compute). AMD GPUs from that era aged much better, almost linearly with the consoles.

So yes, that is a concern for the PC gaming space as fab costs keep rising and the brute-force year-on-year gains of the past come less frequently.

Maybe. At the same time consoles are getting more and more expensive each generation too - not just the hardware itself (and the horrible price hike for the PS5), but games are increasing in price, as are the services. Diminishing returns could be a concern, but only if technology comes to a halt, which I don't think will really happen yet.

Edit: also worth noting, looking at Spiderman, that the game's ray tracing is designed around the PS5 and thus AMD's hardware. RT in other titles might, and probably will, show different results.
 
I thought the NVMe drive requirement was about saving the CPU from having to drive as much I/O, i.e. not related to decompression.

I could be wrong, but I think some of the savings come from the way IO requests can be batched and use multiple queues, and also the way some GPU/D3D loads can be left to be completed by DS while the CPU is released to work on other things, e.g. rendering. Or at least that's what I've taken from the GitHub stuff MS have put out there (pinched this from GitHub):


[Attached image: ModelViewerArchiveLoadDiagram.png (from the MS GitHub material mentioned above)]


I think this should carry over to SATA SSDs, though I'm sure the perceived benefits would be a lot smaller there. Some of the future stuff, like loading straight from NVMe more directly into GPU memory, does seem to be directly tied to PCIe though (I think?).
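To illustrate the batching/overlap idea in plain terms - and to be clear, this is emphatically not the DirectStorage API, just a conceptual Python sketch where a thread pool stands in for the DS queue and the read/work functions are made-up placeholders:

Code:
import concurrent.futures
import time

def read_asset(path):
    # Stand-in for a real read + decompress; here we just sleep briefly.
    time.sleep(0.05)
    return path

def do_other_cpu_work():
    time.sleep(0.02)  # stand-in for game logic, building command lists, etc.

def load_scene(asset_paths):
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        batch = [pool.submit(read_asset, p) for p in asset_paths]  # "enqueue" a batch of requests
        do_other_cpu_work()                 # CPU carries on while the batch is in flight
        return [f.result() for f in batch]  # wait on the whole batch once, not per request

print(load_scene([f"asset_{i}.bin" for i in range(16)]))

The real API also hands the decompression step to the GPU, which this sketch obviously doesn't capture - it's just the "submit a batch, get on with other work, sync once" shape of the thing.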

Interestingly, in that Forspoken developer GDC talk, their file IO graph with/without DS (after decompression) shows a huge leap for NVMe, a modest improvement for SATA SSDs, and a slight regression for HDD. They do say that they optimised for NVMe drives.


[Attached image: Forspoken GDC.png - the file IO comparison graph from the Forspoken GDC talk]


Despite DS + CPU decompression being slower with regard to IO + decompression on the HDD, the game does still load slightly faster on HDD using DS. Perhaps this is because the CPU is freed up slightly sooner (as per the MS GitHub example) to carry on with other aspects of setting up the next scene?

Also, it remains to be seen whether the Win 10 implementation of DS actually offers anything meaningful at all, or if it's more of a wrapper over the Win32 system. We've heard a couple of times that DS requires Win11 to work its best, and that MS have made some changes under the hood in Win11 to optimise for DS.

Good material for a Digital Foundry video at any rate. :D

I don't think we have any information on this yet, so yes, it's entirely possible that function won't require an NVMe drive, which would potentially mean ~94% of the PC gaming base (according to Steam) is already capable of supporting it. It seems sensible not to make that a requirement if possible, although I guess it depends on whether there are any capabilities of the NVMe standard in general that are required to make routing data in this way possible.

I hope there's an optimised path for NVMe, but some kind of legacy fallback to support GPU decompression from SSD would be great, and probably help adoption of DS. Fingers crossed I guess!
 
Great video! Some nice performance comparisons at nearly matched PS5 settings, and it's good to see the 3060 seemingly offering a largely equivalent-to-PS5 experience (as close as it's possible to measure given the differences) - so much for needing an RTX 3070 for this, which is almost 50% faster than the 3060.

Interesting CPU comparison using the 11400F and 12400F too - both of which fall a little below the locked 60fps level (although it's still worth remembering that, according to Alex, the PS5 uses an object range of between 7 and 8, not exactly 8, and this could have a noticeable performance impact given Richard shows the difference between 8 and 10 here is around 18%).

I have a RAM speed issue with my i5-12400 in that my RAM only runs at 2800MHz (brought over from an older system that could only run it at 2666MHz; it's rated for 3200 but I just can't get it stable at that speed). As RAM-speed hungry as this game is, that could certainly be a factor in not being able to maintain 60fps with RT and an object draw distance of 6. With the setting at 5 though, it's much better - but that's getting quite a bit below the PS5 setting.

So very close to a PS5, sure. But as I mentioned earlier, that's doing the PS5 a disservice, as it has an unlocked VRR mode too - if you're going to do equivalent-hardware comparisons with an uncapped PC framerate, you should do the same when the console gives you the option. Maybe in this mode the dynamic res means it's closer to 1080p on average, but it's above 60fps most of the time too - a 12400F will very likely get outpaced here. DLSS in performance mode, which is where it mostly sits when set to dynamic at a 4K output res with RT, also has some pretty nasty artifacts on high-contrast edges during certain moves, as I've covered earlier. So even though DLSS can certainly produce a sharper image in some lighting conditions and when strolling around the city (significantly so compared to the PS5's RT performance mode), the PS5 in RT performance mode does produce a more stable, consistent image throughout actual gameplay - DLSS, when it's in performance-mode territory, really likes to pop up and announce itself at times. Hopefully Nixxes can improve this further.

Without RT: it's very much a toss-up between the PS5 and my 3060/i5-12400 most of the time, but there are still some cases where, even with dynamic DLSS, it's GPU-limited and can drop below 60 with optimized settings (which have a lower DOF and slightly lower hair setting than the PS5 to boot). These are rare, mind you - such as challenges where you're firing off a bunch of anti-grav weapons, where it can go into the 50s - but the PS5 handles these fine, and it also doesn't have the slight dynamic-res stutters the PC gets when it has to adjust resolution (mostly in cutscenes though). The bigger issue is that CPU culling problem when launching yourself over the top of buildings; it's a fun move, but when doing it on moderate to tall buildings I get a few frames of stutter as the city comes into view. The PS5 can exhibit this too on certain very large skyscrapers, but it's far rarer - most of the time, propelling yourself across the city is a completely locked 60.

Close overall - but the PS5 still wins out imo, and perhaps by a greater degree if we compared unlocked performance. I think a 3070 would definitely produce a superior experience to the PS5 in most cases, but again, you'll potentially need a better CPU to get that consistency too based on my experience, RT or not.
 
Despite DS + CPU decompression being slower with regard to IO + decompression on the HDD, the game does still load slightly faster on HDD using DS. Perhaps this is because the CPU is freed up slightly sooner (as per the MS GitHub example) to carry on with other aspects of setting up the next scene?

In the DF tech interview with Nixxes, across a bunch of questions they talk about the game's impact on the CPU and how the CPU is pushed in certain areas compared to others, relative to the PS5. On Windows with CPU decompression, faster I/O generally means more stress on the CPU at certain points (loads/streams), because there is more demand for CPU time to decompress in real time. If you don't decompress in real time, I assume Windows/memory buffering quickly hits a point where the consequences are worse than the cost of doing it in real time. ¯\_(ツ)_/¯

There is an endless back-and-forth between software and hardware that has played out over decades. A significant change in hardware/architecture reveals a bottleneck where there wasn't one before; software compensates, new bottlenecks reveal themselves, and the cycle repeats.
 
Therefore it's better to make comparisons with an 8-core CPU for this one and going forward. As DF noted, a 3060 and an i5 are what you need to match the PS5 experience. That's factoring in that the PC version can't be dialed down as much in RT/performance mode as the PS5 does.
 
First post on this forum after quite a long time lurking ;)
So, what I think about the Spider-Man port to PC is that, contrary to the overall opinion in this thread, the PC - not any individual high-end user, mind you, but the PC as a platform of gamers - is in trouble. Consoles offer much better performance per watt than PCs; they are engineered that way.
You can always make things run more smoothly if you go from general purpose (PC) to something built specifically to render game code as efficiently as it possibly can.
Console APIs especially (and here Sony really shines) are much leaner and have every layer removed that is not needed.

That was and still is true for the PS4. The latest video from Richard Leadbetter shows this - the PS4 cannot be matched on PC with similar hardware. He runs the game at 900p and it still performs worse, while the base PS4 game renders at 1080p almost all of the time, with a hard-locked 30fps. And that is with Richard Leadbetter using a CPU that runs circles around the PS4's Jaguar cores.
The performance edge that consoles have did not manifest as much last generation, mainly because of the extraordinary edge the PC community had with their CPUs over the Jaguar cores. The console performance and API edge over PC was in a way "sacrificed" by using a CPU that would, under normal (PC) circumstances, be totally unfit for gaming.

Only the lower-friction environment on consoles (hardware- and software-wise) made it possible to build a whole generation of still very good-looking games on a netbook CPU, at a time when much more performant PC CPUs were in common use.
This time, however, everything is different!
This time a very good CPU is used, but it still sits in a lean, frictionless console environment. Even more so on PS5, where the dedicated hardware decompression block frees up even more CPU time. Something the PC simply cannot copy 1:1.
DirectStorage will help, but it will come at a cost.
I'll skip over a few points now and address my take on the question of whether an RTX 3070 is sufficient to play future PS5 ports at the same (or even better) detail settings.
My answer is of course NO.
We can all see what it takes, CPU- and GPU-wise, to play a mere PS4 game uplifted by a PS5 patch on PC at the same settings. Gone are the times when far weaker PC hardware (read: GTX 750) could bring something to the table.

I saw, a page or so earlier in this thread, the idea that people with RTX 30 series or even RTX 20 series GPUs would be fine simply because RTX IO is here to save the day. It was acknowledged that RTX IO on the RTX 20 and 30 series does not even come with dedicated decompression hardware; Nvidia simply wants to use under-utilised GPU resources. Someone stated that a mere 2 TF sacrifice would be enough to match the PS5's I/O throughput. Also wrong - RTX 30 cards from the 3070 upwards enjoy a rasterisation edge over the PS5, but it is not as if they are in the next league. For RTX 20 cards? Forget it. I maintain that when the first second-wave PS5 exclusive arrives, which should lean hard on the PS5's I/O block, the RTX 20 cards are not going to cut it anymore. And there is a real chance that an RTX 3070 could not match a PS5.
Heck, I want to see a Ratchet and Clank: Rift Apart port to PC right now. Given how a PS4 game with a PS5 patch stresses PC hardware right now, does anyone believe that Rift Apart would run very well on the same range of today's hardware? The game relies heavily on real-time decompression of several GB/s with every camera move - the "what is not in sight is not in memory" approach that Cerny forecast in his Road to PS5 talk. And Rift Apart does not even make the PS5 break a sweat - we know that because of the swap-out tests with external SSDs which are technically below Sony's recommendation for an SSD upgrade.
And please do not read this post as a stab against PC in general. I game on one as well - but things need to be addressed in all fairness.
That's quite the first post!

I completely disagree with you on most things... but welcome to the forum :)
 
I have a RAM speed issue with my i5-12400 in that my RAM only runs at 2800MHz (brought over from an older system that could only run it at 2666MHz; it's rated for 3200 but I just can't get it stable at that speed). As RAM-speed hungry as this game is, that could certainly be a factor in not being able to maintain 60fps with RT and an object draw distance of 6. With the setting at 5 though, it's much better - but that's getting quite a bit below the PS5 setting.

So very close to a PS5, sure. But as I mentioned earlier, that's doing the PS5 a disservice, as it has an unlocked VRR mode too - if you're going to do equivalent-hardware comparisons with an uncapped PC framerate, you should do the same when the console gives you the option. Maybe in this mode the dynamic res means it's closer to 1080p on average, but it's above 60fps most of the time too - a 12400F will very likely get outpaced here. DLSS in performance mode, which is where it mostly sits when set to dynamic at a 4K output res with RT, also has some pretty nasty artifacts on high-contrast edges during certain moves, as I've covered earlier. So even though DLSS can certainly produce a sharper image in some lighting conditions and when strolling around the city (significantly so compared to the PS5's RT performance mode), the PS5 in RT performance mode does produce a more stable, consistent image throughout actual gameplay - DLSS, when it's in performance-mode territory, really likes to pop up and announce itself at times. Hopefully Nixxes can improve this further.

Without RT: it's very much a toss-up between the PS5 and my 3060/i5-12400 most of the time, but there are still some cases where, even with dynamic DLSS, it's GPU-limited and can drop below 60 with optimized settings (which have a lower DOF and slightly lower hair setting than the PS5 to boot). These are rare, mind you - such as challenges where you're firing off a bunch of anti-grav weapons, where it can go into the 50s - but the PS5 handles these fine, and it also doesn't have the slight dynamic-res stutters the PC gets when it has to adjust resolution (mostly in cutscenes though). The bigger issue is that CPU culling problem when launching yourself over the top of buildings; it's a fun move, but when doing it on moderate to tall buildings I get a few frames of stutter as the city comes into view. The PS5 can exhibit this too on certain very large skyscrapers, but it's far rarer - most of the time, propelling yourself across the city is a completely locked 60.

So very close overall - but the PS5 still wins out imo, and perhaps by a greater degree if we compared unlocked performance. I think a 3070 would definitely produce a superior experience to the PS5 in most cases, but again, you'll potentially need a better CPU to get that consistency too based on my experience, RT or not.

Do you have both versions and a VRR display? If so, perhaps you could have a look at the performance difference there. I think for the closest comparison you'd have to use IGTI (lighter than DLSS) with 4K output in performance mode and no DRS, to simulate a fixed 1080p internal res with upscaling. Likely a match for the PS5's unlocked mode, but I may be assuming too much there (although how else could it work)? The only issue is your memory speed which, as you say, very likely causes your CPU to underperform in this game, perhaps very significantly.

As the 3060 is a 2070 equivalent, I'd expect it to perform similarly to or a bit worse than the PS5 in areas with no or very light RT, but heavy areas would be interesting to see. Unfortunately, ensuring it's not held back by the CPU might be impossible. I don't think utilisation stats are a reliable method for determining that, tbh.
 
Do you have both versions and a VRR display?

Unfortunately no. Both versions, yes, but no display over 60Hz.

As the 3060 is a 2070 equivalent, I'd expect it to perform similarly to or a bit worse than the PS5 in areas with no or very light RT, but heavy areas would be interesting to see. Unfortunately, ensuring it's not held back by the CPU might be impossible. I don't think utilisation stats are a reliable method for determining that, tbh.

One very odd 'heavy' area, btw, that Rich didn't test (while not dropping below 60fps, at least) is... Central Park(!). Outside of RT, the biggest stressor in this game is areas with trees and no buildings. Very strange.

As for CPU utilization, I can see that the 12400 can be a bottleneck in RT simply by choosing a CPU-limited resolution (such as 720p). Even at 1080p (dynamic res off) with RT and an object setting of 8, I can see GPU usage dropping to ~85%.

However, I just tested and it's actually better than I thought: at a fixed 1080p with objects at 8, it only dropped to 59fps during the Times Square stress test. Swinging through the majority of downtown it's usually in the 70s. I think my issue before was with dynamic res enabled - you can get ~100fps on top of a building with RT on at 1080p and 70-80fps at many points, but then you hit a rough area and the dynamic res has to lurch and just can't change quickly enough, leading to more stuttering. Without it, it's much better; a 12400 is probably quite doable for 60+ fps with RT and a comparable objects setting to the PS5.
 
One very odd 'heavy' area, btw, that Rich didn't test (while not dropping below 60fps, at least) is... Central Park(!). Outside of RT, the biggest stressor in this game is areas with trees and no buildings. Very strange.

Is it the trees, or is it the lack of buildings meaning the draw distance is much, much greater?
 
My watercooled 3700X (avg. 4200 MHz) paired with an RTX 3080 (10 GB) and 32 GB of 3600 MHz RAM cannot keep the framerate above 60 fps in Spider-Man when running at 5120x1440 or 3840x1080 (32:9).

In fast traversal through the city there will be dips to the mid-forties. No amount of changes to visual settings seems to remove these dips, and I have had the game installed on both my (system) NVMe drive and my SATA SSD drives, but the drive doesn't seem to make any tangible difference.

So I've stuck with medium-high settings with RT (object range 1) and IGTI at 5120x1440, which gives me good image quality along with the best performance possible, barring those low dips which will appear anyway.

I've also tried disabling SMT, which many people have suggested, but to no avail.

So Alex's findings with Zen 2 generation CPUs seem to be spot on, both in his initial assessment of those CPUs with RT turned on and in the latest performance video by DF; I'm just surprised that I cannot get 60+ fps at all times even with RT turned off.
Maybe the streaming tech has problems with the wider horizontal view on 32:9 screens?

For those interested, Hardware Unboxed also made a CPU comparison worth watching:
 
Increasing the FOV will increase draw calls significantly - even with RT, perhaps, if the RT does frustum culling. Maybe that is what you are seeing?
Yes, that would be my semi-layman's guess: the 32:9 aspect ratio (affecting FOV) could be a factor. It would be nice if others could verify my performance issues.
If we are just counting pixels, the 5120x1440 resolution has fewer pixels than 3840x2160 (16:9), but I think the wider FOV could have a different impact on performance in this case.
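For reference, the raw pixel counts (trivial Python, nothing assumed beyond the resolutions above):

Code:
# 32:9 5120x1440 actually pushes ~11% fewer pixels than 16:9 4K, so the hit
# is more plausibly the wider FOV (more objects and draw calls in frame)
# than raw resolution.
ultrawide = 5120 * 1440   # 7,372,800
uhd = 3840 * 2160         # 8,294,400
print(f"5120x1440: {ultrawide:,} px, 3840x2160: {uhd:,} px, ratio {ultrawide / uhd:.2f}")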

Btw, didn't you try Spider-Man on a 32:9 screen in your tech review? I know my setup is more of an edge case, but the game does officially support the aspect ratio (even if a few cutscenes still assume a 16:9 ratio - the Fisk auction house fail cutscene being a case in point).
 
My watercooled 3700X (avg. 4200 MHz) paired with an RTX 3080 (10 GB) and 32 GB of 3600 MHz RAM cannot keep the framerate above 60 fps in Spider-Man when running at 5120x1440 or 3840x1080 (32:9).

In fast traversal through the city there will be dips to the mid-forties. No amount of changes to visual settings seems to remove these dips, and I have had the game installed on both my (system) NVMe drive and my SATA SSD drives, but the drive doesn't seem to make any tangible difference.

So I've stuck with medium-high settings with RT (object range 1) and IGTI at 5120x1440, which gives me good image quality along with the best performance possible, barring those low dips which will appear anyway.

I've also tried disabling SMT, which many people have suggested, but to no avail.

So Alex's findings with Zen 2 generation CPUs seem to be spot on, both in his initial assessment of those CPUs with RT turned on and in the latest performance video by DF; I'm just surprised that I cannot get 60+ fps at all times even with RT turned off.
Maybe the streaming tech has problems with the wider horizontal view on 32:9 screens?

For those interested, Hardware Unboxed also made a CPU comparison worth watching:

Going by what others experience with a 3700X at stock speeds, it seems like the ultrawide FOV is impacting your performance.
 
My watercooled 3700X (avg. 4200 MHz) paired with an RTX 3080 (10 GB) and 32 GB of 3600 MHz RAM cannot keep the framerate above 60 fps in Spider-Man when running at 5120x1440 or 3840x1080 (32:9).

In fast traversal through the city there will be dips to the mid-forties. No amount of changes to visual settings seems to remove these dips, and I have had the game installed on both my (system) NVMe drive and my SATA SSD drives, but the drive doesn't seem to make any tangible difference.

So I've stuck with medium-high settings with RT (object range 1) and IGTI at 5120x1440, which gives me good image quality along with the best performance possible, barring those low dips which will appear anyway.

I've also tried disabling SMT, which many people have suggested, but to no avail.

So Alex's findings with Zen 2 generation CPUs seem to be spot on, both in his initial assessment of those CPUs with RT turned on and in the latest performance video by DF; I'm just surprised that I cannot get 60+ fps at all times even with RT turned off.
Maybe the streaming tech has problems with the wider horizontal view on 32:9 screens?

For those interested, Hardware Unboxed also made a CPU comparison worth watching:

That HUB review is quite illuminating. It seems extra cores make little difference within the same architecture, which is highly surprising given Nixxes' statement about dedicating a whole core to decompression. On that basis I can only assume hardware decompression wouldn't do an awful lot for this game, unless Nixxes were just simplifying and really meant roughly the equivalent of a core spread across all available cores.

The memory speed comparison is pretty staggering though! Over a third extra performance on a 12900K by going from DDR4 3200 to DDR5 6400!! Holy cow, I know what memory I'll be getting with my next CPU!

That's also pretty illuminating with regards to the PS5's performance, given the PS5 has access to very high bandwidth GDDR6 as its system memory.
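The rough peak-bandwidth numbers behind that (the DDR figures are theoretical dual-channel peaks; the PS5 figure is the published 448 GB/s for its 256-bit GDDR6, which is of course shared between CPU and GPU rather than being CPU-only):

Code:
def ddr_dual_channel_gb_s(mt_per_s):
    # 2 channels x 8 bytes per transfer x transfers per second
    return 2 * 8 * mt_per_s / 1000

print(f"DDR4-3200 dual channel: ~{ddr_dual_channel_gb_s(3200):.0f} GB/s")  # ~51 GB/s
print(f"DDR5-6400 dual channel: ~{ddr_dual_channel_gb_s(6400):.0f} GB/s")  # ~102 GB/s
print(f"PS5 GDDR6 (256-bit @ 14 Gbps): {256 // 8 * 14} GB/s (shared)")     # 448 GB/s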
 
Hm, is editing not allowed for new accounts, or is it deactivated in general? I accidentally clicked "reply".
OK, so some people really did go into detail with their answers, so I shall respect the effort and come back on a couple of points :)
First, I think this PS5 (console) vs PC debate is better suited to a system wars section or something. Anyway, a 7870/i5 setup from 2013 still does fine at console-equivalent (base console) settings, even today. The largest hurdle would be VRAM, if anything. The GTX 750 didn't age very well, but that's down to Kepler's architecture.
The notion that an RTX 3070 is no match for the PS5 is kinda... out of this world. The non-Ti 3060 actually runs Spiderman, and practically everything else, just as well - mostly better whenever RT is involved - versus the PS5 versions. I shared a YT video here before: a 3700X/3060 combo outperforms the PS5 version of Spiderman, at higher settings.

So first of all, no intention of any platform warring on my side. But an honest discussion must be possible without someone immediately reaching for "war" vocabulary, so please, in all friendliness - knock it off.
I did not try to say that an RTX 3060/3700X combo can't render the game at the same quality as a PS5. Spider-Man on PS5 is more than just a patched PS4 version - people are right here and I was wrong. But it is still a very early-in-the-gen first-party PS5 game that surely does not leverage PS5 intrinsics to the extent that second- or third-wave PS5 exclusives will. Even Rift Apart is only dipping a toe into the water when it comes to the PS5's potential.
So the question we need to ask is which cards, long into the generation, will still be able to render PS5-exclusive titles at the same quality. I said an RTX 3070 will not be enough. Many disagree, and that is fine. It will be interesting to see how it turns out. :)
I also doubt that NV was lying when they said the performance impact will be minimal, to the point you won't notice when the GPU is decompressing. The thing is, I think GPU decompression is the way forward; GPUs are extremely efficient at doing parallel decompression work. With today's drives being capable of 7 GB/s and faster before decompression, I see no reason for concern that GPUs would lose any meaningful performance doing on-GPU decompression. GPUs could easily scale way beyond what the consoles are doing in this regard.
I'll believe it when I see it. Nvidia can talk all day long about their RTX IO. The PS5 is here, and its concept has proven to be way, way more advanced than the old way of doing it on PC. There are games out there doing things that are not possible today on PC. The obvious answer always given to that point is: "But PC can always go pure uncompressed and just have 32GB of main RAM to make up for large file sizes." My answer to that is: where is it? If it is so easy, why does nobody do it? Why doesn't Star Citizen try it, for example? Why do none of the high-end PC games like Cyberpunk have an extra detail setting called "Hyper Ultra" which would make use of such a possibility?

Cerny came up with an ingenious idea that Sony paid a lot of millions to develop. They streamlined the entire I/O process for the very first time and built a lot of custom hardware. Nvidia, on the other hand, still needs to prove that a GPU busy with a AAA game doing ray tracing at 60 fps still has time left to decompress, let's say, 5GB/s. Where are the demos for that? I have not seen a single demo from them proving their concept.
Welcome, and congratulations on the first post, it's certainly an interesting one!
Thx for welcoming me. :)

Welcome, and congratulations on the first post, it's certainly an interesting one!



I'm not sure why this would spell trouble for the PC as a platform. Consoles have always been dedicated games machines versus the PC's more general-purpose design. However, that gap has narrowed significantly over the last 2-3 generations, with PCs becoming increasingly console-like in their gaming focus - thinner APIs, an ever-increasing gaming focus in Windows, DirectStorage (the first gaming-focussed storage API) - while the consoles become more general-purpose and PC-like. By this stage there isn't a lot separating the two in terms of gaming efficiency, especially once DirectStorage gets its full release. We're really just talking about ultra-thin console APIs versus thin PC APIs at that point (and PC APIs will continue to evolve in that respect, whereas console APIs have likely already got as low-level as they can).

In terms of actual performance per watt, I disagree that the consoles are better. In fact PC parts, particularly in the upcoming generation, will offer considerably more performance per watt than the consoles; that should go without saying since they are newer, more efficient architectures on better nodes from the same vendors. What you're interpreting as poor performance per watt in the PC space is merely the high-end components being pushed well past the efficiency sweet spot on the power curve to extract the highest possible performance, in a PC environment that allows such things.

Look at laptops though and they tell a very different story. There it's possible to get greater than console performance at a much lower power draw.
I see that the phrase "PC is in trouble" caused some unrest in people, hehe. But maybe my English was a bit off. What I was trying to say was not that the PC as a platform is in trouble, but rather the people within its community. Because let's face it - the absolute majority is still gaming on machines that are way below the PS5 in terms of rendering capabilities. Those people will need to invest a lot in either a whole new PC or a serious upgrade.
In that sense this gen is different from the PS4 one. In 2013 you were fine with hardware a little older than the PS4 for almost all of its generation, mainly due to Jaguar being used as the CPU and games not using 8-core CPUs to their potential.

Is there a chart or diagram that shows the power usage of a laptop with hardware similar enough to the PS5 while it renders a game? Best would be the same game, and it should be a modern AAA game. I think a PS5 should still win this if everything else is the same, like detail settings, resolution, etc.
 

Attachments: ps5-slides-06-1440x810.png