davis.anthony
Veteran
> There has been one. Can you guess which? Uncharted 4. The 2080 Ti is only a smidge faster than the PS5 overall in that game despite being consistently 40-50% faster in other games.

IIRC Death Stranding also performs very close to a 3060 Ti. It seems PS-centric games simply like the hardware. Maybe they exploit a specific strength of the PS5 that shifts the balance. Just a guess IMO.
> IIRC Death Stranding also performs very close to a 3060 Ti. It seems PS-centric games simply like the hardware. Maybe they exploit a specific strength of the PS5 that shifts the balance. Just a guess IMO. I have no qualms with the PS5 matching my 3070 or 3060 Ti.

A 3060 Ti is fine, but barring a VRAM limitation the PS5 has no business matching a 3070.
> This is a stupid question and I will look stupid asking it, but here goes: I thought the 4090 was 8 times the general GPU power of the PS5, not 4 times? I know flops aren't a real measurement and all, but considering how much power that card draws, among other things, I could have sworn it was much stronger than just 4x.

Only in floating point math, which accounts for a small portion of the total rendering time of a typical frame. From what users have posted with Nvidia's profiling tools, I haven't seen a frame where the 4090 was able to reach full utilization.
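(For context on where the 8x figure comes from, using rough paper specs rather than measurements: the PS5 GPU is 36 CUs × 64 lanes × 2 FLOPs per clock × ~2.23 GHz ≈ 10.3 TFLOPS of FP32, while the 4090 is 16384 lanes × 2 FLOPs per clock × ~2.52 GHz ≈ 82.6 TFLOPS, almost exactly 8x. A real frame spends much of its time limited by bandwidth, caches and fixed-function stages rather than FP32 math, which is why the delivered gap ends up closer to 4x.)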
> I am really impressed how much of a CPU this game can utilise, but I am not really understanding why and what it is doing to warrant that CPU utilisation and performance.
> Like this alley way, why is this alley way doing this to a Ryzen 5 3600? What processing is occurring to render this little corner?
> @yamaci17 what is your in-game VRAM meter saying about OS reservation? 1600 mb?
> Game is crazy CPU heavy too.
> View attachment 8536
> A 13700k/13900k is needed for a locked 120 fps.
> Anyone on Zen and Zen+ will be in for a rough ride.

Yes, it says 1600 MB for me as well. But the game will happily use nearly 7,500 MB if I have free VRAM (I've managed to push dedicated VRAM usage to 7,600 MB on one occasion). I have to take back what I said on that matter: this game definitely uses more VRAM than Spider-Man or most other recent titles, so there are no problems on that front. That meter is still weird, though. The game also utilizes a huge amount of shared memory, similar to Spider-Man, but it doesn't impact performance the way I would expect. Thankfully, the game does not limit itself to the "game application" value. (However, I confirmed that OS+Background is a fixed 20% of your total VRAM: my friend's 4090 reports 4.8 GB for OS+Background even though only about 700 MB was actively in use, and 0.20 × 24 GB = 4.8 GB, just as 0.20 × 8 GB = 1.6 GB on my card. It is clearly a fixed 20%.)
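If anyone wants to cross-check that meter against what Windows itself reports, here is a minimal sketch (my own illustration, not anything the game does) that queries the VRAM budget DXGI grants a process. The API calls are real DXGI (dxgi1_4); everything else is just demo scaffolding:

```cpp
#include <cstdio>
#include <dxgi1_4.h>
#include <wrl/client.h>
#pragma comment(lib, "dxgi.lib")  // MSVC-style linking for the demo

using Microsoft::WRL::ComPtr;

// Prints the VRAM budget Windows actually grants this process versus what
// it currently uses, for both dedicated VRAM and shared system memory.
int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter> adapter;
    if (FAILED(factory->EnumAdapters(0, &adapter))) return 1;  // primary GPU

    ComPtr<IDXGIAdapter3> adapter3;  // QueryVideoMemoryInfo lives on IDXGIAdapter3
    if (FAILED(adapter.As(&adapter3))) return 1;

    DXGI_QUERY_VIDEO_MEMORY_INFO local{}, shared{};
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
    adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &shared);

    std::printf("dedicated VRAM budget: %llu MB, in use: %llu MB\n",
                local.Budget >> 20, local.CurrentUsage >> 20);
    std::printf("shared memory budget:  %llu MB, in use: %llu MB\n",
                shared.Budget >> 20, shared.CurrentUsage >> 20);
}
```

Note the DXGI budget is dynamic (it shrinks as other apps take VRAM), so if the in-game meter always shows exactly 20%, that looks like the game's own fixed bookkeeping rather than the OS figure.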
> I am really impressed how much of a CPU this game can utilise, but I am not really understanding why and what it is doing to warrant that CPU utilisation and performance.
> Like this alley way, why is this alley way doing this to a Ryzen 5 3600? What processing is occurring to render this little corner?
> @yamaci17 what is your in-game VRAM meter saying about OS reservation? 1600 mb?
> Game is crazy CPU heavy too.
> View attachment 8536
> A 13700k/13900k is needed for a locked 120 fps.
> Anyone on Zen and Zen+ will be in for a rough ride.

I think it's from an earlier GDC presentation: they use atomics to build work queues so that all cores are fully utilized, with jobs constantly running and putting more work back onto the stack.
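As I understand that talk, the core idea looks something like the toy sketch below (my own simplification, not their code, which is fiber-based and far more sophisticated; all names here are made up): workers claim jobs through atomic counters, and any job may push new jobs, so the cores stay saturated as long as work keeps being generated.

```cpp
#include <atomic>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Toy atomic work queue where jobs can publish more jobs.
// Fixed capacity, no wraparound: fine for a demo, not production.
constexpr int kCapacity = 4096;

struct Slot {
    std::atomic<bool> ready{false};  // set once fn is safe to read
    std::function<void()> fn;
};

Slot g_slots[kCapacity];
std::atomic<int> g_tail{0};  // next slot to publish into
std::atomic<int> g_head{0};  // next slot a worker will claim
std::atomic<int> g_live{0};  // jobs pushed but not yet finished

void push_job(std::function<void()> fn) {
    g_live.fetch_add(1, std::memory_order_relaxed);
    int i = g_tail.fetch_add(1, std::memory_order_relaxed);  // claim a slot
    g_slots[i].fn = std::move(fn);
    g_slots[i].ready.store(true, std::memory_order_release);  // publish it
}

void worker() {
    while (g_live.load(std::memory_order_acquire) > 0) {  // exit when all work is done
        int i = g_head.load(std::memory_order_relaxed);
        if (i >= g_tail.load(std::memory_order_acquire)) continue;   // nothing queued right now
        if (!g_head.compare_exchange_weak(i, i + 1)) continue;       // another worker won the slot
        while (!g_slots[i].ready.load(std::memory_order_acquire)) {} // wait for the pusher to finish
        g_slots[i].fn();  // may itself call push_job(): work creates more work
        g_live.fetch_sub(1, std::memory_order_release);
    }
}

std::atomic<int> g_executed{0};

// Each job spawns two children until depth 5: 2^6 - 1 = 63 jobs total.
void spawn(int depth) {
    push_job([depth] {
        g_executed.fetch_add(1, std::memory_order_relaxed);
        if (depth < 5) { spawn(depth + 1); spawn(depth + 1); }
    });
}

int main() {
    spawn(0);  // seed one job before the workers start, so g_live > 0
    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    std::printf("executed %d jobs\n", g_executed.load());  // prints 63
}
```

The point of the atomics is that neither pushing nor claiming work takes a lock, so thousands of tiny jobs per frame don't serialize on the queue (the idle spin in worker() is only there to keep the sketch short).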
I am really impressed how much of a CPU this game can utilise, but I am not really understanding why and what it is doing to warrant that CPU utilisation and performance.
Like this alley way, why is this alley way doing this to a Ryzen 5 3600? What processing is occurring to render this little corner?
@yamaci17 what is your in-game VRAM meter saying about OS reservation? 1600 mb?
The game is probably compiling shaders in the background as you play despite having waited for the compilation to finish at the beginning.
> The game is probably compiling shaders in the background as you play despite having waited for the compilation to finish at the beginning.

I was still getting that much usage more than an hour into the game, though.
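If the background-compilation theory is right, the usual structure is something like this hypothetical sketch (not the game's actual code; every name here is invented): a pool of compiler threads drains a queue of pipeline variants while the game renders with fallbacks, which can keep several cores busy well past the initial compile screen.

```cpp
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct ShaderJob { std::string variant; };

std::queue<ShaderJob> g_queue;
std::mutex g_m;
std::condition_variable g_cv;
bool g_quit = false;

void compile(const ShaderJob& job) {
    // Stand-in for an expensive driver compile (often tens of ms each).
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    std::printf("compiled %s\n", job.variant.c_str());
}

void compiler_thread() {
    for (;;) {
        std::unique_lock<std::mutex> lk(g_m);
        g_cv.wait(lk, [] { return g_quit || !g_queue.empty(); });
        if (g_queue.empty()) return;  // quit requested and queue drained
        ShaderJob job = std::move(g_queue.front());
        g_queue.pop();
        lk.unlock();
        compile(job);  // runs outside the lock, eating a core meanwhile
    }
}

// The render thread would call this whenever a material needs a variant
// that isn't cached yet, drawing with a fallback until it's ready.
void request_variant(std::string name) {
    { std::lock_guard<std::mutex> lk(g_m); g_queue.push({std::move(name)}); }
    g_cv.notify_one();
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 3; ++i) pool.emplace_back(compiler_thread);
    for (int i = 0; i < 8; ++i) request_variant("material_" + std::to_string(i));
    { std::lock_guard<std::mutex> lk(g_m); g_quit = true; }
    g_cv.notify_all();
    for (auto& t : pool) t.join();
}
```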
There has been one. Can you guess which? Uncharted 4. The 2080 Ti is only a smidge faster than the PS5 overall in that game despite being consistently 40-50% faster in other games.
I don't think that was anywhere near this bad, though. It ran fine with 8 GB as far as I've seen, and the 2080 Ti is still faster in that game, by a decent margin at 4K as well. Whereas here it actually seems quite a bit slower, which to me isn't just poor optimization; it's flat-out broken.
> A 2080 Ti trades blows with a 3060 Ti, and a 3060 Ti isn't really that much faster than a 6600 XT or 6700.

But it doesn't? A 2080 Ti is equivalent to a 3070, not a 3060 Ti.
A 2080 Ti trades blows with a 3060 Ti, and a 3060 Ti isn't really that much faster than a 6600 XT or 6700.
Those are GPUs whose pure raster performance is the level the PS5 performs at.
So is a 2080 Ti really that much better in raster?
> The 3060 Ti is marginally faster than the 2080 Super, but the 2080 Ti is a good 10% faster than the 3060 Ti according to TPU. Only in RT corner cases would the 3060 Ti match or beat the 2080 Ti.
> Again according to TPU, the 3060 Ti is 12% faster than the 6600 XT in raster, and the 2080 Ti is 23% faster than it.

The 3060 Ti doesn't match the 2080 Ti under any circumstances. The 2080 Ti and 3070 are almost always dead even, and both are about 15% faster than the 3060 Ti in general.
The 3060 Ti is marginally faster than the 2080 Super, but the 2080 Ti is a good 10% faster than the 3060 Ti according to TPU. Only in RT corner cases would the 3060 Ti match or beat the 2080 Ti.
Again according to TPU, the 3060 Ti is 12% faster than the 6600 XT in raster, and the 2080 Ti is 23% faster than it.
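(For what it's worth, those two TPU figures are self-consistent with the 10% claim: 1.23 / 1.12 ≈ 1.10, i.e. the 2080 Ti lands about 10% ahead of the 3060 Ti.)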