I watched this video, and I don't know why people keep expecting a game that runs at 30 FPS with upscaling on Series X to hit 60 FPS on weaker GPUs, even at a lower resolution.
Another comparison, found this one very interesting 'cos of the motion and how DLSS looks a lot better both in still images and when moving/scrolling/panning the camera. Also some artifacts that get fixed with DLSS:
Even this day-1 DLSS mod looks better than FSR2. In fact, DLSS looks better than native + TAA in this video. DLSS performance is the same (slightly better, within margin of error). This lines up with my experience with FSR and DLSS: FSR2 is decent, but when I got a 4070 and tried DLSS it was immediately, noticeably better, especially in motion.
With FSR2 I've noticed black dots appearing in places for no apparent reason. They're visible regardless of motion or anything else, and they never go away. It's odd because the dots only appear on certain objects or skyboxes, and I'll be damned if I can think of a good reason. No Man's Sky has this problem for sure, but I've also seen it in other games. And it's not a GPU-specific thing either; it happens on both my GTX 970 and 6700 XT. Fast motion and transparent grids like fences also get wonky with FSR2. I never saw any of that with DLSS. I'm not saying FSR2 is terrible. Before I got an RTX card I thought it was really impressive. But after seeing DLSS it's hard to go back.
He even added DLSS frame generation. My favourite thing about that technology is that it doubles the framerate, but imho the best thing is that your GPU usage can go from 90%+ down to 40-50%, depending on which framerate you want to play at.
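Rough back-of-the-envelope on why frame generation frees up the GPU like that: with a framerate cap, only every other displayed frame is fully rendered, and the in-between frame is interpolated at a much lower cost. The numbers in this sketch (render time, interpolation cost, cap) are made up for illustration, not measured from the mod:

```c
#include <stdio.h>

int main(void) {
    /* Made-up illustrative numbers, not measurements. */
    double render_ms_per_frame = 14.0; /* GPU time to fully render one frame */
    double framegen_ms         = 2.0;  /* GPU time to interpolate one frame  */
    double cap_fps             = 60.0; /* framerate cap the player chose     */

    double frame_budget_ms = 1000.0 / cap_fps; /* ~16.7 ms per displayed frame */

    /* Without frame generation: every displayed frame is fully rendered. */
    double util_no_fg = render_ms_per_frame / frame_budget_ms;

    /* With frame generation: only every other displayed frame is rendered;
       the in-between frame is interpolated at much lower cost. */
    double util_fg = (render_ms_per_frame + framegen_ms) / (2.0 * frame_budget_ms);

    printf("GPU utilisation at %.0f fps cap, no FG  : %.0f%%\n", cap_fps, 100.0 * util_no_fg);
    printf("GPU utilisation at %.0f fps cap, with FG: %.0f%%\n", cap_fps, 100.0 * util_fg);
    return 0;
}
```

With those placeholder numbers you land around 84% GPU load without frame generation and roughly 48% with it, which is in the same ballpark as the 90%+ vs 40-50% figures above.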
FrameGen also added within a day of launch. https://www.patreon.com/PureDark/posts
Too many faces, not enough egg.
FSR looks at its best in pictures, and even then it loses. In motion, it's just plain bad.
I read the Patreon post. He said he expected it to take 2 hours to implement DLSS2. Seems he was pretty much correct.
I can't think of a good reason for Bethesda to omit DLSS other than a contract restriction from AMD. I mean 1 guy who had never seen the game before could do it in 2 hours.
I saw a video showing how the game scales almost linearly with RAM bandwidth. Whatever the case, performance is hilariously bad from what I've seen. Definitely gonna be a pass for me until further notice.

It's a total dumpster fire of a game. The PC version doesn't even have HDR. Not a hint of ray tracing. Can't wait to shit on it in a few more days when the XGP version unlocks.
Overly aggressive streaming, maybe?
Who knows.
The video is hella long, but the point is the game scales almost perfectly with memory bandwidth. He mentions (like a dozen times) that it scales just like the AIDA64 memory bandwidth test. The delta between the 13600K and 12900K is exactly in line with the difference between DDR5-4400 (12900K) and DDR5-5600 (13600K). L3 helps some (see the 13900K vs the 13700K/13600K), but he guesses the dataset doesn't fit in L3, so bandwidth to RAM is king. This explains why AMD performance is so bad: AMD's memory subsystem is inferior to Intel's.
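That bandwidth delta is easy to sanity-check. Dual-channel DDR5 peak bandwidth is roughly transfer rate x 8 bytes x 2 channels, so DDR5-5600 has about 27% more theoretical bandwidth than DDR5-4400; if the game really scales linearly with bandwidth, you'd expect roughly that same gap between those two configs. Quick sketch of the arithmetic (theoretical peaks only, not AIDA64 numbers):

```c
#include <stdio.h>

/* Theoretical peak bandwidth for dual-channel DDR5 in GB/s:
   transfer rate (MT/s) * 8 bytes per transfer * 2 channels. */
static double ddr5_peak_gbs(double mt_per_s) {
    return mt_per_s * 8.0 * 2.0 / 1000.0;
}

int main(void) {
    double bw_12900k = ddr5_peak_gbs(4400.0); /* DDR5-4400, as in the video */
    double bw_13600k = ddr5_peak_gbs(5600.0); /* DDR5-5600, as in the video */

    printf("12900K @ DDR5-4400: %.1f GB/s peak\n", bw_12900k);
    printf("13600K @ DDR5-5600: %.1f GB/s peak\n", bw_13600k);
    printf("bandwidth ratio   : %.2fx\n", bw_13600k / bw_12900k);
    /* If the game is purely bandwidth-bound, frame rate should scale by
       roughly the same ~1.27x between the two configurations. */
    return 0;
}
```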
I'd love to see some in-depth bandwidth and latency testing on various CPUs in this game.
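For anyone who wants to poke at this themselves, here's a minimal sketch of the two measurements that matter: a sequential streaming pass for bandwidth and a random pointer chase for latency. It's a crude stand-in for what AIDA64 measures, not a replacement; it assumes Linux/POSIX clock_gettime, needs about 1 GiB of RAM, and should be built with optimisations (e.g. gcc -O2):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES   (512UL * 1024 * 1024)   /* 512 MiB, far bigger than any L3 */
#define CHASE_STEPS (20UL * 1000 * 1000)

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Small PRNG so we don't depend on RAND_MAX being large enough. */
static unsigned long long xorshift64(unsigned long long *s) {
    *s ^= *s << 13;
    *s ^= *s >> 7;
    *s ^= *s << 17;
    return *s;
}

int main(void) {
    size_t n = BUF_BYTES / sizeof(long);
    long *buf = malloc(BUF_BYTES);
    size_t *next = malloc(n * sizeof(size_t));
    if (!buf || !next) return 1;

    /* Bandwidth: time one sequential read pass over the whole buffer. */
    for (size_t i = 0; i < n; i++) buf[i] = (long)i;     /* touch the pages first */
    double t0 = now_sec();
    long sum = 0;
    for (size_t i = 0; i < n; i++) sum += buf[i];
    double t1 = now_sec();
    printf("sequential read: %.1f GB/s (sum=%ld)\n",
           BUF_BYTES / (t1 - t0) / 1e9, sum);

    /* Latency: pointer chase through one big random cycle, so every step
       is a dependent load the prefetcher can't predict. */
    unsigned long long seed = 0x9E3779B97F4A7C15ULL;
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {   /* Sattolo's shuffle: single cycle */
        size_t j = (size_t)(xorshift64(&seed) % i);
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    size_t p = 0;
    t0 = now_sec();
    for (size_t i = 0; i < CHASE_STEPS; i++) p = next[p];
    t1 = now_sec();
    printf("dependent load : ~%.0f ns per access (p=%zu)\n",
           (t1 - t0) / CHASE_STEPS * 1e9, p);

    free(next);
    free(buf);
    return 0;
}
```

The shuffle builds a single cycle through the whole buffer, so each step of the chase is a cache miss that can't be hidden by prefetching; that's what makes it a latency test rather than another bandwidth test.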
Overall I'm still shocked at how bad CPU and GPU performance in this game is compared to how it looks.
Any more questions?
It's not really RAM bandwidth, but overall system memory performance.
My guess, going by the current information, is that the CPU side of actually rendering the game itself is likely well threaded.
The issue, then, is game logic/simulation. In the heaviest scenarios the data set is likely very large and spills heavily into system memory, which means how fast you can access that memory (so not just bandwidth, but other factors such as latency) becomes the limiting factor before the CPU itself can even start working with the data.
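To put rough numbers on why that's so punishing: a dependent load that misses L3 and goes out to DRAM stalls the core for on the order of a hundred nanoseconds, which is hundreds of cycles at modern clocks. The figures below are typical ballpark latencies for current desktop parts, not measurements of this game:

```c
#include <stdio.h>

int main(void) {
    /* Ballpark figures, not measurements of this game specifically. */
    double core_ghz        = 5.0;   /* modern desktop core clock          */
    double dram_latency_ns = 90.0;  /* typical loaded DDR5 access latency */
    double l3_latency_ns   = 12.0;  /* typical L3 hit latency             */

    double cycles_dram = dram_latency_ns * core_ghz;
    double cycles_l3   = l3_latency_ns * core_ghz;

    printf("L3 hit   : ~%.0f cycles stalled\n", cycles_l3);
    printf("DRAM miss: ~%.0f cycles stalled\n", cycles_dram);
    printf("ratio    : ~%.0fx\n", cycles_dram / cycles_l3);
    /* If the simulation's working set doesn't fit in L3, each dependent
       access burns hundreds of core cycles waiting on memory, so memory
       speed limits throughput long before the cores run out of compute. */
    return 0;
}
```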