Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

Shocking how much better it performs on PS5. Lately many UE4 games have been performing a bit better on the Sony console, but here the XSX version clearly lacks optimization, with big frame-time issues.

Also FSR still sucks.

It may still hold a significant advantage, but I've seen footage of the PS5 in the main hub world and it can get ugly too.
 
Hey, if you want to define "escalated" as me politely explaining why blaming the performance issues on the GPU when it's actually a CPU bottleneck does matter - with a real-world (albeit extreme, to clearly illustrate the point) example of how doing so could misinform the buying decisions of consumers - then go right ahead.

We'll just ignore the fact that in your response to me you dismissed that example as irrelevant on the basis that it wouldn't apply to many people (despite the example actually being transferable to a much wider audience, which I'm quite happy to explain further if you need me to), and then accused me of trying to "protect the 4090's image". Perhaps we just define escalation differently?
And the truth finally comes out. You talk about me being triggered, yet you were the one triggered all along. All you needed to say was that I hurt your feelings and I would've apologized. Instead you've built some fantastic strawmen in our discussion. I think this one is the best though. People's buying decisions could be misinformed because a reviewer said the strongest GPU can't "run" this one game well? What are they going to buy, a weaker GPU? Frankly ridiculous. I want to laugh, but it's actually sad.

Now to answer the first part of your question: it doesn't matter if the reviewer blames it on the CPU or the GPU. What doesn't change is the outcome, which is that you can't sustain a solid 60fps. From what we know so far, there's no CPU available that overcomes the bottleneck. Maybe when Zen 5 comes out that might change, but by that point this game won't even be relevant anymore.

Finally, I implore you to look up the word pedantic in the dictionary. Perhaps it'll provide some clarity as to the uselessness of your distinction.
 

Sigh... okay, to try and bring this back to something resembling a technical discussion, let me explain why this kind of misreporting of technical details does indeed have the potential to impact people's purchasing decisions.

Scenario 1
  1. Reviewer claims that a 4090/7900XTX is unable to maintain 60fps in a game, while in reality it is bottlenecked by, let's say, a 5950X to a 50fps average, and without that bottleneck the GPU could comfortably average 120fps.
  2. Consumer who owns (for example) a 3060 along with a 5600X concludes that, since their GPU is less than 1/3 as fast as a GPU that cannot hit 60fps, the game will be unplayable on their system.
  3. Consumer decides not to purchase the game on those grounds.
  4. Consumer has been misinformed, because in fact their GPU was quite capable of hitting playable frame rates in the game, while their CPU, being only roughly 10-15% slower for gaming, would still have kept them in the 40fps range - enough for a solid 30fps lock (or higher with VRR).
Scenario 2
  1. Reviewer claims that a 4090/7900XTX is barely able to maintain 60fps in a game, while in reality it is bottlenecked by, let's say, a 7800X3D, and without that bottleneck the GPU could easily exceed 100fps.
  2. Consumer who owns (for example) a 3080 along with a 3700X concludes that, since their GPU is only around half as fast as a GPU that is just about hitting 60fps, they need to upgrade their GPU to something faster.
  3. Consumer throws $1200 down on a 4080, buys the game and loads it up.
  4. Consumer is massively bottlenecked by their 3700X to around 30fps because they were misinformed by the reviewer. If they had spent 1/4 of that $1200 on a new CPU (5800X3D) instead of the GPU, they could have achieved the same ~80% of the reviewer's system performance that they expected to get from their GPU upgrade. Instead, they spent 4x as much and got no performance increase at all over their older GPU.
Obviously, given the variety of hardware combinations out there, it would be possible to come up with hundreds of different variations on the above scenarios that all lead to the consumer making a poor choice because they were misinformed.
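To boil the arithmetic behind both scenarios down to its simplest form, here's a back-of-the-envelope sketch (the numbers are illustrative stand-ins for the hypothetical systems above, not benchmarks): the frame rate you actually get is roughly whichever of the CPU limit or GPU limit is lower, which is exactly why knowing which side is the limiter matters.

```python
# Rough model: delivered fps is approximately the minimum of the frame rates
# the CPU and GPU could each sustain on their own. Illustrative numbers only.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Effective frame rate when one component bottlenecks the other."""
    return min(cpu_fps, gpu_fps)

# Scenario 1: reviewer's 4090 is held to ~50fps by the CPU, not the GPU.
reviewer = delivered_fps(cpu_fps=50, gpu_fps=120)   # -> 50 (CPU-bound)

# The 3060 + 5600X owner: GPU roughly 1/3 of a 4090, CPU only ~10-15% slower.
consumer = delivered_fps(cpu_fps=44, gpu_fps=40)    # -> 40 (GPU-bound, still very playable)

print(reviewer, consumer)
```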

Let's be honest, if we boil this right down to basics, you're arguing that reviewers putting out false/misleading information doesn't matter. Are you really willing to die on that hill?
 
Shocking how much better it performs on PS5. Lately many UE4 games have been performing a bit better on the Sony console, but here the XSX version clearly lacks optimization, with big frame-time issues.

Also FSR still sucks.
What do we blame this on?! Do devs need more time to optimize the games? Are pubs forcing games out the door because they want them out?
 
Bit of an interesting video that I thought was worth a share: the Ryzen 4700G is basically the Series X (and PS5) CPU, and this guy tests it.

I am speechless. This means the Series X and PS5 CPU is basically a glorified 8-core i7 from the 5th gen (i7-5960X); the low L3 cache (8MB) and clock speed (3.6GHz) really hurt these CPUs, though I suspect the much bigger memory bandwidth from GDDR6 on consoles helps equalize things a bit. Still, these CPUs are weak compared to even mid-range desktop CPUs.
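For what it's worth, the raw bandwidth gap is easy to put rough numbers on (peak theoretical figures only; the CPU-visible share on consoles is lower because the GPU contends for the same pool):

```python
# Peak theoretical bandwidth = bus width in bytes x data rate in GT/s.
# Published peak figures; real CPU-visible bandwidth on consoles is lower
# because the GPU shares and contends for the same memory pool.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gtps: float) -> float:
    return (bus_width_bits / 8) * data_rate_gtps

print(bandwidth_gb_s(256, 14.0))  # PS5 GDDR6: 448 GB/s
print(bandwidth_gb_s(320, 14.0))  # Series X GDDR6 (10GB fast pool): 560 GB/s
print(bandwidth_gb_s(128, 3.2))   # Typical dual-channel DDR4-3200 PC: 51.2 GB/s
```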
 
I loved the meme :LOL: (y)

On a serious note though, I think it depends on the definition of 'don't need any more'. In a linear game like TLOU, for example, you play an area, then move on, and will not revisit that area again without starting a new game. In that case, absolutely release the VRAM. But in a game like Survivor, where I understand there is an element of open world and you can travel back to previously visited locations, I'd say it's better to keep that already-loaded data in VRAM, even if it's nowhere near the current play zone, provided there is spare capacity in VRAM. It should just be flagged as low priority, to be replaced with more relevant data as the likelihood of needing that data increases.

I don't think it's a good idea to free VRAM of data that isn't currently (or imminently going to be) in use, just for the sake of creating more free VRAM, if that data could potentially be used again at some point. You should just be able to copy over it (I assume?) when needed. Or is there a penalty involved in overwriting existing data vs writing to empty VRAM?

Yeah, in principle, if you might re-use data in VRAM and you are within your VRAM limits, with no upcoming loads anticipated, then that could be a valid thing to do.
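A minimal sketch of the kind of policy being described (hypothetical structure and names, not any real engine's residency manager): assets stay resident after the player leaves an area, get demoted to low priority, and are only evicted when new data actually needs the space.

```python
# Minimal sketch of "keep cold assets resident until the space is needed".
# Hypothetical structure, not any real engine's API.

class VramCache:
    def __init__(self, capacity_mb: int):
        self.capacity = capacity_mb
        self.used = 0
        self.assets = {}  # name -> (size_mb, priority); lowest priority is evicted first

    def request(self, name: str, size_mb: int, priority: int) -> None:
        if name in self.assets:                      # already resident: just update priority
            self.assets[name] = (size_mb, priority)
            return
        while self.used + size_mb > self.capacity and self.assets:
            victim = min(self.assets, key=lambda k: self.assets[k][1])
            self.used -= self.assets.pop(victim)[0]  # evict (allow overwrite of) the coldest asset
        self.assets[name] = (size_mb, priority)
        self.used += size_mb

    def demote(self, name: str) -> None:
        """Player left the area: keep the data resident, just make it cheap to evict."""
        if name in self.assets:
            self.assets[name] = (self.assets[name][0], 0)

cache = VramCache(capacity_mb=8192)
cache.request("hub_world_textures", 2048, priority=5)
cache.demote("hub_world_textures")                     # not freed, just low priority
cache.request("new_area_textures", 7000, priority=10)  # eviction only happens now
```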
 
I am speechless. This means the Series X and PS5 CPU is basically a glorified 8-core i7 from the 5th gen (i7-5960X); the low L3 cache (8MB) and clock speed (3.6GHz) really hurt these CPUs, though I suspect the much bigger memory bandwidth from GDDR6 on consoles helps equalize things a bit. Still, these CPUs are weak compared to even mid-range desktop CPUs.

Under 220 Watts total power draw for the entire console system versus 110+ Watts for a PC CPU alone. Yeah, something has got to give to fit into that power envelope.
 
Is it another case of DirectX12 just being shit?
Yeah of course, this is the first DX12 title for the studio.

This is the previous game running on the supposedly "bad" DX11 API: the game is CPU limited at 144 fps for all GPUs, and min fps is 135 at 4K on a 4090.

This is the new game running on the supposedly "better" DX12 API: the game is CPU limited at 120 fps for AMD GPUs and 115 fps for NVIDIA GPUs, and min fps is 72 at 4K on a 4090.
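Expressed as frame times (the usual 1000 / fps conversion), that drop in minimum fps is roughly a doubling of the worst-case frame time:

```python
# Frame time in milliseconds = 1000 / fps; same figures as above, just converted.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

print(frame_time_ms(135))  # DX11 title, 4K min fps on a 4090: ~7.4 ms
print(frame_time_ms(72))   # DX12 title, 4K min fps on a 4090: ~13.9 ms
```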
 
Under 220 Watts total power draw for the entire console system versus 110+ Watts for a PC CPU alone. Yeah, something has got to give to fit into that power envelope.
I would argue the CPUs are still strong enough to create most major game experiences if major open-world games like Horizon can exist at a stable 60fps.

Although not as strong as the best, they are very good for what they are meant to do.

I could not imagine PS4 games running at 60 unless it was a very specific situation, let alone games that pushed the hardware and visuals, yet here we have years' worth of games with 60fps modes, outside of a very small handful. And inside that small handful are games that had huge development issues or bad optimization on all platforms.

I think A Plague Tale is the only game that legitimately justified a lower fps, and even then Alex made the argument that it could have had a 60fps mode on console had they lowered the resolution and certain settings.
 
I am speechless. This means the Series X and PS5 CPU is basically a glorified 8-core i7 from the 5th gen (i7-5960X); the low L3 cache (8MB) and clock speed (3.6GHz) really hurt these CPUs, though I suspect the much bigger memory bandwidth from GDDR6 on consoles helps equalize things a bit. Still, these CPUs are weak compared to even mid-range desktop CPUs.

I'm not sure how much the bigger memory bandwidth will help. If it were a big factor for performance, these CPUs would come with triple or quad memory interfaces on PC. I expect memory latency is a much bigger factor in performance. Just look at how much CAS latency impacts that in the PC space. I don't know what the CAS latency is on GDDR, but I assume it's a fair bit higher than even mediocre DRAM.

That said, cache size is something that can be worked around on a console. An X3D chip in the PC space is much faster than a non X3D chip because it can fit more of the workload into cache. On a console, they will tailor the workload itself to fit into the far more limited cache. Obviously the result won't be the same, but it's certainly an advantage of the fixed platform.
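On the latency point, the CAS figure converts to nanoseconds easily for standard DDR (CL cycles divided by the memory clock, which is half the data rate). GDDR6 kits don't publish a directly comparable single CL number, so the comparison below sticks to DDR4 and just notes the console assumption:

```python
# First-word latency for standard DDR: latency_ns = CL * 2000 / data_rate_MTps
# (the memory clock is half the data rate, hence the factor of 2000).
def cas_latency_ns(cl: int, data_rate_mtps: int) -> float:
    return cl * 2000.0 / data_rate_mtps

print(cas_latency_ns(16, 3200))  # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(14, 3200))  # DDR4-3200 CL14 -> 8.75 ns
print(cas_latency_ns(18, 3600))  # DDR4-3600 CL18 -> 10.0 ns
# GDDR6 doesn't have a comparable per-kit CL spec, but (as assumed above) total
# load-to-use latency on the consoles' shared GDDR6 pool is generally taken to
# be worse than a decent DDR4 setup.
```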
 
Bit of an interesting video that I thought was worth a share: the Ryzen 4700G is basically the Series X (and PS5) CPU, and this guy tests it.


Interesting video, but I'd just point out that the PS5 CPU has half-width vector units compared to the PC / Series consoles (perhaps to reduce peak power use of those units), so it's only half the throughput for SSE and AVX.

But whether that would affect the results in these tests I can't say!
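If that half-width figure is right, the back-of-the-envelope effect on peak vector throughput looks like this (assuming Zen 2's two FMA pipes per core; the 128-bit width for PS5 is the claim above, not something I can verify, and real workloads won't come close to peak anyway):

```python
# Peak FP32 vector throughput = cores x clock (GHz) x FMA pipes x 2 (mul+add)
# x lanes (vector width / 32 bits per float). Assumes Zen 2's 2 FMA pipes per
# core and takes the half-width-on-PS5 claim at face value.
def peak_fp32_gflops(cores: int, ghz: float, fma_pipes: int, vector_bits: int) -> float:
    return cores * ghz * fma_pipes * 2 * (vector_bits / 32)

print(peak_fp32_gflops(8, 3.6, 2, 256))  # Series X-style full-width units: ~921.6 GFLOPS
print(peak_fp32_gflops(8, 3.5, 2, 128))  # PS5 if the units really are half width: ~448 GFLOPS
```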
 