I defer to @pjbliverpool's post. It's nice to look at anomaly cases, but you're not going to find many anomalies within a family of GPUs. We don't see the 2080 Super outperforming the 2080 Ti despite being significantly higher clocked, for instance. Within a family of GPUs whose entire design is built around compute performance, it is very rare for a part to perform 20% worse than its compute figures suggest. Clocking higher isn't a new thing within a family of cards, and GPU design has been trending slower and wider for some time now, so it's a natural position to take.

None of these things are equal between PS5 and Series X, so why are people focusing on just one metric and expecting the higher number to result in more performance?
I don't need to compare PS5 to XSX. It's not a useful metric, except as an interesting data point about power consumption on PS5's side of things. Compared against itself, XSX shows (I measured) that titles pushing the hardware will consistently draw closer to 200W. Unoptimized and BC titles sit in the 145-150W range. Now, I suspect those computerbase.de numbers might be wrong, so I need to wait for my own copies. But if BC titles are drawing 145-150W as 30fps titles, and AC Valhalla is pulling only 135W, to me that is a large red flag about how much of the silicon the software is actually using.

How are you comparing two different pieces of hardware and software and assessing performance based on the amount of power draw? You know that PS5 runs higher clocks and that power draw scales superlinearly with clock speed. There is a reason PS5 has a higher-rated PSU than Series X.
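On the clock-versus-power point: dynamic power in CMOS logic scales roughly as C·V²·f, and voltage itself has to rise with frequency, so power grows faster than linearly with clock. A minimal sketch of that relationship, using the consoles' rated GPU clocks but with voltages that are invented purely for illustration:

```python
def dynamic_power(freq_ghz, volts, c_eff=1.0):
    """Relative dynamic power: P ~ C * V^2 * f."""
    return c_eff * volts ** 2 * freq_ghz

# Rated GPU clocks; the voltage values are hypothetical placeholders.
slow_wide = dynamic_power(freq_ghz=1.825, volts=1.00)   # fixed-clock style
fast_narrow = dynamic_power(freq_ghz=2.23, volts=1.10)  # high-clock style

print(f"clock ratio: {2.23 / 1.825:.2f}x")
print(f"power ratio: {fast_narrow / slow_wide:.2f}x")
```

Under these assumed voltages, the higher-clocked part draws roughly 48% more power for roughly 22% more clock, which is why comparing raw wattage across two differently clocked designs says little about delivered performance.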
Typically, if I were choosing features for being able to deploy quickly, _easy to use_ and mature are the features I would look for.

Nobody knows how optimized PS5's tools are either. The thing about optimizing is that effective techniques only come with experience, and both consoles are brand new. Which techniques work better than others, and how the tools will adapt to help developers exploit them, will take a while to shake out. What we do know, because Dirt 5's technical director said so, is that the Xbox tools are easy to use and mature.
DX11, likewise, is both easy to use and mature.
If I were looking for high performance, the words I would look for are: optimal, flexible, high performance, low level, low overhead, etc. These are the kinds of words used to describe a highly performant kit.
_Easy_ often conflicts with performance: easy tends to imply lower performance, while harder tools offer the ability to reach higher performance.
i.e. DX11 vs DX12
GNMX vs GNM (PlayStation's high-level wrapper vs its low-level API)
Python vs C++
Maturity of the tools is a discussion around stability, not performance.
The GDK can be great and mature and still not be well optimized for Series console performance. The mature bulk of the GDK comes from the older platforms it has supported longest: One S, One X, and PC. The newer work is bringing the Series X|S platforms into it.
As for tools optimization, and the constant comparison between PS5 and XSX: this isn't the discussion I was interested in. If tools are an issue, more data would support that theory. It's as easy as removing PS5 from the equation and asking whether we can still show XSX is underperforming. I think the answer is yes. We can look at the 2070, the 2060, the 5700 XT, and the 5700 at console settings and see whether they outperform the XSX. I don't need PS5 to show XSX is underperforming, and it shouldn't be part of that debate. I have power measurements right now (crude, but whatever) showing that if you're well below 200W on XSX, you're likely not using the full potential of the system.
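The comparison proposed above could be sketched like this. The TFLOP figures are the approximate rated numbers for each part, but the fps values are placeholders only, not real measurements; the exercise only becomes meaningful once real benchmark results at matched console settings are substituted in.

```python
# Approximate rated FP32 TFLOPs for each part.
RATED_TFLOPS = {
    "RTX 2060": 6.5,
    "RTX 2070": 7.5,
    "RX 5700": 7.9,
    "RX 5700 XT": 9.8,
    "Series X": 12.1,
}

# PLACEHOLDER fps values -- substitute real benchmark results
# captured at matched console settings.
measured_fps = {
    "RTX 2060": 48,
    "RTX 2070": 56,
    "RX 5700": 58,
    "RX 5700 XT": 68,
    "Series X": 66,
}

# Normalize each GPU's fps by its rated compute: if the Series X
# lands well below its discrete-GPU neighbours on this metric, that
# supports the underperformance claim without ever mentioning PS5.
for gpu, tflops in RATED_TFLOPS.items():
    print(f"{gpu:11s} {measured_fps[gpu] / tflops:.2f} fps per TFLOP")
```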
If people want to claim that PS5 is overperforming, they should remove XSX from the equation and compare PS5 to its equivalent GPUs on the market to see if that's actually true.
I can't say that PS5 is running a super-efficient kit; that's not a claim I've made. But poor optimization clearly hurts XSX much more than it hurts PS5. PS5 simply clocks up when it has less work to do, while XSX trucks along at the same pace it always has. PS5 will perform better under lighter load, because that is characteristic of its boost nature. XSX will always perform better under heavier load, because that is characteristic of its fixed-clock nature.
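The fixed-versus-boost contrast described above can be sketched as a toy model. This is my own illustration: the boost rule and the power budget are invented, and only the clock ceilings are the real rated figures.

```python
FIXED_CLOCK_GHZ = 1.825   # Series X-style: constant regardless of load
BOOST_MAX_GHZ = 2.23      # PS5-style ceiling
POWER_BUDGET = 0.9        # invented, normalized power limit

def boost_clock(load):
    """Invented boost rule: model power as load * (f / f_max)^3 and
    pick the highest clock that fits the budget, capped at the ceiling.
    'load' is normalized occupancy in (0, 1]."""
    if load <= 0:
        return BOOST_MAX_GHZ
    return BOOST_MAX_GHZ * min(1.0, (POWER_BUDGET / load) ** (1 / 3))

for load in (0.5, 0.8, 1.0):
    print(f"load {load:.1f}: fixed {FIXED_CLOCK_GHZ} GHz, "
          f"boost {boost_clock(load):.2f} GHz")
```

Under this assumed rule the boost part sits at its 2.23 GHz ceiling for light and moderate loads and only sheds clocks near full occupancy, while the fixed part holds 1.825 GHz throughout; that matches the "PS5 just goes faster when it has less work to do" characterization.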
If you want to check whether PS5 is really over-performing, you need to compare it to other boost-clock GPUs.
On the topic of PS5 vs XSX:
To Cerny's credit, my greatest oversight in all the discussion leading up to launch was assuming games were optimized enough to make full use of the silicon. I never once considered how inefficient a majority of titles could be, leaving silicon idle. This is where boost is probably making PS5 perform very well, outside the range I expected it to. He did mention it, perhaps in a way that didn't click for me at the time. It clicks now.
We've never taken power measurements for every game on consoles. But it's clear that perhaps we should have, and I will try. I don't need to guess much for PS5; it's the XSX that needs measuring.