Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

None of these things are equal between PS5 and Series X, so why are people focusing on just one metric and expecting the higher number to result in more performance?
I defer to @pjbliverpool's post. It's nice to look at anomaly cases, but you're not going to find many anomalies within a family of GPUs. We don't see the 2080 Super outperforming the 2080 Ti despite being significantly higher clocked, for instance. There are very few cases, within a family of GPUs whose design beefs up the entire GPU around its compute performance, where the bigger part performs 20% worse. Clocking higher isn't a new thing within a family of cards, and GPUs have always gone slower and wider over time. This has been the general direction of GPU development for some time now, so it's a natural position to take.

How are you comparing two different pieces of hardware and software and assessing performance based on the amount of power draw? You know that PS5 runs higher clocks and that non-linear scaling of power draw with frequency comes into play. There is a reason that PS5 has a higher-rated PSU than Series X.
I don't, and I don't need to compare PS5 to XSX. It's not a useful metric except as an interesting data point about power consumption on PS5's side of things. XSX compared to itself shows (I measured) that titles pushing the XSX will consistently hit closer to 200W of power draw. Unoptimized and BC titles are in the 145-150W range. Now, I suspect those computershare.de numbers might be wrong, so I need to wait for my own copies. But if BC titles are drawing 145-150W for 30fps titles and AC Valhalla is pulling only 135W, to me that is a large red flag about how much of the silicon the software is actually using.

Nobody knows how optimized PS5's tools are either. The thing about optimizing is that effective techniques only come with experience, and both consoles are brand new. Which techniques work better than others, and how the tools will adapt to help developers exploit those techniques, is something that will take a while to shake out. What we do know, because Dirt 5's technical director said so, is that the Xbox tools are easy to use and mature.
Typically, if I were choosing features for being able to deploy quickly, _easy to use_ and _mature_ are the features I would look for.
DX11, for instance, is both easy to use and mature.

If I were looking for high performance, the words I would look for are: optimal, flexible, high performance, low level, low overhead, etc. Those are the kinds of words I would use to describe a highly performant kit.

Often, _easy_ tends to conflict with performance: easy usually implies lower performance, while harder implies the ability to reach higher performance.
i.e. DX11 vs DX12
GNMX vs GNM
Python vs C++

Maturity of the tools is a discussion around stability, not performance.
The GDK can be great and mature, and still not be well optimized for Series console performance. Much of the GDK's maturity comes from the older platforms: 1S, 1X, and PC; the newest work within the GDK is bringing in the Series X|S platforms.

As for tools optimizations, or the constant comparison between PS5 and XSX: that isn't the discussion I was interested in. If tools are an issue, more data would support this theory. It's as easy as removing PS5 from the equation and still asking whether we can show XSX is underperforming. I think the answer is yes. We can look at the 2070, the 2060, the 5700 XT, and the 5700 at console settings and see whether they outperform the XSX. I don't need PS5 to show XSX is underperforming, and it shouldn't be part of that debate. I have power measurements right now (crude, but whatever) showing that if you're well below 200W on XSX, you're likely not really using the full potential of the system.
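To make that concrete, here's a minimal sketch of the kind of check I mean, using the wattage figures floating around this thread; the 200W "well utilized" reference and the 85% cutoff are my own assumptions, not measured limits:

```python
# Rough utilization proxy: compare a title's observed XSX power draw against a
# reference figure for a title known to push the silicon. The wattage ranges
# are the crude meter readings discussed in this thread; the 200 W reference
# is an assumption, not an official ceiling.

REFERENCE_FULL_LOAD_W = 200.0  # assumed "silicon well used" ballpark for XSX

observed_draw_w = {
    "Gears 5 (measured)": (190, 196),
    "BC titles (typical)": (145, 150),
    "AC Valhalla (reported)": (135, 135),
}

for title, (low, high) in observed_draw_w.items():
    midpoint = (low + high) / 2
    proxy = midpoint / REFERENCE_FULL_LOAD_W
    verdict = "likely leaving silicon idle" if proxy < 0.85 else "pushing the silicon"
    print(f"{title}: {low}-{high} W -> ~{proxy:.0%} of reference ({verdict})")
```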

If people want to claim that PS5 is over-performing, they should remove XSX from the equation and compare it to the equivalent GPUs on the market to see if that's actually true.

I can't say that PS5 is running a super-efficient kit; that's not a claim I've made. But clearly poor optimization hurts XSX much more than it hurts PS5. PS5 just clocks higher if it has less work to do, while XSX trucks along at the same pace it always has. PS5 will perform better with less load, because that is the characteristic of its boost nature. XSX will always perform better with more load, because that is the characteristic of its fixed-clock nature.

If you want to compare PS5's performance to see if it's really over-performing, you need to compare it to other boost-clock GPUs.

On the topic of PS5 vs XSX:
To Cerny's credit, my greatest oversight in all of the discussion leading up to launch was assuming games were optimized enough to make full use of the silicon. I never once considered how inefficient a majority of titles could be at taking advantage of the silicon. And this is where boost is probably making PS5 perform very well, and outside the range I expected it to perform. He did mention it, perhaps in a way that didn't click for me at the time. But now it does. It clicks now.

We've never done per-game power measurements on consoles, but it's clear that perhaps we should have, and I will try. I don't need to guess much for PS5, though; it's the XSX that will need measuring.
 
The GDK can be great and mature, and still not be well optimized for Series console performance. Much of the GDK's maturity comes from the older platforms: 1S, 1X, and PC; the newest work within the GDK is bringing in the Series X|S platforms.

I'm not sure what you're suggesting here. Are you saying that Microsoft's SDK is badly optimised for Xbox's hardware, or that AMD's RDNA2 architecture requires some effort on the software side to extract performance? Or something in the middle? If Series X is RDNA2, why doesn't the GDK include AMD's tools for performance profiling?

Have any devs indicated that any of these are the source of the performance disparities?

As for tools optimizations, or the constant comparison between PS5 and XSX: that isn't the discussion I was interested in. If tools are an issue, more data would support this theory.

Did you just say more data would support your theory? Data you haven't seen? I assume this was a typo! ;-)
 
I'm not sure what you're suggesting here. Are you saying that Microsoft's SDK is badly optimised for Xbox's hardware, or that AMD's RDNA2 architecture requires some effort on the software side to extract performance? Or something in the middle? If Series X is RDNA2, why doesn't the GDK include AMD's tools for performance profiling?

Have any devs indicated that any of these are the source of the performance disparities?
Unsure, maybe a middle ground. I do not know exactly. You can only look at saturation of the silicon; we are not necessarily sure what the cause is. Power will tell one story, performance another. Comparisons to other data points will help, but without a lot of data to establish a trend, it's too early to point at one or the other.

Did you just say more data would support your theory? Data you haven't seen? I assume this was a typo! ;-)
I mean, when I talked about the tools argument, it was not to draw comparisons to PS5 performance. We should be able to show XSX is underperforming without needing PS5 as a data point; if the hypothesis is true, there should be other data points that back the claim even if we remove the PS5 data points entirely.
 
Unsure, maybe a middle ground. I do not know exactly.
These ten words epitomise my frustration with the last several pages of this thread: the unassailable position that something is wrong, with literally no evidence for it, and nobody able to say what it is, but something is wrong.

We should be able to show XSX is underperforming without needing PS5 as a data point; if the hypothesis is true, there should be other data points that back the claim even if we remove the PS5 data points entirely.

You would need a frame of reference as a starting point and there is none.
 
These ten words epitomise my frustration with the last several pages of this thread: the unassailable position that something is wrong, with literally no evidence for it, and nobody able to say what it is, but something is wrong.



You would need a frame of reference as a starting point and there is none.
Not true. Typically, Windows Store releases leverage the MS kits, so that would include the GDK.

I would need to find a Steam version and a Windows Store version and compare their performance.

Since all else is equal (the API is still DirectX), it should just come down to the development kit.

As for the issue being optimization vs the dev kit: well, yeah, that's going to be hard to tell, unfortunately. Boost clocks seem to do a great job of covering up poorly optimized code; you just go faster when the code is shitting about. If PS5 is the lead platform, that's not going to work out in favour of XSX. If you are time constrained and using PS5 as your lead code base, variable clocks will mask small issues that would crop up on fixed clocks.
That's unfortunate for MS.
 
Not true. Typically, Windows Store releases leverage the MS kits, so that would include the GDK. I would need to find a Steam version and a Windows Store version and compare their performance.

Why wouldn't Steam or EGS have GDK-built games? Neither store places any restrictions as far as I'm aware; the only difference is that the Windows Store requires outputting a UWP binary.

Boost clocks seem to do a great job of covering up poorly optimized code; you just go faster when the code is shitting about. If PS5 is the lead platform, that's not going to work out in favour of XSX.

Getting good performance without heavy optimisation seems like a developer win to me. And yes, that could result in a disparity of performance if Xbox requires more effort to extract performance. But this reverts back to the question of why narrow/fast is delivering better performance on un-optimized code (a bit of a slight to Ubisoft and Codemasters there!) than wide/slower.

An explanation would help my understanding. :yes:
 
Just to provide some updates:
Demon's Souls will rail at 205-210W; this is for both the 30 and 60 fps modes.
Call of Duty Warzone holds 116-123W.

Gears 5 is 190-196W, which is still currently the leader in power consumption on XSX.
167-191W on FH4
153-191W on Warzone 60Hz
172-197W on Warzone 120Hz

Other metrics are in my signature under XSX Optimized titles.
Sheet 1 is for XSX.
Sheet 3 will be the same thing for PS5; I just need more time to fill out the values.
But it looks constant at around 205-210W for PS5 once it gets up to temperature; the fan spools up and takes a bit of time to catch up before it hits that rail.
I'll try a couple more titles.

I've reached out to my BIL who works at Ubisoft and ordered up WD and AC Valhalla. I can do some testing.

In the meantime, I need to get a new wattage meter and hook it up to a Raspberry Pi for constant polling, so that I can get power measurements per frame at 16ms or something like that. I just need to find a device that can pin out its data, and the Raspberry Pi can do the polling.
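Roughly what I have in mind for that polling loop, as a minimal sketch: it assumes a hypothetical meter that streams one wattage reading per line over serial (via pyserial), and the device path, baud rate and parser are placeholders for whatever hardware I end up with:

```python
# Minimal power-logging sketch for a Raspberry Pi, assuming a wattage meter
# that streams one reading per line over a serial port (pyserial). The device
# node, baud rate and line format are placeholders for whatever meter I find.
import csv
import time

import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"      # placeholder device node
BAUD = 9600                # placeholder baud rate
SAMPLE_PERIOD_S = 0.016    # ~16 ms, roughly one 60 fps frame

def parse_watts(line: bytes) -> float:
    # Placeholder parser: assumes the meter prints a single wattage per line.
    return float(line.strip())

with serial.Serial(PORT, BAUD, timeout=1) as meter, \
        open("xsx_power_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_s", "watts"])
    while True:
        raw = meter.readline()
        if raw:
            writer.writerow([f"{time.time():.3f}", parse_watts(raw)])
        time.sleep(SAMPLE_PERIOD_S)
```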
Great job! Your results with Gears 5, FH4 and Warzone show the console can be used pretty well, even in multiplatform games like Warzone. It would be great if we could get some power-consumption figures for COD Cold War, as both consoles perform very similarly there. I am not surprised by PS5's 210W; I was expecting about 220W max. It was expected with such high frequencies.

But clearly poor optimization hurts XSX much more than it hurts PS5.
This is where I think you are all making the same mistake. It's not only because of tools or optimizations, it's probably (also) about GPU architecture: narrow fast design and keeping the CUs busy (which is different).

With 12/14 CUs per shader array, the XSX has a design that is well suited to compute tasks but less optimized for gaming tasks. This is where its GPU is unique if you compare it with AMD or even Nvidia GPUs. For instance, the big AMD GPUs all have either 8 or 10 CUs per shader array. That matters because it is currently the sweet spot for RDNA between compute and the rest (primitive, rasterization, cache, etc.). So comparing PS5 with XSX efficiency is different from comparing those AMD or Nvidia cards among themselves.

Besides the GPU, I think other factors could be important, notably for high-framerate gaming, like the CPU. As long as we don't have more details about the PS5 CPU, we don't have the full picture.
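For reference, a quick back-of-the-envelope on CUs per shader array, using commonly cited active CU counts; the shader-array layouts (particularly for the 5500 XT) are my assumptions from public block diagrams rather than official breakdowns:

```python
# Back-of-the-envelope CUs per shader array, using commonly cited active CU
# counts and shader-array layouts. Treat the layouts (especially Navi 14 /
# 5500 XT) as assumptions from public block diagrams, not official specs.
gpus = {
    # name: (active CUs, shader arrays)
    "RX 5700":    (36, 4),
    "RX 5700 XT": (40, 4),
    "RX 5500 XT": (22, 2),   # assumed 2-array layout
    "RX 6900 XT": (80, 8),
    "PS5":        (36, 4),   # 40 physical CUs, 36 active
    "XSX":        (52, 4),   # 56 physical CUs, 52 active
}

for name, (cus, arrays) in gpus.items():
    print(f"{name}: {cus} active CUs / {arrays} SAs = {cus / arrays:.1f} per array")
```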
 
You shouldn't. In NXGamer's video about COD, he noticed a few things DF completely missed on XSX: the missing muzzle flash, the different implementation of RT, and flickering in some shadows.

And VGTech was the first to notice that some framerate drops were actually random drops, and so unrelated to pure performance. He also gives us precise pixel counts (while providing his screenshots) and exhaustive framerate stats.

Both those channels are starting to become known and valued as good sources of technical accuracy, for good reason.
There are Twitter loonies who notice missing muzzle flashes and the like; that doesn't mean they are HW engineers. NXGamer, from what I've seen of him, is a fan of games and tech, but he is really no authority on anything. Referencing him in TF talk is completely pointless: Cerny didn't talk about his "TF don't matter" argument, nor was he confirming it's true. Comparing a 750 Ti with the PS4 and Xbox One and saying "yep, I was right" does not make it right.
 
Oh, NXGamer reached here now? Boo.
And don't bring DF into the discussion; that's an insult to DF.

I admittedly haven't watched much NXGamer content because he comes off as not very knowledgeable on the subject matter and more of an excited fan of technology. I'm actually surprised to see anyone on Beyond3D reference his content in any serious way.

It's because he's heavily biased. I'm also surprised he made it as far as this forum, at least as evidence for anything technical.

The PS5 is doing very well for its TF numbers; it seems to be performing basically on par with the more powerful 12TF XSX. On the other hand, declaring a 'winner' already is just not very unbiased, I think. They're going to perform very closely for the rest of the generation in all regards.
The comparison to PC is just stupid from NXGamer anyway; it's no comparison to even start with.
 
This is where I think you are all making the same mistake. It's not only because of tools or optimizations, it's probably (also) about GPU architecture: narrow fast design and keeping the CUs busy (which is different).

With 12/14 CUs per shader array, the XSX has a design that is well suited to compute tasks but less optimized for gaming tasks. This is where its GPU is unique if you compare it with AMD or even Nvidia GPUs. For instance, the big AMD GPUs all have either 8 or 10 CUs per shader array. That matters because it is currently the sweet spot for RDNA between compute and the rest (primitive, rasterization, cache, etc.). So comparing PS5 with XSX efficiency is different from comparing those AMD or Nvidia cards among themselves.

I'm increasingly thinking that this is likely to be one of the big factors: current workloads using the graphics pipeline are a better fit for the PS5.

Workloads change over time, and it's always towards more shader/compute vs fixed function. PS5 has a similar balance to the X1X, which is rather good at PS4-targeted games scaled up to 1800p ~ 2160p. On top of that, PS5 appears to boost to some really impressive clock speeds (even by RDNA 2 standards!) under current gaming loads.

8 - 10 - 12 CUs per shader array seems to be the balance that is best for shifting the mainstream ~ high-end RDNA graphics cards that are on, or entering, the market right now (5700 - 6800 with 8 - 10, and 5500 with 10 - 12), and graphics cards ultimately need to bench high in reviews using the games that are currently available. XSX goes beyond even the 5500 XT with its 12 - 14 CUs per SA. I think it's going to be a while till games suit it, and even then PS5 won't be particularly disadvantaged, as the CUs that take up most of the GPU will be busy regardless.

I agree that compute (and RT IMO) would seem to fit XSX well. L0 is per CU, as is the RT and texture op stuff, and compute bypasses the L1 (per SA) and goes straight to the 5 MB of L2.

Ultimately, I think things will change a little in the XSX's favour. But not massively, and it'll never be anything like PS4 vs X1 (which I don't think anyone claimed). And I don't think the difference will ever be worth considering from a customer POV. Both are great machines.

One last thing: the further you get past launch the less performance matters from a sales POV. While it matters most, right now, Sony are mostly neck and neck and perhaps slightly ahead in some of the 120 fps games that MS were the ones trumpeting. For the second generation in a row, Sony have really nailed their launch. And that's the window when hardware matters most. Hats off, they've made a really good set of tradeoffs once again.
 
For the second generation in a row, Sony have really nailed their launch. And that's the window when hardware matters most. Hats off, they've made a really good set of tradeoffs once again.

Sony is just very experienced with console launches and generations; they have been around longer, and consoles are much more important to Sony than they are to MS. Sony knows how to make consoles and content for them.
 
Great job! Your results with Gears 5, FH4 and Warzone show the console can be used pretty well, even in multiplatform games like Warzone. It would be great if we could get some power-consumption figures for COD Cold War, as both consoles perform very similarly there. I am not surprised by PS5's 210W; I was expecting about 220W max. It was expected with such high frequencies.


This is where I think you are all making the same mistake. It's not only because of tools or optimizations, it's probably (also) about GPU architecture: narrow fast design and keeping the CUs busy (which is different).

With 12/14 CUs per shader array, the XSX has a design that is well suited to compute tasks but less optimized for gaming tasks. This is where its GPU is unique if you compare it with AMD or even Nvidia GPUs. For instance, the big AMD GPUs all have either 8 or 10 CUs per shader array. That matters because it is currently the sweet spot for RDNA between compute and the rest (primitive, rasterization, cache, etc.). So comparing PS5 with XSX efficiency is different from comparing those AMD or Nvidia cards among themselves.

Besides the GPU, I think other factors could be important, notably for high-framerate gaming, like the CPU. As long as we don't have more details about the PS5 CPU, we don't have the full picture.

I'm increasingly thinking that this is likely to be one of the big factors: current workloads using the graphics pipeline are a better fit for the PS5.

Workloads change over time, and it's always towards more shader/compute vs fixed function. PS5 has a similar balance to the X1X, which is rather good at PS4-targeted games scaled up to 1800p ~ 2160p. On top of that, PS5 appears to boost to some really impressive clock speeds (even by RDNA 2 standards!) under current gaming loads.

8 - 10 - 12 CUs per shader array seems to be the balance that is best for shifting the mainstream ~ high-end RDNA graphics cards that are on, or entering, the market right now (5700 - 6800 with 8 - 10, and 5500 with 10 - 12), and graphics cards ultimately need to bench high in reviews using the games that are currently available. XSX goes beyond even the 5500 XT with its 12 - 14 CUs per SA. I think it's going to be a while till games suit it, and even then PS5 won't be particularly disadvantaged, as the CUs that take up most of the GPU will be busy regardless.

I agree that compute (and RT IMO) would seem to fit XSX well. L0 is per CU, as is the RT and texture op stuff, and compute bypasses the L1 (per SA) and goes straight to the 5 MB of L2.

Ultimately, I think things will change a little in the XSX's favour. But not massively, and it'll never be anything like PS4 vs X1 (which I don't think anyone claimed). And I don't think the difference will ever be worth considering from a customer POV. Both are great machines.

One last thing: the further you get past launch the less performance matters from a sales POV. While it matters most, right now, Sony are mostly neck and neck and perhaps slightly ahead in some of the 120 fps games that MS were the ones trumpeting. For the second generation in a row, Sony have really nailed their launch. And that's the window when hardware matters most. Hats off, they've made a really good set of tradeoffs once again.

Excellent posts.
 
Great job! Your results with Gears 5, FH4 and Warzone show the console can be used pretty well, even in multiplatform games like Warzone. It would be great if we could get some power-consumption figures for COD Cold War, as both consoles perform very similarly there. I am not surprised by PS5's 210W; I was expecting about 220W max. It was expected with such high frequencies.


This is where I think you are all making the same mistake. It's not only because of tools or optimizations, it's probably (also) about GPU architecture: narrow fast design and keeping the CUs busy (which is different).

With 12/14 CUs per shader array, the XSX has a design that is well suited to compute tasks but less optimized for gaming tasks. This is where its GPU is unique if you compare it with AMD or even Nvidia GPUs. For instance, the big AMD GPUs all have either 8 or 10 CUs per shader array. That matters because it is currently the sweet spot for RDNA between compute and the rest (primitive, rasterization, cache, etc.). So comparing PS5 with XSX efficiency is different from comparing those AMD or Nvidia cards among themselves.

Besides the GPU, I think other factors could be important, notably for high-framerate gaming, like the CPU. As long as we don't have more details about the PS5 CPU, we don't have the full picture.
I think it shows the range of utilization you can get from the hardware. I'm not really sure what the upper limit is for utilizing the console, because we haven't seen a console exclusive yet, but I suspect 200W is probably around the mark. That being said, my measurements are extremely crude: they don't represent any sort of average, median or mode, and there's no histogram of power usage. I'm just trying to catch the min/max values that my reader shows. It's very far from a perfect measurement system.

Warzone isn't a pure BC title, so I'm not exactly sure what the implications are for it. It's modified on the Xbox side (16x AF by default for all BC titles), and it's not exactly clear how BC works on the PS5 side. Destiny 2, for instance, seems to operate at the PS4/4Pro speed limits and feature set. The aliasing is really bad in parts of the world for some reason (starting a new character in the Cosmodrome, for instance).

So BC isn't an apples-to-apples comparison, unfortunately. Destiny 2 is about 90-99W on PS5, at least in the area I got the chance to measure; on Xbox, the area I measured was in the 145-150W range.

I set out to prove that the titles on XSX aren't optimal. 'Pretty well' means seeing a title hold 190+W, which most don't; they often sit around 150W. That's about all I was able to prove. I'm not convinced I would follow the same line of thinking until there's more data to suggest what you're saying is true.
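For what it's worth, once a logger like the one sketched earlier is writing a CSV, turning the eyeballed min/max into an actual distribution is straightforward; a minimal sketch, with the file and column names assuming that hypothetical logger:

```python
# Summarize a logged power trace into something better than eyeballed min/max.
# Assumes the CSV written by the hypothetical logging sketch earlier in the
# thread (columns: timestamp_s, watts); the file name is a placeholder.
import csv
import statistics
from collections import Counter

samples = []
with open("xsx_power_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        samples.append(float(row["watts"]))

print(f"samples: {len(samples)}")
print(f"min/max: {min(samples):.0f} / {max(samples):.0f} W")
print(f"mean:    {statistics.mean(samples):.1f} W")
print(f"median:  {statistics.median(samples):.1f} W")

# Crude 10 W-bucket histogram so sustained load is visible, not just spikes.
buckets = Counter((int(w) // 10) * 10 for w in samples)
for bucket in sorted(buckets):
    bar = "#" * max(1, round(50 * buckets[bucket] / len(samples)))
    print(f"{bucket:>3}-{bucket + 9} W: {bar}")
```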
 
I agree that compute (and RT IMO) would seem to fit XSX well. L0 is per CU, as is the RT and texture op stuff, and compute bypasses the L1 (per SA) and goes straight to the 5 MB of L2.

I don't think so.
RDNA 1: [architecture/cache diagram]

RDNA 2: [architecture/cache diagram]
 
It's because he's heavily biased. I'm also surprised he made it as far as this forum, at least as evidence for anything technical.

The PS5 is doing very well for its TF numbers; it seems to be performing basically on par with the more powerful 12TF XSX. On the other hand, declaring a 'winner' already is just not very unbiased, I think. They're going to perform very closely for the rest of the generation in all regards.
The comparison to PC is just stupid from NXGamer anyway; it's no comparison to even start with.


Empirically, as long as the data is gathered in an unbiased fashion, it doesn't matter who did it, even if there was a conflict of interest.

In some of these cases, other frame-analysis sites with better reputations came to the same conclusions, so the findings were effectively peer reviewed and held up.
 
The MS silicon team would disagree with your assessment of their silicon.
[attached slide image]

It's a little disappointing to see B3D turn against the stated capabilities of the hardware and take up the costumes of console wars.

Some of that is inevitable, but the likes are ... disappointing.

What if Globby was simply thinking about RDNA 1 in comparison to RDNA 2, possibly forgetting about that particular customization MS/AMD applied to the XBSX/S GPU? I haven't seen any console warring, just bickering over who to trust.
 
Getting good performance without heavy optimisation seems like a developer win to me. And yes, that could result in a disparity of performance if Xbox requires more effort to extract performance. But this reverts back to the question of why narrow/fast is delivering better performance on un-optimized code (a bit of a slight to Ubisoft and Codemasters there!) than wide/slower.

An explanation would help my understanding. :yes:
To be clear, I only say this because I haven't measured the wattage myself, and 135W/145W respectively on the SX seems erroneously bad. I will check it myself before holding to that claim, which is what started this discussion in the first place.

I don't necessarily think narrow/fast in particular is delivering more performance than wide/slow. In any GPU lineup within a family of cards, you'd generally see the smaller, narrower chips at the start of the lineup, getting wider but slower as you move up to the flagship. Clock speed is king because everything goes faster, but power per mm² is unsustainable as clock speeds climb, which is why, to gain more performance, we go wider and thus slower. If there were a GPU lineup today, the XSX would be the 6600 and the PS5 would undoubtedly be a 6500, and the 6500 would never outperform the 6600, nor the 6600 the 6700, and so on up to the 6800 XT.

But here we are with data suggesting differently, so let's get tools out of the way entirely for the sake of supporting your argument.
Consider comparing a 2080 and a 2080 Ti: there is a massive clock-speed difference on the spec sheet, but in comparison after video comparison, boost just takes the 2080 Ti right to the top and keeps its actual clocks in line with the 2080's. This probably applies to a whole slew of GPU families. The 3070 runs at 1965MHz in AC Valhalla, against an advertised boost clock of 1725MHz. And both of these cards, the 2080 Ti and the 3070, are super wide compared to the 2080 but are putting in the same clock rates, because they can.

I think it's just going to come down to what you (I'm pretty sure it was you) said: variable clock rates have been in the industry for so long, and appreciably so, likely ten-fold better than fixed clocks. If I had to attribute a critical performance failure to the XSX, it's not about narrow and fast; it's about maximizing the power envelope, and the SX cannot. I really overestimated how effectively the GPU silicon would be utilized, and that cost them. For example, if I look at this video here:

DOOM Eternal RTX 3070 | 1080p - 1440p - 2160p(4K) | FRAME-RATE TEST - YouTube

The marketed boost spec is 1725MHz, but in the test it's running 2070MHz at 1080p, 1920MHz at 1440p, and back up to 2070MHz for 4K. That really shows how it's maximizing the full potential of the card: it's so lightly loaded that it clocks itself well beyond the marketed spec. Nowhere does it even drop close to its marketed clock rate. As long as it's running cool, it'll take its boost way above the expected values.

And so when I think about PS5's 2230MHz, I think, yeah, it's probably sitting there 95% of the time, because it looks like there is so much downtime that it has no issue holding it; things aren't as heavy-duty as we've made them out to be.
And in doing so, it's making full use of its power envelope.

But the SX: if you've got titles coming in at 135W, that's still a fraction of what the power supply is able to provide, and it's just not maximizing that power envelope. It could easily have jacked the clock rate all the way up as long as the power didn't go beyond, say, 210W. So I'm not even sure the advertised clock rate ever really mattered or matters; it's whether a fluctuating 135-155W worth of processing can compete with a constant 200W pumping at all times.

I think it would be a very different story if the SX had the same variable clock system as PS5, because then it would be maximizing its power envelope at all times. A simple graph of the wattage for PS5 is just a straight line for nearly all of its titles, and I'm positive that if I do this for XSX it's going to be all over the place. In the end, I'm pretty convinced that if there is a hardware reason why XSX can't really beat PS5, it's this. It's not about the front end, and it's not about sizing work for the CUs; it's a complete inability to take full advantage of the available power envelope. When I think about the other reasons brought forward as to why PS5 is performing better:
ROPs? No: the 4Pro had 2x the ROPs the X1X had, and got beaten handily.
Secret cache? No: 32MB of ESRAM operating at full clock speed with simultaneous read/write, and Xbox got creamed.
Faster clock speed? The X1S is clocked faster than the PS4 and has the cache, so nope.
Bandwidth? The X1S has higher aggregate bandwidth than the PS4, with less latency. Nope.
The 4Pro has 2.2x the compute of the PS4 and more bandwidth, but a great many titles still only ran at 1080p (10%, I think we counted).
But all console generations prior to this one ran fixed clocks, so they all suffered the ups and downs of unoptimized code together.

So really, I'm looking at one major difference here: it's not about narrow-and-fast beating slow-and-wide, it's about maximizing the power envelope. 200W of a faster but narrower chip will not outperform the same 200W of a slower but wider chip. Meanwhile, PS5 is going to run at 200W continuously 99.99% of the time, while the rolling average on XSX will be below 200W. The 20% increase in TF may not be enough to make up the deficit in the rolling average of power consumption. And that's a problem for MS: there are likely very few games that will come along and keep the XSX's power envelope around 200W, at least not any time soon.

That's my explanation. I may have seriously underestimated how much performance can be left on the table using fixed clocks.
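To put rough numbers on that, here's a toy model; the throughput-proportional-to-CUs-x-clock-x-utilization assumption and the utilization figures are mine, chosen purely to illustrate the argument, not measurements:

```python
# Toy model of the "maximize the power envelope" argument. Effective
# throughput is approximated as CUs * clock * utilization; the utilization
# figures and the boost behaviour are illustrative assumptions, not data.

def throughput(cus: int, clock_ghz: float, utilization: float) -> float:
    return cus * clock_ghz * utilization

# Poorly optimized title: the fixed-clock, wide chip sits partly idle, while
# the variable-clock, narrow chip holds its peak clock and stays busy.
xsx_unopt = throughput(cus=52, clock_ghz=1.825, utilization=0.70)
ps5_unopt = throughput(cus=36, clock_ghz=2.23, utilization=0.90)
print(f"Unoptimized workload -> XSX {xsx_unopt:.1f} vs PS5 {ps5_unopt:.1f}")

# Well-optimized title: both machines near full utilization; the nominal
# ~20% compute advantage of the wider chip reappears.
xsx_opt = throughput(cus=52, clock_ghz=1.825, utilization=0.95)
ps5_opt = throughput(cus=36, clock_ghz=2.23, utilization=0.95)
print(f"Optimized workload   -> XSX {xsx_opt:.1f} vs PS5 {ps5_opt:.1f}")
```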
 