Digital Foundry Article Technical Discussion [2021]

It's interesting but it's all wild conjecture without any kind of actual data. First I have heard of a latency of ~1ms for the clock adjustments.
I do recall an old slide deck that indicated how quickly it adjusted power, in the microseconds. But that's going to be on me to find it; unfortunately it looks like I can't, so I'll be honest in saying I may have imagined it.

But I didn't say 1ms latency; 1ms for a power transfer is way too long. Typical DVFS has some latency which they have to account for, I believe in the nanoseconds. There is also going to be added latency whenever you go through the CCX.

Second, Cerny told us that when there is going to be any kind of downclock, it's usually going to be by 2 or 3%, not 5% or 10%.
I chose stepping for easy math, but as for your statements: he did not use any percentages for anything. As for advertised AMD game clocks, they are typically 10% below the max boost, representing a conservative average the chip can reliably clear.

But I think the biggest proof is the actual power consumption during different scenes. It's actually extremely rare to even reach 200W during gameplay (I actually haven't seen it yet on any analysis) while many cutscenes are consistently at that level.
Frequency has a tremendous impact on power at a specific activity level. Furmark kills the system because the activity of the silicon is very high across the chip: all the cores are constantly in use, with very little downtime before the next calculation has to be performed.

The reason GPUs can get away with boosting to higher clock rates is that they're counting on being able to spend that power on speeding up the calculations.
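For anyone who wants to put rough numbers on that, here's a minimal sketch using the textbook CMOS dynamic-power relation P ≈ α·C·V²·f. The constants and activity factors below are made up for illustration; only the proportionality is the point.

```python
# Rough sketch of the usual CMOS dynamic-power relation: P ~ alpha * C * V^2 * f,
# where alpha is the activity factor (roughly, the fraction of the chip switching
# each cycle). All numbers here are placeholders, not measured values.

def dynamic_power(activity, capacitance, voltage, frequency_hz):
    """Relative dynamic power for a given activity factor, voltage and clock."""
    return activity * capacitance * voltage ** 2 * frequency_hz

# Same clock and voltage, but a Furmark-style stress test keeps far more of the
# chip switching every cycle than a typical game scene does.
game = dynamic_power(activity=0.4, capacitance=1.0, voltage=1.0, frequency_hz=2.23e9)
furmark = dynamic_power(activity=0.9, capacitance=1.0, voltage=1.0, frequency_hz=2.23e9)
print(f"power ratio (stress test vs. game): {furmark / game:.2f}x")
```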

And I wouldn't say rare at all.

Higher FPS draws more power in almost all cases; that should have been the takeaway from the DF video.
 
Cerny told us that when there is going to be any kind of downclock, it's usually going to be by 2 or 3%
That's not what he said

That doesn't mean all games will be running at 2.23 GHz and 3.5 GHz. When that worst case game arrives, it will run at a lower clock speed, but not too much lower; to reduce power by 10% it only takes a couple of percent reduction in frequency.

So I'd expect any downclocking to be pretty minor.
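A quick back-of-the-envelope check of that claim, assuming dynamic power goes roughly with f·V² and that voltage tracks frequency near the top of the curve, so power scales roughly with f³. The real V/f curve of the PS5 APU isn't public, so the exponent is an assumption, not a measurement:

```python
# "A couple of percent of frequency buys ~10% power" under a power ~ f^3 assumption.
target_power_reduction = 0.10                      # want 10% less power
freq_reduction = 1 - (1 - target_power_reduction) ** (1 / 3)
print(f"frequency reduction needed: {freq_reduction * 100:.1f}%")   # ~3.5%

base_clock_ghz = 2.23
print(f"resulting GPU clock: {base_clock_ghz * (1 - freq_reduction):.2f} GHz")  # ~2.15 GHz
```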

Then as we have seen in current games
The current games are not particularly revealing in this regard
The CPU is only used to half its capacity at best, with much less emphasis on compute and so on
 
It is best to just look at the late PS4 games. The later the first-party titles came, the louder the console got. The PS5 should not get any louder; instead it will reduce its clocks over time while games push it more and more to the limits.
Over time the PS5 might no longer reach its highest clock rate in newer titles, which is why earlier titles get the boosted clocks.
That is what I meant when I wrote "auto-optimization". If you write non-optimal code with lots of "latencies" in it, you get a boost in clock rate. If you optimize the code so it has virtually no stalls, you get a clock reduction, but you still have performant code.
But you can also look at it and think that it punishes developers who optimize their code. :D

It will take a while to get the optimal mix between optimized code and clock-rate.
But some of the most demanding games on PS4 were already released during its launch or first year, like Killzone SF, Infamous Second Son, or one very specific scene I found in TLOUR. Those games (or scenes) were already pushing the fan near its max. Like @DSoup said, the PS4s usually got noisier because of the bad cooling.

And with dynamic resolution I'd say games like Spiderman MM (1440p + 60fps + RT) are using the hardware better than most late PS4 games. It's the Killzone SF of the PS5, IMO.

Those are first-party games, but one could make a similar argument for multiplatform games. Early games like AC Unity and BF4 were not significantly improved upon graphically by more recent games. In the case of AC Unity, some would say the following games even looked worse, while having fewer NPCs.
while games push it more and more to the limits.
Finally, I'd even disagree with the statement. I'd say pushing it more to the limits doesn't mean the GPU is going to consume more. TLOU2 is a good example of that: it's one of the most impressive games on the console, and it's actually a relatively (surprisingly) quiet game most of the time, considering how great it looks. Why? Probably because the game is CPU-heavy, and instead of pushing tons of (mostly not visible) effects, it shows plenty of effects, yes, but also the best animations, tons of polygons, sound effects, etc.

..

Higher FPS draws more power in almost all cases; that should have been the takeaway from the DF video.
Of course you'll always find specific spots that push the hardware, but those are actually rare (you need to look for them carefully), and those scenes usually involve some kind of simple graphical effect (I believe it was tons of smoke in Astro's Playroom). But even in those instances, I am pretty sure the devs could optimize that part to display the same amount of visible effects while costing much less power. If you are not careful, GPUs can actually render too many effects, some of them basically useless for the final image.
 
Of course you'll always find specific spots that push the hardware, but those are actually rare (you need to look for them carefully), and those scenes usually involve some kind of simple graphical effect (I believe it was tons of smoke in Astro's Playroom). But even in those instances, I am pretty sure the devs could optimize that part to display the same amount of visible effects while costing much less power. If you are not careful, GPUs can actually render too many effects, some of them basically useless for the final image.
I think the DF videos showcase SM:MM hitting max power draw while swinging around the city, and less power draw in quality mode. This is in line with rendering and hitting more of the GPU pipeline in a single second.
Activity level matters, yes: increase the frame rate by 2x and your activity level is doubled in the same amount of time; increase it by 4x and your activity level is 4x in the same amount of time. That's why the power draw keeps going up. The GPU has less time to rest.
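A toy illustration of that point, with an invented 8 ms per-frame render cost purely to show how the GPU's busy fraction scales with frame rate:

```python
# More frames per second means more busy milliseconds (and fewer idle ones) in
# every wall-clock second, assuming the per-frame GPU work stays roughly fixed.
# The 8 ms render cost is an invented number, not a measurement.

def busy_fraction(render_ms_per_frame, fps):
    """Fraction of each second the GPU spends actively rendering."""
    return min(render_ms_per_frame * fps / 1000.0, 1.0)

for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> GPU busy {busy_fraction(8.0, fps) * 100:.0f}% of the time")
```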

I'm not saying you're wrong as a whole, and I do agree that having less work to do likely implies a high clock rate; I'm just saying that power draw not being at maximum doesn't necessarily imply the GPU is running at maximum clock speed during those times. Even at maximum power draw, we have no guarantee the clock speed is at a maximum. There's no guarantee of finding a point in the game where it's running maximum clock speed, because boost by its nature will alter its characteristics. It's just easier to make that probabilistic determination if you take the CPU out of the equation.

As for cutscenes, most developers will still choose to load the game while the cutscene is playing. I don't necessarily believe that it's sitting there just rendering the cutscene only. Many developers may take advantage of that time to do other things.

As for TLOU2, I suspect many people would agree that the game looks great because everything possible has been baked to the highest quality, so real-time calculations are significantly reduced. That's not an insult, btw. People don't care how the cow is raised, they just want the burger. They want it to look and taste good, and they delivered best in class.
 
Uncharted tech breakdowns also showed really good culling to the point where they could get pretty close to one triangle / pixel (though of course for Uncharted 2 that was a smaller resolution ;)). The best optimization is doing as little as possible.

I think the major point here - and forgive me if I am wrong because I am not overtly technical - is that the power budget is always there. Your system will run hot and then the thermal limits and cooling will eventually slow you down. When you design a game you have to take this into account, and this means you or the system needs to take a considerable margin to make sure you stay under the budget.

Sony's (and AMD's) approach here combines various techniques to make sure the system doesn't heat up so much that it has to slow down (or make noise). Managing the power budget is a means of making sure you don't have to rely on the thermal sensors to manage your system load, and it should allow you to run closer to the limit than you could by relying on thermal sensors alone, without activating jet-engine mode.
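Purely as an illustration of the idea (this is not Sony's or AMD's actual algorithm; the names, numbers and power model are invented), a governor driven by a deterministic power model rather than thermal sensors might look something like this:

```python
# Illustrative-only sketch of a power-budget governor: clocks are adjusted from a
# deterministic power model (activity counters -> modelled watts) against a fixed
# budget, instead of reacting to thermal sensors after the fact.

POWER_BUDGET_W = 200.0
CLOCK_STEP_MHZ = 10
MAX_CLOCK_MHZ = 2230

def modelled_power(activity, clock_mhz):
    """Hypothetical power model: scales with activity and roughly with clock^3."""
    return activity * 220.0 * (clock_mhz / MAX_CLOCK_MHZ) ** 3

def next_clock(activity, clock_mhz):
    """Step the clock toward the highest value the budget allows."""
    if modelled_power(activity, clock_mhz) > POWER_BUDGET_W:
        return clock_mhz - CLOCK_STEP_MHZ
    if clock_mhz < MAX_CLOCK_MHZ:
        return min(clock_mhz + CLOCK_STEP_MHZ, MAX_CLOCK_MHZ)
    return clock_mhz

clock = MAX_CLOCK_MHZ
for activity in (0.5, 0.95, 0.95, 0.95, 0.6):   # a brief heavy burst, then a lighter load
    clock = next_clock(activity, clock)
    print(f"activity {activity:.2f} -> clock {clock} MHz")
```

The appeal of this style of control is that the same workload always produces the same clocks on every console, regardless of ambient temperature or cooler condition.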
 

What I am being told is that basically a year ago, people were afraid of future supply shortages, and when China shut down, more got spooked that it could happen again. So everybody put in larger orders to keep local stock, and basically all capacity is taken.
We are getting lead times from the normal price list of around 52 weeks. If you are willing to pay premium pricing you can shave some weeks off.
In addition there are issues with transport out of Asia; a lack of containers is one thing I have heard, and they are trying to build more of those too.
 
DF Article @ https://www.eurogamer.net/articles/digitalfoundry-2021-cyberpunk-2077-pc-played-on-xbox-one-cpu

We played Cyberpunk 2077 PC on an Xbox One CPU
Unbelievably, it works - but is it playable?

Since the arrival of Xbox One and PlayStation 4, the dividing line between PC and console technology has become more of a blur - both last-gen and current-gen hardware from Microsoft and Sony are effectively integrated, customised PCs. Profoundly illustrating this is the arrival in Far Eastern markets of a PC motherboard that's actually based around the Xbox One processor. We've acquired one, tested it, and even run Cyberpunk 2077 on it - so what have we learned from this bizarre experience?

While there is convergence in technologies - in terms of AMD CPU and GPU architecture at least - the last-gen consoles were still a novelty in their own way. The AMD Jaguar CPU cores were originally targeting low power laptops and tablets, only reaching the desktop PC market in a quad-core configuration with the AM1 'Kabini' platform, designed for basic tasks and even more basic gaming. Faced with a lack of CPU options that would work within a console, Microsoft and Sony adapted the same solution - strapping two of those quad-core CPU complexes together. If they couldn't have a fast CPU, they'd have a 'wide' one instead. The graphics side of the equation was more easily accommodated, tapping into AMD's new-for-the-time Graphics Core Next architecture.

All of which makes the board based on the Xbox One processor I acquired from AliExpress a somewhat weird proposition: its CPU architecture is extremely slow by PC standards, while its graphics are out of date, to put it kindly. Even more curious is that the processor is the original 28nm 'Durango' offering found in the hulking set-top box Xbox One - not the smaller, cooler, more efficient model found in Xbox One S. Markings on the chip may even suggest that the Chinese boards are built using left-over quality assurance samples - after all, we would assume that this chip left production some time back in 2016.

...
 
Interesting.
Yes, the PCIe x1 interface might have some effect, especially on streaming, but at least now we know how "good" the old CPUs really were.
The inactive SRAM is a bit of a downer. It would have been nice if it could have been reused as some kind of L3 cache for the CPU, but I guess that would require some hardware rework and isn't just a feature of the CPU.

The other interesting thing is that Sony and MS could have used higher clock rates for the CPU from day one, as this is the 28nm chip. That would have made things much easier for developers, and at least the Xbox One should have had enough headroom in its cooling solution.
It would be interesting to see how much power the CPU consumes while on load in both power profiles.
 
Really cool video, I love seeing things like this out of China. You'd think MS would take advantage of the Series X and S APUs and put them into Surface tablets and laptops.
 
The Jaguar CPUs are so interesting to me. I wonder what the cat designs will look like when they come back for AMD's big-little designs. Puma didn't do too much over Jaguar.

But I would love to see a 5nm 4-core Zen CPU paired with a 4-core or 8-core cat cluster in stuff like the Surface Pro, Go or Neo.
 
Very interesting video. The PC, even with Vulkan/DX12, is still rather poor in terms of CPU performance. It was already apparent from how low the performance scaling of these monstrous CPUs has been relative to consoles, but it was cool to finally see a direct comparison on the same silicon.
 
Very interesting video. The PC, even with Vulkan/DX12, is still rather poor in terms of CPU performance. It was already apparent from how low the performance scaling of these monstrous CPUs has been relative to consoles, but it was cool to finally see a direct comparison on the same silicon.

I don't know, I was expecting much worse than this.

Now it's really sad to see that the 32MB cache is not used, but I wonder why it's not.
 
Very interesting video. The PC, even with Vulkan/DX12, is still rather poor in terms of CPU performance. It was already apparent from how low the performance scaling of these monstrous CPUs has been relative to consoles, but it was cool to finally see a direct comparison on the same silicon.

Without the cache, it is hardly the same silicon?
 