Digital Foundry Article Technical Discussion [2022]

Alex's speculation seems reasonable to me. The only error he might have made is in calculating the TFLOPS of the RTX GPUs: I suspect his cards run above the officially stated boost clocks from which the TFLOP numbers are calculated. It wouldn't change things to a very meaningful degree, though.
Good point. For example, according to TechPowerUp the average clock of the 2060 Super is 1800MHz, making it 7.68TF, so ~74.6% of the PS5 GPU vs the 69.8% described in the video. For the 2080 the average is 1890MHz, so 11.17TF, 108.5% vs the 96.6% described. But yeah, whatever ;d
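For reference, the arithmetic is just FP32 ALUs x 2 ops per clock x clock speed. A quick sketch below, assuming 2176 ALUs for the 2060 Super, 2944 for the 2080 and 36 CUs x 64 lanes at 2.23GHz for PS5; the exact percentages shift a little depending on which average clock and rounding you use, which is why they won't land exactly on the figures above.

Code:
# Back-of-the-envelope FP32 TFLOPS: ALUs * 2 (FMA) * clock in GHz / 1000.
# Shader counts below are assumed reference-spec values.
def tflops(alus, clock_ghz):
    return alus * 2 * clock_ghz / 1000.0

ps5 = tflops(36 * 64, 2.23)  # ~10.28 TF at full boost

for name, alus, clock_ghz in [("RTX 2060 Super", 2176, 1.80),
                              ("RTX 2080", 2944, 1.89)]:
    tf = tflops(alus, clock_ghz)
    print(f"{name}: {tf:.2f} TF ({100 * tf / ps5:.1f}% of PS5)")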
 
Good point. For example, according to TechPowerUp the average clock of the 2060 Super is 1800MHz, making it 7.68TF, so ~74.6% of the PS5 GPU vs the 69.8% described in the video. For the 2080 the average is 1890MHz, so 11.17TF, 108.5% vs the 96.6% described. But yeah, whatever ;d

TF comparisons between those architectures (especially Turing vs RDNA) don't make a lot of sense. It's still interesting, but it doesn't say all too much at the same time.
 
TF comparisons between those architectures (especially Turing vs RDNA) don't make a lot of sense. It's still interesting, but it doesn't say all too much at the same time.
Yeah, better not to compare different architectures. The TFLOPS, though, are quite close here.
 
Isn't that dependent on what kind of game/situation one would be in? More GPU grunt is mostly more important when, say, a high-fidelity 'next-gen' game is on display at 30fps. The developers have allegedly witnessed the CPU clocking down, so in that scenario the CPU had to give in to keep the GPU at max capability.

Thinking about it, you're probably right. I've only ever seen mention of "unused" CPU budget being allocated to the GPU boost (not vice versa), but this doesn't mean that the CPU doesn't adjust frequency based on activity.

Yeah, just checked Road to PS5 and AMD SmartShift is shown only sending "spare" power from the CPU to the GPU, but Cerny also talks about variable "frequencies" (plural), so it looks like under some circumstances under load the CPU can drop.

Actually I remember that now from a couple of years back. Doh!
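A toy sketch of that SmartShift-style behaviour, purely to illustrate the direction of power flow shown in Road to PS5. All the wattages and the frequency/power relationship below are made-up assumptions, not real PS5 numbers.

Code:
# Toy SmartShift-style budget split: illustrative values only, not real PS5 figures.
SOC_BUDGET_W = 200.0   # hypothetical total SoC power budget
CPU_CAP_W = 60.0       # hypothetical maximum CPU allocation
GPU_MAX_GHZ = 2.23     # PS5's advertised maximum GPU clock

def gpu_clock(cpu_draw_w, gpu_demand_w):
    # Whatever the CPU doesn't use is handed to the GPU
    # (the "spare power" direction shown in Road to PS5).
    gpu_budget = SOC_BUDGET_W - min(cpu_draw_w, CPU_CAP_W)
    if gpu_demand_w <= gpu_budget:
        return GPU_MAX_GHZ  # plenty of headroom, GPU stays at max boost
    # Crude assumption: power scales roughly with f * V^2 and V tracks f,
    # so back the clock off until demand fits inside the budget.
    return GPU_MAX_GHZ * (gpu_budget / gpu_demand_w) ** (1 / 3)

print(gpu_clock(cpu_draw_w=40, gpu_demand_w=150))  # light CPU load -> 2.23
print(gpu_clock(cpu_draw_w=60, gpu_demand_w=165))  # both maxed -> GPU sheds some clock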
 
I literally laughed my ass off many times, I can tell they had a lot of fun making this.

"It might just be his favourite game ever"

 
Don't be obtuse on purpose. AMD wouldn't implement a powerful AI upscaler right now because it would eat into their processing power when it runs on regular shaders; they will wait until they implement a proper ML engine in their consumer hardware, and then release the AI upscaler.

Intel released their AI upscaler from the get go because they have matrix engines on their consumer GPUs.

Why not? If the frametime of sub-4K + AI upscale <<< the frametime of native 4K, where there is an appreciable increase in framerate with the AI upscale, then wouldn't that equal a win?

What's the point of waiting? If XeSS's DP4a solution is performant and provides quality close to DLSS, I can see devs adopting it across the board, as it's hardware agnostic and is supported on a broader range of hardware. Why use DLSS or AMDLSS when XeSS can be used for consoles, RDNA2, non-RTX Nvidia GPUs, and all the GPUs that have their own proprietary upscale hardware but still support DP4a? One solution is cheaper and more cost effective than providing three or four, each specific to a particular product.

An AMD solution now heads XeSS off at the pass and can nip that opportunity in the bud (in Old West twang)! LOL
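To put hypothetical numbers on the frametime argument above; the milliseconds below are purely illustrative, not measurements from any game or upscaler.

Code:
# Illustrative frametimes in milliseconds (made-up numbers).
native_4k_ms = 25.0   # hypothetical native 4K render time
sub_4k_ms = 13.0      # hypothetical sub-4K render time
upscale_ms = 2.5      # hypothetical cost of the ML upscale pass on shaders

upscaled_total = sub_4k_ms + upscale_ms
print(f"native 4K: {1000 / native_4k_ms:5.1f} fps")
print(f"upscaled:  {1000 / upscaled_total:5.1f} fps")
# As long as sub-4K + upscale stays well under the native frametime,
# the upscaler is a net win even without dedicated matrix hardware.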
 
Remember that Dictator is quoting what actual developers have experienced and shared. Also, go and read the DF topic forum rules; you are breaking multiple of them with that last part, which is totally not needed imo. I do believe in self-moderation, and you are generally a poster with good writing, but going personal towards the only DF member we have on the forum isn't all that good for the discussion. This forum is already quite dire in the number of developers/studios, or people who have contact with them.
DSoup was supporting Dictator in that reply by providing the Cerny reference to variable clocks.

Unless you meant to quote davis.anthony re: insulting Dictator.
 
That's better. We normal folk have heard nothing about this in the 12+ months since PS5's release, so getting some form of actual confirmation that it does happen is nice!

Being cheeky.......did they happen to say what it drops to? Is there a minimum clock it simply will not drop below :mrgreen:
I think what's happening here is a mix-up of technical terms. Not reaching maximum boost all the time is normal behaviour for all GPUs, PS5 included. Considering the type of yields they must support and the universal algorithm they use, it's even more so on PS5.

Throttling as a result of insufficient voltage would only apply to PS5 if the CPU is drawing so much more power that it exceeds the SoC budget.

The former should always be happening on PS5; it does not maintain a permanent 2.23GHz clock on a boost strategy, as the frequency is not fixed. I myself am pointing to this technicality when we refute the discussion around _always being at 2.23GHz_.

If the CPU is drawing a lot more power beyond its allocated budget and pulls from the GPU, I would call that throttling.

If you want to say the PS5 rarely ever throttles, and only in rare circumstances, I would agree with you.
If you want to say that PS5 never leaves 2.23GHz, like fixed clocks, you are absolutely incorrect. The average for most GPUs will often be around 90% of max boost, and we can go look at a large number of performance/watt graphs.

6600XT operating here at 95% of maximum boost if you eliminate outliers.
[clock vs. voltage scatter plot, 6600XT]


6700XT - 95% of max on average if you eliminate outliers.
[clock vs. voltage scatter plot, 6700XT]


Interpretation of these graphs is trivial. For a fixed-clock system like XSX, the graph would look like a straight line. The Series consoles can only scale their GPU core voltage to account for workload: a lighter workload means dropping the voltage down further, i.e. moving to the left on the graph. As workloads increase, the Series consoles will move to the right, increasing core voltage, since they cannot adjust frequency.

Deciding which frequency to go with has to do with the yield of chips per wafer that can handle the voltage amounts. If they decide the cutoff for core voltage is 0.8V, then they have to find a fixed frequency that at maximum workload will not exceed 0.8V. Deciding what a maximum workload is, unfortunately, is difficult - which is the problem Sony decided to solve by going variable.

PS5's variable clocks enable it to move left and right on the graph as well as up and down. The caveat, of course, is that more clock speed results in more GPU core voltage to accommodate all the additional signalling happening faster. But like MS, they must choose a cutoff point to obtain reasonable yields to ship a mainstream product. If they choose a hypothetical 0.8V, the chip will stop increasing core voltage at 0.8V and can only vary frequency from that point. If the workload keeps increasing, it will have to bring the boost down to keep the voltage within 0.8V.
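A toy model of that trade-off, carrying over the hypothetical 0.8V cap from the paragraph above; the workload-to-voltage relationship is a crude made-up assumption, not measured PS5 behaviour.

Code:
# Toy boost model: voltage rises with workload until it hits the cap,
# then the clock has to come down instead. Numbers are illustrative only.
V_CAP = 0.80     # hypothetical core-voltage cutoff chosen for yield
F_MAX = 2.23     # GHz, maximum boost clock
V_LIGHT = 0.70   # hypothetical voltage needed at F_MAX for a light workload

def clock_for_workload(workload):
    """workload: 0.0 (idle) .. 1.0 (pathological worst case)."""
    # Heavier workloads need more voltage at a given clock (crude linear model).
    v_needed = V_LIGHT + 0.25 * workload
    if v_needed <= V_CAP:
        return F_MAX  # max boost, voltage still has headroom
    # Past the cap, drop frequency to stay within V_CAP.
    return F_MAX * (V_CAP / v_needed)

for w in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"workload {w:.1f}: {clock_for_workload(w):.2f} GHz")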

I stress again, this is normal behaviour, and the PS5 clocking algorithm is universal to ensure all PS5s perform the same; it must accommodate the lowest common denominator in core voltage yield.
It is highly improbable, perhaps impossible, that a boost-clocked PS5 has a voltage/frequency curve resembling a straight line like XSX's. It will likely look something like the above.

Even the 6700XT drops below 2000MHz at 1.2V; there is a small dot just between 1900 and 2000. The limit on how far the clock can drop is variable; only the maximum core voltage is fixed. If the workload keeps pushing the system beyond the core voltage limit, it will just keep downclocking.

None of this is dependent on the CPU.

CPU throttling would imply it's reducing the core voltage limit from 0.8V to 0.75V or 0.7V in order to feed the CPU - and that would directly impact the clocks as well.
 
The difference here over the other YouTuber that harps on vsync being a limiting factor, though, is that Alex recognized the default behaviour on the PC was flawed and took a small step to get it in line with the console implementation.

Exactly. What we've seen in the past from other YouTubers are comparisons between the PC and PS5 with the PS5 locked at 60fps, and thus not experiencing this issue, while the slightly less powerful PC is hitting it on account of occasionally missing the 16ms window. That's fine until said YouTubers start to draw percentage performance comparisons from those results, essentially ignoring the vsync performance deficit.

Worse, they go on to conclude that because the PC loses performance from having vsync on in these scenarios, the PS5, which is also running with vsync but completely locked at 60fps, is losing a similar amount of performance.
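A small sketch of why just missing the ~16.7ms window is so punishing under strict double-buffered vsync on a 60Hz display; triple buffering and VRR soften this, so treat it as the worst case.

Code:
import math

REFRESH_MS = 1000.0 / 60.0  # ~16.67ms per refresh at 60Hz

def vsynced_fps(frametime_ms):
    # With strict double-buffered vsync, a frame can only be presented on a
    # refresh boundary, so a 17ms frame has to wait for the second refresh.
    refreshes = math.ceil(frametime_ms / REFRESH_MS)
    return 1000.0 / (refreshes * REFRESH_MS)

for ft in (15.0, 16.0, 17.0, 20.0):
    print(f"{ft:.0f}ms frame -> {vsynced_fps(ft):.0f} fps on screen")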
 
DSoup was supporting Dictator in that reply by providing the Cerny reference to variable clocks.

Unless you meant to quote davis.anthony re: insulting Dictator.

I was talking about the 'die on a hill' comment, which I took as being directed towards Dictator?

Why not? If the frametime of sub-4K + AI upscale <<< the frametime of native 4K, where there is an appreciable increase in framerate with the AI upscale, then wouldn't that equal a win?

What's the point of waiting? If XeSS's DP4a solution is performant and provides quality close to DLSS, I can see devs adopting it across the board, as it's hardware agnostic and is supported on a broader range of hardware. Why use DLSS or AMDLSS when XeSS can be used for consoles, RDNA2, non-RTX Nvidia GPUs, and all the GPUs that have their own proprietary upscale hardware but still support DP4a? One solution is cheaper and more cost effective than providing three or four, each specific to a particular product.

An AMD solution now heads XeSS off at the pass and can nip that opportunity in the bud (in Old West twang)! LOL

Running ML/AI on separate hardware cores means less impact on the GPU shaders themselves, like with ray tracing.
 
I was talking about the 'die on a hill' comment,

"Die on that hill" / "The hill you want to die on" is an English idiom and nothing more.

https://grammarist.com/idiom/the-hill-you-want-to-die-on/

The hill you want to die on stems from 20th-century American literary works related to military origins. It is often used in a questioning form to ask if an opinion or action is truly worth the effort.

It also can be used to strengthen an argument further; that something is important enough to die upon that hill. In this case, the hill is meant to represent a struggle worth fighting for.​
 