They're using one CPU in that graph though - an i7-10700K. That's a 5 GHz, 8-core/16-thread CPU, one of the most performant gaming CPUs out at the moment. The top-end CPU HU used was a 5600X, a 6-core/12-thread CPU.
A driver potentially requiring more CPU overhead doesn't mean the actual % impact on the final framerate will stay the same as you significantly increase CPU power, and especially cores/threads. It's quite possible that whatever increased load some DX12 games place on the CPU when paired with Nvidia GPUs could be spread across threads/cores that aren't particularly saturated by the game itself. HU's focus was on older CPUs with fewer cores; there's little argument that a top-end CPU will make any supposed increased demand from the Nvidia driver far less noticeable, even at lower resolutions.
Yes, more tests are necessary before HU's click-baity video title actually reflects a consensus (and one they seemed more careful to parse in the actual video), so we're not in disagreement here. I'd definitely like to see Digital Foundry and Gamers Nexus tackle this as well - lord knows Steve from GN is not averse to throwing screens of 100 bar graphs into his videos.