Digital Foundry Article Technical Discussion [2021]

First comment from the Jaguar Cyberpunk DF video: "I bet he is the one who bought Cyberpunk's code." lmao ;d

Edit: funny to see the 6800 XT dropping more frames with a CPU similar to the XOX's, though it was observed that the GPU doesn't seem to get better performance/optimisation on console vs its similar desktop counterpart. We saw on PC that a stronger CPU is needed, so there is some console optimisation at work here.
 
First comment from the Jaguar Cyberpunk DF video: "I bet he is the one who bought Cyberpunk's code." lmao ;d

Edit: funny to see the 6800 XT dropping more frames with a CPU similar to the XOX's, though it was observed that the GPU doesn't seem to get better performance/optimisation on console vs its similar desktop counterpart. We saw on PC that a stronger CPU is needed, so there is some console optimisation at work here.
I think the bigger problem, as Rich mentioned in the video, is that the memory is unfortunately underclocked on the motherboard with this chip. It cannot get up to proper speed, which is definitely affecting CPU-limited performance.
 
I think the bigger problem, as Rich mentioned in the video, is that the memory is unfortunately underclocked on the motherboard with this chip. It cannot get up to proper speed, which is definitely affecting CPU-limited performance.
I really don't think the memory speed is important here. Even newer systems with DDR4 have less memory bandwidth available. The board has a quad-channel interface, which provides more than enough bandwidth together with the RX 6800. If the onboard GPU were being used, then I would also think that the memory speed could be a problem.
 
I really don't think the memory speed is important here. Even newer systems with DDR4 have less memory bandwidth available. The board has a quad-channel interface, which provides more than enough bandwidth together with the RX 6800. If the onboard GPU were being used, then I would also think that the memory speed could be a problem.

Kabini (Jaguar-based), which clocked between 1.5GHz and 2GHz on 4 cores, supported single-channel DDR3 at 1600MHz for 12.8GB/s of bandwidth. The Xbox One X CPU, as well as the one tested by DF, has double that number of cores and clocks at 2.3GHz. But Richard said he only managed to get 14GB/s on his test unit, so it's not unreasonable to assume it might be a little bandwidth starved.

And we do have to consider the crazy low PCIe bandwidth as well, which is only 1/64th of what the 6800 XT can consume.
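
Those theoretical figures are straightforward multiplication, so here's a quick sketch (Python) reproducing them. The 64-bit channel width is the standard DDR figure and the PCIe per-lane rates are nominal; treating the board's slot as Gen2 x1 is an assumption that makes the 1/64th ratio work out.

Code:
# Back-of-the-envelope bandwidth maths (Python). Assumes standard 64-bit
# (8-byte) DDR channels; the Gen2 x1 slot is an assumption about this board.

def ddr_bandwidth_gbs(mt_per_s, channels=1, bus_bytes=8):
    # effective transfers/s x bytes per transfer x number of channels
    return mt_per_s * bus_bytes * channels / 1000

print(ddr_bandwidth_gbs(1600))      # DDR3-1600, single channel -> 12.8 GB/s (Kabini)

# Nominal PCIe bandwidth per lane in GB/s, by generation
pcie_lane_gbs = {2: 0.5, 3: 1.0, 4: 2.0}
x1_gen2 = 1 * pcie_lane_gbs[2]      # 0.5 GB/s for the board's x1 slot
x16_gen4 = 16 * pcie_lane_gbs[4]    # 32 GB/s for a PCIe 4.0 x16 card
print(x16_gen4 / x1_gen2)           # 64.0 -> the 1/64th figure above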
 
Very interesting video. I wonder where they got the 28nm chips from?

Probably repurposed and/or unsold XBO chips. If I'm not mistaken, unsold XBO hardware was repurposed for xCloud blades. So, more than likely, AMD/Microsoft decided to sell those unused chips in mid-tier markets.
 
Kabini (Jaguar-based), which clocked between 1.5GHz and 2GHz on 4 cores, supported single-channel DDR3 at 1600MHz for 12.8GB/s of bandwidth. The Xbox One X CPU, as well as the one tested by DF, has double that number of cores and clocks at 2.3GHz. But Richard said he only managed to get 14GB/s on his test unit, so it's not unreasonable to assume it might be a little bandwidth starved.

And we do have to consider the crazy low PCIe bandwidth as well, which is only 1/64th of what the 6800 XT can consume.
I think you forgot that this board & chip have a quad-channel interface. Even if you have "only" 12.8 GB/s per channel, you still have 51.2 GB/s altogether. That would be the equivalent of DDR4-3200 dual-channel bandwidth. So I really don't think the memory is limiting here. Even 8 Jaguar cores should have enough bandwidth, given that even a 12-core Ryzen does not scale much with more memory bandwidth (yes, it still scales, but it is not a night-and-day difference). And the Jaguar cores are in a whole other speed hemisphere.
The Xbox One has somewhat faster memory because it must also feed the GPU on the chip from that bandwidth. In this test that is not the case, as the 6800 takes over the GPU role.
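
That equivalence is easy to sanity-check; a minimal sketch (Python), again assuming standard 64-bit channels:

Code:
# Quad-channel DDR3-1600 vs dual-channel DDR4-3200 (64-bit channels assumed)
def ddr_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    return mt_per_s * bus_bytes * channels / 1000

print(ddr_bandwidth_gbs(1600, 4))   # 51.2 GB/s from four DDR3-1600 channels
print(ddr_bandwidth_gbs(3200, 2))   # 51.2 GB/s from two DDR4-3200 channels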

So what is limiting here is the CPU itself and, of course, the PCIe bus.
Btw, this can be tested quite easily: get a board for a Core i5 or i7 that uses the same PCIe standard (even old Ryzen boards would be on too new a standard), then use just an x1 port to connect the card. Then we would see whether the PCIe bandwidth is the limit.
My guess would be fewer hiccups (because of the much better CPU), but some still occurring in the same spots.

Btw, are those PCIe lanes from the board or from the CPU? I guess the Xbox One chip didn't really need PCIe lanes. This might also be a problem.
 
I think you forgot that this board & chip have a quad-channel interface. Even if you have "only" 12.8 GB/s per channel, you still have 51.2 GB/s altogether. That would be the equivalent of DDR4-3200 dual-channel bandwidth. So I really don't think the memory is limiting here. Even 8 Jaguar cores should have enough bandwidth, given that even a 12-core Ryzen does not scale much with more memory bandwidth (yes, it still scales, but it is not a night-and-day difference). And the Jaguar cores are in a whole other speed hemisphere.

But Richard specifically says he's only getting 14GB/s (presumably measured) out of the board, and thus speculates the board's quad-channel memory interface isn't working/activated. That makes sense considering it's usually only very high-end / server-class boards that support quad-channel memory, and this board is basically the cheapest of the cheap, with major XBO features like the ESRAM and support for 2133MHz memory already confirmed as disabled.

In fact, we know it's only achieving 1333MHz, which gives up to 10.66GB/s per channel. So it's presumably working in dual-channel mode for a theoretical 21.3GB/s, but with only 14GB/s measured. It'd be interesting to understand how that was measured, and then to compare how more modern CPUs fare in the same benchmark vs their theoretical potential.
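
On the question of how it was measured: sustained bandwidth is usually probed with a STREAM-style copy over a buffer far larger than the caches. A minimal illustration of the idea (Python + numpy; not the tool Richard used, just a sketch):

Code:
# Minimal STREAM-style copy-bandwidth probe; illustrative only.
import time
import numpy as np

N = 256 * 1024 * 1024 // 8        # 256 MiB of float64, far beyond any cache
src = np.random.rand(N)
dst = np.empty_like(src)

reps = 10
t0 = time.perf_counter()
for _ in range(reps):
    np.copyto(dst, src)           # streams the whole buffer through memory
dt = time.perf_counter() - t0

bytes_moved = 2 * src.nbytes * reps   # each pass reads and writes 256 MiB
print(f"~{bytes_moved / dt / 1e9:.1f} GB/s sustained copy bandwidth")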
 
Without the cache, it is hardly the same silicon?

I don't think that is likely to be relevant when using a dedicated GPU. I would be surprised if the lower memory bandwidth were having much effect, given all the memory scaling benchmarks over the years combined with the very low throughput of this CPU.
 
But Richard specifically says he's only getting 14GB/s (presumably measured) out of the board, and thus speculates the board's quad-channel memory interface isn't working/activated. That makes sense considering it's usually only very high-end / server-class boards that support quad-channel memory, and this board is basically the cheapest of the cheap, with major XBO features like the ESRAM and support for 2133MHz memory already confirmed as disabled.

In fact, we know it's only achieving 1333MHz, which gives up to 10.66GB/s per channel. So it's presumably working in dual-channel mode for a theoretical 21.3GB/s, but with only 14GB/s measured. It'd be interesting to understand how that was measured, and then to compare how more modern CPUs fare in the same benchmark vs their theoretical potential.

Yeah, a cheap-ass board won't want to be running traces and power for a 256-bit bus. I'd be surprised if this was, in practice, anything more than a dual-channel setup.
 
Probably repurposed and/or unsold XBO chips. If I'm not mistaken, unsold XBO hardware was repurposed for xCloud blades. So, more than likely, AMD/Microsoft decided to sell those unused chips in mid-tier markets.

Additionally, this is China, so there are likely boards being sold using "recycled" chips, i.e. chips that have been desoldered from the mainboard of a recycled XBO.

Regards,
SB
 
One of the most beautifully animated fighting games to date, Guilty Gear Strive's beta proved a huge success in showing the next evolution of Arc System Works' cel-shading. It uses Unreal Engine of course, but not as we usually see it - and builds on the tech used in Dragon Ball FighterZ to achieve stunning results at 4K on PS5. The PS4 and PS4 Pro systems run at significantly lower resolutions, however, to maintain solid 60fps performance - as Tom and Alex discuss.
 
It's a question we've been mulling over for some time. Does Nvidia DLSS achieve results comparable to, or even better than, native resolution rendering because of the impact of TAA on today's games? What would happen if DLSS were added to a title with minimal anti-aliasing? Nioh 2 on PC recently received a DLSS upgrade, so we can put that to the test with a deep dive into image quality and performance boosts.
 
DF Article for NIOH 2 @ https://www.eurogamer.net/articles/...-2-dlss-quality-vs-native-rendering-challenge

Nvidia DLSS in Nioh 2: the most demanding challenge yet for AI upscaling?
Deep-learning super-sampling vs native resolution rendering.

Nvidia's DLSS has gradually evolved into one of the most exciting technological innovations in the PC space. The idea is remarkably straightforward: the GPU renders at a lower native resolution, then an AI algorithm takes that frame and intelligently upscales it to a much higher pixel count. There's an instant performance win but, remarkably, also a quality advantage up against native resolution rendering. In the past, we've wondered whether this quality win comes down to mitigating the artefacts of temporal anti-aliasing - TAA - but the recent arrival of a DLSS upgrade for Nioh 2 provides us with an interesting test case. Nioh 2's basic rendering lacks any real form of anti-aliasing; it's pretty much as raw as raw can be. So the question is: can DLSS retain its performance advantage and still provide an actual increase in image quality up against native resolution rendering? Remarkably, the answer is yes.

DLSS was - and essentially still is - a replacement for TAA. Temporal anti-aliasing effectively uses information from prior frames and integrates it into the current one, typically using motion vectors to map where pixels from prior frames would sit in the frame being rendered. In best-case scenarios, it effectively improves image quality, and it is certainly the AA method of choice in modern gaming. But it can have its negative points: ghosting and added blur foremost amongst them. DLSS does have commonalities with TAA, which is why it is generally considered a replacement - it too requires motion vector data to reconstruct its image. DLSS performance mode reconstructs from just 25 per cent of native resolution - so a 4K DLSS image is built from a 1080p native frame. Meanwhile, at the other end of the scale is DLSS quality, which in this example would be generated from a 1440p frame. Balanced is the other major mode, sitting somewhere between the two.
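
To make those fractions concrete, here's a small sketch (Python) of the internal render resolutions per mode. The quality (2/3 per axis) and performance (1/2 per axis, 25 per cent of pixels) factors follow from the figures above; the balanced factor is an assumption based on commonly reported values.

Code:
# DLSS internal-resolution sketch. The 0.58 balanced factor is assumed.
MODES = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.5}

def internal_res(out_w, out_h, mode):
    scale = MODES[mode]                      # per-axis scale factor
    return round(out_w * scale), round(out_h * scale)

for mode in MODES:
    print(mode, internal_res(3840, 2160, mode))
# quality     -> (2560, 1440): the 1440p input for 4K output
# performance -> (1920, 1080): a 1080p frame, 25% of native pixel count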

...
 
DF Article @ https://www.eurogamer.net/amp/digitalfoundry-2021-need-for-speed-remastered-ps5-xbox-series-x

Need for Speed Hot Pursuit Remastered: 4K 60fps tested on PS5 and Series X
The upgrade isn't all it should be - and there's no boost for Series S.

With the arrival of the new wave of consoles, we didn't have the time to fully check out Need for Speed Hot Pursuit Remastered when it launched, but the Criterion masterpiece is especially deserving of our focus now that support has been added for the next-gen consoles, opening the door to a 4K experience running at 60 frames per second. More than that, in the wake of the news that Criterion's new Need for Speed title has been delayed, it's also an opportunity to reflect on an astonishing run of iconic racing games from the Guildford-based developer.

It's something I was discussing with John Linneman recently: what exactly is peak Criterion? Some might say it's Need for Speed Hot Pursuit - a game that radically rebooted the franchise, bringing over the best of Burnout but respecting the core DNA of what made the original NFS titles so great. And then there was Autolog, of course, a remarkably successful attempt to meld social networking into a video game. But despite some remarkable coding resulting in input latency that matched or even beat some 60fps games, Hot Pursuit was a 30fps title in an era where 60fps was Criterion's hallmark. So maybe it's actually Burnout Paradise that's peak Criterion? But what about the incredible Burnout 3: Takedown? Or maybe the purist's favourite, Burnout 2: Point of Impact?

...

 
Another title using BC modes. Hot Pursuit uses the uncapped-framerate setup from the last-gen versions. What that means is the Series S is running the One S version, the Series X is running the One X version, and the PS5 is running the PS4 Pro version.
 