Digital Foundry Article Technical Discussion [2021]

Is the PS5 resolution right? NXGamer and ElAnalistaDeBits have it at 1800p average.

NXGamer responded on NeoGAF, citing the short timeframe for the coverage and pixel counts taken at the start of the game. Further down he claims dynamic is dynamic for a reason.

https://www.neogaf.com/threads/digital-foundry’s-kena-bridge-of-spirits-tech-review.1619397/page-3#post-264687319

Although Alex produced a picture from the first 5 minutes of the game, which anyone can check, that was nowhere close to the resolution NXGamer claims.

Perhaps IGN demanded a video right at the launch of the game, so he could only look at it in a surface-level manner.
 

It's not only NXGamer; Digital Foundry are the best. They prefer to take their time and give the best analysis possible. VGTech is good with the numbers most of the time, at least, but there is no analysis at all.

I think other channels try to release their videos as fast as possible, or at least before DF.
 
Weird, I know. Almost like IO isn't always the limiting factor for loading time.

Like an HDD, they probably have a seek-time problem and need redundancy of data. ;) The problem devs have with HDDs was the same on optical disc, and it began to be a huge problem with the PS3. And the HDD did not solve it on the PS4.

Cerny did not use a PS3 or PS4 game to illustrate that storage was too slow and a constraint for devs, but a PS2 game, Jak II.

[Image: Jak II level map]
 
That map from Jak II only proves that the PS2 was loading whole chunks of the level in the few seconds while you run through the loading hallways. Probably about 6 seconds, I would imagine.
 
No, again: he said the hallways were created so the game could be designed around the slow-storage constraint, but maybe you did not see the video. Streaming has been a constraint since the end of cartridges. This is the first time that it is not a constraint for a PlayStation console. He said that with slow HDD storage only 1/3 of the time is spent actually loading data and 2/3 of the time is lost seeking data, if I remember well. And HDD seek times are better than DVD or BR seek times.


The PS2 had 32 MB of main RAM and 4 MB of VRAM. DVDs were 4.7 GB, which means the data streaming that had existed since the PS1 was needed. Funny thing: when Andy Gavin and ND began to use streaming on the PS1, some teams thought the first-party studio had a more advanced SDK with more memory available. :mrgreen:

EDIT: A good source on streaming from the DVD, from someone who created a PS2 emulator, with a real-world seek time of around 100 ms...

https://www.patreon.com/posts/technical-ps2s-48244607
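
For a sense of what that seek/transfer split means in practice, here's a tiny back-of-the-envelope sketch; the 100 ms seek comes from the Patreon post above, but the sustained read rate and chunk size are my own illustrative assumptions:

```cpp
// Back-of-the-envelope: share of drive time lost to seeking when streaming
// scattered chunks. Seek time per the Patreon post; read rate and chunk
// size are illustrative assumptions, not measured PS2 figures.
#include <cstdio>

int main() {
    const double seek_ms   = 100.0;  // real-world PS2 DVD seek (Patreon post)
    const double read_MB_s = 2.0;    // assumed sustained read rate
    const double chunk_KB  = 128.0;  // assumed size of one streamed chunk

    const double transfer_ms = chunk_KB / 1024.0 / read_MB_s * 1000.0;  // 62.5 ms
    std::printf("%.0f%% of drive time spent seeking\n",
                100.0 * seek_ms / (seek_ms + transfer_ms));  // ~62%
    // Smaller chunks or more scattered data push this toward the 2/3
    // figure quoted above for slow storage.
}
```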
 
Yes, that's the idea. The SSD used as a memory extension is an evolution of Naughty Dog's discovery (or even hack, if you want to call it that) back in the PS1 days of streaming data from the CD. It freed them from the limitation of having to load a game's whole level into the limited RAM. With CD streaming they could load in and out what needed to be visible instead.

People were right to assume the visuals were more detailed because the game was in a corridor. But that's 1/10th of the whole story. The predictable movement of the game's camera helped them organise what information would move in and out of memory via streaming from the CD. The result is outstanding visual detail that no PS1 game came close to and, I dare say, even the N64 couldn't compete with either. The only game I find similarly impressive in terms of pushing detail is MGS1.
But the variety found in the Crash Bandicoot games was just nuts and beyond anything else.
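
A minimal sketch of that streaming window idea, assuming a fixed lookahead along a linear chunk sequence; the names and sizes are mine for illustration, not ND's actual code:

```cpp
// Sketch of corridor-style streaming: with a predictable camera path the
// level is cut into sequential chunks, and only a small window around the
// player ever lives in RAM. Names and sizes are illustrative.
#include <deque>
#include <cstdio>

struct Chunk { int id; /* geometry, textures, ... */ };

class StreamingWindow {
    std::deque<Chunk> resident_;      // chunks currently in RAM
    static constexpr int kAhead = 2;  // chunks to prefetch ahead of the player
public:
    void update(int playerChunk) {
        // Evict chunks the player has already moved past.
        while (!resident_.empty() && resident_.front().id < playerChunk)
            resident_.pop_front();
        // Stream in chunks up to the lookahead distance.
        int next = resident_.empty() ? playerChunk : resident_.back().id + 1;
        for (; next <= playerChunk + kAhead; ++next) {
            std::printf("streaming in chunk %d\n", next);  // stand-in for a disc read
            resident_.push_back({next});
        }
    }
};
```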
 
Weird, I know. Almost like IO isn't always the limiting factor for loading time.
I know, I never said it was?

That map from Jak II only proves that the PS2 was loading whole chunks of the level in the few seconds while you run through the loading hallways. Probably about 6 seconds, I would imagine.
Which is why the faster loading is good, right? Devs don't have to design a game around loading-speed limitations.

Of course there will always be some other limitations, but the big jump this gen is a massive barrier removed.
 
This is despite the engine being tailored to PS4/XBO with their 8 CPU threads. This means the engine needs to use more threads, 8 and above.
Sometimes yes, sometimes no. When you're running a game and streaming huge amounts of data in, you need to be mindful about how the L2/L3 cache is being used. You don't want to throw heavy I/O threads willy-nilly across cores spanning multiple clusters, because what you may end up doing is contaminating the cache of clusters where efficient cache usage might be critical to hitting 16 ms frame times.
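
As an illustration, on a Linux-style platform you could keep the streaming thread on one cluster by setting its CPU affinity; the specific core IDs and the glibc call are assumptions about the platform, not console SDK code:

```cpp
// Keep a heavy I/O thread on one CPU cluster so its streaming traffic does
// not evict the L3 working set of the cores running frame-critical work.
// Linux/glibc-specific; which cores form a "cluster" is an assumption here.
#include <atomic>
#include <pthread.h>
#include <sched.h>
#include <thread>

static std::atomic<bool> go{false};

void pin_to_cores(std::thread& t, int first, int count) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int c = first; c < first + count; ++c)
        CPU_SET(c, &set);
    pthread_setaffinity_np(t.native_handle(), sizeof(set), &set);
}

int main() {
    std::thread io([] {
        while (!go.load()) { /* wait until the affinity is set */ }
        // ... read and decompress streamed data here ...
    });
    pin_to_cores(io, 4, 4);  // e.g. the second 4-core cluster: cores 4-7
    go.store(true);
    io.join();
}
```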
 
And because they use an API, devs don't know what is done in the background. He used data from the profiler, and they are CPU-bound setting up entities, at least for now.


This is an interesting point to consider that I missed before. R&C Rift Apart goes CPU-bound at a 5 GB/s effective (i.e. post-decompression) transfer rate, while the I/O system should be able to go up to 4x faster than that.
R&C doesn't strike me as the most CPU-intensive game out there, but I could be wrong.

Was Insomniac's devkit not using the decompressor at its full capacity, or could R&C still be using the CPU for decompression? It looks like the dev in question can't answer this, because apparently the API handles all the data transfers without transparency. To get a 5 GB/s effective transfer the CPU would need at least 3 Zen 2 cores to reach that throughput, and that sounds like a bit too much.
Is the entity initialization that he mentions a single-threaded process that could eventually turn multi-threaded with future engine iterations?

Or could we be looking at a major bottleneck that Sony didn't see coming, which is the CPU not being able to keep up with the I/O above 5 GB/s?
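
A rough sanity check on that core-count guess; the per-core rate below is my assumption for a fast LZ-family codec on a Zen 2 core, not a measured PS5 number:

```cpp
// Rough estimate of CPU cores needed to produce a target decompressed
// output rate. The per-core figure is an assumption for a fast LZ-family
// codec on Zen 2, not a measured PS5 number.
#include <cmath>
#include <cstdio>

int main() {
    const double target_GB_s   = 5.0;  // effective (post-decompression) rate
    const double per_core_GB_s = 1.7;  // assumed decompression output per core
    std::printf("cores needed: %.0f\n",
                std::ceil(target_GB_s / per_core_GB_s));  // -> 3
}
```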



I think what we do is different than what you may desire.
(...)
Beyond that, I do not want my coverage of PC stuff to just be covering tons of GPUs and graphs and whatever. PC outlets have been doing that for 30 years almost and other outlets do that well... we do something different.

I did not ask for, nor do I expect, Digital Foundry to present GPU comparison tests. If I wanted that I'd go to TechPowerUp or watch a Gamers Nexus video.

This is what I wrote:

On the other hand, it's a bummer that only RTX cards are mentioned in the PC performance paragraphs.


My comment on the piece at hand referred to the fact that only GeForce RTX GPUs are mentioned, while the game runs perfectly fine with maximum settings on many GeForce GTX and Radeon graphics cards, and it doesn't make use of any RTX-specific feature. The game plays with Ultra settings at 1080p30 or more on a GTX 1070, 1080, 1060, 1660, RX 580/480, RX 5600/5700 and others.

I wish there was a little more care about this subject, because at the moment the cheapest GPU one can get for less than $500 is the GTX 1660, in the US or in the EU.
Most PC gamers who want to play Kena probably don't have an RTX 2060 or better, which is the card you mention as the minimum common denominator, and which also goes for over $700 in the US at the moment.

You obviously don't have to care about the majority of people who don't own an RTX graphics card, nor have the means to buy one at the moment because of the crypto craze. This is just my personal feedback.
 
This is an interesting point to consider that I missed before. R&C Rift Apart goes CPU-bound at a 5 GB/s effective (i.e. post-decompression) transfer rate, while the I/O system should be able to go up to 4x faster than that.
R&C doesn't strike me as the most CPU-intensive game out there, but I could be wrong.

Was Insomniac's devkit not using the decompressor at its full capacity, or could R&C still be using the CPU for decompression? It looks like the dev in question can't answer this, because apparently the API handles all the data transfers without transparency. To get a 5 GB/s effective transfer the CPU would need at least 3 Zen 2 cores to reach that throughput, and that sounds like a bit too much.
Is the entity initialization that he mentions a single-threaded process that could eventually turn multi-threaded with future engine iterations?

Or could we be looking at a major bottleneck that Sony didn't see coming, which is the CPU not being able to keep up with the I/O above 5 GB/s?

Elan Ruskin seemed to be specifically talking about initialisation - so probably things like setting the environment up, setting up characters and animation, and setting initial variables for instances of things. This can be very heavy on the CPU, quite apart from decompression.

How much load this takes and how much it bottlenecks you will be heavily dependent on the engine, the game, and perhaps even the level.

So I wouldn't say it's something Sony overlooked; it's more that it's just one of those things that can happen. You're always going to be bottlenecked somewhere. You remove a bottleneck somewhere, you hit one somewhere else, so you get working on (or working around) that... if you have time.
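
If that initialisation really is single-threaded today, the classic fix once the data is in memory is to fan the per-entity setup out over a worker pool. A minimal sketch, with the entity details as placeholders rather than anything from Insomniac's engine:

```cpp
// Sketch: fan per-entity initialisation out across hardware threads once
// the streamed data is in memory. Entity/init details are placeholders.
#include <atomic>
#include <thread>
#include <vector>

struct Entity { /* transforms, components, ... */ };
void initialise(Entity&) { /* set up animation, initial variables, ... */ }

void parallel_init(std::vector<Entity>& entities) {
    std::atomic<size_t> next{0};
    auto worker = [&] {
        // Each thread grabs the next unclaimed entity index until none remain.
        for (size_t i; (i = next.fetch_add(1)) < entities.size(); )
            initialise(entities[i]);
    };
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < std::thread::hardware_concurrency(); ++t)
        pool.emplace_back(worker);
    for (auto& th : pool) th.join();
}
```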
 

Insomniac said the next game will go further with the SSD, and it is always a matter of optimization on the CPU side. Insomniac said after Spider-Man that, when they have time, they will completely overhaul the CPU side of the engine. They release games at a high pace, but now they have time before Spider-Man 2's release; 2022 is the first year since 2017 in which they will not release a game. Insomniac has released 23 games since 2011, across PS3, PS4, PC, Xbox One, smartphones, VR, and Magic Leap AR. Not all AAA, but they are a release machine.


[Image: Insomniac developer comment on main/render thread usage]


And they said the work split between the main and render threads is inefficient. They know it, but they will solve it when they have time. They need to do what ND did in 2014. Will they hit a bottleneck? For sure; CPU and GPU power are limited. But currently Insomniac's CPU-side engine code is far from fully optimised. It can go faster on any CPU, even a Jaguar one. And storage size is another limitation.
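
For context on "what ND did in 2014": that's the fiber-based job system from their GDC 2015 talk, where frame work is cut into small jobs that any core can pick up, instead of living on dedicated main and render threads. A much-simplified sketch of that shape (no fibers, just a locked queue, and all names are mine):

```cpp
// Much-simplified job system in the spirit of ND's GDC 2015 fiber talk:
// frame work is cut into small jobs that any core can execute, rather than
// a big main thread plus a render thread. No fibers here; just a locked queue.
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobQueue {
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
public:
    void push(std::function<void()> j) {
        std::lock_guard<std::mutex> lock(m_);
        jobs_.push(std::move(j));
    }
    bool pop(std::function<void()>& j) {
        std::lock_guard<std::mutex> lock(m_);
        if (jobs_.empty()) return false;
        j = std::move(jobs_.front());
        jobs_.pop();
        return true;
    }
};

void run_frame(JobQueue& q, unsigned workers) {
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t)
        pool.emplace_back([&] {
            std::function<void()> job;
            while (q.pop(job)) job();  // drain this frame's jobs
        });
    for (auto& th : pool) th.join();
}
```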

And it doesn't mean that every engine will push the SSD. I don't expect to see it outside of first-party studios.

EDIT: And Insomniac's CPU-side engine code is far from the only one where performance can improve a lot with a full overhaul. This is the reason Unity is developing an ECS design-pattern branch of the engine; CPU performance will jump above Unreal Engine when it is ready. @Lurkmass

https://en.wikipedia.org/wiki/Entity_component_system

In October 2018[5] the company Unity released its famous megacity demo that utilized a tech stack built on an ECS. It had 100,000 audio sources, one for every car, every neon sign, and more – creating a huge, complex soundscape.
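
The main ECS win is memory layout: components of one type sit contiguously, so a system becomes a tight linear loop over cache-friendly data instead of pointer-chasing through fat objects. A toy sketch of the pattern (my own minimal version, not Unity's actual DOTS API):

```cpp
// Toy ECS: components of one type are stored contiguously (structure of
// arrays), so a system is a tight linear loop over cache-friendly data.
// This mirrors the idea behind Unity's ECS, not its actual API.
#include <cstddef>
#include <vector>

struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

struct World {
    // Parallel arrays: entity i owns position[i] and velocity[i].
    std::vector<Position> position;
    std::vector<Velocity> velocity;
};

// A "system": one pass over contiguous component data.
void move_system(World& w, float dt) {
    for (std::size_t i = 0; i < w.position.size(); ++i) {
        w.position[i].x += w.velocity[i].x * dt;
        w.position[i].y += w.velocity[i].y * dt;
        w.position[i].z += w.velocity[i].z * dt;
    }
}
```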

 