Digital Foundry Article Technical Discussion [2023]

No, this is not an example. You are talking about Doom Eternal's game design.

You brought up Doom Eternal, not me.

You act like it's doing something special, but the game is dated and does very little in terms of AI, physics and world simulation.

Heck, MGS2 on PS2 has a more physically alive world than Doom Eternal.
 
You are confusing game design and technology. If you talk to developers, you will see it is possible to multithread a huge chunk of an engine. Gameplay code can't be fully multithreaded, and there is a point where the engine needs to synchronise all its parts.

I gave examples of different engine parts that are all multithreaded: audio, physics and AI (pathfinding).

AI and audio are more difficult to multithread than physics, but it is not impossible.

EDIT:

And, funnily enough, most of the videos date from 2012 to 2018. This is not a new topic for game engines.
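
To make that fan-out/sync-point idea concrete, here is a minimal C++ sketch (not taken from any shipping engine; the subsystem functions are placeholders) of a frame that runs physics, audio and pathfinding on worker threads and joins them at a single synchronisation point before the mostly serial gameplay code consumes the results:

```cpp
// Minimal sketch of the pattern described above: independent subsystems fan out
// to worker threads during the frame and the engine joins them at one
// synchronisation point. The subsystem functions are placeholders.
#include <future>
#include <iostream>

int stepPhysics()       { return 42; }   // e.g. number of resolved contacts
int mixAudio()          { return 128; }  // e.g. number of mixed voices
int updatePathfinding() { return 7; }    // e.g. number of paths recomputed

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        // Fan out: these subsystems have no dependency on each other within the frame.
        auto physics = std::async(std::launch::async, stepPhysics);
        auto audio   = std::async(std::launch::async, mixAudio);
        auto ai      = std::async(std::launch::async, updatePathfinding);

        // Synchronisation point: the (largely serial) gameplay code needs all
        // three results, so this part of the frame cannot be spread across cores.
        int contacts = physics.get();
        int voices   = audio.get();
        int paths    = ai.get();

        std::cout << "frame " << frame << ": " << contacts << " contacts, "
                  << voices << " voices, " << paths << " paths\n";
    }
}
```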
 
Gameplay code can't be fully multithreaded, and there is a point where the engine needs to synchronise all its parts.

Look at what I said....

There will always be tasks that have too high a latency penalty when spread over multiple cores, and thus there will always be a need for higher clocks and improved IPC from CPUs.

I didn't say that nothing can be successfully multithreaded; I said there will always be some tasks that can't be multithreaded due to latency issues.

So you've literally backed up what I said.

Reading comprehension is everything.
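
One standard way to quantify the "some work stays serial" part of that point is Amdahl's law; the 90% parallel fraction below is purely illustrative, not a measured figure from any engine:

```latex
% Amdahl's law: the serial fraction caps the speedup no matter how many cores you add.
% p is the fraction of the frame that parallelises; p = 0.9 below is illustrative only.
\[
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad
S(8)\Big|_{p = 0.9} = \frac{1}{0.1 + 0.9/8} \approx 4.7, \qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p} = 10
\]
```

So even with unlimited cores, the remaining serial 10% limits the whole frame, which is where higher clocks and better IPC keep paying off.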
 
But here it is not a problem of latency; it is a problem of parallel programming. Again, look at Doom Eternal, where the engine is so well multithreaded that they set a 1000 fps cap for future CPUs. The framerate gain is bigger than the added latency; in the end the game plays better, with less input lag, at 300 fps or more than it would without good multithreading.

Same for Naughty Dog: at the beginning they could not run TLOU Remastered at 60 fps, only at 30 fps. The added latency wasn't enough to cancel out the gain of running at 60 fps.

EDIT: And a single core at 40 or 50 GHz would be much better, first for dev sanity and also for latency, but that is impossible.
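
As a rough illustration of why the trade-off works out (the "one extra frame of latency" figure below is an assumption for the sake of the arithmetic, not something id Software has stated):

```latex
% Frame time in ms is 1000 / fps: 33.3 at 30 fps, 16.7 at 60 fps, 3.3 at 300 fps, 1.0 at 1000 fps.
% Even if the jobified engine added a full extra frame of pipeline latency (assumed),
% two frames at 300 fps are still far less delay than a single frame at 60 fps:
\[
t_{\text{frame}} = \frac{1000}{\text{fps}}\ \text{ms},
\qquad
2 \times \frac{1000}{300} \approx 6.7\ \text{ms} \;<\; \frac{1000}{60} \approx 16.7\ \text{ms}
\]
```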
 
Is there a Hogwarts Legacy analysis video or article? Have I missed something?
 
Hogwarts Legacy

Returnal PC Feb 15th (tomorrow, btw)
Wild Hearts Feb 17th
Atomic Heart Feb 17th
Horizon Call of the Mountain Feb 22nd and PSVR2 in general
Octopath Traveler 2 Feb 24th
Wo Long Fallen Dynasty March 3rd

Games I imagine DF will have an interest in covering (Like a Dragon Ishin and Company of Heroes 3 are also coming in this period, but maybe not high priority for DF). That's a lot to deal with in the next couple of weeks.

This year is also just gonna be insane in general.
 
I saw somewhere else that the game utilises 2xMSAA, and to my eyes it does look like there is anti-aliasing. However, DF seems to disagree? I'm not particularly convinced by the argument in the video, because MSAA can't handle specular aliasing anyway, so even with MSAA on you'll still have that shimmer.
 
BVH updating is also something that doesn't scale infinitely across cores. I'll have to check the ray tracing thread again for the link to the paper I posted, but I believe 3 cores are optimal for it, as using more cores starts to hurt performance.

Great point here, although I'm just not sure which line of reasoning it will end up supporting. :)

Currently the algorithms that are commonly used for BVH updating don't scale well across N cores (N being 4 or more),
but that's not to say that, with a redesign of data structures and algorithms, it won't become a much more scalable process in the future...?
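
For intuition only (this is not the algorithm from the paper mentioned earlier), a bottom-up refit of a binary BVH shows why the available parallelism dries up: the number of independent nodes halves at every level, so near the root there is almost nothing left to spread across cores.

```cpp
// Illustrative sketch: bottom-up refit of a complete binary BVH stored in a flat
// array (1D bounds for brevity). Each level could be processed in parallel, but
// the level width halves every step, so parallelism collapses near the root.
#include <algorithm>
#include <cstdio>
#include <vector>

struct AABB { float lo = 0.f, hi = 0.f; };

AABB merge(const AABB& a, const AABB& b) {
    return { std::min(a.lo, b.lo), std::max(a.hi, b.hi) };
}

int main() {
    const int leafCount = 1 << 10;               // 1024 leaves
    const int nodeCount = 2 * leafCount - 1;     // complete binary tree
    std::vector<AABB> nodes(nodeCount);

    // Fill the leaves (the last leafCount entries) with dummy boxes.
    for (int i = 0; i < leafCount; ++i)
        nodes[leafCount - 1 + i] = { float(i), float(i) + 1.f };

    // Refit level by level, deepest internal level first. 'width' is the number
    // of independent nodes on the level, i.e. the maximum parallelism available.
    for (int width = leafCount / 2; width >= 1; width /= 2) {
        int first = width - 1;                   // first node index on this level
        // In a real engine this inner loop would be a parallel_for across cores.
        for (int i = first; i < first + width; ++i)
            nodes[i] = merge(nodes[2 * i + 1], nodes[2 * i + 2]);
        std::printf("level width %4d -> at most %4d parallel tasks\n", width, width);
    }
    std::printf("root bounds: [%g, %g]\n", nodes[0].lo, nodes[0].hi);
}
```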

My area of expertise is game audio, and I can confidently say there is nothing stopping the audio engine I work on from scaling to 8+ cores, even for the most basic of cases.
So I'm not up to date on the latest and greatest BVH processing work.
But again, it's not just about scaling; it's about fitting in with all the other work. If you can do 2 ms here and 2 ms there, that's a hell of a lot more useful than doing 4 ms in one block,
as it allows your work to be scheduled around other, more important work. Which, yet again, speaks to the importance of breaking your work up into smaller, more granular tasks.
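
A toy illustration of that scheduling point, with made-up millisecond costs: if a worker can only switch tasks at chunk boundaries, a higher-priority task that becomes ready mid-job waits far less when the job is split into 2 ms chunks.

```cpp
// Toy model (assumed numbers, not from any real engine): how long an urgent task
// waits when the worker only switches tasks at chunk boundaries.
#include <cstdio>
#include <vector>

// Returns how long (ms) an urgent task arriving at 'arrival' waits before it can
// start, given a worker running 'chunks' back to back.
double urgentWait(const std::vector<double>& chunks, double arrival) {
    double t = 0.0;
    for (double c : chunks) {
        if (t >= arrival) break;   // boundary reached at or after arrival: urgent runs now
        t += c;                    // otherwise the current chunk must finish first
    }
    return t - arrival;
}

int main() {
    double arrival = 1.0;                                    // urgent task ready 1 ms in
    std::printf("one 4 ms task   -> urgent waits %.1f ms\n",
                urgentWait({4.0}, arrival));                 // waits 3 ms
    std::printf("two 2 ms chunks -> urgent waits %.1f ms\n",
                urgentWait({2.0, 2.0}, arrival));            // waits 1 ms
}
```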

Then again, my argument basically boils down to "be smarter, devs" and "anything is possible if you make more efficient use of the HW",
which may not actually be possible for everyone, and basically ignores most of the workflow/processes/pressures of modern game design.

But really, a modern 8-core 3 GHz CPU is an absolute monster; imho it's poor form to blame a lack of CPU power for the lack of progress in game design.
 
Currently the algorithms that are commonly used for BVH updating don't scale well across N cores (N being 4 or more),
but that's not to say that, with a redesign of data structures and algorithms, it won't become a much more scalable process in the future...?

Well, hopefully we won't ever get to that point, as GPUs will have fixed-function hardware for the job by then, completely removing the burden from CPUs.
 
1080p reconstructed to 1440p and then upscaled to 4K means blurring the textures. The difference is huge.
I wonder if it could be mitigated by FSR2 as opposed to the reconstruction method they are using. It would still be upscaling, but designed to go to 4K rather than to 1440p.
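
For reference, the raw pixel counts behind those resolutions (straightforward arithmetic, nothing assumed beyond the resolutions themselves):

```latex
% 1920 x 1080 = 2,073,600 px;  2560 x 1440 = 3,686,400 px;  3840 x 2160 = 8,294,400 px.
% The internal 1080p image carries 56% of the 1440p target's samples and only 25% of 4K's,
% which is where the softness comes from, whichever upscaler fills the gap.
\[
\frac{1920 \times 1080}{2560 \times 1440} = 0.5625,
\qquad
\frac{1920 \times 1080}{3840 \times 2160} = 0.25
\]
```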
 