Digital Foundry Article Technical Discussion [2023]

@Inuhanyou I don't think devs are lazy. There's obviously a knowledge gap. I'd blame internal training more than anything. If you have CS grads entering the industry who don't have experience with best practices and profiling, you have to teach them. In the case of Jedi Survivor, the issue is allocating memory on the render thread. That just shouldn't happen. It should have been caught in code review or in profiling. People should know not to do that. It doesn't make sense that no one noticed. It's an engine architecture problem. It's not at the level of the publisher; it's at the level of the developers. The whole thing is very strange.
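For anyone wondering what "allocating memory on the render thread" looks like in code, here's a minimal C++ sketch of the anti-pattern and the usual fix. The names are illustrative, not from any real engine.

```cpp
#include <cstdint>
#include <vector>

struct DrawCall { uint32_t mesh = 0; uint32_t material = 0; };

// BAD: a fresh heap allocation every frame, on the render thread.
// The allocator can take a lock or page fault at an unpredictable
// moment, which shows up as frame-time spikes.
void RenderFrameBad(size_t drawCount) {
    std::vector<DrawCall> drawList;   // allocates...
    drawList.resize(drawCount);
    // ... fill and submit drawList ...
}                                     // ...and frees, every single frame

// BETTER: reuse a persistent buffer so steady-state frames do no
// heap allocation at all.
void RenderFrameGood(std::vector<DrawCall>& persistent, size_t drawCount) {
    persistent.clear();               // keeps capacity; frees nothing
    persistent.resize(drawCount);     // no allocation once capacity is reached
    // ... fill and submit persistent ...
}
```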
Whether the explanation is that devs are lazy, less talented, or what have you, I don't like it.

There's no reason for that to be the conclusion when the blame is always going to be higher up. Like you said, it could be training for certain situations, the right leadership properly advising teams to solve problems efficiently, the right production schedule allowing devs to study and address issues before launch, etc.

It's clear none of this is happening now.
 

Well, game studios (developers) have their own internal management. It's not as if the studios are all grunts doing the work while all of the management and executives sit at the publisher level. The development studios are accountable to and for their employees.
 
True. I was using "devs" as a catch-all for the programmers (i.e. the lower-level devs), not the leads of the projects overseeing the work within the dev studios. I wasn't clear about what I was saying. I apologize.
 
Battlefield 1 is from 2016. It's an example of great multithreading under DX11. BF V ups the object and detail level considerably while also being properly threaded under DX11.
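For context, the main tool DX11 offers for this kind of multithreading is the deferred context: worker threads record command lists in parallel, and only the immediate context submits to the GPU. A minimal sketch using the real D3D11 API, though I'm not claiming Frostbite's job system maps onto it this simply:

```cpp
#include <d3d11.h>

// Worker thread: record draw calls into a deferred context, then bake
// them into a command list. Many workers can do this in parallel.
ID3D11CommandList* RecordOnWorker(ID3D11Device* device) {
    ID3D11DeviceContext* deferred = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)))
        return nullptr;

    // ... deferred->Draw(...), state setting, etc. ...

    ID3D11CommandList* list = nullptr;
    deferred->FinishCommandList(FALSE, &list);
    deferred->Release();
    return list;
}

// Render thread: only the immediate context actually talks to the GPU.
void SubmitOnRenderThread(ID3D11DeviceContext* immediate, ID3D11CommandList* list) {
    immediate->ExecuteCommandList(list, FALSE);
    list->Release();
}
```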

I tried to replicate Alex's core scaling test on BF1 on my 5800X3D but it was inconclusive. The engine is capped at 200 fps in campaign mode and even with 4 cores and HT disabled I could still hit the cap.

Tried BFV and it was more scalable. 4 cores with no HT hovered around 160-170; 8 cores with HT stayed above 190 and often hit the 200 fps cap. DX11 in BFV uses an 8-core CPU extremely well.

On a side note DX12 in BFV is still a stuttery mess.
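For anyone who wants to run this kind of core-scaling test without BIOS changes, one rough alternative is pinning the game process to a subset of logical processors. A sketch using the Win32 affinity API (the PID handling is a placeholder, and it's less rigorous than actually disabling cores/HT in the BIOS):

```cpp
#include <windows.h>
#include <cstdlib>
#include <iostream>

int main(int argc, char** argv) {
    // Placeholder: pass the game's process ID on the command line.
    DWORD pid = (argc > 1) ? static_cast<DWORD>(std::atoi(argv[1])) : 0;

    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (!proc) { std::cerr << "OpenProcess failed\n"; return 1; }

    // Each bit selects one logical processor; 0x0F = logical CPUs 0-3.
    // With SMT/HT enabled, those may be only two physical cores, which
    // is why disabling HT in the BIOS is still the cleaner experiment.
    DWORD_PTR mask = 0x0F;
    if (!SetProcessAffinityMask(proc, mask))
        std::cerr << "SetProcessAffinityMask failed\n";

    CloseHandle(proc);
    return 0;
}
```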
 
Multiplayer is a far better test of the CPU. A fully populated game of Breakthrough hits the CPU hard.
 
When talking about complexity/challenge, you're probably imagining I'm talking about depth of systems and whatnot, and that's not it. It's the graphical make-up of games most of all, though it can also be the increasing amount of system complexity in what used to be more straightforward action games.

And if I really have to explain how graphical complexity has gone up with these recent games on this board of all places, I don't know if this can even be any kind of constructive conversation to begin with.
I was thinking you were talking about the totality of what games are doing, so yes, that would include gameplay systems and mechanics. Isolating visuals, I would still say only a handful of games have done anything beyond upping the resolution of various buffers/effects and pushing LOD out a bit. Hardly anything that hasn't been normal on PC ports of the past, and typically those titles aren't the ones with huge issues. Again, it's hard to buy into the narrative that all these developers know what they are doing and the issues are entirely out of their control when there are games that have the same deadlines and budgets and release in a much better state, often while being technically more impressive to boot. And it's typically the same developers who always release technically poor products.
 
We used to have the expression "arcade perfect" for when a console port of an arcade game was superb. Now they've taken it to heart and we have another "console perfect" port.
Yeah. You know why. That's because arcade machines were so much more powerful than consoles.


Do you get the hint?.......


Eat PS5 secret sauce you PC peasants!!!
 
Multiplayer is a far better test of the CPU. A fully populated game of Breakthrough hits the CPU hard.

Sure, but the games we've been talking about with poor CPU scaling are all single-player. I don't know why some people say BF1 has better graphics than BFV. The art style and atmosphere of the games are different, and I can see why some would prefer BF1. From a technical perspective, though, BFV is definitely doing more on screen.

Going back to the original question of whether current-gen games are more complex: BFV doesn't have anywhere near the density of geometry on screen at once compared to something like HFW. BFV is using a buttload of POM to simulate geometry.
 
BF1 has better art design and higher-quality TAA. There isn't much, if any, POM in BFV; it's all tessellation. No game has anywhere near the geometry density of HFW. Even the PS4 version is probably equal, or at least very close, to any current-gen game. HFW doesn't have performance problems and isn't available on PC, though.
 
If games with heavy tessellation are excluded, the new Star Wars game has a higher geometry density than HFW. All the rocks and the ground are 3D, and small objects have many polygons. PCGH also said that Coruscant from Jedi Survivor looks like the 3DMark draw-call test. It almost looks like Nanite.

But geometry doesn't make a modern game as impressive as state-of-the-art lighting, which can be seen in Cyberpunk 2077 or Control. Without heavy ray tracing, a game will always look like a last-gen game at the highest PC settings.

 
BF1 has better art design and higher-quality TAA. There isn't much, if any, POM in BFV; it's all tessellation. No game has anywhere near the geometry density of HFW. Even the PS4 version is probably equal, or at least very close, to any current-gen game. HFW doesn't have performance problems and isn't available on PC, though.

I must be going blind then, because it's either POM that's giving depth to the large amount of stone debris on the desert maps, or it's some of the worst tessellation I've ever seen.

DF also seems to think it’s using POM.
 
I'm fairly sure it's tessellation. They had a presentation about it being one of the biggest upgrades in FB3. It's by far the most widespread use of tessellation displacement in a game; nearly the entire terrain of every map uses it. Battlefield 1 was where it really became widespread and industry-leading. They tested the waters with a much more limited implementation in Battlefront 1.
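Since the posters here (and DF) disagree on which technique it is, here's the core difference, sketched as plain C++ mirroring what the shaders do. SampleDepth/SampleHeight are assumed stand-ins for heightmap texture fetches, not real API calls.

```cpp
struct Vec2 { float x, y; };

// Stubs standing in for heightmap texture fetches.
float SampleDepth(Vec2)  { return 0.5f; }  // inverted height ("depth") in [0,1]
float SampleHeight(Vec2) { return 0.5f; }  // height in [0,1]

// Parallax occlusion mapping: a per-pixel ray march through the height
// field. The mesh stays flat; only the texture coordinate used for
// shading shifts, so silhouettes stay flat and the illusion breaks at
// grazing angles.
Vec2 ParallaxOcclusionUV(Vec2 uv, Vec2 viewDirTS, float heightScale, int steps) {
    float layerDepth = 1.0f / steps;
    Vec2 delta = { viewDirTS.x * heightScale * layerDepth,
                   viewDirTS.y * heightScale * layerDepth };
    float rayDepth = 0.0f;
    while (rayDepth < SampleDepth(uv)) {   // march until the ray hits the surface
        uv.x -= delta.x;
        uv.y -= delta.y;
        rayDepth += layerDepth;
    }
    return uv;
}

// Tessellation displacement: the domain shader offsets real, newly
// tessellated vertices along the normal, so the detail is actual
// geometry with correct silhouettes.
float DisplacedVertexOffset(Vec2 uv, float heightScale) {
    return SampleHeight(uv) * heightScale; // applied to the vertex position
}
```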
 

I'm really glad we have someone like Alex championing this stuff. Yes, you've got to be considerate when posting/making content about stuff like this, but sometimes harsh words need to be heard. It's not an attack on devs, but a call to publishers to give more time and to consider the quality of the product they're pushing out before it becomes a massive storm of negativity. Absolutely everyone, from IHVs to ISVs, to storefronts and game clients, to the OS and APIs, has a stake in seeing this stuff improve.

If some of this bitching can result in improvements to APIs, OSes, QA processes, or hardware that make devs' jobs easier, resulting in better products, then it's all the better.
 
Shows how bad DX12 is compared to DX11. Microsoft did a bad job on DX12.

This is a very simplistic and naive interpretation of where the actual problem lies. If we say that DX12 is bad, then all low-level PC APIs are bad. The subject has already been discussed here a few times. It's not that DX12 or Vulkan or whatever other API is the problem; it's the complexity of the new low-level APIs that is the problem.
Thoughts on graphics APIs and libraries (asawicki.info)

"In DX11 you could just specify intended resource usage (D3D11_USAGE_IMMUTABLE, D3D11_USAGE_DYNAMIC) and the driver chose preferred place for it. In Vulkan you have to query for memory heaps available on the current GPU and explicitly choose the one you decide best for your resource, based on low-level flags like VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT etc. AMD exposes 4 memory types and 3 memory heaps. Nvidia has 11 types and 2 heaps. Intel integrated graphics exposes just 1 heap and 2 types, showing the memory is really unified, while AMD APU, also integrated, has same memory model as the discrete card. If you try to match these to what you know about physically existing video RAM and system RAM, it doesn’t make any sense. You could just pick the first DEVICE_LOCAL memory for the fastest GPU access, but even then, you cannot be sure your resource will stay in video RAM. It may be silently migrated to system RAM without your knowledge and consent (e.g. if you go out of memory), which will degrade performance. What is more, there is no way to query for the amount of free GPU memory in Vulkan, unless you do hacks like using DXGI."

As you see, everything comes at a cost.


One more interesting quote:

"One could say the new APIs don’t deliver to their promise of being low level, explicit, and having predictable performance. It is impossible to deliver, unless the API is specific to one GPU, like there is on consoles. "
 
With regards to the whole low-level vs. high-level API thing, I don't understand why we can't just have two APIs with the same feature set, one high level and the other low level, like PlayStation's GNM and GNMX. Surely this is the best-of-both-worlds solution and seems like a bit of a no-brainer. Any idea why it hasn't been done yet?
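In principle the layering is simple. Here's a purely hypothetical C++ sketch of the split; none of these types are real APIs, they just stand in for a GNMX-style convenience layer built over a GNM-style explicit one:

```cpp
#include <cstddef>
#include <cstdint>
#include <new>

// --- "GNM-like" low-level layer: explicit, the caller manages everything ---
struct GpuMemory { void* ptr = nullptr; };

struct ExplicitDevice {
    GpuMemory Allocate(std::size_t bytes, uint32_t /*memoryType*/) {
        return GpuMemory{ ::operator new(bytes) }; // stands in for a GPU heap
    }
    void BindVertexBuffer(GpuMemory, std::size_t /*offset*/) { /* record state */ }
    void Draw(uint32_t /*vertexCount*/) { /* submit */ }
};

// --- "GNMX-like" high-level layer: convenience built on the same device ---
struct EasyDevice {
    explicit EasyDevice(ExplicitDevice& d) : dev(d) {}

    // One call hides memory-type choice, binding, and submission,
    // much like D3D11's usage flags hide them from the app.
    void DrawMesh(const float* /*verts*/, uint32_t vertexCount) {
        GpuMemory mem = dev.Allocate(vertexCount * 3 * sizeof(float), 0);
        // ... upload vertices, pick formats, set default state ...
        dev.BindVertexBuffer(mem, 0);
        dev.Draw(vertexCount);
    }

private:
    ExplicitDevice& dev;
};
```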
 
I don’t have the knowledge to determine whether DX12 is bad, but it doesn’t seem like the best idea to have a unified lower level API trying to support such widely differing GPU architectures. Best practices are often completely different on each IHV.
 
With regards to the whole low-level vs. high-level API thing, I don't understand why we can't just have two APIs with the same feature set, one high level and the other low level, like PlayStation's GNM and GNMX. Surely this is the best-of-both-worlds solution and seems like a bit of a no-brainer. Any idea why it hasn't been done yet?
I believe the Xbox SDK already has a low-level API available for devs. GNMX is the high-level API (similar to DirectX) on PlayStation.
 
I don’t have the knowledge to determine whether DX12 is bad, but it doesn’t seem like the best idea to have a unified lower level API trying to support such widely differing GPU architectures. Best practices are often completely different on each IHV.

Yeah, and it does make sense, right? People are praising the PS4/PS5 APIs, but I think at the bottom of this is the fact that there is only one piece of hardware you need to support. In the PC space you are trying to achieve to-the-metal performance on an effectively infinite variety of hardware.
 