Funnily enough, years ago there was a thread about game engines supporting multiple cores, and I asked the question:

are game engines now being developed to use a specified number of cores (e.g. 8 cores), or are they being developed to use as many cores as they find (e.g. if they find 8 cores they will use 8 cores, if they find 64 cores they will use 64 cores, etc.)?

and I was told it was the latter (might have been Sebbi or Demirug; it was someone involved in game dev, though it wasn't humus).
IHVs could offer their own explicit APIs for the few developers who want to maximize performance.
There is probably no reason for AMD to do it. Given Nvidia's market share though, I could see a small pool of developers possibly writing a separate path for RTX GPUs.

I think with something like that (and I feel this way with the current APIs to some extent as well) the theory of how it would work runs into serious issues in practice, given the actual dynamics of the PC market.
While what you said is true, modern game engines don't conform to either: they aren't designed for optimal performance on X number of cores (according to whoever answered me), and they certainly don't use all the cores they find regardless of performance scaling. Using X number of cores isn't really the same as being designed for optimal performance on X number of cores. And saying something will use as many cores as are available also isn't specific at all about what kind of performance scaling you should see from 1 through X cores.
There are "separate paths" for all GPUs for a long time now, with Nvidia in particular providing tech which just doesn't work on other IHVs. A game which is using DLSS or some ODM RT feature of Lovelace is already using a "separate path" for RTX GPUs.There is probably no reason for AMD to do it. Given Nvidia's market share though, I could see a small pool of developers possibly writing a separate path for RTX GPUs.
Question: is OpenGL still a thing in game development? It's been a long time since I've seen a game using it.
You already see this with current games: they are all extremely well threaded in the sense that they run a lot of threads. But what the layman user isn't seeing is what they'd consider significant performance scaling (at least in terms of FPS) from 6 to 8 to more cores (or in some cases even from 4 to 6).
Another aspect worth keeping in mind is that designing to scale performance across as many cores as possible may not actually be optimal overall, since doing so can involve trade-offs, including lower performance on machines with fewer cores.
Many CPU-bound games actually degrade in performance when the core count increases beyond a certain point, as the benefits of the extra parallelism are outweighed by the threading overhead.

On high-end desktop systems with more than eight physical cores, for example, some titles can see performance gains of up to 15% by reducing the thread count of their worker pools to below the core count of the CPU.

Instead, a game's thread count should be tailored to fit the workload. Light CPU workloads should use fewer threads.

Executing threads on both logical cores of a single physical core (hyperthreading or simultaneous multithreading) can add latency, as both threads must share the physical core's resources (caches, instruction pipelines, and so on). If a critical thread is sharing a physical core, its performance may decrease. Targeting physical core counts instead of logical core counts can help to reduce this on larger core count systems.

On systems with P/E cores, work is scheduled first to physical P cores, then E cores, and then hyperthreaded logical P cores. Using fewer threads than total physical cores enables background threads, such as OS threads, to execute on the E cores without disrupting critical threads running on P cores by executing on their sibling logical cores.

This also matters for chiplet-based architectures that do not have a unified L3 cache: threads executing on different chiplets can cause heavy cache thrashing.

Core parking has been seen to be sensitive to high thread counts, with short bursty threads failing to trigger the heuristic that unparks cores. Having fewer, longer-running threads helps the core parking algorithms.
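To make the "fewer threads than physical cores" guidance above concrete, here is a minimal C++ sketch (Windows-only). It counts physical cores via GetLogicalProcessorInformationEx, because std::thread::hardware_concurrency() only reports logical cores, and then sizes a worker pool below that count. The reserve of two cores for the OS and the game's own main/render threads is an illustrative assumption, not a universal rule:

```cpp
#include <windows.h>

#include <algorithm>
#include <thread>
#include <vector>

// Count physical cores: GetLogicalProcessorInformationEx returns one
// RelationProcessorCore entry per physical core, regardless of SMT.
unsigned physical_core_count() {
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<char> buf(len);
    if (!GetLogicalProcessorInformationEx(
            RelationProcessorCore,
            reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data()),
            &len))
        return std::thread::hardware_concurrency(); // fall back to logical count
    unsigned cores = 0;
    for (DWORD off = 0; off < len;) {
        auto* e = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + off);
        if (e->Relationship == RelationProcessorCore)
            ++cores;
        off += e->Size;
    }
    return cores;
}

// Size the worker pool below the physical core count, leaving headroom
// for the OS and the game's own main/render threads.
unsigned worker_pool_size() {
    unsigned physical = std::max(1u, physical_core_count());
    const unsigned reserved = 2; // assumed reserve, tune per workload
    return physical > reserved ? physical - reserved : 1;
}
```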
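And for the P/E-core point: on Windows 11 a thread can opt into EcoQoS via SetThreadInformation, which hints the scheduler to prefer E cores for it, keeping the P cores (and their SMT siblings) free for critical game threads. A small sketch; whether an engine should tag its background workers this way is a design decision, not a given:

```cpp
#include <windows.h>

// Mark a thread as background work (EcoQoS): on hybrid CPUs the Windows 11
// scheduler will prefer to run it on E cores.
void mark_thread_background(HANDLE thread) {
    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED; // enable throttling
    SetThreadInformation(thread, ThreadPowerThrottling, &state, sizeof(state));
}
```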
Warhammer 40,000: Darktide has a setting that controls the number of worker threads available to the game, and the maximum is always less than the number of available cores. My 7800X3D system has 16 threads, but the game only allows me to use 14.

Yeah, some games benefit from turning hyperthreading off. I know Warzone was a game where you could alter your render thread count, and people were lowering it to boost framerates. I'm not sure if the guidance was the same for AMD and Intel, or if it's changed since, but there was some number of threads that was a sweet spot, and anything higher or lower would hurt performance. This was for people trying to push max frame rates in CPU-limited scenarios like low-graphics 1080p. Some people were also using the Process Lasso tool to try to keep threads from moving around. I'm hoping things like this get improved over time so there is less fussing and better out-of-the-box behavior.
A rare new game where we can compare DX11 with DX12: PCGH tested the game too, and DX11 provided the best frame pacing and average fps except on the top three cards.

DirectX 12 is only worthwhile on the Radeon RX 7900 XTX, GeForce RTX 4080 Super, and RTX 4090, to escape the CPU limit. On all other GPUs, DX11 delivers the better overall performance with more even frame delivery.
On a two-CCD system, you want to have two thread pools locked to those cores, and you want to push tasks to those thread pools in a way that minimizes data sharing across them. This requires designing your data model and communication in a certain way.
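As a minimal sketch of that idea, assume a dual-CCD Ryzen where logical CPUs 0-7 sit on CCD0 and 8-15 on CCD1 (the real mapping must be queried from the OS; this layout, the pool sizes, and the affinity masks are assumptions for illustration). Windows/MSVC-only, with the actual per-pool task-queue logic elided:

```cpp
#include <windows.h>

#include <thread>
#include <vector>

// Spawn a pool whose workers are pinned to one CCD, so each pool's working
// set stays in that CCD's L3 slice instead of bouncing across chiplets.
void spawn_pool(std::vector<std::thread>& pool, DWORD_PTR ccd_mask, int count) {
    for (int i = 0; i < count; ++i) {
        pool.emplace_back([] {
            // ... pull tasks from this pool's own queue; cross-pool
            // communication is minimized by how tasks are assigned ...
        });
        SetThreadAffinityMask(pool.back().native_handle(), ccd_mask);
    }
}

int main() {
    std::vector<std::thread> ccd0, ccd1;
    spawn_pool(ccd0, 0x00FF, 6); // CPUs 0-7:  assumed CCD0
    spawn_pool(ccd1, 0xFF00, 6); // CPUs 8-15: assumed CCD1
    for (auto& t : ccd0) t.join();
    for (auto& t : ccd1) t.join();
}
```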
This is such an obvious thing, even in less performance-intensive applications, that any engine designer not doing it should go back to school. Sad that this even needs to be said.