Still a lot of pop-in / weird LOD behaviour. You can clearly see a lot of stuff appearing, and even the lighting changing / kicking in as you walk or drive. I guess it's an engine limitation and will never be fixed :/
IMO it's way more important than RT...
Is it an actual hard engine limitation, or having to optimize for a very "tight" VRAM budget due to optics and market implications? Looking online, it seems like Cyberpunk 2077 stays under 10 GB of VRAM allocated (just allocated) at 4K with RT on an RTX 3090.
If so, it could be fixed with another update (maybe one of the "Enhanced Edition"/GOTY editions they've been doing) once we've firmly moved on from 8 GB cards and the last-gen consoles.
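For anyone who wants to sanity-check that kind of number themselves, here's a minimal sketch using NVIDIA's NVML (the library nvidia-smi is built on). Note that it reports device-wide memory in use, which is not the same thing as what a single game has allocated internally, so treat it as a rough cross-check only:

```cpp
// Query device-level VRAM usage via NVML. Device "used" covers everything
// resident on the GPU (game, browser, compositor), not one game's allocation.
// Build (Linux): g++ vram_check.cpp -lnvidia-ml
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit_v2() != NVML_SUCCESS) {
        std::fprintf(stderr, "NVML init failed\n");
        return 1;
    }
    nvmlDevice_t dev;
    nvmlMemory_t mem;
    if (nvmlDeviceGetHandleByIndex_v2(0, &dev) == NVML_SUCCESS &&
        nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS) {
        std::printf("VRAM used: %.2f GiB of %.2f GiB\n",
                    mem.used  / (1024.0 * 1024.0 * 1024.0),
                    mem.total / (1024.0 * 1024.0 * 1024.0));
    }
    nvmlShutdown();
    return 0;
}
```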
That won't change when we leave last gen behind. Staying under 10 GB will always be the target for current gen.
The current gen consoles actually do have very limited amounts of video memory, due to the shared configuration and OS allocations. You can see how the consoles are running medium textures in games like Watch Dogs Legion, Control and some others.
But that doesn't mean we won't be seeing improvements in that field. UE5 for example is very efficient with its VRAM usage.
Indeed. I'd hazard a guess that 10 to 12 GB of VRAM usage will be the max these consoles will use. That'd leave about 4 GB for everything else, which ain't all that much: game logic, OS, background services, etc. The SSDs are probably a godsend with these amounts of RAM.
Not that they need more VRAM allocation to deliver the graphics we want anyway (see Forbidden West and Rift Apart).
I'd say that this has a lot more to do with the need to support old h/w like consoles and PCs from 2015 (4C CPUs, etc.) than with the amount of VRAM on the latest GPUs.
From what I have observed, the big consoles do have between 6-8 GB available as video memory in most games. The Series S has around 3-4 GB.
What's available to games on Series X is 10 GB @ 560 GB/s plus 3.5 GB @ 336 GB/s. What's available to games on Series S is 8 GB @ 224 GB/s.
What's available to games on PS5 is 12 GB @ 448 GB/s (the game-available amount is rumoured, not official).
All of that is shared memory. It's not dedicated video memory like on PC. It is shared between CPU and GPU.
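Writing that split out, just to keep the numbers straight (a small sketch: the physical totals are the public specs, the game-visible figures are the ones quoted in this thread, and the PS5 one is the rumoured amount):

```cpp
// Tabulation of the shared-memory pools discussed above. None of this is
// dedicated VRAM in the PC sense; CPU and GPU draw from the same pool.
#include <cstdio>

struct Console {
    const char* name;
    double physical_gb;      // total GDDR6 on the board (public spec)
    double game_visible_gb;  // what a game can actually allocate (figures from this thread)
};

int main() {
    const Console consoles[] = {
        {"Series X", 16.0, 13.5},  // 10 GB @ 560 GB/s + 3.5 GB @ 336 GB/s
        {"Series S", 10.0,  8.0},  // 8 GB @ 224 GB/s
        {"PS5",      16.0, 12.0},  // 448 GB/s; game-visible figure is rumoured
    };
    for (const Console& c : consoles) {
        std::printf("%-8s  game-visible %.1f GB, OS/system reserve ~%.1f GB\n",
                    c.name, c.game_visible_gb, c.physical_gb - c.game_visible_gb);
    }
    return 0;
}
```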
We were talking about LOD pop-in, which is the same on 4 GB and 24 GB GPUs. This is likely due to processing-power limitations of old h/w. If it were tied to last-gen consoles, you wouldn't see it use more than 5 GB of memory for the entire game (CPU code segment plus data segments for CPU and GPU).
Elden Ring comes to mind as a CPU-intensive game that will need more than 3.5 GB.
It's the same on shared. You only need to account for the size of the OS. Typically on PC, assets are moved into system memory and then copied to VRAM. Here, it's just one pool.
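A conceptual sketch of that difference, with stand-in containers rather than any real graphics API, just to show why the discrete-GPU path transiently holds the asset twice while the unified pool holds it once:

```cpp
// Conceptual sketch only (no real graphics API calls): discrete-GPU upload
// stages an asset in system RAM and then copies it into VRAM; a unified pool
// has one allocation that both CPU and GPU address.
#include <cstdio>
#include <cstring>
#include <vector>

using Bytes = std::vector<unsigned char>;

// Discrete GPU: disk -> system-RAM staging buffer -> copy into VRAM.
// The asset transiently occupies both pools until the staging copy is freed.
void upload_discrete(const Bytes& from_disk, Bytes& system_ram, Bytes& vram) {
    system_ram = from_disk;                                          // stage in system RAM
    vram.resize(system_ram.size());
    std::memcpy(vram.data(), system_ram.data(), system_ram.size());  // stand-in for the PCIe copy
    system_ram.clear();                                              // staging released afterwards
}

// Unified/shared pool: one allocation, visible to both CPU and GPU.
void upload_unified(const Bytes& from_disk, Bytes& shared_pool) {
    shared_pool = from_disk;
}

int main() {
    Bytes asset(64 * 1024 * 1024, 0xAB);  // pretend 64 MB texture read from disk
    Bytes system_ram, vram, shared_pool;

    upload_discrete(asset, system_ram, vram);
    upload_unified(asset, shared_pool);

    std::printf("discrete: VRAM holds %zu MB\n", vram.size() >> 20);
    std::printf("unified:  shared pool holds %zu MB\n", shared_pool.size() >> 20);
    return 0;
}
```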
The slower 3.5 GB pool on the Series X is recommended by Microsoft as CPU memory, but it's almost a guarantee that CPU-intensive games will need more than 3.5 GB. So the Series X has well below 10 GB available as effective video memory, unless the game logic is so simple that 3.5 GB is enough, of course.
Series S also has shared memory. You can actually subtract the Series X CPU-recommended 3.5 GB, as that will not change since it's the same CPU. Then you get 4.5 GB as video memory, but it will likely be lower because, as I said, games will certainly need more than 3.5 GB of DRAM.
IDK about PS5 though. Chances are it's pretty similar.
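The subtraction being described, as a quick sketch; the game-visible totals are the figures quoted in this thread, and the cpu_need value is a made-up example of a CPU-heavy game:

```cpp
// Rough estimate of "effective video memory" on the shared-memory consoles:
// whatever the game's CPU-side data needs comes out of the same pool.
#include <algorithm>
#include <cstdio>

// game_visible_gb: memory a game can touch at all (OS share already removed).
// cpu_need_gb:     what the game's CPU-side data actually requires.
double effective_vram_gb(double game_visible_gb, double cpu_need_gb) {
    return std::max(0.0, game_visible_gb - cpu_need_gb);
}

int main() {
    // Hypothetical CPU-heavy game. Anything above 3.5 GB on Series X spills
    // out of the slow pool and eats into the fast 10 GB, which is the point
    // being made above.
    const double cpu_need = 4.5;

    std::printf("Series X: %.1f GB left for graphics\n", effective_vram_gb(13.5, cpu_need)); // 9.0
    std::printf("Series S: %.1f GB left for graphics\n", effective_vram_gb(8.0,  cpu_need)); // 3.5
    std::printf("PS5:      %.1f GB left for graphics\n", effective_vram_gb(12.0, cpu_need)); // 7.5 (rumoured total)
    return 0;
}
```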
Since 2005 I've had laptops that used unified memory and had poor graphics performance, so unified memory is not new, although consoles "popularised" the concept.
The shared memory was one of the reasons consoles often don't apply as much AF, if at all (per DF's analyses). High resolutions also eat a lot of bandwidth, so having dedicated VRAM with its own bandwidth has its advantages. The CPU won't have to contend with the GPU's RAM requirements either, and vice versa.
Also, DDR as system RAM has generally been better latency-wise for CPU work. No idea if that is still true though.
GPU Subwarp Interleaving
Paper: https://research.nvidia.com/publication/2022-01_GPU-Subwarp-Interleaving
Patent application: https://www.freepatentsonline.com/y2022/0027194.html
It's a shame that this doesn't work better.
I'm assuming as we move further into generations of ray tracing support, the hardware will change to improve the performance in this area.
So, more cache, smaller hardware threads.
If ray tracing really is "the future" then optimising SIMD size for rasterisation seems to be a mistake. So we should expect smaller hardware threads. With gaming separated from HPC/AI, there's no real reason for gaming cards to stick with a hardware thread size of 32.
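To put a rough number on the "smaller hardware threads" argument, here's a toy Monte-Carlo model (entirely illustrative: the 8-shader count and the serialise-one-pass-per-distinct-shader rule are my assumptions, with no sorting, re-convergence tricks or subwarp interleaving):

```cpp
// Toy model of SIMD lane utilisation under shader divergence. Each "ray"
// independently needs one of kShaders hit shaders; a warp is assumed to
// serialise one full-width, partially masked pass per distinct shader present.
#include <cstdio>
#include <random>
#include <set>

int main() {
    const int kRays    = 1 << 20;  // simulated incoherent rays
    const int kShaders = 8;        // distinct hit shaders / materials (made up)

    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, kShaders - 1);

    for (int width : {32, 16, 8, 4}) {
        long long useful = 0;  // lane-cycles doing real work
        long long issued = 0;  // lane-cycles occupied (useful or masked off)

        for (int base = 0; base + width <= kRays; base += width) {
            std::set<int> shaders_in_warp;
            for (int i = 0; i < width; ++i) shaders_in_warp.insert(pick(rng));
            const long long passes = (long long)shaders_in_warp.size();
            useful += width;           // each ray executes exactly one shader
            issued += passes * width;  // one full-width pass per distinct shader
        }
        std::printf("warp width %2d: average lane utilisation %.1f%%\n",
                    width, 100.0 * useful / issued);
    }
    return 0;
}
```

With those assumptions the narrow widths come out clearly ahead; with coherent, rasterisation-like work the warps are mostly uniform and the gap largely disappears, which is the trade-off being argued about here.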
Volta class is 16, Intel is dynamic down to 4 I think? AMD is the only vendor with 32 right now. But I dunno, seems like some sort of compromise will be needed for the time being.
The hardware has already been changing, e.g. doubled triangle-intersection-test rate. Some hardware changes appear to be non-gaming related (motion-blur).
I guess what I wonder is, can the schedulers be updated using firmware to support different ways to schedule? Or are we looking at new hardware?
The hardware is designed to implement algorithms. With the right choice of algorithms available, the hardware can adapt to workloads. The driver can provide hints, too.
You're referring to HPC? Ampere is 32, isn't it?
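For what it's worth, the hardware thread width NVIDIA exposes is queryable from the CUDA runtime, and it reports 32 on current parts; a quick check, assuming a CUDA toolkit is installed:

```cpp
// Print the warp size the CUDA runtime reports for each visible GPU.
// Host-only code: it links against the CUDA runtime but launches no kernels.
// Build: nvcc warp_size.cpp   (or g++ with the CUDA include path and -lcudart)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess) {
            std::printf("%s: warpSize = %d\n", prop.name, prop.warpSize);
        }
    }
    return 0;
}
```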