It's to save performance, but the texture filtering isn't worse than in other games. All console games tend to have mediocre texture filtering.
Many PS4 and Xbox One games have low anisotropic filtering...
That's what I don't get. On PC, 16x AF has been basically "free" for years and years now, even on old GPUs without a lot of raw power and bandwidth. Why is this so hard to have on console? Especially on a big first-party game like this one.
It's free because GPU caches are sufficiently large to localize many of the texture fetches with AF; it's only not free on crippled mobile chips with very low bandwidth and crippled caches.
It's clearly not free on console, though, because almost no games have high AF unless you count the less impressive ones such as the remasters, etc.
Your PC GPU has memory bandwidth to spare. Set your game to 0x AF and try downclocking your VRAM to just beyond the point where you start getting an fps slowdown, then activate 16x AF and see if you get a drop in framerate.
What the PS4 and higher-end 7000 series cards have is 512 KB of L2 cache. How is that sufficient to store every texture/mipmap in a scene that needs to be sampled 16 times for 16x AF? Additionally, the cache may not be used exclusively for textures.
You don't need to store every texture/mipmap in cache, that would be stupid. With each draw call, the GPU fetches only a small bunch of textures. With 4:1 DXT compression, a 128*128*24bpp fragment of texture takes just 12 kilobytes of L2 cache; you can store 18 different fragments, one for each CU, and still have 296 KB of L2 left for geometry attributes and other stuff.
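Quick back-of-the-envelope check of those numbers, in Python (assumptions: 4:1 DXT-style compression of a 24bpp source, a PS4-like 512 KB GPU L2, 18 CUs with one fragment each):

# Sanity check of the fragment-size and L2 figures above (assumed values).
KB = 1024
fragment_uncompressed = 128 * 128 * 3             # 128x128 texels at 24bpp = 48 KB
fragment_compressed = fragment_uncompressed // 4  # 4:1 DXT compression = 12 KB
l2_total = 512 * KB                               # GPU L2 on PS4 / higher-end 7000 series
num_cus = 18                                      # PS4 compute units, one fragment each
used = num_cus * fragment_compressed              # 216 KB
left = l2_total - used                            # 296 KB
print(fragment_compressed // KB, used // KB, left // KB)  # prints: 12 216 296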
Thanks for the clarification.
One of AF's drawbacks with PBS is shader aliasing on distant textures because of small and frequent texture details (close to 1:1 texel-to-pixel ratios); this can introduce more temporal aliasing, which is obviously undesirable. I think higher resolution is the main reason for the better AF on PS4 Pro in games such as Horizon or FFXV.
But if you compare a game like MG5, the PC version doesn't seem to get more aliasing with 16x AF. So why does the console version still get lower AF?
So maybe it's not a VRAM bandwidth issue, and perhaps the simple act of sampling 16 times for 16x AF puts extra load on the shaders. Or perhaps it is a VRAM bandwidth issue, and the occasional cache miss for texture fragments uses that bit more bandwidth, which is already strained by having to be shared between the console's CPU and GPU. The CPUs only have 512 KB of L2 cache per core; typical PCs have double or quadruple that, making the juggling of bandwidth between the CPU and GPU just that much tighter.
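For a rough sense of the raw fetch cost being discussed, here's a simplified worst-case count (assuming each anisotropic tap is a full trilinear sample; real hardware takes fewer taps on most pixels and only hits the maximum at grazing angles):

# Worst-case texels fetched per shaded pixel under a simple AF model (assumption).
bilinear_texels = 4                        # 2x2 footprint
trilinear_texels = 2 * bilinear_texels     # two mip levels = 8 texels
for af in (1, 2, 4, 8, 16):
    taps = af                              # up to N trilinear taps at Nx AF
    print(af, "x AF: up to", taps * trilinear_texels, "texels per pixel")
# prints 8, 16, 32, 64, 128 texels for 1x through 16x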
Crytek would be a great visit.
He visited Epic Games, and like I said a few months ago, it was the first engine he decided to ditch. He explained in an interview with the PS Blog that he wants to customise the tools too, not only the renderer.
He visited all the first-party studios, and DICE too...
Besides, he is now independent; if one day he is not happy with his collaboration with Sony, he can decide after Death Stranding to use UE4 for example, or more probably create a new engine... Or if he wants to do something new, he is free to...
Edit: I doubt he will be unhappy with Sony; it's a relationship built on years of trust, and Sony continues to offer him the best support for Death Stranding.