Digital Foundry Article Technical Discussion [2021]

Status
Not open for further replies.
I'm lost. Efficiency of zlib PS4/XBO hardware compressor vs CPU, or zlib decompressor (hardware/CPU) vs software kraken on PS4/XBO? Or something else? :runaway:

I can well believe software Kraken decompression is faster in the round than software LZ decompression, because zlib-style LZ is ancient. For previous-gen consoles I think the key comparison is software Kraken decompression versus hardware LZ decompression. And even if software Kraken is faster, is there sufficient CPU headroom to shift from hardware LZ decompression to software Kraken decompression?

And to be clear, I don't know the answer to any of these questions! :nope:
They obviously think it is, otherwise would make little sense to compare it on last gen.
But I would think it's a case-by-case basis: does the benefit of using Kraken on the CPU outweigh using up that precious CPU resource?
 
From Oodle site http://www.radgametools.com/oodlekraken.htm


I read that as RAD wanting you to use Oodle Kraken instead of zlib, and that it will be competitive, probably/maybe even better than using the zlib HW on the PS4/XBO.
Why would they highlight the AMD Jaguar chip in PS4 and XBO if it was "worse" than the built-in stuff?


Going on, assume we're looking at it from a total-cost point of view, and with even more assumptions:

1. With Kraken you create smaller files.
2. With Kraken on the CPU you decode a file of a given size quicker than software LZ, but hardware LZ is quicker per 1MB by some ratio.

Now say the Kraken file is 1MB and the LZ file is 2MB due to #1.
And you decode 1MB in 8s on Kraken CPU and 6s per MB on LZ HW (silly numbers, but easy to follow), so the 2MB LZ file takes 12s in total versus 8s for Kraken.
So even if the CPU is slower to decode per MB on PS4/XBO, the total win is still with Kraken.
Of course you need to factor in other resources like CPU time (multi-threading), bandwidth etc., but Kraken on CPU can then be better than LZ HW.
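The "silly numbers" above can be sketched in a few lines. All figures here are the post's illustrative ones, not real console measurements:

```python
# Total time to get the same asset into memory: a smaller Kraken file decoded
# on the CPU versus a larger zlib/LZ file decoded in hardware.
# Numbers are illustrative only (taken from the example above).

def total_decode_time(file_size_mb: float, seconds_per_mb: float) -> float:
    """Total time to decompress a file at a fixed per-MB rate."""
    return file_size_mb * seconds_per_mb

# Kraken: the asset compresses to 1 MB; CPU decode takes 8 s per MB.
kraken_time = total_decode_time(1.0, 8.0)   # 8.0 s
# Hardware LZ: same asset compresses to 2 MB; HW decode takes 6 s per MB.
lz_time = total_decode_time(2.0, 6.0)       # 12.0 s

# The smaller file wins overall despite the slower per-MB decode.
assert kraken_time < lz_time
```

The point is just that the better ratio can buy back more time than the slower per-MB decode costs.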
Ok I get it now, I didn't know Oodle was actually the umbrella name for their whole API suite, including the BC1-BC7 'Oodle Texture' encoders.
 
I'm lost. Efficiency of zlib PS4/XBO hardware compressor vs CPU, or zlib decompressor (hardware/CPU) vs software kraken on PS4/XBO? Or something else? :runaway:

I can well believe software Kraken decompression is faster in the round than software LZ decompression, because zlib-style LZ is ancient. For previous-gen consoles I think the key comparison is software Kraken decompression versus hardware LZ decompression. And even if software Kraken is faster, is there sufficient CPU headroom to shift from hardware LZ decompression to software Kraken decompression?

And to be clear, I don't know the answer to any of these questions! :nope:
Not to add more confusion but there could be a bottleneck somewhere else. Assuming the LZ hardware in Xbox One can handle 150-200MB/s like the leaks say, it should be limited by the read speed of the hard drive. If the read speed is something like 60-80MB/s but Kraken has a significant compression advantage, then your throughput could be better simply because you move 60MB of data either way, but with Kraken it decompresses to more data.
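That bottleneck argument can be made concrete. A rough sketch, with assumed HDD read speed and illustrative compression ratios (the "~15%" Kraken advantage is mentioned later in the thread; the exact ratios here are made up):

```python
# When the HDD read speed is the bottleneck, you only pull ~60 MB/s off the
# disk either way -- but a better compression ratio means each MB read
# expands to more usable data. All numbers are illustrative assumptions.

def effective_throughput(read_mb_s: float, ratio: float) -> float:
    """Decompressed MB/s delivered when the disk read speed is the limit."""
    return read_mb_s * ratio

hdd_read = 60.0      # MB/s, slow HDD (assumed)
zlib_ratio = 2.0     # 2:1, illustrative
kraken_ratio = 2.3   # ~15% better ratio, illustrative

print(effective_throughput(hdd_read, zlib_ratio))    # 120.0
print(effective_throughput(hdd_read, kraken_ratio))  # ~138, more data per second
```

Same bytes off the disk, more decompressed data out the other end.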
 
I am wondering how much this affected last gen consoles.

200 MB/s worth of decompression on the last-gen Xboxes versus 5 GB/s today on the Series consoles.

I guess we'll see with the Series S, as it's comparable to the last-gen mid-gen consoles but has the ability to move far more data.
 
Kraken over zlib is around a 15% compression advantage from what I recall, assuming you feed both the same data inputs. The real advantage is in the decompression speed.
 
Not to add more confusion but there could be a bottleneck somewhere else. Assuming the LZ hardware in Xbox One can handle 150-200MB/s like the leaks say, it should be limited by the read speed of the hard drive.
150-200 would be the output/write speed. The input speed is, as you say, limited by the read speed from HDD/SSD, but what determines the output speed is the compression ratio. E.g. if you can read at 50MB/s and the compression ratio is 10:1, you'd need to write 500MB/s of decompressed data, so you cannot decompress on the fly because you can't write data fast enough.
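A quick feasibility check along those lines, using the post's own example figures and the rumoured Xbox One output ceiling (both assumptions, not confirmed specs):

```python
# The decompressor's required *output* rate is the input (read) rate times
# the compression ratio, so a high enough ratio can outrun the hardware's
# maximum output speed. Numbers are the thread's illustrative ones.

def required_output_rate(read_mb_s: float, ratio: float) -> float:
    """Decompressed MB/s the hardware must produce to keep up with reads."""
    return read_mb_s * ratio

hw_max_output = 200.0  # MB/s, rumoured XB1 LZ block ceiling (assumed)

# 50 MB/s read at 10:1 would need 500 MB/s of output -- can't keep up.
print(required_output_rate(50.0, 10.0) <= hw_max_output)  # False
# 50 MB/s read at 3:1 only needs 150 MB/s -- fine on the fly.
print(required_output_rate(50.0, 3.0) <= hw_max_output)   # True
```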
 

I was gonna give this a pass until watching this - Linneman is very enthusiastic.
It requires a specific taste.. but luckily you'll know right away if you're down with it or not.. so if it's not your thing, you can always refund.

It's beautiful though.. and has some surprisingly cool mechanics as John shows.

One thing I didn't know though before watching that video was that the PS5 version has 60fps pre-rendered cutscenes, while the PC only has 30fps. That's disappointing. Hopefully that gets patched and included as an option to toggle 30fps or 60fps.
 
One thing I didn't know though before watching that video was that the PS5 version has 60fps pre-rendered cutscenes, while the PC only has 30fps. That's disappointing. Hopefully that gets patched and included as an option to toggle 30fps or 60fps.
On the other hand it must be one of the worst PS5 implementations; a 5700 XT runs this game at around 60fps at 4K.
 
On the other hand it must be one of the worst PS5 implementations; a 5700 XT runs this game at around 60fps at 4K.
It feels like Unity needs a lot of hand-holding on consoles because of automated GC.
 
On the other hand it must be one of the worst PS5 implementations; a 5700 XT runs this game at around 60fps at 4K.
Yeah. PS5 should have been able to hit 1800p@60fps I feel.. but maybe current gen support on Unity just isn't quite there yet on the consoles?

Oh and I really hate the stuttering as you pass checkpoints when the game saves. We should be past that point by now. I just don't understand how they thought that was acceptable. You have this nice smooth flowing seamless world with a camera that does beautiful panning and framing.. and then every time you run past a checkpoint you get this massive stutter.

It's like Horizon ZD on PC when it first launched and the game would stutter every single time the quest updated and the UI came on screen... like who would have thought that was acceptable? lmao.

Hell on PC, I'd just love a "quick save" feature instead of all these checkpoints. Just let ME decide when I'll quicksave and have the game stutter if it has to do it.. instead of every time I come to a new puzzle.
 
Don't understand what? :) How many fps would it have without the 60fps cap? No idea, but 1440p is far from the 4K that a similar-level desktop GPU can manage.

Could be bandwidth, power delivery etc. The 5700 XT is otherwise very comparable to the PS5's GPU; they shouldn't perform too differently.
 
Don't understand what? :) How many fps would it have without the 60fps cap? No idea, but 1440p is far from the 4K that a similar-level desktop GPU can manage.
There you have your answer: you can't compare them, one is uncapped, the other is not. Besides, the Unity engine is historically heavily CPU-limited; it was on PS4. What kind of CPU did they use for that 4K benchmark? A 1700X-2700X clocked at 3.5GHz?
 
There you have your answer: you can't compare them, one is uncapped, the other is not. Besides, the Unity engine is historically heavily CPU-limited; it was on PS4. What kind of CPU did they use for that 4K benchmark? A 1700X-2700X clocked at 3.5GHz?
I think the PS5 CPU is capable enough (and we're talking about the high-resolution modes here, not the low-resolution 120fps mode).
 
I think the PS5 CPU is capable enough (and we're talking about the high-resolution modes here, not the low-resolution 120fps mode).
Have we forgotten how we do proper benchmarks here? People have been doing those for years on PC. It's not about opinion, it's about following some simple rules.
 
Have we forgotten how we do proper benchmarks here? People have been doing those for years on PC. It's not about opinion, it's about following some simple rules.
Not sure what you're implying here, but simply looking at the comparable desktop hardware counterpart, Oddworld is one of the worst PS5 implementations (second to the dynamic checkerboarding in Avengers' performance mode). What the reason is I don't know (IMHO a small, less capable team, but maybe some PS5 limitation).
 