"The shadows are sharper on the PS5 vs the SX."

Sharper shadows are normally less compute intensive and less realistic than soft shadows. But that should really be a very tiny difference.
"Sharper shadows are normally less compute intensive and less realistic than soft shadows. But that should really be a very tiny difference."

A GAF user found the answer by comparing against the PC settings via the config file: the cascade setting is higher on PS5 (r.Shadow.CSM.MaxCascades 4 vs 3). The performance difference isn't big, so maybe they will fix it on XSX later.
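For context, that cvar controls the number of cascaded shadow map slices, and on PC Unreal Engine console variables like it can usually be overridden in Engine.ini. A sketch of what that override looks like (the exact file location varies per game, and the values here just mirror the 4-vs-3 difference reported above):

```ini
; Engine.ini -- override the cascaded shadow map cascade count.
; More cascades = higher shadow resolution near the camera (sharper shadows),
; at some extra shadow-rendering cost.
[SystemSettings]
r.Shadow.CSM.MaxCascades=4
```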
"Sharper shadows are normally less compute intensive and less realistic than soft shadows. But that should really be a very tiny difference."

Oh, I was just making an objective observation and not a qualitative one.
Great video separating the different technologies coming together to create the product.
I honestly like these videos much better than the console comparisons which have gotten stale by now. Usually, SX and PS5 perform 99% the same and only console warriors nitpick the differences to declare a "winner". This is what DF should do more often. Tech deep dives, dev interviews, videos about rendering techniques, etc. But I guess it wouldn't generate as many clicks as comparisons which is a shame.
Why not compare GPUs of different architectures (since SER is locked to Ada) that deliver similar raster performance, like a 4070 vs. a 3080, and then measure the performance difference in Overdrive? If SER really makes a difference, the 4070 should be noticeably faster.
"We need more than one data point before coming to any conclusion about SER."

We have more than one.
"I liked the video, and I get that Alex is very excited about this technology (I am too), but I would like a bit more critical thinking next time."

Comparing SER on and off is the only way to actually know its performance differential.
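An on/off comparison on the same GPU boils down to a simple average-uplift calculation over repeated runs. A minimal sketch (the frame rates below are hypothetical placeholders, not measured benchmark data):

```python
from statistics import mean

def uplift_percent(fps_off: list[float], fps_on: list[float]) -> float:
    """Average FPS uplift (%) from toggling one feature on a single GPU.

    Same GPU, same scene, same settings -- only the feature under test
    differs, so the delta can be attributed to that feature alone,
    unlike a cross-architecture comparison.
    """
    return (mean(fps_on) / mean(fps_off) - 1.0) * 100.0

# Hypothetical placeholder runs, NOT real Cyberpunk Overdrive numbers:
ser_off = [30.0, 31.0, 29.0]
ser_on = [37.0, 38.0, 36.0]
print(f"SER uplift: {uplift_percent(ser_off, ser_on):.1f}%")  # 23.3%
```

Averaging several runs matters because single captures are noisy; the isolation argument below is exactly about keeping every other variable fixed.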
"Comparing SER on and off is the only way to actually know its performance differential."

That doesn't matter to the end user. If the 2023 $599 4070 cannot punch above its weight against the three-year-old $699 3080 in an ideal scenario (path tracing with SER enabled), then that feature is worthless.
Comparing two completely separate architectures, hardware, and clocks, where one has SER and the other does not, and using that differential to judge the feature is the definition of a lack of critical thinking.
This has already been tested, and the 4070 is just on par with a 3080 in Cyberpunk's Overdrive mode. Therefore we can assume this feature is more of a marketing gimmick than anything else.
"That doesn't matter to the end user. If the 2023 $599 4070 cannot punch above its weight against the three-year-old $699 3080 in an ideal scenario (path tracing with SER enabled), then that feature is worthless."

Alex was talking about how these modern cards are better adapted to path tracing workloads than Ampere, but we've not seen that based on the data we have (namely, the 4070 is just about as fast as the 3080).
"No one does comparisons in that way. You want to know the value of a singular component; all other factors must be isolated."

To be fair, there were plenty of DF PC GPU vs. console comparisons using the top CPU on the market ;d
"To be fair, there were plenty of DF PC GPU vs. console comparisons using the top CPU on the market ;d"

Yes, which is doing exactly what @iroboto said: isolating the GPU as the point of comparison by removing any possibility of a CPU bottleneck on the PC side. Obviously the chance still exists that there is a CPU bottleneck on the console side (although a well-developed game, particularly if it's using DRS, should balance the usage of the two components pretty well on a console), but at least by removing any potential bottleneck on the PC side you're reducing the number of unknowns from two to one.

"Yes, which is doing exactly what @iroboto said: isolating the GPU as the point of comparison by removing any possibility of a CPU bottleneck on the PC side."

With the possible result of ending up with the wrong idea: that you don't need a strong GPU, when the majority won't have as powerful a CPU and will be more limited in that respect. I've written it many times so I won't repeat myself, but using a mid-range GPU with a high-end CPU can create a false representation of what real performance will be on a mid-range PC.