... I think that's another reason to use checkerboard instead of dynamic resolution. ...
Why have dynamic or checkerboard? Can you not have dynamic checkerboard?
... I think that's another reason to use checkerboard instead of dynamic resolution. ...
Unless there is dynamic checkerboard resolution, I don't think they are quite the same.

Nice. They said they can't tell the difference at 3 or 4 feet from a 60" TV. Much less noticeable than the difference between 900p and 1080p.
Not surprising, since it appears only during panning, and the motion resolution is horrible in 4K anyway. Resolving the full 4K visually requires the image to be practically still at 30fps. I think that's another reason to use checkerboard instead of dynamic resolution. It should provide a much better image when it counts, and artifacts happen at such a small scale they would be masked by motion blur, since they only appear during significant motion.
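For anyone who wants to picture how the reconstruction works, here's a toy numpy sketch of the basic checkerboard idea (purely illustrative, not Sony's actual algorithm): each frame shades half the pixels in an alternating pattern and fills the holes from the previous frame, so a static shot converges to full resolution within two frames, while motion only disturbs half the samples at a time.

```python
# Toy checkerboard reconstruction: shade half the pixels per frame in an
# alternating pattern, fill the other half from the previous frame's output.
# Purely illustrative numpy code, not the PS4 Pro's actual reconstruction.
import numpy as np

H, W = 8, 8
full = np.arange(H * W, dtype=np.float32).reshape(H, W)  # stand-in for a fully shaded image

yy, xx = np.mgrid[0:H, 0:W]
history = np.zeros((H, W), dtype=np.float32)             # previous frame's output

for frame in range(2):
    shaded = ((xx + yy + frame) % 2 == 0)                # which half gets fresh shading
    history = np.where(shaded, full, history)            # holes are filled from history

# With a static image, every pixel holds a freshly shaded value after 2 frames.
assert np.array_equal(history, full)
```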
...
Checkerboard looks to be a very effective way of generating a high-resolution image with fewer resources, but it doesn't save you from poor frame rates.
Days Gone uses both on Unreal 4. At the least, that code can probably work its way over to the main PS4 branch.

Also agree, it shouldn't be too bad, but integration of checkerboard rendering and HDR would require more effort. Well, the checkerboard will obviously be the larger effort, I think.
As we understand it, with the new enhancements, it's possible to complete two 16-bit floating point operations in the time taken to complete one on the base PS4 hardware.
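To make the "two FP16 ops in the time of one" idea concrete: two half-precision values fit in the bits of one 32-bit register, which is what lets a double-rate ("packed") FP16 ALU issue a pair per lane. A quick numpy illustration of the packing side of it (the hardware behaviour is my reading of the claim; the code only shows the storage):

```python
# Two FP16 values occupy exactly the storage of one FP32 value, which is the
# basis for "packed" double-rate FP16 math on hardware that supports it.
import numpy as np

pair = np.array([1.5, -2.25], dtype=np.float16)       # two half-precision values
packed = pair.view(np.uint32)                         # ...viewed as a single 32-bit word
print(hex(packed[0]))                                 # both 16-bit patterns side by side (endian-dependent)

assert np.array_equal(packed.view(np.float16), pair)  # unpacking is lossless
```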
On GCN 3 & 4 the only FP16 support in the shader cores is for storing FP16 data as FP16, rather than having to promote it to FP32 (i.e. it halves register pressure). GCN 3/4 don't have any fast FP16 math modes; FP16 operations are processed at the same speed as FP32 math.

EDIT: OK, apparently FP16 is already available on the RX 480.
Thanks.

On GCN 3 & 4 the only FP16 support in the shader cores is for storing FP16 data as FP16, rather than having to promote it to FP32 (i.e. it halves register pressure). GCN 3/4 don't have any fast FP16 math modes; FP16 operations are processed at the same speed as FP32 math.
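Even without fast math modes, that storage-only FP16 support is worth something. A quick sketch of the footprint side of it (host-side numpy arrays standing in for shader registers, so treat it as an analogy):

```python
# Keeping values as FP16 halves the footprint (and, analogously, register
# pressure in a shader) even when the arithmetic itself still runs at FP32 rate.
import numpy as np

n = 1 << 20
as_fp32 = np.random.rand(n).astype(np.float32)
as_fp16 = as_fp32.astype(np.float16)

print(as_fp32.nbytes // 1024, "KiB as FP32")  # 4096 KiB
print(as_fp16.nbytes // 1024, "KiB as FP16")  # 2048 KiB
```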
They talk about lower power and reduced memory usage for GCN 4.0, not double speed. See @Ryan Smith's post. Seems like FP16 at 2x speed (compared to FP32, or even GCN 4.0 FP16) could be specific to Pro.

I always hate this kind of journalism. Either dig deeper or don't make statements like "Of course, we already knew that the Pro graphics core implements a range of new instructions - it was part of the initial leak - but we didn't really know exactly what they could actually do."
When I see this:
The Polaris GPUs are also capable of native FP16 and Int16 support. This allows FP (Floating Point) performance at half the rate of single precision which is better tuned for graphics, computer vision and data learning markets. The use of FP16 results in lower power compared to FP32 compute and also reduced memory/register usage.
(http://wccftech.com/amd-rx-480-polaris-10-full-slide-deck-leak/)
It sounds like the Pro uses GCN 4.0 CUs and has FP16 (and Int16?) support, giving you 2x performance at half the precision.
But I'm not an expert.
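If the 2x FP16 rate is real, the back-of-envelope math is simple. Using the commonly cited Pro GPU figures (36 CUs, 64 lanes per CU, ~911 MHz, 2 FLOPs per lane per clock for an FMA), this is just peak-rate arithmetic, not a benchmark:

```python
# Rough peak-throughput arithmetic for the Pro GPU, assuming the commonly
# cited figures (36 CUs, 64 lanes/CU, 911 MHz) and double-rate packed FP16.
cus, lanes, clock_ghz, flops_per_fma = 36, 64, 0.911, 2

fp32_tflops = cus * lanes * flops_per_fma * clock_ghz / 1000
fp16_tflops = 2 * fp32_tflops  # two FP16 ops in each FP32 slot

print(f"FP32: {fp32_tflops:.1f} TFLOPS, FP16: {fp16_tflops:.1f} TFLOPS")
# -> FP32: 4.2 TFLOPS, FP16: 8.4 TFLOPS
```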
Thanks.
They talk about lower power and reduced memory usage for GCN 4.0, not double speed. See @Ryan Smith's post. Seems like FP16 at 2x speed (compared to FP32, or even GCN 4.0 FP16) could be specific to Pro.
FP16 is good for storing most lighting values, but it certainly isn't good enough to calculate lighting with.

Maybe a stupid question, but if you use FP16, will you notice? In the end you have to map to HDR10 (when enabled) with 10 bits per component and the Rec. 2020 color space.
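For a rough feel of that question, here's a toy numpy check (illustrative only): a single FP16 round-trip barely moves a 10-bit output code, but long chains of FP16 math can drift much further, which is where the "not good enough to calculate with" worry comes from.

```python
# A single FP16 round-trip moves a 10-bit output code by at most one step
# (FP16 keeps ~11 significant bits), but accumulating many FP16 terms drifts.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000).astype(np.float32)                # values already in [0, 1]

to10 = lambda v: np.round(np.clip(v, 0.0, 1.0) * 1023).astype(np.int32)
diff = np.abs(to10(x) - to10(x.astype(np.float16).astype(np.float32)))
print("10-bit codes changed:", np.count_nonzero(diff), "max step:", diff.max())

# Accumulation is where FP16 bites: once the running sum is large enough,
# small contributions round away entirely and the total stalls.
acc = np.float16(0.0)
for _ in range(4096):
    acc = acc + np.float16(0.001)
print(float(acc), "vs exact", 4096 * 0.001)               # stalls near 4.0 instead of 4.096
```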
An interesting Sebbi post related to FP16 shaders:

FP16 is good for storing most lighting values, but it certainly isn't good enough to calculate lighting with.
I would expect FP/INT16 to be used for things like SSAO passes, vertex tricks/culling and such, anywhere low value range and some rounding errors are acceptable.
Sometimes it requires more work to get lower precision calculations to work (with zero image quality degradation), but so far I haven't encountered big problems in fitting my pixel shader code to FP16 (including lighting code). Console developers have a lot of FP16 pixel shader experience because of PS3. Basically all PS3 pixel shader code was running on FP16.
It still is very important to pack the data in memory as tightly as possible, as there is never enough bandwidth to spare. For example, 16 bit (model space) vertex coordinates are still commonly used, the material textures are still DXT compressed (barely 8 bit quality), and the new HDR texture formats (BC6H) commonly used in cube maps have significantly less precision than a 16 bit float. All of these can be processed by 16 bit ALUs in the pixel shader with no major issues. The end result will still eventually be stored to an 8 bit per channel back buffer and displayed.
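As a concrete example of the "16 bit (model space) vertex coordinates" packing mentioned above, here's a small numpy sketch of one common flavor, positions stored as normalized 16-bit integers over the mesh's bounding box (exact formats vary per engine; this is just illustrative):

```python
# Quantize model-space positions to normalized 16-bit integers over the mesh
# bounds: half the memory of FP32 positions, with sub-millimeter error for a
# roughly 10 m object. Illustrative only; real engines vary in the details.
import numpy as np

rng = np.random.default_rng(1)
verts = rng.random((10_000, 3)).astype(np.float32) * 10.0 - 5.0     # ~10 m object

lo, hi = verts.min(axis=0), verts.max(axis=0)
q = np.round((verts - lo) / (hi - lo) * 65535.0).astype(np.uint16)  # 6 bytes per position
deq = q.astype(np.float32) / 65535.0 * (hi - lo) + lo               # decode (e.g. in the vertex shader)

print("bytes per position:", 3 * q.itemsize, "vs", 3 * verts.itemsize)
print("worst-case error (m):", float(np.abs(deq - verts).max()))    # ~0.0001 m
```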