Maybe, but unified or not, it's still supported for Vega, even though you could argue that Vega is the only GPU with real (so to speak) FP16 support in AMD's line-up.
Vega probably uses a forked driver branch until it's stable. If that branch had been merged into main-line, it might have broken all the other platforms. Just a possibility. (But I think I withdraw the suggestion based on the thought below.)
Fiji doesn't seem to have full FP16 support; I haven't seen the driver generate FP16 arithmetic ops, just logical ops. I haven't looked at Polaris assembly, and I don't even know if I can now that the flag has been dropped.
What's kind of weird is that the flag might have been allowed in the past as "the driver can eat min16float, so all is fine", and now Microsoft says "no FP16 arithmetic? no IEEE FP16 standard behaviour? drop the flag, please". Could be because results between Vega and the other chips would differ (although in predictable ways). If that's the case, I'd assume min16uint could still be possible, as the arithmetic can hardly be imprecise; 24-bit int has been hardware supported for ages.
Now, I wouldn't exclude the possibility that the driver team actually makes the FP16/32 mixed stuff IEEE compliant, who knows? Packing is still supported as an independent intrinsic pair, f16tof32/f32tof16, and it should be safe to assume that the GCN instruction doing that is IEEE conformant.
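Just to illustrate the distinction between packing and arithmetic I mean (a minimal HLSL sketch, the buffer names are made up):

```hlsl
// f16tof32/f32tof16 convert between float and a 16-bit half stored in the
// low bits of a uint; that conversion is independent of whether the ALU
// actually does FP16 arithmetic.

Buffer<uint>   packedInput  : register(t0);  // hypothetical packed half data
RWBuffer<uint> packedOutput : register(u0);

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Unpack: the conversion itself is exact, so IEEE conformance here is
    // only about how the half value is interpreted.
    float a = f16tof32(packedInput[id.x]);

    // min16float only *allows* the driver to evaluate at 16-bit precision;
    // on hardware without FP16 ALUs it's legal to run this at full FP32,
    // which is where results between Vega and the other chips could differ.
    min16float b = (min16float)a * (min16float)2.0 + (min16float)0.5;

    // Repack to half for storage.
    packedOutput[id.x] = f32tof16((float)b);
}
```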
It might be possible to see if GLSL with lowp/mediump is still producing FP16 code using RGA/CodeXL. Oh, BTW, RenderDoc has RGA integrated now. Maybe that makes it a lot easier to test this, no more mad batch files.
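For instance, a tiny shader like this (just a sketch, the constant is made up; a GLSL version with mediump would be the analogous test), fed through RGA or the RenderDoc integration, should make it obvious from the ISA whether FP16 ops show up or everything gets promoted to FP32:

```hlsl
// Minimal pixel shader to inspect the generated GCN ISA: if the compiler
// still honours 16-bit precision, the multiply and add below should appear
// as v_mul_f16 / v_add_f16 (or packed Vega variants) instead of _f32 ops.

min16float4 scaleBias;   // hypothetical constant, just to force arithmetic

min16float4 PSMain(min16float4 color : COLOR0) : SV_Target
{
    return color * scaleBias.x + scaleBias.y;
}
```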