Indeed, 232mm² on Samsung/GloFo's smaller node process implies somewhere around 2560-3072 of AMD's "compute" units. If we just naively scale shader count and clockspeed up to the same speed as a 1080, we land anywhere between 3% and 25% faster than a Fury X, not counting efficiency gains from the new architecture (rough math below). But at the moment Nvidia's own gains, beyond "faster than a Titan X", are fantastically obfuscated. An almost markerless chart using just 2 actual titles, compared against the 980 (instead of the much closer 980 Ti), isn't much help beyond saying "it's faster than the 980 Ti, but not by a fantastic margin". So any actual comparison will have to wait for reviews of both, probably. Still, good job to Nvidia: while they've stretched the card comparisons a fair bit, consumers are buying it at the moment, so kudos to their PR team again.
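For what it's worth, here's the naive scaling I mean as a quick Python sketch. The Fury X figures (4096 shaders at 1050MHz) and the 1080's ~1733MHz boost clock are the known numbers; treating the 2560-3072 figure as stream processor counts for the smaller chip is my own guess.

```python
# Naive throughput scaling: shaders x clock, ignoring any architecture/efficiency changes.
# Assumed figures: Fury X = 4096 shaders @ 1050 MHz, GTX 1080 boost ~= 1733 MHz.
# The 2560 / 3072 shader counts for the ~232mm^2 chip are guesses, not confirmed specs.

fury_x = 4096 * 1050

for shaders in (2560, 3072):
    polaris = shaders * 1733  # scaled up to 1080-ish clocks
    gain = polaris / fury_x - 1
    print(f"{shaders} shaders @ 1733 MHz: {gain:+.0%} vs Fury X")

# -> roughly +3% and +24%, which is where the "3% to 25% faster" range comes from.
```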
Still, if we go by this we can guess:
Going by the 980 Ti as a benchmark instead of the 980, we can guess at around a 10%-18% performance boost in those 2 titles. Though since they've fixed async compute, the gap should be larger in titles that use it (really just Ashes of the Singularity and Hitman so far, but more to come).
There are a bunch of acronyms thrown around that get confusing, even if you try to research them. "Deep color", in the case of Alien Isolation and a few other things, is just 10-bit per channel sRGB output, which can give you less banding in dark areas if you look hard, but that's about it. HDR, as others have said, is something different: DCI-P3 to Rec. 2020 gamut colorspace output, with 10-12 bit per channel color depth, 1k to 10k nits peak brightness, and a variable minimum brightness below 1 nit (what's the standard? Is there one yet? This shit is too hard to find at the moment).
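To make the banding point concrete, here's a toy Python sketch (my own illustration, not anything from a spec, and using plain linear quantization rather than the actual sRGB curve) counting how many distinct steps 8-bit vs 10-bit output gives you across a dark part of the range:

```python
# Toy illustration of why 10-bit per channel means less banding than 8-bit.
# Quantize a smooth gradient over the darkest 10% of the range at each bit depth
# and count how many distinct output levels survive - fewer levels = more visible bands.

def distinct_levels(bit_depth, lo=0.0, hi=0.1, samples=10000):
    levels = (1 << bit_depth) - 1  # 255 for 8-bit, 1023 for 10-bit
    values = {round((lo + (hi - lo) * i / samples) * levels) for i in range(samples + 1)}
    return len(values)

for bits in (8, 10):
    print(f"{bits}-bit: {distinct_levels(bits)} distinct steps in the darkest 10% of the range")

# -> about 4x as many steps at 10-bit (roughly 27 vs 103 over the same dark span).
```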