It wouldn't necessarily affect prior architectures until they were ported to LLVM, which is where Vega's support started.
It's in header files intended to be applied to multiple architectures. If that code is compiled and run on the relevant hardware, it would affect the GPU.
With HBCC, Vega could accommodate far larger textures as well, beyond just the larger memory capacities. It may very well not have been an issue before.
It wasn't an issue before because the fields were 64-bit values and the nonsense code generation didn't happen until they tried changing that.
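To illustrate the kind of change being described, here's a rough sketch in C; the names and layout are purely hypothetical and not taken from AMD's actual headers:

    #include <stdint.h>

    /* Hypothetical shared-header definition compiled into every ASIC's
     * driver path. Changing a field's width here changes the generated
     * code everywhere the header is used, not just for the new part. */

    /* Old shape: plain 64-bit fields, so accesses are simple loads/stores. */
    struct gpu_va_entry_old {
        uint64_t base;
        uint64_t flags;
    };

    /* New shape: the same logical fields packed into bitfields. Every
     * access now compiles into shift/mask sequences, which is where
     * surprising code generation can creep in for all consumers. */
    struct gpu_va_entry_new {
        uint64_t base  : 48;
        uint64_t flags : 16;
    };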
Unless the assertion is that Vega's driver and hardware cannot handle 64-bit values, the final structure would appear to be less onerous for hardware to handle than what came before it.
This issue does not seem to have existed early enough to serve as an excuse for Vega's missing features.
Not a patch; just that at the same time Polaris and Vega had conflicting opcodes, FP16 support broke on Polaris in many titles. That was the premise for the discussion a while back regarding Carsten's benchmark issues. No hard evidence, but it seems a likely culprit and indicates the same or a similar compiler being used for the Windows code base.
The opcode conflict I can think of stems from changes in the LLVM ROCm stack, dealing with AMD's decision to reassign the pre-existing higher-level operation names from prior ISA versions to new Vega variants with slightly different semantics.
This required introducing new names for tools like disassemblers to use when referencing the unchanged legacy binary encodings.
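If memory serves, one concrete instance of the pattern looks something like this (mnemonics from memory, so treat them as illustrative rather than as the exact LLVM AMDGPU definitions):

    /* Sketch of the renaming pattern: the old mnemonic is reassigned to a
     * new Vega variant, and the legacy encoding gets a fresh name so the
     * disassembler can still reference it unambiguously. */
    enum gfx9_vop2_rename {
        V_ADD_U32,    /* pre-Vega: add with carry-out; on Vega the same name
                         is reused for a new no-carry add */
        V_ADD_CO_U32  /* new name introduced for the legacy carry-out encoding */
    };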
The problems with purposefully creating a naming collision like that aside:
If doing this for a newer code branch really did damage functionality in pre-existing, functional drivers for a separate, functioning architecture, on titles that were likely using higher-level abstractions; if that wasn't noticed until years after the binary encodings were decided and months to years after the hardware should have been a prototyping target; and if work towards correcting it didn't ramp up until 2017, after the silicon was final and either in production, shipping, or at retail, then that points to a far more massive and possibly fatal problem with RTG.
(edit: Granted, I have serious doubts that it can be that bad. I'd rather chalk the publicly visible indications up to a trailing-edge project coupled with AMD's chronic underinvestment in software or due diligence. The above scenario is crazy.)
Also, here's a new patent filing:
Region-based Image Decompression, which I haven't quite figured out. Fairly broad, but compressed mip-trees for binning/occlusion, or perhaps wireless VR compression? Lossy compression, while not technically accurate, could be a substitute for foveated rendering. It might also be practical for compressing occluded portions of a frame for reprojection, as they do mention motion estimation.
From reading at least part of the text, this is a continuation of 2011 and 2012 filings. If it has VR implications now, it probably didn't at the time.
The method in question has a cycle in the encoding path, which means non-deterministic time to encode and an asymmetry in encode/decode complexity. This was considered problematic for in-line latency-sensitive hardware encode/decode for DCC.
The logic also appears to be data-dependent on the content of a tile, not on its global position relative to something like the region corresponding to the fovea.
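To spell out why the cycle matters, here's a rough sketch of the general shape, entirely hypothetical and not the patent's actual method: an iterative encode loop makes latency content-dependent, while decode stays a fixed-cost single pass.

    #include <stdint.h>
    #include <stddef.h>

    /* Stand-ins for the codec internals; declared only to keep the
     * sketch self-contained. */
    size_t try_compress(const uint8_t *tile, size_t len,
                        uint8_t *out, int quality);
    size_t decompress(const uint8_t *in, size_t len, uint8_t *out);

    /* Encode: loops until the compressed tile fits a budget, so the number
     * of iterations (and therefore latency) depends on the tile's content. */
    size_t encode_tile(const uint8_t *tile, size_t len,
                       uint8_t *out, size_t budget)
    {
        int quality = 100;
        size_t written;
        do {
            written = try_compress(tile, len, out, quality);
            quality -= 5;            /* tighten the rate and retry */
        } while (written > budget && quality > 0);
        return written;
    }

    /* Decode: one fixed-cost pass per tile, regardless of how many encode
     * iterations were needed; hence the encode/decode asymmetry. */
    size_t decode_tile(const uint8_t *in, size_t len, uint8_t *out)
    {
        return decompress(in, len, out);
    }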