That depends on what they're willing to count in the total.
AMD's presentation for Vega 10 stated it had 45 MB of SRAM across the entire chip, though I don't think there's ever been a full public accounting of that figure.
A 56-CU GPU would have ~14.7 MB just for the register files, ~3.67 MB for LDS, and ~2.29 MB for the L0, instruction, and scalar caches, plus maybe 4 MB or more for the L2. I'm not certain at this point about the L1, but if this is a 4-shader-array GPU, that's another 0.5 MB.
There could be other caches like the parameter cache, which was 1 MB for Scorpio and might be higher for the next gen.
Adding up all these known entities pushes the buffer total into the same neighborhood as the 38 MB figure, so the question is how much of the "other" SRAM that wasn't detailed for Vega is included in Arden's total.
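For what it's worth, the tally above can be reproduced from per-structure sizes. The per-CU/per-WGP figures here are my assumptions (RDNA-style: 256 KiB vector registers and 64 KiB LDS per CU, 16 KiB L0 per CU, a 32 KiB instruction cache and 16 KiB scalar cache per WGP, 128 KiB L1 per shader array), as is the guess that the total also counts an 8-core Zen 2 CPU cluster's L2 and L3:

```python
KiB, MiB = 1024, 1024 * 1024
MB = 1e6  # the per-block figures quoted above look like decimal megabytes

cus, wgps, arrays = 56, 28, 4  # 56 CUs paired into 28 WGPs, 4 shader arrays (assumed)

gpu = {
    "register files":  cus * 256 * KiB,                 # vector regs per CU (assumed)
    "LDS":             cus * 64 * KiB,
    "L0 + I$ + K$":    cus * 16 * KiB + wgps * (32 + 16) * KiB,
    "L1":              arrays * 128 * KiB,
    "L2":              4 * MiB,                          # "maybe 4 MB or more"
    "parameter cache": 1 * MiB,                          # Scorpio-like guess
}
# 8-core Zen 2 cluster: 512 KiB L2 per core plus 8 MB of total L3 (assumed)
cpu = 8 * 512 * KiB + 8 * MiB

for name, size in gpu.items():
    print(f"{name}: {size / MB:.2f} MB")
total = sum(gpu.values()) + cpu
print(f"GPU subtotal: {sum(gpu.values()) / MB:.2f} MB, with CPU caches: {total / MB:.2f} MB")
```

That lands at ~26.4 MB for the GPU blocks and ~39 MB once CPU caches are counted, which is how I get to "a similar amount as the 38 MB figure" with everything else being "other".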
The standards can outline a fair amount of detail about the DIMM and the ECC scheme adopted, and the industry would be unusually consistent in implementing all of that if there were no standard behind it. Wouldn't JEDEC have a provision for this for DDR4 DIMMs?
GDDR is more like HBM in that a single device is the endpoint of a channel or set of channels. For HBM, there was a provision for ECC in the standard.
GPUs did initially offer GDDR5 ECC by setting aside capacity and spending additional memory accesses, though this was a less-than-complete solution for GDDR5, as that standard didn't protect the address and command buses.
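As a sketch of that carve-out approach: check bits computed in the memory controller get stored in a reserved slice of the ordinary memory array, which is where the capacity loss and the extra accesses come from. Here's a textbook SECDED Hamming(72,64) code as a stand-in; the actual codes those GPUs used were vendor-specific and aren't public, so treat this as illustrative only:

```python
# Toy SECDED (single-error-correct, double-error-detect) over a 64-bit word.
# 7 Hamming parity bits + 1 overall parity bit = 8 check bits per 64 data
# bits, i.e. the ~12.5% capacity carve-out seen on GDDR5-era ECC GPUs.
PARITY_POSITIONS = [1, 2, 4, 8, 16, 32, 64]

def encode(data: int) -> int:
    """Pack 64 data bits into a 72-bit codeword."""
    bits, d = {}, 0
    for pos in range(1, 72):                 # data fills non-power-of-two slots
        if pos & (pos - 1) == 0:
            continue
        bits[pos] = (data >> d) & 1
        d += 1
    for p in PARITY_POSITIONS:               # each parity covers positions with bit p set
        bits[p] = 0
        bits[p] = sum(b for pos, b in bits.items() if pos & p) & 1
    word = 0
    for pos, b in bits.items():
        word |= b << (pos - 1)
    overall = bin(word).count("1") & 1       # overall parity enables double detection
    return word | (overall << 71)

def decode(word: int) -> int:
    """Return the 64 data bits, correcting one flipped bit if present."""
    syndrome = 0
    for p in PARITY_POSITIONS:
        parity = sum((word >> (pos - 1)) & 1 for pos in range(1, 72) if pos & p) & 1
        if parity:
            syndrome |= p                    # syndrome spells out the error position
    overall = bin(word).count("1") & 1
    if syndrome and overall:
        word ^= 1 << (syndrome - 1)          # single-bit error: flip it back
    elif syndrome and not overall:
        raise ValueError("double-bit error detected")
    data, d = 0, 0
    for pos in range(1, 72):
        if pos & (pos - 1) == 0:
            continue
        data |= ((word >> (pos - 1)) & 1) << d
        d += 1
    return data

word = encode(0xDEADBEEFCAFEF00D)
assert decode(word ^ (1 << 10)) == 0xDEADBEEFCAFEF00D  # one flipped bit corrected
```

Note this only protects the data payload, which is exactly the GDDR5 shortcoming above: a corrupted address or command on the unprotected buses is invisible to the code.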
GDDR6 is at least somewhat better protected due to the error detection on those buses, though I'm curious what use case Microsoft has for ECC on a console APU. Is there a more high-end workload that would justify such a scheme, a concern about higher penalties from bit corruption due to memory encryption/compression, or maybe a mitigation against rowhammer attacks?