How much power does the Vega 64’s 8 GB of HBM2 use? According to the informative memory video above, that figure would then be multiplied by 7-9x for 16 GB of GDDR6.
I think the figures often center on the power cost of the interface itself, when discussed from the memory module's point of view. From a board perspective, there are elements like the memory controller and the memory chips themselves that scale differently.
The memory controller's power likely has a component that scales with the voltage of the interface, while the memory arrays and modules themselves can run at different voltage levels than the data lines. The DRAM arrays on the chips are relatively consistent across memory types, so their power consumption tends to be similar.
I'm not sure about the exact figures for Vega 64, and there may be some penalties depending on whether a given board's HBM2 obeys the voltage specification for the type. Many Vega 64 boards ran their HBM2 stacks above spec, most likely the boards with Hynix stacks made before a shift to a newer manufacturing process.
https://www.gamersnexus.net/hwreviews/3020-amd-rx-vega-56-review-undervoltage-hbm-vs-core
From the above, the two components on the board that drive memory are a VDDM VRM phase dedicated to the HBM stacks and a separate VDDCI phase dedicated to the memory controller.
The VDDM phase was given a range of 10 to 30 amps at an assumed 1.2 V (hoping it wasn't the over-specced Hynix memory), with the assumption that at standard settings it would draw 10-20 amps.
The VDDCI phase for the memory controller was a 10 amp device, and this is the one that is several times smaller than a corresponding GDDR system.
(10 to 20 A) * 1.2 V + 10 A * 0.9 V (the 0.9 V figure is a bit iffy, but at least part of the memory controller depends on that setting).
This may slot somewhere between 25-35W, with unknown but probably sizeable margins of error.
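For what it's worth, that back-of-envelope arithmetic can be sketched as follows (the phase ratings and both voltages are the assumptions from above, not measurements):

```python
# Rough Vega 64 HBM2 subsystem power from the assumed phase loads above.
vddm_amps_low, vddm_amps_high = 10, 20  # VDDM load at standard settings
vddm_volts = 1.2                        # assumed HBM2 voltage (higher on over-spec Hynix)
vddci_amps = 10                         # VDDCI phase for the memory controller
vddci_volts = 0.9                       # iffy: only part of the controller hangs off this rail

low = vddm_amps_low * vddm_volts + vddci_amps * vddci_volts
high = vddm_amps_high * vddm_volts + vddci_amps * vddci_volts
print(f"~{low:.0f}-{high:.0f} W before error margins")  # ~21-33 W
```

The raw numbers land a touch below the 25-35 W range quoted, which is well within the stated margins of error.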
A different analysis of the Radeon VII with double the stacks has two VDDM phases and a 20 amp VDDCI phase. The memory power delivery was speculated to be oversized for the Radeon VII in order to accommodate a 32 GB board, although I am unsure for reasons I'll go into next.
As far as 16GB of GDDR6 goes, you may need to specify how that capacity is reached. As noted, there's a component of power consumption that scales with the width and speed of the memory bus, and another that scales more closely with the device count.
A 256-bit GDDR6 bus can get to 16GB several ways. A 256-bit bus allows for 8 chips, which can reach 16GB if you splurge on a 16Gb-density version of GDDR6, assuming one is available in 2020. If not, existing 8Gb GDDR6 chips can be used, 16 of them in clamshell mode, to reach 16GB.
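As a sketch, here are both 256-bit routes to 16GB (chip counts and densities are the assumed configurations above):

```python
bus_width_bits = 256
chip_io_bits = 32            # standard GDDR6 device interface width

# Route 1: 16 Gb-density chips, one device per 32-bit slice of the bus
chips = bus_width_bits // chip_io_bits        # 8 devices
capacity_gb = chips * 16 // 8                 # 16 GB, if 16 Gb parts exist

# Route 2: clamshell with 8 Gb chips, two devices share each 32-bit slice
clam_chips = 2 * chips                        # 16 devices, each on half-width (x16)
clam_capacity_gb = clam_chips * 8 // 8        # 16 GB from existing parts
print(capacity_gb, clam_capacity_gb)
```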
The power budget that varies most between GDDR6 and HBM is the speed and width of the memory bus, and would be mostly the same between the 256-bit GDDR6 possibilities, assuming constant speeds.
Part of the power budget of the GDDR6 devices is likely bound to the higher interface speed per device, with the rest of the budget being per-chip elements and their DRAM arrays.
Capacity-based power consumption has been shown to be very small.
Rather than being dominated by the size of the DRAM arrays, it's how active they are that matters--and that scales with the overall bandwidth of the system.
So if we take the 3.5x figure for GDDR6 versus HBM2 from the earlier video back to Vega 64, that pushes the VDDCI requirement to the equivalent of 3-4 of those 10-amp phases, while the memory-module supply might grow by perhaps one additional phase. The 2080 Ti has 50% more memory channels and a significantly overspecced memory power delivery setup; a GamersNexus evaluation of the 2080 Ti's PCB speculates that its loadout of GDDR6 would top out at ~30W for the devices in aggregate.
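Hypothetically applying that 3.5x interface-power multiplier to the Vega 64 VDDCI figure (all inputs are the assumptions above, not measurements):

```python
import math

hbm_vddci_amps = 10          # Vega 64 memory-controller phase from earlier
interface_scale = 3.5        # GDDR6-vs-HBM2 interface power figure from the video

gddr6_vddci_amps = hbm_vddci_amps * interface_scale   # 35 A
equivalent_phases = math.ceil(gddr6_vddci_amps / 10)  # 4, in the 3-4 ballpark above
print(gddr6_vddci_amps, equivalent_phases)
```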
Clamshell does raise the number of devices, but at the same time each one uses half the interface width and its arrays will see about half the activity versus a single module serving at full bandwidth.
Maybe the ceiling goes to 60-80W, and that's going by the specifications for higher-end GPU boards. Bringing it closer to 50W rather than 80W for a 256-bit board seems reasonable, and so maybe 20-30W savings if the memory systems are otherwise comparable.
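A quick differencing of the two guesses (all hypothetical figures from above; the exact savings depend on where each system actually lands in its range):

```python
hbm2_w = (25, 35)    # Vega 64 HBM2 estimate from earlier
gddr6_w = 50         # plausible 256-bit GDDR6 figure, below the 60-80 W ceiling

savings = (gddr6_w - hbm2_w[1], gddr6_w - hbm2_w[0])
print(f"~{savings[0]}-{savings[1]} W")  # broadly consistent with the ~20-30 W guess
```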
What calendar? By the standard one, that sounds too late. Last time, going from the start of silicon mass production to assembled consoles moving out through the supply chain took on the order of 6 months.