There's Raja in the Architecture Day stream talking (with a smile) about still having scars on his back from trying to bring expensive memory like HBM to gaming at least twice (timestamp 1:26:48).
I believe Fiji was a pipecleaner for HBM. AMD co-financed and co-developed HBM for years, so they had to use it somewhere at some point to prove the concept; that's why it went into Fiji despite the capacity limit. My guess is he's talking about Vega 10 and Kaby Lake-G.
As for Vega 10, there are a lot of clues pointing to Raja / RTG planning for the chip to clock a whole lot higher than it ever did. At an average of 1750MHz (basically the same clock as GP102, with a similar die size and a supposedly comparable process to 16FF), a full Vega 10 at the standard ~1.05V vcore would have been sitting much closer to the 1080 Ti (like the Radeon VII does), which at the time sold for more than $700.
Even their HBM2 clocks came up short of what they predicted, as SK Hynix (with whom AMD developed HBM, and who would probably have supplied them the memory for significantly cheaper than Samsung) couldn't deliver standard 2Gbps HBM2 to them, and only Samsung got close at the time.
Had Vega 10 clocked like AMD planned from the beginning, they'd have had 64 CUs @ 1750MHz and 512GB/s of bandwidth (not to mention some of the stuff that never worked out as planned, like the primitive shaders), with a performance level that would have let them sell the card for over $700. Instead they had to market it against the GTX 1080 for less than $500, which in turn meant much lower profit margins.
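For a rough sanity check on those numbers, here's a minimal back-of-the-envelope sketch. The 2-stack / 1024-bit-per-stack HBM2 layout, 64 SPs per CU, and the ~1.89Gbps shipped pin speed are my assumptions for illustration, not figures from the post:

```python
# Back-of-the-envelope check on the Vega 10 numbers above.
# Assumptions (mine, not from the post): 2 HBM2 stacks, 1024-bit bus per stack,
# 64 stream processors per CU, 2 FLOPs per SP per clock (FMA).

def peak_fp32_tflops(cus, clock_mhz, sp_per_cu=64, flops_per_sp=2):
    """Theoretical FP32 throughput in TFLOPS."""
    return cus * sp_per_cu * flops_per_sp * clock_mhz * 1e6 / 1e12

def hbm2_bandwidth_gbs(stacks, pin_speed_gbps, bus_width_per_stack=1024):
    """Aggregate memory bandwidth in GB/s."""
    return stacks * bus_width_per_stack * pin_speed_gbps / 8

print(peak_fp32_tflops(64, 1750))    # ~14.3 TFLOPS at the planned 1750MHz
print(peak_fp32_tflops(64, 1400))    # ~11.5 TFLOPS at a ~1400MHz average (illustrative)
print(hbm2_bandwidth_gbs(2, 2.0))    # 512 GB/s at the standard 2Gbps pin speed
print(hbm2_bandwidth_gbs(2, 1.89))   # ~484 GB/s at the ~1.89Gbps Vega 64 actually shipped with
```

Under those assumptions, the planned part lands in 1080 Ti territory on paper, while the shipped clocks drop it back toward the GTX 1080.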
Of course, shortly after Vega came out, the crypto craze took off, ballooning the prices of every AMD card out there, so in the end it didn't turn out so badly.
So, just to get to my point: I don't think Raja's mistake was implementing HBM in consumer cards. It was implementing HBM in consumer cards that failed to meet their performance targets. I guess if Pascal chips had hit a power consumption wall above ~1480MHz, NVIDIA's adoption of GDDR5X would have been considered a mistake as well. Though a lesser one, since they could always have scrapped the GDDR5X versions and used GDDR5 for everything, of course.
It was a problem of implementation cost vs. average selling price of the final product. Apple seems to be pretty content with HBM2 on their exclusive Vega 12 and Navi 12 laptop GPUs, for example.