Regarding boost/powertune/turbo, while it is definitely a can of worms, it's ultimately unavoidable. As these cards are increasingly power-limited, you enter the space where you can't turn the whole chip on at once. If you design your chip to run at the same clocks in Furmark as in a game, you're going to leave a lot of useful performance on the table.
This is no different from the situation CPUs have been in for the last couple of years, particularly ultra-mobile parts (15 W and below).
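To make that concrete, here's a minimal sketch of a power-budget governor (Python, purely illustrative: the clock table, power numbers, and budget are invented, and no vendor's boost/PowerTune/Turbo logic actually works from a hard-coded table like this). The point is just that the less of the chip a workload lights up, the higher it can clock within the same budget.

```python
# Toy power-budget boost governor -- an illustration of the concept only,
# not any vendor's actual boost/PowerTune/Turbo algorithm. The power model
# and numbers are made up for the example.

# Available clock states (MHz) and a crude estimate of power per state at
# full utilization (W). Real hardware measures or estimates power instead
# of guessing from a table.
P_STATES = [(500, 80), (700, 120), (900, 170), (1000, 210)]
POWER_BUDGET_W = 180  # board power limit

def pick_clock(utilization: float) -> int:
    """Pick the highest clock whose estimated power fits the budget.

    utilization: fraction of the chip's units the workload keeps busy (0..1).
    A typical game frame leaves headroom for high clocks; a power virus
    like Furmark lights up everything and forces a lower clock.
    """
    best = P_STATES[0][0]
    for clock, full_load_power in P_STATES:
        estimated_power = full_load_power * utilization
        if estimated_power <= POWER_BUDGET_W:
            best = clock
    return best

print(pick_clock(0.75))  # game-like load -> boosts to 1000 MHz
print(pick_clock(1.00))  # Furmark-like load -> capped at 900 MHz
```

Real implementations re-evaluate this continuously from measured power or activity counters, but the trade-off is the same one described above: fixing the clocks at the Furmark-safe level throws away the headroom that lighter workloads leave on the table.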
I'd push the time frame back further and say that GPUs passed that point generations ago, likely before Furmark was even a thing.
The notable thing was just how exceptionally primitive the protections were shown to be, with hacky driver blacklists and cards killing themselves on demanding applications (or, more recently, StarCraft 2's menu screen).
For the desktop market, I think CPUs were definitively past that point with Prescott and Sledgehammer/Barcelona at the latest, so roughly the 90/65nm time frame. One could debate whether their predecessors had already been pushed past it by power-virus software, but at those nodes it was irrevocably past the inflection point. And that's even after the market's willingness to keep bumping TDPs and the massive number of bins and SKUs used to soak up every part of the yield curve.
My perception of the gap is that the last desktop CPU that could be forced to kill itself thermally was the Athlon XP, whose thermal failsafe required some motherboard support, and whose thermal-diode-triggered shutdown might not be fast enough if the user ripped off the heatsink.
I sadly know from personal experience how much easier it was for the Thunderbird core to kill itself.
The P4, for all the bad press it got for throttling, was the proper first step toward on-chip, autonomous thermal controls that react faster than the silicon can kill itself.
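For a rough idea of what "autonomous and faster than the silicon can kill itself" means, here's a toy thermal-control loop (Python; the temperatures, behavior, and thermal model are invented, and this is not Intel's actual Thermal Control Circuit): a trip point with hysteresis that throttles before heat becomes destructive, plus a hard shutdown as the last resort.

```python
# Toy thermal-control loop in the spirit of on-die throttling: throttle at a
# trip point, hold the throttle until the die cools past a lower resume
# threshold, and shut down hard before temperatures become destructive.
# All numbers here are invented for illustration.

TRIP_C = 95        # start throttling here
RESUME_C = 85      # hysteresis: resume full speed only below this
THERMTRIP_C = 125  # hard shutdown before the silicon cooks itself

def thermal_step(die_temp_c: float, throttled: bool) -> tuple[str, bool]:
    """Return the action for this control interval and the new throttle state."""
    if die_temp_c >= THERMTRIP_C:
        return "shutdown", True
    if die_temp_c >= TRIP_C:
        return "throttle", True          # e.g. gate the clock to a lower duty cycle
    if throttled and die_temp_c > RESUME_C:
        return "throttle", True          # stay throttled until we cool off
    return "full_speed", False

# Doing this in hardware means the loop runs every few microseconds --
# far faster than a driver or OS could react, and faster than heat can
# build to a lethal level even if the heatsink comes off.
state = False
for temp in (70, 96, 92, 84, 130):
    action, state = thermal_step(temp, state)
    print(temp, action)
```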
The most recent and widely confirmed case of GPUs offing themselves without anyone even touching the cooler was in 2010, when StarCraft 2's menu screen fried cards.
I've seen some reports of a driver release causing similar problems in 2013, but I'm not sure that's as definite.
I find it morbidly fascinating that chips drawing 4-5 times the power of a 2000-vintage Willamette, with up to 100x the transistor count, were still killing themselves ten years later.
At least we're finally getting GPU hardware designs that have moved beyond the "might die from rendering furry donuts" stage.