It's assigning meaning to a marketing name that was never advertised.
The GTX 275 had 100% of the shading units of the full GT200B die. Every 70-class card after that was highway robbery. The GTX 570 was 94% of GF110. Kepler is where it really fell apart: the GTX 770 was only 53% of GK110B. And now the 5070 is a stingy 25% of the 5090's GB202 die. So the 70-class card can be anywhere between 25% and 100% of the big honcho.
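The percentages above are just shading-unit ratios against the fully enabled big die of each generation. A quick sketch, using core counts from public spec sheets (the counts are my assumption, not from the comment itself; note the Blackwell figure compares against the full GB202 die, since the retail 5090 itself is slightly cut down):

```python
# Shading units on the fully enabled flagship die of each generation
# (counts assumed from public spec sheets).
full_die = {
    "GT200B": 240,
    "GF110": 512,
    "GK110B": 2880,
    "GB202": 24576,
}

# Each 70-class card: (shading units, flagship die it's measured against)
seventy_class = {
    "GTX 275": (240, "GT200B"),
    "GTX 570": (480, "GF110"),
    "GTX 770": (1536, "GK110B"),
    "RTX 5070": (6144, "GB202"),
}

for card, (units, die) in seventy_class.items():
    pct = 100 * units / full_die[die]
    print(f"{card}: {units}/{full_die[die]} = {pct:.0f}% of {die}")
```

Running it reproduces the 100% / 94% / 53% / 25% progression.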
Of course, you need to ignore that entirely new tiers of performance, die size, and cost were introduced over time, but that's inconvenient.
I've had this discussion with a few people in real life in the office at work, and I like to use this thought experiment:
Let's rewind back to the 40 series, and let's assume that Nvidia found a near-perfect solution for bonding multiple compute dies into one logical GPU, sort of like how Apple's doing it.
For the sake of argument, let's say that they had the tech to bond 2x, 4x, 8x, or 16x AD102 dies together on an interposer with >90% performance scaling, and that doing so allowed them to create four new SKUs at those performance levels, with price increases roughly in line with their performance increases.
With the above assumed to be true, some questions:
1) Does the existence of a hypothetical 4090x16 at ~$25,000 with performance to match materially change anything at all about the value of the lower part of the product stack? Does a 4070 now have 16x lower perceived 'value' than before because it has only ~1.5% of the performance of the new top card in the stack?
2) If the answer to the above is 'no', would it change anything if some breakthrough (but expensive) process tech suddenly appeared out of nowhere, allowing them to scale the top end of the stack into the stratosphere? Imagine a hypothetical world where TSMC suddenly figured out a 5-angstrom process, but with wafers costing $300,000 each. Or alternatively, a way to break the current reticle limit, essentially letting Nvidia create wafer-scale chips of arbitrary size, Cerebras-style, so they could make a monolithic ~10,000mm2 monster with ~16x the performance of AD102.
3) If the answer to that is also no, then why do people assume that the arbitrary spot where Nvidia (or AMD) chooses to place the top-end SKU in any given generation matters?
Or, looking at it another way, imagine a hypothetical world where Nvidia just straight up didn't make an AD102 for the 40-series generation, and the 4080 Super was the top card in the stack, perhaps even given the name 4090. In this hypothetical world, does the 4070 having 58% of the functional units of the top card in the stack somehow increase its value proposition, making it now 'one of the best 70-series cards in history'?
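For reference, that 58% figure falls out of the same shading-unit arithmetic as before, assuming the usual public core counts for the two cards (my assumption, not stated in the comment):

```python
# CUDA core counts assumed from public spec sheets.
rtx_4070 = 5888         # RTX 4070
rtx_4080_super = 10240  # RTX 4080 Super (fully enabled AD103)

# Share of the hypothetical top card's functional units: ~58%
print(f"{100 * rtx_4070 / rtx_4080_super:.1f}%")
```

Nothing about the 4070 itself changes in this scenario; only the denominator does.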