It doesn't matter what the excuse is; bad ports and the like will always exist. Nvidia's job is to deliver value with their products, and so far the whole Ada line fails to deliver much of it. If you're paying $800 USD for a graphics card, as in the case of the 4070ti, it should be designed with longevity in mind. The 4070ti is not.
Bad ports will indeed always exist, but there's a difference between a port that uses the additional VRAM for something of true benefit to the gamer, and one that simply inflates the VRAM requirements artificially as a marketing tactic for one particular IHV.
You can literally save something like 1.5GB of VRAM in RE4 for example by turning the shadow quality down from max to high. And the change is basically invisible.
The real question is why are you defending the card so strongly? Most people look at the Ada lineup and think it's a bad deal all around. I don't know why you have a vested interest in defending Nvidia.
Lol, I've zero "vested interest in defending Nvidia". If you looked at my posting history you'd see I've been one of the more vocal people on here about the poor value offered this generation by both Nvidia and AMD. I may have a 4070Ti, but as I've posted on here, I held my nose when I bought it because I knew it was poor value relative to past generations. The thing is, though, it's still, in my estimation, the best value GPU available at the moment if you want 3080+ level performance. And 12GB doesn't change that.
As noted, while there may be a very small number of extreme corner cases which have settings that can breach 12GB, the number of those that offer any kind of meaningful benefit to the game for that breach will likely be countable on one hand until the next gen consoles release.
And the number of games that will become unplayable with 12GB (in terms of some massive compromise like PS3 style textures having to be selected, or running the game at 1080p specifically due to VRAM limitations) will be precisely zero.
12GB will at worst be a rare and minor inconvenience for this card for the duration of this generation. And IMO there is simply no alternative available in the price range. Yes, the 7900XT will never be VRAM limited at all, but it will be RT limited far more regularly, before we even mention reconstruction or frame generation techniques, so I don't see it as a viable alternative for my gaming preferences, which are maximum graphics with acceptable image quality and reasonable frame rates (meaning 60 is perfectly fine, and in a pinch I'll settle for a bit less).
You say it's not built for longevity, but it will last the entire current generation with ease, likely better on average than anything else available right now in its price range or below. And after that, the game's probably up for every currently available GPU bar perhaps the 4090.
Since when is 30% "far faster"? I chose the bus to highlight the stagnation that's occurring with the 4070. The whole card is bad; I could have chosen the lack of change in CUDA cores or other specifications instead.
I was talking about the 4070Ti, which is over 50% faster than the 3070 despite having "only 13%" more bandwidth. This amply illustrates the point that judging a GPU purely on its memory bandwidth is silly.
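If anyone wants to sanity-check that 13% figure, here's a quick back-of-envelope sketch using the published memory configs (256-bit at 14 Gbps GDDR6 for the 3070, 192-bit at 21 Gbps GDDR6X for the 4070Ti); treat it as illustrative rather than gospel:

```python
# Back-of-envelope memory bandwidth: (bus width in bytes) * (per-pin data rate in Gbps) = GB/s.
# Specs below are the published memory configs for each card; purely illustrative.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return (bus_width_bits / 8) * data_rate_gbps

rtx_3070   = bandwidth_gbs(256, 14.0)  # GDDR6  -> 448 GB/s
rtx_4070ti = bandwidth_gbs(192, 21.0)  # GDDR6X -> 504 GB/s

print(f"3070:   {rtx_3070:.0f} GB/s")
print(f"4070Ti: {rtx_4070ti:.0f} GB/s")
print(f"Delta:  {100 * (rtx_4070ti / rtx_3070 - 1):.1f}%")  # ~12.5%, i.e. the "only 13%"
```

So even with the narrower bus the 4070Ti ends up with more raw bandwidth than the 3070, which is exactly why quoting bus width on its own tells you very little about real-world performance.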
That's really not the point. Nvidia went 7 years, or 3 architectures (1070 -> 2070 -> 3070), and delivered no improvement in memory capacity while increasing the price significantly. 16GB is the base expectation at the prices they're charging for a "70" class GPU, which is really a 4060 in disguise. We know they're playing funny games, as they tried to pass off a 4070 as a 4080 but got caught red-handed. They then successfully passed off a 4070 as a 4070ti and people lapped it up, claiming Nvidia had self-corrected. What a joke.
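For anyone who hasn't tracked it, here's the rough launch history as I remember it (VRAM and US MSRP figures from memory, so double-check before quoting):

```python
# Launch VRAM and US MSRP for each "70-class" card, from memory -- double-check before quoting.
cards = [
    ("GTX 1070",   2016,  8, 379),
    ("RTX 2070",   2018,  8, 499),
    ("RTX 3070",   2020,  8, 499),
    ("RTX 4070Ti", 2023, 12, 799),
]

for name, year, vram_gb, msrp_usd in cards:
    print(f"{name:<10} ({year}): {vram_gb:>2}GB VRAM, ${msrp_usd} MSRP")
```

Seven years, the same 8GB for three straight generations, and then 12GB at roughly double the price.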
You've moved on to a different argument entirely here. I agree the current offerings from both vendors are poor value. And obviously it would have been better to have more VRAM and/or cheaper prices. But that doesn't mean that 12GB is going to obsolete the card before the next generation of consoles launches. It quite obviously isn't.
Who says the redesigned architecture has anything to do with it? They went from Samsung's bad 8nm process to TSMC's "4N" process. If you just put the Ampere architecture on that process, you'd have gotten a huge boost in performance simply by doing nothing. All the things you're talking about are process related. Like I said, when I see evidence that Ada is actually significantly faster clock for clock than Ampere, then I'll gladly give credit where credit is due. So far, I haven't seen any evidence of that at all.
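To be clear about what I mean by clock for clock: run the same workload on both cards, log the clocks and SM counts, and normalize. Something like the sketch below, where the fps and clock numbers are made-up placeholders you'd swap for real measurements:

```python
# Crude IPC-style comparison: throughput normalized per SM per MHz.
# The fps and clock values below are made-up placeholders -- substitute real benchmark numbers;
# the printed delta means nothing until you do.
def perf_per_sm_per_mhz(fps: float, sm_count: int, clock_mhz: float) -> float:
    return fps / (sm_count * clock_mhz)

ampere = perf_per_sm_per_mhz(fps=100.0, sm_count=46, clock_mhz=1900.0)  # e.g. a 3070 (46 SMs)
ada    = perf_per_sm_per_mhz(fps=140.0, sm_count=60, clock_mhz=2600.0)  # e.g. a 4070Ti (60 SMs)

print(f"Ada vs Ampere, per SM per MHz: {100 * (ada / ampere - 1):+.1f}%")
```

Obviously that ignores things like memory behaviour and boost variability, but it would at least be a starting point for the comparison I'm asking for.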
I'm not sure what you're trying to argue now. Who cares whether they achieved the performance boost through a better node, faster clock speeds, improved architecture, or magic fairy dust? What matters is the end result: it's faster. And it doesn't matter how wide its memory bus is, or how many CUDA cores it has. You're paying for performance (and efficiency), not bullet points on a spec sheet.