Join the true gaming elite: 1440p 240-360Hz with low settings.
Feel free to apply that logic to AMD. The 7900xt is overpriced. They just followed Nvidia with the price gouging. That card should be like $600 max. With regards to the '70' class GPUs, what Nvidia calls them doesn't matter. It's what they charge for them. The 4070, for example, should actually be the 4060, or 4060ti at worst. Nvidia and AMD are out here price gouging and we have folks in here applauding it. It's very sad.
The '70' class Nvidia GPUs have always been 1440p GPUs (with '60' class for 1080p and '80' class and above for 4K), so what are you even doing talking about 4K performance for a '70' class Nvidia GPU?
Can we apply this same silly logic to the 7900XT? Because unlike the 4070ti, the '900' class for AMD GPUs is a 4K GPU.
With regards to the '70' class GPUs, what Nvidia calls them doesn't matter. It's what they charge for them.
It doesn't matter what the excuse is, bad ports, etc. Those will always exist. Nvidia's job is to deliver value with their products and so far, the whole Ada line fails to deliver much value. If you purchase a graphics card for $800 USD in the case of the 4070ti, the card should be designed with longevity in mind. The 4070ti is not.
I have both (well, the demo on RE4 anyway) along with a 12GB GPU, and I can assure you neither game comes even remotely close to 'obsoleting' it based on VRAM.
Putting aside the fact that TLOU shouldn't be used as a benchmark for anything given how much of a technical mess it is, it's also not particularly playable on a 4070Ti at native 4K ultra settings, which is pretty much what you'd need to break 12GB. I can get the game comfortably below 12GB by turning on DLSS Quality and reducing the environment textures from Ultra to High with literally no visible difference in texture quality. And that's with the game supposedly reserving 2.5GB of VRAM for the rest of the system, which, according to @yamaci17, it's not actually doing anyway (and is enormous overkill for what most people actually need).
RE4 is a classic case of hyper-inflated VRAM requirements for no appreciable gain. You can max everything out while staying under 12GB by simply reducing shadow quality from Max to High, with no appreciable image quality loss. I suspect its AMD-sponsored roots are at play there.
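Since both games above end up comfortably under the card's limit once a setting or two is trimmed, here's a rough sketch of the headroom arithmetic being described. The 12GB capacity and the ~2.5GB supposedly reserved "for the rest of the system" come from the post above; the per-preset usage numbers are purely illustrative, not measurements.

```python
# VRAM headroom sketch for a 12 GB card, assuming the game sets aside
# ~2.5 GB "for the rest of the system" (figure quoted in the post above).
CARD_VRAM_GB = 12.0
SYSTEM_RESERVE_GB = 2.5

game_budget_gb = CARD_VRAM_GB - SYSTEM_RESERVE_GB  # ~9.5 GB left for the game itself

# Illustrative (made-up) usage estimates for two settings presets:
presets = {
    "4K native, Ultra environment textures": 11.3,      # hypothetical figure
    "4K DLSS Quality, High environment textures": 8.9,  # hypothetical figure
}

for name, usage_gb in presets.items():
    verdict = "fits" if usage_gb <= game_budget_gb else "exceeds the budget"
    print(f"{name}: {usage_gb:.1f} GB vs {game_budget_gb:.1f} GB budget -> {verdict}")
```

The point of the sketch is just that the "breaks 12GB" cases tend to describe the most extreme preset; a single notch down on one or two settings moves the total back inside the budget.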
The real question is why are you defending the card so strongly? Most people look at the Ada lineup and think it's a bad deal all around. I don't know why you have a vested interest in defending Nvidia.
It's highly likely that even the $1600 4090 won't be able to play "next gen raytracing/path tracing games" at native 4K, and don't even get me started on the $1000 7900XTX, so I really don't see why you're expecting that of the lower-end cards. Upscaling has already been targeted squarely at games with RT because the performance just isn't there to use it at high native resolutions. Try playing CP2077 at native 4K with path tracing enabled on a 4090 and see what happens.
Since when is 30% far faster? I chose the bus to highlight the stagnation that's occurring with the 4070. The whole card is bad; I could have chosen the lack of change in CUDA cores or other specifications.
No, this doesn't matter in the slightest. It's a far faster card than the 3070, by much more than 13%, in every scenario. Ada obviously has significant changes over Ampere (largely the heavily increased caches) which make it much less reliant on VRAM bandwidth than previous architectures.
I mean come on, the 7900XTX literally has less bandwidth than the Radeon VII. Are you suggesting that makes it a worse GPU somehow?
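For reference, the raw bandwidth figures behind that comparison follow from the standard calculation: bus width in bits divided by 8, times the effective per-pin data rate. A quick sketch using the commonly listed specs for each card (taken as assumptions here, not measured):

```python
# Memory bandwidth = bus width (bits) / 8 * effective per-pin data rate (Gbps).
# The specs below are the commonly listed figures for each card (assumed, not measured).
cards = {
    "RTX 3070 (GDDR6)":    (256, 14.0),   # 256-bit bus, 14 Gbps
    "RTX 4070 (GDDR6X)":   (192, 21.0),   # 192-bit bus, 21 Gbps
    "RX 7900 XTX (GDDR6)": (384, 20.0),   # 384-bit bus, 20 Gbps
    "Radeon VII (HBM2)":   (4096, 2.0),   # 4096-bit bus, 2 Gbps
}

for name, (bus_bits, gbps_per_pin) in cards.items():
    bandwidth_gb_s = bus_bits / 8 * gbps_per_pin
    print(f"{name}: {bandwidth_gb_s:.0f} GB/s")
```

That works out to roughly 448 GB/s for the 3070, 504 GB/s for the 4070, 960 GB/s for the 7900 XTX and about 1 TB/s for the Radeon VII, which is the point being made: the 4070's narrower bus still ends up with more raw bandwidth than the 3070's, and the Radeon VII out-bandwidths the 7900 XTX without being anywhere near the faster card.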
That's really not the point. Nvidia went 7 years or 3 architectures (1070 -> 2070 -> 3070) and delivered no improvement in memory capacity while increasing the price significantly. 16GB is the base expectation at the prices they're charging for a '70' class GPU, which is really a 4060 in disguise. We know they're playing funny games, as they tried to pass off a 4070 as a 4080 but got caught red-handed. They then successfully passed off a 4070 as a 4070ti and people lapped it up, claiming Nvidia had self-corrected. What a joke.
Hey, no one's saying that more VRAM wouldn't have been nice, of course it would. But that would also have made it more expensive, and the 12GB it does have is not even remotely going to obsolete it for the remainder of this generation. There may be the odd corner case where some minor compromises have to be made where they otherwise wouldn't if it had 16GB, but my money is on those being very few and far between, and very minor in nature. Nothing like the launch issues with Forspoken, for example, which meant 8GB GPUs had to suffer PS3-like textures at any setting (which has been resolved now for the record, just like most of the recent 8GB hobbling issues).
Who says the redesigned architecture has anything to do with it? They went from Samsung's bad 8nm process to TSMC's "4N" process. If you just put the Ampere architecture on that process, you'd have gotten a huge boost in performance simply by doing nothing. All the things you're talking about are process related. Like I said, when I see evidence that Ada is actually significantly faster clock for clock than Ampere, then I'll gladly give credit where credit is due. So far, I haven't seen any evidence of that at all.
Again, why does this matter in the slightest? If the redesigned architecture allows it to run at a much higher clock speed, which in turn results in much higher performance, along with much improved performance per watt to boot, why does it matter that they achieved it that way?
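The "clock for clock" question is at least testable: run the same workload on both architectures, record average fps and average core clock, and compare throughput per MHz. A minimal sketch of that normalization, using made-up placeholder numbers rather than real benchmark results:

```python
# Clock-normalized comparison sketch: fps delivered per MHz of core clock.
# The fps and clock figures below are placeholders, not real measurements.
def perf_per_mhz(avg_fps: float, avg_core_clock_mhz: float) -> float:
    """Average frames per second divided by average core clock in MHz."""
    return avg_fps / avg_core_clock_mhz

ampere = perf_per_mhz(avg_fps=100.0, avg_core_clock_mhz=1900.0)  # e.g. an RTX 3070 run (placeholder)
ada = perf_per_mhz(avg_fps=130.0, avg_core_clock_mhz=2700.0)     # e.g. an RTX 4070 run (placeholder)

gain_pct = (ada / ampere - 1) * 100
print(f"Ada throughput per clock vs Ampere: {gain_pct:+.1f}%")
# A result near (or below) zero suggests the uplift is mostly clock speed, i.e. the
# process node; a clearly positive number would credit the architecture itself.
```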
It really isn't, but let's just agree to disagree.
I disagree, it's still a 1440p class GPU no matter how people try to swing it.
The high prices have just made people expect them to perform above their class (which I can kind of understand) but it's a 1440p card.
It doesn't matter what the excuse is, bad ports, etc. Those will always exist. Nvidia's job is to deliver value with their products and so far, the whole Ada line fails to deliver much value. If you purchase a graphics card for $800 USD in the case of the 4070ti, the card should be designed with longevity in mind. The 4070ti is not.
Not everyone plays in 4K, not everyone cares about the latest AAA games.
51 fps avg is a hell of a lot better than 24, btw - but it's also low enough that I would probably not be using overdrive on a 4070ti either.
For some people RT is the only thing that matters, and so, perhaps they are fine with paying more for less memory and less non-RT performance. And that's fine as well.
Regards.
SB
Really? You're using the DX12 versions and not the Beta no RT DX11 versions right?
Speaking of Capcom's QA issues: saw a ~300MB update for both RE2 and RE3 on Steam, so I went to the Steam forums to see if there were any changes since there were no patch notes (per usual), and apparently this patch removed the RT options. Tested both and can confirm, no more RT.
Sure this is a bug, but lol.
It runs at a pretty consistent and very smooth (i.e. no stutters) 60fps at 3840x1600, max settings, DLSS Auto on my system. I'm not sure anyone can reasonably say those are unplayable settings.
Really? You're using the DX12 versions and not the Beta no RT DX11 versions right?
Why would they remove RT from the game?
Is that native with no DLSS2, just with frame gen?
Actually you can. It automatically turns on DLSS if you turn on frame-gen, but if you turn DLSS off again after that and apply, then it will be native + FG.
You can't have native + frame gen with DLSS3.
It's always DLSS + frame gen.