Jay’s video on the topic kinda sums it up for me. He uses STALKER 2 @ 4K to illustrate the performance cliffs on a 3070 Ti when you run out of VRAM, then acknowledges that running 4K on that class of card makes no sense, which kinda makes the whole video pointless.
These influencers really need to...
Yeah, that’s what I’m thinking. Most people probably aren’t playing benchmark games; they’re just playing games. So Nvidia’s going to sell 8GB cards until they’re forced to change, either by competition or by games actually demanding more VRAM and hurting 8GB card sales.
It’s still unclear to me how much it matters for $300 graphics cards to have more than 8GB of VRAM. Most of the griping on forums and YouTube comes from people who don’t actually use that class of card, and 8GB cards seem to sell well enough despite the noise. What’s the business case for a 12GB...
Avatar reviews said the same. I like the Far Cry formula in appropriate doses, so it’s fine by me.
Just wrapped up Hellblade 1. Beautiful game, and definitely the prettiest UE4 game I’ve played. To keep the walking-simulator vibes going, I’ll give Firewatch a try next.
And even back then there was enough griping for it to be noticeable. I don’t think things have changed much.
This is almost certainly part of it. It’s ingrained in the human psyche at this point to get old and reminisce about the good old days.
The evolution of souls-likes is a good example...
I don’t recall any time in the past 20 years when people weren’t complaining that this or that class/weapon/tactic was overpowered or nerfed. The idea that games used to have better balance is just nostalgia talking, IMO.
I find games less fun now because I’m old and harder to surprise/impress. When you’re young, everything is new and exciting, and you’re less crotchety about the small stuff. Then add a heaping serving of nostalgia on top of that.
That dude has been at Nvidia since 1999. Hope he didn’t sell too many shares along the way.
A 100-person in-house studio is kinda surprising, given that Nvidia hasn’t been sharing many demos lately. Must be keeping everything under wraps.
We’re discussing a potential DirectCompute implementation, which is no more vendor-centric than DirectX. My comparison was to the significant per-application tuning that’s done for DirectX applications. DirectX is also a higher-level API than DirectCompute and therefore has a larger surface area...
How likely is it that GPU bugs will affect lower-level APIs like CUDA versus all of the app-specific hackery that goes on in graphics? We’re not talking about game-ready drivers here. Either way, it’s a cop-out: the APIs are available, the problem is lack of incentive.
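To make the "smaller surface area" point concrete, here's roughly what an entire compute-side program looks like. This is just an illustrative sketch (the saxpy kernel and buffer names are made up for the example, not from anything discussed above); the point is there's no giant pipeline state machine for the driver to second-guess per application, just memory allocation and a kernel launch:

```cuda
// Minimal CUDA sketch: allocate, launch a kernel, read back. That's the
// whole API surface an app like this touches.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory keeps the example short; explicit copies work too.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compare that with everything a graphics driver intercepts (shader replacement, state tweaks, per-title profiles) and you can see why there's far less room for app-specific hackery on the compute side.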