Value of Hardware Unboxed benchmarking *spawn

They stated that for 400$ you shouldn't have to be lowering texture settings below console equivalents at low resolutions. I fail to see what’s unreasonable about that.
 
They stated that for 400$ you shouldn't have to be lowering texture settings below console equivalents at low resolutions. I fail to see what’s unreasonable about that.
There is no perceivable quality difference between High and Ultra textures in TLOUPI; the latter seems to just allocate more VRAM (+1.5GB or so) for textures, as can be seen from monitoring. Thus High is the console equivalent option there.

It would destroy all of HUB's VRAM narrative in TLOU though, as High textures now, after all the patches, fit just fine in 8GB at 1080p-1440p.

Edit: or is it Very High? Don't remember. The point is that there is no reason to run Ultra textures there unless you're purposefully aiming at producing VRAM issues on 8GB GPUs.
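For reference, here's a minimal way to watch allocated VRAM while toggling the texture preset (a sketch, assuming an NVIDIA GPU and the nvidia-ml-py / pynvml package; in-game overlays or vendor tools work just as well):

```python
# Minimal VRAM monitor (sketch). Assumes an NVIDIA GPU and the
# nvidia-ml-py package (pip install nvidia-ml-py). Run it alongside the
# game and switch texture presets to see the allocation delta.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"used: {mem.used / 2**30:.2f} GiB / {mem.total / 2**30:.2f} GiB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```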
 
And yet they've stopped talking about TLOU since the VRAM issue was addressed.

Crazy huh?
They haven't. They used it at 1440p to show the VRAM limitations of the 4060 Ti.
Now they are using "bandwidth" as an argument. Look at the TLoU result from the 4060 Ti review: the card is slower than a 3060 Ti - but only in their benchmark. At Computerbase, Daniel Owen and co. the 4060 Ti is at least as fast or even faster.

"Never change a running system".
HU tested the heaviest section of TLOU, in the woods. Daniel Owen tested the prologue, so that could be why.
 
What is this in reply to? Use quotes.
No one in particular, just my thoughts on the current HUB drama.
There is no perceivable quality difference between High and Ultra textures in TLOUPI; the latter seems to just allocate more VRAM (+1.5GB or so) for textures, as can be seen from monitoring. Thus High is the console equivalent option there.

It would destroy all of HUB's VRAM narrative in TLOU though, as High textures now, after all the patches, fit just fine in 8GB at 1080p-1440p.

Edit: or is it Very High? Don't remember. The point is that there is no reason to run Ultra textures there unless you're purposefully aiming at producing VRAM issues on 8GB GPUs.
TLOU isn't the only game. He shows serious problems with RE, Callisto, Plague Tale and Forspoken. He mentions Hogwarts as well. 8GB is not enough VRAM for this card's performance profile. Going forward things will more likely get worse rather than better.
 
They stated that for 400$ you shouldn't have to be lowering texture settings below console equivalents at low resolutions. I fail to see what’s unreasonable about that.
Yeah. I think historically, at this point (2.5 years after the PS5 launch), we would have been expecting better, using PS3 or PS4 as comparison points.

PS3 released November 17, 2006 (NA).

In October 2007 (11 months after the PS3 launch) PC gamers had access to the 8800 GT, which was beyond the PS3 in GPU feature set, with more than 2x the GPU performance and more than 2x the VRAM the PS3 had access to. It was 350 USD MSRP; adjusted for inflation that is about 508 USD today. 2.5 years after the PS3 we had something like the GeForce GTX 275. The GTX 275 is more than 3x the GPU power of the PS3, with more features and more than 3x the available VRAM. It was 250 USD MSRP. That is about 355 USD adjusted for inflation.

PS4 released November 15, 2013 (NA).
Roughly 2.5 years after that launch we have the GTX 1060 (July 2016). It is 2x the PS4's GPU in games of the time period and has a bit less than 2x the usable VRAM that PS4 titles tended to use. It was 299 USD MSRP, roughly 377 USD adjusted for inflation.

With the RTX 4060 Ti at 400 USD we are above the inflation-adjusted price of GPUs from the analogous period after the PS3 and PS4 launches. In a good-case scenario it is maybe 1.3x better at raster and >2x better at RT than the PS5. It has better features than the PS5 with ML and the DLSS variants. It has LESS total VRAM than the PS5 tends to use for games (rough arithmetic sketched below).

I wanted a GTX 1060 style chip here - but they did not manage that at all.
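As a rough sanity check of the numbers above (a sketch; the inflation multipliers are derived from the figures quoted in this post, and the performance/VRAM multiples are the post's own estimates, not authoritative data):

```python
# Back-of-envelope comparison of mid-range GPUs vs the console of their era.
# Inflation multipliers are derived from the MSRP/adjusted figures quoted
# above; performance and VRAM multiples are this post's rough estimates.
cards = {
    # name: (MSRP USD, inflation multiplier to ~2023, perf vs console, VRAM vs console)
    "8800 GT (Oct 2007, ~1y after PS3)":    (350, 508 / 350, ">2x PS3",          ">2x PS3"),
    "GTX 275 (2009, ~2.5y after PS3)":      (250, 355 / 250, ">3x PS3",          ">3x PS3"),
    "GTX 1060 (Jul 2016, ~2.5y after PS4)": (299, 377 / 299, "~2x PS4",          "<2x what PS4 games use"),
    "RTX 4060 Ti (2023, ~2.5y after PS5)":  (400, 1.0,       "~1.3x PS5 raster", "less than PS5 games use"),
}

for name, (msrp, infl, perf, vram) in cards.items():
    print(f"{name}: {msrp} USD MSRP ≈ {msrp * infl:.0f} USD in today's money; "
          f"perf {perf}; VRAM {vram}")
```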
 
I think it's worth pointing out that at the point of the PS3's release it was using a GPU that was up there with the fastest PC GPUs on the planet, and there were only a handful of GPUs available on PC at the PS3's launch that were actually faster.

That was not the case with the PS4, as it launched with an already 18-month-old mid-range PC GPU, so it was much easier to find a GPU that could easily and cheaply beat it.

The PS5 is closer to the PS3 with respect to its GPU performance vs what was available on PC at its launch.
 
I think it's worth pointing out that at the point of the PS3's release it was using a GPU that was up there with the fastest PC GPUs on the planet, and there were only two or three PC GPUs available at the PS3's launch that were faster, namely the 7800 GTX 512MB and the 7900 GTX.

That was not the case with the PS4, as it launched with an already 12-month-old mid-range PC GPU, so it was much easier to find a GPU that could easily and cheaply beat it.

The PS5 is closer to the PS3 with respect to its GPU performance vs what was available on PC at its launch.
8800 GTX was available at PS3 launch for only 100$ more and was well over twice as fast. Probably 2.5-3x. Not in 1 specific area, but in general performance. The additional features the G80 offered were also available in nearly every game.
 
8800 GTX was available at PS3 launch for only 100$ more and was well over twice as fast. Probably 2.5-3x. Not in 1 specific area, but in general performance.

I had an 8800 GTX (well, two actually) and it wasn't 2.5-3x faster.

Sometimes it was 2x and sometimes it was less.

But my point still stands: the PS3 was up there with the best when it released, whereas the PS4 was miles off it.
 
TLOU isn't the only game. He shows serious problems with RE, Callisto, Plague Tale and Forspoken. He mentions Hogwarts as well. 8GB is not enough VRAM for this card's performance profile. Going forward things will more likely get worse rather than better.
RE has a similar texture buffer size setting which doesn't really improve quality above a certain point. Using it to prove the point is disingenuous.
CP is a thoroughly broken game even now. The same is true for Forspoken, although they did patch the most glaring issues, I believe.
APTR may have issues, but my experience with it is that I've beaten the whole game on a 3080/10GB in 4K and had zero VRAM issues.

I wanted a GTX 1060 style chip here - but they did not manage that at all.
Again, it's nice to want things, but judging the current situation by what was happening 15 years ago (!) with the PS3 isn't really a good idea since everything has changed since then. It's like you're expecting something just because you think it should happen, and then reality happens and everyone is disappointed for some reason.
 
From a business point of view, NVIDIA has to starve their gaming GPUs of VRAM: they don't want AI people buying gaming GPUs to do their AI work on, and AI workloads are hungry for more memory.

NVIDIA is also facing severe chip shortages at TSMC, as demand for their AI chips is literally through the roof. The market is willing to pay top dollar for those chips, so NVIDIA has less incentive to release any chips at low prices.

And it's not like the alternative is offering anything better: the RX 7600 is launching today with 8GB and a 270$ price tag, while offering slightly better than 3060 12GB performance and landing way behind the 3060 Ti (essentially regular 4060 tier performance). The whole market is screwed.
 
From a business point of view, NVIDIA has to starve their gaming GPUs of VRAM: they don't want AI people buying gaming GPUs to do their AI work on, and AI workloads are hungry for more memory.

NVIDIA is also facing severe chip shortages at TSMC, as demand for their AI chips is literally through the roof. The market is willing to pay top dollar for those chips, so NVIDIA has less incentive to release any chips at low prices.

And it's not like the alternative is offering anything better: the RX 7600 is launching today with 8GB and a 270$ price tag, while offering slightly better than 3060 12GB performance and landing way behind the 3060 Ti (essentially regular 4060 tier performance). The whole market is screwed.

AI is not really a good reason though. Even 16GB is pretty limiting.

I don't know if 8GB is good enough or not, but NVIDIA is not providing a convincing argument IMHO. For example, a bigger cache does not help when you are limited by VRAM size; it helps when you lack bandwidth, not capacity. Even stranger is the fact that the 4060 Ti is only PCIe x8, which does not help if you are supposed to be streaming assets to reduce pressure on VRAM capacity.
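To put rough numbers on that (a toy sketch; the L2 hit rate and on-chip bandwidth are illustrative assumptions, while the 288 GB/s and PCIe 4.0 x8 figures are the commonly quoted 4060 Ti specs):

```python
# Toy model of why a big cache substitutes for bandwidth, not capacity.
# Hit rate and L2 bandwidth are illustrative assumptions; 288 GB/s and
# PCIe 4.0 x8 are the commonly quoted RTX 4060 Ti specs.

dram_bw_gbs = 288.0    # 128-bit GDDR6 @ 18 Gbps
l2_bw_gbs   = 2000.0   # assumed effective on-chip L2 bandwidth
hit_rate    = 0.45     # assumed L2 hit rate for a typical game workload

# Average bandwidth the shaders see: hits served by L2, misses by DRAM.
effective_bw = hit_rate * l2_bw_gbs + (1 - hit_rate) * dram_bw_gbs
print(f"effective bandwidth with cache: ~{effective_bw:.0f} GB/s")

# Capacity is unchanged, though: once the working set exceeds 8GB, assets
# spill over PCIe, and an x8 Gen4 link tops out around:
pcie4_x8_gbs = 8 * 1.97   # ~15.8 GB/s theoretical, less in practice
print(f"PCIe 4.0 x8 ceiling: ~{pcie4_x8_gbs:.1f} GB/s vs {dram_bw_gbs:.0f} GB/s VRAM")
```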
 
From a business point of view, NVIDIA has to starve their gaming GPUs of VRAM: they don't want AI people buying gaming GPUs to do their AI work on, and AI workloads are hungry for more memory.
It's simpler than that, and the illustration of why will arrive soon in the form of the 4060 Ti 16GB with all its perf/price "glory".
 
IMV Nvidia's problem is that this gen's 80 class cards are really 70 class, the 70 class should be 60 class, and the 60 class should be 50 class. I am very sceptical that Nvidia couldn't use the full AD103 for a $799 4070 Ti with a good enough cooler + board (doubling the die cost would be what, 80$ more in BOM? That doesn't seem to justify a whack-ass 499 > 1199 price jump). It definitely feels like Nvidia wants the cover of rising die costs, which is a legitimate reason to raise prices, but to this extent? I doubt it.
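For what it's worth, a back-of-envelope version of that BOM argument (a sketch; the wafer price, die size, and yield figures are rough public estimates, not disclosed numbers):

```python
import math

# Back-of-envelope check of the "doubling die cost ≈ $80 of BOM" claim.
# Wafer price, die size, and yields are rough public estimates, not
# disclosed NVIDIA/TSMC figures.
wafer_cost_usd = 17000.0             # assumed N5-class wafer price
wafer_area_mm2 = math.pi * 150.0**2  # 300mm wafer
ad104_area_mm2 = 295.0               # approx AD104 (4070 Ti) die size
edge_loss      = 0.9                 # assumed edge/scribe loss
yield_factor   = 0.85                # assumed fraction of usable dies

good_dies = wafer_area_mm2 / ad104_area_mm2 * edge_loss * yield_factor
die_cost = wafer_cost_usd / good_dies
print(f"AD104 die cost: ~{die_cost:.0f} USD per good die")
print(f"doubling it adds: ~{die_cost:.0f} USD of BOM, "
      f"nowhere near a several-hundred-dollar price jump")
```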
 
AI is not really a good reason though. Even 16GB is pretty limiting.
Indeed, even 24GB of VRAM is barely enough for optimized trainers in the Stable Diffusion space; 40GB is recommended.
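A quick estimate of why full fine-tuning eats VRAM so fast (a sketch; the parameter count is approximate and activation memory is ignored):

```python
# Rough static VRAM footprint for full fine-tuning of a Stable-Diffusion-
# sized UNet with Adam in fp32 (sketch; parameter count is approximate
# and activation memory is ignored here).
params     = 865e6   # ~SD 1.x UNet parameter count (approx)
bytes_fp32 = 4

weights    = params * bytes_fp32
gradients  = params * bytes_fp32
adam_state = params * bytes_fp32 * 2   # Adam keeps two moments per parameter

static_gib = (weights + gradients + adam_state) / 2**30
print(f"weights + grads + Adam state: ~{static_gib:.1f} GiB before activations")
# Activations, the VAE/text encoder, batch size and resolution add several
# more GiB on top, which is why 24GB gets tight without gradient
# checkpointing, 8-bit optimizers, or LoRA-style tuning.
```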
 