Value of Hardware Unboxed benchmarking

Start by asking the question, "why do you need to?"

What is it about this game's scaling and the 4060 that matters? Replace 'low end' with the qualifiers that you are referring to.
Because people expect performance that the card was clearly not built for.
The developers even informed people that 8GB cards should expect to run at low settings @1080p for 60 FPS.
People somehow think (falsely) that their "entry"-level (can I use that word?) card should be running high settings. I do not know why they should be "shielded" from the facts.
That will only reinforce their misguided views and leave them stuck in their false expectations.

You can even use console commands to lower settings beyond the in-game minimums (and go the opposite way too for "flagship" (is that word allowed?) SKUs), but that seems too complicated and people instead cling to delusions 🤷‍♂️
 
Some people think their $300 card should be running high texture settings. We can discuss whether that’s reasonable or not in 2024.
I think people are overly focused on the price and ignore that, for a lot of the most recent Nvidia offerings, the VRAM quantity is becoming the bottleneck before the actual shader/RT/whatever performance. These cards have the silicon to play the latest games but are hampered by imo poor design choices around VRAM.

As an example, the 4060 literally has less VRAM than the standard 3060 (8GB vs 12GB). I don’t get how people can defend an actual regression generation over generation.

The 4070 Ti is another great example: the card is likely going to see its 12GB buffer become the bottleneck before the rest of the card, and that’s a shame. If they had just designed this card around 16GB it would have had much better longevity (and alas, they literally did exactly this with the Super refresh lol).

Adding extra VRAM in proportion to these cards’ performance levels wouldn’t be expensive; Intel and AMD do it, and they’re usually cheaper than Nvidia (not by much, mind you, but they also produce far fewer cards, so economies of scale etc.).
 
Some people think their $300 card should be running high texture settings. We can discuss whether that’s reasonable or not in 2024.
That's the source of the issue.
The reasoning HUB promotes is that VRAM is free, so why not put 4x the amount on each card - sounds great, doesn't it?
The problem, however, is that VRAM isn't free, and sometimes putting more of it on a card can result in a serious MSRP increase. That in turn puts the card higher up the product stack, which generally doesn't pan out that great if the card still has the same performance. Sure, you have the VRAM for 4K path tracing - can you use it though? Nope, not on this card.
A case everyone seems to have forgotten already is still fresh: the 4060 Ti 16GB didn't do very well at its $500 MSRP.

I think people are overly focused on the price and ignore that, for a lot of the most recent Nvidia offerings, the VRAM quantity is becoming the bottleneck before the actual shader/RT/whatever performance.
Is it though? This Indiana Jones example shows that it kinda isn't.

Adding extra VRAM in proportion to these cards’ performance levels wouldn’t be expensive; Intel and AMD do it, and they’re usually cheaper than Nvidia (not by much, mind you, but they also produce far fewer cards, so economies of scale etc.).
We have no idea what margins these have, and we can guess that they are doing it because otherwise there would be zero reason to consider their products over Nvidia's. Saying that this proves anything for Nvidia is disingenuous.
 
Some people think their $300 card should be running high texture settings. We can discuss whether that’s reasonable or not in 2024.
Seeing how $1000 cards perform, I would say no.
I mean, it is not like in the past where you had "fake" cards like a GeForce4 MX 460 (a DirectX 7 card) in the same series as a Ti 4600 (a DirectX 8.1 card).
The APUs killed the "terribad" cards.

So the generations are better aligned than in the past, but you still have to temper your expectations...and a $300 card is not a "high settings" card in 2024...nor will it be in 2025 🤷‍♂️
 
Start by asking the question, "why do you need to?"

What is it about this game's scaling and the 4060 that matters? Replace 'low end' with the qualifiers that you are referring to.

I get the need to clamp down on thread diversions ahead of time, but man - I feel you're really fighting an uphill battle here.

Low end/enthusiast/high-end, flagship, etc. - these are terms which are used extensively not only throughout this industry, but across virtually all consumer product ranges. They're terms used by end users, reviewers, and the companies themselves.

It's part of everyday consumer language, and has been for decades. To now have to train yourself to avoid using this extremely common terminology, on this one particular small forum, seems a little strange.

Some people think their $300 card should be running high texture settings. We can discuss whether that’s reasonable or not in 2024.

Depends on what "high" means in this case - better than a $370 console, or worse? When it's "worse", I think 3 years into a console's lifespan, with a GPU that's only slightly less expensive than an entire console, having to dip the texture quality below what console games have to accept is naturally going to result in some consternation. It's a rather new development for PC gaming, at least this far into the generation.

And again, I'll add that sure, I get that DLSS3 is not solely marketed as "free frames" for just the low-e, uh, people-who-usually-don't-pay-over-$600-for-their-GPUs segment, but it definitely is a heavily marketed feature for Nvidia in general over the previous gen.

So, you have the 3060's replacement getting this great new feature...which just puts more strain on VRAM, on top of the reduction this price segment of Nvidia cards received this gen. Saying "well then don't use it" kind of misses the point; it's perfectly reasonable for consumers to be a little peeved at being told what a breakthrough development this is over the previous gen, when they're hamstrung from using it in many games without graphical reductions that 3060 12GB owners probably don't even have to make.
 
Depends on what "high" means in this case - better than a $370 console, or worse? When it's "worse", I think 3 years into a console's lifespan, with a GPU that's only slightly less expensive than an entire console, having to dip the texture quality below what console games have to accept is naturally going to result in some consternation. It's a rather new development for PC gaming, at least this far into the generation.
Did those who are saying that the game looks "worse" on 8GB GPUs than on a "$370 console" actually check how it looks on a "$370 console"?

[Attached image: xbox-series-x-vs-s-(4).png - Xbox Series X vs Series S comparison]
 
Did those who are saying that the game looks "worse" on 8GB GPUs than on a "$370 console" actually check how it looks on a "$370 console"?

That's a $300 console, not a $370 one - that's the digital PS5, and regardless I'm not solely talking about this game - Indy may have reignited this debate, but there are plenty of games already where on an 8GB GPU, you're going to have to reduce texture detail below the console versions, even without using framegen.
 
Depends on what "high" means in this case

The literal “High” setting in a game’s menu. Of course the absolute requirements will change from game to game but we can’t normalize for that.

The flip side of this argument is that if game developers target the cheapest cards in the market with high settings it invariably means the extra capacity of more expensive cards will go unused.

I would much rather games target 16-24GB VRAM for high texture settings. Low to medium settings for consoles. Supreme should require 24GB+ to justify the supreme amount of money you’re paying. The biggest problem with Indy is that the supreme texture setting can still look like crap.
 
Is it though? This Indiana Jones example shows that it kinda isn't.
Yes. I have personally seen VRAM become a bottleneck for me. Plenty of games use >12GB (or try to) during an extensive play session (aka, not something that would be shown in a benchmark). And in these circumstances, the game's performance is fine right up until the buffer fills and then performance is trashed until I lower texture settings.
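(If you want to check this for yourself over a long session rather than in a canned benchmark run, a minimal logging sketch - assuming the nvidia-ml-py/pynvml package is installed - would look something like this:)

```python
# Log VRAM usage every 30 seconds during a play session (assumes `pip install nvidia-ml-py`).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"{time.strftime('%H:%M:%S')}  used {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB")
        time.sleep(30)
finally:
    pynvml.nvmlShutdown()
```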

We have no idea what margins these have, and we can guess that they are doing it because otherwise there would be zero reason to consider their products over Nvidia's. Saying that this proves anything for Nvidia is disingenuous.
No, it doesn't 'prove' anything. However, VRAM prices aren't a mystery: GDDR is a commoditized product sold on an open market. You can see the spot price for GDDR6 and 6X at any point in time, and while those numbers don't directly translate into what it would cost to add more VRAM to the BOM, they give us enough of an idea to speculate on a forum at least.
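(For the sake of argument, this is the kind of napkin math that speculation rests on - the spot price and overhead factor below are illustrative placeholders, not sourced figures:)

```python
# Back-of-envelope BOM delta for extra VRAM; every number here is a hypothetical placeholder.
spot_price_per_gb = 2.50   # illustrative $/GB for GDDR6 on the spot market
extra_gb = 8               # e.g. doubling an 8GB card to 16GB in clamshell
memory_cost = spot_price_per_gb * extra_gb
overhead = 1.5             # rough fudge factor for board changes, validation, margin stacking
print(f"~${memory_cost:.0f} in chips, ~${memory_cost * overhead:.0f} landed cost")
```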

Again, where is the justification for a literal VRAM regression for the 3060 -> 4060? To me it is rather obvious that Nvidia uses VRAM as a way to upsell customers to its higher end products, which is fine and a valid strategy. I just don't understand why we're acting like that isn't the case. Nothing about the VRAM distribution in the 40 series made sense besides the 4090 (except that the Super series fixed a lot of it, which only reinforces my point tbh, and the 4080 was decently buffered I would say, but it was just too expensive).
 
Again, where is the justification for a literal VRAM regression for the 3060 -> 4060?
The 3060 was made on a much much cheaper node (8nm Samsung vs 4nm TSMC for 4060).
The 3060 had much lower transistor count (12 billion vs 19 billion for the 4060).

This meant the 4060 cost more to make than the 3060, so NVIDIA had to cut corners: they reduced the memory bus to 128-bit (vs 192-bit for the 3060), which left the 4060 with only two options for VRAM, either 8GB or 16GB - there is no middle ground.
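(To make the arithmetic explicit - a minimal sketch assuming the 2GB GDDR6 modules these cards actually use, one per 32-bit channel, doubled in clamshell mode:)

```python
# VRAM capacity follows directly from bus width and module density.
def vram_options(bus_width_bits, module_gb=2):
    channels = bus_width_bits // 32        # one GDDR6 module per 32-bit channel
    single_sided = channels * module_gb
    return single_sided, single_sided * 2  # (normal, clamshell)

print(vram_options(128))  # (8, 16)  -> the 4060's only choices
print(vram_options(192))  # (12, 24) -> a 3060-style 192-bit bus
```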
 
The middle ground would be to offer both versions, with the second at BoM + margins. I know we can never be sure, but it's hard to argue why they don't do that - actually for all SKUs - other than wider business implications. Especially for GDDR6 GPUs, as the board implications are much simpler than for GDDR6X. But a 24GB RTX 4070/Ti... well, we know what that is going to cannibalize.

As for the 3060->4060, something to point out is that while enthusiasts who follow this stuff look at things gen on gen, most of the buying public does not consider gen-on-gen upgrades. Even less so as we move into the more mainstream segments of the market. People actually upgrading and comparing to what they have would be on RTX 2xxx or older, and from that perspective the 8GB is an upgrade compared to what they likely have, or the same. I believe in the past (although this was a while ago) Nvidia internally was more focused on presenting new cards as compelling upgrades for those at n-2 generations.

With the above, I do suspect Nvidia may try to wait for 3GB GDDR7 modules in volume to at least offer 12GB SKUs for the 5060, even with a 128-bit bus.
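(Same arithmetic as the sketch above, just with 3GB GDDR7 modules - hypothetical until those parts actually ship in volume:)

```python
# 128-bit bus = 4 channels; with 3GB GDDR7 modules that lands on 12GB without clamshell.
channels = 128 // 32
print(channels * 3)  # 12 (GB)
```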
 
Yes. I have personally seen VRAM become a bottleneck for me. Plenty of games use >12GB (or try to) during an extensive play session (aka, not something that would be shown in a benchmark).
I swapped out my 3080 10GB at the end of 2022, and I can safely say that its VRAM size was never a bottleneck for me up until that moment - and I was using it at 4K.
There were games which required some settings adjustments, and some games which were patched to run on it without issues, but no game prompted me to use anything but the highest possible textures. Well, there was one exception - Far Cry 6.
From that moment to now, the situation with general VRAM usage has actually improved, I'd say, as more games started to rely more on streaming, and the switch from UE4 to UE5 helped with that as well.
So if you're regularly running into issues on your 12GB card right now, then you're probably pushing it too far - or using a settings combination which overloads your VRAM for no reason (see Indiana's pool size).

Again, where is the justification for a literal VRAM regression for the 3060 -> 4060?
The "justification" was mentioned in almost every review of 3060 back at its launch with people saying that the card doesn't need 12GB.
I'd argue that the 4060 is completely fine with 8GB and will remain so for years still.
The 4060 Ti is the one where 8GB seems low - but we have that with 16GB and it's not a good perf/price offering.
The balance between having enough VRAM and being able to sell the card at a good price point is why we're getting these 8/10/12GB products.
You can't just increase the VRAM size without that affecting the product's price - or manufacturer's margins.

With the above, I do suspect Nvidia may try to wait for 3GB GDDR7 modules in volume to at least offer 12GB SKUs for the 5060, even with a 128-bit bus.
Nvidia might (if only as a second SKU at a higher price, though), but AMD will still use G6 on its N44 128-bit cards, so I fully expect an "8600" to still have 8GB.
 
The "justification" was mentioned in almost every review of 3060 back at its launch with people saying that the card doesn't need 12GB.
That doesn't make sense as a justification for the 4060 having less memory, and anyway I don't think anyone was upset that the 3060 had too much memory. They may have mentioned that it wasn't immediately necessary, but it turned out to be a very good thing.
 
The "justification" was mentioned in almost every review of 3060 back at its launch with people saying that the card doesn't need 12GB.

The context behind this reaction was that higher-end parts like the 3070 series and 3080 had less VRAM than a card that was less likely to make good use of it in the future, i.e. the performance of the 3060 would generally be too poor anyway in a few years when well-rounded games started requiring more than, say, 8-10GB of VRAM, whereas a 3070 or 3080 would see performance drop from OK to unplayable purely down to VRAM limitations. The 3060 12GB is like the Arc A770 16GB in this way. If Intel had a higher-end Arc 870 10GB and didn't offer any other VRAM config, it would be a similar issue.
 
One should also consider that video memory size was not very adjustable, because there were not many differently sized memory chips, so in many cases you'd have to double the memory size (e.g. 8GB or 16GB). To make a 12GB version you would probably need a different GPU chip with a different memory bus width.
 
I get the need to clamp down on thread diversions ahead of time, but man - I feel you're really fighting an uphill battle here.

It's part of everyday consumer language, and has been for decades. To now have to train yourself to avoid using this extremely common terminology, on this one particular small forum, seems a little strange.
Understood. However, low, mid and high end always meant something consistent, but now it's confused. Pretty much every discussion that raises these terms now ends up as a discussion about their definitions. What's the alternative to facilitate on-topic discussion, other than to pre-empt that and enforce new dialogue protocols? Open to all workable ideas!
 
The context behind this reaction was that higher-end parts like the 3070 series and 3080 had less VRAM than a card that was less likely to make good use of it in the future, i.e. the performance of the 3060 would generally be too poor anyway in a few years when well-rounded games started requiring more than, say, 8-10GB of VRAM, whereas a 3070 or 3080 would see performance drop from OK to unplayable purely down to VRAM limitations. The 3060 12GB is like the Arc A770 16GB in this way. If Intel had a higher-end Arc 870 10GB and didn't offer any other VRAM config, it would be a similar issue.
While all of this is correct, do note that the 4060 is slower than the 3070, and thus the same reasoning applies. As I've said.
 
Understood. However, low, mid and high end always meant something consistent, but now it's confused. Pretty much every discussion that raises these terms now ends up as a discussion about their definitions. What's the alternative to facilitate on-topic discussion, other than to pre-empt that and enforce new dialogue protocols? Open to all workable ideas!

Just easier to stick to features/performance per dollar. The definition of high end in particular has been completely warped.
 
Just easier to stick to features/performance per dollar. The definition of high end in particular has been completely warped.
That seems like a "metric" that only can become very muddy fast.
Some will claim they do not care about "feature A" and insist only "feature B" is relevant and "feature C" has no relevancy too later on.

Just see how HUB's stance on raytracing is creating debate.
 