Value of Hardware Unboxed benchmarking *spawn

In price but not in specs. The 5700 XT has quite a fillrate and FLOPS advantage.

5700 XT vs 2060 Super vs 2070 Super
Pixel fillrate (GPixel/s): 122 vs 101 vs 113
Bandwidth (GB/s): 448 vs 448 vs 448
FP32 TFLOPS: 9.8 vs 7.2 vs 9.1
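For context on where those TFLOPS figures come from: peak FP32 throughput is just 2 ops (FMA) × shader count × clock. A quick check, assuming the reference boost clocks of each card:

```latex
\begin{aligned}
\text{5700 XT:}    &\quad 2 \times 2560 \times 1.905\ \text{GHz} \approx 9754\ \text{GFLOPS} \approx 9.8\ \text{TFLOPS} \\
\text{2060 Super:} &\quad 2 \times 2176 \times 1.650\ \text{GHz} \approx 7181\ \text{GFLOPS} \approx 7.2\ \text{TFLOPS} \\
\text{2070 Super:} &\quad 2 \times 2560 \times 1.770\ \text{GHz} \approx 9062\ \text{GFLOPS} \approx 9.1\ \text{TFLOPS}
\end{aligned}
```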
It really says something when people have to ignore price and compare product Y to X instead of Z just to paint a picture fitting their view :rolleyes:
Price is the first thing determining what competes with what
 
I'm definitely seeing some desperation here, just not from them...
They really backed themselves into a corner in the conclusion of the video, resorting to mind-bending justifications as lame as: the 2060 Super can't play the game either! Or DLSS doesn't matter at 1080p, or DLSS in general wasn't a real advantage for the 2060 Super over its lifespan!

DLSS alone should have been more than enough for them to change their recommendation, but they never did. And now RDNA1 has no DLSS, no RT, and can't properly run modern mesh-shader-based titles. How many lame excuses will they give before they admit they were wrong?!
No, what you did was dishonestly twist everything they've said in order to make them sound unreasonable.

Heck, the claim that they were down on DLSS2 early on is a straight-up lie.
Again, no. The latest example of them downplaying the DLSS advantage is loud and clear here. They still refuse to say they were wrong about the 5700 XT being better than the 2070 Super given its lack of DLSS, and they still refuse to do so in Alan Wake 2, claiming that DLSS at 1080p is useless even though it's vastly better than FSR.

They couldn't have implicated themselves more, with all of their contradictory logic and mental gymnastics.

Price is the first thing determining what competes with what
Disagree. NVIDIA cards command a premium over AMD GPUs, especially since the introduction of RTX, on the basis of offering more features and vastly more RT performance.
 
It really says something when people have to ignore price and compare product Y to X instead of Z just to paint a picture fitting their view :rolleyes:
Price is the first thing determining what competes with what

What view is that exactly? And what part of my post do you disagree with?
 
I don't think their conclusions are way off. 30 fps on PC is generally considered unplayable; I think most people are targeting 60 fps minimum. So both the 5700 XT and the 2060 Super fall short of that standard, regardless of the 2060 Super being better. Personally, I think DLSS Quality at 1080p is a compromise worth making for budget gamers, but it still doesn't get to 60 in this case. If there's a trend toward mesh shaders in games that scale a bit better, then their recommendation of the 5700 XT could turn out badly.
 
What view is that exactly? And what part of my post do you disagree with?
Not disagreeing with your post. It was meant as commentary on someone saying the 5700 XT compete(d) against the 2070S rather than the 2060S.
Disagree. NVIDIA cards command a premium over AMD GPUs, especially since the introduction of RTX, on the basis of offering more features and vastly more RT performance.
That's just a ridiculous view on any front. If they price something at $500, its first competition is other cards priced around $500, not some card priced notably cheaper (or more expensive, for that matter), regardless of your personal views on specific features.
 
I mean, what settings are they using? Why not lower settings? Do they have post-processing set to high or low? Using anything but low there is an extreme waste of performance on a low-tier GPU.
 
Alan Wake 2 presents a disaster case for Hardware Unboxed, who championed RDNA1 over Turing. Now here comes a game that not only has bad performance on RDNA1 due to missing DX12U features, but also worse image quality due to relying on FSR2, while the Turing alternative suffers none of this and has superior DLSS upscaling quality as well as access to ray tracing and path tracing.
The fact that four years after the launch of those GPUs there's only one (?) game that massively benefits from mesh shaders says to me that the lack of mesh shaders in the 5700 XT had close to zero significance.

Ray tracing support on the other hand is a feather in the cap of the 2060S but not because of Alan Wake 2. That GPU just isn't powerful enough. I don't know what "access to path tracing" even means here... using it for photo mode? Wow... big win.
 
I mean, what settings are they using? Why not lower settings? Do they have post-processing set to high or low? Using anything but low there is an extreme waste of performance on a low-tier GPU.

He used the low preset and said "lower than low" was not worth it because the impact on image quality was too great. That part is kind of weird too. Budget gamers tend to make those compromises.
 
The fact that four years after the launch of those GPUs there's only one (?) game that massively benefits from mesh shaders says to me that the lack of mesh shaders in the 5700 XT had close to zero significance.

Yep, the ship has sailed. For it to matter there would need to be a few super popular games in the next year using mesh shaders at playable frame rates on 5-year-old cards. Unlikely to happen.

The mesh shader situation makes the progress on raytracing in the same timeframe that much more amazing.
 
The fact that four years after the launch of those GPUs there's only one (?) game that massively benefits from mesh shaders says to me that the lack of mesh shaders in the 5700 XT had close to zero significance.

Ray tracing support on the other hand is a feather in the cap of the 2060S but not because of Alan Wake 2. That GPU just isn't powerful enough. I don't know what "access to path tracing" even means here... using it for photo mode? Wow... big win.
If you sum these up and add in the fact that DLSS has been there since 2018 (2019 for v2+), you may see why the original HUB stance on RDNA1 seems quite shaky at the moment.
 
Yep, the ship has sailed. For it to matter there would need to be a few super popular games in the next year using mesh shaders at playable frame rates on 5-year-old cards. Unlikely to happen.

The mesh shader situation makes the progress on raytracing in the same timeframe that much more amazing.

Mesh shading is a core feature in the sense that it's a core part of the pipeline, and I don't think it has automatic fallbacks. If you want to support mesh shaders, it's a new geometry pipeline. If you want to run on cards that support mesh shaders and also on cards that don't, you need to engineer two geometry pipelines. That's a lot of work to do twice, and most developers are not going to do it. Basically, they took the approach of waiting until there was enough market coverage and then making the switch.

Ray tracing obviously requires a ton of engineering too, but the fact that you can choose to just do AO, or shadows, or certain reflections allowed developers to ease in.
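
To make the "two geometry pipelines" point concrete, here's a minimal sketch of the kind of capability check an engine would run at startup under D3D12, assuming an already-created ID3D12Device (the function name is hypothetical, not anything Remedy actually ships):

```cpp
#include <windows.h>
#include <d3d12.h>

// Decide which geometry path the renderer will use. There is no automatic
// fallback: if you also want to run on pre-DX12U hardware (e.g. RDNA1),
// the engine has to carry a second, classic vertex/index pipeline.
bool UseMeshShaderPath(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    const bool supported =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                              &options7, sizeof(options7))) &&
        options7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;

    // true  -> Turing / RDNA2 and newer: amplification + mesh shader pipeline
    // false -> RDNA1 and older: classic vertex shader pipeline
    return supported;
}
```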
 
Mesh shading is a core feature in the sense that it's a core part of the pipeline, and I don't think it has automatic fallbacks. If you want to support mesh shaders, it's a new geometry pipeline. If you want to run on cards that support mesh shaders and also on cards that don't, you need to engineer two geometry pipelines. That's a lot of work to do twice, and most developers are not going to do it. Basically, they took the approach of waiting until there was enough market coverage and then making the switch.

Ray tracing obviously requires a ton of engineering too, but the fact that you can choose to just do AO, or shadows, or certain reflections allowed developers to ease in.
RT is the same. You need to make two versions of some approach if you want to be compatible with both RT and non-RT h/w.

The key difference is probably that you get a sizable improvement in graphics quality from adding RT, while with mesh shaders you just render geometry, possibly even the same geometry as before, with less GPU overhead. That isn't as visible most of the time, and sometimes it can also be achieved in other ways (compute, tessellation) that are compatible with older h/w.
 
If you sum these up and add in the fact that DLSS has been there since 2018 (2019 for v2+), you may see why the original HUB stance on RDNA1 seems quite shaky at the moment.
Yes, DLSS has certainly been an advantage too. However, using mesh shaders as an argument is grasping at straws. Not sure where the need to make arguments as desperate as that comes from... Just sounds like bad marketing tbh.
 
Mesh shading is a core feature in the sense that it's a core part of the pipeline, and I don't think it has automatic fallbacks. If you want to support mesh shaders, it's a new geometry pipeline. If you want to run on cards that support mesh shaders and also on cards that don't, you need to engineer two geometry pipelines. That's a lot of work to do twice, and most developers are not going to do it. Basically, they took the approach of waiting until there was enough market coverage and then making the switch.
But they (Remedy) didn't make the switch; AW2 works without mesh shaders too, despite the initial wrong information.
 
However, using mesh shaders as an argument is grasping at straws.
Is it? The game's unplayable (or close to it) on the 5700 XT because of that. Seems like a pretty big deal. A bigger one than RT, I'd say, since you can't just turn mesh shaders off and get the usual performance back.
 
Mesh shading is a core feature in the sense that it's a core part of the pipeline, and I don't think it has automatic fallbacks. If you want to support mesh shaders, it's a new geometry pipeline. If you want to run on cards that support mesh shaders and also on cards that don't, you need to engineer two geometry pipelines. That's a lot of work to do twice, and most developers are not going to do it. Basically, they took the approach of waiting until there was enough market coverage and then making the switch.

Ray tracing obviously requires a ton of engineering too, but the fact that you can choose to just do AO, or shadows, or certain reflections allowed developers to ease in.

Raytracing also requires a second geometry pipeline, and devs have swallowed that engineering cost. However, in most cases RT operates on the same relatively low-fidelity geometry as the raster pipeline.

I think the key difference is that mesh shaders allow you to ramp up geometry complexity significantly. You can fall back to vertex shaders by sending the same geometry down the classic vertex shader pipe, but it will be much slower. Given that all the current-generation consoles and the last two generations of PC hardware support some form of mesh shading, it's a mystery why there isn't more adoption in games and engines today. It's the only real alternative to Nanite.
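
As a rough picture of what that fallback looks like at the draw level, here's a minimal sketch (the Mesh struct, the useMeshShaders flag and DrawMesh are hypothetical, not any shipping engine's code): one path dispatches meshlets through DispatchMesh, the other pushes the same geometry through the classic vertex/index front end.

```cpp
#include <windows.h>
#include <d3d12.h>

// Hypothetical per-mesh data: the mesh shader path consumes pre-built
// meshlets, the fallback path consumes a classic index buffer.
struct Mesh
{
    UINT meshletCount; // used by the mesh shader path
    UINT indexCount;   // used by the vertex shader fallback
};

// Two submission paths for the same geometry. The matching pipeline state
// (mesh shader PSO vs. vertex shader PSO) is assumed to be bound already;
// that setup is omitted here.
void DrawMesh(ID3D12GraphicsCommandList6* cmd, const Mesh& mesh, bool useMeshShaders)
{
    if (useMeshShaders)
    {
        // One thread group per meshlet; the mesh shader emits its own
        // vertices and primitives, bypassing the fixed input assembler.
        cmd->DispatchMesh(mesh.meshletCount, 1, 1);
    }
    else
    {
        // Classic pipeline: the input assembler walks the index buffer,
        // which is the path that gets slow as geometry density goes up.
        cmd->DrawIndexedInstanced(mesh.indexCount, 1, 0, 0, 0);
    }
}
```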
 
Is it? The game's unplayable (or close to it) on the 5700 XT because of that.
Yes, it absolutely is. Again, it's one game four years after the GPUs launched. It changes nothing in the big picture when looking back at recommendations between the 2060S and 5700 XT dating back 3-4 years. Funny to even have to argue about this, but somehow I'm not surprised.
 
...
If they price something at $500, its first competition is other cards priced around $500
Your argument falls apart when discussing alternatives priced way down due to market forces. The Arc A770 is priced way lower than it should be; should I now compare it against the RTX 3050 because they are similar in price?

Companies backed into corners (like AMD and Intel) will price down to maintain a margin of competitiveness, because they have no alternatives.

AW2 works without mesh shaders too, despite the initial wrong information
There are visual glitches and many crashes, so it doesn't really work in an optimal manner.

 
Yes, it absolutely is. Again, it's one game four years after the GPUs launched. It changes nothing in the big picture when looking back at recommendations between the 2060S and 5700 XT dating back 3-4 years. Funny to even have to argue about this, but somehow I'm not surprised.
It's one game now; who knows how many games there will be a year from now. It's also a heavy game where this difference isn't very important, but what if the next one is a lighter game where the 2060S is capable of outputting >60 fps?

I also fail to see how the performance of products right now suddenly doesn't matter when looking back at someone recommending one product over the other. HUB's typical recommendation, which they repeat in every GPU benchmark, is to get the GPU with more VRAM because it will be beneficial in the future. How is that different from getting the GPU with mesh shader support because that will be beneficial in the future?

Yes, it is funny how people can't see that the same logic should lead to similar conclusions when they are blinded by brand allegiances.
 