Value of Hardware Unboxed benchmarking

That seems like a "metric" that can only get very muddy, very fast.
Some will claim they do not care about "feature A" and insist only "feature B" is relevant, and later on that "feature C" has no relevance either.

Just see how HUB's stance on raytracing is creating debate.

Yeah people can choose to downplay things they don’t care about but that’s the case with any approach. Don’t think anyone should argue with dollars as an objective metric for “tiering” though. Adjusted for inflation etc etc.
 
Yeah people can choose to downplay things they don’t care about but that’s the case with any approach. Don’t think anyone should argue with dollars as an objective metric for “tiering” though. Adjusted for inflation etc etc.
Dollar vs what?
Last generation (and features/performance)?

Because stuff like SER, OMM, DMM (e.g. 4000 series vs 3000 series) is going to be hard to do cross-vendor IMHO.
 
Yes. Some of these are using common APIs and work on all h/w which support them. In other cases it is also inevitable that APIs will evolve to provide the capabilities needed for such tech to work.
like DirectSR and DirectML ?

That should allow games to mix and match, I think. I don't know of any games that have implemented it yet, however.
 
So no consensus! Hence, I suggest just not using it. Pick the target you are measuring against and specify that target...

Because people expect performance that the card was clearly not built for.
Saying 'low end' doesn't specify that, but say, "people shouldn't be expecting all the features from the lowest-priced card of nVidia's line-up that generation," and it's clear what your point of reference is without drawing in price from previous generations, etc.
Instead you cling to delusions.
There are no delusions here - only ideas and discussions around them.
 
So no consensus! Hence, I suggest just not using it. Pick the target you are measuring against and specify that target...


Saying 'low end' doesn't specify that, but say, "people shouldn't be expecting all the features from the lowest-priced card of nVidia's line-up that generation," and it's clear what your point of reference is without drawing in price from previous generations, etc.

There are no delusions here - only ideas and discussions around them.

I wasn't the one wanting the cheapest SKU to perform at high settings, nor was the "delusion" part directed at this forum; there are forums that are lost in what people expect from entry-level GPUs.
Some want everything for basically no money... or else the vendor is "evil".

I think it is a generational divide. I look back with horror at, e.g., old Amiga gameplay from games I played at the time, and am grateful for how far we have come since then.
Or I watch old 3dfx games and shudder that I ever thought they looked great.

I suppose I don't suffer from nostalgia :mrgreen:
 
Yes. Some of these are using common APIs and work on all h/w which support them. In other cases it is also inevitable that APIs will evolve to provide the capabilities needed for such tech to work.
But it is not working like that.
And there is even a clear line of evidence that the common solutions produce worse image quality than the proprietary ones.
FSR (for all GPUs) is worse than XeSS (DP4a, for all GPUs), which is worse than the proprietary XeSS (XMX), which is worse than the proprietary DLSS (Tensor cores).
Here the lowest common denominator is not something I wish for.
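For anyone wondering what the DP4a fallback path actually is: it is a single instruction that computes a 4-element dot product of int8 values with a 32-bit accumulator, which is why the DP4a variant of XeSS runs on nearly any modern GPU. A minimal Python sketch of its semantics (illustrative only; on hardware this is one instruction, not a loop):

```python
# Sketch of the DP4a operation: multiply four signed 8-bit values
# pairwise, sum the products, and add the result to a 32-bit accumulator.
# This is the building block XeSS uses on GPUs without XMX units.

def dp4a(a: list[int], b: list[int], acc: int) -> int:
    """4-element int8 dot product with accumulate: acc + sum(a[i]*b[i])."""
    assert len(a) == len(b) == 4
    for x in a + b:
        assert -128 <= x <= 127, "inputs must fit in signed int8"
    return acc + sum(x * y for x, y in zip(a, b))

# Example: one step of a quantized (int8) inference dot product.
print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 10))  # 5+12+21+32+10 = 80
```

Longer dot products in a quantized network are just chains of these, feeding each result back in as the next accumulator.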

Intel's new line with XeSS 2 is also going proprietary, running on their version of "AI cores".
Most likely because Intel thinks they can surpass AMD and match NVIDIA in the entry-level card space.
(And that is why they are eating their margins to offer more RAM: they want to break into the market, so they have to sweeten the deal.)


NVIDIA's Ray Reconstruction is also better than any alternative.
The lowest common denominator is not something I wish for here either.
Vulkan (open standard) also appears much more fragmented than DirectX (proprietary) in regard to the number of vendor-specific extensions.

As far back as I can remember (ATi's "TruForm"), vendors have tried to leverage their technological innovations to be "better" than, or distinguish themselves from, the competition.
Then we as consumers vote with our wallets.

So from my perspective, the vendors are going in the opposite direction of what you are suggesting 🤷‍♂️
 
So from my perspective, the vendors are going in the opposite direction of what you are suggesting
Quite the opposite: everyone seems to be going in the very same direction here, with FSR4 now apparently also using AI. Which means that a common API is very possible. I have no idea why MS hasn't done anything yet.


Can we stop calling them "AI cores", they're matrix (multiplication) units no matter what each manufacturer calls them.
"Matrix (multiplication) units" happens to also be how some manufacturers call them.
 
Quite the opposite: everyone seems to be going in the very same direction here, with FSR4 now apparently also using AI. Which means that a common API is very possible. I have no idea why MS hasn't done anything yet.



"Matrix (multiplication) units" happens to also be how some manufacturers call them.
What "common" AI API?
They are all very different at the hardware level, thus both Intel and NVIDIA run proprietary versions that are better than the "one size fits all" 🤷‍♂️
 
They are all very different at the hardware level, thus both Intel and NVIDIA run proprietary versions that are better than the "one size fits all" 🤷‍♂️
They are not different at the h/w level at all. The only reason we have NGX and Intel's equivalent is that there is nothing better which would work on all h/w.
 
They are not different at the h/w level at all. The only reason we have NGX and Intel's equivalent is that there is nothing better which would work on all h/w.
Let's agree to disagree.
What about all the other points?
It seems like you left out 90% of my post/points?
 
This one is weird in the sense that it doesn't really solve the issue so much as provide a plug-in which will make developers' lives easier at the expense of users. So no.
APIs are used by developers, not end users, and ideally every API feature should make developers' lives easier or allow them to do something they couldn't do before. How does DirectSR harm users?
This one is even weirder. No one seems to want to use it for anything at all. So we're still waiting on proper API support here.
What should proper API support look like? WMMA instructions or something even more advanced?
 
You won't be able to use XeSS on non-Intel h/w under DirectSR for example.
You can if Microsoft and Intel can come to an agreement to include XeSS as one of the "extension variants".

Extension variants

Extension super resolution variants are required to support non-native Super Resolution techniques. Extensions can support cross-vendor super resolution variants and may even run on ML coprocessors such as NPUs. Examples of extension variants include Auto SR and DirectSR built-in variants.

Extension super resolution variants are provided by Microsoft. There are currently no plans to support app-provided super resolution extensions.

We already know that AMD is preparing an AI-powered FSR4, which should significantly reduce the motivation to run XeSS on non-Intel HW anyway.
 
"Matrix (multiplication) units" happens to also be how some manufacturers call them.
Probably, but not any of the big players in the PC space. One is close but not quite the same.
Point being, every manufacturer has a different name for them; if you want to use a common name it should be what they are, matrix units or matrix multiplication units, not some made-up crap like "AI cores".
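For what it's worth, the operation all of these units (Tensor cores, XMX, etc.) accelerate is the same fused matrix multiply-accumulate, D = A·B + C, on small fixed-size tiles. A rough Python sketch of that tile operation (the 4x4 tile size is just an example for illustration; real hardware tile shapes and datatypes vary by vendor):

```python
# Sketch of the tile-level matrix multiply-accumulate (D = A @ B + C)
# that "matrix units" compute in hardware. A software loop like this is
# exactly what the dedicated units replace with a single fused operation.

def mma_tile(A, B, C, n=4):
    """Naive n x n matrix multiply-accumulate: returns D = A @ B + C."""
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            D[i][j] = C[i][j] + sum(A[i][k] * B[k][j] for k in range(n))
    return D

# Example: multiplying by the identity with a zero accumulator returns A.
A = [[i * 4 + j for j in range(4)] for i in range(4)]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
Z = [[0] * 4 for _ in range(4)]
print(mma_tile(A, I4, Z))
```

Large matrix multiplies (and hence the convolutions and attention layers behind DLSS/XeSS-style upscalers) are decomposed into many of these tile operations, which is why every vendor builds hardware for exactly this primitive.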
 