Speculation: GPU Performance Comparisons of 2020 *Spawn*

No, they demoed Microsoft's solution, and thanked Nvidia for the model and the use of their hardware in the demonstration.

Right?

I think that's getting into semantics. This was Nvidia's model running on Nvidia's hardware. Microsoft weren't demonstrating their own version of either a resolution-upscaling model or the hardware it runs on. They were simply demonstrating how anyone else who wants to develop those things can run them through Microsoft's DirectML interface rather than through a proprietary interface/API, as Nvidia currently does for DLSS.

Put another way, this was a demonstration of DLSS running on a vendor-agnostic API, nothing more.
 
Then what was Microsoft doing there? The exact point was that MS was looking to use DirectML for things such as DLSS two years ago.
 
They were demonstrating a vendor-agnostic API that developers can use to implement machine-learning-based solutions on any GPU that supports DML. The machine-learning-based solution they used to demonstrate that capability was DLSS.
 
I am referring to this: "Not sure if this qualifies as MS working on their solution for quite some time."...

Why the dismissal?
 
What @pjbliverpool said is correct. There's a huge difference between an API and an algorithm. DirectML is simply an API to access vendor-specific hardware capabilities in a vendor-agnostic fashion. You can implement a variety of ML algorithms using that API, including (as a specific example) DLSS's real-time inference component.

The fact that Microsoft has been working on the DirectML API says nothing about whether they have been working on a DLSS-like training infrastructure + inference algorithm that uses that API to perform upscaling. They may very well be, but DirectML is not evidence of that.
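
To make the API-vs-algorithm distinction concrete, here's a minimal C++ sketch of the vendor-agnostic piece DirectML actually provides. This is illustrative only, not taken from the demo: error handling and linking details are omitted, though `D3D12CreateDevice` and `DMLCreateDevice` are the real entry points.

```cpp
// Minimal sketch: creating a DirectML device on top of any D3D12-capable GPU.
// Illustrative only; error handling omitted for brevity.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter -- whatever GPU is
    // present: Nvidia, AMD, or Intel.
    ComPtr<ID3D12Device> d3dDevice;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&d3dDevice));

    // Layer a DirectML device on top. The driver decides how DML operators
    // get executed: on general shader cores, or on dedicated ML hardware
    // if the GPU has any.
    ComPtr<IDMLDevice> dmlDevice;
    DMLCreateDevice(d3dDevice.Get(), DML_CREATE_DEVICE_FLAG_NONE,
                    IID_PPV_ARGS(&dmlDevice));

    // Microsoft's job ends roughly here. Everything DLSS-like -- the trained
    // network, its weights, the upscaling pass -- would be expressed by the
    // developer as DML operator graphs (convolutions, activations, etc.)
    // and recorded onto a D3D12 command list like any other GPU work.
    return 0;
}
```

Nothing in that snippet cares which vendor's GPU is underneath; the trained upscaling model that would run through it is a separate piece of work entirely, which is exactly why DirectML alone tells us nothing about whether MS has one.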
 
I understood what he said perfectly. It is just false. The links I posted show it clearly. Better go gaslight someone else.

And no one said they did it out of the kindness of their hearts. They sold like hotcakes, and that was their intent.

Could you tone it down and keep your cool? There’s no need to throw around accusations of gaslighting. We’re all here because we’re PC hobbyists and we enjoy talking about GPU tech.

You posted two links to reviews, neither of which disprove what LordEC911 said, nor do they prove what you’ve conjectured about G92 or Maxwell pricing.
 
It would be nice to have a bit more information on the Series X backwards compatibility upgrades, as that might show Microsoft has done a few things with DirectML. I'm fairly positive their BC HDR upgrades, even for games as far back as Fuzion Frenzy (original Xbox), are using DirectML of sorts.
 
It's kind of amazing that so many tech youtubers have such a loose understanding of the tech and industries that they cover.

The not-so-subtle dig at Raja is pretty childish and unnecessary.

The answer is "yes".
A big, fat, juicy and resounding one at that.

Beat what, and at what cost? RTX 3070 at similar or higher power consumption? Not interested.
RTX 3080 or the RTX 3090? Now you have my attention.
 
It's kind of amazing that so many tech youtubers have such a loose understanding of the tech and industries that they cover.
They exist to entertain, not provide technical insight.
RTX 3070 at similar or higher power consumption
That's N22 territory and at lower power.
RTX 3080 or the RTX 3090?
Yeah.
Now you have my attention.
They really want to; dropping the mic and all.
 
If they have such a great card, then they'd better drop some announcement soon, before the Nvidia cards become available; otherwise, judging by the forums, a lot of sales will already go to their rival. If the news about limited NV card availability until January isn't true, of course.
 
They didn't even guide any dGPU movement this FY.
I.e. they don't give a shit.

What is this supposed to mean? They don't give a shit about losing additional market share to the competition? Quite strange for a company spending money to develop a product that must be sold to recover the investment.
 
What is this supposed to mean?
They're not planning to print any money by selling discrete GPUs this year.
They don't give a shit about losing additional market share to the competition?
Yeah, CPU growth eats wafers and offsets any dGPU sales by a huge, humongous landslide.
Quite strange for a company spending money to develop a product that must be sold to recover the investment.
They're doing it for shits, maybe giggles.
Same as their specialty ULV APU and other stuff like TR (now two of them!).
 
What is this supposed to mean? They don't give a shit about losing additional market share to the competition? Quite strange for a company spending money to develop a product that must be sold to recover the investment.

I'm pretty sure any reasonable buyer takes a wait-and-see approach when you have two next-gen launches that are so close. The people jumping on pre-orders based on Nvidia marketing slides wouldn't buy an AMD GPU if AMD were 100% faster at half the power.
 
The vast majority of people who would buy the 3000 series at release, without at least waiting to see what AMD will have, would not buy AMD cards in the first place. The pressing need to announce isn't as great as some people make it out to be.
 
Eh, people don't expect anything from AMD.
No reason not to preorder 3080.
:^)

True. And yes, considering how unimportant dGPUs are to AMD's bottom line, there is no way in hell they would take wafers away from the much more lucrative CPUs. Mobile GPUs are potentially an exception.
 