Speculation: GPU Performance Comparisons of 2020 *Spawn*

It's worth looking back: two years ago, most were doubtful that any of these NVIDIA initiatives, whether ray tracing or DL-based scaling, were ever going to take off. Now we're at the point of discussing how competitors are going to step up their game to stay competitive.
 
If any company is going to be able to replicate Nvidia's results at the same or better quality in the next few years, it's most likely to be Microsoft. Although I still think they're at a disadvantage both technically and in terms of timescales (although collaboration with AMD could resolve the first of those disadvantages).

That does beg the question of what Sony will do, though. They don't seem particularly well placed to compete in this area. I wonder if Nvidia would be willing to license their models to them? That would certainly have interesting implications for any PS5 games ported to the PC.
I dunno why people are so hell-bent on mentioning MS as the likely party to create their version of DLSS. IMO they are not and never were; they create and support platforms, the Windows platform specifically, they don't provide the middleware for this platform.
Epic and Unity are the more likely candidates, and they are also far more likely to license their AI upscaling solution to someone like Sony to be used across other engines.
 
Windows platform specifically, they don't provide the middleware for this platform.
https://playfab.com/
https://www.havok.com/
https://www.simplygon.com/
To name a few right there.

MS is also very interested in AI and ML in general and in games.

The reason MS is mentioned a lot is due to Azure, the investment in AI/ML they have been making, vested interest, etc.

Like Havok, I wouldn't be surprised if they provided it to studios that develop PS games; it would just be part of a paid product.
 
It's not like MS is coming to the scene out of nowhere either; here's one of their talks at SIGGRAPH 2018 with DirectML "super-resolution" upscaling based on a model from NVIDIA, and I think it's pretty safe to assume they've worked on their own models before and since then, too.
https://on-demand.gputechconf.com/s...-gpu-inferencing-directml-and-directx-12.html
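For anyone who wants to poke at this themselves, here's a rough sketch of what running a super-resolution model through DirectML looks like from Python today, via ONNX Runtime's DirectML execution provider. The model filename and input shape are placeholders, not the model from the talk:

```python
# Hedged sketch: running a hypothetical super-resolution ONNX model
# through ONNX Runtime's DirectML execution provider (vendor-agnostic,
# runs on any DX12-capable GPU). Requires: pip install onnxruntime-directml
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "sr_model.onnx",  # placeholder filename, not the model from the talk
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Assume an NCHW FP32 input: one 1080p RGB frame.
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)
input_name = session.get_inputs()[0].name

outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)  # e.g. (1, 3, 2160, 3840) for a 2x model
```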

Thank you... I was looking for that DirectML video earlier.

If you watch the video of them discussing image sharpening and super-sampling, they thank NV for the use of their hardware... but later go on to say you don't even need a GPU.

Remember, that was two years ago... & the Xbox comes out in a month.
 
They thank NVIDIA for the use of their model!
The hardware was an Intel iGPU. This iGPU supports proper FP16.

This same demo running on a Pascal GPU would barely run at 1 fps; FP16 support there is 1/64 to 1/128 the speed.

It's the model that matters. We can do ML on CPUs... it just gets faster and better with the right hardware.
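To make the "right hardware" point concrete, here's a minimal sketch (assuming PyTorch and a CUDA GPU; the toy convnet is a stand-in, not anyone's real upscaling model) that times the same network on CPU, GPU FP32, and GPU FP16:

```python
# Minimal sketch: the same model, three ways. Assumes PyTorch + CUDA;
# the tiny convnet is an illustrative stand-in, not any vendor's model.
import time
import torch
import torch.nn as nn

model = nn.Sequential(  # toy stand-in for an upscaling network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
frame = torch.rand(1, 3, 1080, 1920)

def bench(m, x, runs=10):
    with torch.no_grad():
        m(x)  # warm-up
        if x.is_cuda:
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
        if x.is_cuda:
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000  # ms per frame

print(f"CPU FP32: {bench(model, frame):.1f} ms")
if torch.cuda.is_available():
    gpu, gframe = model.cuda(), frame.cuda()
    print(f"GPU FP32: {bench(gpu, gframe):.1f} ms")
    print(f"GPU FP16: {bench(gpu.half(), gframe.half()):.1f} ms")
```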
 

To add to that, there's a chart at 24:00 directly comparing CPU performance with the tensor cores in a Titan V. The tensor cores are 275x faster, lol. So yeah, it will run on a CPU, but it probably won't be a net gain for your framerate!
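Putting that multiplier in frame-time terms, a quick back-of-the-envelope (the 10 ms tensor-core figure is an illustrative guess, not a measured number from the talk):

```python
# Back-of-envelope: what a 275x gap means against a frame budget.
# The 10 ms tensor-core inference time is an invented illustration.
budget_ms = 1000 / 60          # 16.7 ms per frame at 60 fps
tensor_ms = 10.0               # hypothetical tensor-core inference time
cpu_ms = tensor_ms * 275       # the chart's claimed gap

print(f"60 fps budget: {budget_ms:.1f} ms/frame")
print(f"Tensor cores:  {tensor_ms:.1f} ms (fits: {tensor_ms < budget_ms})")
print(f"CPU:           {cpu_ms:.0f} ms (~{1000 / cpu_ms:.2f} fps)")
```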
 

Sorry, yes, model, not hardware.

The point being, two years ago Microsoft was SHOWING what was possible with their upcoming API. Attach that to Microsoft's own hardware (via AMD) and the new INT8/FP16 tricks the Xbox is pulling off.

Then the mention of MS using their cloud to train models near-instantly... for their own ecosystem... puts them beyond what SONY could manage. MS knows this, so they will be (and have been) leveraging it.

There is a reason MS just updated Windows 10 recently for DirectX 12 Ultimate, with new video cards coming, right?
 
Well, DirectML is a low-level API for running machine learning, for which most general-purpose libraries are far too slow. It's comparable to CUDA-X AI in that sense; it contains some very similar functions. What you saw with DirectML and the showcase around it is that it's a vendor-agnostic API that can run a machine learning model with very low overhead for faster processing times.
The model is the one actually doing the work of upsampling the picture from 1080p to 4K.

Nvidia does the same thing, except that they likely have CUDA doing this work instead of DirectML.
The model is just the model that they've trained.

Sony could manage to build an API that would support this, or borrow one from Vulkan (not sure if it has a low-level API for this type of stuff yet), but that's not the hard part of this process.

The model creation is.

As straightforward as some may think ML training is, like some Udemy course, it's not. To do what Nvidia has done, provided you have the talent to understand how to create a very fast and lightweight model with very good quality, you've got to build the training set and labels to support it. You also need to know precisely where your model will be leveraged, and that specificity is also what makes this harder to implement. You will also run into other hard restrictions because this is a realtime NN, mainly memory size and processing time: you need the best possible NN that takes up the smallest amount of VRAM and runs in the shortest time possible.

So it's not straightforward at all, and building that model can take tons of time or very little depending on what you have as resources. Building an AI that does anti-aliasing and resolution upscaling is fairly trivial at this point in time for the field. Getting it done in mere milliseconds, as opposed to many seconds or minutes, is what separates Nvidia from the rest.
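For a sense of the scale those realtime constraints push you toward, here's a sketch of a deliberately tiny ESPCN-style sub-pixel upscaler. The layer widths are my own toy choices, nothing to do with DLSS internals:

```python
# Sketch of a tiny ESPCN-style 2x upscaler, illustrating the "smallest
# VRAM, shortest time" constraint. Layer widths are arbitrary toy
# choices, not anyone's shipping architecture.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            # Predict scale^2 * 3 channels, then rearrange into pixels:
            nn.Conv2d(16, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # (N, 3*s^2, H, W) -> (N, 3, s*H, s*W)
        )

    def forward(self, x):
        return self.body(x)

net = TinyUpscaler()
params = sum(p.numel() for p in net.parameters())
print(f"{params:,} params (~{params * 2 / 1e6:.2f} MB in FP16)")

out = net(torch.rand(1, 3, 1080, 1920))
print(out.shape)  # torch.Size([1, 3, 2160, 3840])
```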

MS's cloud cannot do this type of training in a near instant. The size of the models and training corpus is likely to be massive. There are a load of engineering problems when you attempt to train something that is way larger than your video memory permits. It's not as simple as saying "the cloud". The reason Nvidia charges so much for their actual ML hardware is that it addresses some of these challenges. Even then, models can take days to train from scratch. When your iteration time is that slow, and a fresh training run costs thousands of US dollars per run, you'd better be damn willing to do this and find a way to profit from it.
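A rough back-of-the-envelope on what a single run costs (every input below is an invented assumption, not a reported figure):

```python
# Back-of-envelope cost of one from-scratch training run. Cluster size,
# duration, and rate are all invented assumptions for illustration.
gpus = 64          # hypothetical training cluster size
hours = 72         # assumed days-long run from scratch
rate = 3.0         # assumed $/GPU-hour for datacenter-class GPUs

print(f"GPU-hours per run: {gpus * hours:,}")
print(f"Cost per run: ${gpus * hours * rate:,.0f}")  # -> $13,824
```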

I'm not saying it's doom and gloom, and I'm not saying MS isn't looking into it. I'm just saying there's nothing reported as of yet. And until we actually get some real news, the expectation right now is that no one else is really invested in working on this except Nvidia. They have done a great deal of work in the computer vision space, where they provide solutions to companies for realtime NN processing; Audi's and Tesla's AI driving, for instance, was such a thing (prior to Tesla moving to their own custom chip solution).
 
You have completely undersold Microsoft. Check your headspace into an Azure clinic....
I use Azure ML from time to time.
They support big-time Nvidia GPU clusters for this process; you can order them up for processing. Quite expensive per run, I must say. That doesn't mean they are working on it.

There's just no proof of this yet. There is support for it, however. And ideally they have something in this space from their ML/AI chips for HoloLens, etc. But until we see something tangible, I can't support the notion that MS is going to be very far ahead on the AI front. It's reasonable to suspect that they are working in this area; I've been to Build in person, and they talk about Azure ML helping all these companies. I'm just waiting for something to actually talk about. And I haven't heard anything so far.
 


So, you are essentially saying that NVidia can train models... but Microsoft can't..? When Microsoft has very good competitive reasons to be doing so for EVERY game on Xbox...

While NVidia has only done 6 DLSS games to date in 20+ months..? This is where you lose me in this conversation. Especially when MS drops stuff like this:

https://images.anandtech.com/doci/15994/202008180206261.jpg
 
There are 2 models now and 6 games, all of which needed to retrofit their engines to support it. The model will work as long as it's integrated, and not every engine can support it easily.

MS can train a model. You need really smart people to do this type of thing. Your biggest mistake is thinking this is a hardware thing, that with enough hardware and power anyone can train and create these things.
You have to understand that Nvidia has been employing data scientists and supporting the data science community for a very long time. Deep learning was the innovation in ML that solved computer vision, and they have been working in this field building up tons of experience in knowing how to do this. It has nothing to do with hardware. It has everything to do with having the right people who know how to put this together.

In the same way, Google is the only one right now that has AlphaGo, and they had a team working on that AI for years. And despite how much cloud hardware was supporting it, it took years for them to create an AI that could accomplish this.

It takes brains, and there are only so many brains out there for a specific field of data science. Listen, I get that you're frustrated; you can ask the Sony fans here how frustrating I can get talking about some topics. MS is totally positioned and set up to provide what you think they can provide. They have the API, they have the hardware capable of doing the training, they have quite a few data scientists within MS who work on solutions for companies, and they continue to move further in this direction. But this AI upscale solution could come as quickly as next month, or by the end of the generation, or never at all, and that has everything to do with whether or not the team at MS can pull it off; it's not a hardware problem that needs to be solved.
 

There's massive incentive for MS to do something similar to DLSS. The question is whether they will, and whether they would also support it on PC.

Do we know how good the XSX is at ML acceleration? Bullet points are great, but they don't tell us much.
 

My question is around the cost. If NV are doing so much in-house... is AMD paying MS for something similar to DLSS? License fees?? MS doesn't do free or out of the goodness of their heart.
 
There's more than sufficient power in there to perform a good amount of ML. It will exceed the P100s I use at work: https://www.nvidia.com/en-us/data-center/tesla-p100/
The price differential is hilarious. Not sure what double-precision metrics look like on the XSX, though.
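For rough context, the low-precision throughput implied by the XSX's published 12.15 TFLOPS FP32 figure, assuming RDNA2-style packed math at 2x/4x/8x rates (the P100 FP16 number is from Nvidia's spec page for the PCIe SKU):

```python
# Rough low-precision throughput from published specs. Assumes XSX's
# 12.15 TFLOPS FP32 and RDNA2-style packed math (2x FP16, 4x INT8,
# 8x INT4); the P100 figure is Nvidia's spec-page number (PCIe SKU).
xsx_fp32 = 12.15                                # TFLOPS
print(f"XSX FP16: {xsx_fp32 * 2:.1f} TFLOPS")   # ~24.3
print(f"XSX INT8: {xsx_fp32 * 4:.1f} TOPS")     # ~48.6
print(f"XSX INT4: {xsx_fp32 * 8:.1f} TOPS")     # ~97.2

p100_fp16 = 18.7                                # TFLOPS
print(f"P100 FP16: {p100_fp16:.1f} TFLOPS")
```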
 