Speculation: GPU Performance Comparisons of 2020 *Spawn*

The size of the models and training corpus is likely to be massive.
I agree with the gist of what you're saying, but I disagree here. Given the real-time requirements (plus symmetry and position & rotation invariance), I'd expect the model to be quite small, and I wouldn't expect the training set to be huge either - relatively speaking, I think we'd be talking GiB, not TiB. The real effort is in picking a good set that contains all the important corner cases.
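A rough back-of-the-envelope for why the real-time constraint pushes toward a small network - the 2 ms budget, ~49 INT8 TOPS figure and 30% utilisation below are assumptions for illustration, not numbers from this thread:

```python
# Back-of-the-envelope: how much network fits in a real-time upscaling budget?
# All numbers below are assumptions for illustration, not figures from this thread.
pixels = 3840 * 2160          # ~8.3M output pixels at 4K
budget_s = 2e-3               # assume ~2 ms of the frame for the upscaler
peak_ops = 49e12              # assume ~49 INT8 TOPS of theoretical throughput
utilisation = 0.3             # assume ~30% of peak is actually achievable

ops_per_pixel = peak_ops * utilisation * budget_s / pixels
print(f"~{ops_per_pixel:,.0f} INT8 ops per output pixel")   # roughly 3,500
```

A few thousand multiply-accumulates per output pixel only buys a handful of small convolution layers, which is consistent with the "small model, carefully chosen data" argument.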
 
I agree with the gist of what you're saying, but I disagree here. Given the real-time requirements (plus symmetry and position & rotation invariance), I'd expect the model to be quite small, and I wouldn't expect the training set to be huge either - relatively speaking, I think we'd be talking GiB, not TiB. The real effort is in picking a good set that contains all the important corner cases.
Yeah, you might be right in this respect; I've been so engaged in NLP lately that I keep thinking back to the BERT corpus.

I recalled the 16K SSAA image, but forgot they sampled it back down to 1080p first as a label before upscaling to 4K.
 
It takes brains, and there are only so many brains out there for a specific field of data science. Listen, I get that you're frustrated - you can ask the Sony fans here how frustrating I can get talking about some topics. MS is totally positioned and set up to provide what you think they can provide. They have the API, they have the hardware capable of doing the training, they have quite a few data scientists within MS who work on solutions for companies, and they continue to move further in this direction. But this AI upscaling solution could come as quickly as next month, by the end of the generation, or never at all, and that has everything to do with whether or not the team at MS can pull it off; it's not a hardware problem that needs to be solved.
Exactly. That's the point @nAo was trying to make earlier.
Upsampling an image using CNNs is relatively easy and fast these days. To do it in a temporally stable fashion while adding information (and not just hallucinating details) is much more challenging.
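To illustrate the "relatively easy and fast" half of that: a single-frame CNN upscaler really is only a few lines. A minimal PyTorch sketch of an ESPCN-style sub-pixel network - the layer widths are arbitrary, and it does nothing at all about temporal stability:

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Minimal ESPCN-style x2 upscaler: a few convs plus a sub-pixel shuffle.
    Layer widths are arbitrary; no temporal feedback, so it will shimmer."""
    def __init__(self, scale=2, channels=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 5, padding=2), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)   # rearranges channels into a 2x2 pixel grid

    def forward(self, x):
        return self.shuffle(self.body(x))

lr = torch.rand(1, 3, 540, 960)        # 960x540 input frame
print(TinyUpscaler()(lr).shape)        # torch.Size([1, 3, 1080, 1920])
```

The hard part described above - temporal stability and recovering real detail rather than hallucinating it - is exactly what a single-frame network like this doesn't give you.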
 
Frustrated? Machine learning isn't new. I am not the one having a hard time grasping someone other than NV doing ML in games.

Xbox will feature it. <- that is a game console.

I was the one who pointed out it wasn't a hardware thing (it isn't a technical achievement); it was about a sustainable business model. Microsoft has more AI training resources and more reason.

NV in two years hasn't produced much & their promises have not been kept. Perhaps that is where your frustration is?
 
I mean Microsoft could probably whip something like DLSS 1.0 up pretty easily. It will take time to refine, but it's not like they can't afford to do it or don't have the data or people. Microsoft is probably one of the top five companies doing AI research right now, and they have very easy access to game data given that DirectX is a thing and they own dozens of game studios; if they need something more, they could basically ask any developer to supply it. Also, close ties with both Nvidia and AMD mean they have access to hardware details if needed.

DirectML is just an API though; the actual implementation would be a different technology (a minimal sketch of that API-versus-model split follows below). AI upscaling tech is useful and powerful, and a generalized version would be really cool, but it could also be near impossible at the DirectML level. Currently DLSS isn't generalized enough that it could even be built in at the API level - it requires game-specific implementations. Maybe game engine makers will have to come up with their own implementations, but that is still beyond DLSS as of right now. The technology is too new to really know how it will shape up, IMO.

I don't think it's a given they'd be able to match DLSS 2.0 in quality from the get-go, but especially in the console space, with xCloud coming, they'd be foolish not to pursue AI-driven upscaling. If they can render at 60% resolution in the cloud, it would save massive power; it would be even better if they could somehow stream the 60%-resolution frame and reconstruct it on the client, but that would be a whole other set of problems to solve.
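On that API-versus-implementation split: DirectML (or ONNX Runtime's DirectML backend) just executes whatever model you hand it, and all of the DLSS-like value lives in that model. A hedged sketch using ONNX Runtime's DirectML execution provider - the model file, input name and tensor shape are made up for illustration:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model file; DirectML doesn't ship an upscaler, you bring your own.
session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider"],   # DirectML backend (onnxruntime-directml package)
)

low_res = np.random.rand(1, 3, 540, 960).astype(np.float32)   # placeholder frame
high_res = session.run(None, {"input": low_res})[0]           # "input" name is model-dependent
print(high_res.shape)
```

Everything interesting - the network architecture, the training data, the temporal inputs - sits inside that hypothetical upscaler.onnx, which is the part Microsoft would still have to build.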
 
Frustrated? Machine learning isn't new. I am not the one having a hard time grasping someone other than NV doing ML in games.

Xbox will feature it. <- that is a game console.

I was the one who pointed out it wasn't a hardware thing (it isn't a technical achievement); it was about a sustainable business model. Microsoft has more AI training resources and more reason.

NV in two years hasn't produced much & their promises have not been kept. Perhaps that is where your frustration is?
Sure, I'm frustrated with my work, lol. I spend the majority of my time on data and feature engineering and very little time actually doing any sort of machine learning. Many companies are now just dumping TBs and PBs of data into a Hadoop cluster and saying "here's the data, get predictions working". Managing expectations is a big part of the job now.

When the time comes for MS to announce something, I'm fully on board. Until that day, I'm unsure how long it will take them to develop that solution, whether they are working on it at all, or whether they are hoping someone else will develop it and MS stays hands-off.
 
About half as fast as an RTX 2060, according to Digital Foundry - or about 5 ms per frame at 4K. So enough to be worthwhile, I guess.

I think that's assuming any upscaling models run on INT8, right? At least I only recall seeing INT4 and INT8 rates advertised for Series X.
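The "about half an RTX 2060" figure does line up with theoretical INT8 throughput. A quick sanity check - the Series X numbers follow from its published CU count and clock, while the RTX 2060 tensor figure is an approximate public number, not something from this thread:

```python
# Series X: 52 CUs x 64 lanes x 2 ops (FMA) x 1.825 GHz ~= 12.1 TFLOPS FP32,
# and the packed int8 dot-product path gives 4x that rate in INT8 ops.
xsx_fp32 = 52 * 64 * 2 * 1.825e9          # ~1.21e13 FLOPS
xsx_int8 = 4 * xsx_fp32                   # ~4.9e13 -> ~49 TOPS
rtx2060_int8 = 100e12                     # roughly 100 INT8 TOPS via tensor cores (approximate)

print(f"Series X INT8: {xsx_int8 / 1e12:.0f} TOPS")
print(f"Ratio vs RTX 2060: {xsx_int8 / rtx2060_int8:.2f}")   # ~0.49, i.e. about half
```

Half the throughput roughly doubles the per-frame cost of running the same network, which is where the ~5 ms at 4K estimate comes from.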
 
I think that's assuming any upscaling models run on INT8, right? At least I only recall seeing INT4 and INT8 rates advertised for Series X.
RDNA should support INT4 and INT8 natively according to the whitepaper (whether RPM supports it, I do not know).
As I understand it, the customizations for ML on XSX are mixed-precision dot products for INT4 and INT8 respectively. The RDNA whitepaper indicates that you need a different variant of the CU to support this, specifically for the ML domain.

How those are used in specific applications is outside my understanding.
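For what those mixed dot-product instructions actually compute: four 8-bit values packed into a 32-bit operand are multiplied pairwise against another packed operand and summed into a 32-bit accumulator, all in one instruction. A scalar Python illustration of the semantics (just to show the operation, not how it would ever be written for the GPU):

```python
import struct

def dot4_i8(a_packed: int, b_packed: int, acc: int) -> int:
    """Semantics of a packed signed-int8 dot product with 32-bit accumulate:
    acc += sum(a[i] * b[i]) over the four int8 lanes packed in each 32-bit word."""
    a = struct.unpack("4b", a_packed.to_bytes(4, "little"))
    b = struct.unpack("4b", b_packed.to_bytes(4, "little"))
    return acc + sum(x * y for x, y in zip(a, b))

# Four int8 weights and activations packed little-endian into one 32-bit word each.
a = int.from_bytes(struct.pack("4b", 1, -2, 3, -4), "little")
b = int.from_bytes(struct.pack("4b", 10, 20, 30, 40), "little")
print(dot4_i8(a, b, acc=0))   # 1*10 + (-2)*20 + 3*30 + (-4)*40 = -100
```

Getting four (or, with INT4, eight) multiply-accumulates out of one lane per clock is where the 4x/8x throughput over FP32 comes from.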
 
As I understand it, the customizations for ML on XSX are mixed-precision dot products for INT4 and INT8 respectively. The RDNA whitepaper indicates that you need a different variant of the CU to support this, specifically for the ML domain.
Those are the Vega 20 instructions transplanted, aren't they?
 
I think that's assuming any upscaling models run on INT8, right? At least I only recall seeing INT4 and INT8 rates advertised for Series X.

From memory, yes, I think it was based on theoretical INT8 throughput. So there could presumably be other factors that influence actual performance one way or the other.
 
Sure, I'm frustrated with my work, lol. I spend the majority of my time on data and feature engineering and very little time actually doing any sort of machine learning. Many companies are now just dumping TBs and PBs of data into a Hadoop cluster and saying "here's the data, get predictions working". Managing expectations is a big part of the job now.

When the time comes for MS to announce something, I'm fully on board. Until that day, I'm unsure how long it will take them to develop that solution, whether they are working on it at all, or whether they are hoping someone else will develop it and MS stays hands-off.

Stir that data until the most fragile, overfit predictions anyone's ever seen come out - I believe in you!

Regardless, as for games, we'll see machine learning added to upscaling and TAA as time goes on. There are already papers on it - reshading samples from previous frames and the like. I'm sure the ever-impressive Call of Duty guys will show up and upscale 1080p to 4K or something soon enough, and probably others as well down the line.
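The "reshading samples from previous frames" idea is essentially the TAA-style accumulation these techniques build on: reproject last frame's result with motion vectors, then blend it with the new frame. A bare-bones numpy sketch of that accumulation step - with no disocclusion or history-rejection handling, which is where the real difficulty (and the ML) comes in:

```python
import numpy as np

def temporal_accumulate(history, current, motion, alpha=0.1):
    """Reproject last frame's accumulated colour using per-pixel motion vectors,
    then exponentially blend in the new frame. history/current: (H, W, 3),
    motion: (H, W, 2) pixel offsets from each current pixel back to the previous frame."""
    h, w = current.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Where each current pixel was last frame (nearest-neighbour fetch, clamped to the frame).
    prev_y = np.clip((ys + motion[..., 1]).round().astype(int), 0, h - 1)
    prev_x = np.clip((xs + motion[..., 0]).round().astype(int), 0, w - 1)
    reprojected = history[prev_y, prev_x]
    return (1 - alpha) * reprojected + alpha * current

history = np.zeros((540, 960, 3))
current = np.random.rand(540, 960, 3)
motion  = np.zeros((540, 960, 2))            # static camera for the example
print(temporal_accumulate(history, current, motion).mean())
```

Learned approaches essentially replace the fixed blend weight and rejection heuristics here with a network.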
 
I can see a new era of fanboyism starting, if RDNA 2 delivers, now that Nvidia has gone for doubling the FP32 ALUs. The good old "1 AMD ALU is weaker than 1 Nvidia SP/CUDA core" talk is going to flip in the other direction.

It would also be interesting to see how AMD marketing counters the halo numbers from Nvidia, e.g. 10,000+ CUDA cores in the RTX 3090 vs. (allegedly) 5,120 ALUs in Navi 21.
:LOL:
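For reference, the halo numbers in raw FP32 terms - the RTX 3090 figures are Nvidia's published specs, while the Navi 21 clock below is a pure guess, since nothing had been announced:

```python
# Peak FP32 = ALUs x 2 ops per clock (FMA) x clock.
rtx3090 = 10496 * 2 * 1.70e9 / 1e12      # ~35.7 TFLOPS (published boost clock)
navi21  = 5120  * 2 * 2.2e9  / 1e12      # ~22.5 TFLOPS at an assumed ~2.2 GHz
print(f"RTX 3090 ~{rtx3090:.1f} TFLOPS, Navi 21 ~{navi21:.1f} TFLOPS")
```

Per ALU it's the same two FLOPs per clock on both sides, which is why counting "cores" across vendors says so little either way.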
 
I think the big issue AMD are going to have this time around is the apples-to-oranges problem in benchmarks. Are reviewers going to compare everything at native resolution and then create second sets comparing AMD native to Nvidia DLSS? I can see that happening a lot.
 
I think the big issue AMD are going to have this time around is the apples-to-oranges problem in benchmarks. Are reviewers going to compare everything at native resolution and then create second sets comparing AMD native to Nvidia DLSS? I can see that happening a lot.

I don't think that will be a problem at all. As it stands today, reviewers always include DLSS-off numbers, and there's no reason for that to change.
 