Speculation: GPU Performance Comparisons of 2020 *Spawn*

Discussion in 'Architecture and Products' started by eastmen, Jul 20, 2020.

Thread Status:
Not open for further replies.
  1. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,317
    Likes Received:
    149
    Location:
    On the path to wisdom
    I agree with the gist of what you're saying, and I disagree here. Given the real-time requirements (plus symmetry, position & rotation invariance) I'd expect the model to be quite small, and I wouldn't expect the training set to be huge, either - relatively speaking; I think we'd be talking GiB not TiB. The real effort is in picking a good set that contains all the important corner cases.
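For scale, a back-of-envelope sizing sketch (every number below is an illustrative assumption, not anyone's actual training set) shows how a curated set of low-res/high-res frame pairs can land in the hundreds-of-GiB range rather than TiB:

```python
# Hypothetical sizing of a supervised upscaling training set.
# All figures are illustrative assumptions.
frames = 10_000                 # curated frames covering the corner cases
low_res = (1920, 1080)          # network input resolution
high_res = (3840, 2160)         # ground-truth label resolution
channels = 3                    # RGB
bytes_per_channel = 2           # fp16 storage

def frame_bytes(w, h):
    return w * h * channels * bytes_per_channel

pair = frame_bytes(*low_res) + frame_bytes(*high_res)
total_gib = frames * pair / 2**30
print(f"~{total_gib:.0f} GiB uncompressed")  # ~579 GiB
```

Compression and deduplication would shrink that further; the point is that careful curation, not raw volume, dominates the effort.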
     
    w0lfram and Bondrewd like this.
  2. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    12,046
    Likes Received:
    13,421
    Location:
    The North
    Yeah, you might be right in this respect. I've been too engaged in NLP lately, so I keep thinking back to the BERT corpus.

    I recalled the 16K SSAA image, but forgot they sampled it back down to 1080p first as a label before upscaling to 4K.
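A minimal sketch of the label-generation step being recalled here, assuming a simple box filter (the actual DLSS training pipeline is not public): average a supersampled render down to the target label resolution.

```python
# Hypothetical sketch: box-filter (average) downsampling of a
# supersampled frame to produce a clean training label.
# Pure-Python, single-channel, just to show the idea.
def downsample(img, factor):
    """Average-pool a 2D list-of-lists image by an integer factor."""
    h, w = len(img), len(img[0])
    assert h % factor == 0 and w % factor == 0
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / (factor * factor))
        out.append(row)
    return out

frame = [[1.0] * 8 for _ in range(8)]   # an 8x8 supersampled tile
label = downsample(frame, 4)            # -> 2x2 averaged label
print(len(label), len(label[0]))        # 2 2
```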
     
  3. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,811
    Likes Received:
    2,705
    Exactly. That's the point @nAo was trying to make earlier.
     
    BRiT likes this.
  4. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    242
    Likes Received:
    43
    Frustrated? Machine learning isn't new. I am not the one having a hard time grasping someone other than NV doing ML in games.

    Xbox will feature it. <- that is a game console.

    I was the one who pointed out it wasn't a hardware thing (it isn't a technical achievement); it was about a sustainable business model. Microsoft has more AI training resources and more reason.

    NV in two years hasn't produced much, and their promises have not been kept. Perhaps that is where your frustration is?
     
  5. Esrever

    Regular Newcomer

    Joined:
    Feb 6, 2013
    Messages:
    812
    Likes Received:
    595
    I mean, Microsoft could probably whip something like DLSS 1.0 up pretty easily. It will take time to refine, but it's not like they can't afford to do it or don't have the data or people. Microsoft is probably one of the top five companies doing AI research right now, and they probably have very easy access to game data given that DirectX is a thing and they own dozens of game studios; if they need more data, they could basically ask any developer to supply it. Close ties with both Nvidia and AMD also mean they have access to hardware details if needed.

    DirectML is just an API though; the actual implementation would be a different technology. I mean, upscaling AI tech is useful and powerful, and a generalized version would be really cool but could also be near impossible at the DirectML level. Currently DLSS isn't generalized enough that it could even be built in at the API level; it requires game-specific implementations. Maybe game engine makers will have to come up with their own implementations, but that is still beyond DLSS as of right now. The technology is too new to really know how it will shape up, imo.

    I don't think it's a given they'd be able to match DLSS 2.0 in quality from the get-go, but especially in the console space and with the coming of xCloud, they'd be foolish not to pursue AI-driven upscaling. If they can render at 60% resolution in the cloud, it would save massive power. It would be even better if they could somehow stream the 60%-resolution frames and reconstruct them at the client, but that would be a whole other bunch of problems to solve.
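The pixel-count arithmetic behind "render at 60% resolution" is worth spelling out, since the saving compounds across both axes:

```python
# Scaling each axis by 0.6 means shading only 0.36x of the
# native pixel count, i.e. a 64% reduction in shaded pixels.
native = (3840, 2160)           # 4K target
scale = 0.6
internal = (int(native[0] * scale), int(native[1] * scale))
saved = 1 - (internal[0] * internal[1]) / (native[0] * native[1])
print(internal, f"{saved:.0%} fewer pixels shaded")  # (2304, 1296) 64% fewer pixels shaded
```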
     
  6. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    12,046
    Likes Received:
    13,421
    Location:
    The North
    Sure, I'm frustrated with my work, lol. I spend the majority of my time on data and feature engineering and very little time actually doing any sort of machine learning. Many companies are now just dumping TBs and PBs of data into a Hadoop cluster and saying: here's the data, get predictions working. Managing expectations is a big part of the job now.

    When the time comes for MS to announce something, I'm fully on board. Until that day, I'm unsure how long it will take for them to develop that solution, or whether they are working on it at all (or they are hoping someone else will develop it and MS is just hands-off).
     
    #206 iroboto, Aug 27, 2020
    Last edited: Aug 27, 2020
    milk, PSman1700, pjbliverpool and 3 others like this.
  7. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,070
    Likes Received:
    1,682
    Location:
    Guess...
    About half as fast as an RTX 2060 according to Digital Foundry, or about 5 ms per frame at 4K. So enough to be worthwhile, I guess.
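A quick sanity check of that figure (the RTX 2060 baseline cost here is an assumed number, purely to illustrate the arithmetic):

```python
# If an RTX 2060 evaluates the upscaling network in ~2.5 ms at 4K
# (assumed figure), running at half that throughput doubles the cost.
rtx2060_ms = 2.5            # assumed network cost on RTX 2060 at 4K
relative_speed = 0.5        # "about half as fast"
cost_ms = rtx2060_ms / relative_speed
frame_budget_ms = 1000 / 60
print(f"{cost_ms:.1f} ms = {cost_ms / frame_budget_ms:.0%} of a 60 fps frame")
```

Five milliseconds is a big slice of a 16.7 ms frame budget, which is why the upscale has to buy back more render time than it costs.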
     
    PSman1700, pharma and BRiT like this.
  8. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    17,341
    Likes Received:
    17,820
    I think that's assuming any upscaling models run on INT8, right? At least I only recall seeing INT4 and INT8 rates advertised for Series X.
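For reference, the Series X ML figures Microsoft has quoted follow from the usual packed-math assumption, 4x int8 or 8x int4 operations per FP32 lane per clock:

```python
# Packed-math arithmetic behind the advertised Series X ML rates.
# 12.15 TFLOPS FP32 = 52 CUs x 64 lanes x 2 ops x 1.825 GHz.
fp32_tflops = 12.15
int8_tops = fp32_tflops * 4     # 4x int8 dot-product ops per lane
int4_tops = fp32_tflops * 8     # 8x int4 dot-product ops per lane
print(round(int8_tops, 1), round(int4_tops, 1))  # 48.6 97.2
```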
     
  9. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    12,046
    Likes Received:
    13,421
    Location:
    The North
    RDNA should support int4 and int8 natively according to the whitepaper (whether RPM supports them, I do not know).
    As I understand it, the customizations for ML on XSX are mixed dot products for int4 and int8 respectively. The RDNA whitepaper indicates that you need a different variant of CU to support this, specifically for the ML domain.

    How those are used in specific applications is outside my understanding.
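As a scalar model of what such a mixed dot-product instruction computes (RDNA's ISA names the int8 variant V_DOT4_I32_I8), each lane folds four signed 8-bit products into a 32-bit accumulator per clock:

```python
# Scalar model of a mixed int8 dot-product-accumulate instruction:
# four signed int8 products summed into an int32 accumulator.
def dot4_i32_i8(a4, b4, acc):
    """a4, b4: four signed int8 values; acc: int32 accumulator."""
    for a, b in zip(a4, b4):
        assert -128 <= a <= 127 and -128 <= b <= 127
        acc += a * b
    return acc

# 1*5 + (-2)*6 + 3*(-7) + 4*8 = 5 - 12 - 21 + 32 = 4
print(dot4_i32_i8([1, -2, 3, 4], [5, 6, -7, 8], 0))  # 4
```

This is exactly the inner loop of a quantized convolution or matrix multiply, which is why the rate quadruples relative to FP32.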
     
    BRiT likes this.
  10. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
    Those are Vega 20 instructions transplanted, aren't they?
     
  11. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    12,046
    Likes Received:
    13,421
    Location:
    The North
    Possibly; I didn't check out the Vega whitepaper.
     
  12. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    8,070
    Likes Received:
    1,682
    Location:
    Guess...
    From memory, yes, I think it was based on theoretical INT8 throughput. So there could presumably be other factors that influence actual performance one way or the other.
     
    BRiT likes this.
  13. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    572
    Likes Received:
    266
    Stir that data until the most fragile, overfit predictions anyone's ever seen come out, I believe in you!

    Regardless, as for games, we'll see machine learning added to upscaling and TAA as time goes on. There are already papers on it, reshading samples from previous frames and the like. I'm sure the ever-impressive Call of Duty guys will show up and upscale 1080p to 4K or something soon enough, and probably others as well down the line.
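The simplest form of the "samples from previous frames" idea is the exponential history blend TAA already uses, which the learned approaches build on. A toy per-pixel sketch (alpha here is an arbitrary illustrative value):

```python
# Toy sketch of TAA-style history accumulation: exponentially
# blend the current frame's sample into reprojected history.
def temporal_blend(current, history, alpha=0.1):
    """Fold the new sample into the running history."""
    return alpha * current + (1 - alpha) * history

history = 0.0
for _ in range(30):          # a stable pixel observed for 30 frames
    history = temporal_blend(1.0, history)
print(round(history, 3))     # converges toward 1.0 (exactly 1 - 0.9**30 here)
```

The ML twist is replacing the fixed alpha and rejection heuristics with a learned function of the history, motion vectors, and current samples.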
     
    iroboto likes this.
  14. pTmdfx

    Regular Newcomer

    Joined:
    May 27, 2014
    Messages:
    341
    Likes Received:
    281
    I can see a new era of fanboyism starting if RDNA 2 delivers, now that Nvidia has gone for doubled FP32 ALUs. The good old "1 AMD ALU is weaker than 1 Nvidia SP/CUDA core" talk is gonna flip in the other direction.

    It would also be interesting to see how AMD marketing counters the halo numbers from Nvidia, e.g. 10,000+ CUDA cores in the RTX 3090 vs. (allegedly) 5120 ALUs in Navi 21.
    :lol2:
     
  15. Bondrewd

    Veteran Newcomer

    Joined:
    Sep 16, 2017
    Messages:
    1,042
    Likes Received:
    441
    Absolutely.
    Time for everyone to rewarm their old benches.
    Nah it's exactly 5120.
     
  16. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,717
    Likes Received:
    1,080
    Location:
    France
    Perfs and price will talk, as always.
     
  17. madhatter

    Newcomer

    Joined:
    Jul 23, 2020
    Messages:
    21
    Likes Received:
    17
    :-D
    Wrong link, I meant to post this:
     
    hurleybird and Lightman like this.
  18. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,907
    Likes Received:
    4,109
    Location:
    Pennsylvania
    I think the big issue AMD are going to have this time around is the apples-to-oranges problem in benchmarks. Are reviewers going to compare everything at native resolution and then create a second set of charts comparing AMD native to Nvidia DLSS? I can see that happening a lot.
     
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,881
    Likes Received:
    1,100
    Location:
    New York
    I don't think that will be a problem at all. As it stands today reviewers always include DLSS off numbers and there's no reason for that to change.
     
  20. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    11,498
    Likes Received:
    6,260
    Same thing with Radeon Image Sharpening AFAIK.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.