Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Discussion in 'Console Industry' started by BRiT, Jun 9, 2019.

Thread Status:
Not open for further replies.
  1. Nesh

    Nesh Double Agent
    Legend

    Joined:
    Oct 2, 2005
    Messages:
    13,249
    Likes Received:
    3,324
    It sounds like the impossible scenario when people were doubting 8 GB of GDDR5 on the PS4. I know it's far-fetched, but I hope we get surprised again in a positive way :p
     
    egoless likes this.
  2. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    19,469
    Likes Received:
    22,446
    I hear it can make 2 out of 1.69.
     
    PSman1700 and AzBat like this.
  3. Globalisateur

    Globalisateur Globby
    Veteran Regular Subscriber

    Joined:
    Nov 6, 2013
    Messages:
    4,356
    Likes Received:
    3,225
    Location:
    France
    Wow, the statement by a developer that the XSX is not really RDNA2 is causing plenty of sarcasm here.
     
  4. mpg1

    Veteran Newcomer

    Joined:
    Mar 5, 2015
    Messages:
    2,250
    Likes Received:
    1,996
    Speaking of magical upscaling...

    Microsoft's DirectML is the next-generation game-changer that nobody's talking about
    https://www.overclock3d.net/news/so...on_game-changer_that_nobody_s_talking_about/1
     
    PSman1700, Shifty Geezer and BRiT like this.
  5. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
  6. mpg1

    Veteran Newcomer

    Joined:
    Mar 5, 2015
    Messages:
    2,250
    Likes Received:
    1,996
    I've got a feeling they may use it in BC... but maybe that's what you're saying.
     
  7. Janne Kylliö

    Newcomer

    Joined:
    Oct 10, 2019
    Messages:
    55
    Likes Received:
    44
    Maybe. On the other hand, the XSX APU has that "8K" text etched on the die for some reason...
     
  8. mpg1

    Veteran Newcomer

    Joined:
    Mar 5, 2015
    Messages:
    2,250
    Likes Received:
    1,996
    Going beyond 4K, even with some type of reconstruction technique, would be wasted IMO. Unless the developer only ever has 30 fps in mind; then it might make sense.
     
  9. AzBat

    AzBat Agent of the Bat
    Legend Veteran

    Joined:
    Apr 1, 2002
    Messages:
    7,669
    Likes Received:
    4,676
    Location:
    Alma, AR
    In the grand scheme of things it doesn't matter what the difference in FLOPS is, because most 3rd-party developers will just target the lowest common denominator & so they will both be using the same FLOPS, no?

    Unless that difference is 1.9

    Tommy McClain
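    For reference, the back-of-the-envelope peak-FLOPS figure everyone compares just comes from CU count and clock. A minimal sketch in Python; the helper name and the CU counts and clocks below are purely hypothetical placeholders, not confirmed specs:

    Code:
    # Peak FP32 throughput for an RDNA-style GPU, back-of-the-envelope only.
    # The CU counts and clocks below are hypothetical placeholders, not specs.
    def peak_tflops(compute_units: int, clock_ghz: float) -> float:
        lanes_per_cu = 64   # 2 x SIMD32 per CU
        ops_per_lane = 2    # a fused multiply-add counts as two FLOPs
        return compute_units * lanes_per_cu * ops_per_lane * clock_ghz / 1000.0

    print(peak_tflops(36, 2.0))   # 36 CUs at 2.0 GHz -> about 9.2 TFLOPS
    print(peak_tflops(56, 1.7))   # 56 CUs at 1.7 GHz -> about 12.2 TFLOPS

    Which is why a small clock bump or a handful of extra CUs moves the headline number so much.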
     
  10. ToTTenTranz

    Legend Veteran

    Joined:
    Jul 7, 2008
    Messages:
    12,702
    Likes Received:
    7,705
    It's okay man. You did your best.
     
    lynux3 and AzBat like this.
  11. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    Sony makes some of the best ASIC upscalers, with the recent models using AI databases (don't they all now?). They could leverage that ASIC work to avoid wasting GPU cycles, but I don't know if it's really processing-intensive. The hard part is building the deep-learning data; during rendering it's just applying the inference model. There is no actual deep learning involved at runtime.
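    A minimal sketch of what that runtime split looks like, assuming a hypothetical model shipped as fixed weights; the function name and kernel values are illustrative, not any vendor's actual filter:

    Code:
    import numpy as np

    # Weights would come from offline training; at runtime it is inference only.
    KERNEL = np.array([[0.05, 0.10, 0.05],
                       [0.10, 0.40, 0.10],
                       [0.05, 0.10, 0.05]])  # stand-in for learned filter weights

    def upscale_2x(frame: np.ndarray) -> np.ndarray:
        """Nearest-neighbour upsample, then one learned-filter pass."""
        up = frame.repeat(2, axis=0).repeat(2, axis=1)
        padded = np.pad(up, 1, mode="edge")
        out = np.empty_like(up)
        h, w = up.shape
        for y in range(h):
            for x in range(w):
                out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * KERNEL)
        return out

    tile = np.random.rand(8, 8)      # stand-in for a low-res luma tile
    result = upscale_2x(tile)        # 16x16 output; no training happens here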
     
  12. Silenti

    Regular

    Joined:
    May 25, 2005
    Messages:
    698
    Likes Received:
    408
    I find myself having difficulty discerning between sarcastic, hyperbolic posts meant in jest and those that are meant seriously.
     
  13. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    3,997
    Likes Received:
    2,806
    Are they low-latency, though? That, to me, is the tricky part.
     
  14. mpg1

    Veteran Newcomer

    Joined:
    Mar 5, 2015
    Messages:
    2,250
    Likes Received:
    1,996
    According to Nvidia, having it run on dedicated hardware allows more flexibility:

    https://www.techspot.com/article/1992-nvidia-dlss-2020/

    "This first batch of results playing Control with the shader version of DLSS are impressive. This begs the question, why did Nvidia feel the need to go back to an AI model running on tensor cores for the latest version of DLSS? Couldn’t they just keep working on the shader version and open it up to everyone, such as GTX 16 series owners? We asked Nvidia the question, and the answer was pretty straightforward: Nvidia’s engineers felt that they had reached the limits with the shader version.

    Concretely, switching back to tensor cores and using an AI model allows Nvidia to achieve better image quality, better handling of some pain points like motion, better low resolution support and a more flexible approach. Apparently this implementation for Control required a lot of hand tuning and was found to not work well with other types of games, whereas DLSS 2.0 back on the tensor cores is more generalized and more easily applicable to a wide range of games without per-game training."
     
  15. AzBat

    AzBat Agent of the Bat
    Legend Veteran

    Joined:
    Apr 1, 2002
    Messages:
    7,669
    Likes Received:
    4,676
    Location:
    Alma, AR
    Mine was a twofer. Enjoy!

    Tommy McClain
     
    Silenti and London Geezer like this.
  16. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    Yeah, good question. I know their frame interpolator needs 45 ms, and there is other stuff in modern TVs that is also temporal; it's all useless for gaming. HDR post-processing also needs a few frames, depending on the brand.

    However, historically the scaling is usually delayed by only 32 or 64 scanlines; maybe that has changed.

    Still, an ASIC like this would need to have a really small footprint to warrant its inclusion. For example, if using the GPU required a 10% or 15% time slice then it might be worth it, but not if it's just freeing up 2%.

    It seems that we only see ASIC blocks where the gain is gigantic. Codecs are always the best candidates. Not sure about scaling; it used to be worth it when it was really simple algorithms, but this time it requires a lot more memory access than inline hardwired stuff.
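    To put those numbers in context, a quick frame-budget calculation; the 45 ms and the percentages are the ones above, and the 60 fps target is an assumption:

    Code:
    # Rough frame-budget arithmetic for the time-slice argument above.
    frame_budget_ms = 1000.0 / 60.0              # ~16.7 ms per frame at 60 fps

    for share in (0.02, 0.10, 0.15):             # GPU time slices mentioned above
        print(f"{share:.0%} of the frame = {share * frame_budget_ms:.2f} ms")

    # A 45 ms frame interpolator adds roughly 45 / 16.7 frames of latency,
    # i.e. almost three frames at 60 fps, which is why it's useless for games.
    print(f"{45.0 / frame_budget_ms:.1f} frames of added latency")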
     
  17. Riddlewire

    Regular

    Joined:
    May 2, 2003
    Messages:
    435
    Likes Received:
    292
    Perhaps because it's just 386 pages of people arguing about the color of Schrödinger's Cat.
     
    egoless, Silenti and Picao84 like this.
  18. bgroovy

    Regular Newcomer

    Joined:
    Oct 15, 2014
    Messages:
    799
    Likes Received:
    626
    According to GitHub, there is a tabby that lives in his neighborhood.
     
  19. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,511
    Likes Received:
    1,748
    I'd like a poll to see how many believe this.
     
  20. Nesh

    Nesh Double Agent
    Legend

    Joined:
    Oct 2, 2005
    Messages:
    13,249
    Likes Received:
    3,324
    How possible is it that MS could be incorporating some kind of special in-house machine learning technology (or one developed with AMD) where a lower-res image would be reconstructed to 8K, and that might cost less performance than actually rendering native 8K?
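    For a sense of scale, the pixel counts alone show how much such a reconstruction pass would have to save; the resolutions are the standard figures, and whether the ML step is actually cheaper is exactly the open question:

    Code:
    # Pixel counts: native 8K shades 4x the pixels of 4K and 16x those of 1080p.
    resolutions = {"1080p": (1920, 1080), "4K": (3840, 2160), "8K": (7680, 4320)}
    pixels_8k = 7680 * 4320

    for name, (w, h) in resolutions.items():
        print(f"{name}: {w * h:,} pixels; 8K shades {pixels_8k // (w * h)}x as many")

    # Reconstructing 8K from a 4K base only wins if the ML pass costs much less
    # than the extra 3x of shading work that going native 8K would add.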
     