Next Generation Hardware Speculation with a Technical Spin [post E3 2019]

Discussion in 'Console Technology' started by DavidGraham, Jun 9, 2019.

  1. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,409
    Likes Received:
    8,611
    Location:
    Cleveland
    Maybe it's the 14+4 thing again. :runaway:



    Where the devs can allocate tasks however they see fit with the GPU resources.
     
    egoless likes this.
  2. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,767
    Likes Received:
    1,888
    I have a feeling there are two different hardware solutions for how RT is implemented in the next-generation systems. I think Scarlett's RT solution will be more of an AMD engineering effort, but with a certain amount of bespoke features or instruction sets (e.g., DirectX feature sets) that Microsoft wants implemented directly. I think Sony's RT solution will either be in-house tech or something from an unnamed/unannounced third-party partnership.
     
    #842 Shortbread, Sep 10, 2019
    Last edited: Sep 10, 2019
  3. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    5,434
    Likes Received:
    3,933
    If having RT as part of each compute unit makes them only a little bigger, it would be the clear choice (we don't know; Nvidia's RT looks like a big addition, but it's not deeply integrated). But imagine if it made them twice as big: I could see a reason to give only a few CUs the capability, or to keep RT separate as almost an ASIC-like function. Or to put it closer to the CPU, since traversal is better suited to CPU memory access patterns than to the GPU's.

    Also, a separate unit design could make it simple to add some IP from the companies who spent decades trying to make ray tracing ASICs.
     
    egoless and BRiT like this.
  4. Globalisateur

    Globalisateur Globby
    Veteran Regular

    Joined:
    Nov 6, 2013
    Messages:
    2,907
    Likes Received:
    1,642
    Location:
    France
    We know AMD is cooking up a hardware-based RT solution for RDNA2. Designing GPU features (hardware + software) is expensive, so Sony and Microsoft are most likely going to use the exact same hardware solution: the cheaper one designed by AMD.

    They'd better use that money for something else, like more memory, more CUs (so most probably better RT), or better cooling to allow higher clocks (again better RT, if RT is embedded in the CUs as the AMD texture-processor patent alleges).
     
  5. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,856
    Likes Received:
    4,466
    Judging from the difference between TU106 and TU116, it seemingly doesn't need a lot of silicon.
    The TU106 has practically 50% more functional units than the TU116 across the board: 50% more shader cores, 50% more TMUs, and 33% more ROPs and memory channels.

    The TU116 has 6.6B transistors and a 284mm^2 die; the TU106 has 10.8B transistors and 445mm^2.
    A TU116 scaled by 150% would be 9.9B transistors and 426mm^2.

    Considering the TU116 also lacks the AI cores, the RT cores really aren't taking a lot of space.
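    The scaling estimate above can be reproduced with quick arithmetic (the figures are the public TU116/TU106 transistor and die-size numbers quoted above; linear scaling is a rough sketch, not a real area model):

    ```python
    # Rough estimate of the silicon cost of TU106's RT + AI cores, made by
    # linearly scaling TU116 (which lacks both) up to TU106's unit counts.
    tu116_transistors_b, tu116_area_mm2 = 6.6, 284
    tu106_transistors_b, tu106_area_mm2 = 10.8, 445

    scale = 1.5  # TU106 has ~50% more shader cores and TMUs than TU116
    scaled_transistors = tu116_transistors_b * scale   # ~9.9 B
    scaled_area = tu116_area_mm2 * scale               # 426 mm^2

    extra_transistors = tu106_transistors_b - scaled_transistors  # ~0.9 B
    extra_area = tu106_area_mm2 - scaled_area                     # ~19 mm^2
    print(f"RT + AI overhead: ~{extra_transistors:.1f}B transistors, ~{extra_area:.0f} mm^2")
    ```

    On this crude estimate, the RT and AI blocks together account for under 1B of TU106's 10.8B transistors, which is the basis for the "not a lot of space" conclusion.
    
    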



    The patents are pointing to the RT hardware being embedded into the TMUs.
     
    iroboto and anexanhume like this.
  6. Nisaaru

    Regular

    Joined:
    Jan 19, 2013
    Messages:
    870
    Likes Received:
    195
    I would actually expect the opposite to be more likely, because I don't think Sony is in the position, nor willing, to do any complex design work since the PS3 disaster. The PS4/Pro were pretty straightforward AMD designs with a few tweaks. I see MS as far more willing to do custom design work, as their past consoles since the 360 have shown.

    The Sony of the 80s/90s, with broad engineering power and ambition, doesn't really exist anymore, at least in the areas I care about, like audio/TV.
     
  7. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,788
    Likes Received:
    6,080
    I suppose if there is customization, and moving it closer to the CPU is a rumour, that wouldn't show up in a patent and for obvious reasons wouldn't be part of the main dGPU line.

    I think Square said they were working on their own RT API that would hook into both DXR and whatever RT API Sony is working on. That gives weight to the probability that we're seeing similar implementations.
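    As a sketch of what such an engine-side abstraction could look like: the engine codes against one interface and each backend maps the calls onto its native RT API. All names here are hypothetical illustrations, not Square Enix's actual API:

    ```python
    # Hypothetical engine-side ray tracing abstraction. Each backend maps
    # the same two operations onto its platform's native API (DXR on one,
    # a console-specific RT API on the other). Names are illustrative only.
    from abc import ABC, abstractmethod

    class RayTracer(ABC):
        @abstractmethod
        def build_acceleration_structure(self, geometry): ...
        @abstractmethod
        def dispatch_rays(self, width, height): ...

    class DXRBackend(RayTracer):
        def build_acceleration_structure(self, geometry):
            return f"DXR BLAS/TLAS for {len(geometry)} meshes"
        def dispatch_rays(self, width, height):
            return f"DXR DispatchRays {width}x{height}"

    class ConsoleBackend(RayTracer):
        def build_acceleration_structure(self, geometry):
            return f"native AS for {len(geometry)} meshes"
        def dispatch_rays(self, width, height):
            return f"native ray dispatch {width}x{height}"

    def render(rt: RayTracer, scene):
        # Engine code is identical regardless of which backend is plugged in.
        rt.build_acceleration_structure(scene)
        return rt.dispatch_rays(1920, 1080)
    ```

    The sketch only works cleanly if both platforms expose comparable capabilities (acceleration-structure builds, ray dispatch), which is exactly the hardware overlap being inferred in the post.
    
    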
     
  8. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,906
    Likes Received:
    1,065
    Your logic sounds backwards to me.
    If they were the same, there would be less reason to separate it out, not more.

    Not that I personally believe it either way.
     
  9. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,409
    Likes Received:
    8,611
    Location:
    Cleveland
    It can never be the same from a software perspective, regardless of identical hardware, because DXR will never be on Sony platforms unless they opt to license it from Microsoft.
     
  10. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,767
    Likes Received:
    1,888
    I don't believe at the moment that both are using the same Navi solution, or the same iteration of it. I think Sony's Navi architecture may be the earliest version of the architecture, while Microsoft will have the more feature-robust one with RT logic. I think Sony had its version locked last year, only updating clocks (if possible) up to this point.

    So, my tinfoil hat is leaning more towards in-house technology, or a vendor partnership other than AMD.

    Sony has the facilities, engineers, finances and possibly the know-how to come up with valid designs/tech, even if it requires partnering. Sony isn't a cu** hair short of being bankrupt... maybe a bushel though. j/k :razz:
     
    #850 Shortbread, Sep 11, 2019
    Last edited: Sep 11, 2019
  11. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,906
    Likes Received:
    1,065
    I agree, but I'm talking from a general engine-design point of view, i.e. if it were the same, you would have less reason to separate it into its own module rather than implement it structurally in the same fashion.

    Sony wouldn't use DXR, and it wouldn't use DX12 either.
    I'm just highlighting that his logic seems backwards to me. Personally, I'm not reading anything more into it beyond them saying they're ready for both systems.
     
  12. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,788
    Likes Received:
    6,080
    I guess the assumption is that if the two RT APIs are functionally the same, then the hardware supports the same features/functions. If there weren't enough overlap between the two hardware-wise, we would/should see it show up as massive differences between the Sony and MS API functions; and if that were the case, Square wouldn't waste the effort of creating their own API.

    edit:
    https://wccftech.com/square-enix-sh...minous-engine-prepared-for-next-gen-consoles/
     
    #852 iroboto, Sep 11, 2019
    Last edited: Sep 11, 2019
  13. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    5,434
    Likes Received:
    3,933
    MS have more incentive to design a higher-level API that is mostly compatible between their Windows platform and their console offering.

    Sony have no such constraint. They would simply add to GNM whatever is required for ray tracing, based on what the PS5 hardware can do.

    Third parties will have trouble developing their cross-platform engines if the underlying algorithmic capabilities are very different between the two platforms. The API itself seems to be less of a problem, as long as the capabilities can be mapped without having to reinvent the wheel.
     
    milk likes this.
  14. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,952
    Likes Received:
    2,514
    I read it as: the guy at The Coalition knows Scarlett has hardware ray tracing support, like everybody else does, because it's public info (MS at E3). But he's a software developer, not a hardware engineer or a Beyond3D nerd, and he's speaking in broad strokes; his salary will still come in independently of whether some no-lives like us take his exact words and extrapolate whether AMD's DXR implementation for Scarlett relies on discrete RT cores, texture units, or whatever else.
     
    ToTTenTranz and vjPiedPiper like this.
  15. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,767
    Likes Received:
    1,888
  16. Globalisateur

    Globalisateur Globby
    Veteran Regular

    Joined:
    Nov 6, 2013
    Messages:
    2,907
    Likes Received:
    1,642
    Location:
    France
    Not for GPUs, if we go by historical precedent since 2013. The PS4 and Pro both have in-house, exclusive custom hardware features in their GPUs that aren't present in any other APU or GPU, AFAIK.

    The XB1 and XBX GPUs are 100% off-the-shelf AMD hardware. What is custom in them is the number of CUs, cache sizes, memory controller width and such, but nothing really exclusive to MS platforms. And that makes total sense, as they support two platforms: custom GPU hardware wouldn't fit their PC + console software strategy.
     
    senis_kenis likes this.
  17. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,906
    Likes Received:
    1,065
    The biggest mistake MS could make is designing Scarlett with the PC in mind.
    The base tech/IP will be the same, so 90%+ will be shared, and that includes the PS5.
    The reason I think it would be such a big mistake is that you then compromise your design: it ends up either more expensive or less performant than it needed to be.
    The PC will either brute-force it, catch up in time, or work around it.
    In the beginning, if that means the PC doesn't get certain games, so be it (let's be honest, games blocked purely by architecture would be few and far between).
    But limiting first- and third-party devs would be a mistake, especially when you know the PS5 wouldn't be making those kinds of compromises.

    MS will support DX, toolsets etc., and that's enough to maintain good cross-platform support.
     
    milk likes this.
  18. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,767
    Likes Received:
    1,888
    This isn't necessarily true. The DMA (Move) engines were directly coupled to / integrated into the Xbox One's GPU architecture, making it a very bespoke design.
     
    milk, BRiT and function like this.
  19. Nisaaru

    Regular

    Joined:
    Jan 19, 2013
    Messages:
    870
    Likes Received:
    195
    I'm not arguing about their bankruptcy. :) They're just using other companies' base technologies now and selling them with some of their own bells & whistles plus the label. Look at their TVs: OLED panels from LG, and whatever LCD manufacturer. Their proud hi-fi audio division is gone. There was a time when people were curious about what they'd manage to engineer each year, and those times have been gone for 15+ years.

    The XB1 design is more complex in its changes than the PS4 alone. I'm not talking about power here. I see the PS4 as a straightforward AMD GPU/CPU design with slight tuning; nothing major there. The X1X I'm not exactly sure about. We all know MS talked about their extensive profiling of the architecture to locate speed bumps, but I'm not sure how that translates into actual design changes beyond buffer and CPU tweaks.

    You surely have a point about RT and PC compatibility, and I'm not saying that MS will deliver something custom for RT; just that I consider it more likely than Sony doing so, for the reasons mentioned.
     
  20. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,788
    Likes Received:
    6,080
    That's an extremely limited view of what custom means here. The entire Kinect/sound space is all custom creation. The changes to the microcontroller that enable ExecuteIndirect with state changes, and the changes to the microcontroller that reduce the CPU load of DX12 function calls: those are exclusive to Xbox platforms and continue to be.

    As for changes to cache sizes and memory controllers: even if we ignore customization for customization's sake, we're seeing significantly better performance out of the 1X over the 4Pro, in excess of their power difference. Freeing the SoC of its bottlenecks so it can operate competently at 4K is pretty massive; understating that feat is folly in this discussion. Whole architectures are designed around target goals.

    We have discussions across many forums about RDNA outputting X% more performance per TF over Polaris in the next-gen console threads.

    What MS managed with 33% more TF than their competitor, on the same architecture, with a similar power profile and the same node, should be considered very successful in terms of their customizations, considering we see third-party titles operating at double the resolution more often than the spec gap alone would predict, with a lot of titles at 1440p on 4Pro vs the full 2160p on X1X. Let's not even get into the titles where the 4Pro operates at 1080p vs 4K on X1X.

    And I don't really care whether X1X outperforms 4Pro, it's not really relevant; but it seems fairly reductive to call the X1X just off-the-shelf components while it's outperforming its competitor without even requiring a feature like rapid packed math, or any of the lot.
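    For scale on those resolution comparisons, the pixel ratios work out as follows (a quick arithmetic sketch using the standard 16:9 resolutions quoted above; the TF figures are not recomputed here):

    ```python
    # Pixel counts behind the resolution comparisons above (16:9, integer math).
    def pixels(height):
        width = height * 16 // 9  # 1920, 2560, 3840 for the heights used here
        return width * height

    p1080, p1440, p2160 = pixels(1080), pixels(1440), pixels(2160)
    print(p2160 / p1440)  # 2.25: 2160p pushes 2.25x the pixels of 1440p
    print(p2160 / p1080)  # 4.0:  2160p pushes 4x the pixels of 1080p
    ```

    A 1440p-to-2160p gap is a 2.25x pixel-count difference, well beyond the quoted TF difference between the machines, which is the sense in which the results are argued to exceed the raw spec gap.
    
    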
     
    #860 iroboto, Sep 11, 2019
    Last edited: Sep 11, 2019
    BRiT, Shortbread and function like this.

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.