Next Generation Hardware Speculation with a Technical Spin [2018]

Discussion in 'Console Technology' started by Tkumpathenurpahl, Jan 19, 2018.

  1. Xbat

    Regular Newcomer

    Joined:
    Jan 31, 2013
    Messages:
    541
    Likes Received:
    242
    Location:
    A farm in the middle of nowhere
    Oh I didn't know it was debunked.
     
  2. Tkumpathenurpahl

    Regular Newcomer

    Joined:
    Apr 3, 2016
    Messages:
    754
    Likes Received:
    520
    True or not, it doesn't necessarily mean anything for the release of the PS5.

    I'm a firm believer that Sony had plans in motion to launch the PS5 in 2019, especially if the X1X had tipped the balance in MS's favour. The X1X has been successful, and MS are in a healthy position, but not to the extent that Sony *has* to release a new console.

    Now, much like Sony have done with their first party games IMO, they're going to let it bake a little longer and move on to their 2020 design.

    Assuming there is any truth to Sony being heavily involved in Navi's development, a decent perf/watt 7nm GPU could still be put to use in 2019 slimmer revisions of the PS4/Pro. The PS4 should be able to pull a PS2 and continue to sell a shed load of units even into the next generation, so it's necessary for them to use a design that's cheap to manufacture for many years. I might be wrong, since this rests on Navi having better perf/watt than Vega and serving as AMD's budget GPU for some time, but I think Navi+Zen emulating the PS4 is a good fit.
     
  3. DmitryKo

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    542
    Likes Received:
    336
    Location:
    55°38′33″ N, 37°28′37″ E
    They are early adopters - once these obvious inefficiencies are eliminated, performance will still be limited by hardware BVH search.
    Sure, a 25-30% improvement is nice, but even with these limited applications of realtime raytracing, we probably need something like a 5x (500%) performance improvement to scale flawlessly with complex geometry and 4K resolution.

    Volta exposes DirectX Raytracing tier 1 - could they be using compute units to emulate BVH acceleration? AFAIK any denoising is left for the developer to implement; it's not a part of the DXR API.
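    For reference, a minimal sketch of how an application queries that tier through the D3D12 API - note that the reported tier says nothing about whether BVH traversal runs on dedicated units or is emulated on the compute units:

    // Minimal D3D12 sketch: ask the device which DXR tier it reports.
    // A TIER_1_0 result only means the runtime/driver accept DXR calls;
    // it does not reveal whether traversal is hardware or compute-emulated.
    #include <d3d12.h>

    bool SupportsDxr(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                               &opts5, sizeof(opts5))))
            return false;
        return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }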

    Thanks! But did they schedule different parts of the rendering pipeline - such as rasterization, raytracing, and compute - to run on separate cards? Or can they schedule different rays to run on a different card?
    Also, stock UE4 does not support explicit multi-GPU yet, and SEED is an experimental rendering engine not used in any game.
     
  4. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    603
    Likes Received:
    555
    You realize I'm not advocating for the removal of compute units from the hardware, right? And really, how would the addition of RT hardware prevent any of the developments you mention? It wouldn't. It actually increases the range of possibilities because it allows for things that would be too slow to do otherwise. I also find the term fixed-function misleading when referring to RTX, because it's not at all like T&L, which could only run the one algorithm that made every single game look the same. Anti-aliasing, lighting simulation, collision detection, audio simulation, and who knows what other uses it could have. And that's just RTX; we don't know the features/limitations of future/competing RT acceleration architectures.

    Hardware design is all about trade-offs. You can maximize flexibility at the expense of speed, or you can be balanced and have some of both. I mean, if flexibility is all that matters, let's just get rid of rasterization and devote all the silicon to compute units...

    It's also limited to non-deformable meshes and procedural geometry.

    It'll be sad seeing games devoting resources to native 4K rendering :(
     
    #3464 OCASM, Nov 9, 2018
    Last edited: Nov 9, 2018
  5. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,336
    Likes Received:
    1,716
    There are thousands of ways to create a Bounding Volume Hierarchy, and for each of them, thousands more ways to optimise its traversal. Even more so when you can custom-make it for the specific scope and characteristics of your game, instead of trying to create a silver bullet (see the sketch below).
    But the way DXR was envisioned, the BVH is created by the GPU's drivers and the dev knows nothing of it. This limits the people experimenting with that field to GPU engineers, when it could be an active field of research among the entire game development community.
    A good BVH can also be used for a myriad of other things besides casting rays. From what I understand, the way DXR does it also keeps the BVH in its own little private space, and it can't be queried or interacted with other than by casting rays into it. In that case a game might have two simultaneous BVHs operating: one created by the dev for their own purposes, the other created by the GPU driver.
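    To make that concrete, here's a rough, hypothetical sketch of the kind of choice a driver-built BVH takes away from you: a hand-rolled builder where the dev picks the split heuristic (a simple median split here, but it could be SAH, wide nodes, or whatever suits the game). Names and structures are illustrative, not from any particular engine:

    #include <algorithm>
    #include <vector>

    struct AABB { float min[3], max[3]; };

    struct BVHNode {
        AABB bounds;
        int  left = -1, right = -1;   // child node indices, -1 => leaf
        int  first = 0, count = 0;    // primitive range for leaves
    };

    static AABB Merge(const AABB& a, const AABB& b) {
        AABB r;
        for (int i = 0; i < 3; ++i) {
            r.min[i] = std::min(a.min[i], b.min[i]);
            r.max[i] = std::max(a.max[i], b.max[i]);
        }
        return r;
    }

    // Recursively builds a node over prims[order[first..first+count)] and
    // returns its index; 'order' is partitioned in place.
    int Build(const std::vector<AABB>& prims, std::vector<int>& order,
              std::vector<BVHNode>& nodes, int first, int count)
    {
        BVHNode node;
        node.bounds = prims[order[first]];
        for (int i = 1; i < count; ++i)
            node.bounds = Merge(node.bounds, prims[order[first + i]]);

        const int idx = (int)nodes.size();
        nodes.push_back(node);

        if (count <= 4) {                 // leaf threshold: another dev-tunable choice
            nodes[idx].first = first;
            nodes[idx].count = count;
            return idx;
        }

        // The split heuristic lives here: median split on the widest axis for this
        // sketch, but SAH, spatial splits, etc. are all fair game when you own it.
        int axis = 0;
        const AABB box = nodes[idx].bounds;
        if (box.max[1] - box.min[1] > box.max[axis] - box.min[axis]) axis = 1;
        if (box.max[2] - box.min[2] > box.max[axis] - box.min[axis]) axis = 2;

        std::nth_element(order.begin() + first,
                         order.begin() + first + count / 2,
                         order.begin() + first + count,
                         [&](int a, int c) {
                             return prims[a].min[axis] + prims[a].max[axis]
                                  < prims[c].min[axis] + prims[c].max[axis];
                         });

        const int mid = count / 2;
        const int l = Build(prims, order, nodes, first, mid);
        const int r = Build(prims, order, nodes, first + mid, count - mid);
        nodes[idx].left = l;
        nodes[idx].right = r;
        return idx;
    }

    With DXR, every one of those decisions is made inside the driver and you never see the result.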
    I hope no console's GPU hardware design has any silicon wasted on functionality created for that specific paradigm.
    If AMD managed to pull some architecture out of their hat a couple of years ago (which is when PS5/Scarlett design work got more solid) that can rival Nvidia's and is also much more open and flexible, then that's great. I'm not holding my breath though.
     
    Silent_Buddha likes this.
  6. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,065
    Likes Received:
    8,915
    Location:
    Under my bridge
    For the moment.
     
  7. DmitryKo

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    542
    Likes Received:
    336
    Location:
    55°38′33″ N, 37°28′37″ E
    Looks legit to me. The design and the renders are professional quality; I can't see someone spending so much effort on a fake.
    Early developer presentations are always rough, with a few mistakes here and there.
    Sony uses PlayStation® as a general reference to the family, but they also use ™ for specific models and logos, such as PS4™, PS4™ Pro, PlayStation™Vue etc. Probably too much hassle to register on multiple markets.
    DLSS is not trademarked either.
    These boxes are probably developer kits, not the final console design.

    The mention of custom raytracing extensions (Radeon Rays) might imply that Navi has no hardware-accelerated raytracing...

    Emulation is 32-bit only and its performance is lagging behind x86 APUs. x64 applications are not supported and need to be recompiled to ARM64.
     
    #3467 DmitryKo, Nov 9, 2018
    Last edited: Nov 9, 2018
    vipa899 likes this.
  8. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,065
    Likes Received:
    8,915
    Location:
    Under my bridge
    They're not that hard to do for someone with experience. And people go to great lengths for fakes. We've even seen fake hardware mockups before.

    As others point out, as a presentation the text is too small and verbose.
     
    BRiT, Jay and milk like this.
  9. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,336
    Likes Received:
    1,716
    I don't see that limitation changing soon. Maybe for next gen, if conservative rasterisation and maybe other modern architectural changes can make real-time voxelization faster, then we may be able to generate SDFs in real time for arbitrary meshes. Using an SDF for occlusion instead of naive binary occlusion models might be the missing link to make voxel-based GI less leaky (rough sketch of the idea below).
    Even then, that stuff is only efficient if you have a robust LOD chain, because you shouldn't be voxelising high-poly meshes. Since so many games are open world these days, or at least open-worldish, many already have comprehensive LOD chains as well as a production pipeline for creating such LODs - which is not a trivial problem either, but one that Epic Games itself has been paying a lot of attention to for some years.
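    For the curious, a toy sketch of what "SDF for occlusion" buys you: marching a shadow/occlusion ray through a signed distance field and using the closest miss distance for soft, non-binary occlusion (the usual sphere-traced soft-shadow trick). The field here is a stand-in single sphere just to keep the sketch self-contained; a real engine would sample a voxelised SDF of the scene:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    // Stand-in field: signed distance to a unit sphere at the origin.
    // A real implementation would sample a 3D texture / brick atlas built by voxelization.
    static float sampleSDF(Vec3 p) {
        return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
    }

    // Returns 0 (fully occluded) .. 1 (unoccluded); 'k' controls penumbra width.
    float SoftOcclusion(Vec3 origin, Vec3 dir, float maxDist, float k)
    {
        float occ = 1.0f;
        float t = 0.05f;                          // small offset to step off the surface
        for (int i = 0; i < 64 && t < maxDist; ++i) {
            float d = sampleSDF(add(origin, mul(dir, t)));
            if (d < 1e-3f) return 0.0f;           // actual hit: hard occlusion
            occ = std::min(occ, k * d / t);       // how closely the ray grazed geometry
            t += d;                               // sphere-trace step
        }
        return std::max(occ, 0.0f);
    }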
     
    Heinrich4, vipa899 and OCASM like this.
  10. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,065
    Likes Received:
    8,915
    Location:
    Under my bridge
    We've no idea. The point is, anyone who's willing to believe raytracing will improve over time with better algorithms should acknowledge the same can happen with other algorithms and techniques, rather than looking at the current limitations and expecting them to always be there. To afford RT the benefit of the doubt but not other techniques is simply discrimination.
     
  11. DmitryKo

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    542
    Likes Received:
    336
    Location:
    55°38′33″ N, 37°28′37″ E
    The video looks fine on 30" QHD / 32" 4K monitors, where it is probably supposed to be viewed.

    The stated specs would line up with my own expectations - mid-range APU, 11 TFLOPs, GDDR6, no dedicated raytracing hardware (though they offer a native version of the OpenCL-based 'Radeon Rays')...

    There is also a detailed product lineup complete with peripherals and hardware/software specs. If it's fake, it's a very elaborate one, created by professionals.
     
    #3471 DmitryKo, Nov 9, 2018
    Last edited: Nov 9, 2018
  12. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    603
    Likes Received:
    555
    Just use compute for special cases. Like I said, it's not one or the other.

    I'll believe it when I see it, even in scene demos or research papers.

    Beliefs based on current research trends VS beliefs based on wishful thinking.
     
  13. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    39,065
    Likes Received:
    8,915
    Location:
    Under my bridge
    Hogwash. It's based on the past 20 years of precedent in how graphics tech has advanced. Do you genuinely believe that, going forwards, all rendering technology is going to stagnate on what we have now? That if RT hadn't been introduced, we'd be looking at no algorithmic advances at all??

    All rendering tech is going to advance. Given raytracing hardware, devs will find ways to use it in novel ways to get better results. Given more general compute and ML options, devs will find new ways to use it. There's zero wishful thinking about it - it's a certainty based on knowledge of how humanity operates and progresses, and the fact we know we haven't reached our limits.
     
    #3473 Shifty Geezer, Nov 9, 2018
    Last edited: Nov 17, 2018 at 8:22 PM
    AstuteCobra, Xbat and function like this.
  14. Metal_Spirit

    Regular Newcomer

    Joined:
    Jan 3, 2007
    Messages:
    297
    Likes Received:
    95
    Actually the 3640 shader cores number may be correct. Let me quote:

    "A super single instruction, multiple data (SIMD) computing structure and a method of executing instructions in the super-SIMD is disclosed. The super-SIMD structure is capable of executing more than one instruction from a single or multiple thread and includes a plurality of vector general purpose registers (VGPRs), a first arithmetic logic unit (ALU), the first ALU coupled to the plurality of VGPRs, a second ALU, the second ALU coupled to the plurality of VGPRs, and a destination cache (Do$) that is coupled via bypass and forwarding logic to the first ALU, the second ALU and receiving an output of the first ALU and the second ALU. The Do$ holds multiple instructions results to extend an operand by-pass network to save read and write transactions power. A compute unit (CU) and a small CU including a plurality of super-SIMDs are also disclosed."

    We all know a CU is composed of 64 shader cores. But how many for a small CU?
    If a small CU is composed of 6 shader cores, then a GPU with 52 CUs + 52 SCUs would give us 3640 shader cores.
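    The arithmetic under that assumption (and it is only an assumption about the SCU size): (52 × 64) + (52 × 6) = 3328 + 312 = 3640.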
     
    Heinrich4 likes this.
  15. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,336
    Likes Received:
    1,716
    That's a very simplistic way to wave off the issue.
    If most of my scene is a perfect fit for the specific silver-bullet way Nvidia's driver decided to build the BVH, except for some parts that would be tremendously more efficient done another way through compute, then sure, just use compute for the special cases. You may still be eating some redundancy depending on the situation, which in itself is a sorry inefficiency, but not the end of the world. Well, for rendering.
    But say the game's physics engine can also benefit from a BVH. It doesn't rely on ray casts, and there is no easy way to translate whatever queries your physics engine needs into rays so it can use DXR for that. That means your physics engine will create its own BVH through compute, while NVIDIA's black box is creating another one, and it's anyone's guess what that one looks like; there is no way to reuse the work from one process for the other (see the sketch at the end of this post). That is a very sorry inefficiency.
    And then there is the case where MOST of your scene would be a much better fit for your own compute BVH system, and you do implement it through compute. Nice, now you've got all that RT silicon sitting idle, giving you no extra performance, because it was designed to do one thing and one thing only. That's another very sorry inefficiency.
    But most of all, the sorriest thing, and one which your idea of "just use compute for special cases" ignores completely, is that you lose the contribution of research and experimentation from thousands of game graphics programmers by throwing a black box into the problem and limiting all that R&D to GPU and API design teams. I understand some are hoping next-gen consoles get some form of RT acceleration similar to Nvidia's so that we get a wide breadth of devs experimenting with it. But what I think you are ignoring is that we leave a whole other field of research opportunities unexplored by doing that. I think we lose more opportunities for software and hardware evolution by adopting Nvidia's paradigm than we gain.
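    To illustrate the reuse I'm talking about, a rough, hypothetical sketch: the same hand-built BVH (same layout as in the earlier sketch) answering both a ray query for rendering and an AABB overlap query of the kind a physics broad phase wants. With DXR's driver-built structure, only the ray-cast side is available to you:

    #include <algorithm>
    #include <vector>

    struct AABB { float min[3], max[3]; };

    struct BVHNode {
        AABB bounds;
        int  left = -1, right = -1;   // -1 => leaf
        int  first = 0, count = 0;    // primitive range for leaves
    };

    static bool Overlaps(const AABB& a, const AABB& b) {
        for (int i = 0; i < 3; ++i)
            if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
        return true;
    }

    // Slab test: does the ray segment [0, tMax] hit the box?
    // Assumes invDir has no zero components (clamp tiny values in practice).
    static bool RayHitsBox(const float o[3], const float invDir[3], float tMax, const AABB& b) {
        float t0 = 0.0f, t1 = tMax;
        for (int i = 0; i < 3; ++i) {
            float tNear = (b.min[i] - o[i]) * invDir[i];
            float tFar  = (b.max[i] - o[i]) * invDir[i];
            if (tNear > tFar) std::swap(tNear, tFar);
            t0 = std::max(t0, tNear);
            t1 = std::min(t1, tFar);
            if (t0 > t1) return false;
        }
        return true;
    }

    // Physics-style query: collect every leaf primitive whose bounds touch 'box'.
    void QueryOverlaps(const std::vector<BVHNode>& nodes, int node, const AABB& box,
                       std::vector<int>& hits)
    {
        const BVHNode& n = nodes[node];
        if (!Overlaps(n.bounds, box)) return;
        if (n.left < 0) {
            for (int i = 0; i < n.count; ++i) hits.push_back(n.first + i);
            return;
        }
        QueryOverlaps(nodes, n.left, box, hits);
        QueryOverlaps(nodes, n.right, box, hits);
    }

    // Rendering-style query: collect candidate leaves along a ray
    // (narrow-phase triangle tests would follow).
    void QueryRay(const std::vector<BVHNode>& nodes, int node, const float o[3],
                  const float invDir[3], float tMax, std::vector<int>& hits)
    {
        const BVHNode& n = nodes[node];
        if (!RayHitsBox(o, invDir, tMax, n.bounds)) return;
        if (n.left < 0) {
            for (int i = 0; i < n.count; ++i) hits.push_back(n.first + i);
            return;
        }
        QueryRay(nodes, n.left, o, invDir, tMax, hits);
        QueryRay(nodes, n.right, o, invDir, tMax, hits);
    }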
     
    #3475 milk, Nov 9, 2018
    Last edited: Nov 10, 2018
  16. AlBran

    AlBran ¯\_(ツ)_/¯
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    19,587
    Likes Received:
    4,491
    Location:
    ಠ_ಠ
    What are you quoting?
     
  17. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,421
    Likes Received:
    738
    Yep, that's why I said it would need to be recompiled. And I don't see that happening. Was just highlighting that they already have a native Windows 10 on ARM version.

    But an ARM-based console with a good GPU would be interesting to see. The Switch shows that it's developer tools and engine compatibility that matter, whereas people think it's simply about x86/x64.
    The Switch was even based on an older revision of the SoC when it came out.
     
    OCASM likes this.
  18. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,114
    Likes Received:
    224
    It’s from the YouTube video.
     
  19. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,421
    Likes Received:
    738
    It's crazy what people will spend their time doing. To fool the net?
    The days of "it looks too good to be fake" are looooong gone.
    As you said, people even make physical mock-ups now.
     
  20. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    10,355
    Likes Received:
    5,112
    Location:
    London, UK
    It's incredibly difficult to obtain a registered trademark for TLAs like 'PS4'. 'Vue' is already a registered trademark, and you can't just annex a registered trademark (like PlayStation) with another registered trademarked word like 'Vue'. :nope: That is kind of why registered trademarks, and why common law trademarks, i.e. ™, exist! :yep2:
     
