Next Generation Hardware Speculation with a Technical Spin [2018]

Discussion in 'Console Technology' started by Tkumpathenurpahl, Jan 19, 2018.

  1. Tkumpathenurpahl

    Regular Newcomer

    Joined:
    Apr 3, 2016
    Messages:
    971
    Likes Received:
    688
    Would NVMe be best in that case? Small, low power, and generally better performance. Lower capacity is the drawback, but that shouldn't matter if it's used in conjunction with an HDD.
     
  2. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,460
    Likes Received:
    637
    Agree. Putting it in the default SKU is sufficient. If you experience pop-in because you swapped in an underperforming drive, that's on you.

    I’d want it to be swappable to replace the drive if it started experiencing aging effects. I assume that if devs were targeting it, then the default would be no loading in most games, and there’d be no benefit to a faster (or even larger) drive.
     
    mrcorbo and Lalaland like this.
  3. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    3,546
    Likes Received:
    1,958
    That's not crazy. That's expected for GDDR6. The only question is how wide the memory bus is going to be.

    At the clock speeds that Nvidia is running the GDDR6 on the GeForce and Quadro RTX cards:
    • 256-bit gets you 448 GB/s for 8GB of memory @ 1 GB per chip.
    • 352-bit gets you 616 GB/s for 11GB of memory @ 1 GB per chip.
    • 384-bit gets you 672 GB/s for 24GB of memory @ 2 GB per chip.
    12 × 1GB chips on a 384-bit bus seems like a plausible configuration to me for PS5/Scarlet.
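
    As a quick back-of-the-envelope check of those figures (a minimal sketch; peak bandwidth is just bus width × per-pin data rate, and the 14 Gbps pin rate is the one the RTX cards launched at):

    ```python
    # Peak GDDR6 bandwidth: bus width (bits) * per-pin rate (Gbps) / 8 bits-per-byte.
    def gddr6_bandwidth(bus_width_bits, pin_rate_gbps=14.0):
        """Peak bandwidth in GB/s for a GDDR6 bus at the given per-pin rate."""
        return bus_width_bits * pin_rate_gbps / 8

    # Each GDDR6 chip has a 32-bit interface, so chip count = bus width / 32.
    for bus in (256, 352, 384):
        print(f"{bus}-bit ({bus // 32} chips): {gddr6_bandwidth(bus):.0f} GB/s")
    # -> 256-bit (8 chips): 448 GB/s
    # -> 352-bit (11 chips): 616 GB/s
    # -> 384-bit (12 chips): 672 GB/s
    ```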
     
    Heinrich4 likes this.
  4. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    13,151
    Likes Received:
    3,538
    The problem with the rasterizer approach seems to be that it gets harder and harder to find ways to keep pushing visual improvement, and that there are problems, like reflections, that can't be solved in screen space. It also sounds like ray tracing will be far more reliable and require fewer workarounds from a production standpoint. I think it's easy to take the difficulty of rasterizer methods for granted when we just play instead of creating art and designing levels.

    Not to mention, games have been moving toward ray tracing with that compute power for a while; they were just limited to screen space. Full-world visibility testing will also open up possibilities for audio and AI breakthroughs. Unless we want to be stuck with the same games for the next ten years, it'd be best if the new consoles had some GPU features to alleviate ray tracing overhead, to make ray structures and traversal cheap and push the bottleneck back onto shading/ALU and bandwidth.
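
    (To make "visibility testing" concrete: at the bottom of any ray tracer is a query like the ray/bounding-box slab test below, repeated millions of times per frame while walking an acceleration structure. A minimal Python sketch for illustration only; the function name and pure-Python form are mine, and dedicated RT hardware exists largely to make this inner loop cheap.)

    ```python
    def ray_hits_aabb(origin, inv_dir, box_min, box_max):
        """Slab test: does a ray hit an axis-aligned bounding box?

        inv_dir holds 1/direction per axis, precomputed to avoid divides.
        BVH traversal repeats this per node to cull geometry the ray
        cannot possibly hit.
        """
        t_near, t_far = 0.0, float("inf")
        for axis in range(3):
            t1 = (box_min[axis] - origin[axis]) * inv_dir[axis]
            t2 = (box_max[axis] - origin[axis]) * inv_dir[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
        return t_near <= t_far

    # A ray from the origin along +x hits a unit box centred at (5, 0, 0):
    inf = float("inf")
    print(ray_hits_aabb((0, 0, 0), (1.0, inf, inf),
                        (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))  # True
    ```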
     
    #2344 Scott_Arm, Sep 2, 2018
    Last edited: Sep 2, 2018
  5. bitsandbytes

    Newcomer

    Joined:
    Nov 27, 2011
    Messages:
    185
    Likes Received:
    64
    Location:
    England
    I like the irony of Mark Cerny singling out ray tracing as something exotic devs definitely didn't want, yet 8 years later I read a tweet from a high-profile dev (who "suggested" PS4/X1 have 8GB) saying:

    Umm...
     
  6. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,460
    Likes Received:
    637
    As soon as RTX was announced by Nvidia, multiple devs commented they expected it for next next gen.

     
    Heinrich4 and BRiT like this.
  7. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,666
    Likes Received:
    4,328
    Maybe because 8 years later transistor density has increased drastically, which allows for dedicated RT hardware without eating too much into the number of units dedicated to traditional rendering and general compute.

    And even 8 years later it's only barely practical in GPUs with enormous die areas and TDPs that would never fit a console.
     
  8. bitsandbytes

    Newcomer

    Joined:
    Nov 27, 2011
    Messages:
    185
    Likes Received:
    64
    Location:
    England
    Maybe. We'll see. That little exchange you quote comes across as some sort of in-joke? Probably just me being optimistic for once.

    You might well be right but this Ray tracing talk gives me similar feels as 8GB GDDR5 RAM in 2011/12. It was "crazy talk" and never going to happen. Then it did.

    Never say never again...
     
    #2348 bitsandbytes, Sep 2, 2018
    Last edited: Sep 2, 2018
  9. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    5,326
    Likes Received:
    3,796
    Yeah, I think it's appearing in hybrid form because we now have enough compute to do so, and additional hardware multiplies its efficiency enough to make it practical.

    Cerny was talking about hypothetical ray-tracing hardware replacing traditional rasterizers. The reference was how the Cell caused headaches for PS3 development. Devs needed to code very differently, and it would also have been the case with PS4 if they had replaced the majority of compute with circuitry exclusively for RT.

    The point is to avoid a situation where games will look like crap unless the entire engine is rewritten for this specific new hardware.

    This time, the sacrifice looks fine. For the same area, are we talking about maybe 10TF plus RT, versus 12TF without? In contrast, Cell was a massive drop in CPU power if you didn't recode the entire pipeline to use the SPEs correctly. We could say Cell was the equivalent of having 4TF+RT available next gen instead of 12TF skipping RT. It's nowhere near that bad.
     
    #2349 MrFox, Sep 2, 2018
    Last edited: Sep 2, 2018
  10. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,336
    Likes Received:
    10,662
    Location:
    Under my bridge
    Well no! 8 years ago they didn't! SD games at <30 fps and all noisy? That'd be like asking for 3D rasterisation with the technology of 1982 in a home computer - it'd be completely underpowered and misplaced. Looking forwards, raytracing makes increasing sense, both for raysterising and, potentially, purely raytraced visuals, plus ray-traced everything else (AI, audio, etc.).
     
  11. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    3,546
    Likes Received:
    1,958
    LOL @ < 30 fps turning into that because you didn't put a space between the < and 3. Reads as "a heart-pumping 0 fps".
     
    AstuteCobra, turkey, Xbat and 5 others like this.
  12. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,460
    Likes Received:
    637
    RAM is different. You can increase that relatively late in the game if your bus and/or chip size selection allows. If there’s RT hardware in next gen, it was decided a while ago.
     
  13. bitsandbytes

    Newcomer

    Joined:
    Nov 27, 2011
    Messages:
    185
    Likes Received:
    64
    Location:
    England
    That is beside my point. My point was that in 2011/12 most forum users here, on GAF, etc. were saying 8GB RAM was a pipe dream, and now they're saying similar about RT (and, as you say, some devs too). That is all.

    I took what Mark Cerny said to mean that even if the RT tech had been fully available for a late-2013 console, they would still have wanted simple and straightforward?... But it seems a huge and unfair jump from devs not wanting anything exotic 5 years ago to it being the next big thing, going by a few dev tweets, all within one gen! Will it mean that if no hardware RT is in next-gen chips, devs will complain all gen, like with previous gens' low RAM?

    The good news is that the tech seems to have been in the works for a good while under NDA, so hopefully that bodes well for it somehow being in the next-gen systems. Hope so anyway, as the complaints all gen will get boring fast....
     
  14. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    5,326
    Likes Received:
    3,796
    384 would be great, but I'm wondering why there's never been a mid-range GPU with a 384-bit GDDR bus. There must be a large inherent cost associated? Impact on yield?

    GDDR6 clocks should rise by one speed bin every two years or so, meaning 16 Gbps might be a low-cost bin in 2020. We have seen this trend with most external memories.

    Nvidia's whole range is launching at 14 Gbps, so on the low end there might not be enough GDDR6 volume that fails 14 Gbps to warrant a 12 Gbps bin for mid-range cards. On the high end, the Nvidia memory controller might be unable to reach Samsung's 16 Gbps; hopefully that's resolved on 7nm.
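
    (To put rough numbers on what one speed bin is worth at console-ish bus widths, a minimal sketch using the same peak-bandwidth arithmetic as earlier in the thread; the bin availability dates are extrapolation, not a roadmap.)

    ```python
    # Peak bandwidth (GB/s) at each GDDR6 speed bin for two candidate bus widths.
    for pin_rate_gbps in (12, 14, 16):
        for bus_bits in (256, 384):
            gb_s = bus_bits * pin_rate_gbps / 8
            print(f"{pin_rate_gbps} Gbps on {bus_bits}-bit: {gb_s:.0f} GB/s")
    # 12 Gbps: 384 / 576 GB/s; 14 Gbps: 448 / 672; 16 Gbps: 512 / 768
    ```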
     
  15. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,460
    Likes Received:
    637
    One important distinction is that RT hardware in 2013 would have been assumed to be very specialized: not good at rasterization, incompatible with existing APIs and known optimizations, and overall difficult for developers.

    Now RT is emerging as an enhancement to existing compute paths in GPUs, has developer and API support, etc. It should be seen as a very different proposition now.
     
    milk, OCASM, Heinrich4 and 1 other person like this.
  16. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    3,546
    Likes Received:
    1,958
    If I understand correctly what I've seen stated on the subject, the size of the chip has some relation to what bus width it can (comfortably) accommodate. I expect a big chip, so I don't see a problem there. There is then the cost of the added silicon on the chip itself, plus the additional traces and space taken up on the motherboard.

    I won't try to guess which configuration will best hit the sweet spot of capacity, bandwidth and cost in 2 years, but 12GB of RAM and 672 GB/s of bandwidth seems like a good target to try to hit, and a 384-bit interface to 12 × 1GB GDDR6 chips is one way to get there.
     
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,336
    Likes Received:
    10,662
    Location:
    Under my bridge
    Because realistically it was, right up until PS4 launched - but it was always an outside possibility, because we knew the tech was coming; it was just a matter of timelines, and Sony got lucky.

    RT hardware could be included, because it exists now. It's actually existed for years in the mobile space, but for some reason no-one wants to talk about that. ;)

    Well, if the tech works, they'll want it, but at the same time we don't always get what we want. I think it's important to note the difference between wanting a thing in principle and wanting whatever particular implementation is available. I think many devs want full-on realtime raytracing, as it solves a great many engine problems, but realistically, they may not want gimped rasterising hardware if the RT tech is impotent in its first iteration. Hence devs could say they want RT and also say they don't want RT, depending on which unqualified type they are talking about.
     
    milk likes this.
  18. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    3,546
    Likes Received:
    1,958
    The image in this Digital Foundry article comparing the PS4 Pro and Xbox One X SoCs does a decent, if not perfect, job of showing what the difference is on the chip side. Not perfect, because there are other variances between the two designs that affect the die size.

    And the image in this ExtremeTech article shows there's no physical way they were putting a 384-bit bus on the PS4 APU, even if it could have taken advantage of the BW and it made sense from a cost perspective.
     
  19. msia2k75

    Regular Newcomer

    Joined:
    Jul 26, 2005
    Messages:
    326
    Likes Received:
    29

    I think such a configuration could be possible if they put an 80CU GPU inside (72 activated).
    It would be big enough to fit a 384-bit bus on the PS5.
    The OG PS4 has a 20CU GPU (18 activated).
    The Pro doubles that -> 40CU (36 activated).
    The PS5 could double that again? 80CU (72 activated).
    We will see...
    One advantage could be that Sony could clock the GPU low (say, around the same frequency as the XBX's GPU) and still obtain some decent flops (~11TF).
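
    (A quick sanity check on that ~11TF figure; a sketch assuming the standard GCN numbers of 64 shader lanes per CU and 2 FLOPs per lane per clock, with the One X's ~1.172 GHz clock.)

    ```python
    def gcn_peak_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane_clock=2):
        """Peak FP32 TFLOPS: CUs * lanes * FLOPs-per-clock * clock (GHz) / 1000."""
        return cus * lanes_per_cu * flops_per_lane_clock * clock_ghz / 1000

    print(gcn_peak_tflops(40, 1.172))  # ~6.0 TF  (Xbox One X, for reference)
    print(gcn_peak_tflops(72, 1.172))  # ~10.8 TF (the ~11TF estimate above)
    ```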
     
    #2359 msia2k75, Sep 3, 2018
    Last edited: Sep 3, 2018
    Heinrich4 likes this.
  20. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,607
    Likes Received:
    5,881
    Hmm. The discussion on storage speed is interesting. If loading times can be reduced dramatically at the cost of some graphics performance, the trade-off may be worth it.

    To give an example, if you're heavy into MP games, most matches end in about 10 minutes. Then there's 1-2 minutes of matchmaking and another 1-3 minutes of loading, depending on the game you're playing. Over the course of an hour, cutting hard drive loading can probably net you 1 additional round. So if you're getting 4 games per hour you might be able to get 5, or up to 7 if you're capable of pub stomping and bringing matches down to 5 minutes.

    Certain games where you die often and reloading commences will be much more bearable. Wolf:TNO was a terrible offender here, with its 3-4 minute load time after death.
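
    (Putting rough numbers on that, a minimal sketch; the cycle times below are just midpoints of the ranges in the post above.)

    ```python
    def matches_per_hour(match_min, matchmaking_min, loading_min):
        """Full play cycles (match + matchmaking + loading) that fit in an hour."""
        return 60 / (match_min + matchmaking_min + loading_min)

    print(matches_per_hour(10, 1.5, 2.0))  # ~4.4 -> the "4 games per hour" case
    print(matches_per_hour(10, 1.5, 0.5))  # ~5.0 -> "might be able to get 5"
    print(matches_per_hour(5, 1.5, 2.0))   # ~7.1 -> "up to 7" when pub stomping
    ```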
     