PlayStation 5 [PS5] [Release November 12, 2020]

Discussion in 'Console Industry' started by BRiT, Mar 17, 2020.

  1. function

    function None functional
    Legend Veteran

    Joined:
    Mar 27, 2003
    Messages:
    5,759
    Likes Received:
    4,105
    Location:
    Wrong thread
    Boost mode is going to show its greatest performance benefits early on when the hardware is less well utilised. That's also when it's most important to not look significantly weaker than the Xbox Series X.

    When AVX use is the norm for SIMD stuff and developers have plumbed the depths of the GPU to maximise utilisation, the performance delta will increase.
     
    Silent_Buddha, goonergaz and BRiT like this.
  2. temesgen

    Veteran Regular

    Joined:
    Jan 1, 2007
    Messages:
    1,680
    Likes Received:
    486
    Agreed. My guess is that the practical implementation would be situations where a modest shift allows games to hit expected targets like 30 or 60 fps. The CPU matters a lot; I'm just not sure we know how much of the workload is going to be native to the GPU.

    A different way to say it: the CPUs are a bigger performance leap than the GPUs (especially on PS5), so if there are going to be extra resources, it's more probable they'll be found there than on the GPU.

    I just think it's likely we're going to see more situations where games are GPU bound than CPU bound on PS5.
     
    PSman1700 and iroboto like this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,774
    Likes Received:
    17,075
    Location:
    The North
    Right, basically a page from this generation: purposefully design games to offload as much as possible to the GPU.

    What is the CPU roughly equivalent to for both? A Ryzen 3700?
     
  4. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    That's not what they do. The GPU can take "unused" CPU power based on the power-management algorithm, and lowering of the CPU clock was not described beyond a mention of the clock being 3.5 GHz most of the time. In fact there are far fewer reasons to apply any downclock to the CPU than the GPU, since the CPU is clocked as conservatively as the XBSX's. One reason given was about AVX, though it's unclear whether that was just an example of wattage allocation.

    In other words, if we had a 4.5 GHz CPU with dynamic clocking versus the 3.6/3.8 GHz of the XBSX, there would be an advantage in downclocking it.
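
    A minimal sketch of how I read that fixed-budget idea (all wattage numbers below are invented for illustration; this is not Sony's actual algorithm):

        # Illustrative only: a fixed SoC power budget where the GPU may consume
        # whatever allocation the CPU isn't currently using. Numbers are invented.

        SOC_BUDGET_W = 200      # hypothetical total package budget
        CPU_ALLOCATION_W = 50   # hypothetical worst-case CPU allocation at 3.5 GHz

        def gpu_available_power(cpu_draw_w):
            """Power the GPU may draw right now under a fixed total budget."""
            cpu_draw_w = min(cpu_draw_w, CPU_ALLOCATION_W)
            # Whatever the CPU leaves on the table is handed to the GPU.
            return (SOC_BUDGET_W - CPU_ALLOCATION_W) + (CPU_ALLOCATION_W - cpu_draw_w)

        # CPU stalled on memory or idling at end of frame -> GPU gets more headroom.
        print(gpu_available_power(cpu_draw_w=20))   # 180 W available to the GPU
        print(gpu_available_power(cpu_draw_w=50))   # 150 W available to the GPU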
     
    Globalisateur likes this.
  5. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,774
    Likes Received:
    17,075
    Location:
    The North
    That doesn't make sense to me. Why not just fix the CPU at 3.5 GHz, since the power allocation there is so conservative, and feed the GPU separately?

    I don't see how shifting power in this way makes it easier for developers. It just adds complexity.
     
    Nesh, Silenti, BRiT and 1 other person like this.
  6. Mitchings

    Newcomer

    Joined:
    Mar 13, 2013
    Messages:
    117
    Likes Received:
    181
    A couple of tech demos, a look at the box (cooling!?) and/or controller, and perhaps some rough indication of the timeline would be lovely from Sony.

    I personally enjoyed Cerny's talk and am really excited for the PS5 itself.

    But advertising their GDC-esque talk on consumer-facing channels and making it their first live(ish) event for the PS5 wasn't a great idea. Whether or not you think Sony are having a rough time with marketing, there's no arguing that, relatively speaking, MS are knocking it out of the park in terms of presentation.
     
    London Geezer and goonergaz like this.
  7. bgroovy

    Regular Newcomer

    Joined:
    Oct 15, 2014
    Messages:
    799
    Likes Received:
    626
    AMD told us RDNA2 was a 50% improvement in performance per watt over RDNA1. People need to adjust their expectations.
     
    goonergaz likes this.
  8. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    Because then it would burn the unused power all the time, even during memory stalls or the few milliseconds of idle at the end of a frame. It's an advantage to be able to downclock when possible and free up that "unused" power for the GPU, which might be peaking in demand at that point in time. It also means they don't have to budget an additional safety margin: they can simply downclock if something starts cramming AVX in an unpredicted way. If the margin is only for rare cases, the downclock is a rare case, but the power-delivery advantage is there all the time.

    I monitor a render farm doing CG rendering with all cores used efficiently. Enabling or disabling Intel's adaptive clocking has a negligible impact on render time, but looking at the cores, they are not always at max clock. Power consumption goes down without any real-world impact. In an SoC, that's more power that can go to the GPU.
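
    The per-frame picture I have in mind is something like this (loads and watt figures are made up, purely to illustrate the idea):

        # Illustrative frame timeline: when the CPU is stalled or idle, dropping its
        # clock costs essentially nothing and frees watts for the GPU. Numbers invented.

        CPU_MAX_W = 50  # hypothetical CPU draw at full load, 3.5 GHz

        # (phase name, CPU utilisation 0..1) across one ~16.6 ms frame
        frame_phases = [
            ("game/sim logic",    0.90),
            ("command building",  0.60),
            ("memory stalls",     0.30),
            ("end-of-frame idle", 0.05),
        ]

        for name, util in frame_phases:
            cpu_draw = CPU_MAX_W * util           # crude model: draw tracks utilisation
            freed_for_gpu = CPU_MAX_W - cpu_draw  # headroom the GPU can pick up
            print(f"{name:18s} CPU ~{cpu_draw:4.1f} W, {freed_for_gpu:4.1f} W freed for the GPU")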
     
    Aaron Elfassy likes this.
  9. PSman1700

    Legend Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    5,496
    Likes Received:
    2,409
    Just like with Nvidia, those percentages most likely assume the best and most ideal circumstances.
     
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    13,774
    Likes Received:
    17,075
    Location:
    The North
    I'm not following along; power is a fixed commodity in this case. There is apparently no boost clock on the GPU, so when the GPU's demand peaks and it needs more, it has to draw power from the CPU's share. It's trying to maintain its frequency.

    This isn't a typical boost scenario where the CPU has unused power and transfers it to the GPU so it can reach higher frequencies. Rather, the GPU is always set at that higher frequency and borrows power when it's saturated.
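
    To spell out the distinction I mean (clocks, watts and the linear scaling below are hypothetical, just to contrast the two models):

        # Contrast of the two models as I understand them. All numbers are hypothetical.

        def pc_style_boost(base_ghz, boost_ghz, spare_watts):
            """Typical boost: a guaranteed base clock, with opportunistic excursions
            above it whenever there is spare power/thermal headroom."""
            return boost_ghz if spare_watts > 0 else base_ghz

        def ps5_style_clock(max_ghz, demanded_watts, budget_watts):
            """As described for PS5: the max clock is the default, and it only comes
            down when total demand would exceed the fixed budget."""
            if demanded_watts <= budget_watts:
                return max_ghz
            # scale the clock back just enough to fit the budget (crude linear model)
            return max_ghz * (budget_watts / demanded_watts)

        print(pc_style_boost(1.8, 2.1, spare_watts=15))                      # 2.1
        print(ps5_style_clock(2.23, demanded_watts=190, budget_watts=200))   # 2.23
        print(ps5_style_clock(2.23, demanded_watts=210, budget_watts=200))   # ~2.12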
     
    BRiT and PSman1700 like this.
  11. bgroovy

    Regular Newcomer

    Joined:
    Oct 15, 2014
    Messages:
    799
    Likes Received:
    626
    It was largely true when they said the same about the transition from GCN to RDNA1.
     
    goonergaz likes this.
  12. PSman1700

    Legend Newcomer

    Joined:
    Mar 22, 2019
    Messages:
    5,496
    Likes Received:
    2,409
    Yeah, a generational shift that big, from a GCN arch that was rather 'bad' compared to the competition. Take that 50% from RDNA to RDNA2 with a grain of salt. See the XSX performance in Gears 5.
     
  13. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    The GPU doesn't control this. There is a central module managing the power and running the algorithm; it decides on all the clocks to make the most of that power. There is more to gain from dropping the GPU clock by 2% than the CPU clock, unless the CPU is running an unpredicted pattern or cores are idling. If the threshold is only reached in corner cases, overall performance stays close to peak. Also, design margins are no longer necessary.

    Did you watch the presentation? There's a section about it.
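
    Back-of-the-envelope version of why the GPU is the better lever: dynamic power scales roughly with f·V², and near the top of the voltage/frequency curve voltage scales roughly with frequency, so power goes roughly with the cube of the clock. The baseline wattage split below is invented:

        # Rough model: dynamic power ~ C * f * V^2, with V ~ f near the top of the
        # curve, so P ~ f^3. The GPU burns far more watts than the CPU, so the same
        # 2% clock cut frees many more watts there. Baseline draws are invented.

        GPU_BASE_W, CPU_BASE_W = 180.0, 45.0   # hypothetical steady-state draws

        def power_after_clock_cut(base_watts, clock_scale):
            """Approximate power after scaling the clock, assuming P ~ f^3."""
            return base_watts * clock_scale ** 3

        for name, base in (("GPU", GPU_BASE_W), ("CPU", CPU_BASE_W)):
            after = power_after_clock_cut(base, 0.98)   # a 2% clock reduction
            print(f"{name}: {base:.0f} W -> {after:.1f} W, frees {base - after:.1f} W")
        # GPU: 180 W -> 169.4 W, frees 10.6 W
        # CPU: 45 W -> 42.4 W, frees 2.6 W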
     
  14. psorcerer

    Regular

    Joined:
    Aug 9, 2004
    Messages:
    732
    Likes Received:
    134
    Let me understand it.
    Why is a high CPU clock needed?
    Or, let's say, what's the penalty of 2.9 vs 3.5 GHz?
    Is there any hard data?
     
  15. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    19,453
    Likes Received:
    22,421
    No hard data from anyone (Sony or others) yet about instruction mixes and measured impacts on the balance.
     
  16. psorcerer

    Regular

    Joined:
    Aug 9, 2004
    Messages:
    732
    Likes Received:
    134
    I think Cerny said that developers will have full control of the power envelope.
    Essentially, they decide what the CPU/GPU power distribution will be.
    But it's not clear whether that can be done at runtime or only before the game starts.
     
  17. psorcerer

    Regular

    Joined:
    Aug 9, 2004
    Messages:
    732
    Likes Received:
    134
    No. I mean on PC: downclock the CPU and see the difference at 4K.
    I suspect it will be 0.01%.
    But maybe someone has some real data?
     
  18. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    That's new to me, do you have the exact quote from Cerny?

    A bias control would be a reasonable addition to provide, i.e. what to prioritize if the power threshold is reached. But that's not controlling power distribution. That's just a contention priority flag.
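
    If such a control exists, it would presumably be no more than something like this (an entirely hypothetical API sketch, not anything Sony has documented):

        # Entirely hypothetical sketch of a "contention priority" flag, to illustrate
        # the distinction: the developer doesn't set watt budgets, they only say which
        # block should hold its clock if the shared power threshold is reached.

        from enum import Enum

        class PowerBias(Enum):
            PREFER_CPU = "cpu"   # under contention, shave the GPU clock first
            PREFER_GPU = "gpu"   # under contention, shave the CPU clock first

        current_bias = PowerBias.PREFER_GPU  # plausible default for GPU-bound games

        def set_power_bias(bias: PowerBias) -> None:
            """Record the preference; the power manager, not the game, still decides
            the actual clocks within the fixed budget."""
            global current_bias
            current_bias = bias

        set_power_bias(PowerBias.PREFER_CPU)  # e.g. a simulation-heavy title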
     
    BRiT likes this.
  19. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
    Why would you trust them on one but not another?

    And why base anything off a port done in 2 weeks?
     
    goonergaz likes this.
  20. Globalisateur

    Globalisateur Globby
    Veteran Regular Subscriber

    Joined:
    Nov 6, 2013
    Messages:
    4,353
    Likes Received:
    3,218
    Location:
    France
    That's how I understood it too. Developers won't be able to control the frequencies (that wouldn't work given the total power limit), only set some priorities on how power is distributed between the CPU and GPU.
     
    iroboto and BRiT like this.