Playstation 5 [PS5] [Release Holiday 2020]

Discussion in 'Console Technology' started by BRiT, Mar 17, 2020.

  1. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,213
    Likes Received:
    5,651
    The only reason to argue about the clocks is console warring. I think we're probably looking at single-digit percentage drops in the worst case, and beyond curiosity about how it works I don't think it'll matter.
     
    ToTTenTranz likes this.
  2. mpg1

    Veteran Newcomer

    Joined:
    Mar 5, 2015
    Messages:
    1,988
    Likes Received:
    1,576
    If the PS5 performs the way the RDNA 1 cards do, it blows a hole in Cerny's theory of higher clocks...
     
    PSman1700 likes this.
  3. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    14,213
    Likes Received:
    5,651
    One thing I don't really get: you want the CPU and the GPU to run at 100% when they're working. You want full utilization for better performance or better graphics. In the mobile world you also want to do your work at 100% so the CPU and GPU can race to a lower power state. I'll have to read through what Cerny said in detail again later, but I'd like to see more of his thoughts on low-power programming, because it sounds at odds with what is typically the ideal: push the CPU to 100% for as short a time as possible and race to sleep.
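    To put rough numbers on that tension (a toy model with invented constants, not anything from Cerny's talk): race-to-sleep wins while static leakage dominates the active power, but loses once the roughly cubic dynamic term takes over at the top of the frequency curve.

    ```python
    # Toy "race to sleep" energy model (all constants invented for illustration).
    # Active power = static leakage + dynamic term (~f^3); idle power is tiny.

    def energy(freq, work=1.0, period=1.0, p_static=0.6, c_dyn=0.1, p_idle=0.02):
        """Energy over one period: finish `work` at `freq`, then idle."""
        t_active = work / freq               # time to finish the fixed workload
        assert t_active <= period, "missed the deadline"
        p_active = p_static + c_dyn * freq ** 3
        return p_active * t_active + p_idle * (period - t_active)

    for f in (1.0, 1.5, 2.0, 3.0):
        print(f"f={f:.1f}: energy={energy(f):.3f}")
    # With these constants the minimum lands near f=1.5: racing to sleep
    # helps up to a point, then the cubic dynamic power makes it a net loss.
    ```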
     
  4. AbsoluteBeginner

    Regular Newcomer

    Joined:
    Jun 13, 2019
    Messages:
    770
    Likes Received:
    985
    So basically:

    XSX - capped by frequency. Everything runs at max frequency; less power-intensive activities simply draw less power, while power-intensive activities draw more.

    PS5 - capped by power. Any activity within the power budget, meaning it doesn't break the TDP barrier, runs at 2.23 GHz. Once power-intensive activities run on the GPU and the budget is exceeded, the chip downclocks.

    Not sure where the limit is for PS5; we won't know until we see games, and even then we might not be able to tell a 3-5% difference.
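    To make the contrast concrete, here's a toy model of the power-capped side (all numbers invented; it only assumes the roughly cubic power/frequency relationship discussed elsewhere in the thread - a fixed-frequency design would just return F_MAX unconditionally and let the power draw float):

    ```python
    # Toy model of power-capped clocking (invented numbers, illustration only).

    F_MAX = 2.23          # GHz ceiling
    POWER_BUDGET = 1.0    # normalized: the power the budget allows

    def power_draw(freq, activity):
        """Relative power; `activity` is workload heaviness (1.0 = on budget at F_MAX)."""
        return activity * (freq / F_MAX) ** 3

    def power_capped_clock(activity):
        """Downclock just enough to stay inside the budget."""
        if power_draw(F_MAX, activity) <= POWER_BUDGET:
            return F_MAX
        # solve activity * (f / F_MAX)^3 == POWER_BUDGET for f
        return F_MAX * (POWER_BUDGET / activity) ** (1 / 3)

    for activity in (0.7, 1.0, 1.1, 1.3):
        f = power_capped_clock(activity)
        print(f"load {activity:.1f} -> {f:.2f} GHz ({f / F_MAX:.0%} of max)")
    ```

    The cubic relationship is what keeps the downclocks small: in this toy model a workload 30% over budget only costs about 8% of clock.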
     
    Arwin and Scott_Arm like this.
  5. Esrever

    Regular Newcomer

    Joined:
    Feb 6, 2013
    Messages:
    768
    Likes Received:
    532
    I can see why some games would want higher clocks over higher CU count some of the time; maybe the workload doesn't have enough parallelism to fill more CUs but can use the extra performance the clocks give. I mean, if a dev wants to run a very graphically simple game at 120 fps, they might be able to max out CPU and GPU frequency without maxing out the load. I guess it's nice to have the option, but I don't see it being the norm. A 10% drop in frequency for a 27% drop in power is very significant. If the base power consumption is 180 W at 2 GHz, that means at 2.23 GHz it would be almost 250 W.
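    Those figures are consistent with cubic power scaling; a quick sanity check (the 180 W baseline is an assumption, not a Sony number):

    ```python
    # Sanity-checking the numbers above under an assumed P ~ f^3 model.

    p_base = 180.0                        # W at 2.0 GHz (assumed baseline)
    p_max = p_base * (2.23 / 2.0) ** 3
    print(f"{p_max:.0f} W at 2.23 GHz")   # ~250 W

    drop = 1 - 0.9 ** 3
    print(f"{drop:.0%} power drop for a 10% frequency drop")   # ~27%
    ```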
     
  6. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,889
    Likes Received:
    11,008
    Location:
    The North
    ehhh =p

    Perhaps it's just me, but this sounds like selling the idea that dreams come true.
    Do more operations per cycle using less power - somehow. Such techniques exist, hence why we have computer science as a field, but they don't arrive nearly fast enough.

    There is a trade-off between operations per cycle and how quick your cycles are.

    Correct me if I'm wrong, but reading this, it sounds like it actually encourages less multi-core usage if you write code this way, because to keep the clock rate up, the other cores need to give their share of the power budget away to the fewer busy cores.
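    A sketch of that incentive (numbers made up, assuming per-core power scales roughly with f^3 under a shared budget):

    ```python
    # Toy model: a fixed power budget shared across active cores,
    # with per-core power ~ f^3. Fewer busy cores leave more headroom each.

    BUDGET = 6.0   # arbitrary power units for the whole CPU cluster
    F_CAP = 3.5    # GHz, hard ceiling on the clock

    def max_common_clock(active_cores, f_ref=3.0, p_ref=1.0):
        """Highest clock every active core can hold with an even budget split.

        f_ref / p_ref: reference point (f_ref GHz costs p_ref units per core).
        """
        per_core = BUDGET / active_cores
        return min(F_CAP, f_ref * (per_core / p_ref) ** (1 / 3))

    for n in (1, 2, 4, 8):
        print(f"{n} active core(s): up to {max_common_clock(n):.2f} GHz each")
    ```

    With these made-up constants, one or two busy cores sit at the 3.5 ceiling, four hold ~3.43, and eight drop to ~2.73 - the "give power away to fewer cores" dynamic described above.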
     
    #1106 iroboto, Apr 2, 2020
    Last edited: Apr 2, 2020
    AzBat, PSman1700 and BRiT like this.
  7. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,030
    Location:
    Under my bridge
    If they listed such a number, forum warriors would be running around saying, "PS5 is only x GHz with rare boosts to higher!!!11!" It's not a number that needs to be understood to understand the PS5 architecture and how it uses power. It's about utilisation.

    I understand where you're coming from, but it's a perspective skewed by historical representation, and trying to avoid that simplification as Cerny is doing isn't lying. It's about workload, not clocks. If the devs find the workload is being impeded because the GPU isn't clocking as high as it can, then we have a problem, but that'll only manifest in real workloads. At this point, Cerny can only talk about the design and the intentions and the expectations.

    We really do need to move on from numbers. Pixel-counting framebuffers is misleading because games are made of lots of buffers at different resolutions. Flops don't tell you utilisation. Clocks don't tell you efficiency of work per clock. Bandwidths don't tell you how well the data is getting to the processor's L1 caches. Proper technical discussion should move on from wanting to fill in consumer-facing spec sheets. That should have been obvious since the PS2 era; all those specs were meaningless for understanding what's on the screen and how machines compare.
     
    ToTTenTranz, Barrabas and Xbat like this.
  8. disco_

    Newcomer

    Joined:
    Jan 4, 2020
    Messages:
    246
    Likes Received:
    189
    I blinked and now I'm at the circus.
     
  9. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,943
    Likes Received:
    1,288
    I don’t see how that ends well for them. It will get latched onto as the “real clock.” That, or he’ll say a realistic number and a case will come out to disprove it and he’ll get labeled as dishonest.
     
  10. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,529
    Likes Received:
    884
    Location:
    France
    Can we say that Sony screwed up their communication? Between the clocks and not showing a lot of RDNA 2 features apart from RT? I mean, even some well-respected posters here are not 100% in tune with how the boost/non-boost is supposed to work, for example. Seems like a PR mess...
     
    KirkSi, PSman1700, BRiT and 1 other person like this.
  11. disco_

    Newcomer

    Joined:
    Jan 4, 2020
    Messages:
    246
    Likes Received:
    189
    Yup. Their best division over the past years, by far, was Europe, run by Ryan. Now he's in charge of everything, has replaced a ton of marketing staff, and it's like they forgot how communication and marketing work.
     
  12. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,995
    It's because with portable and productivity devices (or IoT), the dominant draw is the quiescent current (the power used when everything is powered but idling), and the power management counters this with extremely low idle modes: the device does something for a few ms, then shuts down almost completely for a second until the user clicks something, an email is received, etc. Also, with low-power devices there isn't as much of an exponential relationship with clocks.

    With the high power used here, any reduction in quiescent current is dwarfed by the exponential (cubic!) power at the top of the frequency curve. And there aren't many opportunities to power down, if any.

    So if you have a task for the frame which is using the max clock of 3.5, filled with 256-bit AVX instructions, but it finishes within 80% of the time allowed before the next frame, you end up wasting a massive amount of power compared to running it at 2.8 for 100% of the time allowed. There is zero difference in CPU performance between the two, but the former consumes a LOT more energy, since power use is cubic while the time slice is only linear in frequency.
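    The AVX example in numbers (same assumed model as above: power ~ f^3, time for a fixed task ~ 1/f; the workload is sized so 3.5 GHz finishes in 80% of the frame):

    ```python
    # Race-to-finish vs. stretch-to-deadline for one frame (toy model).

    def frame_energy(freq_ghz, work=2.8, frame_time=1.0):
        """Energy to finish `work` (GHz-seconds of compute) within one frame."""
        t = work / freq_ghz
        assert t <= frame_time, "task would miss the frame"
        return freq_ghz ** 3 * t           # simplifies to work * freq^2

    e_fast = frame_energy(3.5)             # done in 80% of the frame
    e_slow = frame_energy(2.8)             # done exactly at the deadline
    print(f"{e_fast / e_slow:.2f}x the energy")   # ~1.56x
    ```

    Since energy goes as f^3 * (1/f) = f^2, the fast run burns (3.5/2.8)^2, about 1.56x, the energy for identical delivered performance - that's the headroom the GPU could have used.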

    So that means an additional chunk of watts "for free" which the GPU might have used at that particular point in the pipeline. Hence "squeezing every last drop". It minimizes the chance of the GPU clocking down from 2.23, while the CPU keeps the same effective performance as if it were always locked at 3.5. The power-hungry device here is much more the GPU than the CPU, so if there's any drop in frequency, it's the GPU that provides the most gain. It's just logical that the CPU never drops unless it can do so for free.

    The required real-time profiling is no different from trying to predict a resolution downscale or LoD change to keep the frame rate consistent, just in reverse: estimate whether the operation will finish much earlier than the next frame cycle, and lower the clock proportionately, with some safety margin?
     
    #1112 MrFox, Apr 2, 2020
    Last edited: Apr 2, 2020
    Pete and JPT like this.
  13. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,889
    Likes Received:
    11,008
    Location:
    The North
    From a marketing perspective, yea it doesn't. It's okay he doesn't mention it.
     
    PSman1700 likes this.
  14. anexanhume

    Veteran Regular

    Joined:
    Dec 5, 2011
    Messages:
    1,943
    Likes Received:
    1,288
    It ultimately doesn’t matter if we understand. It’s important the developers do. The games will drive consumer sentiment and adoption.
     
    Xbat likes this.
  15. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,030
    Location:
    Under my bridge
    And why should he? Has any system architect ever, in the history of computing, laid out all the worst cases for their system in regard to contention, seek times, cache misses, controller occupancy, etc.? When was the last time nVidia said, "our GPUs can do this much work, but realistically you only get 40% of that, give or take"? When was the last time a DRAM manufacturer selling PC DDR modules to the consumer said, "we can do this much bandwidth, but given CPU random accesses you'll be lucky to get half that"?

    Every spec is the paper-level maths describing the best possible outcome which is never, ever achieved. You then have devs try to get higher utilisation from the systems and get as close to that theoretical 100% maximum as possible. That's the way it's always been; why should this time be any different?
     
    JPT likes this.
  16. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    43,577
    Likes Received:
    16,030
    Location:
    Under my bridge
    I can agree with that, but that discussion is non-technical and best held elsewhere.
     
    Rootax and disco_ like this.
  17. Riddlewire

    Regular

    Joined:
    May 2, 2003
    Messages:
    329
    Likes Received:
    178
    Looking forward to future DF video...

    "Kill-A-Watt Shootout! All Consoles tested. Does PS5 take the crown?"
     
    DSoup and disco_ like this.
  18. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    10,889
    Likes Received:
    11,008
    Location:
    The North
    Every chip announces both its base and boost clock, CPUs and GPUs alike.
    That's all I'm saying. I get why he didn't announce it, and I'm not going to harp on this anymore because I see it as a marketing decision:
    you don't want to be in a position where the competition announces a number whose worst and best case are the same,
    while your best and worst cases are both lower than it, and different from each other.

    I get that. You're right, it's about the games. I don't think any of this has any effect on whether people will buy a PlayStation either. I'm starting to lean towards getting one myself, even as I eye PC parts and a move back to m+kb. But the purchase will come down to the games.

    I just think, from a technical point of view, that as much as he tries to reassure us the clock is generally not going to be all that variable, there are clearly cases where it can be heavily variable. Which is normal, but could pose some challenges for developers in handling hitching, etc.

    Developers are more likely to build tolerances into their code than to optimize for power. And by tolerances, I mean they would look to write code that won't drop the clock speeds on either chip.
     
    Silenti, PSman1700 and BRiT like this.
  19. Inuhanyou

    Veteran Regular

    Joined:
    Dec 23, 2012
    Messages:
    1,116
    Likes Received:
    292
    Location:
    New Jersey, USA
    These "cache scrubbers" on the PS5 i keep hearing about...can someone please explain to me what they are? I keep hearing that they are there to partially mitigate the bandwidth issues, but how does that work if that's indeed how it is?
     
  20. Rockster

    Regular

    Joined:
    Nov 5, 2003
    Messages:
    973
    Likes Received:
    129
    Location:
    On my rock
    I agree, and completely understand why they would be hesitant to share that number. I think it would be less of an "issue" were it relatively higher than the competition's rather than lower. And I also appreciate where you're coming from in terms of utilization. However, historically, all those theoretical measures you've been describing as unimportant have an excellent track record of predicting relative performance. Any developer optimization to improve utilization usually benefits all platforms fairly equally, so similar percentages of relative wholes are valid comparisons. And all the "secret sauce" type features have historically not proven particularly influential in real-life performance comparisons.

    Things like resolution and frame-rate differences are a direct result of those performance capabilities. And 3rd-party software shipping across all platforms has been the most visible byproduct, as I'm sure it will be again. But from a purely technical point of view, I'm genuinely interested in understanding what sort of physical constraints they encountered with their solution and how that manifested in the design. It's like telling a car guy: this is a really fast car, but it's not important for you to understand the nuts and bolts of how it achieves that speed or what its limits are.
     
    tinokun and PSman1700 like this.