General Next Generation Rumors and Discussions [Post GDC 2020]

Discussion in 'Console Industry' started by BRiT, Mar 18, 2020.

  1. Tabris

    Newcomer

    Joined:
    Sep 24, 2011
    Messages:
    56
    Likes Received:
    40
    https://www.tomshardware.com/news/a...u-arden-source-code-stolen-100-million-ransom

     
  2. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,327
    Likes Received:
    5,655
    Did you watch the presentation? He's talking about releasing the memory of everything 180° behind the player. No game ever did that, because no storage has ever been fast enough to reload it as the player turns. It frees up half the asset memory. The limitation is how fast the player turns: there has to be enough time to load pretty much everything during a full 180° turn.
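    A back-of-envelope sketch of the budget that implies (every number below is an illustrative assumption, not a PS5 spec):

    ```python
    # Rough bandwidth budget for dropping the rear 180 degrees of assets.
    # Every figure here is an illustrative assumption.

    asset_memory_gb = 8.0   # assumed RAM devoted to streamed assets
    rear_fraction = 0.5     # everything 180 degrees behind the player
    turn_time_s = 0.75      # assumed duration of a full 180-degree turn

    bytes_to_reload_gb = asset_memory_gb * rear_fraction
    required_gb_per_s = bytes_to_reload_gb / turn_time_s
    print(f"~{required_gb_per_s:.1f} GB/s to refill {bytes_to_reload_gb:.0f} GB "
          f"during a {turn_time_s}s turn")
    ```

    With those guesses you land around ~5.3 GB/s, which is roughly the region the raw PS5 figure sits in.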
     
    egoless and megre like this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    9,895
    Likes Received:
    9,246
    Location:
    Self Imposed Work Exile: The North
    Yes. Sorry. I didn’t realize you were referring to Cerny’s example.

    At the same time, I don't know many games that would need that situation. The games in which we turn super fast solved that problem already. In TLOU, UC4, and the other titles that put tons of effort into graphics, sensitivity is locked on how fast you can move, traverse, and look around.

    It's an interesting example, but I don't see many use cases in which people are going to do a 180 here, unless you take control away from the player.

    Back when we played Quake, CS, etc., all those games had 360 no-scopes bouncing everywhere, and we didn't have these issues at all back then. People had super high sensitivity and no one complained about texture load-in. We just held things in memory.

    Look how fast we moved in those games; there weren't texture streaming problems back then.

    Games are different now, of course, but the use cases for this type of twitch action, where you need to look everywhere at once, were solved by holding a lot more in memory.
     
    PSman1700 likes this.
  4. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,321
    Likes Received:
    1,346
    You sure? I think I need to check GTA 4 or 5, because I remember playing an open-world game where you can readily flip your view 180° to see what's behind you while driving in third-person mode.
     
    PSman1700 and BRiT like this.
  5. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,327
    Likes Received:
    5,655
    That's keeping the LOD of assets based on distance from the player; the game keeps 360 degrees of data in memory. (Sure, there are exceptions like car games and on-rails shooters; no idea about GTA's driving.)

    Cerny proposed cutting that in half, or possibly to a third if they keep only a 120° range in front. If it works, it would double or triple the asset detail possible at any moment. The technique is limited by the amount of data you can load within the time the player takes to turn around, because a full 180 potentially ends up loading a completely new dataset.

    I'm saying it contradicts the opinion that SSD speed cannot improve IQ and only improves load times.
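    As a toy model of that halving/thirding (a sketch, nothing Cerny specified):

    ```python
    # If only a wedge of the 360-degree scene must stay resident, the same
    # RAM holds proportionally denser assets. Purely illustrative.

    def detail_multiplier(resident_degrees: float) -> float:
        """Asset density gain vs. keeping the full 360 degrees resident."""
        return 360.0 / resident_degrees

    print(detail_multiplier(360))  # 1.0x: today, everything stays resident
    print(detail_multiplier(180))  # 2.0x: drop the rear hemisphere
    print(detail_multiplier(120))  # 3.0x: keep only a 120-degree front wedge
    ```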
     
    #605 MrFox, Mar 26, 2020
    Last edited: Mar 26, 2020
    egoless, lynux3, megre and 5 others like this.
  6. jayco

    Veteran Regular

    Joined:
    Nov 18, 2006
    Messages:
    1,102
    Likes Received:
    372
    Yep, that's what he said. Not that you need an SSD to do a 180. When playing GTA you need a lot of stuff loaded into RAM that you may never use if you don't decide to turn your character around, but it has to be there just in case. Cerny's argument is that the PS5 SSD is so fast that you actually don't need to have all those unused assets sitting in RAM; you can retrieve them from the SSD only when necessary.
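    In rough pseudocode, the contrast looks like this (a hypothetical sketch, not a real engine API):

    ```python
    # Hypothetical sketch of the two loading strategies. Not a real API.

    resident = {}  # asset id -> asset data already in RAM

    def get_asset_hdd_era(asset_id):
        # HDD era: anything the player *might* look at was preloaded,
        # so a miss here is a visible streaming failure.
        return resident[asset_id]

    def get_asset_ssd_era(asset_id, read_from_ssd):
        # Fast-SSD era: a miss is fine if the read completes within the
        # fraction of a second the turn takes.
        if asset_id not in resident:
            resident[asset_id] = read_from_ssd(asset_id)
        return resident[asset_id]

    print(get_asset_ssd_era("rock_diffuse", lambda a: f"<{a} bytes>"))
    ```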
     
    Picao84 likes this.
  7. BRiT

    BRiT Verified (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    14,863
    Likes Received:
    12,991
    Location:
    Cleveland
    So why did they not stick to 8 GB total?
     
  8. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,686
    Likes Received:
    155
    Location:
    In the land of the drop bears
    Regarding this point, does anyone know whether IQ could be improved substantially in a current-generation game by using an SSD, or is IQ currently limited by other factors as well?
     
  9. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    6,327
    Likes Received:
    5,655
    I wonder too. Surely games with dynamic LOD can crank up the draw distance and the LOD bias; the streaming will be pretty much instantaneous compared with what the game was designed to stream from (a 50 MB/s HDD).
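    For a sense of scale, assuming streamed world data grows with the square of the streaming radius (my assumption, ground-coverage style):

    ```python
    # How far the high-detail radius could move out if streamed data grows
    # quadratically with radius. Illustrative model only.
    import math

    hdd_mb_per_s = 50.0     # the budget last-gen games were designed around
    ssd_mb_per_s = 5500.0   # raw next-gen SSD figure, for comparison

    ratio = ssd_mb_per_s / hdd_mb_per_s   # 110x throughput
    radius_gain = math.sqrt(ratio)        # ~10.5x draw distance
    print(f"{ratio:.0f}x throughput -> ~{radius_gain:.1f}x streaming radius")
    ```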
     
    megre likes this.
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    9,895
    Likes Received:
    9,246
    Location:
    Self Imposed Work Exile: The North
    It still needs to go into RAM, though. You're just not caching as much as you would with a slower HDD. So the solution is the same as today, but you don't need as huge a buffer. There's an upper limit on texture quality as well: if the texture detail exceeds what the output resolution can show, it's pointless.
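    That ceiling is easy to put a number on (a sketch, with the screen coverage picked arbitrarily):

    ```python
    # Once a texture supplies ~1 texel per screen pixel, extra resolution
    # cannot show. The coverage value below is an arbitrary example.
    import math

    def max_useful_texture_dim(screen_pixels_covered: int) -> int:
        """Smallest power-of-two dimension still giving 1 texel/pixel."""
        return 2 ** math.ceil(math.log2(screen_pixels_covered))

    # A surface spanning a quarter of a 3840-pixel-wide frame:
    print(max_useful_texture_dim(3840 // 4))  # 1024; 4K textures are wasted here
    ```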
     
    VitaminB6 likes this.
  11. dobwal

    Legend Veteran

    Joined:
    Oct 26, 2005
    Messages:
    5,321
    Likes Received:
    1,346
    I won't deny that the SSD will improve streaming, because it will; there is no denying that increasing drive-to-RAM bandwidth by a minimum factor of 24 is going to have an effect.

    However, having no texture data at all in RAM for what's behind you is not going to be absolutely true in most cases, and that's even if the SSD can completely keep up.

    A lot of the texture data that's behind you will also be in your view in front. If you are outside, there will probably be no need to stream in ground or grass textures when you turn around; they are already in RAM.

    Current systems already stream in the highest-resolution texture at the last possible moment, depending on hardware. The level 0 mip takes up roughly three quarters of a full mip chain's size.
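    The mip arithmetic is a quick geometric series to check:

    ```python
    # Each mip level is 1/4 the byte size of the level above it, so the
    # chain converges to 4/3 of level 0.

    def level0_fraction(levels: int) -> float:
        sizes = [0.25 ** i for i in range(levels)]  # level 0 normalized to 1
        return sizes[0] / sum(sizes)

    print(round(level0_fraction(12), 3))  # ~0.75: level 0 is ~3/4 of the chain
    ```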

    Plus, we have yet to see how hungry RT will be when it comes to memory storage and bandwidth. Most of the memory savings may be used to accommodate RT. And streaming in textures faster than ever might not mean a whole lot if we drop from 4K/1800p back to 1080p or 1440p. LOL
     
    #611 dobwal, Mar 26, 2020
    Last edited: Mar 26, 2020
    PSman1700, BRiT and iroboto like this.
  12. jayco

    Veteran Regular

    Joined:
    Nov 18, 2006
    Messages:
    1,102
    Likes Received:
    372
    I don't think 8 GB is enough for the 4K buffers and some AA while also leaving space for the OS and game logic. Some PC games go over 8 GB of VRAM (not including system memory) when certain post-processing effects are enabled. On top of that, the extra capacity buys flexibility: not every game is going to be constantly streaming from the SSD.
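    For scale, a rough sketch of what the 4K render targets alone cost; the formats and buffer counts below are my assumptions, not any console's actual setup:

    ```python
    # Approximate memory for one frame's 4K render targets. All formats
    # and counts are illustrative assumptions.

    W, H = 3840, 2160
    MB = 1024 * 1024

    color_hdr = W * H * 8 / MB      # RGBA16F color target, ~63 MB
    depth = W * H * 4 / MB          # 32-bit depth, ~32 MB
    gbuffer = 4 * W * H * 4 / MB    # four 32-bit G-buffer planes, ~127 MB

    print(f"~{color_hdr + depth + gbuffer:.0f} MB before AA, history "
          "buffers, shadow maps, and post-processing chains")
    ```

    Targets alone come to a few hundred MB; it's the assets, AA history, and OS reservation layered on top that push totals toward the full 8 GB.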
     
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,348
    Likes Received:
    3,879
    Location:
    Well within 3d
    36 CUs would be easier to double, though there's less room below since the Series X would be at an intermediate position between the PS5 and a doubled Pro. Sony may need to consider if doubling is enough, especially if there were to be a Pro variant of the Series X.
    Going from 14 to 16 Gbps would be a scant upgrade, and proportionally weaker than the PS4 to Pro transition, with a ~14.3% bandwidth improvement stretched over 2x the CUs. Perhaps there would be an even faster interface speed, or a change in width, such as at least matching 320 bits, if not going wider.
    Sony's variable clock solution might have some kind of impact on a future Pro, since we'd assume Sony wouldn't want to drop the clock. Raising the clocks could be interesting, though the current clocks are described as being in a region that's already inefficient. 72 or more CUs may be interesting versus a competing xPro if both are much larger in CU count but one is still striving for constant clocks. There may be some load scenarios where it's costlier or more difficult to hold a constant clock with many more active units.
    What else could be scaled with a Pro console like the CPUs might be an interesting question. Zen 2 seems to be a more successful initial implementation versus Jaguar, so the clocks currently given aren't artificially low. A 33% jump would give clocks that would be ~4.7GHz, and node jumps at that clock range are often threatened with clock regressions. Don't know if they'd try for a clock bump, or if a non-standard number of additional cores could be an option.

    There is a Sony patent about varying clocks on the fly so that a new unit can emulate an older unit's performance, but with a true clock that is potentially faster and a spoof clock that the legacy software perceives as the original fixed clock.
    https://patents.google.com/patent/US9760113B2/en
    A mildly higher true clock could paper over any higher internal latencies with clock speed so that by the time the spoof clock has reached what the older code expects for forward progress, the emulated operation is done. 2.23 GHz versus 800 MHz or 911 MHz could be too much, but that might be why there are BC modes--whose clocks may still vary somewhat above the advertised base clock depending on the characteristics of what is running.
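    As a minimal sketch of that conversion (the two clock values come from the discussion above; everything else is illustrative):

    ```python
    # Spoofed-clock readout in the spirit of US9760113B2: the hardware runs
    # at a faster true clock but reports elapsed ticks to legacy code at
    # the old rate. Clock values are illustrative.

    TRUE_HZ = 2_230_000_000   # new hardware's real clock
    SPOOF_HZ = 800_000_000    # clock the legacy title was built against

    def spoofed_ticks(true_ticks: int) -> int:
        """Elapsed ticks as presented to the legacy software."""
        return true_ticks * SPOOF_HZ // TRUE_HZ

    # After one real second the old title sees exactly one second's worth
    # of 800 MHz ticks, so its timing loops still measure wall time.
    print(spoofed_ticks(TRUE_HZ))  # 800000000
    ```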

    The ISA and hardware itself have their own abstraction. The architecture promises certain outcomes or responses to various inputs, but whether those responses accurately depict what is happening internally is not required information for the software. Many values like the wave or CU ID are accessed with operations that read from system registers or privileged locations. The hardware can give an answer that is valid in terms of what is possible for the legacy software, even if the true answer in terms of the modern implementation is different.
    CPUs running a VM can trap out guest requests for CPU or system information, where the hypervisor or a storage location that tracks the host vs guest relationship can patch in values appropriate for the guest.


    It's possible Sony's design may have had more pessimistic projections for the cost of 7nm wafers at the time the decision was made, so a bit less die area may have made more economic sense. It might have been considered easier to dial back on an over-engineered cooler with the next console hardware revision than it is to eat the cost of a die that would need to wait until 5nm for the next adjustment.
     
    disco_, zupallinere, iroboto and 5 others like this.
  14. Proelite

    Veteran Regular Subscriber

    Joined:
    Jul 3, 2006
    Messages:
    1,417
    Likes Received:
    743
    Location:
    Redmond
    On subject of memory contention:

    http://disruptiveludens.com/la-ultima-especulacion-sobre-ps5-antes-del-final
     
    PSman1700 likes this.
  15. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,686
    Likes Received:
    155
    Location:
    In the land of the drop bears
    milk likes this.
  16. DSoup

    DSoup meh
    Legend Veteran Subscriber

    Joined:
    Nov 23, 2007
    Messages:
    12,064
    Likes Received:
    7,248
    Location:
    London, UK
    The drives may be capable of 10 GB/s theoretical transfer, but the filesystem, driver stack, and I/O chain will eat heavily into that. It's why SSDs in RAID arrays don't eliminate loading times on PC now. Even Stadia, running on server infrastructure, cannot do that.

    Pulling data from the SSD puts it in main RAM, and data is often consolidated in .PAK files and needs separating, possibly decompressing, or even converting by the CPU. And anything for the GPU then needs to be transferred there. The PC is a vastly more flexible architecture, which necessitates more complexity, and that complexity introduces points for bottlenecks. I have no doubt that future PC architectures, bringing faster southbridges and faster local buses, will surpass consoles, but you have to brute-force through the complexity and bottlenecks, compared to a PS5, which has a mad SSD connected to a mad controller that talks directly to a single pool of RAM shared with the GPU.
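    A toy model of where the theoretical number goes (the overhead figures are invented for illustration):

    ```python
    # Effective PC throughput is the minimum of the links in the chain.
    # All overhead figures below are invented for illustration.

    raw_gb_per_s = 10.0        # drive's theoretical transfer rate
    fs_efficiency = 0.85       # filesystem + driver stack losses (assumed)
    cpu_unpack_gb_per_s = 3.0  # .PAK parse/decompress ceiling on CPU (assumed)

    effective = min(raw_gb_per_s * fs_efficiency, cpu_unpack_gb_per_s)
    print(f"~{effective:.1f} GB/s effective")  # the CPU unpack step, not
    # the drive, sets the loading time in this hypothetical chain
    ```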
     
    megre and BRiT like this.
  17. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,522
    Likes Received:
    3,345
    Location:
    Barcelona Spain
    An example of lower efficiency came up in the Cerny presentation: because the NVMe standard only has two priority levels, you need roughly a 7 GB/s drive to keep up with the PS5 controller. In the patent (they did not talk about it in the presentation) the read unit is expanded, and SRAM helps with latency. There are a coherency engine and GPU scrubbers that evict stale data from the GPU caches for the memory ranges being overwritten, plus the DMAC and the two coprocessors, probably ARM, managing the SSD.

    Many custom things help with efficiency.
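    Taking the 7 vs 5.5 GB/s comparison at face value, the implied overhead is easy to express (a reading of the claim, not an official spec):

    ```python
    # Headroom an off-the-shelf NVMe drive would need if coarse
    # prioritization wastes part of its raw rate. The 7 GB/s figure is from
    # the talk as relayed above; the efficiency is only implied.

    ps5_effective_gb_per_s = 5.5    # PS5's raw sequential figure
    nvme_raw_needed_gb_per_s = 7.0  # drive speed Cerny cited as equivalent

    implied_efficiency = ps5_effective_gb_per_s / nvme_raw_needed_gb_per_s
    print(f"~{implied_efficiency:.0%} effective under two priority levels")
    ```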

     
    #617 chris1515, Mar 26, 2020
    Last edited: Mar 26, 2020
    disco_ and megre like this.
  18. Proelite

    Veteran Regular Subscriber

    Joined:
    Jul 3, 2006
    Messages:
    1,417
    Likes Received:
    743
    Location:
    Redmond
    So you're telling me you can add a discrete GPU / more ram via the SSD bay. :runaway:
     
    milk and PSman1700 like this.
  19. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,722
    Likes Received:
    936
    Location:
    Guess...
    Indeed, but isn't DirectStorage designed to mitigate much of that?

    That's not to say there aren't still advantages to the PS5's customizations, there obviously are, but the fair comparison isn't how today's SSDs perform on PCs that don't benefit from DirectStorage.
     
    PSman1700 likes this.
  20. chris1515

    Veteran Regular

    Joined:
    Jul 24, 2005
    Messages:
    4,522
    Likes Received:
    3,345
    Location:
    Barcelona Spain
    https://forum.beyond3d.com/posts/2114618/

    I am explaining that when we see data-driven rendering, it will be a game changer. ;) There, memory size and SSD streaming are the limits... And it will help do things currently impossible in realtime... This is Ubisoft R&D for the next generation.

    EDIT: This is not something we'll see this generation, which is maybe why people don't understand... ;-)

     
    #620 chris1515, Mar 26, 2020
    Last edited: Mar 26, 2020
    Proelite, lynux3, megre and 2 others like this.