Next Generation Hardware Speculation with a Technical Spin [pre E3 2019]

Discussion in 'Console Technology' started by TheAlSpark, Dec 31, 2018.

Thread Status:
Not open for further replies.
  1. fehu

    Veteran

    Joined:
    Nov 15, 2006
    Messages:
    2,067
    Likes Received:
    992
    Location:
    Somewhere over the ocean
    Wait
I've grown up believing that DMA was the best thing after chocolate and sex, and now we're all supposed to be happy because the CPU will thrash its cache to manually manage all the IO?
     
  2. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,104
    Likes Received:
    16,896
    Location:
    Under my bridge
    I have no idea.
For the bulk of the library, possibly not, but for 100% 'play from disc' emulation, I think it's necessary.

You really wouldn't need to. It's 4 mm²! Power draw is going to be a handful of watts.

It's not really about a substantial benefit, but killing two birds with one stone. They need a secondary processor for IO. They want PS3 BC. All the arguments against including a Cell CPU, which are all valid, can likewise be levied at PS1's inclusion in PS2. Do PS2 gamers really want to play old-gen PS1 titles? Is it worth the expense rather than just using the PS2 CPU? Admittedly, PS1's inclusion would have been a lot cheaper than a Cell inclusion in PS5.

    Incidentally, Sony shrunk Cell to fit eight onto a server rack for PSNow. Shrinking it even further, in a combined PS5/PS3 box, which also runs PS4 and PS1/2 emulated games, would cover all the bases for offering the entire 25-year PS library. The alternative would be running expensive, powerful hardware to emulate PS3 where a few watts of tiny silicon would be more efficient.
     
    Heinrich4 and MBTP like this.
  3. I wonder if it really is. Just look at how far the RPCS3 developers have gone, probably from just doing JIT emulation through trial and error:

    Nah, I think the alternative would be to emulate the PS3 using the exact same hardware that already powers PS5 games so at zero mm^2 cost, probably with the GPU massively downclocked.
     
    #2363 Deleted member 13524, May 23, 2019
    Last edited by a moderator: May 23, 2019
    BRiT likes this.
  4. MBTP

    Newcomer

    Joined:
    Jul 7, 2017
    Messages:
    41
    Likes Received:
    29
    Just to add, Sony would not do a node shrink exclusively for the PS5; they would do it for all their PSNow server racks that run PS3 games too, which would allow a massive expansion. Even 12nm would be interesting from this perspective, I think.
     
  5. Globalisateur

    Globalisateur Globby
    Veteran Subscriber

    Joined:
    Nov 6, 2013
    Messages:
    4,592
    Likes Received:
    3,411
    Location:
    France
    I don't know. Nintendo also tried to kill two birds with one stone with the WiiU (with hardware Wii BC)... They created an inefficient Frankenstein.
     
  6. Vega86

    Newcomer

    Joined:
    Sep 25, 2018
    Messages:
    191
    Likes Received:
    131
    Is UFS 3.0 going to be the storage of the next generation?

    Releasing Lockhart early as an honest-to-goodness 4K/60fps minimum for all of this generation's games at $500 would be interesting.
     
  7. temesgen

    Veteran

    Joined:
    Jan 1, 2007
    Messages:
    1,680
    Likes Received:
    486
    The thing that bothers me about a discless Xbox is BC; my entire 360 library is physical.
     
  8. RDGoodla

    Regular

    Joined:
    Aug 21, 2010
    Messages:
    609
    Likes Received:
    172
    Does it mean that PS5 controller will have a touch pad with haptic feedback?

    It will be interesting for games if the touch pad really has haptic feedback.

    Is it possible that the Cell's SPEs could be used for ray tracing?
     
    Heinrich4 likes this.
  9. bgroovy

    Banned

    Joined:
    Oct 15, 2014
    Messages:
    799
    Likes Received:
    626
    Right, but they're incompetent.
     
  10. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    44,104
    Likes Received:
    16,896
    Location:
    Under my bridge
    They based their entire console architecture on something incredibly outdated, which was dumb. Had they done it the way being suggested, as Sony did with PS2, Nintendo would have designed a new system on a new architecture and only included the necessary, useful hardware from GC. The CPU was only 19 mm², so it could have been coupled with a non-PPC processor as an audio or other processor. Or keep that part the same with a compatible PPC CPU and just use the useful parts from Hollywood that couldn't be nicely emulated.

    No-one's suggesting PS5 is hampered by using Cell as a basis for anything important! Like I say, as the IO driver it'd be isolated from devs, who wouldn't need to get their hands dirty. As an audio driver, Sony could do all the audio API but potentially leave it open for devs to meddle with. Unlike PS3, devs would not need to be able to write efficient code for it to do anything useful.
     
    MBTP likes this.
  11. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    Potentially different in this scenario is that they wouldn't necessarily be melding the different architectures (WiiU GPU) or extending a dead architecture (WiiU tri-core) so much as just using duct tape to hang the Cell off some interconnect.

    That said there’s going to be some low level RSX headaches to deal with... it’s not just all on Cell.
     
  12. anexanhume

    Veteran

    Joined:
    Dec 5, 2011
    Messages:
    2,078
    Likes Received:
    1,535
    Wii was an OC’ed GameCube - the last time Nintendo tried to compete on HW power.
     
    BRiT likes this.
Isn't each new PowerPC generation's ISA a superset of the previous generation's?
    I thought they were like x86 in that regard, so using a scaled-down and shrunk POWER6 or POWER7 seemed like the obvious choice at the time to maintain BC.


    From what I've seen of the RPCS3 emulator, any DX12-era GPU above Intel's GT2 can emulate the RSX perfectly. They just say it needs Vulkan support, so I'm guessing anything with shader model 5 and up is good enough.

    Truth be told, the vast majority of PS3 games ran at 720p30. AMD's 15W APUs can do way better than that on 7th-gen games.
     
  14. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    This implements a subset of a general-purpose file system that serves as a sort of bypass of the coexisting standard file system. By limiting the data to a specific use case of read-only large game packages, and by using their role as platform maker to make assumptions on the OS and SSD side that independent OS and SSD manufacturers can't, a lot of steps intended for managing arbitrary accesses, arbitrary clients, and protections can be skipped.
    There are various places where custom file systems persist (high-performance databases, for example), although it helps significantly in Sony's case that they are not trying to do anything near the complexity of more full systems and can fall back to the established file system for elements that fall outside the archive system.
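    As a concrete (and entirely hypothetical) sketch of what such a stripped-down archive lookup might look like: a table built at packaging time hashes a file name straight to an (offset, length) span in one read-only blob, skipping directory walks, permission checks, and locking. All names and layouts below are illustrative, not from the patent.

```python
# Toy model of a read-only game-package "file system": because the data is
# static and there is a single trusted client, lookup collapses to one hash
# probe and one bounded copy.

import hashlib

class GameArchive:
    """Read-only package: one flat blob plus a hash-indexed table."""

    def __init__(self, blob: bytes, entries: dict[str, tuple[int, int]]):
        self._blob = blob
        # Precomputed at packaging time; immutable at runtime.
        self._table = {self._key(name): span for name, span in entries.items()}

    @staticmethod
    def _key(name: str) -> bytes:
        return hashlib.sha1(name.encode()).digest()

    def read(self, name: str) -> bytes:
        # No metadata walk, no permission check, no lock.
        offset, length = self._table[self._key(name)]
        return self._blob[offset : offset + length]

blob = b"HEADERtexture-datamesh-data"
arc = GameArchive(blob, {"tex0": (6, 12), "mesh0": (18, 9)})
assert arc.read("tex0") == b"texture-data"
assert arc.read("mesh0") == b"mesh-data"
```

    A real implementation would add checksums and deal with compression, but the point stands: almost everything a general-purpose file system does per access can be precomputed when the package is built.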

    The secondary CPU's workflow in this instance is coordinating between the host, accelerator, and SSD controller. Accessing hash tables, running system software, reading buffers/signals, and breaking accesses into 64KB chunks do not seem to pair well with the vector-oriented SPEs, and the PPE's value-add may be questionable. I'm curious how its characteristics as a high-clock, long-pipeline, narrow SMT core with vector units measure up to the ARM cores found in SSD controllers.
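    The 64KB chunking step is easy to picture; here's a toy sketch (the chunk size comes from the patent, everything else is illustrative):

```python
# Split one large read request into 64 KiB chunks aligned to chunk
# boundaries, as a sub-CPU might before handing work to an SSD controller.

CHUNK = 64 * 1024

def split_request(offset: int, length: int, chunk: int = CHUNK):
    """Yield (offset, size) pairs covering [offset, offset+length)."""
    end = offset + length
    while offset < end:
        # The first piece may be short if the request starts mid-chunk.
        step = min(end - offset, chunk - (offset % chunk))
        yield offset, step
        offset += step

# A 150 KiB read starting 4 KiB into a chunk becomes three aligned pieces.
pieces = list(split_request(4 * 1024, 150 * 1024))
```

    This is exactly the kind of branchy bookkeeping, lots of small integer operations and pointer math, that suits a scalar core far better than the SPEs' wide vector units.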

    One quirk to the design is that the second CPU is attached via a coherent bus to the host, which sounds like one of the ideas AMD threw out years ago as to what other things besides compute GPUs could use HSA.
    It seems like it's not fully coherent so much as IO coherent, given how data is moved into a kernel buffer and not released to general use until the sub-CPU signals it, but the patent does state the host and sub-CPU have the same size page tables.
    One item the patent lists is a separate DMA engine, which at least for the SPEs seems possibly redundant, although given we know the host processor is x86, it's not clear how compatible Cell is with its host.

    Conceptually, the accelerator's functions are in line with the compression done by controllers like those from Sandforce, and now-common SSD hardware encryption.
    I think this patent discusses them in part because they've been moved out of the SSD controller and use system memory.
    Higher-end controllers are already multi-core ARM systems with attached accelerators, so this method seems to be moving some of that to the host die, while leaving one or more on the controller die for low-level physical functions.

    I think the larger benefits come from the co-designed elements of the platform, where the OS and controller are allowed to cooperate in special-case handling of heavily read static data. This is enough to justify the OS and host using a dedicated API, a system-level shortcut through the file system, and more active management of a custom SSD architecture, while the SSD can make a number of simplifying design decisions and pass lower-level address information back to higher levels in the stack.

    These may be harder for generic PCs to match than any specific hardware unit. Controllers generally cannot make such assumptions about the data they receive, and handle more of the low-level business of drive management that the OS layers generally don't. The PC OS designers at least currently haven't had a case for bypassing their file systems in this manner, and wouldn't have bespoke APIs and interfaces per SSD model and vendor.
    I'm not sure with present architectures that something that bypasses file management, synchronization, and system protections would be considered acceptable. Also, the file archive's ability to lock out any other clients from the disk may not be an acceptable decision for non-dedicated systems.

    As for the potentially custom SSD controller, I guess we'll have to see how robust it is outside of the file archive's range, and whether there is licensed IP for the arcane management of the electrical and physical quirks of NAND, particularly if they shoot for the higher counts of multi-level flash.

    I think that comes down to the rights held by TSMC's client.
    Sony at one point built its own 90nm fab just to manufacture Cell, the cost of which was one big reason why that initiative was so financially ruinous for Sony. If Sony at the time could fab its own Cell chips, perhaps some freedom in contract manufacturing could exist or could be negotiated.


    The main CPU generally hands things off to the sub-CPU, whose job is similar to what the cores in SSDs do internally, just on the same die this time. The host processor would send off a request and wouldn't come back until the final output had been copied to a destination buffer and the sub-CPU signaled completion.
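    That handoff can be modeled in a few lines. This toy uses a thread and an event where the real design uses a coherent bus and kernel buffers; every name here is illustrative, and zlib stands in for whatever the hardware decompressor actually implements.

```python
# Toy model of the request/completion handshake: the host queues a request
# and blocks until the sub-CPU has copied the final output into the
# destination buffer and signaled completion.

import queue
import threading
import zlib

requests: "queue.Queue[tuple[bytes, bytearray, threading.Event]]" = queue.Queue()

def sub_cpu() -> None:
    # Decompress into the caller's buffer, then signal completion.
    payload, dest, done = requests.get()
    dest[:] = zlib.decompress(payload)
    done.set()

threading.Thread(target=sub_cpu, daemon=True).start()

compressed = zlib.compress(b"level geometry" * 100)
dest = bytearray()
done = threading.Event()
requests.put((compressed, dest, done))
done.wait()  # the host "wouldn't come back" until completion is signaled
assert bytes(dest) == b"level geometry" * 100
```
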
     
  15. MrFox

    MrFox Deludedly Fantastic
    Legend

    Joined:
    Jan 7, 2012
    Messages:
    6,488
    Likes Received:
    5,996
    Not sure, they sell patents for dual-axis resonant rumble, which is actually used in the Switch joy con. It's mildly interesting for lower power consumption. The rest of their development doesn't seem to be useful or clever in any way. Their screen haptic stuff looks like patent trolling to me, but I'm not a patent lawyer.
     
  16. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    22,146
    Likes Received:
    8,533
    Location:
    ಠ_ಠ
    That could be a critical hit to the seamless cinematic experience though.

    Then there's Unreal Engine.

    -----

    Anyone have Sebbbi's old posts handy regarding his virtual texturing, GPU decompression etc. from like a decade back :?: I thought I recalled stuff about the amount of data per frame for a 60Hz title (where there's a budget within the frametime).

    ----
    Here's one:
    https://forum.beyond3d.com/posts/1588434/

    https://forum.beyond3d.com/posts/1655866/
     
    #2376 TheAlSpark, May 24, 2019
    Last edited: May 24, 2019
    BRiT likes this.
  17. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,511
    Likes Received:
    24,411
    @AlBran dug up these posts by Sebbi that discuss memory usage per frame and overall needs of engine streaming and virtual textures.

    From 2011 - https://forum.beyond3d.com/posts/1588434/

    "1080p would require a tile cache of 2.25x and 2560x1600 4.44x size compared to 720p"

    "As 720p requires only around 6.7 MB/s to stream all required texture data, even current network connections have more than enough bandwidth for streaming... however hiding the constant 200ms network latency would require lot of additional research."

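    The quoted scaling factors are just pixel-count ratios relative to 720p, which a couple of lines confirm:

```python
# A virtual-texture tile cache scales with output pixel count; the factors
# sebbi quotes fall straight out of resolution ratios against 1280x720.

def cache_scale(w: int, h: int, base=(1280, 720)) -> float:
    return (w * h) / (base[0] * base[1])

assert round(cache_scale(1920, 1080), 2) == 2.25  # matches "2.25x"
assert round(cache_scale(2560, 1600), 2) == 4.44  # matches "4.44x"
```
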

    From 2012 - https://forum.beyond3d.com/posts/1655866/

    aaronspink: "In the console space, using 2GB as a disk cache alone will make for a better end user experience than 2x or even 3-4x gpu performance."

    sebbi: "I completely disagree with this. And I try now to explain why. As a professional, you of course know most of the background facts, but I need to explain that first, so that my remarks later aren't standing without a factual base."
    ...
    sebbi: "More memory of course makes developers life easier. Predicting data access patterns can be very hard for some styles of games and structures. But mindlessly increasing the cache sizes much beyond working set sizes doesn't help either (as we all know that increasing cache size beyond working set size gives on average only logarithmic improvement on cache hit rate = diminishing returns very quickly)."
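    sebbi's diminishing-returns point can be demonstrated with a toy LRU simulation: on a skewed (Zipf-like) access pattern, the marginal hit-rate gain per added cache slot collapses once the hot set fits. All sizes and the access distribution below are made up for illustration.

```python
# Simulate LRU hit rate vs. cache size on a Zipf-ish trace to show
# diminishing returns past the working set.

import random
from collections import OrderedDict

def hit_rate(cache_size: int, accesses: list) -> float:
    cache: OrderedDict = OrderedDict()
    hits = 0
    for item in accesses:
        if item in cache:
            hits += 1
            cache.move_to_end(item)          # mark as most recently used
        else:
            cache[item] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)    # evict least recently used
    return hits / len(accesses)

rng = random.Random(42)
items = list(range(1, 1001))
weights = [1 / i for i in items]             # Zipf-ish popularity skew
trace = rng.choices(items, weights, k=20_000)

sizes = [50, 100, 200, 400, 800]
rates = [hit_rate(s, trace) for s in sizes]
pairs = list(zip(sizes, rates))
# Hit-rate improvement per extra cache slot at each size step.
per_slot_gain = [(r2 - r1) / (s2 - s1)
                 for (s1, r1), (s2, r2) in zip(pairs, pairs[1:])]
```

    The hit rate keeps rising with cache size, but the gain per added slot shrinks at every step, which is the "diminishing returns very quickly" in the quote.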


    EDIT: Didn't know he was editing his msgs from the IRC discussions...
     
  18. see colon

    see colon All Ham & No Potatos
    Veteran

    Joined:
    Oct 22, 2003
    Messages:
    2,756
    Likes Received:
    2,206
    All I'm saying is that 1X was marketed as 4K. It's the main marketing point used for it, right up there with 6 TF. I don't think potential customers who've been fed this 4K 6 TF marketing are going to forget it all. Next gen is a step backwards in resolution? A step backwards in processing power? I'm curious how you think a GPU with such similar specs is going to produce games that look a generation ahead. The GPU isn't the part of 1X that's holding it back. This has been my point all along. Sure, Ryzen is a big step up from Jaguar, and that extra CPU power can speed up your simulation, AI and other stuff, but it's hard to market framerate. If Microsoft tries to market a less-than-4K, less-than-6TF GPU that produces 1X-quality visuals at higher frame rates as a next-generation machine, they are going to have to sit in the corner holding a swollen eye while they watch Sony spend their lunch money on Twinkies.


    I haven't put that much thought into it. But I'll consider this statement while I contemplate more about the topic.

    To be fair to Nintendo, they repackaged one of their least successful systems, the GameCube, into their most popular system ever. They tried it again with the WiiU and it failed to gain steam, but they didn't do it for no reason: the same concept had already been successful in the previous generation. Actually, some hardware choices for the SNES were made to maintain hardware compatibility with the NES before that idea was scrapped; the hardware remained, though. So they had successfully done it twice, and the WiiU is the outlier.
     
  19. borntosoul

    Regular

    Joined:
    Oct 9, 2002
    Messages:
    319
    Likes Received:
    46
    Location:
    Au
    Well, the confirmation of backwards compatibility with the PS4 means one major thing: the cost and/or specs of the PS5 will probably be higher than otherwise thought.
    So they keep all their PS online players who still have a PS4 as well as the people who can afford a PS5. There's no need for a vastly cheaper version, which might dispel rumors about what MS will do with two vastly different versions of the Xbox.
     
  20. Mskx

    Newcomer

    Joined:
    Apr 20, 2019
    Messages:
    196
    Likes Received:
    216
    Assuming that Lockhart is taking what Anaconda is doing at 4K and doing it at 1080p/1440p, I think the marketing is pretty easy:

    "If you haven't yet made the investment in 4K don't fret, we will not leave you behind! We have a lower price option that can do everything that these more expensive consoles can do; play all next generation games with next generation features like ray tracing and fast load times and framerate blablabla only at a resolution more suited for your TV."

    The angle with Lockhart is to try to give an early option to market segments and regions that would normally wait years into a console life cycle to buy in, the people that are buying PS4 Slims and Xbox One S's today.

    I really don't think it's that hard nowadays.
    There is a more widespread understanding of the benefits now: a decade-plus of the biggest FPS games going for 60fps, various popular streamers and pros all saying how important it is for high-level play, and 60fps becoming a standard feature on streaming sites and YouTube mean it is much easier to show the difference and benefits to people than back when Insomniac decided to stop going for it.

    If someone like Xbox decides that is a thing they want to push for, I don't think they would have a hard time marketing it, and they sure would get some bonus points from a large portion of the hardcore crowd.
     
    OCASM likes this.

