Next Generation Hardware Speculation with a Technical Spin [2018]

Discussion in 'Console Technology' started by Tkumpathenurpahl, Jan 19, 2018.

  1. Theeoo

    Newcomer

    Joined:
    Nov 13, 2017
    Messages:
    132
    Likes Received:
    64
    People who can't play games at anything less than 60 fps, and are willing to pay the premium.
     
    HBRU likes this.
  2. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,583
    Likes Received:
    5,681
    Location:
    ಠ_ಠ
    Plenty of premium to go around on PC these days. ;)
     
    BRiT likes this.
  3. bunge

    Regular Newcomer

    Joined:
    Nov 23, 2014
    Messages:
    725
    Likes Received:
    513
    Fanboys?
     
  4. east of eastside

    Newcomer

    Joined:
    Dec 25, 2011
    Messages:
    219
    Likes Received:
    22
    next-gen console CPU:

    Ryzen = no
    ARM = yes

    There is no Ryzen core that is suitable (TDP/die size/economics) for a console. AMD abandoned the ultra-mobile, low-power market that Jaguar addressed, and there is no Ryzen replacement.

    What makes more economic sense: making a console with 4-8 hot/big/expensive Ryzen mid-range CPU cores, or making a console with 12-16 ARM mobile cores? No contest, the economies of scale favor ARM.

    Next-gen CPUs will have a greater focus on GPU off-loading and GPU integration than has ever been achieved before, negating the need for a strong stand-alone CPU.

    Microsoft is heading in an ARM direction with Windows and it will do the same with XBOX.

    Sony will break away completely from AMD, skip backward compatibility, and go with an ARM/NVIDIA console. SoftBank will be a factor in that development, as ARM is now a Japanese-owned company and SoftBank is a major investor in NVIDIA.

    Sony will produce a PS5 hybrid portable to compete with the Switch, and Xbox will have a cloud-streaming-based portable.

    Everything next-gen will feature ARM-based CPUs.
     
    #84 east of eastside, Jan 26, 2018
    Last edited: Jan 26, 2018
  5. Tkumpathenurpahl

    Regular Newcomer

    Joined:
    Apr 3, 2016
    Messages:
    998
    Likes Received:
    716
    --Bandwidth--

    From what I've read, it seems the Pro is bandwidth-starved - I haven't read anything similar about the X1X - so is 50GB/s enough, or only just sufficient?

    Given the PS2's insanely fast (but very small) EDRAM, and the wizardry that was produced with it, is too much bandwidth ever really possible?

    --Cost--

    https://www.extremetech.com/computing/258901-samsung-introduces-new-16gbps-gddr6-2gb-capacities

    Quoting from the above "In other words, a system could now field 8GB of RAM in just four GDDR6 chips while maintaining a respectable 256GB/s of memory bandwidth."

    A 16GB 16Gbps GDDR6 console would possess 512GB/s of bandwidth, utilising 8 chips. The same capacity and bandwidth would be possible with just 2 stacks of HBM2 - albeit 8GB stacks, so maybe "just" isn't the right term to use...
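
    A quick sanity check on those figures - a minimal sketch in Python, assuming GDDR6's usual 32-bit-per-chip interface and HBM2's 1024-bit stack interface at 2Gbps per pin (both assumptions of mine, not from the article):

    Code:
    # Back-of-envelope aggregate bandwidth: pins x per-pin rate / 8 bits-per-byte.
    def bandwidth_gb_s(chips: int, bus_bits_per_chip: int, gbps_per_pin: float) -> float:
        return chips * bus_bits_per_chip * gbps_per_pin / 8

    print(bandwidth_gb_s(4, 32, 16))   # 256.0 GB/s - the quoted 8GB GDDR6 config
    print(bandwidth_gb_s(8, 32, 16))   # 512.0 GB/s - a 16GB, 8-chip GDDR6 console
    print(bandwidth_gb_s(2, 1024, 2))  # 512.0 GB/s - two HBM2 stacks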

    HBM3 is supposed to be cheaper to produce than HBM2, possess at least twice the bandwidth, and offer up to 64GB per stack. If it meets its promises (hahaha), just one stack should be enough to feed a 10TF console.

    So, over the early 2020s, do HBM2 or HBM3 look likely to reach a point where one or two stacks are cheaper than GDDR6 of the same capacity or bandwidth?

    --SSG--

    Also, would the use of NAND impact the bandwidth at all?

    https://www.digitaltrends.com/computing/amd-explains-its-monster-radeon-pro-ssg/

    All I'm getting from that is 8GB/s access to the solid-state memory... and I'm struggling to find anything that discusses the bandwidth hit on the GPU+VRAM.
     
  6. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,288
    Likes Received:
    385
    Location:
    Australia
    This is just wrong:
    1. CPU cores make up an ever-decreasing amount of die space across just about all SoCs. Eight Zen cores on 7nm should be in the range of 40mm^2, so somewhere around 10-15% of a console SoC (see the quick check after this list).

    2. Zen isn't hot at all; I don't know where you got that from, especially if you disable SMT, since SMT generally increases perf per mm^2, not perf per watt. Just look at the 2500U: in TDP-bound, all-core-loaded situations it beats Intel in both perf and perf per watt.

    3. Calling Zen mid-range? Now we know you're just shilling. Zen v1 is what, ~5-8% behind Skylake per clock? Zen v1.something is improving memory/cache performance (the one obvious area where Skylake has an advantage), and no one has any idea what Zen v2 looks like. The only way I can get Zen to be mid-range is by using a logarithmic scale. That would mean your solution uses ultra-weak cores, and it would lose on marketing alone...

    4. 8+ strong threads will always beat N×2 weak threads in complex workloads with complex data-sharing needs. Many game developers (even AAA) are still struggling to get away from needing main threads, let alone lesser developers.
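
    A quick check of point 1's arithmetic (the 40mm^2 figure is the estimate above; the total SoC sizes are assumptions of mine, roughly PS4- to Scorpio-class dies):

    Code:
    # CPU share of a console SoC, assuming 8 Zen cores at ~40mm^2 on 7nm.
    zen_8c_area = 40.0                  # mm^2, per the estimate above
    for soc_area in (300.0, 360.0):     # assumed total SoC sizes, mm^2
        print(f"{soc_area:.0f}mm^2 SoC -> CPU share {zen_8c_area / soc_area:.0%}")
    # -> 13% and 11%, consistent with the 10-15% claim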

    Good to see the hand-waving didn't stop at Zen specifically. People have been saying this forever and it hasn't happened. It hasn't happened for a simple reason: latency. So what has magically changed? That's right, nothing. We can already run whatever kernels you want on the console GPUs, and there has been no revolution.

    What's the silver bullet that changes the paradigm? Be specific.

    Want to put money on that? Everything else in your post is wrong...
     
  7. east of eastside

    Newcomer

    Joined:
    Dec 25, 2011
    Messages:
    219
    Likes Received:
    22
    It's very elementary that an ARM console CPU solution is going to beat a Ryzen one on TDP, size, efficiency, potential core configurations, cost, and economies of scale.

    It's obvious.

    ARM won't have the penalties Zen will have doing 8 cores (or more) in a mesh, not that it matters in the big picture of the case for ARM.

    Xbox One X already demonstrated a scenario for greater GPU offload to offset low-power cores. This is the design trend that will continue.

    Ryzen will *never* be in a console.
     
  8. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,288
    Likes Received:
    385
    Location:
    Australia
    This is just more hand-waving. The simple fact is that ARM, without major development costs borne by the console maker, can only deliver some of those things at any one time, or you end up with a phone SoC and no one will buy your console.

    Look at a Hurricane or Typhoon core next to Zen; use the L1Ds or an L2 bank as a relative reference point for scale. Does one look significantly bigger than the other? A little, but not much. People estimate Apple's big cores at about 3mm^2; Zen is 5mm^2. That Zen core also clocks 1.75x higher on a worse process.

    Size and efficiency advantage? Sure, at significantly poorer performance. Zen has shown it is very performant and efficient up to ~3GHz (linear scaling); 7nm will extend that, and any point along that curve can be picked.

    Just like with ARM, you can have any core config you want with x86; you just have to pay someone to build it (just like ARM). A CCX is a good midpoint: 4 cores/8 threads with very low cache latency and a method to scale cores up. Look at DynamIQ: private L2, shared L3 up to 8 cores, then connect the clusters together. It looks kind of like Zeppelin/Zen, except no one has implemented it beyond one cluster, so who knows what that performance looks like...

    I really want to see you explain economies of scale to me. AMD can fab at GF, TSMC and Samsung as required, just like everyone else...

    Then why do you have such a hard time demonstrating it? You know, claims require this amazing thing; you might have heard of it, maybe not? It's called EVIDENCE!

    You obviously have no idea. Funny that ARM's interconnect design looks a lot like AMD's interconnect design, yet one is magically worse than the other, and only one of them has been scaled across 16 "clusters"...

    No it hasn't; they have to tap-dance around it. They also have a significantly weaker CPU/GPU console that the same games need to run on, so it's never going to be a good example.

    You continue to ignore latency. You also piss on AMD, yet their GPUs have the best QoS/latency/concurrency controls of any GPU. Funny, that.

    You only need to look at PCs to see you're full of it: by your logic an 8350 should be a great gaming CPU, yet the bigger the GPU, the quicker the bottleneck. You're spouting nothing we haven't heard before. Remember the PPU? Look where that ended up: a decade at NV, the manufacturer most willing to pay for/implement features directly in games, and what do they have to show for it?

    This time it was all going to run in the cloud; look how that went...

    So again I ask you: want to bet on it? I'll go a wager; willing to put your money where your mouth is?
     
    function likes this.
  9. temesgen

    Veteran Regular

    Joined:
    Jan 1, 2007
    Messages:
    1,531
    Likes Received:
    325
    The bad news is this tech is going to be expensive; the good news is that because the tech is so advanced, North Korea won't be allowed to buy all the systems at launch to power their rockets.

    In all seriousness, I have no idea what the specs will look like, but while I hope we get machines that can produce a 4K image, native or checkerboard, at 30 to 60 fps, I'm far more interested in seeing advanced AI, better animation and more variety in the assets used in-game.

    Hopefully we'll have game engines procedurally assembling teams of enemies which vary even within the same class: some bigger, others smaller, a few darker, others lighter, slightly different outfits, varying moods and so on. Same with buildings and any other art assets. For example, seeing cars going up and down the street, some looking fresh off the lot and others with scratches or paint issues here and there, would greatly increase the realism, even if fundamentally it's only 3 or 4 models of car with 4 or 5 varieties of each.

    I'm not sure how CPU- or memory-intensive this sort of thing would be, but having levels with demolition, more asset variety within a specific class of object, and some sort of machine learning so tactics evolve based on how you play the game would go a long way.

    Probably the best way to describe this would be a game with a kennel of Dalmatians where the dogs are all different. They are all white and black, but some are bigger, or have more spots, or the spots are in slightly different locations, or some dogs are quieter or more friendly.
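
    A toy sketch of that idea in Python - seeded per-instance variation over a single base asset. Every name and parameter range here is illustrative, not from any real engine:

    Code:
    import random
    from dataclasses import dataclass

    @dataclass
    class Dalmatian:
        scale: float         # relative body size
        spot_count: int      # how many spots to stamp onto the base texture
        spot_seed: int       # seed controlling where the spots land
        friendliness: float  # 0 = aloof, 1 = friendly

    def spawn_kennel(world_seed: int, n: int) -> list:
        """Derive n similar-but-distinct dogs from one base model and one seed."""
        rng = random.Random(world_seed)
        return [Dalmatian(scale=rng.uniform(0.85, 1.15),
                          spot_count=rng.randint(25, 60),
                          spot_seed=rng.getrandbits(32),
                          friendliness=rng.random())
                for _ in range(n)]

    # Same seed -> the same kennel on every visit, at negligible memory cost.
    for dog in spawn_kennel(world_seed=42, n=5):
        print(dog)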
     
  10. HBRU

    Regular Newcomer

    Joined:
    Apr 6, 2017
    Messages:
    390
    Likes Received:
    39
    Why move to ARM and not stay with Jaguar? As time passes, more and more tasks are handed to the GPU on today's consoles, offloading the CPU... As far as I know, the biggest problem today (except for the One X) is a lack of bandwidth, not a CPU that's too weak. I understand someone wants better AI for enemies in games, but that's maybe better achieved by just doubling the Jaguar cores to 16... Ryzen is a huge chip compared with Jaguar...
     
  11. Theeoo

    Newcomer

    Joined:
    Nov 13, 2017
    Messages:
    132
    Likes Received:
    64
    So what you're saying is we need to avoid the Truman Show effect.
     
  12. msia2k75

    Regular Newcomer

    Joined:
    Jul 26, 2005
    Messages:
    326
    Likes Received:
    29
    Just to be clear, are those 40mm^2 estimates without any cache included, or with L1 & L2 cache?
     
  13. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,505
    Likes Received:
    829
    A single quad-core CCX is 44mm^2 including L1, L2 and L3 caches on 14nm. You can probably expect that area to be halved going from 14nm to 7nm.

    Cheers
     
    Tkumpathenurpahl likes this.
  14. bitsandbytes

    Newcomer

    Joined:
    Nov 27, 2011
    Messages:
    185
    Likes Received:
    64
    Location:
    England
    I'm not convinced a 50% size reduction is on the cards. Xbox One's SoC/APU shrank by 33% going from 28nm to 16nm (363mm^2 to 240mm^2), so expect a similar reduction going to 7nm.

    This would make a 7nm 2x CCX ~57mm^2, versus the OG PS4's 8 Jaguar cores at ~52mm^2 including caches?
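
    A sketch of that arithmetic in Python; using the Xbox One 28nm-to-16nm shrink as a proxy for 14nm-to-7nm scaling is the assumption here, while the input figures are from the posts above:

    Code:
    xb1_28nm, xb1_16nm = 363.0, 240.0   # mm^2, Xbox One APU before/after shrink
    shrink = xb1_16nm / xb1_28nm        # ~0.66, i.e. a ~33% area reduction

    ccx_14nm = 44.0                     # mm^2, quad-core CCX incl. caches
    print(f"2x CCX at 7nm: ~{2 * ccx_14nm * shrink:.0f}mm^2")
    # -> ~58mm^2, close to the ~57mm^2 figure above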
     
  15. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,160
    Likes Received:
    8,311
    Location:
    Cleveland
    But was that size reduction limited because of the ESRAM?
     
    AlBran likes this.
  16. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,684
    Likes Received:
    5,983
    Some of the things about AI in general - and perhaps we've discussed this to death - is that I can see investing in AI being a very difficult choice. You can pour tech and budget into it, but if you do, you'd expect it to have a dramatic effect on the game itself; if people aren't singing praises about it, that's no different from investing a whole bunch of money into art assets that no one sees. So it makes sense in games where you are specifically trying to beat the AI: chess games, Civ, etc. But in narrative titles, having machine-learning AI makes curation of the game experience harder, and you're investing all this budget to include random factors that only make your life harder in the long run.

    Some of the best examples I've read so far involve the idea of a persistent open world operating and growing on its own, whether or not you interact with it. That's an interesting and novel concept, but curating a story and interlocking pieces around it becomes more challenging as a result - being able to control things like progression, etc. Unless you remove those systems completely (also doable), but then I feel like we're treading territory similar to procedurally generated worlds (i.e. No Man's Sky), where the experience is hard to control; though in some cases (Diablo) it works great. It would be interesting if there were no side quests and no progression system, the only quests were the main storyline, and everything else was entirely up to the player to leverage the environment to their needs or not. You'll continually need more CPU to have more depth of interactivity with the world, so perhaps that's a catch-all statement.

    It seems like the biggest evolution from more CPU tends to sit around having 'more' detailed/immersive worlds. I'm still not sold on the idea of real-time adaptive AI, or adaptive AI in general. People don't actually like realism as much as they think; if games were actually realistic, they'd be too frustrating to play. E.g.: what happens when you get stomped by pre-made teams working together in an MP match.
     
    Tkumpathenurpahl and temesgen like this.
  17. bitsandbytes

    Newcomer

    Joined:
    Nov 27, 2011
    Messages:
    185
    Likes Received:
    64
    Location:
    England
    Not sure TBH, as for some strange reason no one seems to have actually measured and reported what reduction the PS4 Slim's APU got, unless I missed it?

    You are right in that I do remember reading that ESRAM is harder to shrink.
     
  18. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,583
    Likes Received:
    5,681
    Location:
    ಠ_ಠ
    A CU on Durango/Orbis is ~6mm^2. On the somewhat recent PS4 Pro die shot, their enhanced CU is roughly 3.5mm^2, while on Scorpio the estimate is about 3.1mm^2.

    Each 1MB partition of CPU L2 is about 3.6-3.8mm^2 on the Durango/Orbis shots, while it's roughly 1.5mm^2 on Scorpio/Neo.

    Each Jaguar core on Durango/Orbis is ~2.9mm^2. Scorpio seems to have slightly enlarged Jaguars compared to Neo - 2mm^2 vs ~1.8mm^2.
     
    #98 AlBran, Jan 27, 2018
    Last edited: Jan 27, 2018
  19. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,411
    Likes Received:
    10,778
    Location:
    Under my bridge
    There's a whole discussion on AI. It's not an easy thing to change. Bethesda had to tone down the Radiant AI in Oblivion because it screwed with the gameplay. We don't want AI that's too smart, or it'd be unbeatable.

    The important thing is to have versatile hardware that can be used as needed. I'll repeat my desire for a volumetric model of the world that can be used for graphics, audio, and AI. Whether that warrants a ray-tracing accelerator, or just a fast GPU with loads of RAM and bandwidth, I don't know.
     
    milk, temesgen and iroboto like this.
  20. Tkumpathenurpahl

    Regular Newcomer

    Joined:
    Apr 3, 2016
    Messages:
    998
    Likes Received:
    716
    Given Mark Cerny's emphasis on "time to triangle" with the PS4, is there an argument that, whilst offloading CPU tasks to the GPU is possible and prevalent in bigger studios, it's worth going with a substantial CPU upgrade for ease of development across the wider development community?
     
    milk likes this.