In a post-silicon-scaling world, what will drive performance improvements for new console generations?

Discussion in 'Console Technology' started by Prophecy2k, Jan 12, 2018.

  1. Prophecy2k

    Veteran

    Joined:
    Dec 17, 2007
    Messages:
    2,467
    Likes Received:
    377
    Location:
    The land that time forgot
    As the title says.

    I guess this thread probably belongs in the semiconductor industry forum, but I wanted to focus the discussion on consoles, understanding that the development process for new console generations differs in very specific ways (in terms of the design considerations) from desktop PC parts.

    Are we looking at 3D-stacked ASICs becoming a viable path forwards for consoles? Or perhaps a new substrate, i.e. something other than silicon, e.g. graphene, silicon-germanium (SiGe), etc.?

    Or are there ever more exotic semiconductor process technologies and/or radical new computing paradigms that will bring the requisite performance improvements to keep computing performance moving forwards in the mid-to-long term?
     
  2. JPT

    JPT
    Veteran

    Joined:
    Apr 15, 2007
    Messages:
    1,905
    Likes Received:
    283
    Location:
    Oslo, Norway
    I have no clue, so I'll just say Quantum, just to say something :p
     
  3. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,792
    Likes Received:
    5,879
    Location:
    ಠ_ಠ
    Scott Bakula ¯\_(ツ)_/¯

    My impression of other suitable substrates is that they just wouldn't be economical enough due to supply constraints.

    I do wonder about consoles returning to a more customized architecture.
     
    #3 TheAlSpark, Jan 12, 2018
    Last edited: Jan 12, 2018
  4. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    8,000
    Likes Received:
    6,280
    I agree with more custom architecture.
    Different types of accelerators and fewer general-purpose ones like we have now. I imagine the same change will happen in the PC space: industry GPUs with tensor cores, etc.
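    To make the contrast concrete: a tensor-core-style unit retires a small matrix multiply-accumulate, D = A*B + C, as one hardware instruction. A rough C++ sketch of what that single op replaces on general-purpose silicon (the names and the 4x4 tile are illustrative, not any vendor's actual spec):

    Code:
    #include <array>

    // One tensor-core-style op: D = A*B + C on a small tile.
    // Dedicated hardware retires all 64 multiply-adds below as a single
    // instruction; general-purpose silicon issues them a few at a time.
    using Tile = std::array<std::array<float, 4>, 4>;

    Tile mma_4x4(const Tile& A, const Tile& B, const Tile& C) {
        Tile D = C;  // start from the accumulator
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    D[i][j] += A[i][k] * B[k][j];  // 4*4*4 = 64 FMAs
        return D;
    }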
     
    Shortbread, Prophecy2k and milk like this.
  5. McHuj

    Veteran Regular Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,440
    Likes Received:
    560
    Location:
    Texas
    That will happen. There will be much more emphasis on architectures and specialized accelerators in my opinion.

    It wouldn't surprise me if we saw even more integration of memory into a single chip. HBM is just an incremental step toward that: instead of a PCB, you have a silicon interposer that allows for higher bandwidth and lower power. If both memory and processor end up on the same chip, power efficiency will improve further.
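    Back-of-envelope numbers for why the interposer helps (typical published specs, not tied to any particular console): HBM trades pin speed for sheer bus width, which is also where the power saving per bit comes from.

    Code:
    #include <cstdio>

    // Peak bandwidth (GB/s) = bus width (bits) x per-pin rate (Gbps) / 8.
    // Figures are typical published specs, not any real product's numbers.
    int main() {
        double gddr5_chip = 32.0 * 8.0 / 8.0;    //  32-bit device @ 8 Gbps ->  32 GB/s
        double hbm2_stack = 1024.0 * 2.0 / 8.0;  // 1024-bit stack @ 2 Gbps -> 256 GB/s
        std::printf("GDDR5 chip: %.0f GB/s, HBM2 stack: %.0f GB/s\n",
                    gddr5_chip, hbm2_stack);
        // The wide-and-slow bus only works over the short traces of an
        // interposer or package, hence the push toward integration.
        return 0;
    }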
     
  6. Tkumpathenurpahl

    Veteran Newcomer

    Joined:
    Apr 3, 2016
    Messages:
    1,113
    Likes Received:
    824
    So, if more custom architecture is coming, what are the likely candidates?

    I've been in love with ray tracing since I first read about it ~10 years ago, so I'd love to see it gain some traction.

    What else is out there as a contender? Anything for physics? I remember those PhysX cards that were released some years ago, but they just got gobbled up by GPUs.
     
  7. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    The problem with accelerators is that if you don't need whatever it is they do, they're just baggage: dead weight which you have to pay for, but which doesn't contribute to system performance. Considering the price of transistors keeps going up and up with every node, I don't think dedicated, hardwired silicon is the way we will end up going. Accelerators work in mobile phone SoCs because of low power requirements, but outside of that one specific usage case I'm sceptical. Tensor cores are a special case, due to the HPC industry's current hardon for AI - this trend may or may not pan out in the future, but in either case I don't think it acts as some kind of general indicator of which way the whole industry will go.

    Home computers and consoles once had system chips with hardware sprites, dedicated background layers, discrete sound channels and whatnot, and it was all replaced with more and more generalized hardware: framebuffers combined with increasingly advanced blitters at first, then 3D accelerators, which went through a similar process of dedicated silicon becoming increasingly generalized.

    Time rarely moves backwards.
     
    Shortbread, Pixel, RootKit and 6 others like this.
  8. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,017
    Likes Received:
    2,589
    I don't think feature-specific specialized hardware will become a thing, but maybe we will have more variations of general-purpose hardware. Today we have CPUs and GPUs that are capable of general compute, but with different tradeoffs in the kind of code they're performant at. Maybe in the future other architectures will become popular alongside these existing ones. Maybe something in between what a CPU and a GPU is today. Or maybe something even more extremely parallel than a GPU, or more branch-friendly than a modern CPU. Or something making tradeoffs in other aspects aside from the currently popular SIMD vs. SISD dichotomy.
    I'm a terrible layman by the way.
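    One way to picture that SIMD-vs-branch tradeoff: the same clamp loop written branchy, which a big out-of-order CPU core predicts well, and branchless, which wide SIMD/GPU lanes need to avoid divergence. A minimal sketch (function names are invented for illustration):

    Code:
    #include <algorithm>
    #include <cstddef>

    // Branchy form: fine on a CPU with good branch prediction, but divergent
    // branches force SIMD/GPU lanes to execute both sides under a mask.
    void clamp_branchy(float* x, std::size_t n, float lo, float hi) {
        for (std::size_t i = 0; i < n; ++i) {
            if (x[i] < lo)      x[i] = lo;
            else if (x[i] > hi) x[i] = hi;
        }
    }

    // Branchless form: every lane does identical work, so the loop
    // vectorizes cleanly -- the tradeoff described above, in miniature.
    void clamp_branchless(float* x, std::size_t n, float lo, float hi) {
        for (std::size_t i = 0; i < n; ++i)
            x[i] = std::min(std::max(x[i], lo), hi);
    }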
     
    Shortbread, Prophecy2k and iroboto like this.
  9. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    8,000
    Likes Received:
    6,280
    Time moves in cycles though ;)
    From centralized, to decentralized, back to centralized, and now to decentralized again.
    Good rebuttal though. But I'm curious about your thoughts on how to keep evolving the graphics scene while hitting the limits of silicon price/performance.
     
    Shifty Geezer likes this.
  10. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    8,000
    Likes Received:
    6,280
    Sometimes a small adjustment can make a big change in how we do things.
    I expect this.
    Or a lot of small incremental changes that enable something that couldn't be done before.
     
  11. Theeoo

    Newcomer

    Joined:
    Nov 13, 2017
    Messages:
    132
    Likes Received:
    64
    I had one of those PhysX cards once. Supposedly they were equivalent to a 9800GTX in PhysX performance, and that was a first-generation foray by an unknown company, so presumably future chipsets would've been way better; but as already said, it was probably too specialized to survive. A machine-learning AI card might gain traction if it can have some useful mass-market purpose.
     
  12. OCASM

    Regular Newcomer

    Joined:
    Nov 12, 2016
    Messages:
    922
    Likes Received:
    881


    TL;DW: real-time path tracing in Unity (free) this year, which will hopefully increase the demand for PowerVR cards.
     
  13. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,530
    Likes Received:
    875
    We'll see customization at the SoC level, but the individual components of the SoC will be standard IP with possibly small variations:

    Standard CPUs.
    Standard GPUs.
    Standard memory technology.

    As silicon scaling comes to an end, mobile platforms will be stuck at a fixed performance ratio wrt. consoles, dictated by economics and power (less silicon, a lot less power). We won't see handsets with console-class graphics four years after a console launches. Console vendors will stop thinking in generations and start thinking in terms of continuous improvement of a perpetual product. In fact, they already have.

    As silicon scaling dies, we'll see fab equipment amortized over much longer periods, so even though individual dies won't hold more logic, we can continue to pack more silicon into systems for a while, because it will continue to fall in price - though less and less so.

    Cheers
     
  14. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    8,000
    Likes Received:
    6,280
    I guess it's going to be a lot of focus on changing how games are coded. We started with the CPU and GPU separate, and then we combined them into a SoC. If we need more power, can we not separate them again?
     
  15. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,792
    Likes Received:
    5,879
    Location:
    ಠ_ಠ
    I suppose that becomes a cost issue, since the point of the APU was to reduce complexity on the production side, which has a cascade of effects (motherboard, inter-ASIC communication, memory, etc.).

    We'd also want to consider that the CPU side doesn't need to scale up physically as quickly as the GPU side, since devs will probably only need so many cores/threads there. So where we're looking at iterative designs, the GPU side can grow while the CPU just sticks to a certain number of cores/threads (<16) with progressive architectural enhancement.
     
    iroboto likes this.
  16. Globalisateur

    Globalisateur Globby
    Veteran Regular

    Joined:
    Nov 6, 2013
    Messages:
    2,991
    Likes Received:
    1,716
    Location:
    France
    I think there is still a lot to improve with the silicon tech we have now.

    A fully unified CPU/GPU engine in a fully compute-flexible APU using a dynamic resolution & settings engine.

    We currently see the GPU taking more and more work from the CPU. At some point every job will be processed by the compute GPU in a unified and, if possible, async way. Also, some dedicated silicon in current GPUs (like ROPs) will disappear to make room for versatile compute silicon.

    No more CPU or GPU bottleneck, and it's going to work perfectly in tandem with dynamic resolution scaling:

    - Locked 60fps
    - Dynamic resolution / effects / LOD etc.
    - 100% of the APU constantly used by games.

    In such a unified compute APU the only bottleneck will be resolution (and settings), and all games could run at 60fps.
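    A minimal sketch of the control loop such an engine would run each frame; the names, the proportional rule, and the 50% floor are all illustrative, not any shipping engine's:

    Code:
    #include <algorithm>
    #include <cmath>

    // Steer render resolution so GPU frame time converges on the 60 fps budget.
    struct DynamicRes {
        float scale = 1.0f;  // fraction of native resolution per axis

        void update(float gpu_ms, float budget_ms = 16.6f) {
            float ratio = budget_ms / gpu_ms;       // >1 headroom, <1 over budget
            scale *= std::sqrt(ratio);              // pixel count ~ scale^2
            scale = std::clamp(scale, 0.5f, 1.0f);  // never below 50% of native
        }
    };

    Real engines damp the response and adjust effects/LOD the same way, which is what would make the "APU always 100% busy" goal plausible.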
     
    #16 Globalisateur, Jan 15, 2018
    Last edited: Jan 15, 2018
    milk likes this.
  17. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,891
    Likes Received:
    11,483
    Location:
    Under my bridge
    Real-time RT hardware makes a lot of sense to me. It's valuable not just in graphics but in audio modelling and AI - forever doing ray and circle casts. As I mentioned elsewhere (AI thread?), a move towards a different spatial model that ties graphics, audio, and AI together, based on a unified evaluation system, is a logical advance in game engine design.
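    A hypothetical sketch of what that unified evaluation system might look like as an API; every name here is invented, and the flat sphere list stands in for a real BVH with hardware traversal:

    Code:
    #include <cmath>
    #include <optional>
    #include <vector>

    // Hypothetical unified spatial-query layer: graphics traces shading rays,
    // audio traces reflection/occlusion paths, AI does line-of-sight checks --
    // all against one shared scene structure.
    struct Vec3 { float x, y, z; };
    struct Sphere { Vec3 c; float r; };

    inline float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    inline Vec3  sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

    struct SpatialQuery {
        std::vector<Sphere> scene;  // stand-in for a BVH

        // Closest hit distance along a normalized direction, if any.
        std::optional<float> trace(Vec3 o, Vec3 d, float t_max) const {
            std::optional<float> best;
            for (const Sphere& s : scene) {
                Vec3 oc = sub(o, s.c);
                float b = dot(oc, d);
                float disc = b*b - (dot(oc, oc) - s.r*s.r);
                if (disc < 0.0f) continue;
                float t = -b - std::sqrt(disc);
                if (t > 0.0f && t < t_max && (!best || t < *best)) best = t;
            }
            return best;
        }

        // Any-hit visibility test: all an audio occlusion or AI
        // line-of-sight query actually needs.
        bool occluded(Vec3 from, Vec3 to) const {
            Vec3 d = sub(to, from);
            float len = std::sqrt(dot(d, d));
            d = {d.x / len, d.y / len, d.z / len};
            return trace(from, d, len).has_value();
        }
    };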
     
  18. Orion

    Regular Newcomer

    Joined:
    Feb 18, 2013
    Messages:
    339
    Likes Received:
    47
    Two Minute Papers had a video about real-time ray tracing improvements using, iirc, neural-network denoising. I think they say it can go to 1 sample per pixel now with excellent, stable, noise-free video results; 1 sample per several pixels may be possible.
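    In outline, that pipeline looks like the sketch below; the denoise() stand-in marks where the trained network goes, fed with the noise-free feature buffers it conditions on. All names are invented for illustration.

    Code:
    #include <vector>

    // Buffers produced at 1 sample per pixel: noisy radiance plus cheap,
    // noise-free auxiliary features from the primary hit.
    struct FrameBuffers {
        int w = 0, h = 0;
        std::vector<float> radiance;  // noisy 1-spp path-traced color
        std::vector<float> albedo;    // feature the denoiser conditions on
        std::vector<float> normal;    // likewise
    };

    // Stand-in for the trained network; the denoisers in that line of work
    // are recurrent networks that also reuse the previous frame for stability.
    std::vector<float> denoise(const FrameBuffers& fb) {
        return fb.radiance;  // placeholder: pass the noisy image through
    }

    void render_frame(FrameBuffers& fb) {
        // 1. Path trace at 1 spp into fb (elided).
        // 2. Reconstruct a stable image from the noisy estimate.
        std::vector<float> image = denoise(fb);
        (void)image;  // 3. Present / hand off to the post-processing chain.
    }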
     
    OCASM likes this.