Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

Discussion in 'Console Industry' started by Arkham night 2, Feb 16, 2019.

  1. McHuj

    Veteran Regular Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,416
    Likes Received:
    533
    Location:
    Texas
    I thought the 2016 reveal was pretty good. We got teraflops, memory bandwidth, and a shot of the motherboard which turned out to be pretty close to the final product (even in terms of the SOC size).

    If we get the same, I will be pretty satisfied. Gimme the GPU teraflop count and a motherboard shot. That will give us all the necessary info about the memory type and SOC/chiplet configuration. At this point in time, all these things are pretty final.

    I also want them to reveal the damn thing (and PS5) so devs can start showing early prototypes of what games will look like and can speak freely about next-gen.
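    (To illustrate why a board shot is enough, using rough numbers from the 2016 reveal: the Scorpio board showed 12 memory chips, and 12 x 32-bit = a 384-bit bus; at GDDR5's 6.8 Gbps that works out to 48 bytes x 6.8 GT/s ≈ 326 GB/s, the bandwidth figure that was later confirmed. Back-of-envelope, but that's the idea.)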
     
    iroboto likes this.
  2. Nisaaru

    Regular

    Joined:
    Jan 19, 2013
    Messages:
    855
    Likes Received:
    189
    IMHO the situation was different for the Pro/X1X reveal. The main design might be fixed but clocks/acceptable-yield aren't. The storage design might also allow more flexibility for an adjustment as we're around 1-1.5 years from release.
     
    iroboto likes this.
  3. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,715
    Likes Received:
    6,006
    Yeah, definitely; power consumption and cooling become trickier.
    I'm not sure about the other aspects; I'm not all that well informed on how Infinity Fabric and multi-chip designs work. IIRC the single I/O die is supposed to bring everything together, so I'm not necessarily sure you need to double everything. We've also never seen 2 GPUs on Infinity Fabric, so I've not a clue how things would be connected, let alone whether it's possible at all.

    Programming should be within reason (I hope, with the hardware being locked); we are seeing good saturation numbers with Shadow of the Tomb Raider even with ray tracing enabled: up to 99% saturation on both GPUs of a pair of 2080 Tis.
    I suspect they are using it in AFR format. I would be curious to see if they could utilize it differently, with varying amounts of SFR and some novel use of ExecuteIndirect to get the GPU to trigger its own workloads back and forth between the two GPUs on console, if this is the setup.
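    For the curious, a minimal sketch of what explicit multi-adapter AFR looks like under D3D12 — purely illustrative, not from any SDK or console toolchain; swap-chain and cross-adapter plumbing omitted:

    Code:
    // Hypothetical sketch: enumerate every D3D12-capable GPU, create one
    // explicit device per adapter, then submit whole frames to alternating
    // queues (AFR). Error handling omitted for brevity.
    #include <d3d12.h>
    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    std::vector<ComPtr<ID3D12Device>> CreateDevices()
    {
        ComPtr<IDXGIFactory4> factory;
        CreateDXGIFactory2(0, IID_PPV_ARGS(&factory));

        std::vector<ComPtr<ID3D12Device>> devices;
        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            ComPtr<ID3D12Device> device;
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                    D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device))))
                devices.push_back(device);   // one explicit device per GPU
        }
        return devices;
    }

    // AFR: frame N goes wholly to GPU (N % gpuCount). The command lists
    // must have been recorded on the matching device.
    void SubmitFrame(UINT frame,
                     const std::vector<ComPtr<ID3D12CommandQueue>>& queues,
                     ID3D12CommandList* const* lists, UINT count)
    {
        queues[frame % queues.size()]->ExecuteCommandLists(count, lists);
    }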
     
  4. Nisaaru

    Regular

    Joined:
    Jan 19, 2013
    Messages:
    855
    Likes Received:
    189
    Why double the memory controllers? AFAIK the memory controller is on the I/O die. They would just connect more RAM there for the bigger model and activate the needed memory channels. Why should another Infinity Fabric connection matter price-wise?
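    (Back-of-envelope, assuming hypothetical 14 Gbps GDDR6: each 32-bit device moves 14 x 32 / 8 = 56 GB/s, so 10 active devices give 560 GB/s and 12 give 672 GB/s off the same I/O die. The smaller SKU would just leave the extra channels unpopulated or fused off.)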
     
  5. Shifty Geezer

    Shifty Geezer uber-Troll!
    Moderator Legend

    Joined:
    Dec 7, 2004
    Messages:
    40,502
    Likes Received:
    10,875
    Location:
    Under my bridge
    I presume you mean monolithic. Although a console powered by many gods would be pretty awesome, maintenance - all those different rules and customs - would be hell for users.
     
    Jay likes this.
  6. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,603
    Likes Received:
    5,710
    Location:
    ಠ_ಠ
    Bah, workloads do not descend from the heavens!
     
    BRiT likes this.
  7. Metal_Spirit

    Regular Newcomer

    Joined:
    Jan 3, 2007
    Messages:
    395
    Likes Received:
    183
    You are talking about Nvidia, one game and one engine. I can also see checkerboard rendering used to great effect in some games, but I see others stuck at 1080p due to problems implementing it. So, generically speaking, how would a two-GPU system behave on most engines?
    Honestly, I see that option as a new ESRAM, with problems where before there were none.
    Not sure that is what programmers want.
     
  8. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,889
    Likes Received:
    1,050
    :lol:
    Nope, it wasn't autocorrect. Scarlett needs all the help it can get: each SKU a different combination of gods :runaway:
     
  9. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,715
    Likes Received:
    6,006
    It comes down to programming more so than the bridge, from what I can see. I don't necessarily believe that SLI is a 'powerhouse' of sorts. Most of our issues with mGPU come down to coding for it. Most developers won't invest the time into it because few setups are mGPU, which is why we're getting poor performance from it.

    With a locked 2-GPU setup and good APIs supporting creativity and freedom for the developers (which DX12 offers far more of than DX11), then yes, we should see improved saturation and performance on mGPU (see the sketch below).

    As much as I want to believe a single big GPU is the way to go, we need to start having a real discussion about the cost structure eventually.
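    To illustrate that freedom: D3D12 can also expose a fixed multi-GPU board as a single "linked node" device, with each physical GPU addressed by a node mask — roughly what a locked 2-GPU console setup would resemble. A hedged sketch, not any console's actual API:

    Code:
    // Illustrative only: one logical ID3D12Device spanning linked GPUs,
    // with a direct command queue created per physical node via NodeMask.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    std::vector<ComPtr<ID3D12CommandQueue>> MakeQueuePerNode(ID3D12Device* device)
    {
        std::vector<ComPtr<ID3D12CommandQueue>> queues;
        for (UINT node = 0; node < device->GetNodeCount(); ++node)
        {
            D3D12_COMMAND_QUEUE_DESC desc = {};
            desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
            desc.NodeMask = 1u << node;   // bit n addresses physical GPU n
            ComPtr<ID3D12CommandQueue> queue;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
            queues.push_back(queue);      // e.g. split geometry/async work per node
        }
        return queues;
    }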
     
    Jay likes this.
  10. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    5,359
    Likes Received:
    3,847
    As long as we are limited to under 200 W for the entire console, and a $399-499 price point, GPU chiplets seem overkill. But it would be fun.
     
    BRiT likes this.
  11. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,603
    Likes Received:
    5,710
    Location:
    ಠ_ಠ
    Won't matter if the competitor is seeing great performance on day 1 because they don't enforce headaches to extract performance. You only get one chance to make a good first impression. Also bear in mind that the competitor is seeing more than double the HW sales this gen (let alone the rest of the gaming market being developed with an sGPU assumption).

    I don't think MS should shoot themselves in the foot, then drive a stake through the hole - it's a solution looking for a problem.
     
    #791 AlBran, May 15, 2019
    Last edited: May 15, 2019
    Shortbread likes this.
  12. Globalisateur

    Globalisateur Globby
    Veteran Regular

    Joined:
    Nov 6, 2013
    Messages:
    2,865
    Likes Received:
    1,608
    Location:
    France
    2 GPU chiplets in the next Xbox? The '2 GPUs inside the Xbox' crazy dream from 2013 could finally come true!

    Inevitably, with time and infinite patience, any prediction comes true. First law of the universe. :yep2:
     
    Laniakea likes this.
  13. iroboto

    iroboto Daft Funk
    Legend Regular Subscriber

    Joined:
    Mar 6, 2014
    Messages:
    7,715
    Likes Received:
    6,006
    Yeah, well, if you can do with 1 GPU what 2 would, then I guess there's not really much of a discussion. Eventually we'll need to look at the possibility of requiring more than one, but I don't think that's this generation.
     
  14. Jay

    Jay
    Veteran Regular

    Joined:
    Aug 3, 2013
    Messages:
    1,889
    Likes Received:
    1,050
    At minimum I suspect you would see similar game performance, unless PS5 is equal in performance; and if they went the chiplet route, I'd assume it's to have the most power.

    Unless coding for it is an absolute nightmare, then, as has been proven, devs will make use of it the same way they did for ESRAM.
    Same way they had to make use of multi-core, multi-threading, async compute.
    The question is how hard it is to get decent performance before tapping all of the performance.
    With the way engines are built for async compute now, it seems like it wouldn't be as hard as in the past.

    Hopefully it's not long before we find out how they go about building the different performance profiles and SKUs.
    If monolithic, then I'm guessing Lockhart and Anaconda will be different dies, as disabling CUs and downclocking would be a fair bit of wasted wafer.
     
  15. MrFox

    MrFox Deludedly Fantastic
    Legend Veteran

    Joined:
    Jan 7, 2012
    Messages:
    5,359
    Likes Received:
    3,847
    We are seeing multi-GPU applications on Amazon AWS with up to 8x high-end Nvidia GPUs, because that chip is almost at the reticle limit. There is no reason to use many smaller cards; anyone will use a single card until the biggest one available isn't enough.

    It's difficult to imagine a future console's GPU anywhere near the reticle limit (it's like 800 mm² or something?). I was thinking there could be an advantage in smaller chips having better yield, but disabling CUs seems to be a very efficient way to deal with yield, so I'm not sure there would be any gain.
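    (Quick numbers with a simple exponential yield model, Y ≈ e^(−A·D0), and an assumed defect density of 0.1 defects/cm²: a 350 mm² monolithic die yields e^(−0.35) ≈ 70%, while a 175 mm² chiplet yields e^(−0.175) ≈ 84%. But a known-good pair of chiplets is 0.84² ≈ 70% again, so the real chiplet win is binning dies before packaging — and CU redundancy already rescues many defective monolithic dies, which is why the gain looks thin.)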
     
  16. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,675
    Likes Received:
    1,793
    Something like...
    APU (155 W package): CPU 15-45 W (idle/load) & GPU 100-110 W.
    Motherboard: 20 W
    Memory: 8-12 W ???
    SSD: 0.5-3 W (idle/load)
    Blu-ray drive: 10 W (read/install/movie playback)
    Misc. components (fan, WiFi chip, etc.): 5 W

    Edit: Brain-fart on memory.
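    (For what it's worth, taking those figures at face value they sum to about 155 + 20 + 8-12 + 0.5-3 + 10 + 5 ≈ 198-205 W, right at the ~200 W envelope mentioned above.)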
     
    #796 Shortbread, May 15, 2019
    Last edited: May 15, 2019
  17. Globalisateur

    Globalisateur Globby
    Veteran Regular

    Joined:
    Nov 6, 2013
    Messages:
    2,865
    Likes Received:
    1,608
    Location:
    France
    On a GPU only. But on an APU you can only disable GPU CUs, not CPU cores, can you? An 'APU' with one CPU chiplet and only one GPU chiplet could work very well for a console.
     
  18. McHuj

    Veteran Regular Subscriber

    Joined:
    Jul 1, 2005
    Messages:
    1,416
    Likes Received:
    533
    Location:
    Texas
    Memory should be a factor of 6x-10x higher than that. High-bandwidth RAM is becoming a significant contributor to power (hence HBM, to address not only bandwidth but power consumption as well).
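    (Rough numbers, assuming the commonly quoted ~7.5 pJ/bit for GDDR6 devices: 672 GB/s x 8 bits/byte x 7.5 pJ/bit ≈ 40 W for the DRAM alone, before the SoC-side PHY and controller push it higher — in line with that 6x-10x correction.)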
     
    Globalisateur, chris1515 and BRiT like this.
  19. Shortbread

    Shortbread Island Hopper
    Veteran

    Joined:
    Jul 1, 2013
    Messages:
    3,675
    Likes Received:
    1,793
    Had two brain-farts; I was thinking of 8 GB of DDR4 for some odd reason. But yes, 24 GB of GDDR6 is what's being rumored, correct?
     
  20. Laniakea

    Newcomer

    Joined:
    Apr 16, 2019
    Messages:
    64
    Likes Received:
    78
    Nah, this time it's obviously the PS5 which will have a 2nd hidden GPU inside! Why? Because instead of throwing away GPUs which don't work even after yield-boosting measures like deactivating CUs, they can use the defective GPUs for irregular tasks like async compute, but especially as a performance booster for VR. Meaning in VR, each eye gets its own GPU!

    [attached image: lulz.png]


    Maybe some thirsty leaker picks this joke up. :lol2:
     