Baseless Next Generation Rumors with no Technical Merits [pre E3 2019] *spawn*

I thought the 2016 reveal was pretty good. We got teraflops, memory bandwidth, and a shot of the motherboard which turned out to be pretty close to the final product (even in terms of the SOC size).

If we get the same, I will be pretty satisfied. Gimme the GPU teraflop count and a motherboard shot. That will give all the necessary info about the memory type and SOC/chiplet configuration. At this point in time, all these things are pretty final.

I also want them to reveal the damn thing (and the PS5) so devs can start showing early prototypes of what games will look like and can speak freely about next-gen.
 
IMHO the situation was different for the Pro/X1X reveal. The main design might be fixed, but clocks/acceptable yields aren't. The storage design might also allow more flexibility for adjustment, as we're around 1-1.5 years from release.
 
Power consumption and cost!
You would have two GPUs: double memory controllers, double buses, double Infinity Fabric links, etc. This would increase costs to more than a single GPU with the same number of CUs.
Besides, all these components draw power, so you would also have higher power consumption than a single GPU with the same number of CUs.

Yea definitely power consumption and cooling become trickier.
I'm not sure about the other aspects; I'm not all that well informed on how Infinity Fabric and multi-chip designs work. IIRC the single I/O chip is supposed to bring everything together, so I'm not necessarily sure you need to double everything, and we've never seen two GPUs on Infinity Fabric either, so I've no clue how things are connected, let alone whether it's possible at all.

Programming should be within reason (I hope, with the hardware being locked); we are seeing good saturation numbers with Shadow of the Tomb Raider even with ray tracing enabled, up to ~99% saturation on both GPUs of a pair of 2080 Tis.
I suspect they are using it in AFR format. I would be curious to see whether they could utilize it differently, with varying amounts of SFR and some novel use of ExecuteIndirect to get the GPUs to trigger workloads back and forth between each other on console, if this is the setup.
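To make the AFR vs. SFR distinction concrete, here's a minimal sketch of what AFR looks like under DX12's linked-node (explicit multi-adapter) model; the two-node setup, helper names and frame loop are illustrative assumptions on my part, not anything known about the console hardware:

```cpp
// Minimal AFR sketch over D3D12 linked-node adapters (explicit multi-adapter).
// Assumes `device` was created over a two-node linked adapter group; all names
// and the frame loop are illustrative only (fencing/sync omitted for brevity).
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

struct NodeContext {
    UINT nodeMask;                                 // 0x1 = GPU 0, 0x2 = GPU 1
    ComPtr<ID3D12CommandQueue>        queue;       // direct queue on that GPU
    ComPtr<ID3D12CommandAllocator>    allocator;
    ComPtr<ID3D12GraphicsCommandList> list;
};

std::vector<NodeContext> CreatePerNodeContexts(ID3D12Device* device)
{
    std::vector<NodeContext> nodes;
    for (UINT i = 0; i < device->GetNodeCount(); ++i) {
        NodeContext ctx = {};
        ctx.nodeMask = 1u << i;                    // node masks are one bit per GPU

        D3D12_COMMAND_QUEUE_DESC qd = {};
        qd.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        qd.NodeMask = ctx.nodeMask;
        device->CreateCommandQueue(&qd, IID_PPV_ARGS(&ctx.queue));

        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&ctx.allocator));
        device->CreateCommandList(ctx.nodeMask, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  ctx.allocator.Get(), nullptr,
                                  IID_PPV_ARGS(&ctx.list));
        ctx.list->Close();
        nodes.push_back(ctx);
    }
    return nodes;
}

// AFR: frame N is recorded and executed entirely on node (N % nodeCount).
// An SFR or ExecuteIndirect-driven split would instead record work on both
// nodes every frame and share intermediates via cross-node-visible heaps.
void RenderFrameAFR(std::vector<NodeContext>& nodes, UINT64 frameIndex)
{
    NodeContext& ctx = nodes[frameIndex % nodes.size()];
    ctx.allocator->Reset();                        // assumes this node's previous frame has finished
    ctx.list->Reset(ctx.allocator.Get(), nullptr);
    // ... record the whole frame's draws here ...
    ctx.list->Close();
    ID3D12CommandList* lists[] = { ctx.list.Get() };
    ctx.queue->ExecuteCommandLists(1, lists);
}
```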
 
Why double memory controllers? AFAIK the memory controller is on the I/O die. They would just connect more RAM there for the bigger model and activate the needed memory channels. Why should another Infinity Fabric connection matter price-wise?
 
Not against monotheistic as I think it would work well for ps5, but Scarlett has different considerations.
I presume you mean monolithic. Although a console powered by many gods would be pretty awesome, maintenance - all those different rules and customs - would be hell for users.
 
Yea definitely power consumption and cooling become trickier.
I'm not sure about the other aspects; I'm not all that well informed on how Infinity Fabric and multi-chip designs work. IIRC the single I/O chip is supposed to bring everything together, so I'm not necessarily sure you need to double everything, and we've never seen two GPUs on Infinity Fabric either, so I've no clue how things are connected, let alone whether it's possible at all.

Programming should be within reason (I hope, with the hardware being locked); we are seeing good saturation numbers with Shadow of the Tomb Raider even with ray tracing enabled, up to ~99% saturation on both GPUs of a pair of 2080 Tis.
I suspect they are using it in AFR format. I would be curious to see whether they could utilize it differently, with varying amounts of SFR and some novel use of ExecuteIndirect to get the GPUs to trigger workloads back and forth between each other on console, if this is the setup.

You are talking about Nvidia, and one game and one engine. I can also see checkerboard rendering used to great effect in some games, but I see others at 1080p due to problems implementing it. So, generically speaking, how would a two-GPU system behave on most engines?
Honestly, I see that option as a new ESRAM, with problems where before there were none.
Not sure that is what programmers want.
 
I presume you mean monolithic. Although a console powered by many gods would be pretty awesome, maintenance - all those different rules and customs - would be hell for users.
:LOL:
Nope, it wasn't autocorrect. Scarlett needs all the help it can get, each SKU a different combination of gods :runaway:
 
You are talking about Nvidia, and one game and one engine. I can also see checkerboard rendering used to great effect in some games, but I see others at 1080p due to problems implementing it. So, generically speaking, how would a two-GPU system behave on most engines?
Honestly, I see that option as a new ESRAM, with problems where before there were none.
Not sure that is what programmers want.
It comes down to programming more so than the bridge, from what I can see. I don't necessarily believe that SLI is a 'powerhouse' of sorts. Most of our issues with mGPU come down to coding for it. Most developers won't invest the time into it because few setups are mGPU, which is why we're getting poor performance from it.

With a locked 2-GPU setup and good APIs supporting creativity and freedom for developers (which DX12 offers much more of than DX11), then yes, we should see improved saturation and performance on mGPU.
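To give a flavour of the freedom DX12 offers over DX11 here, a small sketch of allocating a buffer on one GPU that the other GPU can also access, which is the kind of plumbing an SFR-style split would lean on. This assumes a linked two-node device, and the helper name is my own illustration, not anything confirmed about the console's API:

```cpp
// Sketch of a cross-node resource under the D3D12 linked-node model: a buffer
// that physically lives on GPU 0 but is visible to GPU 1. Illustrative only.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> CreateSharedBuffer(ID3D12Device* device, UINT64 sizeBytes)
{
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type             = D3D12_HEAP_TYPE_DEFAULT;
    heap.CreationNodeMask = 0x1;        // allocated in GPU 0's memory
    heap.VisibleNodeMask  = 0x1 | 0x2;  // but GPU 1 can access it as well

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;
}
```

Whether cross-chiplet traffic over Infinity Fabric would be fast enough to make that kind of sharing worthwhile is exactly the open question.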

As much as I want to believe a single big GPU is the way to go, we need to start having a real discussion about the cost structure eventually.
 
As long as we are limited to under 200w for the entire console, and a 399-499 price point, gpu chiplets seem overkill. But it would be fun.
 
It comes down to programming more so than the bridge, from what I can see. I don't necessarily believe that SLI is a 'powerhouse' of sorts. Most of our issues with mGPU come down to coding for it. Most developers won't invest the time into it because few setups are mGPU, which is why we're getting poor performance from it.

With a locked 2-GPU setup and good APIs supporting creativity and freedom for developers (which DX12 offers much more of than DX11), then yes, we should see improved saturation and performance on mGPU.

As much as I want to believe a single big GPU is the way to go, we need to start having a real discussion about the cost structure eventually.

Won't matter if the competitor is seeing great performance day 1 because they don't enforce headaches to extract performance. You only get one chance to make a good first impression. Also bear in mind that the competitor is seeing more than double the HW sales on current gen (let alone the rest of the gaming market being developed with a single-GPU assumption).

I don't think MS should shoot themselves in the foot, then drive a stake through the hole - it's a solution looking for a problem.
 
2 GPU chiplets on the next Xbox? The '2 GPUs inside the Xbox' crazy dream from 2013 could finally come true!

Inevitably with time and infinite patience any prediction will become true. First universe law. :yep2:
 
Won't matter if the competitor is seeing great performance day 1 because they don't enforce headaches to extract performance. You only get one chance to make a good first impression. Also bear in mind that the competitor is seeing more than double the HW sales on current gen (let alone the rest of the gaming market being developed with a single-GPU assumption).

I don't think MS should shoot themselves in the foot, then drive a stake through the hole - it's a solution looking for a problem.
Yea, well, if you can do what 2 GPUs do with 1, then I guess there's not really much of a discussion. Eventually we'll need to look at the possibility of requiring more than one, but I don't think that's this generation.
 
Won't matter if the competitor is seeing great performance day 1 because they don't enforce headaches to extract performance. You only get one chance to make a good first impression. Also bear in mind that the competitor is seeing more than double the HW sales on current gen (let alone the rest of the gaming market being developed with a single-GPU assumption).

I don't think MS should shoot themselves in the foot, then drive a stake through the hole - it's a solution looking for a problem.
At minimum I suspect you would see similar game performance unless the PS5 is equal in performance, and if they went the chiplet route I'd assume it's to have the most power.

Unless coding for it is an absolute nightmare, then, as has been proven, devs will make use of it the same way they did for ESRAM.
Same way they had to make use of multi-core, multi-threading and async compute.
The question is how hard it is to get decent performance before tapping all of the performance.
With the way engines are built for async compute now, it seems like it wouldn't be as hard as in the past.

Hopefully it won't be long before we find out how they go about building the different performance profiles and SKUs.
If monolithic, then I'm guessing Lockhart and Anaconda will be different dies, as disabling CUs and downclocking would be a fair bit of wasted wafer.
 
Yea, well, if you can do what 2 GPUs do with 1, then I guess there's not really much of a discussion. Eventually we'll need to look at the possibility of requiring more than one, but I don't think that's this generation.
We are seeing multi-GPU applications on Amazon AWS with up to 8x high-end Nvidia GPUs, because that chip is almost at the reticle limit. There is no reason to use many smaller cards. Anyone will use a single card until the biggest one available isn't enough.

It's difficult to imagine a future console's GPU anywhere near the reticle limit (it's around 800mm² or so?). I was thinking there could be an advantage with smaller chips having better yields, but disabling CUs seems to be a very efficient way to deal with yield. So I'm not sure there would be any gain.
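Quick back-of-envelope on that, using a simple Poisson defect model with made-up but plausible round numbers (0.1 defects/cm², a 350mm² monolithic die vs. two 175mm² chiplets):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Poisson yield model: Y = exp(-A * D0), with A in cm^2 and D0 in defects/cm^2.
    // D0 = 0.1/cm^2 and the die sizes are illustrative guesses, not process data.
    const double d0    = 0.1;
    const double big   = 3.50;   // 350 mm^2 monolithic GPU
    const double small = 1.75;   // one of two 175 mm^2 chiplets

    const double yBig   = std::exp(-big * d0);     // ~70% of big dies defect-free
    const double ySmall = std::exp(-small * d0);   // ~84% of small dies defect-free
    const double yPair  = ySmall * ySmall;         // needing both good: back to ~70%

    std::printf("monolithic %.0f%%, single chiplet %.0f%%, pair %.0f%%\n",
                100 * yBig, 100 * ySmall, 100 * yPair);
    // A pair of chiplets that must both be perfect yields the same as one big
    // die of the same total area; the win only appears if a flawed chiplet can
    // be salvaged or binned, which disabling CUs already gives a monolithic die.
    return 0;
}
```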
 
As long as we are limited to under 200w for the entire console, and a 399-499 price point, gpu chiplets seem overkill. But it would be fun.

Something like...
APU (155w package): CPU 15w-45w (idle/load) & GPU 100w-110w.
Motherboard: 20w
Memory: 8-12w ???
SSD: 0.5-3w (idle/load)
Blu-ray Drive: 10w (read/install/movie-playback)
Misc Components (fan, WiFi chip, etc.): 5w

Edit: Brain-fart on memory.
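For what it's worth, summing the load-case end of those guesses (all of them guesses, not measurements):

```cpp
#include <cstdio>

int main() {
    // Upper end of each component guess above, in watts.
    const double apu = 155, mobo = 20, mem = 12, ssd = 3, bdrive = 10, misc = 5;
    const double total = apu + mobo + mem + ssd + bdrive + misc;   // = 205 W
    std::printf("~%.0f W of component draw\n", total);
    // That already brushes the ~200 W ceiling mentioned above, and wall draw
    // would be higher still once PSU efficiency is factored in.
    return 0;
}
```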
 
We are seeing multi-GPU applications on Amazon AWS with up to 8x high-end Nvidia GPUs, because that chip is almost at the reticle limit. There is no reason to use many smaller cards. Anyone will use a single card until the biggest one available isn't enough.

It's difficult to imagine a future console's GPU anywhere near the reticle limit (it's around 800mm² or so?). I was thinking there could be an advantage with smaller chips having better yields, but disabling CUs seems to be a very efficient way to deal with yield. So I'm not sure there would be any gain.
On a GPU only. But on an APU you can only disable GPU CUs, not CPU cores, can you? An 'APU' with one CPU chiplet and only one GPU chiplet could work very well for a console.
 
Something like...
APU (155w package): CPU 15w-45w (idle/load) & GPU 100w-110w.
Motherboard: 20w
Memory: 3w-4w
SSD: 0.5-3w (idle/load)
Blu-ray Drive: 10w (read/install/movie-playback)
Misc Components (fan, WiFi chip, etc.): 5w

Memory should be a factor of 6x-10x higher than that. High-bandwidth RAM is becoming a significant contributor to power (hence HBM, to address not only bandwidth but power consumption as well).
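Rough sanity check on that, assuming something like 2 W per GDDR6 device at high data rates (a ballpark, not a datasheet figure) and twelve 16Gb chips for 24GB:

```cpp
#include <cstdio>

int main() {
    // Ballpark GDDR6 power: per-device draw is an assumption (~1.5-2.5 W each
    // at high data rates), not a datasheet number.
    const int    chips        = 12;    // 12 x 16Gb (2GB) devices = 24GB
    const double wattsPerChip = 2.0;   // assumed midpoint
    std::printf("~%.0f W for the DRAM devices alone\n", chips * wattsPerChip);
    // And that's before the PHY/controller power on the SoC side, which is part
    // of what HBM-style stacking is meant to cut down.
    return 0;
}
```

Twenty-odd watts lands right in that 6x-10x range.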
 
Memory should be a factor of 6x-10x higher than that. High-bandwidth RAM is becoming a significant contributor to power (hence HBM, to address not only bandwidth but power consumption as well).

Had two brain-farts. I was thinking of 8GB of DDR4 for some odd reason. But yes, 24GB of GDDR6 is what's being rumored, correct?
 
2 GPU chiplets on the next Xbox? The '2 GPUs inside the Xbox' crazy dream from 2013 could finally come true!

Inevitably with time and infinite patience any prediction will become true. First universe law. :yep2:

Nah, this time it's obviously the PS5 which will have a 2nd hidden GPU inside! Why? Because instead of throwing away GPUs which don't work even after yield-boosting measures like deactivated CUs, they can use defective GPUs for irregular tasks like async compute, but especially as a performance booster for VR. Meaning in VR, each eye gets its own GPU!

Maybe some thirsty leaker picks this joke up. :LOL:
 