Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

Like I said before, there's a rumor of an AMD RDNA 2 80 CU dGPU for PC, die size 505 mm².
I will love it if the PS5 is 316mm² with 40 CUs clocked to hell and back, coupled with a 256-bit bus and the fastest chips around for ~530-576GB/s.

Such a smart design that I refuse to believe they are not going with it lol

They could SERIOUSLY undercut MS with that one. It would be PS4 Pro vs Xbox One X again (only the difference in power would be ~15% rather than 30%).
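For anyone wanting to sanity-check that ~530-576GB/s figure, it falls straight out of bus width times per-pin data rate. A quick sketch (the data rates are just the usual GDDR6 bins):

```python
# Rough GDDR6 bandwidth: (bus width in bytes) x (per-pin data rate).
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return (bus_width_bits / 8) * data_rate_gbps

for rate in (14, 16, 16.5, 18):
    print(f"256-bit @ {rate} Gbps -> {bandwidth_gbs(256, rate):.0f} GB/s")
# 256-bit @ 14 Gbps -> 448 GB/s
# 256-bit @ 16 Gbps -> 512 GB/s
# 256-bit @ 16.5 Gbps -> 528 GB/s
# 256-bit @ 18 Gbps -> 576 GB/s
```

So the quoted range implies 16.5-18gbps chips on that 256-bit bus.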
 
I think the 'problem' is that it seems unbelievable that AMD can actually compete with a 2080Ti out of the blue, and in a console APU/SoC at that. They must have some real monsters ready at AMD for the dGPU market, ready to destroy anything Nvidia has now or will ever release.

It's for that reason plenty of us here were skeptical of the inclusion of hardware ray tracing in the next-gen consoles: there was no prior showing from AMD. But here we are, having had it confirmed across the board.

The same can be said of competing with/outperforming the 2080ti. It's unlikely that the PS5 and XSX will ship with such a GPU, but it's not so outlandish to think that AMD, in 2020, will be able to compete with a 2018 Nvidia GPU.

They targeted the mid-range market with the 5700XT, probably because even scaling RDNA1 up to something like 56 CUs would've still resulted in a $1000 card that couldn't trace rays. That won't be the case with RDNA2, an architecture with which they'll actually be able to trade blows with Nvidia.

I have little doubt that TF for TF, Nvidia will still perform better, but AMD have at least got everything in place to truly compete with Nvidia this go around. And you can bet your bottom dollar that whatever Nvidia's flagship card is next year, it'll leave the 2080ti in its dust.
 
I don't really see price as a deciding factor for $499-599 consoles. You either buy them for the best performance, or because of your game library/certain games, the gamepad, or fanboyism. I also expect the top tier to offer 1/2TB SSD options.

If price really is a deciding factor, a genuinely cheaper design like Lockhart plus a subscription service sounds like a more convincing strategy to me.
 
While ray tracing would be more of a surprise if it didn't land in the 2020 consoles, I think a 15+ TF AMD dGPU for the PC market was a bit of a surprise for many. I hope you're right, because we need competition from AMD again.
 
If AMD can effectively integrate two or more GPU chiplets on a package around its Infinity Fabric design, where the chiplets communicate as one (i.e. are viewed as a single chip), then Nvidia may have a problem on its hands, because SLI/NVLink GPU scaling is virtually dead within the PC gaming space. And if AMD can scale performance (100%) across GPU chiplets similar to its Ryzen/Threadripper CPU chiplets, then PC gamers are going to be even further ahead when it comes to raw GPU floating-point performance.

I can't remember which thread it was - maybe "Architecture and Products", maybe another - where sebbi was quite clear about the minimum bandwidth required for GPU chiplets, and how we're still quite a ways off from that. If memory serves, it was something in the realm of 256GB/s.

I'll try to dig it up, mainly because, if the minimum speed for GPU chiplets was 256GB/s, PCIe 5 is already tantalisingly close at 128GB/s.
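If it helps, here's roughly where that 128GB/s comes from. Worth noting it's the bidirectional total - each direction is closer to 64GB/s, so against a ~256GB/s floor we may be further off than it first looks. A rough sketch, ignoring protocol overhead beyond the line encoding:

```python
# PCIe x16 throughput: lanes x transfer rate, less 128b/130b encoding overhead.
def pcie_x16_gbs(transfer_rate_gts):
    return 16 * transfer_rate_gts * (128 / 130) / 8  # GB/s, one direction

for gen, rate in (("4.0", 16), ("5.0", 32)):
    one_way = pcie_x16_gbs(rate)
    print(f"PCIe {gen} x16: ~{one_way:.0f} GB/s each way, ~{2 * one_way:.0f} GB/s bidirectional")
# PCIe 4.0 x16: ~32 GB/s each way, ~63 GB/s bidirectional
# PCIe 5.0 x16: ~63 GB/s each way, ~126 GB/s bidirectional
```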
 
While ray tracing would be more of a surprise if it didn't land in the 2020 consoles, I think a 15+ TF AMD dGPU for the PC market was a bit of a surprise for many. I hope you're right, because we need competition from AMD again.

Is it that surprising, though? If Vega 64 were to clock at 1850MHz, it would be a 15TF card. We've not seen (to my knowledge) any evidence that RDNA1 can scale its CU count and maintain the sort of clockspeeds we've seen from it, but we have seen Vega 64 reach 1500MHz on 14/16nm with a notoriously hotter and more power-hungry architecture.
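For reference, the arithmetic behind that 15TF figure (Vega 64's 64 CUs are 4096 shaders, each doing two FP32 ops per clock via FMA):

```python
# Peak FP32 TFLOPS = shaders x 2 ops per clock (FMA) x clock speed.
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(peak_tflops(4096, 1850))  # ~15.2 TF at the hypothetical 1850MHz
print(peak_tflops(4096, 1500))  # ~12.3 TF at the ~1500MHz it actually reached
```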

I'd kinda be more surprised if AMD weren't able to release another 64CU card, on a newer node, using a newer architecture, and hit 1850MHz.
 
They could SERIOUSLY undercut MS with that one. It would be PS4 Pro vs Xbox One X again (only the difference in power would be ~15% rather than 30%).

Depends. If Sony is using the RDNA1/Navi architecture, as all the Gonzalo/Ariel/Oberon leaks suggest (engineering samples in late 2018), and MS is using the RDNA2 architecture with support for new features like VRS, mesh shaders, sampler feedback, etc., then the performance delta could be huge in titles fully utilizing these new techniques.
 
Totally possible. Is there anything that supports a 2019 launch for the PS5? It would be a kinda big difference. But the PS5 was going to have RT, a Navi 2 feature...
 
I can't remember which thread it was - maybe "Architecture and Products", maybe another - where sebbi was quite clear about the minimum bandwidth required for GPU chiplets, and how we're still quite a ways off from that. If memory serves, it was something in the realm of 256GB/s.

I'll try to dig it up, mainly because, if the minimum speed for GPU chiplets was 256GB/s, PCIe 5 is already tantalisingly close at 128GB/s.

So a single 64CU GPU would work... but a GPU using two 32CU chiplets wouldn’t? I’m not following the logic here.

Literally everything on the MCM should be a complete, well-integrated package, where the multi-chiplet design is recognized as a single GPU chip. And PCIe bandwidth saturation shouldn't be any different than with a single-chip GPU, or with prior AMD/Nvidia products containing multiple chips in an SLI/CF configuration on a single GPU PCB.
 
So a single 64CU GPU would work... but a GPU using two 32CU chiplets wouldn’t? I’m not following the logic here.

Literally everything on the MCM should be a complete, well-integrated package, where the multi-chiplet design is recognized as a single GPU chip. And PCIe bandwidth saturation shouldn't be any different than with a single-chip GPU, or with prior AMD/Nvidia products containing multiple chips in an SLI/CF configuration on a single GPU PCB.

That was my position too, but I had the notion stripped away from me by people far better versed in the tech nitty-gritty than I am.

I think the issue was one of having GPU chiplets treated like a single GPU. To achieve that, you need bandwidth beyond any current interconnect. If you fail to achieve it, you piss away some degree of performance, e.g. two SLI'd GPUs never perform at 200% of a single one.
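To put that 'never 200%' point in Amdahl's-law terms, a toy model (the scalable fractions are purely illustrative):

```python
# Toy model: speedup from N GPUs when only part of the frame's work scales.
def multi_gpu_speedup(n_gpus, scalable_fraction):
    return 1 / ((1 - scalable_fraction) + scalable_fraction / n_gpus)

print(multi_gpu_speedup(2, 0.80))  # 80% scales -> ~1.67x, not 2x
print(multi_gpu_speedup(2, 0.95))  # even at 95% -> ~1.9x
```

Hence the need for an interconnect fast enough that the chiplets genuinely behave as one GPU, at which point the scalable fraction is effectively 1.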
 
I've just started having a look in the relevant threads and holy shit is there a lot to pore over.

Is there any way of just looking at threads in which I've posted?
 
Totally possible. Is there anything that supports a 2019 launch for the PS5? It would be a kinda big difference. But the PS5 was going to have RT, a Navi 2 feature...

Microsoft have their own VRS even though Navi 2 has it.

Perhaps Microsoft's is better, or perhaps schedules slipped and the Navi/Navi 2 split was less clear when they started the design.

It's totally possible Sony rolled their own; they have shown RT tech already which did not seem related to RTX.
 
Some of my baseless theorizing as to why Sony/MS may be aiming for 12+ TF: perhaps the chances of a Pro version of either console in the next 3-4 years are rather slim, simply due to the slowdown in process tech.

PS4 Pro brought a 2x increase in flops 3 years after launch, and the X1X a 4x increase 4 years after launch. Let's say the PS5 launches with a 10TF GPU: can we realistically expect a ~20TF Pro in 2023? I highly doubt it. According to that slide linked from Anandtech, 7nm to 5nm is only a 15% gain in perf or a 30% power reduction. That's not good enough for a Pro model. Even the jump to the next half node, N5P, is just an incremental bump. We probably won't get a real jump in perf/power until we hit 3nm, and who knows when that will be commercially viable for consoles.
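A quick sketch of why that 15% figure makes a mid-gen doubling look so hard (the 10TF baseline is just my hypothetical from above):

```python
# Sanity check on the "no easy Pro" argument, using the quoted N7 -> N5 numbers.
base_tf = 10.0        # hypothetical PS5 launch GPU
n5_perf_gain = 0.15   # ~15% more perf at iso-power, per the Anandtech slide

print(base_tf * (1 + n5_perf_gain))  # ~11.5 TF from the node alone
print(base_tf * 2)                   # 20.0 TF needed for a PS4 Pro-style 2x step
```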

I think MS and Sony are going to push these boxes as hard as they can now, because they might have to last 5+ years before a Pro version appears (or not at all and we simply go to the next generation).
 
I will love it if the PS5 is 316mm² with 40 CUs clocked to hell and back, coupled with a 256-bit bus and the fastest chips around for ~530-576GB/s.

Such a smart design that I refuse to believe they are not going with it lol

They could SERIOUSLY undercut MS with that one. It would be PS4 Pro vs Xbox One X again (only the difference in power would be ~15% rather than 30%).

18gbps chips might be really expensive. Perhaps 17gbps in retail.
 
18gbps chips might be really expensive. Perhaps 17gbps in retail.
There's no 17gbps bin as far as I know. Once 16gbps can be multi-sourced, it might become the middle bin, with low-volume 18 above and low-cost 14 below.

I wouldn't be surprised to see 14gbps being used next gen. In fact it's my prediction.
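Following the bandwidth arithmetic earlier in the thread, a 14gbps prediction mostly implies a wider bus if something in the ~530-576GB/s range is still the target. A rough sketch:

```python
# Holding ~576GB/s with 14Gbps parts means widening the bus instead.
target_gbs = 256 / 8 * 18      # 576 GB/s: 18Gbps chips on a 256-bit bus
print(target_gbs / (14 / 8))   # ~329 bits needed at 14Gbps...
print(320 / 8 * 14)            # ...so a 320-bit bus lands nearby at 560 GB/s
```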
 
So a single 64CU GPU would work... but a GPU using two 32CU chiplets wouldn’t? I’m not following the logic here.

Literally everything on the MCM should be a complete, well-integrated package, where the multi-chiplet design is recognized as a single GPU chip. And PCIe bandwidth saturation shouldn't be any different than with a single-chip GPU, or with prior AMD/Nvidia products containing multiple chips in an SLI/CF configuration on a single GPU PCB.

I've been searching for a good 20-30 minutes now - long enough for my cup of tea to go cold - and I'm losing the will to live, especially when I know I'm just procrastinating and should be looking for a job.

If I think on, I'll carry on having a look tomorrow, but in the meantime, I know there's a fair bit of discussion around GPU chiplets in this thread.
 