General Next Generation Rumors and Discussions [Post GDC 2020]

Sony and AMD discovered early on that BC would limit the CU count to 36. Sony accommodated by designing a more robust cooling solution to handle higher frequencies. They wanted a notable metric they could compete on, and chose the SSD because a high-performance part would offer tangible benefits that even casual gamers could readily experience.

Or maybe Sony wanted a machine that will price reduce well over time.
 
Spider-Man was awesome on the PS2. For PS2 era games.

Let's just agree to disagree. I'm seeing lots of devs saying they are super stoked for SSDs in next-gen consoles. I've seen nobody talk about graphics other than the expected interest in the new RT hardware.
SSDs are a major game changer compared to traditional hard drives. Even the Xbox Series X drive is, what, 50 times faster? That's the game changer; it remains to be seen whether the speed difference between the next-gen boxes is itself a game changer. We will most likely know in two years, once devs get used to these new systems.
 
Over 120 times faster (minimum). Developers designed current-gen solutions around roughly 20 MB/s; Series X is 2400 MB/s raw, with typical compression hitting 4800 MB/s.
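Quick sanity check on those figures (just the numbers quoted above, nothing measured):

```python
# Rough speed-up math from the figures above.
hdd_effective = 20      # MB/s, what current-gen titles were reportedly designed around
xsx_raw = 2400          # MB/s, Series X raw read
xsx_compressed = 4800   # MB/s, typical with hardware decompression

print(xsx_raw / hdd_effective)         # 120.0x (the stated minimum)
print(xsx_compressed / hdd_effective)  # 240.0x with compression in play
```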
 
Right, so you're looking at many times faster. Also, we are still 7 months out from the console launches; specs can change for the better. MS has done a few clock bumps right before the end in the past. However, I would imagine that if MS were going to do anything, it would be with the slower RAM, adding in 2-gigabyte chips instead of 1-gigabyte ones.
 
Also, they can do more with that 4.8 GB/s max, using BCPack texture compression, effectively closing the gap.
 
Right, so you're looking at many times faster. Also, we are still 7 months out from the console launches; specs can change for the better. MS has done a few clock bumps right before the end in the past. However, I would imagine that if MS were going to do anything, it would be with the slower RAM, adding in 2-gigabyte chips instead of 1-gigabyte ones.
Clock bumps won't do much. If they want to tear it up, just make the memory uniform and vault the bandwidth up.
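For what it's worth, here's the uniform-memory math (a sketch from MS's stated 320-bit bus, 14 Gbps GDDR6, and 6 x 2 GB + 4 x 1 GB chip config):

```python
# Peak GDDR6 bandwidth = bus width (bits) x per-pin rate (Gbps) / 8 bits-per-byte.
def gddr6_bw(bus_bits, gbps):
    return bus_bits * gbps / 8  # GB/s

# Series X today: 10 chips on a 320-bit bus (6 x 2 GB + 4 x 1 GB).
print(gddr6_bw(320, 14))  # 560.0 GB/s across the 10 GB "GPU optimal" pool
print(gddr6_bw(192, 14))  # 336.0 GB/s across the remaining 6 GB

# Uniform option: ten 2 GB chips -> 20 GB, all of it on the full 320-bit bus.
```

Going all-2 GB doesn't raise the peak, but the whole 20 GB pool would run at 560 GB/s with no split.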

I'm pretty sure MS is going to stay put, though. No reason to change things. They have to work on their other stuff now: games and platform offerings.
 
Also, they can do more with that 4.8 GB/s max, using BCPack texture compression, effectively closing the gap.

I think BCPack will help close the gap, but I honestly think its effectiveness is overblown. From what I can tell it's an extension to the already-compressed BC1-7 texture formats; Sony and others are free to (and likely will) use texture compression in formats that their GPUs and APIs support, and run RLE or some other form of compression over the top of that.
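A toy illustration of the "compression over already-compressed blocks" idea (generic zlib on fake data, not BCPack or Kraken; the block contents are made up):

```python
import zlib

# Fake BC1-style data: fixed 8-byte blocks, heavily repeated the way flat or
# tiled regions of a real texture repeat after block compression.
flat_block = bytes.fromhex("1f7c1f7caaaaaaaa")  # one made-up 4x4 block
texture = flat_block * 4096                      # 32 KB of such blocks

packed = zlib.compress(texture, 9)
print(len(texture) / len(packed))  # huge ratio: block compression leaves redundancy
```

Real textures compress far less than this contrived case, but the point stands: fixed-rate block formats leave inter-block redundancy that a general-purpose pass can still harvest.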
 
I wondered the same a week ago here. My thoughts were 3.0 vs 4.0, heat, or "fast enough" for their appliance. 3dillettante suggested MS's 2.4 GB/s is the guaranteed minimum for all accesses.
That interpretation does require that the access pattern yield 2.4 GB/s or more when the developer tests it. Someone could design a pattern to bottleneck an SSD or increase the work done in the file system, which wouldn't count. The platform would give some definition of a "reasonable" access pattern, and a "guarantee" in that case would be that such a pattern would not drop below 2.4 GB/s.

Sony and AMD discovered early on that BC would limit the CU count to 36. Sony accommodated by designing a more robust cooling solution to handle higher frequencies. They wanted a notable metric they could compete on, and chose the SSD because a high-performance part would offer tangible benefits that even casual gamers could readily experience.
I'd still like to know what kind of barrier BC would be. It's straightforward for hardware with more CUs to not expose them to software. Any GPU with more CUs than are active does this. For Sony's consoles, the PS4 Pro hid that there were extra CUs from PS4 code. The PS4 itself hid the existence of the inactivated CUs from the software, even though if the choice had been made they could have been made available.
Even if there were some difficulty in hiding the CU count with standard hardware, this would be a semi-custom chip. Why would an internal mode switch that made the GPU say "36" instead of some higher number be difficult, when they're transplanting hardware and instructions for BC across multiple systems in a multi-billion transistor SOC?
 
I think BCPack will help close the gap, but I honestly think its effectiveness is overblown. From what I can tell it's an extension to the already-compressed BC1-7 texture formats; Sony and others are free to (and likely will) use texture compression in formats that their GPUs and APIs support, and run RLE or some other form of compression over the top of that.
Isn't BCPack already accounted for in the 4.8 figure? With Kraken, the PS5 goes from 5.5 to 8-9 GB/s, ~1.6x, versus the 2x for BCPack.
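The ratios, spelled out (figures as stated by each platform holder):

```python
ps5_raw, ps5_typical = 5.5, (8, 9)  # GB/s, Cerny's raw and typical-Kraken figures
xsx_raw, xsx_typical = 2.4, 4.8     # GB/s, MS's raw and with-compression figures

print(ps5_typical[0] / ps5_raw)  # ~1.45x
print(ps5_typical[1] / ps5_raw)  # ~1.64x
print(xsx_typical / xsx_raw)     # 2.0x
```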
 
It would be neat (and expected) if better-yielding units in a year's time simply run faster on average than launch units as the power/yield curve improves. It basically becomes a more guaranteed boost of sorts for those who waited.

By the time a slim gets out, maybe the PS5 will see simultaneous and sustained 3.5 GHz / 2.2 GHz operation, assuming they stick to the same power-profile determination.
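For the idea of a fixed power budget picking the clock, here's a toy model (all constants are invented for illustration; Sony hasn't published its actual power model):

```python
# Pick the highest clock whose estimated power fits a fixed budget.
# Power is modeled as k * activity * f * V(f)^2, with a made-up linear V/f curve.
def sustained_clock(activity, budget_w=200.0, cap_ghz=2.23):
    for step in range(int(cap_ghz * 100)):
        f = cap_ghz - step * 0.01          # walk down in 10 MHz steps
        v = 0.7 + 0.15 * f                 # assumed voltage/frequency curve
        if 100.0 * activity * f * v * v <= budget_w:
            return round(f, 2)
    return 0.0

print(sustained_clock(activity=0.6))  # lighter workload -> holds the 2.23 cap
print(sustained_clock(activity=1.0))  # worst-case workload -> backs off to ~2.0
```

Better silicon later in the cycle effectively moves the V/f curve down, so the same budget sustains the cap more of the time.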

In that case, I will wait until there's a game I'd care about from Sony :cool:
 
Guaranteed transfer rates are almost always sequential or, rarely, with a specified block size which must be aligned and striped across all chips in the array.

With 1-byte requests it would be a few hundred KB per second. Or reading exactly 1 of every 4 blocks in a 4-chip array would drop the bandwidth by 4x, hitting only one chip all the time. So it makes no sense to guarantee anything for random access without more details on the access-pattern limitations.
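A toy model of that striping argument (4 chips, invented per-chip rate; real controllers are more complex):

```python
CHIPS, PER_CHIP = 4, 0.6  # GB/s per chip -> 2.4 GB/s when all four are busy

def effective_bw(block_indices):
    # Block b lives on chip (b % CHIPS); the transfer takes as long as the
    # busiest chip, so hot-spotting one chip idles the other three.
    loads = [0] * CHIPS
    for b in block_indices:
        loads[b % CHIPS] += 1
    return len(block_indices) / max(loads) * PER_CHIP

print(effective_bw(range(1000)))        # sequential: 2.4 GB/s
print(effective_bw(range(0, 4000, 4)))  # every 4th block: 0.6 GB/s, one chip hot
```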
 
Isn't BCPack already accounted for in the 4.8 figure? With Kraken, the PS5 goes from 5.5 to 8-9 GB/s, ~1.6x, versus the 2x for BCPack.

Maybe not. If I'm not mistaken, texture compression formats aren't required to be decompressed in RAM. The block compression used for textures allows for random access, so the GPU can grab needed texture data and decompress it in hardware.

I imagine both the PS5 and XSX are using similar solutions. The question is whether BCPack allows for greater compression at high quality settings.
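The random-access property comes from BC formats being fixed-rate, so any block's address is a closed-form function of its coordinates. A minimal sketch (standard BC block sizes; the texture dimensions are arbitrary):

```python
# BC1 packs each 4x4 texel block into 8 bytes; BC7 into 16 bytes. Fixed-rate
# blocks mean the GPU can address and decode any block independently.
def block_offset(x, y, width, bytes_per_block=16):  # BC7-sized blocks
    blocks_per_row = (width + 3) // 4
    return ((y // 4) * blocks_per_row + (x // 4)) * bytes_per_block

# Byte offset of the block holding texel (513, 257) in a 2048-wide BC7 texture:
print(block_offset(513, 257, 2048))  # 526336
```

A variable-rate pass like BCPack or Kraken on top breaks that property, which is presumably why it gets undone during the SSD-to-RAM transfer while the BC blocks themselves stay compressed in memory.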
 
Has DF or anybody tried to mimic next-gen specs by building a PC (or two)???

Of course it would be an exercise in futility, but still a fun one.

They could either try to overclock a 5700 XT to the max (might work for the PS5) or maybe use an Nvidia GPU clocked to 12 TF for the Series X, like a 2080 or 2080 Ti. The 2080 clocked various ways might be the better idea, as they could simulate the consoles' ray tracing as well.
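Back-of-envelope for the clocks that would need (shader counts are the public specs; TF-for-TF across different architectures is a rough proxy at best):

```python
# FP32 TFLOPS = 2 ops/clock (FMA) x shader count x clock (GHz) / 1000.
def tflops(shaders, ghz):
    return 2 * shaders * ghz / 1000

def clock_for(shaders, target_tf):
    return target_tf * 1000 / (2 * shaders)

print(tflops(2304, 2.23))      # PS5: 36 CUs x 64 lanes -> ~10.3 TF
print(clock_for(2560, 10.28))  # 5700 XT (40 CUs) needs ~2.01 GHz -- a hefty OC
print(clock_for(2944, 12.15))  # RTX 2080 needs ~2.06 GHz for XSX's ~12.15 TF
print(clock_for(4352, 12.15))  # 2080 Ti gets there at a relaxed ~1.40 GHz
```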

Zen 2, of course. And they could run a top-of-the-line SSD.

Again, of course they couldn't simulate games programmed for a blazing-fast SSD baseline or anything, but it'd still be fun. There's a post on Era about how Doom Eternal loads in about 1s from an SSD and is well optimized for SSDs, so that might be a good taste.

I am quite sure they're working on this video as they've done similar things.
 
I'd still like to know what kind of barrier BC would be. It's straightforward for hardware with more CUs to not expose them to software. Any GPU with more CUs than are active does this. For Sony's consoles, the PS4 Pro hid that there were extra CUs from PS4 code. The PS4 itself hid the existence of the inactivated CUs from the software, even though if the choice had been made they could have been made available.
Even if there were some difficulty in hiding the CU count with standard hardware, this would be a semi-custom chip. Why would an internal mode switch that made the GPU say "36" instead of some higher number be difficult, when they're transplanting hardware and instructions for BC across multiple systems in a multi-billion transistor SOC?

I'm really curious if the 36 CUs are purely for backwards compatibility or if it's also some degree of forward planning for a "Pro": extract the best performance possible from the smallest possible chip, with a view to doubling that chip, at a relatively economical level, within 3-5 years.

They already have the engineering work they've done on the PS4 Pro's "butterfly" design, and 72 CUs on 5nm might not be all that much bigger than the XSX's 360 mm² beast. I need to go and check out the kind of area reduction 5nm will bring, though...

Given that they've gone with 14 Gbps GDDR6 *sigh*, a jump to 16 Gbps would make it relatively cheap to bump up the memory bandwidth too. I'd love some HBM and 1 TB/s of bandwidth there though :love:
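The bandwidth bump, spelled out (assuming the 256-bit bus being discussed for PS5):

```python
# Peak GDDR6 bandwidth = bus width (bits) x per-pin rate (Gbps) / 8.
bus_bits = 256
print(bus_bits * 14 / 8)  # 448.0 GB/s at 14 Gbps (what they went with)
print(bus_bits * 16 / 8)  # 512.0 GB/s at 16 Gbps -- the cheap drop-in bump
```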
 
I'd still like to know what kind of barrier BC would be. It's straightforward for hardware with more CUs to not expose them to software. Any GPU with more CUs than are active does this. For Sony's consoles, the PS4 Pro hid that there were extra CUs from PS4 code. The PS4 itself hid the existence of the inactivated CUs from the software, even though if the choice had been made they could have been made available.
Even if there were some difficulty in hiding the CU count with standard hardware, this would be a semi-custom chip. Why would an internal mode switch that made the GPU say "36" instead of some higher number be difficult, when they're transplanting hardware and instructions for BC across multiple systems in a multi-billion transistor SOC?

Honestly, I'm having a hard time reasoning out such a limitation. BC on the PC platform and Xbox is not limited in such a fashion, so it doesn't seem logical to conclude that a limitation exists just for one platform.

Maybe the restriction is not a product of some issue with the actual configuration of the CUs, but rather that RDNA's CUs poorly mimic the performance of GCN's CUs in some form or fashion, and the frequency of an RDNA CU must be boosted to compensate.

In other words, 2.23 GHz isn't some consequence of having just 36 CUs but the other way around: BC requires RDNA CUs running at high frequency to perform adequately across the board, and that frequency is high enough to limit the number of CUs Sony can readily use in its design.
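The per-CU clock margin, using the public figures:

```python
# Per-CU clock ratios across Sony's BC targets.
ps4, ps4_pro, ps5_cap = 0.800, 0.911, 2.23  # GHz
print(ps5_cap / ps4)      # ~2.79x a PS4 CU
print(ps5_cap / ps4_pro)  # ~2.45x a PS4 Pro CU
```

If an RDNA CU has to mimic GCN behavior quirk-for-quirk, capping at 36 CUs with that much per-CU clock headroom is one way to buy the margin.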
 
True. In the end, the XSX has a faster CPU, a more powerful GPU (no variable clocks on it either), higher bandwidth, a solid BC plan (higher settings, HDR, resolutions), and an NVMe SSD solution that casual gamers are going to notice as a tangible difference even at the lower read speeds. Aside from that, implementations like BCPack and hardware decompression could keep it rather competitive in raw speed anyway.
Being stuck at 36 CUs wasn't really needed, I think; high clocks were attained on the XSX too (1800+ MHz GPU), which seems a sweet spot. You end up with a less costly cooling system and can put the resources into hardware instead.


I don't think that was MS's plan, though.

They just shot for performance leadership; 52 CUs fell out of that.

Sony thought 36 CUs would be enough, and then, when they realized they'd been caught out a bit, clocked the hell out of their chip to close as much of the gap as they could. Which, evidently, resulted in more expensive cooling.

At least, that's how I see it. There was no grand plan by Sony to build around the SSD and go lesser on the GPU, or by MS to have cheaper cooling, etc., IMO. Sony did prioritize the SSD, obviously, don't get me wrong, but I'm sure they could have had the SSD AND 40+, even 52, CUs as well.
 
So, my PC motherboard finally gave up the ghost and I had to go shopping for a new one. I took a look online at PCIe 4.0 SSDs and was shocked at just how expensive they are.

Compared to PCIe 3.0 drives, they are almost twice as expensive for not twice the performance.

Plenty of PCIe 3.0 drives could, in theory, work in the XBSX, as many of them have > 3.0 GB/s raw read speeds. This makes me wonder if the XBSX is using PCIe 3.0, as they don't really need PCIe 4.0 in order to hit their target SSD speed.
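The link-speed math supports that (standard PCIe rates; whether the XBSX actually runs 3.0 is the speculation here):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
lane_3_0 = 8 * 128 / 130 / 8  # ~0.985 GB/s usable per lane
print(4 * lane_3_0)           # x4 link: ~3.94 GB/s, clears the 2.4 GB/s raw target
print(8 * lane_3_0)           # PCIe 4.0 x4 doubles it: ~7.88 GB/s (PS5's 5.5 needs this)
```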

Going by this, even proprietary XBSX expansion drives could potentially end up significantly cheaper than consumer-level PCIe 4.0 SSDs of > 5.5 GB/s.

That's also assuming that MS doesn't just have a spec that 3rd-party SSD makers could use to make drives for the XBSX, which would, of course, require certification. Sort of like 3rd-party memory cards on past consoles.

Anyway, the end result is that I didn't order a PCIe 4.0 SSD even though I was tempted. I'll hold off until DirectStorage comes to PC.

Regards,
SB

Cerny did advise people not to buy PCIe 4.0 drives now.
Prices should be different come November/December.

Better yet, prices should be much different by the time you actually need to purchase a PCIe 4.0 SSD add-on because your initial storage ran out, unless you plan on installing 10 games on day one.

Also, Sony is letting people put in a PCIe 4.0 SSD that follows the 2280 M.2 industry standard, so that several 3rd-party suppliers can compete with their offers.
Don't be surprised if price-per-GB on PS5-certified SSDs ends up lower than the slower but custom single-supplier option you'll get for the Series X/S/?.
 