Real test is Ark Survival Evolved, bro.

Seeing how Gears 5 was running at higher than PC specs, with HDR, and at that FPS/resolution, done in two weeks, performance shouldn't be a problem for BC games. Gears 5 is one of the best looking current-gen games.
Sony and AMD discovered early on that BC would limit the CU count to 36. Sony accommodated by designing a more robust cooling solution to handle higher frequencies. They wanted a notable metric they could use to compete on, and chose the SSD because a high-performance part would offer tangible benefits that even casual gamers could readily experience.
SSDs are a major game changer with regard to traditional hard drives. Even the Xbox Series X drive is what, 50 times faster? That's the game changer; it remains to be seen whether the speed difference between the next-gen boxes themselves is a game changer or not. We will most likely know in two years, as devs get used to these new systems.

Spider-Man was awesome on the PS2. For PS2-era games.
Let's just agree to disagree. I'm seeing lots of devs saying they are super stoked for SSDs in next-gen consoles. I've seen nobody talk about graphics other than the expected interest in the new RT hardware.
Or maybe Sony wanted a machine that will price reduce well over time.
Over 120 times faster (minimum). Developers of current-gen titles based their solutions around 20 MB/s; Series X is 2400 MB/s raw, with typical compression hitting 4800 MB/s.
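The multiples above are just the thread's figures divided out; a quick sketch (the 20 MB/s current-gen budget is the post's own assumption, not a measured number):

```python
# Back-of-the-envelope speed multiples using the figures from the post above.
hdd_budget_mb_s = 20     # effective I/O budget devs reportedly designed around on HDDs
xsx_raw_mb_s = 2400      # Series X raw NVMe read speed
xsx_typical_mb_s = 4800  # with typical compression

raw_multiple = xsx_raw_mb_s / hdd_budget_mb_s           # 120x -> "over 120 times"
compressed_multiple = xsx_typical_mb_s / hdd_budget_mb_s  # 240x with compression
print(raw_multiple, compressed_multiple)
```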
Right, so you're looking at many times faster. Also, we are still 7 months out from the console launches; specs can change for the better. MS has done a few clock bumps right before the end. However, I would imagine if MS was going to do anything, it would be with the slower RAM, adding in 2 GB chips instead of 1 GB ones.
I still think the SSD speed difference is a bit overblown. But we will see what happens.
Clock bumps won't do much. If they want to tear it up, just make the memory uniform and vault the bandwidth up.
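The "make the memory uniform" idea can be put in numbers. A sketch using the publicly stated Series X GDDR6 config (14 Gbps, 32-bit chips); the uniform variant at the end is the hypothetical chip swap, not an announced spec:

```python
# Per-chip GDDR6 bandwidth: pin rate (Gbps) x interface width (bits) / 8 bits per byte.
gbps_per_pin = 14
bits_per_chip = 32
gb_s_per_chip = gbps_per_pin * bits_per_chip / 8  # 56 GB/s per chip

# Announced config: six 2 GB chips + four 1 GB chips on a 320-bit bus.
fast_pool_gb_s = 10 * gb_s_per_chip  # 10 GB interleaved across all ten chips
slow_pool_gb_s = 6 * gb_s_per_chip   # remaining 6 GB sits only on the six 2 GB chips

# Hypothetical swap: four 1 GB chips replaced with 2 GB parts ->
# 20 GB uniform, every access striped over the full bus.
uniform_gb_s = 10 * gb_s_per_chip
print(fast_pool_gb_s, slow_pool_gb_s, uniform_gb_s)
```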
Also, they can do more with that 4.8 GB/s max, using BCPack texture compression, effectively closing the gap.
I wondered the same a week ago here. My thoughts were 3.0 vs 4.0, heat, or "fast enough" for their appliance. 3dillettante suggested MS's 2.4 GB/s is the guaranteed minimum for all accesses.

That interpretation does require that the access pattern yield 2.4 GB/s or more when the developer tests it. Someone could design a pattern to bottleneck an SSD or increase the work done in the file system, which wouldn't count. The platform would give some definition of a "reasonable" access pattern, and a "guarantee" in that case would be that such a pattern would not drop below 2.4 GB/s.
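Why a pathological pattern wouldn't count is easy to see with a toy model (my own illustration, not platform data): if every request pays some fixed per-request overhead, tiny scattered reads fall far below the drive's peak, while large reads approach it.

```python
# Toy model: effective throughput for fixed-size reads with per-request overhead.
# The 40 us overhead is an assumed, illustrative figure for file system + command cost.
def effective_mb_s(read_kb: float, overhead_us: float, peak_mb_s: float) -> float:
    nbytes = read_kb * 1024
    transfer_us = nbytes / peak_mb_s  # MB/s numerically equals bytes per microsecond
    return nbytes / (overhead_us + transfer_us)

tiny = effective_mb_s(4, 40, 2400)     # 4 KB scattered reads: far below peak
bulk = effective_mb_s(1024, 40, 2400)  # 1 MB reads: close to the 2400 MB/s peak
print(tiny, bulk)
```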
Isn't BCPack already accounted for in the 4.8 figure? With Kraken, PS5 goes from 5.5 to 8-9 GB/s, ~1.6x vs the 2x for BCPack.

I think BCPack will help close the gap, but I honestly think its effectiveness is overblown. It's an extension to the already-compressed texture formats BC1-7, from what I can tell; Sony and others are free to, and likely will, use texture compression in formats that their GPUs and APIs support, and do RLE or some other form of compression over the top of that.
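The ratios being argued over reduce to one multiplication: effective bandwidth is raw SSD speed times the average compression ratio. A sketch using the thread's figures (~1.6x Kraken, ~2x BCPack, both assumptions rather than measurements):

```python
# Effective read bandwidth = raw link speed x average compression ratio.
def effective_gb_s(raw_gb_s: float, ratio: float) -> float:
    return raw_gb_s * ratio

ps5 = effective_gb_s(5.5, 1.6)  # ~8.8 GB/s, inside the quoted 8-9 range
xsx = effective_gb_s(2.4, 2.0)  # 4.8 GB/s, the "typical" Series X figure
print(ps5, xsx)
```

With those ratios the raw 2.3x gap (5.5 vs 2.4) narrows to roughly 1.8x effective, which is the "closing the gap" claim in numeric form.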
I'd still like to know what kind of barrier BC would be. It's straightforward for hardware with more CUs to not expose them to software. Any GPU with more CUs than are active does this. For Sony's consoles, the PS4 Pro hid that there were extra CUs from PS4 code. The PS4 itself hid the existence of the inactivated CUs from the software, even though if the choice had been made they could have been made available.
Even if there were some difficulty in hiding the CU count with standard hardware, this would be a semi-custom chip. Why would an internal mode switch that made the GPU say "36" instead of some higher number be difficult, when they're transplanting hardware and instructions for BC across multiple systems in a multi-billion transistor SOC?
True. In the end, XSX has a faster CPU, a more powerful GPU (no variable clocks on them either), higher bandwidth, a solid BC plan (higher settings, HDR, resolutions), and an NVMe SSD solution that casual gamers are going to notice as a tangible difference even at the lower read speeds. Aside from that, implementations like BCPack decompression etc. could be rather competitive in raw speed anyway.
Being stuck at 36 CUs wasn't really needed, I think; high clocks were attained on XSX too (1800+ MHz GPU), which seems a sweet spot. You end up with a less costly cooling system and can put the resources into hardware instead.
So, my PC motherboard finally gave up the ghost and I had to go shopping for a new one. I took a look online at PCIE 4.0 SSDs and was shocked at just how expensive they are.
Compared to PCIE 3.0 drives, they are almost twice as expensive for not twice the performance.
Plenty of PCIE 3.0 drives could, in theory, work on XBSX as many of them have > 3.0 GB/s raw read speeds. This makes me wonder if XBSX is using PCIE 3.0 as they don't really need PCIE 4.0 in order to hit their target SSD speed.
Going by this even proprietary XBSX expansion drives could potentially end up significantly cheaper than consumer level PCIE 4.0 SSDs of > 5.5 GB/s.
That's also assuming MS doesn't just have a spec that 3rd party SSD makers could use to make drives for XBSX, which would, of course, require certification. Sort of like 3rd party memory cards on past consoles.
Anyway, end result is that I didn't order a PCIE 4.0 SSD even though I was tempted. I'll hold off until DirectStorage comes to PC.
Regards,
SB