Next-gen console versus PC comparison *spawn

I have no idea about the GPU; he just said the SSD is coming from the work in these patents.

Well okay then, with the help of co-processors dedicated to the SSD we will surely get a very speedy solution. One other thing though: it doesn't have to be exactly as described in the patent, but something along those lines.
 
The whole industry has the specs by now, but no one is talking this time. It's not surprising that there are no leaks, but I think all the mods on ResetEra know the specs. It seems the consoles are nearly identical in performance.
 
Going to be interesting when the specs land and we see what we are getting; it has to last another 7 years. Still, I wouldn't read too much into specs/numbers, someone 'certified' called them unimpressive...
 
Then again, the virtual filesystem in the patent is no different from a PC running FreeBSD on the x86 CPU. But there is a filesystem internal to the SSD called File Archive.
If you are talking about SSD patent application in another thread, that is US20170097897A1 https://patents.google.com/patent/US20170097897A1/en
- that's not actually a "file system internal to the SSD". It's just a custom flash-aware file system on a separate disk or partition. SSDs know nothing about file systems; they work with blocks/sectors.

That figure describes their software stack for disk access, where the SSD partition uses a custom file system to access read-only, protected game level data. This filesystem makes use of large 64 KB blocks (clusters), and the NVMe SSD controller supports hardware decompression of these 64 KB blocks, while contiguous allocation and data compression are handled in the software stack. The latter also implements a custom lightweight API (i.e. without HDD-based 'abstractions') to read data from this SSD partition.

Of course the software stack also supports standard file systems typical for USB media and optical/hard disk drives and SSDs, with full read-write access.
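
Purely as an illustration of why that large read unit matters (this is not the patent's code; the scratch file, its size and the use of POSIX os.pread are my own assumptions, and the OS page cache will flatter the numbers unless you drop caches or use O_DIRECT), here is a quick Python sketch comparing random reads done in 4 KiB versus 64 KiB units:

Code:
import os, random, time

PATH = "testfile.bin"            # hypothetical scratch file
FILE_SIZE = 256 * 1024 * 1024    # 256 MiB of random data
SMALL, LARGE = 4 * 1024, 64 * 1024

# Create the scratch file once.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(FILE_SIZE))

def random_read_throughput(block_size, total_bytes):
    # Read total_bytes from random block-aligned offsets, one block at a time.
    fd = os.open(PATH, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(total_bytes // block_size):
            offset = random.randrange(FILE_SIZE // block_size) * block_size
            os.pread(fd, block_size, offset)
        return total_bytes / (time.perf_counter() - start) / 1e6  # MB/s
    finally:
        os.close(fd)

TOTAL = 64 * 1024 * 1024  # read 64 MiB worth of data each way
print(f" 4 KiB random reads: {random_read_throughput(SMALL, TOTAL):8.1f} MB/s")
print(f"64 KiB random reads: {random_read_throughput(LARGE, TOTAL):8.1f} MB/s")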

If it is 10 GB/s or faster, it's very easy: more RAM on the PC solves the problem, and the first loading time will just be longer.
If loading times are not improved, what problem does this additional RAM actually solve?

It is not so difficult because everything is internal to the SSD. The only problem is that they need to agree on a standard, and they will not optimize only for read-only access...
Add a beefier ARM CPU, and some SSDs already have a hardware decompressor. The most difficult part is the software side, because you probably need to work with MS on the OS side and with the other SSD makers.
It may not be difficult to achieve in the hardware, but if it's properly patented then it might be hard to do without some brute force.
First, I doubt the patent will be granted. They basically describe an integrated 4-channel flash memory controller with an NVMe (PCIe) interface, and an address translation layer implemented with multi-core ARM processors and dedicated look-up caches. There are already dozens of such integrated chips from several vendors like Phison, Silicon Motion, Samsung, Intel, SandForce, Marvell, and others.

Also, hardware compression has already been implemented by SandForce and probably a few others, to limited success; major vendors chose to completely ignore it. Compression can increase read throughput for lower-end flash memory parts, but only as long as your data is compressible at a decent ratio. It cannot substitute for flash-aware filesystems and device IO patterns based on large blocks.

And a custom API layer for accessing flash memory storage is not such a novelty either.
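
As a rough illustration of the "only as long as your data is compressible" point (a toy zlib test, nothing to do with SandForce's actual algorithm; the sample blocks are made up):

Code:
import os, zlib

BLOCK = 64 * 1024  # pretend 64 KiB device blocks

samples = {
    "repetitive text   ": (b"score=0;lives=3;level=forest_01;" * 4096)[:BLOCK],
    "random binary      ": os.urandom(BLOCK),                 # stand-in for encrypted/packed data
    "already compressed ": zlib.compress(os.urandom(BLOCK * 4))[:BLOCK],
}

for name, data in samples.items():
    ratio = len(zlib.compress(data, 6)) / len(data)
    print(f"{name}: deflates to {ratio:6.1%} of original size")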
 
If loading times are not improved, what problem does this additional RAM actually solve?

Cross-platform games between Xbox and PC are a thing, and games will use this SSD streaming for large open-world games etc., so they must work something out.

First, I doubt the patent will be granted. They basically describe an integrated 4-channel flash memory controller with an NVMe (PCIe) interface, with a translation layer implemented in multi-core ARM processors. There are already dozens of such integrated chips from several makers, such as Phison, Silicon Motion, Samsung, SandForce, Marvell, and others.

Also, hardware compression has already been implemented by SandForce and probably a few others, to limited success; major vendors chose to completely ignore it. Compression can increase read throughput for lower-end flash memory parts, but only as long as your data is compressible at a decent ratio. It cannot substitute for flash-aware filesystems and device IO patterns based on large blocks.

And a custom API layer for accessing flash memory storage is not such a novelty either.

Are you talking about the Sony patent? If so, why are they registering it, if there are already integrated chips in hardware that do just that: compression to speed things up?
 
MS could easily create some sort of DX SSD, a type of virtual device, out of a user's existing SSD, and then let games access that resource similarly to how it might work in the next Xbox.
Existing SSDs are based on 512 B sectors and 4 KB clusters, which fit the hard disk-derived data access patterns of existing operating systems. This kills their otherwise impressive sequential read speeds from raw disk IO tests by two orders of magnitude when you run real-world random read tests on these SSDs and operating systems.
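
A back-of-envelope sketch of that gap (the ~100 µs latency and queue depth 1 are assumed numbers, and the toy model ignores the drive's bandwidth ceiling, so treat the output as order-of-magnitude only):

Code:
# Toy model: every block read costs one full device latency; ignores the
# drive's bandwidth ceiling, so treat the output as order-of-magnitude only.
READ_LATENCY_S = 100e-6   # assumed ~100 microseconds per read at queue depth 1

def effective_mb_s(block_bytes):
    return block_bytes / READ_LATENCY_S / 1e6

for block in (4 * 1024, 64 * 1024):
    print(f"{block // 1024:3d} KiB random reads, QD1: ~{effective_mb_s(block):6.0f} MB/s")

# ~41 MB/s for 4 KiB blocks versus the 3000+ MB/s sequential figure on the
# spec sheet is roughly the two-orders-of-magnitude gap described above.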

Excerpts from the above Sony patent, which describe this problem at length:

[0043] In general, data stored in an HDD is divided into 512- or 4096-byte blocks and recorded in a distributed manner. A file system has metadata that makes distributed data look like one piece of continuous data, converting an instruction to access a continuous file area into that to access a plurality of distributed blocks. Because the HDD stores metadata for converting the name of the file to be accessed into the LBA associated with each block of the HDD, it is necessary to read metadata first so as to read the file.

[0044] Metadata itself may be distributed over a plurality of areas of the HDD. It is likely, therefore, that small data access to the HDD such as reading higher-order metadata to read the metadata in question may occur frequently due to metadata layering. During that period, it is difficult to acquire the logical block address of the area to be accessed where data is stored. Therefore, it is difficult for a CPU to issue a next read request. Applying such a data access procedure to the SSD in an “as-is” manner makes it difficult to achieve a high transfer rate that could otherwise be achieved by parallel access to a plurality of NAND devices.

[0045] Further, ordinary HDDs have no encryption or anti-tampering function. Therefore, it is necessary for the host CPU to handle encryption and tampering check. Encryption and tampering check may be performed at a BIOS (Basic Input/Output System) level or at a file system level. In either case, these tasks are handled by the CPU. Therefore, these processing tasks may become a bottleneck to a high SSD transfer rate. Although distribution of load of these processing tasks using an accelerator may be possible, it is necessary, for that purpose, to divide a read file into processing units and issue a number of processing requests for those processing units, thus making it difficult to reduce CPU processing load.

[0046] Further, a number of interrupts may be generated to notify completion to such a number of processing requests, possibly disrupting CPU processing. On the other hand, some file systems support data compression. In this case, the file system compresses data during file write and decompresses data during file read. At this time, if the interface speed of the data storage destination is low, the effective transfer rate may improve by reduction of data amount. However, data compression and decompression may become a bottleneck to a high SSD transfer rate.

[0047] Thus, although the transfer rate of NAND flash devices alone improves dramatically, incorporation of the devices into a system designed for an HDD leads to a variety of bottlenecks. As a result, such improvement in transfer rate is frequently not fully taken advantage of. To alleviate these various bottlenecks, a high-speed access software stack is available in the present embodiment in addition to a related art file system. The related art file system is accessed via a virtual file system to adapt to various storage devices and network file systems. For this reason, metadata is structured into a plurality of layers as described above. As a result, there are cases in which metadata is read a number of times before an intended file is read.

[0048] In the present embodiment, metadata is simplified by providing a high-speed access software stack specially designed for flash memories. Further, in the present embodiment, an auxiliary processor is provided in addition to a related art CPU to mainly execute and control the software stack in question so that the auxiliary processor takes charge of controlling a hardware accelerator for encryption and decryption, tampering check, and data decompression, thus distributing processing. Still further, the data read unit of the flash memory is expanded and unified for efficient read operations.
 
I think it will take some time before developers release games which truly push the bandwidth afforded by these next-gen consoles anyway. For a period after launch, we're going to be getting cross-generation multiplat games which will still be designed around the constraints of older hardware. The only games which MAY push boundaries are ones that are likely to be exclusive to PS5 to begin with, and thus won't really be comparable to anything.

I think developers will begin raising RAM requirements, and initial load times in PC versions of next-gen games will be slightly longer than their console counterparts. The games should be designed to fill up as much RAM as possible to reduce pressure on the PCIe bus. I think that might be the solution until faster drives, an even faster bus, and other possible improvements come down the pipe.

Regardless... it's all very interesting to see what happens and how it changes game design.
 
Cross-platform games between Xbox and PC are a thing, and games will use this SSD streaming for large open-world games etc., so they must work something out.



Are you talking about the Sony patent? If so, why are they registering it, if there are already integrated chips in hardware that do just that: compression to speed things up?

This is not new. It’s not in the consumer PC space but it’s a field of research and vendors are already supplying products in the data center space.
 
games will use this SSD streaming for large open-world games etc.
The question was how that additional RAM would improve game loading speeds for a typical Windows SSD with heavy fragmentation from small cluster sizes.

why are they registering it
Why not - there's a 50% chance for a US patent application to be granted. To me, it strongly looks like a collection of prior art, but I did not examine each and every claim, like patent professionals do.
 
The question was how that additional RAM would improve game loading speeds for a typical Windows SSD with heavy fragmentation from small cluster sizes.
Presumably they could load more of the game into a larger memory pool, and that would allow the PC to stream in assets fast enough to maintain parity with the consoles. Whereas without that additional RAM, games might just freeze for seconds at a time while levels/assets load in.

So, the in-game loading/streaming would potentially be faster, but the initial game load would take longer... which was probably the loading you're referring to... so I guess the answer is that it won't. =/ lol
 
After some time, most PCs will be much faster than the next-gen consoles, faster than those consoles that will sell up to 100 million over 7 years. Just like now.
When the PS5 launches, most people will still own a base PS4, just like so many PC gamers are left with mere 1060s (or something in that range).

Right, but this is 2019, not 2005. Games going forward are going to be made for phones / tablets / consoles / streaming services and so on.

Let's look at the main game platforms. You have phone platforms with their built-in hardware, but you also have xCloud and Stadia, which are both powered by x86 and AMD graphics. Then you have the consoles, and the PS4/Xbox One and their successors are powered by … you guessed it, x86 and AMD graphics. So a developer is still going to target PC hardware and is still going to let you add more bells and whistles if you have the hardware to run them. Also, going forward, developers will get an extra pop of sales if they add advanced features that might not run well on the future consoles. Why? Because they will get PC sales, and in the future it will let them easily push out enhanced editions on the streaming platforms.
 
If you are talking about SSD patent application in another thread, that is US20170097897A1 https://patents.google.com/patent/US20170097897A1/en
- that's not actually a "file system internal to the SSD". It's just a custom flash-aware file system on a separate disk or partition. SSDs know nothing about file systems; they work with blocks/sectors.

That figure describes their software stack for disk access, where the SSD partition uses a custom file system to access read-only, protected game level data. This filesystem makes use of large 64 KB blocks (clusters), and the NVMe SSD controller supports hardware decompression of these 64 KB blocks, while contiguous allocation and data compression are handled in the software stack. The latter also implements a custom lightweight API (i.e. without HDD-based 'abstractions') to read data from this SSD partition.

Of course the software stack also supports standard file systems typical for USB media and optical/hard disk drives and SSDs, with full read-write access.


If loading times are not improved, what problem does this additional RAM actually solve?




First, I doubt the patent will be granted. They basically describe an integrated 4-channel flash memory controller with an NVMe (PCIe) interface, and an address translation layer implemented with multi-core ARM processors and dedicated look-up caches. There are already dozens of such integrated chips from several vendors like Phison, Silicon Motion, Samsung, Intel, SandForce, Marvell, and others.

Also, hardware compression has already been implemented by SandForce and probably a few others, to limited success; major vendors chose to completely ignore it. Compression can increase read throughput for lower-end flash memory parts, but only as long as your data is compressible at a decent ratio. It cannot substitute for flash-aware filesystems and device IO patterns based on large blocks.

And a custom API layer for accessing flash memory storage is not such a novelty either.

Thanks for the explanation about the firmware API file system.

This is exactly what I read: they try to work around the bottleneck of PC SSDs, where the software is optimized for HDD usage, by using hardware to take pressure off the CPU, to decompress the data fast enough during game streaming, and by customizing the controller I/O for read usage. Everything else is already inside every SSD.

I know some SSDs have data decompressors and they use ARM CPUs too, but it seems the decompressor can be useful for games. This is not a PC that has to do tons of things outside of games. On Microsoft's side, it seems the games will be decompressed by the GPU at loading time.

That said, I don't think this will be as fast as benchmark SSD speeds; in a real-world application, probably 4 to 5 GB/s max, or 2 to 3 GB/s, and that is already huge.
 
Right, but this is 2019, not 2005. Games going forward are going to be made for phones / tablets / consoles / streaming services and so on.

Let's look at the main game platforms. You have phone platforms with their built-in hardware, but you also have xCloud and Stadia, which are both powered by x86 and AMD graphics. Then you have the consoles, and the PS4/Xbox One and their successors are powered by … you guessed it, x86 and AMD graphics. So a developer is still going to target PC hardware and is still going to let you add more bells and whistles if you have the hardware to run them. Also, going forward, developers will get an extra pop of sales if they add advanced features that might not run well on the future consoles. Why? Because they will get PC sales, and in the future it will let them easily push out enhanced editions on the streaming platforms.
Sure, you will get Minecraft, Fortnite and Rocket League, which can scale to very low-spec hardware.
Not only can games scale they can be successful, good and enjoyable. No one is denying or saying that's not the case for many games.

But if you think every game can scale like that then you are simply mistaken.
The reason the current minimum specs for many big-budget cross-platform games aren't higher is that the games were made with the current-gen consoles in mind.
There are things that aren't done or even considered simply because of the consoles' performance profiles.
I'm not talking about just scaling graphics, which is the easiest part to do.

Same with SSDs: currently on PC, SSDs are used as a quality-of-life improvement because the consoles have HDDs. When cross-platform games make use of the SSD, you will not be able to use an HDD-based PC.
Sure, you could run it, but it won't be what anybody would consider playable.

Something to consider regarding cross-platform/minimum-spec support:
  • Longer, because the tools, engines and APIs are the same
  • Shorter, because developers already have knowledge of the APIs, tools and engines
The cross-platform tail may be longer, but we may get exclusives quicker at the same time, raising the minimum spec.
 
Are patents ever denied these days? The Patent Office makes money from granting patents.
The main cost is in filing the patent, or related "value-added services", not in the granting of the patent or maintaining it.

The patent system is in dire need of reform, but nobody can agree how. You have to protect those sinking a ton into expensive R&D for genuinely new, useful methods and technologies, because if you don't, there is no incentive to spend to innovate when competitors can use your work for free; but equally, some things patented are just ridiculous. The biggest stumbling block is the need to change the patent system globally - you can't just do it in one country or region.
 
The main cost is in filing the patent, or related "value-added services", not in the granting of the patent or maintaining it.
Not from what I've seen. It's £445 to file if proceeding all the way to grant, and then increasingly expensive to maintain up to £610 in the 20th year, totalling £4640 to maintain it. Every patent not granted is a long-term loss of four grand. The PO makes no money from rejecting patents, hence my doubt (in the face of clear evidence to the contrary, with patents granted for tech that clearly isn't patentable) that they do any work to ensure patents are actually valid and instead just grant all of them.
 
Not from what I've seen. It's £445 to file if proceeding all the way to grant, and then increasingly expensive to maintain up to £610 in the 20th year, totalling £4640 to maintain it.

Ah, I wasn't aware it changed last year in the UK. As you can see from the changes, it is unashamedly about revenue whereas previously it was not.
 
This is exactly what I read: they try to work around the bottleneck of PC SSDs, where the software is optimized for HDD usage, by using hardware to take pressure off the CPU, to decompress the data fast enough during game streaming, and by customizing the controller I/O for read usage.
I just wanted to clarify this in the context of the above patent being applicable to the PC.

Hardware compression would save CPU cycles, but that's less of a concern for an 8-core CPU, and the effect on the read throughput would be minimal - block-level data compression is excellent for text (ASCII) files, but far less effective for binary data like textures and geometry, and absolutely ineffective for compressed files.

Fast load times and high read throughput are allowed by the custom read-only file system that uses large blocks and contiguous allocations. If you only have to write your data once, and cannot change it afterwards, there is much less fragmentation - only the free space can become fragmented when you delete some data, but with contiguous allocation, defragmenting the entire volume to consolidate the free space could only take seconds.

Emulating this on the PC with a standard NTFS file system will be ineffective even with large 2 MB clusters, unless you implement similar file access / allocation / defragmentation APIs at the OS level.
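
To make that concrete, here is a minimal toy sketch of such a read-only layout (not Sony's or anyone's actual format; the pack structure, names and 64 KiB alignment are my own assumptions): a flat index plus contiguous, aligned payloads, so reading an asset is one table lookup and one os.pread, with no general-purpose file system in the path.

Code:
import os, struct, json

ALIGN = 64 * 1024  # assumed block/cluster size

def build_pack(path, assets):
    # Flat JSON index of name -> (offset, length), then contiguous aligned blobs.
    index, blob = {}, bytearray()
    for name, data in assets.items():
        index[name] = (len(blob), len(data))
        blob += data
        blob += b"\0" * ((-len(blob)) % ALIGN)      # pad each asset to ALIGN
    header = json.dumps(index).encode()
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(header)))
        f.write(header)
        f.write(b"\0" * ((-f.tell()) % ALIGN))      # align the payload start
        f.write(blob)

class Pack:
    # Read-only access: load the index once, then one os.pread() per asset.
    def __init__(self, path):
        self.fd = os.open(path, os.O_RDONLY)
        (hlen,) = struct.unpack("<I", os.pread(self.fd, 4, 0))
        self.index = json.loads(os.pread(self.fd, hlen, 4))
        self.base = (4 + hlen + ALIGN - 1) // ALIGN * ALIGN
    def read(self, name):
        offset, length = self.index[name]
        return os.pread(self.fd, length, self.base + offset)

build_pack("level01.pak", {"terrain": b"T" * 200_000, "props": b"P" * 50_000})
pack = Pack("level01.pak")
print(len(pack.read("terrain")), len(pack.read("props")))   # 200000 50000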

It's in the best interests to grant all of them, and let the law courts argue out the details of violations and precedent.
Yes, it does look so. But even if that Sony patent is granted, its claims cover both the SSD controller hardware and the OS software stack - and that won't apply to the PC, since no single PC product implements both. There are SSD vendors (and they are not interested in data compression) and there are OS vendors - who would they sue in court?

So, the in-game loading/streaming would potentially be faster, but the initial game load would take longer... which was probably the loading you're referring to... so I guess the answer is that it won't. =/ lol
:yes: Yes, we are talking about ways to achieve both a <1 s start time and fast level loading times. And it doesn't look like a PC with 32 GB of RAM - but with an SSD formatted with a standard file system, which gives you a typical random read throughput of 20 MB/s on a typically heavily fragmented volume - will be comparable to a game console with a custom file system and proprietary APIs.
 
I just wanted to clarify this in the context of the above patent being applicable to the PC.

Hardware compression would save CPU cycles, but that's less of a concern for an 8-core CPU, and effects on the read throughput would be minimal - block-level compression is excellent for text (ASCII) files, but far less effective for binary data like textures and geometry, and absolutely ineffective for compressed files.

Fast load times and high read throughput are allowed by the custom read-only file system that uses large blocks and contiguous allocations. If you only have to write your data once, and cannot change it afterwards, there is much less fragmentation - only the free space can become fragmented when you delete some data, but with contiguous allocation, defragmenting the entire volume to consolidate the free space could only take seconds.

Emulating this on the PC with a standard NTFS will be ineffective even with large 2 MB clusters, unless you implement similar file access / allocation / defragmentation APIs at the OS level.


Yes, it does look so. But even if that Sony patent is granted, its claims cover both the SSD controller hardware and an OS software stack - and that won't apply to the PC, since no single PC product implements both. There are SSD vendors (and they are not interested in data compression) and there are OS vendors. Who would they sue in court?


:D Yes, we are talking about ways to achieve both a <1 s start time and fast level loading times. And it doesn't look like a PC with 32 GB of RAM - but with an SSD formatted with a standard file system, which gives you a typical random read throughput of 20 MB/s on a typically heavily fragmented volume - will be comparable to a game console with a custom file system and proprietary APIs.

Textures use GPU-format compression, and geometry does not need it; it has its own forms of compression, like displacement mapping, usable on the GPU. I think they want to compress some of the files to keep the game size on the SSD as low as possible.
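
For a sense of scale (rough numbers only, ignoring mipmaps; BC1 and BC7 use the standard 8 and 16 bytes per 4x4 texel block):

Code:
SIDE = 4096
TEXELS = SIDE * SIDE

formats = {                              # bytes per texel
    "RGBA8 (uncompressed)": 4.0,
    "BC1 / DXT1 (8 B per 4x4 block)": 0.5,
    "BC7 (16 B per 4x4 block)": 1.0,
}

for name, bpt in formats.items():
    print(f"{name:32s}: {TEXELS * bpt / 2**20:6.1f} MiB")
# ~64 MiB raw, ~8 MiB as BC1, ~16 MiB as BC7. The GPU samples these formats
# directly, so an extra lossless pass mainly shrinks the install size on disk.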
 
Yes, it does look so. But even if that Sony patent is granted, its claims cover both the SSD controller hardware and the OS software stack - and that won't apply to the PC, since no single PC product implements both. There are SSD vendors (and they are not interested in data compression) and there are OS vendors - who would they sue in court?

Remember that most SSDs (a single drive/controller package) employ a native data structure that bears little relation to the OS filesystem, managed by the controller to handle encryption, bad-block mapping, garbage collection, wear levelling and crypto-shredding. But custom file systems are supported on almost all desktop OSs, so a patent predicated on a drive, controller and filesystem (software stack) wouldn't necessarily preclude it applying on PC.

But I doubt Sony care about anybody using their implementation on PC; they presumably want to make it difficult for competitors in the console space to use their approach. By PC I exclude macOS, as Apple have been switching to custom controllers for years and APFS is the most solid-state-friendly desktop filesystem in widespread use.
 