...So, unless the SSD spreads its writes evenly over all sectors, defects develop. Which means just about every write can require relocating up to 512k of data.
Sectors (the 512k erase units) can only be erased as a whole, but can be written per block (4k, or sometimes even smaller). As long as there is free space, the best the SSD can do when a sector fills up is mark the old block as deleted and write the new data elsewhere. Once it runs out of free space, it has to compact and rewrite whole 512k sectors for every write of even a single byte, just to make room.
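To make that "move whole 512k sectors for a single byte" point concrete, here is a toy model of the copy-before-erase step (the 4k and 512k figures come from above, so 128 blocks per sector; nothing here is any real controller's algorithm):

    # Toy model of erase-before-write: rewriting one 4k block inside a
    # full 512k sector forces the controller to copy every still-valid
    # block out first, because the sector can only be erased as a whole.
    # Geometry (4 KB blocks, 128 blocks per 512 KB sector) is assumed.
    BLOCK_SIZE = 4 * 1024           # smallest writable unit ("block")
    BLOCKS_PER_SECTOR = 128         # 128 * 4 KB = 512 KB erase unit ("sector")

    def rewrite_one_block(valid_blocks: int) -> int:
        """Bytes actually written to flash when the host rewrites a single
        4 KB block inside a sector containing `valid_blocks` valid blocks."""
        copied = (valid_blocks - 1) * BLOCK_SIZE    # salvage everything else
        return copied + BLOCK_SIZE                  # plus the new data itself

    if __name__ == "__main__":
        written = rewrite_one_block(BLOCKS_PER_SECTOR)
        print(f"host asked for {BLOCK_SIZE} bytes, flash wrote {written} bytes "
              f"(amplification x{written // BLOCK_SIZE})")

For a completely full sector that works out to 128 times as much flash traffic as the host actually asked for.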
Linux has a native Flash file system. Most SSD drives (except the old ones and Intel's) even use a microcontroller (ARM, most likely) that runs a custom Linux kernel with such a file system, with the Flash chips in a RAID configuration. (Exactly like most RAID controllers for Windows servers.) And that works fine, as long as the host OS recognises this and tells the SSD which logical sectors are empty and can be overwritten.
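As a rough illustration of the "Flash chips in a RAID configuration" part: the controller interleaves consecutive blocks across several independent chips so they can be written in parallel, much like RAID-0 striping. A minimal sketch (the channel count and mapping are mine, not any particular controller's layout):

    # RAID-0 style striping of logical blocks across flash channels.
    # A real controller also handles mapping, ECC and wear levelling;
    # this only shows the interleaving. NUM_CHANNELS is an assumption.
    NUM_CHANNELS = 8

    def channel_for_block(logical_block: int) -> tuple[int, int]:
        """Map a logical block number to (channel, offset within channel)."""
        return logical_block % NUM_CHANNELS, logical_block // NUM_CHANNELS

    if __name__ == "__main__":
        for lb in range(10):
            ch, off = channel_for_block(lb)
            print(f"logical block {lb:2d} -> channel {ch}, offset {off}")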
Telling the SSD which sectors are free requires the TRIM command, which simply says: this sector is empty. TRIM is a new addition in Windows 7.
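In terms of the toy model above, the whole point of TRIM is that blocks the OS has declared empty no longer have to be copied when their sector gets recycled. A sketch with the same assumed geometry:

    # Why TRIM helps: trimmed (logically empty) blocks are skipped when a
    # 512k sector is compacted, so far less data has to be copied around.
    # Geometry (4 KB blocks, 128 blocks per 512 KB sector) is assumed.
    BLOCK_SIZE = 4 * 1024
    BLOCKS_PER_SECTOR = 128

    def compaction_copy_bytes(valid_blocks: set[int]) -> int:
        """Bytes copied when recycling a sector whose valid blocks are given."""
        return len(valid_blocks) * BLOCK_SIZE

    if __name__ == "__main__":
        all_blocks = set(range(BLOCKS_PER_SECTOR))
        trimmed = set(range(0, BLOCKS_PER_SECTOR, 2))   # OS said: these are empty
        print("without TRIM:", compaction_copy_bytes(all_blocks), "bytes copied")
        print("with TRIM:   ", compaction_copy_bytes(all_blocks - trimmed), "bytes copied")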
Under Linux, you don't actually need SSD drives as such, since the OS is quite capable of using any available Flash memory directly as part of the file system. But most people use Windows.
So the lack of TRIM would definitely rule out any RAID configuration for me. The whole situation seems slightly hinky to me: SSDs need an erase flag, magnetic drives don't. But what a lot of fuss. So what does Linux do, exactly? The naive solution would just be to have a 1-2MB buffer in volatile memory and assemble 512k pages on the fly. Given those seek times, to what extent is fragmentation even an issue?
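That naive buffer, sketched out so we're talking about the same thing (the sizes and names are purely illustrative):

    # Naive write-coalescing: collect 4k writes in a small RAM buffer and
    # flush them to flash one whole 512k sector at a time, so no sector
    # ever needs a read-compact-erase-rewrite cycle for a small write.
    # Sizes and the flush callback are assumptions for illustration.
    BLOCK_SIZE = 4 * 1024
    SECTOR_SIZE = 512 * 1024

    class CoalescingBuffer:
        def __init__(self, flush_sector):
            self.flush_sector = flush_sector   # callable taking one 512k bytes object
            self.pending = bytearray()

        def write(self, block: bytes) -> None:
            assert len(block) == BLOCK_SIZE
            self.pending += block
            if len(self.pending) >= SECTOR_SIZE:
                self.flush_sector(bytes(self.pending[:SECTOR_SIZE]))
                del self.pending[:SECTOR_SIZE]

    if __name__ == "__main__":
        flushed = []
        buf = CoalescingBuffer(lambda sector: flushed.append(len(sector)))
        for _ in range(200):                  # 200 * 4k = 800k of small writes
            buf.write(b"\x00" * BLOCK_SIZE)
        print("sectors flushed:", flushed)    # one full 512k sector; rest still buffered

The obvious catch is that the buffer is volatile, so anything not yet flushed is lost on power failure, and it only helps sequential-ish writes; random rewrites of blocks already on flash still need the relocation dance.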
Also, since everyone's gone to the trouble of developing these badass microcontrollers, why not separate the controller chip, put it on a card or on the motherboard, and let the user pool any flash into an SSD?
Seems like semantics to me.