Rewrites in flash work somewhat like on an HDD: they are block based. You can't just rewrite one byte; you have to erase a whole block and then write the new data. So if you want to change one byte of a block, you have to read the whole block back, erase the block, then write the block back with that byte changed. HDDs are block based too (you always write whole sectors), though they can overwrite a sector in place without a separate erase step.
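To make that concrete, here's a minimal Python sketch of the read-modify-write cycle described above. The 128 KiB block size and the function itself are illustrative assumptions for this thread, not any real controller's firmware:

BLOCK_SIZE = 128 * 1024  # assumed erase-block size, matching the JMicron figure below

def rewrite_byte(flash_block: bytearray, offset: int, value: int) -> None:
    """Change one byte 'in place' the way flash forces you to."""
    # 1. Read the whole block into controller RAM.
    buf = bytearray(flash_block)
    # 2. Modify the single byte in the RAM copy.
    buf[offset] = value
    # 3. Erase the block (a flash erase resets all bits to 1, i.e. 0xFF).
    flash_block[:] = b"\xff" * len(flash_block)
    # 4. Program the entire modified block back.
    flash_block[:] = buf

blk = bytearray(BLOCK_SIZE)
rewrite_byte(blk, offset=42, value=0xAB)  # one changed byte costs a full block cycle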
Almost all hard drives produced in the last 30 years use a 512-byte sector size. (There have been proposals in recent years to move to a 4 KiB sector size.)
Most modern filesystems (e.g. NTFS and ext3) use a 4 KiB cluster size by default. Doing random writes with a conventional hard drive thus doesn't incur any extra write penalty beyond what the filesystem imposes.
Testing of early MLC SSDs such as the OCZ Core series suggested the first-generation JMicron 602 controller used an internal 128 KiB block size. This means that writing a single 4 KiB cluster actually does 32 times as much writing as would otherwise be necessary.
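The 32x figure is just the ratio of the two sizes; a quick sanity check in Python, using the sizes from the posts above:

CLUSTER = 4 * 1024        # 4 KiB filesystem cluster
ERASE_BLOCK = 128 * 1024  # 128 KiB internal block reported for the JMicron 602

amplification = ERASE_BLOCK // CLUSTER
print(amplification)  # 32 -> each 4 KiB random write rewrites 128 KiB of flash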
SSDs based on Intel and Samsung controllers don't have this problem, as they include more onboard RAM to permit a smaller block size. I haven't seen any testing of the new JMicron 602B-based SSDs yet, so I can't comment on whether newer drives share this issue.
The JMicron controller has no cache, so the stalls seem likely to occur with any sort of write operation, which is why the OCZ drives using SLC have issues too. Controllers with a cache can accept more writes while the flash itself is still doing erase operations; those without a cache end up waiting until the erase operations finish. This seems to occur during both small and large file writes. The no-cache and erase issues are magnified in MLC drives because they need to do more work per write than SLC does.
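As a toy model of why even a small buffer helps: the latencies and the cache-slot accounting below are made-up illustrative numbers, not measurements of any real controller.

ERASE_MS = 2.0  # assumed block-erase time
WRITE_MS = 0.1  # assumed program time per small write

def host_visible_time(n_writes: int, cache_slots: int) -> float:
    # Writes that fit in the cache are acknowledged immediately while the
    # erase runs in the background; the rest stall until an erase finishes.
    stalled = max(0, n_writes - cache_slots)
    return n_writes * WRITE_MS + stalled * ERASE_MS

print(host_visible_time(16, cache_slots=0))   # ~33.6 ms: every write eats an erase stall
print(host_visible_time(16, cache_slots=16))  # ~1.6 ms: erases hide behind the cache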
AFAIK, no commercial SSD uses onboard DRAM as a write-back cache between the OS and drive. Doing so would risk data loss if a large number of random writes were pending when power was lost. The RAM on the Intel controller is used to permit a smaller sector size and to implement an improved wear-leveling algorithm. Anand's article mentions this is true for the Intel SSDs, and I presume it's true for the Samsung controller as well.
So the performance of an SSD is actually best when new. After some use, performance degrades a bit and then levels off. Therefore, to benchmark an SSD accurately, it should be done on a used drive, not a new one.
This is very true. However, another issue is involved: the internal map of blocks with free space and spares used in wear-leveling can become fragmented, degrading the performance of sequential writes. AFAIK all existing SSDs are eventually affected by this. Here is a link showing the effect on even Intel's premium X25-E SSD.
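A rough back-of-the-envelope sketch of the fragmentation effect: the run lengths and the one-erase-per-run assumption below are mine, purely for illustration.

def erase_cycles(write_kib: int, free_run_kib: int) -> int:
    # Assume each contiguous run of free space the write spills into costs
    # roughly one erase/merge cycle; smaller runs mean more cycles.
    return -(-write_kib // free_run_kib)  # ceiling division

print(erase_cycles(1024, free_run_kib=1024))  # fresh drive, one big free run: 1 cycle
print(erase_cycles(1024, free_run_kib=16))    # fragmented free map: 64 cycles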
---
Whoops, my post (which I presume is still pending moderator approval) should read KiB not kib.
<mutter>Damn fool IEC kowtowing to the telecoms! I remember when a mb was a MB and a kilo depended upon who you were talking to...</mutter>