Solid state drives?

Your method would require reading everything in a block and re-writing it, so at 100 MB/s any write would take a minimum of 10 ms. That changes a bit with different read/write/erase throughputs, but it's in the vicinity of HDD times. Writes can be done one page at a time, though, so with garbage collection that improves by a factor of 128 (the number of pages per block), and that's where you see the big performance benefits.
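
To put rough numbers on that (a back-of-the-envelope Python sketch; the 512 KB block size and 128 pages per block are my assumptions, typical MLC figures rather than anything from a datasheet):

# Read-modify-write of a whole block vs. a single page write.
BLOCK_SIZE_MB = 0.5       # 512 KB erase block (assumed)
PAGES_PER_BLOCK = 128     # pages per erase block (assumed)
THROUGHPUT_MBPS = 100.0   # throughput figure from the post

# Rewriting a block means reading it out and writing it back.
rmw_ms = 2 * (BLOCK_SIZE_MB / THROUGHPUT_MBPS) * 1000
print(f"block read-modify-write: {rmw_ms:.0f} ms")    # ~10 ms, HDD territory

# With garbage collection, a write touches one page, not all 128.
page_ms = rmw_ms / PAGES_PER_BLOCK
print(f"single page write:       {page_ms:.3f} ms")   # ~0.078 ms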

In theory, if write wear were not a problem, the flash could be designed with a copy-on-write feature to handle this entirely in the chip. The chip would keep a certain number of spare blocks for consecutive writes. If you have to erase 128 pages at once, which may require, say, 900 ms, then you could allocate, say, 90 spare blocks (i.e. 45 MB) to reduce the worst-case write latency to 10 ms.
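
A quick sketch of that amortization, using the post's own hypothetical figures (900 ms erase, 90 spare blocks of 512 KB each):

# Copy-on-write into a pool of pre-erased spare blocks: while one block
# erases in the background, writes land in the spares, so the erase cost
# is spread across the whole pool.
ERASE_MS = 900.0      # hypothetical worst-case erase time from the post
SPARE_BLOCKS = 90     # 90 x 512 KB = 45 MB of spares

worst_case_ms = ERASE_MS / SPARE_BLOCKS
print(f"worst-case write latency: {worst_case_ms:.0f} ms")  # 10 ms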

Of course, since wear leveling is an important issue, this is just theoretical and not really useful in practice.
 
Okay, but I don't see how many small files are any different from one big file. All that matters is how much is being modified. 100 KB being changed in a fragmented 1 GB file is the same as one out of 10,000 100 KB files being changed.

Yes, I would agree with this. If it were just 100 KB out of an entire 1 GB file that was fragmented, I don't think you or I would be particularly concerned about it. ;) But the more plausible scenario for a "fragmented" 1 GB file could be a LOT worse. If it happens to be broken into 5 or 10 thousand fragments, that could have severe ramifications if the time ever came when you wanted to replace the file outright.

I wasn't really trying to say that big fragmented files would necessarily be more deadly than many small fragmented files. All I was suggesting is that, in general, fragmentation tends to involve more blocks in a read/update/write cycle than the nice and neat scenario where the affected files are compactly isolated to the minimum number of blocks needed to store a file of that size. :) It seems the goal of keeping an MLC SSD happy is to touch the fewest blocks possible with a write operation. Anything that increases the number of blocks that need to be touched beyond the absolute logical minimum required to store that data is just begging for trouble, I'm thinking.
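
To illustrate the "fewest blocks" point with a toy calculation (Python; the 512 KB block size is again an assumption):

import math

BLOCK_KB = 512   # assumed erase-block size

def blocks_touched(update_kb, fragments):
    # Worst case, each fragment dirties its own erase block, though a
    # fragment never needs more blocks than its own size requires.
    per_fragment_kb = update_kb / fragments
    return fragments * max(1, math.ceil(per_fragment_kb / BLOCK_KB))

print(blocks_touched(100, 1))    # contiguous 100 KB update: 1 block
print(blocks_touched(100, 50))   # same update across 50 fragments: 50 blocks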

Going a little further into SSD-unfriendly processes, it makes a great deal of sense that Windows background file indexing would be one phenomenon that epitomizes the worst thing you could expose an SSD to, no? It endlessly chews away at your supply of writable blocks and robs the idle time in which a garbage collection process could happen. So when you really intend to do some work on your SSD, you find it is still trying to catch a breather, and then you get a nasty stall.
 
Is there a way to forcefully "clean" an SSD? Maybe wiping it with 0s?

Writing zeros probably won't help here, because the controller will still erase the block and then program the zeros into it. Maybe a format would cause the controller to erase without writing, but I'm not sure about that.

Intel wanted to create a new storage specification for SSDs to handle these SSD-specific operations more directly, but it's still not ready.
 
The problem is that Windows has no way to directly tell an SSD to clear a cell. I suppose you could go through and write all 0s, but then the cell is written with 0s and isn't actually cleared. In other words, rather than being an empty cell, it's a cell written with 0s.
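
To see why, here's a toy model of NAND behavior as I understand it (my own simplification, not any vendor's spec): an erase sets every bit in a block to 1, and programming can only flip bits from 1 to 0, so "writing zeros" is just another program operation:

class NandBlock:
    def __init__(self, bits=8):
        self.cells = [1] * bits    # erased state: all ones
        self.erased = True

    def program(self, data):
        # Programming can only clear bits (1 -> 0), never set them.
        self.cells = [c & d for c, d in zip(self.cells, data)]
        self.erased = False

    def erase(self):
        self.cells = [1] * len(self.cells)
        self.erased = True

blk = NandBlock()
blk.program([0] * 8)    # "wiping with 0s"
print(blk.erased)       # False: full of data, not empty
blk.erase()
print(blk.erased)       # True: only an erase makes it writable again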

At least that's my understanding from discussions I've read on the topic.

Regards,
SB
 
Yes, that is another point I wanted to get out in earlier remarks: it seems that better communication/coordination between the OS and the SSD could assist greatly in masking these internal overhead situations... sort of like a more integrated approach to software-based and hardware-based disk management, beyond just the generic IDE/ATA standard for storage devices.
 
The problem is that Windows has no way to directly tell an SSD to clear a cell.
I've read somewhere (sorry, can't remember the source) that Win7 will do something like that through special ATA commands, i.e. if a cell becomes unoccupied in the filesystem, it signals that to the SSD so the SSD can clear the cell.
 
I've read somewhere (sorry, can't remember the source) that Win7 will do something like that through special ATA commands, i.e. if a cell becomes unoccupied in the filesystem, it signals that to the SSD so the SSD can clear the cell.

That would help performance of SSDs tremendously if it's true.

Regards,
SB
 
They talked about this at last year's WinHEC. I found a report here:

http://www.cio.com.au/article/266551/how_windows_7_will_won_t_work_better_ssds?pp=1

Thanks for that link... this is the most interesting bit from it with regard to SSDs.

Second, Windows 7's new "trim" feature will improve performance three ways. It will: Reduce the amount of data to be deleted, which improves the SSD's lifespan; delete garbage data in advance, which speeds up writing of data; and maximize the amount of unused data, which helps even out the wear and tear on the SSD, Shu said.

I already have a habit of turning off defrag on any SSD drives. That's something people mixing SSDs and Vista should always do...

I'm hoping that what they mean by "delete garbage data in advance" refers to an ability to instruct an SSD to perform garbage collection/empty cells, and not some other meaning.

Regards,
SB
 
Just to update, my little Transcend SSD in a Mac Mini project has been a success so far! Granted, I haven't noticed any extreme speed increases in disk-bound operations, but I haven't really laid into it yet. I was mostly concerned about being able to make a bootable restored drive from my original HDD. That part went remarkably trouble-free, thanks to a little utility app built right into OS X called [gasp] Disk Utility. Actually, I am well acquainted with this software; I just never used the Restore function in it before. You just tell it to take one HDD and make a bootable restore of it on another "HDD". No need to mess with images or anything! It didn't even care that I was restoring to a smaller drive. As long as the net size of the files fits on the smaller drive, it'll work.

Haven't noticed any stalls (but I haven't really pushed it). Bootup and shutdown are nicely speedy, though. Firefox performance is snappy so far. The real test, however, will come as more time passes (as the uptime, with nightly sleeps, grows out to 30 days or so before a reboot). I am curious to see how well it copes with increasing reliance on VM on the SSD, as "stuff" accumulates in RAM and VM with use.
 
Fwiw, there was an s-load of dust bunnies packed inside that lil Mini. If Minis could cough up hairballs like a cat, this one was definitely due.
 
I'm hoping that what they mean by "delete garbage data in advance" refers to an ability to instruct an SSD to perform garbage collection/empty cells, and not some other meaning.

That's exactly what they mean.

Today when a file is deleted in the filesystem the SSD has no idea that those blocks are no longer in use, so it must wait until they've been overwritten by a write command before it knows it's safe to erase them.

In Win7, the trim command will be sent by NTFS when it frees allocated clusters in response to a file deletion. This means the SSD will know the space is now unused and thus can immediately put those blocks onto its erase queue.

This will help ensure there will always be a pool of already erased blocks and reduce the write "stalling" that you may see currently.
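
A toy flash translation layer sketch of that difference (my own simplification, not how any particular controller actually works):

class ToyFTL:
    def __init__(self):
        self.mapped = set()      # logical blocks the drive thinks hold data
        self.erase_queue = []    # blocks queued for background erase

    def overwrite(self, lba):
        # Without trim, an overwrite is the first hint the old copy is
        # garbage, so its erase lands on the critical path of the write.
        if lba in self.mapped:
            self.erase_queue.append(lba)
        self.mapped.add(lba)

    def trim(self, lba):
        # With trim, NTFS reports freed clusters at delete time, so the
        # drive can erase during idle time, before the next write needs it.
        if lba in self.mapped:
            self.mapped.discard(lba)
            self.erase_queue.append(lba)

ftl = ToyFTL()
ftl.mapped.add(42)       # a file's data sits in logical block 42
ftl.trim(42)             # file deleted; block queued for erase immediately
print(ftl.erase_queue)   # [42] -- pre-erased, so the next write won't stall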
 
I think most (if not all) USB flash drives have wear leveling; otherwise their life expectancy would be extremely low. Anyway, USB flash drives normally have pretty bad write performance, so they are unlikely to have a serious write congestion problem.
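
For what it's worth, a minimal sketch of the dynamic wear leveling being assumed here: steer each logical write to the least-worn physical block (erasing/freeing the old copy is elided), so no single block wears out early:

erase_counts = [0] * 16   # one erase counter per physical block (toy size)
mapping = {}              # logical block -> current physical block

def write(logical_block):
    # Pick the physical block with the lowest erase count for this write.
    physical = min(range(len(erase_counts)), key=lambda b: erase_counts[b])
    erase_counts[physical] += 1
    mapping[logical_block] = physical

for _ in range(1000):
    write(0)              # hammer the same logical block 1000 times...
print(max(erase_counts) - min(erase_counts))  # ...yet wear spread stays <= 1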
 
Well, presumably it could happen if you write to a USB pen drive constantly or extremely frequently, and it uses MLC chips.

I can't think of many scenarios where you're constantly writing to a pen drive enough to notice something like that, however. Also, USB is another layer between the drive access and Windows, i.e. Windows shouldn't stall waiting for USB access; rather, writes to the USB device will just slow down considerably.

Regards,
SB
 