OCZ SSD sales increase 265%

...So, unless the SSD spreads its writes evenly across all sectors (wear levelling), cells wear out and defects occur. Which means: just about every write can require relocating up to 512 kB of data.

Sectors (erase blocks, typically 512 kB) can only be erased as a whole, but can be written per block (a page of 4 kB, or sometimes even smaller). If a sector is full, the best the SSD can do is mark a block as stale and write the new copy elsewhere, until it runs out of free space. After that, it has to compact and rewrite whole 512 kB sectors for each write of even a single byte, to make room.
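
To make that granularity mismatch concrete, here's a toy sketch (Python; the sizes and names are purely illustrative, not any real controller's interface):

PAGE_SIZE = 4 * 1024                        # 4 kB write granularity
BLOCK_SIZE = 512 * 1024                     # 512 kB erase granularity
PAGES_PER_BLOCK = BLOCK_SIZE // PAGE_SIZE   # 128 pages per erase block

class EraseBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # None = erased, writable
        self.next_free = 0

    def write_page(self, data):
        # NAND pages can only be programmed once after an erase;
        # there is no in-place overwrite.
        if self.next_free == PAGES_PER_BLOCK:
            raise IOError("block full: erase all 512 kB before rewriting")
        self.pages[self.next_free] = data
        self.next_free += 1

    def erase(self):
        # The erase unit is the whole block, and each erase wears the cells.
        self.pages = [None] * PAGES_PER_BLOCK
        self.next_free = 0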

Linux has native Flash file systems (JFFS2, and more recently UBIFS). Most SSDs use a microcontroller (ARM, most likely) running firmware that does much the same job: a flash translation layer that manages the Flash chips in a RAID-like configuration. (Much like a hardware RAID controller manages its disks.) And that works fine, as long as the host OS recognises this and tells the SSD which logical sectors are empty and can be overwritten.

That requires the TRIM command, which simply tells the SSD: this logical sector is empty. TRIM support is a new addition in Windows 7.
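
Conceptually it's nothing more than a hint flowing from the file system down to the drive. A minimal sketch (hypothetical interface, just to show the information flow):

class SSD:
    def __init__(self):
        self.live = {}   # logical sector -> data the drive must preserve

    def write(self, lba, data):
        self.live[lba] = data

    def trim(self, lba):
        # The file system says this sector is free; the drive no longer
        # has to carry its contents through garbage collection.
        self.live.pop(lba, None)

ssd = SSD()
ssd.write(100, b"file contents")
# The file gets deleted. Without TRIM the drive never hears about it and
# keeps copying sector 100 around forever; with TRIM it can drop it:
ssd.trim(100)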

Under Linux, you don't actually need an SSD as such, as the OS is quite capable of using any raw Flash memory available directly as part of the file system. But most people use Windows. :)

So the lack of TRIM would definitely rule out any RAID configuration for me. The whole situation seems slightly hinky to me: SSDs need an erase flag, magnetic drives don't. But what a lot of fuss. So what does Linux do, exactly? The naive solution would just be to keep a 1-2 MB buffer in volatile memory and assemble 512 kB sectors on the fly. Given those seek times, to what extent is fragmentation even an issue?

Also, since everyone's gone to the trouble of developing these badass microcontrollers, why not separate the chip, put it on a card or on the motherboard, and let the user pool any flash into an SSD?

Seems like semantics to me.
 
Lack of TRIM support is only an issue if you're doing huge amounts of writes AFAIK, enough that you really stress the drive's empty-block recycling mechanism. TRIM allows the drive to pre-recycle empty blocks, but unless you're really doing a lot of writes, this won't be an issue.

You'll do fine with the drive's normal garbage collection scheme in all but the most extreme corner cases...

There are other, bigger reasons to avoid RAID 0 than losing TRIM. :)
 
Lack of TRIM support is only an issue if you're doing huge amounts of writes AFAIK, enough that you really stress the drive's empty-block recycling mechanism. TRIM allows the drive to pre-recycle empty blocks, but unless you're really doing a lot of writes, this won't be an issue.

You'll do fine with the drive's normal garbage collection scheme in all but the most extreme corner cases...

Strikes me OS+Internet browser equals huge amounts of writes. I have a clean install of Win 7 right now and I hear the hard disk plunking away as I type--looks like various svchost processes, logging and so forth. Browsers in my experience tend to do loads of little writes. So do you guys with SSDs tend to keep the swap file and the browser cache on that same disk? What is the ideal layout? Also, is there any consensus on MTBF (I did due diligence w/google and came up with nothing)?

There are other, bigger reasons to avoid RAID 0 than losing TRIM. :)

So you would say no RAID? Why (I've never done a RAID setup before)?
 
Strikes me OS+Internet browser equals huge amounts of writes.
No, no, no... We're talking gigabytes' worth of writes here. You'd need to dump in enough writes to max out the drive's spare block area and then some; some web browsing and random OS stuff is piddly in comparison.
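
Rough numbers, just to show the scale (both figures below are assumptions, and this ignores that garbage collection keeps recycling blocks in the background):

capacity_gb = 120
spare_fraction = 0.07                     # assumed over-provisioning
spare_gb = capacity_gb * spare_fraction   # ~8.4 GB of spare blocks
browsing_mb_per_hour = 50                 # generous guess for cache/logs
hours = spare_gb * 1024 / browsing_mb_per_hour
print(f"~{hours:.0f} hours of browsing to write {spare_gb:.1f} GB")  # ~172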

So do you guys with SSDs tend to keep the swap file and the browser cache on that same disk?
Erm, I have 12 gigs of RAM in my PC, so I can't really speak for the majority of users... :) I have a smallish swap file on a regular 2TB Hitachi HDD, and that's also where I keep my browser cache.

I've got my OS, applications and World of Warcraft installed on my SSD, with everything else on aforementioned Hitachi HDD.

Also, is there any consensus on MTBF (I did due diligence w/google and came up with nothing)?
Nah, there's no real consensus there. :) The drive should in theory last your lifetime and more, but hey... These are consumer devices, and SSDs are new tech, so they probably won't last that long. Most likely anything you buy today will last long enough that it'll be obsolete before it breaks though. Unless you're unlucky and receive a dud (unlikely), or there's some fundamental flaw in one or more of the main components of the drive. (Also unlikely.)

So you would say no RAID? Why (I've never done a RAID setup before)?
RAID 0 doubles the chance of failure, because either of the two drives failing means total loss of data, due to the way everything is striped across both units. It's convenient in that it doubles the capacity of the logical drive/partition, but the disadvantage outweighs that advantage IMO.

Theoretically you also get extra speed from RAID 0, but SSDs are already crazy fast today, and long linear reads and writes are very uncommon in everyday use, so you won't have much use for that speed anyway. All it's good for is copying really, really large files, and since you won't have any other I/O device capable of matching your SSD RAID array in speed, you won't gain any performance anyhow. ;)
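
The failure arithmetic is simple enough (the per-drive rate below is an assumed figure, purely for illustration):

p = 0.02                      # assumed annual failure rate of one drive
raid0 = 1 - (1 - p) ** 2      # RAID 0 dies if either drive dies
print(raid0)                  # 0.0396, i.e. almost exactly 2 * p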
 
So do you guys with SSDs tend to keep the swap file and the browser cache on that same disk? What is the ideal layout?

My system has 4GB of memory and a 120GB OCZ Vertex (first generation). I have the swap file completely disabled, and only enable it if a program absolutely needs it. I use a program to provide a small RAM-disk, around 384 MB, and place my Windows TMP/TEMP directories and my browser's cache directory on it. On top of this I have the drive indexing service disabled on the SSD. This removes the largest sources of repeated small writes to the SSD while keeping the system amazingly zippy.
 
So the lack of TRIM would definitely rule out any RAID configuration for me. The whole situation seems slightly hinky to me: SSDs need an erase flag, magnetic drives don't. But what a lot of fuss. So what does Linux do, exactly? The naive solution would just be to keep a 1-2 MB buffer in volatile memory and assemble 512 kB sectors on the fly. Given those seek times, to what extent is fragmentation even an issue?

Also, since everyone's gone to the trouble of developing these badass microcontrollers, why not separate the chip, put it on a card or on the motherboard, and let the user pool any flash into an SSD?

Seems like semantics to me.
Like Grall and Davros said.

Some additional info:

The problem is still fragmentation, but in a completely different way. Consider a blank drive. That's easy: you write each cluster to a block, sequentially. If a cluster is altered, you mark the old block as deleted and write a new one at the end. And you keep an index that translates between logical and physical clusters.

At some point, all available space on the drive is in use. But the OS thinks the drive is only half full, or probably a lot less. That's because all those superseded clusters still take up space, and the drive doesn't know which other clusters the file system has flagged as deleted/empty.

So, you have to compact the large, fragmented physical sectors, by writing the blocks (clusters) that aren't marked as deleted/moved by the SSD to a spare sector you keep in reserve to do exactly that, and then erasing the whole old sector. Erasing is something each sector survives roughly 3,000 to 100,000 times, depending on the type and quality of the Flash (MLC vs. SLC).

TRIM makes that process a whole lot easier, as it allows the drive to simply erase whole sectors and only move the clusters that are actually in use by the file system.

So, yes, you don't need TRIM, but it is a significant boost to the speed and durability of the drive, if it's used a lot.
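
To make the whole dance concrete, here's a minimal toy sketch of that bookkeeping (Python; every name and size here is made up for illustration, and real firmware is vastly more sophisticated -- wear levelling, ECC, parallel channels and so on):

PAGES_PER_BLOCK = 128          # clusters per 512 kB "sector"

class ToyFTL:
    def __init__(self, n_blocks):
        self.blocks = [[] for _ in range(n_blocks)]  # None entry = stale
        self.mapping = {}              # logical cluster -> (block, slot)
        self.erase_counts = [0] * n_blocks

    def _find_space(self):
        for b, blk in enumerate(self.blocks):
            if len(blk) < PAGES_PER_BLOCK:
                return b
        return None

    def write(self, logical, data):
        old = self.mapping.get(logical)
        if old is not None:            # mark the superseded copy stale
            self.blocks[old[0]][old[1]] = None
        b = self._find_space()
        if b is None:                  # out of room: compact first
            self.compact()
            b = self._find_space()
        if b is None:
            raise IOError("drive full: nothing stale to reclaim")
        self.blocks[b].append((logical, data))
        self.mapping[logical] = (b, len(self.blocks[b]) - 1)

    def trim(self, logical):
        # TRIM: the file system says this cluster is gone, so compaction
        # never has to copy it anywhere again.
        loc = self.mapping.pop(logical, None)
        if loc is not None:
            self.blocks[loc[0]][loc[1]] = None

    def compact(self):
        # Pick the sector with the most stale clusters, copy its live
        # clusters out, and erase it. The erase is what wears the Flash.
        victim = max(range(len(self.blocks)),
                     key=lambda b: self.blocks[b].count(None))
        live = [e for e in self.blocks[victim] if e is not None]
        self.blocks[victim] = []
        self.erase_counts[victim] += 1
        for logical, data in live:
            self.mapping.pop(logical, None)
            self.write(logical, data)

ftl = ToyFTL(n_blocks=4)
for i in range(1000):
    ftl.write(i % 50, b"new version")   # keep rewriting 50 clusters
print(ftl.erase_counts)  # erases pile up although only 50 clusters are live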


SSDs in RAID 0 are only a very small extra risk, as a RAID-like arrangement is how the Flash chips are used inside the SSD itself anyway. The drive does use some error correction, and most Flash chips are sold with a small percentage of bad sectors to begin with.

And I would recommend putting your swap file on that SSD (if it supports TRIM, and if you feel the need for a swap file in the first place), because the drive can take it, and it significantly speeds up your system. Temporary files are best put on a RAM drive, as they mostly just clog up your system otherwise.
 
Thanks for filling me in on the details guys, that cleared up my confusion. I have 8GB of RAM, so it's not like I need a swap file; I've just traditionally thought it best practice to have one. I've been meaning to set up a RAM drive in the rather ingenious way Albuqurque(sp?) described in that post for some time, just haven't gotten around to it.
 
As stupid as this may sound at first: if you have a good bit of RAM, don't put the swap file on the SSD, put it on a RAM drive. Windows always uses the swap file to some degree, even when there's plenty of free RAM.
 
As stupid as this may sound at first: if you have a good bit of RAM, don't put the swap file on the SSD, put it on a RAM drive. Windows always uses the swap file to some degree, even when there's plenty of free RAM.
Unless you turn it off. :)
 
As stupid as this may sound at first: if you have a good bit of RAM, don't put the swap file on the SSD, put it on a RAM drive. Windows always uses the swap file to some degree, even when there's plenty of free RAM.

No, I agree entirely. Win 7 is pretty tame about it, but I'm just a little reluctant to kill the page file completely. Mainly I just don't know the details of Windows virtual memory, and I've found in the past that subverting stupid Windows behavior--even when it completely makes sense to--can be punishing. Currently my page file is an 8GB contiguous file on the outer edge of the platter(s). When I get an SSD (maybe in the next couple of days) I definitely wouldn't want a page file on it, as that's ~2.5GB of writes every time I power up the system. Now, whether to have a page file at all, I don't know.
 
Well, completely turning it off works about 99% of the time. Some applications require one to function well, or at all, but those are old and few. You'll be fine if you have at least 4GB of RAM and don't run huge CAD or rendering jobs.
 
Well, completely turning it off works about 99% of the time. Some applications require one to function well, or at all, but those are old and few. You'll be fine if you have at least 4GB of RAM and don't run huge CAD or rendering jobs.

In the worst case, do you get a little "Windows is running low on virtual memory" dialog, or a BSOD? If it's the former, then I might try killing mine for a while as a test run.
 
I have run without one for years. The last application I remember requiring it was Boiling Point.
 
Intel's third-gen SSDs use a year-old Marvell SSD controller, WTF? :???:

From Nordic Hardware's first performance pre-evaluation (source in Swedish); apparently, the test system's PSU caught fire during testing... :p
(attached image: marvell.intel.jpg)
 