OCZ SSD sales increase 265%

But then, reducing performance would not prevent that. If you need to copy 10 GB of files, you copy 10 GB of files; you don't think "this drive is a bit slow, I'll only copy 6 GB".

Correct, the reduction in performance is a side effect of the sparing decisions and algorithm selections used by SF, which require more complex data movement and overhead. Most other designs don't have significantly higher overhead on sequential transfers, and through their sparing and algorithm choices they don't suffer from many of the issues that SF drives do.

Performance of SSDs, along with their endurance, is a fairly complex area and highly dependent on the incoming command and data streams. For example, all the drives on the market will display remarkably different random IO behavior based on the span over which those IOs are spread. They'll also display remarkable (order-of-magnitude) differences in endurance as well. I.e., doing random writes to a 30GB portion of the drive will deliver significantly different results than doing random writes to the whole drive (160GB/240GB). A lot of this has to do with the addressing and indirection capabilities of the controller architecture, along with the erase/write structure of the underlying flash memory.
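To make the span effect concrete, here's a minimal toy flash-translation-layer simulation in Python. It's my own sketch, not any shipping controller's design: the geometry (64 pages per erase block), the greedy garbage-collection policy, and the `write_amplification` helper are all illustrative assumptions.

```python
import random

PAGES_PER_BLOCK = 64  # assumed geometry: one erase block = 64 pages

def write_amplification(drive_blocks, span_blocks, host_writes, seed=0):
    """Random single-page host writes spread over `span_blocks` worth of LBAs
    on a drive with `drive_blocks` physical blocks. Returns flash writes per
    host write (1.0 is ideal). Needs span_blocks < drive_blocks - 1 so the
    spare area lets garbage collection make progress."""
    rng = random.Random(seed)
    span_lbas = span_blocks * PAGES_PER_BLOCK
    loc = {}                                      # lba -> (block, slot) of live copy
    pages = [[None] * PAGES_PER_BLOCK for _ in range(drive_blocks)]
    valid = [0] * drive_blocks                    # live pages per block
    free = list(range(1, drive_blocks))
    cur, slot, flash = 0, 0, 0

    def put(lba):
        # Program one page into the current block, invalidating any old copy.
        nonlocal slot, flash
        if lba in loc:
            b, s = loc[lba]
            pages[b][s] = None
            valid[b] -= 1
        pages[cur][slot] = lba
        loc[lba] = (cur, slot)
        valid[cur] += 1
        slot += 1
        flash += 1

    for _ in range(host_writes):
        if slot == PAGES_PER_BLOCK:               # current block is full
            if free:
                cur, slot = free.pop(), 0
            else:                                 # GC: erase the emptiest block...
                victim = min((b for b in range(drive_blocks) if b != cur),
                             key=valid.__getitem__)
                movers = [l for l in pages[victim] if l is not None]
                for l in movers:
                    del loc[l]                    # old copies die with the erase
                pages[victim] = [None] * PAGES_PER_BLOCK
                valid[victim] = 0
                cur, slot = victim, 0
                for l in movers:                  # ...and rewrite its live pages
                    put(l)
        put(rng.randrange(span_lbas))
    return flash / host_writes

# Same toy drive (256 blocks), same workload; only the span changes:
print(write_amplification(256, 240, 100_000))     # near-full span: WA well above 1
print(write_amplification(256, 32, 100_000))      # narrow span: WA close to 1
```

Every extra flash write is both lost performance and a consumed program/erase cycle, which is why the same drive looks so different on an 8GB span than on a full-span test.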

This all leads to weirdness like manufacturers (at least the honest ones) sometimes quoting higher random IO numbers for their consumer drives than for their enterprise drives. This is because the enterprise drives assume a full-span access pattern in the specifications while the consumer drives do not; often the consumer drives assume spans in the range of 8GB.
 
You never lose capacity on an SSD until you have massive sector/cell failures. It means it has 80% of its lifetime left before it dies.

Remember, with hard drives there are two kinds. Those that have failed, and those that have not yet failed. All hard drives will fail. SSDs are no different.
 
I wonder why flash performs writes at the page level but has to erase whole blocks of pages. This really complicates SSD design; why don't they fix this quirk in flash instead, to make it possible to erase individual pages? I suppose they'd lose some density due to additional control logic or whatever on the ICs, but surely it would be worth the tradeoff for the increased lifespan of the resulting flash.

Current MLC flash has terrible write-cycle endurance (low thousands of cycles), and it's only going to get far worse as dimensions continue to shrink. In another generation or two, flash will cease to be viable as a reliable storage medium altogether at this rate.
 
I transfer to mine all the time. I have Steam installed on my SSD, but installing all the games would take up more space than the total size of the SSD. So I keep the downloads on my array and transfer them over when I decide to play them.

Here's a useful tool for that - Steam Mover.


 
I wonder why flash performs writes at the page level but has to erase whole blocks of pages. This really complicates SSD design; why don't they fix this quirk in flash instead, to make it possible to erase individual pages?

Why does it need to erase at all?
Can't it just label the sectors as empty, a bit like how FAT works (when you delete a file it's still there, it's just listed as blank)?
 
According to your linked article, you do:

Remember back to Flash 101, even though we have to erase just one page we can’t; you can’t erase pages, only blocks. We have to erase all of our data just to get rid of the invalid page, then write it all back again.
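A toy illustration of the read-modify-write cycle that quote describes; the page/block sizes (4 KiB pages, 128 pages per block) and the `update_one_page` helper are purely illustrative assumptions, not anything from the article.

```python
# Toy model of the read-modify-write the quote describes; geometry assumed.
PAGES_PER_BLOCK = 128

def update_one_page(block, page_index, new_data):
    """`block` stands in for one erase block: a list of 128 page buffers.
    Updating a single page without remapping costs a whole-block rewrite."""
    snapshot = list(block)                 # 1. read every page in the block
    snapshot[page_index] = new_data        # 2. modify just the one page
    block[:] = [None] * PAGES_PER_BLOCK    # 3. erase (only whole blocks can be erased)
    block[:] = snapshot                    # 4. program all 128 pages back
    return PAGES_PER_BLOCK                 # pages physically written for 1 host page
```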
 
Sorry, I was referring to the previous page: it is clearer there.

You said "cant it just label the sectors as empty a bit like how fat works (when you delete a file it's still there its just listed as blank)". And on the previous page there's the table which implies just that : nothing is done on file delete command
 
I wonder why flash performs writes at the page level but has to erase whole blocks of pages. This really complicates SSD design; why don't they fix this quirk in flash instead, to make it possible to erase individual pages? I suppose they'd lose some density due to additional control logic or whatever on the ICs, but surely it would be worth the tradeoff for the increased lifespan of the resulting flash.

Erase is done via a different voltage plane. IIRC, current flash uses ~12V to erase the cells. This requires a separate voltage plane and associated logic, which adds overhead. This pretty much means you want to do it on an array/sub-array basis, which is what results in the large erase blocks. Physically, the arrays are on the order of 4k-8k cells wide, and it is likely that between 4 and 8 rows fit underneath one higher-level metal wire.


Current MLC flash has terrible write-cycle endurance (low thousands of cycles), and it's only going to get far worse as dimensions continue to shrink. In another generation or two, flash will cease to be viable as a reliable storage medium altogether at this rate.

Delivered endurance is a probability distribution and a guard band of raw error rates and error-correcting codes. It's really an issue of the ratio of redundancy to usable bits. It is entirely possible to take current MLC and get upwards of an order of magnitude higher endurance, depending on how complex you want to make your ECC. Flash still has a decent amount of life left in it.
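To put rough numbers on that redundancy-vs-endurance trade, here's a back-of-envelope sketch. It assumes a t-error-correcting code over an n-bit codeword and independent bit errors at a given raw bit error rate; both assumptions, and all the concrete numbers, are illustrative rather than measurements of any real part.

```python
from math import lgamma, log, exp

def codeword_failure_prob(n_bits, t, rber, terms=60):
    """P(more than t bit errors in an n_bits codeword) given independent bit
    flips at raw bit error rate `rber`: the chance a t-error-correcting ECC
    is overwhelmed. Summed in log space to avoid overflow; `terms` past t is
    plenty here because the binomial terms fall off very fast."""
    def log_pmf(k):   # log of C(n_bits, k) * rber^k * (1-rber)^(n_bits-k)
        return (lgamma(n_bits + 1) - lgamma(k + 1) - lgamma(n_bits - k + 1)
                + k * log(rber) + (n_bits - k) * log(1 - rber))
    return sum(exp(log_pmf(k)) for k in range(t + 1, t + 1 + terms))

# Illustrative only: a 1 KiB (8192-bit) codeword at a raw bit error rate of
# 1e-4, the sort of rate you might see from heavily worn cells. Each extra
# correctable bit costs a roughly fixed amount of parity, but buys orders
# of magnitude in codeword reliability.
for t in (1, 2, 4, 8, 16):
    print(f"t={t:2d}  P(ECC overwhelmed) = {codeword_failure_prob(8192, t, 1e-4):.2e}")
```

Since the raw error rate climbs as cells wear, holding the ECC-overwhelmed probability constant while raising t is exactly how the same flash can be rated for more cycles.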
 
Why does it need to erase at all?
Can't it just label the sectors as empty, a bit like how FAT works (when you delete a file it's still there, it's just listed as blank)?

No, because they aren't empty. Eventually you need to reclaim space that was allocated in the past and is currently unused or out of date but still holds data. In order to do that, you need to erase the block.
 
Well, they are not empty on a standard HDD either, so why the difference?
And why can the SSD write any combination of zeros and ones to a sector, but if you want to write just zeros (i.e. an erase) it can't, and can only do it to a whole block?
 
Erase is done via a different voltage plane. ( ... )
This explanation is a bit too technical for my level of understanding lol, so to just cut to the chase: would it be entirely impractical to subdivide your block-level erasure voltage plane into page-level distribution instead? Because largely getting rid of write amplification would undoubtedly make many SSD controller designers happy...

It is entirely possible to take current MLC and get upwards of an order of magnitude higher endurance, depending on how complex you want to make your ECC.
Yeah, but how complex would that have to be, then? How much flash capacity would have to be dedicated purely to ECC data? And that's today; what about in another half-decade, will we be down to a few hundred rewrites per cell (on average)?

Also, what is the long-term storage endurance of flash? Stick a drive in a drawer, take it out five years later and half the flash cells have discharged, rendering everything unreadable...? Flash as a technology doesn't really inspire confidence... ;)

Flash still has a decent amount of life left in it.
I'd rather see us switch to memristor-based tech instead... Unlimited writes (much faster than flash too, IIRC), thank you very much. :p

The only advantage I can see with limited write endurance for flash is that it might prod Microsoft into stopping their OSes' annoying habit of crapping spurious writes to the system disk pretty much constantly. Almost every laptop owner running on battery power is familiar with that issue: suddenly the HDD spins up for no apparent reason whatsoever.
 
Well, they are not empty on a standard HDD either, so why the difference?
And why can the SSD write any combination of zeros and ones to a sector, but if you want to write just zeros (i.e. an erase) it can't, and can only do it to a whole block?

Flash without erase is a write-once technology. I can't remember off the top of my head whether the erase is to all 1s or all 0s, but the write is only one way. Say you erase to all 0s: then you can either switch a bit to 1 or leave it at 0, but once you've written it to 1 you can't go back to 0 unless you erase the block.
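(For NAND specifically it's the other way around: the erased state reads as all 1s, and programming can only clear bits to 0.) A minimal sketch of those semantics, with illustrative sizes and class names of my own:

```python
# Minimal model of NAND program/erase semantics: erase returns the whole
# block to all 1s (0xFF); a program operation can only clear bits from 1 to
# 0. Getting a 0 back to 1 requires erasing the whole block.
PAGE_SIZE, PAGES_PER_BLOCK = 4096, 64   # illustrative geometry

class Block:
    def __init__(self):
        self.erase()

    def erase(self):
        # Erase acts on the whole block: every bit goes back to 1.
        self.pages = [b"\xff" * PAGE_SIZE for _ in range(PAGES_PER_BLOCK)]

    def program(self, page_no, data):
        # Programming can only pull bits down: the cell keeps old AND new.
        old = self.pages[page_no]
        self.pages[page_no] = bytes(o & d for o, d in zip(old, data))

blk = Block()
blk.program(0, b"\x0f" * PAGE_SIZE)   # fresh (erased) page: works as expected
blk.program(0, b"\xf0" * PAGE_SIZE)   # "overwrite" without an erase...
print(hex(blk.pages[0][0]))           # 0x0 -- you got 0x0f AND 0xf0, not 0xf0
```

That second program is exactly why an SSD can't just write zeros over a sector in place: any bit that needs to go back up forces a block erase first.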
 
This explanation is a bit too technical for my level of understanding lol, so to just cut to the chase: would it be entirely impractical to subdivide your block-level erasure voltage plane into page-level distribution instead? Because largely getting rid of write amplification would undoubtedly make many SSD controller designers happy...

Sure, just cut your density by 4-8x!


Yeah, but how complex would that have to be, then? How much flash capacity would have to be dedicated purely to ECC data? And that's today; what about in another half-decade, will we be down to a few hundred rewrites per cell (on average)?

Just go to increasingly complex ECC methods. It will be fine for another 8-10 years at least.

Also, what is the long-term storage endurance of flash? Stick a drive in a drawer, take it out five years later and half the flash cells have discharged, rendering everything unreadable...? Flash as a technology doesn't really inspire confidence... ;)

Really, it's not any worse than any other non-archival storage technology. HDs aren't good over the long term either; they also suffer from bit decay.

The only advantage I can see with limited write endurance for flash is that it might prod Microsoft into stopping their OSes' annoying habit of crapping spurious writes to the system disk pretty much constantly. Almost every laptop owner running on battery power is familiar with that issue: suddenly the HDD spins up for no apparent reason whatsoever.

Turn off search indexing.
 
Sure, just cut your density by 4-8x!
That bad, huh? :D Well, then it's kind of understandable they do it this way, yes... Heh.

It will be fine for another 8-10 years at least.
If you say so... Maybe it's just me, but I sure would prefer a storage tech for my data that doesn't self-destruct just by using it though! ;)

HDs aren't good over the long term either; they also suffer from bit decay.
I've had HDDs lying about for at least 5 years that I read without issues (although that probably doesn't really count as long-term), but I wonder how my ancient SCSI HDDs are faring that I used with my old Commodore Amiga... I haven't really touched those at all since roughly 1997. :oops: I'd like to image those somehow; I have a SCSI PCI add-in board, but no decent cable for it...

Turn off search indexing.
Is that the Windows Search service, or has MS hidden it away more cleverly than that? I never really bothered faffing with that; supposedly SSDs are fast enough to not need indexing (and the same with SuperFetch, supposedly), but knowing what stuff you can/should turn off with an SSD system disk requires effort. :LOL:

I guess it was a low priority for me at the time, but it might be good to know this kind of stuff for future reference. Personally I feel Windows should be smart enough to do stuff like this on its own, but...
 
Well, AFAIK SuperFetch and defrag are automatically disabled.

Indexing probably not, but then again I guess there might be some gains in search times for SSDs too. Of course, not owning one myself, I can't tell.
 
If you say so... Maybe it's just me, but I sure would prefer a storage tech for my data that doesn't self-destruct just by using it though! ;)

There are two types of drives: those that have failed, and those that have not failed YET. All drives will fail.

Win7 is smart enough to disable some of the services if it detects an SSD. However, it's preferable to also disable the search indexing service on the SSD drive. The drives are fast enough at random access to not really need it, plus it saves a ton of read and write cycles by not building and rebuilding the indexes.
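If anyone wants to do that without clicking through the Services snap-in, a small sketch; it assumes the Windows Search service is named WSearch (its name on Vista/Win7) and that the script is run from an elevated prompt.

```python
# Stop the Windows Search indexer and keep it from starting at boot, by
# shelling out to the standard `sc` service-control tool. Assumes the
# service name WSearch and administrator rights.
import subprocess

subprocess.run(["sc", "stop", "WSearch"])                          # stop it now
subprocess.run(["sc", "config", "WSearch", "start=", "disabled"])  # and at boot
```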
 
plus it saves a ton of read and write cycles by not building and rebuilding the indexes.

Indexes are only rebuilt in exceptional cases. (I'm talking about indexing algorithms in general, of course; I don't know many details about Windows' specific implementation.)
 