OCZ SSD sales increase 265%

Yes, we're not talking about theoretical cases here in abstract terms. We're talking about the Windows Search Indexer service.

Once again... it builds indexes on the file names and content on the drive. When it rescans the drive, it ends up changing the indexes, requiring additional writes. Disabling it saves quite a few write cycles on an SSD.
 
If you say so... Maybe it's just me, but I sure would prefer a storage tech for my data that doesn't self-destruct just by using it though! ;)

Then stay away from everything except archival WORM storage...

I've had HDDs lying about for at least 5 years that I read without issues (although that probably doesn't really count as long-term), but I wonder how the ancient SCSI HDDs I used with my old Commodore Amiga are faring... I haven't really touched those at all since roughly 1997. :oops: I'd like to image those somehow; I have a SCSI PCI add-in board, but no decent cable for it...

You assume you read it without issue. But then again, how would you know? You in all likelihood had such a pathetic file system that it couldn't detect even the most basic of errors. SDC (silent data corruption) is REAL. There are a lot of people who assume they don't get memory errors as well. Let's just say that pretty much all media suffers from bit rot (yes, even archival-grade WORM degrades over time).
 
Then stay away from everything except archival WORM storage...
That's a bit extreme of an attitude to take, imo. The alternative to SSDs, namely HDDs, don't suffer nearly the same amount of wear and tear from writes; this is a known and well-established fact...

You assume you read it without issue. But then again, how would you know?
Err, because I verified the data (by using it)? :)

You in all likelihood had such a pathetic file system that it couldn't detect even the most basic of errors.
Yeah, it was either FAT32 or NTFS, I can't quite remember which, but everything transferred fine anyway. HDDs use Reed-Solomon ECC for anything stored on disk, so there's always a basic layer of protection.

It would be nice to have a more secure filesystem as standard though, but I guess I don't really have the need for it. I can't even remember the last time I lost any data - no matter how insignificant - due to filesystem corruption. Well, I did have Windows overwrite a .DOC for me with garbage a couple years ago when I abruptly shut off my PC as it was saving that file, after WinXP went insanely super-slow for whatever reason. Never happened before, nor after though.
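For what it's worth, the kind of end-to-end verification a checksumming filesystem does can be sketched in a few lines; this is just an illustration of the principle, not any particular filesystem's implementation:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Original data, as written to disk, with its recorded checksum.
original = b"Important document contents"
recorded = sha256_of(original)

# Later read-back: identical data verifies fine...
assert sha256_of(original) == recorded

# ...but even a single flipped bit is caught, which FAT32/NTFS
# (which keep no data checksums) would happily hand back as "good" data.
corrupted = bytes([original[0] ^ 0x01]) + original[1:]
assert sha256_of(corrupted) != recorded
```

This is essentially what ZFS-style filesystems do per block, so silent corruption shows up as a checksum mismatch on read instead of going unnoticed.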

Let's just say that pretty much all media suffers from bit rot
Yeah, except flash is rather more fragile than the mainstream tech... :( And with spare blocks shuffled around and recycled, coupled with garbage collection schemes, I suppose there's not a data recovery program in existence that can recover deleted stuff either. Oh well! :p

Good flash SSDs are damn fast though, I wouldn't switch away from my Intel X25-E for any HDD in the world. Any really important stuff I have a web-based backup for - although that's almost entirely useless, since if my data becomes corrupted locally it will happily back up that corrupt data on top of the previous faultless copy... :LOL: It's only good for accessing my stuff remotely, which I don't do anyway. Meh.

Oh,
 
That's a bit extreme of an attitude to take, imo. The alternative to SSDs, namely HDDs, don't suffer nearly the same amount of wear and tear from writes, this is a known and well-established fact...

No, not really. Measured failure rates for both SSDs and HDDs are currently such that modern SSDs are more reliable over their planned lifetime (5 years) than HDDs. Going further, HDDs tend towards catastrophic failure while SSDs tend towards gradual failure.


Err, because I verified the data (by using it)? :)

Like you verified your non-ECC memory by using it? Look up the definition of SDC.


Yeah, it was either FAT32 or NTFS, I can't quite remember which, but everything transferred fine anyway. HDDs use Reed-Solomon ECC for anything stored on disk, so there's always a basic layer of protection.

A basically bad layer of protection...

It would be nice to have a more secure filesystem as standard though, but I guess I don't really have the need for it. I can't even remember the last time I lost any data - no matter how insignificant - due to filesystem corruption. Well, I did have Windows overwrite a .DOC for me with garbage a couple years ago when I abruptly shut off my PC as it was saving that file, after WinXP went insanely super-slow for whatever reason. Never happened before, nor after though.

The more disturbing thing is that you would probably never know.


Yeah, except flash is rather more fragile than the mainstream tech... :( And with spare blocks shuffled around and recycled, coupled with garbage collection schemes, I suppose there's not a data recovery program in existence that can recover deleted stuff either. Oh well! :p

The data doesn't bear your statements out. Current reputable vendor SSDs are more reliable than HDDs.
 
Btw Aaron, when are the Intel G3s out? The last update said it was lower-than-expected flash cycle life that was causing the delay, and Intel was waiting for 25nm yields to ramp up as well. Are we gonna see them in January or February at least?

Also, given that 25nm flash will have a lower number of cycles than 34nm, does the usable lifetime of the SSDs also decrease, or will this be made up for with some nice firmware/algorithms?
 
Btw Aaron, when are the Intel G3s out? The last update said it was lower-than-expected flash cycle life that was causing the delay, and Intel was waiting for 25nm yields to ramp up as well. Are we gonna see them in January or February at least?

Also, given that 25nm flash will have a lower number of cycles than 34nm, does the usable lifetime of the SSDs also decrease, or will this be made up for with some nice firmware/algorithms?

Well, this link seems to indicate that the lifetime (of the G3) may well be double that of the G2 34nm drives, or more:

http://www.anandtech.com/show/3965/intels-3rd-generation-x25m-ssd-specs-revealed
 
Well, this link seems to indicate that the lifetime (of the G3) may well be double that of the G2 34nm drives, or more:

http://www.anandtech.com/show/3965/intels-3rd-generation-x25m-ssd-specs-revealed

Ahh yes, thanks for the link :smile: Seems they are using some new firmware, and possibly allocating more spare area as well, to make up for the reduction in flash cycles. According to this link, program/erase cycles are 3,000 for 25nm flash, compared to 5,000 for 34nm and 10,000 for 50nm:

http://www.anandtech.com/show/4043/micron-announces-clearnand-25nm-with-ecc
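As a back-of-the-envelope check on what those cycle counts mean in practice, here's a rough sketch; the drive capacity, daily host writes, and write-amplification factor are assumed, illustrative numbers, and only the P/E cycle figures come from the linked article:

```python
# Rough endurance estimate: total writable volume = capacity * P/E cycles,
# consumed at (host writes per day * write amplification).
def lifetime_years(capacity_gb, pe_cycles, gb_written_per_day, write_amp=1.5):
    total_writes_gb = capacity_gb * pe_cycles
    return total_writes_gb / (gb_written_per_day * write_amp) / 365

# Assumed: a 160 GB drive, 10 GB of host writes/day, write amplification 1.5.
for node, cycles in [("25nm", 3000), ("34nm", 5000), ("50nm", 10000)]:
    print(f"{node}: ~{lifetime_years(160, cycles, 10):.0f} years")
```

Even at 3,000 cycles the result comes out in decades for a desktop-style workload, which suggests the cycle reduction alone isn't fatal; the spare area and firmware mostly have to keep write amplification in check.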
 
and 10000 for 50nm
Heh, make that 100,000 for 50nm SLC flash, as used in the X25-E drive... 2 petabytes of guaranteed write endurance... Takes a while to use up that amount. ;)

Typical commercial shortsightedness, using inferior trash MLC in these drives. While you get twice the capacity for the same silicon area, getting a tenth the write endurance and far slower writes (and somewhat slower reads as well, I believe) isn't worth it IMO.
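To put the 2 PB figure in perspective, a quick sketch; the daily write volume here is an assumed, fairly heavy desktop number, not something from the thread:

```python
# How long it takes to burn through a drive's guaranteed write endurance
# at a given daily write volume.
PETABYTE_GB = 1_000_000  # 1 PB = 1,000,000 GB (decimal units)

def years_to_exhaust(endurance_pb, gb_per_day):
    return endurance_pb * PETABYTE_GB / gb_per_day / 365

# Assuming 20 GB written per day, on the heavy side for a desktop:
print(f"~{years_to_exhaust(2, 20):.0f} years to write 2 PB")
```

At that rate the rated endurance outlives the drive's owner by a wide margin, which is why "takes a while" is an understatement.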
 
Heh, make that 100,000 for 50nm SLC flash, as used in the X25-E drive... 2 petabytes of guaranteed write endurance... Takes a while to use up that amount. ;)

Typical commercial shortsightedness, using inferior trash MLC in these drives. While you get twice the capacity for the same silicon area, getting a tenth the write endurance and far slower writes (and somewhat slower reads as well, I believe) isn't worth it IMO.

Agreed, but the problem is that SLC drives cost at least twice as much, and SSDs aren't cheap to begin with. Consumers would be OK with paying half the price for less endurance.
 
Heh, make that 100,000 for 50nm SLC flash, as used in the X25-E drive... 2 petabytes of guaranteed write endurance... Takes a while to use up that amount. ;)

Typical commercial shortsightedness, using inferior trash MLC in these drives. While you get twice the capacity for the same silicon area, getting a tenth the write endurance and far slower writes (and somewhat slower reads as well, I believe) isn't worth it IMO.

You get 2x+ the capacity, significantly reduced cost (SLC is not the volume product), perfectly adequate write endurance, and good-enough write speeds. There are very few areas that need the increased write endurance of SLC, and almost all of them aren't capacity-focused. There have been plenty of details, information, perspective, and market-impact discussion on SSDs at the last couple of IDFs. They are a good place to start.
 
Unless it's encased in a thin shell of a noble metal of some sort, probably nothing whatsoever.

Is the quoted price the spare part replacement cost for a proprietary design? If it's just a rebadged OEM drive, then it's simply epic fail.
 
I have a noob question:
would it be better to get two small SSDs and put them in a striped array, rather than one larger one?
IIRC the reason you don't get 2x the performance with a striped array of HDDs is the seek time, but seek time on an SSD is <1 ms
 
Striping doubles your I/O speed (more or less, anyway), but almost no tasks are I/O-bound anyway except very large file transfers, which hardly anyone engages in regularly except people who routinely edit videos and such. So striping gets you an advantage that won't bring any noticeable performance gain.

What's worse, striping will double your chance of data loss in case of drive failure. Instead of having your data on one device, it's now spread out across two, which means you can lose everything even if just one of them breaks.

The one lone advantage would be that you'd get a bigger drive, which would make your free drive space a little easier to manage. I'm not sure that advantage is worth the drawbacks though...
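The doubled-risk point can be made concrete. If each drive fails independently with probability p over some period, the chance a two-drive RAID 0 array loses data in that period is 1 - (1 - p)^2, which is roughly 2p for small p. The per-drive annual failure rate below is an assumed, illustrative number:

```python
def stripe_failure_prob(p_single, n_drives=2):
    """Probability that at least one of n independent drives fails;
    for RAID 0, any single failure loses the whole array's data."""
    return 1 - (1 - p_single) ** n_drives

# Assuming a 3% annual failure rate per drive (illustrative):
p = 0.03
print(f"single drive: {p:.2%}, 2-drive stripe: {stripe_failure_prob(p):.2%}")
```

With p = 0.03 the stripe comes out at about 5.91%, just shy of double the single-drive risk.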
 
So you're sort of saying that if someone released an SSD with 2x the performance of current SSDs, there would be no point in buying one?
 
If it was done through striping it wouldn't be a very good solution.

If it was just one drive, doubling straight up, then most likely it would be worthwhile, since small-block random I/O (where the SSD's real strength lies) would very likely go up as well. If it was just linear block transfers, however, then you'd just have a checkmark feature which you could likely ignore, and just go with whatever SSD had the best value, warranty, manufacturer reputation, or whatever.
 
These SSDs are a minefield.
This is from a review of a Kingston drive using a Toshiba controller, about TRIM:

but for now it’s only available in Windows 7, and then only when a drive is using the default windows storage drivers and isn’t in a RAID array (the RAID controller blocks the TRIM commands).

Other than TRIM support, there are no other performance maintenance algorithms running on the drive, leaving XP, Vista or RAID users out in the cold. This is a situation shared by most of the other SSDs currently on sale, although those based on Indilinx and Samsung drive controllers do include garbage collection.

and this
As this system needs to be compatible with the TRIM command we have not installed the Intel INF drivers (which don’t support TRIM yet), and are instead using the default Windows 7 AHCI driver.

Do I need AHCI? My Win XP drive isn't using it, and you can only enable AHCI in XP pre-install, with the "press F6 to install a RAID driver from a floppy disk" option
 
You should turn on AHCI if you have an Intel SSD, as otherwise you won't get max performance out of your drive (no command queuing without AHCI). However, not all SSD controllers support command queuing, so with those, it probably won't matter one way or the other. You should refer to the drive's documentation.

In any case, it most likely won't make a difference in a real-world situation anyhow, as desktop use profiles generally don't show enough I/O queue depth for command queuing to matter. If you run a lot of heavy database or fileserver workloads on your system, then yeah, you should get AHCI working. But otherwise... it's prolly not a biggie.
 