Is this a reliable SSD?

Since we're trading anecdotes, I have an IBM "DeathStar" still running :)

The Greens I know of that have failed were either in fanless USB enclosures or poorly-ventilated "Business" PCs. I've actually never had a hard drive fail in my own rig. As for RC's bizarre claim that I don't know how to cool... I think 14 fans, 2 radiators (1x360 push/pull and 1x240 push/pull) with blocks on the CPU and both GPUs, plus a 240mm and a 120mm fan cooling the drive bays, should be sufficient. ;)
 
My stance is that it's all luck of the draw. I had two 30GB 75GXP drives; one lasted until around 2007, though it wasn't of much use by then anyway.

But OCZ was frequently dropping new firmware with promises of improved reliability and compatibility, and I remember mentions of SATA controllers that were troublesome. My Vertex 2, for example, triggers a SMART error during boot on my P35 board with ICH9, although it otherwise works fine. They actually wanted motherboard manufacturers to make new BIOS releases just to run their supposedly SATA2-compliant drives. My impression is that OCZ was quite obviously new to storage products, and that, combined with SSDs being bleeding edge, resulted in some serious misjudgments.
 
Since we're trading anecdotes, I have an IBM "DeathStar" still running :)

The Greens I know of that have failed were either in fanless USB enclosures or poorly-ventilated "Business" PCs. I've actually never had a hard drive fail in my own rig. As for RC's bizarre claim that I don't know how to cool... I think 14 fans, 2 radiators (1x360 push/pull and 1x240 push/pull) with blocks on the CPU and both GPUs, plus a 240mm and a 120mm fan cooling the drive bays, should be sufficient. ;)

That's fine, but at what temps are the HDDs running? :p
 
Well, it looks like 256GB SSDs have started to hit the $200 price point. You can get the Crucial m4 for $200 and a few others for $200-$220.
 
I'd like 500GB at least, preferably 750, for a decent price. Guess I gotta wait some more...
 
If, in their current state, SSDs are most suitable for OS partitions only, then why do you need so many GBs?

Well, it's handy in a laptop. 128GB isn't a lot for your whole system, especially with games growing ever larger.

Also, in my desktop I can fit more programs onto the one drive and move another 128GB drive over to game storage.

I have:

- a Brazos HTPC with a 60GB SSD (Agility 3) and a 3TB hard drive
- a dual-core AMD Neo laptop with a 128GB SanDisk SSD
- an FX-8150 rig with a 128GB m4 and a 128GB Vertex 2

I'd most likely put in a 256GB drive, move the m4 over to the game drive, and put the 128GB into my gf's laptop.
 
I think hybrid drives are a better value and a good compromise between capacity, speed, and price.
For systems that aren't based on the Intel Z68 / Z77, I'd agree. However, for those who have a Z-series Socket 1155 chipset, being able to build your own hybrid storage solution from your own SSD and your own spindle disk (or disks in various RAID configurations) is even better.

For my new home server, I have a RAID1 of 320GB 7200RPM disks for the OS and apps, and another RAID1 of 750GB 7200RPM disks for all the Hyper-V guests. I paired a 240GB OCZ Agility 3 with the 750GB array, using 64GB of it as cache and the other ~150GB as scratch space (another volume with another drive letter).

In "maximum" performance mode, you can use the SSD as a write-back cache as well as a read cache. Intel only recommends that mode for machines with a UPS, as there is a possibility of data loss due to a sudden power loss (I have a 1KVa line-interactive UPS attached to that server.) Also, the Z-series chipset uses the SSD as a block-level caching device rather than file level, so truly the stuff you use the most gets cached.

It's a world of difference for VMs.
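To make the block-level write-back idea concrete, here's a toy Python sketch. To be clear, this is my own illustration of the general technique, not Intel's actual SRT algorithm; the LRU policy and the read/write/flush interface are assumptions for the example.

from collections import OrderedDict

class BlockCache:
    """Toy block-level write-back cache (LRU eviction).

    Hot blocks live on the SSD; a dirty block reaches the HDD only on
    eviction or an explicit flush. The window between write() and
    flush() is exactly why Intel suggests a UPS for "maximum" mode."""

    def __init__(self, capacity_blocks, hdd_read, hdd_write):
        self.capacity = capacity_blocks
        self.hdd_read = hdd_read        # callables standing in for the spindle
        self.hdd_write = hdd_write
        self.blocks = OrderedDict()     # block_no -> [data, dirty_flag]

    def read(self, block_no):
        if block_no in self.blocks:                 # hit: SSD speed
            self.blocks.move_to_end(block_no)
            return self.blocks[block_no][0]
        data = self.hdd_read(block_no)              # miss: HDD speed
        self._insert(block_no, data, dirty=False)
        return data

    def write(self, block_no, data):
        # Write-back: acknowledged once the SSD has it; the HDD copy
        # stays stale until flush() or eviction (the data-loss window).
        self._insert(block_no, data, dirty=True)

    def flush(self):
        for block_no, entry in self.blocks.items():
            if entry[1]:
                self.hdd_write(block_no, entry[0])
                entry[1] = False

    def _insert(self, block_no, data, dirty):
        self.blocks[block_no] = [data, dirty]
        self.blocks.move_to_end(block_no)
        if len(self.blocks) > self.capacity:
            old_no, old_entry = self.blocks.popitem(last=False)
            if old_entry[1]:                          # a dirty evictee must
                self.hdd_write(old_no, old_entry[0])  # still reach the HDD

The payoff of caching blocks instead of files shows up in read(): a 20GB VHD whose hot data is a few hundred MB gets just those blocks cached, instead of the whole file either qualifying or not.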
 
Current "hybrid" drives are way too limited to be effective. The NAND cache is much too small except for light, everyday use, and it only caches reads I believe.
 
Current "hybrid" drives are way too limited to be effective. The NAND cache is much too small except for light, everyday use, and it only caches reads I believe.

Limited and non-effective? Care to back that up with real-world use examples? That video clearly shows the hybrid drive coming very close to the SSD in terms of loading programs. Even the traditional 10K RPM drive couldn't keep up with the hybrid and was a distant third. Under normal use, a user doesn't notice hard drive write speeds even from traditional models unless copying very large files. How often does the typical PC user copy large files to the hard drive?

A 500GB hybrid drive costs about $80. For that price you'd only be able to get a 64GB SSD; then you'd have to shell out another $80 for a traditional 500GB HDD, all to get only about a 10% combined speed increase over the hybrid.
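Some back-of-the-envelope support for that: with a cache in front of a spindle, average access time is dominated by the hit rate. A quick sketch (the latency figures below are rough assumptions, not measurements of any particular drive):

def avg_access_ms(hit_rate, ssd_ms=0.1, hdd_ms=12.0):
    """Expected access time when cache hits are served at SSD latency
    and misses fall through to the HDD."""
    return hit_rate * ssd_ms + (1.0 - hit_rate) * hdd_ms

# Assumed latencies: ~0.1 ms for NAND, ~12 ms for a 7200RPM seek.
for h in (0.0, 0.5, 0.9, 0.95):
    print(f"hit rate {h:.0%}: ~{avg_access_ms(h):.2f} ms")
# hit rate 0%: ~12.00 ms
# hit rate 50%: ~6.05 ms
# hit rate 90%: ~1.29 ms
# hit rate 95%: ~0.70 ms

Once your daily working set fits in the NAND, the hybrid sits near the SSD line, which matches the video; once the working set outgrows the cache, the hit rate collapses and you trend back toward the bare-HDD number, which is the next post's objection.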
 
Any situation where your dataset exceeds the capacity of that tiny NAND cache would make performance crash and burn, like when regularly playing a bunch of different games. I already said it's an effective scheme for light, regular use, but if that's all you do, you might as well just go with an SSD and not have to deal with any mechanical parts at all; even 64GB is enough for quite a big bunch of programs.
 
Sure, 64GB is enough for programs, but I would still need another HDD to store my non-program data: music, pictures, HD videos. That's why I said the hybrid drive is a good compromise between speed, capacity, and cost. Big terabyte hard drives are popular for a reason, and it's not because "even 64GB is enough for a bunch of programs".
 
Music, pictures, and HD video aren't performance-sensitive, so having them live on a regular HDD - even an external USB drive - is completely acceptable.
 
The problem with current "hybrid" drives is that their caches are too small. 4GB (or even 8GB in the 2nd generation) really isn't much, considering most computers already have 8GB of main memory; much of that cached data is probably already sitting in the OS's memory buffer. This is also why I think Z68's SSD caching is more useful: you can use a much larger SSD (64GB max) as the cache.
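A crude way to model that overlap is to treat RAM's page cache and the NAND as stacked LRU levels. This is an idealization, and the sizes below are just example numbers:

def nand_marginal_hit_rate(ram_gb, nand_gb, working_set_gb):
    """Share of requests the NAND cache actually absorbs, assuming the
    hottest data fills RAM first and the NAND holds the next-hottest
    tier. Perfect LRU stacking is assumed -- a simplification."""
    ram_hits = min(ram_gb, working_set_gb) / working_set_gb
    ram_plus_nand = min(ram_gb + nand_gb, working_set_gb) / working_set_gb
    return ram_plus_nand - ram_hits

# 8GB RAM, 40GB working set:
print(round(nand_marginal_hit_rate(8, 4, 40), 2))   # 0.1 -> 4GB hybrid cache absorbs ~10%
print(round(nand_marginal_hit_rate(8, 64, 40), 2))  # 0.8 -> 64GB Z68 cache absorbs ~80%

Under those assumptions, the 4GB hybrid cache mostly duplicates what RAM already holds, while a 64GB SRT cache covers the rest of the working set.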

The only upside for a hybrid drive, right now, is probably faster booting, which can matter for notebooks - and that's why you see Seagate making 2.5" hybrid drives first. Also, notebooks tend to have less main memory than desktops.
 
In "maximum" performance mode, you can use the SSD as a write-back cache as well as a read cache. Intel only recommends that mode for machines with a UPS, as there is a possibility of data loss due to a sudden power loss (I have a 1KVa line-interactive UPS attached to that server.) Also, the Z-series chipset uses the SSD as a block-level caching device rather than file level, so truly the stuff you use the most gets cached.

I'd still recommend against "maximum" performance mode, because a BSOD can also cause data loss (it happened to me a few times, so I went back to "enhanced" mode).
 
I'd still recommend against "maximum" performance mode, because a BSOD can also cause data loss (it happened to me a few times, so I went back to "enhanced" mode).

While a BSOD is absolutely a possibility anywhere at any time, the likelihood of BSOD'ing my Hyper-V host is considerably lower than that of BSOD'ing one of my gaming rigs: I'm not overclocking it, installing apps on it, and the like. As for my SNB-E desktop, I've actually yet to BSOD it either, though I've locked it solid at least a half-dozen times while doing OC stability testing on the video card. But that SNB-E rig also runs a six-SSD RAID0, so it's bound to die at some point anyway ;)
 
Dumb question, but isn't "reliable" sort of a stupid term for a technology that we've seen nothing but problems with a few years out? :|

"Less unreliable than most" seems about the best SSD can currently do. :(
 
Dumb question, but isn't "reliable" sort of a stupid term for a technology that we've seen nothing but problems with a few years out? :|

"Less unreliable than most" seems about the best SSD can currently do. :(

In terms of pure MTBF, SSDs are (statistically) more reliable than spinning disks in the majority of consumer use cases. Fundamentally, a storage medium built on solid-state logic rather than physically spinning magnetic media removes hundreds or even thousands of failure scenarios.
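To put rough numbers on "statistically more reliable": a spec-sheet MTBF converts to a failure probability over a service life under the usual constant-failure-rate model. A sketch with assumed, era-typical MTBF figures (not measurements of any particular drive):

import math

def failure_prob(mtbf_hours, years, hours_per_day=8.0):
    """P(failure within the service life) under an exponential model:
    F(t) = 1 - exp(-t / MTBF). Constant failure rate is an idealization;
    it ignores infant mortality and wear-out (e.g. NAND write endurance)."""
    powered_hours = years * 365 * hours_per_day
    return 1.0 - math.exp(-powered_hours / mtbf_hours)

# Assumed spec-sheet MTBFs: 600k hours (HDD) vs. 1.2M hours (SSD).
print(f"HDD over 5 years: {failure_prob(600_000, 5):.2%}")    # ~2.40%
print(f"SSD over 5 years: {failure_prob(1_200_000, 5):.2%}")  # ~1.21%

MTBF captures random failures only, so it says nothing about the controller firmware bugs that drove most of the early SSD complaints in this thread.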

Certainly this doesn't mean SSDs are devoid of failures, but the supermajority of complaints have been BSODs linked to bad controller logic or performance inconsistencies -- not catastrophic data loss. And yes, catastrophic data loss does occur, but it occurs on spinning disks just the same.
 