SSDs: are we there yet, and what is what?

SandForce isn't a step in the right direction right now. It seems like it will start at 50GB and work its way up, all the while carrying a premium over the original Vertex. What's to stop Indilinx from making a new SATA 6Gb/s controller that offers better performance at lower cost?

14GB is a lot of space to give up on a 64GB drive. If next-gen Indilinx drives go with cheap DRAM and expensive NAND, they may still work out cheaper and faster overall.

Maybe the cost of the controller is still too high right now. However, I still think the basic ideas behind this controller are good. Good error correction is never bad, and it has the potential to make SSDs much more reliable than traditional HDDs, since SSDs generally use many flash devices. Real-time compression is also nice to have, especially when it has a positive impact on performance.

From the preview article, the SandForce controller is about 30% faster than the Indilinx controller in random access tests (which is very important if you want to use an SSD as a system drive). Maybe Indilinx can catch up with a newer controller and more DRAM, but that remains to be seen.
 
It may be even more reliable than unprotected higher-grade chips.
Good SSDs already have ECC implemented. They must, since flash is inherently unreliable. Crap flash chips are never going to be a good solution; even if you offered twice the crap for the same price, I wouldn't consider buying it in my wildest dreams. Who today can say for sure what the long-term consequences are of using low-grade flash on a wide scale to store important data? It's just too risky IMO. Even ECC can only do so much, and when you have perhaps hundreds of gigabytes at stake, all stored on chips rated for a mere 5k cycles... That's the very definition of unwise in my book.

I'd rather pay a premium for good hardware, or wait longer for good hardware to come down in price, whichever makes the most sense to me at the time. Paying good money for what is essentially damaged goods from the moment it left the acid vats in the foundry makes no sense to me at all.
 
Good SSDs already have ECC implemented. They must, since flash is inherently unreliable. Crap flash chips are never going to be a good solution; even if you offered twice the crap for the same price, I wouldn't consider buying it in my wildest dreams. Who today can say for sure what the long-term consequences are of using low-grade flash on a wide scale to store important data? It's just too risky IMO. Even ECC can only do so much, and when you have perhaps hundreds of gigabytes at stake, all stored on chips rated for a mere 5k cycles... That's the very definition of unwise in my book.

Most MLC NAND is only good for about 5k~10k cycles. I don't think you can get much better than that in more expensive SSDs, except the SLC ones.

If you use good ECC, you can get much better reliability. For example, compare an SSD with 10 higher-grade flash devices to an SSD with 13 lower-grade flash devices protected with RS codes, using 3 devices for redundancy. That means any 3 of the 13 devices can fail without data loss. Suppose the higher-grade device has a 0.01% failure rate; then the failure rate of the 10-device system would be about 0.1%, since a failure of any of the 10 devices causes system failure. On the other hand, suppose the lower-grade device has a 1% failure rate; then the failure rate of the 13-device protected system (the chance of 4 or more devices failing) would be about 0.000665%. That's much better than an unprotected higher-grade system.
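If you want to check those numbers yourself, they are just binomial tail probabilities, assuming independent device failures (a simplification, of course):
[code]
from math import comb

def system_failure_rate(n_devices, p_fail, max_tolerated=0):
    # Probability that more than max_tolerated of the n_devices fail,
    # assuming each fails independently with probability p_fail.
    p_ok = sum(comb(n_devices, k) * p_fail**k * (1 - p_fail)**(n_devices - k)
               for k in range(max_tolerated + 1))
    return 1 - p_ok

# 10 higher-grade devices (0.01% each), no redundancy: any failure is fatal
print(system_failure_rate(10, 0.0001))   # ~0.001, i.e. ~0.1%

# 13 lower-grade devices (1% each), RS codes tolerating any 3 failures
print(system_failure_rate(13, 0.01, 3))  # ~6.65e-6, i.e. ~0.000665%
[/code]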

Furthermore, if any of the devices fails, the drive can notify the system through SMART so the user can replace it (with all data intact) before more devices fail.
 
That's much better than an unprotected higher-grade system.
Yes, a nice theoretical example, assuming manufacturers would be stupid enough to stick expensive flash in a device and then not secure it further.
 
Err, AFAIK the SF1200 is very close to the Intel G2 price/GB-wise; the SF1500 falls short but has plenty of performance to back it up.

And most interestingly, SF1500 models with SLC are WAY cheaper than the X25-E model at the same capacity ($550 vs. $790 for 54GB).
 
Personally I've never been a big fan of these smart controllers ... I'd rather have had a huge sea of flat-addressed flash, with all the warts visible. Maybe a bad-block list from the factory, but that's about all.

Put all the smarts in software ... our processors are more than capable of handling it.
 
Yes, a nice theoretical example, assuming manufacturers would be stupid enough to stick expensive flash in a device and then not secure it further.

Personally, I don't know of any current SSD that has RS-code (or similar) protection across its devices. Most flash chips already have some sort of ECC built in, but even RAID-5-style parity protection at the chip level is rare, not to mention RS-code-style protection.
 
Personally I've never been a big fan of these smart controllers ... I'd rather have had a huge sea of flat-addressed flash, with all the warts visible. Maybe a bad-block list from the factory, but that's about all.

Put all the smarts in software ... our processors are more than capable of handling it.

IIRC there have been some debates about this question. The major advantage of a software-based approach is, as you said, that computer processors are generally fast enough to handle these problems. And there are already file systems designed for flash devices. Another advantage is that the OS would be able to optimize for the usage pattern, reducing writes and page erases.

However, the major disadvantage is that file systems tend to have very long life spans (e.g. FAT). It's probably not a good idea for a file system to depend too heavily on the behavior of flash, which may become a hurdle if some characteristic of flash changes in the future.

Another approach is to use a driver to do the allocation and run normal file systems on top. But this would require the driver to be present at the bootstrap stage; otherwise the SSD couldn't be used as a system disk.
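For what it's worth, here's a toy sketch of what such a driver-level allocation layer might do (entirely hypothetical, not modeled on any real driver): logical blocks get remapped so rewrites always land on fresh pages.
[code]
# Hypothetical driver-level remapping layer (illustration only):
# logical blocks map to physical flash pages, and a rewrite always
# goes to a fresh page so no physical page is updated in place.
class ToyFTL:
    def __init__(self, n_pages):
        self.mapping = {}             # logical block -> physical page
        self.free = list(range(n_pages))
        self.dirty = []               # stale pages, reclaimed later by GC

    def write(self, lblock, data, flash):
        page = self.free.pop(0)       # always pick a fresh page
        flash[page] = data
        old = self.mapping.get(lblock)
        if old is not None:
            self.dirty.append(old)    # old copy becomes garbage
        self.mapping[lblock] = page

    def read(self, lblock, flash):
        return flash[self.mapping[lblock]]

flash = [None] * 8
ftl = ToyFTL(len(flash))
ftl.write(0, b"v1", flash)
ftl.write(0, b"v2", flash)            # rewrite lands on a new page
assert ftl.read(0, flash) == b"v2"
[/code]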
 
I don't think RS codes make a lot of sense; the errors aren't really bursty ... the bit flips can be handled by the Hamming/whatever code built into the flash, and block-level errors are cheaper to handle with RAID. Because of the way flash is written, I doubt there is much middle ground between "basically fine with the occasional bitflip" and "completely fucked" where RS would make sense.
 
I don't think RS codes make a lot of sense; the errors aren't really bursty ... the bit flips can be handled by the Hamming/whatever code built into the flash, and block-level errors are cheaper to handle with RAID.

RS codes can be used in interleaved form for block-level error correction.
 
You don't really want to have to read/modify/write all the interleaved blocks just because you want to reuse one of them. You want something that is both commutative and reversible ... I don't really see an alternative to the humble XOR.
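To spell out the property (a toy sketch with a plain XOR parity and made-up block sizes, not any particular drive's scheme): because XOR is its own inverse, the parity can be updated from just the old and new contents of the one block being rewritten, without touching its siblings.
[code]
import os

BLOCK = 16
data = [bytearray(os.urandom(BLOCK)) for _ in range(4)]

def xor_blocks(a, b):
    return bytearray(x ^ y for x, y in zip(a, b))

# Build the parity once over all four data blocks
parity = bytearray(BLOCK)
for d in data:
    parity = xor_blocks(parity, d)

# Rewrite block 2 without re-reading blocks 0, 1 and 3:
# parity' = parity ^ old_data ^ new_data
new_block = bytearray(os.urandom(BLOCK))
parity = xor_blocks(xor_blocks(parity, data[2]), new_block)
data[2] = new_block

# Sanity check: recomputing from scratch gives the same parity
check = bytearray(BLOCK)
for d in data:
    check = xor_blocks(check, d)
assert check == parity
[/code]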
 
Personally I've never been a big fan of these smart controllers ... I'd rather have had a huge sea of flat-addressed flash, with all the warts visible. Maybe a bad-block list from the factory, but that's about all.

Put all the smarts in software ... our processors are more than capable of handling it.

I used to think that, but in practical terms this would give you something like a giant crappy USB drive on just about any OS installation out there, dipping to floppy-drive speeds when writing small files (on an OS which doesn't have the smarts).

I proposed your very idea on that forum, and the answer was that the controller has data locality going for it. It may not be fun for all your blocks to transit through a PCIe x1 or SATA connection and be shuffled back and forth for reordering and all that. A drive can also be expected to be talked to through a USB2 interface (30MB/s sustained).

The software side would be very problematic, given all the possible hardware combinations and interfaces. You might have to flip a jumper on the SSD to switch between a compatibility (slow) mode and a mode that can address individual flash chips.
You could compile kernel modules and use custom filesystems on Linux... but you'd have to wait for Windows 7 SP3 as the only Microsoft OS to support it, with no guarantee for future stuff.

I believe the idea is workable for an embedded computer, but not for storage that has to work on every piece of hardware (PC and non-PC) and software (from XP to Windows 8, everything in between, and any other stuff out there).

Possibly, all of that would be an argument for DIMM-like slots on motherboards, where you plug flash sticks in ;). PCIe SSDs (in netbooks and in high-end products like Fusion-io) are halfway (or quarter-way) between SATA and that.
 
You don't really want to have to read/modify/write all the interleaved blocks just because you want to reuse one of them. You want something that is both commutative and reversible ... I don't really see an alternative to the humble XOR.

I think there are several methods. For example, in a 6-device case, devices 0~3 store normal data and devices 4 and 5 store RS redundancy codes. It can work RAID-style, i.e. the block size is 4 times as large and every write goes to all devices. The downside is that the block size may be a little too big, although it has the upside of much better read/write bandwidth.

Another method is to use the normal block size but still dedicate devices to RS redundancy codes. This way, only the devices that need to be updated, plus the redundancy devices, have to be written. To avoid wear-leveling difficulties on the devices storing redundancy codes, the redundancy codes can be distributed across all devices (e.g. for block 0, devices 0~3 store normal data and devices 4 and 5 store redundancy codes; for block 1, devices 1~4 store normal data and devices 0 and 5 store redundancy codes; and so on), as sketched below.
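A quick sketch of that rotating layout (illustrative only, using the device counts from the example):
[code]
# 6 devices, 4 data + 2 redundancy per block, with the positions
# rotated by one device for each successive block.
N_DEVICES = 6
N_REDUNDANCY = 2

def stripe_layout(block_index):
    # Return (data_devices, redundancy_devices) for a given block.
    shift = block_index % N_DEVICES
    devices = [(shift + i) % N_DEVICES for i in range(N_DEVICES)]
    return devices[:-N_REDUNDANCY], devices[-N_REDUNDANCY:]

for b in range(3):
    data_devs, red_devs = stripe_layout(b)
    print(f"block {b}: data on {data_devs}, redundancy on {red_devs}")
# block 0: data on [0, 1, 2, 3], redundancy on [4, 5]
# block 1: data on [1, 2, 3, 4], redundancy on [5, 0]
# block 2: data on [2, 3, 4, 5], redundancy on [0, 1]
[/code]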
 
Personally I've never been a big fan of these smart controllers ... I'd rather have had a huge sea of flat-addressed flash, with all the warts visible. Maybe a bad-block list from the factory, but that's about all.

Put all the smarts in software ... our processors are more than capable of handling it.
Do you include Reed-Solomon error correction in that as well? CPUs don't really have efficient support for Galois field maths.
 
Actually, Windows has installable filesystems.
Yes, it does. Ever tried using ext2 on Windows? It might work, but not thanks to M$.

Then again, ever tried to format a drive with W7? It's like: "Please use our recommended new filesystem and fork over the cash to upgrade every computer you might want to be able to read/write that drive :)"
 
Do you include Reed-Solomon error correction in that as well? CPUs don't really have efficient support for Galois field maths.
Can't encoding be done with an FNT (relatively efficiently)? Decoding only has to be done if the small block codes fail (which you'd detect with a CRC/hash), which would hopefully make it non-critical.
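Even without special instructions, the usual software fallback for the GF maths is log/antilog tables, which is cheap per byte. A minimal sketch of a GF(2^8) multiply (the 0x11d polynomial is just a common choice in RS implementations):
[code]
# Build exp/log tables for GF(2^8) over x^8+x^4+x^3+x^2+1 (0x11d)
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:          # reduce modulo the field polynomial
        x ^= 0x11d
for i in range(255, 512):  # duplicate so log sums need no modulo
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    # Multiply in GF(2^8): add discrete logs, look up the antilog
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

assert gf_mul(1, 0x53) == 0x53   # 1 is the multiplicative identity
assert gf_mul(2, 0x80) == 0x1d   # x * x^7 = x^8 = x^4+x^3+x^2+1
[/code]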
 
Err, AFAIK the SF1200 is very close to the Intel G2 price/GB-wise; the SF1500 falls short but has plenty of performance to back it up.

And most interestingly, SF1500 models with SLC are WAY cheaper than the X25-E model at the same capacity ($550 vs. $790 for 54GB).

Do you have any released SandForce drives to use in comparisons? The Intel drives are out right now and have known performance and lifetimes.

The Vertex drives are $200 for a 60GB drive. I don't see the new SandForce Vertex 2 50GB drives costing that $200. Not only that, but we know the 60GB Vertex 1s can drop in price even more, as they were selling for as low as $170 back in June.
 