SSDs: there yet, and what is what?

Can't encoding be done with an FNT (relatively efficiently)? Decoding only has to be done if the small block codes fail (which you'd detect with a CRC/hash), which would hopefully make it non-critical.
You'll have to elaborate I'm afraid. Searching for FNT just suggested "Freestyle National Tour" and "Friday Night Theology" :(
 
It's an interesting read, thanks! :)
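(FNT above presumably means the Fermat Number Transform: a number-theoretic transform over a Fermat prime, which gives fast integer-only convolutions for RS encoding. A minimal sketch over F4 = 2^16 + 1, using the naive O(n²) transform just to show the round trip; the length-8 transform and sample data are arbitrary:)

```python
P = 65537                      # F4 = 2**16 + 1, a Fermat prime

def ntt(a, root):
    """Naive O(n^2) number-theoretic transform mod P."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, P) for j in range(n)) % P
            for i in range(n)]

def intt(a, root):
    """Inverse transform: evaluate at root^-1 and scale by n^-1 mod P."""
    n = len(a)
    inv_n = pow(n, P - 2, P)        # Fermat's little theorem for inverses
    inv_root = pow(root, P - 2, P)
    return [v * inv_n % P for v in ntt(a, inv_root)]

N = 8
g = pow(3, (P - 1) // N, P)    # 3 is a primitive root mod 65537
data = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(data, g), g) == data
```

A real implementation would use the butterfly (FFT-style) recursion for O(n log n), but the modular arithmetic is the same.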

However, for normal SSDs the number of devices is generally not very big (32 devices would already be pretty enormous).
 
Doesn't really matter; it's not like you can use the page size as the symbol size. You have to chop the data up into smaller symbols before encoding, which is just as well ... if you are going to use RS you might as well get full use out of it and let it correct intra-page errors as well.

BTW, writing all the pages across the devices as a single super-page is the only practical way to use RS for correction at the device level ... turning every page write into numdevices reads is not an option. The size would be annoying, but it's not a huge problem ... translating filesystem blocks to pages in a 1:1 manner is a recipe for disaster to start with. You'd collect changes into a serial log first, which makes even very large page sizes usable (although over-enthusiastic sync'ing programs might simply have to be ignored to protect the drive).
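(The serial-log idea above can be sketched like this; the page size, device count, and `flushed` list standing in for the actual device writes are all illustrative assumptions:)

```python
class SuperPageLog:
    """Collect filesystem-block writes into a serial log and emit whole
    super-pages (one page per device, written together) when full."""

    def __init__(self, page_size=4096, num_devices=5):
        self.super_page = page_size * num_devices
        self.buf = bytearray()
        self.flushed = []          # stands in for the actual device writes

    def write(self, block: bytes):
        self.buf += block
        while len(self.buf) >= self.super_page:
            self.flushed.append(bytes(self.buf[:self.super_page]))
            del self.buf[:self.super_page]

    def sync(self):
        # Pad out and flush whatever is pending. A real drive would
        # rather rate-limit over-enthusiastic sync() callers, as
        # suggested above, since every sync burns a whole super-page.
        if self.buf:
            self.buf += b"\x00" * (self.super_page - len(self.buf))
            self.write(b"")        # re-check the flush condition
```

This is why even very large page sizes stay usable: writes always land as full, sequential super-pages regardless of the filesystem's block size.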
 
Doesn't really matter; it's not like you can use the page size as the symbol size. You have to chop the data up into smaller symbols before encoding, which is just as well ... if you are going to use RS you might as well get full use out of it and let it correct intra-page errors as well.

I was thinking about intra-page protection rather than inter-page protection (inter-page correction is generally already done by the devices), so the symbol size of RS codes can be pretty small.
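(Concretely, with byte symbols over GF(2^8) a codeword maxes out at 255 symbols, so a page has to be chopped into several codewords. A sketch, where the 4 KiB page size and RS(255, 223) parameters are assumptions picked for illustration:)

```python
PAGE_SIZE = 4096   # hypothetical flash page size in bytes
K, M = 223, 32     # data/parity symbols per codeword: assumed RS(255, 223)

def chop_page(page: bytes):
    """Split a page into K-byte chunks, each the data part of one RS codeword."""
    assert len(page) == PAGE_SIZE
    chunks = [page[i:i + K] for i in range(0, PAGE_SIZE, K)]
    # The last chunk comes up short and would be zero-padded before encoding.
    chunks[-1] = chunks[-1].ljust(K, b"\x00")
    return chunks

chunks = chop_page(bytes(PAGE_SIZE))
# 4096 bytes / 223 -> 19 codewords per page; appending M parity bytes to
# each lets intra-page errors be corrected codeword by codeword.
```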
 
Hmm? I thought you were suggesting protecting against device failure with RS codes?

Yes, in a RAID-like configuration. For example, 3 data devices plus 2 devices for RS codes, so you can have up to 2 devices fail without any data loss.
 
Personally I've never been a big fan of these smart controllers ... I'd rather have had a huge sea of flat addressed flash, with all the warts visible. Maybe a bad block list from the factory, but that's about all.

Put all the smarts in software ... our processors are more than capable of handling it.

Me too. Present-day file systems have been designed to hide the latencies of mechanical disks, although a big change in filesystems is already underway. It really makes no sense to have two uncoupled layers of logic, one in the file system and one in the controller, both trying to increase overall disk performance. These "controllers" are pretty lame ARM cores anyway (compared to desktop/laptop CPUs), so the advantage of having two cores is minimal.
 
Just following up on the benchmarks I did earlier, now that I've received my replacement.
I got them to send an Intel 80GB G2 instead of another Agility (ZOMG, courier left the package in the letterbox all day!)

MB/s: Read|Write
Sequential: 249|60
Random 4k: 19|33

The drive that got 54|54 previously got
Sequential: 64|56
Random 4k: 0.62|1.49

Random test takes absolutely ages at this speed so I'm only going to do this on the one spinning drive.
I'd have re-run it on the 100GB partition, but it got used for a temporary Win7 install in the meantime, so it's giving lower sequential numbers (ugh, bad idea to have Windows & Games on separate partitions of the same drive apparently :oops:)
This was using AS SSD benchmark btw.
 