Solid state drives?

I was thinking that the usb thumbdrive will have an interesting metamorphosis into the future, where it not only encompasses local, rewritable storage, but also a local cpu. So you literally can carry your Mac or PC in your pocket. Then when you want to use it, all you have to do is plug it into a dumb terminal (or any usb-equipped computer, for that matter). That would supply the power/video/communication resources for your usb-computer. Your usb-PC would just appear as a virtual computer in a window or full-screen on the host computer desktop. That way, it will never matter if your personal computer is Windows (or one of the many flavors of it, for that matter), Mac, Linux, whatever, and you can use whatever native apps you choose for that environment. It all runs locally in solid-state on the usb stick, which sends out a standardized/universal video output that any host computer can accept and process.
 
There were some companies which produced products similar to what you described. Basically they are a "pocket PC" without keyboard, mouse, or monitor. They are not as small as a USB drive, but they are still quite small, normally about the same size as an external HDD.
 
I was thinking that the usb thumbdrive will have an interesting metamorphosis into the future, where it not only encompasses local, rewritable storage, but also a local cpu.

Some people might call this device a mobile phone and just forget about the usb-sticks...
 
Is it true that the speed of these SSDs is somewhat influenced by the available cpu power? I seem to remember that this was an aspect of IDE-based drives in the past. So, achieving modest disk access performance gains is usually not a problem, but reaching the ultimate throughput will require a healthy dose of cpu resources (usually not a problem with the latest cpu's at the latest clockspeeds, but could be a bottleneck on older cpu's). Is it that SATA-based drives were a bit more autonomous in doing what they do, or do they have fairly similar cpu requirements to the older IDE designs?

As for my own little ssd project on a mac mini, sometimes the file copy operations fly, but other times it really isn't much different than the original hdd. Thankfully, I haven't noticed any outright stalls. UI responsiveness is much improved in all instances, however. Menu response stays snappy (like reliving the os9 classic days) and applications start and quit pronto. It may be dipping into the vm (on ssd now, of course) to keep everything running, but it never feels like it. Essentially, page swaps are swift enough to feel as if in a state of persistent, infinite RAM. ;) On the face of it, that seems quite a bit more effective and inherently scalable than if I had simply upgraded my 512 MB RAM card to 1 GB.

I did notice that there is a substantial amount of CPU load when a file copy is chugging away at a decent pace. So that made me think that I am simply CPU-bound, kept from greater throughput just by the overhead of hosting the IDE communication. It is only a 1.2 GHz G4, so it wouldn't surprise me.

The highest disk throughput I have observed is about 18 MB/s. I know - that's not very high, compared to theoretical capability (but far higher than I have ever seen with the oem hdd). More typically, it hangs around 5-8 MB/s. The writes seem to hold their own, though (given that writes have been cited as an ssd Achilles' heel). Ironically, the reads have been consistently slower (but not necessarily slow) than the writes, whatever the workload. :confused:

Transfers between usb thumbdrive and ssd were a bit underwhelming, though (whereas that should be the theoretical ssd nirvana case). Maybe it is just the limit of the thumbdrive (or just CPU overhead, as mentioned before), as well? It was about 5 MB/s.

The $50 usb external hdd I also got (to supplement my primary storage) is impressively speedy, though. I thought it would just be mediocre laptop-grade performance, at best, given the price. However, that was where I first observed sustained 18 MB/s throughput in conjunction with the ssd. Large-size files are definitely favored over many tiny files, when it comes to exploring throughput ceilings.
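If anyone wants to reproduce the large-vs-small-file comparison, something along these lines would do it (a rough Python sketch; the paths and sizes are just examples, and the fsync is there so the OS write cache doesn't flatter the result):

import os, time

def write_mb_per_s(path, num_files, file_size):
    # Write num_files files of file_size bytes each and report MB/s.
    data = os.urandom(file_size)
    start = time.time()
    for i in range(num_files):
        with open(f"{path}/f{i}.bin", "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force it out of the OS write cache
    return num_files * file_size / (time.time() - start) / 1e6

# One 256 MB file vs. 4096 files of 64 KB each (same total amount of data).
print("one big file :", write_mb_per_s("/Volumes/SSD/test", 1, 256 * 1024 * 1024), "MB/s")
print("many tiny    :", write_mb_per_s("/Volumes/SSD/test", 4096, 64 * 1024), "MB/s")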
 
Long-term performance analysis of Intel Mainstream SSDs

http://www.pcper.com/article.php?aid=669&type=expert&pid=1

Until Intel tweaks their write combining algorithms and revises their released firmware, there are ways to minimize your chances of falling into the fragmentation black hole.[...]Even with very little deliberate write activity it is still possible to overly fragment the drive over time.[...]Hopefully Intel can further tweak their algorithms with a future firmware update to the X25-M. In the meantime, we hope our suggestions keep your SSD on the speedier side of things.

Jawed
 
The introduction of combined writes has added a completely new dynamic to the mix, meaning that getting accurate real-world figures from an X25-M presents the same problems faced by physicists attempting quantum measurement. Since this particular SSD ‘adapts’ to usage patterns, the mere act of benchmarking it results in an altered outcome.
Spooky
 
I have to say that article essentially paints the X25-M as useless. You're paying a massive premium for performance that rapidly diminishes and is almost impossible to retrieve.

What's more worrying is that other SSD manufacturers will probably move in the direction of Intel's algorithm. Since this algorithm is fundamentally broken and purely an exercise in marketing, sigh ("massive performance*"), SSDs are looking like they're years off being useful.

Windows 7 it seems won't have all the fancy command support needed to work around these issues.

Jawed

* until after you've used it for a while for anything other than benchmarking
 
I don't see why not. I also don't see why Intel couldn't release a defragmenter that worked at the low level to garbage collect and compact data on the drive, regaining all the lost performance.
 
homerdog said:
Could this be fixed with firmware?

There's no real way to fix this with firmware alone; the OS will have to incorporate some form of TRIM command in order to tell the SSD when it's safe to garbage collect unused cells. Things should be better on Windows 7.
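A toy way to see why that matters (purely illustrative, made-up numbers, and nothing to do with Intel's actual firmware): without TRIM, pages belonging to files the OS has deleted still look valid to the controller, so garbage collection has to copy them around before it can erase a block.

# Toy model of SSD garbage collection with and without TRIM.
PAGES_PER_BLOCK = 128  # e.g. a 512KB erase block of 4KB pages

def pages_copied_before_erase(block, trim_aware):
    # Each page is 'live', 'deleted' (by the OS), or 'stale'
    # (superseded by a newer copy written elsewhere).
    if trim_aware:
        # With TRIM the controller knows OS-deleted pages are garbage too.
        return sum(1 for p in block if p == 'live')
    # Without TRIM, deleted-but-not-yet-overwritten pages look valid
    # and get copied along with the live ones.
    return sum(1 for p in block if p in ('live', 'deleted'))

# A block where the OS deleted most of the data but little was overwritten:
block = ['live'] * 16 + ['deleted'] * 100 + ['stale'] * 12
assert len(block) == PAGES_PER_BLOCK

print("copied before erase, no TRIM:  ", pages_copied_before_erase(block, False))  # 116
print("copied before erase, with TRIM:", pages_copied_before_erase(block, True))   # 16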
 
Would increasing the cluster size in the disk partition help? Instead of the default 4KB cluster, use the maximum, e.g. a 64KB cluster? Obviously the latter wastes some space, but it minimises the number of files that can share a 512KB block.
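Back-of-the-envelope, assuming the oft-quoted 512KB erase block (quick sketch):

ERASE_BLOCK = 512 * 1024                  # assumed flash erase block size
for cluster in (4 * 1024, 64 * 1024):
    files_per_block = ERASE_BLOCK // cluster
    print(f"{cluster // 1024}KB clusters: up to {files_per_block} files per erase block,"
          f" ~{cluster // 2 // 1024}KB average slack per file")
# 4KB clusters: up to 128 files per erase block, ~2KB average slack per file
# 64KB clusters: up to 8 files per erase block, ~32KB average slack per file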

Jawed
 
I am happy I got a Raptor (300GB) again instead of being tempted by an SSD now.

Agreed. I have 2 x 150 gig ones in RAID 0...are you talking about the Velociraptor? That is one of my impending upgrades actually...those drives are the bee's knees!
 
Well I've been playing with a pair of OCZ Core2's hung off a semi-competent RAID controller (Dell PERC6/E) in RAID0 versus a pair of 150GB Velociraptors on the same controller. My interest was in testing how the throughput scales with an increasing number of processes accessing the array in parallel. For read-only operations the SSDs win by about a factor of two in throughput. For mixed read/write though the VRs win by a factor of three, despite the RAID controller having a decent sized cache with which to coalesce the writes.

My conclusion from this testing is that, currently, cheapo SSDs aren't a replacement for a semi-cheapo performance HD for my workload. Maybe some of my read-mostly databases could go on them, but for general mixed operations they're not worth the price premium.
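If anyone wants to try something similar, here's a rough Python sketch of the idea (paths, sizes and read/write mix are placeholders, not my exact setup): each extra worker process adds another stream of random 64KB requests against one big pre-created file, and you watch how the aggregate rate scales.

import os, time, random
from multiprocessing import Pool

PATH = "/mnt/array/testfile"     # example path to a big pre-created file
FILE_SIZE = 8 * 1024**3          # assume an 8 GB test file
IO_SIZE = 64 * 1024              # 64 KB per request
OPS_PER_WORKER = 2000

def worker(write_fraction):
    # Each process issues random 64KB requests, mixing in some writes.
    fd = os.open(PATH, os.O_RDWR)
    buf = os.urandom(IO_SIZE)
    start = time.time()
    for _ in range(OPS_PER_WORKER):
        os.lseek(fd, random.randrange(0, FILE_SIZE - IO_SIZE, IO_SIZE), os.SEEK_SET)
        if random.random() < write_fraction:
            os.write(fd, buf)
        else:
            os.read(fd, IO_SIZE)
    os.close(fd)
    return OPS_PER_WORKER * IO_SIZE / (time.time() - start)

if __name__ == "__main__":
    for nproc in (1, 2, 4, 8, 16):
        with Pool(nproc) as pool:
            rates = pool.map(worker, [0.3] * nproc)   # 30% writes, 70% reads
        print(nproc, "procs:", round(sum(rates) / 1e6, 1), "MB/s aggregate")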
 
Agreed. I have 2 x 150 gig ones in RAID 0...are you talking about the Velociraptor? That is one of my impending upgrades actually...those drives are the bee's knees!

Yeah, they are getting cheaper, probably thanks in large part to SSDs.
 