Solid state drives?

Here's a question: with RAM being cheap, why are we having to make do with drives with only 16 MB of cache? Why not a gig?

The disk buffer is not really a cache in the usual sense; it's there to buffer data going to and from the hard drive, so that the IDE/SATA interface on one side and the physical read/write head on the other side don't need to operate in exact lockstep.

The disk buffer may do read-ahead/read-behind caching - if the read/write head happens to be located above a data track but not above data requested by the OS (e.g. it has completed a seek to a track containing the requested data, but the actual data are still half a rotation away), the read/write head will still be picking up data; these data can then be cached by the disk buffer, in anticipation of the OS requesting them in the near future.

For these uses, a 16 Mbyte buffer is basically enough; larger buffers suffer diminishing returns. General-purpose disk caching can easily benefit from much larger caches; however, such a cache is better left under the control of the OS, which can make far more informed choices than the hard disk itself can. Also, a cache located in system RAM is going to be much faster than a cache located on the hard disk itself.
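
To make the read-ahead idea concrete, here's a toy sketch in Python (the sector counts, buffer size, and function names are made up for illustration; real drive firmware is far more involved):

# Toy model of a drive's read-ahead buffer. After serving a request, the
# "head" keeps picking up the rest of the track into a small buffer in case
# the OS asks for those sectors next.

TRACK_SECTORS = 1000            # assumed sectors per track (made-up figure)
BUFFER_LIMIT = 4096             # assumed cap on buffered sectors (made-up figure)
read_ahead_buffer = {}          # sector number -> data

def platter_read(sector):
    """Stand-in for a slow physical read (seek + rotational latency)."""
    return f"data-{sector}"

def drive_read(sector):
    if sector in read_ahead_buffer:            # fast path: no mechanical work
        return read_ahead_buffer.pop(sector)
    data = platter_read(sector)                # slow path
    # Opportunistically buffer the rest of the track as it passes under the head.
    track_end = (sector // TRACK_SECTORS + 1) * TRACK_SECTORS
    for s in range(sector + 1, track_end):
        if len(read_ahead_buffer) >= BUFFER_LIMIT:
            break
        read_ahead_buffer[s] = platter_read(s)
    return data

print(drive_read(42))   # slow: reads from the platter, buffers sectors 43..999
print(drive_read(43))   # fast: served straight from the read-ahead buffer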
 
Yes, the cache (buffer) on the cheap SSDs is an internal buffer within the JMF602 controller, and it's only 16 KB! That's why they have such stutter/pausing problems on writes.

The new Intel SSDs have 256 KB cache and a far better controller.

OCZ (Core and Solid series) use only the JMF602 and suffer badly from stuttering.

The new OCZ Vertex series is supposed to use a 32 MB buffer and a far better controller; I'm really looking forward to getting one to try out.
 
Thank you for the feedback on real experiences with SSDs. The chipset is a new aspect compared with what I have read so far (admittedly, not much). I guess the prognosis doesn't look good for me.

Can you guys elaborate a bit more on what your experiences have been in the "nightmare" scenario? What was the first kind of task where you knew problems were afoot? Was it OS installation, restoring a system image, something after your first trip past the login screen? Could it potentially be a different situation using a non-Windows OS? Was there a stall on literally everything you tried to do on your computer, or were there specific kinds of operations that aggravated the problem?

Is it basically a matter of verifying that the SSD has a certain chipset or a certain amount of buffer memory, which determines whether the experience will be positive, and does that naturally point to a pricey selection of SSDs?
 
Yes, the cache (buffer) on the cheap SSDs is an internal buffer within the JMF602 controller, and it's only 16 KB!
In and of itself that shouldn't be a problem; bus-mastering DMA isn't exactly new.

If it can only receive new data on interrupt, that would be a problem.
 
Thank you for the feedback on real experiences with SSDs. The chipset is a new aspect compared with what I have read so far (admittedly, not much). I guess the prognosis doesn't look good for me.

Can you guys elaborate a bit more on what your experiences have been in the "nightmare" scenario? What was the first kind of task where you knew problems were afoot? Was it OS installation, restoring a system image, something after your first trip past the login screen? Could it potentially be a different situation using a non-Windows OS? Was there a stall on literally everything you tried to do on your computer, or were there specific kinds of operations that aggravated the problem?

Is it basically a matter of verifying that the SSD has a certain chipset or a certain amount of buffer memory, which determines whether the experience will be positive, and does that naturally point to a pricey selection of SSDs?
Basically anything which required random writes would cause JMicron controller based SSDs to freeze for several hundred milliseconds. Visit a webpage, browser writes to cache. Stutter. Windows writes memory to page file. Stutter. Try to open an application. Stutter.

And so on.
 
Wow!... the pricing on those Mtron drives is no joke! :eek:

Is it possible that the solid-state memory itself is actually not so expensive, and that putting in a controller that is robust and will actually keep up with the extreme performance of the memory is the really costly part of the whole device?
 
I want one of these: £225

The SATA2 DDR2 HyperDrive5 features...

>Read and write access time in microseconds.
>Sustained Read Rate of 175 MB/s.
>Sustained Write Rate of 145 MB/s.
>WD740ADFD Raptor manages 77 MB/s.
>It is a Bootable SATA2 Disk.
>It connects just like a Hard Disk or CDROM.
>CDROM drive form factor, fits into a 5.25" CD bay.
>It does not require any drivers.
>Silent Solid State technology.
>Takes 8 DDR2 Kingston ValueRAM memory sticks.
>Up to 8GB per stick making 64GB Max capacity.
>External power adapter keeps data when PC is off.
>Battery backup to keep data in a power cut.
>Start with 2GB and build up to 16/32/64GB.
>Formats Instantly
>Never needs to be defragged. It's Random Access!
>Gives the user an instant desktop.
>Fires up Windows XP in 4 seconds!

I'd rather have someone gather the balls and build a DRAM equivalent of Fusion-io. An order of magnitude more bandwidth, saturated with DRAM, is way better than these half-hearted efforts to speed up day-to-day computing. OK, I may not be able to boot from it, but it would be way better on latency and bandwidth.
 
Basically anything which required random writes would cause JMicron controller based SSDs to freeze for several hundred milliseconds. Visit a webpage, browser writes to cache. Stutter. Windows writes memory to page file. Stutter. Try to open an application. Stutter.

And so on.

I installed the Windows 7 beta on an SSD with a JMicron controller, and it doesn't stutter that badly. Of course, I don't put the page file on the SSD, because that's basically a bad idea. I also don't have a heavy usage pattern, since it's just for beta testing. Also, Windows 7 seems to have some serious write caching. I did encounter some evidence of the stutter problem when running HD Tune's random access test; in a few runs a single random access took in the range of 500 ms.

Of course, this is still a serious problem, because good random access time is practically the only performance advantage an SSD has. Therefore, I'd advise against buying these products before it's solved. JMicron claimed they solved this problem in their 3rd-generation controller (used in some new 256 GB SSDs), but I've yet to see independent confirmation. So if you want to buy an SSD for better performance, do not buy one with a JMicron controller (yet).
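
If you want to poke at your own drive, here's a rough sketch of that kind of random-access test in Python (the file path is a placeholder; you'd first create a large test file on the SSD, and without direct I/O the OS page cache will hide some of the drive's real behaviour):

# Rough random-access read latency test, in the spirit of HD Tune's test.
# Not a calibrated benchmark: OS caching will flatter the results somewhat.
import os, random, time

PATH = "testfile.bin"     # placeholder: a large, pre-created file on the SSD
BLOCK = 4096              # 4 KB accesses
SAMPLES = 1000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
    start = time.perf_counter()
    os.lseek(fd, offset, os.SEEK_SET)
    os.read(fd, BLOCK)
    latencies.append((time.perf_counter() - start) * 1000.0)   # milliseconds
os.close(fd)

latencies.sort()
print(f"median {latencies[len(latencies) // 2]:.2f} ms, "
      f"99th percentile {latencies[int(len(latencies) * 0.99)]:.2f} ms, "
      f"worst {latencies[-1]:.2f} ms")   # stutter shows up as outliers hundreds of ms long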
 
There are precious few SSDs that don't use the JMicron controller, though. The Intel and Samsung SSDs don't. The high-end OCZ Core series also doesn't, I think.
 
Send me the money and I'll build it.
I have looked at how much this would cost as a hobbyist project; the problem is the up-front licensing costs for the PCI-Express cores ... there is just no way to do a small run at relatively low cost (unless you can develop a PCIe core yourself in a realistic time frame). You can use a PCI-Express-to-PCI bridge, but that's fugly, costly, and high latency.

It's a pity there are no cheap bridges from PCI-Express to something sane and low-complexity like RapidIO.
 
So I take it nobody here has actually tried out the SSD-as-system-HDD idea, just to play around with it and see how well it works in practice? I figured somebody must have tried it for kicks by now. ;)

I have one and it works splendidly for my car PC.

Btw, AnandTech has a write-up about the problems, and there was also a response by JMicron saying there was an easy fix, but it was hardware, so people are screwed. Something to do with the total cache or the like. I tried to Google it, but I did not find it. You could probably find it if you took the time, though.

Oh, I found it:
DailyTech. It could all be bogus, but it is worth checking.

When did you first find out about the write latency issue?

We have been developing SSD technology since 2006 and launched our first-generation SSD controller, the JMF601A/602A, at the end of 2007. It soon attracted the attention of SSD makers because of its feature set and high performance. We found the write latency issue around March 2008. The issue only happens under a special condition, when the drive is close to full and the host keeps writing data to it. It takes time to do internal garbage collection, data merging, and housekeeping.


What did you do to solve it?

We revised the hardware architecture and launched the JMF601B/602B in June 2008; the JMF601A/602A became the old version once the B version was available. Currently, all JMicron customers are using the latest version, including ASUS NB/EeePC, OCZ, Super Talent, Transcend, etc. The B version improves the write latency a lot. JMicron can also reserve more spare blocks to alleviate the issue, but because reserving more spare blocks would decrease the drive capacity, most SSD makers tend not to enlarge the spare area.
 
From what I've seen, everything OCZ produces uses the JMicron controller. The difference in stuttering is between their MLC and SLC series. Their newest product, the OCZ Apex, uses two of the newer controllers along with a RAID-0 controller chip. It was reviewed recently by PC Perspective.

Probably the most interesting part of the OCZ Apex drive is the fact that it uses not one, but rather a pair, of the JMicron JMF602 controllers. Yes, these controllers are somewhat notorious for performance issues including stuttering and pausing but they are cheap and allow OCZ to offer drives with larger capacity than what MLC Samsung-based drives and Intel-based drives offer. (Also note these are the newer JMF602B versions that are slightly improved overall.) In order to attempt to address these performance complaints the Apex series uses a PAIR of the controllers with an internal RAID-0 configuration.

Also, it seems even the Core series, at least revisions 1 and 2, had severe stuttering problems, as discussed in this thread.
 
The Samsung MLC flash drives also suffer from the stuttering issue and I believe they use their own controller.

From what I've seen, only the Intel MLC drives so far reduce the stuttering issues. From what I've heard, pathologically heavy write cases can still overwhelm the Intel controller and cause it to stutter, but that's far outside the range most computer users would put it through.

Cache will help alleviate it, but the root cause of the problem is garbage collection/cleanup on the MLC flash chips.

I believe Intel uses a three-pronged approach to solving this: 1. A large cache. 2. A large portion of MLC flash dedicated to garbage collection/cleanup (i.e. unformatted space). 3. Special usage algorithms in its controller.

It's one reason you see 120 gig MLC drives which actually use 128 MB of flash. It helps a bit, but they don't reserve nearly enough space to resolve most stuttering issues. 8 MB just isn't enough.

As for the usage case I had: it was really rather speedy until a certain amount of writes happened; then garbage collection/cleanup kicked in, random write performance went to hell, and you'd get system pauses of up to 10-20 seconds.

Running games and whatnot "usually" wasn't bad since those were located on another (mechanical) drive.

The absolute WORST cases of stuttering and pausing were caused by installing any program. Even Windows Update would cause massive pauses due to prolonged writes to the drive that quickly overwhelmed what little flash was reserved for garbage collection/cleanup.

SLC flash drives really don't suffer from this problem. At least not nearly to the extent that MLC flash does.

It's been postulated in some storage forums that if the OS were capable of sending an instruction to clear a cell on a flash device when a file is deleted, this wouldn't be nearly the issue that it is.

The problem isn't that writes are inherently slow, but that clearing a cell is extremely slow. And flash drives don't clear a cell until garbage cleanup runs or they need to write to a cell that is no longer "used" because the file was deleted.
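
To put rough numbers on that, here's a back-of-envelope sketch in Python; all of the timing figures are assumptions picked to be in the right ballpark for MLC NAND of that era, not datasheet values:

# Why an in-place update hurts: programming a page is fairly quick, erasing a
# block is slow, and having to read-modify-write a whole block for one small
# update is slower still. All numbers below are assumed, for illustration only.
PAGES_PER_BLOCK = 128
PAGE_READ_US    = 50      # assumed page read time (microseconds)
PAGE_WRITE_US   = 900     # assumed MLC page program time
BLOCK_ERASE_US  = 3000    # assumed block erase time

# Case 1: a pre-erased page is available, so a 4 KB write is just one program.
fresh_write_us = PAGE_WRITE_US

# Case 2: no pre-erased space, so the drive must read the whole block,
# erase it, and write everything back just to change one page.
rmw_us = PAGES_PER_BLOCK * PAGE_READ_US + BLOCK_ERASE_US + PAGES_PER_BLOCK * PAGE_WRITE_US

print(f"fresh write: {fresh_write_us / 1000:.1f} ms")
print(f"read-modify-write: {rmw_us / 1000:.1f} ms ({rmw_us / fresh_write_us:.0f}x slower)")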

Regards,
SB
 
You mean gig instead of meg? Or are you talking at a per-chip level, so it's meg? I'm a bit surprised that 8 gig out of 128 gig wouldn't be enough for garbage collection/cleanup in typical scenarios.
 
Once stuff like indexing, precaching, the NTFS last-accessed attribute, etc. is turned off, I have found stuttering to be almost non-existent. I am running a cheap 32 GB MLC SSD as the OS drive; the page file is on a mirrored drive.

I have Oblivion installed on the SSD with a high-res texture pack. It takes about 2-3 seconds to load the world :D
 
From http://advancedstorage.micronblogs....izza-consolidation-ssd-performance-endurance/ :

Garbage Collection

NAND has some constraints that are not ideal for use as a storage medium in a storage device like an SSD. The storage area on a NAND device is broken into units called pages and blocks. A page is typically 4KB in size and a block is a group of pages (64 to 128 pages to a block for today’s NAND devices). In order to write data to a NAND device, it must be erased first. The smallest unit that can be erased is a block. Once the block is erased the pages can be written one at a time until the block is filled. It is undesirable to have to erase a block and move the data around on every single write that is received from the host because this process is slow—resulting in poor SSD performance. The process is referred to as “read-modify-write”.

In order to avoid performing read-modify-write procedures, modern SSDs will keep a pool of blocks pre-erased and ready for new data. When data is written to the same logical area repeatedly it is always written to a new physical area in the NAND. Along with the written data, a table that tells the controller where to locate the latest data is updated and the old locations are marked invalid. At some point the drive runs out of pre-erased blocks and must re-claim the areas marked invalid by the firmware. This process of reclaiming blocks is called garbage collection and SSDs must do it frequently or they will quickly run out of space.

To put this into an everyday example, imagine that you and your friends order two pizzas for dinner. The two pizzas arrive and soon everyone is busy moving slices of pizza onto their own plates. The only problem is there’s no room on the table for the requisite pitchers of beer. So you make the command decision to combine the remaining slices of pizza onto one pizza tray—creating new empty space on the table. Your friends pour their beer and applaud your sheer brilliance…as a garbage collector!
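
To make the mechanics above concrete, here's a minimal sketch of that scheme in Python; the geometry, the victim-selection policy, and the data structures are simplifying assumptions for illustration, not any vendor's actual firmware:

# Minimal flash-translation sketch: writes always land on a pre-erased page,
# the old physical copy is marked invalid, and garbage collection reclaims the
# block with the most invalid pages by copying out its valid pages and erasing it.
PAGES_PER_BLOCK = 4
NUM_BLOCKS = 8
FREE, VALID, INVALID = 0, 1, 2

state = [[FREE] * PAGES_PER_BLOCK for _ in range(NUM_BLOCKS)]   # physical page states
l2p = {}                # logical page number -> (block, page)
gc_runs = 0

def free_pages():
    return [(b, p) for b in range(NUM_BLOCKS)
            for p in range(PAGES_PER_BLOCK) if state[b][p] == FREE]

def write(lp):
    # Never overwrite in place: invalidate the old copy and use a fresh page.
    if lp in l2p:
        ob, op = l2p[lp]
        state[ob][op] = INVALID
    free = free_pages()
    if not free:
        garbage_collect()       # this is the slow step the host notices as a stall
        free = free_pages()
    b, p = free[0]
    state[b][p] = VALID
    l2p[lp] = (b, p)

def garbage_collect():
    global gc_runs
    gc_runs += 1
    # Victim = block with the most invalid pages; relocate its valid pages, erase it.
    victim = max(range(NUM_BLOCKS),
                 key=lambda b: sum(s == INVALID for s in state[b]))
    survivors = [lp for lp, (b, _) in l2p.items() if b == victim]
    state[victim] = [FREE] * PAGES_PER_BLOCK    # block erase (the slow part)
    for lp in survivors:
        del l2p[lp]
        write(lp)                               # rewrite surviving data elsewhere

# Keep overwriting a handful of logical pages until garbage collection must kick in.
for i in range(100):
    write(i % 6)
print(f"garbage collection ran {gc_runs} times; mapping: {l2p}")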
 