OCZ SSD sales increase 265%

The OCZ RevoDrives are out now and offer similar performance. The only problem is that they're PCIe x4 cards, and few motherboards have those slots.
Also, they're basically two SandForce drives in RAID, and RAID blocks the TRIM command.
HP also do a really nice 640 GB drive; the problem with that one is it's 12 grand.

PS: is there an idiot's guide to buying an SSD anywhere?

edit: found this:
The Crucial RealSSD C300 offers the best performance and the best performance per dollar per gigabyte. Be advised that the 64 GB unit is significantly slower than the 256 GB flagship when it comes to writing data. Clearly, the 256 GB Crucial drive is the very best choice for enthusiasts that can fork out $700 for a 6Gb/s SATA product and don’t care about average power consumption results. Be advised that you should be using an operating system that supports the TRIM feature to maintain high performance.
All SSDs based on the SandForce SF-1200 controller deliver great throughput, stellar I/O performance, and low power consumption. Right now, these seem to be the best mainstream options at 100 to 128 GB and $300 to $400.
Intel’s X25-M has been a great option for many months—and still is. Despite some write performance limitations, you’ll get a low-power, high-performance client SSD that still does well against modern competition.
Toshiba’s HG2 requires the least power to operate. Unfortunately, it’s painfully slow at I/Os and 4K random writes. Don’t choose it unless you specifically want low power and quick read operation.
Western Digital’s Silicon Edge Blue 256 GB is way too expensive, poor on 4K random writes, below average on PCMark Vantage, and inefficient. We expect more from WD.
Budget-sensitive users should consider the 64 GB Crucial RealSSD C300 and the Intel X25-V. Both lack write performance and have sporadic weaknesses, but they're still better client drives than the Toshiba or WD options. Be sure that you can live with the low storage capacity. These are designed as boot drives, after all.


At this time, it makes sense to purchase an SSD if you’ve been waiting for balanced and affordable products to become available. Drives like the OCZ Vertex 2 or the G.Skill don’t outperform the other drives, but they do well in all benchmarks and even deliver good bang for the buck.


We therefore grant our Recommended Buy Award to these two 100 to 120 GB products.


Crucial’s RealSSD C300 remains the very best choice these days after a slow start and with firmware version 002. But the other options are definitely all worth considering.
 
And can someone explain why it says it has 256K cache, then on the next line says it has 64 MB cache?
It doesn't really have x amount of cache per se, like regular HDDs do. Most of the on-board memory of these drives is for the use of the integrated microcontroller, storing program code and data, block remapping tables and so on.

An HDD needs cache to buffer data due to the mechanical components used in its design. Without cache, the drive would suffer greatly, particularly on random data accesses (not that it doesn't already, heh, but the results would be far, far worse). SSDs don't have mechanical parts, of course, so their need for data buffer caches is much smaller too.
 

That Corsair drive with the Indilinx controller seems a very good product.
The controller itself has had numerous improvements via firmware updates too.

That review you pointed to seems a bit bare, to be honest; more of an enthusiast's write-up than the work of someone with the tools and knowledge to probe performance thoroughly and compare against other drives.

For the latest info I find AnandTech to be reasonably decent, especially for news.
 
Thanks...
I used that website because it was the first to come up in Google when I typed in the drive model and the word "review".
 
I've been using an OCZ Vertex 1 120 GB SSD, which has the Indilinx controller, as my main OS drive for around two years now. I'm very happy with its performance. The Indilinx-based SSDs still hold their own very well against current-gen drives. Before that I ran everything from a single 7200 RPM SATA drive to RAID-0 7200 RPM SATA to RAID-0 10K RPM WD Raptors. I'm happier with the Indilinx SSD setup than with any of the others.
 
Here's a bit of an OT question about this drive, the HyperDrive 5:
http://www.hyperossystems.co.uk/

DDR2-800 has a transfer rate of 6400 MB/s, and it would be limited by the SATA 2 interface anyway, so why are the read and write speeds only 175 MB/s and 145 MB/s (slower than many SSDs)?
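
The gap is presumably down to the drive's own SATA bridge/controller rather than the memory. A quick sanity check of the interface numbers quoted above (my own arithmetic, not vendor specs):

```python
# DDR2-800: 800 MT/s on a 64-bit (8-byte) memory bus
ddr2_800_mb_s = 800 * 8        # 6400 MB/s theoretical peak

# SATA 2: 3.0 Gb/s line rate, minus the 20% cost of 8b/10b encoding
sata2_mb_s = 3000 / 10         # 300 MB/s usable

print(ddr2_800_mb_s, sata2_mb_s)   # 6400 300.0
```

So even with SATA 2 as the ceiling, the drive should be able to hit ~300 MB/s; at 175/145 MB/s the bottleneck must be inside the drive itself.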

I tried benchmarking a RAM drive, but HD Tach won't see it.
 
AnandTech (and I hear PC Perspective, but I don't really go there) have previews of the new SandForce 1500 controller. It totally blows everything out of the water; it looks like a true generational leap over what came before it. Very interesting stuff, I must say...

If it wasn't for the still very bad price/capacity ratio of SSDs, I would say they're really starting to come into their own. Hard drive manufacturers should absolutely start to worry for real now, because if/when flash ICs begin dropping sharply in price they stand to lose a lot, since none of the remaining HDD manufacturers have any SSD presence whatsoever. Other companies stand ready to eat their collective lunches, so unless they want to become obsolete like typewriter manufacturers, they'd better get in the game, fast! :p
 
Unfortunately, I expect SSDs to go back up in price as the economy strengthens again and they have to compete with other chips for fab capacity.

An SSD contains quite an area of high-tech silicon, which could instead be sold as other chips with a higher profit margin. Flash chips are efficient and easy to design, but pretty big.
 
I think you mean the new SandForce SF-2000 family with the SF-2582/25xx controllers. The 1500 series is the same controller that's been out for some time now.

I am looking forward to seeing what the new SandForce SSDs and the new Intel SSDs offer. They both look to be amazing products.
 
Sorry to be slightly OT and ask a dumb question, but can anyone refresh me as to how fast the modern southbridge bus is? Basically, I've got 6 SATA 2 ports on the Intel controller on my board, so a modern SSD should saturate SATA 2 at 300 MB/s? So if I have two of those drives in RAID 3+, do I get a full 600 MB/s? What would be the general calculation for southbridge bandwidth? Or is it totally vendor-specific? Or is the whole system bus the same speed/width, with everything shared?

Anyway, I've never had an SSD; I've been waiting for the market to adjust. Are they really awesome? Like, equivalent-to-that-money-in-beer awesome? And would a 2-drive RAID setup be that much more awesome? And would it be worth getting a SATA 3 card for the additional headroom?

It's just been a long time since I seriously considered motherboard architecture, and at that time there was the 100 MHz/64-bit bus, double that for RAM, × whatever for the CPU, and whatever the AGP bus multiplier was. But obviously much has changed since then.
 
If it wasn't for the still very bad price/capacity ratio of SSDs, I would say they're really starting to come into their own. Hard drive manufacturers should absolutely start to worry for real now, because if/when flash ICs begin dropping sharply in price they stand to lose a lot, since none of the remaining HDD manufacturers have any SSD presence whatsoever. Other companies stand ready to eat their collective lunches, so unless they want to become obsolete like typewriter manufacturers, they'd better get in the game, fast! :p

Toshiba and Samsung produce most of the flash media (more than 75%); I'm not sure they have to worry about SSD competition.
 
Sorry to be slightly OT and ask a dumb question
There are no dumb questions, only dumb answers. Asking questions is a good thing. ;)

, but can anyone refresh me as to how fast the modern southbridge bus is? Basically, I've got 6 SATA 2 ports on the Intel controller on my board, so a modern SSD should saturate SATA 2 at 300 MB/s? So if I have two of those drives in RAID 3+, do I get a full 600 MB/s? What would be the general calculation for southbridge bandwidth? Or is it totally vendor-specific? Or is the whole system bus the same speed/width, with everything shared?
It mostly depends on two things: the number of flash chips and the speed of the controller(s). If you double the number of flash chips, you double the throughput, as long as the controller(s) can keep up. And you can add multiple SSDs, which does basically the same thing from the south bridge's point of view.

But you need very big and fast SSDs to saturate 300 MB/s. 200 MB/s for a single fast SSD is a good benchmark.
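
As a rough sketch of the RAID scaling question: reads scale roughly linearly with drive count until the shared chipset uplink saturates. The figures below are illustrative assumptions (the ~200 MB/s per-drive number from above, and a ~1000 MB/s shared uplink, which is in the right ballpark for chipsets of this era; check your board's documentation for the real number):

```python
PER_DRIVE_MB_S = 200    # assumed throughput of one fast SSD
UPLINK_MB_S = 1000      # assumed shared southbridge uplink bandwidth

def array_throughput(n_drives):
    # RAID-0 style striping scales roughly linearly
    # until the shared link becomes the bottleneck
    return min(n_drives * PER_DRIVE_MB_S, UPLINK_MB_S)

print(array_throughput(2))   # 400 MB/s: two drives, link not yet saturated
print(array_throughput(6))   # 1000 MB/s: capped by the shared uplink
```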

Anyway, I've never had an SSD; I've been waiting for the market to adjust. Are they really awesome? Like, equivalent-to-that-money-in-beer awesome? And would a 2-drive RAID setup be that much more awesome? And would it be worth getting a SATA 3 card for the additional headroom?
For now, SATA 2 is sufficient. They are as awesome as they are not because they're very fast at sequential transfers, but mostly because they have negligible seek times. While a regular hard disk counts the access time for each non-sequential sector in milliseconds, an SSD counts it in microseconds. Reading a lot of small files (or a large file on a fragmented disk) is just as fast as reading a single large, non-fragmented file from a Raptor.
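
To put numbers on why seek time dominates for small files, here's a back-of-envelope estimate of reading 1000 scattered small files, ignoring transfer time. The latency figures are typical illustrative values, not measurements of any specific drive:

```python
files = 1000
hdd_access_ms = 12.0   # typical 7200 RPM seek + rotational latency
ssd_access_ms = 0.1    # typical flash SSD random access (~100 microseconds)

hdd_total = files * hdd_access_ms / 1000   # total seconds spent seeking
ssd_total = files * ssd_access_ms / 1000

print(f"HDD: {hdd_total:.1f} s, SSD: {ssd_total:.1f} s")   # HDD: 12.0 s, SSD: 0.1 s
```

Twelve seconds of pure head movement versus a tenth of a second, before a single byte of payload is transferred.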

Windows pops up in 15-30 seconds after turning on the computer, applications start and are ready as soon as you click the shortcut.

But they don't help much (if at all) when you load a large document or game level from a freshly defragmented drive.

It's just been a long time since I seriously considered MB architecture, and at that time there was the 100Mhz/64 bit bus, double that for RAM, X whatever for CPU, and whatever the AGP bus multiplier was. But obviously much has changed since then.
SATA is really cool and works very well for anything available in the consumer space, as long as you don't consider the extremely badly designed connectors. ;)


The main thing to consider for SSDs isn't SATA or the chipset, but TRIM support, if you use Windows.

A drive has no notion of how the file system uses it; it doesn't know whether a logical cluster (4-64 KB) is empty or in use. And flash uses pretty big erase blocks (on the order of 512 KB). So if you change one byte, the OS writes a new logical sector (512 bytes), the file system writes a cluster (4 KB or more), and in the worst case the SSD has to rewrite a whole 512 KB erase block. That means the SSD has to relocate the other 508 KB of data as well (with a 4 KB cluster size). And the number of write/erase cycles per block is limited, on the order of 10,000-100,000 depending on the flash type.

So unless the SSD spreads writes over all blocks equally (wear levelling), defects occur early. Which means just about every write can require a relocation of up to 512 KB of data.
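
The worst case above is easy to quantify: changing one 4 KB cluster inside a full 512 KB erase block means copying everything else out and back. A quick calculation (using the example sizes from the paragraph above):

```python
cluster_kb = 4         # file system write unit
erase_block_kb = 512   # flash erase unit

relocated_kb = erase_block_kb - cluster_kb    # data that must be copied aside
amplification = erase_block_kb / cluster_kb   # flash written per cluster written

print(relocated_kb, amplification)   # 508 128.0
```

A worst-case write amplification factor of 128, which is exactly why wear levelling and free-block pools matter so much.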

Blocks are erased as a whole, but can be written per page (4 KB, or sometimes even smaller). If a block is full, the best the SSD can do is mark a page as stale and write the new data elsewhere, until it runs out of free space. After that, it has to compact and rewrite whole 512 KB blocks for every write of even a single byte, to make room.

Linux has native flash file systems. Most SSDs use a microcontroller (ARM, most likely) running firmware that does much the same job internally: a flash translation layer that manages the flash chips, usually spread across multiple channels (a bit like a RAID controller does for disks). And that works fine, as long as the host OS tells the SSD which logical sectors are empty and can be overwritten.

That requires the TRIM command, which simply tells the SSD "this sector is empty". TRIM support is a new addition in Windows 7.
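
A toy sketch of what TRIM buys the controller (this is a deliberately simplified model I made up for illustration, not real firmware): without TRIM, deleted data still looks "in use" to the drive's mapping table; TRIM lets the drive reclaim those pages early instead of copying them around during compaction.

```python
class ToyFTL:
    """Minimal stand-in for a flash translation layer's mapping table."""

    def __init__(self):
        self.mapping = {}   # logical sector -> flash page

    def write(self, sector, page):
        self.mapping[sector] = page

    def trim(self, sector):
        # The OS tells the drive this sector no longer holds valid data,
        # so its flash page can be reclaimed without being copied anywhere.
        self.mapping.pop(sector, None)

    def live_pages(self):
        return len(self.mapping)

ftl = ToyFTL()
for s in range(8):
    ftl.write(s, s)       # 8 sectors written, 8 live pages
ftl.trim(3)               # file deleted, OS sends TRIM for its sector
print(ftl.live_pages())   # 7: one page freed for garbage collection
```

Without the `trim()` call, the drive would keep dragging that stale page along through every compaction cycle.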

Under Linux you don't strictly need an SSD as such, as the OS is quite capable of using raw flash memory directly with a native flash file system. But most people use Windows. :)
 