ssd caching on z68 with OS already on separate ssd

Discussion in 'PC Hardware, Software and Displays' started by Sxotty, Dec 29, 2011.

  1. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,891
    Likes Received:
    344
    Location:
    PA USA
    So basically I have an SSD already for my OS and programs, but I have a lot of data files which I use regularly: GIS, databases, etc. Anyway, I have them on a regular old HDD, which slows things down a lot. I was wondering if anyone has used the SSD caching on a Z68 when you already have a separate SSD for the OS and programs. Does that work out well? I was worried it would cache from the SSD or something silly like that, which would slow performance down.
     
  2. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,750
    Likes Received:
    127
    Location:
    Taiwan
    I am using such a configuration. I have an Intel SSD (80GB) for my system, but since it's very small I use a 2TB HDD for most of my applications, with a 64GB SSD configured as cache. In some tests, it performs roughly twice as fast as not using the cache at all.

    However, if you want to use SSD caching, I'd suggest you do not enable write-back caching (in Intel's jargon, it's called "Maximized" mode). Although it should be able to keep data intact in case of a system crash (since it's an SSD, the cached data should still be there), in my experience it frequently causes data loss after a system crash. Also, depending on your usage pattern, a write-back cache is not that much better than a write-through cache anyway.

    As for selecting the SSD for caching, one should preferably use an SLC SSD. Intel has a very small (20GB) SLC SSD solely for this purpose, but IMHO 20GB is too small.
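    To illustrate the distinction pcchen draws between the two cache modes, here is a minimal toy sketch in Python; it is not Intel's actual SRT implementation, and the `Disk`/`CachedDisk` classes are illustrative stand-ins:

```python
# Toy model of write-through vs write-back caching.
# Illustrative only; not how Intel SRT is actually implemented.

class Disk:
    """Stand-in for the slow backing HDD."""
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data

    def read(self, addr):
        return self.blocks.get(addr)


class CachedDisk:
    """An SSD cache in front of a Disk, in either mode."""
    def __init__(self, disk, write_back=False):
        self.disk = disk
        self.cache = {}        # SSD cache: addr -> data
        self.dirty = set()     # blocks not yet flushed (write-back only)
        self.write_back = write_back

    def write(self, addr, data):
        self.cache[addr] = data
        if self.write_back:
            # "Maximized" mode: acknowledge immediately, flush later.
            # Fast, but a crash before flush() can lose this data.
            self.dirty.add(addr)
        else:
            # Write-through mode: the HDD is always up to date.
            self.disk.write(addr, data)

    def read(self, addr):
        if addr in self.cache:         # cache hit: served from SSD
            return self.cache[addr]
        data = self.disk.read(addr)    # miss: fetch from HDD
        self.cache[addr] = data        # and populate the cache
        return data

    def flush(self):
        """Push dirty write-back data down to the HDD."""
        for addr in self.dirty:
            self.disk.write(addr, self.cache[addr])
        self.dirty.clear()
```

    With write-through, the HDD stays consistent even if the cache contents are lost; with write-back, anything still in `dirty` at crash time exists only on the SSD, which is the data-loss risk described above.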
     
  3. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,891
    Likes Received:
    344
    Location:
    PA USA
    Hey, thanks for the info. I thought 40GB was the biggest caching drive the Z68 supported; I must be misremembering. I just thought this would be nice so I don't have to figure out which things to store on the main SSD (it's only 120GB).


    edit:
    I looked around a bit and, dadgum, SLC drives are too pricey.
     
    #3 Sxotty, Dec 29, 2011
    Last edited by a moderator: Dec 30, 2011
  4. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,804
    Likes Received:
    475
    Location:
    Torquay, UK
    I've got a 100GB Samsung S805 SLC drive with a SuperCap, but it's my main system drive. I got it cheap, for £100 with only 700 hours of usage, so I'm not going to change it into a cache any time soon. However, my old 30GB OCZ Solid with its crappy JMF601 controller is lying spare; will it give me any benefit caching 2x 1TB Samsung F1 drives?
    Besides, can you use SSD caching when running RAID?
     
  5. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,891
    Likes Received:
    344
    Location:
    PA USA
    Sorry Lightman, I don't know if it would work with RAID, though I don't see why it would not. If you have RAID 1, I have to assume it would give you a significant speed-up if it actually worked. I guess I will look at used SLC drives a bit, but how do you actually know how many hours of usage a drive has?
     
  6. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,750
    Likes Received:
    127
    Location:
    Taiwan
    The Intel 311 20GB SSD is quite affordable though (around US$120). However, as I said, it's too small. The max size for the SSD cache is 64GB; if you have a larger SSD, you can use its remaining space as a normal drive.

    Using an older MLC SSD (such as the old JMicron ones) is probably not a very good idea, as a cache sees a lot of write traffic. A better way to use such an old SSD would be installing some "mostly read-only" applications on it, such as large games with long load times (in most cases, games do not write save data into their own directories, so they fit the "mostly read-only" scenario).

    As for RAID: Z68's SSD cache actually requires a RAID configuration, as in AHCI mode it can't bypass Windows' built-in AHCI driver. You can make a single-HDD RAID too. However, due to this restriction, it can only cache one RAID volume (i.e. one HDD, or two or more HDDs in a RAID 0, RAID 1, or RAID 0+1 config). So if you have multiple separate HDDs, you can't use it on all of them.

    Using a newer MLC SSD as a cache should be fine; the cache mostly writes in large chunks, so it shouldn't create too much fragmentation.
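    A quick sketch of the 64GB ceiling mentioned above: if your SSD is larger than the cache limit, the remainder can be partitioned as a normal volume. The helper below is hypothetical, just arithmetic on the figures from this thread:

```python
# Hypothetical helper: split an SSD between SRT cache and leftover space.
# The 64GB ceiling is the cache limit discussed above.
SRT_MAX_CACHE_GB = 64

def srt_split(ssd_gb):
    """Return (cache_gb, leftover_gb) for an SSD of the given size."""
    cache = min(ssd_gb, SRT_MAX_CACHE_GB)
    return cache, ssd_gb - cache

print(srt_split(120))  # (64, 56): 64GB as cache, 56GB as a normal volume
print(srt_split(40))   # (40, 0): the whole drive goes to cache
```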
     
  7. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,309
    That reminds me of a strange problem I had. I did ask about it here, but no one was able to provide an answer.

    I was building a PC for someone with an Asus motherboard (can't remember the model) and a SATA drive,
    but running the XP setup CD, it could not find a hard drive.
    When I booted, the BIOS would display a message, "RAID not configured, press F1 to configure RAID", which of course I ignored because I didn't want RAID, and anyway I only had one drive.
    After trying everything I could think of to get the XP CD to see the HDD, out of desperation I pressed F1,
    selected RAID 0, and pressed OK.
    At this point I was expecting an error message along the lines of "you can't have a striped array with only one hard drive", but no: I got a progress bar and, after a while, a message saying the array was configured. So I ran the XP setup again, and it found the hard drive and installed.
     
  8. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,804
    Likes Received:
    475
    Location:
    Torquay, UK

    Thanks!

    I was wondering how Intel implemented SSD caching, hoping they were bunching write data together and then sending it to the SSD cache, precisely to avoid stalls on slower drives. But if it's immediate caching, then my old OCZ Solid is a no-go.
    I didn't know you could use the same SSD as both cache and storage drive, but that's not ideal, as a caching request and a data R/W request would often collide when reading from it.

    I'm very happy with my setup anyway, as my apps and OS are on SSD, and Steam/video/music files are on RAID 0 with 190-240MB/s linear read/write. On top of that, a 1TB drive serves as a kind of backup copy of critical data and an emergency Windows XP 64 OS, plus a 750GB drive as a scratch disk for applications, Amiga files (with an Amiga partition), a Linux partition, etc.
     
  9. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,891
    Likes Received:
    344
    Location:
    PA USA
    Well, this won't work quite as I planned then. My working data is on a 640GB Raptor; games, videos and pics are on a 1TB drive. What this probably means is the Intel 20GB would do fine for the data, but it would only speed up the Raptor, which isn't that slow as it is.
     
  10. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    2,750
    Likes Received:
    127
    Location:
    Taiwan
    I thought about this for a while. Intel's SSD cache is, obviously, not perfect yet. A better solution seems to be something implemented on the motherboard (i.e. at the firmware level instead of the driver level). However, after AHCI, it seems to be very difficult to do anything at the firmware level, as the OS generally talks to the HDD controller directly.

    Another way is to embed the cache into the HDD itself, as Seagate has done with their 2.5" HDD lines. However, that's expensive and you don't get much cache space (even the newer version gets only 8GB of SLC SSD cache, which is really not that much when many computers already have 8GB of RAM).

    I think, with the current technology trend, we are more likely to go the full-SSD route. That is, most computers (especially laptops) will only have an SSD, and large data will be stored on separate, networked HDD storage, which doubles as backup.
     
  11. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    3,845
    Likes Received:
    329
    Location:
    35.1415,-90.056
    My next IVB rig is going to have an 8-port (at minimum) PCIe SATA 6Gb/s hardware RAID controller and a stack of cheap 80GB or 120GB MLC SSDs. The card will probably be a grand, and the stack of drives will probably be another grand, but $2000 for a ~1TB drive that runs at 1GB/sec is just fine by me.
     
  12. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,891
    Likes Received:
    344
    Location:
    PA USA
    I care about my data though, and what you are describing sounds risky, Albuquerque, partially because getting 1TB of SSD for a grand means RAID 0.
     
  13. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    3,845
    Likes Received:
    329
    Location:
    35.1415,-90.056
    It's a gaming rig, and it's backed up nightly to my WHS box. Given the MTBF on SSD drives compared to their spinning counterparts, I'm not worried.
     
  14. Sxotty

    Veteran

    Joined:
    Dec 11, 2002
    Messages:
    4,891
    Likes Received:
    344
    Location:
    PA USA
    I can see why it would not matter much in that case, but I still don't see how much more useful it would really be. If you ever do it, you will need to benchmark some stuff :).
     
  15. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    3,845
    Likes Received:
    329
    Location:
    35.1415,-90.056
    Well, TBH, if I bought eight 120GB drives and ran RAID 5 on something like an LSI 9265-8i, I would have ~800GB capacity and it would still be ungodly fast (read and write scores above 1.2GB/sec). I might do that...

    The card, BBU and a license for their FastPath driver would be around $850, and eight 120GB SATA 3 SandForce SSDs would probably run around $1100-ish right now; hopefully less in four months when it's time to buy my new rig. It might be possible to switch to slightly fewer 240GB drives if the price/GB continues in the right direction, but I'm not holding my breath. The 80GB and 120GB drives seem to be the best bet...
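    The capacity figures above are easy to check: RAID 0 stripes across all drives, while RAID 5 gives up one drive's worth of space to parity. A quick back-of-envelope sketch, using the drive size and count from this post:

```python
# Back-of-envelope RAID capacity check for the 8 x 120GB SSD array above.

def raid_capacity(n_drives, drive_gb, level):
    """Usable capacity in GB for a simple RAID 0 or RAID 5 array."""
    if level == 0:
        return n_drives * drive_gb          # striping: all space usable
    if level == 5:
        return (n_drives - 1) * drive_gb    # one drive's worth lost to parity
    raise ValueError("unsupported RAID level")

print(raid_capacity(8, 120, 0))  # 960
print(raid_capacity(8, 120, 5))  # 840, i.e. the "~800GB" figure quoted
```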
     
  16. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,309
    They charge for the driver?
    PS: what's a BBU?

    PPS: if you're going to spend huge amounts of cash, why not get a server board that supports 96+GB of RAM (RAM drive, baby)?
    There are also 5.25-inch RAM drives (like the HyperDrive 5 or Gigabyte i-RAM); they have a battery so you can turn your PC off (as long as it's for not more than a day, I think, and then leave it on long enough for the battery to recharge). The HyperDrive will automatically back up to SSD when you turn the PC off and restore when you turn it on. RAID a load of those bad boys...

    It's about time someone on here went batsh1t insane :D
     
  17. BRiT

    BRiT (╯°□°)╯
    Moderator Legend Alpha Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    12,498
    Likes Received:
    8,701
    Location:
    Cleveland
    Battery Backup Unit. Enterprise-level controller cards that have a RAM cache also have BBUs, so your data isn't lost should a power outage happen before the controller has a chance to flush its buffers.

    I too think the money would be better spent on a really large RAM drive that can be saved off to SSD periodically.
     
  18. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,806
    Likes Received:
    473
    Personally, I'd just stick to RAID 0 over RAID 5... the latter is only a little less disaster-prone than the former; drive failures during RAID 5 rebuilds are not exactly rare (the odds of clustered drive failures with identical drives and near-identical read/write patterns are not good). You are going to have to rely on backups either way, so why give up performance?

    No write hole either, so no need for battery backup.
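    The rebuild risk can be made concrete with a toy calculation. The per-drive failure rate below is a made-up illustrative number, not measured data, and the independence assumption is optimistic; as noted above, identical drives with identical wear tend to fail in clusters, so real odds are worse:

```python
# Illustrative only: odds that at least one surviving drive fails
# during a RAID 5 rebuild window. Assumes independent failures,
# which understates the risk for identically-worn drives.

def p_rebuild_failure(surviving_drives, p_fail_during_rebuild):
    """P(at least one survivor fails before the rebuild completes)."""
    return 1 - (1 - p_fail_during_rebuild) ** surviving_drives

# e.g. 7 survivors of an 8-drive array, with an assumed 1% per-drive
# chance of failing during the rebuild: roughly a 6.8% chance the
# rebuild loses the array.
print(round(p_rebuild_failure(7, 0.01), 3))
```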
     
    #18 MfA, Jan 2, 2012
    Last edited by a moderator: Jan 2, 2012
  19. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    3,845
    Likes Received:
    329
    Location:
    35.1415,-90.056
    On a card like the LSI, the RAID 5 scores are damned near the RAID 0 scores; we're talking more than 450,000 IOPS no matter which way you go (so long as you have enough drives underneath to sustain it). The extra cost for the FastPath driver, I believe, is only necessary when using an array of SSDs to cache an array of spindle disks; I think I can skip that if I'm doing pure SSD. But I can't tell for sure from the documentation...

    As for 96GB of RAM? You'd have to buy buffered ECC sticks, and you'd need multiple CPU sockets for that quantity of RAM, which means multiple Xeon processors and of course a board that supports all of that. That's going to cost FAR MORE than my storage solution, sorry... And WTF would I do with overclocks on a rig like that? Nothing, that's what.

    I'd rather have the fastest single consumer-grade CPU that I can overclock the piss out of, a damned nice consumer-grade motherboard that will let me, a huge fistful of consumer-grade RAM that can run at good timings and 1T, and spend $2k on a ~1.5GB/sec, 400,000+ IOPS, ~1TB storage array. I mean, we're talking even faster than the fastest RevoDrive at that point. As for which RAID mode? I dunno, we'll see when I get there :D

    As for a huge RAM disk? Sure, I'll probably get 32GB of RAM and have an 8GB scratch space for temp and pagefile (some older apps absolutely require a pagefile). I have a similar thread somewhere else on this forum where I'm already doing such a thing for other reasons. But I'm not going to be man-handling NTFS junctions and shit while copying data around to my RAM drive and then back out to disk; that's absurd.

    I have several hundred GB of games; continuously moving the ones I want in and out of RAM can bite me.

    Edit: I'll be buying the battery backup unit because it's required to enable write-back caching. I mean, we're talking about SSDs: what's the one thing they don't always perform well on? In fact, I'd likely configure the card to use the cache only for write-back, as read-ahead on an SSD array this big would likely be of NO benefit.
     
    #19 Albuquerque, Jan 2, 2012
    Last edited by a moderator: Jan 2, 2012
  20. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,804
    Likes Received:
    475
    Location:
    Torquay, UK
    A battery unit on a cached controller is a must. I went SSD-only for the OS when I found that my enterprise Samsung drive with its SuperCap prevents any data loss during power cuts or bluescreens. With an overclocked PC, while finding your rig's limits, it's quite common for your PC to crash hard. I've also had a few power cuts since getting this drive, and not a single OS corruption :smile:.
    On my OCZ Solid I corrupted Windows XP within a month; more telling would be to say within about 50 OS boots. Granted, I was using it only for OC fun and trying to beat some records.
     