If no CPU is used, then I am sure it is hardware RAID. The reason I say mine is hardware RAID is that no CPU is used on mine: I ran HD Tune and it reported my CPU usage as 0.1% or something.
But for RAID0 or RAID1 the CPU usage will be trivial even with software RAID; that's what folks have been saying here. That's kind of why I was being so dismissive above: performance-wise, the distinction between hardware and software is pretty much irrelevant when you're doing RAID0 and RAID1.
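To give a feel for why (a toy sketch only, nothing like how a real driver is written, and all the names are made up): RAID0 is pure address arithmetic and RAID1 just issues the same write twice, so there is essentially nothing for the CPU to chew on.

```python
# Toy sketch -- illustrates why RAID0/RAID1 cost the CPU almost nothing.
def raid0_route(byte_offset, chunk_size, n_disks):
    """RAID0: routing a request is plain integer arithmetic."""
    chunk, within = divmod(byte_offset, chunk_size)
    disk, stripe = chunk % n_disks, chunk // n_disks
    return disk, stripe * chunk_size + within  # (member disk, offset on it)

def raid1_write(members, offset, data):
    """RAID1: the same buffer goes to every member; no math at all."""
    for dev in members:
        dev.seek(offset)
        dev.write(data)
```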
Parity RAID takes more grunt, but it's still not necessarily a killer for a modern CPU. Hell, there are benchmarks around showing that Linux software RAID (which is *all* done on the CPU and deliberately doesn't require a "RAID" controller on the motherboard) performs better than real hardware RAID at RAID5 and RAID6.
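For a feel of what that parity work actually is: RAID5 computes one XOR parity block per stripe, so a full-stripe write costs a pass over all the data (RAID6 adds a second parity based on Galois-field math, which is heavier still). A minimal Python sketch of the idea, with made-up block contents:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length blocks -- the core RAID5 parity op."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe spread over three hypothetical data disks, plus its parity.
stripe = [b"disk0data", b"disk1data", b"disk2data"]
parity = xor_blocks(stripe)

# Lose any one block: XOR of the survivors and the parity rebuilds it,
# which is also exactly the work a degraded RAID5 read has to do.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```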
The real issue, to my mind, is what happens when a disc in your RAID set fails. Any style of RAID that requires an OS kernel to boot to a level of self-awareness where it can deal with a degraded array is "software". If your BIOS and/or bootloader needs to understand the layout of the kernel on the physical discs in order to boot that kernel, you've got a big problem.
With a decent h/w controller, the BIOS, kernel and all the rest are, broadly speaking, unaware of and/or don't care about what physical discs are attached to the controller, how broken they are, the RAID level, or anything else. All they see is a single block device (usually manifesting itself as a SCSI block device) that they can happily boot from, read from, and write to as if it were a bog-standard 15TB SAS hard drive.
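That's the whole point of the abstraction: from the host's side, the array is indistinguishable from a plain disk. An illustration of what I mean (the device path is an assumption; needs root on Linux):

```python
# /dev/sda here could be one SAS drive or a big hardware-RAID volume; the
# read looks identical either way, because the controller firmware hides
# the member discs, the RAID level, and any degraded state from the host.
with open("/dev/sda", "rb") as disk:
    first_sector = disk.read(512)   # e.g. the boot sector the BIOS loads
print(first_sector[510:512].hex())  # "55aa" signature on an MBR-style disk
```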
The host OS's RAID controller driver may well be able to extract more information about the real state of the array, but the host OS doesn't have to care, and for good reason: at critical times like boot, when the OS isn't around yet, questions like "can my BIOS find my boot sector?" are much easier with a proper h/w setup.