RAID 0: Read & write STRs (sustained transfer rates) will increase with every drive added, until the RAID controller's interface bus becomes the bottleneck. Seek times stay the same.
Random reads & writes can be faster, depending on the stripe size and how much data is being transferred. RAID 0 tends to benefit applications such as video editing the most, where large files are being handled. Some games also benefit, depending on how they store their data. The bottleneck, as Guden Oden already pointed out, tends to be seek times. I tend to find that RAID 0 also helps in disk-intensive multitasking situations.
Say the array holds a file 12KB in size. In RAID 0 (with a 1KB stripe size, to make the math easy) disk 1 would contain parts 1, 3, 5, 7, 9, & 11, and disk 2 would contain parts 2, 4, 6, 8, 10, & 12. Both disks can read their parts at the same time (disk 1 reads part 1 while disk 2 reads part 2, and so on), doubling the read speed.
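The round-robin striping above is simple enough to sketch in a few lines of code. This is just an illustration of the mapping (the `raid0_layout` helper is hypothetical, not any real controller's logic):

```python
def raid0_layout(num_disks, num_parts):
    """Distribute file parts round-robin across the disks."""
    disks = [[] for _ in range(num_disks)]
    for part in range(1, num_parts + 1):
        # part 1 goes to disk 1, part 2 to disk 2, part 3 back to disk 1, ...
        disks[(part - 1) % num_disks].append(part)
    return disks

# 12KB file, 1KB stripe size, two disks:
disk1, disk2 = raid0_layout(2, 12)
print(disk1)  # [1, 3, 5, 7, 9, 11]
print(disk2)  # [2, 4, 6, 8, 10, 12]
```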
RAID 1: Read & write STRs will generally stay the same. Seek times stay the same.
People tend to think that read speed should double (because it is reading from two disks), but this is not true.
Going back to our example with the 12KB file, in RAID 1 both disk 1 and disk 2 would contain parts 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, & 12. If the controller assigned disk 1 to read parts 1, 3, 5, 7, 9, & 11, and disk 2 to read parts 2, 4, 6, 8, 10, & 12 (like RAID 0), you would see no increase in speed, because disk 1 still has to pass right over part 2 on its way from part 1 to part 3 - it can't just "jump over it". Reading that way would also mean you are not doing any error checking (reading both disks and making sure the data matches).
If the controller cheats and doesn't compare all the data (relying instead on the drive reporting an unreadable sector), then you could see an increase in performance if you were either:
#1: drive interface bandwidth limited, or
#2: reading a large file where each drive could read alternating tracks.
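One way to see why alternating stripes doesn't help a mirror: model the cost of a sequential read as the number of stripe positions the head has to travel over, whether or not it transfers them. This is a back-of-the-envelope sketch with a hypothetical helper, not a real disk model:

```python
def positions_passed(assigned_positions):
    """Stripe positions the head must travel over: from the first assigned
    stripe to the last, including the ones it skips without transferring."""
    return max(assigned_positions) - min(assigned_positions) + 1

# RAID 0: disk 1 physically holds only its six parts, packed contiguously
# at positions 1..6 on its platter, so reading them passes 6 positions.
raid0_cost = positions_passed(range(1, 7))

# RAID 1: disk 1 holds all 12 parts. Assigning it only the odd ones (at
# positions 1, 3, 5, 7, 9, 11) still drags the head past the even
# positions in between, so the hoped-for halving never materializes.
raid1_cost = positions_passed([1, 3, 5, 7, 9, 11])

print(raid0_cost)  # 6
print(raid1_cost)  # 11
```

In other words, the mirror disk covers nearly the same physical distance as a single disk reading the whole file, which is why the naive "split the reads" scheme gains so little.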
RAID 5: Read STRs increase for every drive added (a four-disk RAID 5 array will usually read as fast as a three-disk RAID 0 array), while write STRs tend to be XOR-processor limited (or, in either case, limited once the RAID controller's interface bus becomes the bottleneck). Seek times are a little slower in my experience.
Again with the 12KB file / 1KB stripe example: disk 1 would contain parts 1, 4, 7, & parity(10,11,12); disk 2 would contain 2, 5, parity(7,8,9), & 10; disk 3 would contain 3, parity(4,5,6), 8, & 11; and disk 4 would contain parity(1,2,3), 6, 9, & 12.
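That rotating-parity layout can be sketched in code. The `raid5_layout` helper below is hypothetical (a toy model, not any controller's firmware); it assumes the layout used in the example above, where parity starts on the last disk and moves one disk toward the first on each row. The tail end also shows that parity is plain XOR, which is the same operation used to rebuild a failed disk:

```python
def raid5_layout(num_disks, num_parts):
    """Toy model of the rotating-parity layout from the example above."""
    disks = [[] for _ in range(num_disks)]
    per_row = num_disks - 1                 # data stripes per parity row
    part = 1
    for row in range(num_parts // per_row):
        # parity starts on the last disk and rotates backward each row
        parity_disk = num_disks - 1 - (row % num_disks)
        chunk = list(range(part, part + per_row))
        part += per_row
        i = 0
        for d in range(num_disks):
            if d == parity_disk:
                disks[d].append("parity(%s)" % ",".join(map(str, chunk)))
            else:
                disks[d].append(str(chunk[i]))
                i += 1
    return disks

for n, contents in enumerate(raid5_layout(4, 12), start=1):
    print("disk %d: %s" % (n, ", ".join(contents)))

# Parity itself is just XOR, and rebuilding a lost stripe is XOR too:
a, b, c = 0x12, 0x34, 0x56          # three 1-byte "stripes"
p = a ^ b ^ c                       # parity byte written to the fourth disk
assert b == a ^ c ^ p               # any lost stripe is the XOR of the rest
```

Running it on the 4-disk / 12-part example reproduces the table above, which is also why every write costs an XOR pass: the controller has to recompute the parity stripe for each row it touches.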
Here's my RAID 5 data array (4x 200GB Seagate 7200.7 on a Promise SX4 w/256MB cache):
It resides on the PCI bus, so reads are limited to 100MB/s. Writes are a little faster than a 7200.7 on its own. Seek times are fast due to the 256MB cache - not the actual array.
The Promise has a hardware XOR processor. Compare that to four 74G Raptors on the ICH7R in RAID 5:
As the ICH7R is a southbridge controller, it has no bandwidth limit (that I could dream of hitting), so the reads are fast, but the writes only average ~20MB/s. Seek times remain the same as a Raptor on its own.