aaronspink
Veteran
but then reducing performance would not prevent that. If you need to copy 10 GB of files, you copy 10 GB of files; you don't think "this drive is a bit slow, I'll only copy 6 GB."
Correct, the reduction in performance is a side effect of the sparing decisions and algorithm selections used by SF, which require more complex data movement and overhead. Most other designs don't incur significantly more overhead on sequential transfers and, through their sparing and algorithm choices, avoid many of the issues that SF drives suffer from.
Performance for SSDs, along with their endurance, is a fairly complex area and highly dependent on the incoming command and data streams. For example, all the drives on the market will display remarkably different random IO behavior based on the span over which those IOs are conducted. They'll also display remarkably different (order of magnitude) differences in endurance as well. I.e., doing random writes to a 30GB portion of the drive will deliver significantly different results than doing random writes to the whole drive (160GB/240GB). A lot of this has to do with the addressing and indirection capabilities of the controller architecture, along with the erase/write structure of the underlying flash memory.
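To make the span distinction concrete, here's a minimal sketch of how a benchmark might generate its random write targets. The sector size, span sizes, and function name are illustrative assumptions, not from any particular tool: the only difference between a "consumer-style" test and a full-span test is the range the random offsets are drawn from.

```python
import random

IO_SIZE = 4096  # assumed 4 KiB I/O size, typical for random-write tests
GiB = 1024 ** 3

def random_offsets(span_bytes, count, seed=0):
    """Generate `count` random IO_SIZE-aligned byte offsets within span_bytes.

    A restricted span means the same flash blocks get rewritten over and
    over, which the controller's indirection layer handles very differently
    than writes scattered across the whole drive.
    """
    rng = random.Random(seed)
    blocks = span_bytes // IO_SIZE
    return [rng.randrange(blocks) * IO_SIZE for _ in range(count)]

# Consumer-style test: all random writes land in an 8 GB window.
consumer_span = random_offsets(8 * GiB, 1000)
# Full-span test: writes spread over the whole (hypothetical) 240 GB drive.
full_span = random_offsets(240 * GiB, 1000)

print(max(consumer_span) < 8 * GiB)  # True: every write stays in the window
```

Run against a real device, the consumer-span workload typically reports much higher IOPS, which is exactly the discrepancy described above.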
This all leads to weirdness like manufacturers (at least the honest ones) sometimes quoting higher random IO numbers for their consumer drives than for their enterprise drives. This is because the enterprise drive specifications assume a full-span access pattern, while the consumer drive specifications do not. Often the consumer drives assume spans in the range of 8GB.