Question on SSD storage capacities (a multiple of 4?)

Blackraven

Newcomer
Question:

Why is it that SSD drive capacities are in multiples of 4? (e.g. Samsung's 32 GB and 64 GB models, and Intel's 80 GB and 160 GB, plus the upcoming 320 GB unit.)

Whereas HDD capacities use multiples of 10 (e.g. 80 GB, 120 GB, 500 GB, 750 GB, 1000 GB/1 TB, 1500 GB/1.5 TB, 2000 GB/2 TB, etc.)

Is there an explanation for this?
 
SSDs are probably using binary measurements, whereas HDDs are using decimal.

It's easier than that... Hard-drive storage is physical bits on a magnetic platter; you can write almost as many as you care to, given the obvious physical constraints. SSDs are memory -- memory is addressed in rows and columns using binary addressing. Thus, SSD storage devices are built from components that are sized in powers of two (1, 2, 4, 8, 16, 32, 64, 128, 256, et al.). In your example, a 320 GB SSD would be a combination of 256 and 64...
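To make the power-of-two combination concrete, here's a small Python sketch (illustrative only: it just reads the set bits of the target capacity, and the component sizes are hypothetical, not tied to any real product):

```python
def power_of_two_parts(capacity_gb: int) -> list[int]:
    """Express a capacity as a sum of power-of-two components,
    largest first (i.e. the set bits of its binary representation)."""
    return [1 << i for i in range(capacity_gb.bit_length() - 1, -1, -1)
            if capacity_gb >> i & 1]

# 320 GB could be built from a 256 GB bank plus a 64 GB bank:
print(power_of_two_parts(320))  # [256, 64]
```

Any capacity that isn't itself a power of two decomposes this way into a handful of power-of-two banks.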
 
Flash components are in powers of two, but at least Intel's flash setups populate 10 channels with those components.

A power-of-two amount multiplied by 10 gives a multiple of four.
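The arithmetic behind that remark, sketched in Python (the per-channel sizes are hypothetical examples chosen to reproduce Intel's advertised totals, not actual part figures):

```python
channels = 10  # Intel's controllers populate 10 flash channels

# A power-of-two capacity per channel, times 10 channels, yields the
# familiar Intel capacities -- all multiples of four (and of ten):
totals = [channels * per_channel_gb for per_channel_gb in (8, 16, 32)]
print(totals)                              # [80, 160, 320]
print(all(t % 4 == 0 for t in totals))     # True
```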
 
Hmmmm.

My home PC has a 64 GB SSD in it (an Mtron 7000). Its actual capacity is 64,000,884,736 bytes, which is not actually very close to being a power of 2.
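For what it's worth, that figure sits just above decimal 64 GB and well below binary 64 GiB, which a quick Python check makes plain:

```python
reported = 64_000_884_736     # bytes reported by the Mtron 7000
decimal_gb = 64 * 10**9       # 64 "decimal" GB
binary_gib = 64 * 2**30       # 64 GiB (a power-of-two capacity)

print(reported - decimal_gb)  # 884736 bytes over decimal 64 GB
print(binary_gib - reported)  # 4718592000 bytes short of 64 GiB
```

So the drive is labeled in decimal gigabytes, and the ~4.7 GB gap up to the power-of-two figure is consistent with reserved space.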
 
SSDs apparently have reserved space in them, as shipped.

http://www.dailytech.com/article.aspx?newsid=14004

JMicron can also reserve more spare blocks to alleviate the issue. Because reserving more spare blocks would decrease the drive's capacity, most SSD makers tend not to enlarge the spare size.

Note by author: This is part of the reason why OCZ Technology's drives are labeled as 30, 60, 120, and 250 GB instead of the regular 32, 64, 128, 256 GB. Almost all SSDs make use of spare blocks; it is not a feature specific to JMicron.
The article is an interview with JMicron about the performance problems experienced with SSDs. Increasing the number of spare blocks is one method of combating those issues.
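Assuming the OCZ drives contain the usual power-of-two amount of raw flash, the labels imply spare fractions along these lines (a rough sketch; it ignores GB-vs-GiB unit differences, which also eat into the usable figure):

```python
# Raw flash vs. labeled capacity, per the OCZ example above
pairs = [(32, 30), (64, 60), (128, 120), (256, 250)]

for raw_gb, labeled_gb in pairs:
    spare = (raw_gb - labeled_gb) / raw_gb
    print(f"{raw_gb} GB raw -> {labeled_gb} GB labeled: {spare:.2%} reserved")
```

The smaller models come out around 6.25% reserved, the 250 GB model closer to 2.3%.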

Jawed
 
I'd have to assume that there is a certain amount of "reserve" capacity for error bits. 64 GB in memory terms is far more than 64 billion bytes; it's closer to 68.7 billion bytes, if I did my math correctly.

Two things they accomplish this way. First, and most obvious, is redundancy in case of bad cells. Second, which may just be bullhonkey, is making sure you're not "getting more than you paid for" -- physical drives have been selling gigabytes as 1 billion bytes (rather than 1,073,741,824 bytes) for eons. Why change it now?
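That 68.7 billion figure checks out; the gap between a "memory" gigabyte and a "drive" gigabyte is just 2^30 vs 10^9 bytes:

```python
GIB = 2 ** 30   # "memory" gigabyte (GiB): 1,073,741,824 bytes
GB = 10 ** 9    # "drive" gigabyte (GB): 1,000,000,000 bytes

print(64 * GIB)           # 68719476736 -- about 68.7 billion bytes
print(64 * GB)            # 64000000000
print(64 * GIB - 64 * GB) # 4719476736 bytes of headroom per 64 "GB"
```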

Edit - Treed like a mofo. :(
 