In addition to a maximum file size, a file system also has an overall capacity limit.
Because storage capacity (density) has been increasing very quickly, many file systems have had to be revised to handle additional capacity. The problem is not difficult to understand; let us look at an example.
Most hard disk drives have a hardware sector size of 512 bytes. As a result, a 1TB (one terabyte) hard disk drive has 2³¹ sectors. For one sector to "point" to the next sector of a file, the file system therefore needs a 32-bit sector number. Many file systems were originally written to use 32-bit numbers to string the sectors of a file together, which caps the addressable capacity at 2³² sectors of 512 bytes, or 2TB. As a result, many of these file systems have had to be revised to handle the capacity of server file systems.
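
To make the arithmetic concrete, here is a small sketch (in C; the program itself is not part of the original example) that works out the sector count of a 1TB drive and the largest capacity that 32-bit sector numbers can reach, assuming 1TB means 2^40 bytes.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t sector_size = 512;         /* bytes per hardware sector */
        const uint64_t drive_size  = 1ULL << 40;  /* 1 TB, taken as 2^40 bytes */

        /* Sectors on the drive: 2^40 / 2^9 = 2^31. */
        uint64_t sectors = drive_size / sector_size;

        /* Largest capacity reachable with 32-bit sector numbers:
           2^32 sectors of 512 bytes each, i.e. 2 TB. */
        uint64_t max_32bit = (1ULL << 32) * sector_size;

        printf("sectors on a 1 TB drive : %llu (2^31)\n",
               (unsigned long long)sectors);
        printf("32-bit sector numbers   : up to %llu bytes (2 TB)\n",
               (unsigned long long)max_32bit);
        return 0;
    }
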
Even though a 1TB drive is relatively large for a consumer hard disk drive, it is quite small for servers. A server typically does not use a single hard disk drive. Rather, it uses an array of hard disk drives for both redundancy and capacity. For example, an array of nine 1TB hard disk drives can form an 8TB RAID 5 configuration, because one drive's worth of space is used to hold parity. Many larger servers have already reached the order of petabytes; each petabyte is 1,000 terabytes.
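
The RAID 5 figure follows the same kind of arithmetic. The sketch below (again illustrative only, using the drive count and size from the example) shows how N drives yield N-1 drives of usable space.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t drive_size_tb = 1;   /* each drive holds 1 TB */
        const unsigned drives        = 9;   /* drives in the array   */

        /* RAID 5 stores parity equivalent to one drive,
           so usable space is (N - 1) drives. */
        uint64_t usable_tb = (uint64_t)(drives - 1) * drive_size_tb;

        printf("9 x 1 TB drives in RAID 5 -> %llu TB usable\n",
               (unsigned long long)usable_tb);   /* prints 8 TB */
        return 0;
    }
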
Many modern file systems have limits of up to 16 exabytes (2⁶⁴ bytes). This is due to the use of 64-bit integers to string sectors together.
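
As a rough check on that figure (a sketch, not taken from the original text): 16 exabytes is 2^64 bytes, and such a volume contains 2^55 sectors of 512 bytes, a count that fits comfortably in a 64-bit integer.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 16 exabytes = 16 * 2^60 bytes = 2^64 bytes.
           Computed as long double because 2^64 is one more
           than the largest value a uint64_t can hold. */
        long double capacity = 16.0L * (long double)(1ULL << 60);

        /* Number of 512-byte sectors in a 16-exabyte volume:
           2^64 / 2^9 = 2^55. */
        uint64_t sectors = 1ULL << 55;

        printf("16 EB = %.0Lf bytes\n", capacity);
        printf("sectors of 512 bytes = %llu (2^55)\n",
               (unsigned long long)sectors);
        return 0;
    }
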