First of all, credit to Microsoft for creating the NTFS file system; it really is a robust file system with a rich set of features.
A cluster is the basic unit in which the file system allocates and reads/writes disk space (it is the allocation unit in NTFS).
Today's topic is NTFS cluster size as it relates to SQL Server data storage. When formatting a partition larger than 2GB, NTFS uses 4KB clusters by default, which is what most disks run with today. (Note that defragmentation can only be used when the cluster size is no larger than 4KB.)
The NTFS cluster size can be set anywhere from 512 bytes to 64KB, but it must be chosen at format time; it cannot be changed afterward. If the cluster is too small, space utilization is high, but the allocation metadata grows, fragmentation increases, and performance suffers; if the cluster is too large, space utilization drops, but there is less fragmentation and performance is better. That is why 4KB is the usual compromise.
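If you want to check what cluster size a volume was actually formatted with, here is a minimal Python sketch (an illustration only, assuming a Windows machine, the standard ctypes module, and an example drive letter E:) that calls the Win32 GetDiskFreeSpaceW API:

import ctypes
from ctypes import wintypes

def ntfs_cluster_size(root_path="E:\\"):
    """Return the cluster size in bytes for the given volume root (Windows only)."""
    sectors_per_cluster = wintypes.DWORD()
    bytes_per_sector = wintypes.DWORD()
    free_clusters = wintypes.DWORD()
    total_clusters = wintypes.DWORD()
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root_path),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    # Cluster size = sectors per cluster * bytes per sector
    return sectors_per_cluster.value * bytes_per_sector.value

if __name__ == "__main__":
    # 4096 is the NTFS default; 65536 means the volume was formatted with 64KB clusters.
    print(ntfs_cluster_size("E:\\"), "bytes per cluster")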
Today's hard drives hold several hundred GB, so capacity no longer seems to be a problem. Disk I/O, however, has always been a performance bottleneck, and people have racked their brains to speed up reads and writes. Once a drive has been chosen, there is no way (and no point in trying) to change its physical design, so improvements have to come from elsewhere: RAID arrays, regular defragmentation, on-board cache chips, good data cables, and anything else that can be pressed into service.
SQL Server is an I/O-intensive application. The basic unit for reading and writing its data files is the page, which is 8KB; eight consecutive pages form an extent, which is 64KB. Data files are generally large, with several GB or more being common in production environments, and virtually no one defragments SQL Server storage. We can therefore format the disk partition dedicated to SQL Server storage with a 64KB cluster size, which improves performance without wasting space, as the arithmetic below illustrates.
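To make the alignment concrete, here is a small worked example (plain arithmetic, not tied to any particular server) showing how many NTFS clusters a single SQL Server extent spans at different cluster sizes:

PAGE_SIZE = 8 * 1024                     # SQL Server data page: 8KB
EXTENT_PAGES = 8                         # 8 consecutive pages make one extent
EXTENT_SIZE = PAGE_SIZE * EXTENT_PAGES   # 64KB

for cluster_size in (4 * 1024, 64 * 1024):
    clusters_per_extent = EXTENT_SIZE // cluster_size
    print(f"{cluster_size // 1024}KB clusters: one 64KB extent spans "
          f"{clusters_per_extent} cluster(s)")

# Output:
# 4KB clusters: one 64KB extent spans 16 cluster(s)
# 64KB clusters: one 64KB extent spans 1 cluster(s)

With 64KB clusters, each extent the database allocates maps onto exactly one cluster, so an extent can never be split into fragments by the file system.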
Are there any risks? Of course: when a disk disaster strikes, the amount of data lost may be a little larger, at least 64KB per damaged cluster. In practice, though, this approach has proven quite workable, because the RAID block (stripe) size on typical servers is also 64KB, so having both at 64KB does no harm.
The same reasoning can be applied to other application scenarios. If there are any mistakes, criticism is welcome.
Author of this article: gytnet
Source of this article: http://www.cnblogs.com/gytnet/archive/2009/12/21/1628561.htm