When we focus on IO performance for SQL Server, it is vitally important to understand the architecture of the disk subsystem. Typically, a DBA will focus on Disk Queue Length as the primary measure of IO performance; however, in a Storage Area Network (SAN) environment this metric may not be reliable. In a SAN environment, a “disk” as recognized by SQL Server may not correspond to a single physical spindle (physical disk); therefore, the Disk Queue Length metric must be considered in relation to the actual number of spindles behind it. For example, a “Logical Disk” labeled “M:” may actually be composed of four (4) spindles. A Disk Queue Length of twelve (12) (which is, in my opinion, the threshold for identifying disk IO issues) must therefore be normalized to the number of spindles, because the reported metric represents the total queue length across the logical disk:
Disk Queue Length / Spindles = Physical Disk Queue Length.
In our example, this yields:
12 / 4 = 3.
A Disk Queue Length of three (3) per spindle does not indicate a performance problem.
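The normalization above is simple enough to capture in a small helper. This is a minimal sketch (the function name and structure are my own, not part of any SQL Server or Windows API):

```python
def normalized_queue_length(total_queue_length: float, spindles: int) -> float:
    """Divide the reported (logical) Disk Queue Length by the number of
    physical spindles backing the logical disk, giving the per-spindle
    queue length that should be compared against the threshold."""
    if spindles < 1:
        raise ValueError("a logical disk must be backed by at least one spindle")
    return total_queue_length / spindles

# The example from the text: a total queue length of 12 across 4 spindles.
print(normalized_queue_length(12, 4))  # 3.0
```

With four spindles, the headline value of 12 normalizes to 3 per spindle, well under the threshold of 12 suggested above.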
With the discussion above in mind, we can address SAN-based SQL Server IO performance using a different, potentially more reliable metric: Disk Latency (Data Transfer Time). Measuring Disk Latency allows the DBA to abstract away the underlying disk subsystem and directly measure the performance of the disks as reported to the operating system. In my opinion, the Disk Latency for a well-configured disk subsystem should be less than four milliseconds (4 ms).
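One way to derive an average latency figure is from cumulative IO counters, such as those SQL Server exposes per database file in sys.dm_io_virtual_file_stats (io_stall, num_of_reads, num_of_writes). The sketch below assumes you have already sampled those counter values; the function and sample numbers are illustrative, not actual output from a server:

```python
def avg_latency_ms(io_stall_ms: int, num_reads: int, num_writes: int) -> float:
    """Average latency per IO operation, in milliseconds, computed from
    cumulative counters (e.g. io_stall, num_of_reads, num_of_writes from
    SQL Server's sys.dm_io_virtual_file_stats)."""
    total_ios = num_reads + num_writes
    if total_ios == 0:
        return 0.0  # no IO issued yet; nothing to average
    return io_stall_ms / total_ios

# Hypothetical sample: 150,000 ms of total stall time across 50,000 IOs
# gives a 3 ms average, within the 4 ms target suggested above.
print(avg_latency_ms(150_000, 30_000, 20_000))  # 3.0
```

Because these counters are cumulative since the last restart, comparing two samples taken some interval apart (and dividing the deltas) gives a more useful picture of current latency than a single lifetime average.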