One of the important performance metrics we monitor for our Virtual-DBA and consulting customers is Page Life Expectancy (PLE). We look for low values as well as values that are trending lower over time. A drop in PLE is a prime indicator of memory pressure.
PLE is the number of seconds a data page can be expected to remain in the buffer pool before being evicted. Microsoft's long-standing guidance is that PLE should not fall below 300. In real life we often see PLE in the 2000+ range on healthy systems.
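You can check the current PLE value yourself from the sys.dm_os_performance_counters DMV. A minimal sketch (the LIKE filter allows for the instance-name prefix and the trailing padding in these columns):

```sql
-- Current Page Life Expectancy, in seconds, per buffer node
SELECT [object_name], counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page life expectancy%'
  AND [object_name] LIKE '%Buffer Manager%';
```

A single point-in-time reading is less useful than the trend, so consider capturing this on a schedule and charting it.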
Maintenance activities such as index rebuilds and backups can drop PLE substantially. When a drop in PLE occurs often determines its importance. For instance, a drop overnight during a maintenance window may not be as important as a drop during business hours.
Fixing Page Life Expectancy
In order to fix PLE problems we need to examine the entire server. We need to look at available memory (the Memory\Available MBytes counter) as well as what functions the server performs. We don’t want to create a situation where SQL Server has to give memory back to the operating system, since this can result in clearing the SQL Server cache with an associated temporary performance drop. For this and other reasons, we prefer to set Max Server Memory to a specific value rather than leave it at the unlimited default.
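Setting Max Server Memory is done with sp_configure. A minimal sketch; the 16384 MB figure is a hypothetical value for illustration, not a recommendation — size it for your server's total RAM and the memory needs of everything else running on the box:

```sql
-- 'max server memory (MB)' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Hypothetical value: cap SQL Server at 16 GB
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```

The change takes effect immediately without a restart, though lowering the cap on a busy system can itself flush cached pages.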
I like to look at a long-term perfmon trace that includes Memory\Available MBytes along with counters from the SQL Server Memory Manager object, such as Total Server Memory, Target Server Memory, and Memory Grants Pending. If the server consistently has a large amount of available memory, consider increasing SQL Server Max Server Memory.
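If you'd rather sample the SQL Server side of these counters from T-SQL instead of perfmon, the same Memory Manager counters are exposed through sys.dm_os_performance_counters. A minimal sketch (Memory\Available MBytes is an OS counter and is not available here — that one still needs perfmon or an equivalent OS tool):

```sql
-- SQL Server Memory Manager counters: memory values are reported in KB,
-- Memory Grants Pending is a count of waiting queries (should normally be 0)
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)',
                       'Target Server Memory (KB)',
                       'Memory Grants Pending');
```

Total Server Memory persistently below Target Server Memory, or a nonzero Memory Grants Pending, are both signs worth investigating alongside a low PLE.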
It’s important to note that the meaning of the Max Server Memory setting changed in SQL Server 2012. Previously it governed only the memory allocated to the buffer pool. Starting with SQL Server 2012, Max Server Memory includes many more of the important memory-consuming areas of SQL Server. In SQL Server 2008 and earlier, we needed to estimate the memory needs of the non-buffer-pool areas of SQL Server when calculating an optimum Max Server Memory value.