Archive for the ‘Performance Monitoring and Tuning’ Category

How to identify and resolve key lookups?

What is a key lookup? A key lookup is an undesirable situation in which a query refers to columns that are not part of the non-clustered index, so SQL Server has to navigate to the data rows in the table to access the required columns. It is also known as a bookmark lookup. You can view key lookups by looking at the execution plans. You can resolve a key lookup by using a covering index or an index join.
A covering index is an index that consists of all the required columns and therefore can return the results without SQL Server having to navigate to the table data. An index join uses an index intersection between two or more indexes to fully cover a query.
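For illustration, here is a minimal sketch of an index join, using a hypothetical dbo.Orders table (the table, index names, and values are made up for this example). With separate single-column non-clustered indexes on CustomerID and OrderDate, a query that touches only those two columns can be answered by intersecting the two indexes, although whether the optimizer actually chooses the intersection depends on the statistics and row counts:

-- hypothetical table and indexes for illustration only
CREATE INDEX NCI_Orders_CustomerID ON dbo.Orders (CustomerID)
CREATE INDEX NCI_Orders_OrderDate ON dbo.Orders (OrderDate)
GO
-- every referenced column lives in one of the two non-clustered indexes,
-- so the optimizer may intersect (join) the indexes to cover the query
SELECT CustomerID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42
AND OrderDate >= '20080101'
GO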
In the following example, we will create a key lookup and then create a covering index to resolve it. More specifically, we will:
1. Create a test database.
2. Create a test table by selecting data from the AdventureWorks2008R2.Production.TransactionHistory table for testing.
3. Create a primary key on the test table.
4. Create a non-clustered index on the test table.
5. Query the test table, specifying the non-clustered index key column in the WHERE clause, to create a key lookup.
6. Resolve the key lookup by modifying the non-clustered index key to create a covering index.

Implementation Steps:
1. Create a test database.

-- if the TestDB exists, then drop it
IF  EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = N'TestDB')
DROP DATABASE [TestDB]

-- create the TestDB
CREATE DATABASE TestDB
GO

2. Create a test table by selecting data from the AdventureWorks2008R2.Production.TransactionHistory table for testing.

-- borrow data from the AdventureWorks2008R2.Production.TransactionHistory table for testing
USE TestDB
GO
SELECT *
INTO TransactionHistory
FROM AdventureWorks2008R2.Production.TransactionHistory

3. Create a primary key on the test table.

-- create the primary key on the TransactionID column
-- this will create a clustered index on the TransactionID column
ALTER TABLE TransactionHistory
ADD CONSTRAINT [PK_TransactionID] PRIMARY KEY (TransactionID)

4. Create a non-clustered index on the test table.

-- create a non-clustered index on the ProductID column
IF EXISTS (SELECT name FROM sys.indexes
            WHERE name = N'NCI_TransactionHistory_ProductID')
DROP INDEX NCI_TransactionHistory_ProductID ON dbo.TransactionHistory
GO
CREATE INDEX NCI_TransactionHistory_ProductID
ON dbo.TransactionHistory (ProductID)
GO

5. Query the test table and specify the non-clustered index key column in the WHERE clause to create a key lookup. Here, we will set NOCOUNT, STATISTICS IO, and STATISTICS TIME on and then off to view the relevant time and I/O metrics and to suppress the row-count messages. We will also turn on the “Include Actual Execution Plan” option prior to executing the following query:

-- select data from the TransactionHistory table
-- filter by the ProductID column to create a key lookup
SET NOCOUNT ON
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT TH.TransactionID
, TH.ProductID
, TH.TransactionDate
, TH.Quantity
, TH.ActualCost
FROM TransactionHistory TH
WHERE TH.ProductID = 2
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
SET NOCOUNT OFF

The result for the messages will be:

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'TransactionHistory'. Scan count 1, logical reads 148, physical reads 0, read-ahead reads 10, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 111 ms.

The execution plan will show the Key Lookup operator:
Key Lookup

6. Resolve the key lookup by modifying the non-clustered index key to create a covering index.

-- create a covering index to resolve the key lookup
-- drop the NCI_TransactionHistory_ProductID nonclustered index 
-- then create a new covering index 
IF EXISTS (SELECT name FROM sys.indexes
            WHERE name = N'NCI_TransactionHistory_ProductID')
DROP INDEX NCI_TransactionHistory_ProductID ON dbo.TransactionHistory
GO
CREATE INDEX NCI_TransactionHistory_ProductID_TransactionDate_Quantity_ActualCost
ON dbo.TransactionHistory (ProductID, TransactionDate, Quantity, ActualCost)
GO

If you hover the mouse over the Key Lookup operator, the tooltip shows the output list for the operation. The complete output list can be difficult to view if the query is complicated. In that case, you can right-click on the Key Lookup operator, select Properties, and then copy the output list from the “Output List” dialog box.
Output List from the Key Lookup Operator

Please be aware that we are dropping and re-creating an index in a development environment. Do not do this in a production environment during business hours. You should go through testing, get approval for the requested changes, and then schedule a maintenance window or run the changes during a window of low activity.

In addition to the covering index, you can consider using included columns if the data columns for the index are wide and would exceed the 900-byte limit of an index key. The other alternative is to create extra indexes for index joins.
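For example, here is a minimal sketch of the covering index from step 6 rebuilt with included columns (the index name here is illustrative). Only ProductID participates in the index key; the other columns are stored at the leaf level, so they still cover the query without counting against the 900-byte key limit:

-- alternative covering index using included columns
-- only ProductID is part of the index key; TransactionDate, Quantity,
-- and ActualCost are stored at the leaf level and still cover the query
CREATE INDEX NCI_TransactionHistory_ProductID_Included
ON dbo.TransactionHistory (ProductID)
INCLUDE (TransactionDate, Quantity, ActualCost)
GO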

7. Run the same query that was run in step 5 and compare the logical reads and elapsed time. The logical reads have decreased to 3 from the previous 148, and the elapsed time has decreased to 0 ms from the previous 111 ms.

-- select data from the TransactionHistory table
-- filter by the ProductID column; the covering index now avoids the key lookup
SET NOCOUNT ON
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT TH.TransactionID
, TH.ProductID
, TH.TransactionDate
, TH.Quantity
, TH.ActualCost
FROM TransactionHistory TH
WHERE TH.ProductID = 2
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
SET NOCOUNT OFF

The result for the messages will be:

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'TransactionHistory'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.

The execution plan will show an Index Seek, and the key lookup is no longer an issue:
Index Seek

How to view the SQL Server Execution Plans?

Similar to how you plan your drive to the meeting place that your buddy, Billy Bob, told you about before starting your engine, SQL Server also has a plan for every query execution. If the plan is not in the cache, one is created before the requested query is executed. You may be a man with a plan; SQL Server is a mighty database engine with many plans, and they are cached in memory for reuse instead of being drawn up from scratch each time. Even better, you can remove an existing execution plan, optimize the query, and then re-execute it so that SQL Server creates a new execution plan, repeating the process until you are happy with the result.

Now, just how do you go about viewing the execution plans so that you can remove them if needed? In this post, we will perform the following walk-through step by step:

  1. Create a new test database.
  2. Create a new table by importing data from the AdventureWorks2008R2.Production.TransactionHistory table.
  3. Query the newly created table.
  4. Query the DMVs to view the execution plan created.
  5. Save the execution plan with a .sqlplan extension to view it in a GUI window if needed.
  6. Remove the execution plan by running DBCC FREEPROCCACHE with the plan handle.
  7. Optimize the query by adding indexes or query hints if needed, and then re-execute it when ready to create a new execution plan.

Implementation steps:

  1. Create a new test database.

    -- create a test database
    -- !!!Do not do this on a production server!!!
    CREATE DATABASE TestDB
    
  2. Create a new table by importing data from the AdventureWorks2008R2.Production.TransactionHistory table.

    -- import data from the AdventureWorks2008R2.Production.TransactionHistory table
    USE TestDB
    GO
    SELECT *
    INTO dbo.TransactionHistory
    FROM AdventureWorks2008R2.Production.TransactionHistory
    
  3. Query the newly created table.

    -- create the primary key on the TransactionID column
    ALTER TABLE dbo.TransactionHistory
    ADD CONSTRAINT [PK_TransactionID] PRIMARY KEY (TransactionID)
    
    -- create a non-clustered index on the ProductID column
    CREATE NONCLUSTERED INDEX IX_TransactionHistory_ProductID
    ON dbo.TransactionHistory (ProductID)
    
    -- query the newly created table
    SELECT [TransactionID]
          ,[ProductID]
          ,[ReferenceOrderID]
          ,[ReferenceOrderLineID]
          ,[TransactionDate]
          ,[TransactionType]
          ,[Quantity]
          ,[ActualCost]
          ,[ModifiedDate]
      FROM [TestDB].[dbo].[TransactionHistory]
    WHERE ProductID = 784
    
  4. Query the DMVs to view the execution plan created.

    -- query the DMVs to view the query plan created
    SELECT st.text, qs.execution_count, qp.query_plan, qs.plan_handle
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text (qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    WHERE st.text like '%ProductID = 784%' -- specify the code to filter
    
  5. Save the execution plan with a .sqlplan extension to view it in a GUI window if needed. Click the XML link in the query_plan column. If a new window with XML code opens, save it as a .sqlplan file and then reopen it to view the plan graphically. If it opens directly as a graphical execution plan, you can save it for review at a later time or share it with your colleagues for further analysis.

  6. Remove the execution plan by running DBCC FREEPROCCACHE with the plan handle.

    -- remove the execution plan by specifying its plan handle
    DBCC FREEPROCCACHE (0x06000E001C47E20540C17A84000000000000000000000000) -- plan handle in the ()
    
  7. Optimize the query by adding indexes or query hints if needed, and then re-execute it when ready to create a new execution plan.
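As a side note, running DBCC FREEPROCCACHE without a plan handle clears the entire plan cache, which forces every subsequent query to compile a fresh plan. Reserve that for test environments:

-- remove ALL execution plans from the plan cache
-- !!!Do not do this on a busy production server!!!
DBCC FREEPROCCACHE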

How to find the query IO in terms of reads and writes?

You are interested in finding out the I/O in terms of reads and writes for a query execution before tuning activities, such as adding a missing index or updating statistics, so that you can see the comparisons of improvements. What can you do?

You can wrap the query with the SET STATISTICS IO ON and SET STATISTICS IO OFF statements to have SQL Server display information about the amount of disk activity generated by the query. Needless to say, when STATISTICS IO is ON, the statistical information is displayed; when it is set to OFF, the information is not displayed. For example:

SET STATISTICS IO ON
SELECT p.ProductID
, p.Name
, pch.StandardCost
, pch.StartDate
, pch.EndDate
, i.Quantity
, i.Shelf
FROM Production.Product AS p
	JOIN Production.ProductCostHistory AS pch
		ON p.ProductID = pch.ProductID
	JOIN Production.ProductInventory AS i 
		ON p.ProductID = i.ProductID	
SET STATISTICS IO OFF	

Output:

(583 row(s) affected)
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ProductInventory'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Product'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'ProductCostHistory'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

In the output, Table is the name of the table. When the scan count is 0, the query used a clustered or unique index to seek one value. When it is 1, the query searched for one value using a non-unique clustered index, which is defined on a non-primary-key column. When the scan count is higher than 1, the number represents the number of separate seeks or scans performed to locate the values. Two of the most critical values to pay attention to are the “logical reads” and “physical reads.” Logical reads are the number of pages read from the data cache; physical reads are the number of pages read from disk. Logical reads are preferable, while physical reads are undesirable. You can look up all the other output items and their meanings in the reference link.

An interesting and very critical note about STATISTICS IO is that the logical reads are not affected by other system resources. With STATISTICS TIME, the values for CPU time and elapsed time are affected by the system resources. For example, if a heavy-duty data-loading job runs at the same time as your query, the CPU time and elapsed time will differ from a run at a time when the data-loading job is not running. But the logical reads in STATISTICS IO will stay the same until you tune the query.

You can attempt to add possible missing indexes to the tables or update the statistics, and then re-run the query with the SET STATISTICS IO ON statement to compare the logical reads, physical reads, and other items and observe the improvements, if there are any.
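As a side note, on a test server you can flush the clean pages from the buffer pool first if you want physical reads to show up in the comparison; the next run will then read its pages from disk. A sketch, for test environments only:

-- write dirty pages to disk, then drop the clean pages from the
-- buffer pool so the next run shows physical reads, not just logical reads
-- !!!Test environments only -- this empties the data cache!!!
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
GO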

REFERENCES:

SET STATISTICS IO
http://msdn.microsoft.com/en-us/library/ms184361(v=sql.105).aspx

How to find the query execution time in terms of CPU usage and duration?

You are interested in finding out the query execution time in terms of CPU usage and total duration before tuning activities such as adding a missing index or updating statistics so that you can see the comparisons of improvements. What can you do?

To obtain detailed information on the amount of time in milliseconds needed to parse, compile, and execute a query, you can wrap the query with the SET STATISTICS TIME ON and SET STATISTICS TIME OFF statements. When SET STATISTICS TIME is ON, the time statistics for a statement are displayed. When set to OFF, the time statistics are not displayed. For example:

SET STATISTICS TIME ON
SELECT p.ProductID
, p.Name
, pch.StandardCost
, pch.StartDate
, pch.EndDate
, i.Quantity
, i.Shelf
FROM Production.Product AS p
	JOIN Production.ProductCostHistory AS pch
		ON p.ProductID = pch.ProductID
	JOIN Production.ProductInventory AS i 
		ON p.ProductID = i.ProductID	
SET STATISTICS TIME OFF

Output:

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.

(583 row(s) affected)

SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 106 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.

A CPU time of 0 ms and an elapsed time of 0 ms for the parse and compile time indicate that the optimizer reused an existing execution plan and avoided spending time parsing and compiling the query again. In this example, I had already run this query as a test a number of times. On the first execution, the optimizer parsed and compiled the query to produce the execution plan and stored it as a cached plan.

The “SQL Server Execution Times:” section shows that the required CPU time is 16 ms and the elapsed time is 106 ms, where ms stands for milliseconds.

You can attempt to add possible missing indexes to the tables or update the statistics, and then re-run the query with the SET STATISTICS TIME ON statement to compare the CPU time and elapsed time and observe the improvements, if there are any.
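If you prefer to compare CPU usage across executions without wrapping each query, a sketch like the following reads the cumulative figures from the sys.dm_exec_query_stats DMV. Note that the DMV reports times in microseconds and only covers queries whose plans are still in the plan cache:

-- top 10 cached queries by total CPU time (times are in microseconds)
SELECT TOP 10 st.text
, qs.execution_count
, qs.total_worker_time AS total_cpu_us
, qs.total_worker_time / qs.execution_count AS avg_cpu_us
, qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC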

REFERENCES:

SET STATISTICS TIME
http://msdn.microsoft.com/en-us/library/ms190287(v=sql.105).aspx

What is an execution plan and how to save it in various formats?

On a daily basis, we receive requests to perform tasks that require planning, for example, traveling to a designated location or preparing a meal for our family. If we have been to the designated location in the past, we will use the same travel plan or directions to get there, and if we already have experience cooking the dishes we are preparing, we will likely follow the same steps we used in the past. In a similar fashion, when SQL Server receives a request to execute a query, its internal query optimizer will attempt to calculate the most cost-effective and efficient way to execute the query and store the plan in the plan cache. If a plan to execute the query already exists, the SQL Server query optimizer will use the plan that is already cached.

There are two types of execution plans: estimated execution plans and actual execution plans. Execution plans are stored in a section of memory called the plan cache. When a query is submitted to SQL Server, the optimizer checks the plan cache for a plan that matches the query. If a match is found, the optimizer reuses the cached plan instead of compiling a new one, which reduces the overhead of creating another execution plan and storing it in the plan cache.

You can view execution plans in two formats: graphical and XML. Graphical plans are quicker and easier to read, and you can save XML execution plans and reopen them as graphical plans. Here's an example of how to view the estimated and actual execution plans in both formats for the following query:

SELECT p.ProductID
, p.Name
, pch.StandardCost
, pch.StartDate
, pch.EndDate
, i.Quantity
, i.Shelf
FROM Production.Product AS p
	JOIN Production.ProductCostHistory AS pch
		ON p.ProductID = pch.ProductID
	JOIN Production.ProductInventory AS i 
		ON p.ProductID = i.ProductID			

1. Open SSMS, copy the above query and paste it in a new query window.
2. To view the graphical estimated execution plan, right-click in the query window and then select “Display Estimated Execution Plan.” The plan will be shown in the lower portion of the window. You can save the graphical estimated execution plan by right-clicking in the “Execution Plan” window and selecting the “Save Execution Plan As…” option.
3. To view the graphical actual execution plan, right-click in the query window and then select “Include Actual Execution Plan.” You will then need to execute the query. The results will be displayed in the lower portion of the window in addition to the actual execution plan. You may need to click the “Execution Plan” tab to see the graphical actual execution plan. You can save the graphical actual execution plan by right-clicking in the “Execution Plan” window and selecting the “Save Execution Plan As…” option.
4. To view the XML estimated execution plan, you will need to execute the SET SHOWPLAN_XML ON statement, run the query, and then run the SET SHOWPLAN_XML OFF statement to turn it off so that subsequent queries execute as intended. Once the XML link appears in the results window, you can click it to open it, and then save it as XML or with the .sqlplan extension so that you can later open it in the graphical plan format. For example:

  1. Execute the statement to set the estimated execution plan on:
    SET SHOWPLAN_XML ON
    
  2. Run the query:
    SELECT p.ProductID
    , p.Name
    , pch.StandardCost
    , pch.StartDate
    , pch.EndDate
    , i.Quantity
    , i.Shelf
    FROM Production.Product AS p
    	JOIN Production.ProductCostHistory AS pch
    		ON p.ProductID = pch.ProductID
    	JOIN Production.ProductInventory AS i 
    		ON p.ProductID = i.ProductID	
    
  3. Execute the statement to set the estimated execution plan off:
    SET SHOWPLAN_XML OFF
    

5. To view the XML actual execution plan, you will need to wrap the query with the SET STATISTICS XML ON and SET STATISTICS XML OFF statements and then execute it. Once the XML link appears in the results window, you can click it to open it, and then save it as XML or with the .sqlplan extension so that you can later open it in the graphical plan format. For example:

SET STATISTICS XML ON
SELECT p.ProductID
, p.Name
, pch.StandardCost
, pch.StartDate
, pch.EndDate
, i.Quantity
, i.Shelf
FROM Production.Product AS p
	JOIN Production.ProductCostHistory AS pch
		ON p.ProductID = pch.ProductID
	JOIN Production.ProductInventory AS i 
		ON p.ProductID = i.ProductID		
SET STATISTICS XML OFF		

REFERENCES:
Displaying Graphical Execution Plans (SQL Server Management Studio)
http://msdn.microsoft.com/en-us/library/ms178071(v=sql.105).aspx

Graphical Execution Plan Icons
http://msdn.microsoft.com/en-us/library/ms175913(v=sql.105).aspx

How to Create SQL Server Performance Reports Using Excel?

Once you have run perfmon to capture the objects and counters for a scheduled time period, the natural next step is to analyze the results. Wouldn't it be nice if you could create charts and graphs and present your findings to your team or clients? Yes, it would. It can be done using Excel, and it is easier than you think.

You will first need to convert the perfmon counter log file(s) to .csv file(s) using the Windows relog program. Then format the file to add in the bells and whistles: averages, min, max, standard deviation, and charts and graphs.

To convert the perfmon counter log file(s) to csv file(s):
1. Locate the perfmon counter log (the .blg file).
2. At the command prompt, browse to the directory that contains the .blg file.
3. To convert the .blg file to a .csv file, run: relog logfile.blg -f CSV -o newfile.csv. Replace logfile.blg with the name of your .blg file and newfile.csv with the name of the .csv file you wish to save as.
4. The relog program will do the conversion and report when the command has completed successfully.

To format the newly converted .csv file into a presentable report:
1. Open the perfmon log csv file using Excel.
2. Replace all occurrences of \\SERVERNAME\ with an empty string, where SERVERNAME is the name of the server you ran perfmon on.
3. In Cell A1, replace “(PDH-CSV 4.0) (Eastern Daylight Time)(240)” with “Time”.
4. Delete the second row because very often the first data row has invalid data.
5. Format Column A to “Date” – 3/14/01 1:30 PM format.
6. Format the numbers: select all of the columns except column A, right-click and select “Format Cells”, choose the Number format with zero decimals, and check the “Use 1000 Separator” box.
7. Create a copy of the worksheet for Summary, PivotChart, and Details, and name them accordingly, e.g., Summary, PivotChart, Details.

To create a Summary report for Average, Min, Max, and Standard Deviation for all the performance counters:
1. On the Summary worksheet, insert 6 rows on top of row 1.
2. In cells A2 to A5, type in Average, Min, Max, and Standard Deviation.
3. In the B column, put in the formulas. The assumption here is that the data goes from row 8 to row 100, but make sure to change row 100 to the last data row in your worksheet.

  • B2 (Average) = AVERAGE(B8:B100)
  • B3 (Min) = MIN(B8:B100)
  • B4 (Max) = MAX(B8:B100)
  • B5 (Standard Deviation) = STDEV(B8:B100) – shows how much variation or “dispersion” exists from the average (mean, or expected value). A low standard deviation indicates that the data points tend to be very close to the mean, whereas high standard deviation indicates that the data points are spread out over a large range of values.

4. Move your cursor to cell B8, which should be your first cell with data. Then, on the View menu, select Freeze Panes. That way we can move around the spreadsheet while still seeing the dates on the left and the formulas and headers across the top.
5. Click on cell B1 and key in the value “=B7” to reference cell B7, where the heading is shown.
6. Fill row 1 from B1 all the way to the right end of the worksheet so that it references row 7, by dragging the + handle on the lower right-hand corner of B1.
7. Do the same for the Average, Min, Max, and Standard Deviation rows.
8. Make row 1 bold and resize the columns to fit the contents if needed.



To create PivotTable with PivotChart:
1. On the PivotChart worksheet, click on the Insert menu, select PivotTable and then select PivotChart.
2. Accept the default settings and click the “OK” button.
3. To generate a graph for CPU usage, for example (you can use the same approach for all the other objects):

  1. From the “PivotTable Field List”, select “Time” and drag it into the “Axis Fields (Categories)” area.
  2. From the “PivotTable Field List”, select “Processor(_Total)\% Processor Time” and drag it into the “Values” area.
  3. You can choose whether the values are presented as Average, Min, Max, or StdDev by clicking the drop-down arrow on the value box and selecting the “Value Field Settings…” option.
  4. You can close the “PivotTable Field List” by clicking its x button to leave more space for the graph.
  5. You can expand the graph by clicking on its lower right-hand corner and dragging to the right and down to re-size it.
  6. You can remove the “total” box on the right by clicking it and then deleting it.
  7. You can edit the heading by clicking it and keying in the desired text.

4. You can also create the averages, min, max, and standard deviations for the other areas by using the Memory PivotChart as a template: copy it, rename the copies to Processor, Network, Disk, and SQL Server, and then change or add the required counters from the Field List.
5. Save the .csv as an Excel workbook file to preserve the formatting.

How to capture the performance monitor counters and objects using PERFMON?

In the previous blogs, we have reviewed the performance objects and counters to identify memory, processor, disk, and overall SQL Server performance pressures and bottlenecks. In this blog post, we are going to review the steps in running the performance monitor (PERFMON) in both Windows Server 2003 and 2008 to capture the performance monitor counters and objects.

To minimize Performance Monitor overhead, it is usually recommended that you limit the number of counters, select them carefully, and save the counter logs to a local disk other than the ones being monitored, rather than transferring the performance data across the network.

Windows Server 2003:

  1. Open Performance Monitor by running PERFMON in the “Run…” textbox.
  2. Expand the “Performance Logs and Alerts” node.
  3. Right-click on “Counter Logs.”
  4. Select “New Log Settings…” from the popup menu.
  5. Assign a name for the new log settings, e.g., “ServerName_Perfmon”.
  6. Click the “Add Counters…” button and add the desired counters for each “Performance object” you wish to collect from. Please see my previous blog posts on identifying memory, processor, disk, and overall SQL Server performance pressures and bottlenecks for suggestions.
  7. Once you have added all the required counters, click on the “Log files” tab and click on the “Configure…” button to specify the location where the log file will be stored. Make sure the format stays “Binary file” because you can parse it into different formats later.
  8. Click on the “Schedule” tab to schedule it.

To make the PERFMON settings reusable as a template, you can save the list of counters as an .htm file so that you can use it on other SQL Server hosts. Right-click on the new log you created and select the “Save Setting As…” option to save the .htm file to the designated location. To use this list of objects and counters on other SQL Server hosts, open the .htm file in an editor such as Notepad and replace all instances of \\SQLServerMachineName with ‘ ‘ (a blank string) without the quotes.

To use the template PERFMON counter log file on a new Windows Server 2003 host, open PERFMON, right-click “Counter Logs” under the “Performance Logs and Alerts” node, select the “New Log Settings From…” option, and then browse to the .htm template PERFMON file.

After the scheduled PERFMON run completes, you can convert the .blg files to .csv files for further analysis. To convert a perfmon .blg file to a .csv file, open the command prompt, navigate to the directory where the binary file is located, and type:

relog logfile.blg -f CSV -o newfile.csv

Windows Server 2008 and Windows Server 2008 R2

  1. Open Performance Monitor by running PERFMON in the “Run…” textbox.
  2. Under “Reliability and Performance”, expand “Data Collector Sets”.
  3. Right-click on “User Defined”
  4. Select “New” from the popup menu then select “Data Collector Set”.
  5. Assign a name for the new log settings, e.g., “ServerName_Perfmon”.
  6. Under the “Create data logs” section, check the checkbox for “Performance counter”.
  7. Click the “Add…” button and add the desired counters for each “Performance object” you wish to collect from. Please see my previous blog posts on identifying memory, processor, disk, and overall SQL Server performance pressures and bottlenecks for suggestions.
  8. Once you have added all the required counters, click the “OK” button and then click the “Next” button.
  9. Specify the location to save the file.
  10. Click the “Finish” button.
  11. Right-click the new file under the “User Defined” folder and select “Properties”.
  12. Click on the “Schedule” tab.
  13. Click on the “Add” button to schedule a “Beginning date:” and “Start time:” and then click on the “Stop Condition” tab to specify the “Overall duration.”

To make the PERFMON settings reusable as a template, you can save the list of counters as an .xml file so that you can use it on other SQL Server hosts. Right-click on the new log folder under the “User Defined” folder and select “Save Template…” to save the .xml file to the designated location.

To use a template perfmon counter log file on a Windows Server 2008 or 2008 R2 host, open PerfMon > Reliability and Performance > Data Collector Sets > right-click “User Defined” > New > “Data Collector Set” > name the set > select the “Create from a template (Recommended)” radio button > click the “Browse” button and select the template .xml file > click the “Next” button > specify the location to save the file > leave the “Save and close” radio button selected > click the “Finish” button.

To modify the new “DataCollector01” set, right-click on it and select “Properties” to add or remove the counters as necessary. Click the “OK” button when done.

To schedule the new log file, right-click on the new file under the “User Defined” section, click “Properties”, then click the “Schedule” tab.

To convert perfmon .blg files to .csv files, open the command prompt, navigate to the directory where the binary file is located, and type:

relog logfile.blg -f CSV -o newfile.csv

REFERENCES:

How to create a log using System Monitor in Windows
http://support.microsoft.com/kb/248345

Using Performance Monitor
http://technet.microsoft.com/en-us/library/cc749115.aspx

What are the performance counters to capture to analyze SQL Server pressure?

You want to capture a system baseline for performance assessment before tuning. In the previous blogs, we have reviewed the performance counters to analyze memory, processor, and disk bottlenecks. Now, you want to know the most important performance counters to capture if you wish to assess and/or analyze SQL Server bottlenecks.

Before selecting the objects and counters, we need to define objects and counters in Windows Performance Monitor (PERFMON). A performance object is an entity for which performance data is available, and performance counters define the types of data that are available for a performance object. One performance object will have one or more associated performance counters. When added in Performance Monitor, they appear in the form \Performance Object\Performance Counter. For example, in \SQLServer:General Statistics\Processes Blocked and \SQLServer:General Statistics\User Connections, SQLServer:General Statistics is the performance object and Processes Blocked and User Connections are the performance counters.

To analyze the SQL Server bottlenecks, you can capture the performance monitor objects such as SQLServer:Access Methods, SQLServer:Latches, SQLServer:Locks(_Total), SQLServer:SQL Statistics, and SQLServer:General Statistics and the associated counters for these objects.

For the SQLServer:Access Methods object, you can add counters such as FreeSpace Scans/sec, Table Lock Escalations/sec, and Workfiles Created/sec. The following describes each of these counters:

  • \SQLServer:Access Methods\FreeSpace Scans/sec: Number of scans per second needed to search for free space within pages. This activity is usually caused by heap tables (tables without a clustered index).
  • \SQLServer:Access Methods\Table Lock Escalations/sec: Number of times per second that locks on a table were escalated to the table lock granularity.
  • \SQLServer:Access Methods\Workfiles Created/sec: Number of work files created per second. Workfiles are created in tempdb and are used for processing operations where the amount of data to be processed is too big to fit into the available memory.

For the SQLServer:Latches object, you can add the counter Total Latch Wait Time (ms). The following describes the counter:

  • \SQLServer:Latches\Total Latch Wait Time (ms): Total latch wait time in milliseconds for latch requests in the last second. Latches are used internally by SQL Server to protect the integrity of internal structures; they are not directly controlled by users.

For the SQLServer:Locks(_Total) object, you can add counters such as Number of Deadlocks/sec, Lock Timeouts/sec, and Lock Wait Time (ms). The following describes each of these counters:

  • \SQLServer:Locks(_Total)\Number of Deadlocks/sec: Number of lock requests per second that resulted in a deadlock.
  • \SQLServer:Locks(_Total)\Lock Timeouts/sec: Number of lock requests per second that timed out.
  • \SQLServer:Locks(_Total)\Lock Wait Time (ms): Total wait time in milliseconds for locks in the last second.

For the SQLServer:SQL Statistics object, you can add counters such as Batch Requests/sec, SQL Compilations/sec, and SQL Re-Compilations/sec. The following describes each of these counters:

  • \SQLServer:SQL Statistics\Batch Requests/sec: Number of command batches received per second. It is usually a good indicator of the load on SQL Server.
  • \SQLServer:SQL Statistics\SQL Compilations/sec: Number of SQL compilations per second, that is, the number of times SQL Server compiles an execution plan per second. What is compilation? Before a query, batch, stored procedure, trigger, or any other SQL statement is executed on SQL Server, it is compiled into a plan. The plan is then executed to produce the results. These compiled plans are stored in an area of SQL Server's memory known as the plan cache. Usually, it is recommended to encourage plan reuse in order to avoid compilation costs.
  • \SQLServer:SQL Statistics\SQL Re-Compilations/sec: Number of statement recompiles per second. Recompilation occurs when SQL Server checks a cached query plan for validity before executing it; if the plan fails the checks or is no longer considered satisfactory, it is compiled again, hence the term recompilation. Recompilations slow down query executions and should be reduced when possible.

For the SQLServer:General Statistics object, you can add counters such as Processes Blocked and User Connections. The following describes each of these counters:

  • \SQLServer:General Statistics\Processes Blocked: Identifies the number of currently blocked processes at the time the sample was taken.
  • \SQLServer:General Statistics\User Connections: Identifies the number of users connected to SQL Server at the time the sample was taken.
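If you would rather sample these SQL Server counters from inside the engine instead of PERFMON, a sketch like the following reads them from the sys.dm_os_performance_counters DMV. Keep in mind that the per-second counters are exposed as cumulative values, so you need to sample twice and take the difference to derive a true rate:

-- read the SQL Server performance counters from inside the engine
-- the per-second counters are cumulative here; sample twice for a rate
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE (object_name LIKE '%General Statistics%'
       AND counter_name IN ('Processes blocked', 'User Connections'))
   OR (object_name LIKE '%SQL Statistics%'
       AND counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec'))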

Common resolutions if the system is experiencing an overall SQL Server performance bottleneck:

  • Check for missing indexes and outdated statistics (see the sketch after this list)
  • Make sure there are scheduled maintenance jobs for index maintenance in addition to updating statistics
  • Check for long running queries and report to the development team for possible revision
  • Check for blocking and report to the development team for possible revision
  • Check for deadlocks and report to the development team for possible revision
  • Minimize ad hoc queries if there are any and rewrite them into stored procedures
  • Resolve the processor, memory, and disk bottlenecks if there are any
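For the missing-index check in the first bullet, one starting point is to query the missing-index DMVs, as sketched below. Treat the output as suggestions to review against your workload, not as indexes to create blindly:

-- missing-index suggestions recorded since the last SQL Server restart
-- review each suggestion; do not create the indexes blindly
SELECT d.statement AS table_name
, d.equality_columns
, d.inequality_columns
, d.included_columns
, s.user_seeks
, s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC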


REFERENCES:

SQL Server, Access Methods Object
http://msdn.microsoft.com/en-us/library/ms177426(v=sql.105).aspx

SQL Server, Latches Object
http://msdn.microsoft.com/en-us/library/ms177421(v=sql.105).aspx

SQL Server, Locks Object
http://msdn.microsoft.com/en-us/library/ms190216(v=sql.105).aspx

SQL Server, General Statistics Object
http://msdn.microsoft.com/en-us/library/ms190697(v=sql.105).aspx

SQL Server, SQL Statistics Object
http://msdn.microsoft.com/en-us/library/ms190911(v=sql.105).aspx

Object and Counter Design
http://msdn.microsoft.com/en-us/library/windows/desktop/aa372191(v=vs.85).aspx

Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005
http://technet.microsoft.com/en-us/library/cc966425.aspx

What are the performance counters to capture to analyze disk pressure?

You want to capture a system baseline for performance assessment before tuning. In the previous blogs, we have reviewed the performance counters to analyze memory and processor pressure. Now, you want to know the most important performance counters to capture if you wish to assess and/or analyze a disk bottleneck.

Before selecting the objects and counters, we need to define objects and counters in Windows Performance Monitor (PERFMON). A performance object is an entity for which performance data is available, and performance counters define the types of data that are available for a performance object. One performance object will have one or more associated performance counters. When added in Performance Monitor, they appear in the form \Performance Object\Performance Counter. For example, in \PhysicalDisk\% Disk Time and \PhysicalDisk\Current Disk Queue Length, PhysicalDisk is the performance object and % Disk Time and Current Disk Queue Length are the performance counters.

To analyze disk pressure and bottlenecks, you can capture the performance monitor object PhysicalDisk and the associated counters for this object. The following describes each of the counters and the recommended values for each:

  • \PhysicalDisk\% Disk Time: Shows the percentages of time the disks are busy servicing read or write requests. The recommended average should be less than 85%.
  • \PhysicalDisk\Avg. Disk Queue Length: Shows the number of disk requests that are waiting for a disk during the sample interval. The recommended average should be less than 2 per disk.
  • \PhysicalDisk\Current Disk Queue Length: Shows the number of disk requests that are currently waiting at the time the performance data is collected. The recommended average should be less than 2 per disk.
  • \PhysicalDisk\Disk Transfers/sec: Shows the number of read and writes per second regardless of the amount of data. The recommended average should be less than 100 per disk.
  • \PhysicalDisk\Disk Bytes/sec: Shows the amount of data transmitted per disk per second.
  • \PhysicalDisk\Avg. Disk Sec/Read: Shows the average in milliseconds (ms) to read from disk. The recommended average should be less than 10 milliseconds (ms).
  • \PhysicalDisk\Avg. Disk Sec/Write: Shows the average in milliseconds (ms) to write to disk. The recommended average should be less than 10 milliseconds (ms).

What if you have run Performance Monitor and found that the averages for these disk performance counters are at less-than-desired values? What can you do? You can review the following recommendations as starting points for troubleshooting or resolving the disk bottlenecks:

  • Find and review the queries that cause the disk pressure and optimize or rewrite them to reduce the disk I/O (see the sketch after this list).
  • Consider using a SAN and a RAID array if you have not already done so.
  • Get faster hard disks.
  • Relocate the log files to separate disks from the data files.
  • Create multiple files and filegroups in separate disk drives.
  • Increase the host server memory to reduce paging to the disks.
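For the first bullet, here is a sketch to find the cached queries that drive the most I/O, using the sys.dm_exec_query_stats DMV (the read and write figures are counted in 8 KB pages, and only queries whose plans are still cached will appear):

-- top 10 cached queries by total logical reads (counted in 8 KB pages)
SELECT TOP 10 st.text
, qs.execution_count
, qs.total_logical_reads
, qs.total_physical_reads
, qs.total_logical_writes
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC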

REFERENCES:

Examining and Tuning Disk Performance
http://technet.microsoft.com/en-us/library/cc938959.aspx

Monitoring Disk Usage
http://technet.microsoft.com/en-us/library/ms175903(v=sql.105).aspx

Object and Counter Design
http://msdn.microsoft.com/en-us/library/windows/desktop/aa372191(v=vs.85).aspx

Performance Monitor Counters
http://technet.microsoft.com/en-us/library/cc768048.aspx#XSLTsection131121120120

What are the performance counters to capture to analyze processor pressure?

You want to capture a system baseline for performance assessment before tuning. In a previous blog, we reviewed the performance counters to capture to analyze memory pressure. Now, you want to know the most important performance counters to capture if you wish to assess and/or analyze processor pressure.

Before selecting the objects and counters, we need to define objects and counters in Windows Performance Monitor (PERFMON). A performance object is an entity for which performance data is available, and performance counters define the types of data that are available for a performance object. One performance object will have one or more associated performance counters. When added in Performance Monitor, they appear in the form \Performance Object\Performance Counter. For example, in \System\Processor Queue Length and \Processor(_Total)\% Processor Time, System and Processor(_Total) are the performance objects and Processor Queue Length and % Processor Time are the performance counters.

To analyze processor pressure, you can capture performance monitor objects such as Processor(_Total), System, and SQLServer:Access Methods and the associated counters for these objects.

For the Processor(_Total) object, you can add counters such as % Processor Time and % Privileged Time. The following describes each of the counters and the recommended values for each:

  • \Processor(_Total)\% Processor Time: The percentage of elapsed time that the processors spend on executing active threads. Simply put, it is the percentage of time that the processors are busy. The recommended average value should always be below 80%.
  • \Processor(_Total)\% Privileged Time: Note that there are two modes for all activities inside the Windows operating system: user mode and kernel mode. Kernel mode is also known as privileged mode, and system activities such as disk access run in kernel (privileged) mode. The recommended average value should always be below 15%.

For the System object, you can add counters such as Processor Queue Length and Context Switches/sec. The following describes each of the counters and the recommended value for each:

  • \System\Processor Queue Length: Note that regardless of the number of processors, there is always only one processor queue. This counter measures the number of threads in the processor queue; simply put, it is the number of requests waiting on the processor. The recommended average value should always be below 2.
  • \System\Context Switches/sec: This counter measures the rate in which the processors are switched from one thread to another. The recommended average value should always be below 1,000.

For the SQLServer:Access Methods object, you can add counters such as Full Scans/sec, Worktables Created/sec, and Page Splits/sec. The following describes each of these counters:

  • \SQLServer:Access Methods\Full Scans/sec: This counter shows the number of full scans on tables and indexes, which are undesirable for large tables and indexes. The recommended average should be less than 1.
  • \SQLServer:Access Methods\Worktables Created/sec: Worktables are built in tempdb and are dropped automatically at the end of the statement. They are usually generated for GROUP BY, ORDER BY, or UNION queries. If the value for this counter is high, you will need to review the stored procedures and/or ad hoc queries for optimization and possible rewrite.
  • \SQLServer:Access Methods\Page Splits/sec: The index fill factor determines how full each data page is. If you find that the number of page splits is high, consider lowering the fill factor of your indexes; a lower fill factor leaves more free space in each data page, so more rows can be inserted before a page fills up and splits.

What if you have run Performance Monitor and found that the averages for these processor performance counters are at less-than-desired values? What can you do? You can review the following recommendations as starting points in troubleshooting or resolving the processor bottlenecks (a verification sketch follows the list):

  • Find and review the queries that use the most CPU time and rewrite them if possible in addition to checking for missing indexes and outdated statistics.
  • Relocate the CPU intensive software to a separate host server.
  • Increase the number of processors or upgrade to faster processors.
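Before making any of these changes, you can double-check the processor pressure from inside SQL Server. Here is a sketch using the sys.dm_os_schedulers DMV; a sustained non-zero runnable_tasks_count on the CPU schedulers means tasks are waiting in line for processor time:

-- tasks waiting for CPU on each scheduler; a sustained non-zero
-- runnable_tasks_count suggests processor pressure
SELECT scheduler_id
, current_tasks_count
, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255 -- user (CPU) schedulers only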


REFERENCES:

Optimizing SQL Server CPU Performance
http://technet.microsoft.com/en-us/magazine/2007.10.sqlcpu.aspx

Processor Counters
http://technet.microsoft.com/en-us/library/cc938593.aspx

Object and Counter Design
http://msdn.microsoft.com/en-us/library/windows/desktop/aa372191(v=vs.85).aspx

Performance Monitor Counters
http://technet.microsoft.com/en-us/library/cc768048.aspx#XSLTsection131121120120