Archive for June, 2013

Don’t forget to add Trace Flag 845 when enabling “Lock Pages in Memory” for SQL Server 2005+ Standard Edition 64-bit

You can enable “Lock Pages in Memory” for the SQL Server service account to prevent paging. For the Enterprise editions, you can just use the following steps and call it a day:

  1. On the Start menu, click Run. In the Open box, type gpedit.msc. The Group Policy dialog box opens.
  2. On the Group Policy console, expand Computer Configuration, and then expand Windows Settings.
  3. Expand Security Settings, and then expand Local Policies.
  4. Select the User Rights Assignment folder. The policies will be displayed in the details pane.
  5. In the pane, double-click Lock pages in memory.
  6. In the Local Security Policy Setting dialog box, click Add.
  7. In the Select Users or Groups dialog box, add the account that runs the SQL Server Service.
  8. Restart the machine for these changes to take effect.

But if you are running SQL Server 2005+ Standard Edition 64-bit, you will need to make sure you have the latest service packs and cumulative updates installed (provided you have verified application compatibility), and then there is a critical extra step: enable Trace Flag 845 as a startup trace flag:

    1. Browse to the SQL Server Configuration Manager.
    2. Double-click to open the SQL Server instance properties.
    3. Click the Advanced tab.
    4. Scroll down to the “Startup Parameters” row, click on the drop-down box, and add “;-T845” to the end of the startup parameters.

[Screenshot: Startup Parameters with “;-T845” appended]

  5. Restart the SQL Server Service.
  6. Verify the “Using locked pages for buffer pool” message is written in the SQL Server Error log file at startup by running the following command:
    xp_readerrorlog 0, 1, 'lock'
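
The (undocumented) arguments to xp_readerrorlog above are the log file number (0 = the current log), the log type (1 = the SQL Server error log), and a search string. On SQL Server 2008 and later, you can also confirm that locked pages are in use by querying the sys.dm_os_process_memory DMV; a minimal sketch:

-- locked_page_allocations_kb > 0 indicates the buffer pool is using locked pages
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory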

REFERENCES:

How to: Enable the Lock Pages in Memory Option (Windows)
http://msdn.microsoft.com/en-us/library/ms190730(v=sql.105).aspx

Lock pages in memory now available for 64 bit Standard Edition of SQL Server
http://blogs.msdn.com/b/suhde/archive/2009/05/20/lock-pages-in-memory-now-available-for-standard-edition-of-sql-server.aspx

How to create a SQL Server trace file, schedule it to run, and analyze the trace file results?

Similar to a video camera with which you can capture and record moments that may or may not be significant to you or others, SQL Server also has a built-in camera: SQL Server Profiler. SQL Server Profiler is an indispensable tool for capturing SQL Server activities for analysis or replay when troubleshooting.

While a full introduction to SQL Server Profiler is beyond the scope of this blog, we can go over the basic concepts and see an example of its power. You can explore further by using the on-line references or reading “Mastering SQL Server Profiler” by Brad McGehee.

Before we get started with an example, there are a few critical Profiler terms to cover: events, data columns, filters, and trace:

  • Events: Occurrences of some action within SQL Server, e.g., execution of a stored procedure, execution of an ad hoc SELECT statement, execution of INSERT/UPDATE/DELETE statements, etc.
  • Data Columns: Attributes of the events. Similar to an Excel file, the rows are events and the columns are data columns. They are context sensitive; not all events have all data columns. For example, if the row (event) is labeled “All Girls School,” the column (data column) for “Boys” should not apply.
  • Filters: Allow you to be selective with the trace files. You need to be specific about what you are looking for or else you will be looking for a “needle in a haystack.”
  • Trace: The events and data columns collected and stored in a physical file. You can also save a trace to a database table, to Profiler’s memory, or to an XML file.

In the following walk-through example, we will have the following goals and objectives:

  1. Run SQL Server Profiler to create a trace file with the specific events, data columns, and filters to trace executions of the stored procedure “SelectData” by a user named “BillyBob” in the “AdventureWorks” database.
  2. Extract the trace definition to a TSQL script.
  3. Modify the TSQL script to specify the location to store the trace files, enable rollover, set the file size, and set the duration to run the trace. In addition, modify the filters in the script: there is a bug that threw me for a loop for a few days, and you must change 1 to 0 in the exec sp_trace_setfilter statements for the filtering to take effect in the script.
  4. Schedule a SQL Server Agent job to run the trace definition script.
  5. Create a SQL Server Agent job to stop the trace job on demand in case needed.
  6. Load the trace files results into a database table for analysis.

The following are the events and data columns to be selected.

Events:

  • RPC:Completed: Indicates that a remote procedure call (RPC) has been completed.
  • SP:StmtCompleted: Indicates that a Transact-SQL statement within a stored procedure has completed.
  • SQL:BatchStarting: Indicates that the Transact-SQL batch is starting.
  • SQL:BatchCompleted: Indicates that the Transact-SQL batch has completed.
  • Showplan XML: Identifies the Showplan operators in a SQL statement.

Columns:

  • Duration: The duration (in microseconds) of the event. Note: In SQL Server 2005, the SQL Server Profiler graphical user interface displays the Duration column in milliseconds by default, but when a trace is saved to either a file or a database table, the Duration column value is written in microseconds.
  • ObjectName: The name of the object that is referenced.
  • TextData: The text value dependent on the event class that is captured in the trace.
  • CPU: The amount of CPU time (in milliseconds) that is used by the event.
  • Reads: The number of read operations on the logical disk that are performed by the server on behalf of the event. These read operations include all reads from tables and buffers during the statement’s execution. Represented in pages.
  • Writes: The number of physical disk write operations that are performed by the server on behalf of the event. Represented in pages.
  • IntegerData: The integer value dependent on the event class captured in the trace.
  • DatabaseName: The name of the database in which the user statement is running.
  • ApplicationName: The name of the client application that created the connection to an instance of SQL Server.
  • StartTime: The time at which the event started, when available.
  • EndTime: The time at which the event ended.
  • SPID: The server process ID (SPID) that is assigned by SQL Server to the process that is associated with the client.
  • LoginName: The name of the login of the user.
  • EventSequence: Sequence number for this event.
  • BinaryData: The binary value dependent on the event class that is captured in the trace.

Implementation steps

Step 1: Run SQL Server Profiler to create a trace file with the specific events, data columns, and filters to trace executions of the stored procedure “SelectData” by a user named “BillyBob” in the “AdventureWorks” database.

    1. Open SQL Server Management Studio (SSMS) and connect to the instance you wish to trace using a login belonging to the sysadmin server role or a login with permission to run SQL Server Profiler.
    2. Once connected, on the menu bar, click the Tools menu > SQL Server Profiler.
    3. On the “Connect to Server” dialog box, select or key in the instance name you wish to trace. Then, click on the “Connect” button to connect.
    4. On the “Trace Properties” dialog box, assign a name for the trace file in the “Trace name:” text box. In this case, I have named it “Stored Procs Trace.” In the “Use the template:” drop-down box, select “Blank” as the template. Check the “Save to file:” checkbox and then specify the location to save the trace file. In the “Set maximum file size (MB):” textbox, specify 25 as the MB file size for the file rollover and leave the “Enable file rollover” checkbox checked. Check the “Enable trace stop time:” checkbox to enable the trace stop time. We will modify this stop time in the TSQL script to be generated in a later step.
    5. Still on the “Trace Properties” dialog box, click on the “Events Selection” tab. Scroll down to see the “Stored Procedures” event and then click on the + symbol to expand the events under the “Stored Procedures” event class. Click on the checkboxes for “RPC:Completed” and “SP:StmtCompleted” to select them. Scroll down to the “TSQL” events and then click on the + symbol to expand the events under the “TSQL” event class. Click on the checkboxes for “SQL:BatchCompleted” and “SQL:BatchStarting” to select them. Scroll up to see the “Performance” event and then click on the + symbol to expand the events under the “Performance” event class. Click on the checkbox for “Showplan XML” to select it.
    6. Still on the “Trace Properties” dialog box, deselect the “Show all events” and “Show all columns” options by unchecking their checkboxes.
    7. Still on the “Trace Properties” dialog box, click on the “Organize Columns…” button. Order the columns in the following order by selecting a value and then clicking “Up” or “Down” to rearrange the order:
      • Duration
      • EventClass
      • ObjectName
      • TextData
      • CPU
      • Reads
      • Writes
      • IntegerData
      • DatabaseName
      • ApplicationName
      • StartTime
      • EndTime
      • SPID
      • LoginName
      • EventSequence
      • BinaryData

Click the “OK” button once the order is set. Scroll to the right and deselect the columns that are not needed by right-clicking on the column name and then selecting the “Deselect Column” option.

  8. Still on the “Trace Properties” dialog box, click the “Column Filters…” button. On the “Edit Filter” dialog box, scroll down to select the value “LoginName” and then click on the + symbol on the right for the “Like” option. In the textbox, key in the login name you wish to place a filter on; in this case, “BillyBob.” Scroll up or down to select the value “Database” and then click on the + symbol on the right for the “Like” option. In the textbox, key in the database name you wish to place a filter on; in this case, “AdventureWorks.” Scroll up or down to select the value “ObjectName” and then click on the + symbol on the right for the “Like” option. In the textbox, key in the stored procedure name you wish to place a filter on; in this case, “SelectData.” You can use the same approach to place more filters as needed. Once done, click on the “OK” button.
  9. Still on the “Trace Properties” dialog box, click the “Run” button to start the trace process.
  10. Connect to the SQL Server instance you wish to trace using the login account that you filtered on. In this case, I will connect using the “BillyBob” login. Once connected, switch back to the “SQL Server Profiler” screen; you will see a good number of rows being populated just from making a connection.
  11. Go back to the SSMS window and then execute the stored procedures you wish to trace.
  12. You can click on the stop button below the menu bar to stop the trace at any time, or you can wait for the stored procedures to finish executing before clicking the stop button.

Step 2: Extract the trace definition to a TSQL script.

  1. Once you have stopped the trace, you can extract the trace definition by clicking on the File menu > Export > Script Trace Definition > For SQL Server 2005 – 2008 R2…
  2. Browse to the destination where you wish to save the script file. Assign the file a descriptive name and then click on the “Save” button to save it. In this case, I have saved the TSQL script as “Stored Procs Trace.sql”

Step 3: Modify the TSQL script to specify the location to store the trace files, enable rollover, file size, and duration to run the trace.

  1. Open the trace definition TSQL script that you saved. In my case, I have opened the “Stored Procs Trace.sql” file.
  2. On line #13,
    set @DateTime = '2013-06-15 14:52:14.000'

    @DateTime is the argument for the stop time. Here, I will assign it a stop time of 120 minutes by modifying the line to

    set @DateTime = DATEADD(MI, 120, GETDATE())
  3. On line #14,
    set @maxfilesize = 5

    @maxfilesize is the file size to rollover once the file is full. Here, I will change it to

    set @maxfilesize = 25

    to increase the file size to 25MB before rolling over to create new ones.

  4. On line #22,
    exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere', @maxfilesize, @Datetime

    We need to change the @options argument value from 0 to 2 to enable file rollover, and key in the trace file name and location for the @tracefile parameter. For example:

    exec @rc = sp_trace_create @TraceID output, 2, N'C:\SQL Server Traces\Stored Procs Trace', @maxfilesize, @Datetime
  5. On line #93, you can modify the filter to trace for a different login name if needed.
    exec sp_trace_setfilter @TraceID, 11, 0, 6, N'BillyBob'

    Make sure you change the third argument from 1 to 0 if it is 1, and make the same change to all the other filters. This is the logical operator argument ([ @logical_operator = ]); it specifies whether the AND (0) or OR (1) operator is applied.

  6. Save the changes to the trace definition TSQL script file.

The trace script should look similar to the following:

/****************************************************/
/* Created by: SQL Server 2008 R2 Profiler */
/* Date: 06/30/2013 06:55:49 PM */
/****************************************************/

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @DateTime datetime

set @DateTime = DATEADD(MI, 110, GETDATE()) -- stop trace after 110 minutes
set @maxfilesize = 25

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share
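-- The second argument to sp_trace_create below (@options = 2) enables TRACE_FILE_ROLLOVER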

exec @rc = sp_trace_create @TraceID output, 2, N'C:\Blogs\HandsOnSQLServer\SQL Server Trace Files\Stored Procs Trace', @maxfilesize, @Datetime
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted

-- Set the events
declare @on bit
set @on = 1
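-- Event IDs used below: 122 = Showplan XML, 10 = RPC:Completed,
-- 45 = SP:StmtCompleted, 12 = SQL:BatchCompleted, 13 = SQL:BatchStarting.
-- Column IDs include: 1 = TextData, 2 = BinaryData, 10 = ApplicationName,
-- 11 = LoginName, 12 = SPID, 13 = Duration, 14 = StartTime, 15 = EndTime,
-- 16 = Reads, 17 = Writes, 18 = CPU, 25 = IntegerData, 34 = ObjectName,
-- 35 = DatabaseName, 51 = EventSequence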
exec sp_trace_setevent @TraceID, 122, 1, @on
exec sp_trace_setevent @TraceID, 122, 25, @on
exec sp_trace_setevent @TraceID, 122, 2, @on
exec sp_trace_setevent @TraceID, 122, 10, @on
exec sp_trace_setevent @TraceID, 122, 14, @on
exec sp_trace_setevent @TraceID, 122, 34, @on
exec sp_trace_setevent @TraceID, 122, 11, @on
exec sp_trace_setevent @TraceID, 122, 35, @on
exec sp_trace_setevent @TraceID, 122, 51, @on
exec sp_trace_setevent @TraceID, 122, 12, @on
exec sp_trace_setevent @TraceID, 10, 15, @on
exec sp_trace_setevent @TraceID, 10, 16, @on
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 17, @on
exec sp_trace_setevent @TraceID, 10, 2, @on
exec sp_trace_setevent @TraceID, 10, 10, @on
exec sp_trace_setevent @TraceID, 10, 18, @on
exec sp_trace_setevent @TraceID, 10, 34, @on
exec sp_trace_setevent @TraceID, 10, 11, @on
exec sp_trace_setevent @TraceID, 10, 35, @on
exec sp_trace_setevent @TraceID, 10, 51, @on
exec sp_trace_setevent @TraceID, 10, 12, @on
exec sp_trace_setevent @TraceID, 10, 13, @on
exec sp_trace_setevent @TraceID, 10, 14, @on
exec sp_trace_setevent @TraceID, 45, 16, @on
exec sp_trace_setevent @TraceID, 45, 1, @on
exec sp_trace_setevent @TraceID, 45, 17, @on
exec sp_trace_setevent @TraceID, 45, 25, @on
exec sp_trace_setevent @TraceID, 45, 10, @on
exec sp_trace_setevent @TraceID, 45, 18, @on
exec sp_trace_setevent @TraceID, 45, 34, @on
exec sp_trace_setevent @TraceID, 45, 11, @on
exec sp_trace_setevent @TraceID, 45, 35, @on
exec sp_trace_setevent @TraceID, 45, 51, @on
exec sp_trace_setevent @TraceID, 45, 12, @on
exec sp_trace_setevent @TraceID, 45, 13, @on
exec sp_trace_setevent @TraceID, 45, 14, @on
exec sp_trace_setevent @TraceID, 45, 15, @on
exec sp_trace_setevent @TraceID, 12, 15, @on
exec sp_trace_setevent @TraceID, 12, 16, @on
exec sp_trace_setevent @TraceID, 12, 1, @on
exec sp_trace_setevent @TraceID, 12, 17, @on
exec sp_trace_setevent @TraceID, 12, 10, @on
exec sp_trace_setevent @TraceID, 12, 14, @on
exec sp_trace_setevent @TraceID, 12, 18, @on
exec sp_trace_setevent @TraceID, 12, 11, @on
exec sp_trace_setevent @TraceID, 12, 35, @on
exec sp_trace_setevent @TraceID, 12, 51, @on
exec sp_trace_setevent @TraceID, 12, 12, @on
exec sp_trace_setevent @TraceID, 12, 13, @on
exec sp_trace_setevent @TraceID, 13, 12, @on
exec sp_trace_setevent @TraceID, 13, 1, @on
exec sp_trace_setevent @TraceID, 13, 10, @on
exec sp_trace_setevent @TraceID, 13, 14, @on
exec sp_trace_setevent @TraceID, 13, 11, @on
exec sp_trace_setevent @TraceID, 13, 35, @on
exec sp_trace_setevent @TraceID, 13, 51, @on

-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint

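-- sp_trace_setfilter arguments: trace ID, column ID (11 = LoginName,
-- 34 = ObjectName, 35 = DatabaseName), logical operator (0 = AND, 1 = OR),
-- comparison operator (6 = LIKE), and the filter value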
exec sp_trace_setfilter @TraceID, 11, 0, 6, N'BillyBob'
exec sp_trace_setfilter @TraceID, 34, 0, 6, N'SelectData'
exec sp_trace_setfilter @TraceID, 35, 0, 6, N'AdventureWorks'
-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1

-- display trace id for future references
select TraceID=@TraceID
goto finish

error:
select ErrorCode=@rc

finish:
go

Step 4: Schedule a SQL Server Agent job to run the trace definition script.

  1. Log in to the SQL Server instance using a login in the sysadmin server role or one with permission to create and schedule a SQL Server Agent job.
  2. Create a new SQL Server Agent job. For example, I will create a new job called “Trace for Stored Procs.”
  3. Create a new job step. Name it “Trace,” copy the trace definition TSQL script, and paste it into the “Command:” textbox. Click the “OK” button.
  4. On the “Select a page” section, click on the “Schedules” link. Click the “New…” button to add a schedule to run the job for the trace.
  5. Click the “OK” button to finish the job creation process.
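
If you prefer TSQL over the GUI, the job can also be created with the msdb stored procedures. The following is a minimal sketch, assuming a daily 8:00 AM schedule; the job, step, and schedule names are illustrative:

USE msdb
GO
-- create the job
EXEC sp_add_job @job_name = N'Trace for Stored Procs'
-- add the job step; paste the trace definition TSQL script into @command
EXEC sp_add_jobstep @job_name = N'Trace for Stored Procs'
, @step_name = N'Trace'
, @subsystem = N'TSQL'
, @database_name = N'master'
, @command = N'/* paste the trace definition TSQL script here */'
-- schedule the job to run daily at 8:00 AM
EXEC sp_add_jobschedule @job_name = N'Trace for Stored Procs'
, @name = N'Daily 8AM'
, @freq_type = 4        -- daily
, @freq_interval = 1    -- every 1 day
, @active_start_time = 80000
-- target the local server
EXEC sp_add_jobserver @job_name = N'Trace for Stored Procs', @server_name = N'(local)'
GO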

Step 5: Create a SQL Server Agent job to stop the trace job on demand in case needed.

To view the trace information, you can use the fn_trace_getinfo() function. For example:

SELECT * FROM :: fn_trace_getinfo(default)

You can further narrow the results down to only the running traces by specifying 5 for the property column and 1 for the value column. For example:

SELECT * FROM :: fn_trace_getinfo(default)
WHERE property = 5 and value = 1
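
The property column values returned by fn_trace_getinfo are: 1 = trace options, 2 = trace file name, 3 = max size, 4 = stop time, and 5 = current trace status. For example, to list the file that each trace is writing to:

SELECT traceid, value AS trace_file
FROM :: fn_trace_getinfo(default)
WHERE property = 2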

Note that by default there is a default trace that is always running; it will always have a traceid of 1. It is recommended that you leave it running and only kill the non-default traces created by you or your fellow colleagues when needed. Prior to killing any of the non-default traces, you can view and make a note of the traces you wish to kill. The following is the TSQL script to kill all non-default traces:

-- cursor to kill all running traces except the default trace
-- traceid 1 is the default trace

-- view the number of traces currently running
SELECT * FROM :: fn_trace_getinfo(default)
WHERE property = 5 and value = 1

DECLARE @traceid int

-- get all the running traces except the default one
DECLARE cursor_traceids CURSOR FOR
SELECT traceid FROM :: fn_trace_getinfo(default)
WHERE property = 5
and value = 1
and traceid <> 1 -- traceid 1 is the default trace

OPEN cursor_traceids
FETCH NEXT FROM cursor_traceids INTO @traceid

WHILE @@FETCH_STATUS = 0
BEGIN
-- To stop and delete its definition
EXEC sp_trace_setstatus @traceid, @status = 0 -- 0 stops the trace
EXEC sp_trace_setstatus @traceid, @status = 2 -- 2 closes the trace and deletes its definition from the server

FETCH NEXT FROM cursor_traceids INTO @traceid
END

CLOSE cursor_traceids
DEALLOCATE cursor_traceids

To create a SQL Server Agent job to kill all non-default traces if needed, you can use the following steps:

  1. Log in to the SQL Server instance using a login in the sysadmin server role or one with permission to create and schedule a SQL Server Agent job.
  2. Create a new SQL Server Agent job. For example, I will create a new job called “Stop All Non-Default Traces.”
  3. Create a new job step. Name it “Stop Non-Default Traces,” copy the TSQL script to stop all non-default traces, and paste it into the “Command:” textbox. Click the “OK” button.
  4. Click the “OK” button to finish the job creation process.
  5. In SSMS, expand the “SQL Server Agent” folder. Under the “Jobs” folder, double-click the “Job Activity Monitor.”
  6. In the “Job Activity Monitor,” locate the job “Stop All Non-Default Traces.” Then right-click on it and select “Start Job at Step…” to start the job. This will kill all non-default traces.
  7. Go back to SSMS; you can verify that the non-default traces have been stopped by running the following line of code:
SELECT * FROM :: fn_trace_getinfo(default)

Step 6: Load the trace files results into a database table for analysis.

Once you have stopped the trace, you can view the trace results using SQL Server Profiler or load the results into a SQL Server database table. To load them into a database table for analysis, you can use the following steps.

  1. Locate the trace files and make a note of them. In this case, my trace files are located in “C:\SQL Server Traces” and the first trace file name is “Stored Procs Trace.trc”
  2. To load the trace files into a database table, you can use the fn_trace_gettable() function. To load the initial file and all its rollover files, you can specify the keyword “default” as the second parameter. For example:

    USE AdventureWorks
    GO
    SELECT * INTO temp_trc
    FROM fn_trace_gettable('C:\SQL Server Traces\Stored Procs Trace.trc', default);
    GO

  3. To query the trace file table:
-- selecting from the trace table
USE AdventureWorks
GO
SELECT CASE
WHEN Duration IS NULL THEN 'NULL'
ELSE CONVERT(VARCHAR, CONVERT(decimal(20,3), Duration/1000000.0))
END as [Duration in Seconds]
, EventClass
, ObjectName
, TextData
, CPU
, Reads
, Writes
, IntegerData
, DatabaseName
, ApplicationName
, StartTime
, EndTime
, SPID
, LoginName
, EventSequence
, BinaryData
FROM temp_trc
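
To take the analysis a step further, you can aggregate the loaded trace rows. The following is a minimal sketch, assuming the temp_trc table created above, that ranks the captured statements by total duration:

-- top 10 statements by total duration (Duration is stored in microseconds)
SELECT TOP 10 CONVERT(NVARCHAR(MAX), TextData) AS statement_text
, COUNT(*) AS executions
, SUM(Duration)/1000000.0 AS total_duration_in_seconds
, SUM(Reads) AS total_reads
, SUM(Writes) AS total_writes
FROM temp_trc
WHERE TextData IS NOT NULL
GROUP BY CONVERT(NVARCHAR(MAX), TextData)
ORDER BY SUM(Duration) DESC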

REFERENCES:

SQL Server Event Class Reference
http://msdn.microsoft.com/en-us/library/ms175481(v=sql.90).aspx

SQL Server Profiler Terminology
http://msdn.microsoft.com/en-us/library/ms189070(v=sql.90).aspx

Describing Events by Using Data Columns
http://msdn.microsoft.com/en-us/library/ms190762(v=sql.90).aspx

sp_trace_create
http://msdn.microsoft.com/en-us/library/ms190362(v=sql.90).aspx

fn_trace_getinfo
http://msdn.microsoft.com/en-us/library/ms173875(v=sql.90).aspx

fn_trace_gettable
http://msdn.microsoft.com/en-us/library/ms188425(v=sql.90).aspx

sp_trace_setfilter
http://msdn.microsoft.com/en-us/library/ms174404(v=sql.90).aspx

Mastering SQL Server Profiler
by Brad M McGehee

How to identify and resolve key lookups?

What is a key lookup? A key lookup is an undesirable situation where a query references columns that are not part of the non-clustered index, and SQL Server has to navigate to the data rows in the table to access the required columns. It is also known as a bookmark lookup. You can spot key lookups by looking at the execution plans. You can resolve key lookups by using a covering index or an index join.
A covering index is an index that consists of all the required columns and therefore can return the results without SQL Server having to navigate to the table data. An index join uses an index intersection between two or more indexes to fully cover a query.
In the following example, we will create a key lookup and then create a covering index to resolve it. More specifically, we will:
1. Create a test database.
2. Create a test table by selecting data from the AdventureWorks2008R2.Production.TransactionHistory table for testing.
3. Create a primary key on the test table.
4. Create a non-clustered index on the test table.
5. Query the test table and specify the non-clustered index key column field on the WHERE clause to create a key lookup.
6. Resolve the key lookup by modifying the non-clustered index key to create a covering index.

Implementation Steps:
1. Create a test database.

-- if the TestDB exists, then drop it
IF  EXISTS (SELECT name FROM master.dbo.sysdatabases WHERE name = N'TestDB')
DROP DATABASE [TestDB]

-- create the TestDB
CREATE DATABASE TestDB
GO

2. Create a test table by selecting data from the AdventureWorks2008R2.Production.TransactionHistory table for testing.

-- borrow data from the AdventureWorks2008R2.Production.TransactionHistory table for testing
USE TestDB
GO
SELECT *
INTO TransactionHistory
FROM AdventureWorks2008R2.Production.TransactionHistory

3. Create a primary key on the test table.

-- create the primary key on the TransactionID column
-- this will create a clustered index on the TransactionID column
ALTER TABLE TransactionHistory
ADD CONSTRAINT [PK_TransactionID] PRIMARY KEY (TransactionID)

4. Create a non-clustered index on the test table.

-- create a non-clustered index on the ProductID column
IF EXISTS (SELECT name FROM sys.indexes
            WHERE name = N'NCI_TransactionHistory_ProductID')
DROP INDEX NCI_TransactionHistory_ProductID ON dbo.TransactionHistory
GO
CREATE INDEX NCI_TransactionHistory_ProductID
ON dbo.TransactionHistory (ProductID)
GO

5. Query the test table and specify the non-clustered index key column in the WHERE clause to create a key lookup. Here, we will set NOCOUNT, STATISTICS IO, and STATISTICS TIME on and then off to view the relevant time and I/O metrics and suppress the count-of-rows messages. We will also turn on the “Include Actual Execution Plan” option prior to executing the following query:

-- select data from the TransactionHistory table
-- filter by the ProductID column to create a key lookup
SET NOCOUNT ON
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT TH.TransactionID
, TH.ProductID
, TH.TransactionDate
, TH.Quantity
, TH.ActualCost
FROM TransactionHistory TH
WHERE TH.ProductID = 2
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
SET NOCOUNT OFF

The result for the messages will be:

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Table ‘TransactionHistory’. Scan count 1, logical reads 148, physical reads 0, read-ahead reads 10, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 111 ms.

The Execution plan will show the Key Lookup:
[Screenshot: execution plan showing the Key Lookup operator]

6. Resolve the key lookup by modifying the non-clustered index key to create a covering index.

-- create a covering index to resolve the key lookup
-- drop the NCI_TransactionHistory_ProductID nonclustered index 
-- then create a new covering index 
IF EXISTS (SELECT name FROM sys.indexes
            WHERE name = N'NCI_TransactionHistory_ProductID')
DROP INDEX NCI_TransactionHistory_ProductID ON dbo.TransactionHistory
GO
CREATE INDEX NCI_TransactionHistory_ProductID_TransactionDate_Quantity_ActualCost
ON dbo.TransactionHistory (ProductID, TransactionDate, Quantity, ActualCost)
GO

If you mouse over the Key Lookup operator, the tooltip shows the output list for the operation. It will be difficult to view the complete output list if the query is complicated. In that case, you can right-click on the Key Lookup operator, select Properties, and then copy the output list from the “Output List” dialog box.
[Screenshot: Output List from the Key Lookup operator]

Please be aware that we are dropping and re-creating an index in a development environment. Do not do this in a production environment during business hours. You should go through testing, obtain approval for the requested changes, and then schedule a maintenance window or run the changes during a window of low activity.

In addition to the covering index, you can consider using included columns if the data columns for the index are wide and would exceed the 900-byte limit of an index key. The other alternative is to create extra indexes for index joins.
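
For example, the following is a minimal sketch of the same covering index built with included columns instead of a wide index key; the index name is illustrative:

-- keep ProductID as the only key column and store the remaining
-- columns at the leaf level of the index
CREATE INDEX NCI_TransactionHistory_ProductID_Incl
ON dbo.TransactionHistory (ProductID)
INCLUDE (TransactionDate, Quantity, ActualCost)
GO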

7. Run the same query that was run in step 5 and compare the logical reads and elapsed time. The logical reads have decreased to 3 from the previous 148, and the elapsed time has decreased to 0 ms from the previous 111 ms.

-- select data from the TransactionHistory table
-- filter by the ProductID column, now covered by the new index
SET NOCOUNT ON
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT TH.TransactionID
, TH.ProductID
, TH.TransactionDate
, TH.Quantity
, TH.ActualCost
FROM TransactionHistory TH
WHERE TH.ProductID = 2
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
SET NOCOUNT OFF

The result for the messages will be:

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Table ‘TransactionHistory’. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.

The Execution plan will show the index seek and the key lookup is no longer an issue:
[Screenshot: execution plan showing the Index Seek]

How to view the database auto-growth settings and correct them?

Auto-growth is the process by which the SQL Server engine expands the database data or log files when they are out of space. The growth size will depend on the file growth settings you have specified for your databases. There are three ways you can configure the file growth:

  • Specific size
  • Percentage of the current size
  • Not grow at all

In addition, you have two options when configuring the maximum size for the database data or log files:

  • Unrestricted growth: the files will keep on growing until you are out of disk space
  • Restricted growth: you specify the maximum file size in MB for the files

You should always set the initial size for the database data and log files appropriately to prevent the auto-growth events. When an auto-growth event occurs, the negative impacts can be:

  • Locking and blocking: when auto-growth occurs, a database level lock is required. All the processes will need to wait until auto-growth completes.
  • Fragmentation: when auto-growth occurs, SQL Server will need to find additional disk space somewhere on the disk drive(s), and usually that disk space will not be physically adjacent to the existing database file space.

When creating a new database, you should always establish the auto-growth settings in addition to setting the initial sizes correctly. By default, new databases are created based on the settings of the Model system database. The Model database has the default settings of auto-growth by 1 MB for data files and 10% for log files. Both have unrestricted growth, so the files will grow until the disk is full. Imagine you have a transaction-intensive OLTP database that is over 500GB for the data file and 200GB for the log file, and the auto-growth setting is using the defaults. Whenever an auto-growth event is needed, the data file will grow by 1MB and the log file will grow by 20GB or more (10% of the existing log file size). It’s definitely not pretty, as the auto-growth events for the data file will occur constantly as it grows 1MB at a time.

Now, if the SQL Server instance has been in production for over 2 years and there are over 20 databases, how can you quickly identify the auto-growth settings for each of the databases without going through each one by right-clicking the database, viewing its properties, browsing to the Files section, and finally clicking the ellipsis button to view the auto-growth setting? The following script can help you:

-- auto growth settings for data and log files
select DB_NAME(mf.database_id) database_name
, mf.name logical_name
, CONVERT (DECIMAL (20,2) , (CONVERT(DECIMAL, size)/128)) [file_size_MB]
, CASE mf.is_percent_growth 
    WHEN 1 THEN 'Yes'
    ELSE 'No'
  END AS [is_percent_growth]
, CASE mf.is_percent_growth
	WHEN 1 THEN CONVERT(VARCHAR, mf.growth) + '%'
	WHEN 0 THEN CONVERT(VARCHAR, mf.growth/128) + ' MB'
  END AS [growth_in_increment_of]
, CASE mf.is_percent_growth
    WHEN 1 THEN CONVERT(DECIMAL(20,2), (((CONVERT(DECIMAL, size)*growth)/100)*8)/1024)
    WHEN 0 THEN CONVERT(DECIMAL(20,2), (CONVERT(DECIMAL, growth)/128)) 
  END AS [next_auto_growth_size_MB] 
, CASE mf.max_size
	WHEN 0 THEN 'No growth is allowed'
	WHEN -1 THEN 'File will grow until the disk is full'
	ELSE CONVERT(VARCHAR, mf.max_size/128) + ' MB' -- max_size is stored in 8KB pages
  END AS [max_size]
, physical_name  
from sys.master_files mf

You can identify how often an auto-growth event occurs by:

  • Querying the default trace for the “Data File Auto Grow” and “Log File Auto Grow” events, as shown in the sketch below
  • If the default trace is not running, running a trace to capture the “Data File Auto Grow” and “Log File Auto Grow” events

[Screenshot: Data File Auto Grow and Log File Auto Grow events]
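
The following is a minimal sketch of querying the default trace for these events; event class 92 is Data File Auto Grow and 93 is Log File Auto Grow:

-- locate the default trace file, then pull the auto-grow events from it
DECLARE @path NVARCHAR(260)
SELECT @path = path FROM sys.traces WHERE is_default = 1

SELECT DatabaseName
, FileName
, StartTime
, Duration/1000000.0 AS duration_in_seconds -- Duration is in microseconds
FROM fn_trace_gettable(@path, default)
WHERE EventClass IN (92, 93) -- 92 = Data File Auto Grow, 93 = Log File Auto Grow
ORDER BY StartTime DESC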

It is highly recommended that you pre-size the data and log files to eliminate or reduce the auto-growth events. You can modify the initial size, max size, and the file growth settings of the database data and log files by using the following code:

ALTER DATABASE DatabaseName
MODIFY FILE
(
NAME = FileName
, SIZE = InitialSize
, MAXSIZE = RestrictedMaxSize
, FILEGROWTH = SizeToGrowBy
)

For example, to modify the initial size, max size, and the file growth settings for the data and log files for the TestDB:

-- modify the data file settings
ALTER DATABASE TestDB
MODIFY FILE
(
NAME = TestDB
, SIZE = 60MB
, MAXSIZE = 100MB
, FILEGROWTH = 5MB
) 
-- modify the log file settings
ALTER DATABASE TestDB
MODIFY FILE
(
NAME = TestDB_log
, SIZE = 60MB
, MAXSIZE = 100MB
, FILEGROWTH = 12%
)
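
Note that the SIZE specified with MODIFY FILE must be larger than the current file size; ALTER DATABASE cannot shrink a file. To reduce a file, use DBCC SHRINKFILE instead. For example, a minimal sketch to shrink the TestDB data file down to 60MB:

-- shrink the TestDB data file to a target size of 60MB
USE TestDB
GO
DBCC SHRINKFILE (TestDB, 60)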