How to rebuild all indexes in all databases

DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(1000)
DECLARE @fillfactor INT

SET @fillfactor = 90

DECLARE DatabaseCursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','msdb','tempdb','model','distribution')
ORDER BY 1

OPEN DatabaseCursor

FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN

   -- create table cursor
   SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' +
  table_name + '']'' as tableName FROM [' + @Database + '].INFORMATION_SCHEMA.TABLES
  WHERE table_type = ''BASE TABLE'''

   EXEC (@cmd)
   OPEN TableCursor

   FETCH NEXT FROM TableCursor INTO @Table
   WHILE @@FETCH_STATUS = 0
   BEGIN

       IF (@@MICROSOFTVERSION / POWER(2, 24) >= 9)
       BEGIN
           -- SQL 2005 or higher command
           SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3), @fillfactor) + ')'
           EXEC (@cmd)
       END
       ELSE
       BEGIN
           -- SQL 2000 command
           DBCC DBREINDEX(@Table, ' ', @fillfactor)
       END

       FETCH NEXT FROM TableCursor INTO @Table
   END

   CLOSE TableCursor
   DEALLOCATE TableCursor

   FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor

An alternative to the Count(*) function.

Sometimes rapid code development doesn't produce the most efficient code. Take the age-old line of code used to count the rows in a table:

SELECT COUNT(*) FROM dbo.tblERPInvoiceData;

Obviously this will give you the row count for a table, but at what cost? Any SELECT COUNT(*) against a table will ultimately result in a table scan or clustered index scan.


Turning on STATISTICS IO reveals 4,534 logical reads just to return the row count of 1,133,242.

Table 'tblERPInvoiceData'. Scan count 1, logical reads 4534, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Starting with SQL Server 2005, Microsoft introduced a DMV, sys.dm_db_partition_stats, that provides you with the same information at a fraction of the cost. It requires a little more coding, but once you turn on STATISTICS IO, you will see the performance benefit.

USE [db_name];
SELECT s.name AS 'SchemaName'
,o.name AS 'TableName'
,SUM(p.row_count) AS 'RowCount'
FROM sys.dm_db_partition_stats p
JOIN sys.objects o ON o.object_id = p.object_id
JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE p.index_id < 2 AND o.type = 'U'
AND s.name = 'dbo'
AND o.name = 'tblERPInvoiceData'
GROUP BY s.name, o.name;

Since we're querying a DMV, we never touch the base table. We can see here we need only 15 logical reads (9 + 4 + 2) to return the same row count of 1,133,242, and the tblERPInvoiceData table is nowhere in our execution plan.

Table 'sysidxstats'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 0, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysclsobjs'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

By using the DMV, we have improved the query performance and cut the total I/O by over 99%. Another benefit of the DMV approach is that we take no locks on the base table, so we avoid the possibility of blocking other queries hitting that table.
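The same DMV can also report approximate row counts for every user table at once. A hedged sketch (the GROUP BY simply generalizes the single-table query to all user tables):

```sql
-- Sketch: approximate row counts for all user tables via the DMV.
SELECT s.name AS SchemaName,
       o.name AS TableName,
       SUM(p.row_count) AS [RowCount]
FROM sys.dm_db_partition_stats p
JOIN sys.objects o ON o.object_id = p.object_id
JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE p.index_id < 2        -- heap (0) or clustered index (1) rows only
  AND o.type = 'U'          -- user tables
GROUP BY s.name, o.name
ORDER BY SUM(p.row_count) DESC;
```

Keep in mind the DMV's counts can lag slightly behind the true value on a busy system, so use it for monitoring rather than for logic that requires an exact count.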

Backup all Databases in MSSQL – SQL Script

If you manage a server, at some point you will have needed to back up all databases in one go. Here's a script that does just that.



DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name

-- specify database backup directory
SET @path = 'D:\SQLBackup\'

-- specify filename format
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

DECLARE db_cursor CURSOR FOR
SELECT name
FROM   master.dbo.sysdatabases
WHERE  name NOT IN ('master', 'model', 'msdb', 'tempdb') -- exclude these databases

-- create a dated subfolder for today's backups
DECLARE @DatabaseFolder NVARCHAR(255)
SET @DatabaseFolder = @path + @fileDate
EXEC master.sys.xp_create_subdir @DatabaseFolder;

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
   SET @fileName = @path + @fileDate + '\' + @name + '.BAK'
   BACKUP DATABASE @name TO DISK = @fileName
   FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor

Getting a list of stored procedures in SQL Server Azure

I ran across a problem the other day: I needed to change a lot of stored procedures on my cloud server in bulk. The requirement was to change a join condition from a text field to an integer, which is in itself a better idea, as numeric indexes perform better.

First, I wrote a query to find the stored procedures containing the text I wanted to modify. On an on-premises server you could search syscomments:

SELECT OBJECT_NAME(id)
FROM syscomments
WHERE [text] LIKE '%Foo%'
AND OBJECTPROPERTY(id, 'IsProcedure') = 1

On SQL Azure, the supported catalog view is sys.sql_modules:

SELECT OBJECT_NAME(object_id)
FROM sys.sql_modules
WHERE OBJECTPROPERTY(object_id, 'IsProcedure') = 1
AND definition LIKE '%Foo%'

Second, I get the text of each stored procedure I wish to modify by running sp_helptext:

sp_helptext 'procedure_name'

I go through all the procedures like this, modifying them when needed and re-running the original statement, until there are no results left.


SQL Procedures and Functions that any DBA should know

Have you ever urgently needed a SQL script and not wanted to waste time hunting for it on Google? We have put together a list of the most-used SQL queries for database maintenance and performance.

For example purposes, let's assume that you have a table called Company with a primary key on CompanyId (an identity column of type int).

The rest of the data in the table will be mentioned when time comes.

Primary Keys

In Table Creation

CREATE TABLE Company (
    CompanyId int IDENTITY(1,1) NOT NULL,
    CompanyName varchar(255) NOT NULL,
    Address varchar(255),
    City varchar(255),
    CONSTRAINT PK_Company PRIMARY KEY CLUSTERED (CompanyId)
)

After Table Creation

ALTER TABLE Company
ADD CONSTRAINT PK_Company PRIMARY KEY CLUSTERED (CompanyId)

Drop Key

ALTER TABLE Company
DROP CONSTRAINT PK_Company

Create Index

CREATE INDEX IX_Company_City ON Company (City)

Create Unique Index

CREATE UNIQUE INDEX IX_Company_City ON Company (City)

Among the resources that shape indexes, the most elemental ones at your disposal are the statements that create the index and two system stored procedures, sp_helpindex @tablename and sp_help @tablename.

As an example, I asked the Object Browser in Query Analyzer to script a table as a CREATE statement. I then invoked sp_helpindex to get the details of the indexes on this table, Company.


The index named PK_Company was created as the result of a constraint creating a primary key.

PK_Company is a clustered index, meaning that the table is in the order specified by the index key,
in this case, the identity column CompanyID.
The index has only one column, and because it backs a primary key constraint, it is also unique and does not allow NULLs. Because I didn't specify the direction of the index, it was created in the default ascending order.
Since the clustered index was created on the PRIMARY filegroup, the table and its index live there. Clustered indexes are always created on the same filegroup as the data, because the leaf (lowest) level of a clustered index is the data itself. In fact, you can move a table to a different filegroup by creating its clustered index on that filegroup; both structures will move. Additionally, clustered indexes are effective for range searches because of their proximity to the data and because they sort the table in index order.

The Unintended Consequences of Clustered Indexes

Be sure to analyze the retrieval requests for your system and design your indexes to accommodate most of those requests. You’ll have to accept tradeoffs between the most frequently executed queries and the less frequently executed but more demanding queries.

The optimizer can choose to use an existing index based on its statistics. However, if the statistics indicate that reading the whole table would be cheaper, the optimizer will perform a table scan instead. The database can maintain index statistics automatically with the AUTO_CREATE_STATISTICS option set, or you can update them manually; depending on how busy your server is, you may elect to run updates at quieter times. As a side benefit, you can also keep statistics on non-indexed columns, which can help query execution.

Clustered and non-clustered indexes are interrelated by virtue of the clustering index key. You can promote efficiency in index maintenance by creating clustered indexes first, because the non-clustered indexes use the clustered index's key as their row pointers. If you create a non-clustered index before a clustered index and then drop the clustered one, you force the server to do double the work. When you create a clustered index, you create the clustering key that succeeding secondary (non-clustered) indexes use. If the clustered index comes after the non-clustered indexes, the server drops and recreates the non-clustered indexes so it can tell them where to find pages via the clustering key. To avoid this extra work, use the DROP_EXISTING clause of the CREATE INDEX statement.

Another consequence of clustered indexes is that you can influence the physical location of the data by using the ON filegroup clause. If you designate a different filegroup for the index, you move the table to the named filegroup in clustered index order. Because the clustered index in effect is the data, one less intermediate level lies between the index and the data. You'll have to invoke the ALTER DATABASE command to create the new filegroup, then relocate your table with the ON filegroup option of a CREATE INDEX statement.

If you elect to do this after the fact (i.e., your table already exists and you have already created the new filegroup), you can use the CREATE INDEX command with the DROP_EXISTING clause to prevent the double drop and rebuild of the non-clustered indexes.
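As a hedged sketch of that sequence (the database, filegroup, and file names below are invented for illustration; the Company table and its PK_Company index come from the earlier example):

```sql
-- Hypothetical names: MyDatabase, SecondaryFG and its file are placeholders.
ALTER DATABASE MyDatabase ADD FILEGROUP SecondaryFG;

ALTER DATABASE MyDatabase
ADD FILE (NAME = 'SecondaryData', FILENAME = 'D:\Data\SecondaryData.ndf')
TO FILEGROUP SecondaryFG;

-- Recreate the clustered index on the new filegroup; the table moves with it.
-- DROP_EXISTING avoids rebuilding the non-clustered indexes twice.
CREATE UNIQUE CLUSTERED INDEX PK_Company
ON Company (CompanyId)
WITH DROP_EXISTING
ON SecondaryFG;
```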



DBCC DROPCLEANBUFFERS

Use this command to remove all the data from SQL Server's data cache (buffer pool) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, you may want to run the CHECKPOINT command first: CHECKPOINT writes all dirty buffers to disk, so a subsequent DBCC DROPCLEANBUFFERS clears out all data buffers, not just the ones that were already clean.
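The sequence between test runs looks like this:

```sql
-- Flush dirty pages to disk, then drop the (now clean) buffers
-- so the next test starts with a cold data cache.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
```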



DBCC ERRORLOG

If you rarely restart the MSSQLSERVER service, you may find that your server error log grows very large and takes a long time to load and view. You can cycle the current server log (essentially create a new log) by running DBCC ERRORLOG. Consider scheduling a regular job that runs this command once a week to cycle the log automatically; as a rule, I do this for all of my SQL Servers on a weekly basis. You can also accomplish the same thing with the stored procedure sp_cycle_errorlog.
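A weekly Agent job for this can be sketched as follows; the job, step, and schedule names are invented for illustration:

```sql
-- Hedged sketch: a weekly SQL Server Agent job that cycles the error log.
USE msdb;
EXEC sp_add_job @job_name = 'Cycle SQL Error Log';
EXEC sp_add_jobstep
    @job_name = 'Cycle SQL Error Log',
    @step_name = 'Run sp_cycle_errorlog',
    @subsystem = 'TSQL',
    @command = 'EXEC sp_cycle_errorlog;';
EXEC sp_add_jobschedule
    @job_name = 'Cycle SQL Error Log',
    @name = 'Weekly',
    @freq_type = 8,               -- weekly
    @freq_interval = 1,           -- Sunday
    @freq_recurrence_factor = 1,  -- every week
    @active_start_time = 010000;  -- 01:00:00
EXEC sp_add_jobserver @job_name = 'Cycle SQL Error Log';
```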

Procedure Caching

DECLARE @intDBID INT
SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')
DBCC FLUSHPROCINDB (@intDBID)

DBCC FLUSHPROCINDB: used to clear the stored procedure cache for one specific database on a SQL Server, not the entire server. The database ID number to be affected must be supplied as part of the command.

You may want to use this command before testing to ensure that previously cached stored procedure plans won't skew the results.
If you want to clear the procedure cache for all databases, use: DBCC FREEPROCCACHE
If you want to see how the procedure cache is being used, run: DBCC PROCCACHE


DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)

In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical index fragmentation. The command runs online: it does not lock tables, so users can still access them while the defragmentation process runs. Unfortunately, this command doesn't do a great job of logical defragmentation.

The only way to truly reduce logical fragmentation is to rebuild your table’s indexes. While this will remove all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users.

Of course, if you are unable to find a time to reindex your indexes, then running DBCC INDEXDEFRAG is better than doing nothing.


DBCC DBREINDEX('table_name', fillfactor)

Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server’s performance.
If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt.

Database reorganizations can be done by scheduling SQLMAINT.EXE to run via the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below).

--Script to automatically reindex all tables in a database
USE DatabaseName --Enter the name of the database you want to reindex
DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
   PRINT 'Reindexing ' + @TableName
   DBCC DBREINDEX(@TableName, ' ', 90)
   FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor

The script will automatically reindex every index in every table of any database you select, and provide a fill factor of 90%. You can substitute any number you want for the 90 in the above script.

When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a table are being rebuilt, the table becomes unavailable to your users. For example, when a non-clustered index is rebuilt, a shared table lock is placed on the table, preventing all but SELECT operations from being performed on it. When a clustered index is rebuilt, an exclusive table lock is placed on the table, preventing any table access. Because of this, only run this command when users don't need access to the tables being reorganized.



DBCC SHOWCONTIG (table_id, index_id)

Used to show how fragmented the data and indexes are in a specified table. If the data pages storing data or index information become fragmented, it takes more disk I/O to find the data and move it to the SQL Server buffer cache, hurting performance. This command tells you how fragmented those data pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate it. Note: this is fragmentation of data pages within the SQL Server data (MDF) file, not fragmentation of the physical file itself.

Since this command requires the ID of both the table and the index being analyzed, you may want to run the following script so you don't have to look up the table ID and index ID manually.

--Script to identify table fragmentation
--Declare variables
DECLARE
   @ID int,
   @IndexID int,
   @IndexName varchar(128)

--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table

--Get the index values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName

--Display the fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)
While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database.


DBCC SHOW_STATISTICS (table_name, index_name)

Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on.


DBCC UPDATEUSAGE ('databasename')

The official use for this command is to report and correct inaccuracies in the sysindexes table, which can result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server, so you may want to run it periodically to clean up potential problems. This command can take some time to run, and it will negatively affect SQL Server's performance while running, so schedule it for off-peak times. When you run this command, you must specify the name of the database you want affected.

Removing duplicate records from a table where the rows are similar but still distinct.

We had an issue with a supplier table which contained records that were quite similar but still distinct.

For example this:

Supplier ID   Supplier Number   Company ID   Name               Address
5             3                 COMP1        BLACK AND DECKER   UK, London
6             3                 COMP1        BLACK AND DECKER   United Kingdom, London
7             3                 COMP2        BLACK AND DECKER   UK, London
8             3                 COMP2        BLACK AND DECKER   United Kingdom, London


We want to remove records 6 and 8 but keep 5 and 7.

To do this, we wrote the following query:

DELETE FROM tblSuppliers
WHERE EXISTS (SELECT 'x' FROM tblSuppliers X
WHERE tblSuppliers.Number = X.Number AND tblSuppliers.CompanyID = X.CompanyID
AND tblSuppliers.SupplierID > X.SupplierID);



Other Methods

Choose one of the following queries to identify or remove duplicate rows from a table leaving only unique records in the table:

Method 1:

SQL> DELETE FROM table_name A WHERE ROWID > (
  2    SELECT min(rowid) FROM table_name B
  3    WHERE A.key_values = B.key_values);

This deletes every row whose rowid is bigger than the smallest rowid for a given key, leaving one row per key.

Method 2:

SQL> create table table_name2 as select distinct * from table_name1;
SQL> drop table table_name1;
SQL> rename table_name2 to table_name1;

This method is usually faster. However, remember to recreate all indexes, constraints, triggers, etc. on the table when done.

Method 3 (similar to the one we used):

SQL> delete from my_table t1
SQL> where  exists (select 'x' from my_table t2
SQL>                 where t2.key_value1 = t1.key_value1
SQL>                   and t2.key_value2 = t1.key_value2
SQL>                   and t2.rowid      > t1.rowid);

Note: You can eliminate the N^2 comparisons by creating an index on the join columns used in the inner query, so there is no need to scan the entire table for each row. This will speed up the deletion considerably.
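A minimal sketch of that index, matching the key columns used in Method 3 (the index name is invented):

```sql
-- An index on the key columns turns each correlated-subquery lookup
-- in Method 3 into an index seek instead of a full table scan.
CREATE INDEX idx_my_table_keys ON my_table (key_value1, key_value2);
```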

Note 2: If the columns you are comparing allow NULLs, wrap them in the ISNULL function, because NULL is not equal to NULL. This should not be an issue in practice, as key columns should be NOT NULL by definition.
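A hedged sketch of Method 3 with NULL-safe comparisons; the sentinel value '' is an assumption, so pick one that cannot occur in your real data:

```sql
-- Method 3 with NULL-safe comparisons via ISNULL, so that two rows
-- with NULL in the same key column compare as equal.
DELETE FROM my_table t1
WHERE EXISTS (SELECT 'x' FROM my_table t2
              WHERE ISNULL(t2.key_value1, '') = ISNULL(t1.key_value1, '')
                AND ISNULL(t2.key_value2, '') = ISNULL(t1.key_value2, '')
                AND t2.rowid > t1.rowid);
```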

Automating Server-Side Tracing in SQL Server

Experts from within and outside Microsoft have long said that running Profiler against a production environment is not a recommended practice, as it can degrade the server's performance. Instead, they suggest server-side tracing via the SQL Trace system stored procedures. DBAs often questioned this: what if I profile the production server from a different PC? Well, I haven't heard a fully convincing answer to the Profiler vs. server-side tracing debate, but I attended SQL PASS 2003 in Seattle (November 11th to 14th, 2003), and more than once during the event the bigwigs from Microsoft PSS recommended server-side tracing over Profiler.
Well, they definitely know their stuff, so I decided to start using server-side tracing instead of Profiler. Not to mention, I have personally heard of incidents where DBAs brought down SQL Servers by running Profiler (especially in version 7.0).

Prior to attending PASS, I had never used the SQL Trace system stored procedures to set up a server-side trace, so I started off by reading SQL Server Books Online. There are a handful of SQL Trace system stored procedures that you can use to set up a server-side trace, and a few system functions to query information about currently running traces.

Here is a list of SQL Trace system stored procedures and functions:

SQL Trace system stored procedures

sp_trace_create Creates a trace definition
sp_trace_generateevent Creates a user-defined event
sp_trace_setevent Adds an event or event column to a trace, or removes one
sp_trace_setfilter Applies a filter to a trace
sp_trace_setstatus Used to start, stop and close traces

SQL Trace system functions

fn_trace_geteventinfo Returns information about the events traced by a specified trace
fn_trace_getfilterinfo Returns information about the filters applied to a specified trace
fn_trace_getinfo Returns information about a specified trace or existing traces
fn_trace_gettable Returns trace file information in a table format, for querying purposes
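For example, fn_trace_getinfo can list every trace currently defined on the server (in SQL Server 2000 these functions are invoked with the :: prefix; passing default returns information for all traces):

```sql
-- List all existing traces and their properties
-- (file name, max size, stop time, status).
SELECT * FROM ::fn_trace_getinfo(default)
```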

If you look at the documentation for the above stored procedures in SQL Server Books Online, you will realize that they accept about four input parameters each, on average. If you have ever used Profiler, you know there are numerous events and data columns, and it is cumbersome to specify all of them using these stored procedures, because every event has a number associated with it, and so does every data column. You cannot remember all those numbers and repeatedly call the above procedures with various EventIDs and ColumnIDs. No wonder many DBAs prefer the point-and-click Profiler. But hey, Microsoft is not really recommending Profiler on a production server, and I must trace production SQL Servers for various troubleshooting and tuning purposes.

So, I decided to write my own wrapper stored procedures that wrap the trace system stored procedures and make it easier to set up traces. In this article, I'll provide you with downloadable stored procedures that you can use to set up your own server-side traces without much effort. These stored procedures make the job almost as easy as using Profiler.

For example, using my wrapper stored procedures, you could set up and start a trace to record all stored procedure calls in a specified database (say, database ID 37), along with the following information: the actual stored procedure call, start time, end time, duration, application name and host name. As you will see below, this takes four calls to the friendly wrapper procedures. To set up the same trace directly with the trace system stored procedures, you would have to make at least 10 calls: one for each event traced and one for each data column captured. A massive saving on the number of stored procedure calls 🙂 and I definitely find these wrappers easier and quicker to set up. The more complex the trace definition gets, the bigger the savings.
DECLARE @TraceID int

EXEC CreateTrace
    'C:\My SQL Traces\All Procedure Calls',
    @OutputTraceID = @TraceID OUT

-- 'SP:Completed' is assumed here as the traced event
EXEC AddEvent
    @TraceID,
    'SP:Completed',
    'TextData, StartTime, EndTime, ApplicationName, ClientHostName, DatabaseID'

EXEC AddFilter
    @TraceID,
    'DatabaseID',
    37

EXEC StartTrace @TraceID

The massive saving in the number of stored procedure calls comes from the fact that the main wrapper stored procedures accept a comma-delimited list of all events and data columns. However many events and data columns you want to capture in your trace, you make only one call to the wrapper stored procedure.

Better yet, all the above wrapper procedures handle errors gracefully and return meaningful, helpful error messages instead of throwing a cryptic return code.

An important note: only members of the sysadmin fixed server role can execute the SQL Trace system stored procedures. So even if you manage to call my wrapper stored procedures, they will fail to invoke the SQL Trace system procedures if you are not a member of that role.

Deployment Instructions:

Now let me introduce you to all the wrapper stored procedures I created:

Stored procedure name and parameter usage:

CreateTrace — Creates an empty trace definition, ready for use.

Its first parameter specifies the trace file name and complete path. Do not provide a .trc extension to the file name, as SQL Trace automatically adds the .trc extension to the output file.

@OverwriteFile – bit

Specifies whether to overwrite the trace file, if it already exists. Default is 0, in which case if the file already exists, an error will be raised and trace will not be created. Specify 1 to overwrite an existing trace file.

@MaxSize – bigint

Specifies the maximum size in megabytes (MB) a trace file can grow to. Default is 5 MB.
This stored procedure restricts the maximum trace file size to 512 MB (half a gigabyte) as a safety measure, but this can be overridden by setting the variable @MaxAllowedSize, found in the body of the stored procedure, to a bigger value.

@Rollover – bit

Specifies that when the trace file reaches the maximum specified size, a new trace file will be created. Default is 1, meaning a new file is created when the current trace file reaches the maximum size; if you specify 0, tracing stops when the file reaches its size limit. The new file gets the same name, postfixed with a number to indicate the sequence. For example, when MyTrace.trc reaches its maximum size, MyTrace_1.trc will be created.

@Shutdown – bit

Defaults to 0. If you specify 1, SQL Server will shut down, if the trace cannot be written to the file for whatever reason. Use this option with caution, and only when absolutely needed.

@Blackbox – bit

Defaults to 0. If you specify 1, a blackbox trace will be created. A blackbox trace stores a record of the last 5 MB of trace information produced by the server. When 1 is specified, all other parameters are ignored. To learn more about how the blackbox trace works, consult SQL Server 2000 Books Online.

@StopAt – datetime
Defaults to NULL. When NULL, the trace will run until it is manually stopped or until the server shuts down. If you specify a valid date and time, the trace will stop automatically at that specified date and time.

@OutputTraceID – int – OUTPUT parameter

This is an OUTPUT parameter and returns the ID for the trace that is created. This ID is needed for adding events and filters to the trace, as well as for querying the trace definition.

Return values: -1 indicates a failure and 0 indicates success
Note: The ID of the created trace will also be returned as a resultset for convenience.

AddEvent — Adds events and columns to the trace definition.


@TraceID – int

The ID of the trace, created by CreateTrace, identifying the trace to which to add the events and columns.

@EventList – varchar(1000)

Used to specify a comma separated list of events to capture. You can see a list of all valid events in the SQL Server Books Online page titled “sp_trace_setevent”. Alternatively, you will find a list of all the events and their descriptions, in this script.

@ColumnList – varchar(1000)

Used to specify a comma separated list of data columns to capture. You can see a list of all valid column names in the SQL Server Books Online page titled "sp_trace_setevent". Alternatively, you will find a list of all the data columns and their descriptions in this script.
Return values: -1 indicates a failure and 0 indicates success

AddFilter — Adds filters to an existing trace.


@TraceID – int

The ID of the trace, created by CreateTrace, identifying the trace to which to add the filter.


The next parameter names the column on which to apply the filter. You can only filter on a column after adding that column to the trace definition using AddEvent.


The next parameter specifies the value on which to filter.

@ComparisonOperator – varchar(8)

Specifies the type of comparison to be made. Defaults to '=' (Equals). Other valid comparison operators are: '<>' (Not Equal), '>' (Greater Than), '<' (Less Than), '>=' (Greater Than Or Equal), '<=' (Less Than Or Equal), 'LIKE' and 'NOT LIKE'.


The final parameter defaults to 'OR'; you can also specify 'AND'. It is useful for filtering a column for multiple values.

Return values: -1 indicates a failure and 0 indicates success

Note: Call this procedure once for each filter. If you want to filter a column for a range of values (similar to the BETWEEN operator), call this procedure once with the '>=' comparison operator and again with '<='.

StartTrace — Starts a specified trace.

@TraceID – int: The ID of the trace (created by CreateTrace) to be started.

Return values: -1 indicates a failure and 0 indicates success

StopTrace — Stops a specified trace.

@TraceID – int: The ID of the trace (created by CreateTrace) to be stopped.

Return values: -1 indicates a failure and 0 indicates success

ClearTrace — Clears the definition of the trace from memory.

@TraceID – int: The ID of the trace (created by CreateTrace) to be cleared from memory.

Return values: -1 indicates a failure and 0 indicates success

Now that we know what the stored procedures are called and how to use their parameters, let me show you how to use them to set up traces for specific requirements. Before we go any further, here is a quick tip: once you set up a specific trace using my wrapper stored procedures, you can save all those stored procedure calls in a file. That file will then serve as a template, just like the Profiler templates.

Scenario 1: Identifying long running stored procedures

In this scenario, we will trace all stored procedures that took more than 15 seconds to complete. The output trace file 'LongRunningProcs.trc' will be saved to 'C:\My SQL Traces\' (note that this is a location on the SQL Server machine, not the client machine):

DECLARE @TraceID int

EXEC CreateTrace
    'C:\My SQL Traces\LongRunningProcs',
    @OutputTraceID = @TraceID OUT

-- 'SP:Completed' is assumed here as the traced event
EXEC AddEvent
    @TraceID,
    'SP:Completed',
    'TextData, Duration'

-- Duration is measured in milliseconds
EXEC AddFilter
    @TraceID,
    'Duration',
    15000,
    '>='

EXEC StartTrace @TraceID

Once you are done, you can stop the trace by calling the following stored procedures. Important note: you can only view the trace file after successfully stopping the trace and clearing it from memory. (Let's assume that the ID of the trace created above was 1.)


EXEC StopTrace 1
EXEC ClearTrace 1
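Once stopped and cleared, the trace file can be queried like a table with fn_trace_gettable (the path below matches the Scenario 1 example):

```sql
-- Read the captured trace file back as a rowset for analysis;
-- default for the second argument reads all rollover files.
SELECT *
FROM ::fn_trace_gettable('C:\My SQL Traces\LongRunningProcs.trc', default)
```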

Scenario 2: Get a list of all the stored procedures called within a specific database:

In this scenario, I will show you how to get a list of all the stored procedures called from within a specific database. In this example, we will look for all stored procedure calls in the msdb database. We will also capture the start time, end time, application name, client host name, NT user name and the domain name.

DECLARE @TraceID int, @DB_ID int

EXEC CreateTrace
    'C:\My SQL Traces\ProceduresCalledInMSDB',
    @OutputTraceID = @TraceID OUT

-- 'SP:Completed' is assumed here as the traced event
EXEC AddEvent
    @TraceID,
    'SP:Completed',
    'TextData, StartTime, EndTime, ApplicationName, ClientHostName, NTUserName, NTDomainName, DatabaseID'

SET @DB_ID = DB_ID('msdb')

EXEC AddFilter
    @TraceID,
    'DatabaseID',
    @DB_ID

EXEC StartTrace @TraceID

Scenario 3: Tracing for specific errors:

Let us imagine a scenario where you deployed a brand new application and database, and the old database is no longer needed. You took the old database offline, but you want to make sure no user or application is still trying to access it. As you probably know, when somebody or some application tries to open an offline database, you get the following error: 942: Database 'OldDatabase' cannot be opened because it is offline. In the following example, we will set up a trace that looks for error 942 and captures the time of the request, the application name, the NT user name and the client machine name from which the request originated. We will also specify that if the trace file already exists, it will be overwritten.


DECLARE @TraceID int

-- the trace file path below is an assumed example
EXEC CreateTrace
    'C:\My SQL Traces\Error942',
    @OverwriteFile = 1,
    @OutputTraceID = @TraceID OUTPUT

-- the 'Exception' event is assumed here, as it carries the Error column
EXEC AddEvent
    @TraceID,
    'Exception',
    'Error, StartTime, ApplicationName, NTUserName, ClientHostName'

EXEC AddFilter
    @TraceID,
    'Error',
    942

EXEC StartTrace @TraceID

Scenario 4: Troubleshooting deadlocks:

In this scenario, I will show you how to setup a trace to identify the connections (SPIDs) involved in a deadlock, using the Deadlock and Deadlock Chain events.


DECLARE @TraceID int

EXEC dbo.CreateTrace
    'C:\My SQL Traces\Dead Locks',
    @OutputTraceID = @TraceID OUT

-- the column list below is an assumed example
EXEC dbo.AddEvent
    @TraceID,
    'Lock:Deadlock, Lock:Deadlock Chain, RPC:Starting, SQL:BatchStarting',
    'TextData, SPID, StartTime'

EXEC dbo.StartTrace @TraceID

Scenario 5: Identifying stored procedure recompilations:

Stored procedure recompiles can hinder the performance of your application, so it is important to identify procedures that recompile repeatedly and fix them if the recompilation is not beneficial. The following template creates a trace that logs the stored procedures that are recompiling, along with the database ID in which they run. It also captures EventSubClass; from SQL Server 2000 SP3 onwards, EventSubClass tells you the exact reason for the stored procedure recompilation. For more information, search the Microsoft Knowledge Base (KB) for article 308737.

DECLARE @TraceID int

EXEC dbo.CreateTrace
    'C:\My SQL Traces\Recompilations',
    @OutputTraceID = @TraceID OUT

-- 'SP:Recompile' is the recompilation event
EXEC dbo.AddEvent
    @TraceID,
    'SP:Recompile',
    'ObjectID, ObjectName, EventSubClass, DatabaseID'

EXEC dbo.StartTrace @TraceID

Scenario 6: Starting a Blackbox trace:

A black box trace stores a record of the last 5 MB of trace information produced by the server. This is very useful for troubleshooting nasty problems, bugs and access violation errors, that cause the SQL Server to shutdown. Consult SQL Server 2000 Books Online and Microsoft Knowledge Base for more information on Blackbox traces.

DECLARE @TraceID int

EXEC CreateTrace
    @Blackbox = 1,
    @OutputTraceID = @TraceID OUT

EXEC StartTrace @TraceID


The above scenarios will just get you started; you can use these stored procedures to set up complicated traces with various columns, events and different types of filters. I hope you find my work useful. For the sake of completeness, I'll mention that you can even schedule the above stored procedures as SQL Agent jobs, in order to start the traces at a desired date and time.

In the process of learning the SQL Trace system stored procedures, I did stumble upon a few bugs. For example, when you set a filter on ObjectID and then query the trace definition using the fn_trace_getfilterinfo function, the ObjectID reported will be incorrect when the ObjectID is greater than 255 (SELECT value FROM ::fn_trace_getfilterinfo(@TraceID)).

One thing I observed with Profiler is that, though the trace system stored procedures support the comparison operators > and <, Profiler only shows >= and <=.