SQL Procedures and Functions that any DBA should know

Have you ever urgently needed a SQL script and did not want to waste any time looking for it on Google? We have put together a list of the most frequently used SQL queries and commands for managing your database's performance.

For example purposes, let’s assume that you have a table called Company with a primary key on CompanyID (identity column, int).

The rest of the columns in the table will be introduced as they become relevant.

Primary Keys

In Table Creation


CREATE TABLE Company
(
    CompanyId int NOT NULL,
    CompanyName varchar(255) NOT NULL,
    Address varchar(255),
    City varchar(255),
    CONSTRAINT pk_Company PRIMARY KEY (CompanyId)
)

After Table Creation


ALTER TABLE Company ADD PRIMARY KEY (CompanyId)

or:

ALTER TABLE Company ADD CONSTRAINT pk_Company PRIMARY KEY (CompanyId)

Drop Key

ALTER TABLE Company
DROP CONSTRAINT pk_Company


 

Indexes

Create Index

CREATE INDEX IX_Company_City ON Company (City)

Create Unique Index

CREATE UNIQUE INDEX IX_Company_City ON Company (City)

The most elemental tools at your disposal for working with indexes are the statements that create them and two system stored procedures, sp_helpindex and sp_help @tablename.

As an example, I used the Object Browser in Query Analyzer to script the Company table as a CREATE statement, and then invoked sp_helpindex to get the details of the indexes on that table.
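
Against the example Company table used throughout this article, those two calls look like this:

EXEC sp_helpindex 'Company' -- lists the indexes defined on the table
EXEC sp_help 'Company' -- full report on the table, including its columns, constraints and indexes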

Output

The index named PK_Company was created as the result of a constraint creating a primary key.

PK_Company is a clustered index, meaning that the table is in the order specified by the index key,
in this case, the identity column CompanyID.
The index has only one column, and since it was created by a primary key constraint, it is also unique and not null.
Because I didn't specify the direction of the index, it was created in the default ascending order.
Since the clustered index was created on the Primary file group, the table and its index were created there. Clustered indexes are always created on the same file group as the data, because the leaf level or lowest level of a clustered index is the data itself. (In fact, you can move a table to a different file group by creating a clustered index for the table on a different file group. Both structures will move. Additionally, clustered indexes are effective for range searches because of their proximity to the data and because they sort the table in index order.)

The Unintended Consequences of Clustered Indexes


Be sure to analyze the retrieval requests for your system and design your indexes to accommodate most of those requests. You’ll have to accept tradeoffs between the most frequently executed queries and the less frequently executed but more demanding queries.

The optimizer can choose to use an existing index based on its statistics. However, if the statistics indicate that reading the whole table would be cheaper, the optimizer will perform a table scan. The database can maintain index statistics with the Auto_Create_Statistics option set, or you can update them manually. Depending on how busy your server is, you may elect to perform updates at less busy times. As a side benefit, you can also keep statistics on non-indexed columns, which could help query execution.
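
As a minimal sketch of those options (DatabaseName is a placeholder for your own database, and the statistic name st_Company_Address is purely illustrative):

-- Let the server create missing statistics automatically
ALTER DATABASE DatabaseName SET AUTO_CREATE_STATISTICS ON

-- Refresh the statistics on the example table manually, at a quieter time
UPDATE STATISTICS Company

-- Keep statistics on a non-indexed column as well
CREATE STATISTICS st_Company_Address ON Company (Address)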

Clustered and non-clustered indexes are interrelated by virtue of the clustering index key. You can promote efficiency in index maintenance by specifying clustered indexes first, because the non-clustered indexes use the clustered index's pointers. If you specify a non-clustered index before a clustered index and then drop the clustered one, you are forcing the server to do double the work. When you specify a clustered index, you create the clustering key that succeeding secondary or non-clustered indexes use. If the clustered index comes after the non-clustered indexes, the server will drop and recreate the non-clustered indexes so it can inform them about where to find pages with the clustering key. To avoid this extra work, use the Drop Existing clause of the Create Index statement.

Another consequence of clustered indexes is that you can influence the physical location of the data by using the ON FILEGROUP clause. If you designate a different file group for the index, you can move the table to the named file group in the clustered index order. Because the clustered index in effect is the data, one less intermediate leaf level lies between the data and the index. You'll have to invoke the Alter Database command to create the new file group, then relocate your table to the new file group with the Create Index statement's On Filegroup option.

If you take this route after the fact (i.e., your table already exists and you have already created the new file group), you can use the Create Index command with the DROP_EXISTING clause to prevent the double drop and rebuild of the non-clustered indexes.
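
As a sketch, assuming a file group named FG2 has already been added to the database with ALTER DATABASE, and using the PK_Company clustered index created by the primary key constraint earlier:

-- Rebuild the existing clustered index on the new file group;
-- DROP_EXISTING avoids rebuilding the non-clustered indexes twice
CREATE UNIQUE CLUSTERED INDEX PK_Company
ON Company (CompanyId)
WITH DROP_EXISTING
ON FG2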


DBCC DROPCLEANBUFFERS

DBCC DROPCLEANBUFFERS


Use this command to remove all the data from SQL Server's data cache (buffer) between performance tests to ensure fair testing. Keep in mind that this command only removes clean buffers, not dirty buffers. Because of this, you may want to run the CHECKPOINT command first. Running CHECKPOINT will write all dirty buffers to disk. Then, when you run DBCC DROPCLEANBUFFERS, you can be assured that all data buffers are cleaned out, not just the clean ones.
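
In practice, the sequence before each test run is simply:

-- Flush dirty pages to disk first, then clear the clean pages from the data cache
CHECKPOINT
DBCC DROPCLEANBUFFERS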

DBCC ERRORLOG

DBCC ERRORLOG


If you rarely restart the mssqlserver service, you may find that your server log gets very large and takes a long time to load and view. You can truncate the current server log (essentially starting a new one) by running DBCC ERRORLOG. You might want to consider scheduling a regular job that runs this command once a week to automatically truncate the server log. As a rule, I do this for all of my SQL Servers on a weekly basis. You can also accomplish the same thing with the stored procedure sp_cycle_errorlog.
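
The stored procedure version is the easiest one to schedule as a weekly SQL Server Agent job step:

-- Close the current error log and start a new one (same effect as DBCC ERRORLOG)
EXEC sp_cycle_errorlog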

Procedure Caching

DECLARE @intDBID INTEGER

SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'database_name')

DBCC FLUSHPROCINDB (@intDBID)


DBCC FLUSHPROCINDB: used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server.
The database ID number to be affected must be passed as part of the command.

You may want to use this command before testing to ensure that previous stored procedure plans won’t negatively affect testing results.
If you want to clear the cache for all databases, use: DBCC FREEPROCCACHE
If you are unsure how the procedure cache is being used, run this command: DBCC PROCCACHE

DBCC INDEXDEFRAG

DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)


In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical index fragmentation. When this command runs, it reduces fragmentation without locking tables, allowing users to access the table while the defragmentation process is running. Unfortunately, this command doesn't do a great job of logical defragmentation.

The only way to truly reduce logical fragmentation is to rebuild your table’s indexes. While this will remove all fragmentation, unfortunately it will lock the table, preventing users from accessing it during this process. This means that you will need to find a time when this will not present a problem to your users.

Of course, if you are unable to find a time to reindex your indexes, then running DBCC INDEXDEFRAG is better than doing nothing.
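
As an illustrative example against the table and index created earlier (DatabaseName is a placeholder for your own database):

-- Defragment one index while leaving the table available to users
DBCC INDEXDEFRAG (DatabaseName, Company, IX_Company_City)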

DBCC DBREINDEX

DBCC DBREINDEX('table_name', fillfactor)

Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database. This will rebuild the indexes so that the data is no longer fragmented. Fragmented data can cause SQL Server to perform unnecessary data reads, slowing down SQL Server’s performance.
If you perform a reorganization on a table with a clustered index, any non-clustered indexes on that same table will automatically be rebuilt.

Database reorganizations can be done by scheduling SQLMAINT.EXE to run using the SQL Server Agent, or by running your own custom script via the SQL Server Agent (see below).


--Script to automatically reindex all tables in a database
USE DatabaseName --Enter the name of the database you want to reindex
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'BASE TABLE'

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Reindexing ' + @TableName
DBCC DBREINDEX(@TableName, ' ', 90)
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor
DEALLOCATE TableCursor

The script will automatically reindex every index in every table of any database you select, and provide a fill factor of 90%. You can substitute any number you want for the 90 in the above script.

When DBCC DBREINDEX is used to rebuild indexes, keep in mind that as the indexes on a table are being rebuilt, the table becomes unavailable for use by your users. For example, when a non-clustered index is rebuilt, a shared table lock is put on the table, preventing all but SELECT operations from being performed on it. When a clustered index is rebuilt, an exclusive table lock is put on the table, preventing any table access by your users. Because of this, you should only run this command when users don't need access to the tables being reorganized.

DBCC SHOWCONTIG

DBCC SHOWCONTIG (Table_id, IndexID)


Used to show how fragmented data and indexes are in a specified table. If the pages storing data or index information become fragmented, it takes more disk I/O to find and move the data into the SQL Server buffer cache, hurting performance. This command tells you how fragmented these pages are. If you find that fragmentation is a problem, you can reindex the tables to eliminate it. Note: this is fragmentation of data pages within the SQL Server MDF file, not of the physical file itself.

Since this command requires you to know the ID of both the table and the index being analyzed, you may want to run the following script so you don't have to look up the table ID and the index ID manually.

--Script to identify table fragmentation
--Declare variables
DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)

--Set the table and index to be examined
SELECT @IndexName = 'index_name' --enter name of index
SET @ID = OBJECT_ID('table_name') --enter name of table

--Get the index values
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName

--Display the fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)

While the DBCC SHOWCONTIG command provides several measurements, the key one is Scan Density. This figure should be as close to 100% as possible. If the scan density is less than 75%, then you may want to reindex the tables in your database.

DBCC SHOW_STATISTICS

DBCC SHOW_STATISTICS (table_name, index_name)

Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on.
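
For example, against the table and index created earlier:

-- Show the statistics, including density, for a single index
DBCC SHOW_STATISTICS ('Company', IX_Company_City)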

DBCC UPDATEUSAGE

DBCC UPDATEUSAGE ('databasename')

The official use for this command is to report and correct inaccuracies in the sysindexes table, which may result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, so you will want to run it during off hours because it will negatively affect SQL Server's performance while running. When you run this command, you must specify the name of the database that you want affected.
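
A sketch of a periodic run ('DatabaseName' is a placeholder), using the optional COUNT_ROWS clause, which also corrects row counts at some extra cost:

-- Report and correct page and row count inaccuracies for every object in the database
DBCC UPDATEUSAGE ('DatabaseName') WITH COUNT_ROWS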
