Add new record to the database using Classic ASP

The code below adds a new record to a database.

The database is identified by a DSN named Student.

Option 1

The code below can be called by passing the parameters in the query string, for example: dbnewrecSQL.asp?name=Joey&id=9000&year=1964

<HTML>
<HEAD><TITLE>dbnewrecSQL.asp</TITLE></HEAD>
<BODY>
<%
on error resume next
auname=request.querystring("name")
auyear=request.querystring("year")
auid=clng(request.querystring("id"))
If auid<9000 then
auid=auid+9000
end if
Set Conn = Server.CreateObject("ADODB.Connection")
conn.open "DSN=Student;uid=student;pwd=magic"
SQLStmt = "INSERT INTO authors (AU_ID,author,year_born) "
SQLStmt = SQLStmt & "VALUES (" & auid
SQLStmt = SQLStmt & ",'" & auname & "'"
SQLStmt = SQLStmt & "," & int(auyear) & ")"
Set RS = Conn.Execute(SQLStmt)
set rs=nothing
If err.number>0 then
response.write "VBScript Errors Occurred:" & "<P>"
response.write "Error Number=" & err.number & "<P>"
response.write "Error Descr.=" & err.description & "<P>"
response.write "Help Context=" & err.helpcontext & "<P>"
response.write "Help File=" & err.helpfile & "<P>"
response.write "Source=" & err.source & "<P>"
else
response.write "No VBScript Errors Occurred" & "<P>"
end if
If conn.errors.count>0 then
response.write "Database Errors Occurred" & "<br>"
response.write "<b>" & SQLstmt & "</b><P>"
for counter=0 to conn.errors.count-1
response.write "Error #" & conn.errors(counter).number & "<P>"
response.write "Error desc. -> " & conn.errors(counter).description & "<P>"
next
else
response.write "No Database Errors Occurred!" & "<P>"
end if
Conn.Close
set conn=nothing
%>
</BODY>
</HTML>
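A note on the approach above: the INSERT statement is built by concatenating raw query-string values into SQL, so a name containing an apostrophe breaks the statement, and the page is open to SQL injection. The safer pattern is a parameterized query (in Classic ASP that would mean an ADODB.Command with Parameters). As a language-neutral sketch of the idea only, here is the same insert done with placeholders using Python's sqlite3; the in-memory database and table are stand-ins for the Student DSN, not part of the original example:

```python
import sqlite3

# In-memory database standing in for the Student DSN (illustrative only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (AU_ID INTEGER, author TEXT, year_born INTEGER)")

def add_author(conn, au_id, name, year):
    # Mirror the ASP logic: IDs below 9000 are shifted into the 9000+ range
    if au_id < 9000:
        au_id += 9000
    # Placeholders let the driver quote and escape each value safely
    conn.execute(
        "INSERT INTO authors (AU_ID, author, year_born) VALUES (?, ?, ?)",
        (au_id, name, int(year)),
    )

add_author(conn, 9000, "Joey", "1964")
add_author(conn, 5, "O'Brien", "1970")   # apostrophe is handled safely

rows = conn.execute(
    "SELECT AU_ID, author, year_born FROM authors ORDER BY AU_ID"
).fetchall()
print(rows)  # [(9000, 'Joey', 1964), (9005, "O'Brien", 1970)]
```

The driver quotes each value itself, so the O'Brien row that would have broken the concatenated version inserts cleanly.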

Option 2

This page demonstrates how to add a record to a database using ADO methods instead of a SQL INSERT statement. The script is:

<html><head>
<title>dbnewrec.asp</title></head>
<body bgcolor="#FFFFFF">
<% ' My ASP program that allows you to append a record %>
<form name="myauthor" action="dbnewADOrespond.asp" method="GET">
<p>Author ID: <input type="TEXT" name="id"></p>
<p> Author Name: <input type="TEXT" name="name"></p>
<p> Year Born: <input type="TEXT" name="year"></p>
<p> <input type="SUBMIT"> </p>
</form></body></html>

The form responder looks like this:

<HTML>
<HEAD><TITLE>dbnewADO.asp</TITLE></HEAD>
<body bgcolor="#FFFFFF">
<!--#INCLUDE VIRTUAL="/ADOVBS.INC" -->
<!--#INCLUDE VIRTUAL="/learn/test/lib_errors.asp" -->
<%
on error resume next
auname=request.querystring("name")
auyear=request.querystring("year")
auid=clng(request.querystring("id"))
If auid<9000 then
auid=auid+9000
end if
conn="DSN=Student;uid=student;pwd=magic"
Set RS = Server.CreateObject("ADODB.Recordset")
RS.Open "authors",Conn,adopenstatic,adlockoptimistic
RS.AddNew
'RS("AU_ID")=auid
RS("Author") = auname
RS("Year_Born")= int(auyear)
RS.Update
Call ErrorVBscriptReport("Adding Record")
Call ErrorADOReport("Adding Record",RS.activeconnection)
RS.Close
set rs=nothing
%>
</BODY>
</HTML>

Automating Server-Side Tracing in SQL Server

Experts from within and outside Microsoft have always said that running Profiler on a production environment is not a recommended practice, as it could degrade the performance of the server. Instead they suggest doing server side tracing via the SQL Trace system stored procedures. But DBAs often questioned this by asking: what if I profile the production server from a different PC? Well, I haven't heard a convincing answer to this whole Profiler vs. server side tracing debate yet, but I attended SQL PASS 2003 in Seattle (between the 11th and 14th of November, 2003), and more than once during the event, some of the big wigs from Microsoft PSS recommended server side tracing over Profiler. Well, they definitely know their stuff, so I decided to start using server side tracing instead of Profiler. Not to mention, I personally heard about incidents where DBAs brought down SQL Servers by running Profiler (especially in version 7.0).

Prior to attending PASS, I had never used the SQL Trace system stored procedures to set up a server side trace. So, I started off by reading SQL Server Books Online. There are a bunch of SQL Trace system stored procedures that you can use to set up a server side trace, and a few system functions to query and get information about currently running traces.

Here is a list of SQL Trace system stored procedures and functions:

SQL Trace system stored procedures

  • sp_trace_create – Creates a trace definition
  • sp_trace_generateevent – Creates a user-defined event
  • sp_trace_setevent – Adds an event or event column to a trace, or removes one from it
  • sp_trace_setfilter – Applies a filter to a trace
  • sp_trace_setstatus – Used to start, stop and close traces

SQL Trace system functions

  • fn_trace_geteventinfo – Returns information about the events traced by a specified trace
  • fn_trace_getfilterinfo – Returns information about the filters applied to a specified trace
  • fn_trace_getinfo – Returns information about a specified trace or existing traces
  • fn_trace_gettable – Returns trace file information in a table format, for querying purposes

If you look at the documentation for the above stored procedures in SQL Server Books Online, you will realize that they accept about four input parameters each, on average. If you have ever used Profiler, you know there are numerous events and data columns, and it is a bit cumbersome to specify all of them using these stored procedures, because every event has a number associated with it, and so does every data column. You cannot remember all those numbers and repeatedly call the above procedures with various EventIDs and ColumnIDs. No wonder many DBAs prefer the point-and-click Profiler. But hey, Microsoft is not really recommending Profiler on a production server, and I must trace production SQL Servers for various troubleshooting and tuning purposes while working on bugs.

So, I decided to write my own wrapper stored procedures that wrap the trace system stored procedures and make it easier to set up traces. In this article, I'll provide you with downloadable stored procedures that you can use to set up your own server side traces without much effort. These stored procedures make the job almost as easy as using Profiler.

For example, using my wrapper stored procedures, you could set up and start a trace to record all stored procedures that are called in a specified database (with database ID = 37), along with the following information: the actual stored procedure call, start time, end time, duration, application name and host name. As you will see below, this takes calls to four different friendly stored procedures. To set up the same trace directly using the trace system stored procedures, you would have to call at least 10 stored procedures: one for each event traced and one for each data column captured. A massive saving on the number of stored procedures called 🙂 and I am definitely finding these wrappers much easier and quicker to set up. The more complex the trace definition gets, the greater the saving in stored procedure calls.
DECLARE @TraceID int

EXEC CreateTrace
'C:\My SQL Traces\All Procedure Calls',
@OutputTraceID = @TraceID OUT

EXEC AddEvent
@TraceID,
'SP:Starting',
'TextData, StartTime, EndTime, ApplicationName, ClientHostName, DatabaseID'

EXEC AddFilter
@TraceID,
'DatabaseID',
37

EXEC StartTrace
@TraceID
GO

The massive saving in the number of stored procedure calls needed to set up the trace comes from the fact that the main wrapper stored procedure accepts a list of all events and data columns as a comma delimited string. So, however many events and data columns you want to capture in your trace, you only make one call to the wrapper stored procedure.

Better yet, all the above wrapper procedures handle errors gracefully, and return meaningful and helpful error messages instead of throwing some cryptic return code back.

An important note: only members of the sysadmin fixed server role can execute the SQL Trace system stored procedures. So, even if you manage to call my wrapper stored procedures, they will fail to invoke the SQL Trace system procedures if you are not a sysadmin.

Deployment Instructions:

Now let me introduce you to all the wrapper stored procedures I created:

Stored procedure name and parameter usage:

CreateTrace
Creates an empty trace definition, ready for use.

@OutputFile – nvarchar(245)

Specifies the trace file name and complete path. Do not provide a .trc extension to the file name, as SQL Trace automatically adds the .trc extension to the output file.

@OverwriteFile – bit

Specifies whether to overwrite the trace file, if it already exists. Default is 0, in which case if the file already exists, an error will be raised and trace will not be created. Specify 1 to overwrite an existing trace file.

@MaxSize – bigint

Specifies the maximum size in megabytes (MB) a trace file can grow up to. Default is 5 MB. The stored procedure code restricts the maximum trace file size to 512 MB (half a gigabyte (GB)) as a safety measure, but this can be overridden by setting the variable @MaxAllowedSize to a bigger value. You will find @MaxAllowedSize in the body of the stored procedure.

@Rollover – bit

Specifies that when the trace file reaches the maximum specified size, a new trace file will be created. Default is 1, meaning a new file will be created when the current trace file reaches the maximum size. If you specify 0, tracing will stop when the file reaches its size limit. The new file will get the same name, but will be postfixed with a number to indicate the sequence. For example, when the file MyTrace.trc reaches its maximum size, MyTrace_1.trc will be created.

@Shutdown – bit

Defaults to 0. If you specify 1, SQL Server will shut down, if the trace cannot be written to the file for whatever reason. Use this option with caution, and only when absolutely needed.

@Blackbox – bit

Defaults to 0. If you specify 1, a blackbox trace will be created. A black box trace stores a record of the last 5 MB of trace information produced by the server. When 1 is specified, all other parameters will be ignored. To learn more about how the black box trace works, consult SQL Server 2000 Books Online.

@StopAt – datetime

Defaults to NULL. When NULL, the trace will run until it is manually stopped or until the server shuts down. If you specify a valid date and time, the trace will stop automatically at that specified date and time.

@OutputTraceID – int – OUTPUT parameter

This is an OUTPUT parameter and returns the ID for the trace that is created. This ID is needed for adding events and filters to the trace, as well as for querying the trace definition.

Return values: -1 indicates a failure and 0 indicates success
Note: The ID of the created trace will also be returned as a resultset for convenience.

AddEvent
Adds events and columns to the trace definition.

@TraceID – int

The ID of the trace, created by CreateTrace; identifies the trace to which to add the events and columns.

@EventList – varchar(1000)

Used to specify a comma separated list of events to capture. You can see a list of all valid events in the SQL Server Books Online page titled “sp_trace_setevent”. Alternatively, you will find a list of all the events and their descriptions, in this script.

@ColumnList – varchar(1000)

Used to specify a comma separated list of data columns to capture. You can see a list of all valid column names in the SQL Server Books Online page titled “sp_trace_setevent”. Alternatively, you will find a list of all the data columns and their descriptions, in this
script.

Return values: -1 indicates a failure and 0 indicates success

AddFilter
Adds filters to existing trace definitions.

@TraceID – int

The ID of the trace, created by CreateTrace; identifies the trace to which to add the filter.

@ColumnName – varchar(50)

Name of the column on which to apply the filter. You can only filter on a column after adding that column to the trace definition, using AddEvent.

@Value – sql_variant

Specifies the value on which to filter.

@ComparisonOperator – varchar(8)

Specifies the type of comparison to be made. Defaults to '=', meaning 'Equals' comparison. Other valid comparison operators are: '<>' (Not Equal), '>' (Greater Than), '<' (Less Than), '>=' (Greater Than Or Equal), '<=' (Less Than Or Equal), 'LIKE' and 'NOT LIKE'.

@LogicalOperator – varchar(3)

Defaults to 'OR'. You could also specify 'AND'. Useful for filtering a column for multiple values.

Return values: -1 indicates a failure and 0 indicates success

Note: Call this procedure once for each filter. If you want to filter a column for a range of values (similar to the BETWEEN operator), call this procedure once with the '>=' comparison operator and again with the '<=' comparison operator.

StartTrace
Starts a specified trace.

@TraceID – int

The ID of the trace (created by CreateTrace) to be started.

Return values: -1 indicates a failure and 0 indicates success

StopTrace
Stops a specified trace.

@TraceID – int

The ID of the trace (created by CreateTrace) to be stopped.

Return values: -1 indicates a failure and 0 indicates success

ClearTrace
Clears the definition of the trace from memory.

@TraceID – int

The ID of the trace (created by CreateTrace) to be cleared from memory.

Return values: -1 indicates a failure and 0 indicates success

Now that we know what the stored procedures are called and how to use their parameters, let me show you how to use them to set up traces for specific requirements. Before we go any further, here is a quick tip: once you set up a specific trace using my wrapper stored procedures, you can save all those stored procedure calls in a file. This file will then serve as a template, just like the Profiler templates.

Scenario 1: Identifying long running stored procedures

In this scenario, we will trace all stored procedures that took more than 15 seconds to complete. The output trace file 'LongRunningProcs.trc' will be saved to 'C:\My SQL Traces' (note that this is the location on the SQL Server machine, not the client machine).
DECLARE @TraceID int

EXEC CreateTrace
'C:\My SQL Traces\LongRunningProcs',
@OutputTraceID = @TraceID OUT

EXEC AddEvent
@TraceID,
'SP:Completed',
'TextData, Duration'

EXEC AddFilter
@TraceID,
'Duration',
15000,
'>='

EXEC StartTrace @TraceID
GO

Once you are done, you can stop the trace by calling the following stored procedures. Important note: you can only view the trace file after successfully stopping the trace and clearing it from memory. (Let's assume that the ID of the trace created above was 1.)


EXEC StopTrace 1
EXEC ClearTrace 1

Scenario 2: Get a list of all the stored procedures called within a specific database:

In this scenario, I will show you how to get a list of all the stored procedures called from within a specific database. In this example, we will look for all stored procedure calls from the msdb database. We will also capture the start time, end time, application name, client host name, NT user name and the domain name.

DECLARE @TraceID int, @DB_ID int

EXEC CreateTrace
'C:\My SQL Traces\ProceduresCalledInMSDB',
@OutputTraceID = @TraceID OUT

EXEC AddEvent
@TraceID,
'SP:Completed',
'TextData, StartTime, EndTime, ApplicationName, ClientHostName, NTUserName, NTDomainName, DatabaseID'

SET @DB_ID = DB_ID('msdb')

EXEC AddFilter
@TraceID,
'DatabaseID',
@DB_ID

EXEC StartTrace @TraceID
GO

Scenario 3: Tracing for specific errors:

Let us imagine a scenario where you deployed a brand new application and database, so the old database is not needed anymore. You took the old database offline, but you want to make sure no user or application is still trying to access it. As you probably know, when somebody or some application tries to open an offline database, you get the following error: 942: Database 'OldDatabase' cannot be opened because it is offline. In the following example, we will set up a trace that looks for error 942 and captures the time of the request, application name, NT user name and the client machine name from which the request originated. We will also specify that if the trace file already exists, it will be overwritten.


DECLARE @TraceID int

EXEC CreateTrace
'C:\Trapping942s',
@OverwriteFile = 1,
@OutputTraceID = @TraceID OUTPUT

EXEC AddEvent
@TraceID,
'Exception',
'Error, StartTime, ApplicationName, NTUserName, ClientHostName'

EXEC AddFilter
@TraceID,
'Error',
942

EXEC StartTrace @TraceID

Scenario 4: Troubleshooting deadlocks:

In this scenario, I will show you how to setup a trace to identify the connections (SPIDs) involved in a deadlock, using the Deadlock and Deadlock Chain events.


DECLARE @TraceID int

EXEC dbo.CreateTrace
'C:\My SQL Traces\Dead Locks',
@OutputTraceID = @TraceID OUT

EXEC dbo.AddEvent
@TraceID,
'Lock:Deadlock, Lock:Deadlock Chain, RPC:Starting, SQL:BatchStarting',
'TextData'

EXEC dbo.StartTrace
@TraceID

Scenario 5: Identifying stored procedure recompilations:

Stored procedure recompiles can hinder the performance of your application, so it is important to identify procedures that are recompiling repeatedly and fix them if the recompilation is not beneficial. The following template creates a trace that logs the stored procedures that are recompiling, along with the ID of the database in which they are running. It also captures EventSubClass. From SQL Server 2000 SP3 onwards, EventSubClass tells you the exact reason for the stored procedure recompilation. For more information, search the Microsoft Knowledge Base (KB) for article 308737.

DECLARE @TraceID int

EXEC dbo.CreateTrace
'C:\My SQL Traces\Recompilations',
@OutputTraceID = @TraceID OUT

EXEC dbo.AddEvent
@TraceID,
'SP:Recompile',
'ObjectID, ObjectName, EventSubClass, DatabaseID'

EXEC dbo.StartTrace
@TraceID

Scenario 6: Starting a Blackbox trace:

A black box trace stores a record of the last 5 MB of trace information produced by the server. This is very useful for troubleshooting nasty problems, bugs and access violation errors that cause SQL Server to shut down. Consult SQL Server 2000 Books Online and the Microsoft Knowledge Base for more information on blackbox traces.

DECLARE @TraceID int

EXEC CreateTrace
@Blackbox = 1,
@OutputTraceID = @TraceID OUT

EXEC StartTrace
@TraceID

Conclusion:

The above scenarios will just get you started, but you can use these stored procedures to set up complicated traces with various columns, events and different types of filters. I hope you find my work useful. For the sake of completeness, I will mention that you can even schedule the above stored procedures as SQL Agent jobs, in order to start the traces at a desired date and time.

In the process of learning the SQL Trace system stored procedures, I did stumble upon a few bugs. For example, when you set a filter on ObjectID and then query the trace definition using the fn_trace_getfilterinfo function, the ObjectID reported will be incorrect when the ObjectID is greater than 255 (SELECT value FROM ::fn_trace_getfilterinfo(@TraceID)).

One thing I observed is that, though the trace system stored procedures support the comparison operators > and <, Profiler only shows >= and <=.


Glossary Of Computer Security Terms

Access
A specific type of interaction between a subject and an object that results in the flow of information from one to the other. (Source: GCST)
Access Control
The process of limiting access to the resources of a system only to authorized programs, processes, or other systems (in a network). Synonymous with controlled access and limited access. (Source: GCST)
Accreditation
A formal declaration by the designated approving authority (DAA) that the automated information system (AIS) is approved to operate in a particular security mode using a prescribed set of safeguards. Accreditation is the official management authorization for operation of an AIS and is based on the certification process as well as other management considerations. The accreditation
statement affixes security responsibility with the DAA and shows that due care has been taken for security. (Source: GCST)
Assurance
A measure of confidence that the security features and architecture of an AIS accurately mediate and enforce the security policy. Compare with trusted computer system. (Source: GCST)
Audit Trail
A chronological record of system activities that is sufficient to enable the reconstruction, reviewing, and examination of the sequence of environments and activities surrounding or leading to an operation, a procedure, or an event in a transaction from its inception to final results. (Source: GCST)
Authenticate
1.To verify the identity of a user, device, or other entity in a computer system, often as a prerequisite to allowing access to resources in a system.
2.To verify the integrity of data that have been stored, transmitted, or otherwise exposed to possible unauthorized modification.
(Source: GCST)
Authorization
The granting of access rights to a user, program, or process. (Source: GCST)
Automated Information System
An assembly of computer hardware, software, and/or firmware configured to collect, create, communicate, compute, disseminate, process, store, and/or control data or information. (Source: GCST)
Availability
The state when data is in the place needed by [or accessible to] the user, at the time the user needs them, and in the form needed by the user. (Source: GCST)
Certification
The comprehensive evaluation of the technical and nontechnical security features of an AIS and other safeguards, made in support of the accreditation process, that establishes the extent to which a  particular design and implementation meet a specified set of security requirements. (Source: GCST)
Compartmented Mode of Operation
An AIS is operating in the compartmented mode when each user with direct or indirect individual access to the AIS, its peripherals, remote terminals, or remote hosts, has all of the following:
A valid personnel clearance for the most restricted information on the system.
Formal access approval for, and has signed nondisclosure agreements for, that information to which the user is to have access.
A valid need-to-know for that information to which the user is to have access.
(Source: GCST)
Covert Channel
A communications channel that allows two cooperating processes
to transfer information in a manner that violates the system’s security policy. Synonymous with confinement channel. (Source: GCST)
Covert Storage Channel
A covert channel that involves the direct or indirect writing of a storage location by one process and the direct or indirect reading of the storage location by another process. Covert storage channels typically involve a finite resource (e.g., sectors on a disk) that is shared by two subjects at different security levels. (Source: GCST)
Covert Timing Channel
A covert channel in which one process signals information to another by modulating its own use of system resources (e.g., CPU time) in such a way that this manipulation affects the real response time observed by the second process. (Source: GCST)
Dedicated Mode of Operation
An AIS is operating in the dedicated mode when each user with
direct or indirect individual access to the AIS, its peripherals, remote terminals, or remote hosts, has all of the following:
A valid personnel clearance for all information on the system.
Formal access approval for, and has signed nondisclosure agreements for, all the information stored and/or processed (including all compartments, subcompartments, and/or special access programs).
A valid need-to-know for all information contained within the system.
(Source: GCST)
Denial of Service
Any action or series of actions that prevent any part of a system from functioning in accordance with its intended purpose. This includes any action that causes unauthorized destruction, modification, or delay of service. Synonymous with interdiction. (Source: GCST)
Designated Approving Authority (DAA)
The official who has the authority to decide on accepting the security safeguards prescribed for an AIS, or that official who may be responsible for issuing an accreditation statement that records the decision to accept those safeguards. (Source: GCST)
Discretionary Access Control (DAC)
A means of restricting access to objects based on the identity and need-to-know of the user, process, and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject. Compare mandatory access control. (Source: GCST)
Evaluation
An assessment of a product against the Trusted Computer System Evaluation Criteria (The Orange Book).
Information Warfare
Information warfare is the activity by a hacker, terrorist, or other adversary to disrupt an information system. Traditional security addresses the protection of information. Information warfare is aimed at protecting the systems that collect, store, manipulate, and transport information so that they are not accessed by unauthorized persons and are available as needed. (Source: Defense Information Infrastructure Master Plan)
Mandatory Access Control (MAC)
A means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity. Compare discretionary access control. (Source: GCST)
Multilevel Mode of Operation
An AIS is operating in the multilevel mode when all of the following statements are satisfied concerning the users with direct or indirect access to the AIS, its peripherals, remote terminals, or remote hosts:
Some do not have a valid personnel clearance for all of the information processed in the system.
All have the proper clearance and have the appropriate formal access approval for that information to which they are to have access.
All have a valid need-to-know for that information to which they are to have access.
(Source: GCST)
Multilevel Security (MLS)
An MLS system is a system containing information with different security classifications that simultaneously permits access by users with different security clearances and needs to know. This system prevents users from obtaining access to information for which they lack authorization. (Source: DOD Directive 5200.28)
Risk
The probability that a particular threat will exploit a particular vulnerability of the system. (Source: GCST)
Risk Analysis
The process of identifying security risks, determining their magnitude, and identifying areas needing safeguards. Risk analysis is a part of risk management. Synonymous with risk assessment. (Source: GCST)
Risk Management
The total process of identifying, controlling, and eliminating or minimizing uncertain events that may affect system resources. It includes risk analysis, cost/benefit analysis, selection, implementation and test, security evaluation of safeguards, and overall security review. (Source: GCST)
Sensitive Compartmented Information
Information restricted to people who have been given formal access to the security program, called a compartment.
Security Policy
The set of laws, rules, and practices that regulate how an organization manages, protects, and distributes sensitive information. (Source: GCST)
System-High Mode of Operation
An AIS is operating in the system-high mode when each user with direct or indirect access to the AIS, its peripherals, remote terminals, or remote hosts, has all of the following:
A valid personnel clearance for all information on the system.
Formal access approval for, and has signed nondisclosure agreements for, all the information stored and/or processed (including all compartments, subcompartments, and/or special access programs).
A valid need-to-know for some of the information contained within the system.
(Source: GCST)
Trusted Computer System
A system that employs sufficient hardware and software assurance measures to allow its use for simultaneous processing of a range of sensitive or classified information. (Source: GCST)
————————————————————————

Note: “GCST” means the Glossary of Computer Security Terms, NCSC-TG-004, 21 Oct 88 (the “Olive” Book).

SQL – Stored Procedures and Functions

Creating a stored procedure

A stored procedure is a saved collection of Transact-SQL statements that can take and return user-supplied parameters.
Stored procedures can be found in SQL Server under the database tree view > Programmability > Stored Procedures.
You can create one using Query Analyzer, or by right-clicking the "Stored Procedures" folder in SQL Server and selecting "New Stored Procedure".

Here is a sample

CREATE PROCEDURE tblProducts_GetAllForType
	-- Add the parameters for the stored procedure here
	@Type int
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    -- Insert statements for procedure here
	SELECT * FROM tblProducts WHERE [Type]=@Type
END
GO

This can be called for example with the following statement:

exec dbo.tblProducts_GetAllForType 1

Creating a function

CREATE FUNCTION creates a user-defined function in SQL Server.
A user-defined function is a Transact-SQL routine that accepts parameters, performs an action, such as a complex calculation, and returns the result of that action as a value. The return value can either be a scalar (single) value or a table. Use this statement to create a reusable routine that can be used in these ways:

  • In Transact-SQL statements such as SELECT
  • In applications calling the function
  • In the definition of another user-defined function
  • To parameterize a view or improve the functionality of an indexed view
  • To define a column in a table
  • To define a CHECK constraint on a column
  • To replace a stored procedure

Example:

CREATE FUNCTION Add_two_Numbers
(
	-- Add the parameters for the function here
	@number1 int, @number2 int
)
RETURNS int
AS
BEGIN
	-- Declare the return variable here
	DECLARE @result int

	-- Add the T-SQL statements to compute the return value here
	SET @result = @number1 + @number2

	-- Return the result of the function
	RETURN @result

END
GO


This function can be executed like this:

SELECT dbo.Add_two_Numbers(1,5) AS Total

This will display 6.
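The same scalar-function idea exists outside T-SQL as well. As an illustrative aside (not part of the original tutorial), here is an equivalent of Add_two_Numbers registered with Python's built-in sqlite3 module, which lets a Python callable be invoked inside a SELECT much like the T-SQL function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Register a Python callable as a scalar SQL function:
# arguments are (name, number of arguments, callable)
conn.create_function("Add_two_Numbers", 2, lambda a, b: a + b)

# Call it inside a SELECT, just like the T-SQL version
total = conn.execute("SELECT Add_two_Numbers(1, 5) AS Total").fetchone()[0]
print(total)  # 6
```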

SQL Fundamentals

Database Tables

A database most often contains one or more tables. Each table is identified by a name (e.g. “Customers” or “Orders”). Tables contain records (rows) with data.

Below is an example of a table called “Persons”:

P_Id  First_Name  Last_Name  Address           City
1     Jill        McGuire    Oakley Hay 9      Corby
2     Bob         Builder    The Main Road 23  Corby
3     Peter       Pan        Neverland         Ohio

The table above contains three records (one for each person) and five columns (P_Id, First_Name, Last_Name, Address, and City).


SQL Statements

Most of the actions you need to perform on a database are done with SQL statements.

The following SQL statement will select all the records in the “Persons” table:

SELECT * FROM Persons

In this tutorial we will teach you all about the different SQL statements.


Keep in Mind That…

  • SQL is not case sensitive

Semicolon after SQL Statements?

Some database systems require a semicolon at the end of each SQL statement.

Semicolon is the standard way to separate each SQL statement in database systems that allow more than one SQL statement to be executed in the same call to the server.

We are using MS Access and SQL Server 2000 and we do not have to put a semicolon after each SQL statement, but some database programs force you to use it.


SQL DML and DDL

SQL can be divided into two parts: The Data Manipulation Language (DML) and the Data Definition Language (DDL).

The query and update commands form the DML part of SQL:

  • SELECT – extracts data from a database
  • UPDATE – updates data in a database
  • DELETE – deletes data from a database
  • INSERT INTO – inserts new data into a database

The DDL part of SQL permits database tables to be created or deleted. It also defines indexes (keys), specifies links between tables, and imposes constraints between tables.
The most important DDL statements in SQL are:

  • CREATE DATABASE – creates a new database
  • ALTER DATABASE – modifies a database
  • CREATE TABLE – creates a new table
  • ALTER TABLE – modifies a table
  • DROP TABLE – deletes a table
  • CREATE INDEX – creates an index (search key)
  • DROP INDEX – deletes an index

SELECT

If we want to select the content of the columns named “First_Name” and “Last_Name” from the table above, we would use the following SELECT statement:

SELECT First_Name,Last_Name FROM Persons

If we want to select all the columns from the “Persons” table, we would use the following SELECT statement:

SELECT * FROM Persons

SELECT DISTINCT Statement

In a table, some of the columns may contain duplicate values. This is not a problem, however, sometimes you will want to list only the different (distinct) values in a table.

The DISTINCT keyword can be used to return only distinct (different) values.

SELECT DISTINCT column_name(s)
 FROM table_name

For example, if we would like to see the distinct values from the column named “City” from the table above, we would use the following SELECT statement:

SELECT DISTINCT City FROM Persons

The result-set will look like this:

City
Corby
Ohio
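As a runnable sketch (SQLite via Python's sqlite3 module, with the sample data rebuilt locally), SELECT DISTINCT collapses the duplicate Corby rows:

```python
import sqlite3

# Rebuild the sample data, then ask for the distinct cities only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (P_Id, First_Name, Last_Name, Address, City)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?, ?, ?)", [
    (1, 'Jill', 'McGuire', 'Oakley Hay 9', 'Corby'),
    (2, 'Bob', 'Builder', 'The Main Road 23', 'Corby'),
    (3, 'Peter', 'Pan', 'Neverland', 'Ohio')])

cities = [row[0] for row in conn.execute("SELECT DISTINCT City FROM Persons")]
print(sorted(cities))  # ['Corby', 'Ohio'] -- the duplicate Corby is collapsed
```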

The WHERE Clause

The WHERE clause is used to extract only those records that fulfill a specified criterion.

For example, the statement

SELECT * FROM Persons
 WHERE City='Corby'

will return a result-set looking like this:

P_Id | First_Name | Last_Name | Address          | City
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
2    | Bob        | Builder   | The Main Road 23 | Corby

Numeric values do not need to be enclosed in quotes, but string values do. Use single quotes (or, depending on settings, double quotes).

Whether double quotes are treated as string delimiters depends on an extra SQL Server setting:

When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers.

Operators Allowed in the WHERE Clause

With the WHERE clause, the following operators can be used:

Operator | Description
=        | Equal
<>       | Not equal
>        | Greater than
<        | Less than
>=       | Greater than or equal
<=       | Less than or equal
BETWEEN  | Between an inclusive range
LIKE     | Search for a pattern
IN       | Match any value in a specified list of values

If you would like to combine multiple conditions, you can use the logical operators AND and OR. You can use them separately or together, but remember to enclose grouped conditions in parentheses.

SELECT * FROM Persons WHERE
 First_Name='Bob'
 AND (Last_Name='Builder' OR Last_Name='McGuire')

The result-set will look like this:

P_Id | First_Name | Last_Name | Address          | City
2    | Bob        | Builder   | The Main Road 23 | Corby
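The same AND/OR combination can be sketched in SQLite (sample data rebuilt locally); only Bob Builder satisfies both conditions:

```python
import sqlite3

# Rebuild the sample data, then combine AND with a parenthesized OR.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (P_Id, First_Name, Last_Name, Address, City)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?, ?, ?)", [
    (1, 'Jill', 'McGuire', 'Oakley Hay 9', 'Corby'),
    (2, 'Bob', 'Builder', 'The Main Road 23', 'Corby'),
    (3, 'Peter', 'Pan', 'Neverland', 'Ohio')])

matches = conn.execute(
    "SELECT * FROM Persons WHERE First_Name='Bob' "
    "AND (Last_Name='Builder' OR Last_Name='McGuire')").fetchall()
print(matches)  # only row 2 (Bob Builder) satisfies both conditions
```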

The ORDER BY Keyword

The ORDER BY keyword is used to sort the result-set by a specified column.

The ORDER BY keyword sorts the records in ascending order (ASC) by default.

If you want to sort the records in descending order, you can use the DESC keyword.

SELECT column_name(s)
FROM table_name
ORDER BY column_name(s) ASC|DESC

For example, if we want to select all the persons from the table above, sorted by their last name, we use the following SELECT statement:

SELECT * FROM Persons
ORDER BY Last_Name

The result-set will look like this:

P_Id | First_Name | Last_Name | Address          | City
2    | Bob        | Builder   | The Main Road 23 | Corby
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
3    | Peter      | Pan       | Neverland        | Ohio
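Sorting behaviour can be sketched in SQLite (sample data rebuilt locally), ordering by last name in both directions:

```python
import sqlite3

# Rebuild the sample data, then sort by Last_Name ascending and descending.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (P_Id, First_Name, Last_Name, Address, City)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?, ?, ?)", [
    (1, 'Jill', 'McGuire', 'Oakley Hay 9', 'Corby'),
    (2, 'Bob', 'Builder', 'The Main Road 23', 'Corby'),
    (3, 'Peter', 'Pan', 'Neverland', 'Ohio')])

asc = [r[2] for r in conn.execute("SELECT * FROM Persons ORDER BY Last_Name")]
desc = [r[2] for r in conn.execute("SELECT * FROM Persons ORDER BY Last_Name DESC")]
print(asc)   # ['Builder', 'McGuire', 'Pan'] -- ascending is the default
print(desc)  # ['Pan', 'McGuire', 'Builder'] -- DESC reverses the order
```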

INSERT STATEMENT

The INSERT INTO statement is used to insert new records in a table.

It is possible to write the INSERT INTO statement in two forms.

The first form doesn’t specify the column names where the data will be inserted, only their values:

INSERT INTO table_name
VALUES (value1, value2, value3, ...)

The second form specifies both the column names and the values to be inserted:

INSERT INTO table_name (column1, column2, column3,...)
VALUES (value1, value2, value3,...)

We have the following “Persons” table:

P_Id | First_Name | Last_Name | Address          | City
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
2    | Bob        | Builder   | The Main Road 23 | Corby
3    | Peter      | Pan       | Neverland        | Ohio

Now we want to insert a new row in the “Persons” table.

We use the following SQL statement:

INSERT INTO Persons
VALUES (4,'John', 'Doe', 'Bat Street 2', 'Desborough')

The “Persons” table will now look like this:

P_Id | First_Name | Last_Name | Address          | City
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
2    | Bob        | Builder   | The Main Road 23 | Corby
3    | Peter      | Pan       | Neverland        | Ohio
4    | John       | Doe       | Bat Street 2     | Desborough

Insert Data Only in Specified Columns

It is also possible to only add data in specific columns.

The following SQL statement will add a new row, but only add data in the “P_Id”, “First_Name” and the “Last_Name” columns:

INSERT INTO Persons (P_Id, First_Name, Last_Name)
VALUES (5, 'Jacob', 'Powell')

The “Persons” table will now look like this:

P_Id | First_Name | Last_Name | Address          | City
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
2    | Bob        | Builder   | The Main Road 23 | Corby
3    | Peter      | Pan       | Neverland        | Ohio
4    | John       | Doe       | Bat Street 2     | Desborough
5    | Jacob      | Powell    |                  |
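Both INSERT forms can be sketched in SQLite (sample data rebuilt locally); note that columns left out of the second form are stored as NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (P_Id, First_Name, Last_Name, Address, City)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?, ?, ?)", [
    (1, 'Jill', 'McGuire', 'Oakley Hay 9', 'Corby'),
    (2, 'Bob', 'Builder', 'The Main Road 23', 'Corby'),
    (3, 'Peter', 'Pan', 'Neverland', 'Ohio')])

# First form: values only, one per column, in table order.
conn.execute("INSERT INTO Persons "
             "VALUES (4, 'John', 'Doe', 'Bat Street 2', 'Desborough')")
# Second form: named columns; Address and City are not supplied.
conn.execute("INSERT INTO Persons (P_Id, First_Name, Last_Name) "
             "VALUES (5, 'Jacob', 'Powell')")

row5 = conn.execute("SELECT * FROM Persons WHERE P_Id = 5").fetchone()
print(row5)  # (5, 'Jacob', 'Powell', None, None) -- unlisted columns are NULL
```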

The UPDATE Statement

The UPDATE statement is used to update existing records in a table.

UPDATE table_name
SET column1=value1, column2=value2, ...
WHERE some_column=some_value

Note: Notice the WHERE clause in the UPDATE syntax. The WHERE clause
specifies which record or records that should be updated. If you omit the WHERE clause, all records will be updated!

Now we want to update the person "Jacob Powell" in the "Persons" table.

We use the following SQL statement:

UPDATE Persons
SET Address='Bat Street 67', City='Corby'
WHERE First_Name='Jacob' AND Last_Name='Powell'

The “Persons” table will now look like this:

P_Id | First_Name | Last_Name | Address          | City
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
2    | Bob        | Builder   | The Main Road 23 | Corby
3    | Peter      | Pan       | Neverland        | Ohio
4    | John       | Doe       | Bat Street 2     | Desborough
5    | Jacob      | Powell    | Bat Street 67    | Corby

SQL UPDATE Warning

Be careful when updating records. If we had omitted the WHERE clause in the example above, like this:

UPDATE Persons
SET Address='The bad turn', City='Corby'

The “Persons” table would have looked like this:

P_Id | First_Name | Last_Name | Address      | City
1    | Jill       | McGuire   | The bad turn | Corby
2    | Bob        | Builder   | The bad turn | Corby
3    | Peter      | Pan       | The bad turn | Corby
4    | John       | Doe       | The bad turn | Corby
5    | Jacob      | Powell    | The bad turn | Corby
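The difference between a targeted UPDATE and one missing its WHERE clause can be sketched in SQLite (sample data rebuilt locally); the cursor's rowcount shows how many rows each statement touched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (P_Id, First_Name, Last_Name, Address, City)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?, ?, ?)", [
    (1, 'Jill', 'McGuire', 'Oakley Hay 9', 'Corby'),
    (2, 'Bob', 'Builder', 'The Main Road 23', 'Corby'),
    (3, 'Peter', 'Pan', 'Neverland', 'Ohio'),
    (5, 'Jacob', 'Powell', None, None)])

# With a WHERE clause, only the matching record is updated.
targeted = conn.execute(
    "UPDATE Persons SET Address='Bat Street 67', City='Corby' "
    "WHERE First_Name='Jacob' AND Last_Name='Powell'").rowcount
print(targeted)  # 1

# Without a WHERE clause, every row in the table is updated.
blanket = conn.execute("UPDATE Persons SET Address='The bad turn'").rowcount
print(blanket)  # 4 -- all four rows changed
```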

The DELETE Statement

The DELETE statement is used to delete rows in a table.

SQL DELETE Syntax

DELETE FROM table_name
WHERE some_column=some_value

Note: Notice the WHERE clause in the DELETE syntax. The WHERE clause specifies which record or records that should be deleted.
If you omit the WHERE clause, all records will be deleted!


SQL DELETE Example

The “Persons” table:

P_Id | First_Name | Last_Name | Address          | City
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
2    | Bob        | Builder   | The Main Road 23 | Corby
3    | Peter      | Pan       | Neverland        | Ohio
4    | John       | Doe       | Bat Street 2     | Desborough
5    | Jacob      | Powell    | Bat Street 67    | Corby

Now we want to delete the person “Jacob Powell” in the “Persons” table.

We use the following SQL statement:

DELETE FROM Persons
WHERE First_Name='Jacob' AND Last_Name='Powell'

The “Persons” table will now look like this:

P_Id | First_Name | Last_Name | Address          | City
1    | Jill       | McGuire   | Oakley Hay 9     | Corby
2    | Bob        | Builder   | The Main Road 23 | Corby
3    | Peter      | Pan       | Neverland        | Ohio
4    | John       | Doe       | Bat Street 2     | Desborough
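A targeted DELETE can be sketched in SQLite (sample data rebuilt locally, including the Jacob Powell row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (P_Id, First_Name, Last_Name, Address, City)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?, ?, ?)", [
    (1, 'Jill', 'McGuire', 'Oakley Hay 9', 'Corby'),
    (2, 'Bob', 'Builder', 'The Main Road 23', 'Corby'),
    (3, 'Peter', 'Pan', 'Neverland', 'Ohio'),
    (5, 'Jacob', 'Powell', 'Bat Street 67', 'Corby')])

# The WHERE clause restricts the delete to the matching record only.
conn.execute("DELETE FROM Persons "
             "WHERE First_Name='Jacob' AND Last_Name='Powell'")
remaining = conn.execute("SELECT COUNT(*) FROM Persons").fetchone()[0]
print(remaining)  # 3 -- one of the four rows was removed
```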

Delete All Rows

It is possible to delete all rows in a table without deleting the table. This means that the table structure, attributes, and indexes will be intact:

DELETE FROM table_name
or
DELETE * FROM table_name (MS Access syntax)
or
TRUNCATE TABLE table_name

Note: Be very careful when deleting records. You cannot undo this statement!
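The effect of removing all rows while keeping the table can be sketched in SQLite (which supports DELETE FROM without a WHERE clause, though not TRUNCATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Persons (P_Id, First_Name, Last_Name, Address, City)")
conn.executemany("INSERT INTO Persons VALUES (?, ?, ?, ?, ?)", [
    (1, 'Jill', 'McGuire', 'Oakley Hay 9', 'Corby'),
    (2, 'Bob', 'Builder', 'The Main Road 23', 'Corby')])

conn.execute("DELETE FROM Persons")  # no WHERE clause: every row is removed
left = conn.execute("SELECT COUNT(*) FROM Persons").fetchone()[0]
print(left)  # 0

# The table structure survives, so new rows can still be inserted.
conn.execute("INSERT INTO Persons VALUES (9, 'New', 'Row', '', '')")
after = conn.execute("SELECT COUNT(*) FROM Persons").fetchone()[0]
print(after)  # 1
```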

Database Fundamentals in SQL Server 2008 p2

This lesson will provide you with an overview of the rich set of features and benefits that SQL Server can bring to your organization. You will gain an understanding of the components that are provided by SQL Server 2008. Finally, this lesson will summarize the typical roles and responsibilities of IT professional careers around SQL Server technologies. SQL Server is an example of a technology that uses a client/server architecture.

Introducing SQL Server 2008 in the Enterprise

What Is a Client/Server Architecture?


The client/server architecture consists of client, server, and network components.

The client is the requester of services while the server is the provider of database services. You can design an application in client/server architecture and the users can access the designed application from client computers.

To help the users access the application, you need to install the client application on the client computer. You also need to install the server-side application on the server that has server software, such as Microsoft SQL Server 2000 or SQL Server 2008.

The user can access SQL Server 2008 by using Named Pipes or Multiprotocol Net-Libraries on a shared Windows network. SQL Server 2008 grants access to the server based on Windows user or group security identifiers (SIDs).

In Microsoft Access there is only file sharing whereas in client/server architecture there is sharing of processes. When the client sends a request for a service or data to the server, the server responds to the request.

You can categorize client/server applications as multi-tier architecture based on how the business logic is distributed and whether the maximum processing takes place on the client or the server.

You can find several benefits in using client/server architecture. Firstly, you can ensure data integrity because data is stored in a central location.

In addition, you can define business and security rules on the server and enforce these rules equally among all users.

Besides this, you can optimize network traffic because an RDBMS performs effective querying and returns only the data that an application requires.

Therefore, the application workload is shared between the client and the server, thus providing a better performance.

Moreover, client/server architecture allows better scalability of the processes.

When scaling, you can scale vertically or horizontally; that is, you can either increase the resources of a single server (scaling up) or add more servers to share the workload (scaling out).
 

What Is Microsoft SQL Server 2008?


Basically, SQL Server 2008 is an RDBMS. However, it is an enterprise-class data platform with integrated Business Intelligence (BI) tools that stores, manages, manipulates, and presents data. You can provide secure multiuser access to your data by using the client/server architecture of SQL Server 2008. Even with many concurrent users, the solution remains highly available and scalable.

SQL Server 2008 is designed to meet the exploding data needs of many organizations. SQL Server provides a solution that organizations can use to store and manage many types of data, including XML, e-mail, time/calendar, file, document, and geospatial data. It also provides a rich set of services to search, query, analyze, report, and integrate data.

You can use the components of SQL Server 2008 such as Microsoft SQL Server 2008 Analysis Services (SSAS), Microsoft SQL Server 2008 Reporting Services (SSRS), Microsoft SQL Server 2008 Integration Services (SSIS), Replication Services, and Notification Services to provide BI and enterprise data solutions. You can also take advantage of the new features available to SQL Server 2008 to add greater value to your organization’s data needs.

 

SQL Server Components


SQL Server 2008 also consists of a series of components that allow IT professionals to manage all aspects of database administration, development, and business intelligence development. The following list provides an overview of each component's purpose within SQL Server.

Relational Database Engine

Relational Database Engine provides a highly secure environment to store, retrieve, and modify data in relational, Extensible Markup Language (XML) and unstructured formats. Besides data management, the database engine enables you to set up security and manage the databases.

SQL Server Component Description
SQL Server Agent SQL Server Agent is a separate service that is used to automate repetitive tasks. Besides setting up job schedules, notifications and alerts can also be set up to alert a database administrator about problems or the successful completion of a job. Furthermore, you can set up alerts to monitor the general status of SQL Server and its databases. The SQL Server database engine must be running for the SQL Server Agent to run. There are also tools to monitor the job history of previously completed jobs.
Transact-SQL (T-SQL) Transact-Structured Query Language (T-SQL) is the primary programming language that is used against SQL Server. Database developers primarily use it to query and modify data and objects within the database such as tables, views, and stored procedures. Database administrators will typically use the graphical user interface to manage the day-to-day operation of SQL Server. T-SQL provides useful code that not only allows you to retrieve information about objects that have been created by a database developer, but also to set up administrative tasks, such as backups, programmatically.

Understanding T-SQL will benefit any SQL Server professional. This is because the T-SQL programming language is more flexible than the equivalent options that are presented in the GUI.

SQL Server Integration Services (SSIS) SSIS is the service that is responsible for the movement and integration of data from single or multiple data sources. It can be used for simple data transfers from one system to another. It can also deal with complex data movement that can be used to populate entire database systems such as a data warehouse. SSIS is referred to as an Extract, Transform and Load (ETL) tool. The popularity of SSIS is due to its ability to transform or change data during the data movement, providing consistency to the data to meet business requirements.
SQL Server Analysis Services (SSAS) SSAS is the service that allows the creation of multidimensional structures that contain data aggregated from other data sources. This multidimensional structure is referred to as a cube. Each intersection of axes within the cube is represented by the aggregation of data at that point. This ultimately delivers Online Analytical Processing (OLAP) from a central point, rather than having to establish this aggregated data from numerous data sources, which hinders performance. Data warehouses are typically used as the source for an OLAP database. Furthermore, you can take advantage of the information already in the cube through SSAS data mining. Data mining allows SSAS to identify patterns in the data, which may not be so obvious, by using built-in data mining algorithms. Data mining can also use relational sources to pick out trends, as well as using cubes as a source.
SQL Server Reporting Services (SSRS) SSRS is the reporting tool that can query a wide range of data sources, such as relational and OLAP, and present the information in a more digestible manner. It consists of a developer tool known as Report Designer to give the professional report writer the ability to create compelling, visual reports. It also has an end-user tool known as Report Builder that enables business users to create their own ad hoc reports. Visualization comes in the form of charts and gauges, and data can also be presented in a standard table format. SSRS also provides the feature to automatically deliver reports in a wide range of formats.
Full Text Search Now integrated into the database engine in SQL Server 2008, Full text search enables fast and flexible indexing for keyword-based queries of free text data stored in a SQL Server database.
Service Broker Service Broker is a queuing mechanism that can be used for reliable, transactional, and message-based communication between software services. These services can be a part of applications within a single database or distributed applications.

The Service Broker service is part of the database engine and is useful when informational messages from an application may not be immediately available to a backend database. This type of communication is known as asynchronous communication. The messages are stored in a queue until the receiving service becomes available. Service Broker was introduced in SQL Server 2005; prior to this, it was difficult to develop asynchronous applications. With Service Broker, Microsoft provides the infrastructure to manage asynchronous communication, while also reducing the development time for such systems.

Replication Replication allows a copy of the data to be made available on another SQL Server so that the data is brought closer to the user. It also ensures that the data is synchronized.

Although replication can extract a copy of data from a source and load it into a destination, it cannot transform the data.

Replication can be controlled to replicate changes immediately or on a scheduled basis. It also has the ability to merge data from multiple SQL Servers and reconcile conflicting data.

 

Database Fundamentals in SQL Server 2008


This course is aimed toward those students who are new to using Microsoft SQL Server.
This course is also useful to those students who are studying for exams:

  • 70-432: Microsoft® SQL Server® 2008, Database Administration

  • 70-433: Microsoft® SQL Server® 2008, Database Development

Lesson Introduction

A database is a repository of data that can be stored, manipulated, and retrieved. There are many database models. SQL Server specializes in the implementation of a relational database model.

The purpose of this lesson is to explore the fundamental components of a relational database system. Tables and relationships form the basis of SQL Server. The organization of these tables is known as normalization, which makes working with SQL Server efficient, regardless of whether you are writing queries or creating a database from the outset.

Finally, you will be introduced to additional objects that are implemented within a SQL Server database to support data.

Types of Databases

A relational database is a collection of tables that are related together by a common field through a relationship. The purpose of a relational database is to break down data into business entities such as customers, orders, and employees and reflect each entity as a table. This helps reduce data redundancy and improves performance.

Within a database, two models of database can be created.

Online Transactional Processing (OLTP)

  • The tables are organized in a way that the database is optimized for systems that perform data modifications such as inserts, updates, and deletes.
  • Typically deals with detailed information.
  • Users can still read data from these systems. However, read performance may be affected because the structure of these tables is designed to reduce the amount of redundant and repetitive data. OLTP is commonly implemented in many relational database systems.
  • Data is up to date and is immediately available after the transaction that modified the data is complete.
  • Each transaction typically deals with small amounts of data.

Online Analytical Processing (OLAP)

  • Data is stored in multidimensional views of aggregated and detailed data. These can be stored as tables in a SQL Server database, although other storage models also exist.
  • Typically deals with summary information.
  • The tables here are optimized for reading data. Although it is possible to modify data, write performance against OLAP may be impaired.
  • OLAP databases are managed by SQL Server Analysis Services and are becoming popular as the need to analyze data becomes more of a business imperative.
  • OLAP represents historical data over time.
  • Each transaction typically deals with large amounts of data.

What Are Tables?

Tables are objects that are used to store data. A table consists of columns and rows.

Columns have a data type, such as an integer or time data type, associated with them to control the type of data that can be entered.

Rows are used to hold specific information about a business entity such as a customer or an order.
Typically, each row includes a unique identifier, such as a CustomerID or DepartmentID column, to assign a unique attribute to a record.

A primary key constraint is typically associated with the ID column to ensure that the entity data is unique.

What Are Relationships?

Relationships are used to join tables together through common fields between two tables.

Primary keys can be used to enforce uniqueness of records within a single table. You can also use the primary key to enforce data integrity between two tables. This is known as referential integrity.
In a typical two-table relationship, one of the tables is joined to the other by a primary key column. The table that has the primary key is known as the primary table, and all values in the primary key column are unique. The secondary table, whose column is joined to the primary table's key, typically has repeating values. The column that holds the repeating values is referred to as a foreign key column. This creates a relationship known as one-to-many, which is the most common relationship type used.

By default, the impact of this relationship on joined columns is:

 

  • A value cannot be placed in the secondary table unless it exists in the primary table.
  • A record cannot be removed from the primary table if it has related records existing in the secondary table.
  • A record cannot be updated from the primary table if it has related records existing in the secondary table.

This ensures that data integrity is retained and that accidental deletions and updates cannot occur. This default behavior can be modified. Additional relationships that can occur in a relational database management system include the following.

  • One-to-One. There is one record in the secondary table for each record in the primary table.
  • Many-to-Many. There are many records in the secondary table for many records in the primary table.
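The referential-integrity rules above can be sketched in SQLite via Python's sqlite3 module (Department and Employee are hypothetical tables invented for this example; note that SQLite enforces foreign keys only after the PRAGMA below):

```python
import sqlite3

# Department is the primary table, Employee the secondary table with a
# foreign key column; both names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE Department "
             "(DepartmentID INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("CREATE TABLE Employee (EmployeeID INTEGER PRIMARY KEY, "
             "DepartmentID INTEGER REFERENCES Department(DepartmentID))")
conn.execute("INSERT INTO Department VALUES (1, 'Sales')")
conn.execute("INSERT INTO Employee VALUES (10, 1)")  # parent value exists: OK

# A value cannot be placed in the secondary table unless it exists in
# the primary table.
rejected = False
try:
    conn.execute("INSERT INTO Employee VALUES (11, 99)")  # no Department 99
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True -- the orphan row is refused

# A record cannot be removed from the primary table while related
# records exist in the secondary table.
delete_refused = False
try:
    conn.execute("DELETE FROM Department WHERE DepartmentID = 1")
except sqlite3.IntegrityError:
    delete_refused = True
print(delete_refused)  # True -- the parent row is protected
```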

What Is Normalization?

Normalization is the process of organizing data and tables within the SQL Server database. This is implemented by creating tables and setting relationships between the tables to remove repetitive and redundant data.
Redundant data occupies unnecessary space in the database. Moreover, a change in information in one area requires a change in all areas where the same data exists because data is repeated in several locations. This can be particularly problematic if the data changes frequently.
There are many degrees of normalization. For use in commercial databases, it is typical to implement only the first three levels of normalization. These levels set rules for how data should be broken down.

These are referred to as:

 

  • First Normal Form (1st NF)
  • Second Normal Form (2nd NF)
  • Third Normal Form (3rd NF)

Normalization is a useful guide in breaking down data into a relational format. However, there can be slight variations when it is implemented. Ultimately, remember the purpose of normalization.

  • To prevent waste of space
  • To make data consistent
  • To ease long-term maintenance
  • To ease data updates and modifications

Additional SQL Server Objects


Tables are the fundamental storage object of data within a SQL Server 2008 database. There are also additional objects that can be created in SQL Server, which are used to support, manage, and protect the data that is stored within the tables. The following is an overview of each object and the purpose that it serves within SQL Server.

SQL Server Object Description
Constraints Constraints are objects that are associated with single or multiple columns within a table. Constraints ensure that data that is inserted or updated meets the requirements that the constraint places on the data.

The type of constraint objects that can be associated to a table are:

Primary key

Unique constraints

Foreign key

Default constraint

Check constraint

XML schemas

Indexes Indexes are objects within SQL Server that provide pointers to the data. They act as a bridge between queries and the data. Indexes can speed up data retrieval from a database if the appropriate indexes are created.

Without indexes, SQL Server scans an entire table to find data that will be required in a query.

Moreover, like an index in a book, an index in a database requires disk space.

Stored Procedures Stored procedures are SQL Server objects that encapsulate T-SQL logic within a named object. Stored procedures can hold T-SQL statements for repetitive or complex T-SQL queries and tasks without the need to rewrite the T-SQL statement.

The main advantage of stored procedures is that they are precompiled.

This means that the first run of a stored procedure has to work out the best way to retrieve data for a query or a task before it is executed.

Subsequent execution of the same stored procedure will take advantage of the previous execution plan that was previously worked out.

This can substantially improve the performance of queries and modifications that are conducted through stored procedures.

Stored procedure objects can also be secured, so access to the data can be controlled.

Views Views are objects in SQL Server that are analogous to virtual tables. Views consist of rows and columns like a table, and they can be secured to control access to the data, like stored procedures. Views can pull data from any number of tables into a single virtual table. This provides the following benefits:

Focuses the data for the user.

Simplifies the management of permissions.

Hides the complexity of the underlying tables.

Provides shared application logic for .NET applications.

In certain situations, views can be used to improve performance.

Functions Functions are objects within SQL Server that can be used to manipulate or change data. SQL Server provides a wide range of system-supplied functions that can be used against data in the database, such as:

CONVERT – To convert data from one data type to another

GETDATE() – To retrieve today’s date

However, SQL Server provides database professionals with the ability to create user-defined functions if the SQL Server system functions do not provide the required functionality.

Data Manipulation Language (DML) Triggers Like stored procedures, triggers are SQL Server objects that encapsulate T-SQL logic within a named object. However, triggers are event-driven. A trigger is an object that is associated with a table or a view and, depending on the type of trigger defined, will fire only when one of the following T-SQL statements is issued:

Insert

Update

Delete