Find missing indexes for SQL Server 2012

The following example generates a listing of missing indexes that could be beneficial for T-SQL batches that have been submitted to the query optimizer since the last time the SQL Server service was restarted:

T-SQL

SELECT igs.user_seeks * igs.avg_total_user_cost * (igs.avg_user_impact * 0.01) AS [Index_Useful] ,
       igs.last_user_seek ,
       id.statement AS [Statement] ,
       id.equality_columns ,
       id.inequality_columns ,
       id.included_columns ,
       igs.unique_compiles ,
       igs.user_seeks ,
       igs.avg_total_user_cost ,
       igs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS igs
INNER JOIN sys.dm_db_missing_index_groups AS ig ON igs.group_handle = ig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS id ON ig.index_handle = id.index_handle
ORDER BY [Index_Useful] DESC;

Query Execution Results
Missing Index Details - Full listing showing the desirability status, columns to be used and user seeks

As you can see, the most beneficial missing index in the database above is one on the status table.
We can create it using the following command:

CREATE INDEX IX_ERPStatus ON tblERPStatus (SystemID, ERPDocumentNumber)
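The DMV listing separates equality, inequality, and included columns, and the usual convention is to put the equality columns first in the index key. If the listing also reports included columns, the index can be extended with an INCLUDE clause; a sketch along those lines (the included column names here are hypothetical) would be:

```sql
-- Equality columns first, then inequality columns, in the index key;
-- columns from included_columns go in the INCLUDE clause.
-- StatusDate and StatusCode are hypothetical names for illustration.
CREATE NONCLUSTERED INDEX IX_ERPStatus
ON dbo.tblERPStatus (SystemID, ERPDocumentNumber)
INCLUDE (StatusDate, StatusCode);
```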


How to insert data into an Excel File using VB.NET


Declaring the connection

If you are using Excel 2007 or above, create the file where you would like to save your data and add the following to your web.config:

<appSettings>
<add key="ExcelLink" value="Provider=Microsoft.ACE.OLEDB.12.0;Data Source=d:\File.xlsx;Extended Properties='Excel 12.0;ReadOnly=False;HDR=YES;'" />
</appSettings>

Read from an Excel file

First, we initialize the connection to the Excel File:

'connection details
Dim _connectionString As String = System.Configuration.ConfigurationManager.AppSettings("ExcelLink")
'get last value of the id in excel file
Dim factory As DbProviderFactory = DbProviderFactories.GetFactory("System.Data.OleDb")
Dim strErrorsFound As New StringBuilder
Dim blnErrorFound As Boolean = False
Using connection As DbConnection = factory.CreateConnection()
connection.ConnectionString = _connectionString

Then, we create the command that will select information from the sheet:

Using command As DbCommand = connection.CreateCommand()
'Get the ID of the latest entry
command.CommandText = "SELECT TOP 1 ID FROM [Feedback$] ORDER BY ID DESC"

We will put our code in a Try-Catch block to prevent any errors from bleeding through to the client side.

Try
connection.Open()

Using dr As DbDataReader = command.ExecuteReader()
If dr.HasRows Then

While dr.Read()
'store the last ID value
lastID = If(dr("ID").ToString() <> "", Int32.Parse(dr("ID").ToString()), 0)
End While
End If
End Using

'clean up - close the first connection to the Excel file
connection.Close()
connection.Dispose()

Catch ex As Exception
PanelServerError.Visible = True
Exit Sub
End Try

Don't forget to close the Using blocks:

End Using
End Using

Get data from SQL Server and save it into Excel

Dim myConnection As New SqlConnection(ConfigurationManager.ConnectionStrings("CN").ConnectionString)

Dim myCommand As New SqlCommand("[Your SP]", myConnection)
myCommand.CommandType = CommandType.StoredProcedure

Try

myConnection.Open()

Dim myReader As SqlDataReader = myCommand.ExecuteReader()

While myReader.Read()

IDNum = myReader.Item("ID")
contactName = myReader.Item("ContactName")
contactCompany = myReader.Item("ContactCompany")
contactPosition = myReader.Item("ContactPosition")
contactEmail = myReader.Item("ContactEmail")
project = myReader.Item("Project")

'insert into excel file
Dim factory2 As DbProviderFactory = DbProviderFactories.GetFactory("System.Data.OleDb")

Dim connection2 As DbConnection = factory2.CreateConnection()
connection2.ConnectionString = _connectionString

Dim command2 As DbCommand = connection2.CreateCommand()

'Build SQL string
Dim SQLQuery As New StringBuilder()
SQLQuery.Append("INSERT INTO [Feedback$] ")
SQLQuery.Append("(ID, Name, Company, Position_, Email, Project) ")

SQLQuery.Append("VALUES ('" & IDNum & "', '" & contactName & "', '" & contactCompany & "', '" & contactPosition & "', '" & contactEmail & "', '" & project & "')")

Try
connection2.Open()

'insert data into excel spreadsheet
command2.CommandText = SQLQuery.ToString()
command2.ExecuteNonQuery()

Catch ex As SystemException
blnErrorFound = True
strErrorsFound.AppendLine("Line / ID " & IDNum.ToString())
strErrorsFound.AppendLine(ex.Message)
strErrorsFound.AppendLine("Query: " + SQLQuery.ToString())
strErrorsFound.AppendLine("******************************************************************************")
Finally
connection2.Close()
connection2.Dispose()
End Try

'****************excel insert code *******************

End While

myReader.Close()

Catch ex As SystemException
blnErrorFound = True
strErrorsFound.AppendLine("Generic error: " + ex.Message)
strErrorsFound.AppendLine("******************************************************************************")

Finally

'clean up
myConnection.Close()
myConnection.Dispose()
End Try

Errors

Operation must use an updateable query.

Resolution

Granting write access on the target file to the IIS user (IUSR or the IIS application pool identity) solved this issue for me.

The field is too small to accept the amount of data you attempted to add. Try inserting or pasting less data.

Resolution

While importing from an Excel spreadsheet you may see the following error message: "[Microsoft][ODBC Excel Driver] The field is too small to accept the amount of data you attempted to add. Try inserting or pasting less data". This is a problem with the Excel ODBC driver which can often be worked around by reordering the records in the spreadsheet.
Fundamentally, Excel is NOT a database, and the columns in a spreadsheet do not have a data type associated with them in the same way that a database does. The Excel ODBC driver has to make an assumption about the data type for a given column. It does this by reading ahead 8 records and looking at the data found, then makes a decision about the data type based upon what it has read. Problems occur when the assumption about the data type is proven wrong by subsequent records. Consider two scenarios:
A column in an Excel spreadsheet contains string data. The first 8 records contain short strings (let's say 20-30 characters). The Excel ODBC driver reads this data and assumes that a short string data type will be appropriate for this column. If a subsequent record contains a longer string (let's say 300 characters), the chosen data type may prove unable to store it, and the error above is raised by the Excel ODBC driver. Moving the record with the long string to the beginning of the dataset allows the driver to select a more appropriate data type for the column, which will then apply to all records in the spreadsheet.
A column in a spreadsheet has numeric strings for the first 8 records, for example "123", "456" and so on. Excel decides that this column holds numeric data. A subsequent record contains a string which is not numeric, for example "Hello World". The assumption made by the Excel ODBC driver proves incorrect, and the above error message is raised. The problem can be worked around by reordering the records: if the record containing "Hello World" is placed within the first 8 records, the Excel ODBC driver will determine that the column contains string data, and a string data type will be selected which is appropriate for all data records.
In all cases the technique is to arrange the order of the records so that the Excel ODBC driver is able to make the correct selection of data type.
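Reordering records is not always practical. As an alternative (worth verifying against your driver version), the ACE/Jet Excel driver supports an IMEX=1 extended property that tells it to treat columns with intermixed data as text; the number of rows sampled is controlled by the driver's TypeGuessRows registry value. Note that IMEX=1 is intended for importing (reading) data and may prevent inserts. A connection string using it might look like this (the ExcelImport key name is illustrative):

```xml
<appSettings>
  <!-- IMEX=1 asks the driver to read columns with mixed types as text (read/import scenarios) -->
  <add key="ExcelImport" value="Provider=Microsoft.ACE.OLEDB.12.0;Data Source=d:\File.xlsx;Extended Properties='Excel 12.0;HDR=YES;IMEX=1;'" />
</appSettings>
```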

Borrowing – Student Loans Responsibly

As you may know, student loans are today’s largest form of student aid.

Research has found that they make up about 54 percent of the total aid awarded every year.
However, with the rise of student loans, more cases of student loan default occur.
Student loan debt is one of the biggest problems facing student borrowers today.
It rises every year, and college expenses as well as graduate school costs have gone up faster than inflation.
This situation often surfaces when you take out one loan, then another student loan, followed by yet another.
It is often said that the more student loan offers you take, the bigger your loan debt grows.

Helping your consumers to gain more profits

Personalised Marketing

Consumers are individuals or households that consume goods and services generated within the economy. Since this includes just about everyone, the term is as much a political term as an economic one when used in everyday speech.
Typically, when business people and economists talk of consumers, they are talking about the person as consumer: an aggregated commodity item with little individuality other than that expressed in the buy/don't-buy decision.
However, there is a trend in marketing to individualize the concept. Instead of generating broad demographic and psychographic profiles of market segments, marketers are engaging in personalized marketing, permission marketing, and mass customization.
A consumer is assumed to have a budget which can be spent on a range of goods and services available on the market. Under the assumption of rationality, the budget allocation is chosen according to the preference of the consumer, i.e. to maximize his or her utility function.
In ‘time series’ models of consumer behavior, the consumer may also invest a proportion of their budget in order to gain a greater budget in future periods. This investment choice may include either fixed rate interest or risk-bearing securities.
In the context of mental health, consumer is also a term applied to describe a person living with mental illness. Taken literally, the term does not really describe them: they are people seeking help with their mental health. You can help them get their lives back on track and achieve your own goals at the same time.
Problems may also arise when dealing with your possible prospects.
Many patients are frustrated because, despite their best intentions, they seem unable to adhere to an exercise or “diet” program. Actually, there are several proven strategies that can be used to help us be successful in our efforts to improve an aspect of our lifestyle.
First, make a plan and make sure it fits your lifestyle. For example, if you're planning on beginning an exercise program, think ahead about how you're going to carve out the time in your already busy life. In this respect, anything you can do to make the exercise program a regular, daily part of your routine will be helpful.
It’s often a good idea to ask those around you to help you in your endeavors. Anything you can do to help assure the support of your family, friends and co-workers will increase the odds of success.
Remember that most of the changes you make, whether changing what you prefer to eat, or changing your schedule to include exercise, will affect those around you. One of the best examples of this is the difficulty that smokers have trying to quit when those around them are smoking.
If you strictly adhere to a “proven” program that has succeeded for others, however, and fail to produce the results you want, you may become discouraged or filled with self-doubt. There are many reasons to explain why self-help techniques fail, and many steps to take to feel good about yourself regardless of the result. Like every other field, the “experts” in self-help disagree on just about everything.
Always remember this: no matter how smart or “successful” someone is, how much “proof” you’re given, how much you trust or respect someone, or how logical something seems, it’s just an opinion, just what worked for someone else, just a possible pathway to success.

How to upload all files from a folder on Microsoft Azure Storage

If you are working with Microsoft Cloud, you must have had the need to move a few files (in bulk) from a folder onto the Azure Storage System.
To get started, you will need a Microsoft Azure account; create a storage account and get the connection string for it.
If you are unsure about how to proceed, check out this great introductory article: How to use Blob Storage from .NET
This is a demo connection string that you will need to add to your web.config file:
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=srmd1983;AccountKey=JyUOu3/iv+0UMjzI/PtoHd2JKhKx4SOSSxJcsvVp95isAZH6hKpPs/AQDOPxgVXjTNGWCYCSssgiwVVun0rlWXFgJ6A==" />

In your code, import the needed Microsoft.WindowsAzure.Storage namespace. To add this DLL, install the Azure SDK package for your version of Visual Studio and add the DLL into references.

Imports Microsoft.WindowsAzure.Storage
Imports Microsoft.WindowsAzure.Storage.Auth
Imports Microsoft.WindowsAzure.Storage.Blob

Then you will need to create a method that will do all the dirty work. The files will be uploaded in a subdirectory for each folder (in this case, the subdirectory is generated by the ClientID value).

Imports Microsoft.WindowsAzure.Storage
Imports Microsoft.WindowsAzure.Storage.Auth
Imports Microsoft.WindowsAzure.Storage.Blob
Imports System.Configuration
Imports System.IO

Public Class Azure

    Shared Function UploadAllFilesToBlob(ClientID As String) As String
        Dim err As String = ""
        Try
            'Parse the connection string stored in web.config
            Dim storageAccount As CloudStorageAccount = CloudStorageAccount.Parse( _
                ConfigurationManager.AppSettings("StorageConnectionString"))
            Dim blobClient As CloudBlobClient = storageAccount.CreateCloudBlobClient()

            'Retrieve a reference to the container and create it if it doesn't already exist
            Dim container As CloudBlobContainer = blobClient.GetContainerReference("uploads")
            container.CreateIfNotExists()

            'Each client's files go into a subdirectory named after the ClientID
            Dim dir As CloudBlobDirectory = container.GetDirectoryReference(ClientID)

            Dim path As String = ConfigurationManager.AppSettings("SavePath")
            Dim di As New DirectoryInfo(path)
            For Each fi As FileInfo In di.GetFiles()
                'Reference a blob with the same name as the file; an existing blob is overwritten
                Dim blockBlob As CloudBlockBlob = dir.GetBlockBlobReference(fi.Name)
                Using fileStream As FileStream = File.OpenRead(fi.FullName)
                    blockBlob.UploadFromStream(fileStream)
                End Using
                'Remove the local copy once it has been uploaded
                fi.Delete()
            Next
        Catch ex As Exception
            err = "Error: " & ex.Message & "<br />" & ex.StackTrace
        End Try
        Return err
    End Function

End Class

An alternative to the COUNT(*) function.

Sometimes rapid code development doesn’t always produce the most efficient code.
Take the age-old line of code:

SELECT COUNT(*) FROM MyTable;

Obviously this will give you the row count for a table, but at what cost?

Counting every row this way will ultimately result in a table scan or a clustered index scan.


Turning STATISTICS IO on reveals 4534 logical reads just to return the row count of 1,133,242.

Table 'tblERPInvoiceData'. Scan count 1, logical reads 4534, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Starting with SQL Server 2005, Microsoft introduced a DMV, sys.dm_db_partition_stats, that provides you with the same information at a fraction of the cost. It requires a little more coding, but once you turn on STATISTICS IO, you will see the performance benefit.

USE [db_name];
SELECT
s.name AS 'SchemaName'
,o.name AS 'TableName'
,SUM(p.row_count) AS 'RowCount'
FROM sys.dm_db_partition_stats p
JOIN sys.objects o ON o.object_id = p.object_id
JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE p.index_id < 2 AND o.type = 'U'
AND s.name = 'dbo'
AND o.name = 'tblERPInvoiceData'
GROUP BY s.name,o.name
ORDER BY s.name,o.name;
GO

Since we’re querying a DMV, we never touch the base table. We can see here we only need 16 logical reads to return the same row count of 1,133,242, and the tblERPInvoiceData table is nowhere in our execution plan.

Table 'sysidxstats'. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 0, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysclsobjs'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

By using the DMV, we have improved the query performance and reduced the total I/O count by nearly 100%. Another benefit of using the DMV is that we never take locks on the base table, and therefore avoid the possibility of blocking other queries hitting that table.
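If you only need the row count for a single known table, the lookup can be reduced to a shorter form of the same DMV query (using the table from the example above):

```sql
-- Same DMV, minimal form: index_id 0 is the heap and index_id 1 is the
-- clustered index, so index_id < 2 counts each row exactly once.
SELECT SUM(p.row_count) AS [RowCount]
FROM sys.dm_db_partition_stats AS p
WHERE p.object_id = OBJECT_ID('dbo.tblERPInvoiceData')
  AND p.index_id < 2;
```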