Saturday, April 18, 2009

Date Format Validation in JavaScript

Hi Guys,

Sometimes we want to validate the date which the user has entered in a text box against some date format. This can easily be done in JavaScript. So here is a script which we can add to our JavaScript utility file for validating the date format.

// Settings and helper functions used by isDate().
var dtCh = "/";        // date separator
var minYear = 1900;
var maxYear = 2100;

function isInteger(s) {
    return /^\d+$/.test(s);
}

function stripCharsInBag(s, bag) {
    return s.replace(new RegExp("[" + bag + "]", "g"), "");
}

function daysInFebruary(year) {
    // February has 29 days in a leap year, 28 otherwise.
    return ((year % 4 == 0) && ((year % 100 != 0) || (year % 400 == 0))) ? 29 : 28;
}

function isDate(dtStr) {
    // 1-based month lookup; February is validated separately against the year.
    var daysInMonth = [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
    var pos1 = dtStr.indexOf(dtCh);
    var pos2 = dtStr.indexOf(dtCh, pos1 + 1);
    if (pos1 == -1 || pos2 == -1 || dtStr.indexOf(dtCh, pos2 + 1) != -1) {
        alert("The date format should be : mm/dd/yyyy");
        return false;
    }
    var strMonth = dtStr.substring(0, pos1);
    var strDay = dtStr.substring(pos1 + 1, pos2);
    var strYear = dtStr.substring(pos2 + 1);
    if (!isInteger(stripCharsInBag(dtStr, dtCh))) {
        alert("Please enter a valid date");
        return false;
    }
    var month = parseInt(strMonth, 10);
    var day = parseInt(strDay, 10);
    var year = parseInt(strYear, 10);
    if (strMonth.length < 1 || month < 1 || month > 12) {
        alert("Please enter a valid month");
        return false;
    }
    if (strDay.length < 1 || day < 1 || day > 31 ||
        (month == 2 && day > daysInFebruary(year)) || day > daysInMonth[month]) {
        alert("Please enter a valid day");
        return false;
    }
    if (strYear.length != 4 || year < minYear || year > maxYear) {
        alert("Please enter a valid 4 digit year between " + minYear + " and " + maxYear);
        return false;
    }
    return true;
}
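If you prefer a validator without alert() calls (easier to unit test), here is a compact alternative sketch that lets the JavaScript Date object do the range checking. The function name isValidDate is my own, not part of the script above.

```javascript
// Validates a mm/dd/yyyy string by round-tripping it through Date.
// Date silently rolls invalid values over (e.g. 02/30 becomes 03/02),
// so comparing the parts after construction catches impossible dates.
function isValidDate(dtStr) {
    var parts = /^(\d{1,2})\/(\d{1,2})\/(\d{4})$/.exec(dtStr);
    if (!parts) return false;
    var month = parseInt(parts[1], 10);
    var day = parseInt(parts[2], 10);
    var year = parseInt(parts[3], 10);
    var d = new Date(year, month - 1, day);
    return d.getFullYear() === year &&
           d.getMonth() === month - 1 &&
           d.getDate() === day;
}
```

For example, isValidDate("02/29/2008") returns true, while isValidDate("02/29/2009") returns false, because 2009 is not a leap year.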

Friday, April 17, 2009

Email Validation in JavaScript

Hi Friends,

We often add some JavaScript to validate the values of controls, and one common validation is email validation. This script validates an email address, and you can modify it according to your needs. Here it is:

function validateEmail(str) {
    var at = "@";
    var dot = ".";
    var lat = str.indexOf(at);
    var lstr = str.length;

    // "@" must exist and must not be the first or last character.
    if (lat == -1 || lat == 0 || lat == lstr - 1) {
        alert("Invalid E-mail ID");
        return false;
    }
    // "." must exist and must not be the first or last character.
    if (str.indexOf(dot) == -1 || str.indexOf(dot) == 0 || str.lastIndexOf(dot) == lstr - 1) {
        alert("Invalid E-mail ID");
        return false;
    }
    // Only one "@" is allowed.
    if (str.indexOf(at, lat + 1) != -1) {
        alert("Invalid E-mail ID");
        return false;
    }
    // "@" must not be immediately preceded or followed by ".".
    if (str.charAt(lat - 1) == dot || str.charAt(lat + 1) == dot) {
        alert("Invalid E-mail ID");
        return false;
    }
    // There must be a "." somewhere after the "@" (the domain part needs one).
    if (str.indexOf(dot, lat + 2) == -1) {
        alert("Invalid E-mail ID");
        return false;
    }
    // No spaces allowed anywhere.
    if (str.indexOf(" ") != -1) {
        alert("Invalid E-mail ID");
        return false;
    }
    return true;
}
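As a rough alternative, the rule set above can be collapsed into a single regular expression. The pattern below is a common lightweight sketch, not a full RFC 5322 validator, so adjust it to your needs.

```javascript
// Lightweight e-mail shape check: no spaces, exactly one "@",
// domain does not start with "." and contains at least one ".".
function isEmailLike(str) {
    return /^[^\s@]+@[^\s@.][^\s@]*\.[^\s@]+$/.test(str);
}
```

For example, isEmailLike("john.doe@example.com") returns true, while isEmailLike("john@examplecom") returns false.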

Ways to Improve SQL Server Performance - Part 2

Hi Guys,

This is the second post of my "SQL Server Performance" series. If you know any other performance tips, feel free to email them to me or add them as a comment on this post.

8. Use Stored Procedures and Parameterized Queries:

Advantages of using stored procedures are:

* Logical separation of business logic: we can reduce the amount of code written in the application by moving some logic out of the application and into stored procedures. This has its own advantages; for example, we have to make only a single change in the stored procedure and it will be reflected in all the places that call it.
* Reduced deployment time: when SQL commands are embedded in application code, changing that business logic means redeploying the whole application; with stored procedures, only the procedures themselves need to change.
* Reduced network bandwidth: supplying a whole SQL command to the server requires more bandwidth, whereas with stored procedures we supply only the procedure name and its parameters.
* Protection against SQL injection: parameterized stored procedures protect against SQL injection attacks caused by direct user input being concatenated into a SQL command.

9. Minimize cursor use:

Cursors force the database engine to repeatedly fetch rows, negotiate blocking, manage locks, and transmit results, and they generally require more locks. Use forward-only, read-only cursors unless you want to update the tables; they are the fastest and least resource-intensive way to get data from the server.

10. Use Temporary Tables and Table Variables Appropriately

If your application frequently creates temporary tables, consider using the table variable or a permanent table. You can use the table data type to store a row set in memory. Table variables are cleaned up automatically at the end of the function, stored procedure, or batch that they are defined in. Many requests to create temporary tables may cause contention in both the tempdb database and in the system tables. Very large temporary tables are also problematic. If you find that you are creating many large temporary tables, you may want to consider a permanent table that can be truncated between uses.

Table variables use the tempdb database in a manner similar to how temporary tables use it, so avoid large table variables. Also, table variables are not considered by the optimizer when it generates execution plans and parallel queries, so they may cause decreased performance. Finally, table variables cannot be indexed as flexibly as temporary tables.

You have to test temporary table and table variable usage for performance. Test with many users for scalability to determine the approach that is best for each situation. Also, be aware that there may be concurrency issues when there are many temporary tables and variables that are requesting resources in the tempdb database.

11. Avoid LEFT JOINs and NULLs:

A LEFT JOIN retrieves all of the rows from the first table plus the matching rows from the second table, including the rows from the first table that have no match in the second. For example, if you wanted to return every Customer and their orders, a LEFT JOIN would show the Customers who did and did not have orders.
LEFT JOINs are costly since they involve matching data against NULL (nonexistent) data. In some cases this is unavoidable, but the cost can be high. A LEFT JOIN is more costly than an INNER JOIN, so if you could rewrite a query so it doesn't use a LEFT JOIN, it could pay huge dividends.
One technique to speed up a query that uses a LEFT JOIN involves creating a TABLE datatype and inserting all of the rows from the first table (the one on the left-hand side of the LEFT JOIN), then updating the TABLE datatype with the values from the second table. This technique is a two-step process, but could save a lot of time compared to a standard LEFT JOIN. A good rule is to try out different techniques and time each of them until you get the best performing query for your application.

12. Rewriting a query with OR conditions as a UNION

You can speed up a query such as

SELECT * FROM table WHERE (Field1 = 'Value1') OR (Field2 = 'Value2')

by creating an index on each field used in the conditions and by using a UNION operator instead
of OR:

SELECT ... WHERE Field1 = 'Value1'
UNION
SELECT ... WHERE Field2 = 'Value2'

13. SP Performance Improvement without changing T-SQL

There are two ways, which can be used to improve the performance of Stored Procedure (SP) without making T-SQL changes in SP.

1. Do not prefix your Stored Procedure names with sp_.
In SQL Server, all system SPs are prefixed with sp_. When an SP whose name begins with sp_ is called, SQL Server looks for it in the master database first, before looking in the database it was called in.
2. Call your Stored Procedure with its fully qualified name, dbo.SPName.
When an SP is called prefixed with dbo. (or database.dbo.), it prevents SQL Server from placing a COMPILE lock on the procedure while it determines whether all objects referenced in the code have the same owners as the objects in the current cached procedure plan.

Some more, little but very important tips…..

1. Know your data and business application well.

Familiarize yourself with your data sources; you must be aware of the data volume and distribution in your database.

2. Test your queries with realistic data.

A SQL statement tested with unrealistic data may behave differently when used in production. To ensure rigorous testing, the data distribution in the test environment must also closely resemble that in the production environment.

3. Write identical SQL statements in your applications.

Take full advantage of stored procedures and functions wherever possible. The benefit is a performance gain, as they are precompiled.

4. Use indexes on the tables carefully.

Be sure to create all the necessary indexes on the tables. However, too many of them can degrade performance.

5. Make an indexed path available.

To take advantage of indexes, write your SQL in such a manner that an indexed path is available to it. Using SQL hints is one of the ways to ensure the index is used.

6. Understand the Optimizer.

Understand how the optimizer uses indexes and how it handles the WHERE, ORDER BY, and HAVING clauses.

7. Think globally when acting locally.

Any changes you make in the database to tune one SQL statement may affect the performance of other statements used by applications and users.

8. The WHERE clause is crucial.

The following WHERE clauses would not use the index access path even if an index is available, e.g.:
* Table1Col1 (comparison operator such as >, <, >=, <=) Table1Col2
* Table1Col1 IS (NOT) NULL
* Table1Col1 NOT IN (value1, value2)
* Table1Col1 != expression
* Table1Col1 LIKE '%pattern%'
* NOT EXISTS subquery

9. Use WHERE instead of HAVING for record filtering.

Avoid using the HAVING clause along with GROUP BY on an indexed column.

10. Specify the leading index columns in WHERE clauses.

For a composite index, the query would use the index as long as the leading column of the index is specified in the WHERE clause.

11. Evaluate index scan vs. full table scan. (Index-Only Searches vs. Large Table Scan, Minimize Table Passes)

If a query selects more than 15 percent of the rows from a table, a full table scan is usually faster than an index access path, because index access results in multiple logical reads per row accessed, whereas a full table scan can read all the rows in a block in one logical read. When the percentage of table rows accessed is 15 percent or less, an index scan will usually work better. An index is also not used if SQL Server has to perform implicit data conversion.

12. Use ORDER BY for index scan.

The SQL Server optimizer will use an index scan if the ORDER BY clause is on an indexed column.

13. Minimize table passes.

Usually, reducing the number of table passes in a SQL query results in better performance. Queries with fewer table passes mean faster queries.

14. Join tables in the proper order.

Always perform the most restrictive search first to filter out the maximum number of rows in the early phases of a multiple table join. This way, the optimizer will have to work with fewer rows in the subsequent phases of join, improving performance.

15. Redundancy is good in the WHERE clause.

Provide as much information as possible in the WHERE clause. It will help the optimizer to clearly infer conditions.

16. Keep it simple, stupid.

Very complex SQL statements can overwhelm the optimizer; sometimes writing multiple, simpler SQL will yield better performance than a single complex SQL statement.

17. You can reach the same destination in different ways.

Each SQL may use a different access path and may perform differently.

18. Reduce network traffic and increase throughput.

Using T-SQL blocks instead of multiple separate SQL statements can achieve better performance as well as reduce network traffic. Stored procedures are better still than T-SQL blocks, as they are stored in SQL Server and are precompiled.

19. Better Hardware.

Better hardware always helps performance. SCSI drives, a RAID 10 array, multiprocessor CPUs, and a 64-bit operating system improve performance by a great amount.

20. Avoid Cursors.

Using SQL Server cursors can result in some performance degradation in comparison with set-based SELECT statements. Try to use a correlated subquery or derived tables if you need to perform row-by-row operations.

So guys, I hope you like this post. :)

Thursday, April 16, 2009

Ways to Improve SQL Server Performance - Part 1

Hi Friends,

As I promised, here are some tips related to SQL Server performance...

1. Normalizing:

Normalize your tables' schema in such a way that all tables are reduced in columns and are related to other tables by references.

That will improve performance while you are fetching data from tables, and it will also reduce the fetching of redundant data.

2. Define Primary and Foreign keys:

Make relation of your table in such a way that you can access any combination of data from various tables just by referencing keys.

3. Choose the most appropriate data types.

This is a main issue when the actual data is stored on disk, because if you have chosen an inappropriate data type it will consume more space than actually needed, and that will degrade disk I/O performance.

4. Create Indexes:

Try to create indexes on columns which are frequently used in search operations; this will improve query performance.

Do not create too many indexes on one table, as that will degrade performance: when the table is updated by modifying or inserting data, the indexes must also be updated. So try to keep the number of indexes on a table small.

5. Return Values:

Return only those columns and rows which are actually needed. Do not fetch rows and columns which are not needed, because that will slow down the fetching as well as cause heavy I/O operations.

6. Avoid Expensive operations:

Try to avoid expensive operations such as LIKE with a leading wildcard ("LIKE '%abc%'"), because such patterns require a table scan and make the query respond very slowly.

7. Avoid Explicit or Implicit functions in WHERE Clause:

The optimizer cannot always select an index by using columns in a WHERE clause that are inside functions. Columns in a WHERE clause are seen as an expression rather than a column. Therefore, the columns are not used in the execution plan optimization.

EX: do not use where clause like this,

SELECT OrderID FROM NorthWind.dbo.Orders WHERE DATEADD(day, 15, OrderDate) = '07/23/1996'

Instead, we can use like,

SELECT OrderID FROM NorthWind.dbo.Orders WHERE OrderDate = DATEADD(day, -15, '07/23/1996')

Guys, I will post the second part of this series very soon, so keep watching this BLOG :)

Speed Optimization in ASP.NET 2.0 Web Applications

Hi Friends,

We all face one common problem after designing and implementing a great web application, and it is optimization.

So, it's not easy to optimize a whole web application after developing it. It is better to take some care while designing and coding the application, which can save us a great deal of time and leave the application optimized at the end.

Here I have gathered some useful topics from the internet (various blogs, MSDN, articles...) and I want to share them with you guys.

Use HTML controls whenever possible

HTML controls are lighter than server controls, especially if you are using server controls with their default properties. Server controls are generally easier to use than HTML controls, but on the other side they are slower. So, it is recommended to use HTML controls whenever possible and to avoid unnecessary server controls.

Avoid round trips to server whenever possible

Using server controls will extensively increase round trips to the server via their postback events, which wastes a lot of time. You should avoid these unnecessary round trips or postback events whenever possible. For example, validating user inputs can always (or at least in most cases) take place on the client side; there is no need to send these inputs to the server to check their validity. In general you should avoid code that causes an unnecessary round trip to the server.
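As a small illustration of moving validation to the client, a check like the following (the form and field names here are hypothetical) can reject bad input before any postback happens:

```javascript
// Client-side check run before the form posts back to the server.
// Returning false from an onsubmit handler cancels the round trip.
function isQuantityValid(quantityValue) {
    var quantity = parseInt(quantityValue, 10);
    return !isNaN(quantity) && quantity > 0 && quantity <= 100;
}

// Hypothetical wire-up in the page:
// <form onsubmit="return isQuantityValid(document.getElementById('txtQuantity').value);">
```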

The Page.IsPostBack Property

The Page.IsPostBack Boolean property indicates whether the page is loaded as a response to a round trip to the server, or is being loaded for the first time. This property helps you write code that should run only the first time the page is loaded, avoiding re-running that same code each time the page is posted back. You can use this property efficiently in the Page_Load event: this event is executed each time a page is loaded, so you can use the property conditionally to avoid unnecessary re-running of certain code.

Server Control's AutoPostBack Property

Always set this property to false except when you really need it. When it is true, the control automatically posts back to the server after some action takes place, depending on the type of the control. For example, the TextBox control automatically posts back when its text is changed.

Leave Buffering on

It is important to leave page buffering in its on state to improve your page speed, unless you have a serious reason to turn it off.

Server Controls View State

Server control by default saves all the values of its properties between round trips, and this increases both page size and processing time which is of course an undesired behavior. Disable the server control view state whenever possible. For example, if you bind data to a server control each time the page is posted back, then it is useful to disable the control's view state property. This reduces page size and processing time.

Methods for redirection

There are many ways you can use to redirect a user from the current page to another one in the same application, however the most efficient methods to do this are: the Server.Transfer method or cross-page posting.
Web Applications

The following topics give you some tips about how to make an efficient web application:


Precompilation

When a page of an already deployed ASP.NET web application is requested for the first time, that page needs to be compiled (by the server) before the user gets a response. The compiled page or code is then cached so that it need not be compiled again for subsequent requests. Clearly, the first user gets a slower response than the following users. This scenario is repeated for each web page and code file within your web site.

With precompilation, the entire ASP.NET web application's pages and code files are compiled ahead of time. So, when a user requests a page from the application, he gets it in a reasonable response time whether or not he is the first user.

Precompiling the entire web application before making it available to users provides faster response times. This is very useful on frequently updated large web applications.


Encoding

By default, ASP.NET applications use UTF-8 encoding. If your application uses ASCII characters only, it is preferred to set your encoding to ASCII to improve your application's performance.


Authentication

It is recommended to turn authentication off when you do not need it. The default authentication mode for ASP.NET applications is Windows mode. In many cases it is preferred to turn off authentication in the 'machine.config' file located on your server and to enable it only for the applications that really need it.

Debug Mode

Before deploying your web application you should disable debug mode. This makes the deployed application faster. You can disable or enable debug mode from within your application's 'web.config' file, under the 'system.web' section, as a property of the 'compilation' item; you can set it to 'true' or 'false'.
Coding Practices

The following topics give you guidelines to write efficient code:

Page Size

A web page with a large size consumes more bandwidth over the network during its transfer. Page size is affected by the number and types of controls it contains, and by the number of images and the amount of data used to render the page. The larger the page, the slower the response; that is the rule. Try to make your web pages as small and light as possible. This will improve response times.

Exception Handling

It is better for your application, in terms of performance, to detect in your code conditions that may cause exceptions instead of relying on catching exceptions and handling them. You should avoid common exceptions like null references and dividing by zero by checking for them manually in your code.

The following code gives you two examples: The first one uses exception handling and the second tests for a condition. Both examples produce the same result, but the performance of the first one suffers significantly.

' This is not recommended.
Try
    Output = 100 / number
Catch ex As Exception
    Output = 0
End Try

' This is preferred.
If Not (number = 0) Then
    Output = 100 / number
Else
    Output = 0
End If

Garbage Collector

ASP.NET provides automatic garbage collection and memory management. The garbage collector's main task is to allocate and release memory for your application. There are some things you can take care of when writing your application's code to make the garbage collector work to your benefit:

* Avoid using objects with a Finalize sub where possible, and avoid freeing resources in Finalize functions.
* Avoid allocating too much memory per web page, because the garbage collector will have to do more work for each request and this increases CPU utilization (not to mention that you can run out of memory in larger web applications).
* Avoid keeping unnecessary references to objects, because they keep those objects alive until you release them yourself in your code rather than automatically.

Use Try / Finally

If you are going to use exceptions anyway, then always use a try/finally block to handle them. The finally section runs whether or not an exception occurred, so it is the place to clean up and free your resources.

String Concatenation

Repeated string concatenation is a time-consuming operation. So, if you want to concatenate many strings, for example to dynamically build an HTML or XML string, use the System.Text.StringBuilder object instead of the System.String data type. The Append method of the StringBuilder class is more efficient than concatenation.
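The same idea applies to client-side code: in JavaScript, collecting fragments in an array and joining them once is typically cheaper than growing a string with += in a loop. A rough sketch:

```javascript
// Build an HTML string by accumulating fragments in an array and
// joining once at the end, instead of repeated concatenation.
function buildList(items) {
    var parts = [];
    for (var i = 0; i < items.length; i++) {
        parts.push("<li>" + items[i] + "</li>");
    }
    return "<ul>" + parts.join("") + "</ul>";
}
```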

I hope you like this information. In the future I will post more optimization tricks for .NET as well as SQL.

Monday, April 13, 2009

Use of "Using" Keyword

Hi Friends,

We often face problems with disposing connection, adapter, and command objects.
But we can avoid such situations by modifying some process logic: we can use the "using" keyword, which disposes the objects it contains.

It works just like a method block: when a method goes out of scope, all objects inside it become eligible for garbage collection; likewise, objects declared in a "using" statement are disposed at the end of the block.


This example demonstrates the use of the "using" keyword, which disposes all resources used inside it. In data access base classes we don't have to worry about disposing the connection and adapter; they will be disposed automatically, immediately after use.

internal DataTable GetNetworkSettingsByNetworkID(int networkID)
{
    try
    {
        using (MySqlConnection connection = new MySqlConnection(connectionString))
        using (MySqlCommand command = new MySqlCommand())
        {
            command.CommandType = CommandType.StoredProcedure;
            command.CommandText = "GetNetworkSettingsByNetworkID"; // stored procedure name assumed from the method name
            command.Connection = connection;
            command.Parameters.Add(new MySqlParameter("?_NetworkID", MySqlDbType.Int32));
            command.Parameters[0].Value = networkID;

            using (MySqlDataAdapter dataAdapter = new MySqlDataAdapter(command))
            {
                DataTable dtNetworkSettings = new DataTable();
                dataAdapter.Fill(dtNetworkSettings);
                return dtNetworkSettings;
            }
        }
    }
    catch (Exception ex)
    {
        PacketUtils.WriteLogError(ex, "NetworkDA::GetNetworkSettingsByNetworkID");
        throw;
    }
    // No explicit Connection.Close() is needed: the using statements close and
    // dispose the connection, command, and adapter automatically.
}

So always make a practice of using the "using" keyword for those objects which implement the IDisposable interface.

SQL Interview Questions - Part 5

I hope you enjoyed my Interview Questions series... :)
If you have any suggestions or feedback, feel free to send them to me as a comment.

What is data integrity? Explain constraints?

Data integrity is an important feature in SQL Server. When used properly, it ensures that data is accurate, correct, and valid. It also acts as a trap for otherwise undetectable bugs within applications.
A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should have a primary key constraint to uniquely identify each row and only one primary key constraint can be created for each table. The primary key constraints are used to enforce entity integrity.

A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values are entered. The unique key constraints are used to enforce entity integrity as the primary key constraints.

A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the corresponding data values. A foreign key in one table points to a primary key in another table. Foreign keys prevent actions that would leave rows with foreign key values when there are no primary keys with that value. The foreign key constraints are used to enforce referential integrity.

A CHECK constraint is used to limit the values that can be placed in a column. The check constraints are used to enforce domain integrity.

A NOT NULL constraint enforces that the column will not accept null values. The not null constraints are used to enforce domain integrity, as the check constraints.

What are the properties of the Relational tables?

Relational tables have six properties:

* Values are atomic.
* Column values are of the same kind.
* Each row is unique.
* The sequence of columns is insignificant.
* The sequence of rows is insignificant.
* Each column must have a unique name.

What is De-normalization?

De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly. A true relational DBMS would allow for a fully normalized database at the logical level, while providing physical storage of data that is tuned for high performance. De-normalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.

How to get @@error and @@rowcount at the same time?

If @@ROWCOUNT is checked after an error-checking statement, it will have 0 as its value, as it has been reset.
And if @@ROWCOUNT is checked before the error-checking statement, then @@ERROR gets reset. To get @@ERROR and @@ROWCOUNT at the same time, capture both in the same statement and store them in local variables: SELECT @RC = @@ROWCOUNT, @ER = @@ERROR

What is Identity?

Identity (or AutoNumber) is a column that automatically generates numeric values. A start and increment value can be set, but most DBAs leave these at 1. A GUID column also generates numbers, but its value cannot be controlled. Identity/GUID columns do not need to be indexed.

What is a Scheduled Jobs or What is a Scheduled Tasks?

Scheduled tasks let users automate processes that run on regular or predictable cycles. Users can schedule administrative tasks, such as cube processing, to run during times of slow business activity, e.g. backing up the database or updating statistics of tables. Users can also determine the order in which tasks run by creating job steps within a SQL Server Agent job. Job steps give users control over the flow of execution; if one step fails, users can configure SQL Server Agent to continue to run the remaining tasks or to stop execution.

What is a table called if it has neither a clustered nor a non-clustered index? What is it used for?

An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap.
A heap is a table that does not have a clustered index and, therefore, the pages are not linked by pointers. The IAM pages are the only structures that link the pages in a table together.
Unindexed tables are good for fast storing of data. Many times it is better to drop all indexes from a table, then do the bulk inserts, and restore those indexes after that.

What is BCP? When is it used?

BulkCopy (BCP) is a tool used to copy huge amounts of data from tables and views. BCP does not copy the structures from source to destination.

How do you load large data to the SQL server database?

BulkCopy is a tool used to copy huge amounts of data from tables. The BULK INSERT command imports a data file into a database table or view in a user-specified format.

Can we rewrite subqueries into simple select statements or with joins?

Subqueries can often be re-written to use a standard outer join, resulting in faster performance. (In Oracle syntax, an outer join uses the plus sign (+) operator to tell the database to return all non-matching rows with NULL values.) Hence we combine the outer join with a NULL test in the WHERE clause to reproduce the result set without using a subquery.

Can SQL Server be linked to other servers like Oracle?

SQL Server can be linked to any server, provided it has an OLE DB provider from Microsoft to allow the link. E.g. Oracle has an OLE DB provider for Oracle that Microsoft provides to add it as a linked server to the SQL Server group.

How to know which index a table is using?

In SQL Server you can list the indexes defined on a table with: EXEC sp_helpindex 'TableName'

How to copy the tables, schema and views from one SQL server to another?

Microsoft SQL Server 2000 Data Transformation Services (DTS) is a set of graphical tools and programmable objects that lets user extract, transform, and consolidate data from disparate sources into single or multiple destinations.

What is Self Join?

This is a particular case when a table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. A self join is rather unique in that it involves a relationship with only one table. The common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another.

What is Cross Join?

A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join. The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. The common example is when a company wants to combine each product with a pricing table to analyze each product at each price.

Which virtual table does a trigger use?

Inserted and Deleted.

List few advantages of Stored Procedure.

* Stored procedure can reduced network traffic and latency, boosting application performance.
* Stored procedure execution plans can be reused, staying cached in SQL Server’s memory, reducing server overhead.
* Stored procedures help promote code reuse.
* Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.
* Stored procedures provide better security to your data.

What is Data Warehousing?

A data warehouse can be described as a database that is:

* Subject-oriented, meaning that the data in the database is organized so that all the data elements relating to the same real-world event or object are linked together;
* Time-variant, meaning that the changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time;
* Non-volatile, meaning that data in the database is never over-written or deleted, once committed, the data is static, read-only, but retained for future reporting;
* Integrated, meaning that the database contains data from most or all of an organization’s operational applications, and that this data is made consistent.

What is OLTP(OnLine Transaction Processing)?

In OLTP (online transaction processing) systems, relational database design uses the discipline of data modeling and generally follows the Codd rules of data normalization in order to ensure absolute data integrity. Using these rules, complex information is broken down into its simplest structures (tables), where all of the individual atomic-level elements relate to each other and satisfy the normalization rules.

How are SQL Server 2000 and XML linked? Can XML be used to access data?

You can execute SQL queries against existing relational databases to return results as XML rather than standard rowsets. These queries can be executed directly or from within stored procedures. To retrieve XML results, use the FOR XML clause of the SELECT statement and specify an XML mode of RAW, AUTO, or EXPLICIT.

OPENXML is a Transact-SQL keyword that provides a relational/rowset view over an in-memory XML document. OPENXML is a rowset provider similar to a table or a view. OPENXML provides a way to access XML data within the Transact-SQL context by transferring data from an XML document into the relational tables. Thus, OPENXML allows you to manage an XML document and its interaction with the relational environment.
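As a minimal sketch of both directions (the Customers table, its columns, and the XML shape here are hypothetical, purely for illustration):

```sql
-- Return relational data as XML:
SELECT CustomerID, CompanyName
FROM Customers
FOR XML AUTO

-- Turn an XML document into a rowset with OPENXML:
DECLARE @hDoc int, @xml varchar(1000)
SET @xml = '<Customers><Customer CustomerID="1" CompanyName="Acme"/></Customers>'
EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml
SELECT *
FROM OPENXML(@hDoc, '/Customers/Customer', 1)   -- 1 = attribute-centric mapping
     WITH (CustomerID int, CompanyName varchar(40))
EXEC sp_xml_removedocument @hDoc
```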

What is an execution plan? When would you use it? How would you view the execution plan?

An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad hoc query. It is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure, since the plan is what SQL Server places in its cache and uses to execute the stored procedure or query. Query Analyzer has an option called “Show Execution Plan” (located on the Query drop-down menu). If this option is turned on, it will display the query execution plan in a separate window when the query is run again.

Ok that's it for this series...:)

SQL Interview Questions - Part 4

What is SQL server agent?

SQL Server agent plays an important role in the day-to-day tasks of a database administrator (DBA). It is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to schedule your own jobs and scripts.

Can a stored procedure call itself or recursive stored procedure? How many levels SP nesting possible?

Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps. Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code references up to 32 levels.
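For illustration, a minimal recursive stored procedure (the procedure name and logic are just an example, not from any particular database):

```sql
CREATE PROCEDURE usp_Factorial
    @n int,
    @result int OUTPUT
AS
BEGIN
    IF @n <= 1
        SET @result = 1
    ELSE
    BEGIN
        DECLARE @m int, @prev int
        SET @m = @n - 1
        EXEC usp_Factorial @m, @prev OUTPUT  -- recursive call; one more nesting level
        SET @result = @n * @prev
    END
END
```

Calling it with `EXEC usp_Factorial 5, @f OUTPUT` nests five levels deep, well inside the 32-level limit.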

What is @@ERROR?

The @@ERROR automatic variable returns the error code of the last Transact-SQL statement. If there was no error, @@ERROR returns zero. Because @@ERROR is reset after each Transact-SQL statement, it must be saved to a variable if it is needed to process it further after checking it.

What is Raise error?

Stored procedures report errors to client applications via the RAISERROR command. RAISERROR doesn’t change the flow of a procedure; it merely displays an error message, sets the @@ERROR automatic variable, and optionally writes the message to the SQL Server error log and the NT application event log.
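A typical pattern combining the two, sketched with a hypothetical Orders table:

```sql
DECLARE @err int
UPDATE Orders SET Qty = Qty - 1 WHERE OrderID = 42  -- Orders is illustrative
SET @err = @@ERROR    -- save it immediately; the next statement resets @@ERROR
IF @err <> 0
    RAISERROR ('Update failed with error %d.', 16, 1, @err)
```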

What is log shipping?

Log shipping is the process of automating the backup of database and transaction log files on a production SQL server, and then restoring them onto a standby server. Only the Enterprise Edition supports log shipping. In log shipping, the transaction log file from one server is automatically applied to the backup database on the other server. If one server fails, the other server has the same database and can be used as the disaster recovery plan. The key feature of log shipping is that it will automatically back up transaction logs throughout the day and automatically restore them on the standby server at a defined interval.

What is the difference between a local and a global temporary table?

A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement.

A global temporary table remains in the database permanently, but the rows exist only within a given connection. When the connection is closed, the data in the global temporary table disappears. However, the table definition remains with the database for access when the database is opened next time.

What are the different types of replication? Explain.

The SQL Server 2000-supported replication types are as follows:

* Transactional
* Snapshot
* Merge

Snapshot replication distributes data exactly as it appears at a specific moment in time and does not monitor for updates to the data. Snapshot replication is best used as a method for replicating data that changes infrequently or where the most up-to-date values (low latency) are not a requirement. When synchronization occurs, the entire snapshot is generated and sent to Subscribers.

In transactional replication, an initial snapshot of data is applied at Subscribers; then, when data modifications are made at the Publisher, the individual transactions are captured and propagated to Subscribers.

Merge replication is the process of distributing data from Publisher to Subscribers, allowing the Publisher and Subscribers to make updates while connected or disconnected, and then merging the updates between sites when they are connected.

What are the OS services that the SQL Server installation adds?

MSSQLSERVER service, SQL Server Agent service, and MS DTC (Microsoft Distributed Transaction Coordinator).

What are three SQL keywords used to change or set someone’s permissions?

GRANT, DENY, and REVOKE.


What does it mean to have quoted_identifier on? What are the implications of having it off?

When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers.
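A minimal illustration of the two settings:

```sql
SET QUOTED_IDENTIFIER ON
-- double quotes delimit an identifier; single quotes delimit the literal
SELECT 'OK' AS "Result Column"

SET QUOTED_IDENTIFIER OFF
-- now double quotes are treated as string delimiters instead
SELECT "OK" AS Result
```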

What is the STUFF function and how does it differ from the REPLACE function?

The STUFF function overwrites existing characters. Using the syntax STUFF(string_expression, start, length, replacement_characters): string_expression is the string that will have characters substituted, start is the starting position, length is the number of characters in the string that are substituted, and replacement_characters are the new characters inserted into the string.
The REPLACE function replaces all occurrences of existing characters. Using the syntax REPLACE(string_expression, search_string, replacement_string), every occurrence of search_string found in string_expression is replaced with replacement_string.
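For example:

```sql
SELECT STUFF('abcdef', 2, 3, 'XY')    -- removes 'bcd', inserts 'XY': returns 'aXYef'
SELECT REPLACE('abcabc', 'b', 'Z')    -- replaces every 'b': returns 'aZcaZc'
```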

Using Query Analyzer, name three ways to get an accurate count of the number of records in a table.

SELECT * FROM table1
SELECT COUNT(*) FROM table1
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table1') AND indid < 2

How to rebuild Master Database?

1. Shut down Microsoft SQL Server 2000, and then run Rebuildm.exe. It is located in the Program Files\Microsoft SQL Server\80\Tools\Binn directory.
2. In the Rebuild Master dialog box, click Browse.
3. In the Browse for Folder dialog box, select the \Data folder on the SQL Server 2000 compact disc or in the shared network directory from which SQL Server 2000 was installed, and then click OK.
4. Click Settings. In the Collation Settings dialog box, verify or change the settings used for the master database and all other databases. Initially, the default collation settings are shown, but these may not match the collation selected during setup. You can select the same settings used during setup or select new collation settings. When done, click OK.
5. In the Rebuild Master dialog box, click Rebuild to start the process. The Rebuild Master utility reinstalls the master database. To continue, you may need to stop a server that is running.

What are the basic functions of the master, msdb, model and tempdb databases?

The Master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. Because SQL Server cannot start without a functioning master database, you must administer this database with care.
The msdb database stores information regarding database backups, SQL Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping.
The tempdb holds temporary objects such as global and local temporary tables and stored procedures.
The model is essentially a template database used in the creation of any new user database created in the instance.

What are primary keys and foreign keys?

Primary keys are the unique identifiers for each row. They must contain unique values and cannot be null. Due to their importance in relational databases, Primary keys are the most fundamental of all keys and constraints. A table can have only one Primary key.
Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.

To be continued...:)

SQL Interview Questions - Part 3

Here we go with some more questions...

What is sub-query? Explain properties of sub-query.

Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of parentheses. Sub-queries are generally used to return a single row as an atomic value, though they may be used to compare values against multiple rows with the IN keyword.

A subquery is a SELECT statement that is nested within another T-SQL statement. If executed independently of the statement in which it is nested, a subquery will return a result set, meaning that a subquery can stand alone and is not dependent on the statement in which it is nested. A subquery can return any number of values and can appear in the column list of a SELECT statement, or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T-SQL statement. A subquery can also be used as a parameter to a function call. Basically, a subquery can be used anywhere an expression can be used.

Properties of Sub-Query
A subquery must be enclosed in parentheses.
A subquery must be placed on the right-hand side of the comparison operator.
A subquery cannot contain an ORDER BY clause.
A query can contain more than one subquery.

What are types of sub-queries?

Single-row subquery, where the subquery returns only one row;
Multiple-row subquery, where the subquery returns multiple rows; and
Multiple-column subquery, where the subquery returns multiple columns.
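For example (the Employees and Departments tables here are hypothetical):

```sql
-- single-row subquery: comparison against one value
SELECT Name FROM Employees
WHERE Salary = (SELECT MAX(Salary) FROM Employees)

-- multiple-row subquery: comparison with IN
SELECT Name FROM Employees
WHERE DeptID IN (SELECT DeptID FROM Departments WHERE Location = 'NY')
```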

What is SQL Profiler?

SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to analyze later. For example, you can monitor a production environment to see which stored procedures are hampering performances by executing too slowly.

Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too large, you can filter them based on the information you want, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.

What is User Defined Functions?

User-defined functions let you define your own T-SQL functions that can accept zero or more parameters and return either a single scalar data value or a table data type.

What kind of User-Defined Functions can be created?

There are three types of User-Defined functions in SQL Server 2000 and they are Scalar, Inline Table-Valued and Multi-statement Table-valued.

Scalar User-Defined Function
A scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. This is the type of user-defined function that most developers are used to from other programming languages: you pass in zero to many parameters and you get a return value.

Inline Table-Value User-Defined Function
An Inline Table-Value user-defined function returns a table data type and is an exceptional alternative to a view as the user-defined function can pass parameters into a T-SQL select command and in essence provide us with a parameterized, non-updateable view of the underlying tables.

Multi-statement Table-Value User-Defined Function
A Multi-Statement Table-Value user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where a view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL select command, or a group of them, gives us the capability to in essence create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike a stored procedure, which can also return record sets but cannot be used that way.
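A quick sketch of a scalar and an inline table-valued function (the function names and the Orders table are illustrative, not from any real schema):

```sql
-- scalar UDF: returns a single value
CREATE FUNCTION dbo.fn_Area (@w int, @h int)
RETURNS int
AS
BEGIN
    RETURN @w * @h
END

-- inline table-valued UDF: a parameterized, non-updateable "view"
CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustID int)
RETURNS TABLE
AS
RETURN (SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @CustID)
```

The table-valued function can then be used in a FROM clause, e.g. `SELECT * FROM dbo.fn_OrdersByCustomer(7)`.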

Which TCP/IP port does SQL Server run on? How can it be changed?

By default, SQL Server runs on port 1433. It can be changed from the Server Network Utility (TCP/IP properties, Port number), both on the client and the server.

What are the authentication modes in SQL Server? How can it be changed?

Windows mode and mixed mode (SQL & Windows).

To change authentication mode in SQL Server click Start, Programs, and Microsoft SQL Server and click SQL Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server program group. Select the server then from the Tools menu select SQL Server Configuration Properties, and choose the Security page.

Where are SQL Server users' names and passwords stored?

They get stored in master db in the sysxlogins table.

Which command using Query Analyzer will give you the version of SQL server and operating system?

SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')

To be continued...:)

Sunday, April 12, 2009

SQL Interview Questions - Part 2

What is a Linked Server?

Linked Servers is a concept in SQL Server by which we can add other SQL Servers to a group and query both SQL Server databases using T-SQL statements. With a linked server, you can create very clean, easy-to-follow SQL statements that allow remote data to be retrieved, joined and combined with local data.
The stored procedures sp_addlinkedserver and sp_addlinkedsrvlogin are used to add a new linked server.
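A sketch of setting one up (the server name, data source and login here are all placeholders):

```sql
EXEC sp_addlinkedserver
    @server = 'REMOTESRV',            -- name you will use in queries
    @srvproduct = '',
    @provider = 'SQLOLEDB',
    @datasrc = 'remotehost\instance'  -- the actual remote SQL Server

EXEC sp_addlinkedsrvlogin 'REMOTESRV', 'false', NULL, 'remoteuser', 'remotepwd'

-- then query it with a four-part name:
SELECT * FROM REMOTESRV.Northwind.dbo.Customers
```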

What is Collation?

Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types and character width.

What are different types of Collation Sensitivity?

Case sensitivity
A and a, B and b, etc.

Accent sensitivity
a and á, o and ó, etc.

Kana Sensitivity
When Japanese kana characters Hiragana and Katakana are treated differently, it is called Kana sensitive.

Width sensitivity
When a single-byte character (half-width) and the same character represented as a double-byte character (full-width) are treated differently, it is width sensitive.

What’s the difference between a primary key and a unique key?

Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
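Both constraints can be seen side by side in a table definition (the table is hypothetical):

```sql
CREATE TABLE Employees (
    EmpID int NOT NULL PRIMARY KEY,  -- clustered index by default; NULLs not allowed
    SSN   char(9) NULL UNIQUE        -- nonclustered index by default; one NULL allowed
)
```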

How to implement one-to-one, one-to-many and many-to-many relationships while designing tables?

One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships.
One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships.
Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.

What is a NOLOCK?

Using the NOLOCK query optimizer hint is generally considered good practice in order to improve concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a Dirty Read, which means that another process could be updating the data at the exact time you are reading it. There are no guarantees that your query will retrieve the most recent data. The advantage to performance is that your reading of data will not block updates from taking place, and updates will not block your reading of data. SELECT statements take Shared (Read) locks. This means that multiple SELECT statements are allowed simultaneous access, but other processes are blocked from modifying the data. The updates will queue until all the reads have completed, and reads requested after the update will wait for the updates to complete. The result to your system is delay (blocking).
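The hint is applied per table in the FROM clause (the Orders table is illustrative):

```sql
SELECT OrderID, Qty
FROM Orders WITH (NOLOCK)   -- dirty read: no shared locks taken, no blocking
WHERE CustomerID = 42
```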

What is difference between DELETE & TRUNCATE commands?

Delete command removes the rows from a table based on the condition that we provide with a WHERE clause. Truncate will actually remove all the rows from a table and there will be no data in the table after we run the truncate command.

TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table’s data, and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE is not logged, it cannot activate a trigger.
TRUNCATE can not be Rolled back.
TRUNCATE is DDL Command.
TRUNCATE Resets identity of the table.

DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
DELETE Can be used with or without a WHERE clause
DELETE Activates Triggers.
DELETE can be rolled back.
DELETE is DML Command.
DELETE does not reset the identity of the table.
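The contrast in a nutshell (Orders is a hypothetical table):

```sql
-- row-by-row, fully logged, fires triggers, identity counter keeps going
DELETE FROM Orders WHERE OrderDate < '2008-01-01'

-- deallocates data pages, cannot target rows, resets the identity seed
TRUNCATE TABLE Orders
```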

Difference between Function and Stored Procedure?

A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas stored procedures cannot be.
UDFs that return tables can be treated as another rowset. This can be used in JOINs with other tables.
Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other rowset operations.

What is the use of the UPDATE STATISTICS command?

This command is basically used after a large amount of data processing has occurred. If a large number of deletions, modifications or bulk copies into the tables has occurred, the statistics on the indexes need to be brought up to date to take these changes into account. UPDATE STATISTICS updates the statistics on these tables accordingly.

What types of Joins are possible with Sql Server?

Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table. The types of joins are: INNER JOIN, OUTER JOIN (LEFT, RIGHT and FULL), CROSS JOIN and self join.

What is the difference between a HAVING CLAUSE and a WHERE CLAUSE?

HAVING specifies a search condition for a group or an aggregate. HAVING can be used only with the SELECT statement and is typically used with a GROUP BY clause; when GROUP BY is not used, HAVING behaves like a WHERE clause. The WHERE clause is applied to each row before the rows become part of the GROUP BY function in a query, whereas the HAVING criteria are applied after the grouping of rows has occurred.
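For example (Employees is a hypothetical table):

```sql
SELECT DeptID, COUNT(*) AS Headcount
FROM Employees
WHERE Active = 1        -- filters individual rows before grouping
GROUP BY DeptID
HAVING COUNT(*) > 10    -- filters whole groups after aggregation
```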

To be continued...:)

SQL Interview Questions - Part 1

What is RDBMS?

Relational Data Base Management Systems (RDBMS) are database management systems that maintain data records and indices in tables. Relationships may be created and maintained across and among the data and tables. In a relational database, relationships between data items are expressed by means of tables. Interdependencies among these tables are expressed by data values rather than by pointers. This allows a high degree of data independence. An RDBMS has the capability to recombine the data items from different files, providing powerful tools for data usage.

What is normalization?

Database normalization is a data design and organization process applied to data structures based on rules that help build relational databases. In relational database design, it is the process of organizing data to minimize redundancy. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.

What are different normalization forms?

1NF: Eliminate Repeating Groups
Make a separate table for each set of related attributes, and give each table a primary key. Each field contains at most one value from its attribute domain.
2NF: Eliminate Redundant Data
If an attribute depends on only part of a multi-valued key, remove it to a separate table.
3NF: Eliminate Columns Not Dependent On Key
If attributes do not contribute to a description of the key, remove them to a separate table. All attributes must be directly dependent on the primary key.
BCNF: Boyce-Codd Normal Form
If there are non-trivial dependencies between candidate key attributes, separate them out into distinct tables.
4NF: Isolate Independent Multiple Relationships
No table may contain two or more 1:n or n:m relationships that are not directly related.
5NF: Isolate Semantically Related Multiple Relationships
There may be practical constraints on information that justify separating logically related many-to-many relationships.
ONF: Optimal Normal Form
A model limited to only simple (elemental) facts, as expressed in Object Role Model notation.
DKNF: Domain-Key Normal Form
A model free from all modification anomalies.

Remember, these normalization guidelines are cumulative. For a database to be in 3NF, it must first fulfill all the criteria of a 2NF and 1NF database.

What is Stored Procedure?

A stored procedure is a named group of SQL statements that have been previously created and stored in the server database. Stored procedures accept input parameters so that a single procedure can be used over the network by several clients using different input data. And when the procedure is modified, all clients automatically get the new version. Stored procedures reduce network traffic and improve performance. Stored procedures can be used to help ensure the integrity of the database.
e.g. sp_helpdb, sp_renamedb, sp_depends etc.

What is Trigger?

A trigger is a SQL procedure that initiates an action when an event (INSERT, DELETE or UPDATE) occurs. Triggers are stored in and managed by the DBMS. Triggers are used to maintain the referential integrity of data by changing the data in a systematic fashion. A trigger cannot be called or executed; the DBMS automatically fires the trigger as a result of a data modification to the associated table. Triggers can be viewed as similar to stored procedures in that both consist of procedural logic that is stored at the database level. Stored procedures, however, are not event-driven and are not attached to a specific table as triggers are. Stored procedures are explicitly executed by invoking a CALL to the procedure, while triggers are implicitly executed. In addition, triggers can also execute stored procedures.
Nested Trigger: A trigger can also contain INSERT, UPDATE and DELETE logic within itself, so when the trigger is fired because of data modification it can also cause another data modification, thereby firing another trigger. A trigger that contains data modification logic within itself is called a nested trigger.

What is View?

A simple view can be thought of as a subset of a table. It can be used for retrieving data, as well as updating or deleting rows. Rows updated or deleted in the view are updated or deleted in the table the view was created with. It should also be noted that as data in the original table changes, so does data in the view, as views are the way to look at part of the original table. The results of using a view are not permanently stored in the database. The data accessed through a view is actually constructed using standard T-SQL select command and can come from one to many different base tables or even other views.

What is Index?

An index is a physical structure containing pointers to the data. Indices are created on an existing table to locate rows more quickly and efficiently. It is possible to create an index on one or more columns of a table, and each index is given a name. The users cannot see the indexes; they are just used to speed up queries. Effective indexes are one of the best ways to improve performance in a database application. A table scan happens when there is no index available to help a query; in a table scan SQL Server examines every row in the table to satisfy the query results. Table scans are sometimes unavoidable, but on large tables, scans have a significant impact on performance.

Clustered indexes define the physical sorting of a database table’s rows in the storage media. For this reason, each database table may have only one clustered index.
Non-clustered indexes are created outside of the database table and contain a sorted list of references to the table itself.

What are the difference between clustered and a non-clustered index?

A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.

A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf node of a nonclustered index does not consist of the data pages. Instead, the leaf nodes contain index rows.

What are the different index configurations a table can have?

A table can have one of the following index configurations:

No indexes
A clustered index
A clustered index and many nonclustered indexes
A nonclustered index
Many nonclustered indexes

What is cursor?

Cursor is a database object used by applications to manipulate data in a set on a row-by-row basis, instead of the typical SQL commands that operate on all the rows in the set at one time.

In order to work with a cursor we need to perform some steps in the following order:

Declare cursor
Open cursor
Fetch row from the cursor
Process fetched row
Close cursor
Deallocate cursor
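The steps above can be sketched as follows (the Employees table is hypothetical):

```sql
DECLARE @name varchar(50)

DECLARE emp_cursor CURSOR FOR        -- 1. declare
    SELECT Name FROM Employees

OPEN emp_cursor                      -- 2. open
FETCH NEXT FROM emp_cursor INTO @name  -- 3. fetch a row
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name                      -- 4. process the fetched row
    FETCH NEXT FROM emp_cursor INTO @name
END
CLOSE emp_cursor                     -- 5. close
DEALLOCATE emp_cursor                -- 6. deallocate
```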

What is the use of DBCC commands?

DBCC stands for Database Consistency Checker. We use these commands to check the consistency of databases, i.e., for maintenance, validation tasks and status checks.
E.g. DBCC CHECKDB - Ensures that tables in the db and the indexes are correctly linked.
DBCC CHECKALLOC - To check that all pages in a db are correctly allocated.
DBCC CHECKFILEGROUP - Checks all tables in a filegroup for any damage.

To be continued...:)

How to use Microsoft Indexing Service to make search faster...


The Microsoft Indexing Service builds a catalog that stores an index of your documents. When we search using the Indexing Service, it uses the indexes stored in that catalog, which makes the search much faster.

So here I will try to explain how we can use the Microsoft Indexing Service.
Using the code

First, we have to configure the Microsoft Indexing Service before we can use it.

Install the Indexing Service from the Add/Remove Programs utility in Control Panel:

choose Add/Remove Windows Components, select Indexing Service and install it.

Now type mmc in Start --> Run;

a window titled "Microsoft Management Console" will open.

Now follow this procedure:

Click File --> Add/Remove Snap-in --> Add --> select "Indexing Service" --> OK --> Finish.

Now you have to create a catalog, which will keep track of the directory you want to search in.

To create a catalog, click on "Indexing Service on Local Machine", then

Action --> New --> Catalog

Enter the name of the catalog and the path where you want to save the catalog files.

Your newly created catalog will appear in the tree;

now just expand that node and you will see "Directories". Right-click on it and select New --> Directory.

Enter the path of the directory that holds the documents you want to search.

Now stop and then start the Indexing Service. That's all; your Indexing Service is ready to use with your application.

// First create the OleDb connection and command...

System.Data.OleDb.OleDbConnection odbSearch;
System.Data.OleDb.OleDbCommand cmdSearch;

// Then prepare the connection string to connect with the
// Indexing Service database. Instead of "MyDocs" you have to
// write the catalog name which you created in the steps above.

odbSearch = new System.Data.OleDb.OleDbConnection(
    "Provider=MSIDXS.1;Integrated Security .='';Data Source=MyDocs");

cmdSearch = new System.Data.OleDb.OleDbCommand();
cmdSearch.Connection = odbSearch;

// Now prepare the command text; strSearch holds the user's search text.

cmdSearch.CommandText =
    "select doctitle, filename, vpath, rank, characterization " +
    "from scope() where FREETEXT(Contents, '" + strSearch + "') " +
    "order by rank desc";

// Now fire the command; the data adapter opens and closes
// the connection itself when filling the DataSet.

OleDbDataAdapter adpt = new OleDbDataAdapter(cmdSearch);

DataSet ds = new DataSet();
DataTable dt = new DataTable();

adpt.Fill(ds, "SearchResult");
dt = ds.Tables[0];

// That's it; the DataTable "dt" will hold your search result.

Future enhancement

This process will search only .doc, .txt and .xls files.

To make it work for .pdf files, download Adobe PDF IFilter 6.0 and simply install it.

How to call SSIS package from the stored procedure


SSIS (SQL Server Integration Services) packages are server-side packages that are called on the server; that can be achieved by creating a web service. But sometimes we want to pass an Excel or flat file to an SSIS package, and this file must be transferred to the server for the SSIS package to use.

So there may be security issues when the web service is restricted from using resources on the server. In that case we have to use some way other than a web service to call SSIS.


This article assumes that you are familiar with creating SSIS packages and how to add variables into package and how to call SSIS package to use in code.
Using the code

This article has two attached files:
1) enablexp_cmdScript.sql (251 B)

2) ssisfromsql.sql (570 B)

First, I will explain a way to call an SSIS package other than using a web service: we can use a stored procedure to call the SSIS package. How?

There is a system stored procedure in SQL Server 2005 called "xp_cmdshell", which is disabled by default at the time of SQL Server installation. We have to manually enable this SP in order to use it. This can be done in two ways: either by running a script (which is given in the enablexp_cmdscript.sql file) or by using the "SQL Server Surface Area Configuration" tool, which is installed with SQL Server 2005.

xp_cmdshell : "xp_cmdshell" is an extended stored procedure provided by Microsoft and stored in the master database. It allows you to issue operating system commands directly to the Windows command shell via T-SQL code. If needed, the output of these commands is returned to the calling routine.
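For example, once enabled, xp_cmdshell can run any command-line program; each line of the command's output comes back as a row in the result set:

```sql
-- List a directory; every output line of "dir" is returned as one row.
EXEC master..xp_cmdshell 'dir C:\'
```

This row-per-line result set is the same mechanism we will use below to capture dtexec's output.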

Start the Surface Area Configuration tool from your SQL Server installation in the Program menu; it will look like this:

Now click on the "Surface Area Configuration for Features" link and you will see the following screen. From the left-side menu, select your instance name and click on the "xp_cmdshell" option under it, like this:

Just enable the xp_cmdshell option; the xp_cmdshell SP will be enabled after you restart the SQL Server services.
If you do not want to do it this way, just run the following script lines in your selected instance in SQL Server:

USE master
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
EXEC sp_configure 'show advanced options', 0
RECONFIGURE

Now, we are ready to use "xp_cmdshell" stored procedure to call our SSIS package.

Now, I have created one SSIS package called "ImportItemFile". What it does: it fetches the Excel file from the provided location on the server and loads all the items from the Excel file into the Item table in the database.

Variables I have to pass are: FileName, CreatedBy, ContractDbConnectionString, BatchID, SupplierID.

Here I have used two special commands: the first is "xp_cmdshell" and the second is "dtexec".
So what is the "dtexec" command?

dtexec : The dtexec command prompt utility is used to configure and execute SQL Server 2005 Integration Services (SSIS) packages. The dtexec utility provides access to all the package configuration and execution features, such as connections, properties, variables, logging, and progress indicators. The dtexec utility lets you load packages from three sources: a Microsoft SQL Server database, the SSIS service, and the file system.

(Reference from: MSDN)
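To illustrate those three sources, here are two typical command lines as you would type them at a Windows command prompt (the package and server names are placeholders):

```shell
rem Run a package stored as a .dtsx file in the file system
dtexec /f "C:\Packages\ImportItemFile.dtsx"

rem Run a package deployed to msdb on a SQL Server instance
dtexec /sq ImportItemFile /ser myserver
```

In the stored procedure below we use the second form (/sq with /ser), wrapped inside xp_cmdshell.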

Now, the script I will create here is dynamic SQL, meaning we can use it to call any SSIS package; we just have to pass the necessary variables.

declare @ssisstr varchar(8000), @packagename varchar(200), @servername varchar(100)
declare @params varchar(8000)
---- my package name
set @packagename = 'ImportItemFile'
---- my server name
set @servername = 'myserversql2k5'

---- please make the following SET a single line; it is split across
---- multiple lines here due to the article format.
---- package variables, which we are passing into the SSIS package.
set @params = '/set \package.variables[FileName].Value;"\SSISNewItem.xls"
 /set \package.variables[CreatedBy].Value;"Chirag"
 /set \package.variables[ContractDbConnectionString].Value;"Data Source=myserverSQL2K5;User ID=sa;Password=sapass;Initial Catalog=Items;Provider=SQLNCLI.1;Persist Security Info=True;Auto Translate=False;"
 /set \package.variables[BatchID].Value;"1"
 /set \package.variables[SupplierID].Value;"22334"'

---- now build the "dtexec" command line from the dynamic values
set @ssisstr = 'dtexec /sq ' + @packagename + ' /ser ' + @servername + ' '
set @ssisstr = @ssisstr + @params
----- print line for verification
--print @ssisstr

---- now execute the command line by using xp_cmdshell
DECLARE @returncode int
EXEC @returncode = xp_cmdshell @ssisstr
select @returncode

Now let us look at the variable-passing structure of the "dtexec" command:

/SET \package\DataFlowTask.Variables[User::MyVariable].Value;newValue

Now, @returncode holds the code returned by the "dtexec" command, and the call produces two result sets: the first returns the code (one of the possible values below, indicating the SSIS package status), and the second describes everything that happened during execution of the SSIS package.

Value Description

0 -- The package executed successfully.

1 -- The package failed.

3 -- The package was canceled by the user.

4 -- The utility was unable to locate the requested package. The package could not be found.

5 -- The utility was unable to load the requested package. The package could not be loaded.

6 -- The utility encountered an internal error of syntactic or semantic errors in the command line.
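Putting those return codes to use, a small sketch that continues from the @returncode captured above:

```sql
-- Interpret the dtexec return code captured from xp_cmdshell.
IF @returncode = 0
    PRINT 'SSIS package executed successfully.'
ELSE
    PRINT 'SSIS package failed; dtexec return code = '
          + CAST(@returncode AS varchar(10))
```

In a production procedure you would typically RAISERROR here (or log the second result set) instead of just printing.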

So, in this way we can call an SSIS package from a stored procedure by using the "xp_cmdshell" and "dtexec" commands in SQL Server, and we avoid the problems we may face when calling SSIS from a web service.


"xp_cmdshell" and "dtexec" can be used for much more; the following links describe both commands in detail, with their syntax and usage.

dtexec : MSDN
xp_cmdshell : Database Journal


Hi friends,
I am Chirag Patel, a software engineer who has gained more from the internet community than from traditional teaching like books and college.

.NET was not my field of interest, but I wanted to learn it from scratch. So I decided to learn from books, but that trial failed. Then I searched the internet for a step-by-step guide that could teach me easily and in an interesting way, and that helped me greatly.

So I have decided to spread all this knowledge, which I have gained from you guys, to everyone who wants to learn step by step.

I think this is the TRUE definition of the INTERNET: "Give back to others what you have gained from others."

So, let's start our journey of sharing knowledge with each other.

Here you will find some interesting topics on Microsoft .NET and some step-by-step guides to learning new technologies.

Happy sharing :)
