Application Tuning vs Database Tuning

Recently, there was a debate in which application developers and database developers were in a tussle, pointing fingers at each other over a petty performance issue 🙂

Here are a few pointers from the debate and the initial investigation:

  • Application Developer: There is a poorly performing query when it is executed from the application. The database developers need to analyse and fix it.
  • Database Developer: The query, when executed directly against the database, works fine and returns the relevant results quickly.
  • Database Administrator: The query pulled from the plan cache is parameterized and goes for a full table scan. There are indexes defined on the predicate columns, but they are not getting picked up!

This is what the developers saw when executing the query from Management Studio.

[Image: DeveloperPlan]

This is the execution plan generated when the query is executed from the application.

[Image: ApplicationPlan]

Clearly, looking at the highlighted operators in the plans, we can see that the query executed by the application goes for a Clustered Index Scan as opposed to a Clustered Index Seek.

Why this difference?

It boils down to the parameter data types!

The application developer passed a parameter of “String” type in the filter predicate, against a column of type VARCHAR; a .NET “String” parameter is sent to SQL Server as NVARCHAR by default. Hence, to match the data types on both sides of the equality operator, the VARCHAR column is implicitly converted to NVARCHAR. If we hover over the Clustered Index Scan operator, we will see CONVERT_IMPLICIT being used to perform the conversion (see highlighted below).

[Image: ApplicationScan]

On the other hand, when the same query is executed from Management Studio, it uses an Index Seek as no implicit conversion is required.

[Image: DeveloperSeek]
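The behaviour can be reproduced entirely from T-SQL. Below is a minimal sketch (the table and column names are illustrative, not the ones from the original issue): the first sp_executesql call mimics the application by declaring the parameter as NVARCHAR, the second declares it as VARCHAR to match the column.

```sql
-- Hypothetical table with a VARCHAR key (names are illustrative)
CREATE TABLE dbo.tblCustomer
(
    cust_code VARCHAR(20) NOT NULL PRIMARY KEY CLUSTERED,
    cust_name VARCHAR(100) NULL
);
GO

-- NVARCHAR parameter (what a .NET "String" becomes by default):
-- the VARCHAR column gets a CONVERT_IMPLICIT, so the plan scans.
EXEC sp_executesql
     N'SELECT cust_name FROM dbo.tblCustomer WHERE cust_code = @code',
     N'@code NVARCHAR(20)',
     @code = N'C001';

-- VARCHAR parameter matching the column type: the plan can seek.
EXEC sp_executesql
     N'SELECT cust_name FROM dbo.tblCustomer WHERE cust_code = @code',
     N'@code VARCHAR(20)',
     @code = 'C001';
```

Comparing the two actual execution plans side by side shows the scan vs seek difference without involving the application at all.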

Possible application fix: Application developers should explicitly describe the types of the parameters passed in the query. In .NET APIs, parameters can be described using the SqlParameter class, which holds not only the parameter name and value but also the parameter data type, length, precision, and scale.

 

References:

How data access code impacts database performance
SqlParameter Class

ERROR 17892: Database login issue due to a logon trigger

I was working with logon triggers and got stuck on a very interesting issue. I created a trigger named “Logon_trigger” that writes auditing information to an audit table in a database named “TestDB”. Once I finished my testing, I dropped the database [TestDB] without dropping the trigger. Unfortunately, when I tried to connect through SSMS after that, I got the following error:

[Image: error]

I thought of connecting through the DAC (Dedicated Administrator Connection) to drop the trigger and rectify the issue. Interestingly, I was not able to connect through the DAC either and got the following error:

[Image: DAC error]

Here is how I rectified the issue:
Connected to SQL Server Configuration Manager: Start >> All Programs >> SQL Server 2008 R2 >> Configuration Tools >> SQL Server Configuration Manager.
Added a startup trace flag to enable the DAC (-T7806), as shown in the screen below, and then restarted the service:

[Image: services]

Tried to connect using the DAC again and executed the commands below:

[Image: connected]
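For reference, the command executed over the DAC was along these lines (sketched from the trigger name mentioned above; the exact text is in the screenshot):

```sql
-- Drop the server-level logon trigger that referenced the dropped database
DROP TRIGGER Logon_trigger ON ALL SERVER;
```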

Connected using SSMS successfully this time 🙂

List jobs residing on a particular database

The query below comes in very handy when you need to list all the jobs that run against a specific database.

DECLARE @db_name VARCHAR(100);

SET @db_name = 'My_Database'; -- Change your database name here

SELECT DISTINCT database_name [Database Name],
                name          [Job Name],
                js.job_id     [Job ID]
FROM   msdb.dbo.sysjobsteps js
       INNER JOIN msdb.dbo.sysjobs_view jv
               ON js.job_id = jv.job_id
WHERE  database_name = @db_name;

If you need the complete list of jobs on a server along with the databases they run against, the query below is helpful.

SELECT DISTINCT database_name [Database Name],
                name          [Job Name],
                js.job_id     [Job ID]
FROM   msdb.dbo.sysjobsteps js
       INNER JOIN msdb.dbo.sysjobs_view jv
               ON js.job_id = jv.job_id
ORDER  BY database_name;
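Note that database_name in sysjobsteps only records the database a step runs in. Jobs that merely reference the database inside their step command text can be caught with a LIKE filter on the command column; a rough sketch (prone to false positives on substring matches):

```sql
DECLARE @db_name VARCHAR(100) = 'My_Database';  -- Change your database name here

SELECT DISTINCT jv.name   [Job Name],
                js.job_id [Job ID]
FROM   msdb.dbo.sysjobsteps js
       INNER JOIN msdb.dbo.sysjobs_view jv
               ON js.job_id = jv.job_id
WHERE  js.command LIKE '%' + @db_name + '%';
```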

Myth: Truncate cannot rollback

There are plenty of reasons people give when asked whether TRUNCATE can be rolled back. Some say it cannot, as it is a DDL (Data Definition Language) statement and DDL statements are not allowed in explicit transactions. Another view is that, because TRUNCATE does not log individual row deletions in the transaction log, it cannot be rolled back. And many other reasons!
The truth is that TRUNCATE can be rolled back! TRUNCATE is in fact logged: it logs the page deallocations rather than the individual rows, which is why it is both fast and able to roll back.
Let us try to prove this fact. The code snippet below does the following:
1) Creates table tblTruncate
2) Inserts a few values [I have inserted 3 values]
3) Truncates the table inside an explicit transaction

CREATE TABLE tblTruncate
(
id INT
)
GO

INSERT INTO tblTruncate(id) VALUES (1),(2),(3)
GO
-- Open transaction
BEGIN TRAN
-- Beginning of the transaction
PRINT '*********Inside transaction***************'
PRINT ' '
SELECT 'No of records before truncate - ' + CONVERT(CHAR(2), COUNT(*))
FROM   tblTruncate
PRINT '*********Truncating table***************'
PRINT ' '
--Truncating table tblTruncate
TRUNCATE TABLE tblTruncate

SELECT 'No of records after truncate - ' + CONVERT(CHAR(2), COUNT(*))
FROM   tblTruncate

-- Transaction rollback
ROLLBACK TRAN
-- Transaction closed
PRINT '*********Outside transaction***************'
PRINT ' '

SELECT 'No of records after rollback - ' + CONVERT(CHAR(2), COUNT(*))
FROM   tblTruncate

Here is the output log:

*********Inside transaction***************

No of records before truncate - 3

*********Truncating table***************

No of records after truncate - 0

*********Outside transaction***************

No of records after rollback - 3

Clearly, we can see that all three records truncated from table tblTruncate were present after the rollback.
Taking another example: here we will create a table inside a transaction, roll the transaction back, and check whether the table exists afterwards.

-- Pre-condition to check if table exists
IF OBJECT_ID('table_inside_transaction') IS NOT NULL
PRINT 'Table exists!'
ELSE
PRINT 'Table does not exist!'
Print '***** Transaction begins *****'
print ''

--Open transaction
BEGIN TRAN

CREATE TABLE table_inside_transaction
(
id INT
)

-- Checking if table is created inside transaction
IF OBJECT_ID('table_inside_transaction') IS NOT NULL
PRINT 'Table exists!'
ELSE
PRINT 'Table does not exist!'

ROLLBACK TRAN
-- Close transaction in rollback mode

Print '***** Transaction closed *****'
print ''
-- Checking if table exists
IF OBJECT_ID('table_inside_transaction') IS NOT NULL
PRINT 'Table exists!'
ELSE
PRINT 'Table does not exist!'

Here is the output log generated:

Table does not exist!

***** Transaction begins *****

Table exists!

***** Transaction closed *****

Table does not exist!

Again, it is very clear that DDL statements can be rolled back. Is it safe to conclude at this stage that every DDL statement can be executed within a transaction?
The answer is NO!
All DDL statements can be executed inside a transaction except the following:

  • CREATE/ALTER/DROP DATABASE
  • CREATE/ALTER/DROP FULLTEXT INDEX
  • CREATE/ALTER/DROP FULLTEXT CATALOG
  • RECONFIGURE
  • BACKUP/RESTORE
  • KILL
  • Some DBCC Commands
  • DDL statements related to memory-optimized tables (SQL 2014)
  • Natively compiled Stored Procedures
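As a quick illustration of the restriction, attempting one of these statements inside an explicit transaction fails immediately (sketched against the TestDB database used earlier; the exact message text may vary by version):

```sql
BEGIN TRAN
-- Raises an error along the lines of:
-- "ALTER DATABASE statement not allowed within multi-statement transaction."
ALTER DATABASE TestDB SET RECOVERY SIMPLE;
ROLLBACK TRAN
```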

Another thing worth mentioning is that UPDATE STATISTICS can be used inside an explicit transaction, but it cannot be rolled back, the reason being that UPDATE STATISTICS commits independently of the transaction in which it is enclosed.

Hope this post was helpful and sets the ground to bust myths around DDL inside transactions.

Happy Learning
Lokesh Vij

ISDATE() revisited

ISDATE is a wonderful function used to test whether the underlying value is a valid date. I have often seen developers blink in shock at ISDATE’s non-deterministic behaviour. I hope this small presentation will help developers use ISDATE with care.

What does this function do?

The ISDATE function returns 1 if the value/expression under test is a valid datetime and returns 0 for all other values.

Syntax

ISDATE (value/expression under test)

Let us now take a deep dive into the application of this function. Though it would be easy to simply list what is a valid date (when the function returns 1) and what is an invalid one (when the function returns 0), it is better to apply the function and check how it behaves in different scenarios (built over the application sets below, with varying complexity). These will build up our understanding very well.

Application Set – 1

Below is the list of queries where ISDATE is applied to different expressions. For all these cases, assume that DATEFORMAT is set to DMY and the language is set to English (US).

SELECT ISDATE('15/01/2012') -- Result returned = 1
SELECT ISDATE('15-01-2012') -- Result returned = 1
SELECT ISDATE('15-2012-01') -- Result returned = 1
SELECT ISDATE('01-15-2012') -- Result returned = 0
SELECT ISDATE('01-2012-15') -- Result returned = 0
SELECT ISDATE('2012-01-15') -- Result returned = 0
SELECT ISDATE('2012-15-01') -- Result returned = 1

Going by the DMY format alone, it would seem that the return value should be 1 for #1 and #2 and 0 for all the others. But this is not the case!

#3 and #7 also return 1. Why?

Because SQL Server can implicitly convert the character strings in #3 and #7 into valid datetime values, while for the others (#4, #5 and #6) the implicit conversion fails with an out-of-range error. To demonstrate this, try executing the following code snippet in SSMS.

PRINT '@DT1'
DECLARE @DT1 DATETIME = '15/01/2012';

PRINT '@DT2'
DECLARE @DT2 DATETIME = '15-01-2012';

PRINT '@DT3'
DECLARE @DT3 DATETIME = '15-2012-01';

PRINT '@DT4'
DECLARE @DT4 DATETIME = '01-15-2012';

PRINT '@DT5'
DECLARE @DT5 DATETIME = '01-2012-15';

PRINT '@DT6'
DECLARE @DT6 DATETIME = '2012-01-15';

PRINT '@DT7'
DECLARE @DT7 DATETIME = '2012-15-01';

You will find the following error message for @DT4, @DT5 and @DT6:
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.

Application Set – 2

Let us re-execute the same set of statements from Application Set – 1, but this time with DATEFORMAT set to MDY (SET DATEFORMAT MDY):

SELECT ISDATE('15/01/2012') -- Result returned = 0
SELECT ISDATE('15-01-2012') -- Result returned = 0
SELECT ISDATE('15-2012-01') -- Result returned = 0
SELECT ISDATE('01-15-2012') -- Result returned = 1
SELECT ISDATE('01-2012-15') -- Result returned = 1
SELECT ISDATE('2012-01-15') -- Result returned = 1
SELECT ISDATE('2012-15-01') -- Result returned = 0

The return values of the function are just the opposite. This does not need much explanation, but it is worth stating that

the response of the ISDATE function depends upon the current DATEFORMAT setting

Application Set – 3

With DATEFORMAT still MDY, let us change the language setting to something other than English (US) and check the result. For example, change the language to Italian (SET LANGUAGE Italian):

SELECT ISDATE('15/01/2012') -- Result returned = 1
SELECT ISDATE('15-01-2012') -- Result returned = 1
SELECT ISDATE('15-2012-01') -- Result returned = 1
SELECT ISDATE('01-15-2012') -- Result returned = 0
SELECT ISDATE('01-2012-15') -- Result returned = 0
SELECT ISDATE('2012-01-15') -- Result returned = 0
SELECT ISDATE('2012-15-01') -- Result returned = 1

Again we notice that this has an impact on the ISDATE result. Hence it is worth stating that

the response of the ISDATE function depends upon the current language setting

Application Set – 4

Let us check the behaviour of ISDATE when the expression is of the following types (DATEFORMAT being DMY and language being English (US)):

  • Varchar
  • NVarchar
  • Datetime
  • Datetime2
  • Date
  • Null expression

Here is the code snippet used to check this:

SET DATEFORMAT DMY;
SET LANGUAGE English;
-- When expression => VARCHAR
DECLARE @dt_varchar VARCHAR(10) = '15-01-2012'
SELECT ISDATE(@dt_varchar)
GO
-- When expression => N'VARCHAR
DECLARE @dt_NVarchar NVARCHAR(10) = N'15-01-2012'
SELECT ISDATE(@dt_NVarchar)
GO
-- When expression => NULL
DECLARE @dt_Null DATETIME = NULL
SELECT ISDATE(@dt_Null)
GO
-- When expression => DATETIME
DECLARE @dt_DateTime DATETIME = '15-01-2012 00:00:00.000'
SELECT ISDATE(@dt_DateTime)
GO
-- When expression => DATETIME2
DECLARE @dt_DateTime2 DATETIME2 = '15-01-2012 00:00:00.0000000'
SELECT ISDATE(@dt_DateTime2)
GO
-- When expression => DATE
DECLARE @dt_Date DATE = '15-01-2012'
SELECT ISDATE(@dt_Date)
GO

Here is the result:

| # | EXPRESSION | RETURN VALUE            |
--------------------------------------------
| 1 | VARCHAR    | 1                       |
| 2 | NVARCHAR   | 1                       |
| 3 | NULL       | 0                       |
| 4 | DATETIME   | 1                       |
| 5 | DATETIME2  | ERROR: Invalid argument |
| 6 | DATE       | ERROR: Invalid argument |

From the above, the behaviour of ISDATE is now very clear. It is also clear that the DATETIME2 and DATE data types are not valid arguments for the ISDATE function. But care should be taken when a datetime2 value is passed directly as a character string in the ISDATE argument: here the return value is 0 rather than an error.

SELECT ISDATE('15-01-2012 00:00:00.0000000')
-- Returns 0

In a nutshell, we can summarize our discussion with the following points:

  • The behaviour of the ISDATE function depends on the current DATEFORMAT and language settings
  • It returns 1 when the string argument can be implicitly converted into a valid datetime value. Increasing the seconds precision of the same string to represent the datetime2 format (i.e. when the seconds precision exceeds 3 digits) makes it return 0.
  • Date and Datetime2 arguments are not allowed; they result in an invalid argument error
  • A NULL expression also results in a return value of 0
  • Values of the text, ntext and image data types also return 0
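As an aside, on SQL Server 2012 and later TRY_CONVERT offers a stricter alternative: it returns the converted value on success and NULL on failure instead of a 1/0 flag, and passing an explicit style number makes the test independent of the session's DATEFORMAT and language settings. A small sketch:

```sql
-- Style 103 = dd/mm/yyyy (British/French), style 126 = ISO 8601.
-- Each call returns the datetime on success, NULL on failure,
-- regardless of the session's DATEFORMAT or language.
SELECT TRY_CONVERT(DATETIME, '15/01/2012', 103);          -- valid
SELECT TRY_CONVERT(DATETIME, '2012-01-15T00:00:00', 126); -- valid
SELECT TRY_CONVERT(DATETIME, '01-2012-15', 103);          -- NULL
```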

I hope you enjoyed reading this post and have committed a few facts to memory.

The motivation to write this post came from SQLServerCentral Question of the Day. Here is the link to the question posted by me –> link

Happy Learning
Lokesh Vij

How to acquire a table lock for a specific time?

Recently one of my colleagues asked me how to acquire an exclusive lock on a database table for a specific amount of time, with the lock being released once the specified time expires.

You need two components to accomplish this task:
1) A table hint – that puts an exclusive lock on the table
2) A delay element – that holds the lock for the specified amount of time

1) Table hints
The table hint TABLOCKX can be used to acquire an exclusive lock on the table.

 
SELECT *
FROM <table_name> 
WITH (TABLOCKX)

2) Delay element
A WAITFOR statement inside the transaction block can be used to introduce the delay. The following sample code snippet introduces a delay of 10 seconds:

 
WAITFOR DELAY '00:00:10' 

Here is the code snippet that creates a test table, inserts some data, and places an exclusive lock on the table for 10 seconds:

 
CREATE TABLE test_table
  (
     col1 INT
  ) 
GO 

INSERT test_table
VALUES (1)
GO

DECLARE @delay VARCHAR(8) = '00:00:10'

BEGIN TRAN

SELECT *
FROM   test_table WITH (TABLOCKX)
WHERE  col1 = '1'

WAITFOR DELAY @delay

ROLLBACK TRAN
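While the WAITFOR delay is running, the exclusive lock can be observed from a second session via the sys.dm_tran_locks DMV (for OBJECT resources, resource_associated_entity_id holds the object id):

```sql
-- Run from another session while the transaction above is waiting;
-- expect a granted X (exclusive) request_mode row for the locking session
SELECT resource_type, request_mode, request_status, request_session_id
FROM   sys.dm_tran_locks
WHERE  resource_type = 'OBJECT'
       AND resource_associated_entity_id = OBJECT_ID('test_table');
```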

Hope you enjoyed reading this post.

Happy Learning
Lokesh Vij

LIST Partitioning in SQL Server

SQL Server offers only range partitioning for tables, but what if I want to partition my table on the basis of a list of values? Is it not doable?

The answer is both Yes and No: Yes because SQL Server offers only range-based partitioning, and No because there is a trick to partition the table on a list indirectly.

Let us not talk much and get straight into action to see how list partitioning works.

The code snippet below creates a simple table that we would like to partition on the basis of a list of values.

CREATE TABLE list_partition
(
id INT IDENTITY (1, 1),
value CHAR(1) NOT NULL
)

INSERT INTO list_partition
( value )
VALUES ( 'A' ),
( 'B' ),
( 'C' ),
( 'D' ),
( 'A' ),
( 'B' ),
( 'A' ),
( 'E' ),
( 'F' ),
( 'G' )

SELECT *
FROM list_partition
| ID | VALUE |
--------------
|  1 |     A |
|  2 |     B |
|  3 |     C |
|  4 |     D |
|  5 |     A |
|  6 |     B |
|  7 |     A |
|  8 |     E |
|  9 |     F |
| 10 |     G |

Ideally I want to assign the values ‘A’ and ‘C’ to, say, partition 1, ‘B’ to partition 2 and anything else to partition 3. Directly using a range partition function over this value column would send value ‘B’ to partition 1, since it falls between ‘A’ and ‘C’, which belong to partition 1. To get around this, we need to create a computed, persisted column in the table. The computed column gives us an indicator, based on logic we define, of which partition each row should be assigned to.

The code snippet below adds a computed, persisted column with logic to generate a new value that can be fed to the partition range function. The idea is to generate the same indicator for all the values that we want to put in one partition.

ALTER TABLE list_partition
ADD partition_flag AS ( CASE value WHEN 'A' THEN 1 WHEN 'B' THEN 2 WHEN 'C' THEN 1 ELSE 3 END ) PERSISTED

SELECT *
FROM list_partition
| ID | VALUE | PARTITION_FLAG |
-------------------------------
|  1 |     A |              1 |
|  2 |     B |              2 |
|  3 |     C |              1 |
|  4 |     D |              3 |
|  5 |     A |              1 |
|  6 |     B |              2 |
|  7 |     A |              1 |
|  8 |     E |              3 |
|  9 |     F |              3 |
| 10 |     G |              3 |

Let us check the number of partitions in the current table (obviously it will be one):

SELECT partition_id, object_id, index_id, partition_number, rows
FROM   sys.partitions
WHERE  object_id = OBJECT_ID('list_partition')
|      PARTITION_ID |  OBJECT_ID | INDEX_ID | PARTITION_NUMBER | ROWS |
-----------------------------------------------------------------------
| 72057595266007040 | 1668917017 |        0 |                1 |   10 |

The code snippet below does the following:
1) Creates the partition range function; it creates three partition ranges:
Partition 1 – values less than or equal to 1
Partition 2 – values greater than 1 and less than or equal to 2
Partition 3 – values greater than 2
2) Creates the partition scheme and attaches all the partitions to the PRIMARY filegroup

CREATE PARTITION FUNCTION pf_list_partition (INT)
AS RANGE LEFT
FOR VALUES (1, 2);

CREATE PARTITION SCHEME ps_list_partition
AS PARTITION pf_list_partition ALL TO ([PRIMARY])

--Partition scheme 'ps_list_partition' has been created successfully. 
--'PRIMARY' is marked as the next used filegroup in partition scheme 'ps_list_partition'.

Having reached here, in order to partition the table we need to create a clustered index on the table using the partition scheme over partition_flag, as shown below:

CREATE CLUSTERED INDEX CI_list_partition ON list_partition
(
id ASC
)
ON ps_list_partition ( partition_flag )

Let us take a look at the number of partitions now:

SELECT partition_id, object_id, index_id, partition_number, rows
FROM   sys.partitions
WHERE  object_id = OBJECT_ID('list_partition')
|      PARTITION_ID |  OBJECT_ID | INDEX_ID | PARTITION_NUMBER | ROWS |
-----------------------------------------------------------------------
| 72057595266072580 | 1668917017 |        1 |                1 |    4 |
| 72057595266138110 | 1668917017 |        1 |                2 |    2 |
| 72057595266203650 | 1668917017 |        1 |                3 |    4 |

It is clear that there are three partitions with the corresponding row counts.
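We can also verify which partition each row landed in using the $PARTITION function against the partition function created above:

```sql
-- Map every row to its partition number via pf_list_partition
SELECT value,
       partition_flag,
       $PARTITION.pf_list_partition(partition_flag) AS partition_number
FROM   list_partition
ORDER  BY partition_number;
```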

Wow… isn’t it a simple trick 🙂

Hope you enjoyed reading and ready to apply this!

Happy Learning
Lokesh Vij

Date sequence generator using CTE

Today we will learn how to generate a sequence of dates. A “date sequence generator” has a wide variety of uses; we will touch upon these in the next blog.

Before getting into generating a sequence of dates, let us first generate a number sequence. Here is an example that generates a sequence from 1 to 10:

DECLARE @start INT = 1
DECLARE @end INT = 10

;WITH num_seq
AS
(
 SELECT @start seq
 UNION ALL
 SELECT seq + 1
 FROM num_seq
 WHERE seq + 1 <= @end
 )
SELECT *
FROM num_seq
| SEQ |
-------
|   1 |
|   2 |
|   3 |
|   4 |
|   5 |
|   6 |
|   7 |
|   8 |
|   9 |
|  10 |

Now let us expand this concept to generate a sequence of dates. The following example shows how to generate a date sequence from 01-JAN-2012 to 10-JAN-2012:

DECLARE @start DATETIME = '01-JAN-2012'
DECLARE @end DATETIME = '10-JAN-2012'

;WITH date_seq
AS
(
 SELECT @start seq
 UNION ALL
 SELECT seq + 1
 FROM date_seq
 WHERE seq + 1 <= @end
 )
SELECT *
FROM date_seq
| SEQ |
-----------------------------
| January, 01 2012 00:00:00 |
| January, 02 2012 00:00:00 |
| January, 03 2012 00:00:00 |
| January, 04 2012 00:00:00 |
| January, 05 2012 00:00:00 |
| January, 06 2012 00:00:00 |
| January, 07 2012 00:00:00 |
| January, 08 2012 00:00:00 |
| January, 09 2012 00:00:00 |
| January, 10 2012 00:00:00 |
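One caveat worth noting: a recursive CTE is limited to 100 recursion levels by default, so a sequence longer than roughly 100 dates fails with a max-recursion error unless a MAXRECURSION hint is added to the outer query:

```sql
-- Same date_seq CTE as above; the hint goes on the final SELECT
SELECT *
FROM date_seq
OPTION (MAXRECURSION 0);  -- 0 removes the recursion limit
```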

Isn’t it very simple!
Let us go a step further and generate a date sequence of all Mondays in the month of January 2012.
To do this I need to get the first Monday that falls between my start and end dates.

Here is the logic to get the Monday of the week containing a given date (@start – taking this from the previous example):

DECLARE @start DATETIME = '01-JAN-2012'
SELECT @start+2-DATEPART(dw,@start) Monday_dt
| MONDAY_DT |
-----------------------------
| January, 02 2012 00:00:00 |

The idea here is that the DATEPART function with ‘dw’ as the datepart returns the weekday. By default DATEPART takes Sunday as the first day of the week, returning the integer value 1 for it, and returns 2 for Monday. Because I am interested in getting Monday, I add an offset of +2 to the given date so that the DATEPART value is cancelled out when the date is already a Monday.

Something like this: @start + 2 (offset value) − 2 (this 2 is returned if @start is a Monday)
Hence what I am left with in the end is @start [when it is a Monday].

Let us take another example, @start = ’01-JAN-2011’ (this date is a Saturday):

DECLARE @start DATETIME = '01-JAN-2011'
SELECT @start+2-DATEPART(dw,@start) Monday_dt
| MONDAY_DT |
------------------------------
| December, 27 2010 00:00:00 |

This logic is consistent: 1st January 2011 being a Saturday, it returns the Monday of that week as 27th December 2010. But the point is that the date sequence should always generate a date (a Monday in our case) that is greater than or equal to the start date. Hence I need to modify my logic.

Here it is: I check whether the weekday is less than or equal to 2; if it is, there is no need to modify the logic; if it is not, I add +7 as an offset to get the next Monday.

DECLARE @start DATETIME = '01-JAN-2011'
SELECT CASE
 WHEN DATEPART(dw, @Start)<=2
 THEN @start +2 -DATEPART(dw,@start)
 ELSE
 @start +2 + 7 -DATEPART(dw,@start)
 END Monday_dt
| MONDAY_DT |
-----------------------------
| January, 03 2011 00:00:00 |

With the logic above, for the same date I now get a Monday that is greater than the start date.

Now embed this logic in the CTE to generate the anchor record, and make a small modification in the recursive part to step forward by an offset of +7 days:

DECLARE @start DATETIME = '01-JAN-2012'
DECLARE @end DATETIME = '31-JAN-2012'

;WITH date_seq
AS
(
 SELECT CASE
 WHEN DATEPART(dw, @Start)<=2
 THEN @start +2 -DATEPART(dw,@start)
 ELSE
 @start +2 + 7 -DATEPART(dw,@start)
 END Monday_seq
 UNION ALL
 SELECT Monday_seq + 7
 FROM date_seq
 WHERE Monday_seq + 7 <= @end
 )
SELECT *
FROM date_seq
| MONDAY_SEQ |
-----------------------------
| January, 02 2012 00:00:00 |
| January, 09 2012 00:00:00 |
| January, 16 2012 00:00:00 |
| January, 23 2012 00:00:00 |
| January, 30 2012 00:00:00 |

Hope you enjoyed reading this post.

Happy Learning
Lokesh

Operating on consecutive rows/columns

Today we will learn how to operate upon two consecutive rows. The logic here can be extrapolated and applied to as many rows as you need.

To start with, suppose there is a process that runs every day and produces a certain quantity of product as output. I need to know by how many products today’s run is leading/trailing yesterday’s.

Sample data:

CREATE TABLE test
(
process VARCHAR(10), prcs_dt DATE, qty INT
);

INSERT INTO test
(process, prcs_dt,qty)
VALUES
('P1',GETDATE(),10),
('P1',GETDATE()+1,20),
('P1',GETDATE()+2,25),
('P1',GETDATE()+3,30),
('P1',GETDATE()+4,40);
 

Data in test table looks like:

| PROCESS |    PRCS_DT | QTY |
------------------------------
|      P1 | 2012-07-22 |  10 |
|      P1 | 2012-07-23 |  20 |
|      P1 | 2012-07-24 |  25 |
|      P1 | 2012-07-25 |  30 |
|      P1 | 2012-07-26 |  40 |

To find out whether today’s output (qty) is leading or trailing yesterday’s, our query should give the difference between the quantities on two consecutive dates.

Thinking strategy: if the qty of the previous run were present in the same row, it would make our life easier; just subtracting the old value from the new one would solve the problem. This can be done by first ordering the runs by prcs_dt and then assigning them sequential row numbers. Once this is done, we can perform a self join on consecutive rows to bring them together (in a single row).

Step-by-step logic
Step 1: Order the data set by prcs_dt and create sequential row numbers:

SELECT ROW_NUMBER() OVER (ORDER BY prcs_dt) seq,
process, prcs_dt, qty
FROM test;

Output:

| SEQ | PROCESS |    PRCS_DT | QTY |
------------------------------------
|   1 |      P1 | 2012-07-22 |  10 |
|   2 |      P1 | 2012-07-23 |  20 |
|   3 |      P1 | 2012-07-24 |  25 |
|   4 |      P1 | 2012-07-25 |  30 |
|   5 |      P1 | 2012-07-26 |  40 |

Step 2: Perform a self join on this result set such that two consecutive rows appear as a single row. To do this, I use a CTE that holds the above result set and then self-join the CTE with the condition that the seq of one side equals the seq + 1 (sequence incremented by 1) of the other:

WITH cte
AS
(
SELECT ROW_NUMBER() OVER (ORDER BY prcs_dt) seq,
process, prcs_dt, qty
FROM test
)
SELECT c1.seq seq1,c1.prcs_dt,c1.qty qty1,
c2.seq seq2,c2.qty qty2
FROM cte c1 LEFT JOIN cte c2
ON c1.seq = c2.seq+1;
 

Output:

| SEQ1 |    PRCS_DT | QTY1 |   SEQ2 |   QTY2 |
----------------------------------------------
|    1 | 2012-07-22 |   10 | (null) | (null) |
|    2 | 2012-07-23 |   20 |      1 |     10 |
|    3 | 2012-07-24 |   25 |      2 |     20 |
|    4 | 2012-07-25 |   30 |      3 |     25 |
|    5 | 2012-07-26 |   40 |      4 |     30 |

Step 3: Simply subtract the previous quantity (qty2) from the current one (qty1) to get the differential result:

WITH cte
AS
(
SELECT ROW_NUMBER() OVER (ORDER BY prcs_dt) seq,
process, prcs_dt, qty
FROM test
)
SELECT c1.seq seq1,c1.prcs_dt,c1.qty qty1,
c2.seq seq2,c2.qty qty2,c1.qty-ISNULL(c2.qty,0) qty_diff
FROM cte c1 LEFT JOIN cte c2
ON c1.seq = c2.seq+1;

Output:

| SEQ1 |    PRCS_DT | QTY1 |   SEQ2 |   QTY2 | QTY_DIFF |
---------------------------------------------------------
|    1 | 2012-07-22 |   10 | (null) | (null) |       10 |
|    2 | 2012-07-23 |   20 |      1 |     10 |       10 |
|    3 | 2012-07-24 |   25 |      2 |     20 |        5 |
|    4 | 2012-07-25 |   30 |      3 |     25 |        5 |
|    5 | 2012-07-26 |   40 |      4 |     30 |       10 |

Kindly observe that for the first row in the output above qty2 is NULL; subtracting NULL from qty1 would yield a NULL value, so I have used the ISNULL function to replace NULL with zero.
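As an aside, on SQL Server 2012 and later the self join can be avoided altogether with the LAG window function, which reads the previous row's value directly; a minimal sketch against the same test table:

```sql
-- LAG(qty) returns the qty of the previous row (by prcs_dt order),
-- or NULL for the first row, which ISNULL again maps to zero
SELECT process, prcs_dt, qty,
       qty - ISNULL(LAG(qty) OVER (ORDER BY prcs_dt), 0) AS qty_diff
FROM   test;
```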

 

Performing SUM over a column of datatype TIME

A few days back one of my colleagues asked me a very interesting problem about adding time. He wanted to know how to add TIME values in the same fashion as we add integers using the SUM() function.

This can be done very easily. Let us first create a table time_store to store values in TIME format [hh:mm:ss.nnnnnnn] and insert some values into this table.


CREATE TABLE time_store
  (
     tme TIME
  )
GO

--Following code loops 5 times
--and inserts time in time_store table with a delay of 5 seconds

WAITFOR DELAY '000:00:5'
INSERT INTO time_store VALUES (CONVERT(TIME,GETDATE()))
GO 5

SELECT * FROM time_store
GO

tme
----------------
02:09:05.8370000
02:09:10.8500000
02:09:15.8670000
02:09:20.8800000
02:09:25.8930000

Adding time is a simple three-step process:
1) Convert the time values to seconds
2) Add all the seconds together
3) Convert the total seconds back into time format [hh:mm:ss]

Below code snippet gives you the result.


SELECT CONVERT(TIME, DATEADD(s, SUM(( DATEPART(hh, tme) * 3600 ) + ( DATEPART(mi, tme) * 60 ) + DATEPART(ss, tme)), 0)) AS total_time
FROM   time_store

total_time
----------------
10:46:15.0000000

To understand, let us walk through the code step by step.

1) Convert time into seconds

Formula: Hours x 3600 (secs) + Minutes x 60 (secs) + Seconds


SELECT tme,
       ( DATEPART(hh, tme) * 3600 ) +
       ( DATEPART(mi, tme) * 60 ) +
         DATEPART(ss, tme) AS tme_in_sec
FROM   time_store

tme              tme_in_sec
---------------- -----------
02:09:05.8370000 7745
02:09:10.8500000 7750
02:09:15.8670000 7755
02:09:20.8800000 7760
02:09:25.8930000 7765

2) Now use the simple SUM() function to add the seconds together


SELECT SUM(
             ( DATEPART(hh, tme) * 3600 ) +
             ( DATEPART(mi, tme) * 60 ) +
               DATEPART(ss, tme)
          )    AS total_sec
FROM   time_store

total_sec
-----------
38775

3) Convert total seconds to time format [hh:mm:ss]

A simple tip to do this is to use the DATEADD() function: add the total seconds to a dummy date. This returns a datetime value. Then clip off the date part by converting the datetime to TIME using the CONVERT() function.

Let us first generate a dummy datetime; this can be done with:


SELECT DATEADD(s, 0, 0) AS dummy_datetime

dummy_datetime
-----------------------
1900-01-01 00:00:00.000

Here we are adding zero seconds to dummy date. If we add total seconds we calculated earlier to this dummy date, it will automatically convert seconds to time format.


DECLARE @total_sec INT    -- declared variable to hold total seconds

SELECT @total_sec  =  SUM(( DATEPART(hh, tme) * 3600 ) +
                          ( DATEPART(mi, tme) * 60 ) +
                            DATEPART(ss, tme))
FROM   time_store

SELECT DATEADD(s, @total_sec, 0) AS total_datetime

total_datetime
-----------------------
1900-01-01 10:46:15.000

Observe the date and time parts: the date part matches that of the dummy date, but the total seconds have been converted into the time portion. Now, to get rid of the date part, convert this datetime to TIME using the CONVERT() function.


SELECT CONVERT(TIME, DATEADD(s, @total_sec, 0)) AS total_time

total_time
----------------
10:46:15.0000000

If you think this is the end, you are wrong; there is a catch here. Let us try this logic on a different set of data and check whether it still works.

With the following new data set, let us apply our code:


tme
----------------
13:06:55.8030000
13:07:00.8300000
13:07:05.8400000
13:07:10.8500000
13:07:15.8600000

DECLARE @total_sec INT

SELECT @total_sec  =  SUM(( DATEPART(hh, tme) * 3600 ) +
                          ( DATEPART(mi, tme) * 60 ) +
                            DATEPART(ss, tme))
FROM   time_store

SELECT CONVERT(TIME, DATEADD(s, @total_sec, 0)) AS total_time

total_time
----------------
17:35:25.0000000

Do you think this is correct?
Absolutely not!!

Let us remove the CONVERT() function from the above code and check the result in datetime format:


SELECT DATEADD(s, @total_sec, 0) AS total_datetime

total_datetime
-----------------------
1900-01-03 17:35:25.000

Clearly observe the date part: the dummy date has been incremented by two days, because the total exceeds 24 hours and CONVERT(TIME, ...) silently discards whole days. With a small modification to the code, we can actually calculate the total time in days plus hours:mins:secs:


SELECT @total_sec / ( 24 * 3600 ) as total_days, convert(time,DATEADD(s, @total_sec, 0)) AS total_datetime

total_days  total_datetime
----------- ----------------
2           17:35:25.0000000

If we apply the same code to the data set chosen earlier, it gives the following result:


total_days  total_datetime
----------- ----------------
0           10:46:15.0000000

Hope you all enjoyed reading this post.

Happy Learning
Lokesh Vij