SQL Server – Monitoring a Rollback


Recently I bumped into an issue where we needed to kill a large insert, which obviously caused SQL Server to perform a rollback. I needed to find out what was going on, so I turned to Pinal Dave’s blog post about finding out which queries are running, which you can find here. It uses the sys.dm_exec_requests and sys.dm_exec_sql_text DMVs.
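For reference, the basic shape of that query is roughly the following (a minimal sketch combining the two DMVs; column aliases are my own):

```sql
--Show currently executing requests with their SQL text and progress
SELECT req.session_id,
       req.status,
       req.command,
       req.percent_complete,
       sqltext.[text] AS running_sql
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS sqltext;
```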

The script below adds a few little things: for a given SPID it tells you the SQL text being executed and, most importantly, the expected finish time.

--Set how long you want the script to run for in HH:MM:SS format.
--The longer you leave it running the more accurate it will be
DECLARE @Delay VARCHAR(20)	= '00:00:10';

--Add your SPID, which you can get from sp_who2
DECLARE @SPID INT			= 206;


DECLARE @StartPercent DECIMAL(18,9),
        @EndPercent DECIMAL(18,9),
        @Increase DECIMAL(18,9),
        @PercentToGo DECIMAL(18,9),
        @DelaySeconds INT,
        @Statement VARCHAR(MAX);

--Set the starting percentage complete
SELECT @StartPercent = req.percent_complete, @Statement = sqltext.TEXT
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
WHERE session_id = @SPID

--Wait for the specified delay time
WAITFOR DELAY @Delay

--Get the end percentage complete
SELECT @EndPercent = req.percent_complete
FROM sys.dm_exec_requests req
WHERE session_id = @SPID

SET @PercentToGo = 100 - @EndPercent;
SET @Increase = @EndPercent - @StartPercent
SET @DelaySeconds = DATEDIFF(SECOND, 0, @Delay)

--Return the output including the expected finish time.
--NULLIF guards against a divide-by-zero when no progress was made during the delay.
SELECT @SPID [SPID],
       @Statement [SQLText],
       @DelaySeconds [DelaySeconds],
       @StartPercent [StartPercent],
       @EndPercent [EndPercent],
       @Increase [Increase],
       DATEADD(SECOND, (@PercentToGo / NULLIF(@Increase, 0)) * @DelaySeconds, GETDATE()) [ExpectedFinishTime];


You can use the same query to monitor the below events (source):

ALTER INDEX REORGANIZE
AUTO_SHRINK option with ALTER DATABASE
BACKUP DATABASE
DBCC CHECKDB
DBCC CHECKFILEGROUP
DBCC CHECKTABLE
DBCC INDEXDEFRAG
DBCC SHRINKDATABASE
DBCC SHRINKFILE
RECOVERY
RESTORE DATABASE
ROLLBACK
TDE ENCRYPTION
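As a quick sketch, you can also eyeball progress for any of these directly from the DMV without the delay-and-compare logic above; note that estimated_completion_time is reported in milliseconds:

```sql
--Quick progress check for any command that reports percent_complete
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 1000.0 AS est_seconds_remaining
FROM sys.dm_exec_requests
WHERE percent_complete > 0;
```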

SQL Server – Join to list of Strings


This one can also be used when passing a multi-value parameter from SSRS to the database.

Firstly you need the DelimitedSplit8K function, which was created and refined by many people such as Itzik Ben-Gan, Lynn Pettis and Jeff Moden. If you don’t know who these people are, go and find out, as there’s some excellent reading to be had. Anyway, you can download the function here: http://www.sqlservercentral.com/articles/Tally+Table/72993/

Here’s some code you can use to see it in action: 

DECLARE @IntsAsStrings VARCHAR(100) = '1,2,3,4,5,6,7,8,9';

DECLARE @TestTable TABLE 
(
	Id INTEGER IDENTITY(1,1),
	ValueString VARCHAR(50)
);

INSERT INTO @TestTable VALUES ('We'),('Are'),('Leeds'),('MOT');

SELECT *
FROM @TestTable t
RIGHT JOIN [dbo].[DelimitedSplit8K](@IntsAsStrings, ',') ss ON ss.ItemNumber = t.Id

You can then pass a string into your proc and, at the top, split it out into a table, or join directly onto the split function’s results (as it’s a Table Valued Function (TVF)).
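As a sketch of that pattern (dbo.Orders, its OrderId column, and the proc name are hypothetical; DelimitedSplit8K returns ItemNumber and Item columns, with Item being a string, hence the CONVERT):

```sql
--Hypothetical proc: @Ids arrives as a comma-separated string,
--e.g. from an SSRS multi-value parameter
CREATE PROCEDURE dbo.GetOrdersByIds
    @Ids VARCHAR(8000)
AS
BEGIN
    SELECT o.*
    FROM dbo.Orders o
    INNER JOIN dbo.DelimitedSplit8K(@Ids, ',') s
        ON o.OrderId = CONVERT(INT, s.Item);
END
```

You would then call it as, say, `EXEC dbo.GetOrdersByIds @Ids = '1,4,7';`.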

Merge – Incremental Soft Delete


Sometimes we may have situations where we only load a day’s worth of data into our data warehouse. We may also need to keep the destination data in line with the incoming data.

For instance, imagine we had a shopping basket filled with items one day, which we load into our data warehouse, but the next day (for whatever reason) two of the products are no longer valid. How can we remove these records without deleting all of the other values in our data warehouse for our other trolleys that aren’t in the incoming data set?

The answer is with an incremental MERGE: 

--Declare temp tables to store data
DECLARE @Target TABLE
(
PackageId INTEGER,
JournalId INTEGER,
Amount MONEY,
IsDeleted BIT DEFAULT 0
)


DECLARE @Source TABLE
(
PackageId INTEGER,
JournalId INTEGER,
Amount MONEY
)

--Insert some dummy data
INSERT INTO @Source VALUES (1,1,10),(1,2,20),(1,3,30),(2,1,40),(2,2,50)

--Now merge it into the destination table.
MERGE INTO @Target AS TARGET
	USING
	@Source AS SOURCE
	ON TARGET.PackageId = SOURCE.PackageId
	AND TARGET.JournalId = SOURCE.JournalId

--When we match on the merge key, then update the destination table with the new value
WHEN MATCHED THEN
	UPDATE SET
		TARGET.Amount = Source.Amount

--When we don't match, then we're a new record, so insert the new record into the destination
WHEN NOT MATCHED BY TARGET THEN
	INSERT
	(
	PackageId,
	JournalId,
	Amount
	)
	VALUES
	(
	SOURCE.PackageId,
	SOURCE.JournalId,
	SOURCE.Amount
	);

--Now show our results.
SELECT * FROM @Source;
SELECT * FROM @Target;


--Now let's get rid of some data and carry out an update to show how we would handle a merge for
--an incremental load
DELETE FROM @Source WHERE PackageId = 1 OR (PackageId = 2 AND JournalId = 1);
UPDATE @Source SET Amount = Amount + 20

--Use a CTE to limit the destination rows we want to match against, because if we've only loaded
--one day's worth of values in from our source system, we could end up deleting lots of data by accident
;WITH tgtCTE AS
(
SELECT T.*
FROM @Source S
INNER JOIN @Target T ON T.PackageId = S.PackageId
)


MERGE INTO tgtCTE AS TARGET
	USING @Source AS SOURCE
	ON TARGET.PackageId = SOURCE.PackageId
	AND TARGET.JournalId = SOURCE.JournalId

--Again, update any existing records
WHEN MATCHED THEN
	UPDATE SET
		TARGET.Amount = Source.Amount

--Again, insert any new records
WHEN NOT MATCHED BY TARGET THEN
	INSERT
	(
	PackageId,
	JournalId,
	Amount
	)
	VALUES
	(
	SOURCE.PackageId,
	SOURCE.JournalId,
	SOURCE.Amount
	)

--Here is the interesting part; we're going to soft delete any records from the
--destination table that do not exist in the incoming data set.
WHEN NOT MATCHED BY SOURCE THEN UPDATE SET TARGET.IsDeleted = 1;


--Show our final results. As you can see in the second merge we only had 1 record for PackageId = 2
--the other JournalId didn't exist in the source table, so it was soft deleted from the destination.
SELECT * FROM @Source;
SELECT * FROM @Target;

Go ahead and run the above code and see how it all works. Hopefully this will help you on your way.

Paul White gives a great example of this technique here on DBA.StackExchange.

SQL Server – Find all non empty tables


So there are a couple of ways to do this: one is to go to every table in your database and check whether there is data; the other is to go to the indexes and check the row counts. Here I’ll show you how to do both.

1: hit every table (don’t do this in a production environment or your DBAs will go mental):

IF OBJECT_ID('tempdb..#Tables') IS NOT NULL
BEGIN
	DROP TABLE #Tables;
END

--Create some temp tables and declare variables
CREATE TABLE #Tables
(
TableId INTEGER IDENTITY(1,1),
TableName VARCHAR(100),
HasRows BIT,
Processed BIT
);


DECLARE @Result TABLE
(
Result INT
);

DECLARE @TableCount INT = 0, @i INT = 1;

DECLARE @CurrentTable NVARCHAR(150);

--Get the name of each table in the database wrapped in square brackets.
INSERT INTO #Tables (TableName, HasRows, Processed)
SELECT '[' + s.name + '].[' + o.name + ']', 0, 0
FROM sys.objects o
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE type = 'u'
--AND o.name IN (N'Tbl_Name','Tbl_OtherTable');

SELECT @TableCount = COUNT(1) FROM #tables;

--Loop through all of the tables and check if there are rows
WHILE @i <= @TableCount
BEGIN

SELECT @CurrentTable = 'SELECT TOP 1 1 FROM ' + TableName FROM #Tables WHERE TableId = @i;

--PRINT @CurrentTable;

INSERT INTO @Result EXEC sp_executesql @CurrentTable;

--Populate the result table
UPDATE t SET t.HasRows = Result
FROM #Tables t
INNER JOIN @Result r ON t.TableId = @i;

SET @i = @i + 1;
DELETE FROM @Result;

END

--Get the rows we're interested in
SELECT *
FROM #Tables
WHERE HasRows = 1


The other way is to hit the indexes and check the row counts:

SELECT DISTINCT s.name [Schema], o.name [Table]--, i.name [IndexName], p.[rows] [RowCount]
FROM sys.objects o
INNER JOIN sys.schemas s	ON s.schema_id = o.schema_id
INNER JOIN sys.indexes i	ON i.object_id = o.object_id
INNER JOIN sys.partitions p ON p.index_id = i.index_id
								AND p.object_id = i.object_id
WHERE o.type = 'U'
AND p.[rows] > 0
ORDER BY s.name, o.name


You can remove the DISTINCT and uncomment i.name and p.rows to get the count of rows in each index.
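A similar approach (assuming you have VIEW DATABASE STATE permission; this variant is my own, not from the original post) is to use sys.dm_db_partition_stats, summing over heap and clustered-index partitions only so each table is counted exactly once:

```sql
--Non-empty user tables via partition stats
SELECT s.name AS [Schema],
       o.name AS [Table],
       SUM(ps.row_count) AS [RowCount]
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.objects o ON o.object_id = ps.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.type = 'U'
AND ps.index_id IN (0, 1) --0 = heap, 1 = clustered index
GROUP BY s.name, o.name
HAVING SUM(ps.row_count) > 0
ORDER BY s.name, o.name;
```

Note that, as with sys.partitions, these counts are approximate under concurrent activity, which is fine for a "which tables have data" check.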


MSBuild – Missing values for the following SqlCmd variables


Today our automated build broke with a strange error:

Missing values for the following SqlCmd variables: DW

We have a couple of references to other databases in our build specifically to deal with warnings thrown by temporary cross database references. One of which is master and the other is another database on the same server:

[image: refs1]

The build was failing due to the reference to the DW database. I removed the reference and added it back in to see if that made a difference, and checked that it matched what was in live; no dice, still broken.

What I’d overlooked was the project properties. When a new database reference is added it automatically comes with a new SQLCMD variable that you can set, which looks like this:

[image: refs2]

Having this reference was causing the Build to try and use it, which wasn’t required. Simply removing this variable solved the problem, as our build server already has a copy of the referenced database.


Scripting DROP and CREATE SQL Server Agent Jobs


Something to watch out for..

When scripting a SQL Server Agent job as DROP and CREATE in SSMS, watch out: the generated script uses the job name to check whether the job exists, but the job_id to delete it. If you want your scripts to be re-runnable, make sure you change the generated script as per the below.

Before: notice that @job_id is used. When you drop this job and recreate it, a new job_id is generated, meaning you can’t run your script twice.

/****** Object:  Job [SSIS_Framework_Demand_Load_Full]    Script Date: 07/11/2016 15:49:17 ******/
IF  EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'SSIS_Framework_Demand_Load_Full')
EXEC msdb.dbo.sp_delete_job @job_id=N'd0370c2e-6fae-4717-8726-8f481a3b47f1', @delete_unused_schedule=1
GO

Make sure you change your code to use the job_name:

/****** Object:  Job [SSIS_Framework_Demand_Load_Full]    Script Date: 07/11/2016 15:49:17 ******/
IF  EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'SSIS_Framework_Demand_Load_Full')
EXEC msdb.dbo.sp_delete_job @job_name=N'SSIS_Framework_Demand_Load_Full', @delete_unused_schedule=1
GO

Good luck!

Login failed for user ‘NT AUTHORITY\ANONYMOUS USER’


I needed to manually run a project deployment model SSIS package (on box A) that talks to a SQL Server instance on another machine (box B), the package runs a stored procedure and returns some results. Box A and Box B are not linked servers, however the SQL Agent service account and the SQL Server service account from Box A both have permissions to execute the procedure on box B.

When I run the package manually through the SSIS project

[image: Anonymous_User]

it fails with the below error:

[image: Anonymous_User2]

What’s going on? I’ve got permission to run the stored procedure on Box B as well, but my user seems to be getting replaced with NT AUTHORITY\ANONYMOUS LOGON. This is the classic double-hop problem: SQL Server needs credential delegation to be enabled for my identity to reach Box B. You can handle this either through a linked server or, as I did, by creating a SQL Agent job to handle the delegation for me.

There’s more learning to do here, as impersonation may be able to handle the delegation instead, but for the purposes of recording how this error message relates to the problem, I think this suffices for the moment and will allow further investigation.
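One quick diagnostic (a standard check for the double-hop problem, not something from the original investigation) is to see whether your connection authenticated with Kerberos or NTLM; delegation only works under Kerberos, so an NTLM connection from Box A cannot forward your credentials to Box B:

```sql
--Returns KERBEROS or NTLM for the current connection
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```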