SQL Server – Monitoring a Rollback

Standard

Recently I bumped into an issue where we needed to kill a large insert, which obviously caused SQL Server to perform a rollback. I needed to find out what was going on, so I turned to Pinal Dave’s blog post about finding out which queries are running, which you can find here. It uses the sys.dm_exec_requests and sys.dm_exec_sql_text DMVs.

The script below adds a few little things: for a given SPID it tells you the SQL text that’s being executed and, most importantly, the expected finish time.

--Set how long you want the script to run for in HH:MM:SS format.
--The longer you leave it running the more accurate it will be
DECLARE @Delay VARCHAR(20)	= '00:00:10';

--Add your SPID, which you can get from sp_who2
DECLARE @SPID INT			= 206;


DECLARE @StartPercent DECIMAL(18,9), @Statement VARCHAR(MAX), @EndPercent DECIMAL(18,9), @DelaySeconds INT, @PercentToGo DECIMAL(18,9), @Increase DECIMAL(18,9);

--Set the starting percentage complete
SELECT @StartPercent = req.percent_complete, @Statement = sqltext.TEXT
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
WHERE session_id = @SPID

--Wait for the specified delay time
WAITFOR DELAY @Delay

--Get the end percentage complete
SELECT @EndPercent = req.percent_complete
FROM sys.dm_exec_requests req
WHERE session_id = @SPID

SET @PercentToGo = 100 - @EndPercent;
SET @Increase = @EndPercent - @StartPercent
SET @DelaySeconds = DATEDIFF(SECOND, 0, @Delay)

--Return the output including the expected finish time
SELECT @SPID [SPID], @Statement [SQLText], @DelaySeconds [DelaySeconds], @StartPercent [StartPercent], @EndPercent [EndPercent], @Increase [Increase], DATEADD(SECOND,(@PercentToGo / NULLIF(@Increase, 0)) * @DelaySeconds, GETDATE()) [ExpectedFinishTime] --NULLIF guards against a divide-by-zero when no progress was made during the delay

 

You can use the same query to monitor the below events (source):

ALTER INDEX REORGANIZE
AUTO_SHRINK option with ALTER DATABASE
BACKUP DATABASE
DBCC CHECKDB
DBCC CHECKFILEGROUP
DBCC CHECKTABLE
DBCC INDEXDEFRAG
DBCC SHRINKDATABASE
DBCC SHRINKFILE
RECOVERY
RESTORE DATABASE
ROLLBACK
TDE ENCRYPTION
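Incidentally, sys.dm_exec_requests also carries SQL Server’s own estimate of the finish time for these operations, which makes a handy cross-check against the script above. A minimal sketch using only standard DMV columns:

```sql
--SQL Server's own progress estimate for any command that reports one.
--estimated_completion_time is in milliseconds and is 0 for commands
--that don't report progress.
SELECT session_id,
       command,
       percent_complete,
       DATEADD(MILLISECOND, estimated_completion_time, GETDATE()) AS [EstimatedFinishTime]
FROM sys.dm_exec_requests
WHERE percent_complete > 0;
```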


SQL Server – Join to list of Strings

Standard

This technique can also be used when passing a multi-value parameter from SSRS to the database.

First you need the DelimitedSplit8K function, which was created and refined by many people, including Itzik Ben-Gan, Lynn Pettis and Jeff Moden. If you don’t know who these people are, go and find out, as there’s some excellent reading to be had. Anyway, you can download the function here: http://www.sqlservercentral.com/articles/Tally+Table/72993/

Here’s some code you can use to see it in action: 

DECLARE @IntsAsStrings VARCHAR(100) = '1,2,3,4,5,6,7,8,9';

DECLARE @TestTable TABLE 
(
	Id INTEGER IDENTITY(1,1),
	ValueString VARCHAR(50)
);

INSERT INTO @TestTable VALUES ('We'),('Are'),('Leeds'),('MOT');

SELECT *
FROM @TestTable t
RIGHT JOIN [dbo].[DelimitedSplit8K](@IntsAsStrings, ',') ss ON ss.ItemNumber = t.Id

You can then pass a string into your proc and, at the top, split it out into a table, or join directly onto the split function’s results (it’s an inline Table-Valued Function (TVF)).
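As a sketch of the SSRS pattern, something along these lines should work. The procedure and table names here are made up for illustration; only DelimitedSplit8K is assumed from above:

```sql
--Hypothetical proc: SSRS passes its multi-value parameter as a single
--comma-delimited string, which we split and join straight onto the data.
CREATE PROCEDURE dbo.GetOrdersByIds
    @OrderIds VARCHAR(8000)  --arrives as e.g. '1,2,3'
AS
BEGIN
    SELECT o.*
    FROM dbo.Orders o
    INNER JOIN dbo.DelimitedSplit8K(@OrderIds, ',') s
        ON o.OrderId = CAST(s.Item AS INT);
END
```

On the SSRS side you’d map the dataset parameter with an expression such as `=Join(Parameters!OrderIds.Value, ",")` so the selected values arrive as one delimited string.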

Merge – Incremental Soft Delete

Standard

Sometimes we may have situations where we only load a day’s worth of data into our data warehouse. We may also need to keep the destination data in line with the incoming data.

For instance, imagine we had a shopping basket filled with items one day, which we load into our data warehouse, but the next day (for whatever reason) two of the products are no longer valid. How can we remove these records without deleting all of the other values in our data warehouse for our other trolleys that aren’t in the incoming data set?

The answer is with an incremental MERGE: 

--Declare temp tables to store data
DECLARE @Target TABLE
(
PackageId INTEGER,
JournalId INTEGER,
Amount MONEY,
IsDeleted BIT DEFAULT 0
)


DECLARE @Source TABLE
(
PackageId INTEGER,
JournalId INTEGER,
Amount MONEY
)

--Insert some dummy data
INSERT INTO @Source VALUES (1,1,10),(1,2,20),(1,3,30),(2,1,40),(2,2,50)

--Now merge it into the destination table.
MERGE INTO @Target AS TARGET
	USING
	@Source AS SOURCE
	ON TARGET.PackageId = SOURCE.PackageId
	AND TARGET.JournalId = SOURCE.JournalId

--When we match on the merge key, then update the destination table with the new value
WHEN MATCHED THEN
	UPDATE SET
		TARGET.Amount = Source.Amount

--When we don't match, we're a new record, so insert the new record into the destination
WHEN NOT MATCHED BY TARGET THEN
	INSERT
	(
	PackageId,
	JournalId,
	Amount
	)
	VALUES
	(
	SOURCE.PackageId,
	SOURCE.JournalId,
	SOURCE.Amount
	);

--Now show our results.
SELECT * FROM @Source;
SELECT * FROM @Target;


--Now let's get rid of some data and carry out an update to show how we would handle a merge for
--an incremental load
DELETE FROM @Source WHERE PackageId = 1 OR (PackageId = 2 AND JournalId = 1);
UPDATE @Source SET Amount = Amount + 20

--Use a CTE to limit the destination rows we want to match against, because if we've only loaded
--one day's worth of values from our source system, we could end up deleting lots of data by accident
;WITH tgtCTE AS
(
SELECT T.*
FROM @Source S
INNER JOIN @Target T ON T.PackageId = S.PackageId
)


MERGE INTO tgtCTE AS TARGET
	USING @Source AS SOURCE
	ON TARGET.PackageId = SOURCE.PackageId
	AND TARGET.JournalId = SOURCE.JournalId

--Again, update any existing records
WHEN MATCHED THEN
	UPDATE SET
		TARGET.Amount = Source.Amount

--Again, insert any new records
WHEN NOT MATCHED BY TARGET THEN
	INSERT
	(
	PackageId,
	JournalId,
	Amount
	)
	VALUES
	(
	SOURCE.PackageId,
	SOURCE.JournalId,
	SOURCE.Amount
	)

--Here is the interesting part: we're going to soft delete any records from the
--destination table that do not exist in the incoming data set.
WHEN NOT MATCHED BY SOURCE THEN UPDATE SET TARGET.IsDeleted = 1;


--Show our final results. As you can see, in the second merge we only had one record for PackageId = 2;
--the other JournalId didn't exist in the source table, so it was soft deleted from the destination.
SELECT * FROM @Source;
SELECT * FROM @Target;

Go ahead and run the above code and see how it all works. Hopefully this will help you on your way.

Paul White gives a great example here at DBA.StackExchange.

SQL Server – Find all non-empty tables

Standard

There are a couple of ways to do this: one is to go to every table in your database and check whether there is data; the other is to go to the indexes and check the row counts. Here I’ll show you how to do both.

1: Hit every table (don’t do this in a production environment or your DBAs will go mental):

IF OBJECT_ID('tempdb..#Tables') IS NOT NULL
BEGIN
	DROP TABLE #Tables;
END

--Create some temp tables and declare variables
CREATE TABLE #Tables
(
TableId INTEGER IDENTITY(1,1),
TableName VARCHAR(100),
HasRows BIT,
Processed BIT
);


DECLARE @Result TABLE
(
Result INT
);

DECLARE @TableCount INT = 0, @i INT = 1;

DECLARE @CurrentTable NVARCHAR(150);

--Get the name of each table in the database wrapped in square brackets.
INSERT INTO #Tables (TableName, HasRows, Processed)
SELECT '[' + s.name + '].[' + o.name + ']', 0, 0
FROM sys.objects o
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE type = 'u'
--AND o.name IN (N'Tbl_Name','Tbl_OtherTable');

SELECT @TableCount = COUNT(1) FROM #tables;

--Loop through all of the tables and check if there are rows
WHILE @i <= @TableCount
BEGIN

SELECT @CurrentTable = 'SELECT TOP 1 1 FROM ' + TableName FROM #Tables WHERE TableId = @i;

--PRINT @CurrentTable;

INSERT INTO @Result EXEC sp_executesql @CurrentTable;

--Populate the result table
UPDATE t SET t.HasRows = Result
FROM #Tables t
INNER JOIN @Result r ON t.TableId = @i;

SET @i = @i + 1;
DELETE FROM @Result;

END

--Get the rows we're interested in
SELECT *
FROM #Tables
WHERE HasRows = 1

 

The other way is to hit the indexes and check the row counts:

SELECT DISTINCT s.name [Schema], o.name [Table]--, i.name [IndexName], p.[rows] [RowCount]
FROM sys.objects o
INNER JOIN sys.schemas s	ON s.schema_id = o.schema_id
INNER JOIN sys.indexes i	ON i.object_id = o.object_id
INNER JOIN sys.partitions p ON p.index_id = i.index_id
								AND p.object_id = i.object_id
WHERE o.type = 'U'
AND p.[rows] > 0
ORDER BY s.name, o.name

 

You can remove the DISTINCT and uncomment i.name and p.rows to get the count of rows in each index.
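A third option, if you’d rather aggregate than deduplicate, is sys.dm_db_partition_stats. Its counts are maintained the same way as sys.partitions (so treat them as approximate), but it avoids touching the tables entirely. A sketch:

```sql
--Row counts per user table from partition statistics.
--Restricting to index_id 0 (heap) or 1 (clustered index) avoids
--counting the same rows once per nonclustered index.
SELECT s.name AS [Schema], o.name AS [Table], SUM(ps.row_count) AS [Rows]
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.objects o ON o.object_id = ps.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.type = 'U'
AND ps.index_id IN (0, 1)
GROUP BY s.name, o.name
HAVING SUM(ps.row_count) > 0
ORDER BY [Schema], [Table];
```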


SQL Server – SELECT INTO IDENTITY with BCP

Standard

A couple of things I learned recently:

1: If you carry out a SELECT column1, column2…. INTO dbo.NewTable, the newly created table won’t have any of the indexes, foreign keys or primary keys etc. of the source/original table, BUT it will retain any IDENTITY specifications on the columns. The source/original table I was looking at had an INTEGER IDENTITY column as the surrogate primary key, and this was brought over to NewTable.

2: BCP will ignore the load for any columns where IDENTITY is specified. So when I was attempting to BCP data into NewTable, all the columns came across as per the source apart from the IDENTITY column, which was re-seeded with a seed of 1 and an increment of 1. Once I removed the IDENTITY specification the BCP worked out OK.
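You can verify point 1 for yourself by checking sys.columns after the SELECT … INTO:

```sql
--Check which columns of the new table kept the IDENTITY property
SELECT c.name, c.is_identity
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('dbo.NewTable');
```

As for point 2: if you actually want bcp to load the identity values from the file rather than re-seed them, the -E (keep identity values) switch should do it; the BULK INSERT equivalent is the KEEPIDENTITY option. And if you’d rather SELECT … INTO didn’t carry the IDENTITY property over in the first place, wrapping the column in an expression (e.g. CAST(Id AS INT) AS Id) strips it.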

Directory.Exists, File.Exists, Test-Path etc no longer work after importing SQLPS module in PowerShell

Standard

This was a bit of a strange one which pointed out how little I actually know about PowerShell! After importing the SQLPS module to allow me to query a database, I was no longer able to check my file structure as File.Exists just returned nothing, no error, just blank lines.

The issue is that when you import the SQLPS module the directory in which you are working changes to the SQLSERVER directory (more info can be found here and here). As you can see in the blog post the simple answer is just to set the working directory before importing the SQLPS module:

#Importing the "sqlps" module changes the current directory to the SQLSERVER drive which is an instance of the SQLSERVER Powershell Provider, so take a copy of the current location here:
$workingDirectory = (Get-Item -Path ".\" -Verbose).FullName

Import-Module "sqlps" -DisableNameChecking

#Get the Report Server URI from the Config table so we can use it in our script
$ReportServerURIQuery = Invoke-Sqlcmd -Query "SELECT ConfiguredValue FROM Config.dbo.Config WHERE ConfigurationFilter LIKE 'ReportServerURI';" -ServerInstance "Dev"

$ReportServerURI = $ReportServerURIQuery.ConfiguredValue | Out-String

#Change the directory back to where we were
cd $workingDirectory

Good luck!

Script Temp Table Definition

Standard

Today I was creating a temp table using a SELECT * INTO in a stored procedure that was being used as a source for an SSIS load package. Of course, when I plugged the stored proc into my load package it gave the below error:

[Screenshot: the SSIS metadata error raised for the temp table]

This is easily resolved by using WITH RESULT SETS. Now, my problem was that my temp table was quite wide and it would have taken ages to manually create the definition to use in my result sets. “Tell us, Jim, what’s the solution?!” I can hear you screaming. Well, it’s off to the system catalog views in tempdb, where our temporary table is stored.

Firstly let’s create some test data and insert it into a temp table:


IF OBJECT_ID('dbo.Product') IS NOT NULL
BEGIN
   DROP TABLE dbo.Product;
END

CREATE TABLE dbo.Product
(
ProductId INTEGER IDENTITY(1,1),
ProductName VARCHAR(200),
CreatedDate DATETIME DEFAULT GETDATE()
);

INSERT INTO dbo.Product VALUES ('Cogs'),('Widgets'),('Spoolers'),('Sprockets'),('Openers'),('Closers'),('Silencers'),('Speakers');

SELECT *
INTO #Product
FROM dbo.Product

Now that we’ve got a temp table in tempdb, let’s get its definition:

--Create a variable to hold the name of the temp table and one to hold the resulting definition
DECLARE @TableName NVARCHAR(256) = '#Product', @Definition NVARCHAR(1000) = '';

--Now build the columns:
SELECT @Definition = @Definition
       + c.Name
       + ' '
       + UPPER(t.name)
       + CASE WHEN t.name LIKE '%char%' THEN '(' + CAST(t.max_length AS VARCHAR(4)) + ')' ELSE '' END
       + IIF(t.is_nullable = 1, ' NULL,', ' NOT NULL,')
       + CHAR(10)
FROM tempdb.sys.columns c
INNER JOIN tempdb.sys.types t ON t.system_type_id = c.system_type_id
WHERE OBJECT_ID = OBJECT_ID('tempdb..' + @TableName)
ORDER BY c.column_id;

--Now build the create table statement
SELECT 'CREATE TABLE '
       + @TableName
       + '(' + CHAR(10)
       + LEFT(@Definition,LEN(@Definition) - 2)
       + CHAR(10)
       + ')';

When you run the above code you get out the table definition of our temp table:

CREATE TABLE #Product
(
   ProductId INT NULL,
   ProductName VARCHAR(8000) NULL,
   CreatedDate DATETIME NULL
)

It’s a little rough at the moment, as our varchar column was originally varchar(200), so it does need a little bit of tweaking, but it was sufficient for what I needed.
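For what it’s worth, here’s one possible tightened-up version of the column builder, assuming the same #Product temp table as above. It reads max_length and is_nullable from the column rather than the type, halves the byte length for nchar/nvarchar (max_length is in bytes), handles (MAX), and joins on user_type_id so alias types such as sysname don’t duplicate rows:

```sql
DECLARE @TableName NVARCHAR(256) = '#Product', @Definition NVARCHAR(MAX) = '';

--Build the column list from the column-level metadata, not the type-level metadata
SELECT @Definition = @Definition
       + c.name + ' ' + UPPER(t.name)
       + CASE WHEN t.name LIKE '%char%' OR t.name LIKE '%binary%'
              THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                              WHEN t.name LIKE 'n%char%' THEN CAST(c.max_length / 2 AS VARCHAR(4))
                              ELSE CAST(c.max_length AS VARCHAR(4))
                         END + ')'
              ELSE '' END
       + IIF(c.is_nullable = 1, ' NULL,', ' NOT NULL,')
       + CHAR(10)
FROM tempdb.sys.columns c
INNER JOIN tempdb.sys.types t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('tempdb..' + @TableName)
ORDER BY c.column_id;

--Wrap it in the CREATE TABLE statement
SELECT 'CREATE TABLE ' + @TableName + '(' + CHAR(10)
       + LEFT(@Definition, LEN(@Definition) - 2)
       + CHAR(10) + ')';
```

With the #Product table above, ProductName should now come out as VARCHAR(200) rather than VARCHAR(8000).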