Category Archives: Database Administrator

How to set a database to single user and back to multiuser

Here’s a quick script to set a database to single user and then back to multi user.

You’ll need to do this when altering the database in situations like database name changes, file name changes or even restores.

/*
Set Database to Single User
*/
USE [master];
GO

ALTER DATABASE [my_db]
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
GO

/*
Set Database back to Multi User
*/
USE [master];
GO

ALTER DATABASE [my_db]
SET MULTI_USER
WITH ROLLBACK IMMEDIATE;
GO
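
If the ALTER DATABASE appears to hang, another session is probably still connected to the database. A quick way to see who before you roll them back (a minimal sketch; my_db is the same placeholder database name and sys.dm_exec_sessions.database_id needs SQL Server 2012 or later):

USE [master];
GO

SELECT s.session_id
	,s.login_name
	,s.host_name
	,s.program_name
FROM sys.dm_exec_sessions AS s
WHERE s.database_id = DB_ID('my_db');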

 

How to monitor file stats using a batch script

The use case for this batch script is to monitor SQL Server files by using Windows Task Scheduler to run the file every 5 minutes. The batch script gathers the stats of every file in a referenced directory. The script can obviously be adapted to suit your specific needs.

Note: Running this script against the C drive on a laptop took 15 minutes and produced a text file that was around 50 MB. The script is best utilized in scenarios such as monitoring file sizes on drives used by applications that do not generate thousands of files, e.g. backup directories.

There are two scripts presented below. The first is the simpler of the two; it just records the file stats of a single directory.

Directions for use:

Take either script below and save it as a batch file called FileStatsLogger.bat.

Two variables need to be updated to use the script:

"LogPath=C:\Log\" Change C:\Log\ to another directory if you want the output file written somewhere else.

"ScanDir=C:\FolderToScan\" Change C:\FolderToScan\ to the directory you want to gather file stats from.

The file that will be created, FileStats, will record the following information:

  • A timestamp of when the batch script was run
  • A reference to each parent directory
  • The date and time of every file
  • File size in bytes
  • Every file owner
  • The name of each file
  • How many files are in the directory and their collective size

The output file will look like this when opened.

[Screenshot: example FileStats.txt output]

Each time the script is run it will update the text file FileStats.

@ECHO OFF

REM SET TIMESTAMP VARIABLE (assumes a dd/mm/yyyy short date format; adjust the offsets for your locale)
SET Timestamp=%date:~6,4%%date:~3,2%%date:~0,2%%time:~0,2%%time:~3,2%%time:~6,2%

REM SET FILE LOGGING VARIABLES
SET "LogPath=C:\Log\"
SET "LogFileName=FileStats.txt"
SET "Log=%LogPath%%LogFileName%"
SET "ScanDir=C:\FolderToScan\"

REM CREATE FILE IF DOES NOT EXIST
IF NOT EXIST "%Log%" ECHO File Created Timestamp: %Timestamp% > "%Log%"

REM TIMESTAMP ENTRY
ECHO/ >> "%Log%"
ECHO Entry Timestamp: %Timestamp% >> "%Log%"
ECHO/ >> "%Log%"

REM SCAN DIRECTORY (/q includes the file owner)
DIR /a /q "%ScanDir%" >> "%Log%"

To scan multiple directories create a text file called DirectoryList and populate the file with the directories you want to scan, each directory on a new line, for example:

L:\
D:\
T:\

It is assumed the directory list text file will be kept in the same directory as the batch file FileStatsLogger, but you should add the full file path where DirectoryList.txt is referenced in the script to avoid any problems.

Note: L, D & T follow a common naming convention for the Log, Data and tempdb drives on a SQL Server. These are just example drive letters; you can be more specific, e.g. C:\Users\UserName\Desktop\, just be sure to include the trailing backslash.

This script will create a new log file for each day, as the files can be large. The filename will follow the format FileStats_YYYYMMDD.txt.

As before you will have to update the following:

"LogPath=c:\FileStats\" Change c:\FileStats\ to another directory if you want the output file written somewhere else.

The path c:\FileStats\DirectoryList.txt, referenced in the FOR loop at the end of the script: change this to the directory where you store DirectoryList.txt.

Though it is recommended you keep everything in one folder like this (potentially writing the FileStats log to a directory off the host server):

[Screenshot: example contents of the FileStats directory]

@ECHO OFF

REM SET TIMESTAMP VARIABLES
for /f "tokens=2 delims==" %%G in ('wmic os get localdatetime /value') do set datetime=%%G
SET "dd=%datetime:~6,2%"
SET "mth=%datetime:~4,2%"
SET "yyyy=%datetime:~0,4%"
SET "hh=%time:~0,2%"
SET "mm=%time:~3,2%"
SET "ss=%time:~6,2%"

SET "FileNameDate=%yyyy%%mth%%dd%"
SET "Timestamp=%yyyy%%mth%%dd%%hh%%mm%%ss%"

REM SET FILE LOGGING VARIABLES
SET "LogPath=c:\FileStats\"
SET "LogFileName=FileStats_%FileNameDate%.txt"
SET "Log=%LogPath%%LogFileName%"


REM CREATE FILE IF DOES NOT EXIST
IF NOT EXIST "%Log%" ECHO File Created Timestamp: %Timestamp% > "%Log%"

REM TIMESTAMP ENTRY
ECHO/ >> "%Log%"
ECHO Entry Timestamp: %Timestamp% >> "%Log%"
ECHO/ >> "%Log%"

REM READ DIRECTORY LIST AND SCAN EACH DIRECTORY REFERENCED 
FOR /f "tokens=*" %%x IN (c:\FileStats\DirectoryList.txt) DO DIR /a /s /q "%%x" >> %Log%

Finally, you can create a Windows Task Scheduler task to run the script at an interval that suits your needs.

The description of the task could be something along the lines of:

This task runs the batch script “FileStatsLogger” every 5 minutes, which
logs the stats for each file of each directory referenced in the file
DirectoryList. The stats are recorded in the FileStats txt file.
All resources related to this task are contained in the FileStats directory.

How to find Missing Indexes for all databases in a SQL Server instance

This script is for SQL Server 2005 and up. The script will return all the missing indexes for a SQL Server instance, rating their impact and provide the T-SQL to create the missing indexes.

SQL Server 2005 was the first version of SQL Server to add DMVs (Dynamic Management Views) and DMOs (Dynamic Management Objects), which this script requires to function.
DMVs and DMOs provide useful information about SQL Server such as expensive queries, wait types, missing indexes etc.

WARNING!
Before you create the missing indexes on the referenced tables you must consider the following essential points:
• Find and assess all the queries that are using the table referenced. If the table has a heavy amount of Data Manipulation Language (DML) operations against it (SELECT, INSERT, UPDATE, or DELETE) then you must analyse what impact adding the missing index will have before you create it on the table. INSERTs on tables are slowed down by nonclustered indexes for example.
• You need to make sure that by creating the missing indexes you are not going to end up with duplicate indexes on tables. The duplicate or unwanted indexes can kill your database performance. Search for the blog “over-indexing can hurt your SQL Server performance” for more information.
• If you find there is already an existing index that has most of the columns of the missing index highlighted you should consider adding the missing columns to the current index rather than creating another index on the table. FYI making an index wider does not mean adding all columns from a table into the current index.

/*Script to find Missing Indexes for all databases in SQL Server*/
/*
This script is for SQL Server 2005 and up. 
The script will return all the missing indexes for a SQL Server instance, rating their impact 
and provide the T-SQL to create the missing indexes.

SQL Server 2005 was the first version of SQL Server to add DMVs (Dynamic Management Views) 
and DMOs (Dynamic Management Objects), which this script requires to function. 
DMVs and DMOs provide useful information about SQL Server such as expensive queries, wait types, missing indexes etc.

WARNING!
Before you create the missing indexes on the referenced tables you must consider the following essential points:
• Find and assess all the queries that are using the table referenced. If the table has a heavy amount of Data Manipulation Language (DML) 
operations against it (SELECT, INSERT, UPDATE, or DELETE) then you must analyse what impact adding the missing index will have before you create it on the table. 
INSERTs on tables are slowed down by nonclustered indexes for example.
• You need to make sure that by creating the missing indexes you are not going to end up with duplicate indexes on tables. 
The duplicate or unwanted indexes can kill your database performance. Search for the blog “over-indexing can hurt your SQL Server performance” for more information.
• If you find there is already an existing index that has most of the columns of the missing index highlighted you should consider adding the missing columns to 
the current index rather than creating another index on the table. FYI making an index wider does not mean adding all columns from a table into the current index.
*/
SELECT [EstIndexUses]
	,[EstIndexImpact%]
	,[EstAvgQueryCost]
	,[DbName]
	,[SchemaName]
	,[TableName]
	,[CreateIndex]
	,[EqualityColumns]
	,[InequalityColumns]
	,[IncludedColumns]
	,[UniqueCompiles]
	,[LastUserSeek]
FROM (
	SELECT migs.user_seeks AS [EstIndexUses]
		,migs.avg_user_impact AS [EstIndexImpact%]
		,migs.avg_total_user_cost AS [EstAvgQueryCost]
		,db_name(mid.database_id) AS [DbName]
		,OBJECT_SCHEMA_NAME(mid.OBJECT_ID, mid.database_id) AS [SchemaName]
		,OBJECT_NAME(mid.OBJECT_ID, mid.database_id) AS [TableName]
		,'CREATE INDEX [IX_' + OBJECT_NAME(mid.OBJECT_ID, mid.database_id) + '_' + REPLACE(REPLACE(REPLACE(ISNULL(mid.equality_columns, ''), ', ', '_'), '[', ''), ']', '') + CASE 
			WHEN mid.equality_columns IS NOT NULL
				AND mid.inequality_columns IS NOT NULL
				THEN '_'
			ELSE ''
			END + REPLACE(REPLACE(REPLACE(ISNULL(mid.inequality_columns, ''), ', ', '_'), '[', ''), ']', '') + ']' + ' ON ' + mid.statement + ' (' + ISNULL(mid.equality_columns, '') + CASE 
			WHEN mid.equality_columns IS NOT NULL
				AND mid.inequality_columns IS NOT NULL
				THEN ','
			ELSE ''
			END + ISNULL(mid.inequality_columns, '') + ')' + ISNULL(' INCLUDE (' + mid.included_columns + ') WITH (MAXDOP =?, FILLFACTOR=?, ONLINE=?, SORT_IN_TEMPDB=?);', '') AS [CreateIndex]
		,mid.equality_columns AS EqualityColumns
		,mid.inequality_columns AS InequalityColumns
		,mid.included_columns AS IncludedColumns
		,migs.unique_compiles AS UniqueCompiles
		,migs.last_user_seek AS LastUserSeek
	FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
	INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK) ON migs.group_handle = mig.index_group_handle
	INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK) ON mig.index_handle = mid.index_handle
	) AS a
WHERE 1 = 1
--AND [EstIndexUses] > 1000
--AND [EstIndexImpact%] > 10
--AND [EstAvgQueryCost] > 1
--AND DbName IN ('DatabaseName')
ORDER BY [EstIndexUses] DESC
	,[EstAvgQueryCost] DESC
	,[EstIndexImpact%] DESC
OPTION (RECOMPILE);
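
Before creating any of the suggested indexes, it is worth listing what already exists on the table so you do not end up with near-duplicates. A minimal sketch (the schema and table name are placeholders) using sys.indexes and sys.index_columns:

SELECT i.NAME AS IndexName
	,i.type_desc AS IndexType
	,COL_NAME(ic.OBJECT_ID, ic.column_id) AS ColumnName
	,ic.key_ordinal AS KeyOrdinal
	,ic.is_included_column AS IsIncluded
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic ON ic.OBJECT_ID = i.OBJECT_ID
	AND ic.index_id = i.index_id
WHERE i.OBJECT_ID = OBJECT_ID('dbo.SomeTable')
ORDER BY i.NAME
	,ic.is_included_column
	,ic.key_ordinal;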

 


How to NOT write a T-SQL query

So the vendors for the call system software ran the code below on the live system on a Friday night. Thanks guys. I’ve changed the names of the tables in the code for security reasons, in case anyone would like to use it as an example of how not to write T-SQL code!

The vendors claim the code came from the company that developed the call system and, if that is the case, I think someone in head office is trying to get fired or get someone fired.

Scroll down for a review of why the code is such a mess.

/*
TableOne a 312 million row table
TableTwo a 55 million row table 
TableThree a 22 million row table
*/

DELETE
FROM TableOne
WHERE SecurityPolicyId = @p_secPolId
	AND RecordingId NOT IN (
		SELECT c.RECORDINGID
		FROM TableThree c
		WHERE (
				EXISTS (
					SELECT *
					FROM TableTwo b0
					WHERE b0.RecordingID = c.RECORDINGID
						AND b0.Workgroup = N'CS EMAIL - [Cancellations]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b1
					WHERE b1.RecordingID = c.RECORDINGID
						AND b1.Workgroup = N'CS EMAIL - [Claims]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b2
					WHERE b2.RecordingID = c.RECORDINGID
						AND b2.Workgroup = N'CS EMAIL - [Contact Request]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b3
					WHERE b3.RecordingID = c.RECORDINGID
						AND b3.Workgroup = N'CS EMAIL - [DD Enquiries]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b4
					WHERE b4.RecordingID = c.RECORDINGID
						AND b4.Workgroup = N'CS EMAIL - [Feedback]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b5
					WHERE b5.RecordingID = c.RECORDINGID
						AND b5.Workgroup = N'CS EMAIL - [MTA]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b6
					WHERE b6.RecordingID = c.RECORDINGID
						AND b6.Workgroup = N'CS EMAIL - [NB Cover Query]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b7
					WHERE b7.RecordingID = c.RECORDINGID
						AND b7.Workgroup = N'CS EMAIL - [O/S Documents]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b8
					WHERE b8.RecordingID = c.RECORDINGID
						AND b8.Workgroup = N'CS EMAIL - [Otherl]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b9
					WHERE b9.RecordingID = c.RECORDINGID
						AND b9.Workgroup = N'CS EMAIL - [SME]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b10
					WHERE b10.RecordingID = c.RECORDINGID
						AND b10.Workgroup = N'CS WebChat'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b11
					WHERE b11.RecordingID = c.RECORDINGID
						AND b11.Workgroup = N'CUSTOMER SERVICE - [Cancellations]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b12
					WHERE b12.RecordingID = c.RECORDINGID
						AND b12.Workgroup = N'CUSTOMER SERVICE - [CBL enquiry]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b13
					WHERE b13.RecordingID = c.RECORDINGID
						AND b13.Workgroup = N'CUSTOMER SERVICE - [Claims]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b14
					WHERE b14.RecordingID = c.RECORDINGID
						AND b14.Workgroup = N'CUSTOMER SERVICE - [Commercial]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b15
					WHERE b15.RecordingID = c.RECORDINGID
						AND b15.Workgroup = N'CUSTOMER SERVICE - [DD Payment]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b16
					WHERE b16.RecordingID = c.RECORDINGID
						AND b16.Workgroup = N'CUSTOMER SERVICE - [Diary Team]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b17
					WHERE b17.RecordingID = c.RECORDINGID
						AND b17.Workgroup = N'CUSTOMER SERVICE - [Doc Request]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b18
					WHERE b18.RecordingID = c.RECORDINGID
						AND b18.Workgroup = N'CUSTOMER SERVICE - [DocChase CL]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b19
					WHERE b19.RecordingID = c.RECORDINGID
						AND b19.Workgroup = N'CUSTOMER SERVICE - [DocChase FN]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b20
					WHERE b20.RecordingID = c.RECORDINGID
						AND b20.Workgroup = N'CUSTOMER SERVICE - [DocChase IN]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b21
					WHERE b21.RecordingID = c.RECORDINGID
						AND b21.Workgroup = N'CUSTOMER SERVICE - [Life]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b22
					WHERE b22.RecordingID = c.RECORDINGID
						AND b22.Workgroup = N'CUSTOMER SERVICE - [MTA]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b23
					WHERE b23.RecordingID = c.RECORDINGID
						AND b23.Workgroup = N'CUSTOMER SERVICE - [NB Cover Query]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b24
					WHERE b24.RecordingID = c.RECORDINGID
						AND b24.Workgroup = N'CUSTOMER SERVICE - [Other Query]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b25
					WHERE b25.RecordingID = c.RECORDINGID
						AND b25.Workgroup = N'Customer Service Admin'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b26
					WHERE b26.RecordingID = c.RECORDINGID
						AND b26.Workgroup = N'OUTBOUND - [Welcome Calls]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b27
					WHERE b27.RecordingID = c.RECORDINGID
						AND b27.Workgroup = N'OUTBOUND CS - [Cancelations]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b28
					WHERE b28.RecordingID = c.RECORDINGID
						AND b28.Workgroup = N'OUTBOUND CS - [DocChase]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b29
					WHERE b29.RecordingID = c.RECORDINGID
						AND b29.Workgroup = N'OUTBOUND CS - [Final Notice]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b30
					WHERE b30.RecordingID = c.RECORDINGID
						AND b30.Workgroup = N'OUTBOUND CS - [Initial Chase]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b31
					WHERE b31.RecordingID = c.RECORDINGID
						AND b31.Workgroup = N'OVERFLOW - [Claims]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b32
					WHERE b32.RecordingID = c.RECORDINGID
						AND b32.Workgroup = N'OVERFLOW - [Customer Service - MTA]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b33
					WHERE b33.RecordingID = c.RECORDINGID
						AND b33.Workgroup = N'RENEWALS - [Personal Enquiries]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b34
					WHERE b34.RecordingID = c.RECORDINGID
						AND b34.Workgroup = N'RENEWALS - [Personal Outbound]'
					)
				OR EXISTS (
					SELECT *
					FROM TableTwo b35
					WHERE b35.RecordingID = c.RECORDINGID
						AND b35.Workgroup = N'RENEWALS - [Personal Payments]'
					)
				)
			AND c.RecordingDate >= @p_dateAfter
			AND c.RecordingDate < @p_dateBefore
			OR c.RecordingDate < @p_dateAfter
			OR c.RecordingDate >= @p_dateBefore
		)

 

Here are some of the issues with the above in no particular order.

The command is running against a 312 million row table, a 55 million row table and a 22 million row table. Although it was only planned to run once IT STILL NEEDS TO BE EFFICIENT!!!!!

It uses SELECT * (select star), which raises the probability that SQL Server will query the whole table rather than accessing the data through indexes. Only an Id is needed to begin with.

There are subqueries used when a join would be much faster.

The subqueries are completely unnecessary when WHERE conditions could have been listed in an IN statement.

Correlated subqueries!!! Literally designed to bring a server to its knees! They are evaluated once for each row processed. See more on correlated subqueries here: https://en.wikipedia.org/wiki/Correlated_subquery

There are EXISTS Statements used due to the poor design of the command.

A ridiculously long chain of OR’d EXISTS checks is used! Typically, when there is an excessive number of conditions like this, the comparison data should be written to a temp table and assessed as a join.
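
For illustration only (keeping the sanitised table names, and leaving the muddled date logic at the end of the original to one side as it needs its own review), the thirty-odd OR EXISTS branches could collapse into a single join with an IN list, something like:

DELETE t1
FROM TableOne AS t1
WHERE t1.SecurityPolicyId = @p_secPolId
	AND t1.RecordingId NOT IN (
		SELECT c.RECORDINGID
		FROM TableThree AS c
		INNER JOIN TableTwo AS b ON b.RecordingID = c.RECORDINGID
		WHERE b.Workgroup IN (
				N'CS EMAIL - [Cancellations]'
				,N'CS EMAIL - [Claims]'
				/*...the remaining workgroups...*/
				,N'RENEWALS - [Personal Payments]'
				)
			AND c.RecordingDate >= @p_dateAfter
			AND c.RecordingDate < @p_dateBefore
		);

On tables this size the delete should also be batched (e.g. DELETE TOP (n) in a loop) to keep locking and transaction log growth under control.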

Needless to say this query ballooned the tempdb and I’ll let you guess what happened next.

The lesson for today’s posting, trust no one!

How to get the size of every index in a local SQL Server Instance

The code below will run against every online database on a local SQL Server instance (i.e. not Azure SQL Database), excluding the system databases apart from master, and return the size of each index in each database. Knowing these values is especially important as it is recommended that the free space available on disk should be at least 1.5 times the size of the largest index. This is only one guideline regarding disk space, however, and use cases vary from database to database.

The output will include:

  • Database Name
  • Schema Name
  • Table Name
  • Index Id
  • Index Name
  • Index Size Mb

You can uncomment the last three lines to focus on nonclustered indexes.

SET NOCOUNT ON;

DECLARE @Database TABLE (DbName SYSNAME);
DECLARE @IndexStats TABLE (
	ServerName SYSNAME
	,DbName SYSNAME
	,SchemaName SYSNAME
	,TableName SYSNAME
	,IndexId INT
	,IndexType VARCHAR(12)
	,IndexName SYSNAME
	,IndexSizeMb INT
	);
DECLARE @DbName AS SYSNAME;
DECLARE @Sql AS VARCHAR(MAX);

SET @DbName = '';

INSERT INTO @Database (DbName)
SELECT NAME
FROM sys.databases
WHERE NAME NOT IN (
		'tempdb'
		,'msdb'
		,'model'
		)
	AND state_desc = 'ONLINE'
ORDER BY NAME ASC;

WHILE @DbName IS NOT NULL
BEGIN
	SET @DbName = (
			SELECT MIN(DbName)
			FROM @Database
			WHERE DbName > @DbName
			);
	SET @Sql = 'USE ' + QUOTENAME(@DbName) + ';	
	SELECT @@ServerName AS ServerName
	,''' + @DbName + ''' AS ''DbName'' 
	,OBJECT_SCHEMA_NAME(i.OBJECT_ID) AS SchemaName
	,OBJECT_NAME(i.OBJECT_ID) AS TableName
	,i.index_id AS IndexId
	,CASE WHEN i.index_id > 1 THEN ''Nonclustered'' WHEN i.index_id = 1 THEN ''Clustered'' ELSE ''Heap'' END AS IndexType
	,CASE WHEN i.NAME IS NULL THEN ''No Name'' ELSE i.Name END AS IndexName
	,(8 * SUM(a.used_pages)/1024) AS ''IndexSizeMb''
FROM sys.indexes AS i
JOIN sys.partitions AS p ON p.OBJECT_ID = i.OBJECT_ID
	AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
WHERE a.used_pages > 0
GROUP BY i.OBJECT_ID
	,i.index_id
	,i.NAME
ORDER BY IndexSizeMb DESC; 
'

	INSERT INTO @IndexStats
	EXEC (@Sql);
END

SELECT ServerName
	,DbName
	,SchemaName
	,TableName
	,IndexId
	,IndexType
	,IndexName
	,IndexSizeMb
FROM @IndexStats
--WHERE IndexType = 'Nonclustered'
--AND IndexSizeMb > 0
--ORDER BY IndexSizeMb DESC;
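
To compare the largest index against the free space on each volume that hosts database files, something like the following can help (sys.dm_os_volume_stats requires SQL Server 2008 R2 SP1 or later):

SELECT DISTINCT vs.volume_mount_point AS Volume
	,vs.total_bytes / 1048576 AS TotalMb
	,vs.available_bytes / 1048576 AS FreeMb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;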

How to perform incremental database file shrinking on SQL Server

First off, you don’t ever shrink a database file unless you know it won’t need the space back again. (There are some caveats to that statement but it mostly holds true.) If that’s the space the file has consumed, generally that’s the space the file needs. Shrinking just means the file will grow again and SQL Server will take a performance hit while it does that. You’re better off just adding the additional disk space.

If you need to perform a shrink, and you’re worried about how long it will take and the performance impact it will have, you can perform the shrink in small chunks. This is good practice, particularly in a production transactional (OLTP) system, as small increments make continuous, incremental progress instead of trying to shrink by a large amount in one command. This makes it easier to shrink a database file that must be shrunk by a large amount, and allows the shrink process to be interrupted without losing all progress.

The script below can be used to shrink a database file in small increments until it reaches a target free space size based on the current free space percentage.

Simply provide as variables the database name, whether the file to be shrunk is the rows or log file, how much free space to leave in terms of a percentage and how many shrinks you wish to perform. If you only want to truncate the file, set the truncate-only flag to 1 and you can ignore the number of shrinks and the percentage of space to keep.

(TRUNCATEONLY means releasing all free space at the end of the file to the operating system without performing any page movement inside the file.)
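
For reference, a standalone truncate-only shrink looks like this (the database and logical file names are placeholders; the logical file name can be found in sys.database_files):

USE [_DatabaseName_];
GO

DBCC SHRINKFILE (N'_LogicalFileName_', 0, TRUNCATEONLY) WITH NO_INFOMSGS;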

The script will determine given the number of shrinks and the target size what the shrink increment should be. It will then loop and execute the DBCC SHRINKFILE command to shrink the database file by the calculated increment until it reaches the target free space.

Progress updates are written to the global temp table ##DbStats so you can query this table during the shrink to see how far along the process is. The query to perform this check is commented out at the bottom of the script.

SET NOCOUNT ON;

DECLARE @DbName SYSNAME;
DECLARE @DbFileName SYSNAME;
DECLARE @RowsOrLog VARCHAR(4);
DECLARE @SpaceToLeavePercentage VARCHAR(2);
DECLARE @Sql VARCHAR(MAX);
DECLARE @ShrinkSql VARCHAR(MAX);
DECLARE @NumberOfShrinks INT;
DECLARE @FreeMbRounded INT;
DECLARE @TargetFreeMbRounded INT;
DECLARE @ShrinkIncrementInMb INT;
DECLARE @ShrinkToInMb INT;
DECLARE @TruncateOnly BIT;

/*SET USER INPUT VARIABLES*/
SET @DbName = '_DatabaseName_';
SET @RowsOrLog = 'Rows';
SET @SpaceToLeavePercentage = '10';
SET @NumberOfShrinks = 1;

--SET @TruncateOnly = 1;
IF OBJECT_ID('tempdb..##DbStats') IS NOT NULL
BEGIN
	DROP TABLE ##DbStats
END;

IF OBJECT_ID('tempdb..#Check') IS NOT NULL
BEGIN
	DROP TABLE #Check
END;

CREATE TABLE ##DbStats (
	Id_DbStats INT NOT NULL IDENTITY(1, 1) PRIMARY KEY
	,LogicalName SYSNAME
	,FreeMbRounded INT
	,TargetFreeMbRounded INT
	,DiffMb INT
	,ShrinkIncrementInMb INT
	,NumberOfShrinksLeft INT
	);

CREATE TABLE #Check (
	Id_DbStats INT NOT NULL IDENTITY(1, 1) PRIMARY KEY
	,LogicalName SYSNAME
	,FreeMbRounded INT
	,TargetFreeMbRounded INT
	,DiffMb INT
	,ShrinkIncrementInMb INT
	,NumberOfShrinksLeft INT
	);

IF @SpaceToLeavePercentage IS NULL
BEGIN
	SET @SpaceToLeavePercentage = '10'
END

SET @Sql = '
USE ' + QUOTENAME(@DbName) + ';
SELECT LogicalName
,FreeMbRounded
,TargetFreeMbRounded
,FLOOR(FreeMbRounded) - FLOOR(TargetFreeMbRounded) AS DiffMb
FROM (
SELECT NAME AS LogicalName
	,CAST(ROUND(CONVERT(DECIMAL(12, 1), (size - FILEPROPERTY(NAME, ''SpaceUsed'')) / 128.0), 0) AS INT) AS FreeMbRounded
	,CAST(ROUND(CONVERT(DECIMAL(12, 1),((((size - FILEPROPERTY(NAME, ''SpaceUsed'')) / 128.0) / 100.0) * ' + @SpaceToLeavePercentage + ')), 0) AS INT) AS TargetFreeMbRounded
FROM ' + QUOTENAME(@DbName) + '.sys.database_files 
WHERE type_desc = ''' + @RowsOrLog + '''
) AS a
';

INSERT INTO ##DbStats (
	LogicalName
	,FreeMbRounded
	,TargetFreeMbRounded
	,DiffMb
	)
EXEC (@Sql);

SET @TargetFreeMbRounded = (
		SELECT TargetFreeMbRounded
		FROM ##DbStats
		)
SET @DbFileName = (
		SELECT LogicalName
		FROM ##DbStats
		);

IF @TruncateOnly = 1
BEGIN
	SET @ShrinkSql = '
USE ' + QUOTENAME(@DbName) + ';

DBCC SHRINKFILE (
		N' + '''' + @DbFileName + '''' + '
		, 0
		, TRUNCATEONLY
		) WITH NO_INFOMSGS;
';

	EXEC (@ShrinkSql);
END
ELSE
BEGIN
	SET @ShrinkIncrementInMb = (
			(
				SELECT (FLOOR(MAX(FreeMbRounded)) - @TargetFreeMbRounded)
				FROM ##DbStats
				) / @NumberOfShrinks
			);

	UPDATE ##DbStats
	SET ShrinkIncrementInMb = @ShrinkIncrementInMb
		,NumberOfShrinksLeft = @NumberOfShrinks
	WHERE NumberOfShrinksLeft IS NULL;

	SELECT TOP 1 *
	FROM ##DbStats
	ORDER BY Id_DbStats DESC;

	IF @ShrinkIncrementInMb > 0
	BEGIN
		WHILE @NumberOfShrinks > 0
		BEGIN
			SET @FreeMbRounded = (
					SELECT FreeMbRounded
					FROM ##DbStats
					WHERE Id_DbStats = (
							SELECT MAX(Id_DbStats)
							FROM ##DbStats
							)
					);

			IF @TargetFreeMbRounded > (@FreeMbRounded - @ShrinkIncrementInMb)
			BEGIN
				SET @ShrinkToInMb = @TargetFreeMbRounded;
			END
			ELSE
			BEGIN
				SET @ShrinkToInMb = (@FreeMbRounded - @ShrinkIncrementInMb);
			END

			SET @ShrinkSql = '
USE ' + QUOTENAME(@DbName) + ';

DBCC SHRINKFILE (
		N' + '''' + @DbFileName + '''' + '
		,' + CONVERT(VARCHAR(12), @ShrinkToInMb) + '
		) WITH NO_INFOMSGS;
';

			EXEC (@ShrinkSql);

			SET @NumberOfShrinks = @NumberOfShrinks - 1;
			SET @Sql = '
USE ' + QUOTENAME(@DbName) + ';
SELECT LogicalName
,FreeMbRounded
,' + CONVERT(VARCHAR(255), @TargetFreeMbRounded) + '
,FLOOR(FreeMbRounded) - ' + CONVERT(VARCHAR(255), @TargetFreeMbRounded) + ' AS DiffMb
,' + CONVERT(VARCHAR(255), @ShrinkIncrementInMb) + '
,' + CONVERT(VARCHAR(255), @NumberOfShrinks) + '
FROM (
SELECT NAME AS LogicalName
	,CAST(ROUND(CONVERT(DECIMAL(12, 1), (size - FILEPROPERTY(NAME, ''SpaceUsed'')) / 128.0), 0) AS INT) AS FreeMbRounded
	,CAST(ROUND(CONVERT(DECIMAL(12, 1),((((size - FILEPROPERTY(NAME, ''SpaceUsed'')) / 128.0) / 100.0) * ' + @SpaceToLeavePercentage + ')), 0) AS INT) AS TargetFreeMbRounded
FROM ' + QUOTENAME(@DbName) + '.sys.database_files 
WHERE type_desc = ''' + @RowsOrLog + '''
) AS a
';

			INSERT INTO ##DbStats (
				LogicalName
				,FreeMbRounded
				,TargetFreeMbRounded
				,DiffMb
				,ShrinkIncrementInMb
				,NumberOfShrinksLeft
				)
			EXEC (@Sql);

			SELECT TOP 1 *
			FROM ##DbStats
			ORDER BY Id_DbStats DESC;
		END
	END
	ELSE
	BEGIN
		SELECT *
		FROM ##DbStats;
	END

	DROP TABLE ##DbStats
END;
		--DROP TABLE ##DbStats
/*CHECK TO RUN IN ANOTHER SSMS WINDOW*/
/*
SELECT TOP 1 Id_DbStats
	,LogicalName
	,FreeMbRounded
	,TargetFreeMbRounded
	,DiffMb
	,ShrinkIncrementInMb
	,NumberOfShrinksLeft
FROM ##DbStats WITH (NOLOCK)
ORDER BY Id_DbStats DESC;
*/
/*CHECK PROGRESS*/
/*
SELECT percent_complete AS PercentageComplete
	,start_time AS StartTime
	,[status] AS CurrentStatus
	,command AS Command
	,estimated_completion_time AS EstimatedCompletionTime
	,cpu_time AS CpuTime
	,total_elapsed_time AS TotalElapsedTime
FROM sys.dm_exec_requests
WHERE command = 'DbccFilesCompact'
*/
/*
DROP TABLE ##DbStats
*/

 

If you found this post helpful please like, comment and share.

How to grant a User SELECT permission on multiple tables in SQL Server using T-SQL and Excel Formulas

Maybe you get emails from time to time saying something like “Hey can you grant so-and-so SELECT permission on” and then they list a few dozen tables.

There are a couple of issues with this.

Firstly, you shouldn’t be getting that as a simple email; it should come in as a formal access request.

Secondly, user access should be defined in roles (or group logins if you want to manage access at an Active Directory level) that tie back to departments and seniority. Different roles have different permissions on different objects. This makes the subject of access more manageable and easily auditable. The access request should then be “can you add so-and-so to this role”, and ideally people should only exist in one role.
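
As a rough sketch of the role-based approach (the role, schema and login names below are made up for illustration; ALTER ROLE … ADD MEMBER is SQL Server 2012 onwards, use sp_addrolemember on older versions):

USE [DatabaseName];
GO

CREATE ROLE [ClaimsReporting];
GO

GRANT SELECT ON SCHEMA::[dbo] TO [ClaimsReporting];
GO

ALTER ROLE [ClaimsReporting] ADD MEMBER [DOMAIN\SoAndSo];
GO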

Thirdly, this would be really annoying and, depending on the length of the table list, take too long to do via the SSMS GUI.

So if you are getting emails like the above, try to move your organisation along with regard to the first two points. But to help you action the email I’ve created the Excel file DbaScripts_GrantSelect, which can be downloaded here.

[Screenshot: the DbaScripts_GrantSelect Excel sheet]

The DbaScripts_GrantSelect file allows you to enter the Login (user name), Database name and Schema name in the first three columns. You can then copy and paste the table names into the fourth column, called Table. Drag the first three columns down for as many table name entries as there are. Then drag the SQL Command formula column down in the same way and this will create the commands to grant SELECT on the tables for the user specified.

If you can’t download the file above you can recreate it.

In an empty Excel sheet write the following into the cells as directed.

A1: Login
B1: Database
C1: Schema
D1: Table
E1: SQL Command

In E2 paste the following formula:

="GRANT SELECT ON ["&B2&"].["&C2&"].["&D2&"] TO ["&A2&"];"
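
If you would rather skip Excel altogether, a similar list of commands can be generated in T-SQL. A sketch (the database, schema and login are placeholders; uncomment and edit the table filter for a specific list of tables):

USE [DatabaseName];
GO

SELECT 'GRANT SELECT ON ' + QUOTENAME(s.NAME) + '.' + QUOTENAME(t.NAME) + ' TO [DOMAIN\SoAndSo];' AS [--SqlCommand]
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE s.NAME = 'dbo'
--AND t.NAME IN ('TableOne', 'TableTwo')
ORDER BY t.NAME ASC;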

 

Find and drop every table in a SQL Server Database that contains specific text in the table name

GDPR compliance has given people working in the DBA space the exciting opportunity to drop tables! Tables once considered gold mines are now being treated like live grenades that management want rid of fast. This is a prudent stance, because if a table contains personal data and it’s not being used for some vital business process, why keep it around now? If the wrong person somehow gained access to the data it could have severe reputational and financial consequences. Of course a business should never have kept unneeded personal data, but in truth most companies have gathered as much data as they could up to this point, even if it wasn’t used, on the assumption that it might be needed later.

Before dropping tables, though, it is still good practice to rename each table first for a period of time to make sure nothing breaks. Once a sufficient amount of time has passed and you are confident the tables can be dropped without adverse effects, the script below can help drop the newly unwanted tables.
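
The rename itself can be done with sp_rename, for example (the table name here is hypothetical and the _DropThis_ prefix is just one possible convention):

USE [DatabaseName];
GO

EXEC sp_rename 'dbo.CustomerContactHistory', '_DropThis_CustomerContactHistory';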

If you’ve followed a standard naming convention for renaming unwanted tables, for example prefixing all the targeted tables with “_DropThis_” or something to that effect, this script will provide commands to:

  • Count the number of rows in each targeted table
  • Drop each targeted table
  • Confirm each targeted table has been dropped.

Simply find & replace the text “DatabaseName” with the name of the database that contains the tables to be dropped and “TextTarget” with the text each table name should contain and run the script.

Running the script will produce two result sets. The first will contain the commands to count the number of rows in each targeted table. The second will contain the commands to drop each targeted table. Open new windows in SSMS, referencing the database to run the commands against, and copy and paste the contents of each result set into the windows. Run the count script first, obviously, before the drop script. Once the drop script has been run you can run the commented-out query at the end of the script below to confirm the tables have been dropped.

You can take screenshots and/or copy and paste the results of the commands (and the commands themselves) into an email or text document as a simple report to confirm the number of rows dropped and that the tables have been dropped.

/*
Find & Replace:
DatabaseName
TextTarget
*/
USE [DatabaseName];
GO

DECLARE @keyword AS VARCHAR(MAX);
DECLARE @MaxRow AS INT;

SET @keyword = 'TextTarget'

IF OBJECT_ID('tempdb.dbo.#TableStats', 'U') IS NOT NULL
	DROP TABLE #TableStats;

IF OBJECT_ID('tempdb.dbo.#CountCommand', 'U') IS NOT NULL
	DROP TABLE #CountCommand;

/*
Confirm Table Existence
*/
SELECT s.NAME AS SchemaName
	,t.*
INTO #TableStats
FROM sys.tables AS t
LEFT JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE t.NAME LIKE '%' + @keyword + '%'
ORDER BY s.NAME ASC
	,t.NAME ASC;

/*
Create Count SQL Commands
*/
SELECT ROW_NUMBER() OVER (
		ORDER BY [--SqlCommand]
		) AS Row#
	,[--SqlCommand]
INTO #CountCommand
FROM (
	SELECT 'SELECT ''' + s.NAME + '.' + t.NAME + ''' AS SchemaTableName, COUNT (*) AS [TableRowCount] FROM ' + QUOTENAME(s.NAME) + '.' + QUOTENAME(t.NAME) + ' WITH (NOLOCK) UNION ALL' AS [--SqlCommand]
	FROM sys.tables AS t
	LEFT JOIN sys.schemas AS s ON t.schema_id = s.schema_id
	WHERE t.NAME LIKE '%' + @keyword + '%'
	) AS SqlCommand
ORDER BY [--SqlCommand];

SET @MaxRow = (
		SELECT MAX(Row#)
		FROM #CountCommand
		);

UPDATE #CountCommand
SET [--SqlCommand] = LEFT([--SqlCommand], LEN([--SqlCommand]) - 9)
WHERE Row# = @MaxRow;

UPDATE #CountCommand
SET [--SqlCommand] = [--SqlCommand] + ';'
WHERE Row# = @MaxRow;

SELECT *
FROM #CountCommand
ORDER BY [--SqlCommand];

/*
Create Drop Table SQL Commands
*/
SELECT 'DROP TABLE ' + QUOTENAME(s.NAME) + '.' + QUOTENAME(t.NAME) + ';' AS '--DropTableSqlCommand'
FROM sys.tables AS t
LEFT JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE t.NAME LIKE '%' + @keyword + '%'
ORDER BY s.NAME ASC
	,t.NAME ASC;
/*
Run separately
Confirm Tables are dropped
*/
	/*
SELECT s.NAME AS SchemaName
	,t.NAME AS TableName
FROM sys.tables AS t
LEFT JOIN sys.schemas AS s ON t.schema_id = s.schema_id
INNER JOIN #TableStats AS ts ON s.NAME = ts.SchemaName
	AND t.NAME = ts.NAME
ORDER BY s.NAME ASC
	,t.NAME ASC;
*/

 

How to tell if Virtual Machine backups are running full database backups on a SQL Server instance

Some Virtual Machine backup solutions use the SQL Server VSS writer service to make SQL Server database backups as part of the VM backup.

You may not have had visibility of this process, so to confirm whether it is happening in your environment open SQL Server Management Studio (SSMS), right click on the instance name in Object Explorer, click New Query and run the script below.

This query looks for entries in the physical_device_name field of msdb.dbo.backupmediafamily that are not directory paths.

SELECT @@SERVERNAME AS ServerName
	,bs.backup_set_id AS BackupSetId
	,bs.database_name AS DbName
	,bs.backup_start_date AS BackupStartDate
	,bs.backup_finish_date AS BackupFinishDate
	,CAST(CAST(bs.backup_size / 1000000 AS INT) AS VARCHAR(14)) + ' ' + 'MB' AS [Size]
	,bs.[type] AS BackupType
	,CASE bs.[type]
		WHEN 'D'
			THEN 'Full Backup'
		WHEN 'I'
			THEN 'Differential Backup'
		WHEN 'L'
			THEN 'TLog Backup'
		WHEN 'F'
			THEN 'File or filegroup'
		WHEN 'G'
			THEN 'Differential file'
		WHEN 'P'
			THEN 'Partial'
		WHEN 'Q'
			THEN 'Differential Partial'
		END AS BackupTypeDescription
	,bmf.physical_device_name AS BackupFilePath
	,CAST(bs.first_lsn AS VARCHAR(50)) AS FirstLogSeqNo
	,CAST(bs.last_lsn AS VARCHAR(50)) AS LastLogSeqNo
FROM msdb.dbo.backupset AS bs WITH (NOLOCK)
INNER JOIN msdb.dbo.backupmediafamily AS bmf WITH (NOLOCK) ON bs.media_set_id = bmf.media_set_id
WHERE bs.[type] = 'D'
	AND bmf.physical_device_name NOT LIKE '%\%'
ORDER BY bs.database_name ASC
	,bs.backup_set_id ASC;

 

For the field BackupFilePath, if you see GUID-looking entries like {AQ8F5957-2804-4X05-9EE5-109EB87EBCAB} then the VM backup is probably running full database backups on the SQL Server.
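
Another way to confirm this is the is_snapshot flag in msdb.dbo.backupset, which is set to 1 for backups taken through the VSS snapshot interface. A complementary check might look like:

SELECT bs.database_name AS DbName
	,bs.backup_start_date AS BackupStartDate
	,bmf.physical_device_name AS BackupFilePath
FROM msdb.dbo.backupset AS bs WITH (NOLOCK)
INNER JOIN msdb.dbo.backupmediafamily AS bmf WITH (NOLOCK) ON bs.media_set_id = bmf.media_set_id
WHERE bs.is_snapshot = 1
ORDER BY bs.backup_start_date DESC;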

It might be time to review your recovery strategy as the VM backups could be covering the same ground as the SQL Server backups, creating a lot of expensive redundancy due to unnecessary duplication of work.

How to schedule a job to restore the last backup made of a SQL Server database

This post provides you with a script that will generate a restore script for a database using the latest full backup file that exists in a directory (no need to state the filename explicitly).

You need to provide the following at the start of the script:

  • The target database i.e. the database you will restore to
  • The directory where the backup file is saved

If you are using the excellent Ola Hallengren maintenance solution (https://ola.hallengren.com) the directory path will look something like below. If you’re not using Ola’s solution, you should be.

\\ServerWhereBackupsAreSaved\DriveName\InstanceName\TargetDatabase\Full\

This restore script is designed to work with Ola’s solution as it segregates the backup directory structure such that each database has an allocated folder and each full backup file is named with the date and time of the file creation.

The restore script determines which backup file is the latest based on the maximum file name. So for the script to work it is assumed you have an appropriate backup strategy (i.e. using Ola’s solution) where backup types are segregated into different folders, backup names have a date reference and the backup location is dedicated to backups and nothing else, i.e. no trash files in the location.

Some use cases for this solution might be:

  • Restoring a nightly backup to another instance for reporting purposes
  • Restoring backups to a development environment
  • Restoring backups to another server to test the backups

You can use the logic in a stored procedure or as the T-SQL in a job step and schedule accordingly.
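
As a sketch of the job-step approach (the job name, schedule and command text are all assumptions; the command would be the restore logic below or a stored procedure wrapping it):

USE [msdb];
GO

EXEC dbo.sp_add_job @job_name = N'Restore Latest Backup To Reporting';

EXEC dbo.sp_add_jobstep @job_name = N'Restore Latest Backup To Reporting'
	,@step_name = N'Run restore script'
	,@subsystem = N'TSQL'
	,@database_name = N'master'
	,@command = N'/* paste the restore logic below here, or EXEC a stored procedure that wraps it */';

EXEC dbo.sp_add_jobschedule @job_name = N'Restore Latest Backup To Reporting'
	,@name = N'Nightly 2am'
	,@freq_type = 4 /* daily */
	,@freq_interval = 1
	,@active_start_time = 020000; /* 02:00:00 */

EXEC dbo.sp_add_jobserver @job_name = N'Restore Latest Backup To Reporting';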

/*
You need to reference the following:
* The target database i.e. the database you will restore to.
* The directory where the backup file is saved.
If you are using the Ola Hallengren backup scripts the directory path will look 
something like below.
\\ServerWhereBackupsAreSaved\DriveName\InstanceName\TargetDatabase\Full\

Find & Replace the follow text for the target database and directory:

TARGET_DATABASE
DIR_PATH
*/
/*
Declare Variables
*/
DECLARE @DatabaseToRestore AS VARCHAR(MAX);
DECLARE @DirToSearch AS VARCHAR(MAX);
DECLARE @ShellCommand AS VARCHAR(MAX);
DECLARE @BackupFile AS VARCHAR(MAX);
DECLARE @Sql AS VARCHAR(MAX);

/*
Set User Variables
*/
SET @DatabaseToRestore = 'TARGET_DATABASE';
SET @DirToSearch = 'DIR_PATH';
SET @ShellCommand = 'dir ' + @DirToSearch;

/*
Create Temp Table To Hold xp_cmdshell Output
*/
IF OBJECT_ID('tempdb..#DirList') IS NOT NULL
	DROP TABLE #DirList;

CREATE TABLE #DirList (
	Id INT identity(1, 1)
	,line NVARCHAR(1000)
	);

/*
Enable Advanced Options To Enable xp_cmdshell Temporarily
*/
EXEC master.dbo.sp_configure 'show advanced options'
	,1;

RECONFIGURE
WITH OVERRIDE;

EXEC master.dbo.sp_configure 'xp_cmdshell'
	,1;

RECONFIGURE
WITH OVERRIDE;

/*
Run The Shell Command To Capture And Write Dir Info To Temp Table
*/
SET @Sql = '
INSERT INTO #DirList (line)
EXEC xp_cmdshell ' + '''' + @ShellCommand + '''' + ';';

EXEC (@Sql);

EXEC master.dbo.sp_configure 'xp_cmdshell'
	,0;

/*
Disable Advanced Options And xp_cmdshell Again
*/
RECONFIGURE
WITH OVERRIDE;

EXEC master.dbo.sp_configure 'show advanced options'
	,0;

RECONFIGURE
WITH OVERRIDE;

/*
Get The Last Backup File Name And Save To A Variable
*/
WITH CTE
AS (
	SELECT SUBSTRING(line, 37, 100) [FileName]
	FROM #DirList
	WHERE Id > (
			SELECT MIN(Id)
			FROM #DirList
			WHERE line LIKE '%<DIR>%..%'
			)
		AND Id < (
			SELECT MAX(Id) - 2
			FROM #DirList
			)
	)
SELECT @BackupFile = [FileName]
FROM CTE
WHERE [FileName] = (
		SELECT MAX(FileName)
		FROM CTE
		);

/*
Create The Restore Script
*/
SET @BackupFile = @DirToSearch + @BackupFile
SET @Sql = '
ALTER DATABASE ' + QUOTENAME(@DatabaseToRestore) + ' SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE ' + QUOTENAME(@DatabaseToRestore) + ' FROM DISK = ' + '''' + @BackupFile + '''' + '
WITH NORECOVERY
,REPLACE;
RESTORE DATABASE ' + QUOTENAME(@DatabaseToRestore) + ' WITH RECOVERY;
'

/*
To Directly Execute The Script Uncomment The EXEC Statement And Delete The Select Statement
*/
/*
EXEC (@Sql)
*/
SELECT @Sql