How to assess a SQL Server instance for GDPR compliance by writing every table and column to Excel

This post uses the script I had written before here LINK, modified slightly to include a row count for each table. If you are a DBA you are likely assisting compliance officers in finding personally identifiable data within the databases at this time. This script will allow you to provide them with a record of every database, table and column on an entire instance. The compliance officer can then sift through all the columns and highlight any that look like they contain personal data for further investigation.

Below is a SQL query that will return the following metadata about each table from each database located on a SQL Server instance:

  • ServerName
  • DatabaseName
  • SchemaName
  • TableName
  • CountOfRows
  • ColumnName
  • KeyType

The output is provided in table format along with these additional formatted lines of text, which can be used as queries or as parts of queries.

  • A Select table query
  • A Count of table rows query
  • A Select column query
  • Each column bracketed
  • Each table and column bracketed

The table returned by the query can be exported to Excel. Applying Excel's filter option to the columns of the table makes finding and selecting specific tables and columns very easy.

This process can be repeated for every SQL Server instance used by the business to generate a single master Excel file of mapped servers, allowing the user to quickly find any table or column available to the organization.

Applying some colour coding like below adds to the ease of use.

[Image: Excel file with mapped database server structure]

How to use:

Simply open SQL Server Management Studio and, from Object Explorer, right-click on the server name and select New Query. This will open a query window set to the master database of the server. Copy and paste the SQL below into this window and execute it. When the query is finished you will have created the table described above.

/*
SCRIPT UPDATED
20180316
*/

USE [master]
GO

/*DROP TEMP TABLES IF THEY EXIST*/
IF OBJECT_ID('tempdb..#DatabaseList') IS NOT NULL
	DROP TABLE #DatabaseList;

IF OBJECT_ID('tempdb..#TableStructure') IS NOT NULL
	DROP TABLE #TableStructure;

IF OBJECT_ID('tempdb..#ErrorTable') IS NOT NULL
	DROP TABLE #ErrorTable;

IF OBJECT_ID('tempdb..#MappedServer') IS NOT NULL
	DROP TABLE #MappedServer;
	
IF OBJECT_ID('tempdb..#TableCount') IS NOT NULL
	DROP TABLE #TableCount;
	
IF OBJECT_ID('tempdb..#Count') IS NOT NULL
	DROP TABLE #Count;

DECLARE @ServerName AS SYSNAME
DECLARE @Count INT

SET @ServerName = @@SERVERNAME

CREATE TABLE #DatabaseList (
	Id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY
	,ServerName SYSNAME
	,DbName SYSNAME
	);

CREATE TABLE [#TableStructure] (
	[DbName] SYSNAME
	,[SchemaName] SYSNAME
	,[TableName] SYSNAME
	,[ColumnName] SYSNAME
	,[KeyType] CHAR(7)
	) ON [PRIMARY];
	
CREATE TABLE [#TableCount] (
	[Id_TableCount] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY
	,[ServerName] SYSNAME
	,[DatabaseName] SYSNAME
	,[SchemaName] SYSNAME
	,[TableName] SYSNAME
	,[SQLCommand] VARCHAR(MAX)
	,[TableCount] INT
	);

CREATE TABLE #Count (ReturnedCount INT);

/*THE ERROR TABLE WILL STORE THE DYNAMIC SQL THAT DID NOT WORK*/
CREATE TABLE [#ErrorTable] ([SqlCommand] VARCHAR(MAX)) ON [PRIMARY];

/*
A LIST OF DISTINCT DATABASE NAMES IS CREATED
THESE TWO COLUMNS ARE STORED IN THE #DatabaseList TEMP TABLE
THIS TABLE IS USED IN A FOR LOOP TO GET EACH DATABASE NAME
*/
INSERT INTO #DatabaseList (
	ServerName
	,DbName
	)
SELECT @ServerName
	,NAME AS DbName
FROM master.dbo.sysdatabases WITH (NOLOCK)
WHERE NAME <> 'tempdb'
ORDER BY NAME ASC

/*VARIABLES ARE DECLARED FOR USE IN THE FOLLOWING FOR LOOP*/
DECLARE @sqlCommand AS VARCHAR(MAX)
DECLARE @DbName AS SYSNAME
DECLARE @i AS INT
DECLARE @z AS INT

SET @i = 1
SET @z = (
		SELECT COUNT(*) + 1
		FROM #DatabaseList
		)

/*WHILE 1 IS LESS THAN THE NUMBER OF DATABASE NAMES IN #DatabaseList*/
WHILE @i < @z
BEGIN
	/*GET NEW DATABASE NAME*/
	SET @DbName = (
			SELECT [DbName]
			FROM #DatabaseList
			WHERE Id = @i
			)
	/*CREATE DYNAMIC SQL TO GET EACH TABLE NAME AND COLUMN NAME FROM EACH DATABASE*/
	SET @sqlCommand = 'USE [' + @DbName + '];' + '

INSERT INTO [#TableStructure]
SELECT DISTINCT ' + '''' + @DbName + '''' + ' AS DbName
	,SCHEMA_NAME(SCHEMA_ID) AS SchemaName
	,T.NAME AS TableName	
	,C.NAME AS ColumnName
	,CASE 
		WHEN OBJECTPROPERTY(OBJECT_ID(iskcu.CONSTRAINT_NAME), ''IsPrimaryKey'') = 1 
			THEN ''Primary'' 
		WHEN OBJECTPROPERTY(OBJECT_ID(iskcu.CONSTRAINT_NAME), ''IsForeignKey'') = 1 
			THEN ''Foreign''
		ELSE NULL 
		END AS ''KeyType''
FROM SYS.TABLES AS t WITH (NOLOCK)
INNER JOIN SYS.COLUMNS C ON T.OBJECT_ID = C.OBJECT_ID
LEFT JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS iskcu WITH (NOLOCK) 
ON SCHEMA_NAME(SCHEMA_ID) = iskcu.TABLE_SCHEMA 
	AND T.NAME = iskcu.TABLE_NAME
	AND C.NAME = iskcu.COLUMN_NAME
ORDER BY SchemaName ASC
	,TableName ASC
	,ColumnName ASC;
';

	/*ERROR HANDLING*/
	BEGIN TRY
		EXEC (@sqlCommand)
	END TRY

	BEGIN CATCH
		INSERT INTO #ErrorTable
		SELECT (@sqlCommand)
	END CATCH

	SET @i = @i + 1
END

/*
JOIN THE TEMP TABLES TOGETHER TO CREATE A MAPPED STRUCTURE OF THE SERVER
ADDITIONAL FIELDS ARE ADDED TO MAKE SELECTING TABLES AND FIELDS EASIER
*/
SELECT DISTINCT @@SERVERNAME AS ServerName
	,DL.DbName
	,TS.SchemaName
	,TS.TableName
	,TS.ColumnName
	,TS.[KeyType]
	,',' + QUOTENAME(TS.ColumnName) AS BracketedColumn
	,',' + QUOTENAME(TS.TableName) + '.' + QUOTENAME(TS.ColumnName) AS BracketedTableAndColumn
	,'SELECT * FROM ' + QUOTENAME(DL.DbName) + '.' + QUOTENAME(TS.SchemaName) + '.' + QUOTENAME(TS.TableName) + ' --WHERE --GROUP BY --HAVING --ORDER BY' AS [SelectTable]
	,'SELECT ' + QUOTENAME(TS.TableName) + '.' + QUOTENAME(TS.ColumnName) + ' FROM ' + QUOTENAME(DL.DbName) + '.' + QUOTENAME(TS.SchemaName) + '.' + QUOTENAME(TS.TableName) + ' --WHERE --GROUP BY --HAVING --ORDER BY' AS [SelectColumn]
	,'SELECT COUNT(*) FROM ' + QUOTENAME(DL.DbName) + '.' + QUOTENAME(TS.SchemaName) + '.' + QUOTENAME(TS.TableName) + ' WITH (NOLOCK)' AS [PerformTableCount]
INTO #MappedServer
FROM [#DatabaseList] AS DL
INNER JOIN [#TableStructure] AS TS ON DL.DbName = TS.DbName
ORDER BY DL.DbName ASC
	,TS.SchemaName ASC
	,TS.TableName ASC
	,TS.ColumnName ASC

/*
HOUSE KEEPING
*/
IF OBJECT_ID('tempdb..#DatabaseList') IS NOT NULL
	DROP TABLE #DatabaseList;

IF OBJECT_ID('tempdb..#TableStructure') IS NOT NULL
	DROP TABLE #TableStructure;
	
INSERT INTO #TableCount (
	[ServerName]
	,[DatabaseName]
	,[SchemaName]
	,[TableName]
	,[SQLCommand]
	)
SELECT DISTINCT [ServerName]
	,[DbName]
	,[SchemaName]
	,[TableName]
	,[PerformTableCount]
FROM #MappedServer
ORDER BY [ServerName] ASC
	,[DbName] ASC
	,[SchemaName] ASC
	,[TableName] ASC

SET @i = 1
SET @z = (
		SELECT COUNT(*) + 1
		FROM #TableCount
		)
	
WHILE @i < @z
BEGIN
	SET @SQLCommand = (
			SELECT SQLCommand
			FROM #TableCount
			WHERE Id_TableCount = @i
			)

	--ERROR HANDLING
	BEGIN TRY
		INSERT INTO #Count
		EXEC (@SqlCommand)
	END TRY

	BEGIN CATCH
		INSERT INTO #ErrorTable
		SELECT (@sqlCommand)
	END CATCH

	SET @Count = (
			SELECT ReturnedCount
			FROM #Count
			)

	TRUNCATE TABLE #Count

	UPDATE #TableCount
	SET TableCount = @Count
	WHERE Id_TableCount = @i;

	SET @i = @i + 1
END

SELECT *
FROM #ErrorTable;

IF OBJECT_ID('tempdb..#ErrorTable') IS NOT NULL
	DROP TABLE #ErrorTable;

/*
THE DATA RETURNED CAN NOW BE EXPORTED TO EXCEL
USING A FILTERED SEARCH WILL NOW MAKE FINDING FIELDS A VERY EASY PROCESS
*/
SELECT DISTINCT ms.ServerName
	,ms.DbName
	,ms.SchemaName
	,ms.TableName
	,ts.TableCount AS CountOfRows
	,ms.ColumnName
	,ms.KeyType
	,ms.BracketedColumn
	,ms.BracketedTableAndColumn
	,ms.SelectColumn
	,ms.SelectTable
FROM #MappedServer AS ms
LEFT JOIN #TableCount AS ts ON ms.ServerName = ts.ServerName
AND ms.DbName = ts.DatabaseName
AND ms.SchemaName = ts.SchemaName
AND ms.TableName = ts.TableName
ORDER BY ms.DbName ASC
	,ms.SchemaName ASC
	,ms.TableName ASC
	,ms.ColumnName ASC;
	
IF OBJECT_ID('tempdb..#MappedServer') IS NOT NULL
	DROP TABLE #MappedServer;

IF OBJECT_ID('tempdb..#TableCount') IS NOT NULL
	DROP TABLE #TableCount;
	
IF OBJECT_ID('tempdb..#Count') IS NOT NULL
	DROP TABLE #Count;
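
To give the compliance officer a head start, the final SELECT in the script can also be filtered for column names that commonly hold personal data. The LIKE patterns below are only illustrative assumptions about naming conventions, so adjust them to suit your own schemas, and run the filter before the housekeeping section drops #MappedServer.

/*
ILLUSTRATIVE ONLY: filter the mapped structure for likely personal data columns.
Run before #MappedServer is dropped. The LIKE patterns are assumptions, not a definitive PII list.
*/
SELECT ms.DbName
	,ms.SchemaName
	,ms.TableName
	,ms.ColumnName
FROM #MappedServer AS ms
WHERE ms.ColumnName LIKE '%name%'
	OR ms.ColumnName LIKE '%email%'
	OR ms.ColumnName LIKE '%phone%'
	OR ms.ColumnName LIKE '%address%'
	OR ms.ColumnName LIKE '%birth%'
ORDER BY ms.DbName ASC
	,ms.SchemaName ASC
	,ms.TableName ASC
	,ms.ColumnName ASC;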

 

How to solve the SQL Server error ‘String or binary data would be truncated’

The ‘String or binary data would be truncated’ error will occur if an insert or update statement is trying to put more characters into a column than the column’s definition allows. For example, trying to write an email address of 255 characters into a table where the email column has been defined with only 40 characters.

The easy fix is to assign more characters to the column or columns you have determined are experiencing the problem. The more complicated but potentially necessary fix might be to change the logic or introduce validation at the source of data entry.
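
For example, widening a column is a one line change. The table and column names below are purely hypothetical; keep the column’s existing nullability when you run it.

/*Hypothetical example: widen an email column from VARCHAR(40) to VARCHAR(255)*/
ALTER TABLE dbo.Customer ALTER COLUMN Email VARCHAR(255) NULL;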

Finding the columns experiencing the problem, however, can be time consuming.

( . . . without the little script below of course)

SQL Server will kindly direct you to the stored procedure or insert/update statement that is experiencing the problem. However it will not pinpoint the exact column or columns that cannot be written to. The pain then is determining where the data won’t fit.

To speed things up, take the entire query or query section you know to be causing the problem and write its results into a temp table called #temp, i.e. SELECT * INTO #temp FROM SomeTable

Once the data has been written to the temp table #temp, run the script below in the same window.

DECLARE @sql VARCHAR(MAX)

SET @sql = (
		SELECT (
				SELECT ',MAX(LEN(' + NAME + ')) AS [' + NAME + ']'
				FROM tempdb.sys.columns
				WHERE object_id = object_id('tempdb..#temp')
				FOR XML PATH('')
				)
		)
SET @sql = 'SELECT ' + RIGHT(@sql, LEN(@sql) - 1) + ' FROM #temp'

EXEC (@sql)

This will output results giving you the max character length of each field.

You can then compare these results to the column definitions of the destination table that the data could not be written to. The source of the error will be wherever the max character length returned is greater than the character limit assigned on the destination table.

For example, the last time I used this query it easily highlighted that an agent had written a customer’s full address to the county name field, which had a limit of 30 characters.
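
If it helps, the defined lengths of the destination table’s columns can be pulled out for that comparison with a query along these lines. The name dbo.DestinationTable is just a placeholder for whichever table the insert or update was failing against.

SELECT COLUMN_NAME
	,DATA_TYPE
	,CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' /*placeholder schema*/
	AND TABLE_NAME = 'DestinationTable' /*placeholder table*/
ORDER BY ORDINAL_POSITION;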

 

How to size transaction log files appropriately and reduce the number of virtual log files

Sizing transaction log files and resolving the issue of having too many virtual log files may or may not be caused by the same problem, but both have similar solutions revolving around the same steps. As such the script provided below can resolve both, although the application of the script differs slightly.

Appropriately sizing the Transaction Log (TLog) File

Ideally, TLog files should be created at their desired size at installation rather than being allowed to grow slowly over time.

Typically though the DBA will not know how large or frequent the transactions will be prior to the database becoming active in the production environment.

Or the DBA might be new to the organisation and may not be able to tell if the TLog file size is appropriate for the typical transactions the database handles or is the result of some abnormally large transactions or failed TLog backups.

All this can of course be investigated somewhat but if the database transactions are in the kilobytes and transaction frequency is low then you could also take the pragmatic approach outlined below:

  1. Backup the log until it gets truncated
  2. Shrink the log as much as possible
  3. Allow the file to grow
  4. Record the maximum size of the log
  5. Backup the log until it gets truncated
  6. Shrink the log as much as possible
  7. Manually expand the log to the maximum size recorded in step 4

Carrying out the steps above will also resolve any issues with excessive Virtual Log Files. This is dealt with in more detail below.

The script provided at the end of this article can be utilized to achieve the steps above; see the instructions under “How to use the script” below.

Before proceeding with this approach however ensure the following points have been considered as these will minimize the impact of growing the files on the databases:

  • That the autogrowth increment for the transaction log is suitably large, @DefaultFileGrowth is set at 128 Mb. This value or higher is recommended.
  • Appropriate restriction settings for growth, preferably unrestricted
  • Instant file initialization is on.

Review the following article to set up instant file initialization.

https://www.brentozar.com/blitz/instant-file-initialization/

You may already know that the TLog file has been appropriately sized but there are too many VLFs, in which case the following section outlines how to resolve the VLF issue.

Dealing with Virtual Log Files (VLFs)

Each Transaction Log (TLog) file is divided logically into smaller segments called VLFs. VLFs are the unit of truncation for the TLog. When a VLF no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions.

SQL Server should not have an excessive number of VLFs inside the TLog. Having a large number of small VLFs can cause the following problems:

  • Slow down the recovery process on startup or after restoring a backup
  • Slow down TLog backups
  • Affect insert/update/delete performance

What constitutes too many VLFs is subjective; however, Microsoft flags 1000+ VLFs as an issue.

The number of VLFs can grow based on the autogrowth settings for the TLog file and how often active transactions are written to disk. The problem of too many VLFs is often the result of accepting the default settings for initial log file size and autogrowth increment when the database is created.

To check how many VLFs a database has, run the command below against the database:

DBCC LOGINFO;

Each row returned represents a VLF.
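
If the instance is on SQL Server 2016 SP2 or later there is also a DMV alternative, sys.dm_db_log_info, which returns one row per VLF and can simply be counted; a quick sketch:

/*One row per VLF for the current database (SQL Server 2016 SP2 and later)*/
SELECT COUNT(*) AS VlfCount
FROM sys.dm_db_log_info(DB_ID());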

Resolving too many VLFs unfortunately requires some downtime. The fix is to shrink and regrow the log file. This is a blocking operation and I would suggest that once complete you run a full backup on the database(s), as the shrink will break the continuity of the Full and TLog backup schedule, which would cause the TLog backups to fail.

Taking the following steps will reduce the number of VLFs in the TLog of a database:

  1. Record the maximum size of the log
  2. Backup the log until it gets truncated
  3. Shrink the log as much as possible
  4. Manually expand the log to the maximum size recorded in the first step

The script provided at the end of this article can be utilized to achieve the steps above; see the instructions under “How to use the script” below.

How to use the script:

The script below will output the TSQL to run against all the databases in your instance. If you want to address every database on the instance run the entire script during scheduled downtime. To run against a specific database simply run the snippet that relates to that database.

The role of Backups

The TLog needs to be backed up and truncated prior to running the shrink to get the file as small as possible.

Backup Option 1:

The script will back up the TLog(s) if you provide a value for the @BackupLocation variable. This is defaulted to ‘C:\SQL_Server\Backup\’.

Backup Option 2: 

Alternatively you can set @BackupLocation = NULL and run your own TLog backup maintenance plan or maintenance script prior to running the script.

Future Proofing by adding a margin for growth:

The script variable @Margin will allow you to set a percentage of how much larger you want the new TLog file size to be, based on the current size. So for a database with a current TLog file size of 100 Mb, running the script with @Margin = 10 will output TSQL to change the TLog file size to 110 Mb. This increases the TLog file size by 10%, providing 10% additional disk space before the TLog would need to grow again.
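
As a rough standalone sketch of that calculation, assuming a current TLog size of 100 Mb and a margin of 10%:

DECLARE @CurrentLogSizeMb INT = 100 /*assumed current TLog file size in Mb*/
	,@ExampleMargin DECIMAL(5, 2) = 10 /*10% headroom*/

/*Mirrors the CEILING based calculation used in the script below; returns 110*/
SELECT @CurrentLogSizeMb + CEILING(@CurrentLogSizeMb * (@ExampleMargin / 100)) AS NewLogSizeMb;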

Appropriate TLog File Size Instructions:

Once you have backups arranged using one of the options above, set the variable @LogSizeMb = 128 and run the script. This will start the TLog file size at 128 Mb.
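
In practice that just means editing the variable assignment near the top of the script before running it, something like the following, where the backup path is only the script's default and should be changed (or set to NULL) to suit your environment:

/*Initial sizing run: no margin, start every TLog at 128 Mb*/
DECLARE @Margin DECIMAL(5, 2)
	,@LogSizeMb INT
	,@BackupLocation VARCHAR(255)
	,@DefaultFileGrowth VARCHAR(10)

SELECT @Margin = 0
	,@LogSizeMb = 128
	,@BackupLocation = 'C:\SQL_Server\Backup\' /*or NULL if you run your own TLog backups first*/
	,@DefaultFileGrowth = '128'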

Take the TSQL Generated and run the snippets needed for particular databases or run the entire script.

Let the file grow over a period that is representative of typical database transaction activity.

When the TLog file is no longer growing you have found the natural size of the TLog for that database.

Run the script again, this time setting @LogSizeMb = 0 and, if appropriate, provide a value for @Margin. The TSQL generated will then use the current TLog file size as is, plus the optional margin.

Run the snippets for the required database(s).

Dealing with Virtual Log Files (VLFs) Instructions:

Once you have backups arranged using one of the options above, ensure that the variable @LogSizeMb is set to zero and, if appropriate, provide a value for @Margin. The script can then be run to generate the TSQL needed to shrink and resize the TLog file, reducing the number of VLFs.

Run the snippets for the required database(s).

USE master;
GO

SET QUOTED_IDENTIFIER ON
GO

SET ARITHABORT ON
GO

DECLARE @DbName NVARCHAR(255)
	,@LogicalFileName NVARCHAR(255)
	,@DBRecoveryDesc VARCHAR(200)
	,@LogSizeMb INT
	,@DefaultFileGrowth VARCHAR(10)
	,@BackupLocation VARCHAR(255)
	,@GetDate VARCHAR(23)
	,@Margin AS DECIMAL(5, 2)

SELECT @Margin = 0
	,@LogSizeMb = 0
	,@BackupLocation = 'C:\SQL_Server\Backup\'
	,@DefaultFileGrowth = '128'
	,@GetDate = CONVERT(VARCHAR(23), GETDATE(), 126)
	,@GetDate = REPLACE(REPLACE(REPLACE(REPLACE(@GetDate, '-', ''), ':', ''), 'T', '_'), '.', '')

DECLARE DatabaseList CURSOR
FOR
SELECT d.NAME
	,d.recovery_model_desc AS DBRecoveryDesc
	,CASE 
		WHEN @LogSizeMb > 0
			THEN @LogSizeMb
		ELSE CEILING((mf.size * 8.0) / 1024) + (CEILING(((mf.size * 8.0) / 1024) * (@Margin / 100)))
		END AS LogSizeMb
	,@BackupLocation AS BackupLocation
	,@DefaultFileGrowth AS DefaultFileGrowth
FROM sys.databases AS d
LEFT JOIN sys.master_files AS mf ON d.database_Id = mf.database_Id
WHERE d.state_desc = 'ONLINE'
	AND d.is_read_only = 0
	AND d.database_id > 4
	AND type_desc = 'LOG'
ORDER BY NAME

OPEN DatabaseList

FETCH NEXT
FROM DatabaseList
INTO @DbName
	,@DBRecoveryDesc
	,@LogSizeMb
	,@BackupLocation
	,@DefaultFileGrowth

WHILE @@FETCH_STATUS = 0
BEGIN
	SET @LogicalFileName = (
			SELECT TOP 1 NAME
			FROM sys.master_files AS mf
			WHERE DB_NAME(database_id) = @DbName
				AND type_desc = 'LOG'
			)

	IF @DBRecoveryDesc = 'Full'
		AND @BackupLocation IS NOT NULL
	BEGIN
		PRINT (
				'Use [' + @DbName + '] 
            GO  
			
			BACKUP LOG [' + @DbName + '] TO DISK=''' + @BackupLocation + @DbName + '_' + @GetDate + '_Log.trn' + ''';
			GO
			
            ALTER DATABASE [' + @DbName + '] SET RECOVERY SIMPLE WITH NO_WAIT
            GO   

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)  
            GO  
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO

            ALTER DATABASE [' + @DbName + '] SET RECOVERY FULL WITH NO_WAIT
            GO '
				)
		PRINT '----------------------------------------------------------- '
	END
	ELSE IF @DBRecoveryDesc = 'Full'
	BEGIN
		PRINT (
				'Use [' + @DbName + '] 
            GO  
					
            ALTER DATABASE [' + @DbName + '] SET RECOVERY SIMPLE WITH NO_WAIT
            GO   

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)  
            GO  
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO

            ALTER DATABASE [' + @DbName + '] SET RECOVERY FULL WITH NO_WAIT
            GO '
				)
		PRINT '----------------------------------------------------------- '
	END
	ELSE IF @DBRecoveryDesc = 'Simple'
	BEGIN
		PRINT (
				'Use [' + @DbName + ']  
            GO  

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)    
            GO
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO
            '
				)
		PRINT '----------------------------------------------------------- '
	END

	FETCH NEXT
	FROM DatabaseList
	INTO @DbName
		,@DBRecoveryDesc
		,@LogSizeMb
		,@BackupLocation
		,@DefaultFileGrowth
END

CLOSE DatabaseList

DEALLOCATE DatabaseList

How to get the date of creation and size of current temporary objects from tempdb

The query below will return the temporary objects that currently exist in the tempdb of an instance.

It’s a good query if you’re experiencing issues with the tempdb and want to see what objects are taking up space or if temp tables are surviving longer than you might expect them to.

Recently I found that, because queries were running directly from Tableau and the connection was being held open, temp tables were not being removed. All the temp tables existing at the same time were taking up several GBs of space needlessly.

-- Get current temporary objects date of creation and size 
SELECT DISTINCT obj.name AS ObjectName
	,obj.type_desc AS ObjectType
	,obj.object_id AS ObjectId
	,obj.principal_id AS PrincipalId
	,obj.schema_id AS SchemaId
	,obj.parent_object_id AS ParentId
	,stat.row_count AS RowCountStat
	,stat.used_page_count * 8 AS UsedSizeKB
	,stat.reserved_page_count * 8 AS ReservedSizeKB
	,obj.create_date AS CreatedDate
	,obj.modify_date AS ModifiedDate
FROM tempdb.sys.partitions AS part WITH (NOLOCK)
INNER JOIN tempdb.sys.dm_db_partition_stats AS stat WITH (NOLOCK) ON part.partition_id = stat.partition_id
	AND part.partition_number = stat.partition_number
INNER JOIN tempdb.sys.tables AS tbl WITH (NOLOCK) ON stat.object_id = tbl.object_id
LEFT JOIN tempdb.sys.objects AS obj WITH (NOLOCK) ON tbl.name = obj.name
ORDER BY CreatedDate ASC;

How to identify all user objects in the SQL Server Master database

What is the SQL Server Master Database used for?

The master database is used by SQL Server to contain all of the system level information:

  • Logins
  • Linked servers
  • Endpoints
  • Configuration settings
  • Information about the other databases on this instance and the location of their files

If the master database is not present, SQL Server cannot start.

This is why you should always take regular backups of the master database, as if SQL Server suffers a failure any recent changes to master will be lost and you’ll be in a lot of trouble.

So the master database is needed for SQL Server to work. Logic and objects needed to enable your application should not be tangled up in there so you should always avoid creating objects in the master database. Use a user database instead.

In saying that, everyone has made the mistake of not specifying the database they want to USE in a script, and by default objects get written to the master database. Or you might have inherited responsibility for an application which did not conform to best practices and uses objects created in master.

To gain visibility of these objects just run the script below.

SELECT o.object_id AS ObjectId
	,o.NAME AS ObjectName
	,o.type_desc AS ObjectType
	,o.create_date AS CreateDate
	,o.modify_date AS ModifyDate
	,SUM(st.row_count) AS RowCnt
	,CAST(SUM(st.used_page_count) / 128.0 AS DECIMAL(36, 1)) AS DataSize_MB
FROM master.sys.objects o
LEFT JOIN master.sys.dm_db_partition_stats st ON st.object_id = o.object_id
	AND st.index_id < 2
GROUP BY o.object_id
	,o.NAME
	,o.type_desc
	,o.create_date
	,o.modify_date
	,o.is_ms_shipped
HAVING o.is_ms_shipped = 0
	AND o.NAME <> 'sp_ssis_startup'
	AND o.type_desc NOT LIKE '%CONSTRAINT%'
ORDER BY CAST(SUM(st.used_page_count) / 128.0 AS DECIMAL(36, 1)) DESC
	,RowCnt DESC

How to create a datetime stamp within a batch script in the format YYYY-MM-DD

Below is a batch script that populates the variable Timestamp with a datetime value in the format YYYY-MM-DD_HH:MM:SS.

This technique can then be used when creating files etc.

To try out the example below copy the text and paste it into a text editor like Notepad. Save the file as timestamp with the extension .bat and then double click on the timestamp.bat file.

This code example will display the current datetime in the Windows command shell. It will also create a blank file called Test along with the current datetime in the format yyyymmddhhmmss in the same directory the batch script is run from. Note that the substring offsets assume %date% returns a dd/mm/yyyy value, so they may need adjusting for other regional settings.

@ECHO off
CLS
ECHO Date format = %date%
ECHO dd = %date:~0,2%
ECHO mm = %date:~3,2%
ECHO yyyy = %date:~6,8%
ECHO.
ECHO Time format = %time%
ECHO hh = %time:~0,2%
ECHO mm = %time:~3,2%
ECHO ss = %time:~6,2%
ECHO.
SET Timestamp=%date:~6,8%-%date:~3,2%-%date:~0,2%_%time:~0,2%:%time:~3,2%:%time:~6,2%
ECHO %Timestamp%
SET Timestamp=%date:~6,8%%date:~3,2%%date:~0,2%%time:~0,2%%time:~3,2%%time:~6,2%
ECHO %Timestamp%
BREAK>Test%Timestamp%.txt 
PAUSE

How to send an email notification when disk space is low from SQL Server

Firstly let me acknowledge a few points worth noting:

  • Disk space monitoring should be performed by enterprise quality network monitoring software. (And yes these are often very expensive)
  • The availability of adequate amounts of disk space is the responsibility of the people in charge of the IT infrastructure. (Not the DBAs)
  • The process doing the monitoring should not be running on the machine it is monitoring. (If the monitored machine experiences a problem the monitor may not be able to notify anyone)

With all that being said, you might be in the situation where you still need to know when SQL Server is running out of local disk space, and the solution below provides you with a free means of achieving this. It is a dynamic solution which will determine the disk space available on all the local drives visible to SQL Server.

Prerequisites:

  • You will need to have an exchange server set up with SQL Server already capable of sending emails.
  • I have set up a database called Admin to hold the below tables and SP. It is good practice to store this sort of thing in a purposely created database instead of using something like master. You can use another database if you prefer; just change the references to Admin in the first and second scripts with a find and replace.

Setup:

There are 4 components to this process.

Tables:

DiskSpaceAlertThreshold, this table holds a red and a yellow value. These values relate to the percentage space available on the local drives. A good rule of thumb would be to ensure there is over 20% of space available. So disk space availability for the yellow alert is set to 20% and the red alert is set to 10%. (Yes I’ve borrowed the alert names from Star Trek)

StaffEmailList, this table contains the email addresses of the staff you want to be notified when disk space is running low. For this example Clark Kent of The Daily Planet is very interested in SQL Server disk space. You can of course add additional staff email addresses to the table. To exclude a staff member from receiving an email after they have been added to the table, set their active flag to zero, as shown in the example below.
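
For example, to later exclude Clark from the alerts, a quick update like this against the sample row created by the setup script would do it:

/*Deactivate a recipient rather than deleting the row*/
UPDATE [Admin].[dbo].[StaffEmailList]
SET [Active] = 0
WHERE [EmailAddress] = 'cKent@thedailyplanet.com';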

Stored Procedure:

LowDiskSpaceAlert, this stored procedure contains all the logic to determine what percentage of disk space is available. It also creates the HTML table used in the email to display the alert results and calls for the email to be sent.

N.B. You will need to change the profile named in the stored procedure to match an email account profile available within your SQL Server. Simply change the value of @profileName before executing the script.

SET @profileName = 'SQL Report'
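
If you are not sure what the profile on your instance is called, the Database Mail profiles that have been configured can be listed from msdb with something like this:

/*List the Database Mail profiles configured on this instance*/
SELECT [name] AS ProfileName
	,[description] AS ProfileDescription
FROM msdb.dbo.sysmail_profile;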

An example of the HTML table sent as part of the email:

[Image: example of the HTML table sent as part of the email]

Job:

LowDiskSpaceAlert, this job simply runs the LowDiskSpaceAlert stored procedure every 10 minutes. You can change the schedule as you wish.

Create Tables and SP:

USE [Admin]
GO

/****** Object:  Table [dbo].[DiskSpaceAlertThreshold]    Script Date: 09/23/2016 18:46:29 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[DiskSpaceAlertThreshold] (
	[DiskSpaceAlertThresholdId] [int] IDENTITY(1, 1) NOT NULL
	,[AlertClass] [varchar](25) NULL
	,[ThresholdValue] [int] NULL
	,PRIMARY KEY CLUSTERED ([DiskSpaceAlertThresholdId] ASC) WITH (
		PAD_INDEX = OFF
		,STATISTICS_NORECOMPUTE = OFF
		,IGNORE_DUP_KEY = OFF
		,ALLOW_ROW_LOCKS = ON
		,ALLOW_PAGE_LOCKS = ON
		) ON [PRIMARY]
	) ON [PRIMARY]
GO

SET ANSI_PADDING OFF
GO

SET IDENTITY_INSERT [dbo].[DiskSpaceAlertThreshold] ON

INSERT [dbo].[DiskSpaceAlertThreshold] (
	[DiskSpaceAlertThresholdId]
	,[AlertClass]
	,[ThresholdValue]
	)
VALUES (
	1
	,N'Red'
	,10
	)

INSERT [dbo].[DiskSpaceAlertThreshold] (
	[DiskSpaceAlertThresholdId]
	,[AlertClass]
	,[ThresholdValue]
	)
VALUES (
	2
	,N'Yellow'
	,20
	)

SET IDENTITY_INSERT [dbo].[DiskSpaceAlertThreshold] OFF
/****** Object:  Table [dbo].[StaffEmailList]    Script Date: 09/23/2016 18:46:29 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

CREATE TABLE [dbo].[StaffEmailList] (
	[StaffEmailListId] [int] IDENTITY(1, 1) NOT NULL
	,[FirstName] [varchar](255) NULL
	,[LastName] [varchar](255) NULL
	,[EmailAddress] [varchar](255) NULL
	,[Active] [bit] NULL
	,PRIMARY KEY CLUSTERED ([StaffEmailListId] ASC) WITH (
		PAD_INDEX = OFF
		,STATISTICS_NORECOMPUTE = OFF
		,IGNORE_DUP_KEY = OFF
		,ALLOW_ROW_LOCKS = ON
		,ALLOW_PAGE_LOCKS = ON
		) ON [PRIMARY]
	) ON [PRIMARY]
GO

SET ANSI_PADDING OFF
GO

SET IDENTITY_INSERT [dbo].[StaffEmailList] ON

INSERT [dbo].[StaffEmailList] (
	[StaffEmailListId]
	,[FirstName]
	,[LastName]
	,[EmailAddress]
	,[Active]
	)
VALUES (
	1
	,N'Clark'
	,N'Kent'
	,N'cKent@thedailyplanet.com'
	,1
	)

SET IDENTITY_INSERT [dbo].[StaffEmailList] OFF
/****** Object:  StoredProcedure [dbo].[LowDiskSpaceAlert]    Script Date: 09/23/2016 18:46:33 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[LowDiskSpaceAlert]
AS
SET NOCOUNT ON

IF OBJECT_ID('tempdb..#DiskSpaceStats') IS NOT NULL
	DROP TABLE #DiskSpaceStats

IF OBJECT_ID('tempdb..#DriveAlert') IS NOT NULL
	DROP TABLE #DriveAlert

DECLARE @profileName AS VARCHAR(128)
DECLARE @thresholdRed AS INT
DECLARE @thresholdYellow AS INT
DECLARE @drive AS VARCHAR(5)
DECLARE @sql AS VARCHAR(MAX)
DECLARE @i AS INT
DECLARE @j AS INT

/*Change the name for your SQL Server email profile*/
SET @profileName = 'SQL Report'

CREATE TABLE #DriveAlert (
	AlertEntry VARCHAR(25)
	,AlertValue VARCHAR(25)
	)

SET @thresholdRed = (
		SELECT ThresholdValue
		FROM [Admin].[dbo].[DiskSpaceAlertThreshold]
		WHERE AlertClass = 'Red'
		)
SET @thresholdYellow = (
		SELECT ThresholdValue
		FROM [Admin].[dbo].[DiskSpaceAlertThreshold]
		WHERE AlertClass = 'Yellow'
		)

SELECT ROW_NUMBER() OVER (
		ORDER BY AvailableDriveSpacePercentage ASC
		) AS RowNo
	,ObservationDT
	,Drive
	,AvailableDriveSpacePercentage
	,AvailableDriveSpaceGB
	,UsedDriveSpaceGB
	,TotalDriveSpaceGB
INTO #DiskSpaceStats
FROM (
	SELECT DISTINCT GETDATE() AS ObservationDT
		,dovs.volume_mount_point AS Drive
		,CAST(((dovs.available_bytes / 1073741824.0) / (dovs.total_bytes / 1073741824.0)) * 100 AS NUMERIC(8, 2)) AS AvailableDriveSpacePercentage
		,CAST((dovs.available_bytes / 1073741824.0) AS NUMERIC(8, 3)) AvailableDriveSpaceGB
		,CAST(((dovs.total_bytes - dovs.available_bytes) / 1073741824.0) AS NUMERIC(8, 3)) AS UsedDriveSpaceGB
		,CAST((dovs.total_bytes / 1073741824.0) AS NUMERIC(8, 3)) TotalDriveSpaceGB
	FROM sys.master_files AS mf
	CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) AS dovs
	) AS r

SET @i = 1
SET @j = (
		SELECT MAX(RowNo)
		FROM #DiskSpaceStats
		)

WHILE @i <= @j
BEGIN
	SET @drive = (
			SELECT Drive
			FROM #DiskSpaceStats
			WHERE RowNo = @i
			)
	SET @sql = 'DECLARE @alert AS VARCHAR(6)
IF ' + CONVERT(VARCHAR(3), @thresholdYellow) + ' > (
		SELECT AvailableDriveSpacePercentage
		FROM #DiskSpaceStats
		WHERE Drive = ''' + @drive + '''
		)
BEGIN
SET @alert = ''YELLOW''
END
IF ' + CONVERT(VARCHAR(3), @thresholdRed) + ' > (
		SELECT AvailableDriveSpacePercentage
		FROM #DiskSpaceStats
		WHERE Drive = ''' + @drive + '''
		)
BEGIN
SET @alert = ''RED''
END
IF @alert IS NOT NULL
BEGIN
	SELECT AlertEntry
	,AlertValue
	FROM (
	SELECT ''Alert Class: '' AS AlertEntry
	,@alert AS AlertValue 
	UNION ALL
	SELECT ''DateTime: '' AS AlertEntry
	,CONVERT(VARCHAR(30), ObservationDT)  AS AlertValue
	FROM #DiskSpaceStats
	WHERE Drive = ''' + @drive + '''
	UNION ALL
	SELECT ''Drive: '' AS AlertEntry
	,Drive AS AlertValue
	FROM #DiskSpaceStats
	WHERE Drive = ''' + @drive + 
		'''
	UNION ALL
	SELECT ''Percentage Available: '' AS AlertEntry
	,CONVERT(VARCHAR(10), AvailableDriveSpacePercentage) + ''%'' AS AlertValue
	FROM #DiskSpaceStats
	WHERE Drive = ''' + @drive + '''
	UNION ALL
	SELECT ''Available Space: '' AS AlertEntry
	,CONVERT(VARCHAR(10), AvailableDriveSpaceGB) + '' GB'' AS AlertValue
	FROM #DiskSpaceStats
	WHERE Drive = ''' + @drive + '''
	UNION ALL
	SELECT ''Used Space: '' AS AlertEntry
	,CONVERT(VARCHAR(10), UsedDriveSpaceGB) + '' GB'' AS AlertValue
	FROM #DiskSpaceStats
	WHERE Drive = ''' + @drive + '''
	UNION ALL
	SELECT ''Total Space: '' AS AlertEntry
	,CONVERT(VARCHAR(10), TotalDriveSpaceGB) + '' GB'' AS AlertValue
	FROM #DiskSpaceStats
	WHERE Drive = ''' + @drive + '''
	UNION ALL 
	SELECT '' '' AS AlertEntry
	,'' '' AS AlertValue
	) AS alert
END
'

	INSERT INTO #DriveAlert
	EXEC (@sql)

	SET @i = @i + 1
END

IF EXISTS (
		SELECT *
		FROM #DriveAlert
		)
BEGIN
	DECLARE @emailList AS VARCHAR(MAX)
	DECLARE @subjectMsg AS VARCHAR(255)
	DECLARE @tableHTML NVARCHAR(MAX)

	SET @emailList = (
			SELECT STUFF((
						SELECT '; ' + EmailAddress
						FROM [Admin].[dbo].[StaffEmailList]
						WHERE Active = 1
						FOR XML PATH('')
						), 1, 1, '') AS EmailAddress
			)
	SET @subjectMsg = (
			SELECT @@SERVERNAME
			) + ' Low Disk Space Alert'
	SET @tableHTML = N'<style>
	.tableFormat {
		width:80%;
		border:1px solid #C0C0C0;
		border-collapse:collapse;
		padding:5px;
	}
	.tableFormat th {
		border:1px solid #C0C0C0;
		padding:5px;
		background:#F0F0F0;
	}
	.tableFormat td {
		border:1px solid #C0C0C0;
		text-align:right;
		padding:5px;
	}
</style>' + N'<H1></H1>' + N'<table class="tableFormat" align="center">' + N'<tr><th>Description</th><th>Value</th></tr>' + CAST((
				SELECT td = AlertEntry
					,''
					,td = AlertValue
					,''
				FROM #DriveAlert
				FOR XML PATH('tr')
					,TYPE
				) AS NVARCHAR(MAX)) + N'</table>';

	DROP TABLE #DiskSpaceStats

	SET @tableHTML = REPLACE(@tableHTML, '<td> </td>', '<td bgcolor="#F0F0F0"> </td>')
	SET @tableHTML = REPLACE(@tableHTML, '<td>RED</td>', '<td bgcolor="red"><b>RED</b></td>')
	SET @tableHTML = REPLACE(@tableHTML, '<td>YELLOW</td>', '<td bgcolor="yellow"><b>YELLOW</b></td>')

	EXEC msdb.dbo.sp_send_dbmail @profile_name = @profileName
		,@recipients = @emailList
		,@body = @tableHTML
		,@subject = @subjectMsg
		,@importance = 'High'
		,@body_format = 'HTML';
END
GO

Create Job:

USE [msdb]
GO

/****** Object:  Job [LowDiskSpaceAlert]    Script Date: 09/23/2016 19:03:39 ******/
BEGIN TRANSACTION

DECLARE @ReturnCode INT

SELECT @ReturnCode = 0

/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 09/23/2016 19:03:39 ******/
IF NOT EXISTS (
		SELECT NAME
		FROM msdb.dbo.syscategories
		WHERE NAME = N'[Uncategorized (Local)]'
			AND category_class = 1
		)
BEGIN
	EXEC @ReturnCode = msdb.dbo.sp_add_category @class = N'JOB'
		,@type = N'LOCAL'
		,@name = N'[Uncategorized (Local)]'

	IF (
			@@ERROR <> 0
			OR @ReturnCode <> 0
			)
		GOTO QuitWithRollback
END

DECLARE @jobId BINARY (16)

EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name = N'LowDiskSpaceAlert'
	,@enabled = 1
	,@notify_level_eventlog = 0
	,@notify_level_email = 0
	,@notify_level_netsend = 0
	,@notify_level_page = 0
	,@delete_level = 0
	,@description = N'No description available.'
	,@category_name = N'[Uncategorized (Local)]'
	,@owner_login_name = N'PL\admin.ph'
	,@job_id = @jobId OUTPUT

IF (
		@@ERROR <> 0
		OR @ReturnCode <> 0
		)
	GOTO QuitWithRollback

/****** Object:  Step [RunLowDiskSpaceAlertSP]    Script Date: 09/23/2016 19:03:40 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @jobId
	,@step_name = N'RunLowDiskSpaceAlertSP'
	,@step_id = 1
	,@cmdexec_success_code = 0
	,@on_success_action = 1
	,@on_success_step_id = 0
	,@on_fail_action = 2
	,@on_fail_step_id = 0
	,@retry_attempts = 0
	,@retry_interval = 0
	,@os_run_priority = 0
	,@subsystem = N'TSQL'
	,@command = N'[dbo].[LowDiskSpaceAlert]'
	,@database_name = N'Admin'
	,@flags = 0

IF (
		@@ERROR <> 0
		OR @ReturnCode <> 0
		)
	GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId
	,@start_step_id = 1

IF (
		@@ERROR <> 0
		OR @ReturnCode <> 0
		)
	GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id = @jobId
	,@name = N'LowDiskSpaceAlertSchedule'
	,@enabled = 1
	,@freq_type = 4
	,@freq_interval = 1
	,@freq_subday_type = 4
	,@freq_subday_interval = 10
	,@freq_relative_interval = 0
	,@freq_recurrence_factor = 0
	,@active_start_date = 20160824
	,@active_end_date = 99991231
	,@active_start_time = 0
	,@active_end_time = 235959
	,@schedule_uid = N'a8831f35-e7a6-4880-bc2d-4d48aff82ff6'

IF (
		@@ERROR <> 0
		OR @ReturnCode <> 0
		)
	GOTO QuitWithRollback

EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId
	,@server_name = N'(local)'

IF (
		@@ERROR <> 0
		OR @ReturnCode <> 0
		)
	GOTO QuitWithRollback

COMMIT TRANSACTION

GOTO EndSave

QuitWithRollback:

IF (@@TRANCOUNT > 0)
	ROLLBACK TRANSACTION

EndSave:
GO

How to set SQL Server file autogrowth settings to recommended sizes across all databases on an instance

As part of a Microsoft RAP analysis you might encounter some of the following:

  • Databases identified with auto-growth set to percentage growth
  • Databases identified with auto-growth set to 1MB growth
  • Database files have been identified that have the next Auto Growth increment of 1GB or above
  • Databases have been identified with one or more Transaction Log files where expected next Auto Growth increment is 1GB or above

Brent Ozar suggests that the data file and log file autogrowth settings should be 256MB and 128MB respectively, and he makes a good argument here.

Broadly I agree with his position but I also acknowledge, as he does, that every database is different and has its own considerations. (Keep that in mind while reviewing the following script)

I’ve created the script below, which programmatically creates the SQL commands to apply these recommended values to the autogrowth settings of each database on an instance.

The script gets every database name and the associated data file and log file names, and using dynamic SQL writes a script to update the autogrowth settings for each database.

You can then adjust the setting values manually for each database before executing.

Or if you’re really brave/crazy, execute the dynamic script automatically by changing PRINT @sql to EXEC(@sql). (If you do, remove the GO batch separators from the dynamic string first, as EXEC cannot process GO.)

USE Master;
GO

SET NOCOUNT ON

DECLARE @database AS NVARCHAR(256)
	,@dataFileName AS NVARCHAR(256)
	,@logFileName AS NVARCHAR(256)
	,@sql AS VARCHAR(MAX)
	,@loop AS INT
	,@end AS INT

IF OBJECT_ID('tempdb..#DatabaseList') IS NOT NULL
	DROP TABLE #DatabaseList

CREATE TABLE #DatabaseList (
	[#DatabaseListId] INT IDENTITY(1, 1) PRIMARY KEY
	,[DatabaseName] [sysname] NOT NULL
	,[DataFileName] [sysname] NOT NULL
	,[LogFileName] [sysname] NOT NULL
	)

INSERT INTO #DatabaseList
SELECT d.DatabaseName
	,d.FileName AS DataFileName
	,l.FileName AS LogFileName
FROM (
	SELECT d.NAME AS DatabaseName
		,mf.NAME AS FileName
	FROM sys.master_files mf(NOLOCK)
	JOIN sys.databases d(NOLOCK) ON mf.database_id = d.database_id
	WHERE type_desc = 'ROWS'
	) AS d
LEFT JOIN (
	SELECT d.NAME AS DatabaseName
		,mf.NAME AS FileName
	FROM sys.master_files mf(NOLOCK)
	JOIN sys.databases d(NOLOCK) ON mf.database_id = d.database_id
	WHERE mf.type_desc = 'LOG'
	) AS l ON l.DatabaseName = d.DatabaseName

SET @end = (
		SELECT MAX(#DatabaseListId)
		FROM #DatabaseList
		)
SET @loop = 1

WHILE @loop <= @end
BEGIN
	SELECT @database = DatabaseName
		,@dataFileName = DataFileName
		,@logFileName = LogFileName
	FROM #DatabaseList
	WHERE #DatabaseListId = @loop

	--PRINT @database
	SET @sql = 'USE [' + @database + '];
		GO
		ALTER DATABASE [' + @database + '] MODIFY FILE (NAME=''' + @dataFileName + ''',MAXSIZE = UNLIMITED, FILEGROWTH = 256MB);
		ALTER DATABASE [' + @database + '] MODIFY FILE (NAME=''' + @logFileName + ''',MAXSIZE = UNLIMITED, FILEGROWTH = 128MB);
		GO'

	PRINT @sql

	SET @loop = @loop + 1
END
		/*
Change PRINT @sql to:
EXEC (@sql)
To apply the autogrowth settings automatically
(remove the GO batch separators from @sql first as EXEC cannot process GO)
*/

 

As part of this process I’d also run the script below before and after you’ve made the changes to have a record for change management purposes.

SELECT GETDATE() AS DateOfObservation
	,d.NAME AS database_name
	,mf.NAME AS file_name
	,mf.type_desc AS file_type
	,mf.growth AS current_percent_growth
FROM sys.master_files mf(NOLOCK)
JOIN sys.databases d(NOLOCK) ON mf.database_id = d.database_id
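
One thing to keep in mind when reading that output: sys.master_files.growth is a number of 8 KB pages when is_percent_growth = 0 and a percentage when it is 1, so a sketch like the following presents the setting in friendlier units:

SELECT d.NAME AS database_name
	,mf.NAME AS file_name
	,mf.type_desc AS file_type
	,CASE mf.is_percent_growth
		WHEN 1
			THEN CAST(mf.growth AS VARCHAR(10)) + ' %'
		ELSE CAST(mf.growth * 8 / 1024 AS VARCHAR(10)) + ' MB' /*growth is stored in 8 KB pages*/
		END AS current_growth_setting
FROM sys.master_files mf(NOLOCK)
JOIN sys.databases d(NOLOCK) ON mf.database_id = d.database_id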

How to determine the index fragmentation level on your SQL Server instance

Below is a script that will run against every table in every database on your SQL Server instance and output, among other things, the average index fragmentation percentage.

It can be a bit heavy when running against large databases with thousands of tables so I would recommend running it during downtime or outside office hours.

This script returns fragmentation stats by utilizing a DMV.

When NULLs are passed as parameters this DMV works at a server level and uses preset defaults.

[sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL)

However it also accepts the parameter options below to limit the results returned.

[sys].[dm_db_index_physical_stats](Database_Id, Object_Id, Index_Id, partition_number, MODE)

MODE OPTIONS:

Mode specifies the scan level that is used to obtain statistics. Valid inputs are DEFAULT, NULL, LIMITED, SAMPLED, or DETAILED. The default (NULL) is LIMITED.

  • DEFAULT will have 12 columns full of data and then NULLs in the remainder.
  • SAMPLED will have 21 columns full of data.
  • LIMITED will have 12 columns full of data and then NULLs in the remainder.
  • DETAILED will have 21 columns full of data.
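
For example, to look at a single table in DETAILED mode rather than scanning the whole instance, the DMV can be called with explicit parameters; the database and table names here are hypothetical:

USE [YourDatabase]; /*hypothetical database*/
GO

SELECT index_id
	,index_type_desc
	,avg_fragmentation_in_percent
	,page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'DETAILED'); /*hypothetical table*/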

Note: I’ve captured more fields in the temp table #Index than I use in the final select statement. This is because you might want to include additional index related fields, i.e. check if the index is unique etc. If all you want is the index name in the final select statement you can tidy up the query as needed and exclude the unwanted additional fields.

SET NOCOUNT ON

IF OBJECT_ID('tempdb..#Schema') IS NOT NULL
	DROP TABLE #Schema

IF OBJECT_ID('tempdb..#Index') IS NOT NULL
	DROP TABLE #Index

DECLARE @Sql AS VARCHAR(MAX)
DECLARE @Database TABLE (
	DbId INT
	,DbName SYSNAME
	)
DECLARE @DbName AS SYSNAME
DECLARE @DbId AS INT

CREATE TABLE #Schema (
	DbId INT
	,DbName SYSNAME
	,Object_Id INT NULL
	,ObjectName SYSNAME NULL
	,TableFlag BIT
	,ViewFlag BIT
	,Schema_Id INT NULL
	,SchemaName SYSNAME NULL
	)

CREATE TABLE #Index (
	DbId INT
	,DbName SYSNAME
	,Object_Id INT NOT NULL
	,NAME SYSNAME NULL
	,index_id INT NOT NULL
	,type TINYINT NOT NULL
	,type_desc NVARCHAR(60) NULL
	,is_unique BIT NULL
	,data_space_id INT NULL
	,ignore_dup_key BIT NULL
	,is_primary_key BIT NULL
	,is_unique_constraint BIT NULL
	,fill_factor TINYINT NOT NULL
	,is_padded BIT NULL
	,is_disabled BIT NULL
	,is_hypothetical BIT NULL
	,allow_row_locks BIT NULL
	,allow_page_locks BIT NULL
	,has_filter BIT NULL
	,filter_definition NVARCHAR(max) NULL
	)

INSERT INTO @Database (
	DbId
	,DbName
	)
SELECT dbid
	,NAME
FROM master.dbo.sysdatabases
WHERE NAME <> 'tempdb'
ORDER BY NAME ASC

SET @DbId = 0

WHILE @DbId IS NOT NULL
BEGIN
	SET @DbId = (
			SELECT MIN(DbId)
			FROM @Database
			WHERE DbId > @DbId
			)

	/*EXIT ONCE EVERY DATABASE HAS BEEN PROCESSED*/
	IF @DbId IS NULL
		BREAK

	SET @DbName = (
			SELECT DbName
			FROM @Database
			WHERE DbId = @DbId
			)
	SET @Sql = '
	INSERT INTO #Schema (
		DbId
		,DbName
		,Object_Id
		,ObjectName
		,TableFlag
		,ViewFlag
		,Schema_Id
		,SchemaName
		)
	SELECT ' + CONVERT(VARCHAR(128), @DbId) + ' AS DbId
		,' + '''' + @DbName + '''' + ' AS DbName
		,o.Object_Id
		,o.NAME AS ObjectName
		,CASE 
			WHEN o.type = ''IT''
				THEN 1
			WHEN o.type = ''S''
				THEN 1
			WHEN o.type = ''TT''
				THEN 1
			WHEN o.type = ''U''
				THEN 1
			ELSE NULL
			END AS TableFlag
		,CASE 
			WHEN o.type = ''V''
				THEN 1
			ELSE NULL
			END AS ViewFlag
		,s.Schema_Id
		,s.NAME AS SchemaName
	FROM ' + QUOTENAME(@DbName) + '.sys.objects AS o WITH (NOLOCK)
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.schemas AS s WITH (NOLOCK) ON o.schema_id = s.schema_id
	'

	EXEC (@Sql)

	SET @Sql = '
	INSERT INTO #Index (
	DbId
	,DbName
	,object_id
	,NAME
	,index_id
	,type
	,type_desc
	,is_unique
	,data_space_id
	,ignore_dup_key
	,is_primary_key
	,is_unique_constraint
	,fill_factor
	,is_padded
	,is_disabled
	,is_hypothetical
	,allow_row_locks
	,allow_page_locks
	,has_filter
	,filter_definition
	)
	SELECT ' + CONVERT(VARCHAR(128), @DbId) + ' AS DbId
		,' + '''' + @DbName + '''' + ' AS DbName
		,object_id
		,NAME
		,index_id
		,type
		,type_desc
		,is_unique
		,data_space_id
		,ignore_dup_key
		,is_primary_key
		,is_unique_constraint
		,fill_factor
		,is_padded
		,is_disabled
		,is_hypothetical
		,allow_row_locks
		,allow_page_locks
		,has_filter
		,filter_definition
	FROM ' + QUOTENAME(@DbName) + '.sys.indexes WITH (NOLOCK)
	'

	EXEC (@Sql)
END

SELECT ROW_NUMBER() OVER (
		ORDER BY s.DbName ASC
			,s.SchemaName ASC
			,s.ObjectName
		) AS Row
	,s.DbName
	,s.SchemaName AS SchemaName
	,s.Object_Id AS ObjectId
	,s.ObjectName
	,s.TableFlag
	,s.ViewFlag
	,i.Name AS IndexName
	,ddips.index_type_desc AS IndexType
	,ddips.page_count AS Pages
	,ddips.record_count AS Records
	,ddips.fragment_count AS Frags
	,ddips.avg_fragmentation_in_percent AS AvgFragPercent
FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'SAMPLED') AS ddips
INNER JOIN #Schema AS s ON ddips.database_id = s.DbId
INNER JOIN #Index AS i ON s.DbId = i.DbId
	AND s.Object_Id = i.Object_Id
	AND ddips.index_id = i.index_id
	AND ddips.object_id = s.Object_Id
WHERE (
		s.TableFlag = 1
		OR s.ViewFlag = 1
		)
ORDER BY s.DbName ASC
	,s.SchemaName ASC
	,s.ObjectName

AVG Web Tuneup you jerk!

So I finally got bothered enough to find out what the hell was eating up all the space on my C drive. I had just assumed it was something to do with the enforced Windows 10 upgrade but it turns out it was the AVG Web Tuneup application writing to the AppData directory. (Sorry for blaming you Windows 10)

For anyone else experiencing low disk space that is going unnoticed by disk cleanup utilities, it might be worth checking this out. I used the fantastic utility Space Sniffer. This utility doesn’t require an install, it’s super light and super quick, and it will visually display all the files and folders on the drive you choose to scan. (Tip: run it as administrator so it doesn’t throw warnings about directories it doesn’t have access to)

[Image: Space Sniffer showing how much space AVG Web Tuneup was using]

To access the AppData folder on Windows 10 go to start and type %AppData%.

To rectify this specific problem caused by the AVG Web Tuneup utility, first uninstall it. If this doesn’t get rid of the folder then access the AppData folder and navigate to AppData\Local\Packages\windows_ie_ac_001\AC\AVG Web TuneUp and delete that damn directory.

So long AVG Web Tuneup you jerk!