Category Archives: Data

How to handle a Database creation request

If you are working as a DBA you may find that developers ask you to create a database having given little thought to what the database will be used for, or to the impact the database could have on the resources or security of the hosting environment.

If you find yourself in that situation I would suggest you walk the requester through the questionnaire from the previous article “How to determine where a new database should be deployed“.

Once you have completed that process I would suggest that an official request to deploy a new database be made using the DATABASE REQUEST FORM provided here link. Even if you have a change request process I would still suggest you use this form, as a database-specific request form captures more of the relevant and vital information.

This is a fairly high-level request form, with most of the technical details still to be defined by the DBA, but it documents the request and states the requester’s initial expectations and requirements.

Following the database deployment, if the actual footprint of the database does not match up with what was agreed, the form will confirm whether the requested resources were under-specced or misleading.

The form is outlined below.

DATABASE REQUEST FORM image

Some important points the form clarifies:

There’s a difference between the requester and the owner. If the database runs into any problems you don’t want to be contacting the temporary intern who requested it instead of, say, the department head.

The application the database supports. Most of the time the database name will have some tie-in to the application name, but maybe it does not. For instance the database could be named something generic like Reporting, which could be the back end for nearly anything.

The form asks the requester to prepare a profile for the database. I could have named this section “who is your daddy and what does he do?”. If the requester states they are looking for a reporting database but operationally it’s running as a transactional database, that could make a big difference in terms of the resources provided for the database and the underlying hardware.

Possibly most importantly, the form helps to establish the likely impact of the new database with the Resource Impact Estimation section. For example if a requester asks for 10 GB of space and states they expect space usage to increase by 5 GB a year, but the disk has lost a terabyte in the first few months, the form will clarify who got their numbers wrong.

The users and groups section clarifies who should have access to the database. Effectively everything related to data and data access should be okayed by a compliance officer to confirm everything is GDPR compliant. This form will assist the compliance officer in establishing that.

The Business Continuity & Upkeep section is really the domain of the DBA, but it helps to get requester input on these matters, for instance when establishing maintenance windows.

If you have any additional questions you feel should be on the form please feel free to contact me and I’ll add them.


How to assess a SQL Server instance for GDPR compliance by writing every table and column to Excel

This post uses the script I had written before here LINK, modified slightly to include a row count for each table. If you are a DBA you’re likely assisting compliance officers to find personally identifiable data within the databases at this time. This script will allow you to provide them with a record of every database, table and column on an entire instance. The compliance officer can then sieve through all the columns and highlight any columns that look like they contain personal data for further investigation.

Below is a SQL query that will return the following metadata about each table from each database located on a SQL Server instance:

  • ServerName
  • DatabaseName
  • SchemaName
  • TableName
  • CountOfRows
  • ColumnName
  • KeyType

The output is provided in a table format with these additional formatted lines of text, which can be used as queries or as parts of queries:

  • A SELECT table query
  • A COUNT of table rows query
  • A SELECT column query
  • Each column bracketed
  • Each table and column bracketed
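
For example, for a hypothetical table [Sales].[dbo].[Customers] with a column CustomerName (names invented purely for illustration), the generated lines would look something like:

,[CustomerName]
,[Customers].[CustomerName]
SELECT * FROM [Sales].[dbo].[Customers] --WHERE --GROUP BY --HAVING --ORDER BY
SELECT [Customers].[CustomerName] FROM [Sales].[dbo].[Customers] --WHERE --GROUP BY --HAVING --ORDER BY
SELECT COUNT(*) FROM [Sales].[dbo].[Customers] WITH (NOLOCK)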

The table returned by the query can be exported to Excel. Using Excel’s filter option applied to the columns of the table makes finding and selecting specific tables and columns very easy.

This process can be repeated for every SQL Server instance used by the business to generate a single master Excel file mapping every server, allowing the user to quickly find any table or column available to the organisation.

Applying some colour coding like below adds to the ease of use.

Image of excel file with mapped database server structure

How to use:

Simply open SQL Server Management Studio and from Object Explorer right-click on the server name and select New Query. This will open a query window set to the master database of the server. Copy and paste the SQL below into this window and execute it. When the query has finished you will have created the table described above.

/*
SCRIPT UPDATED
20180316
*/

USE [master]
GO

/*DROP TEMP TABLES IF THEY EXIST*/
IF OBJECT_ID('tempdb..#DatabaseList') IS NOT NULL
	DROP TABLE #DatabaseList;

IF OBJECT_ID('tempdb..#TableStructure') IS NOT NULL
	DROP TABLE #TableStructure;

IF OBJECT_ID('tempdb..#ErrorTable') IS NOT NULL
	DROP TABLE #ErrorTable;

IF OBJECT_ID('tempdb..#MappedServer') IS NOT NULL
	DROP TABLE #MappedServer;
	
IF OBJECT_ID('tempdb..#TableCount') IS NOT NULL
	DROP TABLE #TableCount;
	
IF OBJECT_ID('tempdb..#Count') IS NOT NULL
	DROP TABLE #Count;

DECLARE @ServerName AS SYSNAME
DECLARE @Count INT

SET @ServerName = @@SERVERNAME

CREATE TABLE #DatabaseList (
	Id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY
	,ServerName SYSNAME
	,DbName SYSNAME
	);

CREATE TABLE [#TableStructure] (
	[DbName] SYSNAME
	,[SchemaName] SYSNAME
	,[TableName] SYSNAME
	,[ColumnName] SYSNAME
	,[KeyType] CHAR(7)
	) ON [PRIMARY];
	
CREATE TABLE [#TableCount] (
	[Id_TableCount] INT NOT NULL IDENTITY(1, 1) PRIMARY KEY
	,[ServerName] SYSNAME
	,[DatabaseName] SYSNAME
	,[SchemaName] SYSNAME
	,[TableName] SYSNAME
	,[SQLCommand] VARCHAR(MAX)
	,[TableCount] INT
	);

CREATE TABLE #Count (ReturnedCount INT);

/*THE ERROR TABLE WILL STORE THE DYNAMIC SQL THAT DID NOT WORK*/
CREATE TABLE [#ErrorTable] ([SqlCommand] VARCHAR(MAX)) ON [PRIMARY];

/*
A LIST OF DISTINCT DATABASE NAMES IS CREATED
THESE TWO COLUMNS ARE STORED IN THE #DatabaseList TEMP TABLE
THIS TABLE IS USED IN A FOR LOOP TO GET EACH DATABASE NAME
*/
INSERT INTO #DatabaseList (
	ServerName
	,DbName
	)
SELECT @ServerName
	,NAME AS DbName
FROM master.dbo.sysdatabases WITH (NOLOCK)
WHERE NAME <> 'tempdb'
ORDER BY NAME ASC

/*VARIABLES ARE DECLARED FOR USE IN THE FOLLOWING FOR LOOP*/
DECLARE @sqlCommand AS VARCHAR(MAX)
DECLARE @DbName AS SYSNAME
DECLARE @i AS INT
DECLARE @z AS INT

SET @i = 1
SET @z = (
		SELECT COUNT(*) + 1
		FROM #DatabaseList
		)

/*WHILE 1 IS LESS THAN THE NUMBER OF DATABASE NAMES IN #DatabaseList*/
WHILE @i < @z
BEGIN
	/*GET NEW DATABASE NAME*/
	SET @DbName = (
			SELECT [DbName]
			FROM #DatabaseList
			WHERE Id = @i
			)
	/*CREATE DYNAMIC SQL TO GET EACH TABLE NAME AND COLUMN NAME FROM EACH DATABASE*/
	SET @sqlCommand = 'USE [' + @DbName + '];' + '

INSERT INTO [#TableStructure]
SELECT DISTINCT ' + '''' + @DbName + '''' + ' AS DbName
	,SCHEMA_NAME(SCHEMA_ID) AS SchemaName
	,T.NAME AS TableName	
	,C.NAME AS ColumnName
	,CASE 
		WHEN OBJECTPROPERTY(OBJECT_ID(iskcu.CONSTRAINT_NAME), ''IsPrimaryKey'') = 1 
			THEN ''Primary'' 
		WHEN OBJECTPROPERTY(OBJECT_ID(iskcu.CONSTRAINT_NAME), ''IsForeignKey'') = 1 
			THEN ''Foreign''
		ELSE NULL 
		END AS ''KeyType''
FROM SYS.TABLES AS t WITH (NOLOCK)
INNER JOIN SYS.COLUMNS C ON T.OBJECT_ID = C.OBJECT_ID
LEFT JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS iskcu WITH (NOLOCK) 
ON SCHEMA_NAME(SCHEMA_ID) = iskcu.TABLE_SCHEMA 
	AND T.NAME = iskcu.TABLE_NAME
	AND C.NAME = iskcu.COLUMN_NAME
ORDER BY SchemaName ASC
	,TableName ASC
	,ColumnName ASC;
';

	/*ERROR HANDLING*/
	BEGIN TRY
		EXEC (@sqlCommand)
	END TRY

	BEGIN CATCH
		INSERT INTO #ErrorTable
		SELECT (@sqlCommand)
	END CATCH

	SET @i = @i + 1
END

/*
JOIN THE TEMP TABLES TOGETHER TO CREATE A MAPPED STRUCTURE OF THE SERVER
ADDITIONAL FIELDS ARE ADDED TO MAKE SELECTING TABLES AND FIELDS EASIER
*/
SELECT DISTINCT @@SERVERNAME AS ServerName
	,DL.DbName
	,TS.SchemaName
	,TS.TableName
	,TS.ColumnName
	,TS.[KeyType]
	,',' + QUOTENAME(TS.ColumnName) AS BracketedColumn
	,',' + QUOTENAME(TS.TableName) + '.' + QUOTENAME(TS.ColumnName) AS BracketedTableAndColumn
	,'SELECT * FROM ' + QUOTENAME(DL.DbName) + '.' + QUOTENAME(TS.SchemaName) + '.' + QUOTENAME(TS.TableName) + ' --WHERE --GROUP BY --HAVING --ORDER BY' AS [SelectTable]
	,'SELECT ' + QUOTENAME(TS.TableName) + '.' + QUOTENAME(TS.ColumnName) + ' FROM ' + QUOTENAME(DL.DbName) + '.' + QUOTENAME(TS.SchemaName) + '.' + QUOTENAME(TS.TableName) + ' --WHERE --GROUP BY --HAVING --ORDER BY' AS [SelectColumn]
	,'SELECT COUNT(*) FROM ' + QUOTENAME(DL.DbName) + '.' + QUOTENAME(TS.SchemaName) + '.' + QUOTENAME(TS.TableName) + ' WITH (NOLOCK)' AS [PerformTableCount]
INTO #MappedServer
FROM [#DatabaseList] AS DL
INNER JOIN [#TableStructure] AS TS ON DL.DbName = TS.DbName
ORDER BY DL.DbName ASC
	,TS.SchemaName ASC
	,TS.TableName ASC
	,TS.ColumnName ASC

/*
HOUSE KEEPING
*/
IF OBJECT_ID('tempdb..#DatabaseList') IS NOT NULL
	DROP TABLE #DatabaseList;

IF OBJECT_ID('tempdb..#TableStructure') IS NOT NULL
	DROP TABLE #TableStructure;
	
INSERT INTO #TableCount (
	[ServerName]
	,[DatabaseName]
	,[SchemaName]
	,[TableName]
	,[SQLCommand]
	)
SELECT DISTINCT [ServerName]
	,[DbName]
	,[SchemaName]
	,[TableName]
	,[PerformTableCount]
FROM #MappedServer
ORDER BY [ServerName] ASC
	,[DbName] ASC
	,[SchemaName] ASC
	,[TableName] ASC

SET @i = 1
SET @z = (
		SELECT COUNT(*) + 1
		FROM #TableCount
		)
	
WHILE @i < @z
BEGIN
	SET @SQLCommand = (
			SELECT SQLCommand
			FROM #TableCount
			WHERE Id_TableCount = @i
			)

	--ERROR HANDLING
	BEGIN TRY
		INSERT INTO #Count
		EXEC (@SqlCommand)
	END TRY

	BEGIN CATCH
		INSERT INTO #ErrorTable
		SELECT (@sqlCommand)
	END CATCH

	SET @Count = (
			SELECT ReturnedCount
			FROM #Count
			)

	TRUNCATE TABLE #Count

	UPDATE #TableCount
	SET TableCount = @Count
	WHERE Id_TableCount = @i;

	SET @i = @i + 1
END

SELECT *
FROM #ErrorTable;

IF OBJECT_ID('tempdb..#ErrorTable') IS NOT NULL
	DROP TABLE #ErrorTable;

/*
THE DATA RETURNED CAN NOW BE EXPORTED TO EXCEL
USING A FILTERED SEARCH WILL NOW MAKE FINDING FIELDS A VERY EASY PROCESS
*/
SELECT DISTINCT ms.ServerName
	,ms.DbName
	,ms.SchemaName
	,ms.TableName
	,ts.TableCount AS CountOfRows
	,ms.ColumnName
	,ms.KeyType
	,ms.BracketedColumn
	,ms.BracketedTableAndColumn
	,ms.SelectColumn
	,ms.SelectTable
FROM #MappedServer AS ms
LEFT JOIN #TableCount AS ts ON ms.ServerName = ts.ServerName
AND ms.DbName = ts.DatabaseName
AND ms.SchemaName = ts.SchemaName
AND ms.TableName = ts.TableName
ORDER BY ms.DbName ASC
	,ms.SchemaName ASC
	,ms.TableName ASC
	,ms.ColumnName ASC;
	
IF OBJECT_ID('tempdb..#MappedServer') IS NOT NULL
	DROP TABLE #MappedServer;

IF OBJECT_ID('tempdb..#TableCount') IS NOT NULL
	DROP TABLE #TableCount;
	
IF OBJECT_ID('tempdb..#Count') IS NOT NULL
	DROP TABLE #Count;


How to determine where a new database should be deployed

Below I’ve listed 22 questions to consider when deciding where a new SQL Server database should be deployed given the four main options of:

  1. Add Database to existing Instance
  2. Create separate Instance for Database
  3. Create separate VM for Instance and Database
  4. New physical machine needed

There’s no hard and fast rule for determining where a database should go; however, there are often a lot of limitations dictating your choices. I’ve started the list with a big limitation: money.

If you want to use the list like a score card I’ve prepared one called the “New Database Hosting Environment Assessment Form”; I know, not the snappiest of titles.

New Database Hosting Environment Assessment Form

Use: I’d recommend stepping through the questions with the requester while gathering feedback. You can then fill in the option scoring with weighted figures or simply use the boxes to record notes. The main deliverable of the form is documentation demonstrating that the location for the database was assessed, with the logic and reasoning for the location chosen recorded.

Of course consider adding additional questions relevant to your case to the form. If you think your additional questions would be relevant to everyone deploying a new database feel free to forward them on to me and I’ll include them in the form.

The form can be downloaded here link.

Question 1: Can the business afford a new physical machine?

Considerations: If the financial resources are not available option four can be eliminated.

Question 2: Are there OS or SQL Server Licensing restrictions?

Considerations: If there are no restrictions, a new VM with its own instance will likely offer the most scalable solution and produce the best performance. If restrictions exist options are limited to 1 & 2.

Question 3: Is the Database the back end for a stand-alone application?

Considerations: The back end for a stand-alone application should be isolated as much as possible from other impacting factors. The higher the number of the option chosen the better.

Question 4: What is the primary purpose of the Database?

Considerations: What is the business use\s, and in what capacity will the database function, i.e. transactional Db, reporting Db, datastore?

Question 5: Do you want\need to be able to limit access to hardware resources?

Considerations: Access to resources, RAM etc. can only be limited at the instance level, so option 1 cannot be used.

Question 6: Are there any SQL Server Service Pack dependencies?

Considerations: SQL Server can have different SPs on different instances but cannot have different SPs for different databases within an instance.
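
A quick way to check the version and service pack level of an instance is via SERVERPROPERTY, for example:

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion
	,SERVERPROPERTY('ProductLevel') AS ServicePackLevel;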

Question 7: What is the current excess capacity of the hardware?

Considerations: Can the hardware handle the additional workload? If not, either reduce resource consumption on the hardware, add RAM and/or cores, or choose option 4.

Question 8: Is there a VM capacity restraint on the hardware?

Considerations: Can the hardware actually host a new VM without a trade-off in resources or a decrease in performance? If restrained, option 3 cannot be used.

Question 9: What is the VM capacity?

Considerations: Is the OS already maxed out on resources? Can the VM handle the additional workload?

Question 10: Is there an expected increase in size and usage in the Database over time?

Considerations: If known and minimal option 1 can be used. If unknown or unlimited the higher the number of the option chosen the better.

Question 11: Is the resource usage of the new Database known?

Considerations: Benchmarking RAM, CPU and bandwidth usage should be carried out prior to installation.

Question 12: What are the disaster recovery requirements?

Considerations: Should the databases that share common dependencies be hosted on the same server?

Question 13: What is the required operational up time? 24/7 etc.

Considerations: Does this operational run time tie in with the rest of the databases on the instance, or the instances hosted on a VM or physical server?

Question 14: What are the Maintenance requirements?

Considerations: Will new index rebuilds etc. take a long time and affect the schedule of the instance overall?

Question 15: What are the Backup requirements?

Considerations: Will the backup schedule of the other databases be impacted by the addition?

Question 16: Is the Database functionally similar to, or supporting, databases currently on the instance?

Considerations: Does the new Database logically fit in with the database/s currently running on an instance?

Question 17: Have server security concerns been addressed?

Considerations: Will people with administrative access to the server have access to a Database and data they should not be able to see?

Question 18: Does hosting particular databases together compound the potential damage if an unauthorised person was to gain access to the server?

Considerations: Will an unauthorised person have access to data that represents a major security breach if the data is combined from different databases?

Question 19: Does a vendor have control over the instance or server?

Considerations: Will putting a Database on a particular instance leave you at the mercy of a vendor?

Question 20: Will stacking instances make tracking down performance issues across the VM difficult?

Considerations: Will this create a server level noisy neighbour problem?

Question 21: Will packing databases too densely make tracking down performance issues across the Instance difficult?

Considerations: Will this create an instance level noisy neighbour problem?

Question 22: Will moving the Database to be hosted somewhere else in the future be possible?

Considerations: Does this decision need to be gotten right the first time?

How to pass arguments from command line to a console application written in C#

This is a simple tutorial on passing arguments or parameter values from command line to a console application written in C#. Using the example below you should be able to edit and expand on the logic to fit your own needs.

First you’ll need to create a new Visual Studio C# console application, to do so follow these steps:

To create and run a console application

  1. Start Visual Studio.

  2. On the menu bar, choose File > New > Project.
  3. Expand Installed, expand Templates, expand Visual C#, and then choose Console Application.
  4. In the Name box, specify a name for your project, and then choose the OK button.
  5. If Program.cs isn’t open in the Code Editor, open the shortcut menu for Program.cs in Solution Explorer, and then choose View Code.
  6. Replace the contents of Program.cs with the following code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace TestArgsInput
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length == 0)
            {
				// Display message to user to provide parameters.
                System.Console.WriteLine("Please enter parameter values.");
                Console.Read();
            }
            else
            {
                // Loop through array to list args parameters.
                for (int i = 0; i < args.Length; i++)
                {
                    Console.Write(args[i] + Environment.NewLine);
                    
                }
                // Keep the console window open after the program has run.
                Console.Read();
            }
        }
    }
}


The Main method is the entry point of a C# application. When the application is started, the Main method is the first method that is invoked.

The parameter of the Main method is a String array that represents the command-line arguments. Usually you determine whether arguments exist by testing the Length property as in the example above.

When run, the example above will list the parameters you have provided to the command window. The delimiter separating command line arguments or parameter values is a single space. For example the following would be interpreted as two arguments or parameter values:

“This is parameter 1” “This is parameter 2”

If the arguments were not enclosed in double quotes, each word would be considered a separate argument.

To pass arguments to the console application when testing the application logic, the arguments can be written into the Debug section of the project properties as shown below.

TestArgs

So if the app is run with the command line arguments provided as in the image above, the command window will list:
Parameter 1
Parameter 2
If you would like to know how to create a console application in Visual Studio that won’t open a command window when it runs see this tutorial link.
If you would like to know how to create a csv file with C# see this tutorial link.

How to copy a large result set from SQL Server Management Studio to Excel

So you’ve tried copying and pasting the results of a query into an Excel file only to get the out of memory exception. Now the fun starts!

There are multiple options for copying data from SQL Server Management Studio to Excel, however most of them are a pain.

You’ve tried the Save As option, but the CSV and text files have jumbled-up content. You could use the Management Studio export function, but this is intended for physical tables not result sets, and you might not have the permissions to create tables in the environment. You could use the bcp utility, but you’ve probably read that leaving this option turned on represents a security risk. You could create an SSIS package . . . yeah, that’s an efficient and effective option when you just want the damn results of an ad hoc query!!!

I’d suggest splitting the result set into chunks. You can then copy and paste the chunks into the Excel file without running out of memory. Sure it’s kinda manual, but trust me it’ll take less time than the options above.

I’d wager you probably only need the result set split in two, so you have to copy and paste twice rather than once. Not that big a deal right? I’ve even provided some code below that will really move things along.

Start by writing your query results into a temporary table called #QueryResult, for example SELECT * INTO #QueryResult FROM TableName.

Then all you need to do is determine how many segments you need. NTILE(n) is a function that allocates your output into n segments, each of the same size (give or take rounding when the number of rows isn’t divisible by n).
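
For instance, here is a minimal sketch of the idea, assuming #QueryResult holds Id and Name columns and n = 2 (the full script below handles the general case):

SELECT Id
	,Name
	,NTILE(2) OVER (ORDER BY Id) AS Ntile
FROM #QueryResult;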

So this produces an output like:

Id   Name     Ntile
1    Mickey   1
2    Leo      1
3    Raph     2
4    Donnie   2

Start by leaving n set to the default of 2. Once the data is written to the table #QueryResult run the code below in the same SSMS window the temp table was created in. Running the code should produce the same number of returned result sets as the n value you provided. Use a higher n number to create more segments if you still run out of memory when you try to copy and paste the first segment.

/*
Write your query results to a temp table here
i.e. SELECT * INTO #QueryResult FROM TableName
*/
DECLARE @n INT
DECLARE @i INT

/*
Set n to how many segments/results set returned you need
*/
SET @n = 2
SET @i = 1

SELECT *
	,NTILE(@n) OVER (
		ORDER BY RowNum
		) AS NtileGroup
INTO #Export
FROM (
	SELECT ROW_NUMBER() OVER (
			ORDER BY (
					SELECT NULL
					)
			) AS RowNum
		,*
	FROM #QueryResult
	) AS a

WHILE @i <= @n
BEGIN
	SELECT *
	FROM #Export
	WHERE NtileGroup = @i
	ORDER BY RowNum ASC

	SET @i = @i + 1
END

DROP TABLE #QueryResult

DROP TABLE #Export


So that’s it, you should now be able to copy and paste your results. Maybe someday in the future Microsoft will add the option of saving results directly to Excel . . .

How to assess T-SQL code quickly

I’m sure you’re an excellent SQL coder writing beautiful efficient queries, but your predecessor . . . well they might have just been lucky to have a job.

Going through someone else’s bad code is usually tiresome, tedious and often very confusing.

I’ve created the T-SQL Assessor excel file to help in this task.

DOWNLOAD (Dropbox link)

The assessor will colour code the SQL to highlight the lines of importance. With the Key Word column you can then simply filter to words like INSERT, UPDATE, MERGE and EXEC to see where the data is going, or filter the column by the word FROM to see where the data has come from.

To use the T-SQL Assessor file you will first have to format your code using Poor Man’s T-SQL Formatter. This excellent tool can be installed in Visual Studio, SQL Server Management Studio or Notepad++.

http://architectshack.com/PoorMansTSqlFormatter.ashx

You can also use the online option:

http://poorsql.com/

Poor Man’s T-SQL Formatter starts a new line for each SQL command key word, so you can’t have INSERT and FROM on one line. This is what allows the Excel formulas to highlight the lines with key words, as each line can only contain one key word, excluding comments.
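
For example, a one-liner like this (an invented snippet for illustration):

SELECT FirstName, LastName FROM dbo.Person WHERE PersonId = 1

comes out of the formatter along these lines:

SELECT FirstName
	,LastName
FROM dbo.Person
WHERE PersonId = 1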

Once the code is formatted simply paste it into the first sheet of the file, “SQL”.

That’s it, all the work is then done for you on the second sheet of the file, “SQL Assessed”.

T-SQL Assessor is also great at preparing a report from a schema compare script created by Visual Studio. It’s very annoying that Microsoft didn’t provide a way of exporting the comparison directly into Excel the way Redgate did, but this will help. Simply filter the file to only include the keywords.

DOWNLOAD (Dropbox link)

How to archive tables that contain a keyword from one database to another using T-Sql

In the previous article I wrote about how to identify and remove unwanted tables. Link

In that tutorial I suggested prefixing the tables you want to remove with _DELETE_.

I also suggested you may want to archive the tables in some manner before you delete them. This could be because there is a chance someone might come looking for the data that was in a deleted table, or something might break by removing the table and you might want to put it back asap.

If you have only a few tables you could script the tables out, but if you have a lot of tables that becomes a little unmanageable. Also if the tables total 10 GB in size, the script to recreate them will be a lot larger.

An alternative method is to create an Archive database, copy the tables across to this target database and then delete the tables in the source database. You can then back up and drop the Archive database, saving the .bak file somewhere cheaper.
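
As a rough sketch of that last step, assuming a backup path of C:\SQL_Server\Backup\ (adjust to suit):

BACKUP DATABASE [Archive] TO DISK = 'C:\SQL_Server\Backup\Archive.bak';
GO

DROP DATABASE [Archive];
GO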

The script below will allow you to do just that. It prints the T-Sql to do the job; it doesn’t carry out the job, so it’s completely safe to execute and review.

To use the script below create a target database.

Use the target database name for the variable value @TargetDb

Use the source database name for the variable value @SourceDb

The @KeyWord variable is used to gather all the tables that contain the string of choice, in the example below _DELETE_.

The @RemoveKeyWord variable is a flag that will remove the keyword string from the target database table name, e.g. _DELETE_Sales will become Sales.

IF OBJECT_ID('tempdb..#Table') IS NOT NULL
	DROP TABLE #Table
GO

DECLARE @RemoveKeyWord BIT
DECLARE @Id AS INT
DECLARE @KeyWord AS VARCHAR(256)
DECLARE @SourceDb AS SYSNAME
DECLARE @TargetDb AS SYSNAME
DECLARE @TableName AS SYSNAME
DECLARE @SchemaName AS SYSNAME
DECLARE @Sql AS VARCHAR(MAX)

SET @RemoveKeyWord = 1
SET @KeyWord = '_DELETE_'
SET @TargetDb = 'Archive'
SET @SourceDb = 'Source'
SET NOCOUNT ON

CREATE TABLE #Table (
	Id_Table INT IDENTITY(1, 1)
	,SchemaName SYSNAME
	,TableName SYSNAME
	);

SET @Sql = '
INSERT INTO #Table (
	SchemaName
	,TableName
	)
SELECT s.NAME
	,so.NAME
FROM ' + QUOTENAME(@SourceDb) + '.sys.tables AS so
LEFT JOIN ' + QUOTENAME(@SourceDb) + '.sys.schemas AS s ON so.schema_id = s.schema_id
WHERE so.NAME LIKE ' + '''' + '%' + @KeyWord + '%' + '''' + '
ORDER BY s.NAME ASC'

EXEC (@Sql)

SET @SchemaName = ''

WHILE @SchemaName IS NOT NULL
BEGIN
	SET @SchemaName = (
			SELECT MIN(SchemaName)
			FROM #Table
			WHERE SchemaName > @SchemaName
				AND SchemaName <> 'dbo'
			)

	PRINT 'USE ' + QUOTENAME(@TargetDb) + ';
GO
	
IF NOT EXISTS (SELECT * FROM ' + QUOTENAME(@TargetDb) + '.sys.schemas WHERE name = ' + '''' + @SchemaName + '''' + ')
  BEGIN
    EXEC (' + '''' + 'CREATE SCHEMA ' + QUOTENAME(@SchemaName) + ';' + '''' + ');
  END;
  
'
END

SET @Id = 1

WHILE @Id IS NOT NULL
BEGIN
	SELECT @TableName = TableName
		,@SchemaName = SchemaName
	FROM #Table
	WHERE Id_Table = @Id

	IF @RemoveKeyWord = 1
	BEGIN
		PRINT '
SELECT *
INTO ' + QUOTENAME(@TargetDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(REPLACE(@TableName, @KeyWord, '')) + ' 
FROM ' + QUOTENAME(@SourceDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
	END
	ELSE
	BEGIN
		PRINT '
SELECT *
INTO ' + QUOTENAME(@TargetDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName) + ' 
FROM ' + QUOTENAME(@SourceDb) + '.' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@TableName)
	END

	SET @Id = (
			SELECT MIN(Id_Table)
			FROM #Table
			WHERE Id_Table > @Id
			)
END

How to identify and remove unused tables in SQL Server with T-Sql

In a perfect world an organisation should never get itself into the situation where tables need to be identified as functionally obsolete and removed in bulk. Schema changes should be stepped through the cycle of development, test, staging and live with developers cleaning as they go, but we don’t live in a perfect world as you may have noticed.

What do I mean by functionally obsolete? This means the tables are no longer being interacted with by user generated objects like stored procedures, functions or views. There may also be tables that could be classified as business redundant. That is, they are being referenced by user generated objects frequently but they no longer have a use to the business, i.e. one job may have been replaced by another without the former being disabled or dropped. Business redundant objects are more difficult to determine and finding them may require input from multiple stakeholders.

A good approach for removing objects is to rename the objects first. This makes it easier to put the environment back the way it was if any problems are encountered. After a set period of time, if there has been no impact on the overall environment, script out the object and then drop it. (Obviously do this in a test environment first if possible.)
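
As a hypothetical example of that approach, using the _DELETE_ prefix convention (object names invented):

/*Rename the table out of harm's way*/
EXEC sp_rename 'dbo.Sales', '_DELETE_Sales';

/*If nothing breaks after the agreed period, script it out and then drop it*/
--DROP TABLE dbo.[_DELETE_Sales];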

The script below is mostly a lightweight SELECT statement that can be run on any environment. It does not execute any of the code it generates. It uses the sys.dm_db_index_usage_stats dynamic management view to determine when the tables were last interacted with, interaction being defined as the following actions applied against the object: update, seek, scan, lookup.

Caveat: Entries in this view reset to NULL after a SQL Server restart. Also the DMV has been known to be a bit unreliable with earlier versions of SQL Server, with cases of the view being reset when a full index rebuild is carried out against a table. So don’t execute any code without first reviewing it.

Tables with NULL values for the fields below should be tables that have not been referenced at all, or at least not since the last time the server was restarted; a minimal query sketch follows the list below.

LastUserUpdate
LastUserSeek
LastUserScan
LastUserLookup
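
As a minimal sketch of that check for a single database (run it in the database of interest; it is illustrative only and separate from the main script further down):

SELECT t.name AS TableName
FROM sys.tables AS t
LEFT JOIN sys.dm_db_index_usage_stats AS ius ON ius.object_id = t.object_id
	AND ius.database_id = DB_ID()
WHERE t.is_ms_shipped = 0
GROUP BY t.name
HAVING MAX(ius.last_user_update) IS NULL
	AND MAX(ius.last_user_seek) IS NULL
	AND MAX(ius.last_user_scan) IS NULL
	AND MAX(ius.last_user_lookup) IS NULL;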

Removing these objects is the low hanging fruit of cleaning up an environment. The script also provides stats on how many times these interactions happened and the size of the object. All these stats together should help you determine if an object is functionally redundant or business redundant and can be removed.

To aid further in the cleanup the script also creates the fields Action, Comments, Renamed, RenamedDate, RenameForDeletion, RestoreOriginalName, DropTable, DroppedDate.

The script output can then be copied and pasted into an Excel spreadsheet and used to coordinate and track the cleanup progress.

SET NOCOUNT ON

DECLARE @Database TABLE ([DbName] [sysname])
DECLARE @DbName AS [sysname]
DECLARE @Sql AS [varchar] (max)

IF OBJECT_ID('tempdb..#TableStats', 'U') IS NOT NULL
	DROP TABLE #TableStats

IF OBJECT_ID('tempdb..#IndexStats', 'U') IS NOT NULL
	DROP TABLE #IndexStats

IF OBJECT_ID('tempdb..#TableUsageStats', 'U') IS NOT NULL
	DROP TABLE #TableUsageStats

IF OBJECT_ID('tempdb..#TableSizeStats', 'U') IS NOT NULL
	DROP TABLE #TableSizeStats

CREATE TABLE #TableStats (
	[DbName] [sysname]
	,[SchemaName] [sysname]
	,[ObjectId] [bigint]
	,[TableName] [sysname]
	,[ModifiedDate] [datetime]
	);

CREATE TABLE #IndexStats (
	[DbName] [sysname]
	,[ObjectId] [bigint]
	,[HasIndex] [bit]
	);

CREATE TABLE #TableSizeStats (
	[DbName] [varchar](255) NULL
	,[SchemaName] [varchar](255) NULL
	,[ObjectId] [bigint]
	,[TableName] [varchar](255) NULL
	,[RowCount] [bigint] NULL
	,[AvailableSpacePercentage] [numeric](6, 2) NULL
	,[UnusedSpaceGb] [numeric](10, 3) NULL
	,[UsedSpaceGb] [numeric](10, 3) NULL
	,[TotalSpaceGb] [numeric](10, 3) NULL
	,[UnusedSpaceMb] [numeric](13, 3) NULL
	,[UsedSpaceMb] [numeric](13, 3) NULL
	,[TotalSpaceMb] [numeric](13, 3) NULL
	,[UnusedSpaceKb] [bigint] NULL
	,[UsedSpaceKb] [bigint] NULL
	,[TotalSpaceKb] [bigint] NULL
	)

CREATE TABLE #TableUsageStats (
	[DbName] [sysname]
	,[ObjectId] [bigint]
	,[TableName] [nvarchar](128) NULL
	,[LastUserUpdate] [datetime] NULL
	,[LastUserSeek] [datetime] NULL
	,[LastUserScan] [datetime] NULL
	,[LastUserLookup] [datetime] NULL
	,[UserUpdateCount] [bigint] NOT NULL
	,[UserSeekCount] [bigint] NOT NULL
	,[UserScanCount] [bigint] NOT NULL
	,[UserLookupCount] [bigint] NOT NULL
	);

SET @DbName = '';

INSERT INTO @Database (DbName)
SELECT NAME
FROM sys.databases 
WHERE NAME NOT IN (
		'tempdb'
		,'master'
		,'msdb'
		,'model'
		)
AND state_desc = 'ONLINE'
ORDER BY NAME ASC;

WHILE @DbName IS NOT NULL
BEGIN
	SET @DbName = (
			SELECT MIN(DbName)
			FROM @Database
			WHERE DbName > @DbName
			);
	SET @Sql = '
INSERT INTO #TableStats (
	DbName
	,schemaName
	,ObjectId
	,TableName
	,ModifiedDate
	)
SELECT DbName
	,SchemaName
	,ObjectId
	,TableName
	,ModifiedDate
FROM (
SELECT DISTINCT ' + '''' + @DbName + '''' + ' AS DbName
	,s.NAME AS SchemaName
	,t.object_id AS ObjectId
	,t.NAME AS TableName
	,t.modify_date AS ModifiedDate
FROM ' + QUOTENAME(@DbName) + '.sys.tables AS t
INNER JOIN ' + QUOTENAME(@DbName) + '.sys.schemas AS s ON t.schema_id = s.schema_id
LEFT JOIN ' + QUOTENAME(@DbName) + '.sys.extended_properties AS ep ON ep.major_id = t.[object_id] /*Exclude System Tables*/
WHERE t.NAME IS NOT NULL
	AND s.NAME IS NOT NULL
	AND (ep.[name] IS NULL OR ep.[name] <> ''microsoft_database_tools_support'')
	) AS rd
WHERE rd.SchemaName IS NOT NULL
ORDER BY DbName ASC
	,TableName ASC;
'

	EXEC (@Sql)

	SELECT @Sql = '
INSERT INTO #IndexStats (
	DbName
	,ObjectId
	,HasIndex
	)
SELECT ' + '''' + @DbName + '''' + ' AS DbName
	,OBJECT_ID AS ObjectId
	,IndexCheck AS HasIndex
FROM (
	SELECT DISTINCT OBJECT_ID
		,CASE 
			WHEN (
					[TYPE] > 0
					AND is_disabled = 0
					)
				THEN 1
			ELSE 0
			END AS IndexCheck
	FROM ' + QUOTENAME(@DbName) + '.sys.indexes
	) AS rd
WHERE rd.IndexCheck = 1
'

	EXEC (@Sql)

	SET @Sql = 
		'
INSERT INTO #TableSizeStats (
	[DbName]
	,[SchemaName]
	,[ObjectId]
	,[TableName]
	,[RowCount]
	,[AvailableSpacePercentage]
	,[UnusedSpaceGb]
	,[UsedSpaceGb]
	,[TotalSpaceGb]
	,[UnusedSpaceMb]
	,[UsedSpaceMb]
	,[TotalSpaceMb]
	,[UnusedSpaceKb]
	,[UsedSpaceKb]
	,[TotalSpaceKb]
	)
SELECT DISTINCT rd.[DbName]
	,rd.[SchemaName]
	,rd.[ObjectId]
	,rd.[TableName]
	,rd.[RowCount]
	,CASE 
		WHEN TotalSpaceKb > 0
			THEN ((UnusedSpaceKb / TotalSpaceKb) * 100)
		ELSE 0
		END AS AvailableSpacePercentage
	,CONVERT(NUMERIC(10, 3), (rd.[UnusedSpaceKb] / 1024.) / 1024.) AS UnusedSpaceGb
	,CONVERT(NUMERIC(10, 3), (rd.[UsedSpaceKb] / 1024.) / 1024.) AS UsedSpaceGb
	,CONVERT(NUMERIC(10, 3), (rd.[TotalSpaceKb] / 1024.) / 1024.) AS TotalSpaceGb
	,CONVERT(NUMERIC(13, 3), (rd.[UnusedSpaceKb] / 1024.)) AS UnusedSpaceMb
	,CONVERT(NUMERIC(13, 3), (rd.[UsedSpaceKb] / 1024.)) AS UsedSpaceMb
	,CONVERT(NUMERIC(13, 3), (rd.[TotalSpaceKb] / 1024.)) AS TotalSpaceMb
	,rd.[UnusedSpaceKb]
	,rd.[UsedSpaceKb]
	,rd.[TotalSpaceKb]
FROM (
	SELECT ' 
		+ '''' + @DbName + '''' + ' AS DbName
		,t.Object_id AS ObjectId
		,s.[name] AS [SchemaName]
		,t.[name] AS [TableName]
		,p.[rows] AS [RowCount]
		,SUM(a.[used_pages]) * 8 AS [UsedSpaceKb]
		,(SUM(a.[total_pages]) - SUM(a.[used_pages])) * 8 AS [UnusedSpaceKb]
		,SUM(a.[total_pages]) * 8 AS [TotalSpaceKb]
	FROM ' + QUOTENAME(@DbName) + '.sys.tables AS t
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.schemas AS s ON t.schema_id = s.schema_id
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.indexes AS i ON t.OBJECT_ID = i.object_id
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.partitions AS p ON i.object_id = p.OBJECT_ID
		AND i.[index_id] = p.[index_id]
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.allocation_units a ON p.[partition_id] = a.[container_id]
	WHERE t.[is_ms_shipped] = 0
		AND i.OBJECT_ID > 255
	GROUP BY s.[name]
		,t.[name]
		,t.[object_id]
		,p.[rows]
	) AS rd
ORDER BY DbName ASC
	,SchemaName ASC
	,TableName ASC;
'

	EXEC (@Sql)

	SET @Sql = '
INSERT INTO #TableUsageStats (
	[DbName]
	,[ObjectId]
	,[TableName]
	,[LastUserUpdate]
	,[LastUserSeek]
	,[LastUserScan]
	,[LastUserLookup]
	,[UserUpdateCount]
	,[UserSeekCount]
	,[UserScanCount]
	,[UserLookupCount]
	)
SELECT DbName
	,ObjectId
	,TableName
	,LastUserUpdate
	,LastUserSeek
	,LastUserScan
	,LastUserLookup
	,UserUpdateCount
	,UserSeekCount
	,UserScanCount
	,UserLookupCount
FROM (
	SELECT DISTINCT ROW_NUMBER() OVER (
			PARTITION BY ius.Object_Id ORDER BY last_user_update DESC
			) AS RN
		,' + '''' + @DbName + '''' + ' AS DbName
		,ius.OBJECT_ID AS ObjectId
		,o.NAME AS TableName
		,ius.last_user_update AS LastUserUpdate
		,ius.last_user_seek AS LastUserSeek
		,ius.last_user_scan AS LastUserScan
		,ius.last_user_lookup AS LastUserLookup
		,ius.user_updates AS UserUpdateCount
		,ius.user_seeks AS UserSeekCount
		,ius.user_scans AS UserScanCount
		,ius.user_lookups AS UserLookupCount
	FROM ' + QUOTENAME(@DbName) + 
		'.sys.dm_db_index_usage_stats AS ius
	INNER JOIN ' + QUOTENAME(@DbName) + '.sys.objects AS o ON ius.OBJECT_ID = o.OBJECT_ID
		AND o.NAME IS NOT NULL
	) AS rd
WHERE rd.RN = 1
ORDER BY rd.DbName ASC
	,rd.TableName ASC
	,rd.LastUserUpdate DESC
	,rd.LastUserSeek DESC
	,rd.LastUserScan DESC
	,rd.LastUserLookup DESC;
'

	EXEC (@Sql)
END;
GO

SELECT DISTINCT ROW_NUMBER() OVER (
		ORDER BY ts.[DbName] ASC
			,ts.[SchemaName] ASC
			,ts.[TableName] ASC
		) AS Row
	,ts.[DbName]
	,ts.[SchemaName]
	,ts.[TableName]
	,'' AS Action --Rename, keep etc.
	,'' AS Comments
	,'' AS Renamed --boolean flag
	,'' AS RenamedDate 
	,i.[HasIndex] --Tables without an index are heaps
	,tss.[TotalSpaceMb] AS TableSizeInMb
	,ts.[ModifiedDate]
	,tus.[LastUserUpdate]
	,tus.[LastUserSeek]
	,tus.[LastUserScan]
	,tus.[LastUserLookup]
	,tus.[UserUpdateCount]
	,tus.[UserSeekCount]
	,tus.[UserScanCount]
	,tus.[UserLookupCount]
	,tss.[AvailableSpacePercentage]
	,tss.[UnusedSpaceGb]
	,tss.[UsedSpaceGb]
	,tss.[TotalSpaceGb]
	,tss.[UnusedSpaceMb]
	,tss.[UsedSpaceMb]
	,tss.[TotalSpaceMb]
	,tss.[UnusedSpaceKb]
	,tss.[UsedSpaceKb]
	,tss.[TotalSpaceKb]
	,'USE ' + QUOTENAME(ts.[DbName]) + '; EXEC sp_rename ' + '''' + ts.[SchemaName] + '.' + ts.[TableName] + '''' + ', ' + '''' + '_DELETE_' + ts.[TableName] + '''' + ';' AS RenameForDeletion
	,'USE ' + QUOTENAME(ts.[DbName]) + '; EXEC sp_rename ' + '''' + ts.[SchemaName] + '.' + '_DELETE_' + ts.[TableName] + '''' + ', ' + '''' + ts.[TableName] + '''' + ';' AS RestoreOriginalName
	,'USE ' + QUOTENAME(ts.[DbName]) + '; DROP TABLE ' + QUOTENAME(ts.[SchemaName]) + '.' + '[' + '_DELETE_' + ts.[TableName] + ']' + ';' AS 'DropTable'
	,'' AS DroppedDate
FROM #TableStats AS ts
LEFT JOIN #TableSizeStats AS tss ON ts.ObjectId = tss.ObjectId
	AND tss.DbName = ts.DbName
LEFT JOIN #IndexStats AS i ON ts.ObjectId = i.ObjectId
	AND i.DbName = ts.DbName
LEFT JOIN #TableUsageStats AS tus ON ts.ObjectId = tus.ObjectId
	AND tus.DbName = ts.DbName
ORDER BY ts.[DbName] ASC
	,ts.[SchemaName] ASC
	,ts.[TableName] ASC

	-- REF: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-usage-stats-transact-sql

How to size transaction log files appropriately and reduce the number of virtual log files

Sizing transaction log files and resolving the issue of having too many virtual log files may or may not be caused by the same problem, but both have similar solutions revolving around the same steps. As such the script provided below can resolve both, but the application of the script differs slightly.

Appropriately sizing the Transaction Log (TLog) File

Ideally, TLog files should be created at their desired size at installation rather than being allowed to grow slowly over time.

Typically though the DBA will not know how large or frequent the transactions will be prior to the database becoming active in the production environment.

Or the DBA might be new to the organisation and may not be able to tell if the TLog file size is appropriate for the typical transactions the database handles, or is the result of some abnormally large transactions or failed TLog backups.

All this can of course be investigated somewhat, but if the database transactions are in the kilobytes and transaction frequency is low, then you could also take the pragmatic approach outlined below:

  1. Backup the log until it gets truncated
  2. Shrink the log as much as possible
  3. Allow the file to grow
  4. Record the maximum size of the log
  5. Backup the log until it gets truncated
  6. Shrink the log as much as possible
  7. Manually expand the log to the maximum size recorded in step 4
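
For a single database named, say, MyDb in FULL recovery, steps 1, 2 and 7 boil down to commands like the following (database name, logical log file name, backup path and sizes are invented for illustration):

BACKUP LOG [MyDb] TO DISK = 'C:\SQL_Server\Backup\MyDb_Log.trn';
GO

DBCC SHRINKFILE ('MyDb_log', 10);
GO

/*After growth has levelled off, resize to the maximum recorded in step 4, e.g. 1024 MB*/
ALTER DATABASE [MyDb] MODIFY FILE (NAME = 'MyDb_log', SIZE = 1024MB);
GO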

Carrying out the steps above will also resolve any issues with excessive Virtual Log Files. This is dealt with in more detail below.

The script provided at the end of this article can be utilized to achieve the steps above; see the instructions “How to use the script” below.

Before proceeding with this approach, however, ensure the following points have been considered, as these will minimize the impact of growing the files on the databases:

  • That the autogrowth increment for the transaction log is suitably large; @DefaultFileGrowth is set at 128 MB. This value or higher is recommended.
  • Appropriate restriction settings for growth, preferably unrestricted
  • Instant file initialization is on.

Review the following article to set up instant file initialization.

https://www.brentozar.com/blitz/instant-file-initialization/

You may already know that the TLog file has been appropriately sized but there are too many VLFs, in which case the following section outlines how to resolve VLFs as an issue.

Dealing with Virtual Log Files (VLFs)

Each Transaction Log (TLog) file is divided logically into smaller segments called VLFs. VLFs are the unit of truncation for the TLog. When a VLF no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions.

SQL Server should not have an excessive number of VLFs inside the TLog. Having a large number of small VLFs can cause the following problems:

  • Slow down the recovery process on startup or after restoring a backup
  • Slow down TLog backups
  • Affect insert/update/delete performance

What constitutes too many VLFs as a problem is subjective however Microsoft flags 1000+ VLFs as an issue.

The number of VLFs can grow based on the autogrowth settings for the TLog file and how often active transactions are written to disk. The problem of too many VLFs is often the result of accepting the default settings for initial log file size and autogrowth increment when the database is created.

To check how many VLFs a database has, run the below command against the database:

DBCC LOGINFO;

Each row returned represents a VLF.
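
On SQL Server 2016 SP2 and later the same information is also exposed through the sys.dm_db_log_info dynamic management function, which makes counting VLFs straightforward:

SELECT COUNT(*) AS VlfCount
FROM sys.dm_db_log_info(DB_ID());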

Resolving too many VLFs unfortunately requires some downtime. The fix is to shrink and regrow the log file. This is a blocking operation, and I would suggest that once complete you run a full backup of the database\s, as the shrink will break the continuity of the Full and TLog backup schedule, which will cause the TLog backups to fail.

Taking the following steps will reduce the number of VLFs in the TLog of a database:

  1. Record the maximum size of the log
  2. Backup the log until it gets truncated
  3. Shrink the log as much as possible
  4. Manually expand the log to the maximum size recorded in the first step

The script provided at the end of this article can be utilized to achieve the steps above; see the instructions “How to use the script” below.

How to use the script:

The script below will output the TSQL to run against all the databases on your instance. If you want to address every database on the instance, run the entire script during scheduled downtime. To run against a specific database simply run the snippet that relates to that database.

The role of Backups

The TLog needs to be backed up and truncated prior to running the shrink to get the file as small as possible.

Backup Option 1:

The script will back up the TLog\s if you provide a value for the @BackupLocation variable. This is defaulted to ‘C:\SQL_Server\Backup\’.

Backup Option 2: 

Alternatively you can set @BackupLocation = NULL and run your own TLog backup maintenance plan or maintenance script prior to running the script.

Future Proofing by adding a margin for growth:

The script variable @Margin allows you to set, as a percentage, how much larger you want the new TLog file size to be based on the current size. So for a database with a current TLog file size of 100 MB, running the script with @Margin = 10 will output TSQL to change the TLog file size to 110 MB. This increases the TLog file size by 10%, providing 10% additional disk space before the TLog would need to grow again.
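
For reference, this mirrors the calculation the script performs against sys.master_files, where file size is counted in 8 KB pages; a worked example with invented figures:

/*25600 pages = 200 MB; with @Margin = 10 the result is 220 MB*/
SELECT CEILING((25600 * 8.0) / 1024) + CEILING(((25600 * 8.0) / 1024) * (10.00 / 100)) AS NewLogSizeMb;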

Appropriate TLog File Size Instructions:

Once you have backups arranged using one of the options above, set the variable @LogSizeMb = 128 and run the script. This will start the TLog file size at 128 MB.

Take the TSQL generated and run the snippets needed for particular databases, or run the entire script.

Let the file grow over a period that is representative of repeated database transaction activity.

When the TLog file is no longer growing you have found the natural size of the TLog for that database.

Run the script again, this time setting @LogSizeMb = 0, and if appropriate provide a value for @Margin. The TSQL generated will then use the current TLog file size as is, plus the optional margin.

Run the snippets for the required database\s.

Dealing with Virtual Log Files (VLFs) Instructions:

Once you have backups arranged using one of the options above, ensure that the variable @LogSizeMb is set to zero and, if appropriate, provide a value for @Margin. The script can then be run to generate the TSQL needed to shrink and resize the TLog file, reducing the number of VLFs.

Run the snippets for the required database\s.

USE master;
GO

SET QUOTED_IDENTIFIER ON
GO

SET ARITHABORT ON
GO

DECLARE @DbName NVARCHAR(255)
	,@LogicalFileName NVARCHAR(255)
	,@DBRecoveryDesc VARCHAR(200)
	,@LogSizeMb INT
	,@DefaultFileGrowth VARCHAR(10)
	,@BackupLocation VARCHAR(255)
	,@GetDate VARCHAR(23)
	,@Margin AS DECIMAL(5, 2)

SELECT @Margin = 0
	,@LogSizeMb = 0
	,@BackupLocation = 'C:\SQL_Server\Backup\'
	,@DefaultFileGrowth = '128'
	,@GetDate = CONVERT(VARCHAR(23), GETDATE(), 126)
	,@GetDate = REPLACE(REPLACE(REPLACE(REPLACE(@GetDate, '-', ''), ':', ''), 'T', '_'), '.', '')

/*STATIC snapshots the result set at OPEN so that fetching into @LogSizeMb cannot skew the LogSizeMb calculation for later rows*/
DECLARE DatabaseList CURSOR STATIC
FOR
SELECT d.NAME
	,d.recovery_model_desc AS DBRecoveryDesc
	,CASE 
		WHEN @LogSizeMb > 0
			THEN @LogSizeMb
		ELSE CEILING((mf.size * 8.0) / 1024) + (CEILING(((mf.size * 8.0) / 1024) * (@Margin / 100)))
		END AS LogSizeMb
	,@BackupLocation AS BackupLocation
	,@DefaultFileGrowth AS DefaultFileGrowth
FROM sys.databases AS d
LEFT JOIN sys.master_files AS mf ON d.database_Id = mf.database_Id
WHERE d.state_desc = 'ONLINE'
	AND d.is_read_only = 0
	AND d.database_id > 4
	AND type_desc = 'LOG'
ORDER BY NAME

OPEN DatabaseList

FETCH NEXT
FROM DatabaseList
INTO @DbName
	,@DBRecoveryDesc
	,@LogSizeMb
	,@BackupLocation
	,@DefaultFileGrowth

WHILE @@FETCH_STATUS = 0
BEGIN
	SET @LogicalFileName = (
			SELECT TOP 1 NAME
			FROM sys.master_files AS mf
			WHERE DB_NAME(database_id) = @DbName
				AND type_desc = 'LOG'
			)

	IF @DBRecoveryDesc = 'Full'
		AND @BackupLocation IS NOT NULL
	BEGIN
		PRINT (
				'Use [' + @DbName + '] 
            GO  
			
			BACKUP LOG [' + @DbName + '] TO DISK=''' + @BackupLocation + @DbName + '_' + @GetDate + '_Log.trn' + ''';
			GO
			
            ALTER DATABASE [' + @DbName + '] SET RECOVERY SIMPLE WITH NO_WAIT
            GO   

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)  
            GO  
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO

            ALTER DATABASE [' + @DbName + '] SET RECOVERY FULL WITH NO_WAIT
            GO '
				)
		PRINT '----------------------------------------------------------- '
	END
	ELSE IF @DBRecoveryDesc = 'Full'
	BEGIN
		PRINT (
				'Use [' + @DbName + '] 
            GO  
					
            ALTER DATABASE [' + @DbName + '] SET RECOVERY SIMPLE WITH NO_WAIT
            GO   

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)  
            GO  
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO

            ALTER DATABASE [' + @DbName + '] SET RECOVERY FULL WITH NO_WAIT
            GO '
				)
		PRINT '----------------------------------------------------------- '
	END
	ELSE IF @DBRecoveryDesc = 'Simple'
	BEGIN
		PRINT (
				'Use [' + @DbName + ']  
            GO  

            DBCC SHRINKFILE (''' + @LogicalFileName + ''',10)    
            GO
            
            ALTER DATABASE [' + @DbName + '] MODIFY FILE (NAME=''' + @LogicalFileName + ''', SIZE=' + CONVERT(VARCHAR(10), @LogSizeMb) + 'MB, MAXSIZE=UNLIMITED, FILEGROWTH=' + @DefaultFileGrowth + 'MB);
            GO
            '
				)
		PRINT '----------------------------------------------------------- '
	END

	FETCH NEXT
	FROM DatabaseList
	INTO @DbName
		,@DBRecoveryDesc
		,@LogSizeMb
		,@BackupLocation
		,@DefaultFileGrowth
END

CLOSE DatabaseList

DEALLOCATE DatabaseList