Below is a script to get the default error log path for SQL Server and set it as a variable.
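Something like the following should do the trick (a minimal sketch, assuming SQL Server 2012 or later, where SERVERPROPERTY('ErrorLogFileName') is available):

/* Read the current error log file path into a variable and strip the
   file name to leave just the folder. */
DECLARE @ErrorLogPath NVARCHAR(4000);

SELECT @ErrorLogPath = CAST(SERVERPROPERTY('ErrorLogFileName') AS NVARCHAR(4000));

-- Remove the trailing file name (everything after the last backslash)
SELECT @ErrorLogPath = LEFT(@ErrorLogPath,
       LEN(@ErrorLogPath) - CHARINDEX('\', REVERSE(@ErrorLogPath)));

SELECT @ErrorLogPath AS DefaultErrorLogPath;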
So you’ve tried copying and pasting the results of a query into an Excel file, only to get an out-of-memory exception. Now the fun starts!
There are multiple options for copying data from SQL Server Management Studio to Excel; however, most of them are a pain.
You’ve tried the Save As option, but the CSV and text files have jumbled-up content. You could use Management Studio’s export function, but this is intended for physical tables, not result sets, and you might not have permission to create tables in the environment. You could use the bcp utility, but you’ve probably read that leaving this option turned on represents a security risk. You could create an SSIS package . . . yeah, that’s an efficient and effective option when you just want the damn results of an ad hoc query!!!
I’d suggest splitting the result set into chunks. You can then copy and paste the chunks into the Excel file without running out of memory. Sure, it’s kinda manual, but trust me, it’ll take less time than the options above.
I’d wager you probably only need the result set split in two, so you have to copy and paste twice rather than once. Not that big a deal, right? I’ve even provided some code below that will really move things along.
Start by writing your query results into a temporary table called #QueryResult, for example SELECT * INTO #QueryResult FROM TableName.
Then all you need to do is determine how many segments you need. NTILE(n) is a function that allocates your output into n segments, each of the same size (give or take rounding when the number of rows isn’t divisible by n).
So, for n = 2, this produces an output where every row is tagged with a segment number of 1 or 2.
Start by leaving n set to the default of 2. Once the data is written to the table #QueryResult, run the code below in the same SSMS window the temp table was created in. Running the code should produce the same number of returned result sets as the n value you provided. If you still run out of memory when you try to copy and paste the first segment, use a higher n to create more segments.
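Here’s a minimal sketch of that code (the segment numbering uses NTILE over an arbitrary order; adjust the ORDER BY if row order matters to you):

/* Split #QueryResult into @n segments and return each one as its own
   result set. Assumes #QueryResult already exists in this session. */
DECLARE @n INT = 2;   -- number of segments; increase if still running out of memory
DECLARE @i INT = 1;

IF OBJECT_ID('tempdb..#Segmented') IS NOT NULL DROP TABLE #Segmented;

SELECT NTILE(@n) OVER (ORDER BY (SELECT NULL)) AS SegmentNo, *
INTO #Segmented
FROM #QueryResult;

-- Return one result set per segment; copy and paste each into Excel
WHILE @i <= @n
BEGIN
    SELECT * FROM #Segmented WHERE SegmentNo = @i;
    SET @i = @i + 1;
END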
So that’s it, you should now be able to copy and paste your results. Maybe someday Microsoft will add the option of saving results directly to Excel . . .
You’re probably never going to be familiar with every database object, i.e. every table, view, stored procedure and function, that exists in a large production database, even if you were the one who designed it. So everyone who maintains an environment, be it a call center back end, a sales system back end, a CRM back end, etc., needs to be able to locate objects quickly.
Redgate offer a fantastic free tool called SQL Search that does this within SQL Server Management Studio through a GUI.
As great as this is, though, sometimes you might want to search through object definitions programmatically.
To clarify, I’m defining an object definition as the column names of a table or view, or the command text that makes up a stored procedure or function.
Below is the stored procedure I’ve written to do this, called SearchObjectDefinition. To work, this stored procedure also requires the user-defined function (UDF) called Split, which I used in the tutorial “How to pass a multi-value parameter to a stored procedure from a SSRS Report”.
Below are a few use cases for SearchObjectDefinition:
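For example (the parameter name here matches the sketch further down rather than necessarily the original procedure):

-- Find tables, views, procedures and functions referencing 'Email'
EXEC dbo.SearchObjectDefinition @SearchString = 'Email';

-- Find every object whose definition mentions a particular table
EXEC dbo.SearchObjectDefinition @SearchString = 'SalesOrderHeader';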
As always, be sure to deploy the following function and stored procedure in a utility database, not the master database, as deploying user objects to master is bad practice.
SearchObjectDefinition Stored Procedure:
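What follows is a minimal sketch of the idea rather than the original procedure: the parameter name and output shape are my own assumptions, it takes a single search string (so it skips the Split dependency the original uses for multi-value parameters), and CREATE OR ALTER requires SQL Server 2016 SP1 or later.

CREATE OR ALTER PROCEDURE dbo.SearchObjectDefinition
    @SearchString NVARCHAR(256)
AS
BEGIN
    SET NOCOUNT ON;

    -- Tables and views with a column name containing the search string
    SELECT s.name AS SchemaName,
           o.name AS ObjectName,
           o.type_desc AS ObjectType,
           c.name AS MatchedColumn
    FROM sys.columns AS c
        JOIN sys.objects AS o ON o.object_id = c.object_id
        JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE c.name LIKE N'%' + @SearchString + N'%'

    UNION ALL

    -- Stored procedures and functions whose command text contains it
    SELECT s.name,
           o.name,
           o.type_desc,
           NULL
    FROM sys.sql_modules AS m
        JOIN sys.objects AS o ON o.object_id = m.object_id
        JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE m.definition LIKE N'%' + @SearchString + N'%'
    ORDER BY SchemaName, ObjectName;
END
GO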
The ‘String or binary data would be truncated’ error occurs when an insert or update statement tries to put more characters into a column than the column’s definition allows. For example, trying to write an email address of 255 characters into a table where the email column has been assigned only 40 characters.
The easy fix is to assign more characters to the column or columns you have determined are experiencing the problem. The more complicated, but potentially necessary, fix might be to change the logic or introduce validation at the source of data entry.
Finding the columns experiencing the problem, however, can be time consuming.
( . . . without the little script below of course)
SQL Server will kindly direct you to the stored procedure or insert/update statement that is experiencing the problem. However, it will not pinpoint the exact column or columns that cannot be written to. The pain then is determining where the data won’t fit.
To speed things up, take the entire query, or the query section you know to be causing the problem, and write the results into a temp table called #temp, i.e. SELECT * INTO #temp FROM SomeTable.
Once the data has been written to the temp table #temp, run the script below in the same window.
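A minimal sketch of that script: it reports the maximum actual length of each string column in #temp, which you can then compare against the destination table’s column definitions.

/* Loop over the string columns of #temp and report the longest value
   held in each one. */
DECLARE @ColumnName SYSNAME,
        @Sql NVARCHAR(MAX);

DECLARE ColumnCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT c.name
    FROM tempdb.sys.columns AS c
    WHERE c.object_id = OBJECT_ID('tempdb..#temp')
      AND c.system_type_id IN (167, 175, 231, 239); -- varchar, char, nvarchar, nchar

OPEN ColumnCursor;
FETCH NEXT FROM ColumnCursor INTO @ColumnName;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Sql = N'SELECT ' + QUOTENAME(@ColumnName, '''') + N' AS ColumnName,'
             + N' MAX(LEN(' + QUOTENAME(@ColumnName) + N')) AS MaxActualLength'
             + N' FROM #temp;';
    EXEC sys.sp_executesql @Sql;
    FETCH NEXT FROM ColumnCursor INTO @ColumnName;
END

CLOSE ColumnCursor;
DEALLOCATE ColumnCursor;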
Say you have code you want executed against every table on a SQL Server instance. You could use SQL Server’s inbuilt sp_MSForEachDB and sp_MSForEachTable. I’m not a big fan of them though, because they are undocumented, so I’d always be concerned Microsoft might decide to kill them with any given patch or service pack update. (I know the likelihood of that is extremely low, but I’m a risk-averse kinda guy.)
I prefer to use the example below. It may not be the most efficient snippet of code available on the net, but it’s good and simple, and it’s not going anywhere unless I drop it.
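A minimal sketch of the kind of loop I mean: gather every user table on the instance, then print a templated command for each one. Swap the PRINT, or the COUNT(*) template, for whatever work you actually want done.

IF OBJECT_ID('tempdb..#Tables') IS NOT NULL DROP TABLE #Tables;
CREATE TABLE #Tables (DbName SYSNAME, SchemaName SYSNAME, TableName SYSNAME);

DECLARE @DbName SYSNAME,
        @Sql NVARCHAR(MAX);

DECLARE DbCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE database_id > 4            -- skip system databases
      AND state_desc = 'ONLINE';

OPEN DbCursor;
FETCH NEXT FROM DbCursor INTO @DbName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Collect every user table in this database
    SET @Sql = N'SELECT N''' + @DbName + N''', s.name, t.name'
             + N' FROM ' + QUOTENAME(@DbName) + N'.sys.tables AS t'
             + N' JOIN ' + QUOTENAME(@DbName) + N'.sys.schemas AS s'
             + N' ON s.schema_id = t.schema_id;';
    INSERT INTO #Tables EXEC sys.sp_executesql @Sql;
    FETCH NEXT FROM DbCursor INTO @DbName;
END

CLOSE DbCursor;
DEALLOCATE DbCursor;

-- One statement per table; change PRINT to EXEC once you have reviewed it
DECLARE StatementCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'SELECT COUNT(*) FROM ' + QUOTENAME(DbName) + N'.'
         + QUOTENAME(SchemaName) + N'.' + QUOTENAME(TableName) + N';'
    FROM #Tables;

OPEN StatementCursor;
FETCH NEXT FROM StatementCursor INTO @Sql;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Sql;
    FETCH NEXT FROM StatementCursor INTO @Sql;
END

CLOSE StatementCursor;
DEALLOCATE StatementCursor;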
In the previous article I wrote about how to identify and remove unwanted tables.
In that tutorial I suggested prefixing the tables you want to remove with _DELETE_.
I also suggested you may want to archive the tables in some manner before you delete them, because there’s a chance someone might come looking for the data that was in a deleted table, or something might break when the table is removed and you’ll want to put it back asap.
If you have only a few tables you could script them out, but if you have a lot of tables that becomes a little unmanageable. Also, if the tables total 10 GB in size, the script to recreate them will be a lot larger again.
An alternative method is to create an Archive database, copy the tables across to this target database and then delete the tables in the source database. You can then back up and drop the Archive database, saving the .bak file somewhere cheaper.
The script below will allow you to do just that. It prints the T-Sql to do the job; it doesn’t carry out the job, so it’s completely safe to execute and review.
To use the script below create a target database.
Use the target database name for the variable value @TargetDb
Use the source database name for the variable value @SourceDb
The @KeyWord variable is used to gather all the tables that contain the string of choice, in the example below _DELETE_.
The @RemoveKeyWord variable is a flag that will remove the keyword string from the target database table name, e.g. _DELETE_Sales will become Sales.
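A minimal sketch of the script (run it while connected to the source database; the database names below are placeholders, and note that PRINT truncates very long strings, so for a big cleanup SELECT the generated statements instead):

/* Prints the T-Sql to copy matching tables into the target database and
   drop them from the source; nothing is executed. Assumes the matching
   schemas already exist in the target database. */
DECLARE @TargetDb SYSNAME = N'Archive',
        @SourceDb SYSNAME = N'SourceDbName',
        @KeyWord NVARCHAR(128) = N'_DELETE_',
        @RemoveKeyWord BIT = 1;  -- 1 strips the keyword from the target table name

DECLARE @Sql NVARCHAR(MAX) = N'';

SELECT @Sql = @Sql
    + N'SELECT * INTO ' + QUOTENAME(@TargetDb) + N'.' + QUOTENAME(s.name) + N'.'
    + QUOTENAME(CASE WHEN @RemoveKeyWord = 1
                     THEN REPLACE(t.name, @KeyWord, N'')
                     ELSE t.name END)
    + N' FROM ' + QUOTENAME(@SourceDb) + N'.' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';'
    + CHAR(13) + CHAR(10)
    + N'DROP TABLE ' + QUOTENAME(@SourceDb) + N'.' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';'
    + CHAR(13) + CHAR(10)
FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
-- escape underscores so LIKE treats the keyword literally
WHERE t.name LIKE N'%' + REPLACE(@KeyWord, N'_', N'[_]') + N'%';

PRINT @Sql;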
In a perfect world an organisation would never get itself into the situation where tables need to be identified as functionally obsolete and removed in bulk. Schema changes should be stepped through the cycle of development, test, staging and live, with developers cleaning up as they go, but we don’t live in a perfect world, as you may have noticed.
What do I mean by functionally obsolete? The tables are no longer being interacted with by user-generated objects like stored procedures, functions or views. There may also be tables that could be classified as business redundant: they are referenced by user-generated objects frequently, but they no longer have a use to the business, e.g. one job may have been replaced by another without the former being disabled or dropped. Business redundant objects are more difficult to determine, and finding them may require input from multiple stakeholders.
A good approach for removing objects is to rename them first. This makes it easier to put the environment back the way it was if any problems are encountered. After a set period of time, if there has been no impact on the overall environment, script out the object and then drop it. (Obviously do this in a test environment first if possible.)
The script below is mostly a lightweight SELECT statement that can be run on any environment. It does not execute any of the code it generates. It uses the sys.dm_db_index_usage_stats dynamic management view to determine when the tables were last interacted with, interaction being defined as an update, seek, scan or lookup against the object.
Caveat: entries in this view reset to NULL after a server restart. The DMV has also been known to be a bit unreliable in earlier versions of SQL Server, with cases of the view being reset when a full index rebuild is carried out against a table. So don’t execute any code without first reviewing it.
Tables with NULL values for the fields below should be tables that have not been referenced at all, or at least not since the last time the server was restarted.
Removing these objects is the low-hanging fruit of cleaning up an environment. The script also provides stats on how many times these interactions happened and the size of each object. Together these stats should help you determine whether an object is functionally obsolete or business redundant and can be removed.
To aid further in the cleanup, the script also creates the fields Action, Comments, Renamed, RenamedDate, RenameForDeletion, RestoreOriginalName, DropTable and DroppedDate.
The script output can then be copied and pasted into an Excel spreadsheet and used to coordinate and track the cleanup progress.
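A minimal sketch of the script (run it in the database you’re cleaning up; the output columns follow the list above, and the generated DROP TABLE statement assumes the rename has already been applied):

/* Last-interaction dates, interaction counts and approximate size per
   table, plus empty tracking fields for the Excel sheet. Usage stats
   reset on instance restart, so review before acting on NULLs. */
SELECT s.name AS SchemaName,
       t.name AS TableName,
       u.LastUserSeek, u.LastUserScan, u.LastUserLookup, u.LastUserUpdate,
       u.UserSeeks, u.UserScans, u.UserLookups, u.UserUpdates,
       ps.ApproxRows,
       ps.UsedSpaceMb,
       '' AS Action, '' AS Comments, '' AS Renamed, '' AS RenamedDate,
       'EXEC sp_rename ''' + s.name + '.' + t.name + ''', ''_DELETE_' + t.name + ''';' AS RenameForDeletion,
       'EXEC sp_rename ''' + s.name + '._DELETE_' + t.name + ''', ''' + t.name + ''';' AS RestoreOriginalName,
       'DROP TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME('_DELETE_' + t.name) + ';' AS DropTable,
       '' AS DroppedDate
FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    OUTER APPLY (SELECT MAX(iu.last_user_seek)   AS LastUserSeek,
                        MAX(iu.last_user_scan)   AS LastUserScan,
                        MAX(iu.last_user_lookup) AS LastUserLookup,
                        MAX(iu.last_user_update) AS LastUserUpdate,
                        SUM(iu.user_seeks)   AS UserSeeks,
                        SUM(iu.user_scans)   AS UserScans,
                        SUM(iu.user_lookups) AS UserLookups,
                        SUM(iu.user_updates) AS UserUpdates
                 FROM sys.dm_db_index_usage_stats AS iu
                 WHERE iu.object_id = t.object_id
                   AND iu.database_id = DB_ID()) AS u
    OUTER APPLY (SELECT SUM(p.row_count) AS ApproxRows,
                        SUM(p.used_page_count) * 8 / 1024.0 AS UsedSpaceMb -- data pages only
                 FROM sys.dm_db_partition_stats AS p
                 WHERE p.object_id = t.object_id
                   AND p.index_id IN (0, 1)) AS ps  -- heap or clustered index
ORDER BY u.LastUserUpdate, u.LastUserSeek;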
I’d really recommend not running the output of this script on a live environment! This is just a little something I whipped up to test on a development environment.
Running the script below will output the T-Sql required to disable all enabled job schedules on the SQL Server instance. Just copy the outputted text, paste it into a new SSMS window and execute it to disable the schedules.
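A minimal sketch of the script:

/* Prints an sp_update_schedule call for every enabled schedule in msdb. */
DECLARE @Sql NVARCHAR(MAX) = N'';

SELECT @Sql = @Sql
    + N'EXEC msdb.dbo.sp_update_schedule @schedule_id = '
    + CAST(schedule_id AS NVARCHAR(10))
    + N', @enabled = 0; -- ' + name
    + CHAR(13) + CHAR(10)
FROM msdb.dbo.sysschedules
WHERE enabled = 1;

PRINT @Sql;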
Running the script below will output the T-Sql required to disable all enabled jobs on the SQL Server instance where the job name contains a specific keyword. Just copy the outputted text, paste it into a new SSMS window and execute it to disable the jobs.
For example, if you change the text value ‘PLACEHOLDER’ for the variable @KeyWord to ‘Backup’, then any job which has ‘Backup’ in its name will have T-Sql generated to disable it. You could change the PRINT to EXEC, but I wouldn’t recommend it.
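A minimal sketch of the script:

/* Prints an sp_update_job call for every enabled job whose name contains
   the keyword. */
DECLARE @KeyWord SYSNAME = N'PLACEHOLDER',
        @Sql NVARCHAR(MAX) = N'';

SELECT @Sql = @Sql
    + N'EXEC msdb.dbo.sp_update_job @job_name = N''' + REPLACE(name, N'''', N'''''')
    + N''', @enabled = 0;'
    + CHAR(13) + CHAR(10)
FROM msdb.dbo.sysjobs
WHERE enabled = 1
  AND name LIKE N'%' + @KeyWord + N'%';

PRINT @Sql;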
I’d really recommend not running the output of this script on a live environment! This is just a little something I whipped up to clear out a development environment.
Running the script will output the T-Sql required to delete all jobs on the SQL Server instance. Jobs for maintenance plans are not included, as maintenance plans need to be deleted first. Just copy the text, paste it into a new SSMS window and execute it. You could change the PRINT to EXEC, but I wouldn’t recommend it.
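A minimal sketch of the script (msdb.dbo.sysmaintplan_subplans is used to exclude maintenance plan jobs):

/* Prints an sp_delete_job call for every job that is not attached to a
   maintenance plan. */
DECLARE @Sql NVARCHAR(MAX) = N'';

SELECT @Sql = @Sql
    + N'EXEC msdb.dbo.sp_delete_job @job_id = ''' + CAST(j.job_id AS NVARCHAR(36))
    + N'''; -- ' + j.name
    + CHAR(13) + CHAR(10)
FROM msdb.dbo.sysjobs AS j
WHERE NOT EXISTS (SELECT 1
                  FROM msdb.dbo.sysmaintplan_subplans AS sp
                  WHERE sp.job_id = j.job_id);

PRINT @Sql;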