June 2009 - Posts

After disabling TDE you still require the certificate to restore the database
29 June 09 11:46 PM | MartinBell | 3 comment(s)

I came across this problem today when trying to restore a database that had previously had TDE enabled. (Note: the connect item seems to list the wrong database in the error message!!)

As you can imagine, after waiting 1 hour for the backup to restore I was not very happy!! But I did guess it was the fact that the database had been previously encrypted which led me to the connect item. I then had to wait another hour to be proved correct!!

You don’t need two servers to witness this behaviour; the same thing will happen on a single server if you disable encryption on a database, drop the certificate, and then backup and restore the database. To re-create this, run a script along the lines of the sketch shown after the error output below. The error message is the same as in the connect item:

Processed 176 pages for database 'TDEDatabase', file 'TDEDatabase' on file 1.
Processed 1 pages for database 'TDEDatabase', file 'TDEDatabase_log' on file 1.
Msg 3283, Level 16, State 1, Line 1
The file "TDEDatabase_log" failed to initialize correctly. Examine the error logs for more details.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
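
For reference, a minimal sketch of the kind of script involved is below (database, certificate and file names are hypothetical, and the paths will need changing for your system):

USE master;
GO
-- One-off setup: a database master key and a server certificate to protect the database encryption key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!1';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE demo certificate';
GO
CREATE DATABASE TDEDatabase;
GO
USE TDEDatabase;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE TDEDatabase SET ENCRYPTION ON;
GO
-- Later: turn encryption off, wait for the decryption scan to finish, then remove the key and certificate
ALTER DATABASE TDEDatabase SET ENCRYPTION OFF;
GO
USE TDEDatabase;
DROP DATABASE ENCRYPTION KEY;
GO
USE master;
DROP CERTIFICATE TDECert;
GO
-- Even though the database is no longer encrypted, the restore fails with the error shown above
BACKUP DATABASE TDEDatabase TO DISK = N'C:\Backups\TDEDatabase.bak' WITH INIT;
RESTORE DATABASE TDEDatabase FROM DISK = N'C:\Backups\TDEDatabase.bak' WITH REPLACE;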

From this error message alone it is not obvious that the issue is a missing certificate, and you could spend a lot of time investigating the problem and taking new backups if you thought it was the backup file or the database that was corrupt. So how did I know that this was the cause?

Always remember to backup any server certificate you use when creating database encryption keys and make sure you know which databases use which certificates on any given server!

As part of my investigations into SQL Server 2008’s new features, I had encrypted a database using TDE to demo what happens when you try to restore it to a SQLExpress instance. If you want to try this yourself, take a backup while encryption is still on (see the sketch above) and restore it to a SQLExpress instance. The error message you get is:

13 percent processed.
22 percent processed.
31 percent processed.
40 percent processed.
53 percent processed.
62 percent processed.
71 percent processed.
80 percent processed.
94 percent processed.
100 percent processed.
Processed 176 pages for database 'TDEDatabase', file 'TDEDatabase' on file 1.
Processed 3 pages for database 'TDEDatabase', file 'TDEDatabase_log' on file 1.
Msg 33117, Level 16, State 2, Line 1
Transparent Data Encryption is not available in the edition of this SQL Server instance. See books online for more details on feature support in different SQL Server editions.
Msg 3167, Level 16, State 1, Line 1
RESTORE could not start database 'TDEDatabase'.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.
Msg 933, Level 21, State 1, Line 1
Database 'TDEDatabase' cannot be started because some of the database functionality is not available in the current edition of SQL Server.
 

It is only when the restore has processed all the files that you get the error, but at least the error message is more informative!! The same error is produced by all the other editions of SQL Server that don't support TDE (i.e. every one except Enterprise and Developer!), even if TDE has since been removed.

 

You would really expect the backup header to hold information about TDE being enabled, but according to Books Online there does not seem to be any value/attribute for it. If there is one, it's not used, so I raised another connect item for this.
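
If you want to inspect a backup header yourself, RESTORE HEADERONLY lists everything it exposes (hypothetical backup path):

RESTORE HEADERONLY FROM DISK = N'C:\Backups\TDEDatabase.bak';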

London Area SQL Server User Group 18 June Slidedecks
19 June 09 06:29 PM | MartinBell | with no comments

Here are the slide decks from the London User Group meeting on 18 June 2009.

Iain's Slide Deck on SQL Server Performance Troubleshooting.

Martin's slide deck on Replaying SQL Profiler Traces.

Please check out the SQLBits website for more information on the SQLBits conference.

 

SQL Profiler and TEMP directory
18 June 09 07:00 PM | MartinBell | 1 comment(s)

In my last article I wrote about moving your temp directory to avoid problems if the system disc became full. If this does happen it could be a DBA running profiler that caused it!

Accepted best practice for running the SQL Profiler application says you should run it on a different machine from the server. This is good advice because it avoids using processor and memory on the server, but also because it will not be using disc space on the server, specifically space in the temp directory.

Under “Space Requirements for SQL Server Profiler”, Books Online does say that you should move the temp directory and that SQL Profiler requires at least 10 MB to function. The issue here is that this is only a minimum value, and system partitions don’t tend to be that large! To show this I started a standard trace, but added the SQL:StmtStarting and SQL:StmtCompleted events. This created a file PrfF8F2.tmp in the C:\Users\Martin\AppData\Local\Temp directory. At a command prompt I then ran:

sqlcmd -E -S (local) -Q "WHILE 1=1 SELECT * FROM master.sys.databases"

This caused the file to grow by about 1MB per minute, so after half an hour it was approaching 30MB. This growth is nothing compared to what a heavily used system would produce.



I copied the file and then stopped the trace. This didn’t remove the original temporary file, so I closed down the trace window, which did.

So what does this file contain?
I renamed the file to PrfF8F2.trc and opened it with SQL Profiler. This showed that it contained all 161,304 rows of trace events. Not only that, but refreshing the TEMP directory showed that SQL Profiler had created another temporary file, Prf3A1C.tmp, which was the same size as the trace I had loaded!!!
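
As an aside, one way to look inside the renamed file without the GUI creating yet another temporary copy is to query it with fn_trace_gettable (a sketch using the path from above; the file must be readable by the SQL Server service account):

SELECT COUNT(*) AS EventCount
FROM sys.fn_trace_gettable(N'C:\Users\Martin\AppData\Local\Temp\PrfF8F2.trc', DEFAULT);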



What happens when you save the trace to a file?
SQL Profiler allows you to save profile data to a file as well as displaying it on screen.



Running this trace did not stop the file being created in the temp directory, so I had effectively doubled the disc writes and the space used.

So as you can imagine running the SQL Profiler GUI can be very costly in terms of disc space, but hopefully you have set your TEMP directory to be somewhere other than the system disc.

Do you change the default temp folder location on your live servers?
17 June 09 07:00 PM | MartinBell | 3 comment(s)

Depending on which version of Windows you are running, the location of the default temp directory may vary, e.g. for Vista and Windows 2008 it is Drive_Letter:\Users\<user name>\AppData\Local\Temp, where Drive_Letter is the drive on which you installed Windows, i.e. the system disc. So why should you change this?

One reason is because if the system disc gets full, the server can become un-responsive and it can be difficult to logon to diagnose and correct the problem. If your temp directory is elsewhere the effects may still cause you problems, but should (hopefully) be less of an issue to correct!

Of course you may think that the controls on your production system(s) mean this will never happen, but does everyone always stick to the plan, or even know what it is?

In my next article I show how a DBA can unknowingly cause problems.

<Update> 

Books Online now carries a warning about filling up the TEMP directory when using SQL Profiler http://msdn.microsoft.com/en-us/library/ms174203.aspx.

SQL Profiler will keep a copy of the trace displayed in the GUI even if you specify a file for it to save the output to; this means you could end up with two large files!!!

</Update>

Yet another blog posting on BCP and Temporary Tables!!!
16 June 09 01:00 PM | MartinBell | with no comments
A question came up in the newsgroups yesterday about an error the poster was getting when trying to use BCP to execute a stored procedure with the QUERYOUT option. Using BCP to run a procedure in this way is not something I can remember doing before. Usually, if I want to export data that requires manipulation, I will process that data into a user table or global temporary table before calling BCP, in which case I can just use the OUT option and not have to run a stored procedure. I think this is the best method, and MVP Erland Sommarskog seems to concur. Doing too much work in the stored procedure could possibly lead to timeouts by BCP, although I have never put this to the test!!

If you look at the definition of the “query” parameter for the BCP command in Books Online, it says:

"The query can reference a stored procedure as long as all tables referenced inside the stored procedure exist prior to executing the bcp statement. For example, if the stored procedure generates a temp table, the bcp statement fails because the temp table is available only at run time and not at statement execution time. In this case, consider inserting the results of the stored procedure into a table and then use bcp to copy the data from the table into a data file."

This is not quite true… not every table referenced inside the stored procedure has to exist prior to execution!! Taken at face value, this would also seem to rule out using local temporary tables (or a global temporary table created in the procedure), but that isn’t entirely the case, as I will show later in this article.

The following examples show how BCP fails (or doesn't) when a temporary table or global temporary table is created within the stored procedure, and how a table variable or a pre-existing global temporary table works:

CREATE PROCEDURE P1
AS
DECLARE @t TABLE ( id int )
INSERT INTO @t (id ) VALUES ( 1), (2)
SELECT * FROM @t
GO

CREATE PROCEDURE P2
AS
CREATE TABLE #t ( id int )
INSERT INTO #t (id ) VALUES ( 1), (2)
SELECT * FROM #t
GO

CREATE PROCEDURE P3
AS
CREATE TABLE ##t ( id int )
INSERT INTO ##t (id ) VALUES ( 1), (2)
SELECT * FROM ##t
GO

CREATE TABLE ##t_global ( id int )
GO

CREATE PROCEDURE P4
AS
INSERT INTO ##t_global (id ) VALUES ( 1), (2)
SELECT * FROM ##t_global
GO

Running the appropriate BCP commands against these gives the following successes and failures:

C:\Temp>BCP "EXEC tempdb..p1" QUERYOUT p1out.txt -S (Local) -T -c

Starting copy...

2 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 1      Average : (2000.00 rows per sec.)

C:\Temp>BCP "EXEC tempdb..p2" QUERYOUT p2out.txt -S (Local) -T -c
SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Invalid object name '#t'.

C:\Temp>BCP "EXEC tempdb..p3" QUERYOUT p3out.txt -S (Local) -T -c
SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Invalid object name '##t'.

C:\Temp>BCP "EXEC tempdb..p4" QUERYOUT p4out.txt -S (Local) -T -c

Starting copy...

2 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 1      Average : (2000.00 rows per sec.)

Erland explained that the reason the poster was having problems was that “BCP needs to figure out the structure of the result set before it is returned. To do that it runs the procedure in a special mode, but the table is not created in that mode and so it fails. A table variable on the other hand is a declared entity and that works”.

This makes sense considering the statement in BOL, but I did say that the statement is not quite right!! If you run SQL Profiler while you execute the BCP command:

BCP "EXEC tempdb..sp_helpdb" QUERYOUT hlpdbout.txt -S (Local) -T -c

The command will give you an error similar to the earlier procedure calls:

SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Invalid object name '#spdbdesc'.

And in SQL Profiler you’ll see that BCP actually sends the following batch to be executed:

set fmtonly on EXEC master..sp_helpdb set fmtonly off
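
You can reproduce what BCP sees directly in a query window by wrapping one of the earlier procedures yourself; something like this should fail in the same way:

SET FMTONLY ON;
EXEC tempdb..P2;   -- fails with "Invalid object name '#t'" because the CREATE TABLE is not executed in FMTONLY mode
SET FMTONLY OFF;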

On the SQLServerCentral forums I found a way of “fooling” BCP by including a “dummy” result set that is never actually returned!! The “dummy” result set should have the same structure as the one your procedure will return, so I now wish I had chosen an easier example, as this will wrap!!

BCP "IF 1=0 BEGIN SELECT CAST('name' AS nvarchar(256)),CAST('db_size' AS nvarchar(26)),CAST('owner' AS nvarchar(256)),CAST(0 AS smallint),CAST('created' AS nvarchar(22)),CAST('status' AS nvarchar(1200)),CAST(0 AS tinyint) END; SET FMTONLY OFF; EXEC tempdb..sp_helpdb" QUERYOUT hlpdbout.txt -S (Local) -T -c

Starting copy...

15 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total     : 125    Average : (120.00 rows per sec.)

Great it works!!

So that was going to be the end of this article, but I then thought I would look at profiler to see what was happening...



As you can hopefully see the procedure (sp_helpdb) is run twice, which from a performance aspect could be quite expensive.

If this is a procedure that you have written yourself, then you may want to use one of the other techniques detailed above that works. If for some reason you can’t or don't want to modify the procedure, you can still get around this by writing a wrapper procedure that uses a global temporary table or table variable to store and return the results, e.g.

USE tempdb
GO

CREATE PROCEDURE usp_helpdb
AS
DECLARE @t TABLE ( [name] nvarchar(256),
      [db_size] nvarchar(26),
      [owner] nvarchar(256),
      [dbid] smallint,
      [created] nvarchar(22),
      [status] nvarchar(1200),
      [compatibility_level] tinyint );
INSERT INTO @t ([name],
      [db_size],
      [owner],
      [dbid],
      [created],
      [status],
      [compatibility_level]
      )
EXEC sp_helpdb;
SELECT * FROM @t;
GO
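
You can then point BCP at the wrapper in the usual way, e.g. BCP "EXEC tempdb..usp_helpdb" QUERYOUT usphlpdb.txt -S (Local) -T -c (the output file name is just an example); the format-discovery pass against the table variable succeeds.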

In Profiler you can see that when usp_helpdb is run with SET FMTONLY ON, a minimal amount of work is carried out (4 reads, 0 CPU).


Other ways to see user options
15 June 09 09:00 AM | MartinBell | 7 comment(s)

After my last post, I got thinking about other ways to find out what the settings are for a connection. The first one that springs to mind is:

DBCC USEROPTIONS
This will give the SET options for the current connection. It also gives you additional information over @@OPTIONS, such as the language, datefirst and dateformat settings.

Set Option Value
textsize 2147483647
language us_english
dateformat mdy
datefirst 7
lock_timeout -1
quoted_identifier SET
arithabort SET
ansi_null_dflt_on SET
ansi_warnings SET
ansi_nulls SET
concat_null_yields_null SET
isolation level read committed

But what if you want to see the settings for an existing connection?

SQL Profiler
The ExistingConnection event shows most of the information returned by @@OPTIONS or DBCC USEROPTIONS, but there are differences. For instance, if I connect in an SSMS query window and then SET NOCOUNT ON, SQL Profiler does not pick this up, whereas the other two methods will indicate it is set. The ExistingConnection event does give you the network protocol being used, which can be very useful to know.

Existing Connection

The ExistingConnection is one of the events profiled when you choose to “Trace process in SQL Server Profiler” after right clicking the process in Activity Monitor.

Profile From Activity Monitor

sys.dm_exec_sessions
You can also get this information from the dynamic management view sys.dm_exec_sessions. This DMV has bit columns for quoted_identifier, arithabort, ansi_null_dflt_on, ansi_warnings, ansi_padding, ansi_nulls and concat_null_yields_null. If the setting is set it will contain a value of 1.

To get the information for your current session you can use the query:

SELECT * FROM sys.dm_exec_sessions
WHERE session_id = @@SPID

As an exercise I thought I would try to return this information as a result set similar to my last post, and to do this requires the columns to be transposed into rows, i.e. UNPIVOT. I have to admit I have never used UNPIVOT outside of demos and answering questions in the newsgroups!! I came up with:

SELECT a.val, a.code, a.description
FROM 
   (SELECT session_id,
           quoted_identifier * 256 AS quoted_identifier,
           arithabort * 64 AS arithabort,
           ansi_null_dflt_on * 1024 AS ansi_null_dflt_on,
           ansi_warnings * 8 AS ansi_warnings,
           ansi_padding * 16 AS ansi_padding,
           ansi_nulls * 32 AS ansi_nulls,
           concat_null_yields_null * 4096 AS concat_null_yields_null
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID) p
UNPIVOT
   (code FOR setting IN 
      (quoted_identifier, arithabort, ansi_null_dflt_on, ansi_warnings, ansi_padding, ansi_nulls, concat_null_yields_null)
   ) AS unpvt
CROSS APPLY
(
SELECT 1 as val, 'DISABLE_DEF_CNST_CHK' as code,'Controls interim or deferred constraint checking.' as description
UNION ALL SELECT 2,'IMPLICIT_TRANSACTIONS','For dblib network library connections, controls whether a transaction is started implicitly when a statement is executed. The IMPLICIT_TRANSACTIONS setting has no effect on ODBC or OLEDB connections.'
UNION ALL SELECT 4,'CURSOR_CLOSE_ON_COMMIT','Controls behavior of cursors after a commit operation has been performed.'
UNION ALL SELECT 8,'ANSI_WARNINGS','Controls truncation and NULL in aggregate warnings.'
UNION ALL SELECT 16,'ANSI_PADDING','Controls padding of fixed-length variables.'
UNION ALL SELECT 32,'ANSI_NULLS','Controls NULL handling when using equality operators.'
UNION ALL SELECT 64,'ARITHABORT','Terminates a query when an overflow or divide-by-zero error occurs during query execution.'
UNION ALL SELECT 128,'ARITHIGNORE','Returns NULL when an overflow or divide-by-zero error occurs during a query.'
UNION ALL SELECT 256,'QUOTED_IDENTIFIER','Differentiates between single and double quotation marks when evaluating an expression.'
UNION ALL SELECT 512,'NOCOUNT','Turns off the message returned at the end of each statement that states how many rows were affected.'
UNION ALL SELECT 1024,'ANSI_NULL_DFLT_ON','Alters the session''s behavior to use ANSI compatibility for nullability. New columns defined without explicit
nullability are defined to allow nulls.'
UNION ALL SELECT 2048,'ANSI_NULL_DFLT_OFF','Alters the session''s behavior not to use ANSI compatibility for nullability. New columns defined without
explicit nullability do not allow nulls.'
UNION ALL SELECT 4096,'CONCAT_NULL_YIELDS_NULL','Returns NULL when concatenating a NULL value with a string.'
UNION ALL SELECT 8192,'NUMERIC_ROUNDABORT','Generates an error when a loss of precision occurs in an expression.'
UNION ALL SELECT 16384,'XACT_ABORT','Rolls back a transaction if a Transact-SQL statement raises a run-time error.'
      ) a
WHERE  unpvt.code & a.val = a.val 
ORDER BY a.val ;
GO

It is probably easier to string match on the setting names, so I modified it to do that; a sketch of the approach is below.
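
A sketch along those lines, matching on the column names that sys.dm_exec_sessions exposes (descriptions abbreviated):

SELECT a.val, a.code, a.description
FROM 
   (SELECT quoted_identifier, arithabort, ansi_null_dflt_on, ansi_warnings,
           ansi_padding, ansi_nulls, concat_null_yields_null
    FROM sys.dm_exec_sessions
    WHERE session_id = @@SPID) p
UNPIVOT
   (is_set FOR setting IN 
      (quoted_identifier, arithabort, ansi_null_dflt_on, ansi_warnings, ansi_padding, ansi_nulls, concat_null_yields_null)
   ) AS unpvt
JOIN (VALUES (8,    'ANSI_WARNINGS',           'Controls truncation and NULL in aggregate warnings.'),
             (16,   'ANSI_PADDING',            'Controls padding of fixed-length variables.'),
             (32,   'ANSI_NULLS',              'Controls NULL handling when using equality operators.'),
             (64,   'ARITHABORT',              'Terminates a query when an overflow or divide-by-zero error occurs during query execution.'),
             (256,  'QUOTED_IDENTIFIER',       'Differentiates between single and double quotation marks when evaluating an expression.'),
             (1024, 'ANSI_NULL_DFLT_ON',       'Alters the session''s behavior to use ANSI compatibility for nullability.'),
             (4096, 'CONCAT_NULL_YIELDS_NULL', 'Returns NULL when concatenating a NULL value with a string.')
     ) AS a (val, code, description)
  ON a.code = UPPER(unpvt.setting)
WHERE unpvt.is_set = 1
ORDER BY a.val;
GO
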
Table Variables and OUTPUT clause
14 June 09 06:14 PM | MartinBell | 6 comment(s)
At the end of my previous post on the OUTPUT clause I mentioned that the OUTPUT clause can be very useful if you want to see the effects of an INSERT/UPDATE/DELETE or MERGE statement without actually committing the change, by rolling back the transaction.

You may think that this is only of use if you are running the query in Management Studio, as the information is not persisted because of the rollback. For example, if I had a Product_Audit table, rolling back the transaction would also roll back the changes made when inserting into Product_Audit:
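
These examples assume a Products table already exists (it isn't shown in this post); a minimal guess at a definition that fits the columns used would be:

CREATE TABLE Products ( [ProductID] [int] IDENTITY(1,1) NOT NULL,
      [Name] [nvarchar](100) NOT NULL,
      [ProductNumber] [nvarchar](25) NOT NULL,
      [Colour] [nvarchar](15) NULL,
      [FrameSize] smallint NULL ) ;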

CREATE TABLE Product_Audit ( [ProductID] [int] NOT NULL,
      [Name] [nvarchar](100) NOT NULL,
      [ProductNumber] [nvarchar](25) NOT NULL,
      [Colour] [nvarchar](15) NULL,
      [FrameSize] smallint ) ;

BEGIN TRANSACTION ;
 
INSERT INTO Products  ( [Name],     [ProductNumber],  [Colour] )
OUTPUT inserted.[ProductID], inserted.[Name], inserted.[ProductNumber], inserted.[Colour], inserted.[FrameSize] INTO Product_Audit( [ProductID], [Name], [ProductNumber], [Colour], [FrameSize] )
VALUES
( 'Mountain-500 Silver, 24', 'BK-M18S-24', 'Silver'),
( 'Mountain-500 Silver, 28', 'BK-M18S-28', 'Silver'),
( 'Mountain-500 Silver, 32', 'BK-M18S-32', 'Silver') ;
 
SELECT * FROM Product_Audit ;

ROLLBACK TRANSACTION ;

SELECT * FROM Product_Audit ;

If you run this SQL you will see that the statement after the ROLLBACK returns no data. This is where table variables come in useful!

If you’ve read the topic “table (Transact-SQL)” in Books Online, you may have missed the last sentence: “Also, because table variables have limited scope and are not part of the persistent database, they are not affected by transaction rollbacks.” This means that if I use a table variable and not an actual table, the table variable will still hold the changes after the rollback, and I can then store or manipulate them as necessary, e.g.

CREATE TABLE Product_Audit  ( [ProductID] [int] NOT NULL,
      [Name] [nvarchar](100) NOT NULL,
      [ProductNumber] [nvarchar](25) NOT NULL,
      [Colour] [nvarchar](15) NULL,
      [FrameSize] smallint ) ;
     
DECLARE @Product_Audit  TABLE ( [ProductID] [int] NOT NULL,
      [Name] [nvarchar](100) NOT NULL,
      [ProductNumber] [nvarchar](25) NOT NULL,
      [Colour] [nvarchar](15) NULL,
      [FrameSize] smallint ) ;
 
BEGIN TRANSACTION ;

INSERT INTO Products  ( [Name],     [ProductNumber],  [Colour] )
OUTPUT inserted.[ProductID], inserted.[Name], inserted.[ProductNumber], inserted.[Colour], inserted.[FrameSize] INTO @Product_Audit ( [ProductID], [Name], [ProductNumber], [Colour], [FrameSize] )
VALUES
( 'Mountain-500 Silver, 24', 'BK-M18S-24', 'Silver'),
( 'Mountain-500 Silver, 28', 'BK-M18S-28', 'Silver'),
( 'Mountain-500 Silver, 32', 'BK-M18S-32', 'Silver') ;
 
SELECT * FROM @Product_Audit ;
 
ROLLBACK TRANSACTION ;
 
INSERT INTO Product_Audit ( [ProductID], [Name], [ProductNumber], [Colour], [FrameSize] )
SELECT [ProductID], [Name], [ProductNumber], [Colour], [FrameSize] FROM @Product_Audit ;
 
SELECT * FROM Product_Audit  ;

Deciphering User Options
12 June 09 10:30 PM | MartinBell | with no comments
Sometimes it’s necessary to decipher what the current user options (@@OPTIONS) are when you have differences in behaviour. I wrote this a while ago and thought some people may find it useful:

SELECT val, code, description
FROM (
SELECT 1 as val, 'DISABLE_DEF_CNST_CHK' as code,'Controls interim or deferred constraint checking.' as description
UNION ALL SELECT 2,'IMPLICIT_TRANSACTIONS','For dblib network library connections, controls whether a transaction is started implicitly when a statement is executed. The IMPLICIT_TRANSACTIONS setting has no effect on ODBC or OLEDB connections.'
UNION ALL SELECT 4,'CURSOR_CLOSE_ON_COMMIT','Controls behavior of cursors after a commit operation has been performed.'
UNION ALL SELECT 8,'ANSI_WARNINGS','Controls truncation and NULL in aggregate warnings.'
UNION ALL SELECT 16,'ANSI_PADDING','Controls padding of fixed-length variables.'
UNION ALL SELECT 32,'ANSI_NULLS','Controls NULL handling when using equality operators.'
UNION ALL SELECT 64,'ARITHABORT','Terminates a query when an overflow or divide-by-zero error occurs during query execution.'
UNION ALL SELECT 128,'ARITHIGNORE','Returns NULL when an overflow or divide-by-zero error occurs during a query.'
UNION ALL SELECT 256,'QUOTED_IDENTIFIER','Differentiates between single and double quotation marks when evaluating an expression.'
UNION ALL SELECT 512,'NOCOUNT','Turns off the message returned at the end of each statement that states how many rows were affected.'
UNION ALL SELECT 1024,'ANSI_NULL_DFLT_ON','Alters the session''s behavior to use ANSI compatibility for nullability. New columns defined without explicit
nullability are defined to allow nulls.'
UNION ALL SELECT 2048,'ANSI_NULL_DFLT_OFF','Alters the session''s behavior not to use ANSI compatibility for nullability. New columns defined without explicit nullability do not allow nulls.'
UNION ALL SELECT 4096,'CONCAT_NULL_YIELDS_NULL','Returns NULL when concatenating a NULL value with a string.'
UNION ALL SELECT 8192,'NUMERIC_ROUNDABORT','Generates an error when a loss of precision occurs in an expression.'
UNION ALL SELECT 16384,'XACT_ABORT','Rolls back a transaction if a Transact-SQL statement raises a run-time error.'
      ) a
WHERE @@OPTIONS & val = val
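
As a quick illustration of the bitwise test in the WHERE clause, you can check a single option directly, for example NOCOUNT (value 512):

SELECT CASE WHEN @@OPTIONS & 512 = 512 THEN 'NOCOUNT is ON' ELSE 'NOCOUNT is OFF' END;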
  
Object housekeeping
12 June 09 12:50 PM | MartinBell | 2 comment(s)

My last blog entry got me thinking about something I have been meaning to blog about for a while, which is object housekeeping. Some time ago I worked on a system that we upgraded from SQL Server 6.5 to SQL Server 2000, and as part of the upgrade we looked at each database object to see if it was still being used.

Applications can grow in a very unstructured manner (what I call the carbuncle methodology, and it isn’t a gem stone!): a developer may not realise or check that a database object has been made redundant by an application change, or they may just be “too scared” to remove something in case “something breaks”. Refactoring only tends to be considered when things are already in a mess.

But you should approach housekeeping in your database in the same way you approach cleaning your own house (well I do!). It’s a lot easier if you do a little bit often, rather than a massive spring clean once or twice a year with nothing in between.

The initial pass of our system required trawling through code and DDL scripts. Fortunately we had already put each individual object into a source code control system as an earlier task, which helped. Unlike SQL Server 2008’s sys.sql_expression_dependencies, the sysdepends table was not reliable, and syscomments sometimes breaks a keyword over multiple rows, making it difficult to search.
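
On SQL Server 2008 a first pass at a candidate list can be generated from sys.sql_expression_dependencies, for example (a sketch; it only finds objects unreferenced by other database code, not by applications):

-- Programmable objects that nothing else in the database appears to reference
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects o
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF')
AND NOT EXISTS ( SELECT 1
                 FROM sys.sql_expression_dependencies d
                 WHERE d.referenced_id = o.object_id )
ORDER BY s.name, o.name ;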

As you may have realised, doing a single pass of the code is not going to root out all redundant objects, as a whole new set of redundant objects may start to appear once the first level is removed, so the task of highlighting redundancy is iterative.

That was fine for references to objects within database code; finding redundant application code was more of a problem. This really required the time and help of a developer. Compiler messages about unreferenced functions may help in this process, but you need to make sure they are switched on! There are also third-party tools that can analyse projects.

When we stopped adding to the list, we went through a process of renaming the objects rather than dropping them (we were not that brave!), which meant we could easily undo the change if we were proved wrong (and there were a couple of instances where we had got it wrong). Subsequent releases added to the number of renamed objects, and after a while we were confident enough to remove them.

The overall effect of this clear out was that we found 1/3 of all the database objects were no longer used, and the application executable size was drastically reduced. Going forward that meant we could build the application and database significantly quicker, and it was a lot easier to manage. It also meant that we had a much more compact database to migrate to SQL Server 2000.

Changing Bound Defaults to Default Constraints
11 June 09 06:50 PM | MartinBell | 4 comment(s)
I answered a question on the newsgroups today about how to do this. Books Online says that CREATE DEFAULT, sp_bindefault and sp_unbindefault will be removed in future versions. I can’t think of any reasons why you can’t plan to remove them now as part of your release cycle and eliminate the additional work from a future upgrade!!

Looking for a way to distinguish DEFAULT constraints from bound defaults, I noticed that bound defaults have no entry in sys.default_constraints. But then I realised that it would be easier to check the ‘IsDefault’ property using the OBJECTPROPERTY function. Unfortunately I couldn't find an easy way to obtain the constant_expression from the CREATE DEFAULT statement, as the whole statement is held in the text of sys.syscomments and I couldn’t find it anywhere else! (If you know a different way, let me know!!) I therefore made the assumption that the first occurrence of 'AS' in the definition would be the one before the constant_expression; the constant_expression can be something like CAST(1 as int), which means that you can’t rely on it being the last occurrence!!!

I came up with the script:

SELECT
'EXEC sp_unbindefault '''
+ QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + '.' + QUOTENAME(c.name)
+ '''; ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
+ ' ADD CONSTRAINT [DF_' +  o.name +  '_' + c.name
+ '] DEFAULT ' + REPLACE(RIGHT(CAST(t.text as varchar(max)),
             LEN(CAST(t.text as varchar(max))) - CHARINDEX('AS', CAST(t.text as varchar(max))) - 1 ), ';', '')
+ ' FOR ' + QUOTENAME (c.name ) + ' ;'
FROM sys.columns c
JOIN sys.syscomments t ON c.default_object_id = t.id
JOIN sys.objects o ON o.object_id = c.object_id
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE OBJECTPROPERTY(c.default_object_id, 'IsDefault') = 1

Once you have removed all bindings to the defaults, the DEFAULT objects themselves can then be removed.

SELECT 'DROP DEFAULT ' + QUOTENAME(s.name) + '.' + QUOTENAME(o.name) + ';'
FROM sys.objects o
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE OBJECTPROPERTY(o.object_id, 'IsDefault') = 1

As I have made some assumptions here, you should test that it works on a backup of your own system before implementing it on live.
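
If you want to exercise the script somewhere safe first, a throwaway bound default is easy to set up in a scratch database (hypothetical object names):

CREATE TABLE DefaultTest ( id int NOT NULL, qty int NULL ) ;
GO
CREATE DEFAULT df_qty AS CAST(1 as int) ;
GO
EXEC sp_bindefault 'df_qty', 'DefaultTest.qty' ;
GO
-- The first SELECT above should now generate the sp_unbindefault/ALTER TABLE statement for DefaultTest.qty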