July 2009 - Posts

How to get the logical and physical file names for a database

For some operations you need to know the logical and physical file names of the data and log files in a database. Here's some code to get hold of that information. It assumes you only have one data file and one log file, which is all I've needed so far, but it could be adapted fairly easily to return a set of file names (there's a sketch of that below).

declare @DBName                 sysname
        , @LogicalDataFile      sysname
        , @LogicalLogFile       sysname
        , @PhysicalDataFile     nvarchar(260)
        , @PhysicalLogFile      nvarchar(260)

set @DBName = '<database-name>'

-- Data file
select  @LogicalDataFile = name
        , @PhysicalDataFile = physical_name
from    sys.master_files
where   database_id = db_id(@DBName)
        and type_desc = 'ROWS'

-- Log file
select  @LogicalLogFile = name
        , @PhysicalLogFile = physical_name
from    sys.master_files
where   database_id = db_id(@DBName)
        and type_desc = 'LOG'

select  @LogicalDataFile as [@LogicalDataFile]
        , @LogicalLogFile as [@LogicalLogFile]
        , @PhysicalDataFile as [@PhysicalDataFile]
        , @PhysicalLogFile as [@PhysicalLogFile]

Note: This is the revised version. Thanks to steveh99999 for gently pointing out I was using a deprecated system table. A nice side effect is that there's no need for dynamic SQL either.
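
If you do want one row per file, a minimal adaptation might look like this (just a sketch; it lists every file in every database, so add a filter on database_id = db_id(@DBName) if you only want one):

-- One row per data and log file, for every database
select  db_name(database_id)    as DatabaseName
        , name                  as LogicalFileName
        , physical_name         as PhysicalFileName
        , type_desc             as FileType  -- ROWS, LOG etc.
from    sys.master_files
order by DatabaseName
        , FileType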

Better dependency checking

The dependency information in SQL Server has always been a bit shaky. It's a hard problem for them to solve. Meanwhile, I think you can do a lot worse than simply search the 'source code' of the objects in the database. That way you can even search for things that aren't proper objects, such as column names or in fact any string at all. All you need are two fairly simple stored procedures in your master database.

 

if exists ( select  1
            from    dbo.sysobjects
            where   id = object_id(N'dbo.dba_depends')
                    and objectproperty(id, N'IsProcedure') = 1 )
    drop procedure dbo.dba_depends
go

create proc dbo.dba_depends
(
    @SearchTerm             varchar(255)
    , @IncludeSystemDBs     bit = 0
)
as

/*
-------------------------------------------------------------------------------
Version  : 1.5
Date     : 07/11/2007
Comments :  sysobjects.xtype
            ----------------
            C  = CHECK constraint
            D  = Default or DEFAULT constraint
            F  = FOREIGN KEY constraint
            L  = Log
            FN = Scalar function
            IF = Inlined table-function
            P  = Stored procedure
            PK = PRIMARY KEY constraint (type is K)
            RF = Replication filter stored procedure
            S  = System table
            TF = Table function
            TR = Trigger
            U  = User table
            UQ = UNIQUE constraint (type is K)
            V  = View
            X  = Extended stored procedure
-------------------------------------------------------------------------------
*/

declare @DBName         varchar(128)
    , @SearchThisDB     bit

create table #t
(
    DBName          varchar(128)    null
    , ObjectName    sysname         null
    , ObjectXType   char(2)         null
)

declare db_cursor cursor
local forward_only
for
select  [name]
from    sys.databases
where   state = 0 --ONLINE
order by name

open db_cursor

    fetch   next
    from    db_cursor
    into    @DBName

    while @@fetch_status = 0
    begin

        set @SearchThisDB = 1

        if @IncludeSystemDBs = 0 and lower(@DBName) in ('master', 'tempdb', 'model', 'msdb', 'reportserver', 'reportservertempdb')
            set @SearchThisDB = 0

        if @SearchThisDB = 1
            exec master.dbo.dba_depends_inner @SearchTerm, @DBName

        fetch   next
        from    db_cursor
        into    @DBName

    end

close db_cursor
deallocate db_cursor

select  distinct
    DBName
    , case
        when ObjectXType = 'FN' then 'Function'
        when ObjectXType = 'P'  then 'Stored Procedure'
        when ObjectXType = 'IF' then 'Inlined table-function'
        when ObjectXType = 'TF' then 'Table function'
        when ObjectXType = 'TR' then 'Trigger'
        when ObjectXType = 'V'  then 'View'
        when ObjectXType = 'X'  then 'Extended stored procedure'
        when ObjectXType = 'U'  then 'Table'
        else ObjectXType
    end as ObjectType
    , ObjectName
    , ObjectXType
from    #t
order by 1, 2, 3

drop table #t

go

 

 

 

 

if exists ( select  1
            from    dbo.sysobjects
            where   id = object_id(N'dbo.dba_depends_inner')
                    and objectproperty(id, N'IsProcedure') = 1 )
    drop procedure dbo.dba_depends_inner
go

create proc dbo.dba_depends_inner
(
    @SearchTerm     varchar(255)
    , @DBName       varchar(128)
)
as

/*
-------------------------------------------------------------------------------
Version : 1.3
Date    : 12/03/2007
-------------------------------------------------------------------------------
*/

set nocount on

--nvarchar(max) so a long database name or search term can't truncate the statement
declare @SQL nvarchar(max)

set @SQL = 'use [' + @DBName + '] ' +
'insert #t
    (DBName
    , ObjectName
    , ObjectXType)
select  ''' + @DBName +
''' , so.[name]
    , so.xtype
from    syscomments sc
    inner join sysobjects so on so.[id] = sc.[id]
where   so.xtype in (''FN'', ''P'', ''IF'', ''TF'', ''TR'', ''V'', ''X'')
    and sc.[text] like ''%' + @SearchTerm + '%''
order by sc.[id], sc.colid'

exec sp_executesql @SQL

set @SQL = 'use [' + @DBName + '] ' +
'insert #t
    (DBName
    , ObjectName
    , ObjectXType)
select  ''' + @DBName +
''' , obj.name + ''.'' + col.name
    , ''U''
from    sysobjects obj
    inner join syscolumns col
    on obj.id = col.id
where   obj.xtype in (''U'')
    and col.name like ''%' + @SearchTerm + '%'''

exec sp_executesql @SQL

set nocount off

go
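
Once both procedures are installed, usage is a one-liner. For example ('OrderDate' here is just a made-up search term):

-- Search every online user database for the string
exec master.dbo.dba_depends 'OrderDate'

-- Search the system databases as well
exec master.dbo.dba_depends 'OrderDate', 1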

 

 

Posted by DavidWimbush

Housekeeping backup and restore history

Every backup and restore event is logged in history tables in msdb and these tables can get out of hand, especially if you're using log shipping. Here's one way to sort it out that will catch up with the backlog and then keep your history tables trimmed.

This code will delete the oldest day's worth of history, as long as it's more than 30 days old:

--Delete earliest day of backup and restore history if it's older than 30 days.
--This way you can clear a long history a bit at a time.

use msdb
go

set nocount on

declare @oldest_backup_date     smalldatetime
    , @oldest_restore_date      smalldatetime
    , @oldest_date              smalldatetime

--Find the oldest backup date (can be null)
select  @oldest_backup_date = min(backup_finish_date)
from    backupset with (nolock)
where   backup_finish_date is not null

set @oldest_backup_date = isnull(@oldest_backup_date, '01-JUN-2079')

--Find the oldest restore date (can be null)
select  @oldest_restore_date = min(restore_date)
from    restorehistory with (nolock)
where   restore_date is not null

set @oldest_restore_date = isnull(@oldest_restore_date, '01-JUN-2079')

--Determine the earlier of the two dates
if @oldest_restore_date < @oldest_backup_date
    set @oldest_date = @oldest_restore_date
else
    set @oldest_date = @oldest_backup_date

print 'Oldest date is ' + convert(varchar(10), @oldest_date, 103)

--If it's more than 30 days ago, delete it
if @oldest_date >= dateadd(day, -30, getdate())
begin
    print 'no action'
end
else
begin
    print 'deleting history'

    --sp_delete_backuphistory deletes everything earlier than @oldest_date.
    --Add 1 day, otherwise it won't delete anything.
    set @oldest_date = dateadd(day, 1, @oldest_date)

    exec sp_delete_backuphistory @oldest_date
end
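
To keep this ticking over you could run it from a daily SQL Agent job. Here's a minimal sketch; the job name, schedule, and the idea of wrapping the script in a procedure called dbo.TrimBackupHistory are all just examples, not part of the script above:

use msdb
go

--Create a daily job that runs the history-trimming script
exec dbo.sp_add_job @job_name = N'Trim backup history'

exec dbo.sp_add_jobstep
    @job_name = N'Trim backup history'
    , @step_name = N'Delete oldest day'
    , @subsystem = N'TSQL'
    , @database_name = N'msdb'
    , @command = N'exec dbo.TrimBackupHistory'  --hypothetical wrapper; or paste the script itself here

exec dbo.sp_add_jobschedule
    @job_name = N'Trim backup history'
    , @name = N'Daily 1am'
    , @freq_type = 4            --daily
    , @freq_interval = 1
    , @active_start_time = 10000  --01:00:00

--Register the job on the local server
exec dbo.sp_add_jobserver @job_name = N'Trim backup history'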

Before you implement it though, I advise you to add some indexes to msdb. Normally I wouldn't screw around with system tables but I made an exception here. These tables are poorly indexed and adding some indexes makes so much difference. I found this post on Geoff Hiten's blog and brought the script up to date:

-- msdb performance indexes for SQL Server 2005
-- History:
-- 12/12/2005 Geoff N. Hiten  Created (http://weblogs.sqlteam.com/geoffh/archive/2008/01/21/MSDB-Performance-Tuning.aspx).
-- 06/07/2009 David Wimbush   Commented out indexes that Microsoft have implemented.
--                            Added an index on restorehistory restore_date.

use msdb
go

--backupset
--implemented by Microsoft
--create index IX_backupset_backup_set_id on backupset(backup_set_id)
--go
--implemented by Microsoft
--create index IX_backupset_backup_set_uuid on backupset(backup_set_uuid)
--go
create index IX_backupset_media_set_id on backupset(media_set_id)
go
create index IX_backupset_backup_finish_date on backupset(backup_finish_date)
go
create index IX_backupset_backup_start_date on backupset(backup_start_date)
go

--backupmediaset
--implemented by Microsoft
--create index IX_backupmediaset_media_set_id on backupmediaset(media_set_id)
--go

--backupfile
--implemented by Microsoft
--create index IX_backupfile_backup_set_id on backupfile(backup_set_id)
--go

--backupmediafamily
--implemented by Microsoft
--create index IX_backupmediafamily_media_set_id on backupmediafamily(media_set_id)
--go

--restorehistory
--implemented by Microsoft
--create index IX_restorehistory_restore_history_id on restorehistory(restore_history_id)
--go
--implemented by Microsoft
--create index IX_restorehistory_backup_set_id on restorehistory(backup_set_id)
--go
--NEW:
create index IX_restorehistory_restore_date on restorehistory(restore_date)
go

--restorefile
create index IX_restorefile_restore_history_id on restorefile(restore_history_id)
go

--restorefilegroup
create index IX_restorefilegroup_restore_history_id on restorefilegroup(restore_history_id)
go
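
If there's any chance of running the script more than once, each create index can be wrapped in an existence check. A sketch for the first index (the same pattern works for the rest):

use msdb
go

--Only create the index if it isn't already there
if not exists ( select  1
                from    sys.indexes
                where   object_id = object_id(N'dbo.backupset')
                        and name = N'IX_backupset_media_set_id' )
    create index IX_backupset_media_set_id on backupset(media_set_id)
go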

 

Posted by DavidWimbush

Roll your own log shipping

I tried to use SQL Server’s built-in log shipping to set up a disaster recovery facility but I found there were some serious limitations. So, after some research, I decided to do it myself. Here’s what happened.

Requirements

A disaster recovery plan is useless unless you can test it, preferably at least once a year. To be really sure, I think you need to prove you can fail over the production systems, use them, and fail back. It's all very well testing on a test setup – you have to make sure it works – but there's no substitute for confirming that it actually works in the environment it's meant to preserve. There are so many things that have to fit together and, in a disaster situation, you really don't want to find yourself saying "Crap, we didn't think about that!" and explaining to your boss that the business is going to be down for a day or two instead of the hour or two you'd confidently promised.


So you have to be able to smoothly fail over to the standby servers, have all the data consuming systems gracefully adapt to the new data sources, confirm they all work, and then reverse the process. Without losing any data. At least once a year. Easy!

Some specific technical requirements:

  • Setup and configuration must be scriptable. I don’t want to screw around with multi-page wizards and write reams of documentation about it. If I have to rebuild the server I want to get it done quickly and easily, and scripts are the way.
  • I need it to tell me if it goes wrong, and then I need some tools to help me understand where I am and how I got there so I can fix it. 
  • When failed over, I want to be able to log ship back the other way. In a real disaster this is probably not an issue but it’s vital for testing the solution. Once live transactions have been applied to the standby server you have to log ship them back to the normal live server or lose them.
  • I decided to have a time lag of 3 hours before transactions are applied to the standby server. We thought we could use this to give us a chance to rescue data that was deleted by mistake.

Trying Out-of-the-Box Log Shipping

I filled in about 11 pages of wizard, setting up the source, standby and monitor. Then I captured the script, thinking I could just copy it for each database and find and replace the database name. But the script doesn't run! I spent quite a while searching for a fix and couldn't find an answer. At this point I decided I could live without the scriptability, so I went through the wizard again, building a detailed installation guide for our domain.


It all looked great at first. There’s a report on the monitor server to show the progress of backups, file copies and restores. I failed over, putting the source database into standby mode ready to receive logs. But when I tried to restore the standby database right up to date there was no way to override the 3 hour delay I’d specified. I went into the wizard, took out the delay, restored all logs and brought the database online. My test application adapted to the new data source. Not ideal in a real disaster with everyone breathing down my neck but I can live with it.

Now to set up log shipping back the other way. I went into the wizard and it wouldn't let me set up the monitor. Apparently you can only monitor shipping between two servers in one direction. If you remove the initial shipping you can set it up going the other way. So that's about 11 pages of wizard per database to fail over, and the same to fail back. That's enough of that!

My Way


I looked around hoping to find that somebody else had already done a full solution but I couldn’t find one. I felt Pop Rivetts Does Log Shipping came the closest so I cannibalised it into what I wanted.

I have the following jobs on both servers:

LogShipBackup
Does a log backup on selected databases, compresses the file with WinRAR, copies it to a share on the other server, and does housekeeping on the files and the backup/restore history tables in msdb. The copy stage copies over any files that are not on the other side, so it can catch up if the connection has been lost for a bit.

LogShipBackupAlert
Sends me an email if the LogShipBackup job has not run recently and successfully. LogShipBackup will send me an email if it fails but that won’t alert me if it doesn’t run.

LogShipRestore
Decompresses and restores logs in sequence on selected databases, and does housekeeping on the files and the backup/restore history tables in msdb. You can specify a delay so that logs aren't applied immediately. It checks the backup history so it doesn't trip up if a file is copied over again. It actively kills connections to a database to ensure the restore can go ahead (the key statements involved are sketched below).

LogShipRestoreAlert
Sends me an email if the LogShipRestore job has not run recently and successfully. LogShipRestore will send me an email if it fails but that won’t alert me if it doesn’t run.

LogShipFailOver
Disables the LogShipBackup and LogShipBackupAlert jobs, backs up the tail of the log on selected databases, puts them in read-only standby mode, and copies the backup files over to the other server. The copy stage copies over any files that are not on the other side so it’s quite resilient.

LogShipBringOnline
Disables the LogShipRestore and LogShipRestoreAlert jobs, restores all logs in sequence on selected databases and brings those databases online.

The jobs LogShipBackup and LogShipBackupAlert are enabled on the primary server, and LogShipRestore and LogShipRestoreAlert are enabled on the standby server. I run them every 15 minutes. To fail over, run LogShipFailOver on the primary and then LogShipBringOnline on the standby server.

Apart from the alert jobs, each job has one or more steps that perform the appropriate actions on a database that is shipping or receiving logs. The jobs rely on a set of stored procedures installed in both servers’ master databases.
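
To give a flavour of the T-SQL at the heart of the fail-over, restore and bring-online steps, here's a minimal sketch for a single database. This isn't the downloadable code: the database name and file paths are made up, and the real jobs do more (loop over the selected databases, honour the restore delay by checking each backup's finish date, handle the compression and copying).

--On the primary (LogShipFailOver): back up the tail of the log and
--leave the database in a restoring state
backup log MyDatabase
    to disk = N'\\standby\LogShip\MyDatabase_tail.trn'
    with norecovery
go

--On the standby (LogShipRestore): kill connections so the restore can go ahead
declare @spid   smallint
    , @cmd      nvarchar(20)

declare kill_cursor cursor local forward_only
for
select  spid
from    master.dbo.sysprocesses
where   dbid = db_id(N'MyDatabase')
    and spid <> @@spid

open kill_cursor
fetch next from kill_cursor into @spid
while @@fetch_status = 0
begin
    set @cmd = N'kill ' + cast(@spid as nvarchar(10))
    exec (@cmd)
    fetch next from kill_cursor into @spid
end
close kill_cursor
deallocate kill_cursor
go

--Apply each log in sequence, keeping the database readable between restores
restore log MyDatabase
    from disk = N'D:\LogShip\MyDatabase_20090701_0915.trn'
    with standby = N'D:\LogShip\MyDatabase_undo.dat'
go

--LogShipBringOnline: recover the database once all the logs are applied
restore database MyDatabase with recovery
go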

The code is available for download here. If you’re interested, please feel free to check it out and use it if you want to. If it doesn’t quite do what you want you can always fix that. I’d be interested to know what you think of it. I’d also be very keen to hear from you if you’ve found ways round the problems I found with the built-in log shipping.

Recommended Reading
To find out more about log shipping I recommend the following:

Posted by DavidWimbush