How to find out when a DB recovery model was changed

A client asked me today how to find out when the recovery model had been changed on a DB. The reason was that an overnight process broke because the DB had been put into the full recovery model and the transaction log filled up.

My initial reaction was that the default trace should have the information, but unfortunately it only captures database object changes. That event covers all changes to the DB object, not just recovery model changes, but it is a start.

Ideally the default trace would capture the Audit Database Object Management event, which records the statement issued for the ALTER; the object altered event doesn't.
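If you want the statement captured going forward, one option (on SQL Server 2008 or later, where SQL Server Audit is available; in 2008 server audit specifications were Enterprise-only) is an audit on the DATABASE_CHANGE_GROUP action group. This is a minimal sketch; the audit names and file path are made up for illustration:

USE master;
-- hypothetical names and path
CREATE SERVER AUDIT RecoveryModelAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');
CREATE SERVER AUDIT SPECIFICATION RecoveryModelAuditSpec
    FOR SERVER AUDIT RecoveryModelAudit
    ADD (DATABASE_CHANGE_GROUP)  -- covers CREATE/ALTER/DROP DATABASE
    WITH (STATE = ON);
ALTER SERVER AUDIT RecoveryModelAudit WITH (STATE = ON);

-- read the captured events, including the statement text
SELECT event_time, server_principal_name, statement
  FROM sys.fn_get_audit_file('C:\AuditLogs\*.sqlaudit', default, default);

This only helps for future changes though; it can't tell you about a change that has already happened.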

What other options do we have? Well, the error log is one. Quite nicely, recovery model changes are logged to the error log. Now, I get annoyed about some messages being logged to the error log because they aren't error messages, backups being the main culprit. If you have 100 DBs, all with transaction log backups, you will have hundreds and hundreds of backup entries in the log, and it's impossible to see the wood for the trees. In that case I turn on trace flag 3226 (http://msdn.microsoft.com/en-us/library/ms188396.aspx), which disables the outputting of those backup completion messages to the error log.
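For reference, turning that trace flag on globally looks like this (it only lasts until the next restart; add -T3226 as a startup parameter to make it permanent):

-- enable trace flag 3226 for all sessions
DBCC TRACEON (3226, -1);
-- confirm it is active
DBCC TRACESTATUS (3226);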

How do we read the error log? There is an undocumented system procedure to do it:

sp_readerrorlog

This procedure reads the log from disk using an extended stored procedure. To use it you need to be a member of the securityadmin fixed server role, or you will get:

Msg 15003, Level 16, State 1, Line 1

Only members of the securityadmin role can execute this stored procedure.

The parameters for sp_readerrorlog are

@p1 = Log file to read; this is the n in ERRORLOG.n. Specify 0 for the current ERRORLOG file
@p2 = Type of log to read: SQL Server (1) or SQL Agent (2)
@p3 = Some text to find in the log entries
@p4 = Some more text to find in the log entries (the result is effectively @p3 AND @p4)

Using this information we can do the following

EXEC sp_readerrorlog 0, 1, 'RECOVERY','OPTION'

This is a start, but it only reads one file; if the logs have been cycled or the server restarted, we will miss our change. We can use a second undocumented procedure to get the list of log files:

sp_enumerrorlogs

This takes one parameter:

@p1 = Type of log to read: SQL Server (1) or SQL Agent (2), the same as @p2 for sp_readerrorlog. It defaults to 1 if not specified.

Using these together we can go through the logs looking for our changes.

set nocount on

declare @searchString1 varchar(255)
declare @searchString2 varchar(255)
set @searchString1 = 'RECOVERY'
set @searchString2 = 'OPTION'

declare @logs table (LogNo int, StartDate datetime, FileSize int)
declare @results table (LogFileNo int, LogDate datetime, ProcessInfo varchar(20), Text varchar(max))

insert into @logs EXEC master..sp_enumerrorlogs

declare cLogs cursor for select LogNo from @logs
declare @LogNo int

open cLogs
fetch cLogs into @LogNo
while @@fetch_status = 0
    begin
    insert into @results(LogDate, ProcessInfo, Text)
    EXEC sp_readerrorlog @LogNo, 1, @searchString1, @searchString2
    update @results set LogFileNo = @LogNo where LogFileNo is null
    fetch cLogs into @LogNo
    end
close cLogs
deallocate cLogs

select * from @results

This tells us when the change was made but not who made it. We can tie this information to the default trace to get the who. In the default trace, event class 164 (Object:Altered) covers the object changes, and event subclass 1 indicates when the change is committed.

select *
  from fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log.trc', null) t
 where EventClass = 164
   and EventSubClass = 1

This is OK, but it is not very generic and doesn't tie the two results together. We can query the sys.traces system view (where is_default = 1) to get the path to the default trace. This, however, is the latest file; what we need to do is strip off the file number so that fn_trace_gettable can iterate through all the trace files. We look for the last _ in the path, strip off everything after it, append .trc, and that gives us what we want. We can then join to the results table above to get the additional information in the trace, i.e. who made the change and from what machine.

One thing to note is the time: the log entries generally occur slightly before the events in the trace file, so we have a woolly BETWEEN clause. Additionally, the spid join is required to get a hash join; hash joins have to have at least one equality predicate to be used. Without it you will get a nested loop join, which may be very slow as the trace will be read for each row in @results. In my test case the difference was 80ms for the hash join versus 300ms for the nested loop join. You could use a temp table instead, in which case you will get a spool.

declare @logFile varchar(max)
set @logFile = (select path from sys.traces where is_default = 1)
set @logFile = left(@logFile, len(@logFile) - charindex('_', reverse(@logFile))) + '.trc'

select starttime, *
  from fn_trace_gettable(@logFile, null) t
  join @results r on t.spid = substring(r.ProcessInfo, 5, 10)
                 and t.StartTime between dateadd(ms, -150, r.LogDate) and dateadd(ms, 150, r.LogDate)
 where t.EventClass = 164
   and t.EventSubClass = 1

So the final result is this script, which allows you to correlate entries in the error log with the default trace to find out who changed your database recovery settings and when.

 

set nocount on

declare @searchString1 varchar(255)
declare @searchString2 varchar(255)
set @searchString1 = 'RECOVERY'
set @searchString2 = 'OPTION'

declare @logs table (LogNo int, StartDate datetime, FileSize int)
declare @results table (LogFileNo int, LogDate datetime, ProcessInfo varchar(20), Text varchar(max))

insert into @logs EXEC master..sp_enumerrorlogs

declare cLogs cursor for select LogNo from @logs
declare @LogNo int

open cLogs
fetch cLogs into @LogNo
while @@fetch_status = 0
    begin
    insert into @results(LogDate, ProcessInfo, Text)
    EXEC sp_readerrorlog @LogNo, 1, @searchString1, @searchString2
    update @results set LogFileNo = @LogNo where LogFileNo is null
    fetch cLogs into @LogNo
    end
close cLogs
deallocate cLogs

select * from @results

declare @logFile varchar(max)
set @logFile = (select path from sys.traces where is_default = 1)
set @logFile = left(@logFile, len(@logFile) - charindex('_', reverse(@logFile))) + '.trc'

set statistics time on
select starttime, *
  from fn_trace_gettable(@logFile, null) t
  join @results r on t.StartTime between dateadd(ms, -150, r.LogDate) and dateadd(ms, 150, r.LogDate)
                 and t.spid = substring(r.ProcessInfo, 5, 10) --required to enable a hash join to be used
 where t.EventClass = 164
   and t.EventSubClass = 1
set statistics time off

--use of a temp table results in a nested loop join but also a spool

Published Wednesday, April 27, 2011 12:59 PM by simonsabin

Comments

Wednesday, April 27, 2011 4:15 PM by GrumpyOldDBA

# re: How to find out when a DB recovery model was changed

you could always add a trigger to stop the recovery model being changed ?

# Is there a way to find Historical Recovery Model changes on a database? Yes,you are lucky ! « SQLSailor

Pingback from  Is there a way to find Historical Recovery Model changes on a database?  Yes,you are lucky ! «  SQLSailor


# Database SIMPLE recovery but log file still not truncate « sqlorcl

Pingback from  Database SIMPLE recovery but log file still not truncate « sqlorcl

Thursday, March 26, 2015 4:44 PM by [SIMS] Transaction Log File - Page 3

# [SIMS] Transaction Log File - Page 3

Pingback from  [SIMS] Transaction Log File - Page 3