December 2006 - Posts

NTFS Fragmentation - Bad for Databases ?

( Note: I had a couple of attempts to post this before I figured out how to publish pictures )

 

So what has this to do with the DBA, you might say? Well, I mostly make my living from tuning databases and applications. I always start with a hardware audit, as it makes little sense to spend lots of money on a consultant when a simple hardware upgrade might resolve the problems. It is very important to realise that many SQL Server implementations do not follow best practices; this can be for any number of reasons, many of which are beyond the control of the DBA. For instance I have been faced with servers with six disk slots and no external storage - whichever way I decide to go is going to be a compromise. It's very easy to be the purist but that doesn't help in the real world. After showing all that can go wrong I'll make a few suggestions as to how you may be able to mitigate some of the negative effects of fragmentation. Remember: don't bring problems, bring solutions!!

A quick check with Disk Defragmenter, available from My Computer, Manage, will give an indication of anything untoward. You'll hopefully see something like this:

[ screenshot: Disk Defragmenter report for a healthy, largely unfragmented volume ]
However you might see this:

[ screenshot: Disk Defragmenter report showing a badly fragmented volume ]
So far so good, there’s 40% free space so a defrag will be fine, except for this

[ screenshot: Disk Defragmenter analysis report ]
What has happened is that the fragmentation is so interwoven there is no contiguous free space to enable defragmentation of files, and I have 65% free space !!

[ screenshot: Disk Defragmenter report ]
And sometimes you just can’t get it to run!

 

Database files are very prone to fragmentation due to auto grow, and the more databases on your server the more likely the problem. As database files occupy a greater percentage of the available disk space, the issue of defragmentation becomes more critical: it is quite possible to have adequate free space but no single contiguous block that is large enough. ( If you have an 80Gb file which is fragmented you need 80Gb of contiguous free space for a fast and effective defrag. ) The same can happen with fragmentation of the MFT; the MFT is made up of system files, so it will not be defragmented, and a badly fragmented MFT really gives you little option other than to reformat your disk. ( There are some third party tools which will deal with MFT fragmentation, I believe. )

[ screenshot: bad file fragmentation ]

[ screenshot: fragmented MFT ]

  

So, having established that even with regular maintenance you can still end up with severe fragmentation, what steps can you take to lessen the effects?

  1. Make sure that auto shrink is not enabled on any of your databases.
  2. Make sure you're not running any regular shrink file jobs - this is no different to a controlled auto shrink.
  3. Set the growth of your database devices to large amounts - if your database is 10Gb make the growth step 1Gb - as DBA you need to be in control of growth ( there's a T-SQL sketch after this list ). Back in the days of SQL 6.x there was no auto grow; growth had to be pre-allocated, and pre-allocated growth allows you to control fragmentation much better. Better one 1Gb growth than 100 10Mb growths.
  4. Set your system databases including tempdb the same way .. ideally you want no dynamic growth of these databases. I usually set master and msdb to an initial 25Mb with a 10Mb log and growth of 10Mb. The actual values will depend upon your usage of these databases. I have a custom stored procedure which clears old data from msdb, which helps maintain the size - after all, do you really want 5 years of database backup information?
  5. You might want to consider migrating your larger databases to multiple files ( not file groups ). It would be easier to defragment eight 10Gb files than one 80Gb file.
  6. Use file groups to separate static from dynamic/growing tables - you might also want to use multiple files within your file groups, again on the basis that it's easier to work on smaller files.
  7. Add more memory - I read so many posts concerning database segmentation to avoid "hot spots", but ultimately the easiest way to avoid hot spots is to put the data in cache. Consider a 100Gb database residing on a server with minimal memory, say 3Gb: at best the data cache will be about 1.25Gb - the actual value will vary, but whichever way you slice and dice your memory it supports no more than around 2% of your data. Taking the memory to say 32Gb will give you roughly a 30% data cache. Does it work? Well, I upgraded an 8 way box from 8Gb to 32Gb: cpu dropped by 75%, actual throughput ( transactions etc. ) jumped by around 50%, and i/o was turned around, with minimal read activity compared with before. As with all changes to server hardware you may well introduce a new bottleneck!
  8. If possible replace your disks with larger spindles - having more free space makes defragmentation easier. It used to be recommended to leave at least 25% free space on a disk/array; I personally think that for databases this figure should be higher, but again "it just depends" - if your databases don't grow then it's not so much of a problem, although if they don't grow you won't get fragmentation of the database files anyway, just of the backups.
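To put point 3 into T-SQL, this is roughly what I mean - a minimal sketch only, with an imaginary database called SalesDB and made-up logical file names and sizes; adjust to suit your own system ( the syntax is the same on SQL 2000 and 2005 ):

-- fix the data and log growth to large, predictable steps rather than a percentage
ALTER DATABASE SalesDB MODIFY FILE ( NAME = SalesDB_Data, FILEGROWTH = 1024MB )
ALTER DATABASE SalesDB MODIFY FILE ( NAME = SalesDB_Log, FILEGROWTH = 512MB )
-- and pre-allocate space up front so auto grow becomes the exception rather than the rule
ALTER DATABASE SalesDB MODIFY FILE ( NAME = SalesDB_Data, SIZE = 20480MB )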

 

It worries me when I read about virtualisation allowing for better than 85% disk utilisation as a selling point - that leaves so little space for defragmentation!

Now I've touched on a whole range of subject areas, which if nothing else should provoke a healthy discussion, so here's a quick explanation of how and why you can offset the degraded performance caused by fragmentation without actually removing the fragmentation.

Most database reads use read ahead, which is sequential; most database writes are random. Sequential reads on fragmented files really slow performance, with the disk heads jumping all over the place - somewhat like visiting every shop in a mall at random instead of visiting each in sequence ( you'll walk less! ). Disk controllers attempt to turn random writes into sequential writes through caching and elevator sorting, but generally writes are still random, so technically file fragmentation does not affect disk writes as intensely. The exception to this is of course log writes, which are always sequential. An examination of wait stats will show whether your log writes are causing performance problems. I don't have a quick fix for log writes - technically more memory should help, and yes, log files should be on dedicated drives, but life isn't always like that.

We're really talking OLTP databases here of course; DSS systems have far less random i/o, so the effect of fragmentation upon a DSS system is likely to be even worse. There are a couple of other points to consider. Carving an array into separate logical disks only partitions the data: if a single partition on that array becomes fragmented it most likely impacts all the logical drives, even if they themselves are not fragmented. I'd really advise against carving physical disks into logical disks - it doesn't improve performance ( i/o's and throughput for an array/disk remain the same regardless of how the disks are carved; this is a prime area of performance problems with SANs, but that most certainly is out of scope for this article ). Finally, the sql server and sql agent logs fragment badly by default; separating those logs away from the data will help, as will cycling your error log using sp_cycle_errorlog before a defrag.
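If you want a quick look at whether log writes are an issue, something along these lines will do - a minimal sketch; the DBCC covers SQL 2000 and the DMV covers SQL 2005:

-- SQL 2000 : dump the accumulated wait stats and look for WRITELOG
DBCC SQLPERF(WAITSTATS)
go
-- SQL 2005 : the same information from the wait stats DMV
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG'
go
-- and cycle the error log before a defrag so the current log file is closed off
EXEC dbo.sp_cycle_errorlog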

Posted by GrumpyOldDBA with 2 comment(s)

Not thinking it through - the end ( maybe )

Just to finalise the saga of the index rebuild job, we’ve submitted a replacement job which doesn’t have the failings of the current one, which leaves the database without a recovery point. The current job is as follows:-

  • Step 1         
    • disable the transaction log backup job
    • Set database to “truncate log on checkpoint”
  • Step 2         
    • within a cursor loop apply a dbcc dbreindex for each table
  • Step 3         
    • Set database to full recovery
    • Enable the transaction log backup job.

I had some discussions with the data centre and pointed out that the logic of the job steps was to quit on failure, so that if the actual index rebuild failed ( step 2 ) the database was left in simple recovery with no transaction log backups ( and no recovery ). Well, two weeks on and, as expected, even when the job runs successfully it leaves the database with no recovery point until the next full backup, 18 hours later. On the previous week, when the index rebuild step failed and the database was left in simple recovery, the response I had was that the database had not been switched to simple recovery - ok, so the option actually reads “trunc. log on chkpt.” but this is simple recovery !!

 

My change involves logic to ensure the database is returned to full recovery in the event of a step failure, plus the all-important backup after the switch back to full recovery. The index rebuild script has been replaced with a stored procedure and a couple of tables - we carry out a dbcc showcontig, storing the results in a table so we can build up an understanding of what is fragmented, and we then rebuild only those indexes which show >10% fragmentation for a clustered index and >20% for a non-clustered. The actual index rebuilds are stored in another table for later analysis.

The code for the proc and tables is shown below – I have a dba database called ServerAdmin which holds the data.

 

A couple of additional points in passing: after changing the database recovery model it isn’t necessary to do a full backup, a differential will suffice - assuming you’re happy using differential backups. The process is thus ( there's a T-SQL sketch of the wrapper after the list ) :-

  • Step 1
    • Backup the transaction log   ( missing in the original job )
    • Disable transaction log job
    • Switch database to simple recovery
  • Step 2
    • The re-index job
  • Step 3
    • Switch database to full recovery
    • Differential backup
    • Re-enable the transaction log job
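As a rough sketch of the wrapper - the database name, backup paths and job name here are purely illustrative, and the job enable/disable would be done through msdb.dbo.sp_update_job:

-- Step 1
BACKUP LOG mydb TO DISK = 'H:\SQLBackups\mydb_log.trn'   -- the log backup missing from the original job
EXEC msdb.dbo.sp_update_job @job_name = 'mydb - tlog backup', @enabled = 0
ALTER DATABASE mydb SET RECOVERY SIMPLE
go
-- Step 2 : run the index rebuild proc shown further down
go
-- Step 3
ALTER DATABASE mydb SET RECOVERY FULL
BACKUP DATABASE mydb TO DISK = 'H:\SQLBackups\mydb_diff.bak' WITH DIFFERENTIAL
EXEC msdb.dbo.sp_update_job @job_name = 'mydb - tlog backup', @enabled = 1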

In addition I’ve added a startup proc to make sure the database is in full recovery - this is just in case the cluster should fail over during the index rebuild.
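The safety-net proc itself is trivial - a minimal sketch, with the proc and database names made up:

use master
go
CREATE PROCEDURE dbo.sp_dba_EnsureFullRecovery
AS
BEGIN
      -- if the rebuild job died mid-flight, put the database back into full recovery
      IF DATABASEPROPERTYEX('mydb', 'Recovery') <> 'FULL'
            ALTER DATABASE mydb SET RECOVERY FULL
END
GO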

A stored procedure placed in master can be made to execute on service start by issuing the following command against it  

 

exec dbo.sp_procoption N'sp_myprocedure', N'startup', N'true'

 

This is a useful feature for those not in the know. I usually have a procedure which sends an smtp mail set up this way, so if the service restarts there is an immediate mail alert - useful if you want to know that your cluster has failed over, for example.

Another use is to set memory allocations for instances, thus making sure instances do not suffer starvation. I applied this technique to a development server where the developers had sysadmin rights and were prone to making ( well, trying to make ) changes to the memory for the various instances. I admit the procedure was very well hidden and encrypted, but it did assist in making sure one instance could not grab all the memory.
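The guts of such a proc are just sp_configure calls - a sketch only, with the 4Gb cap picked purely as an example:

CREATE PROCEDURE dbo.sp_dba_FixMaxMemory
AS
BEGIN
      EXEC sp_configure 'show advanced options', 1
      RECONFIGURE
      -- pin this instance to 4Gb ( the value is in Mb ) so it can't starve its neighbours
      EXEC sp_configure 'max server memory', 4096
      RECONFIGURE
END
GO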

 

This is the code for the proc and support tables.

Note that this rebuilds all indexes with a 100% fill factor. This is intentional, as I don’t believe in allocating fill factors unless you know it will help reduce fragmentation; I personally dislike the practice of allocating a 90% or 80% fill factor as a blanket value - it just makes the database bigger and may increase i/o through increased scans.

So in the event of creating individual fill factors the syntax of the dbcc would need to be changed, for example as below.
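Purely as an illustration ( the table, index and fill factor are made up ):

-- rebuild a single index with a specific fill factor instead of the blanket 100
DBCC DBREINDEX ('dbo.MyBusyTable', 'IX_MyBusyTable_SomeColumn', 90) WITH NO_INFOMSGS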

 

Apologies for loss of some of the formatting.

 

Use ServerAdmin

go

--

-- create tables for proc

--

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[FragList_mydb]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)

drop table [dbo].[FragList_mydb]

GO

CREATE TABLE [dbo].[FragList_mydb] (

            [ObjectName] [char] (255) NULL ,

            [ObjectId] [int] NULL ,

            [IndexName] [char] (255) NULL ,

            [IndexId] [int] NULL ,

            [Level] [int] NULL ,

            [CountPages] [int] NULL ,

            [CountRows] [int] NULL ,

            [MinRecSize] [int] NULL ,

            [MaxRecSize] [int] NULL ,

            [AvgRecSize] [int] NULL ,

            [ForRecCount] [int] NULL ,

            [Extents] [int] NULL ,

            [ExtentSwitches] [int] NULL ,

            [AvgFreeBytes] [int] NULL ,

            [AvgPageDensity] [int] NULL ,

            [ScanDensity] [decimal](18, 0) NULL ,

            [BestCount] [int] NULL ,

            [ActualCount] [int] NULL ,

            [LogicalFrag] [decimal](18, 0) NULL ,

            [ExtentFrag] [decimal](18, 0) NULL

) ON [PRIMARY]

GO

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[ResultTable_mydb]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)

drop table [dbo].[ResultTable_mydb]

GO

CREATE TABLE [dbo].[ResultTable_mydb] (

            [Command] [varchar] (1000) NOT NULL ,

            [TheTime] [datetime] NOT NULL

) ON [PRIMARY]

GO

--

-- create the proc in the user database to have the index rebuild

--

Use mydb

go

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[dbasp_IndexBuild_mydb]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)

drop procedure [dbo].[dbasp_IndexBuild_mydb]

GO

 

SET QUOTED_IDENTIFIER ON

GO

SET ANSI_NULLS ON

GO

 

CREATE proc dbo.dbasp_IndexBuild_mydb

@p1 int=0

-- ============================================================= 

--                           Stored Procedure:       dbasp_IndexBuild_mydb                         

--                           Written by:      (c)colin leversuch-roberts     

--                           www.kelemconsulting.co.uk                          

--                                                               

--  Purpose:            Procedure to facilitate selective index rebuilds for mydb Database

--                            Required to provide checkpoints to clear transaction log during the rebuild   

--                            as tran log space is limited

--                                                                

--  System:             Calling User database ( must exist within the database to be defragged )

--

--  Input Parameters:    @p1                 int        - calling code

--                      

--  Output Parameters:            None                                     

--                                                                

--  Usage:              EXEC dbo.dbasp_IndexBuild_mydb  38547

--                                                               

--  Called By:                SQL Agent

--

--  Notes:                     Calling code stops accidental execution of stored procedure

--                                  This proc executes a dbcc showcontig which is intrusive

--                                  dbcc dbreindex only updates stats when applied to a table so we must

--                                  always make sure we also update stats in a separate job

--                                                               

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              22-August-2002          Initial Release                             

--  2                               23-May-2003              Move to ServerAdmin

--  3                              25-Nov-2003                Code rewrite. used raiserror for errors  clr

--  4                              14th dec 2006             mydb version which requires t log truncates

--                                                                                               

-- ============================================================= 

as

set nocount on

IF @p1<>38547

            BEGIN

                        raiserror('error running proc',16,1 )

                        Return(-1)

            END

--endif 

--

-- Truncate table prior to population

--

Truncate table ServerAdmin.dbo.FragList_mydb

Truncate table ServerAdmin.dbo.ResultTable_mydb

--

-- Populate the table

--

INSERT INTO ServerAdmin.dbo.FragList_mydb([ObjectName], [ObjectId], [IndexName], [IndexId], [Level], [CountPages], [CountRows], [MinRecSize], [MaxRecSize], [AvgRecSize], [ForRecCount], [Extents], [ExtentSwitches], [AvgFreeBytes], [AvgPageDensity], [ScanDensity], [BestCount], [ActualCount], [LogicalFrag], [ExtentFrag])

            EXEC(' DBCC SHOWCONTIG  with all_indexes,TABLERESULTS')

--

-- remove system tables

--

delete from ServerAdmin.dbo.FragList_mydb where objectname like 'sys%'

--

-- Create a list of fragmented clustered indexes.

-- Choose fragmentation of >10%

-- choose tables greater than 8 pages in size ( 1 extent )

-- Defrag the largest tables first

-- As the clustered index rebuild will rebuild all secondary indexes we only need

-- to rebuild secondary indexes where we haven't rebuilt the clustered index

--

declare @Clustered table

(

ObjectName varchar (255) not NULL ,

ObjectId int not NULL ,

IndexName varchar (255) not NULL ,

CountPages int not NULL,

Numkey int not null identity(1,1)

)

--

insert into @Clustered (ObjectName,ObjectID,IndexName,CountPages)

select ObjectName,ObjectID,IndexName,CountPages from  ServerAdmin.dbo.FragList_mydb where indexid=1 and scandensity<90

and countpages>8

order by CountPages desc

--

-- now get the non clustered indexes

--

declare @NonClustered table

(

ObjectName varchar (255) not NULL ,

ObjectId int not NULL ,

IndexName varchar (255) not NULL ,

CountPages int not NULL,

Numkey int not null identity(1,1)

)

--

-- The page size was tricky to define for effective rebuilds

-- a 3 page index did not rebuild very well, but a 7 page did

--

insert into @NonClustered (ObjectName,ObjectID,IndexName,CountPages)

select ObjectName,ObjectID,IndexName,CountPages from  ServerAdmin.dbo.FragList_mydb where indexid>1 and indexid<255 and scandensity<80

and countpages>3

and objectID not in ( select objectid from @Clustered )

order by countPages desc

/*

Now use cursors to rebuild the indexes

I don't like cursors and would usually use a while loop

however a while does tend to bind everything into a transaction

and I'd prefer that not to happen

--

A result table time stamped will be populated so we can see what was actually rebuilt

*/

--

Declare @ResultTable table ( Command varchar(1000) not null, TheTime datetime not null default getdate())

--

--

Declare @tablename varchar(255),@IndexName varchar(255),@CmdStr nvarchar(1000)

-- populate cursor for clustered indexes

DECLARE IndexRebuild CURSOR FOR SELECT ObjectName FROM @Clustered order by Numkey

-- Open the cursor

OPEN IndexRebuild

-- loop through the indexes

FETCH NEXT FROM IndexRebuild INTO @tablename

WHILE @@FETCH_STATUS = 0

BEGIN

set @CmdStr = 'dbcc dbreindex('+rtrim(@tablename)+','+''''+''''+','+'100) WITH NO_INFOMSGS '

--print @cmdstr

exec (@cmdstr)

insert into @ResultTable(Command) values(@cmdstr)

--

-- issue a checkpoint to empty transaction log - Modification for mydb

--

checkpoint

 

FETCH NEXT FROM IndexRebuild INTO @tablename

END

-- Close and deallocate the cursor

CLOSE IndexRebuild

DEALLOCATE IndexRebuild

--

-- non clustered indexes ( shouldn't need a checkpoint here as secondary indexes are typically small )

--

DECLARE IndexRebuild CURSOR FOR SELECT ObjectName,IndexName FROM @NonClustered order by Numkey

-- Open the cursor

OPEN IndexRebuild

-- loop through the indexes

FETCH NEXT FROM IndexRebuild INTO @tablename,@IndexName

WHILE @@FETCH_STATUS = 0

BEGIN

set @CmdStr = 'dbcc dbreindex('+rtrim(@tablename)+','+rtrim(@IndexName)+','+'100) WITH NO_INFOMSGS '

--print @cmdstr

exec (@cmdstr)

insert into @ResultTable(Command) values(@cmdstr)

FETCH NEXT FROM IndexRebuild INTO @tablename,@IndexName

END

-- Close and deallocate the cursor

CLOSE IndexRebuild

DEALLOCATE IndexRebuild

 

insert into ServerAdmin.dbo.ResultTable_mydb

select * from @ResultTable

--

-- add a final checkpoint here

--

checkpoint

-- endproc

GO

SET QUOTED_IDENTIFIER OFF

GO

SET ANSI_NULLS ON

GO

--

Posted by GrumpyOldDBA with 1 comment(s)
Filed under:

Not thinking it through - Part 4 .. the saga continues!

My previous posts under this heading, and the "How much memory" post, relate to my experiences with production SQL Servers that are managed within an outsourced data centre. I'm performance tuning one of these servers, and having resolved some issues I was back to monitoring wait stats and blocking. Initial results indicated i/o completion problems on the transaction log drive - it's a SAN, so no real surprise there then! I was looking through the initial results from my blocking report when I noticed that the transaction log backup job was causing blocking for users - strange, I thought!!

Now the server is a little lacking in disk space and the transaction logs have bloated during index rebuilds, causing the data centre problems; this is out of my control so I can only watch ( see previous posts ). The latest "modification" is to apply a really savage shrink to the ldf file after every backup - yes, honestly. Typically the hourly logs average about 50mb or so, with some overnight processes taking the max size to 250mb; the added step in the job seeks to shrink the ldf file to 10mb. The whole job is bound within a cursor for good measure, even though the job is for one database. So during the shrink, transactions can't be written to the log file - for an oltp database this is obviously a good move. Add to this that the growth is set to 10%, so how many increments, each interfering with performance, to get back to 50Mb? Answers on a postcard to .......  I did a quick check after a backup and the 14mb ldf file was in 8 fragments already.

Obviously I'm in "observer mode", but I do feel somewhat disappointed - should one expect more from a data centre? My client will be recruiting a permanent DBA; until I started to stick my nose in no-one had really monitored the servers in great detail, so things will improve I'm sure. So who watches the watchers?

ps. I should really file this under Very Grumpy!!!

Posted by GrumpyOldDBA with no comments
Filed under:

For the paranoid production DBA - xp_logininfo

I've been involved in some performance tuning of an application which involves calling system stored procedures. The first of these is xp_logininfo - if you're a paranoid production DBA like me then this procedure can be a friend indeed.

 

But first to the application, which calls this procedure as part of its login process. If you look in BOL you’ll see this procedure returns information concerning domain or Active Directory users - in other words it queries the Active Directory. This is fine except if there should happen to be any latency; I’m not going to discuss the client architecture other than to say that not all networks are simple and a geographically distributed system may need special consideration.

The procedure actually uses openrowset with a call to NetGroupGetMembers, which goes and gets the information via oledb - this isn’t an area in which I have expertise, I just mention it in passing.

 

So my attention was drawn to this procedure because of latency in the calls for information, which were part of a blocking problem. Now, in normal circumstances I’d hope not to find an application calling this procedure ( by default it does not have any granted permissions ), however this was not the case here, and the procedure was part of a problem, which brings me back to the paranoid production DBA.

 

Probably to the surprise of many, I am not a great lover of integrated security; I think it offers less security, as once a user has logged into the domain/AD and that user account has rights to the sql server, off they go. I personally prefer another level of authentication - sure, make the domain login secure, but if you want access to my SQL Server then you should need another login!! However, my main pet hate is that usually the DBAs have no control of AD group membership, and it can be very difficult to actually discover what route of authentication a user is taking, which is where xp_logininfo comes into its own.

 

exec dbo.xp_logininfo 'builtin\administrators','members'

 

 

account name              type    privilege   mapped login name         permission path
MyServer\Anadmin          user    admin       MyServer\Anadmin          builtin\administrators
UKDOMAIN\DBAdmin          user    admin       UKDOMAIN\DBAdmin          builtin\administrators
UKDOMAIN\Domain Admins    group   admin       UKDOMAIN\Domain Admins    builtin\administrators
UKDOMAIN\ClusterService   user    admin       UKDOMAIN\ClusterService   builtin\administrators
UKDOMAIN\Login1           user    admin       UKDOMAIN\Login1           builtin\administrators

 

This shows the members of the local admins group, which itself contains another group. You can’t query that nested group directly, but if you add the group to the sql server logins then you can query it for members too, and so on until there are no more nested groups.
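So, once UKDOMAIN\Domain Admins has been added as a login, the next level down is just another call:

exec dbo.xp_logininfo 'UKDOMAIN\Domain Admins','members'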

Nested groups are a potential loophole in security - I’ve actually found all manner of users buried in nested groups. Where there are testing environments it can be common to drop groups into groups because “well, it worked for me in xxxx environment/group”. In one case this resulted in a test account being added to a production account and test data being created in the production system - you try explaining that one away to SOX !!!

 

If you have lots of domain/ad groups on your production server an audit with xp_logininfo may well bring some unpleasant surprises.

Posted by GrumpyOldDBA with 2 comment(s)
Filed under:

Not thinking it through - Part 3

Continuing the saga of the index rebuild job, ( see "not thinking it through" and ditto part 2 ) the job has been rewritten into three steps

step 1 puts the database into simple recovery and disables tran log backups
step 2 does the index rebuild stuff
step 3 puts the database mode to full recovery and enables the transaction log backups.
 
Sadly each step relies on the success of the previous one, so putting the database back into full recovery requires step 2 to succeed. Step 2 failed this weekend, so the production database was left in simple recovery with the transaction log backups disabled - which wouldn't have worked anyway, as you can't take a log backup of a database in simple recovery!
The job sequence is also still missing a full backup after the database is returned to full recovery; the job runs early morning, some 17 hours before the next scheduled full backup.
 
Once again the logic has not been thought through and a critical database left in a perilous state.
 
The saga continues. 
 
 
 
Posted by GrumpyOldDBA with 1 comment(s)
Filed under:

Transactional Replication without a GUI

As I'd previously posted, I have to be able to set up replication through T-SQL scripts only; my database to be replicated has identity columns and timestamps. The subscribing database has to be fully operational separately from the publishing database - the plan is to use the subscribing database for support.

Changes to the subscriber do not get pushed back to the publisher. After much discussion we've decided to go with data-only replication, no snapshot. This has its own problems, as you then have to generate the replication procedures yourself, and for tables with identity columns you have to edit those procs - fun huh?

So the first change is to make sure all the identity columns have NOT FOR REPLICATION set; the scripts below show the status for sql2k and sql2005 ( I did the 2k script earlier ).

There's a script available which updates the system tables directly; I don't support that approach, and apart from that the database schema is in version control, so it's off to modify each table in turn! ( There's an illustration of the end result after the scripts. )

--

-- Identify the setting of the identity status

-- we need this to be NOT FOR REPLICATION

--

-- Run in database to be published

--

-- SQL 2000 Version

--

select  obj.name,col.name,

case columnproperty(obj.id,col.name,'isidnotforrepl')

when 0 then 'Not Set'

when 1 then 'Set'

else 'unknown'

end as 'Not for Repl'

from dbo.sysobjects obj join dbo.syscolumns col on obj.id=col.id

where col.autoval is not null

and  objectproperty(obj.id,'tablehasidentity')=1

and obj.xtype='U' and obj.name not like 'dt%'

--

-- SQL 2005 Version

--

select  obj.name,col.name,

case columnproperty(obj.object_id,col.name,'isidnotforrepl')

when 0 then 'Not Set'

when 1 then 'Set'

else 'unknown'

end as 'Not for Repl'

from sys.objects obj join sys.columns col on obj.object_id=col.object_id

where col.is_identity=1

and  objectproperty(obj.object_id,'tablehasidentity')=1

and obj.type='U' and obj.name not like 'dt%'
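Just to illustrate the end result of all that table modification - a sketch with made-up table and column names, the only point of interest being the NOT FOR REPLICATION on the identity column:

CREATE TABLE dbo.Orders
(
      OrderID int IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
      OrderDate datetime NOT NULL,
      CustomerRef int NOT NULL
)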

 

Posted by GrumpyOldDBA with no comments
Filed under:

Not thinking it through - Part 2

OK - first off, this isn't my routine; I'm just an observer of the process running on a managed server!

Having had the index rebuild job fail due to insufficient transaction log space and the query governor setting, and the resultant code change lead to overwriting transaction log backups ( which invalidated recovery from the transaction logs ), this weekend the job finally ran successfully and the database indexes were rebuilt. ( see Not thinking it through )

BUT ... to achieve this the database was set into simple recovery mode, the indexes rebuilt, and the mode set back to full. The fatal flaw: no full backup after the job, so the subsequent transaction log backups will not be usable for recovery, and as the night's full backup failed due to a lack of disk space, once again a critical production database is being used for a working day with no recovery point. These are "school boy" or "school girl" errors, but worrying from the point of view that this server is managed by an outsourced provider - and no, I will not be naming names.

Posted by GrumpyOldDBA with 1 comment(s)
Filed under: