B Esakkiappan's SQL Thoughts

To learn by sharing all about SQL Server 2005
Know The Transaction Log – Part 4 - Restoring Data

After looking at backups in SQL Server, it is time to learn about restore and recovery in SQL Server.

SQL Server supports three levels of Restoring data. They are

1. Complete Database Restore : This is the basic restore strategy. For a simple recovery model database, a complete database restore simply involves restoring a full backup followed by the latest differential backup, if one is available. For a full or bulk-logged recovery model database, it involves restoring a full backup, followed by the latest differential backup, then the sequence of transaction log backups in the order in which they were taken, and finally the tail-log backup, if one is available.

2. File Restore : This restore operation is very useful when one of the files in a filegroup is damaged. Its main advantage is that the restore time is short compared to a complete database restore. For a simple recovery model database, file restore works only for read-only secondary files.

3. Page Restore : This restore is only applicable to full or bulk-logged recovery model databases and is not available under the simple recovery model. Using this level of restore, a particular page or pages can be restored.

How Restore Works?

Restoring is the process of copying data from a backup and applying transaction logs to roll the data forward to the desired point in time. This process is done in three phases: the data copy phase, the redo phase, and the undo phase.

Data Copy Phase: The process of copying the data, index, and log pages from the backup media to the database files. No log backups, nor any log information in the data backups, are applied in this phase.

Redo Phase : This phase applies logged changes to the data by processing log information from the log backups.

Undo Phase : This phase rolls back any uncommitted transactions in the restored data and brings the database online.

At this stage we have to understand the relationship between the WITH RECOVERY and WITH NORECOVERY options of the RESTORE command.

The default option is WITH RECOVERY. This continues into the undo phase after completing the redo phase.

A restore operation stops at the redo phase if WITH NORECOVERY is included in the RESTORE statement. This allows the roll forward to continue with the next statement of the restore sequence, typically a differential or transaction log backup; the undo phase is performed only by the final restore in the sequence.

For a Full Recovery model database or for a Bulk-logged recovery model database, a restore operation is done by a sequence of RESTORE statements. This sequence is called Restore Sequence.

For a simple scenario a restore sequence might be

· starting with restoring a recent full backup,

· applying the most recent differential backup,

· restoring the sequence of log backups taken after the most recent differential backup, in the order in which they were taken,

· finally, restoring the tail-log backup, if one was taken after the failure occurred.

For more complex scenarios, more complex sequence planning is required, and for this planning a recovery path is very important. A recovery path is a complete sequence of data and log backups that can bring a database to a point in time. For more details about recovery paths, see Books Online.

Complete Database Restore :

A simple restore strategy. Let us see how to do a complete database restore using an example. Suppose that for a full or bulk-logged recovery model database, a series of backups is taken on the following schedule: a full backup on Monday 10 PM; differential backups at 10 PM on Wednesday, Friday, and Sunday; and transaction log backups twice a day, at 6 AM and 6 PM. In this sequence, the database fails on Saturday at 4 PM. How do we restore this database with the available backups?

In a database failure situation, the first thing to do is take the tail-log backup with the NO_TRUNCATE option, if possible. So take the tail-log backup first.

Every restore sequence starts with a full backup. So start by restoring the full backup taken on Monday 10 PM with the NORECOVERY option. We have the latest differential backup, taken on Friday 10 PM; apply that backup, and we can omit the log backups taken after Monday 10 PM and before Friday 10 PM. After restoring this latest differential backup, we have to restore the log backup taken on Saturday 6 AM, the latest scheduled log backup before the failure. Now the database is current up to Saturday 6 AM. Finally, restore the tail-log backup taken after the failure at Saturday 4 PM with the RECOVERY option. Now the database is fully restored.
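The steps above can be sketched in T-SQL. This is only a sketch: the database name SalesDB and the backup file paths are hypothetical.

```sql
-- 1. Capture the tail of the log right after the failure (Saturday 4 PM).
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;

-- 2. Restore the Monday 10 PM full backup, leaving the database in the RESTORING state.
RESTORE DATABASE SalesDB FROM DISK = 'E:\Backups\SalesDB_full_mon.bak'
WITH NORECOVERY;

-- 3. Apply the latest differential (Friday 10 PM); intermediate log backups can be skipped.
RESTORE DATABASE SalesDB FROM DISK = 'E:\Backups\SalesDB_diff_fri.bak'
WITH NORECOVERY;

-- 4. Apply the last scheduled log backup (Saturday 6 AM).
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_log_sat6am.trn'
WITH NORECOVERY;

-- 5. Apply the tail-log backup and bring the database online.
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_tail.trn'
WITH RECOVERY;
```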

File or File Group Restore :

To restore a single file in a filegroup, or a complete filegroup, of a database, use the RESTORE command with the FILE or FILEGROUP option. All you need is an unbroken chain of log backups from the time the file or filegroup backup was made. Before applying the file or filegroup backup, take a transaction log backup. After restoring the file or filegroup, restore all the transaction log backups to synchronise that file or filegroup with the rest of the database.

Let us see an example. Suppose SecondaryFG is a filegroup of a database, backed up on Friday at 12 noon while the database is still in use. Transaction log backups of this database are scheduled at 10 AM, 11:30 AM, 1:00 PM, 2:30 PM, 4:00 PM, 5:30 PM, and so on. Note that the database is still in use, and changes are made in SecondaryFG and the other filegroups too. At 5:15 PM, a media failure corrupts SecondaryFG. Now we have to restore it. First take the tail-log backup, which contains all the log records after the 4:00 PM log backup, with NO_TRUNCATE and NORECOVERY; this puts the database into the restoring state so that no further modifications can be made after the failure. Now apply the filegroup backup that was taken at 12 noon, so that SecondaryFG has all the changes made up to 12 noon. Next, apply the transaction log backups in the order they were taken: the 1:00 PM backup first, the 2:30 PM backup second, and the 4:00 PM backup third. Now SecondaryFG is synchronised with the rest of the database files up to 4:00 PM. Finally, apply the tail-log backup taken after the failure to make SecondaryFG fully consistent with all the files of the database.
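As a sketch, the filegroup restore scenario above might look like this in T-SQL; the database name SalesDB and the backup paths are hypothetical.

```sql
-- 1. Tail-log backup after the 5:15 PM failure; NORECOVERY puts the database in the RESTORING state.
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;

-- 2. Restore the Friday 12 noon backup of the damaged filegroup.
RESTORE DATABASE SalesDB FILEGROUP = 'SecondaryFG'
FROM DISK = 'E:\Backups\SalesDB_SecondaryFG.bak'
WITH NORECOVERY;

-- 3. Roll forward with the log backups taken after 12 noon, in order.
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_log_1300.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_log_1430.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_log_1600.trn' WITH NORECOVERY;

-- 4. Finish with the tail-log backup and recover the database.
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_tail.trn' WITH RECOVERY;
```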

Page Restore:

Page restore is only possible for databases using the full or bulk-logged recovery model. All editions of SQL Server support offline page restore, whereas only SQL Server 2005 Enterprise Edition supports online page restore, performed while the database is online.

A page may be marked as a suspect page when a query, DBCC CHECKTABLE, or DBCC CHECKDB fails to access it. Every page in a database that is marked as suspect has an entry in the msdb..suspect_pages table. The event_type column of this table holds one of the following numbers: 1 for pages that hit an 823 or 824 error other than a bad checksum or torn page; 2 for a bad checksum; 3 for a torn page; and 4, 5, and 7 for pages that have since been restored, repaired, or deallocated. This table is limited in size, and if it is full, further errors cannot be logged in it. So it should be part of a DBA's routine to delete, at regular intervals, all the records in msdb..suspect_pages having event_type greater than or equal to 4.
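The routine checks described above might be sketched as follows; the queries assume the documented suspect_pages layout.

```sql
-- Pages currently flagged as suspect (event_type 1-3), with the file and page IDs
-- needed later for RESTORE ... PAGE.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
WHERE event_type <= 3;

-- Routine cleanup: remove rows for pages already restored, repaired, or deallocated.
DELETE FROM msdb.dbo.suspect_pages
WHERE event_type >= 4;
```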

Get the page ID and file ID of the pages to be restored from msdb..suspect_pages. Start the RESTORE with the full, file, or filegroup backup that contains the pages to be restored, using the PAGE option. Then apply the most recent differential backup, if available, and all subsequent log backups. Finally, take a new log backup and restore it, so that the restored pages are rolled forward to match the redo target LSN recorded in sys.master_files.
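A minimal page restore sequence, as a sketch: the database name SalesDB, the page IDs, and the backup paths are hypothetical.

```sql
-- Restore two damaged pages (file 1, pages 57 and 202) from a backup that contains them.
RESTORE DATABASE SalesDB PAGE = '1:57, 1:202'
FROM DISK = 'E:\Backups\SalesDB_full.bak'
WITH NORECOVERY;

-- Apply the most recent differential (if any) and all subsequent log backups WITH NORECOVERY.
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_log1.trn' WITH NORECOVERY;

-- Take a fresh log backup and restore it to roll the pages fully forward, then recover.
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_log_final.trn';
RESTORE LOG SalesDB FROM DISK = 'E:\Backups\SalesDB_log_final.trn' WITH RECOVERY;
```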

Limitations of Page Restore

· Only database pages can be restored, not log pages.

· The file boot page (page 0 of each file) and the database boot page (page 1:9) cannot be restored.

· GAM, SGAM, and PFS pages cannot be restored.

For more details see Books Online.

With this I conclude my Know The Transaction Log series. One thing was purposely omitted from this series of posts, point-in-time restore using RESTORE with the STOPAT option, to avoid an overdose.

Note: This post was originally published in my  SQLThoughts blog.

KNOW THE TRANSACTION LOG – PART- 3

This is the third article in the KNOW THE TRANSACTION LOG series. In Part 1 I explained the transaction log file and its behaviour. In Part 2 I explained the recovery models available in SQL Server 2005, which affect the behaviour of a database's transaction log file. In this Part 3, I am going to explain the various backup options available in SQL Server 2005, because backups are the backbone of restore recovery, and of course of a DBA too. :)

Backups in SQL Server 2005.

Two major categories of Backups are available in SQL Server. They are Data backup and Log Backup.

Data backup includes an image of one or more data files plus log record data. There are several types.

· Full Database Backup includes all data files in the database, which is the complete set of data. It also contains enough log records to allow the data to be restored consistently during restore recovery. This is called the base backup; every restore situation needs at least one base, a full backup. For small databases, performing a full backup takes a small amount of time and the backup occupies a small amount of disk space. As a database becomes larger, the full backup takes more time to finish, and so does the restore during recovery. For larger databases, take full backups along with supporting differential backups and transaction log backups to reduce backup and restore time and the associated system overhead.

When restoring a database from a full backup, SQL Server re-creates the database in one step. As full database backups include transaction log records, after the restore is over, all transactions that were uncommitted at the time the full database backup was taken are rolled back. So the restored database matches the original database as it was when it was backed up, minus the uncommitted transactions.

· Differential Backup : A differential backup of a database backs up only the data modified since the last base backup. It is small in size compared to a full database backup, so it obviously runs fast and saves backup time. The base of a differential backup is the most recent full backup, called the differential base; every differential backup is cumulative from that base, so only the most recent differential is needed at restore time. For a simple recovery model database there should be only one differential base; under the full recovery model multiple differential bases are allowed, but they are difficult to administer. For read-write, online databases, the sys.database_files catalog view returns information including three columns about the differential base: differential_base_lsn, differential_base_guid, and differential_base_time. For read-only databases, the sys.master_files catalog view should be used to get information about the differential base.

For large, mission-critical databases, take a full database backup and subsequent, frequent differential backups to avoid data loss. As the differential backup process takes less time to finish, the restore from it also takes less time.

When restoring from differential backups, the full backup should be restored first, and then only the most recent differential backup, even if multiple differential backups were taken between the full backup and the most recent differential. No log backups taken between the full backup and the differential backup need to be restored. If a tail-log backup was taken, it should be restored after restoring the differential backup.

· Partial Backup includes the primary filegroup and all read-write filegroups, and excludes read-only filegroups by default, although specified read-only filegroups can be included when taking the backup. This backup type is new to SQL Server 2005 and is different from a differential backup. It is designed to provide flexibility for databases using the simple recovery model. A partial backup of a read-only database contains only the primary filegroup's files. To create a partial backup we have to use the READ_WRITE_FILEGROUPS [ , <filegroup list> ] option in the T-SQL statement. Partial backups cannot be taken through SSMS, and maintenance plans also do not support them.

A partial backup can be the base for a differential partial backup. A differential partial backup backs up all the data extents modified after the base partial backup of the same set of filegroups was performed. It can be taken with the following command.

BACKUP DATABASE database_name READ_WRITE_FILEGROUPS [ , <file_filegroup_list> ] TO <backup_device> WITH DIFFERENTIAL

· File or Filegroup Backup includes only the specified files or filegroups. An individual file of a database can be backed up with this type of backup. It is very useful in failure situations where only one file of the database is damaged: we can restore that particular file instead of performing a full database restore, which can greatly reduce restore time. There are two types of file backups: full file backups and differential file backups. A full file backup of a database can be the base for a differential file backup. Performing a differential file backup gives an error if you changed a read/write file to read-only after taking the last full file backup. So whenever you change a read/write file to read-only, or a read-only file to read/write, take a new full file backup.
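The file and filegroup backups described above might look like this; the database name SalesDB, the logical file name SalesDB_Data2, the filegroup name SecondaryFG, and the paths are hypothetical.

```sql
-- Back up a single file of the database.
BACKUP DATABASE SalesDB FILE = 'SalesDB_Data2'
TO DISK = 'E:\Backups\SalesDB_Data2.bak';

-- Back up a whole filegroup.
BACKUP DATABASE SalesDB FILEGROUP = 'SecondaryFG'
TO DISK = 'E:\Backups\SalesDB_SecondaryFG.bak';

-- A differential file backup, using the full file backup above as its base.
BACKUP DATABASE SalesDB FILE = 'SalesDB_Data2'
TO DISK = 'E:\Backups\SalesDB_Data2_diff.bak'
WITH DIFFERENTIAL;
```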

An advantage of file backups is that recovery from damaged files, or from a file located on damaged media, is much faster, since only the damaged files need to be restored. The disadvantage is that maintaining a complete set of file backups can be more time consuming, and the complexity of the administrative task is increased.

A complete set of file or filegroup backups is equivalent to a full database backup. When performing filegroup backups for a full or bulk-logged recovery model database, do perform transaction log backups additionally.

· Transaction Log Backup includes only log records. For a full or bulk-logged recovery model database, regular transaction log backups are required; if they are not taken, the transaction log file grows continuously until the disk is full. A log backup can be performed with the following command.

BACKUP LOG <database name> TO <device name>

There is a special type of log backup, the tail-log backup. This log backup is taken immediately after a database failure, if the log disk is still accessible. It is done by including the WITH NORECOVERY option in the BACKUP LOG command. When you use this option, the database is put into the restoring state and goes offline, to guarantee that no modifications can be made after the tail-log backup finishes. After taking the tail-log backup you have to restore the database.
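A tail-log backup might be sketched as follows; SalesDB and the path are hypothetical, and NO_TRUNCATE is added so the backup succeeds even if the database is damaged.

```sql
-- Capture log records not yet backed up and put the database into the
-- RESTORING state so no further changes can occur.
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_tail.trn'
WITH NO_TRUNCATE, NORECOVERY;
```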

· Copy-Only Backup : This is a backup for special situations. It does not affect the regular SQL Server backup and restore sequences. After taking a copy-only backup the transaction log is not truncated. As the name implies, it only makes a copy, of either the full database or the log. It is performed by adding the WITH COPY_ONLY option to the BACKUP command.
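For example (SalesDB and the paths are hypothetical), a copy-only backup can be handed to a developer without disturbing the backup chain:

```sql
-- Ad hoc full backup that does not reset the differential base.
BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_copy.bak'
WITH COPY_ONLY;

-- Copy-only log backup: the log is read but not truncated.
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_log_copy.trn'
WITH COPY_ONLY;
```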

Backup History

The information about backup history is stored in the msdb database, and it is very useful for managing backups. The following system tables in the msdb system database store history information about backups.

1) backupfile stores a row for each data and log file in the database, including a column is_present that indicates whether that file was backed up as part of the backup set.

2) backupfilegroup stores a row for each filegroup in a database at the time of a backup, but this table does not indicate whether the filegroup was backed up. This table is new to SQL Server 2005.

3) backupset stores a row for each backup set; a new backup set is created for each backup event.

4) backupmediaset stores one row for each media set to which backup sets are written.

5) backupmediafamily stores one row for each media family; for a mirrored media set, it stores one row for each mirror in the set.
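These history tables can be queried directly. As a sketch (the database name SalesDB is hypothetical), the following lists recent backups of a database and where each one was written; in backupset, type D is a full database backup, I a differential, and L a log backup.

```sql
-- Recent backups of SalesDB, joined to the media family to show the device used.
SELECT bs.database_name, bs.type, bs.backup_start_date, bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = 'SalesDB'
ORDER BY bs.backup_start_date DESC;
```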

For more information about Backup History see Books Online.

In the next and final part, Part-4, I will explain about Restoring Database.


KNOW THE TRANSACTION LOG –PART 2

In my previous post, I wrote about restart recovery, which is done automatically by SQL Server 2005 at startup. There is another type of recovery, well known, commonly practiced, and manual: RESTORE RECOVERY.

Restore recovery is triggered manually by DBAs after data loss events, to bring the SQL Server database back to a particular working state. The data is recovered from a backup of the database taken and stored away on media, either tape or disk file.

Before looking further inside, let us discuss some basics of Backups.

What is the need of a Backup?

Backup is the backbone of any mission where critical data is involved. Even if you have a high availability system configured with a suitable RAID level in the disk subsystem, or fully redundant Storage Area Networks, and servers clustered for failover with Microsoft Cluster Services and SQL Server 2005 failover clustering, backups of mission-critical databases are still important, for many reasons. Suppose a developer executes a DELETE FROM query, forgetfully missing the WHERE clause, against a production server instead of the development server where he was supposed to execute it! This is one simple example; a lot of such situations may arrive to test your database administration abilities. You have to rely on your database backup.

In restore recovery, the backup is very important, but the behaviour of restore recovery is based on a database property called 'Recovery'. There are three recovery models available in SQL Server: SIMPLE, FULL, and BULK-LOGGED. When you create a database, the default value of this option is FULL. It can be changed with the ALTER DATABASE command and the SET RECOVERY option, for example: ALTER DATABASE mydb SET RECOVERY SIMPLE.
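As a quick sketch (the database name mydb is hypothetical), you can check and change the recovery model like this:

```sql
-- Check the current recovery model of every database on the instance.
SELECT name, recovery_model_desc FROM sys.databases;

-- Switch a database between the three models.
ALTER DATABASE mydb SET RECOVERY SIMPLE;
ALTER DATABASE mydb SET RECOVERY BULK_LOGGED;
ALTER DATABASE mydb SET RECOVERY FULL;
```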

Simple Recovery Model.

This model provides the simplest form of backup and minimises the administrative overhead for a DBA. When this recovery option is set on a database, its transaction log is not included in backups, and it is not possible to take a TRANSACTION LOG backup. SQL Server automatically truncates the transaction log at checkpoints, dropping the inactive log records and freeing up the space they used.

This model of recovery is advisable to

· The databases that are under development process.

· The databases that are mainly used for data warehouses.

· The databases that are used for read-only purposes.

Since no log backups are involved in the simple recovery model, the database can be restored only to the end of the most recent backup. So the work done after the last backup can be lost.

Simple recovery model has following restrictions.

1. Page restore cannot be done.

2. File restore and piecemeal restore are available only for read-only files.

3. Point-in-time restore is not available.

Full Recovery Model.

This is the default recovery model when you create a database in SQL Server 2005. This model provides full protection for the data and thus is the best option to prevent data loss. This recovery model relies fully on transaction log backups; to avoid data loss you have to back up the transaction log frequently along with the data backups. If you can take a transaction log backup after a failure, then you can restore the data to the point in time when the failure occurred.

As all activities, including bulk copy operations, SELECT INTO, and even CREATE INDEX, are fully logged in the transaction log file, and the log records are kept even after a data backup is taken, the transaction log file may grow very large on disk if you specified auto-grow when creating the database. This is one disadvantage of this recovery model, but it can be handled easily with close DBA attention. As storage grows large under this model, the restore time will also be relatively long. Each time a transaction log backup is performed, the inactive log records are truncated and the space used by them is freed for future use.

The following scenarios are highly suited to have a database with FULL Recovery Model.

1. If the database contains multiple filegroups or read-only file groups.

2. If you have efficient DBAs who can perform point-in-time recovery and individual page restores.

3. If you can tolerate the disk cost of a fast-growing transaction log.

Bulk-logged Recovery model.

This model is very similar to the full recovery model, except that it does not fully log bulk copy operations, SELECT INTO, CREATE INDEX, ALTER INDEX REBUILD, DROP INDEX, or the WRITETEXT and UPDATETEXT BLOB operations; these operations are minimally logged, which lets them run very fast. Transaction log records are still created for such operations, and the page extents affected by them are also recorded in the log, so log backups are still required to free up the inactive transaction log records. The bulk-logged recovery model does not support point-in-time restore. This model is very useful where frequent bulk copy operations take place: because they are minimally logged, the performance degradation due to bulk copy operations is avoided.

What is LOG Truncation?

If inactive log records are never deleted, the transaction log file will grow very large (depending, of course, on the file size specified in the CREATE DATABASE command), possibly until your disk drive is completely full. So inactive transaction log records should be deleted from time to time. Deleting all inactive transaction records from the transaction log file is called log truncation.

Log truncation occurs automatically

· for a simple recovery model database after a checkpoint occurs

· for the full and bulk-logged models, after a transaction log backup is taken, if a checkpoint has occurred since the previous backup.
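To see whether the log is being truncated as expected, and to truncate it for a full recovery model database, the following sketch can help; SalesDB and the path are hypothetical.

```sql
-- Log size and percentage used for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- For a full or bulk-logged database, truncation happens when the log is backed up.
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_log.trn';
```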

In the next part of this series, we will see the Backup Types and Restore Sequences.

 


Know the Transaction LOG - Part - 1

After I wrote about log shipping in my previous post, many questions were raised that urged me to share some knowledge about database logs in SQL Server 2005. As it is a huge concept to discuss, I planned to write a series of posts on database transaction logs.

What is Database LOG?

A database log, always called the transaction log, is a critical component of a database. It takes the form of one or more disk files, created with the CREATE DATABASE or ALTER DATABASE command. The transaction log is required to bring your database back to its last working state when a failure occurs.

Depending on the database's RECOVERY option, the transaction log records every database modification, including information about the pages being modified, the data values added or changed, and the start and end times of the modification, as a series of log records. So whenever 'undoing' or 'redoing' is required, SQL Server 2005 can do it to your database with the help of the transaction log.

What are Undo and Redo?

Before knowing these, it is necessary to know how a transaction works with modifications, or how SQL Server handles a transaction.

Whenever a data modification request is received by SQL Server, regardless of whether the transaction is explicit or implicit, all the underlying pages are loaded into the buffer cache. A series of log records is created for the transaction, including the page numbers on which the modifications are to be carried out and the before and after images of the modification. All these log records are internally linked together. Then the modifications take effect on the pages loaded in cache.

After the modifications are made in cache, if a rollback request for this particular transaction is received, then all the undo operations for the transaction are carried out from the transaction log records. This roll backward operation is called 'undoing'.

Suppose instead a commit request for this transaction is received. Then the log records are first written to the log disk files, before the data pages modified in cache are written to the data disk files. The buffer manager ensures this ordering: write the log records first, and then the data records, into the disk files. This mechanism is called 'Write-Ahead Logging' (WAL).

Suppose that after the log records are written to the log file or files, but before the modified data records are written to the data files, SQL Server stops due to some resource problem. Then, during the restart of SQL Server, it uses these log records to recover all the transactions that were marked as committed but not yet reflected in the data. This recovery is called restart recovery; it is always done for all the databases of an instance when that instance of SQL Server is restarted. The part of restart recovery that rolls transactions forward into the data files is called 'redoing'.

LOG Sequence Number (LSN)

Every log record in the transaction log is associated with an LSN, a Log Sequence Number. Every transaction is associated with a series of log records, every log record carries an LSN, and all related log records are linked backward for rollback operations. The LSN of a log record is unique, and it is a sequence number greater than the LSNs of earlier log records.

Every data page records in its header the LSN of the log record that last modified the page. Every log record associated with a page modification also holds that previous LSN along with the new LSN for the current modification. When a redo operation is carried out, SQL Server compares these LSN numbers: if the LSN of the page is already the higher one, the redo is skipped for that page.

Active LOG Records and Inactive Log Records

The records in log files are marked as two types, active and inactive. All the log records that are part of live transactions, which are not yet either committed or rolled back, are called active log records. The log records belonging to transactions committed or rolled back earlier are called inactive records. The redo and undo operations carried out by SQL Server work on the active portion of the log; inactive records are no longer needed for recovery and are eligible for truncation.

Virtual Log Files

The SQL Server Database Engine internally divides the physical log file into virtual log files (VLFs). The number and size of the virtual log files cannot be configured explicitly by the DBA; they are based on the initial size and auto-growth settings of the database's log file. The Database Engine tries to keep the number of virtual log files in a physical log file to a minimum. When a log file is first created, the number of VLFs may be 4 to 16, and it grows as the physical log file is enlarged. Performance will be slower if the number of virtual log files is high. SQL Server automatically creates VLFs during the expansion of the physical log file, so creating the log file with a considerable initial size and an adequately specified growth increment will reduce the number of virtual log files in the log. To view the virtual log files, use the undocumented command DBCC LOGINFO. The following is the DBCC LOGINFO output for one of the active databases on my development server. This log file has 16 virtual log files.

 

FileId  FileSize  StartOffset  FSeqNo  Status  Parity  CreateLSN
2       253952    8192         24      0       64      0
2       253952    262144       27      0       64      0
2       253952    516096       22      0       128     0
2       278528    770048       23      0       128     0
2       262144    1048576      25      0       64      24000000034800494
2       262144    1310720      26      0       64      25000000049500003
2       262144    1572864      28      0       64      27000000020000052
2       262144    1835008      29      0       64      28000000019100003
2       262144    2097152      30      0       64      29000000007300459
2       262144    2359296      31      0       64      29000000050700023
2       262144    2621440      32      0       64      31000000019800006
2       327680    2883584      33      0       64      32000000004600469
2       327680    3211264      34      0       64      33000000024100176
2       393216    3538944      35      0       64      34000000035400136
2       393216    3932160      36      0       64      35000000034400087
2       458752    4325376      37      0       64      36000000044200003

 

 

 

Checkpoints in Transaction Logs

Within the start and end of a complete transaction, we may issue the CHECKPOINT T-SQL statement to force the work done so far to be written to the disk files (savepoints, created with the SAVE TRANSACTION statement, are a related but different facility used for partial rollbacks). Checkpoints are also triggered internally by SQL Server itself. What does a checkpoint do?

A checkpoint is a SQL Server process that writes all modified data pages in the buffer cache to the disk files. It also forces any pending transaction log records into the log file. Performing checkpoints reduces the time taken by restart recovery, as it forces the transactions to be written to the log files and also writes the dirty pages to the disk files. This minimises the roll forward work of restart recovery.

The Checkpoint Operation involves following steps.

  • Writing out all dirty pages into Data disk files.
  • Writing a list of active transactions to Transaction log.
  • Storing check point log records to Transaction Log.

The scope of a checkpoint is the database level, so the checkpoint operation runs only for the current database.

Checkpoint occurs in the following cases.

  1. Whenever we issue the CHECKPOINT T-SQL command for the current database.
  2. Whenever SQL Server is shut down; this checkpoint runs for all the databases in the instance. The SHUTDOWN WITH NOWAIT option skips the checkpoint.
  3. Whenever data files are added to or removed from a database using the ALTER DATABASE command.
  4. When a bulk copy or SELECT INTO operation is performed in a database set to the Bulk-logged recovery model.
  5. When a database's recovery model is changed from Bulk-logged or FULL to SIMPLE.
  6. For a simple recovery model database, when the transaction log becomes 70 percent full.
  7. When the number of log entries exceeds the amount of work the server estimates it can redo within the 'recovery interval' configuration setting.
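Case 1 above can be sketched as follows; the database name SalesDB is hypothetical.

```sql
-- Issue a manual checkpoint in the current database.
USE SalesDB;
CHECKPOINT;

-- SQL Server 2005 also accepts an optional target duration, in seconds,
-- over which the checkpoint should be spread.
CHECKPOINT 10;
```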

I think I have covered almost every aspect of restart recovery, often called crash recovery. In the next part of this post, I will write about the other type of recovery - restore recovery.

 

This post was originally published in my SQLThoughts Blogger blog

Hey Standby Server ! Your (Transaction Log) Ship(ment) has arrived…!-(LOGSHIPPING)

With SQL Server as the database solution for your enterprise applications, you have little chance of losing your valuable data to a failure, because SQL Server 2005 offers a number of features for high availability and disaster recovery.

One such feature is log shipping. Although log shipping was available in SQL Server 2005's predecessors, it is more robust in SQL Server 2005. In SQL Server 2000, log shipping was available only in the Enterprise edition; in SQL Server 2005, it is available from the Standard edition upward.

All you require is a standby server with the same SQL Server configuration, and a tamper-proof network connection between the production server and the standby server.

How does LOGShipping work?

Before delving into LOGShipping, let us consider how we used to maintain a standby server against failures.

Normally, we back up our database to a device (either a tape or a disk file), move the backup to another server, and restore it there. In this situation the downtime of the database can be long, depending on how the standby (warm) server is set up and how frequently we do this manually. If the standby server is located near the primary production server, the job can be done frequently. But what happens if the standby server is located in another part of the world?

LOGShipping performs all the steps described above automatically. The SQL Server Agent service plays a vital role here.

At a specified interval, a SQL Server Agent job on the primary server backs up the transaction log of the database (for which log shipping is enabled) to a file and stores it in a specified network shared path. A job on the secondary server's SQL Server Agent then picks up the files from the network share in the order in which they were backed up, copies them to its own file system, and restores each transaction log backup to the standby server's database. For this purpose, the service account running the SQL Server Agent service on the secondary server must have read permission on the network shared path.

When setting up log shipping in a domain scenario, where the primary and standby servers are members of the same domain, it is advisable to use the same domain user account for both servers' SQL Server Agent services, so that no conflicts arise, and to give that account read/write permission on the shared network folder. In a non-domain scenario, you have to adopt a workaround: create a user account with the same name and password on both the primary and the secondary server, make it a member of the SQL Server Agent group, and give it full control over the network shared path where the primary server database's transaction log backups are stored.

Steps Involved in Logshipping.
1. Backup the primary database in to a device.
2. Restore the backup in the secondary Server with NORECOVERY option.
(Note: Transport the first backup manually. If your log file is very large, change the recovery model of the database to Simple and take the backup; that will reduce the size of your log. After taking the backup, change the recovery model back to Full, because log shipping can only be set up for a database in the Full or Bulk-Logged recovery model. The following steps are handled by the scheduled jobs.)
3. Backup the Transaction Log of the Primary Database.
4. Copy the Transaction Log to Secondary Database.
5. Apply the Transaction Log To Secondary Database.
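The five steps above can be sketched in T-SQL as follows (the database, server, and share names are hypothetical; in a real setup, steps 3 to 5 are generated for you by the log shipping wizard's Agent jobs):

```sql
-- Steps 1 and 2: initialize the secondary once, manually.
BACKUP DATABASE SalesDB
    TO DISK = N'\\FileServer\LogShipShare\SalesDB_full.bak';
-- Run on the secondary server; NORECOVERY leaves it ready for more logs.
RESTORE DATABASE SalesDB
    FROM DISK = N'\\FileServer\LogShipShare\SalesDB_full.bak'
    WITH NORECOVERY;

-- Steps 3 to 5: repeated on schedule by the Agent jobs.
BACKUP LOG SalesDB
    TO DISK = N'\\FileServer\LogShipShare\SalesDB_0101.trn';
-- Run on the secondary after the copy job pulls the file locally.
RESTORE LOG SalesDB
    FROM DISK = N'C:\LogShipCopy\SalesDB_0101.trn'
    WITH STANDBY = N'C:\LogShipCopy\SalesDB_undo.dat'; -- or WITH NORECOVERY
```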
Steps to be followed in Failover
1. If possible, back up the transactions made on the primary (failed) server after the last log shipping schedule.
2. Apply to the secondary database.
3. Synchronize the users related to the database on the secondary server.
4. Reconfigure the server as Production Server.
PROS & CONS of LOGShipping
Pros are :
o Easy to implement.
o Easy to maintain.
o It is more reliable.
o Multiple standby servers can be configured.
o Standby server databases can be used for reporting purposes, minimizing the workload of the primary server.

Here are some cons too…
o No automatic failover (database mirroring, another feature of SQL Server 2005, supports automatic failover).
o Manual failover requires a skilled person to be on hand during failover.
o At least some minimal data loss, depending on how frequently your logs are being shipped.

Some last considerations….

All the steps in LOGShipping are carried out by SQL Server Agent jobs that run on a schedule, and the failover is manual (the main drawback of LOGShipping), so keep the log shipping interval short to minimize the amount of data lost during a failover.

Give special consideration to synchronizing the user logins on both servers (primary and secondary) for failover situations, so that you can switch the standby server into production for all of your client applications simply by changing the name, the IP, or both, and then troubleshoot the faulty server. If the troubleshooting succeeds, you can make that server the new standby and start log shipping from the new production server. Synchronizing users on the secondary server may cause SID conflicts, so do it carefully.
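One common symptom of unsynchronized logins is orphaned database users after failover. As a sketch (the user name 'AppUser' is hypothetical), SQL Server 2005 can report and re-map them:

```sql
-- List database users whose SIDs do not match any login on this server.
EXEC sp_change_users_login 'Report';
-- Re-map an orphaned user to the existing login of the same name.
EXEC sp_change_users_login 'Auto_Fix', 'AppUser';
```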

If you have enough infrastructure, it is recommended to have another server act as a monitor server, which takes part in the LOGShipping operation and tracks the status and statistics of LOGShipping.

After setting up LOGShipping, you can monitor the operation through three tools available in SQL Server 2005.
1. The TRANSACTION LOG SHIPPING STATUS report - a built-in report available among the SSMS standard reports.
2. The SQL Server log files.
3. The SQL Server Agent's job history.

For further studies:

1. Read Chapter 27: Log Shipping and Database Mirroring, from the book ‘MICROSOFT SQL SERVER 2005 – Administrator’s Companion’ by Edward Whalen, Marcilina Garcia, and others - MS PRESS.
2. Refer Books Online.

 

NOTE : This post was originally posted in  my  SQLTHOUGHTS Blogger Blog.

Database Snapshots in SQL Server 2005.

An interesting new feature available in SQL Server 2005 is database snapshots. (Of course, it is available in the Enterprise Edition only.)

A database snapshot is a point-in-time, read-only, virtual copy of a source database; that is, a snapshot of a particular source database as it existed at a particular time. It is most useful when your database contains historical data such as quarterly sales or year-wise employee performance, and the snapshots can be used for reporting purposes. Multiple snapshots can be created from a single source database at different points in time.


When you create a snapshot of a source database, SQL Server creates an empty sparse file (an NTFS file). If there are uncommitted transactions at that moment, they are not copied to the snapshot; the page images from before those transactions are what the snapshot presents. The NTFS sparse file holds no user data at the time of creation.
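Creating a snapshot is a single CREATE DATABASE statement; a minimal sketch, assuming a hypothetical source database SalesDB with one data file:

```sql
-- Every data file of the source must be listed, each mapped to a
-- sparse file that will back the snapshot.
CREATE DATABASE SalesDB_Snap_Q4 ON
    ( NAME = SalesDB_Data,   -- logical name of the source data file
      FILENAME = N'C:\Snapshots\SalesDB_Data_Q4.ss' )
AS SNAPSHOT OF SalesDB;
GO
-- The snapshot is then queried like any read-only database.
SELECT COUNT(*) FROM SalesDB_Snap_Q4.dbo.Orders;
```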


Whenever the source database is modified after the snapshot is created, a Copy-On-Write operation takes place for every snapshot! When it creates an NTFS sparse file, SQL Server also creates a bitmap for it, with one bit per page of the source database, recording whether that page has been copied to the snapshot. When an update of a page is in progress, SQL Server checks this bit, and if the page has not yet been copied, it copies the original page to the snapshot(s) first; this is the Copy-On-Write operation. A read from the snapshot checks the same bit to decide whether to read the page from the snapshot or from the source database. The bitmap is kept in cache, so it is always available until SQL Server shuts down or the database is closed; if either of these happens, the bitmaps have to be reconstructed when the database starts up.


Snapshots cannot be backed up or restored. Moreover, if any snapshots exist for a particular database, that database cannot be dropped. Snapshots cannot be attached or detached.
As for security, all the security constraints of the source are inherited by the snapshot. If you later drop a user from the source database, it is not dropped from the snapshot!

Need more about database snapshots?


 
.NET CLR Integration in SQL Server 2005

Even though there are many differing opinions about using CLR-based code inside SQL Server 2005, this feature of SQL Server provides a rich programming environment for both developers and DBAs. Because the code is written in the traditional way (with all of the language features) and is integrated into a highly secured host like SQL Server, this environment lets us create safe, secure, scalable, and feature-rich stored procedures, UDFs, UDTs, triggers, and user-defined aggregates.

Prior to SQL Server 2005, developers put their complex logic in COM objects and called those COM objects through OLE Automation in SQL Server (using the sp_OA* extended stored procedures). SQL Server 2005 introduced error handling with the TRY..CATCH block, but it is still susceptible to untrappable errors, whereas we can handle these with the structured error handling methods available in .NET languages. (http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/SQLCLRforDBAs.doc)

Further advantages of using SQL CLR integration instead of XPROCs are:

  • Because the CLR requests memory from SQL Server, not directly from Windows, there are no managed user-code memory leaks that make SQL Server slow or hang.
  • Because the CLR is integrated within SQL Server, CLR code runs inside SQL Server and gains a safe and secure environment from both SQL Server security and the .NET Framework's security.
  • This also keeps SQL Server safe from crashes caused by user-code access violations.
  • Keeping complex logic in the data layer itself reduces the high network cost of marshaling data to a COM server (in the case where distributed COM was in use).
By default, after the installation of SQL Server, this feature is disabled. We have to enable it if we want to use it. Use the Surface Area Configuration for Features tool to enable it, or use the following code.

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO

Once you have configured your server to enable this feature, your .NET code can be integrated with SQL Server.

Assemblies

Managed code is compiled and deployed in units called assemblies. A package built in a .NET language is either an .exe or a .dll; SQL Server supports .dll assemblies. You first have to register an assembly in SQL Server using CREATE ASSEMBLY before its functionality can be used.

The CREATE ASSEMBLY T-SQL statement registers an assembly in SQL Server. Using the WITH PERMISSION_SET clause, you can specify the security permissions of the assembly. The permission set may be SAFE, EXTERNAL_ACCESS, or UNSAFE; the default is SAFE.
Books Online says:

“To create an EXTERNAL_ACCESS or UNSAFE assembly in SQL Server, one of the following two conditions must be met:
The assembly is strong name signed or Authenticode signed with a certificate. This strong name (or certificate) is created inside SQL Server as an asymmetric key (or certificate), and has a corresponding login with EXTERNAL ACCESS ASSEMBLY permission (for external access assemblies) or UNSAFE ASSEMBLY permission (for unsafe assemblies).
The database owner (DBO) has EXTERNAL ACCESS ASSEMBLY (for EXTERNAL ACCESS assemblies) or UNSAFE ASSEMBLY (for UNSAFE assemblies) permission, and the database has the TRUSTWORTHY database property set to ON.”

After an assembly is registered in SQL Server, it can be used.
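As a sketch of that registration step (the assembly, class, and method names here are hypothetical), an assembly is registered and one of its static methods exposed as a stored procedure like this:

```sql
-- Register the compiled .dll with the default SAFE permission set.
CREATE ASSEMBLY MyUtilities
FROM N'C:\Assemblies\MyUtilities.dll'
WITH PERMISSION_SET = SAFE;
GO
-- Bind a T-SQL procedure to a static method in the assembly:
-- assembly_name.[namespace.class_name].method_name
CREATE PROCEDURE dbo.usp_HelloClr
AS EXTERNAL NAME MyUtilities.[MyUtilities.StoredProcedures].HelloClr;
GO
```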

To monitor and manage CLR objects, we can use the CLR catalog views and the CLR-related DMVs and DMFs.
Catalog Views

The sys.assemblies catalog view returns one row per assembly registered in SQL Server.
The sys.assembly_files catalog view returns one row per file for all the files that make up each assembly.
The sys.assembly_references catalog view returns one row for each pair of assemblies in which one directly references the other.

Dynamic Management Views and Functions

sys.dm_clr_appdomains : Returns a row for each application domain in the server

sys.dm_clr_loaded_assemblies: Returns a row for each managed user assembly loaded into the server address space

sys.dm_clr_properties: Returns a row for each property related to SQL Server common language runtime (CLR) integration, including the version and state of the hosted CLR

sys.dm_clr_tasks : Returns a row for all common language runtime (CLR) tasks that are currently running
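A quick way to put the catalog views and DMVs above to work (no names here beyond the views themselves):

```sql
-- Registered assemblies, their permission sets, and their files.
SELECT a.name, a.permission_set_desc, f.name AS file_name
FROM sys.assemblies AS a
JOIN sys.assembly_files AS f
    ON f.assembly_id = a.assembly_id;

-- Managed assemblies currently loaded in the server process.
SELECT * FROM sys.dm_clr_loaded_assemblies;
```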

For Further Studies Read
  1. Books Online : http://msdn2.microsoft.com/en-us/library/ms131102.aspx
  2. Blogs : http://blogs.msdn.com/sqlclr/default.aspx

 

Posted: Jan 31 2008, 04:44 PM by sureshbarathan | with no comments