25 July 2007 09:16 tonyrogerson

SQL 2000 - Yes! Lots of VLF's are bad! Improve the performance of your triggers and log backups on 2000

A transaction log is broken up into VLFs (Virtual Log Files – not 'Fragments' as I originally stated, thanks Kalen); I'll call them fragments from here on so they don't get confused with the log file itself. These fragments vary in size, and in SQL Server 2000 their number and size can have a big impact on the performance of various operations that involve the transaction log – specifically triggers and log backups, which is what I cover here. SQL Server 2005 appears to have fixed the performance problem that exists in SQL 2000.

 

By default the autogrow increment for SQL 2000 is 10% and the initial log size is 1MB, which means that if your transaction log starts to grow you will end up with a lot of VLFs. You can use the DBCC LOGINFO command to see how many log fragments you have.
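If you want an actual count rather than eyeballing the output, you can capture DBCC LOGINFO into a temporary table. This is only a minimal sketch: DBCC LOGINFO is undocumented, the #loginfo table and column names are my own, and the layout assumed below is the seven-column result set that SQL 2000/2005 return.

-- Count the VLFs in the current database (assumes the SQL 2000/2005
-- seven-column DBCC LOGINFO layout)
CREATE TABLE #loginfo (
      FileId      int,
      FileSize    bigint,
      StartOffset bigint,
      FSeqNo      int,
      Status      int,
      Parity      int,
      CreateLSN   numeric(25, 0)
)

INSERT #loginfo
      EXEC( 'DBCC LOGINFO' )

SELECT COUNT(*) AS vlf_count,
       SUM( CASE WHEN Status = 2 THEN 1 ELSE 0 END ) AS active_vlf_count
FROM #loginfo

DROP TABLE #loginfo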

 

The script below can be used to highlight the performance difference caused by an increased number of log fragments; note that in a real system the differential will probably be even bigger because of concurrent access to the transaction log.

 

These are the results I received; I'm only showing SQL 2000 because on 2005 there was no difference:

 

                          Many VLFs     Few VLFs

Backup transaction log:   27 seconds    22 seconds

Update with triggers:     98 seconds    28 seconds

 

As you can see, even on this small amount of data and with a single user, there is a big difference in performance, especially for the trigger test.

 

My recommendation to anybody still on SQL 2000 is to a) size your log properly, which means according to your load (you should already have this information), and b) fix autogrowth: don't use 10%, use something sensible. A sketch of both follows.
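For example, something along these lines pre-sizes the log and swaps the 10% default for a fixed growth increment. This is a minimal sketch only: MyDatabase and MyDatabase_log are hypothetical names and the sizes are purely illustrative, so pick values that match your own load. It is split into two statements to keep to one file property change per MODIFY FILE.

-- Minimal sketch (hypothetical names and illustrative sizes): pre-size the
-- log and replace the default 10% growth with a fixed increment
ALTER DATABASE MyDatabase
      MODIFY FILE ( NAME = N'MyDatabase_log', SIZE = 2GB )
GO

ALTER DATABASE MyDatabase
      MODIFY FILE ( NAME = N'MyDatabase_log', FILEGROWTH = 200MB )
GO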

 

One thing to watch on both SQL 2000 and SQL 2005, which makes it even more important to size your log properly so you don't suffer autogrowth, is that while the log is 'growing' it is essentially locked; any processes trying to do insert/update/delete activity will block until the growth has completed. Yes, this is true for SQL 2005 as well, because instant file initialisation does not work for the log file – it must initialise the log structure.

 

If you find you have many VLFs then you can fix the problem by doing this (it may take a few goes because of the placement of the active portion of the log, which is denoted by a Status of 2 in the results of DBCC LOGINFO):

 

BACKUP LOG...

DBCC SHRINKFILE( 2 )

 

Now – size your log properly.

 

Although this can be done whenever, it's best done when nothing is using the log.
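Put together, the whole fix looks something like the sketch below. Again this is illustrative only: MyDatabase, the backup path and the final size are placeholders, and the file id of 2 assumes the log is the second file in the database (check sysfiles / sys.database_files if you're not sure).

-- Rough sketch of the whole fix; repeat the BACKUP LOG / DBCC SHRINKFILE pair
-- if the active portion (Status = 2 in DBCC LOGINFO) is still near the end
-- of the file
BACKUP LOG MyDatabase TO DISK = 'C:\MyDatabase.TRN' WITH INIT
GO

USE MyDatabase
GO

DBCC SHRINKFILE( 2 )        -- 2 = file id of the log file
GO

DBCC LOGINFO                -- confirm the VLF count has come down
GO

-- now grow the log back to its proper size in one step
ALTER DATABASE MyDatabase
      MODIFY FILE ( NAME = N'MyDatabase_log', SIZE = 2GB )
GO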

 

BTW, one thing I nearly forgot to mention: with autogrowth you can also get OS-level file fragments, so a single log file can be scattered across your disk in hundreds of places and out of order, which is bad for something that does sequential processing! Another good reason to size your log properly to start with.

 

USE master
GO

-- drop the demo database if it is left over from a previous run
IF DB_ID( 'VLFDemo' ) IS NOT NULL
      DROP DATABASE VLFDemo

GO
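-- 'Many VLFs' configuration: a 1MB log growing in 1MB increments, so the
-- insert activity further down creates a large number of VLFs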

 

CREATE DATABASE [VLFDemo] ON  PRIMARY

      ( NAME = N'VLFDemo',

        FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\VLFDemo.mdf' ,

        SIZE = 10MB ,

        FILEGROWTH = 10%)

 LOG ON

      ( NAME = N'VLFDemo_log',

        FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\VLFDemo_log.ldf' ,

        SIZE = 1MB ,

        FILEGROWTH = 1024KB )

GO
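-- 'Few VLFs' configuration: comment out the CREATE DATABASE above and use
-- this one instead to re-run the test with the log pre-sized to 1GB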

--CREATE DATABASE [VLFDemo] ON  PRIMARY

--    ( NAME = N'VLFDemo',

--      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\VLFDemo.mdf' ,

--      SIZE = 300MB ,

--      FILEGROWTH = 10%)

-- LOG ON

--    ( NAME = N'VLFDemo_log',

--      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\VLFDemo_log.ldf' ,

--      SIZE = 1GB ,

--      FILEGROWTH = 10MB )

--GO

 

ALTER DATABASE VLFDemo SET RECOVERY FULL

GO

 

BACKUP DATABASE VLFDemo TO DISK = 'C:\VLFDemo.BAK' WITH INIT

GO

 

USE VLFDemo

GO

 

--    Show VLF's

DBCC LOGINFO

GO

 

--    Create some work

--DROP TABLE my_table

--GO

 

CREATE TABLE my_table (

      id                int               not null identity primary key clustered,

      some_data   char(1024)  not null

)

GO
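-- insert 200,000 x 1KB rows; with the tiny autogrowing log this drives many
-- log growths and therefore many VLFs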

 

SET NOCOUNT ON

DECLARE @i int

SET @i = 1

 

WHILE @i <= 200000

BEGIN

      INSERT my_table ( some_data ) VALUES( CAST( @i AS char(1024) ) )

      SET @i = @i + 1

 

END

GO
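-- show how many VLFs the insert activity has created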

 

DBCC LOGINFO

GO

 

BACKUP LOG VLFDemo TO DISK = 'C:\VLFDemo.TRN' WITH INIT

GO
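-- trigger that rewrites each updated row again, so every UPDATE in the timed
-- loop below generates additional log records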

 

CREATE TRIGGER trg_my_table_modify ON my_table FOR UPDATE

AS

BEGIN

      UPDATE mt

            SET some_data = RTRIM( d.some_data ) + RTRIM( i.some_data )

      FROM deleted d

            INNER JOIN inserted i ON i.id = d.id

            INNER JOIN my_table mt ON mt.id = i.id

 

END

GO
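-- timed test: 200,000 single-row updates, each firing the trigger, committing
-- every 100 rows; elapsed seconds are printed at the end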

 

SET NOCOUNT ON

DECLARE @i int

SET @i = 1

DECLARE @now datetime

SET @now = CURRENT_TIMESTAMP

 

BEGIN TRAN

 

WHILE @i <= 200000

BEGIN

 

      IF @i % 100 = 0

      BEGIN

            COMMIT TRAN

            BEGIN TRAN

 

      END

 

      UPDATE my_table

            SET some_data = 'X'

      WHERE id = @i

 

      SET @i = @i + 1

 

END

 

COMMIT TRAN

 

PRINT DATEDIFF( second, @now, CURRENT_TIMESTAMP )

GO

 


Comments

# Server level VLF report

25 July 2007 23:44 by Andrew Calvett

I read Tony Rogerson's blog on Virtual Log Files today and it reminded me that I really should knock...

# Un'altro motivo per passare a SQL Server 2005

26 July 2007 09:15 by Impedance Mismatch

We all know that the default file autogrow settings are really far, far too low

# re: SQL 2000 - Yes! Lots of VLF's are bad! Improve the performance of your triggers and log backups on 2000

29 August 2007 13:58 by TheSQLGuru

I have a client that I KNOW will benefit from this!  Thanks for the info.  Also, about your OS file fragmentation issue: I have a client where, when I first got there to do a performance review, I checked for that.  They had over 300,000 mdf/ldf file fragments!!  After defrag they had an 18% improvement in throughput!!!

# re: SQL 2000 - Yes! Lots of VLF's are bad! Improve the performance of your triggers and log backups on 2000

11 October 2007 14:07 by BigRed

When you say lots of VLFs, can you suggest an order of magnitude? I have seen an article stating that you get 4 - 16 VLFs initially depending on the size of the Log but BOL just says you get 2 or more. I have seen several of our larger databases with hundreds of VLFs, one with thousands.

# Numero elevato di vlf | hilpers

21 January 2009 15:09 by Numero elevato di vlf | hilpers

Pingback from  Numero elevato di vlf | hilpers

# Performance impact: a large number of virtual log files – Part I

09 February 2009 17:28 by Linchi Shea

It is generally known that having a large number of virtual log files (VLFs) in a database transaction

# SQL Server 2000, large transaction log, almost empty, performance issue? - Admins Goodies

Pingback from  SQL Server 2000, large transaction log, almost empty, performance issue? - Admins Goodies

# Microsoft – SQL Server – Virtual Log File (VLF) | Daniel Adeniji's Trail

Pingback from Microsoft – SQL Server – Virtual Log File (VLF) | Daniel Adeniji's Trail

# Must-Have Resources - SQL Server Backup & Recovery | Kevin E. Kline

Pingback from  Must-Have Resources - SQL Server Backup & Recovery | Kevin E. Kline