24 August 2006 11:28 tonyrogerson

Put TEMPDB on a RAM DRIVE; Short lived # tables - do they get written to disk?

Grab your attention? This entry came out of trying to reproduce PFS / GAM / SGAM contention, putting some science around seeing it and around what the benefit of using multiple files or trace flag 1118 really is. I got half way through writing that and then realised I had behaviour in tempdb that went against how I thought writing to tempdb worked – probably a throwback to 6.5 days.

 

Look at the script below: it creates a temporary table, populates it with about 100MBytes of data, drops it, and then repeats that logic for 5 minutes.

 

set nocount on

declare @i int
set @i = 1
declare @st datetime
declare @st2 datetime
set @st = current_timestamp

-- repeat the create / populate / drop cycle for 5 minutes
while current_timestamp <= dateadd( minute, 5, @st )
begin
      create table #t ( mycol char(8000) not null )

      -- populate for 5 seconds; char(8000) means one row fills an 8KB page
      set @st2 = current_timestamp
      while current_timestamp <= dateadd( second, 5, @st2 )
      begin
            insert #t values( 'abc' )
            set @i = @i + 1
      end

      drop table #t
end

print 'iterations ' + cast( @i as varchar(20) )

 

Running this on a machine that is doing nothing other than the script above and that has 512MBytes allocated to SQL Server, what do you think happens in tempdb? The LDF (transaction log) is written with ‘undo only’ records – there is no redo on tempdb because it’s cleared down on SQL Server restart – but what about the data files? Do you think the data files get written to as well, even for this short-lived temporary table?

 

The answer is: it depends. In my tests, if the cumulative size of the data files for tempdb is larger than the available data cache then SQL Server will write to the data files, even though logically we don’t want it to.
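
A quick sanity check on whether your tempdb data files even fit in cache is to list their sizes from sysfiles and compare the total with the memory you’ve given SQL Server; a minimal sketch (sysfiles stores size in 8KB pages, and log files have groupid = 0):

use tempdb

-- data files only (log files have groupid = 0);
-- size is in 8KB pages, so divide by 128 for MBytes
select name, size / 128 as size_mb
from sysfiles
where groupid <> 0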

 

I was expecting it to find and reuse the extents that had just been freed up by the DROP TABLE, but that doesn’t happen. If you use DBCC EXTENTINFO( 2 ) to look at what extents are being allocated you will see that allocation progresses through the database until it runs out of free extents, then round robins back through the used and freed pages and starts reusing them – which is why, in that particular scenario, we won’t get the writes out to the data files: the LAZYWRITER never gets a chance to flush the pages out to disk.
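
You can watch that allocation pattern for yourself – EXTENTINFO is an undocumented DBCC command, so the exact output is not guaranteed, but run it repeatedly while the script executes and you should see the page_id values climb through the file and then wrap back once the free extents run out:

DBCC EXTENTINFO( 2 )    -- 2 is the database id of tempdb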

 

How can you tell if SQL Server is writing out to the data files? Run this:

 

-- first parameter is the database id (2 = tempdb), second is the file id
select * from ::fn_virtualfilestats( 2, 2 )
union all
select * from ::fn_virtualfilestats( 2, 1 )

 

FileId 2 is the log; FileId <> 2 are the data files. Check the NumberWrites column: if it is increasing at a high rate then you’ve got a lot of writes going out to the data files in tempdb, and tempdb is probably a bottleneck that needs fixing. Remember, the only difference between tempdb and any other application database is that tempdb does not log redo records; if you have a heavily used tempdb – it might even be more write intensive than your application database because of ORDER BY, hashing, CREATE TABLE #, table variables etc. – you probably need to get it off onto its own disk and maybe even separate the tempdb log and data files.
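
To put an actual rate on it, here is a minimal sketch that samples NumberWrites for one of the tempdb data files ten seconds apart; the file id (1) and the 10-second window are just illustrative:

declare @w1 bigint, @w2 bigint

-- snapshot the write counter, wait, snapshot again
select @w1 = NumberWrites from ::fn_virtualfilestats( 2, 1 )
waitfor delay '00:00:10'
select @w2 = NumberWrites from ::fn_virtualfilestats( 2, 1 )

print 'writes per second to file 1: ' + cast( ( @w2 - @w1 ) / 10 as varchar(20) )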

 

There is another way. Remember that tempdb is recycled every time SQL Server is restarted, and with that in mind the old tempdb-in-RAM feature springs to mind; that feature was removed many releases ago, but my feeling is that some systems that make heavy use of tempdb will benefit from a RAM drive.

 

I’ve done some tests using a software RAM drive (http://www.amtsoftware.com/Ramdisk-Plus/) and the results are outstanding. Installation is easy – it took minutes on my Windows 2003 Server R2 64-bit OS. You create the drive and format it using their tool, and the drive is then seen as a local drive (try exec master..xp_fixeddrives).

 

Another method is to use Gigabyte’s i-RAM, which looks pretty cool. It’s a PCI card with 4 banks of DDR RAM that can hold up to 4GB; it uses the SATA interface, so it just looks like another hard drive to your system. The benefit is that seek times are at memory speed (nanoseconds rather than milliseconds) and the MBytes/second throughput is high. Due to dosh I haven’t evaluated it – the card costs £93 from Scan and you could fill it with 4GBytes of RAM for about another £200.

 

The tests were run on SQL 2000 Standard SP4 with hotfix 2187, and trace flag 1118 was applied so as not to use mixed extents; I basically ran the script shown earlier on 10 concurrent connections. It’s probably worth noting that the disk is a 74GB SATA 1.5Gbit 10Krpm drive and the memory is DDR2 800 on a motherboard supporting dual-channel DDR; it’s also probably worth noting that no animals were hurt whilst conducting these tests – only my head against a brick wall…

 

Tempdb was configured as below. When doing the RAM drive test I kept one of the files on disk (the 1MByte one) so that if pressure were put on tempdb it could still autogrow successfully – albeit to disk – and your application wouldn’t fail. Why only 1MByte? Because of the round-robin write algorithm I describe in my other blog entry.

 

Tempdev      200MBytes    No autogrowth

Tempdev2     200MBytes    No autogrowth

Tempdev3     200MBytes    No autogrowth

Tempdev4     1MByte       Autogrowth 10MBytes

Templog      50MBytes     Autogrowth 10MBytes
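
For reference, a minimal sketch of how you would arrive at that layout – the RAM drive letter r: and the tempdev4 file name are my assumptions, so adjust to your own environment; note that SQL 2000 only lets you change one file property per MODIFY FILE, and FILENAME changes only take effect after a restart:

-- fixed-size data file destined for the RAM drive (repeat for tempdev2 / tempdev3)
ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, FILENAME = 'r:\tempdev.mdf' )
ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, SIZE = 200MB )
ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, FILEGROWTH = 0 )

-- the 1MByte overflow file stays on disk and can autogrow if tempdb fills
ALTER DATABASE tempdb
      ADD FILE ( NAME = tempdev4, FILENAME = 'f:\sql2000\tempdev4.ndf',
                 SIZE = 1MB, FILEGROWTH = 10MB )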

 

Results

 

Tempdb: 4 data, 1 log files on disk                            130,520 iterations

Tempdb: 3 data, 1 log files in RAM, 1 data on disk           1,587,359 iterations

 

Bit of a difference there!

 

To move tempdb to another drive, run this and restart SQL Server…

 

ALTER DATABASE tempdb
      MODIFY FILE ( NAME = templog , FILENAME = 'f:\sql2000\templog.ldf' )

ALTER DATABASE tempdb
      MODIFY FILE ( NAME = tempdev , FILENAME = 'f:\sql2000\tempdev.mdf' )

ALTER DATABASE tempdb
      MODIFY FILE ( NAME = tempdev2, FILENAME = 'f:\sql2000\tempdev2.mdf' )

ALTER DATABASE tempdb
      MODIFY FILE ( NAME = tempdev3, FILENAME = 'f:\sql2000\tempdev3.mdf' )

ALTER DATABASE tempdb
      MODIFY FILE ( NAME = tempaug, FILENAME = 'f:\sql2000\tempaug.mdf' )

 

How does this translate into your own environment? Check your tempdb usage by monitoring ::fn_virtualfilestats( 2, <fileid> ) and by checking tempdb sizing. What are the IO characteristics of your applications? Does your existing machine have enough memory for a RAM drive, or a spare PCI slot for the Gigabyte i-RAM approach? Unfortunately, without seeing and understanding your SQL Server system I can’t comment, but if you need some consultancy you know where I am ;).

 

I’ll get back to the PFS / GAM / SGAM contention thing sometime in the next couple of days – got to get back to client work…

 

 


Comments

# re: Put TEMPDB on a RAM DRIVE; Short lived # tables - do they get written to disk?

25 August 2006 10:53 by Colin Leversuch-Roberts

I think tempdb is much misunderstood. The only question I have (saves me spending hours looking!): if a tempdb object (on disk or in a RAM disk) is being accessed a number of times, won't it be put into data cache? And can you guarantee to spill a RAM-disk tempdb back onto disk if it gets full? 4GB is a fair amount of space, but I've seen bigger tempdbs! I'm interested in your tests with GAM/SGAM contention, however - I've tried to simulate contention but can't seem to run enough stuff quickly enough.

# re: Put TEMPDB on a RAM DRIVE; Short lived # tables - do they get written to disk?

25 August 2006 12:58 by tonyrogerson

Yes, the pages are in the data cache. The problem is that even if you drop the temporary table, because you've touched (used) those pages in cache, if they don't get reused in time they still get written out to the data files (MDF, NDFs) - mad, I know. Your next CREATE TABLE # doesn't use the pages you've just freed up with the DROP TABLE # unless the buffer pool round robins back to reuse them before the lazywriter writes them off to disk (even though they are free).

It's probably not that clear but to get round filling the RAM drive up you basically do this...

Create multiple data files of equal size (e.g. 300MBytes) with NO autogrowth on the RAM drive, then create a single file on the disk but make this 1MByte in size with autogrowth of, say, 10MBytes; the round-robin write algorithm balances writes based on the amount of free space in the files, so the single file won't get written to unless the others fill up - clever!

The log is a different beast, but I can't see why you'd get a big log file on tempdb because only undo stuff is logged. Anyway, you can use the same trick, but it's not round robin for the log: SQL Server writes to file 1 and, when that's full, writes to file 2. So if file 1 is on the RAM drive you'll get good performance until it fills and writing moves to file 2, which is on the disk.

Trace flag 1118 makes a good positive difference in my own tests on the contention stuff, but I'll write it up next week...

Tony.

# re: Put TEMPDB on a RAM DRIVE; Short lived # tables - do they get written to disk?

25 August 2006 14:12 by beckylou

There is one question I have regarding the RAM files: why have 3 x 300MB files and not just one 900MB file on the RAM drive, with the extra 1MB auto-expand file on the hard drive?

I like this idea of having a fast-access tempdb - we have numerous data imports which merge huge datasets together and are very tempdb heavy.

# re: Put TEMPDB on a RAM DRIVE; Short lived # tables - do they get written to disk?

25 August 2006 14:44 by tonyrogerson

Hi Beckylou,

Glad you asked. We are advised here http://support.microsoft.com/kb/328551 to use multiple files in order to reduce GAM, SGAM and PFS contention - multiple files have more of these structures, so contention is reduced.

I've got some research half done that I need to finish off early next week that will show the contention and the benefits of multiple files. However, one thing I've proved through this is that you can very easily shoot yourself in the foot by using multiple files: if you have a heavily used tempdb with lots of writes to the log and data files then having multiple files can be detrimental to performance because of disk head movement - hence the benefits of the RAM drive coming through again...

More on the contention and multiple files thing next week...

Tony.

# re: Put TEMPDB on a RAM DRIVE; Short lived # tables - do they get written to disk?

30 August 2006 08:41 by Colin Leversuch-Roberts

I had a thought on the train - I guess you could install, say, four of these RAM disks and RAID them for better performance and redundancy?
Of course, I guess ultimately reducing tempdb usage is also recommended?

# re: Put TEMPDB on a RAM DRIVE; Short lived # tables - do they get written to disk?

09 October 2007 15:08 by Sorin

Hi Tony,

I love this idea. However, I am concerned about how to size the 4 files and the log, depending on my situation.

After 1 work day I have the following sizes of my TempDB:

Size: 975 MB

Available: 933 MB

tempdev: total=704, used=27

templog:  total=270, used=11

Reads     Writes    BytesRead       BytesWritten    IOStall
215980    276240    13777608704     15768379392     17453
1482      32175     84026368        1767745536      547
