May 2011 - Posts

Copying files–Slowly and Responsibly
16 May 11 09:18 PM | GavinPayneUK | with no comments

I’ve been working with some clients recently who needed to copy large database backups between environments and learnt a couple of things I’ll now share.

Copying Files Slowly

In one instance we had to copy the backup files over a shared network link, so we wanted to stop the copy monopolising the limited bandwidth.  Our first attempt was to use normal xcopy out of hours when the link wasn’t in use, and while the copy used most of the bandwidth as we expected it would, it was also affecting the performance of the server itself.  The server’s 100Mb NIC utilisation was pegged at 98% and the ASYNC_NETWORK_IO wait stat count in SQL Server was climbing by the second, making the database server’s responsiveness to their web application poor.  As a result, that option was ruled out.
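If you want to keep an eye on NIC utilisation during a copy like this, one simple option (my suggestion here, rather than anything we used on site) is to sample the Network Interface performance counters from a command prompt, for example:

rem Sample total NIC throughput every 5 seconds (Ctrl+C to stop)
typeperf "\Network Interface(*)\Bytes Total/sec" -si 5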

We then considered robocopy, knowing it has an Inter-Packet Gap (/IPG) setting which throttles how much network bandwidth it uses.  Although this parameter had some influence over the amount of bandwidth the copy used, it proved difficult to limit it to anything other than a small fraction of the link.  Reading around it seems as though the option was designed for <1Mb WAN links rather than the 100Mb backbone we were using.  In the interest of time I had to search for another option, but the IPG setting is something I want to look into further as robocopy is good for scripting copies.
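For reference, the sort of command we were experimenting with looked something like the one below; the paths and the 50ms gap are illustrative only, and the /IPG value is the delay robocopy waits between 64KB blocks:

rem Throttle the copy by waiting 50ms between 64KB blocks
robocopy D:\Backups \\RemoteServer\Backups *.bak /IPG:50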

In the end the solution came from a Google search that found a small, simple and free GUI-based tool called SuperCopier (link). Amongst its copy options is a slider which allows you to accurately dictate how much bandwidth it uses.  While you have to remember to convert MB into Mb (a limit of 80Mb/s is roughly 10MB/s), the bandwidth option did what it said it would do and for us limited the copy to 80Mb/s.

[image: SuperCopier’s bandwidth limit slider]

 

Copying Files Responsibly

Another requirement I had was to copy files as quickly as possible from the server’s SAN storage to a portable USB drive, and as part of some research into the best tool for the job the client found this MSDN blog article (link) about copying large files over slow links.

This article was written from the angle of making the copy work as fast as possible but also discussed an area I’d never read about before – un-buffered I/O operations.

When Windows copies files normally it uses the system cache, on the assumption that the file is probably going to be read again soon so caching it is a good move.  This can be really annoying if we know the data is never going to be read again, and if we’d rather keep the memory for something else, like SQL Server.

As an example, I copied an 8GB file around my laptop and you can see the effect it had on system memory during the copy:

[images: system memory usage before and during the buffered copy]
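If you want to reproduce this and watch the system cache grow while a buffered copy runs, one simple option (my suggestion, not something from the MSDN article) is to sample the Memory\Cache Bytes performance counter:

rem Sample the size of the system cache every 5 seconds (Ctrl+C to stop)
typeperf "\Memory\Cache Bytes" -si 5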


When I’m copying large files to or from a production database server the last thing I want to do is cause large amounts of memory re-allocation or memory pressure because of a seemingly low priority copy.  The MSDN article the client found discussed a new XCOPY parameter in Windows Server 2008 R2 – /J.  Using the /J parameter will perform the file copy using the un-buffered Windows file copy APIs.  As a result, the file bypasses the Windows cache and the size of the system cache doesn’t have the potential to grow.
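As an illustration, the command ends up looking something like this (the file and destination paths are just placeholders):

rem Copy a large backup file using un-buffered I/O
xcopy D:\Backups\MyDatabase.bak E:\ /J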

When I did the same copy as above but using the /J parameter things looked very different:

[images: system memory usage before and during the un-buffered copy]

Incidentally, for the file I was copying the copy took about the same amount of time with and without /J, although I’ve yet to test larger files to see if un-buffered copies degrade the copy time.

 

What I’m not saying is that copying large files using the standard buffered I/O APIs is bad; Windows and SQL Server have good memory management processes to stop such things typically being a threat to SQL Server’s performance.  What I do like though is knowing I have the ability to perform large file copies and remove the risk of Windows deciding to bloat the system cache un-necessarily.

Surrey SQL Server User Group–25th May
09 May 11 10:49 AM | GavinPayneUK | with no comments

Below is an overview of my next user group event for the Surrey region.  Once again we’re being hosted by VMware at their UK headquarters in Frimley, Camberley (10 miles from Guildford).

For those looking to travel by rail, Frimley railway station is a 10 minute walk away.  For those driving, it’s literally just off Junction 4 of the M3.

 

The evening will have three speakers:

What does SQL Azure mean for me?  An overview and service update
Keith Burns, Data Platform Technical Specialist – Microsoft

Following on from the session at our last event asking what the cloud means for me, Keith will be updating us on what SQL Azure means for us, as well as on updates to Microsoft’s cloud-based SQL Server service.

Upgrading SQL Server databases
Justin Langford, SQL Server MVP – Coeo

This session provides an overview of the upgrade methods available as well as considerations for each method.  The session covers the risks involved in an upgrade project and potential mitigation steps for each technique to ensure a smooth upgrade.  The session will be relevant for anyone involved in upgrading from SQL Server 2000/2005 to SQL Server 2008, and beyond to Denali.

Technical demo session by VMware

VMware will demo for us some quick tips and tricks for those who use their virtualisation technologies.

Registration is free and much appreciated.

http://www.eventbrite.com/event/1609446899