October 2007 - Posts

TODAY'S TIP: Executing the EXECUTE Command Remotely

Did you know?


In versions of SQL Server prior to SQL Server 2005, you could run the EXECUTE command only on the local server. In SQL Server 2005, EXECUTE has a new AT parameter that can be used to execute the statement on a remote linked server.




Let's set up a linked server using sp_addlinkedserver and point it to some other SQL Server:



--Add the linked server to the local machine
EXEC sp_addlinkedserver 'SQLSERVER2', 'SQL Server'


--Enable the linked server to allow RPC calls
EXEC sp_serveroption 'SQLSERVER2', 'rpc out', 'true'


Now you are ready to execute T-SQL statements across linked servers using the AT clause:


EXEC ('SELECT * FROM AdventureWorksDW..DatabaseLog') AT SQLSERVER2
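EXECUTE ... AT also accepts pass-through parameters via ? placeholders, which is handy when the remote query depends on local values. A quick sketch against the same SQLSERVER2 linked server created above (the database_id value is just an illustration):

```sql
-- Pass-through parameters: each ? is bound, in order, to the values
-- that follow the query string
EXEC ('SELECT name FROM sys.databases WHERE database_id = ?', 5) AT SQLSERVER2
```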


Tempdb Best Practices

When I get an interesting tip, I'm always happy to share it with the rest of the world.

This one came from Salem Hakani, a top SQL expert!



Itay Braun



In this tip we will talk about some of the best practices for configuring the tempdb database and choosing its placement:




The size and physical placement of the tempdb database can affect the performance of a system. For example, if the size defined for tempdb is too small, part of the system-processing load may be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server. You can avoid this overhead by increasing the sizes of the tempdb data and log files.

For information about determining the appropriate amount of disk space required for tempdb, see Capacity Planning for tempdb.


TempDB Size and Placement Recommendations:

To achieve optimal tempdb performance, we recommend the following configuration for tempdb in a production environment:


1) Set the recovery model of tempdb to SIMPLE. (This model automatically reclaims log space, keeping space requirements small.)


2) Allow tempdb files to grow automatically as required. (This allows a file to grow until the disk is full.) *If the production environment cannot tolerate the potential application time-outs that may occur during autogrow operations, preallocate enough space to accommodate the expected workload.



3) Set the file growth increment to a reasonable size to prevent the tempdb database files from growing by too small a value. If the file growth is too small compared to the amount of data being written to tempdb, tempdb may have to expand constantly, which will affect performance. We recommend the following general guidelines for setting the FILEGROWTH increment for tempdb files.


        If TempDB file size > 0 and < 100 MB, the recommended FILEGROWTH increment is 10 MB

        If TempDB file size > 100 and < 200 MB, the recommended FILEGROWTH increment is 20 MB

        If TempDB file size >= 200 MB, the recommended FILEGROWTH increment is 10%*


Note: You may have to adjust this percentage based on the speed of the I/O subsystem on which the tempdb files are located. To avoid potential latch time-outs, we recommend limiting the autogrow operation to approximately two minutes. For example, if the I/O subsystem can initialize a file at 50 MB per second, the FILEGROWTH increment should be set to a maximum of 6 GB, regardless of the tempdb file size. If possible, use instant database file initialization to improve the performance of autogrow operations.
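Applying the guidelines above is done with ALTER DATABASE ... MODIFY FILE. A sketch that preallocates a 150 MB data file (which, per the table above, gets a 20 MB increment); tempdev and templog are the default logical file names, and the sizes here are illustrative, so adjust them to your own workload:

```sql
-- Preallocate tempdb and set fixed-size growth increments
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 150MB, FILEGROWTH = 20MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 50MB, FILEGROWTH = 10MB);
```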


4) Preallocate space for all tempdb files by setting the file size to a value large enough to accommodate the typical workload in the environment. This prevents tempdb from expanding too frequently, which can affect performance. The tempdb database should still be set to autogrow, but autogrow should be reserved for unplanned exceptions.


5) VERY IMPORTANT: Create as many files as needed to maximize disk bandwidth. Using multiple files reduces tempdb storage contention and yields significantly better scalability. However, do not create too many files because this can reduce performance and increase management overhead. As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary. Note that a dual-core CPU is considered to be two CPUs.


6) Make each data file the same size; this allows for optimal proportional-fill performance.
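For points 5 and 6, extra data files are added with ALTER DATABASE ... ADD FILE. A sketch for a 4-CPU server that already has one data file; the logical names and the T:\ path are illustrative only:

```sql
-- Add three more data files, all the same size, for one file per CPU
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 150MB, FILEGROWTH = 20MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 150MB, FILEGROWTH = 20MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 150MB, FILEGROWTH = 20MB);
```

Keeping every file the same size and growth increment preserves the proportional-fill behavior mentioned in point 6.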


7) Put the tempdb database on a fast I/O subsystem. Use disk striping if there are many directly attached disks.


8) Put the tempdb database on disks that differ from those that are used by user databases.


Note: The values for the file size and filegrowth parameters are reused whenever tempdb is created. For example, if you increase the size of the tempdb data file to 20 MB and increase the file growth increment to 15 percent, the new values immediately take effect. If subsequent transactional activity causes tempdb to grow in size, the data file will return to the size of 20 MB every time you restart the instance of SQL Server.
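To verify how tempdb is currently configured, you can query the sys.database_files catalog view (sizes are stored in 8 KB pages):

```sql
-- Current size and growth setting of each tempdb file
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM tempdb.sys.database_files;
```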

Cumulative Update # 4 for SQL Server 2005 SP2 Released

Today (10/15/2007) we shipped the fourth Cumulative Update for SQL Server 2005 SP2, again on time.


This CU had:

·         52 new customer issues

·         47 unique customers


All teams across SQL and CSS deserve kudos for pulling together on this release.  We will be making improvements to the CU process based on post mortem analysis and feedback; look for CU#5 announcement soon!


As per POR, this CU is not released as a public download, but as a hotfix download.  Customers can obtain the hotfix package by submitting an online request form here


·         Public KB Article



Itay Braun                

Premier Field Engineer - SQL Server                  

 Microsoft Services - UK

Veni              Vidi              Fixit


Breaking News - SAP Acquires Business Objects for US$6.8B

SAP acquired Business Objects for US$6.8B. Find out the impact of this acquisition, how it affects our customers, and the suggested BI sales strategies. The acquisition provides SAP with BI tools and full BI-stack capabilities that excel in ease of use and user-interface technologies, including ETL, data quality, reporting, OLAP, dashboards, text analytics, and others.



SAP's strategy has been to acquire niche companies; this acquisition is more than 20x larger than any it has done before. The integration will take a long time before it delivers tangible benefits to customers. In the short and mid term, SAP will have overlapping and non-integrated offerings, providing an opportunity to showcase the Microsoft BI value proposition.


Microsoft also has a very strong BI offering integrated with Microsoft Dynamics, a line of integrated, adaptable business management solutions that automate and streamline financial, customer relationship, and supply chain processes.

Microsoft will continue to have a solid relationship with SAP and Business Objects, competing in some areas and partnering in others.


Very interesting.

Free Microsoft Official Curriculum - SQL Server 2008

Clinic 7045: What's New in Microsoft® SQL Server™ 2008 (free)

Itay Braun               

Premier Field Engineer - SQL Server                   

Microsoft Services - UK

Veni              Vidi              Fixit

Microsoft Team Blogs

Listed below are official team blogs from the product teams at Microsoft which are categorized by specific product groups.

Microsoft SQL and Data Programmability (OPML):
Microsoft SQL Everywhere Blog
Microsoft SQL Server Support Blog
Microsoft SQL Manageability Team Blog
Microsoft SQL Protocols Blog
Data Programmability Blog
ADO.NET Team Blog
Project Astoria Team Blog
Microsoft JDBC Driver Team Blog
XML Team Blog
Microsoft Office Project Programmability Blog 

For the full list:





