June 2007 - Posts

You’ve probably heard the term bandied around, but do you know what a page split is and what it means for the performance of your application? I’m going to demonstrate a page split, use DBCC PAGE to show what happens, use SQL Profiler to show the locks taken, and talk through what’s going on.


IAM      Index Allocation Map
(See http://blogs.msdn.com/sqlserverstorageengine/archive/2006/06/24/645803.aspx for a good talk about these)
GAM     Global Allocation Map
SGAM    Shared Global Allocation Map
PFS       Page Free Space
 (See http://blogs.msdn.com/sqlserverstorageengine/archive/2006/07/08/under-the-covers-gam-sgam-and-pfs-pages.aspx for a good talk about these)
Page     8Kbytes of data
Extent   8 Pages (totals 64Kbytes)
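If you want to eyeball these allocation pages yourself, they live at well-known fixed positions in every data file (PFS is page 1, GAM page 2 and SGAM page 3 of file 1); a quick sketch using DBCC PAGE:

```sql
dbcc traceon( 3604 )        -- route DBCC output to the client

dbcc page( 0, 1, 1, 3 )     -- PFS  page (page 1 of file 1)
dbcc page( 0, 1, 2, 3 )     -- GAM  page (page 2 of file 1)
dbcc page( 0, 1, 3, 3 )     -- SGAM page (page 3 of file 1)
```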


A page is 8Kbytes of data which can be index related, data related, large object (LOB) data, etc...

When you insert rows into a table they go onto a page, into ‘slots’; your row has a row length, and you can fit only so many rows on an 8Kbyte page. What happens when a row’s length increases, for instance because you entered a bigger product name in your varchar column? SQL Server needs to move the other rows along in order to make room for your modification. If the combined new length of all the rows on the page will no longer fit on that page, then SQL Server grabs a new page and moves the rows to the right (or left) of your modification onto it – that is called a ‘page split’.


Create a new database, a new table and put some rows in it.

use master


drop database pagesplit



create database pagesplit


use pagesplit



create table mytest (

    something_to_see_in_data    char(5)        not null constraint pk_mytest primary key clustered,

    filler                      varchar(3000)  not null
)




insert mytest ( something_to_see_in_data, filler ) values( '00001', replicate( 'A', 3000 ) )

insert mytest ( something_to_see_in_data, filler ) values( '00002', replicate( 'B', 1000 ) )

insert mytest ( something_to_see_in_data, filler ) values( '00003', replicate( 'C', 3000 ) )


Now look at the contents of one of your data pages, use DBCC IND to identify what pages are targeted by our object.

DBCC IND ( 0, 'mytest', 1);





We can see that pages 80 and 73 have data pertaining to our object. Looking at the output, a PageType of 10 indicates an IAM page and a PageType of 1 a data page. I’m not going to go into IAMs because I’d lose the focus of what I’m talking about, but I’ve put a reference at the top for further reading.


So here we will concentrate on the data page (page 73).


dbcc traceon( 3604 )        --  Output to the console so we can see output



dbcc page( 0, 1, 73, 1 )    --  page ( <0 for current db>, <file>, <page>, <level of detail> )


Page 73 contains all our data; the row offset table is shown below...



Row - Offset                        

2 (0x2) - 4128 (0x1020)             

1 (0x1) - 3112 (0xc28)              

0 (0x0) - 96 (0x60)                 

Before we update our middle row to force a page split we should get SQL Profiler running, so we can capture the locks used and identify when the split has occurred.

Start SQL Profiler, create a New Trace and select ‘blank trace’. On the events selection tab choose Locks (Lock:Acquired, Lock:Released and Lock:Escalation), and on TSQL choose SQL:StmtStarting and SQL:StmtCompleted. Now remember to set the filter to the SPID that you are going to run the test in.
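If you’re on SQL Server 2005 and don’t fancy firing up Profiler, the sys.dm_tran_locks DMV gives a point-in-time view of locks held (a sketch; note it only shows locks currently held, so the short-lived IAM and index page locks discussed below won’t appear unless you catch them mid-flight):

```sql
begin tran

update mytest
    set filler = replicate( 'B', 3000 )
where something_to_see_in_data = '00002'

--  from another connection, while the transaction above is still open:
select resource_type, resource_description, request_mode, request_status
from sys.dm_tran_locks
where request_session_id = {spid of the updating connection}

rollback
```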

First, let’s update a row that does not cause a page split...

update mytest

    set filler = replicate( 'B', 1000 )

where something_to_see_in_data = '00002'

Looking at the profiler trace (below) we can see there is a simple exclusive lock on the page and key being updated. The important part here is that no other pages are locked; OK, in reality we may have indexes on the columns being updated that would indeed cause locks, but I’m keeping it simple!

Let’s now update the middle row, but this time make it so that the combined length of the rows currently on the page will no longer fit into 8Kbytes.

update mytest

    set filler = replicate( 'B', 3000 )

where something_to_see_in_data = '00002'

Before looking at the profiler output, let’s take a look and see what pages we now have allocated to our object...

DBCC IND ( 0, 'mytest', 1);



Interestingly we now have two more: page 89 and page 109. Looking at the PageType again, page 109 is a data page (type 1) and 89 is an index page (type 2); while we had a single data page there was no balanced tree to speak of, because there was a one-to-one relationship between the root node of the index and the data page. Also, look at the page chain (columns NextPagePID and PrevPagePID): 109 links to 73 and vice versa. Our IAM is still page 80.
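On SQL 2005 there is also a DMV that quantifies what splits do to a table over time; a sketch against the demo table (on a table this tiny the numbers are trivial, but avg_fragmentation_in_percent and avg_page_space_used_in_percent are the columns to watch on real tables):

```sql
select index_id, index_level, page_count,
       avg_fragmentation_in_percent, avg_page_space_used_in_percent
from sys.dm_db_index_physical_stats( db_id(), object_id( 'mytest' ), null, null, 'DETAILED' )
```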

Visualisation of what’s going on

Looking at the Profiler trace we see a number of things have happened...

1)    The exclusive lock is taken on the page and key being updated.

2)    The IAM page is locked (page 80) – well, one of the slots in it. This is because we are being allocated new data pages and the IAM keeps track of these. You will notice that the lock is acquired and released within the transaction rather than held to the end; this keeps contention low. Imagine how much slower inserts would be on a table if you also locked the IAM page until the end of the transaction – it would be a disaster.

3)    The new index page is locked (page 89) – again, notice how that lock is also released before the statement commits. Why? Well, you aren’t updating the index keys, so effectively there really isn’t a change and a rollback doesn’t really mean anything. One of the recommendations I make is to be careful what columns you index – be aware of the additional locking time incurred when you are updating columns that are in an index.

4)    The new data page is locked (page 109), notice how that lock is kept until after the update has finished (the transaction completes).

5)    The page is split – some of the rows (ids 2 and 3) are moved from page 73 to page 109. Locks aren’t taken on the keys; in fact, if we start a transaction and update row ‘3’ so as to prevent another connection from updating or deleting the row, SQL Server still (under the covers) moves the row to the other page – clever.

Summing Up

A page split does require resources, and on a system that has continual page splits that is going to affect performance and scalability.

Concurrency (blocking) problems are kept to a minimum by some internal tricks so they aren’t as expensive as they used to be in previous editions of SQL Server.

How do you avoid them? There are strategies for doing this; one such strategy is to use a low fill factor when you create an index, but that comes at the expense of read operations, and because you usually have more reads than writes that’s bad.
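If you do decide to go the fill-factor route, it looks like this against the pk_mytest index from the demo above (80 is an arbitrary figure for illustration – pick one based on how fast your rows actually grow):

```sql
--  SQL 2005
alter index pk_mytest on mytest rebuild with ( fillfactor = 80 )

--  SQL 2000 equivalent
--  dbcc dbreindex( 'mytest', 'pk_mytest', 80 )
```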

Personally, the usual thing I recommend (and by no means is this a one-size-fits-all recommendation) is that you defrag indexes using DBCC INDEXDEFRAG (SQL 2000) or the equivalent ALTER INDEX ... REORGANIZE (SQL 2005). Other things you can do: pre-populate columns so the row length doesn't grow when you update, and don't use NULLs everywhere. If you have, say, a varchar column that is frequently updated to different sizes, then perhaps consider putting it in its own table.
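Against the demo table above, that defrag looks like this (a sketch, using the pk_mytest clustered index we created earlier):

```sql
--  SQL 2000
dbcc indexdefrag( 0, 'mytest', 'pk_mytest' )

--  SQL 2005 equivalent
alter index pk_mytest on mytest reorganize
```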

Paul Randal has covered a lot of stuff in this area so make sure you check his blog out.

Background Reading



I've found a really good venue here in Harpenden.

It's effectively central for a lot of folks...

8 minutes' drive from M1 Junction 9

25 minutes from Kings Cross

15 minutes from A1(M)

15 minutes from Luton airport.

I'm thinking of an evening or afternoon event with some really good content.

Anyway, what do people think? I know a lot of folk work in London, but a lot of folk live on the outskirts too and venues are easier to get outside London.

Feel free to email tonyrogerson@torver.net with all your ideas and suggestions.


Thankfully in my travels I see fewer and fewer heaps (a table without a clustered index); frankly they should be banned, taken out of the engine, and a clustered index made mandatory. In this post I look at row hopscotch (row fragmentation) and the effects it has on application performance (IOs and concurrency); you'll also see stark evidence of just why you should always have a clustered index.

A database is made up of a series of 8KB pages; a group of 8 pages is called an extent.

A heap is a table without a clustered index; the data is inserted anywhere in the page chain, so if you are deleting rows, those free ‘slots’ can be reused (this is where row hopscotch occurs).

A table with a clustered index has its data stored in the order of the clustering key, so inserts are placed in order; any page free space from deletes can only be reused if the inserted row's clustering key fits within the range of values already on the page. For instance, if the page contains the rows 1, 2, 3, [FREE SLOT], 5, 6 and there is only room for one row, then you can insert id 4 but not 7 or 8.

Let’s look at some SQL, first the HEAP...

create table somedata (

    id  int not null identity primary key nonclustered,

    somestuff   char(250)   not null
)



Notice that we have a primary key that is nonclustered; this puts a nonclustered index on the table to enforce the uniqueness.

Let’s populate the table. Note that @i % 5 = 0 is simply true on every 5th row inserted into the table; I’m doing a random delete to force some free space somewhere in the table allocation.

set nocount on

declare @i int

set @i = 1


while @i <= 40000
begin

    if @i % 5 = 0
        delete somedata
        where id = ( select top 1 id
                     from somedata
                     order by newid() )

    insert somedata ( somestuff ) values ( '>>>>>' + cast( @i as varchar(10) ) + '<<<<<' )

    set @i = @i + 1

end





Let’s try our query. This causes a ‘Table Scan’ because, given the DISTINCT on somestuff and the number of rows we are doing it against, SQL Server decided it is cheaper to scan the table than to seek and do the lookups.

set statistics io on


select count( distinct somestuff )

from somedata

where id between 1 and 10000


set statistics io off


This yields the following IO stats...

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'somedata'. Scan count 1, logical reads 1146, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

We can use the index to see how expensive that is too...

select max( somestuff )

from somedata

where id between 1 and 500

Table 'somedata'. Scan count 1, logical reads 142, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


At this point we should take a look at a page of data to find out what the on-disk storage of rows looks like; we do this using the DBCC EXTENTINFO command and pick out the first non-mixed extent.

dbcc extentinfo( 0 )

Run down the output looking for your object (object_id( ‘somedata’ )); you want the first entry where pg_alloc = 8 and index_id = 0. Make a note of the page_id and then view that page using DBCC PAGE...

dbcc traceon( 3604 )

dbcc page( {your db name}, 1, {page_id}, 3 )

You will get a rather large output; what you are really interested in is the slots. I’ve reduced the output to make it readable; you’ll see something like this...

Slot 0 Column 0 Offset 0x4 Length 4

id = 278                            


Slot 1 Column 0 Offset 0x4 Length 4

id = 11556                           


Slot 3 Column 0 Offset 0x4 Length 4

id = 281                            


Slot 4 Column 0 Offset 0x4 Length 4

id = 23562                          


Slot 5 Column 0 Offset 0x4 Length 4

id = 38845                          


Slot 6 Column 0 Offset 0x4 Length 4

id = 284                            


Slot 7 Column 0 Offset 0x4 Length 4

id = 285                            


Slot 8 Column 0 Offset 0x4 Length 4

id = 24888                          

Note, ‘id’ is your data and is the id we inserted (in order!) – you can see the row hopscotch here. The high id rows you see are there because we deleted rows (for instance 282 and 283) and SQL Server has reused those slots on the page for other rows being inserted. Now, take a moment and visualise this across our 32,000 row table; for the heap the rows are literally anywhere, so if you do a query like BETWEEN 100 AND 1000 then in the worst case the rows could be located on every single page allocated to the table. This is also why SQL Server often chooses not to use an index for a lookup: the seek itself is fine – it’s the secondary lookup back to the data that costs the resources.
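You can see that secondary-lookup cost for yourself by forcing the optimizer to use the nonclustered index and comparing the STATISTICS IO output with the table scan earlier (a sketch; the primary key constraint above wasn't given an explicit name, but on this heap its index happens to be index id 2 – check sysindexes / sys.indexes if in doubt):

```sql
set statistics io on

select count( distinct somestuff )
from somedata with ( index = 2 )    -- force the nonclustered PK index: seek + lookup per row
where id between 1 and 10000

set statistics io off
```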

I suppose one last thing to look at, and it's where the heap approach does have a benefit: it provides better space utilisation (that said, there is a major bug to do with open objects and how new extents get allocated that has caused a lot of hassle and performance degradation for my clients with a particular third party database, but I won't mention it's Sage).

This is the output from DBCC SHOWCONTIG...

- Pages Scanned................................: 1147

- Extents Scanned..............................: 147

- Extent Switches..............................: 146

- Avg. Pages per Extent........................: 7.8

- Scan Density [Best Count:Actual Count].......: 97.96% [144:147]

- Extent Scan Fragmentation ...................: 14.29%

- Avg. Bytes Free per Page.....................: 362.4

- Avg. Page Density (full).....................: 95.52%


Now, let’s look at why the Clustered Index approach is superior...

drop table somedata



create table somedata (

    id  int not null identity primary key clustered,

    somestuff   char(250)   not null
)




set nocount on

declare @i int

set @i = 1


while @i <= 40000
begin

    if @i % 5 = 0
        delete somedata
        where id = ( select top 1 id
                     from somedata
                     order by newid() )

    insert somedata ( somestuff ) values ( '>>>>>' + cast( @i as varchar(10) ) + '<<<<<' )

    set @i = @i + 1

end





set statistics io on


select count( distinct somestuff )

from somedata

where id between 1 and 10000


select max( somestuff )

from somedata

where id between 1 and 500


set statistics io off


Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'somedata'. Scan count 1, logical reads 350, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


Table 'somedata'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


As you can see from the IO stats, the cost of the query has dropped dramatically even on this small 11MByte table – imagine the benefits as the table grows! Because with the clustered index the data is ordered, and as such is held closer together on a smaller number of pages, there are two major benefits: 1) more likelihood that the page will be in cache, thereby reducing the number of physical reads from disk, and 2) because you are reading fewer pages there is less likelihood of colliding with somebody updating data and having the page locked (I know it drops to a row lock, but consider the overhead required if an update or insert causes a page split). Also, each logical read is 8KB which needs to go through the CPU, so the fewer reads the less CPU usage there will be on the box.

Looking at the DBCC PAGE output for the clustered index you can see that ‘like’ rows (our clustered index is on ‘id’) are grouped together...

Memory Dump @0x06CCC286


Slot 0 Column 0 Offset 0x4 Length 4

id = 182                            


Slot 1 Column 0 Offset 0x4 Length 4

id = 184                             


Slot 2 Column 0 Offset 0x4 Length 4

id = 190                            


Slot 3 Column 0 Offset 0x4 Length 4

id = 191                            


Slot 4 Column 0 Offset 0x4 Length 4

id = 193                            


Slot 5 Column 0 Offset 0x4 Length 4

id = 194                            


Slot 6 Column 0 Offset 0x4 Length 4

id = 206                            


Slot 7 Column 0 Offset 0x4 Length 4

id = 208               

Because of this ordering of rows, space usage isn’t as good...

- Pages Scanned................................: 1380

- Extents Scanned..............................: 175

- Extent Switches..............................: 175

- Avg. Pages per Extent........................: 7.9

- Scan Density [Best Count:Actual Count].......: 98.30% [173:176]

- Logical Scan Fragmentation ..................: 0.43%

- Extent Scan Fragmentation ...................: 1.14%

- Avg. Bytes Free per Page.....................: 1672.8

- Avg. Page Density (full).....................: 79.33%

However, one of the benefits of a clustered index is that you can actually defragment it, so running DBCC INDEXDEFRAG...

dbcc indexdefrag( 0, 'somedata', 1 )    -- index id 1 = the clustered index

We get 1128 pages moved and 249 removed and our showcontig is now...


- Pages Scanned................................: 1130

- Extents Scanned..............................: 144

- Extent Switches..............................: 143

- Avg. Pages per Extent........................: 7.8

- Scan Density [Best Count:Actual Count].......: 98.61% [142:144]

- Logical Scan Fragmentation ..................: 0.44%

- Extent Scan Fragmentation ...................: 21.53%

- Avg. Bytes Free per Page.....................: 251.8

- Avg. Page Density (full).....................: 96.89%

You do need to be careful when using DBCC INDEXDEFRAG because, by the nature of how it works, it generates a lot of transaction log. This can be mitigated by using the Bulk-Logged recovery model, which still allows you to keep an unbroken log chain; however, that recovery model cannot be used on a mirrored database.
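As a sketch of that recovery-model switch around the defrag (mydb is a placeholder for your own database name; remember to take a log backup afterwards to preserve the log chain):

```sql
alter database mydb set recovery bulk_logged

dbcc indexdefrag( 0, 'somedata', 1 )    -- index id 1 = the clustered index

alter database mydb set recovery full
```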

So, to summarise: if you are only ever inserting into a table then these problems are not really going to hit you; but if, like most systems, you are doing deletes or updates too, then you will likely have this problem.

Anyway, no brainer for me – use a clustered index.

There is a lot more I could write on this – showing a scalability test to see the dramatic performance difference, blocking, etc... – but alas, I'm out of time.

We want to a) get more good content and people blogging, especially here in the UK, and b) create a face-to-face expert community for those who are participating in (a).

So, by way of thanks: those who create enough quality content in each quarter will get invited to dine on us (do KFC sell beer?) either in London or Harpenden (for now) and talk SQL with other leading SQL experts.

Anybody can achieve this, but, what do you need to do?

Simple - get a blog and start creating useful content. Remember, we said 'quality' content - so that means your own work, demonstrating or talking about some technique - look at the sort of posts I do as an example.

The first SQLBlogsters meeting will be held toward the end of July; it's not too late - we'll be making our decision on who to invite in the next 2 weeks (before 18th July).

Join us in making this one of the most popular, most useful sites in the community. And no, we will never (not so long as I own the domain) force adverts onto your blogs like certain other sites do! That doesn't stop you doing your own, and we can help put you in front of sponsors once your content grows and you become popular....

Good luck!

What was I thinking? I double clicked on a profiler trace file - only 5GB - and up it started to come in Profiler; I got distracted (the kettle boiled) and came back to find my machine struggling a bit - strange, it's a good workstation - 4GB, fast AMD etc...

Oh dear, then it dawned on me what I'd done: I was opening a 5GByte file in Profiler - it's trying to load it all into memory - lol - whoops....

What I should have done is this...


SELECT *
INTO prof

FROM fn_trace_gettable('d:\temp\TraceActivity.trc', default);


I guess the real advice here is this: a) it shouldn't be done on your live server unless you are using smaller trace files (tens to hundreds of MBytes, depending on how much memory you have and how busy your system is), and b) profile into a table in the first place, in a database (and preferably on a different server), so all you need to do is back it up, zip and send - or analyse away. Alas, if you are using server-side traces rather than going through Profiler, you have little choice but the file approach.
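Once the trace is in a table you can do the grouping yourself; a minimal sketch against the prof table created above (TextData comes back as ntext, so it needs casting before it can be grouped on):

```sql
select cast( TextData as nvarchar(4000) ) as query_text,
       count(*)        as executions,
       sum( Duration ) as total_duration,
       sum( Reads )    as total_reads
from prof
group by cast( TextData as nvarchar(4000) )
order by total_reads desc
```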

There was a really useful tool for loading the files and parsing the contents so you can group queries, but I can't find it - if anybody knows, please feel free to comment.


While checking my stats to see why people are coming to my blog I found this guy: Craig Freedman (http://blogs.msdn.com/craigfr/default.aspx). His blog is outstanding: in-depth examples of how joins and query execution work.

It's exactly the style I try to follow myself - lots of working examples and some very good explanations around the gubbins of what is going on.


Ok, so, my wife seems to think I need a holiday - no argument there. With Katmai coming up, I think my brain needs a rest before the next onslaught of new stuff, blah blah...

Anyway, as I'm completely addicted to my email and the online community, I need fast internet access in Lulworth Cove (that's in England for all you US folks; <sarcasm>not London England but slightly out, a tad further south</sarcasm>). I do have 3G via Vodafone and T-Mobile, but just in case it doesn't receive on the coast (it's nearer France after all)... Actually, that brings up a little story from my last holiday, where we all went to Heartbeat country (near Whitby). I was staying in the village where the Aidensfield Arms is (supposedly); basically, I had to drive out of the village and to the top of a hill in order to get GPRS - funny at the time, and to prove it I've got a picture of a sheep giving me a 'get off my land' glare. And Simon/Trevor, fyi - no, I didn't shag it, well, not this time - I didn't have the time ;). I swear to god I saw somebody who was the spit of Allan Mitchell there though.....

Anyway, I digress.

I know a few folks read my blog nowadays; so please, if anybody has a) been near Lulworth Cove and b) knows of a hotel / guest house that does WiFi / broadband, then please please let me know, via a comment to this or to tonyrogerson@torver.net.



June 2007 - Technical Rollup Mail - SQL


SQL Server




Microsoft SQL Server Code Name “Katmai”

SQL Server code name "Katmai," the next release of Microsoft SQL Server, provides a comprehensive data platform that is more secure, reliable, manageable and scalable for your mission critical applications, while enabling developers to create new applications that can store and consume any type of data on any device, and enabling all your users to make informed decisions with relevant insights.




Performance and Scalability: SQL Server 2005 Enterprise Edition Benchmarks

SQL Server 2005 achieved the #1 price/performance result for 3TB TPC-H non-clustered benchmark. This is a record breaking TPC-H three terabyte benchmark for a database running on Microsoft Windows and an HP Integrity rx8640 Server. The recent TPC-H benchmark study with HP demonstrates Microsoft SQL Server’s ability to deliver enterprise scalability and performance enabling our customers to run large scale, mission critical applications with the best price/performance. This result is twice the size of previous SQL Server benchmarks achieved for the launch of SQL Server 2005 demonstrating that SQL Server continues to push the performance envelope. Furthermore, SQL Server 2005 outperformed the previous best result on Windows with performance of 60,359 QphH@3TB (queries per hour) compared to the previous record of 37,813 with a much better price/performance of $32.60 compared to $38.00.*




Tech·Ed 2007: Turn “doing your job” into “making your mark.”

Tech·Ed 2007, Microsoft’s premier annual conference* for IT Pros and Developers is dedicated to providing technical training, information and resources to help you maximize your Microsoft investment.  For five days in June, you will interact with experts, collaborate among peers and engage in trainings focused on Microsoft’s innovative technologies to move your business forward.  Let this year’s Tech·Ed be your first step to “making your mark” which you can draw from long after Tech·Ed is over.



Running SQL Server on "Microsoft Windows Server Longhorn" or Microsoft Windows Vista

In an effort to provide customers with more secure products, Microsoft Windows Server "Longhorn" and Microsoft Windows Vista are supported by SQL Server 2005 Express Edition Service Pack 1 (SP1), and for all other editions will be supported by SQL Server 2005 Service Pack 2 (SP2) or later when it becomes available.  Earlier versions of SQL Server, including SQL Server 2000 (all editions including Desktop Engine edition, a.k.a MSDE), SQL Server 7.0, and SQL Server 6.5, will not be supported on Windows Server "Longhorn" or Windows Vista. Customers running applications with these earlier versions of SQL Server should consider evaluating and upgrading to SQL Server 2005, which was designed to take advantage of the upcoming security and performance enhancements in the operating environment.


Microsoft SQL Server Release Services

The Global Release Services team within the SQL Server organization is a central team providing a variety of services to one of the biggest development organizations in the world.  We drive a lot of processes and business decisions which significantly impact both our internal operations and our customers worldwide.  This blog is intended to share some of this information with you straight from the source, to drive interaction, and to solicit your feedback on particular topics so we can make better decisions for you!





Reducing Database Size by Using Vardecimal Storage Format
The Vardecimal storage format became available starting with Microsoft SQL Server 2005 Service Pack 2 (SP2) in the Enterprise Edition.  This storage format can be enabled at a table-level granularity.  When enabled, SQL Server stores decimal and numeric data in the variable portion of the row instead of the fixed portion.  You can use vardecimal storage format to reduce the size of your database if you have tables with decimal and numeric data types. How much space you save depends on the number of decimal or numeric columns, the data distributions, and the size of the table(s). Since this is a new storage format and not a new data type, you do not need to make any application changes to benefit from it. This white paper covers the impact of enabling vardecimal storage format on different workloads.
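Enabling it is a two-step operation: first at the database level, then per table (a sketch; mydb and mytable are placeholder names, and both procedures shipped with SP2):

```sql
exec sys.sp_db_vardecimal_storage_format N'mydb', N'ON'

exec sp_tableoption 'mytable', 'vardecimal storage format', 1
```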





Microsoft SQL Server 2000 / 2005 Management Pack

The SQL Server Management Pack monitors the availability and performance of SQL Server 2000 and 2005 and can issue alerts for configuration problems.  Availability and performance monitoring is done using synthetic transactions.  In addition, the Management Pack collects Event Log alerts and provides associated knowledge articles with additional user details, possible causes, and suggested resolutions.  The Management Pack discovers Database Engines, Database Instances, and Databases and can optionally discover Database File and Database File Group objects.



Microsoft SQL Server 2005 JDBC Driver 1.2 Community Technology Preview April 2007

In its continued commitment to interoperability, Microsoft has released a new Java Database Connectivity (JDBC) driver with SQL Server 2005.  The SQL Server 2005 JDBC Driver 1.2 CTP April 2007 download is available to all SQL Server users at no additional charge, and provides access to SQL Server 2000 and SQL Server 2005 from any Java application, application server, or Java-enabled applet. This is a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in J2EE (Java2 Enterprise Edition).  This release of the JDBC Driver is JDBC 3.0 compliant and runs on the Java Development Kit (JDK) version 1.4 and higher. It has been tested against all major application servers including BEA WebLogic, IBM WebSphere, JBoss, and Sun.



SQL Server Tables and Exchange Web Services

The SQL Server Tables and Exchange Web Services sample demonstrates a powerful integration of Microsoft® Exchange Server 2007 and Microsoft SQL Server™ 2005 features.  This integration enables you to provide data from both Microsoft Exchange and SQL Server to client applications so that the data appears as if it were stored in SQL Server.



Microsoft SQL Server 2005 Enterprise / Exchange Server 2007 32-bit VHD

The Microsoft VHD Test Drive Program provides customers with an enhanced server-based software evaluation experience that’s faster, better supported and more flexible.  You can now access the entire catalogue of pre-configured Microsoft and partner products and solutions in the VHD format and start evaluating and testing today from www.microsoft.com/vhd.  Microsoft Exchange Server 2007 provides built-in protection technologies to help keep the e-mail system up and running and better protected from outside threats while allowing employees to work from wherever they are using a variety of clients including Microsoft Outlook, Outlook Web Access, and mobile devices. Exchange Server 2007 makes it easier for IT to deliver these new capabilities to their organizations by making the messaging environment easier to manage and more cost efficient. Please see www.microsoft.com/exchange/preview for additional information.



.NET Rocks! - Paul Randal on SQL Server 2005 Performance and Recovery

Paul Randal started in the industry in 1994 working for DEC on the VMS file system and check/repair tools.  In 1999 he moved to Microsoft to work on SQL Server, specifically on DBCC.  For SQL Server 2000, he concentrated on index fragmentation (writing DBCC INDEXDEFRAG, DBCC SHOWCONTIG) and various algorithms in DBCC CHECKDB. During SQL Server 2005 development he was the lead developer / manager of one of the core dev teams in the Storage Engine, responsible for data access and storage (DBCC, allocation, indexes & heaps, pages / records, text / LOB storage, snapshot isolation, etc). He also spent several years rewriting DBCC CHECKDB / repair.  Since SQL Server 2005 shipped, Paul has managed the Program Management team for the core Storage Engine to become more focused on customer/partner engagement and feature set definition.  Paul regularly presents at conferences around the world on high-availability, disaster recovery and Storage Engine internals.  His popular blog is at http://blogs.msdn.com/sqlserverstorageengine.



Readme Files for SQL Server 2005 SP2 and SQL Server 2005 SP2 Express Editions and Tools

SQL Server 2005 Service Pack 2 (SP2) will upgrade all editions of SQL Server 2005 to SP2 except Express Edition.  You must use Microsoft SQL Server 2005 Express Edition (SQL Server Express) SP2 to upgrade instances of SQL Server Express to SP2. SQL Server Express SP2 can also be used to install a new instance of SQL Server Express.



SQL Server Migration Assistant for Access

Microsoft SQL Server Migration Assistant (SSMA) for Access is a tool for migrating databases from Microsoft Access to Microsoft SQL Server 2005. SSMA for Access converts Access database objects to SQL Server database objects, loads those objects into SQL Server, and then migrates data from Access to SQL Server.  SSMA for Access supports Access 97 - Access 2003 databases.



SQL Server Migration Assistant for Sybase ASE

SQL Server Migration Assistant (SSMA) for Sybase ASE is a complete suite of tools that dramatically cuts the effort, cost, and risk of migrating from Sybase ASE database to SQL Server.  SSMA automates almost every aspect of the migration process including assessment, schema and data migration, business logic conversion.  With SSMA, IT organizations now have the freedom to move to SQL Server and take advantage of SQL Server's exceptional security, scalability, developer productivity, and the industry’s lowest total cost of ownership.  SSMA for Sybase ASE consists of a client tool that you use to migrate ASE databases to SQL Server, and an extension pack that contains tables and stored procedures to support migration and functions that replace Sybase ASE system function usage in migrated databases.





SQL 2005 Webcasts and Virtual Labs

SQL Server 2005 is data management and analysis software that delivers enhanced security, availability, and scalability to mission-critical data applications—while making them easier to develop, deploy, and manage.  Choose from a wide range of live and on-demand webcasts.  Or take part in a virtual lab, which allows you to cut your teeth on the new platform by experimenting with its different capabilities.

SQL Server 2005 Webcasts


Support WebCast: How to use custom assemblies in SQL Server 2005 Reporting Services

This Support WebCast describes how you can use custom code in a Microsoft SQL Server 2005 Reporting Services report. We will cover a typical scenario where you would want to use a custom assembly in a report. The presentation simplifies and outlines all the steps to create, deploy, and troubleshoot custom assemblies that are used with Reporting Services. This presentation discusses how to use embedded code instead of a custom assembly and also discusses the pros and cons of using embedded code compared to using a custom assembly.

Support WebCast: How to use custom assemblies in SQL Server 2005 Reporting Services


TechNet Webcast: The Next Release of Microsoft SQL Server: Overview (Level 200) –  Monday, June 04, 2007. 12:00 PM Pacific Time (US & Canada)

This session provides an overview of the next release of Microsoft SQL Server that is currently under development.  We cover the core value proposition, major themes and scenarios, and some specific improvements. We also discuss the new development processes Microsoft is using to build this release, the release timeline, and the disclosure calendar.

TechNet Webcast: The Next Release of Microsoft SQL Server: Overview (Level 200)


Performance methodology best-practices for Siebel running on Microsoft Friday, June 08, 2007. 9:00 AM Pacific Time (US & Canada)

Yannick Germain is a Siebel professional with over 10 years of experience in software development, systems performance, and scalability engineering.   This experience includes eight years at Siebel Systems: three spent developing application modules for the Siebel Field Service product line, and the remaining five spent on performance and scalability engineering projects for the Siebel Enterprise product line.   Yannick was frequently called upon by Siebel executives to work on escalated accounts to analyze and solve mission-critical performance and scalability problems.  He successfully enabled Siebel partners to run high-end benchmarks of approximately 30,000 users; for this work, he was awarded the highest level of recognition by Siebel executives and, most importantly, by Siebel customers.  In this webcast Yannick covers the most valuable performance-tuning best practices for Siebel customers running in Microsoft environments.

Performance methodology best-practices for Siebel running on Microsoft


MSDN Webcast: Building Smart Client Applications Using Synchronization Services for ADO.NET (Level 200) Monday, June 11, 2007. 11:00 AM Pacific Time (US & Canada)

Even in an increasingly connected world, offline applications are still a necessity. Attend this webcast to learn how you can build an offline smart client application that synchronizes a backend database with a local Microsoft SQL Server 2000 Windows CE Edition store using the Microsoft Synchronization Services for ADO.NET.

MSDN Webcast: Building Smart Client Applications Using Synchronization Services for ADO.NET (Level 200)


TechNet Webcast: Introduction to SQL Server 2005 Analysis Services (Level 200) Monday, June 18, 2007. 9:30 AM Pacific Time (US & Canada)

Microsoft SQL Server 2005 Analysis Services includes new capabilities that enable you to deliver rich and powerful analytics to your organization.  In this session, we demonstrate how you can use SQL Server Analysis Services to build a highly scalable business intelligence (BI) infrastructure that supports the analytics your business needs to improve overall performance. During the discussion, we cover the Unified Dimensional Model (UDM), proactive caching, multidimensional expressions (MDX), and new manageability enhancements.

TechNet Webcast: Introduction to SQL Server 2005 Analysis Services (Level 200)


TechNet Webcast: How Microsoft IT Uses SQL Server 2005 to Power a Global Forensic Data Security Tool (Level 300) Tuesday, June 19, 2007. 9:30 AM Pacific Time (US & Canada)

The Microsoft Information Security team uses an internal tool called Information Security Consolidated Event Management (ICE) version 3.0 to gather forensic data from more than 85 proxy servers around the world.  Powered by Microsoft SQL Server 2005, this 27-terabyte data management system collects different types of global evidence, such as inbound and outbound e-mail traffic, log-on events, and Web browsing, into a single database.  Join this session to learn how ICE 3.0 provides rapid security analysis and queries, robust proxy performance analysis, and extensive troubleshooting functionality.  We discuss how table-level partitions allow event processing and database loading without affecting the ongoing query activities and analysis work of security forensics engineers.  Storage area networks (SAN) manage enterprise storage for ICE 3.0, and the service maintains an uptime of nearly 99.9%.  A controller application dynamically launches multiple instances of Microsoft SQL Server Integration Services (SSIS), which seamlessly responds to changes in the volume of incoming evidence.  This tool was redesigned in 2006 using SQL Server 2005.  Join us for a closer look at how this SQL Server upgrade enhanced the data feed process and query response time, expanded the data retention period, and reduced manual data loading and data validations.

TechNet Webcast: How Microsoft IT Uses SQL Server 2005 to Power a Global Forensic Data Security Tool (Level 300)


TechNet Webcast: Real-Time Business Intelligence with SQL Server 2005 Analysis Services (Level 300) Wednesday, June 20, 2007.  9:30 AM Pacific Time (US & Canada)

There is an increasing demand to make business intelligence (BI) real time; however, making this happen can be difficult.  Join this webcast as we discuss the challenges of real-time BI and show how some obstacles can be overcome with help from new features in Microsoft SQL Server 2005 Analysis Services, along with other components of Microsoft SQL Server 2005.  Although some barriers that prevent complete real-time BI still exist, SQL Server 2005 can make your information close enough to real time to support better and faster business decisions.

TechNet Webcast: Real-Time Business Intelligence with SQL Server 2005 Analysis Services (Level 300)


TechNet Webcast: Introduction to Data Mining with SQL Server 2005 (Level 200) Monday, June 25, 2007.  9:30 AM Pacific Time (US & Canada)

Join this webcast to find out how Microsoft SQL Server 2005 can help you dramatically expand your data mining capabilities.  In this session, we introduce you to the possibilities of data mining and predictive analytics.  Learn about the data mining process and the specific mining capabilities present in SQL Server 2005.

TechNet Webcast: Introduction to Data Mining with SQL Server 2005 (Level 200)


Momentum Webcast: Selecting the Right SQL Server 2005 Edition to Fit Your Business (Level 200) Monday, June 25, 2007.  11:00 AM Pacific Time (US & Canada)

Microsoft SQL Server 2005 is available in a variety of editions to best match the demands of your business applications.  Attend this session to learn about the differences between editions of SQL Server, especially Microsoft SQL Server 2005 Enterprise Edition and Standard Edition, so you can avoid the common mistake of choosing the wrong edition to run your mission-critical applications. In this webcast, we present the value of SQL Server 2005 Enterprise Edition by describing its exclusive features and benefits, and by demonstrating practical usage scenarios. Join this presentation as we compare the capabilities of SQL Server 2005 Enterprise Edition with those available in SQL Server 2005 Standard Edition.

Momentum Webcast: Selecting the Right SQL Server 2005 Edition to Fit Your Business (Level 200)


Further on-demand webcasts are available here: http://www.microsoft.com/events/series/technetsqlserver2005.mspx


New KBs


SQL Server 2000 Standard Edition


FIX: You may receive an assertion or database corruption may occur when you use the bcp utility or the "Bulk Insert" Transact-SQL command to import data in SQL Server 2000


FIX: Error message when you use the sp_addarticle stored procedure to add articles to an existing transactional replication publication in SQL Server 2000: “Cannot create a file when that file already exists”


FIX: The Sqldumper.exe utility cannot generate a filtered SQL Server dump file when you use the Remote Desktop Connection service or Terminal Services to connect  to a Windows 2000 Server-based computer in SQL Server 2000




FIX:  The sp_droparticle stored procedure performs very slowly if the @immediate_sync property is set to TRUE in SQL Server 2000 Service Pack 4




SQL Server 2005 Standard Edition


FIX: Failed assertion message in the Errorlog file when you perform various operations in SQL Server 2005: "Failed Assertion = 'fFalse' Attempt to access expired blob handle (3)"


FIX: SQL Server 2005 does not reclaim the disk space that is allocated to the temporary table if the stored procedure is stopped


The Batch Requests/sec counter of the SQL Statistics performance object increases one time for each RPC in SQL Server 2005 Service Pack 2


List of known issues when you install SQL Server 2005 on Windows Server Code Name "Longhorn"


How to use Windows Management Instrumentation in SQL Server 2005 to change the account of the SQL Server service or the password of the SQL Server service programmatically


A user who is not a member of a required role can still connect to a SQL Server 2005 database after you set the user access option for the database to RESTRICTED_USER


After you install a 64-bit version of SQL Server 2005 on a 64-bit version of Windows, the 32-bit entries for SQL Native Client Configuration and for SQL Server 2005 Network Configuration appear in SQL Server Configuration Manager


FIX: The query performance is slow when you run a query that  uses a user-defined scalar function against an instance of SQL Server 2005


FIX: High CPU utilization by SQL Server 2005 may occur when you use NUMA architecture on a computer that has an x64-based version of SQL Server 2005 installed


FIX: Event ID: 1 may be logged when you use a Volume Shadow Copy Service (VSS) writer to back up a snapshot in SQL Server 2005


FIX: When you run the "dbcc dbreindex" command or the "alter index" command, some transactions are not replicated to the subscribers in a transactional replication in SQL Server 2005



SQL Server 2005 Enterprise Edition


FIX: Indexes that are rebuilt after you run a maintenance plan that contains the Rebuild Index task have the same Ignore Duplicate Key setting as the Ignore Duplicate Key setting of the Rebuild Index task



SQL Server 2005 Workgroup Edition


FIX: Error message when you install Microsoft Dynamics CRM 3.0: “Setup failed to validate specified Reporting Services Report Server”



SQL Server 2000 Analysis Services


FIX: An access violation may occur in SQL Server 2000 Analysis Services when you try to use a 2007 Office system program to open an offline local cube



SQL Server 2005 Analysis Services


FIX: Error message when you run an MDX query in SQL Server 2005 Analysis Services: "An unexpected error occurred (file 'pffilestore.cpp', line 3267, function 'PFFileStore::HandleDataPageFault')"


Security permissions that are associated with the existing roles may reset when you use the Analysis Services Deployment Wizard to deploy a SQL Server 2005 Analysis Services project




This content is for informational purposes only. Microsoft makes no warranties express or implied, as to the information in this document. If you are a customer of Microsoft, please refer to the terms and conditions which cover the provision of support and consulting services to you/your organization.  If you are not corresponding in the course of, or in connection with a Microsoft contract or program with its own terms and conditions, please note that no liability is accepted by Microsoft for the contents of this document. 

This TRM section was created by Gareth Ford


This applies to SQL Server 2005 only. In the past we've had to use triggers to keep track of update activity, but it has been impossible to determine when a table was last queried.

Enter the Dynamic Management View (DMV) sys.dm_db_index_usage_stats; this holds a number of useful columns, of specific interest being last_user_update, last_user_seek and last_user_scan.
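Those columns can be queried directly; a minimal sketch for the current database (object_name() resolves the object_id in the DMV back to a table name):

```sql
select object_name( object_id ) as table_name,
       index_id,
       last_user_seek, last_user_scan, last_user_update
from sys.dm_db_index_usage_stats
where database_id = db_id()
```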

Note: This view is reset when SQL Server is restarted, so it can only tell you about activity since the instance last started.
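To know how far back the statistics can possibly go, you can check when the instance started; a common trick on SQL Server 2005 (the assumption being that the login_time of spid 1 approximates the instance start time):

```sql
-- spid 1 logs in when the instance starts, so its login_time
-- tells you the earliest point the DMV can cover.
select login_time
from master..sysprocesses
where spid = 1
```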

It even works for heaps (no index!).


create table test (

    id  int not null primary key,

    blah int )


create table heap (

    id  int not null,

    blah int )


Now query the table...

select *

from test


Check the DMV...

select *

from sys.dm_db_index_usage_stats

where database_id = db_id( 'readpasttest' )


As you can see, last_user_scan will be set to now.

Now try an insert...

insert test values( 1, 1 )


Check the DMV...

select *

from sys.dm_db_index_usage_stats

where database_id = db_id( 'readpasttest' )


You will see that last_user_update is now populated. 
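The heap table created earlier shows up in the same DMV; a minimal sketch (index_id = 0 is how SQL Server identifies the heap itself):

```sql
-- Scan the heap, then check the DMV for it.
select * from heap

select object_name( object_id ) as table_name, index_id, last_user_scan
from sys.dm_db_index_usage_stats
where database_id = db_id()
  and object_id = object_id( 'heap' )
-- index_id = 0 (no index), yet last_user_scan is still tracked
```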

To summarise: if you don't mind that the DMV is reset on SQL Server startup, then this is a good, lightweight method of capturing when a table was last accessed or updated. If you do, you'll have to use a trigger and a column on the table itself - but note that a trigger can only capture when a table is updated or inserted into, not when it is read.
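For completeness, the trigger alternative can be sketched like this (the last_updated column and trigger name are hypothetical, and as noted above this only captures inserts and updates, never reads):

```sql
-- Hypothetical sketch: persist the last-modified time on the table itself,
-- so it survives instance restarts (unlike the DMV).
alter table test add last_updated datetime null
go

create trigger trg_test_last_updated on test
after insert, update
as
    -- Stamp only the rows touched by this statement.
    update t
    set    last_updated = getdate()
    from   test t
    join   inserted i on i.id = t.id
go
```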