May 2009 - Posts

Service Master Key Backup
31 May 09 10:01 AM | MartinBell | 1 comment(s)

I was just wondering how many people back up the Service Master Key (SMK) after installing SQL Server 2005 or 2008? The SMK is created the first time the SQL Server service is started after an installation. Even if you don’t use TDE, certificates or keys, the SMK is used to encrypt the credentials for linked servers, so you may not realise that it is being used. If you do start to use encryption, backing up the SMK may not be something you remember to do, assuming you are even aware that an encryption feature is being rolled out!

The command BACKUP SERVICE MASTER KEY is used to back up the SMK. See the topic “How to: Back Up the Service Master Key” in Books Online for more details. That topic says “It is usually best to create multiple copies of the backup, and to store one copy locally. The local copy can be the file you created when you backed up the service master key.” I would say that the most important thing is to store the copy safely!
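For example, a minimal backup and restore might look like this (a sketch; the file path and password are illustrative placeholders, not from the Books Online topic):

```sql
-- Back up the Service Master Key to a file, encrypted with a password
-- (path and password are placeholders - use your own)
BACKUP SERVICE MASTER KEY
    TO FILE = 'D:\Backup\ServiceMasterKey.smk'
    ENCRYPTION BY PASSWORD = '<StrongPasswordHere!>';
GO

-- Restoring it (e.g. after a server rebuild) reverses the process
RESTORE SERVICE MASTER KEY
    FROM FILE = 'D:\Backup\ServiceMasterKey.smk'
    DECRYPTION BY PASSWORD = '<StrongPasswordHere!>';
GO
```

Remember that the backup file and the password need to be stored separately and securely, otherwise the backup itself becomes the weak point.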

Scottish Area User Group Meeting 27 May 2009 - Slide decks and scripts
28 May 09 10:38 AM | MartinBell | with no comments

I thoroughly enjoyed the User Group meeting last night. I hope everyone found it as interesting and useful as I did.

As promised, I am posting Satya's slide deck along with my slide deck and examples.

I will email everyone who attended when I have finalised the details for the next meeting.

WMI Event Alerts
26 May 09 10:30 AM | MartinBell | 4 comment(s)

In my previous article on Event Notifications I showed you how to create an event notification for the DEADLOCK_GRAPH event, and how the deadlock graph could be extracted from a Service Broker queue that you have created. The example was incomplete in that it did not show how to remove the entries from the queue, so each time you had a deadlock the queue would grow.

Using WMI Event Alerts for Server Events you don’t have to worry about creating a Service Broker queue as the process uses the Service Broker service SQL/Notifications/ProcessWMIEventProviderNotification/v1.0 in msdb. A fresh installation of SQL Server 2005 or 2008 will have Service Broker enabled for msdb, but if you are not sure you can check with the query I posted in the Event Notification article.
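If you don't have that article to hand, a check along these lines will do the same job (is_broker_enabled is the relevant column in sys.databases):

```sql
-- is_broker_enabled should be 1 for msdb if Service Broker is active
SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = N'msdb';
```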

To learn more about WMI Events for Server Events check out the topic “Understanding the WMI Provider for Server Events” in Books Online.

When writing a system to monitor and archive deadlock graphs, the first thing you will need is a repository. For this I create the table dbo.DeadlockEvents in the DBAEventManagement database.

-- Check Database Exists
IF  NOT EXISTS (SELECT * FROM sys.databases WHERE name = N'DBAEventManagement')
      CREATE DATABASE DBAEventManagement ;
GO

USE DBAEventManagement ;
GO

-- Create table to hold deadlock graphs
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[DeadlockEvents]') AND type in (N'U'))
      DROP TABLE [dbo].[DeadlockEvents]
GO

CREATE TABLE [dbo].[DeadlockEvents](
      [Id] [integer] NOT NULL IDENTITY,
      [AlertTime] [datetime2] NOT NULL CONSTRAINT DF_AlertTime DEFAULT SYSDATETIME(),
      [DeadlockGraph] [xml] NULL
)
GO

It is important that you enable the option “Replace tokens for all job responses to alerts” on the Alert System tab of the SQL Server Agent properties (right-click SQL Server Agent in SSMS and choose Properties). Doing this has security implications: it is a system-wide setting, and when it is enabled any Windows user with write permission on the Windows Event Log can access job steps that are activated by SQL Server Agent alerts or WMI alerts. See the topic “Using Tokens in Job Steps” in Books Online for more details.

Agent Properties

Make sure this option is checked for WMI Alerts to work.

Token Replacement

The next part of creating the monitoring system is to create a job to process the alert notifications. You can either do this using a script or through SQL Server Management Studio. I have created a job called “Capture Deadlock Graph”.

Deadlock Graph Job

The job has a single step, “Insert graph into DeadlockEvents”, which executes a T-SQL INSERT statement to extract the TextData property from the WMI alert:

INSERT INTO DeadlockEvents(DeadlockGraph)
VALUES (N'$(ESCAPE_SQUOTE(WMI(TextData)))')

Any single quotes within the token are escaped to avoid syntax issues.

Deadlock Graph Job Step

Once the job has been created you can create the alert. You will find a script to do this here, or you can use SSMS: right-click the Alerts node under SQL Server Agent and choose New Alert. I’ve called my alert “Respond to DEADLOCK_GRAPH”; the type should be set to WMI event alert. Leave the namespace as \\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER and enter the WQL query:

SELECT * FROM DEADLOCK_GRAPH

Adding WMI Alert

The final thing is to set up the job as the response to the alert being fired. To do this choose the Response tab, tick the Execute job checkbox and select your job from the drop-down. You can also set up a notification so that you are emailed, paged or sent a network message when the alert fires. This will allow you to be pro-active in monitoring your system.
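If you prefer to script the alert rather than use SSMS, a minimal sketch using msdb.dbo.sp_add_alert might look like this (the alert and job names follow the ones used above; the namespace assumes a default instance, and the downloadable script may differ in detail):

```sql
-- Sketch: create the WMI alert and attach the job as its response
EXEC msdb.dbo.sp_add_alert
    @name = N'Respond to DEADLOCK_GRAPH',
    @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
    @wmi_query = N'SELECT * FROM DEADLOCK_GRAPH',
    @job_name = N'Capture Deadlock Graph';
GO
```

For a named instance, replace MSSQLSERVER at the end of the namespace with the instance name.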

Now everything is in place you can test it using the deadly embrace scripts I posted in the Event Notification blog post Deadlock_Process_1.sql and Deadlock_Process_2.sql.

Be aware that there may be a delay between the event happening and it being registered. You can check the Alert History under the alert’s properties to see if it fired. The number of occurrences will be incremented when the alert fires.

Alert History

If the alert has fired, check the job history for the “Capture Deadlock Graph” job. If that has worked you should have a new row in the dbo.DeadlockEvents table. If you output your query to a grid you can then open the XML in a new window.

WMI Alert XML

If you look at the XML generated, you will notice that the deadlock-list element is included in the TextData element. As with the event notification, you will need to strip out this element if you wish to create a deadlock graph file that can be opened by SSMS, e.g. to get the most recent deadlock graph use the query:

SELECT TOP 1 [DeadlockGraph].query('/TextData/deadlock-list')
FROM [DBAEventManagement].[dbo].[DeadlockEvents]
ORDER BY Id DESC
GO

To create an XML file use this query in a BCP command e.g.

BCP "SELECT TOP 1 [DeadlockGraph].query('/TextData/deadlock-list') FROM [DBAEventManagement].[dbo].[DeadlockEvents] ORDER BY Id DESC" queryout xmlfile.xdl -T -c -q

This will create a file xmlfile.xdl in the current directory, which can then be opened in SQL Server Management Studio.

Deadlock Graph

A script to create the job and the alert can be found here.

WMI alerts may be one of the first steps you take towards pro-actively monitoring your SQL Server 2005 and SQL Server 2008 instances.

What makes good swag?
25 May 09 05:44 PM | MartinBell | with no comments

At the last SQLBits we tried to vary some of the swag on offer, and the committee actually had a discussion about what was good and bad swag! It’s interesting to know what others think, and Craig Murphy posted a blog entry on this earlier this month: http://www.craigmurphy.com/blog/?p=1440

Although I personally would not count the competition prizes as swag, under Craig’s rules they are… so my vote for the best swag of all must be for the Wiis and Xboxes we have as prizes… and what’s more, at SQLBits there are lots of competitions to enter :)

I’ve already packed the swag for the Scottish Area User group meeting on Wednesday 27th May and there will be loads at the Leeds Area user group meeting on Wednesday 3rd June as well.

GUID Fragmentation in SQL Server
25 May 09 12:47 PM | MartinBell | 3 comment(s)

I previously looked at the different methods you can use to create guids and how UuidCreateSequential does not order guids in the same way as SQL Server. When the NEWSEQUENTIALID() function was released with SQL Server 2005 it was touted as the solution to the problem of page splits when you use random guids. I therefore want to show how the different methods of generating guids affect fragmentation.

To do this I am re-using the test harness I created in the GUID Ordering article. You can use the dynamic management function sys.dm_db_index_physical_stats to see the fragmentation of the clustered index (primary key) on GuidTable. I created a table IndexStats to save the index information.

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[IndexStats]') AND type in (N'U'))
      DROP TABLE [dbo].[IndexStats]
GO

CREATE TABLE [dbo].[IndexStats](
      [IndexColumn] sysname NOT NULL,
      [database_id] [smallint] NULL,
      [object_id] [int] NULL,
      [index_id] [int] NULL,
      [partition_number] [int] NULL,
      [index_type_desc] [nvarchar](60) NULL,
      [alloc_unit_type_desc] [nvarchar](60) NULL,
      [index_depth] [tinyint] NULL,
      [index_level] [tinyint] NULL,
      [avg_fragmentation_in_percent] [float] NULL,
      [fragment_count] [bigint] NULL,
      [avg_fragment_size_in_pages] [float] NULL,
      [page_count] [bigint] NULL,
      [avg_page_space_used_in_percent] [float] NULL,
      [record_count] [bigint] NULL,
      [ghost_record_count] [bigint] NULL,
      [version_ghost_record_count] [bigint] NULL,
      [min_record_size_in_bytes] [int] NULL,
      [max_record_size_in_bytes] [int] NULL,
      [avg_record_size_in_bytes] [float] NULL,
      [forwarded_record_count] [bigint] NULL,
      [compressed_page_count] [bigint] NULL
)
GO

Originally the GuidTable from my GUID Ordering article had the identity value as the primary key, so I would not expect any fragmentation to occur on this index. After the table was populated I ran the SQL statement:

INSERT INTO [dbo].[IndexStats]
           ([IndexColumn]
           ,[database_id]
           ,[object_id]
           ,[index_id]
           ,[partition_number]
           ,[index_type_desc]
           ,[alloc_unit_type_desc]
           ,[index_depth]
           ,[index_level]
           ,[avg_fragmentation_in_percent]
           ,[fragment_count]
           ,[avg_fragment_size_in_pages]
           ,[page_count]
           ,[avg_page_space_used_in_percent]
           ,[record_count]
           ,[ghost_record_count]
           ,[version_ghost_record_count]
           ,[min_record_size_in_bytes]
           ,[max_record_size_in_bytes]
           ,[avg_record_size_in_bytes]
           ,[forwarded_record_count]
           ,[compressed_page_count])
SELECT 'id',* FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('GuidTable'), 1, DEFAULT, DEFAULT ) ;
GO

By truncating the table, changing the primary key to each of the columns in turn, and then re-running the test harness, I could see the effect of having a clustered index on random and sequential guids, e.g.

TRUNCATE TABLE GuidTable;
GO
ALTER TABLE GuidTable DROP CONSTRAINT PK_GuidTable ;
GO
ALTER TABLE GuidTable ADD CONSTRAINT PK_GuidTable PRIMARY KEY (newidguid) ;
GO

/* Run the test harness */

The complete script is here.

 

After running the test harness 6 times, I ran the following query to obtain results for leaf-level fragmentation:


SELECT [IndexColumn]
      ,[avg_fragmentation_in_percent]
      ,[fragment_count]
      ,[avg_fragment_size_in_pages]
      ,[page_count]
      ,[avg_page_space_used_in_percent]
      ,[record_count]
FROM [dbo].[IndexStats]
WHERE index_level = 0 ;
GO

Index Column                   Avg Fragmentation %   Fragment Count   Avg Fragment Size (Pages)   Page Count   Avg Page Space Used %   Record Count
id                             4.35                  7                16.43                       115          99.89                   10000
newidguid                      98.77                 162              1                           162          70.90                   10000
newsequentualidguid            4.35                  7                16.43                       115          99.89                   10000
dotnetguid                     98.78                 164              1                           164          70.04                   10000
dotnetsequentialguid           98.73                 157              1                           157          73.16                   10000
dotnetmodifiedsequentialguid   4.35                  7                16.43                       115          99.89                   10000

This shows that the amount of fragmentation for random guids is very significant (it is recommended that the fragmentation percentage be as near to zero as possible). The number of pages used by random guids is about 40% higher and the amount of space used on each page is lower, so the disc space required will increase. With a low average fragment size in pages, random guids will require more disc I/O to return the same number of pages, decreasing range scan performance.

Even using guids created with UuidCreateSequential does not significantly reduce the problem.

GUID Ordering in SQL Server
25 May 09 11:13 AM | MartinBell | 3 comment(s)

There have been a few postings on the newsgroups and forums recently about using GUIDs as primary keys. When the NEWSEQUENTIALID() function was released with SQL Server 2005, it was seen as a solution to the page split problem caused by the random guids generated by NEWID(). This is fine if you are generating the guid on the database server, but what do you do if you generate the guid on the client? According to the SQL Programmability & API Development Team Blog, NEWSEQUENTIALID() is a wrapper around the UuidCreateSequential function, but this is not entirely the case, as SQL Server orders the guids differently, as described here.

I thought I would write a small test harness to show this issue. The test harness expects a GUIDTEST database containing a GuidTable to exist. The following SQL will create this database and table:

CREATE DATABASE GUIDTEST ;
GO

USE [GUIDTEST]
GO

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[GuidTable]') AND type in (N'U'))
      DROP TABLE [dbo].[GuidTable]
GO

CREATE TABLE GuidTable ( id int not null identity,
                         newidguid uniqueidentifier not null default NEWID(),
                         newsequentualidguid uniqueidentifier not null default NEWSEQUENTIALID(),
                         dotnetguid uniqueidentifier not null,
                         dotnetsequentialguid uniqueidentifier not null,
                         dotnetmodifiedsequentialguid uniqueidentifier not null );
GO

ALTER TABLE GuidTable ADD CONSTRAINT PK_GuidTable PRIMARY KEY CLUSTERED (id) ;
GO

The test harness will populate the GuidTable with random guids using the NEWID() function and Guid.NewGuid(), sequential guids using NEWSEQUENTIALID() and UuidCreateSequential, and also sequential guids created with UuidCreateSequential that have had the byte order modified to match SQL Server.

Once the table has been populated you can use a simple SELECT statement to determine whether the order is correct or not, e.g.

Guids generated by NEWID()

SELECT * FROM GuidTable
ORDER BY newidguid;
GO

Order by NEWID() generated guid

As you can see, this will not return the results in the same order as which they were inserted.

GUIDs generated by NEWSEQUENTIALID()

SELECT * FROM GuidTable
ORDER BY newsequentualidguid;
GO

Order by NEWSEQUENTIALID() generated guid
   

As expected, the order which the rows are returned is the order the data was inserted.

Random GUIDs generated in .NET

SELECT * FROM GuidTable
ORDER BY dotnetguid;
GO

Order by .NET generated guid

Again, the order using the random guid is not the same as the order which the data was inserted.

GUIDs generated in .NET with UuidCreateSequential
 
SELECT * FROM GuidTable
ORDER BY dotnetsequentialguid;
GO

Order by .NET generated sequential guid

Even though this is a .NET sequential guid, the order returned is not the same as the order which the data was inserted.

Modified GUIDs generated in .NET with UuidCreateSequential

SELECT * FROM GuidTable
ORDER BY dotnetmodifiedsequentialguid;
GO

Order by modified .NET generated sequential guid

This time the order is the same as the order which the data was entered.

From this you can see that if you rely on the order of the guids generated, you cannot use UuidCreateSequential without changing the byte order.

When comparing guids in ADO.NET you should also be aware that the behaviour of the SqlGuid type is different to that of the Guid type (http://msdn.microsoft.com/en-us/library/ms254976(VS.100).aspx). Also, SQL Server will return the string representation of a uniqueidentifier in uppercase, but the ToString() methods of the Guid and SqlGuid types always return lowercase strings, which may cause issues when case sensitivity is required (such as in XML).
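As a quick illustration of the casing difference (a sketch, not from the original article):

```sql
-- SQL Server renders uniqueidentifiers in uppercase; .NET's
-- Guid.ToString()/SqlGuid.ToString() return lowercase, so normalise
-- the case before any case-sensitive (e.g. XML) string comparison
DECLARE @g uniqueidentifier = NEWID();
SELECT CONVERT(char(36), @g)         AS SqlServerForm,  -- uppercase
       LOWER(CONVERT(char(36), @g))  AS DotNetForm;     -- lowercase
GO
```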

Using the test harness I will also look at how the different methods of guid creation will affect fragmentation.

Community conferences are worth the trip!
25 May 09 08:47 AM | MartinBell | 1 comment(s)

It’s always interesting to see how far people travel to the SQLBits Community conference. We added a page to the SQLBits website to show where delegates originate from. I think it’s a testament to our great content when delegates are willing to come from Spain, Cyprus, Holland, Denmark and Croatia.

SQLBits sponsored a track at Developer Day Scotland and demographics posted on the Scottish Developer website also show that people are willing to travel to good quality (content) conferences.

It would be good to reduce the number of no-shows at these events; the 32% drop-out rate for DDS is quite typical of community conferences. When people don’t turn up on the day, it denies others the chance to go.

Table-valued parameters in SQL Server 2008 (Part 3)
22 May 09 07:57 PM | MartinBell | 5 comment(s)

In parts I and II of this series on TVPs I wrote about the new table-valued parameter feature in SQL Server 2008 and how easy it is to use compared to some of the methods you would have used in previous versions of SQL Server. I also mentioned the website created by Erland Sommarskog and how he has investigated the performance of the different methods used to pass an array to a stored procedure at http://www.sommarskog.se/arrays-in-sql-perftest.html. Ease of use is one reason to use TVPs, but as a DBA my focus is often on performance, and I wanted to answer the question of how TVPs perform compared to some of the methods you may have used previously, and whether it would be worthwhile converting legacy applications to use this new feature.

I was particularly interested in comparing TVPs against using XML as this was the easiest way to pass a structure (rather than a homogenous array) to a stored procedure, and I know this is a common solution being used. One scenario for this would be passing a set of tokens (“permission list”) relating to a user’s granted permissions. This “permission list” is often retrieved when a user logs onto the system, then when a stored procedure is called it is included in the parameters as an array.

For my test I wanted to see whether Table-valued parameters performed better than XML with a varying numbers of parameters. I also wanted to compare using integers and GUIDs, as GUIDs would make rows in the TVP wider, and may also be less efficient to compare.

For the integer value test, the setup script is TVPtest_int_setup.sql and for the GUID test use TVPtest_guid_setup.sql. These scripts create a database and populate the data needed to run the test in the table dbo.TestData. Executing the script TVPtest.sql runs the test, storing the results in the table dbo.TestResults. This table holds the durations for each section of the test.

For the table-valued parameter I declare a user-defined type called filtertable.

CREATE TYPE filtertable AS TABLE ( filterid uniqueidentifier  PRIMARY KEY)
GO

I then created four test procedures, one for each method of extracting the parameters.

The procedure dbo.GetDetailsWithTVP uses a table-valued parameter. This parameter is simply joined to the dbo.TestData table to return the rows with the given ids.

CREATE PROCEDURE dbo.GetDetailsWithTVP ( @tvp filtertable READONLY )
AS
     SELECT t.id, t.name
     FROM dbo.TestData t
     JOIN @tvp f ON f.filterid = t.id
GO
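Calling the procedure from T-SQL then looks something like this (a usage sketch; the guid values are arbitrary):

```sql
-- Populate a variable of the filtertable type and pass it to the procedure
DECLARE @ids filtertable;

INSERT INTO @ids (filterid)
VALUES ('EB4FB84B-E821-DE11-A008-001B38FE7298'),
       ('6C5BB84B-E821-DE11-A008-001B38FE7298');

EXEC dbo.GetDetailsWithTVP @tvp = @ids;
GO
```

From ADO.NET the same parameter is passed as a SqlDbType.Structured parameter, as covered in the earlier parts of this series.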

The procedure dbo.GetDetailsWithXML uses the nodes method of the XML datatype parameter passed to it to shred the XML and join to the dbo.TestData id column.

CREATE PROCEDURE dbo.GetDetailsWithXML ( @xmlfilter xml )
AS
     SELECT t.id, t.name
     FROM dbo.TestData t
     JOIN @xmlfilter.nodes('/root/filter') AS f(filter) ON f.filter.value('@id', 'int') = t.id
GO

The procedure dbo.GetDetailsWithXMLExists uses the exist method of the XML datatype parameter passed to it to match the given ids. In Example D of the “value() Method (xml Data Type)” topic in Books Online it says “For performance reasons, instead of using the value() method in a predicate to compare with a relational value, use exist() with sql:column().”, so I was interested to see if it did perform better.

CREATE PROCEDURE dbo.GetDetailsWithXMLExists ( @xmlfilter xml )
AS
     SELECT t.id, t.name
     FROM dbo.TestData t
     WHERE @xmlfilter.exist('/root/filter[@id = sql:column("t.id")]') = 1
GO

The procedure dbo.GetDetailsWithOpenXML uses OPENXML to shred the XML parameter and match with the given ids. This is a method that could have been used in SQL 2000.

CREATE PROCEDURE dbo.GetDetailsWithOpenXML ( @xmlfilter varchar(max) )
AS
   DECLARE @idoc  int,
           @err   int

   EXEC  @err = sp_xml_preparedocument @idoc OUTPUT, @xmlfilter
   SELECT @err = @@error + coalesce(@err, 4711)
   IF @err <> 0 RETURN @err

     SELECT t.id, t.name
     FROM dbo.TestData t
     JOIN OPENXML(@idoc, '/root/filter', 1)
              WITH (id int) AS s ON t.id = s.id

     EXEC sp_xml_removedocument @idoc
GO

MSXML was improved for SQL Server 2005, so I was interested in how well OPENXML would perform.


When the test script is executed, each set of tests will be repeated 10 times, controlled by the value in @testrepeat. Within each set of tests I call each procedure 5 times (@numrepeats) with a given number of filters between 1 and 2000 determined by @runcount. For instance if @runcount is 7 then 25 filters are used, if it is 12 then 1000 are used. The number of filters is held in @num_filters.

The given number of rows is then inserted into the @filters table variable 

     INSERT INTO @filters ( filterid )
     SELECT TOP (@num_filters) id
     FROM dbo.TestData 
     ORDER BY NEWID();

I order by NEWID() to try to vary which GUIDs are returned. The XML to be passed to the procedure is created using the statement:

SET @xmlstring = ( SELECT filterid AS [@id] FROM @filters FOR XML PATH('filter'), ROOT( 'root') ) ;

I convert this string to an XML variable so there isn’t an implicit conversion occurring when I call the procedures that take an XML parameter.

So for GUIDs if I had 3 filters my XML would be something like:

<root><filter id="EB4FB84B-E821-DE11-A008-001B38FE7298"/>
<filter id="6C5BB84B-E821-DE11-A008-001B38FE7298"/>
<filter id="315EB84B-E821-DE11-A008-001B38FE7298"/></root>

It’s then a matter of calling the procedures. I repeat each procedure call 5 times to make sure that a measurable time is recorded, taking a start time and recording the difference into the dbo.TestResults table. I use nanoseconds as the precision of the datetime2(7) column is 100 nanoseconds. Check out my previous article on the new date and time features if you wish to know more about this.

   INSERT INTO dbo.TestResults ( num_Filters, num_repeats, Method, [nanoseconds] )
   SELECT @num_filters, @numrepeats, 'TVP', 
     DATEDIFF(ns,@startdatetime,SYSDATETIME()) ;

You may also notice that I have used other new SQL 2008 features in this script such as initialisation on declaration: 

   DECLARE @runcount int = 1;

and compound operators:

   SET @repeatcount += 1 ;

Results

Each time I ran the tests I got similar results. When I summarized the results, I got something like:

Integer

Durations in nanoseconds; the last three columns are the average percentage differences compared to the other methods:

Method       Average       Maximum       Minimum       vs TVP %   vs XML %   vs OpenXML %
TVP          129092307     481000000     0
XML          143292307     507000000     0             10.99988
OpenXML      159823076     797000000     14000000      23.80527   11.5364
XML Exists   534715384     838000000     458000000     314.2117   273.1641   234.5671

GUID

Durations in nanoseconds; the last three columns are the average percentage differences compared to the other methods:

Method       Average       Maximum       Minimum       vs TVP %   vs XML %   vs XML Exists %
TVP          184592307     526000000     0
XML          191476923     626000000     0             3.729633
XML Exists   216838461     627000000     49000000      17.46885   13.24522
OpenXML      218915384     865000000     17000000      18.59399   14.3299    0.95782

Int to GUID comparison (percentage change in average duration moving from integer to GUID filters):

Method       Change %
TVP          42.992492
XML          33.626799
OpenXML      35.674063
XML Exists   -59.05946

This shows a significant improvement using TVPs compared to the XML methods, especially with integers. With GUID filters you would need to decide whether a less than 4% improvement justifies the code changes and everything that would involve; but if you are still using OPENXML, or have followed the advice in Books Online and used the exist method, it should certainly be worth considering.

If you look at the results in more detail and consider the number of filters you are likely to use, things are not necessarily so clear cut, and the summary aggregation hides the fact that TVPs were not always fastest for a given set of parameters. For instance, OPENXML didn’t fare too well overall, although it was the best performer with 25 to 250 integer filters and 5 to 25 GUID filters. The XML method came out top for 500 integer and 50 to 250 GUID filters. What is significant is the performance of using integers compared to GUIDs for all methods apart from OPENXML.

Looking at the actual query plans revealed that using the TVP and the XML nodes method both allowed a clustered index seek on the dbo.TestData table. This meant there were 6 logical reads on the table instead of 163, which is obviously beneficial to these methods.
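If you want to reproduce the logical-read figures yourself, a sketch along these lines (using the filtertable type and procedure above, with an arbitrary filter count) reports the reads on the Messages tab:

```sql
-- Compare logical reads for the TVP call
SET STATISTICS IO ON;

DECLARE @ids filtertable;
INSERT INTO @ids (filterid)
SELECT TOP (50) id FROM dbo.TestData ORDER BY NEWID();

EXEC dbo.GetDetailsWithTVP @tvp = @ids;  -- reads shown on the Messages tab

SET STATISTICS IO OFF;
GO
```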

Actual Query Plan TVP 
Figure 1 Actual Query Plan TVP

Actual Query Plan XML (nodes)
Figure 2 Actual Query Plan XML (nodes)

Actual Query Plan XML (exist)
Figure 3 Actual Query Plan XML (exist)

Actual Query Plan OPENXML
Figure 4 Actual Query Plan OPENXML

These are of course one set of results on my server. If you have your own scenario that may benefit from TVPs, try out your own tests and see if they will be beneficial.

Event Notifications
22 May 09 10:15 AM | MartinBell | 3 comment(s)

At the last Manchester SQL Server User Group one of Tony Rogerson’s tips was to enable event notifications for blocked processes. He has an excellent blog post about it here. This is very similar to the WMI alerts that I have used for monitoring and storing deadlock graphs.

You can find out which events can be used by querying the sys.event_notification_event_types catalog view. In the list you will see DEADLOCK_GRAPH, which is the event required to give the XML for the deadlock graph. Creating the event notification will be:

CREATE EVENT NOTIFICATION notify_locks
    ON SERVER
    WITH fan_in
    FOR DEADLOCK_GRAPH
    TO SERVICE 'syseventservice', 'current database' ;
GO
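To see the DEADLOCK_GRAPH event (and its relatives) in the list mentioned above, you can query the catalog view directly, e.g.:

```sql
-- List the deadlock-related event types available for event notifications
SELECT type, type_name
FROM sys.event_notification_event_types
WHERE type_name LIKE N'%DEADLOCK%'
ORDER BY type_name;
```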

An easy way to test for deadlocks is to create your own deadly embrace queries such as Deadlock_Process_1.sql and Deadlock_Process_2.sql. Open both of the files in Management Studio and start them running in quick succession.

Looking in the event queue:

SELECT cast( message_body as xml ), *
FROM syseventqueue ;
GO

This will show a message there, although saving this XML as-is to a file will not create a valid deadlock graph (xdl) file. To do that you need to extract only the deadlock list.

SELECT cast( message_body as xml ).query('EVENT_INSTANCE/TextData/deadlock-list')
FROM syseventqueue ;
GO
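Note that SELECT leaves the message on the queue, so it will grow with every deadlock. To remove the message as you read it, a RECEIVE statement along these lines can be used (a sketch, assuming the queue created by the setup script):

```sql
-- Read AND remove the next message from the queue in one statement
DECLARE @msg xml;

RECEIVE TOP (1) @msg = CAST(message_body AS xml)
FROM syseventqueue;

SELECT @msg.query('EVENT_INSTANCE/TextData/deadlock-list');
GO
```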

The script I use to set up the notification can be found here. I’ll talk about setting up WMI Alerts in a future post.

Missing Row Constructors?
21 May 09 10:23 AM | MartinBell | 10 comment(s)

Row constructors are a new feature of SQL Server 2008, but if you enter ROW CONSTRUCTOR in the index of the RTM version of Books Online only one reference is returned: the topic “INSERT (Transact-SQL)”. In the January edition there are also references in “MERGE (Transact-SQL)” and “FROM (Transact-SQL)”. There is no reference in “Programmability Enhancements (Database Engine)”, although that topic only links to “INSERT (Transact-SQL)” and not the other two!

In essence that was the reason for this blog post. If you search the internet for ROW CONSTRUCTOR you will usually only find examples of the option to specify multiple values for an INSERT statement. The example in BOL is:

USE tempdb;
GO
IF OBJECT_ID (N'dbo.Departments', N'U') IS NOT NULL
    DROP TABLE dbo.Departments;
GO
CREATE TABLE dbo.Departments (DeptID tinyint NOT NULL PRIMARY KEY, DeptName nvarchar(30),
    Manager nvarchar(50));
GO
INSERT INTO dbo.Departments
    VALUES (1, 'Human Resources', 'Margheim'),(2, 'Sales', 'Byham'),
           (3, 'Finance', 'Gill'),(4, 'Purchasing', 'Barber'),
           (5, 'Manufacturing', 'Brewer');
GO

In earlier versions of SQL Server you would have needed to use INSERT..SELECT e.g.

INSERT INTO dbo.Departments (DeptID, DeptName, Manager)
SELECT 1, 'Human Resources', 'Margheim'
UNION ALL SELECT 2, 'Sales', 'Byham'
UNION ALL SELECT 3, 'Finance', 'Gill'
UNION ALL SELECT 4, 'Purchasing', 'Barber'
UNION ALL SELECT 5, 'Manufacturing', 'Brewer';
GO

But if you look in the “FROM (Transact-SQL)” topic you will find, under the definition for derived table:

“derived_table can use the Transact-SQL row constructor (table value constructor) feature to specify multiple rows. For example, SELECT * FROM (VALUES (1, 2), (3, 4), (5, 6), (7, 8), (9, 10) ) AS MyTable(a, b);.”

SELECT *
FROM (    VALUES (1, 'Human Resources', 'Margheim'),(2, 'Sales', 'Byham'),
           (3, 'Finance', 'Gill'),(4, 'Purchasing', 'Barber'),
           (5, 'Manufacturing', 'Brewer') ) Departments (DeptID, DeptName, Manager);
GO

This would have been written in earlier versions of SQL Server as:

SELECT *
FROM (
      SELECT 1 AS DeptID, 'Human Resources' AS DeptName, 'Margheim' AS Manager
      UNION ALL SELECT 2, 'Sales', 'Byham'
      UNION ALL SELECT 3, 'Finance', 'Gill'
      UNION ALL SELECT 4, 'Purchasing', 'Barber'
      UNION ALL SELECT 5, 'Manufacturing', 'Brewer') Departments;
GO

Entering “table value constructor” in the index of BOL only gives “MERGE (Transact-SQL)” and “INSERT (Transact-SQL)”!

In the MERGE statement, row constructors can be used in the USING clause in the same way as in a derived table. So if Barber has staged a coup and taken over as manager of all the departments…

MERGE INTO dbo.Departments AS Dest
USING  (    VALUES (1, 'Human Resources', 'Barber'),(2, 'Sales', 'Barber'),
           (3, 'Finance', 'Barber'),(4, 'Purchasing', 'Barber'),
           (5, 'Manufacturing', 'Barber') ) Src (DeptID, DeptName, Manager)
ON Dest.DeptId = Src.DeptId
WHEN MATCHED THEN
      UPDATE SET DeptName = Src.DeptName,
                 Manager = Src.Manager
OUTPUT $action, inserted.*, deleted.*;
GO
 

If you look at the query plans using row constructors and the older methods, they are exactly the same. For the SELECT statements the plan is:

|--Constant Scan(VALUES:(((1),'Human Resources','Margheim'),((2),'Sales','Byham'),((3),'Finance','Gill'),((4),'Purchasing','Barber'),((5),'Manufacturing','Brewer')))

I think row constructors should have their own topic in BOL, so I created a Connect suggestion.

 
