April 2007 - Posts

A lot of people, myself included, would consider this the job of a lifetime - it will give you the opportunity to be at the leading edge of Microsoft technology, give access to a ton of highly experienced SQL resources, a chance to meet a ton of people within the SQL Server field and be a lot of fun (and hard work). There is one missing requirement, that is - when it's a conference like TechEd / IT Forum where I go - you need the ability to consume lots of food and drink and be willing to stay up to the early hours gassing about SQL :)


Microsoft Developer and Platform Evangelism (DPE) group is looking for an IT Pro Evangelist (ITE) specialising in SQL Server/business intelligence. ITEs are excellent at driving community engagement to increase the satisfaction and loyalty of Microsoft’s IT Professional audience.

The ideal candidate will know IT from the Helpdesk to the CIO’s office.  IT Evangelists must be able to talk about technology intelligently and enthusiastically to IT Professionals, IT managers and senior management. Candidates should be comfortable operating in large data centres and deploying global technology solutions. The candidate will have a lively mind, be a reliable team worker who can work under their own initiative, lead others, be creative and innovative.  The candidate should have an engaging personality and be people focused with an excellent grounding in technology and will have exceptional communication, interpersonal and presentation skills. 

Responsibilities include:

· Building and strengthening relationships between Microsoft and IT professionals by developing and delivering compelling face-to-face and online events.
· Engaging with local and online IT Pro communities as a member, facilitator and leader.
· Identifying and building relationships with key IT influencers across a range of industries and communities.
· Acting as liaison between Microsoft and the IT Pro audience.  Encouraging early adoption of our next set of products, finding customer advocates and success stories.
· Ensuring effective engagement with our broad IT Pro audience to enthuse them about SQL and BI so that they become early adopters of our BI technologies

Candidates must have a demonstrated ability to:

· Collaborate effectively across groups and as a member of virtual teams.
· Deliver engaging and compelling communications, both written and spoken.
· Favourably influence an audience of intelligent technology professionals
· Understand enterprise global IT requirements and translate technology solutions into business value propositions.


· Applicants should have a strong technical background working in IT
· Candidates will ideally have worked in a large IT department or Data centre performing in at least one of the following roles: IT Manager, Infrastructure Specialist, Application or Line Of Business (LOB) Database Analyst (DBA)
· An IT consulting background is highly desirable
· Comprehensive technical knowledge in Database and Business Intelligence (BI) technologies
· Candidates must be competent across Microsoft products and tools and also have deep technical knowledge on SQL Server and BI
· Must have a good understanding of security, reliability, scalability and platform management topics
· The individual will need to maintain knowledge about multiple versions of the same product and speak compellingly on future releases
· Must have a solid understanding of competitive product offerings and how to differentiate Microsoft from its competitors
· Graduate/Graduate calibre

Took delivery of my new laptop (HP NW9440) on Tuesday, put 64bit Vista on there and realised I needed my 3g card to work as well - doh.

So, reinstalled the 32bit; can I get either of these damn 3g cards going - can I hell; the T-Mobile and Globetrotter worked fine on my old laptop (well, my 2 year old 32bit DELL Inspiron 9300).

The T-Mobile one just blue screens with something about trying to access a non-paged page, and the Vodafone one just plain doesn't work.

I'm on my 4th install of Vista, the system restore points seem to work - but I needed to make sure nothing was there while testing.

Does anybody have a solution? My only way forward I can see is to either dual boot with XP or just go back to XP.

Anybody who gives me a solution that works I'll give them the set of Inside SQL 2005 books (3 of them).




Prior to SQL Server 2005 we had to do a lot of monitoring to see what processes are blocking and what they are executing; this was always a pain. In SQL Server 2005 they introduced a technology called ‘Event Notifications’: numerous events fire within the SQL Server engine – these are all traceable, or can be captured and plonked on a Service Broker queue. Don’t be put off by this new technology; once you work out what to do and get the plumbing sorted it’s amazingly easy.

The BLOCKED_PROCESS_REPORT event can be enabled (sp_configure); this event fires when a process is blocked for more than a configurable amount of time. Once fired, the event data contains the input buffers of the waiter and the blocker. Note that although the chain can contain lots of spids, the event only ever contains two connections; for example, if the head of the blocking chain is spid 90 and you have 3 blocked connections, where spid 95 is waiting on spid 93 and spid 93 is waiting on spid 90, then you will get 2 reports – one containing spids 95 and 93 and one containing spids 93 and 90. Like everything else, if we do the work we can get the complete chain.
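If you do want to stitch the whole chain together yourself, a minimal sketch against the SQL Server 2005 DMVs might look like this (illustrative only – it has to run while the block is actually in progress):

```sql
--  Sketch: walk the live blocking chain using sys.dm_exec_requests
WITH chain ( session_id, blocked_by, chain_level )
AS
(
    --  Anchor: blocked sessions whose blocker is not itself blocked
    --  (i.e. the first link below each head of chain)
    SELECT r.session_id, CAST( r.blocking_session_id AS int ), 1
    FROM sys.dm_exec_requests r
    WHERE r.blocking_session_id <> 0
      AND r.blocking_session_id NOT IN (
            SELECT session_id FROM sys.dm_exec_requests
            WHERE blocking_session_id <> 0 )

    UNION ALL

    --  Recursive part: sessions waiting on a session already in the chain
    SELECT r.session_id, CAST( r.blocking_session_id AS int ), c.chain_level + 1
    FROM sys.dm_exec_requests r
        INNER JOIN chain c ON r.blocking_session_id = c.session_id
)
SELECT session_id, blocked_by, chain_level
FROM chain
ORDER BY chain_level, session_id
```

Each row gives you a waiter, who it waits on, and how deep down the chain it sits.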

You should also note that the <inputbuf> element contains the current input buffer and not the original statement that took the locks and is causing the block.

Let’s get it going.

Step 1:  Configure the server to get this feature working.

The sp_configure option ‘Show Advanced Options’ needs to be enabled, if it is not already then run this.

sp_configure 'show advanced options', 1

RECONFIGURE






Now we need to set our blocked process report capture threshold; this is the number of seconds a block must be in effect before an event fires. For example, setting this to 10 tells SQL Server to fire an event whenever a block is 10 seconds old, or 20 seconds, or 30 seconds etc... if the block lasted 60 seconds then you’d get 6 events.

--  Now, set the blocked process threshold to 10 seconds

sp_configure 'blocked process threshold', 10

RECONFIGURE
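You can confirm both settings took effect with the standard catalog view (value_in_use is the running value):

```sql
--  Check the configured vs running values
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ( 'show advanced options', 'blocked process threshold' )
```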





That’s the server ready. Now we can create a database to hold our Service Broker queue; this can be any user database, but I prefer to create my own management database so that it is separate from the application databases.




CREATE DATABASE DBAEventManagement
GO

USE DBAEventManagement


We now need to enable Service Broker in this database:

ALTER DATABASE DBAEventManagement SET ENABLE_BROKER



Ok, what is Service Broker? In a nutshell it’s a messaging technology: you have a Service Broker queue which holds your messages, and a Service Broker service which manages transmission to the queue. The queue is simply rows in a database table, with an XML column message_body holding the data you’ve sent. The technology offers significantly more than that, but that is way outside this blog entry.

We start by creating our Service Broker queue.

--  Create a service broker queue to hold the events

CREATE QUEUE syseventqueue


We now need a service that can accept incoming messages and store them on our queue. Service Broker has the concept of contracts: a contract is an agreement between the two communicating services stating that only certain message types will be used. A message type describes the format your data is in – for example, XML conforming to a specific schema.

--  Create a service broker service to receive the events

CREATE SERVICE syseventservice

    ON QUEUE syseventqueue ( [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification] )


As you can see we have used a specific contract that states that you should use PostEventNotification message types against this service.

We now have our working Service Broker queue, so we can add our notification.

CREATE EVENT NOTIFICATION notify_locks
    ON SERVER
    WITH fan_in
    FOR blocked_process_report
    TO SERVICE 'syseventservice', 'current database';


Note the space in ‘current database’. I just spent 3 hours wondering why nothing was happening, no errors, nothing, and it’s because I’d written current_database – a very easy typo to make when you are so used to object names!

The statement creates a notification called notify_locks (it should really be ‘blocks’ but heyho). It is server scoped, which means it captures any BLOCKED_PROCESS_REPORT event firing in this instance of SQL Server. DDL event notifications can instead be scoped to a specific DATABASE, but trace events like BLOCKED_PROCESS_REPORT can only be captured at server scope, so if you have production and qa/dev environments on the same SQL instance and want different processing per database you will need to filter on the DatabaseID in the event data yourself.

We can now do a test, first let’s set up a test table...

CREATE TABLE test ( mycol int )

INSERT test ( mycol ) VALUES( 1 )



Now, in connection A run this...

--  Connection A

BEGIN TRAN

UPDATE test SET mycol = 5


Now, in connection B run this...

--  Connection B

SELECT * FROM test
The block needs to be in place for at least 10 seconds or whatever you’ve set your threshold at.

We can check the broker queue by ‘browsing’ it thus:

SELECT cast( message_body as xml ), *

FROM syseventqueue

Click on one of the entries and you’ll see XML like this (abridged – the report sits inside an <EVENT_INSTANCE> wrapper containing elements such as <SessionLoginName />)...

<blocked-process-report monitorLoop="12813">
  <blocked-process>
    <process id="processf88988" taskpriority="0" logused="0" waitresource="RID: 28:1:73:0" waittime="13734" ownerId="1740293" transactionname="SELECT" lasttranstarted="2007-04-06T10:47:05.680" XDES="0xffffffff802ac9a8" lockMode="S" schedulerid="2" kpid="3508" status="suspended" spid="60" sbid="0" ecid="0" priority="0" transcount="0" lastbatchstarted="2007-04-06T10:47:05.680" lastbatchcompleted="2007-04-06T10:47:05.603" clientapp="Microsoft SQL Server Management Studio - Query" hostname="TONYWS" hostpid="2668" loginname="TORVER\tonyrogerson" isolationlevel="read committed (2)" xactid="1740293" currentdb="28" lockTimeout="4294967295" clientoption1="671090784" clientoption2="390200">
      <executionStack>
        <frame line="2" stmtstart="36" sqlhandle="0x020000001ed3611e4589a687c58341acf7c4be9daeda7044" />
      </executionStack>
      <inputbuf>
--  Connection b
select * from DBAEventManagement..test
      </inputbuf>
    </process>
  </blocked-process>
  <blocking-process>
    <process status="sleeping" spid="59" sbid="0" ecid="0" priority="0" transcount="1" lastbatchstarted="2007-04-06T10:47:16.837" lastbatchcompleted="2007-04-06T10:47:16.837" lastattention="2007-04-06T10:42:35.973" clientapp="Microsoft SQL Server Management Studio - Query" hostname="TONYWS" hostpid="2668" loginname="TORVER\tonyrogerson" isolationlevel="read committed (2)" xactid="1739624" currentdb="28" lockTimeout="4294967295" clientoption1="671090784" clientoption2="390200">
      <executionStack />
      <inputbuf>
SELECT cast( message_body as xml ), *
FROM syseventqueue
      </inputbuf>
    </process>
  </blocking-process>
</blocked-process-report>


Look at the second <inputbuf> – it’s not our UPDATE! That is because the report contains the last input buffer for the blocking connection, and because I am doing the monitoring and the UPDATE in the same connection I get the SELECT ... FROM syseventqueue statement instead.

The <process> node is probably the most important here; it shows you what program and host is causing the problem. Because you have the xactid you can track it down through SQL Profiler if you’d been tracing SQLTransaction as well (another blog post on that I guess – the list gets longer...).

Anyway, back on track – we have just browsed the queue; we need to actually take our messages off the queue, otherwise the queue is going to grow and grow. Enter the RECEIVE statement.

DECLARE @msgs TABLE (   message_body xml not null,

                        message_sequence_number bigint not null );


RECEIVE message_body, message_sequence_number

FROM syseventqueue

INTO @msgs;


SELECT message_body,

       DatabaseId = cast( message_body as xml ).value( '(/EVENT_INSTANCE/DatabaseID)[1]', 'int' ),

       Process    = cast( message_body as xml ).query( '/EVENT_INSTANCE/TextData/blocked-process-report/blocked-process/process' )

FROM @msgs

ORDER BY message_sequence_number

The above example creates a table variable to hold the data pulled from the queue; the RECEIVE literally receives all the messages off the queue. You don’t have to do it that way – there are lots of options (see BOL). The SELECT is simply querying the XML: in the first instance we are getting the integer value of the DatabaseID element using the XML .value() method; to get the Process column we simply use the XML .query() method and get the XML for the process information.

From this you now have access to the data from the event and you can do anything, I tend to use Database Mail (sp_send_dbmail) to email me when there is a block, but I do put logic in there to buffer the emails sent – I don’t want an email every 10 seconds for the same block!
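A minimal sketch of that buffering pattern is below; the mail profile name, recipient and throttle table are all hypothetical, and a real version would put the event details into the mail body:

```sql
--  Hypothetical throttle table: remember when we last mailed about a block
CREATE TABLE block_mail_log ( last_sent datetime not null )
INSERT block_mail_log ( last_sent ) VALUES( '19000101' )
GO

--  Only send a mail if we haven't sent one in the last 5 minutes
IF NOT EXISTS ( SELECT *
                FROM block_mail_log
                WHERE last_sent > DATEADD( minute, -5, GETDATE() ) )
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail',             --  hypothetical profile
        @recipients   = 'dba@example.com',      --  hypothetical recipient
        @subject      = 'Blocked process report received',
        @body         = 'Check the syseventqueue output for details.'

    UPDATE block_mail_log SET last_sent = GETDATE()
END
```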

One last thing to note, and it’s really important: when the event notification fires, if there is an error delivering your event to the service (for instance if you’d set up a remote service in a centrally managed environment) then the notification literally gets dropped, just as if DROP EVENT NOTIFICATION had been issued (you do get an informational message in the SQL Server errorlog though). So, make sure you build something in that checks the notifications are still there; for server scoped events...

SELECT * FROM sys.server_event_notifications
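For example, a scheduled Agent job could run something along these lines (sketch only – you might recreate the notification instead of just raising an error):

```sql
--  Alert if our notification has silently been dropped
IF NOT EXISTS ( SELECT *
                FROM sys.server_event_notifications
                WHERE name = 'notify_locks' )
    RAISERROR( 'Event notification notify_locks is missing - recreate it!', 16, 1 )
```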

Well, that’s it folks – enjoy; I hope you find this useful. It’s in the product and greatly underused, and BLOCKED_PROCESS_REPORT is not the only event you can capture – there are a ton of others.


Database mirroring is a really cool technology introduced in SQL Server 2005 to simplify log shipping and give greater availability and data redundancy at the database level. In this article I talk about the various modes of operation, how it works, discuss statements (INSERT, UPDATE, DELETE, CREATE INDEX, ALTER INDEX REORGANIZE which replaces DBCC INDEXDEFRAG etc...) and their effect on logging. I also look in depth at how the performance of your online system can be significantly impacted by latencies in network performance, and how to overcome or reduce some of these problems.


This has sat unfinished for 3 months because of my client workload, so, I'm breaking it up into parts, this is the first part - the configuration and planning bit, the bit that makes sure you know what your performance is going to be 'like', more 'parts' when I get time.


There are two flavours – synchronous (full safety) and asynchronous. Synchronous basically waits until the log records are applied and written to disk on the mirror before allowing the principal to commit or roll back a workload – I’ll discuss this in detail as we go through.


Database mirroring is only available in the Standard and Enterprise editions of SQL Server; Standard edition only supports synchronous (full safety) and is also limited to a single redo thread (discussed later).


With synchronous mode you have the option of using a witness, I’m not going to talk about that as it has been covered so many times and books online does it in depth as well.


Forming a baseline for performance

1 – The Disk Bits

At this point you need to understand the range in terms of the size of a block of data written to disk when a log flush event occurs, when writing to the transaction log SQL Server writes in blocks ranging from 512bytes through to 64Kbytes – this is explained in another blog entry of mine on Transaction Log Performance.


We need to see what the disk subsystem on each machine is capable of without mirroring in place; we do this by running a number of tests simulating various log flush sizes.


First create a database on each machine, correct the drive letters so the MDF and LDF are placed on the drives you use for the data and logs…



CREATE DATABASE IOMeter ON PRIMARY
( NAME = N'IOMeter', FILENAME = N'e:\IOMeter.mdf' , SIZE = 1048576KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'IOMeter_log', FILENAME = N'g:\IOMeter_log.ldf' , SIZE = 1048576KB , FILEGROWTH = 10%)


EXEC dbo.sp_dbcmptlevel @dbname=N'IOMeter', @new_cmptlevel=90







Create the table to hold the test and result data…





CREATE TABLE mytest (

    id  int not null identity primary key clustered,

    somedata char(4000) not null

)



CREATE TABLE io_results (

    test_name varchar(100) not null,

    entry_date  datetime    not null default( getdate() ),

    write_ios   int         not null,

    write_bytes bigint      not null,

    transaction_size int    not null,

    ios_per_row decimal( 10, 3 ) not null,

    ios_per_xact decimal( 10, 3 ) not null,

    bytes_per_io decimal( 10, 3 ) not null,

    mbytes_per_sec decimal( 10, 3 ) not null,

    rows int not null,

    commits int not null,

    duration_seconds int not null

)


Now our test proc, this stored procedure is what we call to do the test…


CREATE PROC test_perf

    @test_name varchar(100),

    @tran_batch_size int

AS




    IF @@TRANCOUNT > 0

        ROLLBACK            --  Make sure no transaction open


    WAITFOR DELAY '00:00:05'    --  System settle, helps seperate stuff in PERFMON




    CHECKPOINT                  --  Write any dirty pages to disk


    DBCC DROPCLEANBUFFERS       --  Empty data cache

    DBCC FREEPROCCACHE          --  Empty execution cache


    WAITFOR DELAY '00:00:05'    --  System settle, helps seperate stuff in PERFMON


    DECLARE @i int

    DECLARE @c int

    SET @i = 1

    SET @c = 0


    SELECT 'START' AS op, GETDATE() AS op_time, *

    INTO #filestats

    FROM ::fn_virtualfilestats( db_id(), 2 )




    BEGIN TRAN

    WHILE @i <= 100000
    BEGIN

        INSERT mytest ( somedata ) VALUES( CAST( @i AS char(4000) ) )

        IF @i % @tran_batch_size = 0
        BEGIN

            SET @c = @c + 1

            COMMIT TRAN

            IF @i < 100000

                BEGIN TRAN

        END

        SET @i = @i + 1

    END


    IF @@TRANCOUNT > 0

        COMMIT TRAN             --  Commit any transaction left open by the loop


    INSERT #filestats (

        op, op_time, DbId, FileId, TimeStamp, NumberReads, BytesRead,

        IoStallReadMS, NumberWrites, BytesWritten,

        IoStallWriteMS, IoStallMS, BytesOnDisk, FileHandle )

    SELECT 'END', getdate(), DbId, FileId, TimeStamp, NumberReads, BytesRead,

        IoStallReadMS, NumberWrites, BytesWritten,

        IoStallWriteMS, IoStallMS, BytesOnDisk, FileHandle

    FROM ::fn_virtualfilestats( db_id(), 2 )


    INSERT io_results (

        test_name, write_ios, write_bytes, transaction_size,

        ios_per_row, ios_per_xact, bytes_per_io, mbytes_per_sec,

        rows, commits, duration_seconds )

    SELECT @test_name,

           Write_IOs   = e.NumberWrites - s.NumberWrites,

           Write_Bytes = e.BytesWritten - s.BytesWritten,

           transaction_size = @tran_batch_size,

           IOs_Per_Row = CAST( (e.NumberWrites - s.NumberWrites) AS decimal( 10, 3 ) ) / 100000,

           IOs_Per_Xact= CAST( (e.NumberWrites - s.NumberWrites) AS decimal( 10, 3 ) ) / @c,

           Bytes_Per_IO= CAST( ( e.BytesWritten - s.BytesWritten ) AS decimal( 20, 3 ) ) / (e.NumberWrites - s.NumberWrites),

           MBytes_Per_Sec = ( CAST( ( e.BytesWritten - s.BytesWritten ) AS decimal( 20, 3 ) ) / DATEDIFF( second, s.op_time, e.op_time ) ) / 1048576,

           Rows        = (SELECT COUNT(*) FROM mytest),

           Commits     = @c,

           duration_seconds = datediff( second, s.op_time, e.op_time )

    FROM #filestats s

        CROSS JOIN #filestats e

    WHERE s.op = 'START'

      AND e.op = 'END'





Now we are ready to conduct some tests;
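For example, vary the commit batch size to simulate different log flush sizes (the test names here are just labels):

```sql
--  Single-row transactions: lots of small log flushes
EXEC test_perf 'commit per row',      1

--  Larger batches: fewer, bigger log flushes
EXEC test_perf 'commit per 10 rows',  10
EXEC test_perf 'commit per 100 rows', 100

--  Review the captured figures
SELECT * FROM io_results ORDER BY entry_date
```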



2 – The Network Bits

Before going any further you need to determine just how good your connectivity is to your mirror machine and how many “write Mbytes per second” you can expect; this is the main bottleneck in terms of system performance for write intensive or large update queries.


We are going to use IOMeter which is a free open source application; this really cool utility allows us to create a number of tests varying the “transfer request size” to simulate mirroring spewing log records across the network pipe.


On the machine that will be the Principal map a network drive to the Mirror machine e.g. NET USE Z: \\\E$


When you start IOMeter select the disk target to do the test against; this will be the drive the transaction log will reside on (step 2), or the Z: drive you mapped in the preceding step (when doing step 3). Set the maximum disk size to 4096 and leave “# of Outstanding I/Os” at 1, which simulates the same behaviour as SQL Server transaction log flushes, i.e. no outstanding I/O – stuff must be written (write through rather than write back).



Figure 1 – IOMeter


I always recommend using dedicated NICs between the two machines and a good quality cross over cable. The NICs should run at 1Gbit full duplex at least; that bandwidth allows for over 100Mbytes per second throughput, compared to 100Mbits which offers only around 12Mbytes per second. Anyway – to testing.


At this point you need to understand the range in terms of size of data transmitted between the two servers, when writing to the transaction log SQL Server writes in blocks ranging from 512bytes through to 64Kbytes – this is explained in another blog entry of mine on Transaction Log Performance.



Figure 2 – Transmission Results


The graph tells us a lot: the larger your chunk of data, the better the throughput. Before going any further – tune this! Make sure you are using the most recent drivers! Don’t be afraid of adjusting the parameters available for the NIC (do make sure you know what they represent and what they affect first though); set the speed instead of letting it auto-sense. Different NICs give different performance – for example, using my onboard NICs I get a throughput of 102Mbytes per second for 512KB. There are two registry entries you can ‘play’ with, but make sure you do it at the interface level (for the specific NIC) rather than for all NICs – the MTU size (Maximum Transmission Unit) and TCPWindowSize; the white paper “Microsoft Windows Server 2003 TCP/IP Implementation Details” gives a very thorough discussion of what these mean.


When setting up the cross over cable and NICs put them on a separate subnet, using one private address for the principal and another for the mirror; my configuration is shown below. Make sure you set the metric on the NICs so that the cross over connection has a high value, so other traffic won’t get routed through there. I also uncheck ‘register in DNS’ for those IP addresses; they are private and don’t belong on the DNS server. Either use the IP addresses directly or set up an alias in your HOSTS file in the c:\windows\system32\drivers\etc directory.


The diagram below shows my configuration.




Figure 3 – Network Topology


That’s the network, you now know what it’s capable of, so, onto your actual server – what throughput is it capable of in terms of writing to the transaction log?


Well, that's the end of part 1 - any questions? What do you expect or want to know for part 2 (and 3)?