September 2006 - Posts

One of the solutions I am looking to implement is an asynchronous logging mechanism. Whilst this is not a truly native messaging solution of the kind Service Broker has all the bells and whistles for, it can still be achieved.

There are a number of aspects that need to be addressed to implement such a solution. In looking into these I've been posting on the Service Broker forums, and anyone that's been there will have come across Roger Wolter. He has provided a great blog entry on one aspect, the use of dialogs; you can read it here: recycling dialogs. As we are just transferring data and not really exchanging messages, the reuse of dialogs is a possible solution.
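As a rough idea of what dialog reuse looks like, here is a minimal sketch; the DialogCache table and all the service, contract and message type names are hypothetical, and the Service Broker objects would need creating first:

declare @handle uniqueidentifier
declare @message xml

set @message = '<log>something happened</log>'

-- look for a dialog we opened earlier (dbo.DialogCache is a made-up table)
select @handle = handle from dbo.DialogCache where service = 'LoggingTarget'

if @handle is null
begin
    -- no cached dialog, so start one and remember its handle for next time
    begin dialog conversation @handle
        from service [LoggingInitiator]
        to service 'LoggingTarget'
        on contract [LoggingContract]
        with encryption = off

    insert into dbo.DialogCache (service, handle) values ('LoggingTarget', @handle)
end

-- send must be preceded by a semicolon-terminated statement when not first in the batch
;send on conversation @handle message type [LogMessage] (@message)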



-

This isn't about changing your air filter, spark plugs or oil. It's about what to do when, like me, you don't get things right first time.

If you're developing Service Broker applications and you misspell your services, forget to create a master key, or do any of the many things that result in your messages not being delivered, then your transmission queue will fill up.

Due to the nature of Service Broker, errors don't occur at the client; rather your messages go into the black box and are processed, and if they fail they stay in the black box. Some errors can be resolved, e.g. missing services; however others can't (or are more awkward), e.g. you've misspelt something, or not used the correct security.
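To see what's stuck and why, you can query the transmission queue directly; the transmission_status column holds the last delivery error for each message:

select conversation_handle, to_service_name, transmission_status
from sys.transmission_queue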

To clear your transmission queue you need to end the conversations, and here is a handy little script for doing just that. Be aware this ends ALL conversations in the queue and so should NEVER be used on a production system.

declare @conversation uniqueidentifier

-- loop until the transmission queue is empty
while exists (select 1 from sys.transmission_queue)
begin
    -- pick any conversation still sitting in the transmission queue
    set @conversation = (select top 1 conversation_handle from sys.transmission_queue)

    -- with cleanup drops the endpoint without notifying the other side
    end conversation @conversation with cleanup
end



-

In my investigation of Service Broker as an asynchronous engine I wanted to be able to put messages on a queue and then have a batch process take them off again.

What I found, however, was that if a message is sent to a service, e.g. from a trigger, then when my batch process receives from the queue it only gets the one message. This I can understand, but I thought there must be a way around it.

Conversation groups seemed like the likely solution: they are meant to group conversations, which sounded perfect, and the group is the key used when receiving from a queue, i.e. you can receive a whole conversation group at once.

BEGIN DIALOG has an option (RELATED_CONVERSATION_GROUP) to assign the new dialog to an existing group, and BOL states that if the group doesn't already exist one will be created, so it was looking even more promising.
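Roughly speaking, this is the shape of what I was trying; the service, contract, message type and queue names here are all made up:

declare @group uniqueidentifier
declare @handle uniqueidentifier

set @group = newid()

-- every dialog the trigger starts gets assigned to the same group
begin dialog conversation @handle
    from service [InitiatorService]
    to service 'TargetService'
    on contract [MyContract]
    with related_conversation_group = @group

;send on conversation @handle message type [MyMessage] ('<data/>')

-- on the target side, the hope was that one receive would drain the whole group
;receive conversation_handle, message_type_name, message_body
from dbo.TargetQueue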

I put it all in place, and did my RECEIVE from the queue. Only 1 row.

On further investigation the conversation group isn't passed with the messages to the target service, so what happens is that you end up with a unique conversation group for each of your conversations at the target queue.

So I've been stumped. There isn't a way for you to group messages sent via different dialogs so that they can be received in one statement from the target queue.



-
Posted by simonsabin | with no comments

I've mentioned before that it is good practice not to use the table or view option in the OLEDB source and lookups, due to returning more columns than you need and being tied to the structure (i.e. column order) of the table.

Well I've just been stung by exactly this issue on the destination as well. My package was happily inserting data until I added a column to the table; even though the column was nullable and I didn't need the package to do anything with it, the package failed.

It failed because it realised the external metadata (i.e. columns) had changed.

This means if you want to change your tables you have to recompile your packages.

I am looking at the practicality of using views as an abstraction layer, which would mean underlying tables could be changed with no impact on my SSIS packages.
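Something along these lines, with made-up names: the package would reference the view, so adding a column to the base table later wouldn't change the metadata the package sees.

create view dbo.CustomerForSSIS
as
-- expose only the columns the package actually needs
select CustomerId, FirstName, LastName, Email
from dbo.Customer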



-

Those are the words my ops DBA uttered this morning. Was it some big new feature like database mirroring, snapshot isolation or SSIS?

No, it was the fact that failed login attempts, when recorded in the log file, now include the client IP address.

This is another example of a little change that has been made to SQL Server which has a huge impact.



-
Posted by simonsabin | with no comments

Have you tested your DR plan? Do you have one?

Well if you do have one, then unless you test it, it's worthless. There are many nuances that occur in a recovery process. The Storage Engine Team have posted about one such nuance.

The customer was testing their DR process, which included the complete restore of the SQL Server (OS and binaries included). However SQL Server wouldn't start after the restore: for some reason master was corrupt. So the next step was to restore master from a backup. However, to restore master SQL Server needs to be running, and that needs master, which is corrupt and is what you are trying to restore. A bit of a no-win situation.

Read how you get round this in the Storage Engine blog entry on restoring master when master is corrupt.



-

I've started using Service Broker and it's full of new terminology. This will be a set of posts of the gotchas I've found on the way.

The first is "case". Most of us run SQL in case-insensitive mode so we can refer to our tables and columns by whatever case is trendy at the time: camel case, title case, upper case etc.

Well, in Service Broker case is very important, because a number of the bits of Service Broker can reside on more than one server, and to avoid any comparison issues all name comparisons are byte-by-byte (binary) comparisons.

The objects whose names are case sensitive are:

  1. services
  2. contracts
  3. message types

This is explained in this BOL page

ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/sqlmsg9/html/4efdf2ff-2f09-4dfa-879b-d0dfb047fae4.htm
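As a quick illustration (the names are invented), the following dialog will never deliver its messages, whatever the database collation, because the target service is referenced in the wrong case:

create queue dbo.TargetQueue
create service [MyTargetService] on queue dbo.TargetQueue ([DEFAULT])

declare @h uniqueidentifier

-- service names are matched byte for byte, so 'mytargetservice'
-- does not match the service created as [MyTargetService]
begin dialog conversation @h
    from service [MyTargetService]
    to service 'mytargetservice'
    on contract [DEFAULT]

;send on conversation @h ('oops')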



-

If you use SQL Server and have a great suggestion for performance improvements to the engine, then post it on the SQL Tips blog, because they want to know:
http://blogs.msdn.com/sqltips/archive/2006/09/20/TOP_5_DW_PERF_IMP_NEXT_SQL.aspx

http://blogs.msdn.com/sqltips/archive/2006/09/20/TOP_5_OLTP_PERF_IMP_NEXT_SQL.aspx

My OLTP list is as follows; they are all links to Connect.

Allow full text to index documents on the file system

Will save on space in the DB, backups, etc.

Enhance Full Text Index to include other non-text columns

Hugely improve performance where a search combines full text and relational filtering, i.e. unstructured keywords and structured data

max value Operator 

Greatly simplify code and provide automated mechanism for maintaining

Create an index for performing bitwise comparisons, e.g. where column1 & 15 = 15

Would make searching on bitmap options very efficient.

Extend window functions to support the ORDER BY clause for SUM, COUNT etc

Will enable running sums to be produced in a resultset without the need to go to the client.
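For example, with that extension a running total could be written like this (dbo.Orders is an imaginary table; SQL Server 2005 only allows PARTITION BY in the OVER clause for aggregates, so this doesn't work today):

select OrderDate,
       Amount,
       -- running total over the ordered set, computed by the engine
       sum(Amount) over (order by OrderDate) as RunningTotal
from dbo.Orders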



-
Posted by simonsabin | with no comments

Well there are a few reasons for some slowness.

One of them is when you display the details of a job: SSMS executes sp_help_job, which returns ALL jobs, even though you're only displaying ONE.

If you think this sucks then vote now to try and get this fixed.

http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=140010

Free SQL Server Management Studio from bad code

-
Posted by simonsabin | 4 comment(s)

We've just bought some nice new servers with buckets of RAM and plenty of 64-bit CPU power, so why is Full Text Indexing paused because of a lack of resources?

Well, previously we had SQL 2000 standard edition running on 32-bit servers with 4Gb of RAM. In this case SQL could only take 2Gb, and generally only took 1.7Gb, leaving a fair amount for Full Text; and in 2000 FT wasn't very memory efficient unless you tweaked some settings.

Our new servers are 64-bit boxes with a nice flat 8Gb of memory to use, and SQL 2005 standard edition now supports all the memory the OS can give it. These factors mean that if you have more data in your DB than RAM, then SQL will nicely suck it all up.

I am sure soon I will be teaching my son that sharing is good and he shouldn't keep all his toys for himself. Well, SQL Server could do with the same lesson. Whilst the dynamic memory setting is just that, SQL will still tend to take all it can until such time as it's given a big nudge to give it back. Unfortunately Full Text is a very genteel service that kindly stops playing with its toys when the other kiddies in the playground start taking them; it then sits there until its toys are given back.

Our experience today was that SQL Server didn't want to play nicely, and even when it was being pressured it didn't release memory very quickly.

The solution: make sure you configure SQL with a max memory limit that allows Full Text to operate. In our case Full Text required 600Mb for catalogs holding ~1,000,000 records.
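On our 8Gb boxes that means something like the following; the 7168Mb cap is just an example, pick a figure that leaves enough for Full Text and the OS on your own kit:

exec sp_configure 'show advanced options', 1
reconfigure

-- cap SQL at 7Gb, leaving headroom for the OS and the Full Text service
exec sp_configure 'max server memory (MB)', 7168
reconfigure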

Unlike 2000, Full Text in 2005 will use more memory without any configuration changes. This is proportional to the size of your catalogs, uniqueness of words etc. I will be looking into the likely memory requirements in the near future.

-