January 2010 - Posts

I've been tearing my hair out for the past day trying to figure out why my SQL Server Integration Services packages weren't working correctly. What wasn't working was their configuration.

I've set up an indirect configuration framework that uses a batch file to run my packages. This batch file sets a group of environment variables for the configuration each package needs. This allows me to break down my package configuration into smaller reusable chunks. It also allows me to specify an environment for the running of a package, so I can have multiple environments on the same machine (dev, test, live etc).
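The idea behind the framework can be sketched like this. This is a Python sketch of what the batch file does; the variable names, server names and the dtexec call are my assumptions for illustration, not the actual framework:

```python
import os
import subprocess

# Sketch of the indirect-configuration idea (the post uses a batch file).
# CFG_SERVER/CFG_DB, the server names and the dtexec call are illustrative
# assumptions, not the author's actual variable names.
ENVIRONMENTS = {
    "dev":  {"CFG_SERVER": "DEVSQL01",  "CFG_DB": "SSISConfig"},
    "test": {"CFG_SERVER": "TESTSQL01", "CFG_DB": "SSISConfig"},
    "live": {"CFG_SERVER": "LIVESQL01", "CFG_DB": "SSISConfig"},
}

def build_environment(environment_name):
    """Compose the environment variables the package's indirect configs read."""
    env = dict(os.environ)
    env.update(ENVIRONMENTS[environment_name])
    return env

def run_package(package_path, environment_name):
    """Run a package with the chosen environment's configuration settings."""
    subprocess.run(["dtexec", "/F", package_path],
                   env=build_environment(environment_name), check=True)
```

Because each package only ever reads the environment variables, the same package runs unchanged against dev, test or live depending on which environment the launcher selects.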

The reason for all this is the crap way that package configurations work in SQL 2005. You can't configure the location of the configuration server (if using SQL configurations) unless you do it this way, because you can't pass the connection string of the configuration server in on the command line. Thankfully they have changed this in SQL 2008, but that doesn't help me.

What's odd is that I was getting different behaviour on 2 different servers. One is at SP2 CU3 and one is at SP3, so I could expect some differences, but debugging these has taken forever. What I found was that I was getting an acquire connection error during validation: "[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified". This suggested to me that the connection was being set incorrectly. Due to the use of indirect configurations there are a number of areas that could be causing the problem.

So what have I learnt? Well, it appears that if you use indirect SQL configurations you have to be careful if you have different versions of SQL Server.

You will have a string like <server>;<table>;<filter>. For some reason I was setting the string as follows: "<server>";"<table>";"<filter>". I must have got this from somewhere, and it worked fine on SQL 2005 SP2. However on my dev server it didn't, but rather than complain, SQL Server Integration Services just silently continues. Thanks, SSIS. What's more, the package was configured by default with the dev settings, so I didn't notice the configuration wasn't working.
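To illustrate why the stray quotes matter (this is not how SSIS actually parses the string, just a sketch; the table and filter names are made up):

```python
# Illustrative sketch: quoting each part of an indirect configuration
# string of the form <server>;<table>;<filter> makes the quote characters
# part of the values themselves.
def split_config_string(value):
    """Split the configuration string into its three parts."""
    server, table, config_filter = value.split(";")
    return server, table, config_filter

good, _, _ = split_config_string("MYSERVER;[dbo].[SSIS Configurations];MyFilter")
bad, _, _ = split_config_string('"MYSERVER";"[dbo].[SSIS Configurations]";"MyFilter"')

print(good)  # MYSERVER   - a usable server name
print(bad)   # "MYSERVER" - the quotes become part of the name
```

The quoted variant still splits into three parts, which is presumably why nothing complains; the server name just no longer matches any real server.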

Lesson 1: Always configure your package with rubbish so that you know if your configurations are working. (The downside is you have to pray to the environment variable god when doing dev and you want to change any indirect config.)

Once I removed the quotes I was getting the same error in dev. Great, so now I knew something in my configuration was wrong. All the values were correct; I re-copied the connection strings, but no luck, still the same problem. So I added some debug code to output the connection strings. This showed that the connection string was blank. What could be doing that?

Well the answer is: the configuration was. SSIS is strongly typed, however in certain areas it's a bit bad at reporting bad conversions. Configurations is one of those. I had populated my SQL configuration table from a list of values I had. I created a new table, put a value in, and noticed that the ConfiguredValueType was string in my table and String in the new table.

Do you notice the difference?

Yep. Case.

I changed string to String and it all worked correctly. What was annoying was that SSIS was reading the configuration and then just silently failing to set the values correctly. That sucks.

Lesson 2: Be careful when building system data from scratch; always compare against the data the tool produces.

Lesson 3: SSIS is never as easy as you think. A bit like driving down the roads in Britain at the moment: stay on the main road and you will be OK. Veer off slightly and you will hit the snow and you're in a whole world of hurt.

Lesson 4: CASE is very important. C#, XML and SSIS all have case dependencies; get into the habit of standardising on your case.
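Lesson 4 in one trivial sketch: a case-sensitive comparison, like the one SSIS evidently does on ConfiguredValueType, treats string and String as different values:

```python
# Case-sensitive comparison: "string" and "String" are not the same value,
# which is why the hand-entered ConfiguredValueType was silently ignored.
hand_entered = "string"
tool_generated = "String"

print(hand_entered == tool_generated)                  # False
print(hand_entered.lower() == tool_generated.lower())  # True
```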

Posted by simonsabin | 6 comment(s)

If you get the following error "Metadata file <some exe or dll> could not be found" when building a project in Visual Studio, it is likely because you have a reference to an assembly or application that was added by path rather than from the GAC and the file no longer exists.

In your build output you will have the list of files being reported as not found.

To check, go to the references section of the project that won't build and look for a reference that matches one of the files reported as not existing. Delete it, re-add the reference if required, and then rebuild. Your "metadata could not be found" error should go away.


Posted by simonsabin | 1 comment(s)

Just a note that whilst you can generate a LINQ to SQL model using SqlMetal for SQL Server Compact Edition databases (you can't do it in the Visual Studio IDE), you CANNOT use that model on Windows Mobile.

The reason is that there is no System.Data.Linq dll for the Compact Framework.


Posted by simonsabin | 1 comment(s)

If you've used the Execute Package task you will have been surprised that you can't pass anything to it. No parameters, no connection configurations, nothing.

If you think that sucks then vote here https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=295885&wa=wsignin1.0

Microsoft does listen so if you have found this a problem please vote.

Posted by simonsabin | 1 comment(s)

I have a table where a column contains the ordering of some preferences. Each preference is represented by a letter, i.e. AYFT represents preferences A, Y, F and T in that order.

All 26 characters are possible preferences. A preference can only appear once in the string.

Due to a bug the data has become corrupt and we have duplicates appearing, i.e. XEDDDEFFE.

There are three of these columns in the table and the challenge is to remove the duplicate occurrences and maintain the correct order (left to right), so the example above goes from XEDDDEFFE to XEDF.

The columns can be up to 100 characters long, CLR can't be used.
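The challenge itself asks for a T-SQL solution, but the expected behaviour can be sketched in a few lines of Python as a reference for checking proposals against:

```python
def dedupe_preferences(prefs):
    """Keep the first occurrence of each preference letter, preserving
    left-to-right order, matching the challenge's XEDDDEFFE -> XEDF example."""
    seen = set()
    result = []
    for ch in prefs:
        if ch not in seen:
            seen.add(ch)
            result.append(ch)
    return "".join(result)

print(dedupe_preferences("XEDDDEFFE"))  # XEDF
print(dedupe_preferences("AYFT"))       # AYFT (already clean, unchanged)
```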

create table Preferences
(
    PreferenceId int identity(1,1) primary key,
    Ordering1 varchar(100),
    Ordering2 varchar(100),
    Ordering3 varchar(100)
)
The schema can be downloaded here http://sqlblogcasts.com/files/folders/tsql_challenge/entry12784.aspx

and a population script here http://sqlblogcasts.com/files/folders/tsql_challenge/entry12783.aspx

Post proposals as comments or email me.

[SS 05/01/2010 13:29 You can assume there is a numbers table called num with a column n with values 1 to 10,000 if that helps]
[SS 05/01/2010 23:17 Added a bit more explanation about uniqueness and the characters being used.]

Bill Graziano asked if PASS should go to the east coast. I for one don't see any reason why not.

Firstly I'd ask what the purpose of the conference is. Is it to:
a) foster the SQL community,
b) market the SQL Server product, or
c) educate people on the use of SQL Server?

Personally I think PASS should be aiming for (a) and (c), and to that end I don't see any issue with moving to the east coast.

With regard to Bill's points:

Travel: it's a conference, what's the difference if it's on the east or west coast? I am sure there are just as many people working on the east coast that use SQL Server as there are on the west coast. And coming from Europe I have many more direct flight options than to Seattle (not hard really, there is only 1 direct flight to Seattle from London).

Microsoft Support

Whilst the product team involvement is good, I think it's a small part in the scheme of things; if MS want PMs doing sessions then they can send them. IMHO sessions by independent experts are generally better than ones by product team PMs.

Using more independent people to run sessions would, I think, be fine if the Microsoft people can't make it. So I don't think you would have fewer sessions, and definitely not less focus on specific subjects.

The difference is the SQLCAT and CSS guys, but as you mention these aren't based in Seattle, so moving to the east coast won't really make much difference.

The side sessions, I agree, will probably be impacted, as will the ability to pop up to campus for a meeting, but that really only affects a small few, and having MS send these people, who often aren't based in Seattle, over to the east coast isn't a big cost for MS in the scheme of things.


My experience suggests that putting on a good event is mostly down to having a good venue: one that knows what works and knows how its building works. For events the size of PASS I would be shocked at venues not being able to provide this service. Whilst it won't be as slick the first time in your eyes, generally the attendees don't notice, as long as they are fed and the sessions run OK.


Having items produced local to the event is much greener and will save costs. There will be extra travel costs to get to the venue, but in the budget of a conference I would expect that to be < 1% of the total cost. Once you have moved round 4 cities you can repeat, and it will become easier.


I'm sure there are other nice cities in the US.

Parting word

The one thing we have found running SQLBits and moving it around the UK is that we have a hard core of attendees, but each event we get a new group of local attendees. By not moving PASS around the US you are depriving these people of the opportunity of experiencing this valuable aspect of the SQL community and, what's more, depriving them of the valuable education that is available at PASS.

So get moving. I vote for option (b), but move it around more frequently.


Posted by simonsabin | 8 comment(s)

Two things that people tell you about Entity Framework performance are: use compiled queries, and remove tracking when it's not needed. You can achieve these by using the CompiledQuery.Compile function to obtain a function that is your compiled query, and by setting the MergeOption of the relevant entities on the ObjectContext.

I've just finished banging my head against a wall when trying to assess the performance of these together.

This was my simple code: create a static readonly function that is the compiled query, and then in my test function check the status of a check box to decide whether to set the MergeOption to MergeOption.NoTracking.

static readonly Func<MobileMinderEntities, int, IQueryable<Job>> JobSelect
    = CompiledQuery.Compile<MobileMinderEntities, int, IQueryable<Job>>
        ((ctx, jobid) => from j in ctx.Job
                         where j.Person.PersonId == jobid
                            && ctx.JobCompleted.Any(c => c.Signature == null)
                         select j);

private void btnTest_Click(object sender, EventArgs e)
{
    int jobId = 1;

    using (MobileMinderEntities me = new MobileMinderEntities())
    {
        if (!chkTrack.Checked)
        {
            me.Job.MergeOption = MergeOption.NoTracking;
        }

        var l = JobSelect(me, jobId);
        var l_l = l.ToList();
    }
}
I couldn't figure out why I wasn't getting much difference, so I started profiling the code to see what was going on. Not much there (but I'm not a profiling expert). I then started hacking the code to debug what was going on. What I couldn't figure out was that when I turned tracking on, my object context was reporting no tracked objects. Uhh.

Can you spot what's wrong?

After making a small dent in the wall, I figured the only way this could be happening was if the tracking setting was being cached. Initially I thought it was the object context, but then remembered that it's disposed every time; it's not a singleton. So then I looked at the function. OK, so this is a static readonly, so there's only one, but surely only the execution tree is cached?

Well it appears not. It appears that the compiled query also includes the tracking options.

That sort of put my test in a difficult position. The solution I came up with was to use two functions: one used only for tracked execution and one for non-tracked execution. This worked a treat.

So a word to the wise: if you are using compiled queries via CompiledQuery.Compile, be careful if you are setting the MergeOption on your ObjectContext.
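The pitfall can be modelled outside EF. This is a Python analogy, not EF's actual implementation: a query that is built once and cached bakes in whatever option it read at build time, so later callers silently inherit the first caller's setting:

```python
# Analogy for the compiled-query pitfall (not EF itself): the "compiled"
# query is built once; the tracking option read at build time is captured
# in the cached version, so later calls silently reuse it.
_compiled_query = None

def get_compiled_query(tracking):
    global _compiled_query
    if _compiled_query is None:
        captured = tracking  # captured on first build only
        _compiled_query = lambda: {"tracking": captured}
    return _compiled_query

first = get_compiled_query(tracking=False)()
second = get_compiled_query(tracking=True)()  # asks for tracking...
print(first["tracking"])   # False
print(second["tracking"])  # False - the cached query ignores the new option
```

The two-function workaround the post lands on maps directly onto this sketch: keep one cached build per option, so each caller gets a query compiled with the setting it asked for.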
