October 2010 - Posts

Following my last post on the login failed message not being as helpful as it could be (half-error-message-login-failed-for-user-reason-failed-to-open-the-explicitly-specified-database), I thought it would be good if people could post any other unhelpful error messages.

I posted a related message a couple of years ago: Whats-the-most-annoying-error-message

The most annoying message in my view has to be "String or binary data would be truncated." (error 8152). This already has a Connect submission (Improved error message when you try to put too large a value in a column) and is the 24th highest voted Connect item.

They made a big investment in error messages in SQL Server 2005, and hopefully they can tackle the last few bad ones.

Do you have any other error messages that you think could be more helpful?

Posted by simonsabin | with no comments

I’m setting up a new environment and I’m finding the following error in the log, which is only partially helpful.

Login failed for user '*'. Reason: Failed to open the explicitly specified database

It would be nice if it said what database was explicitly specified.

Following an email from Ben I will point out that this message is in the error log and not returned to the client. Doing that would be a security risk, but having it in the error log allows you to diagnose problems.

As one should, I decided to submit a suggestion to Connect (http://connect.microsoft.com) but found I’ve been beaten to it by someone else.


Tobias (Microsoft) thinks it’s a good thing and so hopefully it should get fixed; surely it can’t be that difficult.

If you think it would be a good thing then please also vote for the suggestion here

Posted by simonsabin | 3 comment(s)

When writing CLR code for SQL Server you need to handle nulls. You can do this in two ways: either your data types are nullable, or you configure your functions to always return null when a parameter is null.

The latter only works for some objects, so understanding nullable types is essential.

If you are using SQL 2008 then you are fortunate enough to be able to use nullable types. These are generic types that wrap the base type, i.e. Nullable&lt;int&gt;, or int? for short. This provides you with a variable that has two properties, HasValue and Value. HasValue allows you to check if the parameter is NULL, and Value gives you the base type value. If you access Value when the value is NULL you will get an error, so it’s essential to check HasValue first.

If you are using SQL 2005 the story is different. Whilst the CLR integration is based on a version of the CLR that supports nullable types, SQL Server needed nullable behaviour before the languages supported it, so it implements its own nullable types: the Sql* data types are the types you use in SQL 2005. If you try using the Nullable&lt;T&gt; types you will get the error:

Column, parameter, or variable #1: Cannot find data type Nullable`1

If you get that error then you need to switch to SqlInt32, SqlString etc. These types can be found in the System.Data.SqlTypes namespace.

This is the first follow-up from my SQLBits and DDD sessions.

Why is shrinking files bad?

Shrinking files is bad because

1. Your file will have grown for a reason

2. If it’s a regular occurrence SQL will just have to grow the file(s) again

3. Growing a file is a blocking activity (except for data files when instant file initialisation is turned on)

4. It causes fragmentation

Whatever you do, don’t put a shrink file step into your code as standard practice. Understand why the file is growing and do something about it.
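One way to understand why a file is growing is to look at the auto-grow events SQL Server records in the default trace. This is only a sketch, assuming the default trace is enabled (it is by default); event classes 92 and 93 are the Data File Auto Grow and Log File Auto Grow events.

```sql
-- Find recent auto-grow events from the default trace.
-- Assumes the default trace is enabled (sys.traces, is_default = 1).
declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

select StartTime,
       DatabaseName,
       [FileName],
       case EventClass when 92 then 'Data file grow'
                       when 93 then 'Log file grow' end as Event,
       IntegerData * 8 / 1024 as GrowthMB   -- IntegerData is the number of 8KB pages grown
from fn_trace_gettable(@path, default)
where EventClass in (92, 93)
order by StartTime desc
```

If the same file is growing repeatedly in small increments, that is a sign the file is undersized for the workload rather than a one-off event.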


One of the biggest issues with regular shrinking is that the files have to grow again. Growing of data files can be quick with instant file initialisation, but even then, if your data files share the same drive as other files you can get file fragmentation.

All growth operations block activity in the database, and so your applications will stop.

Growing of files

If your files are growing then understand why. Common causes are

1. updating more data than you need to.

This often happens when you have a cartesian join in your UPDATE, which the UPDATE hides; this can result in the same row being updated more than once. The INSERT-then-UPDATE design pattern I will be talking about later will also cause this, when you populate a table with a value that you then update with a larger (in bytes) value.

2. not getting minimally logged operations when you need them

The biggest cause of this is using the full recovery model. Other causes are that you’ve added an index, or the table isn’t empty, thus resulting in fully logged operations. Look up trace flag 610 if you’re using SQL 2008.

3. You are updating too much in one transaction

Consider batching your updates/deletes/inserts so you don’t have to have space in the transaction log for ALL the changes at once. Also note that the overhead of the transaction log means that even if you insert 10GB your transaction log might need much more, say 20GB, so don’t be surprised if this happens.
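As a sketch of the batching approach (the table and column names here are made up):

```sql
-- Delete in batches so the log only needs space for one batch at a time.
-- dbo.BigTable and OrderDate are hypothetical names.
declare @rows int = 1

while @rows > 0
begin
    delete top (10000)
    from dbo.BigTable
    where OrderDate < '20100101'

    set @rows = @@rowcount

    -- In the simple recovery model the log space is reused after a
    -- checkpoint; in full recovery take log backups between batches.
end
```

Each batch is its own transaction, so the log never has to hold the whole delete at once.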


Let’s look at the fragmentation. First create a blank database:

use master
go


if db_id('shrink') is not null
begin
    alter database shrink set single_user with rollback immediate
    drop database shrink
end
go


create database shrink
go


use Shrink

Deploy the handy SP to find the pages allocated to an index/table

Put some data in the database so that the shrink has something to shrink

create table SpaceFiller (ID int identity(1,1), Padding char(8000))


insert into SpaceFiller

select top 256 ''

from msdb.sys.columns

Now create a table that is going to get fragmented

--Put some data in the file

create table OrderedTable (ID int  primary key, Padding char(8000))


--Add our ordered table and insert rows into it

insert into OrderedTable

select top 3200 row_number() over(order by object_id) id, ''

from msdb.sys.columns

order by id

Let’s reindex the table. This is a common scenario that leaves free space in a file, after which people shrink the file.

alter index all on OrderedTable rebuild

exec up_ShowIndexPageOrder 'OrderedTable'

This shows you that the pages are contiguous


Now let’s shrink the file


--Shrink the file to free the space

DBCC SHRINKFILE (N'shrink' , 0)


--Look at the page order now

exec up_ShowIndexPageOrder 'OrderedTable'

And what do we get?


This highlights that the pages are now effectively reversed: each page points to the previous page in the file and not the next page.

You can issue a reorganise and the pages will be correctly sorted. However this is more IO, and more changes that will get logged and thus grow the log again.


If you are getting odd growth in your data files or transaction log and you can’t afford the space, then you need to find out the root cause of the growth.

Don’t shrink your files as a regular practice.

Size your files as big as they need to be so they don’t have to grow except in an emergency.
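For example (the database name, logical file names and sizes below are hypothetical), pre-size the files and use a fixed growth increment rather than the default percentage growth:

```sql
-- Pre-size the data and log files and set a fixed growth increment.
-- MyDB and its logical file names are made-up examples.
alter database MyDB
    modify file (name = 'MyDB_data', size = 50GB, filegrowth = 1GB)

alter database MyDB
    modify file (name = 'MyDB_log', size = 10GB, filegrowth = 1GB)
```

A fixed increment makes any emergency growth predictable, rather than growing by an ever larger percentage as the file gets bigger.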

This is a useful procedure to display the pages allocated to a table using the DBCC IND statement

if object_id('up_ShowIndexPageOrder') is null

    exec ('create procedure up_ShowIndexPageOrder as print ''not implemented''')

go


alter procedure up_ShowIndexPageOrder

  @index varchar(100)

as

declare @sql varchar(max) = 'dbcc ind(' + char(39) + db_name() + char(39) + ',' + char(39) + @Index + char(39) + ',1)'


create table #Ind (


PageFID  bigint,

PagePID  bigint,

IAMFID   bigint,

IAMPID   bigint,

ObjectID bigint,

IndexID  bigint,

PartitionNumber  bigint,

PartitionID  bigint,

iam_chain_type   varchar(100),

PageType bigint,

IndexLevel   bigint,

NextPageFID  bigint,

NextPagePID  bigint,

PrevPageFID  bigint,

PrevPagePID bigint)


insert into #ind execute(@sql)


select  i.PagePID , i.NextPagePID , i.NextPagePID - i.PagePID OffsetToNextPage, iam_chain_type ,IndexLevel , IAMFID ,IAMPID

from #ind i


drop table #Ind

Posted by simonsabin | 2 comment(s)

Courtesy of an unnamed source we have an app to downgrade an SSIS package from the 2008 version to 2005.

Whilst it works with the common components any new components aren’t supported with the exception of the Lookup.

3rd party components are also not supported.

It has gone through very limited testing and is not a supported option from MS. It is not advised to use this for production packages.

It is, however, a useful tool to get you out of a sticky situation.


The Oracle / SQL Server virtual chapter meeting is TODAY, Wednesday October 20, 2010 at 12:00 PM Eastern Time (GMT – 4).

May I ask the chapter leaders to help us out by forwarding this on to your members?

More information may be found on our website http://oracle.sqlpass.org


Using Microsoft BI Tools to Access Oracle Applications Data

Date: Wednesday, October 20, 2010 12:00PM EDT    

Daryl Orts
Vice President
Engineering Technologies
Noetix Corporation

Location: https://www.livemeeting.com/cc/UserGroups/join?id=RP8Z2T&role=attend
Overview: http://oracle.sqlpass.org/


One of the challenges in using Microsoft’s BI tools (Reporting Services, Analysis Services, Excel, PowerPivot) to access Oracle applications data lies in building metadata models to make sense of the data.  Whether you’re accessing the data directly in the transaction database or building a data warehouse, you’ll need data models that make sense of the Oracle database schema.  

This beginner to intermediate level presentation will discuss design approaches for building these models, including commercial products available from the presenter, Noetix Corporation.

Posted by simonsabin | 2 comment(s)

Ever tried to log a message to a table during a batch process which is then rolled back? You will have found that the message got rolled back as well.

How do you get round this?

Well you might think savepoints might be a solution. They aren’t; they just allow you to roll back to a specific point in your transaction. You might also think that nested transactions are the solution. Unfortunately that’s not the case either. In SQL Server you only ever have one transaction, even if you use BEGIN TRANSACTION more than once.
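A quick demo of this: even with two BEGIN TRANSACTION statements, one ROLLBACK undoes everything.

```sql
create table #t (id int)

begin transaction
insert into #t values (1)

    begin transaction        -- @@trancount is now 2 ...
    insert into #t values (2)

rollback transaction         -- ... but this rolls back BOTH inserts

select count(*) from #t      -- returns 0
drop table #t
```

The inner BEGIN TRANSACTION only increments @@TRANCOUNT; it does not create a second, independently committable transaction.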

One solution is to put your messages in a table variable. This works because table variables are not affected by transaction scope. However, it only really works when you have a single procedure or set of code doing the work: you log your messages in the table variable, and then at the end, after you have committed or rolled back your transaction, you can store your messages.
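A sketch of the table variable approach:

```sql
-- Table variables are not affected by transaction rollback,
-- so the logged message survives.
declare @log table (msg varchar(100), logged datetime default getdate())

begin transaction
insert into @log (msg) values ('about to do the work')
-- ... do the real work here ...
rollback transaction

-- The table variable still has the row; now persist it to a real table.
select msg, logged from @log
```

The limitation is scope: the table variable only exists in the batch or procedure that declared it, so every piece of code that logs must share it.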

So what is the solution?

When I was reviewing the profiler chapter of Professional SQL Server 2008 Internals and Troubleshooting with James Rowland-Jones, we were both looking at some of the edge case scenarios for some of the profiler functionality, including the custom event mechanism. The thought then came that you might be able to use it for logging, and looking into it, you can. What’s more, there are some great features.

The key to this logging is the Custom Events that you can raise using the master..sp_trace_generateevent procedure.

You can capture these events in Profiler, but that’s not very useful on its own. What’s really useful is that you can use Event Notifications to capture the events that you raise; furthermore, with Event Notifications you can log to a remote source using the standard routing features in Service Broker.

You might think this all sounds very complex. It’s not.

All you have to do is create 3 things, a queue, a service and an event notification.

The queue is the bucket your messages will be put in.

The service is the address for your queue and says you are going to process event notification messages.

And the event notification tells SQL that you want your user event messages to go to your service.

create queue loggingQ


create service logging

on queue loggingQ ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);


create event notification logHandler0

on  server

for userconfigurable_0

to service 'logging', 'current database' ;

Note that userconfigurable_0 is the event and there are 9 more (userconfigurable_1 through userconfigurable_9), so you can partition your messages.

To log a message you just call the stored proc specifying the event_class: 82 corresponds to user event 0, event_class 83 to event 1, and so on.

exec master..sp_trace_generateevent

   @event_class = 82, @userinfo = N'Inserted row into user_config_test'

To read messages you use the RECEIVE statement, e.g.

receive  top (1000) message_body

from logDB..loggingQ

What’s nice is that the message contains lots of information, which means you don’t have to pass that information in your message.





<EVENT_INSTANCE>
  ...
  <TextData>Inserted row into user_config_test</TextData>
  <BinaryData />
  <TransactionID />
  <ApplicationName>Microsoft SQL Server Management Studio - Query</ApplicationName>
  <IsSystem />
  ...
</EVENT_INSTANCE>
The complete script of what to do is here



create database logDB


alter database logDB set enable_broker


use logDB


create queue loggingQ

create service logging

on queue loggingQ ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);


create event notification logHandler0

on  server

for userconfigurable_0

to service 'logging', 'current database' ;





create event notification logHandler1

on  server

for userconfigurable_1

to service 'logging', 'current database' ;


create event notification logHandler2

on  server

for userconfigurable_2

to service 'logging', 'current database' ;


create event notification logHandler3

on  server

for userconfigurable_3

to service 'logging', 'current database' ;


begin transaction


EXEC master..sp_trace_generateevent

   @event_class = 82, @userinfo = N'Inserted row into user_config_test'

EXEC master..sp_trace_generateevent

   @event_class = 83, @userinfo = N'Inserted row into user_config_test'

EXEC master..sp_trace_generateevent

   @event_class = 84, @userinfo = N'Inserted row into user_config_test'

EXEC master..sp_trace_generateevent

   @event_class = 85, @userinfo = N'Inserted row into user_config_test'


rollback transaction


declare @message_body  xml;

declare @handle uniqueidentifier;

receive  @message_body = message_body

        ,@handle = conversation_handle

from logDB..loggingQ

end conversation @handle

select @message_body
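Once you have the message body you can shred the XML into columns. The element names below (TextData, PostTime, ApplicationName) come from the EVENT_INSTANCE document that event notifications deliver; this is a sketch against the loggingQ queue created above.

```sql
-- Shred the notification XML (an EVENT_INSTANCE document) into columns.
declare @x xml

receive top (1) @x = cast(message_body as xml)
from logDB..loggingQ

select @x.value('(/EVENT_INSTANCE/TextData)[1]', 'nvarchar(max)')        as Message,
       @x.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime')             as PostTime,
       @x.value('(/EVENT_INSTANCE/ApplicationName)[1]', 'nvarchar(256)') as AppName
```

This is what makes the mechanism attractive for logging: the event time and the calling application come for free in the envelope rather than having to be packed into the message text.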

Posted by simonsabin | 2 comment(s)

If you find this behaviour very frustrating then please vote


The problem is that CAST invokes the intellisense drop down when you type AS, when it shouldn’t. What then happens is that if you have an object or database whose name starts with the same letters as the data type, i.e. DATEADD for DATE and XML_NAMESPACES for XML, intellisense completes to that name rather than the data type.

What should happen is that intellisense should either include all the data types in the list, or ideally only display data types in the list (that would be really useful).

There is a workaround, which is to create a database called xml and one called date, but that’s not really an appropriate workaround.

If you find this behaviour annoying then please vote


Posted by simonsabin | 3 comment(s)

This bugs me every time I use Reporting Services custom code. Why do I have to use VB.Net to write my custom code? Why can’t I use C#?

So I raised a connect item https://connect.microsoft.com/SQLServer/feedback/details/613957/need-to-be-able-to-use-c-in-reporting-services-custom-code

If you feel the same please vote on it.

Posted by simonsabin | 1 comment(s)