January 2010 - Posts

I pointed out yesterday that the Entity Framework doesn't like smallints. Well, it does and it doesn't. The model handles them fine and the objects have data types of Int16, which is great.

However, the queries that are generated do this odd thing with any predicates on such columns: a cast is applied to the column, i.e.

where cast(col2 as int) = @d

At first glance I thought, oh no, not another screw-up with the SQL from Entity Framework, but on inspecting a query plan where the column in question was indexed I still got a seek operation. Very nice, well done optimisation team.

When I pointed this out, I was asked whether this worked on SQL 2005. Having only got a SQL 2008 instance on my laptop, I took the question away for testing. The reason for the question could be that SQL 2008 introduced the ability to cast a datetime to a date and still use an index, so that you can look for specific days of data.

I have now tested and can state that it does work on SQL 2005.
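
If you want to see this for yourself, here is a minimal repro sketch; the table, column and index names are made up for illustration. Run the select with the actual execution plan on and you should see a seek on the index despite the cast.

create table dbo.Orders (OrderID int identity primary key, Qty smallint)
create index IX_Orders_Qty on dbo.Orders (Qty)

declare @d int
set @d = 5
-- despite the cast on the column the optimiser can still produce an index seek
select OrderID from dbo.Orders where cast(Qty as int) = @d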


Posted by simonsabin | 2 comment(s)

My code and presentation from my session at Developer Day 8 are now available here

DDD_8-Entity_Framework Code
Entity framework presentation

Any questions let me know.

Also, for more on the new features in the next version of Entity Framework, go to http://blogs.msdn.com/adonet/archive/2009/08/05/improvements-to-the-generated-sql-in-net-4-0-beta1.aspx


You can watch all the sessions from SQLBits V now. They are available for download or for viewing online.

If there is a delay then just wait, the videos are high quality and so are very large.

These are the links to all the sessions:

A whistlestop tour of SSIS add-ins
Achieve SQL Server 2008 High Availability and Disaster Recovery - technical case study
An introduction to Master Data Services
Building cubes from ODS or Operational Systems
Cache-warming strategies for Analysis Services 2008
Common Integration Services Problems
Creating High Performance Spatial Databases
Data & Backup Compression in SQL Server 2008
Data Visualisation with Bing Maps for Enterprise
Data Warehouse on a Fast Track forwards
Data warehousing features in SQL 2008
Designing for simplification
Designing I/O systems for SQL Server
Excel - An Excellent Data Mining Tool
Getting Dimensional with Data
I need my reports..........Yesterday!
Introducing Project Madison
Introducing SQL Server Master Data Services
Introduction of MS StreamInsight.
Introduction to the Microsoft BI Technology Stack
Let’s make SQL fly – a technical session for developers and administrators who believe in magic!
Lets go more functional - F# WHAT , WHY and BI
Microsoft SQL Server 2008 R2: What’s New in Reporting Services
Optimistic Concurrency Internals
Powershell – old tricks for a new dog?
Put Your Feet Up : Simplified Management using the Enterprise Policy Management Framework
Real World SQL Server High Availability
Report Builder 3
Self Service Business Intelligence- Project Gemini
Server consolidation with SQL Server
SQL 2008 Development Features
SQL Injection Attacks (and how to prevent them)
SQL Server 2008 - Unstructured Data Storage Solutions and Best practices
SQL Server optimization stuff you won’t find on Google (yet)
SQL, NoSQL, SomeSQL - A look at non-relational databases
SQLDataSources, LINQDataSources and EntityDatasources in Database Design
SSIS in SQL Server 2008
Strapped for cache? - Troubleshooting memory problems in SQL Server 2008
The R2 Duo: Running SQL Server 2008 R2 on Windows Server 2008 R2 Core
T-SQL Tuning with Colin Chapman, Enzo Ferrari, and The Stig
Using Perfmon and Profiler
Virtualising SQL Server
Vital Statistics
What's new in R2 for the DBA
When a query plan goes wrong


Dave has done a couple of great posts on how Microsoft aren't adhering to best practices in their own products.

http://sqlblogcasts.com/blogs/sqlandthelike/archive/2010/01/25/microsoft-follow-best-practices-part-2.aspx

This is also related to the talk I'm giving at Developer Day on Entity Framework. The first version is shocking from a SQL perspective and really isn't fit for purpose. I'll be going into the details of what can happen and how to avoid the problems.

http://developerdeveloperdeveloper.com/ddd8/Default.aspx 


Posted by simonsabin | 1 comment(s)

I am in the process of uploading all the SQLBits videos. They look great.

All the videos should be up by the end of the week.

As a taster you can see my session here http://sqlbits.com/Agenda/event5/When_a_query_plan_goes_wrong/default.aspx

To watch in a local media player use this link http://sqlbits.com/information/Agenda/Video.ashx?SessionId=343&Regenerate=

Or to download, click here http://sqlsocial.com/videos/sqlbitsV/47%20When%20a%20Query%20Plan%20Goes%20Wrong_w.wmv

 


Posted by simonsabin | 1 comment(s)

I was asked by a client about the new Time data type in SQL 2008. They wanted to store the duration of some work so they could use the nice date and time functions to get the hours, minutes, seconds etc.

However you can't use the Time data type for that, because it can only hold times up to midnight; after that it rolls around to 0. Not much use if something takes longer than 24 hours to run.
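
A quick sketch of the rollover, so you can see it for yourself:

declare @t time
set @t = '23:30'
-- adding an hour wraps past midnight rather than giving 24:30
select dateadd(hour, 1, @t) -- returns 00:30:00.0000000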

If you would like to have such a feature then vote for the recommendation for an interval data type. https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=432281

If you can't wait for another 3 or 4 years then you can cheat.

If you use the datetime data type then you can add values together and get the behaviour you want, so you can take 04:53 and add it to 20:52.

To find the hours, minutes etc. you perform a DATEDIFF against 0, like this:

declare @time datetime
set @time = cast('4:53' as datetime) + cast('20:52' as datetime)

select DATEDIFF(minute, 0, @time)
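
To break that total back into hours and minutes, integer division and modulo do the job; a small sketch:

declare @time datetime
set @time = cast('4:53' as datetime) + cast('20:52' as datetime)

select DATEDIFF(minute, 0, @time) as TotalMinutes, -- 1545
       DATEDIFF(minute, 0, @time) / 60 as Hours,   -- 25
       DATEDIFF(minute, 0, @time) % 60 as Minutes  -- 45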

NOTE: You can't do this type of calculation with the new date and time data types; you have to use the DATEADD functions. This IMHO is one of the poor aspects of the new data types.


I've been working with XML in SQL Server Integration Services this week, and to say the XML task in SQL Server Integration Services sucks is an understatement. It's buggy, and the documentation is woefully inadequate and in some cases wrong. I believe this was one of the first components written back for SQL 2005, so you can imagine it was written a long time ago, when XML wasn't as standardised.

So what if you want to validate an XML document against an XSD schema? Well, in true SQL Server Integration Services fashion, the answer is to do it yourself with a script task.

There are two ways of doing it: the first uses an XmlDocument object and the other an XmlReader. If you were on my XML course at SQLBits you will remember that I pointed out the importance of streaming when dealing with large data, and this is where these two differ. The XmlDocument loads the document into memory, whereas the XmlReader is a streaming interface to the XML document you have. So if you have a large document you should use the XmlReader.

In my case, however, I wanted to pass the XML to a stored procedure after it had been validated, and so I used the XmlDocument.

The first step is to create an XmlSchemaSet and put your schema into it.

string Filepath = Dts.Variables["User::FullNewFilename"].Value.ToString();
string Schema = Dts.Variables["User::SchemaFilepath"].Value.ToString();

// Create a schema set
XmlSchemaSet schemaset = new XmlSchemaSet();
// Add the schema to the collection.
schemaset.Add(XmlSchema.Read(XmlReader.Create(Schema), null));

Here Schema holds the path to the schema file and Filepath the path to the XML file to be validated.

After that you use either the XmlDocument or the XmlReader. For the XmlDocument we pass back the contents of the file using the InnerXml property.

//Clear our errors string builder
ValidationErrors = new StringBuilder();

//Use this if you need to read the document for use elsewhere
XmlDocument doc = new XmlDocument();
doc.Schemas = schemaset;
doc.Load(Filepath);
doc.Validate(new ValidationEventHandler(ValidationCallBack));

//Load the contents of the file into the variable for later processing.
Dts.Variables["ImportFileContents"].Value = doc.InnerXml;

Or use the XmlReader; this requires another object, the XmlReaderSettings object.

//Set the validation settings.
XmlReaderSettings settings = new XmlReaderSettings();
settings.ValidationType = ValidationType.Schema;
settings.Schemas = schemaset;
settings.ValidationEventHandler += new ValidationEventHandler(ValidationCallBack);

using (XmlReader xr = XmlReader.Create(Filepath, settings))
{
    //This reads the whole document through the reader and does the validation
    while (xr.Read()) ;

    //Close the reader
    xr.Close();
}

In both you will see references to ValidationCallBack. The validation model is a callback model, so you need this extra function. This is why we use a class-level variable, ValidationErrors, to store the errors so that we can access them after the validation completes. The validation callback function is shown here.

// Record any validation errors.
private void ValidationCallBack(object sender, ValidationEventArgs e)
{
    ErrorOccurred = ErrorOccurred || e.Severity == XmlSeverityType.Error;
    Warning = Warning || e.Severity == XmlSeverityType.Warning;

    if (ValidationErrors.Length > 0)
    {
        ValidationErrors.AppendLine();
    }
    ValidationErrors.Append(e.Message);
}

Complete Code

So putting it all together we get

// Requires: using System.Text; using System.Xml; using System.Xml.Schema;
StringBuilder ValidationErrors;
bool ErrorOccurred = false;
bool Warning = false;

public void Main()
{
    string Filepath = Dts.Variables["User::FullNewFilename"].Value.ToString();
    string Schema = Dts.Variables["User::SchemaFilepath"].Value.ToString();

    // Create a schema set
    XmlSchemaSet schemaset = new XmlSchemaSet();
    // Add the schema to the collection.
    schemaset.Add(XmlSchema.Read(XmlReader.Create(Schema), null));

    //Clear our errors string builder
    ValidationErrors = new StringBuilder();

    //Use this if you need to read the document for use elsewhere
    XmlDocument doc = new XmlDocument();
    doc.Schemas = schemaset;
    doc.Load(Filepath);
    doc.Validate(new ValidationEventHandler(ValidationCallBack));

    //Load the contents of the file into the variable for later processing.
    Dts.Variables["ImportFileContents"].Value = doc.InnerXml;

    /*  //Otherwise use the following
    // Set the validation settings.
    XmlReaderSettings settings = new XmlReaderSettings();
    settings.ValidationType = ValidationType.Schema;
    settings.Schemas = schemaset;
    settings.ValidationEventHandler += new ValidationEventHandler(ValidationCallBack);

    using (XmlReader xr = XmlReader.Create(Filepath, settings))
    {
        //This reads the whole document through the reader and does the validation
        while (xr.Read()) ;

        //Close the reader
        xr.Close();
    }*/

    if (ValidationErrors.Length > 0)
        Dts.Variables["XMLError"].Value = ValidationErrors.ToString();

    Dts.Variables["Validation"].Value = ErrorOccurred ? "Error" : (Warning ? "Warning" : "");
}

// Record any validation errors.
private void ValidationCallBack(object sender, ValidationEventArgs e)
{
    ErrorOccurred = ErrorOccurred || e.Severity == XmlSeverityType.Error;
    Warning = Warning || e.Severity == XmlSeverityType.Warning;

    if (ValidationErrors.Length > 0)
    {
        ValidationErrors.AppendLine();
    }
    ValidationErrors.Append(e.Message);
}

Enjoy.


If you get the above error and you have no idea why (i.e. you haven't changed the data type of a variable) then it is likely due to a data type inference issue with SSIS. You might think that SQL is returning a value of one type when in fact it's returning something else.

This occurs with a number of functions in SQL; the key is that you think your SQL is returning an integer when in fact it's returning a decimal, or a data type that SQL Server Integration Services maps to a string.

Two functions I've encountered this with are ROW_NUMBER and SCOPE_IDENTITY.

The solution is to explicitly cast the columns in the query you return to SQL Server Integration Services, i.e. cast(scope_identity() as int) or cast(row_number() over (order by somecolumn) as int).
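
As a sketch of what that looks like in practice (the dbo.Orders table and column names here are made up): ROW_NUMBER returns a bigint and SCOPE_IDENTITY a numeric(38,0), so both need casting if SSIS expects an int.

-- ROW_NUMBER returns bigint, so cast it for an SSIS Int32 column
select cast(row_number() over (order by OrderID) as int) as RowNum, OrderID
from dbo.Orders

-- SCOPE_IDENTITY returns numeric(38,0), which SSIS won't treat as an integer
select cast(scope_identity() as int) as NewOrderID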

If you encounter this you can check Books Online to see what data type is returned by the function.

The issue is documented on the SSIS team blog in Why can’t I store my BIGINT result in an Int64 .... This is the same issue.


I really like the database projects in Visual Studio; however there are a few bugbears that really annoy me.

This post is about deploying a solution.

When working in development I have the project set to build the deploy script and to deploy it. What's annoying is that I prefer to work in the schema view of the project and not the solution view, and in schema view there is no context menu (that I know of) to deploy the project. So each time you want to deploy you have to go to the solution explorer view, which is (relatively) very time consuming.

I decided this week to find a way around this, and as with many of the Visual Studio options I found the answer in the customize menu/toolbar.

Here you will find almost all the commands available to you in VS, and you can drag them onto your menus and toolbars.

For this problem you will find Deploy Selection and Deploy Solution under the Build category.

From here you simply drag the item onto your toolbar/menu and hey presto, you get the command on the toolbar, so no more switching between the schema and solution explorer views.

On a related note for Management Studio I do the same to add "Toggle Word Wrapping" and "Enable SQLCMD" onto my toolbar.


User defined functions looked like something great when they were introduced in SQL 2000. They are a very logical step to making code more reusable by wrapping common code in functions. It is development 101 and something everyone does or looks to do.

So what is the problem with SQL Server?

Well, the issue is that whilst they are great for reducing duplicated code, the performance sucks, and sucks big time. I wrote about this a year ago in TSQL Scalar functions are evil and have spoken about it at many user groups over the years.

Almost every client I've seen over the past 2 years has implemented functions, because it is the logical thing to do. However, they've then been bitten big style by the performance issue. To make matters worse, use of scalar functions also gets in the way of supporting the code: try doing a statement-level profile, or, something I found this week, try capturing query plans in Profiler. Because each function call is considered a block of code, it causes StmtStarting and StmtCompleted events to fire, and a query plan event to fire, for every call.

At a recent client I was able to tune a query from 3 minutes to 3 seconds using the tricks in "TSQL Scalar functions are evil".
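
The core trick is generally to inline the expression the function wraps. A minimal sketch of the kind of rewrite, with a made-up dbo.Person table:

-- a scalar UDF like this forces a per-row function invocation
create function dbo.FullName (@first nvarchar(50), @last nvarchar(50))
returns nvarchar(101)
as
begin
    return @first + N' ' + @last
end
go
-- slow: the function is called once for every row
select dbo.FullName(FirstName, LastName) as FullName from dbo.Person

-- faster: inline the expression so the optimiser treats it as part of the plan
select FirstName + N' ' + LastName as FullName from dbo.Person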

Scalar user defined functions are the worst thing in SQL Server and something needs to be done about them. I feel this is a bug in the product and have posted as such here https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=273443 

If you have been bitten by this bug please vote for the item on Connect; go here "Vote for changing user defined functions so they perform better".

[SS 2010/01/19 I've been told my Connect item is a duplicate and so has been closed. So please vote on this item instead https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=273443 The links above now point to this Connect item]

 

