November 2010 - Posts

Who hates having to install SQL Express to get SQL Server? Has anyone ever used those automatically attached databases you can get with SQL Express? Yes! Wow, you must be the only one.

SQL Express, whilst it's awesome to have a free version of SQL Server, is a total pain to install.

OK, you could go with SQL Compact Edition, but then you lose half of your features.

As part of the SQL Server Denali feature announcements they have mentioned a new “serverless SQL Server” (watch the video for some details). The key thing is that this is an in-process SQL Server that spins up when your application runs and shuts down when it stops. Importantly, it DOES NOT require a service to be installed.

I think this will be an awesome feature and am looking forward to test driving it when it's available.

  • What's new in Denali for Spatial
  • What's new in Denali for TSQL
  • What's new in Denali for Reporting
  • What's new in Denali for Indexing
  • What's new in Denali for Administration
  • What's new in Denali for Tools
  • What's New in SQL Server Denali
    If you need to find out when statistics were last updated, run the following:

    select OBJECT_NAME(s.object_id) object
          ,STATS_DATE(s.object_id, s.stats_id) StatsDate
          ,stuff((select ',' + col.name
                  from sys.stats_columns sc
                  join sys.columns col on sc.column_id = col.column_id
                                      and sc.object_id = col.object_id
                  where sc.stats_id = s.stats_id
                    and sc.object_id = s.object_id
                  for xml path ('')), 1, 1, '') cols
    from sys.stats s

    This also gives you the columns of the statistics.

    Run this on your database and see what the oldest date is. Are you surprised at how old your statistics are?

    If you are, then you need to be aware that statistics only get updated when 20% of the data in the table changes (it's a bit more complicated than that, but that's the general rule).

    That means if you have a table with 5 years' worth of data, statistics won't be updated until a year's worth of data has been added/updated/deleted. That's a lot of data.

    If you want to understand the issues associated with out-of-date statistics, why not watch my session from SQLBits 5, “When a query plan goes wrong”.
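    If the dates look stale, you don't have to wait for the 20% threshold; you can refresh statistics yourself. A minimal sketch (the table name is hypothetical):

    ```sql
    -- Rebuild statistics for one table, reading every row (most accurate, most I/O)
    UPDATE STATISTICS dbo.SalesOrder WITH FULLSCAN;

    -- Or refresh any statistics in the database that have had row modifications
    EXEC sp_updatestats;
    ```

    FULLSCAN gives the best histograms for large tables but can be expensive, so many people schedule it off-peak.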

    Posted by simonsabin | 2 comment(s)

    Interesting little situation occurred today, the person I was working with was trying to add a login to a server and was getting the following error.

    “The server principal '<domain>\User' already exists”

    They were using a command like the one below and couldn't figure out why they were getting the error, because when they looked at the logins set up on the system the login definitely didn't exist.


    CREATE LOGIN [<domain>\<user>] FROM WINDOWS WITH DEFAULT_DATABASE=[master]

    The key here is that for Windows logins the login name is not the identifier for the login; instead the SID is used, which is a binary value (varbinary(85)).

    The nice thing is that the system security functions will work on accounts even if they aren't logins in SQL Server.

    The function you need in this situation is SUSER_SID: you pass in a login name and get the SID back.


    I did a quick query on the system tables for the SID and found the user already existed, but with a different login name. The query I used was this:

    select *
    from sys.server_principals
    where sid = SUSER_SID('<domain>\<user>')

    How can this happen, you ask? Well, it is perfectly valid to rename user accounts, but the SID stays the same.
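    A quick way to see both halves of this for yourself (the account and login names are hypothetical):

    ```sql
    -- SUSER_SID resolves a Windows account to its SID even if no login exists for it yet
    SELECT SUSER_SID('MYDOMAIN\SomeUser');

    -- Renaming a login changes the name but not the SID, which is why the
    -- "already exists" error can point at a login you can't find by name
    ALTER LOGIN [MYDOMAIN\OldName] WITH NAME = [MYDOMAIN\NewName];
    ```

    After the rename, querying sys.server_principals for SUSER_SID of either name shows the same row.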

    Posted by simonsabin | 4 comment(s)

    Thanks to Marcel van der Holst for providing this great information on the use of trace flag 610. This trace flag can be used to get minimal logging into a B-tree (i.e. a clustered table or an index on a heap) that already has data. It is behind a trace flag because in testing they found some scenarios where it didn't perform as well. Marcel explains why below.

    TF610 can be used to get minimal logging in a non-empty B-Tree. The idea is that when you insert a large amount of data, you don't want to create a lot of transaction log. Initially the idea was to automatically do this in the engine, but we ran into a bunch of issues and thus we put it under a traceflag.


    Here are some of the things to be aware of:

    1) When the transaction commits, it needs to write all pages that were minimally logged to disk. Even though pages will get written asynchronously when they are full, it is possible that not all pages can be written in time in case of a slow I/O subsystem. This can make the operation slower than full logging, as for full logging, SQL only needs to write the commit log record and then it is done.

    2) There is a bug in SQL2008 where the transaction log grows very big (due to log reservation), even though the log is never used. This is due to how rollback works for minimally logged pages. This bug was fixed in SQL2008R2 and is fixed in one of the CUs for SQL2008 (I forgot which one)

    3) If you have a large bufferpool (hundreds of GBs or more) and the I/O subsystem cannot keep up, the commit could take very long as well.

    4) Don't make transactions too big. Even though the feature is built to be used when inserting a large number of rows, it does not work very well if you make the transactions too big. For instance, insert about 10000 rows at a time (about 3-5 GB max). We had one customer inserting 1TB in a single transaction, which caused some issues.

    5) The feature is NOT designed to make inserts go faster. It is mainly written to reduce the size of the transaction log. There are cases where performance might be better, but there are cases where it is slower as well.


    My recommendation is to test this feature in a test environment that has similar I/O characteristics to the production system and see if it helps.

    I would only use it if you have fast enough I/O to keep up with creation of the minimally logged pages.


    However, in the cases where you have a fast enough I/O subsystem, it is a great way to reduce the size of the transaction log.
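    As a sketch of how you might try it, following Marcel's batching advice (the table and staging names are hypothetical; test on a non-production box first):

    ```sql
    -- Enable minimal logging into non-empty B-trees for this session
    DBCC TRACEON (610);

    -- Load in modest batches rather than one huge transaction;
    -- a table lock is commonly recommended to help qualify for minimal logging
    INSERT INTO dbo.FactSales WITH (TABLOCK)
    SELECT SaleDate, ProductId, Amount
    FROM dbo.StagingSales;

    DBCC TRACEOFF (610);
    ```

    Then compare transaction log growth and elapsed time against the same load without the trace flag.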

    Thanks to Jamie Thomson for pointing out the video of the TechEd session by #SSIS Group Program Manager, Steve Swartz.

    Posted by simonsabin | 1 comment(s)

    If you saw the keynote then you will have seen the awesomeness of the columnstore indexes that were demoed. If you've used PowerPivot then you will have experienced the same technology in VertiPaq™, the engine underneath PowerPivot.

    Read more here

    I personally see this as the biggest thing in Denali. However, I also fear that people will create them not knowing they are static, and will create very wide tables and just put columnstore indexes on top.

    Please note this is not in CTP 1.
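    Based on what has been shown so far, the syntax is expected to look something like this (the table and columns are hypothetical, and remember the index is read-only once created):

    ```sql
    -- Columnstore indexes suit wide fact tables queried by a few columns at a time
    CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_cs
    ON dbo.FactSales (SaleDate, ProductId, Amount);
    ```

    To load new data you would need to drop or disable the index, insert, and rebuild it, which is exactly the "static" behaviour I worry people will miss.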

    Posted by simonsabin | with no comments


    Bob beat me to the first post on full text

    More to come

    Posted by simonsabin | with no comments

    To download the latest CTP of SQL Server Denali click here

    To view the latest Books online for SQL Server Denali click here

    To find out what's new with SQL Server Denali why not bookmark these pages.

    Make sure you watch the keynote if you want to find out about Denali. It has the most Denali content.

    Posted by simonsabin | with no comments

    The Tech Europe Keynote gives you a sneak peek into a new reporting client for SQL using Silverlight.

    I don’t have any more details, however.

    If you are in London tomorrow why not come along to the SQL Social event and maybe learn more.

    Or maybe go along to the usergroup meeting next week which is focusing on BI and Denali.

    Also we have a meeting next week in Kent

    Posted by simonsabin | with no comments

    We are going to have a little get together tomorrow from 6:30 to have a recap of the keynote from PASS. It will be a social event, with possibly some Denali content if we are allowed.

    Venue will be confirmed tonight but it will be in the temple/fleet street/black friars area.

    [Updated link]

    To register go to

    Posted by simonsabin | with no comments