SQL Server Blogs

Voices from the UK SQL Server Community

Rob's SQL Server Blog

... SQL Server thoughts from the field

February 2008 - Posts

  • SQL2K8 Launch Thoughts

    Been reading Dan Jones' blog post regarding the delay to the launch of SQL Server 2008, and I think he makes some great points. There is obviously disappointment from some people about the release date slipping, but as Dan points out, a bad product will be remembered for a long time. SQL Server has built up a reputation for reliability, scalability and security over the last couple of releases. I'd rather Microsoft protected that reputation than rushed something out to meet a marketing deadline. From what I have seen so far with the CTPs, SQL Server 2008 is going to be well worth the wait!

  • Windows Server 2008 Clustering

    I've been having a look at Windows Server 2008 clustering and the first thing I can say is... wow! Setting up clusters, adding additional nodes and setting up clustered resources is far simpler than before, and the cluster validation tool means there is no more need for the HCL. Clusters can also be configured so that the quorum is not a single point of failure, which increases resilience. The GUI is much cleaner and more intuitive than previous versions of the Cluster Administrator, which makes configuration and administration a lot easier. There is more information on Windows Server 2008 clustering improvements here.

    If you want a closer look, there is a Windows Server 2008 Failover Clustering virtual lab available which goes over some basic clustering scenarios (but not SQL Server clustering, I'm afraid!)

  • SQL Server 2005 Bug Report

    Saw this link to a SQL Server 2005 bug report on the Microsoft Connect site. I guess someone was having one of those days :-)

  • SQL Server 2008 Certifications

    As I am currently preparing for the MCDBA to MCITP 2005 upgrade exam (70-447) it may seem a bit premature to talk about SQL Server 2008 certifications, but there is a webcast available at Trika's blog (from the Microsoft Learning team) below.

    Something I found surprising from the presentation is that there are just over 4,000 MCITP Database Administrators in the world compared to 150,000 MCDBAs. This may be down to the take-up figures of SQL Server 2005 compared to SQL Server 2000, or it may be a result of changing the exam format. I found the SQL Server 2005 MCTS exam (70-431) an improvement on the SQL Server 2000 exams. The exam is split into two sections: a multiple choice section and a simulation section. You need to pass both sections, and the lower mark is taken as your overall mark. This requirement to have at least some degree of hands-on experience is a major improvement and should help eliminate the braindumpers who ultimately devalue the certifications for everyone else!

    There is a useful search facility that enables you to check whether sites are braindump sites or legitimate. It's worth checking, as you risk being decertified if you continue to use braindumps.

  • Indexing Strategies

    I attended the Scottish SQL Server User Group meeting during the week and heard a presentation from Martin Bell on his favourite DMVs. Mentioned in the discussion was sys.dm_db_index_usage_stats, and that got me thinking about indexing strategies. I have been involved in performance troubleshooting databases that have used a variety of indexing strategies, ranging from none at all to lots of narrow indexes on practically every column! However, there is no right or wrong indexing strategy; it depends entirely on your application and the type and frequency of queries being executed against the database.
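    As a quick illustration of that DMV, here is a minimal sketch (on 2005) of how you might list seek, scan and update counts for the indexes in the current database. Note the counters reset when the instance restarts, so treat low numbers with care:

```sql
-- Index usage in the current database (SQL Server 2005).
SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       i.name                     AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
```

    Indexes with high user_updates but few seeks or scans are candidates for closer investigation.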

    So how do you know if you have a problem? For me, it's usually when users tell me that the application is "running slow" or they get timeouts. However, poor performance can be open to interpretation: with web applications there can be a whole host of other factors to take into account, such as network connections and web server problems, which is why you need good benchmarks in place so you can compare performance over time. Other signs could be high CPU, high memory usage or increased disk IO activity. The SQLCAT team has a post detailing the top OLTP performance issues on 2005, and some basic performance counters to monitor can be found here.

    So once you've determined that there is a problem, what do you do? If you know the specific query that is causing the issue, you can examine its query execution plan. You do this by choosing the 'Display Estimated Execution Plan' option under the 'Query' menu in Management Studio. This does not execute the query, but returns the estimated execution plan based on the statistics the server has. As a result, you need to ensure that the statistics are up to date, or you may get misleading results. It's also a good idea to turn on statistics IO. Things to look out for in the execution plan are table or index scans and hash aggregates. Scans imply that there are no indexes for SQL Server to use, or that the indexes are not selective enough and SQL Server has decided it's cheaper to run a scan. Bear in mind that a clustered index scan is effectively the same as a table scan. Hash aggregates have to create a worktable, i.e. a temp table in tempdb. Watch out for them, because statistics IO does not show the cost of the worktables, which can often be very expensive. As a rule of thumb, any time you see "Hash" in your plan it means worktables, and there is usually a better-indexed alternative. Another cool thing about showplan is that you can force queries to use different indexes and run them side by side. This shows the execution plan for each query and the cost of each relative to the batch, which is a quick way to see which index choices are most expensive.
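    The side-by-side comparison can be sketched as below. The table and index names here are hypothetical, purely for illustration:

```sql
-- Hypothetical table and index names, for illustration only.
SET STATISTICS IO ON;

-- Highlight both statements and choose 'Display Estimated Execution Plan'
-- to compare the cost of each query relative to the batch.
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_OrderDate))
WHERE OrderDate >= '20080101';

SELECT OrderID, OrderDate
FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerID))
WHERE OrderDate >= '20080101';
```

    The INDEX hints force each plan, and statistics IO shows the logical reads each choice incurs.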

    However, on shared systems with multiple applications running against the SQL Server instance, chances are you will not know which queries are causing the performance issues and you will need to do some digging. You can use DMVs in 2005, but as I work in a mixed environment (2000 and 2005) I prefer to use Profiler to give me an idea of what is going on. I run a server-side trace and log the results to a remote file, as opposed to a SQL Server table, to minimise overhead on the system. I will typically run this for short periods (10 - 15 mins) just to get a feel for the queries executing against the server. This is a quick way to see if there are any expensive queries running and how frequently, and it gives me some clues as to which database is causing the problems. Once I know this, I can really focus in on that database application. More information on running Profiler traces is available here.
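    A minimal server-side trace along these lines might look like the following. The file path and size are assumptions; adjust for your environment:

```sql
-- Minimal server-side trace sketch; path and max size are assumptions.
DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 50;  -- MB
SET @on = 1;

-- 2 = TRACE_FILE_ROLLOVER; the .trc extension is added automatically
EXEC sp_trace_create @TraceID OUTPUT, 2, N'\\server\traces\quickcheck', @maxfilesize, NULL;

-- Capture SQL:BatchCompleted (event 12) with text, duration, reads and CPU
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;  -- TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- Duration
EXEC sp_trace_setevent @TraceID, 12, 16, @on;  -- Reads
EXEC sp_trace_setevent @TraceID, 12, 18, @on;  -- CPU

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
-- After 10-15 mins: sp_trace_setstatus @TraceID, 0 to stop, then 2 to close it
```

    The trace file can then be opened in Profiler or queried with fn_trace_gettable.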

    Once I have identified the database, the hardest part is deciding which queries to index. Indexing lots of frequently run queries can give you a far bigger overall performance gain than indexing a large query that runs once a month. It is important to use Profiler and run traces over longer periods to get a feel for the mix of reads and writes. Never build an index in isolation; always consider the workload over the course of time. You can then combine these traces with DTA in order to fully analyse your workload and come up with an indexing strategy based on your application's specific needs. It is also worth restoring a backup of your database on a test system and using that for running your DTA analysis. I cannot stress this enough: do not create indexes for their own sake. I often ask the question when interviewing, "when would dropping an index actually help to increase performance?" A lot of people are conditioned to believe that you need to build lots of indexes to increase performance, but this is not true. If you have lots of inserts, updates and deletes, keeping all these indexes updated is a significant overhead if they are not required in the first place.

    Before going off and building indexes it is worth looking at alternative options. It is a good idea to update your statistics to ensure that SQL Server has the most up-to-date information to work out the optimal execution plan. If you are working with stored procedures, you can consider recompiling them. You can also consider rewriting the code, especially if you are using cursors!
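    Those cheaper first steps can be sketched as follows (the table and procedure names here are hypothetical):

```sql
-- Cheaper first steps before adding indexes; object names are hypothetical.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;  -- or sp_updatestats for the whole database
EXEC sp_recompile N'dbo.GetOrders';          -- the procedure recompiles on its next run
```

    Often a statistics refresh alone is enough to get the optimiser back onto a sensible plan.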

    Generally, SQL Server does better with fewer, wider indexes (covering several columns) than with many narrow ones. Narrow indexes require a bookmark lookup to get the rest of the data if they don't cover the query. Bookmark lookups are expensive, and SQL Server may even decide not to use the index at all and perform a table scan. In this situation, the narrow indexes are not used and are an unnecessary overhead. You can use the DMV sys.dm_db_missing_index_details to see which multi-column indexes could give better performance. Another word of warning here if you are using the 2005 DMVs, such as sys.dm_db_index_usage_stats: remember that the data in these DMVs is not persisted across server shutdowns or database restores. Do not assume that because the DMVs show an index has not been used you can safely drop it; it only means it hasn't been used since the counters were cleared. If you are going to rely on the DMVs, you need to persist the data over a period of time and then analyse it. This is easy to do, as you can select directly from the DMVs into tables; persisting the data every 30 mins is a reasonable interval. Paul Randal has a post here with information on how to persist the index usage data from DMVs.
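    A simple way to persist those counters might look like this. The snapshot table name is an assumption, purely for illustration:

```sql
-- First run: capture a timestamped snapshot into a new table (name is hypothetical).
SELECT GETDATE() AS capture_time, *
INTO dbo.IndexUsageHistory
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();

-- Subsequent runs (e.g. from a SQL Agent job every 30 mins):
INSERT INTO dbo.IndexUsageHistory
SELECT GETDATE(), *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();
```

    Analysing the history over weeks, rather than a single point-in-time reading, is what makes a "this index is unused" call safe.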

    The final thing I'd like to say about indexing (for the time being!) is that you can't just create indexes and walk away. You need to maintain them regularly in order to reduce fragmentation. Fragmentation in indexes is caused by inserts, updates and deletes, and can be a problem if you have a busy system. By default, indexes are created with a 100% fill factor, which means the pages are densely packed as soon as you create them. If you then need to insert rows or update data, SQL Server will need to carry out page splits to do so. To avoid this, it is advisable to create indexes with a fill factor of around 80 - 90%, though this may vary depending on the frequency of data updates. Not only will this increase the performance of inserts and updates, it will also keep fragmentation to a minimum. You can check for fragmentation using DBCC SHOWCONTIG (2000) and sys.dm_db_index_physical_stats (2005), and remove fragmentation using DBCC INDEXDEFRAG or DBCC DBREINDEX in 2000, and ALTER INDEX REORGANIZE or REBUILD in 2005. There is further information and advice here on rebuilding indexes and updating statistics.
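    On 2005 that check-and-fix cycle can be sketched as below; the index and table names are hypothetical, and the 30% threshold is just a common starting point:

```sql
-- Find fragmented indexes in the current database (SQL Server 2005).
SELECT OBJECT_NAME([object_id]) AS table_name, index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 30;

-- Rebuild a hypothetical index with a lower fill factor to reduce page splits.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (FILLFACTOR = 85);

-- For lighter fragmentation, REORGANIZE is cheaper and stays online:
-- ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;
```

    REBUILD is the 2005 equivalent of DBCC DBREINDEX, and REORGANIZE of DBCC INDEXDEFRAG.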

    This is a huge topic and I could go on all day, but the importance of good indexing cannot be stressed highly enough! In summary, you need to make sure you understand your application's workload, check that the indexes you have are actually being used, and check that you are not missing any indexes. Finally, you need to maintain your indexes so they are kept in optimal shape. Good indexing means good performance, which means happy users :-)

  • PFE Technical Rollups

    The latest Technical rollup mails from Premier Field Engineering are available here.

  • Top Tips for SQL Server Clustering

    As part of a project I am currently working on, I came across this article, which includes a tip for clustering SQL Server Workgroup Edition using a combination of Windows clustering and Virtual Server!

  • SQL 2008 Resource Governor

    I came across this post on Andrew Fryer's blog, which has a short demo of the SQL Server 2008 Resource Governor in action. For me, this will be one of the main drivers for moving to SQL Server 2008. Most of the issues I come across with unresponsive SQL Server instances are caused by runaway processes or bad application code consuming all the system resources. The ability to throttle these processes and guarantee service for other applications using the instance is very compelling and could slash the number of support calls significantly!

    I attended a Resource Governor session at Tech Ed IT Forum in Barcelona last year and gave a live demo of the features at my Christmas team meeting, but I can't wait to really get my hands on this feature and give it some thorough testing.

  • Free SQL Server 2008 E-Learning Available

    There are some free e-learning tools available for SQL Server 2008 on the Microsoft E-learning site below:

    There are also some free e-learning resources for Windows Server 2008 here:

  • Update: Two-thirds of Oracle DBAs don't apply security patches

    This scary article reminded me of a conversation I had with a colleague in the middle of a team meeting recently. It centred around the perceived lack of security of SQL Server compared to Oracle, and the Slammer worm was cited as an example. The damage done by Slammer was entirely caused by DBAs and sysadmins not applying critical updates on their SQL 2000 machines, allowing the worm to exploit these documented security holes.

    I guess that mud sticks and people remember the impact of Slammer, but as a result DBAs (or ones that want to keep their jobs) must ensure that all critical security patches are deployed. Of course, to date there have not been any critical security patches released for SQL Server 2005, so it looks like Microsoft is getting its act together in this respect, but we can't afford to be complacent...

Powered by Community Server (Commercial Edition), by Telligent Systems