Welcome to the new face of the Knowledge Sharing Network.
To browse the older blog posts, refer to our historical knowledge sharing site; from now on, this is your destination for the SQL Server knowledge pool.


May 2008 - Posts - SQL Server Knowledge Sharing Network (@sqlmaster)


SQL Server Upgrade - don't forget the Collation issues!

Whenever a user (on a forum or in a user group) asks me about upgrade gotchas and FAQs, I refer them to the SQLUpgrade-blurb post I put together from my experience. Here I would like to extend the upgrade topic with the things you need to watch regarding collation settings - think of the task of consolidating databases from a 2000 instance onto a 2005 instance.

By default, and as a best practice, you should run Upgrade Advisor against the SQL Server 2000 instance (problems you might hit are covered in that post) to get prior information on the upgrade process. Regarding the default collation: a SQL Server 2000 instance is typically installed with the SQL_Latin1_General_CP1_CI_AS collation, whereas on some Windows locales a SQL Server 2005 instance will end up with its system databases running Latin1_General_CI_AS. This is where you need to be careful to adopt the relevant collation setting for your master database and, in particular, for TEMPDB! Databases running SQL_Latin1_General_CP1_CI_AS will be creating and referencing temp tables, and as the documentation notes, TEMPDB is used much more heavily in SQL Server 2005; posts such as WhatToDo-WhenTEMPDB_Full will help you there too.
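To see where your instance stands before the upgrade, you can compare the server-level collation (which tempdb inherits) against each database's collation. A quick check, runnable on either version:

```sql
-- Server-level default collation (what tempdb will use)
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- Per-database collations; watch for mismatches against the server default
SELECT name,
       DATABASEPROPERTYEX(name, 'Collation') AS DatabaseCollation
FROM   sys.databases;   -- on SQL Server 2000, query master..sysdatabases instead
```

Any user database whose collation differs from the server collation is a candidate for temp-table collation conflicts after the move.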

Make sure you haven't changed any of the Windows locale settings to your local language - for example, a UK locale will use British English as the language. By design, the collation for tempdb always uses the default collation of the server. So, in order to use new collation settings in the upgraded environment, one option is to script out the database schema with the collation setting included, edit the scripted text to use the new collation, and then reapply the changed script. When it comes to the data, SSIS is the only saviour for transferring it between the source and target database(s); it is not an easy option, but it is the only way out if you have no other choice but to change your code or ignore the changed collation settings.
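Until the databases are fully re-collated, collation-conflict errors between user tables and temp tables can often be worked around with an explicit COLLATE clause. A minimal sketch - the table and column names here are hypothetical:

```sql
-- Hypothetical example: without COLLATE, #names would inherit tempdb's
-- (server) collation, which may differ from the user database's collation
CREATE TABLE #names (CustomerName varchar(50) COLLATE DATABASE_DEFAULT);

-- The explicit COLLATE avoids "cannot resolve collation conflict" errors
SELECT c.CustomerName
FROM   dbo.Customers AS c
JOIN   #names AS n
  ON   c.CustomerName = n.CustomerName COLLATE DATABASE_DEFAULT;
```

COLLATE DATABASE_DEFAULT resolves to the current database's collation, so the same script works unchanged on instances with different server collations.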

SQL Server 2008 and DTS supportability?

They (MS) said DTS would be wiped out after SQL Server 2005 and no longer supported, with SSIS promoted more and more in its place. But SQL Server "Katmai" does include DTS, with a provision of management, run-time, and design-time support for DTS packages that were created by using the SQL Server 2000 tools and object model.

Using the SQL 2008 CTP, I can easily manage SQL 2000 DTS packages, though the ones I handle are very basic ETL packages. With SQL 2005 I am more practised with SSIS in this area.

As you are aware, since SQL 2005, DTS and Integration Services are completely different products. For a while the community was under the impression that SSIS is not a version upgrade from DTS, but it is. As is evident, you can easily install and manage both DTS and Integration Services on the same server, and run both DTS and Integration Services packages on that server when both products are installed. Even if the SQL Server 2000 tools are not present on the server, you can run DTS packages by using the updated version of the DTS runtime that is installed with Integration Services.

The above still applies to SQL 2008: DTS packages created in SQL 2000 can be saved to the msdb database of the instance, whereas SSIS packages can only be saved on an instance that is running SQL Server 2005 or a later version.
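As a quick inventory check before a migration, the DTS packages stored in msdb can be listed from the legacy sysdtspackages table. A sketch, assuming the packages were saved to SQL Server rather than to structured storage files:

```sql
-- List each DTS package stored in msdb with its latest version date
SELECT   name,
         id,
         MAX(createdate) AS latest_version_date
FROM     msdb.dbo.sysdtspackages
GROUP BY name, id
ORDER BY name;
```

Each saved edit of a DTS package adds a row, which is why the query groups by package id to show only the most recent version.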

Within SQL 2008, using the DTS designer you can modify DTS packages even if you don't have SQL 2000 on your machine; get more information about the DTS designer from here.

Wait and watch for more on DTS supportability in the SQL Server 2008 version.

Further links and references on SSIS:

Creating a Simple ETL Package Tutorial

Integration Services Tutorials

Microsoft SQL Server Product Samples: Integration Services - Home

Deploying Packages Tutorial

Posted by ssqa.net | with no comments

SQL Server multiple instances and resource utilization - best practices

A common newbie question / FAQ in the forums and newsgroups: when you have multiple instances, how best do you set the resources on the server, such as memory, CPU, etc.?

Unless you need to configure each of the instances to use specific portions of the available operating system resources, it is best to leave the default settings - for instance, the dynamic memory settings in SQL Server. As far as configuration goes, you can easily mix versions of SQL Server (both 2000 and 2005) on the same Windows server machine: one default instance (either SQL Server 2000 or SQL Server 2005) plus everything else as named instances (SQL Server 2000 or SQL Server 2005), up to the number of supported instances for that version of SQL Server, or all named instances. For a clustered configuration this is exactly the same - you can have only one default instance plus the supported number of named instances for that version of SQL Server on a cluster, or all named instances.

Then, coming to resource utilization, the answer is: it depends. Say you have 4 instances, there isn't any resource contention, and there is enough of each resource to go around; then there is usually no reason to set any caps or limits. The operating system and SQL Server will share the available resources without any problems. But when you find that there is resource contention, then depending on the resource and what you want to happen, the way SQL Server and the OS manage the contention might not give you the result you want. In this situation you should consider setting caps on the resources in dispute.
With the default dynamic memory settings, each SQL Server instance will take what memory it thinks it needs, up to the available physical memory. When there is memory contention, sometimes one SQL Server instance won't release memory as quickly as you might like. If you are in this situation, you may need to configure the maximum and minimum server memory settings to control this manually. As these are dynamic settings in SQL Server 2005, you can change them without a reboot and see an immediate effect.
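For example, to cap one instance at 4 GB and guarantee it 1 GB (the values here are purely illustrative - pick numbers that match your workloads and physical RAM):

```sql
-- 'max/min server memory' are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 4096;  -- upper cap for this instance
EXEC sp_configure 'min server memory (MB)', 1024;  -- memory the instance will hold on to
RECONFIGURE;   -- dynamic settings: takes effect without a restart
```

Run the equivalent on each instance so the sum of the max settings leaves headroom for the operating system.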
Having antivirus software installed on servers is also commonplace, and in this case make sure it is cluster-aware; if it is not, it can have strange effects and interact in a negative way with clusters. Be sure to check with your antivirus vendor whether it is cluster-aware, and refer to the AntiVirus-SQLServer post as well. A complete understanding of when full and incremental antivirus scans are performed during the maintenance window of a database server is essential; typically these scans run in the same timeslot as the maintenance window. To avoid resource contention, you might work with the infrastructure support teams to lock down the schedules for backups, antivirus scans, software patches, and so on. Then you can look at server activity over a 24-hour period to see user time, processing times, and maintenance window times.
Keeping this in mind, the operating system and the SQL Server instances will normally do a great job of sharing the CPU between all threads. It is better to collect statistics during busy and less busy times using the SYSMON (Performance Monitor) tool; this will give you much of the information you need. Think about a server with one instance running a decision support system (DSS) with a few very complex long-running queries, and another instance running an online transaction processing (OLTP) system. The OLTP system needs an unfair share of the CPU to allow it to guarantee response times to its users, so you have to make sure the DSS instance doesn't consume all the resources and starve the other instances while executing its complex queries. In a clustered environment there may be additional factors concerning what happens when an instance fails over: in a multiple-instance failover cluster configuration you have to take into account not just the resource needs of the instances running on a node, but also the resource requirements of the instances that may fail over to it. This is a more complex topic and is discussed at length in the white papers "SQL Server Consolidation on the 32-Bit Platform using a Clustered Environment" and "SQL Server Consolidation on the 64-Bit Platform".
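If you do decide to partition the CPUs between instances, the affinity mask option is one way to pin an instance to specific processors. A sketch, assuming an 8-CPU box where the OLTP instance gets CPUs 0-3 and the DSS instance (configured separately with the complementary mask) gets CPUs 4-7:

```sql
-- Run on the OLTP instance: bind it to CPUs 0-3 (bits 0-3 set = 15 = 0x0F)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'affinity mask', 15;   -- 0x0F: CPUs 0,1,2,3
RECONFIGURE;
-- On the DSS instance, the corresponding mask would be 240 (0xF0: CPUs 4-7)
```

Leave affinity at the default (0, all CPUs) unless monitoring actually shows one instance starving the other; a fixed mask also prevents an idle instance's CPUs from being borrowed.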



SQL Server - Have you observed DBCC SHRINKFILE operation performance on huge databases?

In general it is not a best practice to perform a SHRINK database operation on a production server - at least not regularly!

Sometimes it may be compulsory to keep the files sized in order to ensure the disk storage is not compromised by sudden changes to the databases' ETL processes. Coming to the point: by design, the DBCC SHRINKFILE operation is single-threaded, which means you cannot define or configure the server to use multiple CPUs or a dedicated CPU for it. Troubleshooting performance problems on a database system is very tricky; the important part is knowing where to look for the problem, and it is frustrating to see the system react badly even to a simple query execution. This is where you need to look at how CPU, memory, and disks are performing during this operation.

Coming to the subject: the referred DBCC operation has to move database pages from the tail of the file towards the beginning of the data file, one page at a time. So you need to be careful about running this operation on a huge database, as it can tend to bring the server down to its knees for even a single simple query execution. The SHRINKFILE operation also often takes fragmentation from bad to worse; in many situations I have seen it increase the file's logical fragmentation, whereby you will see a huge difference in performance when producing a result set of just a few hundred rows.
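When you must shrink, it is gentler to reclaim only the free space at the end of the file rather than letting the engine relocate pages. A sketch - the database and file logical names here are hypothetical:

```sql
USE MyBigDB;   -- hypothetical database

-- Release only the unused space at the end of the file: no page movement,
-- so no extra fragmentation is introduced
DBCC SHRINKFILE (N'MyBigDB_Data', TRUNCATEONLY);

-- Shrinking to a target size (in MB) DOES move pages one at a time,
-- which is the slow, fragmenting path discussed above
-- DBCC SHRINKFILE (N'MyBigDB_Data', 10240);
```

The file logical names for a database can be found in sys.database_files if you are unsure what to pass as the first argument.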

If you see such a performance issue, then look at it from a high level: there are two paths to identifying CPU performance problems. The first is reviewing the system's hardware performance, an exercise that helps determine where to look when you head down the second path, reviewing the server's query efficiency. This second path is usually more effective in identifying SQL Server performance issues. Unless you know exactly where your query performance issues lie, however, you should always start with a system performance evaluation that leads you down both of these routes. The relational engine handles its own extremely efficient queuing and threading on top of the operating system, and CPUs with hyper-threading are a common scenario; this can have the effect of overloading the physical CPUs on systems with already high CPU utilization. SQL Server queues threads as multiple requests to perform work on multiple schedulers, and the operating system then struggles to switch the context of the threads back and forth on the physical processors to satisfy those requests, even when the two logical processors are sitting on top of the same physical processor.

Coming back to the SHRINKFILE operation: on a huge table you will see much more degraded performance with heaps that have many non-clustered indexes, which is a different situation from the clustered index case. The SHRINK operation will also be very slow when moving pages holding IMAGE or other large object (LOB) data, as it has to read the data from each page in order to rearrange it. Further, since most of the content of an index/table resides at the end of the file, you can rebuild the indexes to move them towards the front of the file. Also bear in mind to keep a close watch on transaction log space in the databases, in addition to TEMPDB, which is used extensively in the SQL Server 2005 version.
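Rebuilding the indexes after (or instead of) a shrink is one way to undo the logical fragmentation and relocate the index pages. A sketch using SQL Server 2005 syntax, with a hypothetical table name:

```sql
-- Rebuild all indexes on the table; this compacts and relocates their pages
ALTER INDEX ALL ON dbo.BigTable REBUILD;

-- Check logical fragmentation before and after (SQL Server 2005 DMV)
SELECT index_id, avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                      NULL, NULL, 'LIMITED');
```

Note that a rebuild needs free space in the file to build the new copy of the index, so plan it before the shrink rather than expecting both to fit in a tight file.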

So, in the end, make sure you schedule the SHRINK operation during low-traffic hours, and preferably on a server with multiple CPUs so the rest of the workload is not starved while the single-threaded shrink runs to completion in a timely manner.