Welcome to the new face of the Knowledge Sharing Network.
To find older blog posts, refer to our historical knowledge sharing site, and we hope you will make this your destination for a SQL Server knowledge pool.



April 2008 - Posts

SQL Server 2005 and a disk drive allocation unit size of 64K - any performance benefit?

You may be aware of, or have seen on your database platform, slower disk performance than expected when using multiple disks in a Windows Server 2003 environment with a hardware-based or software-based redundant array of independent disks (RAID). One possible reason is that the starting location of the partition is not aligned with a stripe unit boundary in the disk partition that is created on that RAID. For further information we can refer to the KBA929491 article, which states:

A volume cluster may be created over a stripe unit boundary instead of next to the stripe unit boundary. This is because Windows uses a factor of 512 bytes to create volume clusters. This behavior causes a misaligned partition. Two disk groups are accessed when a single volume cluster is updated on a misaligned partition.

Windows creates partitions that are based on a predefined number of sectors. The starting location for a disk partition in Windows Server 2003 is either the 32nd or the 64th sector, depending on the information that is presented to the operating system by the mass storage controller.

By default, disk partitions reserve the first sector for code and a number of sectors for the starting sector information; the data portion then begins at the next sector. In this regard Microsoft recommends the DiskPart.exe tool, which can create a disk partition with a specified starting offset; before doing so, it is recommended to find out the current offset of the partitioned disks. You can get more information about the Diskpart.exe tool from the DiskPart-TechnetArticle and Exchange-Server_IO_Alignment article links.
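As a minimal sketch (the disk number and drive letter are illustrative, and the align parameter requires the Windows Server 2003 SP1 or later version of DiskPart; confirm the actual stripe unit size with your storage vendor first), an aligned partition can be created like this:

  diskpart
  rem at the DISKPART> prompt:
  list disk
  select disk 1
  rem align is specified in KB; 64 aligns the partition to a 64-KB boundary
  create partition primary align=64
  assign letter=F
  exit

After creating the aligned partition, format it with whatever allocation unit size you have decided on.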

Within a recent project task I proposed using 64K blocks for the disk partitions that hold transaction logs, as there is no doubt it enhances performance when writing large chunks of log for a database (if required). I have also discussed with the Microsoft development team the recommendation they use internally of 64K for both data and log; further, I suggest referring to Bob Dorr's explanation of how best the system can make use of it during backup/restores (link). One of my colleagues (Ken) raised the concern that: "I can see that sometimes it would help but not always. My *guess* is that SQL won't always have 64k of data that needs to be written urgently (e.g. after a COMMIT). I guess it all depends on the algorithm's used and I've not seen them described in detail anywhere".

Further, I have found that in the past, when the infrastructure team raised a query with Microsoft, it was discouraged to set 64K blocks on drives other than the data disk. The usual disk layout separation we follow for a SQL Server setup is as follows:

  • SQL binaries (System databases and software)
  • Tempdb (and Tempdb logs)
  • Application Database data files
  • Application Database transaction log files
  • Database Backups (system & user database)

Going forward I strongly suggest using 64K blocks for the data, log and backup drives in order to obtain maximum performance. To support this I would like to quote the following text from the Pre-Deployment I/O Best Practices article:

NTFS Allocation Unit Size

When formatting the partition that will be used for SQL Server data files, it is recommended that you use a 64-KB allocation unit size for data, logs, and tempdb. Be aware however, that using allocation unit sizes greater than 4 KB results in the inability to use NTFS compression on the volume. SQL Server, although it is not recommended that you use this, does support read-only data on compressed volumes.
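To check the allocation unit size of an existing volume before deciding to reformat, you can use fsutil; a quick sketch (drive letter F: is just an example):

  rem look for the "Bytes Per Cluster" value in the output
  fsutil fsinfo ntfsinfo F:

  rem reformat with a 64-KB allocation unit - this wipes the volume, so back up first
  format F: /FS:NTFS /A:64K

A value of 65536 bytes per cluster confirms the volume is already formatted with a 64K allocation unit.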

How to configure SQL Server Reporting Services in a DMZ environment?

Many questions come up when you need to set up your report server on the Internet within a DMZ environment.

The important aspects involve the SQL Server database, Reporting Services and Report Manager. For the sake of discussion, say Reporting Services and Report Manager are on a server in the DMZ (behind a firewall) that is not a member of the domain; then you need to set up the port and authentication. Also consider that the report server and the database server need not be in the same domain: if your database server supports SQL Server authentication (by username and password), then your report server can connect to the database server over the Internet and read the required data. On the authentication part you have to ensure that the listen-on port for SQL Server is set and open, and for the sake of security not the default 1433. Using SQL Server 2005 Configuration Manager you can set the relevant port number and restart the SQL Server service for it to take effect; you can also check this within the SQL Server error log. SSRS (the report server) can connect to SQL Server even if it is listening on another port; when SQL Server is configured on a port other than the default, you should specify the port number in the connection string, for example: "server=servername_or_ip,port".
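For instance, a sketch of a report server data source connection string for a non-default port (the server name, port 2433 and database name below are placeholders):

  Data Source=dbserver.example.com,2433;Initial Catalog=AppReportingDB

The SQL Server login credentials are then better stored in the data source definition rather than embedded in the string.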

Further, there are a few things you need to consider before letting users access Reporting Services in a DMZ environment where the servers are exposed to the Internet, in order to safeguard the servers from hackers.

In this case you must be aware that SQL Server uses TCP port 1433 and UDP port 1434 by default; change it to a different port number, and consider SQL Server Browser service security too. Finally, don't forget to secure the password; if you are using certificate-based (secured) authentication, then refer to the http://blogs.msdn.com/tudortr/archive/2005/11/03/488731.aspx blog, which helped me a lot in the past.

 

SQL Server 2008 - What's new for the IT Pro community [UK User Group conference]

In continuation of the UK User Group conference (2 days) at the Microsoft Campus, Thames Valley Park, Reading, here is my presentation to download, which was delivered on 9th April 2008.

If you are unable to download from SkyDrive then click here.

During that conference I represented the UK SQL Server User Group and the Scottish Area SQL Server User Group, and I believe this is a unique opportunity to promote your skills and learn something new in technology.

 

SQL Server 2005 Service Pack 3 - work started

OK, here is some hot news for SQL 2005 users!!!

From the 'Are you ready for SP3?' post here, it has been officially announced that SP3 for SQL Server 2005 will be available in CY2008 (calendar year 2008) - that means before 31st December 2008!

More from this DataInsider blog post.


Cumulative Update 7 available for SQL Server 2005 SP2 - get it now to test and apply (if needed)

Here is candidate number 7 in the Cumulative Update series for SQL Server 2005 SP2.

The SQL Server Release Services team announced (yesterday) another instalment of the Cumulative Update series (here) for SQL Server 2005 Service Pack 2. It is not an easy decision for the user community to apply a CU rather than wait for a service pack release, which is more stable, as proved with previous versions.

So the golden rule is always to test this CU7 first; the usual route to obtain it (since CU5) is to raise an online request (as referred to in KBA949095):

..... it is intended to correct only the problems that are described in this article. Apply it only to systems that are experiencing these specific problems. This cumulative update package may receive additional testing. Therefore, if you are not severely affected by any of these problems, we recommend that you wait for the next SQL Server 2005 service pack that contains the hotfixes in this cumulative update package.

To resolve this problem, submit a request to Microsoft Online Customer Services to obtain the cumulative update package. To submit an online request to obtain the cumulative update package, visit the following Microsoft Web site:

In addition to this, if you need to know about previous CU releases, refer to the KBAs below (as articulated in the above KBA):

CU#6 KB Article

CU#5 KB Article

CU#4 KB Article

CU#3 KB Article

CU#2 KB Article

CU#1 KB Article

See also the CumulativeBlog posts on this subject.

Again, the best advice is to refer to the above KBAs and download the update only if you are affected by the specified problems within those articles. Though these are publicly announced, they are not openly available for download; overall, it is best to test the update before applying it on a live instance.

SQL Server Reporting Services - avoid restarting IIS when security groups are added

You may be aware that the normal method of http://servername/reports gives users access, and that if you grant Reporting Services permissions to Windows groups using folder properties security (or Report Manager), you might need to restart IIS for the new groups to take effect.

In this case it may not be possible to restart IIS on a production server unless user activity is low or an arranged downtime is agreed, so how do you resolve this issue? The default IIS configuration has a delay of 15 minutes before user tokens are updated. For example, if you change the password on a user account, you will be able to connect to the server with both the old password and the new password. So either you wait for 15 minutes, or you change this interval using the following method, which was referred to in one of the MS Knowledge Base articles (I don't remember the number):

1. Run Registry Editor (Regedt32.exe or Regedit.exe).
2. From the HKEY_LOCAL_MACHINE subtree, go to the following key:
\System\CurrentControlSet\Services\InetInfo\Parameters
3. Click Add Value on the Edit menu, and add the following:
Value Name: UserTokenTTL
Data Type: REG_DWORD
Data: (Number of Seconds for token to be cached - 30 sec. Min)
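Equivalently, a quick sketch of the same change from the command line (the 300-second TTL shown is just an example value; pick what suits your environment):

  rem cache user tokens for 300 seconds instead of the 15-minute default
  reg add "HKLM\System\CurrentControlSet\Services\InetInfo\Parameters" /v UserTokenTTL /t REG_DWORD /d 300

Note that IIS may need one restart for the new TTL value itself to be picked up.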

 

End of mainstream support for SQL Server 2000 & 2005 (SP1) today - so what about SP2?

The subject says it all: as of today, mainstream support for SQL Server 2000 (SP4) and SQL Server 2005 (SP1) has ended. That means for SQL Server 2005 Service Pack 1 there will not be any support except through a Custom Support agreement, which may be needed where a few applications have issues upgrading to Service Pack 2 for SQL Server 2005; the general suggestion is to upgrade to Service Pack 2 on SQL 2005, which has been stable in the year since its release!

Further, looking at major implementations on a global basis, many enterprises out there are still running SQL Server 2000 (SP4, for discussion's sake). To clarify a bit further: technical support continues until 2013 (hopefully until April 9th), but mainstream support for any kind of hotfix or cumulative fix ends today. The usual reply you would get from a support team is to upgrade the SQL installation to at least SQL Server 2005 SP2, unless you have an extended support agreement with the CSS team.

This brings up a big question: what about Service Pack 2 for SQL Server 2005? The usual Product Lifecycle page on Microsoft confirms that such support ends either 12 or 24 months after the next service pack releases, or at the end of the product's support lifecycle, whichever comes first. You can visit that page for more information on specific products.

Also see related content from AndrewFryer's blog post.

SQL Server Integration Services - think about Transactions and Checkpointing?

Transactions and checkpoints are important factors for data consistency when it comes to ETL processes, especially in SSIS - what is your stand?

I'm covering this important topic for the sake of newbies and junior DBAs; a similar question was also raised by users during a user group meeting. SQL Server Integration Services has a sophisticated method for resuming a failed package from the point of failure instead of rerunning the whole package. If you ever developed a sophisticated DTS package and an error occurred in one of the tasks, you usually had to (re)execute the package from the beginning. For complex packages, with big data involved in the ETL process, this is a real problem and a considerable amount of time is wasted. This is where the checkpoint feature kicks in within SSIS, giving you the option not to start the package from the beginning when it fails.

To get this benefit you have to ensure the packages are configured to use checkpoints, so that information about package execution is written to a checkpoint file. When the failed package is rerun, the checkpoint file is used to restart the package from the point of failure; if the package runs successfully, the checkpoint file is deleted and then re-created the next time the package is run. When a task in the package fails, its execution state is logged to the file; when you correct the task and re-execute the package, execution begins at the failed task rather than at the first task of the package. The package reads the execution state from the file, resumes with that task and the rest, and once execution succeeds it removes the file. This is a nice feature that helps a lot when you develop a complex calculation package. There is one downside to using checkpoints and transactions together in the same package: it can cause unexpected results, for instance when a package fails and restarts from a checkpoint, it might repeat a transaction that has already been successfully committed.

Tip: to use this with packages migrated to SSIS and run through the dtexec utility, the /CheckPointing on option of dtexec is equivalent to setting the SaveCheckpoints property of the package to True and the CheckpointUsage property to Always.
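For example, a sketch of such an invocation (the package path is hypothetical):

  rem run the package with checkpointing enabled
  dtexec /F "C:\ETL\NightlyLoad.dtsx" /CheckPointing on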

To use this approach in SSIS you can use BIDS (BI Development Studio) to enable checkpointing for a package: in the package's properties, under the Checkpoints section, define a CheckpointFileName, set CheckpointUsage to 'IfExists' or 'Always', and set SaveCheckpoints to True. Then set the FailPackageOnFailure property of a task to True, otherwise the package will not create a checkpoint file when it fails. To see whether it works, you can create a package with two sample Execute SQL Tasks, set the second task's ForceExecutionResult property to Failure, and run the package. If you watch the output window, the package status will show failure, and you will then find the checkpoint file that you specified in the CheckpointFileName property. Next, correct the task by setting the second task's ForceExecutionResult property back from 'Failure' to 'Success'. Run the package again; this time you will notice that the package executes from the second (failed) task rather than the first. Once it has executed successfully, the checkpoint file will no longer exist, because the package deletes a checkpoint file that is no longer useful.

In addition to this you have to be aware of the types of transactions, namely distributed transactions and native transactions (SQL engine). Both of these types are supported. The first uses the two-phase commit protocol via the MSDTC service, so you have to ensure that the DTC service is up and running in order to execute the package successfully; within the package you need not specify BEGIN TRAN ... COMMIT/ROLLBACK statements, as they are taken care of by the DTC service itself. Secondly, for the native approach, where transaction commit/rollback is handled by the SQL engine, you have to specify BEGIN TRAN ... COMMIT/ROLLBACK explicitly within the 'Execute SQL Task' components of SSIS, and also ensure the connection manager has the 'RetainSameConnection' property set to TRUE.
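As a minimal sketch of the native approach (the table names are made up for illustration), the Execute SQL Tasks sharing one connection with RetainSameConnection = TRUE would run statements like:

  -- First Execute SQL Task: open the transaction
  BEGIN TRAN;

  -- Intermediate Execute SQL Task(s): the actual ETL work
  INSERT INTO dbo.StageSales (SaleID, Amount)
  SELECT SaleID, Amount FROM dbo.SourceSales;

  -- Final Execute SQL Task: commit (an error-handling path would issue ROLLBACK TRAN instead)
  COMMIT TRAN;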

To wrap up, you can get more help from the local documentation of SQL Server, aka BOL (use the updated version). The How-To topics in BOL refer to:

To configure a package to restart

  1. In Business Intelligence Development Studio, open the Integration Services project that contains the package you want to configure.

  2. In Solution Explorer, double-click the package to open it.

  3. Click the Control Flow tab.

  4. Right-click anywhere in the background of the control flow design surface, and then click Properties.

  5. Set the SaveCheckpoints property to True.

  6. Type the name of the checkpoint file in the CheckpointFileName property.

  7. Set the CheckpointUsage property to one of two values:

    • Select Always to always restart the package from the checkpoint.

       

    • Select IfExists to restart the package only if the checkpoint file is available.