One of the great things about PowerShell is the ability to easily script out SQL Server objects. With a bit of work you can also use it to script out replication objects, such as publications, articles and subscribers. This can be used as part of a disaster recovery plan, so that you always have a script available to recreate replication. You can do all of this from Management Studio, but when you have lots of publications that can take a long time.

This article explains how to script transactional push subscriptions. It is very easy to change the objects used to script out other types of replication as well (e.g. merge, snapshot or pull).

First you need to either start sqlps, or load the appropriate SQL Server assemblies. How to do that is detailed on Martin Bell's blog at http://sqlblogcasts.com/blogs/martinbell/archive/2009/09/06/Using-SQL-Server-in-Powershell-outside-of-SQLPS.aspx
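If you are not using sqlps, a minimal sketch of loading the assemblies directly looks like this (LoadWithPartialName is an older approach, but it matches the style used in the rest of this article):

# Load SMO and the connection classes used by the examples below
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo")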

 

The next thing you need to do is load the RMO assembly. This is the assembly that encapsulates replication functionality for SQL Server.

 

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Rmo")

 

Now you need to create a new replication server object and connect to it. One thing to note is that even if your server or database is set to a case-insensitive collation, most of these objects need the variables to be defined with the same case. So if you have a publication called SalesTables, it won't show up if you use the examples below and set the publication name to salestables.

 

 

$servername = "NAME OF PUBLISHER SERVER"
$repserver = New-Object "Microsoft.SqlServer.Replication.ReplicationServer"
$srv = New-Object "Microsoft.SqlServer.Management.Common.ServerConnection" $servername
$srv.Connect()
$repserver.ConnectionContext = $srv

 

So now we have $repserver, which represents a replication object at the server level. But we are interested in scripting replication, which must be done at the database level, so we get the object representing the published database.

 

$databasename = "DATABASE PUBLISHED FOR REPLICATION"
$repdb = $repserver.ReplicationDatabases[$databasename]

  

From this object, we can then list out the transactional publications using

 

$repdb.TransPublications
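For example, to list just the publication names you can pipe the collection through Select-Object:

$repdb.TransPublications | Select-Object Name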

 

If you want to get an individual publication, you can use this

 

$publication_name = "PUBLICATION NAME"
$publication_object = $repdb.TransPublications[$publication_name]

 

From this $publication_object we can then see the articles by looking at

 

$publication_object.TransArticles

 

And subscriptions

 

$publication_object.TransSubscriptions

 

So now we have the three things that we would want to script for replication: the publication (in $publication_object), a collection of articles (in $publication_object.TransArticles) and a collection of subscriptions (in $publication_object.TransSubscriptions).

We can set various scripting options using the ScriptOptions enumeration. These vary slightly between different versions of SQL Server, but you can see the values for all versions at http://msdn.microsoft.com/en-us/library/microsoft.sqlserver.replication.scriptoptions.aspx

 

To specify which options you want, you just need to create a variable like

 

$script_val = [Microsoft.SqlServer.Replication.ScriptOptions]::Creation -bor
                        [Microsoft.SqlServer.Replication.ScriptOptions]::IncludeGo

 

So this value will script out the creation of the objects and include GO separators. To include more options, you just combine them with additional -bor operators.
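For example, this sketch (the available option names vary between versions, so check the ScriptOptions list linked above) also scripts out the creation of the Snapshot Agent job:

$script_val = [Microsoft.SqlServer.Replication.ScriptOptions]::Creation -bor
                        [Microsoft.SqlServer.Replication.ScriptOptions]::IncludeGo -bor
                        [Microsoft.SqlServer.Replication.ScriptOptions]::IncludeCreateSnapshotAgent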

 

Finally we can now script our replication objects using a simple

 

$publication_object.Script($script_val)

 

foreach ($article in $publication_object.TransArticles) {
    $article.Script($script_val)
}

foreach ($subscription in $publication_object.TransSubscriptions) {
    $subscription.Script($script_val)
}

 

One thing to note is that if you have a separate publisher and distributor, then some of the job properties (such as profiles and schedules) may not be scripted correctly. This is due to problems with RMO and not this technique. You get the same issue if you script replication from Management Studio.

You now have the basis for creating a script which can be scheduled to run and automatically script out your whole replication environment. You can customise this as much as you like - for instance to make the output re-runnable, or just more human-readable.
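As a starting point, here is a minimal sketch of such a wrapper that scripts every transactional publication in the database to a single file (the output path is a hypothetical example):

# Append the scripts for every publication, its articles and its subscriptions to one file
$outfile = "C:\scripts\replication_script.sql"
foreach ($publication in $repdb.TransPublications) {
    $publication.Script($script_val) | Out-File $outfile -Append
    foreach ($article in $publication.TransArticles) {
        $article.Script($script_val) | Out-File $outfile -Append
    }
    foreach ($subscription in $publication.TransSubscriptions) {
        $subscription.Script($script_val) | Out-File $outfile -Append
    }
}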

 

 

 

Last night I gave a presentation on replication at the Leeds SQL Server user group. You can find the slides and scripts I used in the presentation here.


This weekend saw SQLBits 7 in York. This was a great conference; I learned a lot and met some really great people. A huge thanks to all the organisers and sponsors of the event.

You can find the slides from my presentation on database projects here.

Microsoft released a component for SSIS 2005 to convert a packed decimal (COMP-3 format) into a decimal so you can use it in a data flow. Details of it are at http://www.microsoft.com/downloads/en/details.aspx?familyid=0e4bba52-cc52-4d89-8590-cda297ff7fbd&displaylang=en

Unfortunately, this doesn't work in SSIS 2008, but with a few simple changes to use SSIS 2008 types rather than the 2005 ones, it does.

Here is the source code with these modifications and compiled DLLs (in debug and release configurations).
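To illustrate what the component actually does (this is just a sketch of the COMP-3 format, not the component's code): each byte of a packed decimal holds two digits, except the last byte, which holds the final digit plus a sign nibble (0xD means negative).

# Sketch: decode a packed decimal (COMP-3) byte array into a decimal
function ConvertFrom-PackedDecimal {
    param([byte[]]$Bytes, [int]$Scale = 0)
    $digits = ""
    for ($i = 0; $i -lt $Bytes.Length - 1; $i++) {
        # the high and low nibbles are one digit each
        $digits += "{0}{1}" -f [math]::Floor($Bytes[$i] / 16), ($Bytes[$i] % 16)
    }
    # the last byte holds one digit plus the sign nibble
    $digits += [math]::Floor($Bytes[-1] / 16)
    $sign = if (($Bytes[-1] % 16) -eq 0x0D) { -1 } else { 1 }
    ([decimal]::Parse($digits) * $sign) / [decimal][math]::Pow(10, $Scale)
}

# 0x12 0x34 0x5C encodes +12345; with a scale of 2 this returns 123.45
ConvertFrom-PackedDecimal -Bytes @(0x12, 0x34, 0x5C) -Scale 2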

This is not strictly a SQL post, but it is to do with XML, which is used in SQL Server.

 

XML files do not need to have any formatting in them. That means that when viewing an XML file produced by an application, all the data can be on one line. Although this is valid XML and can be processed by SQL Server or SSIS, sometimes it is useful to view the data in a human-readable format.

 

You can use an XML editor for this, or you can open the file in Visual Studio and use the shortcut Ctrl+K, Ctrl+D, which formats it all nicely with indents to make it easier to read.
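If you would rather do it programmatically, here is a minimal PowerShell sketch that achieves the same thing (the file names are hypothetical):

# Load the unformatted XML and save it back out with indentation
$xml = New-Object System.Xml.XmlDocument
$xml.Load("C:\temp\unformatted.xml")
$writer = New-Object System.Xml.XmlTextWriter("C:\temp\formatted.xml", [System.Text.Encoding]::UTF8)
$writer.Formatting = [System.Xml.Formatting]::Indented
$xml.Save($writer)
$writer.Close()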

Thanks to all who came to see my Service Broker presentation last night. It was my first time presenting to a user group and I really enjoyed it.

As I promised, the presentation and demos from it should be in the attached zip file here.

Here is a little trick to find out what stored procedures or statements are causing data modifications. You could use a trace for this, and in most situations that will be preferable as you can filter what is captured and get more information. This is another method, which uses a trigger and dbcc inputbuffer.

First create a table to store the results

create table dbcc_monitor (
    EventType nvarchar(60),
    Parameters int,
    EventInfo nvarchar(510)
)


 

You then create a trigger on the table you want to monitor that runs dbcc inputbuffer for the current connection (using @@spid)

 

create trigger some_table_trigger
on some_table for insert, update, delete
as
declare @sql varchar(100)
set nocount on
-- capture the statement that caused this modification
select @sql = 'dbcc inputbuffer(' + convert(varchar, @@spid) + ')'
insert into dbcc_monitor exec(@sql)

 

Then whenever an insert, update or delete happens on the table, the results of dbcc inputbuffer are logged to the dbcc_monitor table. This should contain the name of the stored procedure that caused the data modification. You could also capture the data from the inserted and deleted tables in the trigger to find out what the modifications were.

A word of warning though: there will be a performance hit with this as it logs all data modifications, so don't do it on a heavily modified table.

Database projects are available in Visual Studio Database Edition (which is covered under the same licence as Visual Studio Developer Edition). They allow you to store all the objects that make up a database in a project and easily deploy them to a server. The objects are stored as SQL scripts that create each object. When you deploy, it compares the object definitions to the target database and creates a difference SQL script that makes the target server the same as the project.

With the GDR release for Visual Studio 2008, there are many improvements, including support for SQL 2008 and the separation of the build and deploy steps. Gert Drapers has an excellent article at http://blogs.msdn.com/gertd/archive/2009/06/05/declarative-database-development.aspx explaining how database projects work in the GDR release.

If you have an existing database project and you install the GDR release, you need to upgrade your projects to the new version.

There are three methods for doing this that I will discuss here.

The first method is to deploy your old database project to a blank database and then import it into the GDR version. This is probably the cleanest method. However, it does mean that the location of your scripts (i.e. the object definitions) will change, which may cause problems if you have your project under source control. It is also very time consuming, so it may not be an option if you have many projects.

The second method is to just open the database projects in the GDR version and go through the upgrade wizard. This only changes the dbproj file and adds some extra files, but leaves all the scripts in the same location. If you have your project under source control, this is a better method for that reason. However, it is still a manual process, so if you have many projects it can take a long time.

There is a bug in the upgrade wizard. The database.sqlpermissions file gets added to the dbproj file like this

<PropertiesFile Include="Properties\Database.sqlpermissions" />

when it should get added as

<Build Include="Properties\Database.sqlpermissions">
  <ModelBuilderType>Permissions</ModelBuilderType>
</Build>

So you will need to edit the dbproj file manually after the upgrade.

It also does not move the contents of the permissions.sql file into the database.sqlpermissions file, so this is something you need to do manually.

The third method is to automate the second method using PowerShell. I have recently used this to upgrade hundreds of projects in a few hours. The main script is called upgrade-projects.ps1

The script finds all the dbproj files and then calls

C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe $dbprojname /upgrade

on each one, which does the upgrade without the need for intervention.
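A simplified sketch of that loop (the full script is in the download):

# Find every dbproj under the current directory and upgrade it in place
$devenv = "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe"
Get-ChildItem -Path . -Filter *.dbproj -Recurse | ForEach-Object {
    & $devenv $_.FullName /upgrade
}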

It also fixes the database.sqlpermissions entry in the dbproj file automatically, and changes the model collation to be UK case-insensitive (as all my database projects have this collation).

It also calls another script - upgrade_permissions.ps1 - which tries to convert the permissions.sql file into the database.sqlpermissions format. This script works for basic grant statements on tables and stored procedures (e.g. grant select, grant delete, grant execute) and for most grants on xml schema collections. There may be problems with it if you try to use it with other objects.

It uses the get-tfs cmdlet to try to check out and add files to TFS. This is available from here - http://blogs.msdn.com/jmanning/archive/2006/09/28/776141.aspx. If you use another source control provider you will need to change these parts.

Finally the script calls another script called build-all.ps1 which tries to build all the projects to check for any errors.
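Again as a rough sketch (the real build-all.ps1 is in the download), the idea is just to run msbuild over each project:

# Build each project to check for errors
$msbuild = "C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe"
Get-ChildItem -Path . -Filter *.dbproj -Recurse | ForEach-Object {
    & $msbuild $_.FullName /t:Build
}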

All three scripts are in the file here. I hope this helps anybody who is upgrading database projects.

If a server is behaving badly, or there is a long-running process and you want to find out what it is actually doing, you might find these scripts useful. They show the query or stored procedure that is executing and the text of the statement currently executing within that procedure.

This gives a more complete picture of what is happening than just using dbcc inputbuffer, which only displays the last statement sent by the client to SQL Server.

For example, say you have a stored procedure, uspProcedure1, which calls another procedure, uspProcedure2, and uspProcedure2 takes a long time to complete. If a client executes uspProcedure1 and you run dbcc inputbuffer on that spid, it will just show uspProcedure1 executing, even though the statement currently executing is inside uspProcedure2.

Both scripts create a procedure that does the same thing - sp_dba_showline is for SQL 2000 with SP3 or greater installed and sp_dba_showline2005 is for SQL 2005. The reason for the difference is that the SQL 2000 version uses fn_get_sql (a function introduced in SP3), which is deprecated in favour of sys.dm_exec_sql_text (available in SQL 2005). The SQL 2005 version also uses a cross apply join to get the data.
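The heart of the SQL 2005 version is a query along these lines (a sketch rather than the procedure's exact code), which you could also run ad hoc from PowerShell - the server name and spid here are placeholders:

# Sketch: fetch the statement currently executing for a given spid (SQL 2005)
$spid = 55
$sql = @"
select substring(t.text, r.statement_start_offset/2 + 1,
       ((case when r.statement_end_offset = -1 then datalength(t.text)
              else r.statement_end_offset end) - r.statement_start_offset)/2 + 1)
from sys.dm_exec_requests r
cross apply sys.dm_exec_sql_text(r.sql_handle) t
where r.session_id = $spid
"@
$conn = New-Object System.Data.SqlClient.SqlConnection "Server=MYSERVER;Integrated Security=SSPI"
$cmd = New-Object System.Data.SqlClient.SqlCommand $sql, $conn
$conn.Open()
$cmd.ExecuteScalar()   # the text of the statement running on that spid
$conn.Close()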

 

The procedure takes one parameter, @spid, and the results returned are:

the results of dbcc inputbuffer to see what query or procedure was called from the client

the results of sp_who2 to show general process information

the first 100 lines of the procedure or query being executed (this can be useful to find the procedure name being executed)

the text of the statement actually being executed. 

If you create these in the master database as dbo, then they should be accessible from all user databases.



Whilst I was looking at building custom reports in Management Studio, I finally got round to looking at Reporting Services in SQL 2005.

 

One really neat feature I found is the ability to use an expression to build a connection string. Using this, you can dynamically set which server the report connects to when it runs. You can then embed this report as a subreport on another report and so get it to run against several servers all from one report. If you look after several servers, this can give you a nice easy way to check all your servers are OK.

 

To do this, first create the report you want to run against each server. Don't worry about the data source while you are designing it - just connect to any machine and make sure it is designed how you want.

 

When you are happy with it, add a parameter to the report called ServerName. This should be a string datatype and the ‘allow blank value’ checkbox should be unchecked.

 

Next you need to change the data source to use an expression. (One thing to note is that you can't use an expression on a shared data source, so you may need to create a new data source for the report.) Edit the data source, click the fx button next to the connection string and change the expression to

 

="Data Source=" & Parameters!ServerName.Value

 

If you now preview the report, you need to enter the ServerName parameter, and it runs against whichever server you specify.

 

Now for the second report. First create a dataset that will return all the names of the servers you want to run the report against.

 

On the layout tab, add a table and bind it to that dataset. Then drag a subreport into the detail part of the table. Right-click on the subreport and select Properties, and in the subreport drop-down box select the first report you created. Finally, go to the Parameters tab; in the parameter name column select ServerName, and in the parameter value column select the column from the dataset that contains the server name.

 

In the download on this entry are a couple of reports to show how this all works.

 

The first report is SingleServerInfo, which just shows some information about the server it connects to. You can change which server it connects to using the ServerName parameter.

 

The second report is MultipleServerInfo and has a table with SingleServerInfo as a subreport in one of the cells. This means it is called for each server returned by the dataset on this report (which needs a table called ServerNames that can be created using the script in the zip file).

 

These reports will only run against SQL 2005 servers as they use some of the dynamic management views available there. However, there is no reason why you can't use this technique to run a report against several SQL 2000 servers (or a mixture of versions).

 

Please make sure you look at the queries these reports run and are happy with them before running the reports against any of your servers.