SSAS: Utility to check you have the correct data types and sizes in your cube definition

This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube’s data source view with the data types/sizes of the corresponding dimensional attribute. 

Why is this important?  Well, when creating named queries in a cube’s data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS.  This is particularly important when your cube is based on an Oracle data source or uses custom SQL queries rather than views in the relational database.   The problem with BIDS is that if you change the underlying SQL query, the size of the data type in the dimension does not update automatically.  This then causes problems during deployment: processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute.

In particular, if you use some of the string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10-character string you expect suddenly turns into an 8,000-character monster.  For example, the SQL Server REPLACE function returns a column with a width of 8,000 characters, so if you use this function in a named query in your DSV, you will get a column 8,000 characters wide.  Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field.

Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size.

Introducing the CheckCubeDataTypes Utility

The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube’s data source view.  It then generates an Excel CSV file which contains all this metadata along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute.  Note that the app not only checks all the attribute keys but also the name and value columns for each attribute.
Another benefit of having the metadata held in a CSV text file format is that you can place the file under source code control.  This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development.
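For those curious about how such a check can be implemented with AMO, here is a minimal sketch of the core comparison. It is a simplification of the actual utility: it assumes a single data source view, only inspects the attribute key columns (the full tool also checks the name and value columns), and the server and database names are placeholders.

using System;
using System.Data;
using Microsoft.AnalysisServices;

class CheckCubeDataTypesSketch
{
    static void Main()
    {
        // placeholder server and database names
        Server server = new Server();
        server.Connect("localhost");
        Database database = server.Databases.FindByName("MyOlapDatabase");

        // assume a single data source view for simplicity
        DataSourceView dsv = database.DataSourceViews[0];

        // CSV-style output: one row per attribute key column
        Console.WriteLine("Dimension,Attribute,AttributeDataSize,DsvMaxLength,Mismatch");

        foreach (Dimension dim in database.Dimensions)
        {
            foreach (DimensionAttribute attr in dim.Attributes)
            {
                foreach (DataItem keyColumn in attr.KeyColumns)
                {
                    // only column bindings can be matched back to a DSV column
                    ColumnBinding binding = keyColumn.Source as ColumnBinding;
                    if (binding == null) continue;

                    DataColumn dsvColumn = dsv.Schema.Tables[binding.TableID].Columns[binding.ColumnID];

                    // MaxLength is only meaningful for string columns (-1 otherwise)
                    bool mismatch = dsvColumn.MaxLength > 0 && dsvColumn.MaxLength != keyColumn.DataSize;

                    Console.WriteLine("{0},{1},{2},{3},{4}", dim.Name, attr.Name,
                        keyColumn.DataSize, dsvColumn.MaxLength, mismatch);
                }
            }
        }

        server.Disconnect();
    }
}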

You can download the C# source code from here: CheckCubeDataTypes.zip

A typical example of the output Excel CSV file is shown below. Note that a data size mismatch is flagged by TRUE appearing in the last column.

Posted by DrJohn with 2 comment(s)

SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review the changes that have been made to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to export all the SQL manually and send him the files. To make it easy to get hold of the SQL in a data source view, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

The Trick

When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV’s XML and obtain the SQL from that. This is when the breakthrough happened. Inspecting the DSV’s XML, I saw that the things I was interested in were called:

  • TableType
  • DbTableName
  • FriendlyName
  • QueryDefinition

Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

 

using System;
using System.Data;
using System.IO;
using Microsoft.AnalysisServices;

... class code removed for clarity

// connect to the OLAP server
Server olapServer = new Server();
olapServer.Connect(config.olapServerName);
if (olapServer != null)
{
    // connected to the server OK, so obtain a reference to the OLAP database
    Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
    if (olapDatabase != null)
    {
        Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
            config.olapDatabaseName, config.olapServerName));

        // export SQL from each data source view (usually only one, but can be many!)
        foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
        {
            Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

            // for each table in the DSV, export the SQL to a file
            foreach (DataTable dt in dsv.Schema.Tables)
            {
                Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                // get the name of the table in the DSV
                // use the FriendlyName as the user inputs this and therefore has control of it
                string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                // delete the sql file if it exists
                ... file deletion code removed for clarity

                // write out the SQL (for named queries) or the underlying table name to a file
                if (dt.ExtendedProperties["TableType"].ToString() == "View")
                {
                    File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                }
                if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                {
                    File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                }
            }
        }

        Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
    }
}

 

Of course, if you are following industry best practice, you should be basing your cube on a series of views, in which case this utility will be of limited practical value, unless of course you are inheriting a project and want to check whether someone did the implementation correctly.

Posted by DrJohn with 1 comment(s)

Building a Dynamic OLAP Environment at #SQLbits8

On Saturday I attended the SQLBits conference in sunny Brighton. Great conference, well organized with lots of great speakers presenting high quality content.

For me the highlight was meeting many of the senior guys from the Redmond SQL Server product team including Richard Tkachuk, Mark Souza and Ross Mistry.

Marco Russo presented a great session called Analysis Services Advanced Best Practice. Ok so I have read his book and implement most of the best practices already, but it was great to hear it all from the author himself!

I received very positive feedback on my own session Building a Dynamic OLAP Environment which I had discussed in a previous post. Sometime soon, the slide deck will be made available on the SQLBits web site along with the video of my presentation. However, if you are eager to get your hands on it now, it can be downloaded from here: BuildingDynamicOLAP.zip.

All in all, a great conference! I am looking forward to the next already.

Posted by DrJohn with no comments

The most dangerous SQL Script in the world!

In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL Scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice; but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script which will clear down your database before you run your SQL Scripts to rebuild all the database objects.

Why start with a clean database?

During the development process, it is all too easy to leave old objects hanging around in the database which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this will mean an end-user querying the views will get the wrong data and your reputation will take a nose dive as a result!

Starting with a clean, empty database and then building all your database objects from SQL scripts, using the technique outlined in my previous blog, means you know exactly what you have in your database. The database can then be repopulated using SSIS and bingo: you have a data mart "to go".

My universal "drop all" SQL Script

To ensure you start with a clean database run my universal "drop all" script which you can download from here: 100_drop_all.zip

By using the database catalog views, the script finds and drops all of the following database objects:

  1. Foreign key relationships
  2. Stored procedures
  3. Triggers
  4. Database triggers
  5. Views
  6. Tables
  7. Functions
  8. Partition schemes
  9. Partition functions
  10. XML Schema Collections
  11. Schemas
  12. Types
  13. Service broker services
  14. Service broker queues
  15. Service broker contracts
  16. Service broker message types
  17. SQLCLR assemblies

There are two optional sections in the script: drop users and drop roles. Use these at your peril, as you may well remove your own permissions!

Note that the script has a verbose mode which displays the SQL commands it is executing. This can be switched on by setting @debug=1.

Running this script against one of the system databases is certainly not recommended! So I advise you to keep a USE database statement at the top of the file.

Good luck and be careful!!

Posted by DrJohn with no comments

SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL Scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly.

You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL Scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views.

Using SQL Projects in SSMS

A great many developers fail to make use of SQL projects in SSMS (SQL Server Management Studio). To me they are an invaluable way of organizing your SQL scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in the file system because of the project names. The numbers in the folder and file names ensure that the projects and SQL scripts are concatenated together in the order in which they need to be executed. Hence the script filenames start with 100, 110 etc.


Concatenating SQL Scripts

To concatenate the SQL scripts into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL script files into a combined file. As the SQL scripts are in several folders, I simply use the TYPE command multiple times and append the output together.

If you are unfamiliar with batch files, you may not know that a single angled bracket (>) means write the output of the program to a file, while two angled brackets (>>) mean append the output of the program to an existing file. So the command line

DIR > filelist.txt

would write the output of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql

If, like me, you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only", which in turn will cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag.
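As an aside, if you would rather not maintain a batch file at all, the same concatenation is easy to script in C#. This is just an alternative sketch, not the batch approach described above; the folder layout and file names are assumptions based on the example.

using System;
using System.IO;
using System.Text;

class ConcatenateSqlScripts
{
    static void Main()
    {
        // assumed folder layout and output file name, based on the example above
        string rootFolder = @"C:\Projects\SB_DDS";
        string outputFile = Path.Combine(rootFolder, "SB_DDS.sql");

        // find every SQL script under the solution folder
        string[] files = Directory.GetFiles(rootFolder, "*.sql", SearchOption.AllDirectories);

        // the numeric prefixes (100_, 110_ etc.) mean a simple sort gives the execution order
        Array.Sort(files, StringComparer.OrdinalIgnoreCase);

        StringBuilder sb = new StringBuilder();
        foreach (string file in files)
        {
            // skip the output file itself if a previous run left it in the folder
            if (string.Equals(file, outputFile, StringComparison.OrdinalIgnoreCase)) continue;

            sb.AppendLine(File.ReadAllText(file));
            sb.AppendLine();   // guards against a missing new-line before the next script's GO
        }

        // clear the read-only flag set by source code control before overwriting
        if (File.Exists(outputFile))
            File.SetAttributes(outputFile, FileAttributes.Normal);

        File.WriteAllText(outputFile, sb.ToString());
        Console.WriteLine("Concatenated scripts written to {0}", outputFile);
    }
}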

Using SQLCmd to execute the concatenated file

Now that the SQL Scripts are all in one big file, we can execute the script against a database using SQLCmd using another batch file as shown below:

SQLCmd has numerous options, but the script shown above simply executes the SB_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log. So after executing the batch file, you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run SQLCmd to rebuild the database. This two-click operation allows you to quickly identify and fix errors in your entire database definition.


Using SSIS to execute the concatenated file

To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package and set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.

 

Tips and Tricks

  • Add a new-line at end of every file
    The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix to this is to ensure all your files have a new-line at the end.
  • Remove all USE database statements
    The SQLCmd command line identifies which database the script should be run against, so you should remove all USE database statements from your scripts - otherwise you may get unintentional side effects!!
  • Do the Create Database separately
    If you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.  

 

Posted by DrJohn with 2 comment(s)

SSIS: Deploying OLAP cubes using C# script tasks and AMO

As part of the continuing series on Building dynamic OLAP data marts on-the-fly, this blog entry will focus on how to automate the deployment of OLAP cubes using SQL Server Integration Services (SSIS) and Analysis Services Management Objects (AMO).

OLAP cube deployment is usually done using the Analysis Services Deployment Wizard. However, this option was dismissed for a variety of reasons. Firstly, invoking external processes from SSIS is fraught with problems as (a) it is not always possible to ensure SSIS waits for the external program to terminate; (b) we cannot log the outcome properly and (c) it is not always possible to control the server's configuration to ensure the executable works correctly. Another reason for rejecting the Deployment Wizard is that it requires the 'answers' to be written into four XML files. These XML files record the three things we need to change: the name of the server, the name of the OLAP database and the connection string to the data mart. Although it would be reasonably straightforward to change the content of the XML files programmatically, this adds another layer of complication and obscurity to the overall process.

When I first investigated the possibility of using C# to deploy a cube, I was surprised to find that there are no other blog entries about the topic. I can only assume everyone else is happy with the Deployment Wizard!

SSIS "forgets" assembly references

If you build your script task from scratch, you will have to remember how to overcome one of the major annoyances of working with SSIS script tasks: the forgetful nature of SSIS when it comes to assembly references. Basically, you can go through the process of adding an assembly reference using the Add Reference dialog, but when you close the script window, SSIS "forgets" the assembly reference so the script will not compile. After repeating the operation several times, you will find that SSIS only remembers the assembly reference when you specifically press the Save All icon in the script window. This problem is not unique to the AMO assembly and has certainly been a "feature" since SQL Server 2005, so I am not amazed it is still present in SQL Server 2008 R2!

Sample Package

So let's take a look at the sample SSIS package I have provided, which can be downloaded from here: DeployOlapCubeExample.zip. Below is a screenshot after a successful run.

Connection Managers

The package has three connection managers:

  • AsDatabaseDefinitionFile is a file connection manager pointing to the .asdatabase file you wish to deploy. Note that this can be found in the bin directory of your OLAP database project once you have clicked the "Build" button in Visual Studio.
  • TargetOlapServerCS is an Analysis Services connection manager which identifies both the deployment server and the target database name.
  • SourceDataMart is an OLEDB connection manager pointing to the data mart which is to act as the source of data for your cube. This will be used to replace the connection string found in your .asdatabase file.

Once you have configured the connection managers, the sample should run and deploy your OLAP database in a few seconds. Of course, in a production environment, these connection managers would be associated with package configurations or set at runtime.

When you run the sample, you should see that the script logs its activity to the output screen (see screenshot above). If you configure logging for the package, then these messages will also appear in your SSIS logging.

Sample Code Walkthrough

Next let's walk through the code. The first step is to parse the connection string provided by the TargetOlapServerCS connection manager and obtain the names of both the target OLAP server and the OLAP database. Note that the target database does not have to exist to be referenced in an AS connection manager, so I am using this as a convenient way to define both properties.
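The parsing itself is straightforward. One way to do it (not necessarily exactly how the sample code does it) is to let the standard DbConnectionStringBuilder do the splitting, assuming the usual Data Source and Initial Catalog keywords are present in the connection string:

// parse the connection string of the TargetOlapServerCS connection manager
// (assumes the standard Data Source and Initial Catalog keywords are present)
System.Data.Common.DbConnectionStringBuilder csBuilder = new System.Data.Common.DbConnectionStringBuilder();
csBuilder.ConnectionString = Dts.Connections["TargetOlapServerCS"].ConnectionString;

string olapServerName = csBuilder["Data Source"].ToString();
string olapDatabaseName = csBuilder["Initial Catalog"].ToString();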

We now connect to the server and check for the existence of the OLAP database. If it exists, we drop the database so we can re-deploy.

svr.Connect(olapServerName);
if (svr.Connected)
{
    // Drop the OLAP database if it already exists
    Database db = svr.Databases.FindByName(olapDatabaseName);
    if (db != null)
    {
        db.Drop();
    }
    // rest of script
}

Next we start building the XMLA command that will actually perform the deployment. Basically this is a small chunk of XML which we need to wrap around the large .asdatabase file generated by the Visual Studio build process.

// Start generating the main part of the XMLA command
XmlDocument xmlaCommand = new XmlDocument();
xmlaCommand.LoadXml(string.Format("<Batch Transaction='false' xmlns='http://schemas.microsoft.com/analysisservices/2003/engine'><Alter AllowCreate='true' ObjectExpansion='ExpandFull'><Object><DatabaseID>{0}</DatabaseID></Object><ObjectDefinition/></Alter></Batch>", olapDatabaseName));

 

Next we need to merge the two XML files, which we can do by simply setting the InnerXml property of the ObjectDefinition node as follows:

// load the OLAP database definition from the .asdatabase file identified by the connection manager
XmlDocument olapCubeDef = new XmlDocument();
olapCubeDef.Load(Dts.Connections["AsDatabaseDefinitionFile"].ConnectionString);

// merge the two XML files by obtaining a reference to the ObjectDefinition node
oaRootNode.InnerXml = olapCubeDef.InnerXml;
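The oaRootNode variable used above, and the nsManager namespace manager used by the XPath snippets that follow, are set up elsewhere in the script and are not shown in these extracts. A sketch of how they might be created (the exact code in the download may differ):

// map the ns1 prefix used in the XPath queries to the Analysis Services engine namespace
XmlNamespaceManager nsManager = new XmlNamespaceManager(xmlaCommand.NameTable);
nsManager.AddNamespace("ns1", "http://schemas.microsoft.com/analysisservices/2003/engine");

// obtain a reference to the empty ObjectDefinition node created by the LoadXml call above
XmlNode oaRootNode = xmlaCommand.SelectSingleNode("//ns1:ObjectDefinition", nsManager);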

 

One hurdle I had to overcome was removing detritus left in the .asdatabase file by the Visual Studio build. Through an iterative process, I found I needed to remove several nodes as they caused the deployment to fail; the XMLA error message read "Cannot set read-only node: CreatedTimestamp" or similar. Comparing the XMLA generated by the Deployment Wizard with that generated by my code, these read-only nodes were missing, so clearly I just needed to strip them out. This was easily achieved using XPath to find the relevant XML nodes, of which I show one example below:

foreach (XmlNode node in rootNode.SelectNodes("//ns1:CreatedTimestamp", nsManager))
{
    node.ParentNode.RemoveChild(node);
}

Now we need to change the database name in both the ID and Name nodes using code such as:

XmlNode databaseID = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:ID", nsManager);
if (databaseID != null)
{
    databaseID.InnerText = olapDatabaseName;
}
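The Name node is handled in exactly the same way; for completeness, here is a sketch mirroring the ID example above:

XmlNode databaseName = xmlaCommand.SelectSingleNode("//ns1:Database/ns1:Name", nsManager);
if (databaseName != null)
{
    databaseName.InnerText = olapDatabaseName;
}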

Finally we need to change the connection string to point at the relevant data mart. Again this is easily achieved using XPath to search for the relevant nodes and then replace the content of the node with the new name or connection string.

XmlNode connectionStringNode = xmlaCommand.SelectSingleNode("//ns1:DataSources/ns1:DataSource/ns1:ConnectionString", nsManager);
if (connectionStringNode != null)
{
    connectionStringNode.InnerText = Dts.Connections["SourceDataMart"].ConnectionString;
}

Finally we perform the deployment by executing the XMLA command and checking the returned XmlaResultCollection for errors before setting the Dts.TaskResult.

XmlaResultCollection oResults = svr.Execute(xmlaCommand.InnerXml);

// check for errors during deployment
foreach (Microsoft.AnalysisServices.XmlaResult oResult in oResults)
{
    foreach (Microsoft.AnalysisServices.XmlaMessage oMessage in oResult.Messages)
    {
        if (oMessage.GetType().Name == "XmlaError")
        {
            FireError(oMessage.Description);
            HadError = true;
        }
    }
}
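Neither FireError nor HadError is declared in the snippet above: HadError is simply a boolean field used at the end of the script to set Dts.TaskResult, and FireError is a small helper around the standard SSIS event interface. A sketch of what they might look like (the exact implementation in the download may differ):

private bool HadError = false;

private void FireError(string message)
{
    // surface the XMLA error in the SSIS log and on the package's progress tab
    Dts.Events.FireError(0, "Deploy OLAP cube", message, string.Empty, 0);
}

// ... and at the end of Main(), report the overall outcome back to SSIS
// (ScriptResults is the enum generated by the SSIS script task template)
Dts.TaskResult = HadError ? (int)ScriptResults.Failure : (int)ScriptResults.Success;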

If you are not familiar with XML programming, this may all seem a bit daunting, but persevere as the sample code is pretty short.

If you would like the script to process the OLAP database, simply uncomment the lines in the vicinity of the Process method. Of course, you can extend the script to perform your own custom processing and even to synchronize the database to a front-end server. Personally, I like to keep the deployment and processing separate as the code can become overly complex for support staff.

If you want to know more, come see my session at the forthcoming SQLBits conference.

Posted by DrJohn with 1 comment(s)

Building dynamic OLAP data marts on-the-fly

At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

What is an OLAP data mart?

In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube and which may be accompanied by the relational database upon which it was built. To clarify: the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

Why build OLAP data marts?

Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.

Extension Attributes: The Challenge

One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge as SSIS packages tend to break as soon as the database schema changes.

There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be the subject of a future blog entry.

What is involved in building an OLAP data mart?

There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes).

Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order. The only permanent database is the metadata database (shown in orange in the diagram below), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements.

 

So, in broad terms the steps required to fulfil a client order are as follows:

Step 1: Prepare metadata

  • Create a set of database names unique to the client's order
  • Modify all package connection strings to be used by SSIS to point to new databases and file locations.

Step 2: Create relational databases

  • Create the staging and data mart relational databases using dynamic SQL and set the database recovery model to SIMPLE, as we do not need the overhead of logging anything
  • Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

Step 3: Load staging database

  • Use SSIS to load all data files into the staging database in a parallel operation
  • Load extension files containing name/value pairs. These will provide client-specific attributes in the OLAP cube.

Step 4: Load data mart relational database

  • Load the data from staging into the data mart relational database, again in parallel where possible
  • Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables

Step 5: Load extension tables & attributes

  • Pivot the extension attributes from their native name/value pairs into proper relational tables
  • Add the extension attributes to the views used by the OLAP cube

Step 6: Deploy & Process OLAP cube

  • Deploy the OLAP database directly to the server using a C# script task in SSIS
  • Modify the connection string used by the OLAP cube to point to the data mart relational database
  • Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
  • Remove any standard attributes that are not required
  • Process the OLAP cube

Step 7: Backup and drop databases

  • Drop staging database as it is no longer required
  • Backup data mart relational and OLAP database and ship these to the client's infrastructure
  • Drop data mart relational and OLAP database from the build server
  • Mark order complete

Start processing the next order, ad infinitum.

So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.

Posted by DrJohn with 3 comment(s)

Microsoft Access: an elegant solution to Data Warehouse metadata

Before you send me flame mail telling me I must be off my rocker, I am not proposing that Microsoft Access store any data! We all know that its JET-based database engine has severe limitations in terms of speed and scalability. No, what I am proposing is that Access is used as a very quick way to deliver an easy-to-use front-end for your business users to edit metadata which is actually stored in a SQL Server back-end database.

Whether you have a classic data warehouse architecture, or some botched system that evolved over the years, you will no doubt have some metadata that is vital to your data warehouse yet did not come from any source system. Usually this data needs to be changed by business users on an infrequent basis.

In big data warehouse (DWH) implementations, where money is no object, you can take the time to build an entire web-based application to edit and manage metadata. This is fantastic for those with deep pockets and long delivery timescales, but realistically not every customer can afford such luxuries. On the other hand Microsoft Access provides a quick and inexpensive way to deliver a mechanism to edit metadata using Linked Tables. This blog provides a tutorial on how to create a front-end metadata editing tool with Microsoft Access and also provides a useful starting template for you to build your own metadata editor. So here goes.

Target Architecture

Before showing how to build your first front-end using my template, let's talk about the technical architecture.

 

The best approach is to store your metadata in a separate database. This offers better control over security, as users can be given full permissions on the metadata database but limited rights to the other databases in the data warehouse.  Also, the metadata database can use the full recovery model so it can be restored to any point in time; something that is not always applied to a staging database. However, your architecture may be a sub-set of this. Needless to say, the template I provide will prove useful in any situation.

Example front-end

The following screenshots show exactly what can be achieved using Microsoft Access. Now these are actually very crude compared to what can be achieved by proper MS-Access developers, but they are quick and easy to build for those with limited time and skills. Also, business users find them easy to understand.

My example screens allow the business user to edit a Region hierarchy which becomes a dimension in the OLAP cube. The region hierarchy is roughly geographical in nature, but it is business focused rather than tied to geo-political boundaries and is therefore not available in any source system; hence the need to hold this data in the data warehouse. The hierarchy has three levels: the top level has Regions, the mid-tier has Locations and the bottom level holds Business Units. Clearly we could have more levels, but this is sufficient for my example.

Being able to edit this kind of structure delivers a lot of power to your business users, as they are now able to control how all reports are presented to end-users and how numbers roll up in the OLAP cube. Nothing to be sneezed at!

The tables in the metadata database that support the region hierarchy are normalized for editing and de-normalized by a view when being sucked into the DWH for use in the OLAP cube. So our MS-Access database has to edit the following SQL Server tables:


Main Menu

The main menu is the first thing the user sees. This is their jumping off point for editing the metadata. Of course the screen must be in their language, not techno-babble!


Edit Regions

Clicking "Regions" will show the following screen.


Although crude, this allows the user to (a) change the name of a region and (b) add new regions. Foreign key relationships or triggers in the underlying SQL Server database should stop users deleting rows unless you want them to.

In a classic DWH implementation, the user would indicate that a region is defunct by changing its status. In my simple example, they do so by deleting the row. Both approaches can be implemented using MS-Access; it is just a matter of design.

Edit Locations

The Edit Locations form is more sophisticated as it provides drop-down lists for the user to select the region (i.e. from what the user entered on the Edit Regions form). This is because the metadata tables are normalized for editing and de-normalized by a view when being sucked into the OLAP cube.


Edit Location to Business Unit Mapping

The final screen allows the user to map their business data to the bottom level of the regions hierarchy. For my client, this screen related locations to the identifiers used by the finance department to classify all revenue and costs.


The easy way to create Linked Tables

In the template I provide a very simple mechanism to help you create your linked tables as well as switch between servers (i.e. between test and production database servers). However, if you are new to MS-Access you will not yet appreciate the pain of creating and managing them manually, so let's create a linked table the standard way first.

To create a linked table through MS-Access, you go to the External Data tab and click More and select ODBC Connection as shown below.


You next select Link to a data source by creating a linked table and click OK. This will display the Select Data Source dialog. Now select your data source or create a new ODBC connection file. Be careful to select the correct SQL Server Client Library for your version of SQL Server. For example:


Once you have created your ODBC connection, MS-Access will show you a list of all the tables in your SQL Server database where you can select multiple tables to link.


Click OK and the tables are linked. Well, that seemed easy enough, so where is the pain? The pain comes when you want to switch servers. The connection string for each table is held in the metadata associated with the table. Firstly, it is far too easy to connect different tables to different servers during development (e.g. some to localhost, others to a dev server etc.). Secondly, the Linked Table Manager provided in MS-Access is not the most intuitive UI for changing the database links. Indeed, it does not even tell you on which server the tables reside! For that, you need to hover over the linked table in the Navigation Pane as shown below. Not too convenient!


The other thing about these newly linked tables is that the schema name is displayed as part of the table name, which means the first thing you will do is edit the name displayed in MS-Access. Again, this is a pain if you have to drop and re-create tables when switching between servers.

Change server and create linked tables all in one step

My template provides a far simpler mechanism to simultaneously change server and re-create all the linked tables in one step. You simply edit the SourceServers table and fill in names and descriptions of all your servers.


You then edit the LinkedTables table and fill in the names of the tables you want to link along with the name of the database in which they are stored.


Then select Change Server from the main menu and a list of servers will appear. Simply select your server and click OK. The code will delete all existing linked tables and then re-create them based on the information provided in the LinkedTables table. So changing between development and production servers is now extremely easy!

Basic Editing

The top level of the hierarchy is Region, which is held as a simple list of names with an identity column. We only need a basic editor for the user to edit this list, so we will use the default datasheet view provided by MS-Access. To invoke the basic editor, simply double-click the Region table and you will see the datasheet view (displayed below). Remember, if the left-hand Navigation Pane is not visible, simply press F11.


 

Editor with drop-down selections

Our Location table has a foreign key relationship with the Region table. Rather than have our users remember a lot of meaningless IDs, we want to provide them with a drop-down menu. To do this, click on the Location table in the Navigation Pane, then open the Create tab and select the Form Wizard as shown below.


Next you will see the Form Wizard's field selection dialog. Select all fields.


On the next Layout selection dialog, select Tabular.


On the Style dialog, pick your preferred style. I recommend sticking to the Office theme if you are unsure. On the final dialog, name the new form and click Finish.

You will now see the basic Location editing form ready for you to enter data. It looks much better than the basic form we created for Regions, but has the problem that the user has to remember the RegionID in order to make an entry.


We get around this by changing the RegionID field to use a combo box. Switch into design mode using the View option on the Home tab and select the RegionID field. Right-click and select change to combo box.


Now change the Row Source property by clicking the three dots in the Property Sheet (press F4 if this is not visible).


Use the query designer to create the query:

SELECT Region.RegionID, Region.RegionName FROM Region;

This SQL should be displayed in the Row Source property once you exit the designer.

Finally change the Column Count property to 2 and set the column widths to 0cm;4cm


This has the effect of hiding the RegionID from the user as shown below.


Edit Location Mapping table

The LocationMapping table maps business unit IDs from the source finance system to locations. So here the drop down list we provide to the user has to come from the source finance system. As this is a data warehouse, we have already extracted and cleaned that data with SSIS and written it into the staging database. So we simply need to create a linked table to the relevant table in the staging database. Again this is easily done using the LinkedTables table as this has a SourceDatabase column as shown below. Of course you will need to click the Change Server button to create the new linked table.


Using the same technique as we did with the RegionID column in the Location form, we can create a form containing two drop-down lists as shown below.


Creating the Main Menu

When the user first opens the database, you want to display a menu of options so they can find things easily. To create a new form, simply choose Form on the Create tab. Then switch to design mode and expand the detail section to reveal a white cross-hatched area. As you are in design mode, you will see the Form Design Tools tabs on the ribbon. Select the Design tab and click Button. Now highlight an area of the form and draw a button. When a dialog pops up, simply click Cancel as we do not want any of the standard options. Open the Property Sheet and give the button a sensible name on the Other tab (e.g. cmdRegion). On the Event tab, select [Event Procedure] in the On Click event and click the three-dots box to open the Visual Basic editor.


To open the region table directly for editing (i.e. because we do not need a special form) we use the OpenTable command as follows:

DoCmd.OpenTable "Region", acViewNormal, acEdit

We now repeat the steps to create buttons for our other forms. However, as we have created special forms to edit these tables we need to use the OpenForm command as follows:

DoCmd.OpenForm "Location", acNormal, , , acFormEdit

For some obscure reason MS-Access still uses macros to achieve certain tasks. In order to have the main menu appear when the user first opens the database, you need to create a macro called AutoExec which does this.


User Permissions

If you have a separate metadata database, then your users only need read-write permissions to that database. If the MS-Access database uses other tables to populate drop-down lists, then the user will need read-only access to those tables. This can be managed on a table by table basis or by simply adding the user to a database role with db_datareader and/or db_datawriter roles.

Gotcha: Tables must have a Primary Key

MS-Access can only edit tables that have primary keys defined. If no primary key is defined, then you will find that your form is locked in read-only mode with no indication whatsoever as to why you cannot edit the data. For me this was a painful learning curve, as most of my tables had primary keys but the odd one or two did not. Given the lack of feedback from MS-Access as to why the forms were locked in read-only mode, it was quite some time before I realised what caused the problem.

Deployment

Clearly your business users need MS-Access installed on their desktops in order to use your new metadata editor. I would recommend developing the database using the same version of MS-Access as your users have installed. Although my screenshots are of MS-Access 2007, this application can be built using MS-Access 2003 as well.

Although convenient, I would not recommend using a file share to deploy the database, as I have found that slow VPN connections can corrupt the MS-Access database, making it unusable for everyone. Instead, have the users copy the database to their desktop or install it on their PC in a standard location. You could even write an installer, although that is rather over the top as a simple batch file would do!

The other thing to be careful about is the version of SQL Client libraries on the user's desktop. I would recommend using what is already available rather than requiring a separate install. Clearly testing on several machines will highlight any issues in this area. If you need to change the version of SQL Client used by my template, simply edit the ODBC_DRIVER constant in the VB module called Common.

Replication of metadata to the staging database

If you are not happy using cross-database joins in your stored procedures, you may well need to replicate the metadata to the staging database for use in the data warehouse. This can be done in a number of ways using SSIS or SQL replication. However, this is a topic outside the scope of this blog entry.

MS-Access Template

The MS-Access template and the SQL scripts to create the demo region hierarchy are available for download here: Metadata_Editor.zip.

Good luck!

Dr. John

 

Posted by DrJohn with 3 comment(s)

SSAS: Microsoft release fix for “Kerberos killing MDX” issue

In a previous post I reported how small MDX queries worked fine when client and server were Windows Vista/Windows Server 2008, but large MDX queries died due to the connection being forcibly closed by the transport layer. It turned out that this issue was due to a bug in Kerberos.dll on these AES-aware operating systems (AES = Advanced Encryption Standard).

Microsoft have now released a hotfix for this issue.  Read John Desch's blog post for details.

Note that this bug is not present in the Windows Server 2008 R2 or Windows 7 release.

 

Posted by DrJohn with 1 comment(s)

SSAS: Kerberos kills ‘large’ MDX queries on Windows Server 2008

Kerberos is nasty! I have had several encounters with Kerberos and none of them have been pleasant. I try to avoid Kerberos like I try to avoid the dentist. However, some projects demand that Kerberos authentication be used to cater for the 'third hop', and then the pain begins.

The symptoms of my latest encounter were somewhat odd. Local MDX queries worked fine, but queries performed over the network failed with the message:

Executing the query ...
The connection either timed out or was lost.
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
An existing connection was forcibly closed by the remote host
Execution complete

Of course the error message does not tell us anything useful. Just that someone does not like us today!

At first we thought there was a fault with the MDX query. But no, the MDX query ran fine against the local development server and also ran fine against the production server when run in SQL Server Management Studio in local mode (i.e. not across the network). But as soon as we ran the query across the network, we got the error.

Suspecting a hardware fault on the brand new server, the IT support team checked out the memory and changed the network cards. No change; the queries still failed.

Anyone familiar with SQL Server Analysis Services (SSAS) knows that it uses a Windows-based security model (i.e. the user must have a Windows account in order to connect to the OLAP database). Unfortunately, this becomes a problem when the user's credentials need to be passed from machine to machine before finally arriving at the OLAP server. Basically, the NTLM protocol does not support delegation across more than two hops.

To cater for the third 'hop' you have to co-opt the help of the 'dark side': i.e. Kerberos and SPNs. Servers have to be 'registered for delegation' in Active Directory and SPNs have to be set up to configure Kerberos. This is a black art and my colleague and I have the battle scars to prove it!

Our configuration was not unusual. Users connected to a web server running SharePoint 2007 and SQL Server Reporting Services (SSRS). SSRS then queried the SQL Server Analysis Services (SSAS) database which resided on the back-end 'data server'. The complication occurs when users connect over the internet: they are authenticated by Microsoft ISA Server, which passes their credentials on to the web server, which in turn passes them to SSRS, which in turn passes them on to SSAS.

Now, this infrastructure was working fine in the current SQL Server 2005 / Windows Server 2003 implementation. However, the new SQL Server 2008 / Windows Server 2008 implementation was having trouble with a single hop, so we were in big trouble!

Suspecting a corrupt installation of SSAS, we did a re-installation of the whole SQL Server 2008 suite, but the MDX queries still failed. So next we did a fresh install of the Windows Server 2008 operating system along with SQL Server 2008, but the MDX queries still failed!

At this point we still believed that all MDX queries failed. But then I observed that the filters on the SSRS reports were being populated, whereas running the main report query generated the same ugly error message.

Being convinced my MDX was at fault, I started a trace of activity on the Analysis Services service, which showed that small MDX queries worked fine (all the relevant log entries were present) whereas large MDX queries failed and the only log entry was an "Audit Logout notification" event.

I started to play with the MDX and found that if I padded out a very simple query with comments, it started to fail around the 1096-character mark, i.e. simply adding /******/ until the whole statement was over 1kB caused the query to fail! Clearly something major was going wrong!

Still believing it to be a hardware or software implementation error, we proceeded to install SQL Server 2008 OLAP database on another identical box. No problem! MDX of any size worked fine. So it must be a hardware or software implementation fault????

After much scratching of his head, my good friend and colleague Eric Moutell eventually announced that the only difference between the two machines was that one had been set up with SPN entries to allow Kerberos authentication. So he deleted the entries, rebooted the world, rebooted the world again, and finally MDX queries of any size worked on the original box. Ah ha! We were getting somewhere. So now we knew that nasty old Kerberos was having a laugh at our expense!

We got in contact with Microsoft support and after sending detailed information about the bug and several gigabytes of server logs, they eventually admitted we had found a bug in Windows Server 2008 which they have agreed to fix!

For those of you experiencing a similar problem, I reproduce the email from Microsoft support below, which you may find useful. In our environment, the server was running 64-bit Windows Server 2008 with Microsoft SQL Server 2008 and the latest cumulative update patch CU3. The client machines we tried were either 64-bit or 32-bit Windows Server 2008 running Microsoft SQL Server 2008 CU3.

From: Camino De Vicente Rodriguez
Sent: 16 March 2009 13:58
Subject: RE: Your case with Microsoft (SRZ090205000308)

Escalation engineers have confirmed that is a problem on Kerberos.dll related to encryption with AES on Windows 2008 and Vista. They have reported to the Product Group and, after analyzing it, they have accepted to fix it. The estimated time for the fix is May (it is required two month cycle hotfix test pass).

I am sending you the Workarounds if you cannot wait until May:

  • If Kerberos authentication is a requirement, run Analysis Services on a Windows 2003 Server, since Windows 2003 Server is not AES aware.
  • Use Windows 2003 Server, Windows XP, or Windows 2000 Server to run client applications that will be connecting to the Analysis Server configured for Kerberos authentication and running on Windows 2008. Since these operating systems are not AES aware, that will avoid use of AES for encryption/decryption.
  • Avoid configuring the Analysis Server to use Kerberos Authentication, since this will result in the AES not being used for encryption/decryption.
  • Add ";SSPI=NTLM" or ";Integrated Security=SSPI" to the connection string, which will force use of NTLM and avoid use of AES for encryption/decryption.

Best regards,
Camino de Vicente
EMEA GTSC Development Support Engineer
Posted by DrJohn with 10 comment(s)