April 2008 - Posts - Sparks from the Anvil

SSAS Stored Procedures - connecting to the transactional data source

There are often circumstances in which an SSAS stored procedure needs to connect to the transactional database to perform a query. This is certainly the case in a near-real-time OLAP solution where the cube sits directly on top of the application database and dynamic dimension security is implemented by a stored proc: the SSAS stored procedure has to query the transactional database in order to obtain the user's authorisation. Clearly the SSAS database already has a data source, and it uses this connection to process the cube. So how do we get hold of that connection string?

Well, the short code snippet below provides the solution. The code simply connects back to the current database using AMO and obtains the connection string from the data source object. It then removes the redundant "Provider=SQLNCLI.1;" prefix before returning a valid SQL connection string that can be passed to SqlConnection().

Of course, you must provide your stored proc assembly with the "Unrestricted" permission set and set its impersonation mode to "Service Account" for the connection string to work.

using AMO = Microsoft.AnalysisServices;
using Microsoft.AnalysisServices.AdomdServer;

...

/// <summary>
/// Obtains the SqlConnection string used by the OLAP database
/// </summary>
/// <returns></returns>
public static string GetSqlConnectionString()
{
    // connect to the current instance of Analysis Services and return the transactional data source
    AMO.Server svr = new AMO.Server();
    svr.Connect("Data Source=" + Context.CurrentServerID + ";Initial Catalog=" + Context.CurrentDatabaseName);

    // get the connection to the transactional database from the OLAP connection manager
    string connStr = svr.Databases.GetByName(Context.CurrentDatabaseName).DataSources[0].ConnectionString.Replace("Provider=SQLNCLI.1;", "");
    svr.Disconnect();

    return (connStr);
}
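
To show how this might be used in practice, here is a minimal sketch of a dynamic dimension security sproc that calls GetSqlConnectionString() and builds an MDX set of the members the current user is allowed to see. Note that the method name GetAllowedRegionSet, the security table and columns (dbo.UserRegion, UserName, RegionKey) and the [Region].[Region] hierarchy are purely illustrative assumptions, not part of the original solution - substitute your own security tables and dimension.

using System.Data.SqlClient;
using System.Text;

...

/// <summary>
/// Builds an MDX set of the members the given user is permitted to see.
/// The security table and the [Region] dimension are illustrative only.
/// </summary>
public static string GetAllowedRegionSet(string userName)
{
    StringBuilder mdxSet = new StringBuilder("{");

    // connect back to the transactional database using the connection string obtained from the OLAP data source
    using (SqlConnection conn = new SqlConnection(GetSqlConnectionString()))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT RegionKey FROM dbo.UserRegion WHERE UserName = @user", conn))
    {
        cmd.Parameters.AddWithValue("@user", userName);
        conn.Open();
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                if (mdxSet.Length > 1)
                    mdxSet.Append(",");

                // reference each permitted member by key, e.g. [Region].[Region].&[42]
                mdxSet.Append("[Region].[Region].&[" + rdr[0] + "]");
            }
        }
    }

    mdxSet.Append("}");
    return mdxSet.ToString();
}

The role's Allowed member set expression would then call the sproc with something like StrToSet(MyAssembly.GetAllowedRegionSet(Username())), where MyAssembly is whatever name you registered the assembly under.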

Posted by DrJohn with 1 comment(s)

The FACT is that it is not so DIM after all

I have always looked upon the Fact and Dim prefixes applied to the tables within the AdventureWorksDW database with derision. "Why," I asked myself, "would anyone feel the need to use such prefixes? Clearly this is BI for dummies." Given that most BI projects use a nice, clean, dedicated data warehouse database in which all the tables are imported into the cube, perhaps my derision was justified.

However, in my current project I have been building a near-real-time OLAP solution directly on top of an application database. Of course, to isolate the cube from potential schema changes, I created an abstraction layer in the form of a set of views. However, as the project has progressed and the large team of developers has created more and more database objects, it has become increasingly difficult to find the views created for the cube among the debris.

I recently had to rebuild the entire cube from scratch due to a bug caused by reverting to a previous version in source control. Of course, I took the opportunity to consolidate all my T-SQL code into a few files and adopt a consistent naming convention for all my views. So what naming convention did I adopt? Well, Fact and Dim, of course! Why? Because anyone following on from my work who is half-familiar with the BI sample databases will immediately recognise the relevance.

I subsequently discovered the big benefit of adopting this convention: it makes creating a Data Source View (DSV) much, much easier! A quick search for all objects containing 'Fact' and 'Dim' soon had my new DSV populated with all the relevant objects.

So the Fact is that this practice is not so Dim after all!

Posted by DrJohn with 2 comment(s)