Architects and Architectures
12 March 13 06:55 PM | GavinPayneUK | 1 comment(s)

In the last few months I’ve been presenting a new session at community events, and it’s had a more positive reception than I expected.

It’s had several titles, but currently it’s “How to be a more successful architect”. I originally created it as an overview of my Microsoft Certified Architect journey, although I’ve taken it on a different course since then.

Hopefully of interest to everyone

The session is hopefully interesting to everyone in the IT industry, not just a specific type of SQL Server professional.  Its foundation is the idea that regardless of your job title, you’re probably informally, or formally, performing the role of an architect at some point in your working day.

It could be that you have to clarify someone’s requirements so you can understand how whatever you create for them demonstrates its value the most.  Or you might have to justify to someone, or to yourself, why the solution design you’ve come up with is the right one, because you’ve taken a structured path to designing it.


Presentation slides

The slides from my most recent delivery of the session at SQLSaturday #194 in Exeter are available here.  Like all architectures it’s evolving, so I’m sure there’ll be newer versions in the future.


Presentation links

Within the presentation, and during my delivery of it, I mention numerous online resources that I recommend reviewing as you develop your IT architect skills.


Architect Certifications

Global IT Architect Association’s CITA-P here.

The Open Group’s Certified Architect here.

Microsoft’s Certified Architect here.

Architect Resources

Microsoft’s The Architect Journal here.

Perspectives Based Architecture for requirements gathering here.

ATAM for prioritising requirements here.

Waterfall vs. Agile methodologies here.

Influencing Without Authority book here.

MSDN Patterns and Practices library here.

Microsoft Application Architecture Guide here.

Solution Lifecycle Management

ITIL Application Life Cycle Process here.


Hopefully something somewhere in this post or the web links has caught your imagination; if so, please share what it was in the comments section.

Virtualisation limitations still exist, even in 2013
06 February 13 11:43 PM | GavinPayneUK | with no comments


While advances in technology are seemingly continuous, and the limitations of yesterday’s systems are now the minimum requirements of tomorrow’s, care should still be taken to make sure every capability scales as much as you need it to.

Virtualisation of yesterday

In the last few years, the capabilities of virtualisation have increased dramatically.  It was only a few years ago that virtual servers were limited to 4 vCPUs, had only average performance storage, and never had enough memory.  That explains why for a long time it was only ever our “small footprint” servers that we virtualised.

Virtualisation in 2013

Today, the enterprise virtualisation platform vendors deliver capabilities that we only dreamt of a few years ago; 32 vCPUs, 1TB of memory, and high availability by default, for every virtual server.

That’s a lot of engine power we can put behind our virtual servers, allowing us these days to virtualise most of our data centres if we choose to.

Watch for the subtle limitations

However, despite these great headline capabilities, there are still inherent limits buried in the small print.  I found one of these during a recent cluster build, and that prompted me to write this article.

Our project was using VMware 5 to host some fairly large nodes of a SQL Server failover cluster instance.  Each virtual server had 16 vCPUs and 128GB of memory, something VMware can adequately provide, but it was the storage configuration where a limitation took us by surprise.

The SQL Server cluster design we initially created required upwards of 60 mount points to be presented to each of the cluster nodes; however, we then discovered that VMware has a limit of 45 clustered/shared SAN volumes (raw device mappings) that can be presented to a virtual server.

What it quickly demonstrates is that while our focus was initially on whether or not the virtual server would run fast enough, the limitation that actually mattered to us related to storage provisioning.

Published configuration limitations

The links below provide details from VMware and Microsoft about the complete limitations of their virtualisation software, and are worth reading if you’re planning to deploy virtual servers with more than the default resources.

VMware 5 Configuration Maximums

Windows Server 2012 Hyper-V Scalability

Columnstore Indexes Overview
02 January 13 10:10 AM | GavinPayneUK | with no comments

In December 2012, I presented about “Accelerating the Data Warehouse Using xVelocity Columnstore Indexes” at a SQLSanta event in London.

The purpose of my presentation, and its demos, was to show how this type of database engine index, new in SQL Server 2012, significantly improves the performance of data warehouse style queries.

Columnstore indexes, and their complementing Batch-Mode Query Processing feature, are used transparently by SQL Server once they’re created, so the benefit for SQL Server professionals is that there’s no need to change existing data warehouse applications or queries to benefit from these performance improvements.


The presentation’s slides are available here.

Session Questions

Below are some of the answers to questions that I remember giving during the session:

  • Even though the levels of internal compression of data within a Columnstore index are affected by the cardinality of the index’s data, the performance benefits of Columnstore indexes are so great compared to a non-clustered index that your data’s cardinality is not likely to influence whether you use Columnstore indexes or not;
  • You can use Columnstore indexes with AlwaysOn Availability Groups;
  • The SQL Server query optimiser will automatically consider using a Columnstore index and Batch-Mode Query Processing itself; you don’t have to re-write queries to start using them;
  • Batch-Mode Query Processing only works with Columnstore indexes;
  • The column order, and number of Columnstore index keys, do not affect the performance of a query that uses the Columnstore index.
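
One way to see the first and third points above for yourself is to check whether a query touched a columnstore index at all; a minimal sketch against the index usage DMV (run in the relevant database after executing your query):

```sql
-- Index usage counts since the last service restart; rows appearing here
-- for type 6 indexes show the optimiser chose the columnstore index
-- without any query re-writes
select object_name(i.object_id) as table_name,
       i.name as index_name,
       s.user_seeks, s.user_scans
from sys.indexes as i
join sys.dm_db_index_usage_stats as s
    on s.object_id = i.object_id
   and s.index_id = i.index_id
where i.type = 6;  -- 6 = nonclustered columnstore
```

You can also open the query’s actual execution plan and check the operator’s execution mode property to confirm batch mode was used.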
What’s coming up
06 December 12 10:47 PM | GavinPayneUK | with no comments

In the last couple of months my community activities list has had things leave it and new things join it, so I thought I’d share my future plans, and promote some of the events I’m supporting.

Microsoft Certified Architect : SQL Server – Giving back

Preparing for my MCA Board was the hardest, yet in hindsight the most rewarding and interesting, thing I’ve ever done.  The subjects it covers still interest me to the extent that I’m now contributing to the MCA programme itself, allowing the next people through the certification’s process to benefit from someone else’s experiences and inputs.

For those of you with the pre-requisite SQL Server MCM certification and who are interested in taking the next level of certification, there are more details here:

Additionally, feel free to email me if you have any questions about my experience of the programme.

Accelerating Data Warehouses with Columnstore Indexes - SQL Santa – 14th December

The fact that Columnstore indexes work in a totally different way to how we’re used to SQL Server working makes me curious about them.  I’m sharing what that curiosity taught me, along with the outcomes from a customer proof of concept test, in an hour’s presentation at a London event in mid-December.  SQLSanta is a day-long event at Microsoft’s London venue, and has a range of Microsoft Product Marketing, and industry professionals, presenting about contemporary SQL Server industry topics.

Although the event is now fully booked, you can find more details about the sessions here, as I suspect some will be presented at future community events; I’ve certainly submitted mine for some already.

Mission Critical Data Platforms on Windows Server 2012 Pre-con – SQL Saturday 194 – Exeter – 8th March

Christian and I are hosting this one-day workshop with the agenda of unleashing the new capabilities of Windows Server 2012 to make SQL Server deployments even more highly available in even more scenarios.  Rather than reviewing the introductory level aspects of Failover Clustering or Availability Groups, the day looks at how previous platform limitations can now be overcome using native features in the latest Windows operating system.  These include subjects such as the merging of high-availability and disaster recovery design patterns, the use of new storage formats such as SMB file shares, and using Hyper-V virtual machines in HA and DR configurations.

You can read more about our pre-con, and book, here.

Coeo sessions at SQLSaturday Cambridge
12 September 12 12:52 PM | GavinPayneUK | with no comments

This weekend saw the UK’s first SQLSaturday, organised by Mark Broadbent and held in Cambridge, and it was without doubt a huge success.

Coeo were lucky to have four of us present a staggering five sessions on the day; so thank you to the SQLSaturday team for selecting our sessions, and to those who chose to attend them.

I’ve put a link to the presentation slides for all of our sessions below:

  • I want to be a better architect - Gavin Payne
    Slides here
  • NUMA internals of SQL Server 2012 – Gavin Payne
    Slides here

  • Deploying a highly available BI environment – Chris Testa-O'Neill
    Slides here

  • SQL Server 2012 certification – Chris Testa-O'Neill and Pete Harris Slides here

  • Troubleshooting SQL Server with PowerShell – James Boother
    Slides here
This is Columnstore – Part 1
23 July 12 12:23 AM | GavinPayneUK | with no comments

This weekend I finally spent some time getting up to speed on the new xVelocity Columnstore index feature in SQL Server 2012 Enterprise Edition.

This is part one of three blog articles about my initial experiences of them, but more interestingly, a quick look under the hood at how they work.

What are columnstore indexes?

In summary, the Columnstore indexes feature is both a type of non-clustered index AND a new query processing mode built into the SQL Server 2012 database engine. Their aim is to massively increase (approx. 10x) the performance of the types of queries used within a data warehouse, or in simpler terms – querying very large tables as quickly as possible.

There are many explanations of what columnstore indexes actually are, how they’re different to row-based indexes, and how SQL Server’s implementation works so I’ll point you to perhaps the best links for these topics:

  • How do Column Stores work? - Thomas Kejser here
  • Inside SQL Server’s implementation – Conor Cunningham here

How would I use them?

Hopefully, the two links above will give you an idea as to what the feature was designed to do and the circumstances in which you might use them. The next section focuses on how you would use them and for this series of blog articles I’m going to use the two simple tables shown below:


Having created the two tables and given them regular clustered indexes, I used random-number-based values to populate the fact table with 30 million rows for 10 products:

create table tblFact (rowID bigint identity(1,1), custID int, prodID int, Sale money);

create table DimProd (prodID int,prodName varchar(200));

create unique clustered index idx_cls_tblFact on tblFact(rowID);

create unique clustered index idx_cls_Prod on DimProd(prodID);

insert into DimProd (prodID, prodName)
values (1, 'Car'), (2, 'Van'), (3, 'Truck'),
(4, 'Taxi'), (5, 'Horse'), (6, 'Helicopter'),
(7, 'BizJet'),(8, 'JumboJet'),(9, 'Ferry'),
(10, 'Hovercraft')

insert into tblFact
(custID, prodID, Sale)
-- prodID must fall in the 1 to 10 range to match the rows in DimProd
(select convert(int,rand()*100), convert(int,rand()*10)+1, convert(money,rand()*10000))
go 1000000

Followed by:

insert into tblFact (custID, prodID, Sale)
(select top 500000 custID, prodID, Sale from tblFact)
go 58

This will give you 30 million rows for about 1GB of disk space.
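
As a sanity check on the arithmetic, the two batches insert 1,000,000 single rows and then 58 batches of 500,000 copied rows:

```sql
-- 1,000,000 + (58 x 500,000) = 30,000,000
select count_big(*) as fact_rows from tblFact;
```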

I then ran a simple query against the table that’s designed to simulate a simple star schema and the type of query a data warehouse might use:

select DimProd.prodName, sum(tblFact.Sale) as totalSales
from tblFact, DimProd
where tblFact.prodID = DimProd.prodID
group by DimProd.prodName

I could have added supporting non-clustered indexes, but traditional index design isn’t the point of this article.  With the 30 million rows I had in my fact table, the query takes 4 seconds to run.

Adding a columnstore index

The next step is to actually add a columnstore index, this is quite a big step as:

  • You can only have 1 columnstore index per table
  • Creating the columnstore index makes the table read-only

The following command adds a columnstore index:

create columnstore index idx_cs_tblFact on tblFact(custID, prodID, Sale)

You can tell from the time it takes to create, on my SSD drive about 50 seconds, that there’s a lot of work going on to create it; for comparison, creating the equivalent regular non-clustered index takes about half the time.
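
Because creating the columnstore index makes tblFact read-only, loading more data later means removing the index first; a minimal sketch of one approach (disabling and then rebuilding, which repeats the expensive build):

```sql
-- Disabling the columnstore index makes the table writable again
alter index idx_cs_tblFact on tblFact disable;

insert into tblFact (custID, prodID, Sale)
values (1, 1, 9.99);

-- Rebuilding re-creates the index and returns the table to read-only
alter index idx_cs_tblFact on tblFact rebuild;
```

For large regular loads, partition switching is usually a better fit than rebuilding the whole index, but that’s beyond the scope of this introduction.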

If I then execute the query used above again, the query optimizer will use the columnstore index AND the new batch query processing mode that Conor mentioned, and the query will now execute in less than a second.


Although the performance improvement in our example might not seem great, 4 seconds down to less than 1 second, in further tests I increased the fact table from 30 million to 90 million rows, yet didn’t see a 3x increase in query runtime.


Finally, for this article, you can begin to see how columnstore indexes are both different and similar to normal indexes by looking at their properties in SSMS and sys.indexes with this query:

select * from sys.indexes where type = 6

Books Online lists index type 6 as a “Nonclustered columnstore index”, while index type 5, interestingly, is “Clustered xVelocity memory optimized columnstore index (Reserved for future use.)”  This suggests there might be quite a lot more than just updatable columnstore indexes on the roadmap.

First and next articles

This first article was a quick introduction to creating and using a columnstore index, however, it should be enough to allow you to create your own demo environment and begin writing queries to see the performance benefits for yourself.

In the next two articles I’ll show you how columnstore indexes are physically stored within the database and when the query optimizer does and doesn’t decide to use them.

Of course, this is all just my first look at columnstore indexes, if you know something I’ve not mentioned, please feel free to comment on the post!

Efficient, partial, point-in-time database restores
03 June 12 06:52 PM | GavinPayneUK | with no comments

This article is about a situation that many of us could describe the theoretical approach to solving, but would then struggle to understand why SQL Server wasn’t following that theoretical approach when we tried it for real.

Earlier this week, I had a client ask about the best way to perform:

  • a partial database restore, 1 of 1300 filegroups;
  • to a specific point in time;
  • using a differential backup, and therefore;
  • without restoring each transaction log backup taken since the full backup.

    The last point might sound unnecessary because you’re restoring a differential backup, but the restore script originally being used meant SQL Server still wanted every transaction log since the full backup restored.  This article explains the background to the situation, the successful restore commands, and identifies what was causing every transaction log to need to be restored.

    For this article, let’s imagine we have a database of the following configuration:


    And, let’s imagine it has the following backup schedule:



    Then, let’s assume we want to restore fgTwo to the point in time labelled above in order to recover data from a table it stores.  A partial database restore also requires the primary filegroup to be restored, so SQL Server will restore it automatically, making the restored database look like the following:


    You’d expect the path to restore fgTwo to look like the path on the left of the diagram below, but for some reason the client was being forced to perform the restore steps on the right, despite their T-SQL restore commands appearing to follow the syntax in Books Online.




    To show how to restore the filegroup using the path on the left, and highlight what was causing the path on the right to be required, I’ll use a series of T-SQL restore commands.

    This first command restores the parts of the database we’re interested in from the full backup file, labelled “A” in our timeline, and the two most important parameters, FILEGROUP and PARTIAL, are what turn a complete database restore into a partial filegroup restore.

    restore database partial2
    filegroup='primary', filegroup='fgTwo'
    from disk = 'c:\parttest\partial1_FULL_A.bak'
    with norecovery, partial, replace,
    move 'primary' to 'c:\parttest\primary.mdf',
    move 'primary_log' to 'c:\parttest\primary_log.ldf',
    move 'fTwo' to 'c:\parttest\fTwo.ndf'

    The next command restores from the differential backup, labelled “B” in our timeline, and means we don’t need to restore transaction logs A1 and A2.  In the client’s real-world scenario they actually had 120+ transaction logs between the full and differential backups, hence the desire not to have to restore each of them.

    It was this command that was the cause of them having to restore every transaction log since the full backup: the client’s original script repeated the FILEGROUP parameters here.  Telling SQL Server again which filegroups to restore was forcing it to need to restore the entire log chain since the full backup, so they must be omitted from the differential restore.

    restore database partial2
    from disk = 'c:\parttest\partial1_DIFF_B.bak'
    with norecovery,
    move 'primary' to 'c:\parttest\primary.mdf',
    move 'primary_log' to 'c:\parttest\primary_log.ldf',
    move 'fTwo' to 'c:\parttest\fTwo.ndf'

    Finally, with that step performed, you can then restore the subsequent transaction logs from after the differential backup with a relevant STOPAT parameter:

    restore log partial2
    from disk = 'c:\parttest\partial1_Log_B1.trn'
    with norecovery, stopat='2012-05-29 15:13:12.750'

    restore log partial2
    from disk = 'c:\parttest\partial1_Log_B2.trn'
    with norecovery, stopat='2012-05-29 15:13:12.750'

    restore database partial2 with recovery

    At this point, the database is recovered with just the primary and fgTwo filegroups online, you can look in sys.master_files to see the state of all of the database’s data files and which are currently online.
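
    A minimal sketch of that check, using the database name from this demo:

```sql
-- Shows which of the database's data files are online after the partial restore
select name, state_desc
from sys.master_files
where database_id = db_id('partial2');
```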

    In summary, this article showed how to perform a partial database restore, and how you can easily end up having to restore more than you were expecting due to a simple “over-clarification” in a restore command.

    I have a complete demo script of a more thorough test available for download from here.

    Read Committed Snapshot Isolation – Two Considerations
    11 May 12 09:27 PM | GavinPayneUK | with no comments


    The Read Committed Snapshot database option in SQL Server, known perhaps more accurately as Read Committed Snapshot Isolation or RCSI, can be enabled to stop readers blocking writers and writers blocking readers.  However, enabling it can cause two issues with the tempdb database which are often overlooked.

    One can slow down queries, the other can cause queries to fail.

    Overview of RCSI

    Enabling the option changes the behaviour of the default SQL Server isolation level, read committed.  By default, this isolation level means that if one session is updating a range of rows then another session cannot execute a select query against the same set of rows; the shared locks the select query needs are incompatible with the eXclusive or Update locks that the update query will be using; the session executing the select query ends up being blocked until the transaction involving the update query has committed.  That’s a very high level overview of what happens so those wanting to know more or become an MCM should watch this video here.


    Configuring a database to use the Read Committed Snapshot option aims to stop this happening by using Row Versioning.  Essentially, before a row is locked in preparation for changes a copy of it is put in a special part of the tempdb database, known as the version store.  When a select query then wants to read that row it uses the copy in the version store rather than waiting for the actual row to have its locks released.  Again, I recommend watching the video linked to above to see just what happens as I’m just giving a high level overview of the feature before I discuss the primary points in this article.
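
    For reference, enabling the option is a single database setting; a minimal sketch, using a hypothetical database name, and noting that the ALTER needs exclusive access to the database:

```sql
-- Enable row versioning for read committed; requires no other
-- active connections to the database at the time
alter database SalesDB set read_committed_snapshot on;

-- Confirm the option took effect
select name, is_read_committed_snapshot_on
from sys.databases
where name = 'SalesDB';
```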


    Tempdb Performance

    One of the first things people mention when you suggest enabling the Read Committed Snapshot option is that it will put pressure on the tempdb database which stores the version store.

    That’s correct, it will, but a properly configured tempdb database should never be a bottleneck in your environment.  You can read how super-tuning expert Thomas Kejser put the world’s biggest database workloads through SQL Server and never had to worry about tempdb performance here.

    There are two considerations I recommend for configuring your tempdb database in order to ensure good performance:

    • The number of tempdb data files – in the absence of your own monitoring for PFS, GAM or SGAM contention within tempdb, I hear people at Microsoft these days recommending having 8 equally sized data files, which I’d agree with.  They don’t need to each be massive, their total size could still be 8MB(!), but having more increases the number of internal system objects within the database, so reduces the chance of a single system object becoming contended.  (The rule in SQL Server generally tends to be that if you have any kind of resource contention, add more of that resource; whether it be IOPS, locks or PFS pages etc.)  Having too many data files is much less likely to cause you problems than having too few, just be sensible about how many you have.


    • The type of physical storage for the tempdb data files – as always, configure the storage you use for tempdb like you would for your database files: fast, reliable, striped arrays.  While Thomas will tell you that using dedicated disks isn’t important for him, others will tell you it’s crucial.  The only additional comment I have is that some people deploy NAND storage specifically for tempdb; in fact SQL Server 2012 had some changes to support this with failover clusters.
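
    A minimal sketch of the first recommendation, using hypothetical file names, paths and sizes — repeat the ADD FILE step until tempdb has 8 equally sized data files:

```sql
-- Add an extra, equally sized tempdb data file (repeat as needed)
alter database tempdb
add file (name = 'tempdev2',
          filename = 'T:\tempdb\tempdev2.ndf',
          size = 1024MB,
          filegrowth = 500MB);
```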


    Crucially though, having a poor performing tempdb database won’t break a user’s queries, they may just run slower than they could do, especially if you’re using the read committed snapshot option.


    Tempdb Sizing

    How much space your tempdb database needs depends on how you use SQL Server; some people need very little space while others need lots.  What’s relevant to this article is that the read committed snapshot option doesn’t just like tempdb space, it needs it.

    We mentioned how rows get written to the version store in tempdb as they get locked in the source user database, but what happens if there’s no free space in tempdb when that copy needs to be written?  I did a test to find out; you can email me for my demo script if you want to try it for yourself.
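
    While running a test like that, you can watch how much of tempdb the version store is consuming; a minimal sketch:

```sql
use tempdb;
-- Space currently reserved by the version store, in MB
-- (8KB pages, so pages * 8 / 1024)
select sum(version_store_reserved_page_count) * 8 / 1024 as version_store_mb
from sys.dm_db_file_space_usage;
```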

    In summary, when tempdb is full update queries will still execute successfully but select queries which expect to find the rows they need in the version store will fail when they can’t find them there.  You’re likely to see this message:


    Msg 3958, Level 16, State 1, Line 1

    Transaction aborted when accessing versioned row in table 'dbo.t1' in database 's1'. Requested versioned row was not found. Your tempdb is probably out of space. Please refer to BOL on how to configure tempdb for versioning.

    So how do you stop that from happening? 

    The safety option I suggest is to place tempdb on a drive with lots of free space, although it’s acceptable to configure tempdb to start with small initial data file sizes.  However, make sure you configure the data file auto-grow settings so that when tempdb runs out of free space SQL Server will automatically allocate it some more.  There are methodologies you can use for setting a good initial size for tempdb, but for now enabling the autogrow option is a good failsafe.  Then, after a couple of months, see what size the data files have grown to and consider setting that to be their initial sizes.
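
    A minimal sketch of that auto-grow failsafe, using tempdb’s default logical file name:

```sql
-- 'tempdev' is the default logical name of tempdb's first data file;
-- check sys.master_files if yours differs
alter database tempdb
modify file (name = 'tempdev', filegrowth = 500MB);
```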


    General Recommendations, Test

    Those are very general recommendations, along with setting the data file auto-grow size to 500MB, but in the absence of specific testing or advice for your environment, they might be something you want to try in a test environment to help you learn more about tempdb behaviour.

    Knowing when you’re ready to attempt to become an MCM of SQL Server
    01 May 12 10:09 AM | GavinPayneUK | with no comments

    Since I became a “Microsoft Certified Master of SQL Server” I’ve had lots of people asking me how I prepared, how I knew I was ready to take the exams, and what they were like. At the SQLBits certification session recently I was invited onto the panel to answer questions about the “modern” MCM program. This article hopes to answer some of the questions people ask, although the answers will always be very personal: what feels difficult for me might be slightly less difficult for you, yet I might make it sound really difficult. Despite this, I thought I’d share my experiences with you and hopefully help you decide when you’re ready to attempt to become a Microsoft Certified Master.

    Before we start, I should point out a couple of things: this article has been reviewed by the former MCM program manager Robert Davies, so don’t think it gives away secret information about how to prepare - none of that is covered by any NDA anyway.  Also realise that the MCM is a very high level of certification; it’s not for everyone.  Read the experience the program assumes you have and ask yourself if that’s like your background.

    How I Prepared

    There were three specific steps I took in my preparation, and luckily I was able to make most of them part of my day-to-day role at Coeo. They were:

    1. Real-world experience

    2. Reading materials

    3. MCM readiness videos

    Equally importantly, you need to find a way to make the intense preparation more exciting than just sitting down in an evening and reading a book. If you take that approach you’ll run out of mental energy and begin cutting corners in your preparation, which will reduce your knowledge and familiarity with features. I introduce the word familiarity there for the first time; that’s an important word for the MCM’s requirements of you.


    This certification isn’t about knowing every syntax, command or trace flag ever published, it’s about knowing what features SQL Server has, how they work, how they work with each other and their limitations. No book will teach you all of that, only having used them a lot will you become familiar with them. For example, if the lab exam asks you to meet requirement X then you should immediately know that feature Y is the answer, then is not the time to be teaching yourself the difference between two similar features. Of course, if you also know there’s something which can stop a straightforward implementation of feature Y then your familiarity with it needs to be able to tell you in advance what that issue will be – and how to fix it.

    Real-world Experience

    Nothing can prepare you better for the MCM exams than real-world experience of having used SQL Server, that’s where your familiarity comes from. Of course, that probably doesn’t mean managing your company’s HR or Intranet databases as the MCM is focused at enterprise environments with requirements and issues which require MCM level experts to resolve them. I was lucky that my day job put me in those enterprise situations almost every day and while you may not think you have that luxury you’d probably be surprised.

    My advice is to do the following regardless of what your involvement with SQL Server is:

    • become familiar with the MCM’s exam objectives and your understanding of what they require you to know (the readiness videos may help if you’re not sure) in terms of features and best practices, they are available here
    • generalise each SQL Server request or requirement you get given in your current role, remove what makes it specific to your environment and most importantly, the limitations your current environment might put on how you would deliver it
    • then, in your own time, consider how you might meet that requirement in a way that best shows your understanding of the MCM’s exam objectives. For example, using peer-to-peer replication might not be possible in your environment but it could be the most appropriate way an MCM would implement a solution in an environment with no legacy restrictions
    • with your chosen approach planned, implement it on some virtual machines. You don’t need to worry about having complicated database schemas or large volumes of data, nor do your virtual machines have to be fast. You should be able to create the databases and data for your tests in 60 seconds; keep them simple so you can focus on getting the feature to work so you can learn about it. Spending a whole evening just trying to replicate all of the tables in your intranet database is just an unnecessary waste of time
    • with the feature installed, remove it and repeat the setup; determine if it’s best to use the GUI or T-SQL setup steps in case there are options you can’t choose through the GUI; then find out how you troubleshoot and diagnose that feature - are there DMVs or log files etc.

    That may seem like a lot of effort but that’s the level of preparation I put into learning about SQL Server for my exams. Of course, there will be some features you might know more than enough about already, great, tick those off and move onto those you don’t know so well.

    Reading Materials

    The MCM web site gives you a very thorough reading list, here, which covers almost every part of SQL Server in almost every depth you can imagine. How much of that reading list you can ever read is a very personal decision based on time, ability and ultimately your interest in some of the subjects. What I will say is that every little thing you know about SQL Server when you go into the exams will boost your confidence, especially if you find you know more than is required. Some of the books can be expensive and difficult to get hold of so don’t feel they’re compulsory reading. Instead, Books Online and blogs can give you the equivalent knowledge but you will have to spend more time searching for it. I’m not going to recommend any specific blogs, only that the types of blog you read should typically be authored by people who teach pre-cons or deliver level 500 training at conferences.

    When you do read a book, make sure you have a good note taking method. There’s no point in reading a book, getting to the end and forgetting what you read 2 days ago at the beginning. I used OneNote but there are many variants of the productivity tool around.

    MCM Readiness Videos

    I found the videos, available here, very useful but not in the way you’re probably imagining.
    I watched them at anywhere between 1.2 and 1.6x their normal speed depending on who was speaking. Why? Because I used them for revision and checking I knew what was being shown, not because I was using them as my only learning opportunity or because I wanted to emulate the classroom experience. If I didn’t know something that was being shown I either stopped the video and researched that subject, or watched that part at normal speed. However, you cannot pass the MCM exams just by watching 40 hours of videos! They are readiness videos, not a complete training course.

    That was “all” I did for my preparation. Notice I didn’t call it revision: to be honest, I had to learn new things for the MCM exams, and you will need to as well. I also didn’t do any MCM training courses, practice labs, webinars or discussion groups. However, watching the #sqlmcm hash tag on Twitter may be of interest.

    Knowing When You’re Ready

    If someone asked whether you’re ready to take the exams, you’d always say you need another month or two. In fact, the more you prepare, the more you realise you don’t know, so the more time you feel you need; that’s just because you’re aiming for a perfection that’ll never exist. Below are three recommendations my mentor, an MCM, gave me to help determine whether I was ready and prepared:

    • if your preparation plan is going to take more than 6 months, you’re not ready to even begin. You can’t sustain the preparation intensity for any longer than that
    • you should be able to confidently talk about a feature of SQL Server for 90 seconds to someone else who works with the product, and not end up wondering whether they believed what you were saying, because everything you said should have been true and something you’d learnt
    • you should be able to take all four of the pre-requisite exams needed for the MCM the day before you take your first MCM exam and not think anything of it. If passing an MCITPro exam is still a big hurdle for you, you’ll need to reconsider your approach to the MCM exams

    So, that’s my advice: it’s not right, it’s not wrong, it’s just the approach I used to take, and pass first time, the MCM exams. Good luck with yours!

    What to do when the solution will never be in front of you
    25 April 12 10:46 PM | GavinPayneUK | with no comments

    Sometimes you have a requirement that you don’t have a tool in your toolbox for.  When that happens, the solution may not be to just add another tool to your existing toolbox; it might be that you have to use an entirely new type of toolbox, full of tools you’ve never used before.  Doing that may be a bold architectural decision, but it is often where the biggest performance or functionality gains come from.

    Recently, in the SQL Server space, some technologies have become available that show how existing performance bottlenecks can be removed by solving a problem in a completely new way, rather than with the previous successful but relatively temporary fixes.  Two examples we’ll consider are solid-state storage and Columnstore indexes.

    10x faster can’t always come from 10x the power

    Tonight I read the keynote presentation that Conor Cunningham gave at SQLBits recently about ColumnStore indexes (available here), but rather than reading about how ColumnStore indexes work, I read it from an architectural decision-making perspective.

    Midway through the presentation is the line “Making something 10x faster is HARD”, followed by a slide questioning how you could make a query run 10x faster.  The answer wasn’t optimising the existing way of doing things; it was to implement a completely new way of working, in this case a new query optimiser mode and index format.  The existing options just couldn’t be tuned any further to meet the requirement.
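    As a minimal sketch of what adopting that new way of working looks like in practice, this is the SQL Server 2012 syntax for creating a nonclustered columnstore index; the table and column names here are hypothetical, not from Conor’s presentation:

    ```sql
    -- Hypothetical data warehouse fact table; columnstore indexes suit
    -- large, read-mostly tables queried with scans and aggregations.
    CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_ColumnStore
    ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);
    ```

    The point isn’t the syntax itself: once the index exists, eligible queries can use the new batch-mode processing and column-oriented storage, which is precisely the “completely new way of working” rather than more tuning of the old one.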

    Game changing storage

    The other example I quoted was the introduction of solid-state storage in database servers.  Making storage fast was always a goal of infrastructure teams: they used different RAID levels, different caches and different numbers of disks, until the industry realised that 5–10% gains weren’t always worth the effort or cost of implementing.  Instead, what we wanted was something that was 10,000% faster, and the answer wasn’t more disks; it was a totally different approach to persistently storing data, in this case using memory chips instead of spinning platters.


    You’re probably never going to be asked to re-write a query optimiser or create a new type of storage hardware; however, you may be asked to keep making something go faster or scale further after having just done all you can to make it faster.  It could be making reports run quicker or supporting more concurrent users.  My conclusion is that you shouldn’t just think about how to solve today’s problem using today’s options; also think about a medium-term option that may mean doing something radical, so that it won’t be a shock when you realise you need to do it.
