Yup, yet another instance of something breaking when a cluster runs on the other node.
I don’t wish to apportion blame here or point any fingers but .. following on from my last post, I managed to have one of the production clusters that normally runs on node1 left running on node2 after a restart. This time it was my own monitoring application that stopped working, sigh!
I have customised performance dashboards that I create with SSRS, and these point to the cluster. Normally (well, I say normally, but that’s not really true, as every cluster appears to be different) I’m essentially collecting O/S stats, so I don’t really want to connect to the SQL virtual server, and I certainly don’t want to connect to the node. So in this instance I was connecting via the cluster name – this worked fine when the cluster was on node1, but could not connect when the cluster was on node2.
My first check was to see if the core cluster resources had been moved over to node2. Nope, they hadn’t, but moving them didn’t fix the issue either. I then tried to connect using the SQL Server instance name, but that didn’t work either, so in the end I had to resort to using the cluster IP address.
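Out of interest, when a connection name stops working after a failover, one quick way to narrow it down is to test each candidate endpoint (cluster name, SQL virtual server name, node name, IP) in turn and see which ones are actually listening. A minimal sketch in Python, assuming hypothetical host names and the default SQL Server port 1433:

```python
import socket

def can_reach(host, port=1433, timeout=3):
    """Attempt a plain TCP connection; True if the endpoint accepts it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical names - substitute your own cluster / virtual server / node names.
candidates = ["clustername", "sqlvirtualserver", "node2", "10.0.0.50"]
for host in candidates:
    print(f"{host}: {'reachable' if can_reach(host) else 'NOT reachable'}")
```

Note this only proves a TCP listener is up on that name; a login can still fail after the connect (Kerberos SPNs being a classic culprit on clusters), but it at least separates name/network problems from SQL-level ones.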
The end result is that I have three failover clusters, all running Windows 2012, and I have to use a different method (name) to connect to each.
Just proves that if you don’t test, you just don’t know what will happen.
A quick disclaimer .. I don’t build the clusters :)