Are you trying to build a SQL Server database project and getting this error?
The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
We had this recently when trying to build one SSDT solution but not when building another.
Checking the build agent, the error was correct: that file didn't exist. But later versions did.
A nice feature of TFS is that clicking the error in the build takes you to the exact point in the sqlproj file that the error relates to.
<Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" />
You will see that the path is made up of a number of components. Importantly, $(VisualStudioVersion) determines the folder in which it looks for the targets file.
Where is this property set?
We did some digging and found that the solution files differed.
The one that worked looked like this:
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 2012
and the one that failed looked like this:
Microsoft Visual Studio Solution File, Format Version 11.00
# Visual Studio 2010
As I'm sure you've done, we put 2 and 2 together and figured that $(VisualStudioVersion) was being set by Visual Studio based on the solution file version.
We changed the solution file that wasn't working to point to Visual Studio 2012 and, hey presto, it all worked.
This means that if you have multiple versions of Visual Studio installed then you could end up with odd behaviour with automated builds.
There are a few things here that are bad, IMHO:
1. there is no way in the IDE (that I can find) to see that the solution is targeted at Visual Studio 2010 and not Visual Studio 2012
2. there is no way to change it without editing the solution file
3. the fact this property is in the solution and not the project.
The options you have, as described in this blog post, are:
1. Put <VisualStudioVersion>11.0</VisualStudioVersion> in a property group in your MSBuild file
2. Put <VisualStudioVersion>11.0</VisualStudioVersion> in a property group in your sqlproj file
3. Pass it as a property when calling MSBuild, i.e. /p:VisualStudioVersion=11.0
4. Create a new solution whose header matches the version of Visual Studio you want to use
5. Change the solution so the header matches the version of Visual Studio you want to use
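As an illustration of option 2, a minimal sketch of what the property group could look like in a sqlproj file (the surrounding project content is omitted; the Condition guard is my own addition so the property is only defaulted when nothing else has set it):

```xml
<PropertyGroup>
  <!-- Force the SSDT targets lookup to the Visual Studio 2012 (v11.0) folder,
       regardless of which solution file version triggered the build.
       The Condition means a /p:VisualStudioVersion switch still wins. -->
  <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">11.0</VisualStudioVersion>
</PropertyGroup>
```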
Are you looking to go to SQLBits on the cheap? If so, then we have arranged some great-value accommodation.
£39 per night includes a full English breakfast at the Priorslee Rooms, which is 5 minutes from the venue.
For more details on the accommodation options available, visit the SQLBits accommodation page.
If you've been reading the wires about SQL Azure you will have seen a change to the tiers that are being offered. This is all so that you can get a more predictable performance level from the platform and also to enable Azure to manage the resources.
You now have 3 tiers, Basic, Standard and Premium, and within those you have some further performance levels. This enables you to start your service at a low level and then, as required, request more capacity (through the portal or the API).
For details on the performance aspects of the levels, you can read about them here.
Something to note is that the switch between levels may not be instant. This depends on what level you are moving from and to and the capacity available on the physical servers your DBs are currently residing on.
"An SLO change for a database often involves data movement and therefore many hours may elapse before the change request completes and the associated billing change becomes effective. Data movement occurs for changes when upgrading/downgrading a database and may also occur when changing performance level of the database."
If your data has to be moved, then for a 150GB database it could take around 2 days to make the transition, and around 17 hours for a 50GB DB. The rule is as follows:
3 * (5 minutes + database size / 150 MB/minute)
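To make the arithmetic concrete, here is a small sketch of that rule in Python (my own helper, not an official API; I assume 1,000 MB per GB for simplicity), which reproduces the figures above:

```python
def transition_minutes(db_size_gb: float) -> float:
    """Estimated worst-case duration (in minutes) of an SLO change that
    moves data, per the rule: 3 * (5 minutes + size / 150 MB per minute)."""
    size_mb = db_size_gb * 1000  # assumes 1 GB = 1,000 MB
    return 3 * (5 + size_mb / 150)

for gb in (50, 150):
    print(f"{gb} GB -> ~{transition_minutes(gb) / 60:.0f} hours")
# 50 GB -> ~17 hours
# 150 GB -> ~50 hours (i.e. roughly 2 days)
```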
For the exact details go to http://msdn.microsoft.com/library/azure/dn369872.aspx
As a note, I created a test DB as a Standard edition. I was able to increase and decrease the level instantly. I was also able to increase the database to a Premium database instantly. This will have been due to there being spare capacity on the servers the database resides on. You cannot rely on this behaviour; it may or may not be instant.
It was sad to hear today about computermanuals.co.uk closing down after a period of administration.
Whilst I do love books, access to high-quality technical information on the internet, accessible on your PC, does mean the printed technical book looks to be going the way of the dinosaur.
The silver lining is that you can get some books really cheap in their closing-down sale.
Our next SQLBits event in Telford in July has topped the number of registrations for the Friday, and we still have over 2 months to go. This is going to be our biggest regional SQLBits ever by a long way, both in terms of attendees and sessions.
I’m currently in the planning for the Friday evening party and it will be a special event.
If you care about data and work with SQL Server, then SQLBits XII is an event that you do not want to miss.
It's not too late to register, but I've been told that accommodation is in short supply, so make sure you make your booking soon.
My career started at a company where we hardly had email; the network was a 10base2 affair with cables running all around the office. You used floppy disks, and the thought of a GB of data was absurd. You had to look after every byte and only keep what you really needed.
Whilst the cost of spinning disks gradually falls, the cost and size of flash storage continue to plummet.
The new Crucial SSD is £380 for 1TB.
I can now keep 128GB of data on an SD card the size of my finger. It only costs $50 a month to store 1TB of data in Azure ($61.29 on Amazon S3 Europe).
This brings along with it a whole host of problems.
It's too cheap.
Whilst before you had to manage your mailbox, your desktop and your database because you ran out of space, now it's cheaper to keep data than it is to get rid of it.
Keeping data around leads to many problems, and the cost of the media is only a small part of the cost of maintaining the data:
1. You have to manage it, e.g. backups.
2. You have to secure it.
3. The more you have of it, the more risk there is of losing some of it.
4. When in a database, having more data affects your query performance.
5. Can your employees find the data they need, or is it a game of finding the needle in the haystack?
6. Most data protection acts state you should only keep data for as long as you need it.
7. You are also liable to give customers copies of the data you have on them. Do you know all the data you have on someone? Even those notes the sales person wrote about someone in a OneNote notebook?
It's just like The Blob.
The reason this is a problem for many organisations is that it's never on the list of priorities when a company or a project starts. It's never a problem at the beginning, as there is always enough space. However, as time goes by it gets bigger and bigger.
By the time it becomes a problem, it's then such a big thing to deal with (doing the work, changing attitudes, implementing policy) that no one really has the appetite.
What are you doing about it?
Do you think about what you are storing and decide if you really need to store it?
Do you have a data retention policy?
Do you have a data deletion policy?
What about a data archive policy?
Are you actively reviewing what data you are holding?
Do your IT guys really push back when a team says we need 1TB of storage for project X?
Do your developers have data retention in their definition of done?
Deal with it sooner rather than later or it will be just too big to digest.
Data is a huge part of what we do and I need someone that has a passion for data to lead our SQL team.
If you've got experience with SQL and want to lead a team working in an agile environment with aggressive CI processes, and you have a passion for data and want to use technology to solve problems, then you are just the person I am looking for.
The role is based in London, working for one of the top tech companies in Europe.
Contact me through my blog or LinkedIn (http://uk.linkedin.com/in/simonsabin) if you are interested.
Then contact me; we need your services for a project.
Use the contact form on this blog or contact me through LinkedIn: http://www.linkedin.com/profile/view?id=2895653
When you go shopping for a new gadget, do you just go to your nearest electronics store (Currys, Best Buy, etc.) and read the marketing ticket next to the product?
No, you probably don't. You more than likely ask your friends if they've got one and what they think. Or you go and look for reviews that compare the different options.
The same applies to recruitment.
Employers don't just rely on your CV (your marketing ticket); who knows what rubbish you've put in it. Employers want to validate that information.
LinkedIn, I find, is key to that validation process.
It takes 2 minutes to complete your profile and keep it up to date. Whilst it is similar to your CV, it has a few benefits:
1. It's a very small world. I can see if you are connected to anyone I know or have worked with someone I know.
2. It validates what you have in your CV. Because it's public, you aren't likely to exaggerate as much on LinkedIn as you might in your CV.
3. You can get endorsements and recommendations from previous colleagues.
Whilst you wouldn't include snippets of recommendations in your CV, you can have them in your LinkedIn profile. What's more, given the blandness of most references, these provide some great information on which to base a decision.
So make sure that you complete your profile.
When I started looking at SQL conferences I had heard about this conference held in Denmark in the middle of nowhere, with everyone staying in huts, and was always interested in how it worked: confining all the delegates in one place with no opportunity for escape. Later on, when I became an MVP, I started hearing stories about a conference in Denmark, the same one. It was the extreme of the conference work hard, play hard ethic, the play hard part being legendary. The huts are holiday homes and the conference venue is on the site of Legoland. On one of the nights they have a BBQ, and on the last night they have a dinner followed by the pool party. This isn't any old pool party: they take over a water park with a bar. You can imagine the scene, a conference of Oracle and SQL guys (and a few women), drinking and then going down water slides in groups of 4 on huge rubber rings.
Almost two years ago I was invited to speak at said event and unsurprisingly decided it was a must. The event was Miracle Open World 2011.
Now, whilst the event and the pool party were something to behold (you try holding a beer whilst shooting down a vertical slide), my overriding memory is of a talk I attended.
“You’ve got to go and see this guy. Last year he dissected the internals of a PDF document.”
This was a comment by one of the attendees about a session on dissecting MDF files.
Being a fairly hardcore SQL guy, I thought this was going to be a poor show; MDF files are very complex. But I was intrigued, as I was looking at how to explain to developers how SQL Server worked, and that's what the session promised.
Well, I wasn't disappointed. The session was by Mark S. Rasmussen, on his OrcaMDF project. Essentially he has written code that allows you to point it at an MDF file and it will reconstruct the rows and columns from the data. In effect he's written a storage engine parser, one that parses pages into the relevant information.
Even at that stage Mark had made huge progress: he could distinguish different types of pages, follow linked lists of pages and the allocation maps, and decode page values into column values. Not simple stuff by any stretch.
Mark is a seriously clever and nice guy. If you ever see him at a conference, say hi and have a chat. He's also got a fascinating past, with more examples of how stupidly clever he is.
Since that talk Mark has been speaking around the world (no idea how he finds the time, or the money, to do it), and I was really pleased today to find out that Mark has been awarded an MVP for SQL Server. Fantastic and well deserved.
You can follow Mark on Twitter http://twitter.com/improvedk or read his blog http://improve.dk
He has made the OrcaMDF project available on Github https://github.com/improvedk/OrcaMDF
This is where it all started: http://improve.dk/archive/2011/04/17/miracle-open-world-2011-follow-up.aspx