My presentation slide decks and demo scripts from SQLSaturday #150 have been uploaded.
Thanks to the planning team for selecting my sessions and thanks to everyone who attended my sessions – I enjoyed the opportunity to share my passion.
There’s an awesome FREE technical training event coming to Baton Rouge on August 4, 2012. That’s right; SQLSaturday and Tech Day 2012 will be held at LSU’s new College of Business facility. This is the fourth year that the Baton Rouge technical community has held this event and they expect around 400 people – if you live anywhere close by, then you should be there! William Assaf (blog | twitter) even got some local TV exposure for the event this year.
This event is bigger than your normal SQLSaturday. In addition to tracks for the SQL Server professional, there are also tracks for .NET development, Windows Phone development, SharePoint, and general professional development. Check out the full schedule here, and then sign up here.
Why am I plugging this event? Well, for one thing the Baton Rouge SQL Server community has always come west across the state line to support our SQLSaturdays in Houston. Secondly, I’ll be speaking at their event this year on “Managing SQL Server in the Enterprise with TLAs”. TLA is “Three-Letter Acronym” for those unsure. We have lots of those in techno-speak. I’ll be covering CMS, PBM, EPM, MDW, and more…. If you work with SQL Server and don’t know what those are or how they can help you, then register today for SQLSaturday #150 and come to my session at 8:20am in Room 1700!
Addendum: I’ll now also be presenting a second session “SQL Server 2012 Database Engine – Why Upgrade?” in the 2:45pm slot in Room 1700.
If you can’t attend this event, then check here for all the currently scheduled SQLSaturdays in the US and around the world!
TSQL2sday is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Erin Stellato (blog | twitter) and the selected theme for this month is “A Day in the Life”.
Erin challenged us to track what we did in our jobs for a specific day and write about it. This is great because I often have trouble explaining to others (especially non-IT folk) what my title of SQL Server Service Engineer really means. However, as this exercise is just supposed to cover a single day, this is just a small sample of what I do. There is no such thing as a “normal” day for me. Sometimes my tasks are based on the “crisis du jour” prioritization method, and sometimes I can actually follow the team work plan. The variety in my job is one of the things I like about it. So here goes…
Unless I have an early morning meeting with global colleagues, my day nearly always begins with processing email. Since I work in a global organization in the region whose workday is last to begin, even if I’d cleared my Inbox the day before, I always open my mailbox to encounter new emails from European and Asia-Pacific colleagues who have already completed or are wrapping up their workday. In that sense, this day starts out as just a normal day (no early meetings!).
Unfortunately for this write-up, it appears that summertime may be lightening my email load: I have only a handful of emails, and only as a cc on a couple of subjects one of my teammates is handling. One of the issues has to do with deploying SQL Server Enterprise Edition versus Standard Edition and the licensing implications for the customer. My team is made up of technical experts – we can tell the customer whether what they are trying to do requires a specific edition of SQL Server to use the requested feature, but we are not involved in the licensing agreements between Microsoft and each customer. That is for others to figure out!
Email done and no looming crisis for today, I can get back to the task I’ve been working on previously – writing an automated process to roll out multiple T-SQL scripts to multiple instances using PowerShell. These are the scripts which update the standard tables and stored procedures in the admin database we install on all instances, along with a set of SQLAgent jobs which the operational DBAs use for system maintenance. Every so often, we need to roll out updates to these objects. Our current automated process for doing this (which was developed for SQL 2005) isn’t as automated as we’d like it to be. We have since created a CMS and are utilizing registered groups to run various processes (like EPM), and now we want to extend that concept to this activity as well. I’m thinking within a couple of hours I can write a script to save our operational DBAs literally hundreds of man-hours. Easy, right?
If you’ve worked with PowerShell at all – or any programming language for that matter – you know there is always more than one way to write a process to accomplish the task at hand. The challenge is in finding the most efficient way that gives you what you want. Our old script to run a set of .sql files was written in VBScript and called the sqlcmd utility. I figured that since I’m writing this in PowerShell and already using Invoke-Sqlcmd to get the list of instances from the CMS, I could use the Invoke-Sqlcmd cmdlet as shown in the second example in BOL and it would work just like sqlcmd. Wrong! It seems that example only works if you are running a SELECT statement in your InputFile. This particular set of .sql files should produce no output unless there is an error, and in my test I have a script which I know produces an error – but my output file is empty.
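For illustration, here’s roughly the kind of call I was making – a minimal sketch with placeholder instance and file names, not our production script:

```powershell
# A sketch of the failing approach -- the instance name and paths are
# placeholders. With a non-SELECT input file, errors raised by the script
# never made it into the output file; it always came out empty.
Invoke-Sqlcmd -ServerInstance 'SERVER01\INST1' -InputFile 'C:\Scripts\UpdateAdminDb.sql' |
    Out-File -FilePath 'C:\Logs\SERVER01_INST1_UpdateAdminDb.log'
```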
I try various parameters such as -ErrorLevel and -SeverityLevel, and I even use -Verbose, to no avail – still nothing is piped to my output file. I consult with my teammates to see if they have tried this before; I search for examples on the Internet, and the best I can find in one of the forums is someone else encountering the same thing, but with no solution for me. I can be stubborn sometimes and I’m not about to give up, but after a couple of hours of struggling I fire off an email to my SQL PowerShell buddy Allen White (blog | twitter) asking for his input: can I do what I’m trying to do with Invoke-Sqlcmd, or should I revert to calling sqlcmd?
While waiting for Allen to respond, a couple more emails have hit my Inbox. Yea! It appears that our request to rebuild one of our team’s test servers has been completed. We try not to do this too often, but part of engineering is writing scripts / installing / testing / uninstalling / enhancing scripts… repeat; over time, things sometimes get so messed up from all the testing (and the occasional bad script) that you just have to start over with a clean image. This is now a box we plan to use for testing our processes on SQL Server 2012.
It doesn’t take long before I have a reply from Allen – I hope he doesn’t mind if I quote him:
I honestly believe that it’s best to use the tool that best suits the task, so I’d use sqlcmd here, because it works the way you want it to.
Thanks Allen for the reminder not to use a hammer when a screwdriver is what you need! Sometimes, a hammer is all you have, but not in this case.
Now, it’s time for lunch. I head down to the cafeteria with my teammates and join other colleagues at our usual table. I don’t hang around too long chit-chatting as I want to get back to my desk, switch out my code, and test so I can announce success at our afternoon team meeting.
Remember what I said earlier about there being more than one way to do something? Now I have to decide how to go about calling sqlcmd.exe from PowerShell. I need to supply variables for all the parameters based on the target instance and the input file to execute – and the output filename and location are dynamically determined as well, based on the target instance and input filename. I start by looking at Invoke-Command, then move to Invoke-Expression, but I’m still not getting my output file like I want it, and I’m not able to detect whether sqlcmd experienced an error to report in my general execution log. I have an example using [diagnostics.process]::start($exe,$args).WaitForExit() that seems to be getting me close to what I want, but now it is time to break for my afternoon meeting.
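For the curious, a minimal sketch of that approach – the instance, paths, and log-file naming are placeholder assumptions, not our actual script (I’ve also renamed $args to $sqlArgs, since $args is an automatic variable in PowerShell):

```powershell
# Call sqlcmd.exe via [Diagnostics.Process] -- all names are placeholders.
# The -b switch makes sqlcmd exit with a non-zero code on error, which is
# what lets us detect failures and log them centrally.
$exe      = 'sqlcmd.exe'
$instance = 'SERVER01\INST1'
$sqlFile  = 'C:\Scripts\UpdateAdminDb.sql'
$outFile  = "C:\Logs\$($instance -replace '[\\,]','_')_$([IO.Path]::GetFileNameWithoutExtension($sqlFile)).log"

$sqlArgs = "-S `"$instance`" -E -i `"$sqlFile`" -o `"$outFile`" -b"
$proc = [Diagnostics.Process]::Start($exe, $sqlArgs)
$proc.WaitForExit()

if ($proc.ExitCode -ne 0) {
    # Record the failure in the general execution log for the DBAs to review
    Add-Content -Path 'C:\Logs\RolloutExecution.log' `
        -Value "$instance : sqlcmd failed (exit code $($proc.ExitCode)) - see $outFile"
}
```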
I’m the Technical Team Lead for a team of three. We each have our areas of specialization within the overall work plan, but we try to keep each other in the loop so we can back each other up at any time. As needed (usually every 1-2 weeks), we meet formally to update the work plan, assign or reassign tasks if needed, catch each other up on what we’ve each been working on, and brainstorm areas for improvement. This is one of those meetings, and since last week was a holiday week and we didn’t meet, we have a lot to catch up on. The nice thing about a team is having others to bounce ideas off of, and this is what I do with my frustration in finding the exact syntax I need to get the results I want from calling sqlcmd inside PowerShell. The next thing I know, one of my colleagues has done their own search and found a code example – I look and express skepticism, as it is very much like what I’m already doing, but it has one key difference; what can it hurt to try?
We continue to discuss how far we want to take this initial rewrite of our update process. We are also in the process of redesigning our whole automated install process, and ultimately we want the update process to utilize what we are putting into place there. However, we have a more immediate need for the operations team to roll out some updates, so we decide that version 1 of the update process will do no more than what we already have in place today (in terms of reporting), but it will be automated such that the DBAs only need to review the central output file for any problems. Selection of the systems requiring an update into a special CMS group can be done in an automated fashion, as can scheduling the update itself in SQLAgent. We decide to save further enhancements, such as logging the process’s results into a central table, for a future version.
Our meeting continues with more brainstorming about the consequences of developing an install and configuration solution for SQL Server which can account for multiple versions and differing customer standards (e.g. install locations). We plot out on the whiteboard different ways we can handle this – probably the umpteenth discussion like this that we’ve had; but each time we come in with new experiences and thoughts from what we decided previously (and in some cases started trying to implement), so we are continually refining the solution. We are becoming more confident that we are developing a standardized but flexible solution which is also more sustainable across multiple versions of SQL Server than our existing process.
The meeting concludes and although I’m anxious to try the code snippet my colleague found, it is really time for me to head home. I arrived at the office much earlier this morning than my normal start time trying to beat the rain and now I need to try to get home before the next round hits. There is some flooding already occurring around town. Working further on this script can wait until later. I know that once I do get started back on it, I won’t stop until I have it totally finished. That’s my life!
I probably learned more today in trying all the ways that didn’t work the way I thought they would than if the first code I tried had worked. This experience will pay off later, I know.
Today was an “Edison day”:
I am not discouraged, because every wrong attempt discarded is another step forward.
I have not failed. I’ve just found 10,000 ways that didn’t work.
P.S. I did finally get the script functioning the way I wanted the following day and it will save our operations team hundreds and maybe even thousands of hours. This is what I do!
What is TSQL2sday? Back in late 2009, Adam Machanic (blog | twitter) had this brilliant idea for a monthly SQL Server blogger event (the origin of TSQL2sday). This month’s event is hosted by David Howard (blog | twitter), and this month Dave is letting us choose our topic from any of the prior 25 topics! As my first foray into this event wasn’t until the 14th occurrence, I really like this idea and selected “TSQL2sday #007 Summertime in the SQL” as my second-chance topic. Okay, so it is January, but it was 70+ degrees in Houston today, so quite balmy. However, that wasn’t why I chose this topic; I really chose it because this topic asked what your favorite “hot” feature in SQL Server 2008 or R2 is. I thought about “updating” the topic to SQL Server 2012, but I’m really not sure yet which new “hot” feature of SQL Server 2012 will turn out to be my favorite – and after 3 years, I definitely know which SQL Server 2008 set of features is my personal favorite – CMS and PBM.
The Central Management Server (CMS) and Policy-Based Management (PBM) features have made the overall management of large numbers of SQL Server instances, well, manageable.
The CMS enables us to organize instances into multiple different classifications based on version, location, etc. We rebuild the CMS on a regular schedule based on the data in our asset management system. This ensures that all DBAs have access to a CMS with all known instances. If you are not familiar with the CMS – it does not grant any access to the instances themselves and connectivity using it only works with Windows Authentication, so there are no security loopholes here.
We then use these CMS groups as input into our various meta-data and compliance collection processes. Approximately 90% of our technical baseline compliance evaluation is accomplished via policies in PBM. We’ve incorporated all of this using the EPM (Enterprise Policy Management) Framework available on Codeplex with a few tweaks of our own to work better in our environment.
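As I understand it, the EPM Framework drives its evaluations with the Invoke-PolicyEvaluation cmdlet; a minimal standalone sketch (the policy path and instance name are placeholders) looks something like this:

```powershell
# Evaluate one exported PBM policy against one instance -- a minimal sketch.
# The EPM Framework wraps this same idea in a loop over CMS groups and logs
# the results centrally. Requires the sqlps module/snap-in.
Invoke-PolicyEvaluation -Policy 'C:\Policies\Database Auto Close.xml' `
    -TargetServerName 'SERVER01\INST1'
```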
If you haven’t yet checked out the CMS and PBM features, I encourage you to do so today. I have two previous blog entries relating to this topic – “Managing the Enterprise with CMS” and “Taking Advantage of the CMS to Collect Configuration Data”. I’d also highly recommend that you watch the SQL Server MCM Readiness Videos on the Multi-Server Management and PBM topics.
And, it is good to know that by the time this entry is posted – we should be back to our normal 50 degree January weather in Houston!
So, what’s in your wallet… er, msdb?
If you’ve been working with Microsoft SQL Server for any time at all, you can name the system databases fairly quickly – master, model, msdb, and tempdb. It almost rolls off your tongue as smoothly as Tenaha, Timpson, Bobo, and Blair (if you don’t know that reference, click here for some interesting Texas folklore). I’ve been working with SQL Server since 4.21, and the system db names have not changed from release to release. But what did change drastically over all those releases are the contents – especially for msdb. Have you taken a look lately? I did, and here is what I found.
Lots of new tables, lots of new views, and lots of new roles – well, just lots of new objects period! I always investigate what’s changed in master with every release, but it seems I’ve been delinquent in thoroughly examining msdb changes. It appears every new feature being added to the SQL Server database engine is being “managed” from the msdb database.
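If you want to take the same look yourself, a quick way to eyeball it (a sketch – the instance name is a placeholder) is to count msdb’s objects by type:

```powershell
# Count msdb's objects by type -- a quick sketch; point it at any instance.
Invoke-Sqlcmd -ServerInstance 'SERVER01\INST1' -Database msdb -Query @"
SELECT type_desc, COUNT(*) AS object_count
FROM   sys.objects
GROUP  BY type_desc
ORDER  BY object_count DESC;
"@
```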
The objects supporting SQLAgent have always been in msdb – jobs, operators, alerts; along with the objects supporting database backup and recovery sets. Then DTS came along which later morphed into SSIS – guess where those objects are stored? Maintenance plans, log shipping, and database mirroring also join the party in msdb. And, oh yeah – SQLMail which became DatabaseMail…
Now with SQL 2008 and R2, you can add to the list – CMS (Central Management Server), Data Collector, PBM (Policy Based Management), DAC (Data Tier Applications), UCP (Utility Control Point)… but “Wait!” you say. “Don’t the Data Collector and UCP have their own databases?” Well, yes, they do – typically MDW and UMDW respectively. But, those databases only hold the collection results – the management and configuration of those features is all contained in msdb.
Here’s what the SQL Server 2008 R2 BOL says about msdb:
The msdb database is used by SQL Server Agent for scheduling alerts and jobs and by other features such as Service Broker and Database Mail.
After seeing this, I had to laugh – I totally missed Service Broker in my list, but then I’ve never implemented it. On the other hand, look at all the features using msdb that BOL just lumped into “other features”! Bottom line – the BOL description really sells msdb short on its importance to so many features. AND, the default recovery model for msdb is Simple. Before SQL 2008, I would have said that a daily full backup was typically sufficient. You didn’t usually make so many changes to jobs that you couldn’t easily recreate a day’s work if you lost it restoring to the prior day. And best practice said to take a backup after making modifications which impacted it anyway… but now, with so many features tied to msdb – do you really know when you are making changes to msdb? Considering all that is now dependent upon msdb – is your backup and restore strategy for msdb adequate?
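If you’re wondering where your own msdb stands, a quick sanity check (a sketch – the instance name is a placeholder) shows its recovery model and most recent full backup:

```powershell
# Check msdb's recovery model and last full backup date -- a sketch.
Invoke-Sqlcmd -ServerInstance 'SERVER01\INST1' -Query @"
SELECT d.name,
       d.recovery_model_desc,
       (SELECT MAX(b.backup_finish_date)
        FROM   msdb.dbo.backupset b
        WHERE  b.database_name = d.name AND b.type = 'D') AS last_full_backup
FROM   sys.databases d
WHERE  d.name = N'msdb';
"@
```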
However, before I go examine my msdb recovery strategy, I’m thinking of opening a Connect suggestion – rename msdb to KitchenSinkDB. Do you think I could get some votes for that?
In an earlier blog entry, I showed you how you could build and populate your CMS groups to use with SSMS to run multi-server queries. Okay, so now you are probably thinking of lots of configuration type info which you’d like to collect periodically from your instances. Obviously, you don’t want to have to do this manually in an SSMS Query Editor window all the time, you want to automate it, right? You can probably guess where I’m headed next… Yep – PowerShell!
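Something along these lines gets you started – a minimal sketch, where the CMS instance, group name, and output path are placeholders; it pulls the group members from the CMS and runs a version query against each:

```powershell
# A sketch: pull the members of a CMS group from msdb, then run a version
# query against each instance, appending results to a file. Requires the
# sqlps module/snap-in; CMSSERVER, 'Prod', and the path are placeholders.
$cms     = 'CMSSERVER'
$group   = 'Prod'
$outFile = 'C:\Temp\InstanceVersions.txt'

$instances = Invoke-Sqlcmd -ServerInstance $cms -Database msdb -Query @"
SELECT s.server_name
FROM   msdb.dbo.sysmanagement_shared_registered_servers s
JOIN   msdb.dbo.sysmanagement_shared_server_groups g
  ON   s.server_group_id = g.server_group_id
WHERE  g.name = N'$group'
"@

foreach ($i in $instances) {
    Invoke-Sqlcmd -ServerInstance $i.server_name `
        -Query 'SELECT @@SERVERNAME AS Instance, @@VERSION AS Version' |
        Out-File -FilePath $outFile -Append
}
```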
OK, so that is interesting: you have the SQL Server instance version results for all the instances registered in the defined CMS group in a file on your hard drive. But what you really want is to collect several different properties about your server instances and store that data such that you can query (report on) it later.
For this case, let’s assume that you have a database called DBAdmin on the same instance as your CMS – this is also your “repository instance”. In the DBAdmin database create the following table:
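The exact column list is up to you; a minimal guess at one (adjust it to match whatever you decide to collect) could be created like this:

```powershell
# A possible repository table -- the columns are an assumption; match them
# to whichever SMO properties you collect in the script further below.
Invoke-Sqlcmd -ServerInstance 'CMSSERVER' -Database DBAdmin -Query @"
CREATE TABLE dbo.InstanceConfig (
    InstanceName   sysname      NOT NULL,
    VersionString  nvarchar(32) NULL,
    Edition        nvarchar(64) NULL,
    Collation      nvarchar(64) NULL,
    IsClustered    bit          NULL,
    WebXPs         int          NULL,
    CollectionDate datetime     NOT NULL DEFAULT (GETDATE())
);
"@
```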
You do need to be aware, in multi-versioned environments, that some properties are not available in all versions. Thus you’ll need to handle null values returned by the SMO properties. For these properties, you will want to capture the value in a variable prior to building the insert statement, as shown by the $WebXPs variable in the sample code below.
Now, loop through the CMS group as before and collect the instance level information using SMO and store the results in the repository.
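Here is a minimal sketch of that loop – the CMS name, group name, and the dbo.InstanceConfig table from above are placeholders, and $WebXPs demonstrates the null-guard pattern (I’m assuming SMO’s WebXPsEnabled configuration property, i.e. the “Web Assistant Procedures” option that was removed in SQL 2008, as the version-dependent example):

```powershell
# A minimal sketch of the collection loop. $WebXPs captures a
# version-dependent SMO property into a variable first, so a null doesn't
# break the INSERT string. All names are placeholders.
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')

$cms   = 'CMSSERVER'
$group = 'Prod'

$instances = Invoke-Sqlcmd -ServerInstance $cms -Database msdb -Query @"
SELECT s.server_name
FROM   msdb.dbo.sysmanagement_shared_registered_servers s
JOIN   msdb.dbo.sysmanagement_shared_server_groups g
  ON   s.server_group_id = g.server_group_id
WHERE  g.name = N'$group'
"@

foreach ($i in $instances) {
    $srv = New-Object Microsoft.SqlServer.Management.Smo.Server $i.server_name

    # Assumed property name: WebXPsEnabled ("Web Assistant Procedures");
    # it yields nothing on versions where the option doesn't exist.
    $WebXPs = $srv.Configuration.WebXPsEnabled.RunValue
    if ($null -eq $WebXPs) { $WebXPs = 'NULL' }

    Invoke-Sqlcmd -ServerInstance $cms -Database DBAdmin -Query @"
INSERT INTO dbo.InstanceConfig
    (InstanceName, VersionString, Edition, Collation, IsClustered, WebXPs)
VALUES
    (N'$($srv.Name)', N'$($srv.Information.VersionString)',
     N'$($srv.Information.Edition)', N'$($srv.Information.Collation)',
     $([int]$srv.Information.IsClustered), $WebXPs);
"@
}
```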
Hopefully, you see the possibilities for extending beyond the properties shown here.
So now all that remains is to save your code into a PowerShell script and schedule it to run from your repository instance’s SQLAgent to collect your SQL Server configuration data on a regular basis. Don’t forget to decide how much historical data you want to keep and set up a step in your job to purge the repository of old data. And finally, in case you didn’t think of this already, you’ll need to schedule the job steps running the PowerShell script to execute as an account which has the necessary access to all of the registered servers in the CMS Group.
So, why use CMS Groups for this task? I think this gives you more flexibility in setting the scope of your collection scripts without requiring you to change any code within those scripts once you’ve finalized them. The only thing you need to manage is the CMS Groups, which can be manual or automated or both, as discussed in my blog entry Managing the Enterprise with CMS. It allows you to use multiple repositories which run the same scripts but against different CMS groups (for example regionally or by customer).
Finally, this is sample code to demonstrate what is possible. It should work, but use at your own risk and test, test, test in your own environment!
In Microsoft SQL Server 2008, a feature called Central Management Server (CMS) was added to enable the execution of T-SQL commands and Policy-Based Management policy evaluation on multiple servers at one time. As someone working in a large enterprise environment, I’m constantly looking for features which enable better, more efficient ways to manage hundreds of servers within a team support environment. While the minimum version required to act as a CMS is SQL 2008, you can register lower-version instances. Now we are talking – because how frustrating is it to have a new feature available, but virtually useless, until you have a majority of systems on the new version? So, you need only one SQL Server 2008 instance and you can start taking advantage of this new feature to manage the SQL 2000 and SQL 2005 instances already in place.
However, if you read BOL (Books Online), you’ll see that the instructions for adding groups and instances are very manual, done via the SQL 2008 SSMS (SQL Server Management Studio). Did I mention that we have hundreds of instances to manage? And did you catch the reference to “groups”? BOL doesn’t say so explicitly (at least not that I’ve found), but an instance can belong to more than one group. This opens up a whole plethora of possibilities for using the CMS. But first we must determine how to automatically create, populate, and maintain the groups.
A little further investigation reveals that there are now a couple of tables in the msdb database, dbo.sysmanagement_shared_server_groups and dbo.sysmanagement_shared_registered_servers, which are designed to contain the groups and registered servers for the CMS. So now I can bypass the manual registration in SSMS and write a PowerShell script (what else?) to create the desired groups and populate them from our central asset database based on whatever criteria I want. Then I can schedule that script as a SQLAgent job to ensure the CMS groups are refreshed and stay relatively in sync with the asset database (considered “the truth”).
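Rather than writing to those tables directly, msdb also provides stored procedures for this; a sketch of creating a group and registering one instance might look like the following (the group and instance names are placeholders, and the parameter names are my reading of the SQL 2008 procedures – verify them in your own msdb before relying on this):

```powershell
# A sketch using the msdb sp_sysmanagement_* procedures -- names are
# placeholders and the parameter list is assumed from SQL 2008.
Invoke-Sqlcmd -ServerInstance 'CMSSERVER' -Database msdb -Query @"
DECLARE @group_id int, @server_id int;

EXEC msdb.dbo.sp_sysmanagement_add_shared_server_group
     @name            = N'AM_2005',
     @description     = N'Americas SQL 2005 instances',
     @parent_id       = 1,   -- assumed: 1 = root DatabaseEngineServerGroup
     @server_type     = 0,   -- Database Engine
     @server_group_id = @group_id OUTPUT;

EXEC msdb.dbo.sp_sysmanagement_add_shared_registered_server
     @name                 = N'SERVER01\INST1',
     @server_group_id      = @group_id,
     @server_name          = N'SERVER01\INST1',
     @description          = N'',
     @server_type          = 0,
     @registered_server_id = @server_id OUTPUT;
"@
```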
The msdb database in SQL 2008 also contains two new roles: ServerGroupAdministratorRole and ServerGroupReaderRole. If you want someone other than the Windows Authenticated logins already in the sysadmin role on the CMS instance to manage your CMS, then you can assign their Windows login to the ServerGroupAdministratorRole. Similarly, if you want others to use the registered groups and servers in this CMS, then their Windows login must be assigned to the ServerGroupReaderRole on the CMS instance.
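Granting that access is quick; for example (a sketch – the domain group is a placeholder, and it assumes the Windows login already exists on the CMS instance):

```powershell
# Give a (placeholder) Windows group read access to the CMS registrations.
# sp_addrolemember is used because ALTER ROLE ... ADD MEMBER did not arrive
# until SQL Server 2012.
Invoke-Sqlcmd -ServerInstance 'CMSSERVER' -Database msdb -Query @"
CREATE USER [DOMAIN\DBATeam] FOR LOGIN [DOMAIN\DBATeam];
EXEC sp_addrolemember N'ServerGroupReaderRole', N'DOMAIN\DBATeam';
"@
```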
BOL (2008) states that the CMS server cannot be a member of a group that it maintains – I haven’t figured out why yet. The workaround is to register the CMS instance in its own CMS group as servername\instancename,port instead of just servername\instancename. This seems to work just fine, but be aware of the caveat.
SQL Server 2008 R2 did not provide any changes to the CMS feature, so you should be able to use either SQL 2008 or SQL 2008 R2 as your CMS. I’ve had no problems using SSMS 2008 with a CMS based in SQL 2008 R2 or using SSMS 2008 R2 with a CMS based in SQL 2008.
The first key to using the CMS is to determine what groups make the most sense for your organization. Depending upon what data is available in your asset database, the choices are almost limitless. For instance, you might choose groups based on region alone, or on region plus version for another combination. Thus, you might have groups which look similar to AM, AP, EU, AM_2000, AM_2005, AM_2008, SQL2000, SQL2005, SQL2008, etc. You could also have groups based on classifications such as DEV, QA, and PROD, or even on application landscapes such as BizTalk and SharePoint.
Some things to consider when setting up your CMS include connectivity and security. The CMS is not magic – although it might seem like it the first time you perform a multiple-instance query! The CMS does not store authentication credentials, only the server connection info. Thus, when using SSMS to execute T-SQL or policies against a group, Windows Authentication is used. The authenticated account will have no different privileges than if you had connected to each instance in an individual query window using Windows Authentication. You’ll actually see in the status bar of the Query Window in SSMS how many instances in the group were successfully connected (e.g. 9 of 10 or 10 of 10). Therefore, even if you connect successfully to every instance in the group, if your login has differing privileges on the instances, then your results may differ as well, depending upon the privileges required to execute the query submitted. So far, this works exactly like Registered Servers in your individual SSMS; however, the benefit is the ability to access the CMS groups and registered servers from anywhere in your organization, by anyone (with the proper access, of course!). No longer do you have to keep re-registering your servers – and your whole team now has access to the same groups and registered servers.
On the connectivity side, if a CMS group contains hundreds of instances, especially if they are regionally dispersed, it may take a while to establish the connection to all of them, so be patient – or consider further dividing your groups into smaller subsets. If an instance is offline, or there is a connection timeout, or you do not have permissions, then an error/warning will be reported for that instance when you try to execute a query in SSMS against the group; results will still be returned for all other instances.
Also, be very, very, very careful when executing anything other than a SELECT statement – always check the status bar in the Query Editor window to validate the number of instances to which you are connected, which group you are executing against, who you are executing as, and to which database you are connected by default.
So, for a very simple example of what it can help you do, suppose you have the version stored in your asset database and use that to build version groups, but you aren't 100% sure these are correct. In SSMS, you can right-click the version group under Registered Servers and select "Run Query". Then you can execute the query "Select @@version" or "Select SERVERPROPERTY('ProductVersion')". You can quickly scan the results for any instances which do not return the expected version.
Hopefully, this provides a good introduction to CMS and has started you thinking about how you can use it outside of SSMS…which will be the subject of my next post!