
TSQL2sday #70 – Strategies for Managing an Enterprise


Jen McCown (Twitter) of Midnight DBA is the guest host for this month’s SQL blogger event known as T-SQL Tuesday (#TSQL2sday) which was started almost 6 years ago by Adam Machanic. This month, Jen has assigned us the topic: Strategies for Managing an Enterprise. Jen will be doing a wrap-up summary of all blog posts submitted on this topic per the rules and I’m looking forward to everyone’s input on this subject.

I’ve been presenting a session for the past several years at SQLSaturday events entitled “Managing SQL Server in the Enterprise with TLAs”. The TLAs (three-letter acronyms) are CMS (Central Management Server), PBM (Policy Based Management) and EPM (Enterprise Policy Management Framework). I’ll be presenting this session at SQLSaturday #447 Dallas on Oct. 3rd, 2015, so you can come learn the details of these features then. But, per the assigned topic for this post, let’s focus on the “strategies” driving the usage of these features.

For me, one of the main goals in managing the enterprise is finding ways to reduce the effort of managing that landscape – whether it's two instances of SQL Server or two thousand. A strategy for getting there is organization. The CMS enables you to define groups, register your SQL Server instances to them, and then perform tasks against those groups. Why perform a task per instance when you can do it for multiple instances at one time? The CMS itself is defined in tables in the msdb database of the designated instance. I recommend having a dedicated "central management server" instance which you use for CMS, PBM, EPM, and other administrative tasks.

With CMS, you can create many groups and register instances in multiple groups based on the tasks that you may want to perform against those groups. For example, you can create groups organized by SQL Server version, by Production\UA\QA\Dev\Test, by application, or by location – and be sure to have one group with all your SQL Server instances registered to it. SQL Server Management Studio (SSMS) enables you to run "multi-instance" queries using a CMS group. That is, you execute the contents of the Query window against every server instance in the specified group and the results are returned to the SSMS console.

A second strategy in managing the enterprise is standardization. Policy Based Management enables you to define expected settings (i.e. conditions) and verify whether an instance of SQL Server meets those conditions. Examples of policies could be checking that the sa login is disabled or ensuring the AUTO_SHRINK option is off on all databases. My recommendation is to configure the policies on the same instance as your CMS groups (i.e. your dedicated central management server) so that you only have to manage one set of policies. Policy definitions are also stored in the msdb database. You will also want to export the policies, as XML-formatted files, to a central file server. When evaluating the policies on a specific instance, you may use either the central management SQL Server instance or the file server where they are stored as the source. SSMS also allows you to manually evaluate policies against a CMS group – returning all the results to your SSMS console.
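To give a flavor of how this looks outside the GUI, here's a minimal sketch (the policy file path and instance name are placeholders) using the Invoke-PolicyEvaluation cmdlet that ships with the SQL Server PowerShell module:

```powershell
# Minimal sketch: evaluate one exported policy against one instance.
# Assumes the SQL Server PowerShell module is available ("SqlServer" today,
# SQLPS on older installs); the path and instance name are placeholders.
Import-Module SqlServer

Invoke-PolicyEvaluation -Policy "\\FileServer\Policies\AutoShrinkIsOff.xml" `
                        -TargetServerName "MYSERVER\PROD01"
```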

The third strategy is automation. If you have a CMDB (Configuration Management Database), then you can utilize it as the source for populating your CMS groups: script the entire process of keeping your CMS groups current with the CMDB contents and set it up as a SQLAgent job to run on whatever schedule you need. Policies can be assigned to categories, and the EPM Framework provides a mechanism (a PowerShell script) to automate the PBM evaluations by category against a specific CMS group and store the results for reporting. EPM requires a database repository to store the results, so again I recommend creating this database on a dedicated central management server. Once you've been through the exercise of setting up your CMS, establishing policies, and configuring the EPM Framework for your environment, you'll see additional opportunities to utilize the CMS for automating other tasks.
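As a rough illustration of what that automation looks like (a simplified sketch of the idea only, not the actual EPM Framework script; the server, group, and path names are placeholders), you could loop through a CMS group and evaluate every exported policy against each member:

```powershell
# Simplified sketch of the idea only -- not the actual EPM Framework script.
# The CMS instance, group name, and paths are placeholders.
Import-Module SqlServer

$cms       = "CMSSERVER\ADMIN"
$groupName = "Production"
$policyDir = "\\FileServer\Policies"

# Pull the members of one CMS group from the msdb tables backing the CMS
$servers = Invoke-Sqlcmd -ServerInstance $cms -Database msdb -Query "
    SELECT s.server_name
    FROM   dbo.sysmanagement_shared_registered_servers s
    JOIN   dbo.sysmanagement_shared_server_groups g
           ON s.server_group_id = g.server_group_id
    WHERE  g.name = N'$groupName';"

# Evaluate every exported policy against every member and keep the XML results
foreach ($s in $servers) {
    foreach ($p in Get-ChildItem -Path $policyDir -Filter *.xml) {
        $outFile = "C:\EPM\{0}_{1}.xml" -f ($s.server_name -replace '[\\,]', '_'), $p.BaseName
        Invoke-PolicyEvaluation -Policy $p.FullName `
                                -TargetServerName $s.server_name `
                                -OutputXml | Out-File -FilePath $outFile
    }
}
```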

So, start leveraging the CMS, PBM, and EPM features today to reduce your efforts by organizing your instances, increasing standardization, and automating tasks in your enterprise!


TSQL2sday #32 – A Day in the Life

TSQL2sday is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Erin Stellato (blog | twitter) and the selected theme for this month is “A Day in the Life”.

Erin challenged us to track what we did in our jobs for a specific day and write about it. This is great because I often have trouble explaining to others (especially non-IT folk) what my title of SQL Server Service Engineer really means. However, as this exercise is only supposed to cover a single day, it's just a small sample of what I do. There is no such thing as a "normal" day for me. Sometimes my tasks are based on the "crisis du jour" prioritization method, and sometimes I can actually follow the team work plan. The variety in my job is one of the things I like about it. So here goes…

Unless I have an early morning meeting with global colleagues, my day nearly always begins with processing email. Since I work in a global organization in the region whose workday is last to begin, even if I'd cleared my Inbox the day before, I always open my mailbox to encounter new emails from European and Asia-Pacific colleagues who have already completed or are wrapping up their workday. In that sense, this day starts out as just a normal day (no early meetings!).

Unfortunately for this write-up, it appears that summertime may be impacting my email load in a positive sense, as I have only a handful of emails and only as a cc on a couple of subjects which one of my teammates is handling. One of the issues has to do with deploying SQL Server Enterprise Edition versus Standard Edition and the licensing implications for the customer. My team is made up of technical experts – we can tell the customer if what they are trying to do requires a specific edition of SQL Server to use the requested feature, but we are not involved in the licensing agreements between Microsoft and each customer. That is for others to figure out!

Email done and no looming crisis for today, I can get back to the task I've been working on previously – writing an automated process to roll out multiple T-SQL scripts to multiple instances using PowerShell. These are the scripts which update the standard tables and stored procedures in the admin database we install on all instances, along with a set of SQLAgent jobs which the operational DBAs use for system maintenance. Every so often, we need to roll out updates to these objects. Our current automated process for doing this (which was developed for SQL 2005) isn't as automated as we'd like it to be. We have since created a CMS and are utilizing registered groups to run various processes (like EPM) and now want to extend that concept to this activity as well. I'm thinking within a couple of hours I can write a script to save our operational DBAs literally hundreds of man-hours. Easy, right?

If you've worked with PowerShell at all – or any programming language for that matter – you know there is always more than one way to write a process to accomplish the task at hand. The challenge is in finding the most efficient way that gives you what you want. Our old script to run a set of .sql files was written in VBScript and called the sqlcmd utility. I figured that since I'm writing this in PowerShell and already using Invoke-Sqlcmd to get the list of instances from the CMS, I could use the Invoke-Sqlcmd cmdlet as shown in the second example in BOL and it would work just like sqlcmd. Wrong! It seems that example only works if you are running a SELECT statement in your InputFile. This particular set of .sql files should have no output unless there is an error, and in my test I have a script which I know produces an error – but my output file is empty.

I try various parameters such as -ErrorLevel and -SeverityLevel, and I even use -Verbose, to no avail – still nothing is piped to my output file. I consult with my teammates to see if they've tried this before; I search for examples on the Internet, and the best I can find in one of the forums was someone else encountering the same thing, but with no solution for me. I can be stubborn sometimes and I'm not about to give up yet – after a couple of hours of struggling, I fire off an email to my SQL PowerShell buddy Allen White (blog | twitter) asking for his input: can I do what I'm trying to do with Invoke-Sqlcmd, or should I revert to calling sqlcmd?

While waiting for Allen to respond, a couple more emails have hit my Inbox. Yay! It appears that our request to rebuild one of our team's test servers has been completed. We try not to do this too often, but part of engineering is writing scripts \ installing \ testing \ uninstalling \ enhancing scripts…repeat; over the course of time, sometimes things get so messed up from all the testing (and the occasional bad script) that you just have to start over with a clean image. This is now a box we plan to use for testing our processes on SQL Server 2012.

It doesn’t take long before I have a reply from Allen – I hope he doesn’t mind if I quote him:

I honestly believe that it’s best to use the tool that best suits the task, so I’d use sqlcmd here, because it works the way you want it to. 

Thanks, Allen, for the reminder not to use a hammer when a screwdriver is what you need! Sometimes a hammer is all you have, but not in this case.

Now, it's time for lunch. I head down to the cafeteria with my teammates and join other colleagues at our usual table. I don't hang around too long chit-chatting as I want to get back to my desk, switch out my code, and test so I can announce success at our afternoon team meeting.

Remember what I said earlier about there being more than one way to do something? Now I have to decide how to go about calling sqlcmd.exe from PowerShell. I need to supply variables for all the parameters based on the target instance and input file to execute – and the output filename and location are dynamically determined as well, based on the target instance and input filename. I start by looking at Invoke-Command, then move to Invoke-Expression, but I'm still not getting my output file like I want it, and I'm not able to detect whether sqlcmd experienced an error to report in my general execution log. I have an example using [diagnostics.process]::start($exe,$args).WaitForExit() that seems to be getting me close to what I want, but now it is time to break for my afternoon meeting.
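For the curious, the rough shape of what I had at that point looked something like this (the paths, names, and logging scheme here are placeholders, not our actual rollout scripts):

```powershell
# Rough shape of the attempt; paths, names, and the logging scheme are placeholders
$exe     = "sqlcmd.exe"
$server  = "MYSERVER\INST01"
$inFile  = "C:\Rollout\Scripts\UpdateAdminDb.sql"
$outFile = "C:\Rollout\Output\MYSERVER_INST01_UpdateAdminDb.out"

# -b makes sqlcmd return a non-zero exit code on error, so the wrapper can
# detect the failure and note it in the general execution log
$argList = "-S `"$server`" -E -b -i `"$inFile`" -o `"$outFile`""

$proc = [System.Diagnostics.Process]::Start($exe, $argList)
$proc.WaitForExit()

if ($proc.ExitCode -ne 0) {
    Add-Content -Path "C:\Rollout\Output\ExecutionLog.txt" `
                -Value "$server : $inFile failed (exit code $($proc.ExitCode))"
}
```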

I'm the Technical Team Lead for a team of three. We each have our areas of specialization within the overall work plan, but we try to keep each other in the loop so we can back each other up at any time. As needed (usually every 1-2 weeks), we meet formally to update the work plan, assign/reassign new/old tasks if needed, catch each other up on what we've each been working on, and brainstorm areas for improvement. This is one of those meetings, and since last week was a holiday week and we didn't meet, we have a lot to catch up on. The nice thing about a team is having others to bounce ideas off of, and this is what I do with my frustration in finding the exact syntax I need to get the results I want from calling sqlcmd inside PowerShell. The next thing I know, one of my colleagues has done their own search and found a code example – I look and express skepticism, as it is very much like what I'm already doing, but with one key difference that just might matter; what can it hurt to try?

We continue to discuss how far we want to take this initial rewrite of our update process. We are also in the process of redesigning our whole automated install process, and ultimately we want the update process to utilize what we are putting into place there. However, we have a more immediate need to have the operations team roll out some updates, so we decide that version 1 of the update process will do no more than what we already have in place today (in terms of reporting), but it will be automated such that the DBAs only need to review the central output file for any problems. Selection of the systems requiring an update into a special CMS group can be done in an automated fashion, as can scheduling the update itself in SQLAgent. We decide to make further enhancements, like logging the process's results to a central table, in a future version.

Our meeting continues with more brainstorming about the consequences of developing an install and configuration solution for SQL Server which can account for multiple versions and differing customer standards (e.g. install locations). We plot out on the whiteboard different ways we can handle this – probably the umpteenth discussion like this that we've had, but each time we come in with new experiences and thoughts from what we decided previously (and in some cases started trying to implement), so we are continually refining the solution. We are becoming more confident that we are developing a standardized but flexible solution which is also more sustainable across multiple versions of SQL Server than our existing process.

The meeting concludes and although I’m anxious to try the code snippet my colleague found, it is really time for me to head home. I arrived at the office much earlier this morning than my normal start time trying to beat the rain and now I need to try to get home before the next round hits. There is some flooding already occurring around town. Working further on this script can wait until later. I know that once I do get started back on it, I won’t stop until I have it totally finished. That’s my life!

I probably learned more today in trying all the ways that didn’t work the way I thought they would than if the first code I tried had worked. This experience will pay off later, I know.

Today was an “Edison day”:

I am not discouraged, because every wrong attempt discarded is another step forward.

I have not failed. I’ve just found 10,000 ways that didn’t work.


P.S. I did finally get the script functioning the way I wanted the following day and it will save our operations team hundreds and maybe even thousands of hours. This is what I do!

TSQL2sday #015 – Automation in SQL Server

What is T-SQL Tuesday (twitter hashtag: #TSQL2sday)? Back in late 2009, Adam Machanic (blog | twitter) had this brilliant idea for a monthly SQL Server blogger event (the origin of T-SQL Tuesday) hosted each month by a different blogger.  This month’s event is hosted by Pat Wright (blog | twitter) and his selected topic is “Automation in SQL Server”.

If you’ve read my home page then you know the topic of automation is my primary focus – so this topic for TSQL2sday got me excited as it will also definitely help me towards my “techie” goals posted during TSQL2sday #014!

I think everyone knows why we automate – to improve efficiency and reduce errors/waste; what we automate – repetitive tasks; and when we automate – when the cost of implementing automation is less than that of continuing the "manual" method (calculate the ROI). So – where and how should we apply automation in the SQL Server world?

Where? Installations, of course! As hopefully you are aware, there is more to installing SQL Server than just clicking next, next, next, … finish! Sure, SQL Server will be up and running if you do that – but will it meet your corporate standards? Plus, there really is more you should do (configuration-wise) after running setup.exe. But the details of that are for a future post! Right now, I want to focus on the how of automating the installation and configuration process.

SQL Server provides the ability to specify a configuration file containing the parameters required by setup.exe so that you can run an "unattended" installation. This has been available as far back as I can remember (and although I've worked with SQL Server since the 4.21 days, I only confess to remembering SQL 2000 forward!). Each version of SQL Server has a slightly different configuration file, so be sure to refer to the current BOL for the version you are implementing (although hopefully very few of you still have to do SQL 2000 installs!).
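To give a feel for it, a minimal unattended install can be as short as this (the paths are placeholders; verify the exact switch list in BOL for your version):

```powershell
# Illustrative only -- verify the switches against BOL for your version.
# A handy way to get a starting ConfigurationFile.ini is to step through the
# setup wizard once; recent versions of setup write the file under the
# Setup Bootstrap log folder.
& "D:\SQLMedia\setup.exe" /ConfigurationFile="C:\Install\ConfigurationFile.ini" /Q
```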

If you want to take your deployment automation even another step forward, SQL Server 2008 R2 provides the ability to SysPrep your SQL Server image – http://technet.microsoft.com/en-us/library/ee210664.aspx.  I haven’t personally tried this, but depending on your environment you may want to check out this feature.

Once you have your unattended installation files configured so that all your deployments install exactly the same way, the possibility of mental errors by a well-meaning human (including yourself!) who simply misses changing a default on a screen to your corporate standard is greatly reduced. But you probably want to further automate your post-install configurations, such as changing the default size and number of files allocated to tempdb, setting up your standard maintenance jobs, etc. Or maybe you want to run some "pre-install" steps before executing the SQL Server setup (like ensuring PowerShell 2 is installed). So how do you package all this up for ultimate flexibility and reduce the possibility of missing a script?
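As one hedged example of the kind of post-install step you'd script (the instance name, sizes, and paths here are placeholder standards, not recommendations), tempdb might be adjusted like so:

```powershell
# Placeholder standards only: resize tempdb and add a second data file post-install
$postConfig = @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2,
    FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
"@

Invoke-Sqlcmd -ServerInstance "MYSERVER\INST01" -Query $postConfig
```

Multiply that by all your pre-install checks, configuration scripts, and maintenance jobs, and the case for packaging it all up becomes clear. Enter DBLaunch.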

DBLaunch is a product from DBA24Hrs for helping DBAs with automating installation, configuration, and maintenance tasks. From an installation task perspective, DBLaunch allows you to define all of your setup files (including service packs) and "configuration" scripts, whether batch files, PowerShell scripts, or T-SQL scripts. Then you create "templates" for each distinct installation type that you need – SQL Server 2005 x86 Standard Edition w/SP4, SQL Server 2008 Enterprise Edition w/SSIS, etc. You can clone templates in cases where you want the same overall configuration but need, for example, a different collation setting.

The benefits which I personally witnessed? Documentation of the steps for an installation was reduced by about 75%! Personnel time to perform an install dropped from 4 hours (including wait time for processes to complete before continuing to the next step) to 10-15 minutes! After running manual pre-checks (mostly to assure storage is correctly configured), all the installer has to do in DBLaunch is select the template to use, specify the ServerName and InstanceName to be installed, and click "start". The installer is then free to go do another task (including starting another installation on a different server), come back in 45 minutes to an hour (depending on target server location and bandwidth to the installation source), and verify that the install completed successfully. Prior to DBLaunch, it could take 2-4 hours to manually execute our entire installation process. And we are assured that no steps were missed, which almost entirely eliminated the time spent troubleshooting failed installations and ultimately reduced our delivery time to the customer.

If you are in an enterprise environment, then you absolutely must use automated means to ensure a consistently delivered installation and configuration.  Start with developing your unattended installation configuration file, then look to see what other processes in your configuration can be automated via scripts, then take a look at DBLaunch to see if it can help you pull it all together.  I’m not aware of another tool in the marketplace which does this (and more) – we justified it strictly on the ROI from installations.

So, regardless of whether you build your own deployment tool or go with a third-party product, bottom line is that you will ultimately save time, money and frustration by automating your installs!

Taking Advantage of the CMS to Collect Configuration Data

In an earlier blog entry, I showed you how you could build and populate your CMS groups to use with SSMS to run multi-server queries. Okay, so now you are probably thinking of lots of configuration-type info which you'd like to collect periodically from your instances. Obviously, you don't want to have to do this manually in an SSMS Query Editor window all the time; you want to automate it, right? You can probably guess where I'm headed next… Yep – PowerShell!
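Something along these lines (the CMS instance, group name, and output path are placeholders) pulls the members of a CMS group from msdb and runs a version query against each one:

```powershell
# Sketch: pull a CMS group's members from msdb, then collect each instance's
# version to a file. CMS instance, group name, and output path are placeholders.
Import-Module SqlServer

$cms   = "CMSSERVER\ADMIN"
$group = "AllServers"

$servers = Invoke-Sqlcmd -ServerInstance $cms -Database msdb -Query "
    SELECT s.server_name
    FROM   dbo.sysmanagement_shared_registered_servers s
    JOIN   dbo.sysmanagement_shared_server_groups g
           ON s.server_group_id = g.server_group_id
    WHERE  g.name = N'$group';"

foreach ($s in $servers) {
    Invoke-Sqlcmd -ServerInstance $s.server_name `
                  -Query "SELECT @@SERVERNAME AS ServerName, SERVERPROPERTY('ProductVersion') AS Version;" |
        Out-File -FilePath "C:\Temp\InstanceVersions.txt" -Append
}
```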

OK, so that is interesting: you have the SQL Server instance version results for all the instances registered in the defined CMS group in a file on your hard drive. But what you really want to do is collect several different properties about your server instances and store that data such that you can query (report on) it later.

For this case, let’s assume that you have a database called DBAdmin on the same instance as your CMS – this is also your “repository instance”.  In the DBAdmin database create the following table:
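Here's a guess at a minimal version of that table (the exact column list is an assumption; extend it to match the properties you want to collect):

```powershell
# A guess at a minimal repository table -- extend the columns to suit your needs
$ddl = @"
CREATE TABLE dbo.InstanceConfig (
    ServerName     sysname      NOT NULL,
    VersionString  nvarchar(32) NULL,
    Edition        nvarchar(64) NULL,
    ProductLevel   nvarchar(32) NULL,
    Collation      nvarchar(64) NULL,
    IsClustered    bit          NULL,
    WebXPsEnabled  bit          NULL,  -- NULL where the property is unavailable
    CollectionDate datetime     NOT NULL DEFAULT GETDATE()
);
"@

Invoke-Sqlcmd -ServerInstance "CMSSERVER\ADMIN" -Database DBAdmin -Query $ddl
```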

You do need to be aware in multi-versioned environments that some properties are not available in all versions. Thus, you'll need to be able to handle null values returned by the SMO properties. For these properties, you will want to capture the value in a variable prior to building the insert statement, as shown by the $WebXPs variable in the sample code below.

Now, loop through the CMS group as before and collect the instance level information using SMO and store the results in the repository.
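A sketch of that loop, reusing $cms and $servers from the first snippet (the "Web Assistant Procedures" lookup here stands in for whichever version-dependent property you need to guard against):

```powershell
# Sketch of the collection loop; reuses $cms and $servers from the earlier snippet
foreach ($s in $servers) {
    $smo = New-Object Microsoft.SqlServer.Management.Smo.Server $s.server_name

    # Version-dependent properties may be absent; capture the value first so a
    # missing property becomes a SQL NULL instead of breaking the INSERT
    $prop   = $smo.Configuration.Properties |
              Where-Object { $_.DisplayName -eq "Web Assistant Procedures" }
    $WebXPs = if ($prop) { $prop.RunValue } else { "NULL" }

    $insert = "INSERT dbo.InstanceConfig
                   (ServerName, VersionString, Edition, ProductLevel,
                    Collation, IsClustered, WebXPsEnabled)
               VALUES (N'$($smo.Name)', N'$($smo.VersionString)', N'$($smo.Edition)',
                       N'$($smo.ProductLevel)', N'$($smo.Collation)',
                       $([int]$smo.IsClustered), $WebXPs);"

    Invoke-Sqlcmd -ServerInstance $cms -Database DBAdmin -Query $insert
}
```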

Hopefully, you see the possibilities for extending beyond the properties shown here.

So now all that remains is to save your code into a PowerShell script and schedule it to run from your repository instance’s SQLAgent to collect your SQL Server configuration data on a regular basis. Don’t forget to decide how much historical data you want to keep and set up a step in your job to purge the repository of old data.  And finally, in case you didn’t think of this already, you’ll need to schedule the job steps running the PowerShell script to execute as an account which has the necessary access to all of the registered servers in the CMS Group.
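The purge step itself can be as simple as something like this (the six-month retention is an arbitrary placeholder):

```powershell
# Arbitrary six-month retention -- adjust to your own requirements
Invoke-Sqlcmd -ServerInstance $cms -Database DBAdmin -Query "
    DELETE dbo.InstanceConfig
    WHERE  CollectionDate < DATEADD(MONTH, -6, GETDATE());"
```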

So, why use CMS Groups for this task?  I think this gives you more flexibility in setting the scope of your collection scripts without requiring you to change any code within those scripts once you’ve finalized them.  The only thing you need to manage is the CMS Groups, which can be manual or automated or both, as discussed in my blog entry Managing the Enterprise with CMS.  It allows you to use multiple repositories which run the same scripts but against different CMS groups (for example regionally or by customer).

Finally, this is sample code to demonstrate what is possible. It should work, but use at your own risk and test, test, test in your own environment!

Managing the Enterprise with CMS

In Microsoft SQL Server 2008, a feature called Central Management Server (CMS) was added to enable the execution of T-SQL commands and Policy Based Management policy evaluation on multiple servers at one time. As someone working in a large enterprise environment, I'm constantly looking for features which enable better, more efficient ways to manage hundreds of servers within a team support environment. While the minimum version required to act as a CMS is SQL 2008, you can register instances of earlier versions. Now we are talking – because how frustrating is it to have a new feature available, but virtually useless, until you have a majority of systems on the new version? So you need only one SQL Server 2008 instance, and you can start taking advantage of this new feature to manage the SQL 2000 and SQL 2005 instances already in place.

However, if you read BOL (Books Online), then you'll see that the instructions for adding groups and instances are very manual and done via the SQL 2008 SSMS (SQL Server Management Studio). Did I mention that we have hundreds of instances to manage? And did you catch the reference to "groups"? It doesn't say so explicitly in BOL (at least not that I've found), but an instance can belong to more than one group. This opens up a plethora of possibilities for using the CMS. But first we must determine how to automatically create the groups, populate them, and maintain them.

A little further investigation reveals that there are now a couple of tables in the msdb database, named dbo.sysmanagement_shared_server_groups and dbo.sysmanagement_shared_registered_servers, which are designed to contain the groups and registered servers for the CMS. So now I can bypass the manual registration in SSMS and write a PowerShell script (what else?) to create the desired groups and populate them from our central asset database based on whatever criteria I want. Then I can schedule that script as a SQLAgent job to ensure the CMS groups are refreshed and kept relatively in sync with the asset database (considered "the truth").
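As a hedged sketch of the registration step (the stored procedure names and parameters below are from memory, so verify them against your own msdb before relying on this), creating a group and registering one instance into it might look like:

```powershell
# Sketch only: verify the sproc names and parameters against your msdb.
# @parent_id = 1 assumes the Database Engine root group; confirm the actual id
# in dbo.sysmanagement_shared_server_groups on your CMS instance.
$cms = "CMSSERVER\ADMIN"

Invoke-Sqlcmd -ServerInstance $cms -Database msdb -Query "
    DECLARE @gid int, @rid int;

    EXEC dbo.sp_sysmanagement_add_shared_server_group
         @name = N'SQL2005', @parent_id = 1, @server_type = 0,
         @server_group_id = @gid OUTPUT;

    EXEC dbo.sp_sysmanagement_add_shared_registered_server
         @name = N'MYSERVER\INST01', @server_group_id = @gid,
         @server_name = N'MYSERVER\INST01', @description = N'',
         @server_type = 0, @registered_server_id = @rid OUTPUT;"
```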

The msdb database in SQL 2008 also contains two new roles: ServerGroupAdministratorRole and ServerGroupReaderRole.  If you want someone other than the Windows Authenticated logins already in the sysadmin role on the CMS instance to manage your CMS, then you can assign their Windows login to the ServerGroupAdministratorRole.  Similarly, if you want others to use the registered groups and servers in this CMS, then their Windows login must be assigned to the ServerGroupReaderRole on the CMS instance.

BOL (2008) states that the CMS server cannot be a member of a group that it maintains; I haven't figured out why yet. The workaround is to register the CMS server itself as servername\instancename, but register it as servername\instancename,port in its CMS group. This seems to work just fine, but be aware of this caveat.

SQL Server 2008 R2 did not provide any changes to the CMS feature, so you should be able to use either SQL 2008 or SQL 2008 R2 as your CMS.  I’ve had no problems using SSMS 2008 with a CMS based in SQL 2008 R2 or using SSMS 2008 R2 with a CMS based in SQL 2008.

The first key to using CMS is to determine what groups make the most sense for your organization. Depending upon what data is available in your asset database, the choices are almost limitless. For instance, you might choose groups based on region alone, or on region plus version for another combination. Thus, you might have groups which look similar to AM, AP, EU, AM_2000, AM_2005, AM_2008, SQL2000, SQL2005, SQL2008, etc. You could also have groups based on classifications such as DEV, QA, and PROD, or even based on application landscapes such as BizTalk and SharePoint.

Some things to consider when setting up your CMS include connectivity and security. The CMS is not magic – although it might seem like it the first time you perform a multi-instance query! The CMS does not store authentication credentials, only the server connection info. Thus, when using SSMS to execute T-SQL or policies against a group, Windows Authentication is used. The authenticated account will have no different privileges than if you had connected to each instance in an individual query window using Windows Authentication. You'll actually see in the status bar of the Query window in SSMS how many instances in the group were successfully connected (e.g. 9 of 10 or 10 of 10). Therefore, even if you connect successfully to every instance in the group, if your login has differing privileges on the instances, then your results may differ as well, depending upon the privileges required to execute the query submitted. So far, this works exactly like Registered Servers in your individual SSMS; however, the benefit is the ability to access the CMS groups and registered servers from anywhere in your organization, by anyone (with the proper access, of course!). No longer do you have to keep re-registering your servers – and your whole team now has access to the same groups and registered servers.

On the connectivity side, if a CMS group contains hundreds of instances, especially if regionally dispersed, it may take a while to establish the connection to all of the instances, so be patient – or consider further dividing your groups into smaller subsets.  If an instance is offline or there is a connection timeout or you do not have permissions, then an error\warning will be reported for that instance when you try to execute a query in SSMS against the group; results will still be returned for all other instances.

Also, be very, very, very careful when executing anything other than a SELECT statement – always check the status bar in the Query Editor window to validate the number of instances to which you are connected, which group you are executing against, who you are executing as, and to which database you are connected by default.

For a very simple example of what it can help you do, consider that you have the version stored in your asset database and use that to build version groups, but you aren't 100% sure that these are correct. In SSMS, you can right-click the version group under Registered Servers and select "New Query". Then you can execute the query "SELECT @@VERSION" or "SELECT SERVERPROPERTY('ProductVersion')" and quickly scan the results for any instances which do not return the expected version.

Hopefully, this provides a good introduction to CMS and has started you thinking about how you can use it outside of SSMS…which will be the subject of my next post!