
SQLSaturday #461 Austin 2016 – I’m Presenting

I will be presenting my SQL Server Deprecated and Discontinued Features session at SQLSaturday #461 in Austin on January 30th. If you have older versions of SQL Server which need to be upgraded to one of the more recent versions, come to my session to find out how to detect potential issues, what you need to fix before upgrading/migrating, and what you’ll need to fix after upgrading/migrating.
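If you’d like a sneak peek before the session, one quick way to see whether deprecated features are being used on an instance is the Deprecated Features performance counter object – a minimal sketch (note that the counter values only accumulate since the last instance restart):

-- Deprecated feature usage since the instance last started:
SELECT instance_name AS deprecated_feature,
       cntr_value    AS times_used
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;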

Check out the entire schedule here. There are sessions for SQL Developers, DBAs, Business Intelligence folks, and even a couple of Professional Development sessions. And, in case you are unfamiliar with SQLSaturday – it is a FREE day of training thanks to these generous sponsors.

I’m looking forward to seeing my Austin #SQLFamily – Jim Murphy, Wes Brown, John Sterrett, and you! Register here!


SQLSaturday #447 Dallas – I am Speaking

The first SQLSaturday I ever attended was #35 in Dallas, and the North Texas SQL Server User Group (NTSSUG) set the bar high as host. That was several years ago, and scheduling conflicts have prevented me from making the drive up I-45 from Houston to attend another SQLSaturday there. However, this year, I’m honored to be presenting at SQLSaturday #447 Dallas on October 3, 2015. My topic is “Managing SQL Server in the Enterprise with TLAs”. What are TLAs, you ask? Why, Three-Letter Acronyms, of course! The TLAs that I will be discussing, which you as a SQL Server DBA should be utilizing, are CMS, PBM, and EPM. Come to my session in Room 100 at 8:30am (updated 9/30 for schedule change) and find out how using these features will improve your productivity and help you ensure standards are being followed in your environment.

If you are a data professional within driving distance of the DFW Metroplex, you should consider attending this free day of learning at the University of Texas at Arlington (UTA) hosted by the North Texas SQL Server User Group.

Check out the entire schedule, including low-priced Pre-con sessions on Friday, and register today to take advantage of this free training!

If you can’t attend this event, then check here for all the currently scheduled SQLSaturdays in the US and around the world! There is likely one occurring near you soon!

Learn Something New (or Old) Every Day

I’m a big proponent of constant learning – looking to learn something new every day. It doesn’t have to be some big revelation. Sometimes it is just a tip on how to do something more efficiently (like in PowerShell) and sometimes it isn’t even new!

Recently, I was in a discussion about proposing a best practice recommendation in the CIS SQL Benchmarks to ensure deleted Active Directory Windows logins are also removed from all SQL Server instances where they were granted a login. One of the team members did a little research and found a reference to a system stored procedure which might help – sys.sp_validatelogins. My first question was – is it a documented procedure? Microsoft warns against using undocumented commands as they could be changed in most any update. Yes, it is officially documented in BOL (2008+). Second question – since which SQL Server version? Since at least SQL 2000, per this BOL! I have to confess, for as many years and versions as I’ve been working with SQL Server and researching various security aspects, I was surprised that I didn’t recall this procedure – especially when the first non-BOL reference that my own search turned up was an article written by my friend Tim Ford for MSSQLTips!

In large, complex environments, both the processes (coordination between teams with varying responsibilities) and the technical aspects (how to identify these logins) can be, well, complex! But, depending upon your AD structure and the trusts in place, you as a DBA could periodically run this system stored procedure on your instances to find Windows logins or groups which are SQL Server logins but no longer exist in AD. You can then do a more thorough search of the specific instance’s databases and remove the login from all databases where it is a user, ensure that it isn’t a database owner, and ultimately remove the login from the instance. There is no currently known security risk in leaving these orphaned logins on your SQL Server, but just like cleaning up orphaned database users which do not have a corresponding instance login, it is considered a best practice to perform this task. And, for those of us who are neat freaks – it just makes your instance “cleaner” to get rid of the clutter of obsolete logins!
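Here is a minimal sketch of the check and the eventual cleanup (the login name below is purely hypothetical):

-- Report Windows logins/groups that no longer resolve in AD:
EXEC sys.sp_validatelogins;

-- After manually confirming in AD that an account reported above really is
-- gone, see where it is referenced ('CONTOSO\OldUser' is a hypothetical name):
EXEC sp_helplogins 'CONTOSO\OldUser';

-- Then, in each database where it is a user (and not the owner):
-- DROP USER [CONTOSO\OldUser];
-- And finally, at the instance level:
-- DROP LOGIN [CONTOSO\OldUser];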

While it would be tempting to automate this check and just drop the database users, then drop the login, I did find that Thomas LaRock documented an anomaly he found several years ago which would make me always want to manually double-check the AD for any accounts reported as orphaned to ensure the account is really no longer valid. But at least you have narrowed down the search by using this procedure.

So, when looking to learn something new – don’t forget sometimes what you may learn is old – both Tim’s and Tom’s blogs were from 2009!

TSQL2sday #70 – Strategies for Managing an Enterprise


Jen McCown (Twitter) of Midnight DBA is the guest host for this month’s SQL blogger event known as T-SQL Tuesday (#TSQL2sday) which was started almost 6 years ago by Adam Machanic. This month, Jen has assigned us the topic: Strategies for Managing an Enterprise. Jen will be doing a wrap-up summary of all blog posts submitted on this topic per the rules and I’m looking forward to everyone’s input on this subject.

I’ve been presenting a session for the past several years at SQLSaturday events entitled “Managing SQL Server in the Enterprise with TLAs”. The TLAs (three-letter acronyms) are CMS (Central Management Server), PBM (Policy Based Management) and EPM (Enterprise Policy Management Framework). I’ll be presenting this session at SQLSaturday #447 Dallas on Oct. 3rd, 2015, so you can come learn the details of these features then. But, per the assigned topic for this post, let’s focus on the “strategies” driving the usage of these features.

For me, one of the main goals in managing the enterprise is finding ways to reduce the effort of managing that landscape – whether it is two instances of SQL Server or two thousand. A strategy for getting there is organization. The CMS enables you to define groups to which you register your SQL Server instances, and then you can perform tasks against those groups. Why perform a task per instance when you can do it for multiple instances at one time? The CMS is actually defined in tables in the msdb database of the designated instance. I would recommend having a dedicated “central management server” instance which you will use for CMS, PBM, EPM, and other administrative tasks.
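In fact, you can see exactly what is registered by querying those msdb tables on the CMS instance – a quick sketch using the shared registered server views:

-- List CMS groups and their registered instances (run on the CMS instance):
SELECT g.name AS group_name,
       s.server_name
FROM msdb.dbo.sysmanagement_shared_server_groups AS g
JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
    ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.server_name;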

With CMS, you can create many groups and register instances in multiple groups based on the tasks that you may want to perform against those groups. For example, you can create groups to organize by SQL Server version, by Production/UA/QA/Dev/Test, by application, or by location – and be sure to have one group with all your SQL Server instances registered to it. SQL Server Management Studio (SSMS) enables you to run “multi-instance” queries using a CMS group. That is, you execute the contents of the Query window against every server instance in the specified group and the results are returned to the SSMS console.
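For example, right-click a CMS group under Registered Servers, choose New Query, and run something like the sketch below – SSMS merges the result sets and adds the server name to each row for you:

-- Executed once per instance in the CMS group; results are merged by SSMS:
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('Edition')        AS edition;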

A second strategy in managing the enterprise is standardization. Policy Based Management enables you to define expected settings (i.e., conditions) and verify whether an instance of SQL Server meets those conditions. Examples of policies could be checking that the sa login is disabled or ensuring the AUTO_SHRINK option is off on all databases. My recommendation is to configure the policies on the same instance as your CMS groups (i.e., your dedicated central management server) so that you only have to manage one set of policies. Policy definitions are also stored in the msdb database. You will also want to export the policies, as XML-formatted files, to a central file server. When evaluating the policies on a specific instance, you may use either the central management SQL Server instance or the file server where they are stored as the source. SSMS also allows you to manually evaluate policies against a CMS group – returning all the results to your SSMS console.
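To get a feel for what a policy like the AUTO_SHRINK check verifies, here is the ad hoc T-SQL equivalent – just a sketch of the condition, since PBM itself evaluates this through its facets:

-- Ad hoc equivalent of an AUTO_SHRINK policy check on a single instance:
SELECT name AS database_name
FROM sys.databases
WHERE is_auto_shrink_on = 1;   -- any row returned is a policy violation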

The third strategy is automation. If you have a CMDB (Configuration Management Database), you can use it as the source for populating your CMS groups – script the entire process to keep your CMS groups current with the CMDB contents and set it up as a SQLAgent job to run as needed. Policies can be assigned to categories. The EPM Framework provides a mechanism (a PowerShell script) to automate the PBM evaluations by category against a specific CMS group and store the results for reporting. EPM requires a database repository to store the results, so again I recommend creating this database on a dedicated central management server. Once you’ve been through the exercise of setting up your CMS, establishing policies, and configuring the EPM Framework for your environment, you’ll see additional opportunities to utilize the CMS for automating other tasks.

So, start leveraging the CMS, PBM, and EPM features today to reduce your efforts by organizing your instances, increasing standardization, and automating tasks in your enterprise!

TSQL2sday #68 – Just Say No to Defaults

T-SQL Tuesday (aka #TSQL2sday) is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Andy Yun (blog | twitter) and the selected theme for this month is “Just Say No to Defaults”.

This is really embarrassing, but I’ve had a blog post started for this topic for years and somehow never got around to finishing it! Thanks, Andy, for giving me a reason to finally address a couple of my “must change” defaults.

Ease of installation is definitely a feature that has helped SQL Server to proliferate. You can have a functional system just by running setup and clicking Next, Next, Next….Finish! Without having to make any real decisions about what you are doing, you can be up and running in no time.

When installing the database engine component, the first change to be considered from the defaults presented during setup is the location of the various file types – however, I’m going to save that for others to address and may come back to it in a future post.

Today, I’m going to address a default that you can’t change during the setup dialog or via configuration file parameters. It must be adjusted post-install, and it belongs to SQLAgent.

Ever go to review the job history for a job and find nothing or only a couple of recent entries and you know the job has been running for weeks or even months? So, where is all that history? Sorry, you were the victim of the ridiculous defaults shown below which limit the total number of rows in the msdb.dbo.sysjobhistory table as well as set a max number of history rows per job.

To find this dialog in SSMS, right-click on SQLAgent, then select Properties, then select History.

[Screenshot: SQLAgent History properties – default settings]

These are defaults that you definitely want to change. In fact, instead of just increasing the maximum number of rows for the table and per job, I’d recommend that you decide on the time frame for which you want to keep your SQLAgent job history, uncheck the “Limit size of job history log” option, check the “Remove agent history” option, and specify the desired time frame instead, as shown below. Many companies already have specifications for how long to retain activity logs, so using a time period that meets or exceeds those requirements should be helpful come audit time.

[Screenshot: SQLAgent History properties – retention configured by time frame]
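Under the covers, that time-based option amounts to a scheduled purge of msdb.dbo.sysjobhistory. A minimal sketch of the equivalent T-SQL, assuming a 30-day retention requirement:

-- Purge SQLAgent job history older than 30 days (adjust to your retention):
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;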

Depending on the number of jobs and the frequency at which each is run, you may also need to keep a close watch on the size of msdb after changing this setting to find the optimum size for your msdb files to allow the sysjobhistory table to grow to its expected size without causing autogrow of your files. Remember to manage msdb just like any other application database with appropriate purges, backups, and other maintenance.
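A quick way to keep an eye on that growth (a sketch):

-- Check how big the job history table has grown:
USE msdb;
EXEC sp_spaceused 'dbo.sysjobhistory';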

I can’t wait to see what others say “no” to in Andy’s round-up for this event. I’ll be looking for my other must-change items, and if I don’t see them, then I will be posting more soon!

SQL Server 2012 Security Benchmark Released

The Center for Internet Security (CIS) Security Benchmarks Division released “CIS Microsoft SQL Server 2012 Database Engine Benchmark V1.0.0” on January 6, 2014. 

This is a consensus-based development of security best practices which have become the de facto security configuration standards. If you are in charge of your SQL Server security configuration, you need a copy of this document – it is what your auditors will be using soon!

I am currently serving as one of the editors for the SQL Server benchmarks. We are in the process of updating the SQL Server 2008 R2 benchmark previously released in late 2012. If you discover any items we should update/add/delete in that document or in the newly released 2012 benchmark, please either leave a comment here on my blog or, better yet, join the benchmark community consensus team (http://benchmarks.cisecurity.org/community)!

SQL Server 2008 R2 Security Benchmark Released

The Center for Internet Security (CIS) Security Benchmarks Division released “CIS Microsoft SQL Server 2008 R2 Database Engine Benchmark V1.0.0” on November 16, 2012. As best I can tell, this benchmark can also be used with SQL Server 2008.

This is a consensus-based development of security best practices which have become the de facto security configuration standards. If you are in charge of your SQL Server security configuration, you need a copy of this document – it is what your auditors will be using soon!


SQLSaturday #150 – Baton Rouge – Sign Up Now!

There’s an awesome FREE technical training event coming to Baton Rouge on August 4, 2012. That’s right; SQLSaturday and Tech Day 2012 will be held at LSU’s new College of Business facility.  This is the fourth year that the Baton Rouge technical community has held this event and they expect around 400 people – if you live anywhere close by, then you should be there!  William Assaf (blog | twitter) even got some local TV exposure for the event this year.  

This event is bigger than your normal SQLSaturday. In addition to tracks for the SQL Server professional, there are also tracks for .NET developers, Windows Phone developers, SharePoint, and general professional development. Check out the full schedule here, and then sign up here.

Why am I plugging this event? Well, for one thing, the Baton Rouge SQL Server community has always come west across the state line to support our SQLSaturdays in Houston. Secondly, I’ll be speaking at their event this year on “Managing SQL Server in the Enterprise with TLAs”. TLA is “Three-Letter Acronym” for those unsure. We have lots of those in techno-speak. I’ll be covering CMS, PBM, EPM, MDW, and more…. If you work with SQL Server and don’t know what those are or how they can help you, then register today for SQLSaturday #150 and come to my session at 8:20am in Room 1700!

Addendum: I’ll now also be presenting a second session “SQL Server 2012 Database Engine – Why Upgrade?” in the 2:45pm slot in Room 1700.


If you can’t attend this event, then check here for all the currently scheduled SQLSaturdays in the US and around the world! 


SQL Server “Agent XPs” Behavior

Ever realize you’ve been doing something a certain way for so long and you don’t remember why? Or what the consequences would be if you changed how you did things?

I was testing our new SQL Server installation process this week. We create a configuration file programmatically from user input and some set defaults and then run the install unattended followed by executing a series of T-SQL and PowerShell scripts to make sure certain configurations are done.

Vicky Harp (blog | twitter) has a presentation on edge case testing which really caught my attention a few months back. For years, I’ve always made sure all parameters, required and optional, were explicitly defined for our installs. But, what if someone else runs this and isn’t as precise as I tend to be? (I know some of you are thinking of another word.) For the current test I was doing with our new process, I was only providing the absolute minimum required parameters and letting everything else default. The install ran great, and then the configuration scripts started executing. Surprisingly, I had two scripts report failure. I had successfully run the same set of scripts on a prior install earlier in the day. What changed? You’ve probably already guessed, but bear with me through my analysis.

My first thought was a SQL Server 2012 difference from SQL Server 2008 R2. The current install I was performing was 2012 while the earlier one was for 2008 R2. So, that seemed like the logical place to start – a difference between versions. But, I was confident that these scripts worked for both versions and had previously tested them independently of the install process. Then I looked a little more carefully at the error. The failing scripts were calling system stored procedures to modify SQLAgent’s job history settings. The error being reported was that the Agent XPs weren’t enabled.  I was confused – why did my SQL 2008 R2 install work fine with the same scripts and SQL 2012 fail?

While I was pondering the mystery of what was different, I remembered that the SQL 2012 Setup had returned error 3010, which means that a reboot is needed following the install; so I rebooted that box. In the meantime, I searched and verified that I indeed did not have a script which was explicitly enabling the Agent XPs, but somehow they were enabled on the SQL 2008 R2 system. The SQL 2012 system came back up and I logged back on to continue my investigation. That was when I noticed that SQLAgent had not started following the reboot. Its start mode was set to “manual”, which is the default if you do not specify AGTSVCSTARTUPTYPE=”AUTOMATIC” in your configuration file. I opened SQL Server Configuration Manager, changed the SQLAgent service’s start mode to “Automatic”, and started the service. Then, I checked the value of sp_configure ‘Agent XPs’ – and sure enough it was now 1 (enabled) and I could execute the scripts which previously failed.

At first I thought that just changing the start mode to Automatic had triggered the enabling of the Agent XPs, but after testing several scenarios I finally verified that whenever the SQLAgent service is started or stopped, regardless of the start mode, it will enable or disable the Agent XPs respectively. This is really confusing if you know that you have explicitly run:

sp_configure 'Agent XPs', 1;

reconfigure;

Then you stop SQLAgent and don’t restart it – the setting is now disabled as if you’d never run the above command. If you want to modify some aspect of SQLAgent while it is stopped, you’ll have to run the above command again:

sp_configure 'Agent XPs', 1;

reconfigure;

EXEC msdb.dbo.sp_set_sqlagent_properties @jobhistory_max_rows=150000;

So, you can modify the properties while SQLAgent is not running; you just have to remember to always enable the Agent XPs immediately beforehand. Since this behavior is by design for security purposes, you’ll probably want to get the current setting before enabling it and, if it was disabled, set it back to disabled when you’ve finished running your command(s).
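A minimal sketch of that save-and-restore pattern (note that ‘Agent XPs’ is an advanced option, so this assumes ‘show advanced options’ is already enabled):

-- Capture the current 'Agent XPs' setting, enable if needed, work, restore:
DECLARE @was_enabled int;
SELECT @was_enabled = CONVERT(int, value_in_use)
FROM sys.configurations
WHERE name = 'Agent XPs';

IF @was_enabled = 0
BEGIN
    EXEC sp_configure 'Agent XPs', 1;
    RECONFIGURE;
END;

EXEC msdb.dbo.sp_set_sqlagent_properties @jobhistory_max_rows = 150000;

IF @was_enabled = 0   -- put it back the way we found it
BEGIN
    EXEC sp_configure 'Agent XPs', 0;
    RECONFIGURE;
END;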

It’s pretty obvious when you use SSMS and the Agent XPs are disabled. You cannot access any SQLAgent nodes or even its properties.

You can leave the SQLAgent service stopped and run the sp_configure command to enable the XPs, and then you can access the SQLAgent properties in SSMS. Most of the time this shouldn’t be an issue: you’ll have SQLAgent running and the XPs will have been automatically enabled. But, just in case you ever run into an unexpected error with the Agent XPs like I did, hopefully you’ll remember this behavior.

TSQL2sday #32 – A Day in the Life

TSQL2sday is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Erin Stellato (blog | twitter) and the selected theme for this month is “A Day in the Life”.

Erin challenged us to track what we did in our jobs for a specific day and write about it. This is great because I often have trouble explaining to others (especially non-IT folk) what my title of SQL Server Service Engineer really means. However, as this exercise is just supposed to cover a single day, this is just a small sample of what I do. There is no such thing as a “normal” day for me. Sometimes my tasks are based on the “crisis du jour” prioritization method, and sometimes I can actually follow the team work plan. The variety in my job is one of the things I like about it. So here goes…

Unless I have an early morning meeting with global colleagues, my day nearly always begins with processing email. Since I work in a global organization in the region whose workday is last to begin, even if I’d cleared my Inbox the day before, I always open my mailbox to encounter new emails from European and Asia-Pacific colleagues who have already completed or are wrapping up their workday. In that sense, this day starts out as just a normal day (no early meetings!).

Unfortunately for this write-up, it appears that summertime may be impacting my email load in a positive sense, as I have only a handful of emails and only as a cc on a couple of subjects which one of my teammates is handling. One of the issues has to do with deploying SQL Server Enterprise Edition versus Standard Edition and the licensing implications for the customer. My team is comprised of technical experts – we can tell the customer if what they are trying to do requires a specific edition of SQL Server to use the requested feature, but we are not involved in the licensing agreements between Microsoft and each customer. That is for others to figure out!

With email done and no looming crisis for today, I can get back to the task I’ve been working on – writing an automated process to roll out multiple T-SQL scripts to multiple instances using PowerShell. These are the scripts which update the standard tables and stored procedures in the admin database we install on all instances, along with a set of SQLAgent jobs which the operational DBAs use for system maintenance. Every so often, we need to roll out updates to these objects. Our current automated process for doing this (which was developed for SQL 2005) isn’t as automated as we’d like it to be. We have since created a CMS and are utilizing registered groups to run various processes (like EPM), and now we want to extend that concept to this activity as well. I’m thinking within a couple of hours I can write a script to save our operational DBAs literally hundreds of man-hours. Easy, right?

If you’ve worked with PowerShell at all – or any programming language for that matter – you know there is always more than one way to write a process to accomplish the task at hand. The challenge is in finding the most efficient way that gives you what you want. Our old script to run a set of .sql files was written in VBScript and called the sqlcmd utility. I figured that since I’m writing this in PowerShell and already using Invoke-Sqlcmd to get the list of instances from the CMS, I could use the Invoke-Sqlcmd cmdlet as shown in the second example in BOL and it would work just like sqlcmd. Wrong! It seems that example only works if you are running a SELECT statement in your InputFile. This particular set of .sql files should have no output unless there is an error, and in my test I have a script which I know produces an error – but my output file is empty.

I try various parameters such as -ErrorLevel and -SeverityLevel, and I even use -Verbose, to no avail – still nothing is piped to my output file. I consult with my teammates to see if they have tried this before; I search for examples on the Internet, and the best I can find in one of the forums is someone else encountering the same thing, but with no solution for me. I can be stubborn sometimes and I’m not about to give up yet – after a couple of hours of struggling – I fire off an email to my SQL PowerShell buddy Allen White (blog | twitter) asking for his input – can I do what I’m trying to do with Invoke-Sqlcmd, or should I revert to calling sqlcmd?

While waiting for Allen to respond, a couple more emails have hit my Inbox. Yea! It appears that our request to rebuild one of our team’s test servers has been completed. We try not to do this too often, but part of engineering is writing scripts / installing / testing / uninstalling / enhancing scripts… repeat; over the course of time, sometimes things get so messed up from all the testing (and the occasional bad script) that you just have to start over with a clean image. This is now a box we plan to use for testing our processes on SQL Server 2012.

It doesn’t take long before I have a reply from Allen – I hope he doesn’t mind if I quote him:

I honestly believe that it’s best to use the tool that best suits the task, so I’d use sqlcmd here, because it works the way you want it to. 

Thanks Allen for the reminder not to use a hammer when a screwdriver is what you need! Sometimes, a hammer is all you have, but not in this case. 

Now, it’s time for lunch. I head down to the cafeteria with my teammates and join other colleagues at our usual table. I don’t hang around too long chit-chatting as I want to get back to my desk to switch out my code and test, so I can announce success at our afternoon team meeting.

Remember earlier what I said about more than one way to do something? Now, I have to decide how to go about calling sqlcmd.exe from PowerShell. I need to supply variables for all the parameters based on the target instance and input file to execute – and the output filename and location are dynamically determined as well, based on the target instance and input filename. I start by looking at Invoke-Command, then move to Invoke-Expression, but I’m still not getting my output file like I want it, and I’m not able to detect whether sqlcmd experienced an error to report in my general execution log. I have an example using [diagnostics.process]::start($exe,$args).WaitForExit() that seems to be getting me close to what I want, but now it is time to break for my afternoon meeting.

I’m the Technical Team Lead for a team of three. We each have our areas of specialization within the overall work plan, but try to keep each other in the loop so we can back each other up at any time. As needed (usually every 1-2 weeks), we meet formally to update the work plan, assign/reassign new/old tasks if needed, catch each other up on what we’ve each been working on, and brainstorm areas for improvement. This is one of those meetings, and since last week was a holiday week and we didn’t meet, we have a lot to catch up on. The nice thing about a team is having others to bounce ideas off of, and this is what I do with my frustration in finding the exact syntax I need to get the results I want from calling sqlcmd inside PowerShell. The next thing I know, one of my colleagues has done their own search and found a code example – I look and express skepticism, as it is very much like what I’m already doing, but with one key difference; what can it hurt to try?

We continue to discuss how far we want to take this initial rewrite of our update process. We are also in the process of redesigning our whole automated install process, and ultimately we want the update process to utilize what we are putting into place there. However, we have a more immediate need to have the operations team roll out some updates, and we decide that version 1 of the update process will do no more than what we already have in place today (in terms of reporting), but it will be automated such that the DBAs only need to review the central output file for any problems. Selection of the systems requiring an update into a special CMS group can be done in an automated fashion, as can scheduling the update itself in SQLAgent. We decide to make further enhancements for logging the process’s results into a central table in a future version.

Our meeting continues with more brainstorming about the consequences of developing an install and configuration solution for SQL Server which can account for multiple versions and differing customer standards (e.g. install locations). We plot out on the whiteboard differing ways we can handle this – probably the umpteenth discussion like this that we’ve had; but each time we come in with new experiences and thoughts from what we decided previously (and in some cases started trying to implement), and we are therefore continually refining the solution. We are becoming more confident that we are developing a standardized but flexible solution which is also more sustainable across multiple versions of SQL Server than our existing process.

The meeting concludes and although I’m anxious to try the code snippet my colleague found, it is really time for me to head home. I arrived at the office much earlier this morning than my normal start time trying to beat the rain and now I need to try to get home before the next round hits. There is some flooding already occurring around town. Working further on this script can wait until later. I know that once I do get started back on it, I won’t stop until I have it totally finished. That’s my life!

I probably learned more today in trying all the ways that didn’t work the way I thought they would than if the first code I tried had worked. This experience will pay off later, I know.

Today was an “Edison day”:

I am not discouraged, because every wrong attempt discarded is another step forward.

I have not failed. I’ve just found 10,000 ways that didn’t work.


P.S. I did finally get the script functioning the way I wanted the following day and it will save our operations team hundreds and maybe even thousands of hours. This is what I do!