
TSQL2sday #68 – Just Say No to Defaults

T-SQL Tuesday (aka #TSQL2sday) is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Andy Yun (blog | twitter) and the selected theme for this month is “Just Say No to Defaults”.

This is really embarrassing, but I’ve had a blog post started for this topic for years and somehow never got around to finishing it! Thanks, Andy, for giving me a reason to finally address a couple of my “must change” defaults.

Ease of installation is definitely a feature that has helped SQL Server to proliferate. You can have a functional system just by running setup and clicking Next, Next, Next….Finish! Without having to make any real decisions about what you are doing, you can be up and running in no time.

When installing the database engine component, the first default to consider changing during setup is the location of the various file types. However, I'm going to leave that one for others to address and may come back to it in a future post.

Today, I'm going to address a default that you can't change during the setup dialog or via configuration file parameters; it must be adjusted post-install, and it belongs to SQLAgent.

Ever go to review the history for a job and find nothing, or only a couple of recent entries, even though you know the job has been running for weeks or even months? So, where is all that history? Sorry, you were the victim of the ridiculous defaults shown below, which limit the total number of rows in the msdb.dbo.sysjobhistory table as well as set a maximum number of history rows per job.

To find this dialog in SSMS, right-click on SQLAgent, then select Properties, then select History.

[Screenshot: the SQL Server Agent History properties page showing the default row limits]

These are defaults that you definitely want to change. In fact, instead of just increasing the maximum number of rows for the table and per job, I'd recommend deciding on the time frame for which you want to keep your SQLAgent job history: uncheck the "Limit size of job history log" option, check the "Remove agent history" option, and specify the desired retention period instead, as shown below. Many companies already have specifications for how long to retain activity logs, so using a time period that meets or exceeds those requirements should be helpful when audit time comes.

[Screenshot: the SQL Server Agent History properties page with the row limits unchecked and a time-based "Remove agent history" retention specified]
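
If you prefer to script this change rather than click through SSMS, here is a minimal T-SQL sketch. It relies on the msdb procedures sp_set_sqlagent_properties (which is undocumented) and sp_purge_jobhistory, and the -1 values and six-week retention are assumptions for the example, so verify the behavior on your own version first:

-- Sketch: turn off the row-based history limits, then purge history older than a cutoff.
-- sp_set_sqlagent_properties is undocumented; confirm its behavior on your build.
USE msdb;

EXEC dbo.sp_set_sqlagent_properties
     @jobhistory_max_rows = -1,            -- -1 = no overall row limit
     @jobhistory_max_rows_per_job = -1;    -- -1 = no per-job row limit

-- Time-based cleanup: remove history older than six weeks (pick your own retention).
-- Schedule this in a SQLAgent job to keep it enforced on an ongoing basis.
DECLARE @cutoff datetime = DATEADD(WEEK, -6, GETDATE());
EXEC dbo.sp_purge_jobhistory @oldest_date = @cutoff;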

Depending on the number of jobs and how frequently each runs, you may also need to keep a close watch on the size of msdb after changing this setting, so you can size your msdb files to let the sysjobhistory table grow to its expected size without triggering autogrowth. Remember to manage msdb just like any other application database, with appropriate purges, backups, and other maintenance.
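
A quick way to keep an eye on that growth is to check the table's size periodically, for example:

EXEC msdb.dbo.sp_spaceused N'dbo.sysjobhistory';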

I can't wait to see what others say "no" to in Andy's round-up for this event. I'll be looking for my other must-change items, and if I don't see them, then I'll be posting more soon!

SQL Server “Agent XPs” Behavior

Ever realize you’ve been doing something a certain way for so long and you don’t remember why? Or what the consequences would be if you changed how you did things?

I was testing our new SQL Server installation process this week. We create a configuration file programmatically from user input and some preset defaults, run the install unattended, and then execute a series of T-SQL and PowerShell scripts to apply certain configurations.

Vicky Harp (blog | twitter) has a presentation on edge case testing which really caught my attention a few months back. For years, I've always made sure all parameters, required and optional, were explicitly defined for our installs. But what if someone else runs this and isn't as precise as I tend to be? (I know some of you are thinking of another word.) For the current test of our new process, I was providing only the absolute minimum required parameters and letting everything else default. The install ran great, and then the configuration scripts started executing. Surprisingly, I had two scripts report failure. I had successfully run the same set of scripts on a prior install earlier in the day. What changed? You've probably already guessed, but bear with me through my analysis.

My first thought was a SQL Server 2012 difference from SQL Server 2008 R2. The current install I was performing was 2012, while the earlier one was for 2008 R2. So that seemed like the logical place to start – a difference between versions. But I was confident that these scripts worked for both versions and had previously tested them independently of the install process. Then I looked a little more carefully at the error. The failing scripts were calling system stored procedures to modify SQLAgent's job history settings. The error being reported was that the Agent XPs weren't enabled. I was confused – why did my SQL 2008 R2 install work fine with the same scripts while SQL 2012 failed?

While I was pondering the mystery of what was different, I remembered that the SQL 2012 setup had returned error 3010, which means a reboot is needed following the install, so I rebooted that box. In the meantime, I searched and verified that I did not have a script explicitly enabling the Agent XPs, yet somehow they were enabled on the SQL 2008 R2 system. The SQL 2012 system came back up and I logged back on to continue my investigation. That was when I noticed that SQLAgent had not started following the reboot. Its start mode was set to "Manual", which is the default if you do not specify AGTSVCSTARTUPTYPE="AUTOMATIC" in your configuration file. I opened SQL Server Configuration Manager, changed the SQLAgent service's start mode to "Automatic", and started the service. Then I checked the value of sp_configure 'Agent XPs' – sure enough, it was now 1 (enabled), and I could execute the scripts which had previously failed.

At first I thought it was simply changing the start mode to Automatic that triggered the enabling of the Agent XPs, but after testing several scenarios I verified that whenever the SQLAgent service is started or stopped, regardless of its start mode, it enables or disables the Agent XPs respectively. This is really confusing if you know that you have explicitly run:

sp_configure 'Agent XPs', 1;

reconfigure;

Then you stop SQLAgent and don't restart it – 'Agent XPs' is now disabled, as if you'd never run the above command. If you want to modify some aspect of SQLAgent while it is stopped, you'll have to run the command again first:

sp_configure 'Agent XPs', 1;

reconfigure;

EXEC msdb.dbo.sp_set_sqlagent_properties @jobhistory_max_rows=150000;

So, you can modify the properties while SQLAgent is not running; you just have to remember to enable the Agent XPs immediately beforehand. Since this behavior is by design for security purposes, you'll probably want to capture the current setting before enabling it and, if it was disabled, set it back to disabled once you've finished running your command(s).
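
Here is a minimal T-SQL sketch of that capture-enable-restore pattern; it is an illustration only and assumes you have permission to run sp_configure/RECONFIGURE (note that 'Agent XPs' is an advanced option):

-- Capture the current 'Agent XPs' setting, enable it, do the work, then restore it.
DECLARE @agent_xps_was_on int;

EXEC sp_configure 'show advanced options', 1;   -- 'Agent XPs' is an advanced option
RECONFIGURE;

SELECT @agent_xps_was_on = CONVERT(int, value_in_use)
FROM sys.configurations
WHERE name = 'Agent XPs';

IF @agent_xps_was_on = 0
BEGIN
    EXEC sp_configure 'Agent XPs', 1;
    RECONFIGURE;
END;

-- ... run your SQLAgent configuration commands here, e.g. sp_set_sqlagent_properties ...

IF @agent_xps_was_on = 0
BEGIN
    EXEC sp_configure 'Agent XPs', 0;   -- put it back the way we found it
    RECONFIGURE;
END;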

It’s pretty obvious when you use SSMS and the Agent XPs are disabled. You cannot access any SQLAgent nodes or even its properties.

You can leave the SQLAgent service stopped and run the sp_configure command to enable the XPs, and then you can access the SQLAgent properties in SSMS. Most of the time this shouldn't be an issue: you'll have SQLAgent running, and the XPs will have been enabled automatically. But just in case you ever run into an unexpected error with the Agent XPs like I did, hopefully you'll remember this behavior.

TSQL2sday #32 – A Day in the Life

TSQL2sday is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Erin Stellato (blog | twitter) and the selected theme for this month is “A Day in the Life”.

Erin challenged us to track what we did in our jobs for a specific day and write about it. This is great because I often have trouble explaining to others (especially non-IT folk) what my title of SQL Server Service Engineer really means. However, as this exercise is just supposed to cover a single day, this is just a small sample of what I do. There is no such thing as a “normal” day for me. Sometimes my tasks are based on the “crisis du jour” prioritization method, and sometimes I can actually follow the team work plan. The variety in my job is one of the things I like about it. So here goes…

Unless I have an early morning meeting with global colleagues, my day nearly always begins with processing email. Since I work in a global organization in the region whose workday is last to begin, even if I'd cleared my Inbox the day before, I always open my mailbox to find new emails from European and Asia-Pacific colleagues who have already completed or are wrapping up their workday. In that sense, this day starts out as just a normal day (no early meetings!).

Unfortunately for this write-up, it appears that summer may be impacting my email load in a positive sense: I have only a handful of emails, and only as a cc on a couple of subjects that one of my teammates is handling. One of the issues has to do with deploying SQL Server Enterprise Edition versus Standard Edition and the licensing implications for the customer. My team is comprised of technical experts – we can tell the customer whether what they are trying to do requires a specific edition of SQL Server to use the requested feature, but we are not involved in the licensing agreements between Microsoft and each customer. That is for others to figure out!

Email done and no looming crisis for today, I can get back to the task I've been working on – writing an automated process to roll out multiple T-SQL scripts to multiple instances using PowerShell. These are the scripts which update the standard tables and stored procedures in the admin database we install on all instances, along with a set of SQLAgent jobs which the operational DBAs use for system maintenance. Every so often, we need to roll out updates to these objects. Our current automated process for doing this (which was developed for SQL 2005) isn't as automated as we'd like it to be. We have since created a CMS and are utilizing registered groups to run various processes (like EPM), and now we want to extend that concept to this activity as well. I'm thinking that within a couple of hours I can write a script to save our operational DBAs literally hundreds of man-hours. Easy, right?

If you've worked with PowerShell at all – or any programming language, for that matter – you know there is always more than one way to write a process to accomplish the task at hand. The challenge is in finding the most efficient way that gives you what you want. Our old script to run a set of .sql files was written in VBScript and called the sqlcmd utility. I figured that since I'm writing this in PowerShell and already using Invoke-Sqlcmd to get the list of instances from the CMS, I could use the Invoke-Sqlcmd cmdlet as shown in the second example in BOL and it would work just like sqlcmd. Wrong! It seems that example only works if you are running a SELECT statement in your InputFile. This particular set of .sql files should produce no output unless there is an error, and in my test I have a script which I know produces an error – but my output file is empty.

I try various parameters such as -ErrorLevel and -SeverityLevel, and I even use -Verbose, to no avail – still nothing is piped to my output file. I consult with my teammates to see if they have tried this before; I search for examples on the Internet, and the best I can find in one of the forums is someone else encountering the same thing, but with no solution for me. I can be stubborn sometimes and I'm not about to give up yet – after a couple of hours of struggling, I fire off an email to my SQL PowerShell buddy Allen White (blog | twitter) asking for his input: can I do what I'm trying to do with Invoke-Sqlcmd, or should I revert to calling sqlcmd?

While waiting for Allen to respond, a couple more emails have hit my Inbox. Yea! It appears that our request to rebuild one of our team's test servers has been completed. We try not to do this too often, but part of engineering is writing scripts \ installing \ testing \ uninstalling \ enhancing scripts… repeat; over time, things sometimes get so messed up from all the testing (and the occasional bad script) that you just have to start over with a clean image. This is now a box we plan to use for testing our processes on SQL Server 2012.

It doesn’t take long before I have a reply from Allen – I hope he doesn’t mind if I quote him:

I honestly believe that it’s best to use the tool that best suits the task, so I’d use sqlcmd here, because it works the way you want it to. 

Thanks Allen for the reminder not to use a hammer when a screwdriver is what you need! Sometimes, a hammer is all you have, but not in this case. 

Now, it's time for lunch. I head down to the cafeteria with my teammates and join other colleagues at our usual table. I don't hang around too long chit-chatting, as I want to get back to my desk, switch out my code, and test so I can announce success at our afternoon team meeting.

Remember what I said earlier about there being more than one way to do something? Now I have to decide how to go about calling sqlcmd.exe from PowerShell. I need to supply variables for all the parameters based on the target instance and the input file to execute – and the output file name and location are determined dynamically as well, based on the target instance and input file name. I start by looking at Invoke-Command, then move to Invoke-Expression, but I'm still not getting my output file the way I want it, and I'm not able to detect whether sqlcmd experienced an error to report in my general execution log. I have an example using [diagnostics.process]::start($exe,$args).WaitForExit() that seems to be getting me close to what I want, but now it is time to break for my afternoon meeting.

I'm the Technical Team Lead for a team of three. We each have our areas of specialization within the overall work plan, but we try to keep each other in the loop so we can back each other up at any time. As needed (usually every 1-2 weeks), we meet formally to update the work plan, assign/reassign new/old tasks if needed, catch each other up on what we've each been working on, and brainstorm areas for improvement. This is one of those meetings, and since last week was a holiday week and we didn't meet, we have a lot to catch up on. The nice thing about a team is having others to bounce ideas off of, and that is what I do with my frustration in finding the exact syntax I need to get the results I want from calling sqlcmd inside PowerShell. The next thing I know, one of my colleagues has done their own search and found a code example – I look and express skepticism, as it is very much like what I'm already doing, but there is one key difference that might matter; what can it hurt to try?

We continue to discuss how far we want to take this initial rewrite of our update process. We are also in the process of redesigning our whole automated install process, and ultimately we want the update process to utilize what we are putting into place there. However, we have a more immediate need to have the operations team roll out some updates, so we decide that version 1 of the update process will do no more than what we already have in place today (in terms of reporting), but it will be automated such that the DBAs only need to review the central output file for any problems. Selecting the systems requiring an update into a special CMS group can be automated, as can scheduling the update itself in SQLAgent. We decide to make further enhancements, such as logging the process's results into a central table, in a future version.

Our meeting continues with more brainstorming about the consequences of developing an install and configuration solution for SQL Server which can account for multiple versions and differing customer standards (e.g. install locations). We plot out on the whiteboard different ways we can handle this – probably the umpteenth discussion like this that we've had; but each time we come in with new experiences and thoughts based on what we decided previously (and in some cases started trying to implement), so we are continually refining the solution. We are becoming more confident that we are developing a standardized but flexible solution which is also more sustainable across multiple versions of SQL Server than our existing process.

The meeting concludes and although I’m anxious to try the code snippet my colleague found, it is really time for me to head home. I arrived at the office much earlier this morning than my normal start time trying to beat the rain and now I need to try to get home before the next round hits. There is some flooding already occurring around town. Working further on this script can wait until later. I know that once I do get started back on it, I won’t stop until I have it totally finished. That’s my life!

I probably learned more today in trying all the ways that didn’t work the way I thought they would than if the first code I tried had worked. This experience will pay off later, I know.

Today was an “Edison day”:

I am not discouraged, because every wrong attempt discarded is another step forward.

I have not failed. I’ve just found 10,000 ways that didn’t work.

 

P.S. I did finally get the script functioning the way I wanted the following day and it will save our operations team hundreds and maybe even thousands of hours. This is what I do!

My Denali First Glance – Product Update Feature

Based on my first experience with the Product Update feature in Denali (SQL Server 11) CTP3, I'd have to say that "the third time is the charm". This is the third version of SQL Server to provide a method for incorporating updates during an initial product install. Slipstreamed updates were first provided for SQL Server 2008 and again in SQL Server 2008 R2. I was really excited about that capability until I realized that, in the large, global enterprise environment in which I worked, it caused more pain than it solved. I won't go into all the details, but since we ran our installs over the network and from multiple data centers, slipstreamed source was not efficient for us due to our install methodology and the number of combinations of source we needed to provide to our customers. However, based on what I've read about the Product Update feature in Denali, as well as the testing I have performed so far, this solution will provide exactly what we need to easily and successfully deploy installations with the exact updates we want at setup time.

There are three ways to incorporate the Product Updates for Denali:

  • Windows Update / Microsoft Update
  • WSUS (Windows Server Update Services)
  • File Share

In an enterprise environment, you won’t find many servers with Internet access, so the first method, Windows Update, is probably out.  Also, enterprise environments typically have specific deployment schedules for changing even initial installs to a new service pack level. So, even if your servers do have Internet access, I don’t see this option being used by large enterprises. But, I love this option for my home installs!

As for the second method, WSUS, this one might be an option depending on how well your DBAs interface with the team supporting WSUS. However, considering the various requirements in an enterprise environment, there are valid scenarios where a "down-level" install is required. Thus, this method might be eliminated because it would always apply the most current approved patch level.

That brings us to the File Share method.  At first glance, this looks like slipstream.  However, in my opinion, it is an order of magnitude better than slipstream. Why?

  1. I can maintain a single RTM code base. This reduces my storage footprint.
  2. I don’t have to copy/merge any files from the update into my RTM code base. This reduces potential for errors in translating what to copy/merge.
  3. I can specify the location of the updates using either a direct local path (e.g. C:\SQLUpdate) or a UNC name (e.g. \\SourceServer\SQLUpdate). This increases my installation methodology flexibility.
  4. I can have multiple update folders to choose from (e.g. \SQLUpdate\V11.Update1 versus \SQLUpdate\V11.Update2, etc.). This increases my ability to support varying customer version requirements.

With the File Share method, you simply add the /UpdateSource parameter to your setup.exe command and tell it where that source resides.

Local Drive Source example:

setup.exe /Action=Install /UpdateSource=C:\SQLUpdate\V11.Test

Network File Share Source example using a local computer variable to designate the source server name:

setup.exe /Action=Install /UpdateSource=\\%Src%\SQLUpdate\V11.Test

When running setup using the command above, you’ll get the dialog shown below. You can see that the designated product update at the UNC location will be included in the installation.

It’s as easy as that!

But, what if you don’t want any updates included during installation?  That is easy, too.  Just add the /UpdateEnabled=False parameter to your setup command. By default, this parameter’s value is True.  And, finally, these parameters can always be incorporated into a configuration ini file, so that you can initiate the whole install unattended, but that’s a topic for another day.
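
For example, extending the earlier commands (a sketch only – confirm the exact parameter support against the setup documentation for your version):

setup.exe /Action=Install /UpdateEnabled=False

In a configuration ini file, the equivalent entries typically appear as UpdateEnabled and UpdateSource; compare against a ConfigurationFile.ini generated by your own setup run to be sure.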

If you’d like to test out the Product Update feature for yourself, check out these links for obtaining, implementing and commenting on the Denali CTP3 Product Update feature:

The Case of the Odd CPU

I love mysteries – always have. I think I had read every volume of Alfred Hitchcock and The Three Investigators, and of course Nancy Drew, by the time I finished fourth grade. I loved trying to figure out the answer to the mystery before it was revealed. Of course, the authors of these mysteries (and even the ones I read today) always held back some key detail until the end – the detail that put all the pieces together and solved the puzzle.

A while back, my team was presented with a “mystery”.  Our rock solid automated installation process for SQL Server 2005 was suddenly failing intermittently and we had not made any changes to our installation scripts. But, something had obviously changed in the environment that was causing the failures.

After digging through the SQL Server installation logs for error messages, we found we had encountered a known problem which thankfully already had a documented workaround (http://support.microsoft.com/kb/954835). The change causing the problem was that we were now getting virtual servers delivered with an odd number of CPUs as seen by the OS and SQL Server. For the additional CPU\core scenarios which can cause an installation failure of SQL Server 2005, please reference the KB article. This issue has actually been documented for several years, so it isn't a new thing, but it was a new condition in our environment. If you still find the occasional need to install SQL Server 2005, then you may well encounter this condition if you haven't already.

However, just performing the workaround wasn’t the end of the story for us.  Like any good mystery, there are often “loose ends” to tie up once the main mystery is solved. In our case, we have a post-install configuration script to allocate multiple tempdb files based on a ratio to the number of processors detected. But, our script, which was also applicable to our SQL Server 2008 environment, expected the number of processors to always be a power of 2! As we learned the hard way – that may not always be the case now.
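
For illustration only (this is not our actual script), here is a hedged T-SQL sketch that derives a tempdb data-file count from whatever CPU count the OS reports – odd or even – and prints the ALTER DATABASE statements for review. The one-file-per-CPU ratio, the cap of eight files, and the file path and sizes are all assumptions for the example:

-- Illustration only: derive a tempdb data-file count from the reported CPU count
-- and print the ALTER DATABASE statements for review before running them.
DECLARE @cpu_count int,
        @target_files int,
        @existing_files int,
        @i int,
        @sql nvarchar(max);

SELECT @cpu_count = cpu_count FROM sys.dm_os_sys_info;

-- Works for odd CPU counts too; cap of 8 files is an assumption, not a recommendation.
SET @target_files = CASE WHEN @cpu_count > 8 THEN 8 ELSE @cpu_count END;

SELECT @existing_files = COUNT(*) FROM tempdb.sys.database_files WHERE type_desc = 'ROWS';

SET @i = @existing_files + 1;
WHILE @i <= @target_files
BEGIN
    SET @sql = N'ALTER DATABASE tempdb ADD FILE (NAME = N''tempdev' + CONVERT(nvarchar(10), @i)
              + N''', FILENAME = N''T:\TempDB\tempdev' + CONVERT(nvarchar(10), @i)
              + N'.ndf'', SIZE = 1024MB, FILEGROWTH = 256MB);';
    PRINT @sql;   -- review the output, then execute it if it looks right
    SET @i = @i + 1;
END;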

Lessons learned\confirmed:

  • If a process breaks which worked previously, something changed!
  • The impact of the “change” may reach beyond whatever drew it to your attention initially.
  • Expect change and when possible plan for it.

TSQL2sday #015 – Automation in SQL Server

What is T-SQL Tuesday (twitter hashtag: #TSQL2sday)? Back in late 2009, Adam Machanic (blog | twitter) had this brilliant idea for a monthly SQL Server blogger event (the origin of T-SQL Tuesday) hosted each month by a different blogger.  This month’s event is hosted by Pat Wright (blog | twitter) and his selected topic is “Automation in SQL Server”.

If you’ve read my home page then you know the topic of automation is my primary focus – so this topic for TSQL2sday got me excited as it will also definitely help me towards my “techie” goals posted during TSQL2sday #014!

I think everyone knows why we automate – to improve efficiency and reduce errors/waste; what we automate – repetitive tasks; and when we automate – when the cost of implementing the automation is less than that of continuing the "manual" method (calculate the ROI). So – where and how should we apply automation in the SQL Server world?

Where? Installations, of course! As hopefully you are aware – there is more to installing SQL Server than just clicking next, next, next, … finish!  Sure SQL Server will be up and running if you do that – but will it meet your corporate standards?  Plus, there really is more you should do (configuration-wise) after running setup.exe. But the details of that are for a future post!  Right now, I want to focus on the how of automating the installation and configuration process.

SQL Server provides the ability to specify a configuration file containing the parameters required by setup.exe so that you can run an "unattended" installation. This has been available as far back as I can remember (and although I've worked with SQL Server since the 4.21 days, I'll only confess to remembering SQL 2000 forward!). Each version of SQL Server has a slightly different configuration file, so be sure to refer to the current BOL for the version you are implementing (although hopefully very few of you still have to do SQL 2000 installs!).
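
As a hedged example of what launching such an unattended install can look like (parameter names and requirements vary a bit by version, so confirm against the BOL setup documentation for yours):

setup.exe /ConfigurationFile=ConfigurationFile.ini /Q /IAcceptSQLServerLicenseTerms

The /Q switch runs setup with no user interface; use /QS instead if you want to watch the progress screens without being prompted.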

If you want to take your deployment automation even another step forward, SQL Server 2008 R2 provides the ability to SysPrep your SQL Server image – http://technet.microsoft.com/en-us/library/ee210664.aspx.  I haven’t personally tried this, but depending on your environment you may want to check out this feature.

Once you have your unattended installation files configured so that all your deployments install exactly the same way, the possibility of mental errors by a well-meaning human (including yourself!) who simply misses changing a default on a screen to your corporate standard is greatly reduced. But you probably want to further automate your post-install configurations, such as changing the default size and number of files allocated to tempdb, setting up your standard maintenance jobs, etc. Or maybe you want to run some "pre-install" steps before executing the SQL Server setup (like ensuring PowerShell 2.0 is installed). So how do you package all this up for ultimate flexibility and reduce the possibility of missing a script? Enter DBLaunch.

DBLaunch is a product from DBA24Hrs for helping DBAs with automating installation, configurations, and maintenance tasks.  From an installation task perspective, DBLaunch allows you to define all of your setup files (including service packs) and “configuration” scripts whether batch files, PowerShell scripts, or T-SQL scripts. Then you create “templates” for each distinct installation type that you need – SQL Server 2005 x86 Standard Edition w/SP4, SQL Server 2008 Enterprise Edition w/SSIS, etc. You can clone templates in cases where you want the same configurations overall, but maybe need a different collation setting for example.

The benefits which I personally witnessed? Documentation of the steps for an installation was reduced by about 75%! Personnel time to perform an install dropped from 4 hours (including wait time for processes to complete before continuing to the next step) to 10-15 minutes! After running manual pre-checks (mostly to ensure storage is correctly configured), all the installer has to do in DBLaunch is select the template to use, specify the ServerName and InstanceName to be installed, and click "start". The installer is then free to go do another task (including starting another installation on a different server), come back in 45 minutes to an hour (depending on target server location and bandwidth to the installation source), and verify that the install completed successfully. Prior to DBLaunch, it could take 2-4 hours to manually execute our entire installation process. And we are assured that no steps were missed, which almost completely eliminated the time spent troubleshooting failed installations and ultimately reduced our delivery time to the customer.

If you are in an enterprise environment, then you absolutely must use automated means to ensure a consistently delivered installation and configuration.  Start with developing your unattended installation configuration file, then look to see what other processes in your configuration can be automated via scripts, then take a look at DBLaunch to see if it can help you pull it all together.  I’m not aware of another tool in the marketplace which does this (and more) – we justified it strictly on the ROI from installations.

So, regardless of whether you build your own deployment tool or go with a third-party product, bottom line is that you will ultimately save time, money and frustration by automating your installs!