
TSQL2sday #70 – Strategies for Managing an Enterprise


Jen McCown (Twitter) of Midnight DBA is the guest host for this month's SQL blogger event known as T-SQL Tuesday (#TSQL2sday), which was started almost 6 years ago by Adam Machanic. This month, Jen has assigned us the topic: Strategies for Managing an Enterprise. Jen will be doing a wrap-up summary of all blog posts submitted on this topic per the rules, and I'm looking forward to everyone's input on this subject.

I’ve been presenting a session for the past several years at SQLSaturday events entitled “Managing SQL Server in the Enterprise with TLAs”. The TLAs (three-letter acronyms) are CMS (Central Management Server), PBM (Policy Based Management) and EPM (Enterprise Policy Management Framework). I’ll be presenting this session at SQLSaturday #447 Dallas on Oct. 3rd, 2015, so you can come learn the details of these features then. But, per the assigned topic for this post, let’s focus on the “strategies” driving the usage of these features.

For me, one of the main goals in managing the enterprise is finding ways to reduce the effort of managing that landscape, whether it holds two instances of SQL Server or two thousand. A strategy for getting there is organization. The CMS enables you to define groups to which you register your SQL Server instances, and then you can perform tasks against those groups. Why perform a task per instance when you can do it for multiple instances at one time? The CMS is actually defined in tables in the msdb database of the designated instance. I would recommend having a dedicated "central management server" instance which you use for CMS, PBM, EPM, and other administrative tasks.

With CMS, you can create many groups and register instances in multiple groups based on the tasks that you may want to perform against those groups. For example, you can create groups organized by SQL Server version, by Production\UA\QA\Dev\Test environment, by application, or by location; and be sure to have one group with all your SQL Server instances registered to it. SQL Server Management Studio (SSMS) enables you to run "multi-instance" queries using a CMS group. That is, you execute the contents of the Query window against every instance in the specified group, and the results are returned to the SSMS console.
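Because the CMS is just msdb tables on your central management server, you can also query it directly, which is handy for feeding instance lists into scripts. A minimal sketch, assuming the standard msdb CMS views (dbo.sysmanagement_shared_server_groups and dbo.sysmanagement_shared_registered_servers):

-- Run on the CMS instance: list each group and the instances registered to it
SELECT g.name        AS group_name,
       s.server_name AS instance_name
FROM msdb.dbo.sysmanagement_shared_server_groups AS g
JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
     ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.server_name;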

A second strategy in managing the enterprise is standardization. Policy Based Management enables you to define expected settings (i.e. conditions) and verify whether an instance of SQL Server meets those conditions. Examples of policies could be checking that the sa login is disabled or ensuring the AUTO_SHRINK option is off on all databases. My recommendation is to configure the policies on the same instance as your CMS groups (e.g. your dedicated central management server) so that you only have to manage one set of policies. Policy definitions are also stored in the msdb database. You will also want to export the policies, as XML-formatted files, to a central file server. When evaluating the policies on a specific instance, you may use either the central management SQL Server instance or the file server where they are stored as the source. SSMS also allows you to manually evaluate policies against a CMS group, returning all the results to your SSMS console.
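For a sense of what those two example policies check, here is a hand-rolled T-SQL spot check you could run as a multi-instance query against a CMS group; it is only the manual equivalent of the conditions, not how PBM itself stores or evaluates them:

-- Is the sa login disabled? (sa is always principal_id 1)
SELECT name, is_disabled
FROM sys.server_principals
WHERE principal_id = 1;

-- Which databases still have AUTO_SHRINK turned on?
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1;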

The third strategy is automation. If you have a CMDB (Configuration Management Database), you can use it as the source for populating your CMS groups: script the entire process so the CMS groups stay current with the CMDB contents, and set it up as a SQLAgent job scheduled as needed. Policies can be assigned to categories. The EPM Framework provides a mechanism (a PowerShell script) to automate the PBM evaluations by category against a specific CMS group and store the results for reporting. EPM requires a database repository to store the results, so again I recommend creating this database on a dedicated central management server. Once you've been through the exercise of setting up your CMS, establishing policies, and configuring the EPM Framework for your environment, you'll see additional opportunities to utilize the CMS for automating other tasks.
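If you want to see how your policies line up against categories before pointing the EPM Framework at them, the PBM metadata also lives in msdb. A hedged sketch, assuming the standard syspolicy views and their category columns:

-- List policies by category on the central management server
SELECT c.name AS category_name,
       p.name AS policy_name,
       p.is_enabled
FROM msdb.dbo.syspolicy_policies AS p
LEFT JOIN msdb.dbo.syspolicy_policy_categories AS c
       ON c.policy_category_id = p.policy_category_id
ORDER BY category_name, policy_name;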

So, start leveraging the CMS, PBM, and EPM features today to reduce your efforts by organizing your instances, increasing standardization, and automating tasks in your enterprise!


TSQL2sday #68 – Just Say No to Defaults

T-SQL Tuesday (aka #TSQL2sday) is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Andy Yun (blog | twitter) and the selected theme for this month is “Just Say No to Defaults”.

This is really embarrassing, but I’ve had a blog post started for this topic for years and somehow never got around to finishing it! Thanks, Andy, for giving me a reason to finally address a couple of my “must change” defaults.

Ease of installation is definitely a feature that has helped SQL Server to proliferate. You can have a functional system just by running setup and clicking Next, Next, Next….Finish! Without having to make any real decisions about what you are doing, you can be up and running in no time.

When installing the database engine component, the first change to be considered from the defaults presented during setup is the location of the various file types – however, I’m going to save that for others to address and may come back to it in a future post.

Today, I'm going to address a default that you can't change during the setup dialog or via configuration file parameters. It's a SQLAgent setting that must be adjusted post-install.

Ever go to review the history for a job and find nothing, or only a couple of recent entries, even though you know the job has been running for weeks or even months? So, where is all that history? Sorry, you were the victim of the ridiculous defaults shown below, which limit the total number of rows in the msdb.dbo.sysjobhistory table as well as set a maximum number of history rows per job.
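You can see how little history has survived with a quick count per job; a minimal sketch against the standard msdb tables:

-- How many history rows remain for each job, and how far back do they go?
SELECT j.name AS job_name,
       COUNT(h.instance_id) AS history_rows,
       MIN(h.run_date) AS oldest_run_date  -- stored as an integer, yyyymmdd
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysjobhistory AS h
       ON h.job_id = j.job_id
GROUP BY j.name
ORDER BY history_rows;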

To find this dialog in SSMS, right-click on SQL Server Agent, select Properties, and then select the History page.

[Screenshot: SQL Server Agent History properties dialog, default settings]

These are defaults that you definitely want to change. In fact, instead of just increasing the maximum number of rows for the table and per job, I'd recommend deciding on the time frame for which you want to keep your SQLAgent job history: uncheck the "Limit size of job history log" option, check the "Remove agent history" option, and specify the desired time frame instead, as shown below. Many companies already have specifications for how long to retain activity logs, so using a time period that meets or exceeds those requirements should be helpful when audit time comes.

[Screenshot: SQL Server Agent History properties dialog, time-based retention]
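If you prefer to script the time-based cleanup rather than rely on the dialog, msdb ships a purge procedure that accepts an @oldest_date parameter; a minimal sketch, using a 90-day window purely as an example:

-- Remove SQLAgent job history older than 90 days
-- (this could itself be scheduled as a SQLAgent job)
DECLARE @oldest datetime = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @oldest;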

Depending on the number of jobs and the frequency at which each runs, you may also need to keep a close watch on the size of msdb after changing this setting, so that you can size your msdb files to let the sysjobhistory table grow to its expected size without triggering autogrowth. Remember to manage msdb just like any other application database with appropriate purges, backups, and other maintenance.
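A couple of quick checks you might run periodically while the history grows toward its steady-state size; a minimal sketch:

USE msdb;

-- Space currently used by the job history table
EXEC sp_spaceused N'dbo.sysjobhistory';

-- Current size and growth settings of the msdb files
SELECT name,
       size * 8 / 1024 AS size_mb,
       growth,
       is_percent_growth
FROM sys.database_files;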

I can't wait to see what others say "no" to in Andy's round-up for this event. I'll be looking for my other must-change items, and if I don't see them, then I will be posting more soon!

TSQL2sday #32 – A Day in the Life

TSQL2sday is a monthly SQL Server blogger event started back in late 2009 by Adam Machanic (blog | twitter). For more info on its beginning and purpose see the origin of TSQL2sday. Each month a different SQL Server blogger is the host (announces the theme and compiles a recap). This month’s event is hosted by Erin Stellato (blog | twitter) and the selected theme for this month is “A Day in the Life”.

Erin challenged us to track what we did in our jobs for a specific day and write about it. This is great because I often have trouble explaining to others (especially non-IT folk) what my title of SQL Server Service Engineer really means. However, as this exercise is just supposed to cover a single day, this is just a small sample of what I do. There is no such thing as a “normal” day for me. Sometimes my tasks are based on the “crisis du jour” prioritization method, and sometimes I can actually follow the team work plan. The variety in my job is one of the things I like about it. So here goes…

Unless I have an early morning meeting with global colleagues, my day nearly always begins with processing email. Since I work in a global organization in the region whose workday is last to begin, even if I'd cleared my Inbox the day before, I always open my mailbox to encounter new emails from European and Asia-Pacific colleagues who have already completed or are wrapping up their workday. In that sense, this day starts out as just a normal day (no early meetings!).

Unfortunately for this write-up, it appears that summertime may be impacting my email load in a positive sense, as I have only a handful of emails, and only as a cc on a couple of subjects which one of my teammates is handling. One of the issues has to do with deploying SQL Server Enterprise Edition versus Standard Edition and the licensing implications for the customer. My team is comprised of technical experts – we can tell the customer if what they are trying to do requires a specific edition of SQL Server to use the requested feature, but we are not involved in the licensing agreements between Microsoft and each customer. That is for others to figure out!

Email done and no looming crisis for today, I can get back to the task I've been working on previously – writing an automated process to roll out multiple T-SQL scripts to multiple instances using PowerShell. These are the scripts which update the standard tables and stored procedures in the admin database we install on all instances, along with a set of SQLAgent jobs which the operational DBAs use for system maintenance. Every so often, we need to roll out updates to these objects. Our current automated process for doing this (which was developed for SQL 2005) isn't as automated as we'd like it to be. We have since created a CMS and are utilizing registered groups to run various processes (like EPM), and now we want to extend that concept to this activity as well. I'm thinking that within a couple of hours I can write a script that will save our operational DBAs literally hundreds of man-hours. Easy, right?

If you've worked with PowerShell at all – or any programming language, for that matter – you know there is always more than one way to write a process to accomplish the task at hand. The challenge is in finding the most efficient way that gives you what you want. Our old script to run a set of .sql files was written in VBScript and called the sqlcmd utility. I figured that since I'm writing this in PowerShell and already using Invoke-Sqlcmd to get the list of instances from the CMS, I could use the Invoke-Sqlcmd cmdlet as shown in the second example in BOL and it would work just like sqlcmd. Wrong! It seems that example only works if you are running a SELECT statement in your InputFile. This particular set of .sql files should have no output unless there is an error, and in my test I have a script which I know produces an error – but my output file is empty.

I try various parameters such as -ErrorLevel and -SeverityLevel, and I even use -Verbose, to no avail – still nothing is piped to my output file. I consult with my teammates to see if they have tried this before; I search for examples on the Internet, and the best I can find in one of the forums is someone else encountering the same thing, but with no solution for me. I can be stubborn sometimes and I'm not about to give up yet – but after a couple of hours of struggling, I fire off an email to my SQL PowerShell buddy Allen White (blog | twitter) asking for his input: can I do what I'm trying to do with Invoke-Sqlcmd, or should I revert to calling sqlcmd?

While waiting for Allen to respond, a couple more emails have hit my Inbox. Yay! It appears that our request to rebuild one of our team's test servers has been completed. We try not to do this too often, but part of engineering is writing scripts \ installing \ testing \ uninstalling \ enhancing scripts…repeat; over the course of time, things sometimes get so messed up from all the testing (and the occasional bad script) that you just have to start over with a clean image. This is now a box we plan to use for testing our processes on SQL Server 2012.

It doesn’t take long before I have a reply from Allen – I hope he doesn’t mind if I quote him:

I honestly believe that it’s best to use the tool that best suits the task, so I’d use sqlcmd here, because it works the way you want it to. 

Thanks Allen for the reminder not to use a hammer when a screwdriver is what you need! Sometimes, a hammer is all you have, but not in this case. 

Now, it's time for lunch. I head down to the cafeteria with my teammates and join other colleagues at our usual table. I don't hang around too long chit-chatting, as I want to get back to my desk, switch out my code, and test so I can announce success at our afternoon team meeting.

Remember what I said earlier about there being more than one way to do something? Now I have to decide how to go about calling sqlcmd.exe from PowerShell. I need to pass variables for all the parameters based on the target instance and the input file to execute – and the output filename and location are dynamically determined as well, based on the target instance and input filename. I start by looking at Invoke-Command, then move to Invoke-Expression, but I'm still not getting my output file the way I want it, and I'm not able to detect whether sqlcmd experienced an error to report in my general execution log. I have an example using [diagnostics.process]::start($exe,$args).WaitForExit() that seems to be getting me close to what I want, but now it is time to break for my afternoon meeting.

I'm the Technical Team Lead for a team of three. We each have our areas of specialization within the overall work plan, but we try to keep each other in the loop so we can back each other up at any time. As needed (usually every 1-2 weeks), we meet formally to update the work plan, assign/reassign new/old tasks if needed, catch each other up on what we've each been working on, and brainstorm areas for improvement. This is one of those meetings, and since last week was a holiday week and we didn't meet, we have a lot to catch up on. The nice thing about a team is having others to bounce ideas off of, and that is what I do with my frustration in finding the exact syntax I need to get the results I want from calling sqlcmd inside PowerShell. The next thing I know, one of my colleagues has done their own search and found a code example – I look and express skepticism, as it is very much like what I'm already doing, but it has one key difference that just might matter; what can it hurt to try?

We continue to discuss how far we want to take this initial rewrite of our update process. We are also in the process of redesigning our whole automated install process, and ultimately we want the update process to utilize what we are putting into place there. However, we have a more immediate need to have the operations team roll out some updates, so we decide that version 1 of the update process will do no more than what we already have in place today (in terms of reporting), but it will be automated such that the DBAs only need to review the central output file for any problems. Selecting the systems requiring an update into a special CMS group can be automated as well, as can scheduling the update itself in SQLAgent. We decide to make further enhancements for logging the process's results into a central table in a future version.

Our meeting continues with more brainstorming about the challenges of developing an install and configuration solution for SQL Server which can account for multiple versions and differing customer standards (e.g. install locations). We plot out on the whiteboard differing ways we can handle this – probably the umpteenth discussion like this that we've had, but each time we come in with new experiences and thoughts from what we decided previously (and in some cases started trying to implement), so we are continually refining the solution. We are becoming more confident that we are developing a standardized but flexible solution which is also more sustainable across multiple versions of SQL Server than our existing process.

The meeting concludes and although I’m anxious to try the code snippet my colleague found, it is really time for me to head home. I arrived at the office much earlier this morning than my normal start time trying to beat the rain and now I need to try to get home before the next round hits. There is some flooding already occurring around town. Working further on this script can wait until later. I know that once I do get started back on it, I won’t stop until I have it totally finished. That’s my life!

I probably learned more today in trying all the ways that didn’t work the way I thought they would than if the first code I tried had worked. This experience will pay off later, I know.

Today was an “Edison day”:

I am not discouraged, because every wrong attempt discarded is another step forward.

I have not failed. I’ve just found 10,000 ways that didn’t work.


P.S. I did finally get the script functioning the way I wanted the following day and it will save our operations team hundreds and maybe even thousands of hours. This is what I do!

TSQL2sday #026 – Second Chances

What is TSQL2sday? Back in late 2009, Adam Machanic (blog | twitter) had this brilliant idea for a monthly SQL Server blogger event (the origin of TSQL2sday). This month's event is hosted by David Howard (blog | twitter), and this month Dave is letting us choose our topic from any of the prior 25 topics! As my first foray into this event wasn't until the 14th occurrence, I really like this idea and selected "TSQL2sday #007 Summertime in the SQL" as my second-chance topic. Okay, so it is January, but it was 70+ degrees in Houston today, so quite balmy. However, that wasn't why I chose this topic; I really chose it because this topic was about your favorite "hot" feature in SQL Server 2008 or R2. I thought about "updating" the topic to SQL Server 2012, but I'm really not sure yet which new "hot" feature of SQL Server 2012 will turn out to be my favorite – and after 3 years, I definitely know which SQL Server 2008 set of features is my personal favorite – CMS and PBM.

The Central Management Server (CMS) and Policy-Based Management (PBM) features have made the overall management of large numbers of SQL Server instances, well, manageable.

The CMS enables us to organize instances into multiple different classifications based on version, location, etc. We rebuild the CMS on a regular schedule based on the data in our asset management system. This ensures that all DBAs have access to a CMS with all known instances. If you are not familiar with the CMS: it does not grant any access to the instances themselves, and connectivity through it works only with Windows Authentication, so there are no security loopholes here.

We then use these CMS groups as input into our various meta-data and compliance collection processes. Approximately 90% of our technical baseline compliance evaluation is accomplished via policies in PBM. We’ve incorporated all of this using the EPM (Enterprise Policy Management) Framework available on Codeplex with a few tweaks of our own to work better in our environment.

If you haven’t yet checked out the CMS and PBM features, I encourage you to do so today. I have two previous blog entries relating to this topic – “Managing the Enterprise with CMS” and “Taking Advantage of the CMS to Collect Configuration Data”.  I’d also highly recommend that you watch the SQL Server MCM Readiness Videos on the Multi-Server Management and PBM topics.

And, it is good to know that by the time this entry is posted – we should be back to our normal 50 degree January weather in Houston!  

TSQL2sday #024 – Prox ‘n’ Funx

What is TSQL2sday? Two years ago, Adam Machanic (blog | twitter) had this brilliant idea for a monthly SQL Server blogger event (the origin of TSQL2sday).  This month’s event is hosted by Brad Schulz (blog) and the selected topic is “Prox ‘n’ Funx” (aka Procedures and Functions).

Today, I'm sharing my favorite SQL Server metadata function – SERVERPROPERTY('propertyname'). This handy function has been around since at least SQL Server 2000. If you need a quick "report" of the high-level configuration information about your SQL Server instance, this is the function to use. Of course, with each new version of SQL Server it is subject to change, so always check usage in BOL (this link is to SQL 2008 R2 Books Online, but you can get to other versions from there).

If a property isn’t valid for a particular version of SQL Server (or if you just flat out typo the property name!), then NULL will be returned. Here’s a sample query for you to try out.  
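Something along these lines covers the high-level basics (each of these property names should be valid back to at least SQL Server 2005):

SELECT SERVERPROPERTY('MachineName')              AS MachineName,
       SERVERPROPERTY('ServerName')               AS ServerName,
       SERVERPROPERTY('InstanceName')             AS InstanceName,
       SERVERPROPERTY('Edition')                  AS Edition,
       SERVERPROPERTY('ProductVersion')           AS ProductVersion,
       SERVERPROPERTY('ProductLevel')             AS ProductLevel,
       SERVERPROPERTY('Collation')                AS ServerCollation,
       SERVERPROPERTY('IsClustered')              AS IsClustered,
       SERVERPROPERTY('IsIntegratedSecurityOnly') AS IsWindowsAuthOnly;

Try misspelling one of the property names and you'll see the NULL behavior described above.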

And if you need database property info – guess what?  There are DATABASEPROPERTY and DATABASEPROPERTYEX functions you should check out! Until next time – happy TSQL2sday!


TSQL2sday #20 – T-SQL Best Practices

What is TSQL2sday? Back in late 2009, Adam Machanic (blog | twitter) had this brilliant idea for a monthly SQL Server blogger event (the origin of TSQL2sday) on a unified topic.  This month’s event is hosted by Amit Banerjee (blog | twitter) and the selected topic is “T-SQL Best Practices”.

This will be short and to the point.  My #1 “best practice” tip when writing any code is to include comments! Whether a SELECT statement from a single table or a complex multi-table join using CROSS APPLY, please write a comment stating the objective of the command. You’ll be surprised how soon you forget why you wrote the command in the first place and why you wrote it the way you did.

I'll open up the proverbial can of worms, though, by stating my preferences for when to use block comments (/* */) versus dash comments (--).

I prefer to use the block method for actual comments.

/* Uncomment the code below in order to list all databases */

I prefer to use the dash comments to comment out actual T-SQL code.

--Select name from sys.databases

What’s your preference for T-SQL comment indicators?

TSQL2sday #19 – Disasters & Recovery (or Keep Your Chain Saw Handy)

What is TSQL2sday? Back in late 2009, Adam Machanic (blog | twitter) had this brilliant idea for a monthly SQL Server blogger event (the origin of TSQL2sday).  This month’s event is hosted by Allen Kinsel (blog | twitter) and the selected topic is “Disasters & Recovery”.

Like Allen, I also live in the Greater Houston area – albeit far enough away from the coast that storm surge is not an issue in a hurricane like it is for Allen, but the rain, wind, and potential tornadoes are. Companies in Houston don't just have Disaster Recovery (DR) plans; they have specific HURRICANE DR plans. I grew up hearing the stories of Hurricane Carla (1961) and experienced first-hand the aftermaths of Hurricane Alicia (1983) and Hurricane Rita (2005). But Hurricane Ike (2008) was the first hurricane I stayed home for – and here is what I learned!

DR is all about preparedness. You have to think about what can happen in a disaster and then what you will need to survive and recover in the short term and in the long term. Short-term recovery is more about protecting assets from further damage. My neighborhood experienced a direct hit from Ike – we were in the "eye" for over 1.5 hours before the "backside" hit us. During that time, everyone did an initial assessment of the damage incurred during the "front-side" attack. Ike hit in the wee hours of the morning, so strong flashlights were a must-have item. We were lucky – no damage could be seen to our roof or windows (we had boarded most, but started too late to get them all). However, several neighbors had trees fall through their roofs during this time. Those of us without damage helped those with damage quickly cover the holes with tarps before the backside hit. The problem was, we didn't know we had 1.5 hours – for all we knew, the backside would be on us in just a few minutes.

By the time that the worst of the storm had passed, it was just beginning to be daylight and we could start assessing the latest round of damage.  We’d heard trees crashing to the ground all night long – including a house-shaking thud about 8am when a neighbor’s tree fell towards our house and just grazed our back porch. As the rain and wind subsided enough that we felt safe to venture outside, we were able to start assessing the damage to our house and neighborhood.  However, we were not able to assess beyond our immediate neighbors’ houses due to multiple large trees which had fallen across the road in both directions.

My parents live about a half mile down the street, but decided to come “hunker down” at our house for the storm as we have a very large interior closet which could hold the four of us comfortably in case of tornadoes. We now needed to get to their house and check for damage, but due to the trees blocking the road – this was impossible via car.  That was when we realized that our chain saw had been left at my parents’ house!  My husband and father hiked the half mile over the trees and downed power lines. My parents’ house thankfully had no damage; so my father got on his tractor and my husband loaded the chain saws in the Kawasaki Mule and they began working their way back up the street, clearing the trees and debris to make the street passable.

Then it was time to hook up the generator. The houses in my neighborhood each have their own well and septic, so if we wanted water and basic facilities, we needed electricity. We had stockpiled enough gasoline for 5-7 days to run the generator just enough each day to keep the refrigerator\freezer cold enough for our food and generate the water we needed. We cooked meals using the gas grill on the back porch – which we normally used 3-4 times a week. And we had extra propane bottles for the grill.

Amazingly, for the first 2-3 days we actually still had the use of our landline phone. This was good, because we had no cell phone service those days!  Then, about the time we started getting cell service again, the landline went out – I think someone cleaning up fallen trees wiped it out.  Anyway, it was a good thing that we had both landline and cell services – I know a lot of people are giving up their landlines, but this experience will make me hold on to ours a little longer.

And of course, our cable modem for Internet connectivity was out of commission. But, that wasn’t a necessity in the immediate aftermath – especially as we did not have full electrical service.

All in all – we were very well prepared for our short-term recovery.  We survived – we had shelter, food, and water – even ice!

At the end of the fourth day of cleanup with the sound of chain saws and generators constantly buzzing in my ears, we started the generator but could not get the water well pumping.  It was already dark, and we decided that perhaps we’d not had proper power from the generator and had blown out our pump. I could live without A/C, but not water.  We had finished as much cleanup as we could do, so we unloaded the contents of our freezer into an ice chest for our neighbor; and then we headed out of town to join my sister’s family at a hotel in Waco.  My parents had already left to visit friends in the Texas Hill Country until power could be restored to the neighborhood.

From Waco, I was able to actually perform my job duties – all I needed was an Internet connection and my laptop! Our office building in Houston was officially closed except for essential personnel, and travel anywhere within the Houston area was still very risky due to all the downed power lines and debris. Houston area government officials were still asking people to restrict their area travel due to these conditions. So, I surprised my international colleagues when I was able to participate in our regular weekly teleconference and catch up on email.

After a couple of days in Waco, we came back home to pack up and leave again – for Denver. My husband was already scheduled to attend a conference there, and since power still wasn't going to be restored to our neighborhood anytime soon, I decided to go with him (thanks to some frequent flyer miles!). Like Waco, Denver also has Internet connectivity, and I had my laptop! :-) We did discover in the interim that our generator and water pump were okay – the circuit breaker on the generator had tripped and wasn't providing the proper voltage to run the water well; we just didn't notice that in the dark. As our week in Colorado was drawing to a close, our neighbor called with the news that power had just been restored to the neighborhood – about 16 hours before we planned to be home. It had been 15 days since Ike invaded our lives.

So – what were the lessons learned?

Short-term recovery needs:

  • Have all of your necessary equipment with you (e.g. the chain saw).
  • Have redundant communication options (e.g. landline & cell – also you can use Onstar minutes, if you have it).
  • Understand fully how to operate and troubleshoot your equipment (e.g. the generator’s circuit breaker switches).
  • Physical resources (i.e. manpower and tools) – we got to know all of our neighbors much better as we all pitched in on the cleanup in our neighborhood. Those with less damage helped out those with more damage.  Those with tractors and chain saws loaned them to neighbors without.

Long-term recovery needs:

  • Once basic necessities are met, the ability to find facilities which allow you to “return to work” (i.e. somewhere with Internet connectivity) to start regaining a sense of normalcy.
  • Consider a whole-house generator! (We considered and decided we can take several more trips to Colorado for the cost when assessed against the history of storms impacting the area. Of course, we recognize that similar to the warnings when investing in the stock market, past history is not a guaranteed indication of the future!)

These same lessons can be applied to your DR plan for your data center and SQL Servers.  Do you have the proper redundancies available?  Do you know the order in which all servers in the data center should be shut down and restarted, if needed?  If you have limited power after the disaster, what are the critical servers required to be running? (e.g. in my household case it was the water first, then the refrigerator, then optional items).  SQL Servers might be using Database Mirroring or Log Shipping to secondary data centers.  Do you have scripts to stop or move processing between the primary and secondary sites, if an entire data center is likely to be down?  Does your Operations staff understand what steps those scripts actually perform in case they need to troubleshoot, or perform the steps manually? That is, do they know how to reset the “circuit breaker”? Will your staff be able to work “remotely” if they can access the Internet? Do you know how long of an outage your data center can sustain on generator or other backup power?  Do you have a plan if it unexpectedly goes out or exceeds its limit before main power is restored?

While there is sufficient warning to take precautions when hurricanes approach, other disasters (e.g. the recent massive tornados across the U.S. and earthquake in Japan) strike without warning. The time to plan for all disasters is now.  Be sure that you have a family DR plan as well as one for your workplace!

Here’s hoping none of us has to implement either our personal or business hurricane plans this summer!