
PASS Summit 2011 – What I Learned

In prior posts, I reflected on the things that I think make the PASS Summit such a great value – content, volunteers, Microsoft support, and networking opportunities.  This post will concentrate on the content provided at this year’s Summit and basically what I learned over those 3 days. 

In no particular order – here’s the brain dump (from my notes, of course, with occasional commentary thrown in!):

  • SQL Server Code Name “Denali” will officially be SQL Server 2012 and released in the first half of 2012.
  • Project Crescent is now Power View.
  • Microsoft is now promoting both on-premises and cloud-based databases as co-existing for the foreseeable future; but the cloud is getting bigger for SQL Azure (150 GB databases supported by year-end).
  • Big Data has arrived and Microsoft has taken notice – and action – by releasing the Microsoft SQL Server Connector for Apache Hadoop … with more coming in this arena.
  • I run the Houston Area SQL Server User Group so these next items are of importance to me personally:
    • PASS may finally establish a speaker bureau to help local chapters find speakers
    • PASS will be increasing services to chapter leaders, providing DNN (DotNetNuke) training, etc.
  • At Bob Ward’s (Principal Architect Escalation Engineer in Microsoft CSS) “Inside Tempdb” half-day session:
    • The model database in Denali (SQL Server 2012) will be 4MB, up from the 2MB it has been forever. Why is this important? Well, recovery isn’t complete until tempdb is started, and the default size for tempdb (if you don’t change it) is the size of model. 4MB isn’t much these days, but it is a change you should be aware of.
    • tempdb is the “garbage dump of the engine”. It is used for user objects such as temp tables, table variables, temp procs, user-defined objects and online index space. It is also used by internal objects – sorts, work tables, work files (used for hash joins) and version store. 
    • Consider moving tempdb to its own storage volume if it has heavy I/O. SSD is an option, but given how expensive it is, make sure you can’t make better use of it somewhere else first.
    • Bob’s rule of thumb for how many data files to allocate for tempdb:
      • Less than 8 CPUs (cores), 1 file per CPU
      • 8 or more CPUs, start with 8 files and increase by sets of 4 until contention subsides
      • Also see Paul Randal’s blog for a slight variation on this theme.
  • At the “What’s New in Manageability in Denali” Microsoft session:
    • Since SSMS is now built on Visual Studio 2010, you will get multi-monitor support! Hooray!
    • There is a new Database Recovery Advisor which will be able to build a restore plan for you based on available backup files (database, log, etc.) in a folder even if you don’t have any msdb backup history. It can also handle split backups.  I’d cheer for this too, but seriously – this should have been in the product years ago!
    • Log viewer will now work with offline instances and has improved performance.
    • SCOM Management Pack (to be released at the same time as SQL Server 2012)
      • Ability to discover Availability Groups
      • Detailed knowledge of AlwaysOn tasks
      • Performance counter collections for AlwaysOn
      • Policy-Based Management integration, yes, integration – it’ll pull your PBM policies right into SCOM and alert on failures
      • Enhanced mirroring support – discovers and diagrams
      • 20 new rules for replication
      • Support for mount points (another “hooray/finally!”)
  • At the “AlwaysOn: Active Secondaries” Microsoft Session:
    • You can offload backups to any replica so as to not impact production. The LSN is communicated back to the primary which then notifies all replicas so that all log backups in all replicas form a single log chain.
    • If that sounds like a nightmare for a restore, remember the Database Recovery Advisor mentioned previously and restore is simple!
    • Differential backups are not supported on secondaries.
    • Only “Copy Only” full backup is supported on secondaries.
    • Advisable to store backups centrally (so you can just point the Database Recovery Advisor to a single location to create the restore plan).
    • Declarative policy to determine where backups occur automatically. This is advisory only, not enforced. Implement it via a system function which returns a Boolean indicating whether this is the preferred backup location (see the sketch just after this list).
    • And, of course, probably the number one reason for multiple secondaries (besides general DBA paranoia) is to offload reporting workloads without having to use database snapshots.
  • In MVP Tim Ford’s “Periodic Table of DMOs” session:
    • Using the format of the Periodic Table of the Elements from general science, Tim gives us a creative way to organize all the DMOs (Dynamic Management Objects – views and functions) now available to DBAs for troubleshooting and general understanding of what is happening within a database engine instance. I might actually use this to remember when and what to use.
  • SQL Server guru Paul Randal (blog) busted more DBA Myths in a spotlight session. As usual, not all myths were blatantly true or false – there were a few “it depends”! This session lasted until 6:30pm and I think most would have stayed until 9:30pm to listen to Paul’s in-depth but humorous explanations about why these myths were true/false or especially “it depends”.
  • Allan Hirt, Clustering MVP, demonstrated “Denali on Windows Server Core”:
    • Currently Denali is only supported on W2K8R2 SP1, no Windows 8 support announced yet (as of October 14, 2011).
    • You’ll need to add .NET and Allan will post on his blog how to do this effectively
    • You’ll need to add PowerShell 2.0. See KB976736 and use option 2!
    • If installing a Windows Cluster, then all nodes must be Server Core or all must allow GUI.
  • Microsoft Senior Program Manager for the Database Engine Security features, Il-Sung Lee, presented “What’s New in Security for SQL Server Code Name Denali”:
    • Windows groups can now have a default schema. If a user belongs to more than one group, the group with the lowest principal ID will be chosen.
    • User-defined server roles
    • All SKUs will now have the ability to specify server audits; database audits still require Enterprise Edition or higher.
    • User-defined audit events (via sp_audit_write)
    • Database Authentication (for contained databases)
    • Lots of cryptography changes – deprecating older methods, supporting newer, more secure ones
    • Il-Sung’s team blogs at: http://blogs.msdn.com/b/sqlsecurity/
  • SQLCAT brought in speakers from 4 companies who are early adopters of Denali & AlwaysOn to discuss their HA/DR requirements and migration paths to AlwaysOn. I’ll definitely be revisiting their solutions in the future and suspect there will be some whitepapers posted on SQLCAT’s website soon.
  • In “Where should I be encrypting my data”, MVP Denny Cherry provided a good refresher on some encryption basics:
    • Encryption will almost always increase the size of stored data
    • Encryption will decrease the usefulness of data deduplication
    • Encryption will add CPU load
    • Enabling TDE for an application database will also enable it for tempdb
    • TDE is for “data at rest” including backups
    • For “data on the wire” use IPSEC or SSL
    • If your storage uses MPIO, then you can get “data at rest” protection at the LUN level. If you copy a file to another server, it would be readable; if you detach and attach the LUN elsewhere, it would not be readable.
  • I also attended one of the “Lightning Talk” sessions, where 8 speakers had 5 minutes each to convey their topic to the audience. Some were humorous, some were serious, and one was seriously humorous! Grant Fritchey’s “Backup Testing, The Rant” is not something any of us present will ever forget – nor will we ever forget the importance of actually testing your backups! I really wish that had been caught on video, because the audio on the DVD just won’t do the presentation justice. Two other speakers in the session covered topics near and dear to me – “Thinking of Hosting a SQLSaturday?” (John Sterrett) and “Build Up” (Niko Neugebauer). Both encouraged attendees to get involved in their local SQL Server community activities – and if you don’t have a local group – start one!
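Circling back to the backup-preference item in the AlwaysOn notes above – here’s a minimal sketch of how that advisory check could be wired into a backup step, assuming the system function mentioned in the session is sys.fn_hadr_backup_is_preferred_replica (the name it carries in SQL Server 2012) and using placeholder server, database, and share names:

Import-Module SQLPS -DisableNameChecking

$instance = "MyServer\MyInstance"
$database = "MyAGDatabase"

# The function returns 1 (true) when this replica is the preferred place,
# per the Availability Group's backup preference, to back up the database.
$check = Invoke-Sqlcmd -ServerInstance $instance -Query "SELECT sys.fn_hadr_backup_is_preferred_replica('$database') AS IsPreferred;"

if ($check.IsPreferred)
{
    # Copy-only full backup - the only kind of full backup supported on a secondary.
    Backup-SqlDatabase -ServerInstance $instance -Database $database -BackupFile "\\BackupShare\$database.bak" -CopyOnly
}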

Now to start making the list of all the interesting sessions I couldn’t get to that I’ll want to watch (or re-watch) as soon as the Conference DVDs arrive! I foresee team “lunch ‘n learns” being scheduled.

My Denali First Glance – PowerShell Enhancements

First, PowerShell 2.0 is now required as a prerequisite for installing SQL Server Denali (SQL Server 11). Hopefully, you’ll only be installing Denali in a Windows 2008 R2 environment, which has PowerShell 2.0 installed by default. But, if not, you’ve now been forewarned – just in case you don’t read the “readme” prior to installing!
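Not sure which PowerShell version a given server has before you start? A quick check from any PowerShell prompt:

# $PSVersionTable only exists in PowerShell 2.0 and later; if this errors with
# a "variable not found" type message, you're still on 1.0 and need to upgrade.
$PSVersionTable.PSVersion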

SQLPS is now a module (a new feature in PowerShell 2.0). The SQLPS utility has been deprecated, but if you use it, it apparently will start PowerShell 2.0 and import the SQLPS module.

Additionally, there is now a module of cmdlets for SSAS (Analysis Services). After installing Denali, if you execute the command Get-Module -ListAvailable you’ll see that, in addition to SQLPS, SQLASCMDLETS is also displayed. This module contains 11 cmdlets.
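If you want to see the cmdlet names for yourself, one way (assuming the module imports cleanly on its own) is:

# List the Analysis Services cmdlets that ship with Denali.
Import-Module SQLASCMDLETS
Get-Command -Module SQLASCMDLETS | Sort-Object Name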

But, my focus is Database Engine, so that’s all I’ll say about SSAS. If you are a BI guy or gal – check it out!

When you first import the SQLPS module, you will get the following warning unless you import the module using the -DisableNameChecking parameter.

You’ll notice that as soon as you import the SQLPS module, your location changes to the SQLSERVER provider. To see what’s new in this hierarchy, just type: dir
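Putting those two notes together, the start of a typical session looks something like this:

# -DisableNameChecking suppresses the import warning mentioned above.
Import-Module SQLPS -DisableNameChecking

# The import leaves you at the SQLSERVER:\ provider, so a plain dir lists its folders.
dir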

XEvent, IntegrationServices and SQLAS are all new. Thus, more features and components of the SQL Server stack are now being exposed for PowerShell management.

Then, there are all the new cmdlets available in SQLPS.
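To see the full list (and count it) on your own install – the exact count may vary between CTP builds:

# Run after importing SQLPS.
Get-Command -Module SQLPS -CommandType Cmdlet | Sort-Object Name
(Get-Command -Module SQLPS -CommandType Cmdlet | Measure-Object).Count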

Considering there were only 5 cmdlets for SQL Server 2008, I now count 29! Most appear directed at new features in Denali such as High Availability/Disaster Recovery (HADR) and Availability Group configurations. I’m sure I’ll be looking at those in detail when I get around to studying AlwaysOn. However, two very traditional database commands have now been given their own cmdlets – Backup-SqlDatabase and Restore-SqlDatabase. These two cmdlets are modeled after the Microsoft.SqlServer.Management.Smo.Backup class, so if you are used to the properties in that class, then the parameters for these cmdlets will be very familiar.

That means you can now execute commands like this as a SQLAgent PowerShell job step:

Dir SQLSERVER:\SQL\MyServer\MyInstance\Databases | Backup-SqlDatabase

This simple command will back up all user databases on an instance using all the defaults, of course. But, you can start imagining the possibilities here – especially if you start integrating any additional parameters needed in your environment.
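For example, here’s a sketch of a slightly more tailored call – a compressed log backup of one database to a specific file (the server, database, and path names are just placeholders):

Backup-SqlDatabase -ServerInstance "MyServer\MyInstance" -Database "MyDatabase" -BackupAction Log -BackupFile "\\BackupShare\MyDatabase_log.trn" -CompressionOption On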

As indicated by the title, this is just a “first glance” at what is coming in Denali; I’m sure I’ll be investigating and writing more about using PowerShell with Denali in the future. What new PowerShell enhancements are you looking forward to implementing?

What’s in Your msdb?

So, what’s in your wallet… er, msdb?

If you’ve been working with Microsoft SQL Server anytime at all you can name the system databases fairly quickly – master, model, msdb, and tempdb. Almost rolls off your tongue as smoothly as Tenaha, Timpson, Bobo, and Blair (if you don’t know that reference, click here for some interesting Texas folklore). I’ve been working with SQL Server since 4.21 and the system db names have not changed from release to release. But, what did change drastically over all those releases are the contents – especially for msdb. Have you taken a look lately? I did and here is what I found.

Lots of new tables, lots of new views, and lots of new roles – well, just lots of new objects period! I always investigate what’s changed in master with every release, but it seems I’ve been delinquent in thoroughly examining msdb changes. It appears every new feature being added to the SQL Server database engine is being “managed” from the msdb database.

The objects supporting SQLAgent have always been in msdb – jobs, operators, alerts; along with the objects supporting database backup and recovery sets. Then DTS came along which later morphed into SSIS – guess where those objects are stored?  Maintenance plans, log shipping, and database mirroring also join the party in msdb. And, oh yeah – SQLMail which became DatabaseMail…

Now with SQL 2008 and R2, you can add to the list – CMS (Central Management Server), Data Collector, PBM (Policy Based Management), DAC (Data Tier Applications), UCP (Utility Control Point)… but “Wait!” you say. “Don’t the Data Collector and UCP have their own databases?”  Well, yes, they do – typically MDW and UMDW respectively. But, those databases only hold the collection results – the management and configuration of those features is all contained in msdb.

Here’s what the SQL Server 2008 R2 BOL says about msdb:

The msdb database is used by SQL Server Agent for scheduling alerts and jobs and by other features such as Service Broker and Database Mail.

After seeing this, I had to laugh – I totally missed Service Broker in my list, but then I’ve never implemented it. On the other hand, look at all the features using msdb that BOL just lumped into “other features”! Bottom line – the BOL description really sells msdb short on its importance to so many features. AND, the default recovery model for msdb is Simple. Before SQL 2008 I would have said that a daily full backup was typically sufficient. You didn’t usually make too many changes to jobs such that you couldn’t easily recreate a day’s work if you lost it restoring to the prior day. And, best practice said to take a backup after making modifications which impacted it anyway… but now, with so many features tied to msdb – do you really know when you are making changes to msdb? Considering all that is now dependent upon msdb – is your backup and restore strategy for msdb adequate?
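If you want to sanity-check your own msdb right now, here’s a minimal sketch using the Backup-SqlDatabase cmdlet mentioned earlier (on earlier versions the equivalent BACKUP DATABASE T-SQL works just as well; the instance and path names are placeholders):

# What recovery model is msdb actually using?
Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -Query "SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'msdb';"

# Take a full backup of msdb.
Backup-SqlDatabase -ServerInstance "MyServer\MyInstance" -Database "msdb" -BackupFile "\\BackupShare\msdb.bak"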

However, before I go examine my msdb recovery strategy, I’m thinking of opening a Connect suggestion – rename msdb to KitchenSinkDB. Do you think I could get some votes for that? ;-)
