SQL Server 2012 RC0 – SSMS Review

Microsoft released SQL Server 2012 RC0 (Release Candidate) this week – just in time for those of us traveling over the holidays to have something to do as we head to Grandma’s (assuming we aren’t the one driving). With any new release, I suspect most of us do the same thing – run as fast as we can through the install process, open SSMS, and start playing! If you haven’t already been playing with “Denali” CTP3, then I encourage you to get the RC0 install (on a test server, of course) – if only for the new SSMS.

The first time you connect to an instance in Object Explorer, you’ll immediately notice differences from SSMS 2008 R2.

Oh look! There’s a folder called “AlwaysOn High Availability” – let’s go click it!

Yikes! That wasn’t nice. It seems like SSMS should already know this and show a “not available” icon, like it does for my SQL Server Agent, which is currently disabled.

OK – let’s try the other new folder, “Integration Services Catalog”. That’s interesting because I do have SSIS 2008 installed but haven’t upgraded it. The only option here when I right-click is “Create Catalog”. However, I then see that I need CLR enabled, and once I do that, there are some other decisions to make.
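
For reference, enabling CLR integration is a one-line change from a query window – a minimal sketch (on a test instance, of course):

-- Enable CLR integration (needed before creating the SSIS catalog)
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;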

OK – I guess this will be something I need to research further to know when we’ll want to use it and exactly how. But that is the whole purpose of this exercise: seeing what is new and where to focus future in-depth investigation.

So, back to drilling down through Object Explorer to see what else looks new…

Looks like I’m going to have to study up on Extended Events now that they’ve made it into SSMS, and Microsoft appears to be setting up some default entries here.

Now, I’ll start drilling down to the next layer of some of these objects. We use PBM (Policy-Based Management), so let’s see if anything is new there. Yep! Looks like Microsoft is installing policies by default in preparation for AlwaysOn usage.  So, when I get around to playing with AlwaysOn, it will be interesting to revisit these and see if they are automatically enabled when setting up AlwaysOn.

I didn’t show it in the image, but there are new Conditions created upon install for the AlwaysOn policies to use. And, obviously, with new features in SQL Server 2012, new facets related to those features have also been added (76 facets in SQL Server 2008 R2 versus 84 in SQL Server 2012). Interestingly, if I connect to a SQL Server 2008 or SQL Server 2008 R2 instance from SSMS 2012, I see all the facets for SQL Server 2012. I think this means I’ll be able to keep using my central PBM repository from SQL Server 2008 R2, but run policies specific to SQL Server 2012 (using our version of EPM). More to investigate!

Lastly for this initial drive-by look, I checked out the Management\Legacy folder. Here’s the comparison of SQL Server 2008 R2 SSMS (first) and SQL Server 2012 (second):

SSMS 2008 R2 Legacy Objects

SSMS 2012 Legacy Objects

Looks like DTS and SQL Mail are finally gone as promised! 

Until next time, Happy Exploring!

PASS Summit 2011 – What I Learned

In prior posts, I reflected on the things that I think make the PASS Summit such a great value – content, volunteers, Microsoft support, and networking opportunities. This post will concentrate on the content provided at this year’s Summit – basically, what I learned over those 3 days.

In no particular order – here’s the brain dump (from my notes, of course, with occasional commentary thrown in!):

  • SQL Server Code Name “Denali” will officially be SQL Server 2012 and released in the first half of 2012.
  • Project Crescent is now Power View.
  • Microsoft is now promoting both on-premises databases and cloud-based databases as co-existing for the foreseeable future; but the cloud is getting bigger for SQL Azure (150 GB databases supported by year-end).
  • Big Data has arrived and Microsoft has taken notice – and action – by releasing the Microsoft SQL Server Connector for Apache Hadoop … with more coming in this arena.
  • I run the Houston Area SQL Server User Group so these next items are of importance to me personally:
    • PASS may finally establish a speaker bureau to help local chapters find speakers
    • PASS will be increasing services to chapter leaders, providing DNN (DotNetNuke) training, etc.
  • At Bob Ward’s (Principal Architect Escalation Engineer in Microsoft CSS) “Inside Tempdb” half-day session:
    • The model database in SQL Server 2012 will be 4MB, up from the 2MB it has been forever. Why is this important? Well, recovery isn’t complete until tempdb is started, and the default size for tempdb (if you don’t change it) is the size of model. 4MB isn’t much these days, but it is a change you should be aware of.
    • tempdb is the “garbage dump of the engine”. It is used for user objects such as temp tables, table variables, temp procs, user-defined objects and online index space. It is also used by internal objects – sorts, work tables, work files (used for hash joins) and version store. 
    • Consider moving tempdb to its own storage volume if it has heavy I/O. SSD is an option, but given the expense, make sure you can’t put it to better use somewhere else first.
    • Bob’s rule of thumb for how many data files to allocate for tempdb (see the T-SQL sketch after this list):
      • Fewer than 8 CPUs (cores): 1 file per CPU
      • 8 or more CPUs: start with 8 files and increase in sets of 4 until contention subsides
      • Also see Paul Randal’s blog for a slight variation on this theme.
  • At the “What’s New in Manageability in Denali” Microsoft session:
    • Since SSMS is now built on Visual Studio 2010, you will get multi-monitor support! Hooray!
    • There is a new Database Recovery Advisor which will be able to build a restore plan for you based on available backup files (database, log, etc.) in a folder even if you don’t have any msdb backup history. It can also handle split backups.  I’d cheer for this too, but seriously – this should have been in the product years ago!
    • Log viewer will now work with offline instances and has improved performance.
    • SCOM Management Pack (will be released at the same time as SQL Server 2012)
      • Ability to discover Availability Groups
      • Detailed knowledge of AlwaysOn tasks
      • Performance counter collections for AlwaysOn
      • Policy-Based Management integration, yes, integration – it’ll pull your PBM policies right into SCOM and alert on failures
      • Enhanced mirroring support – discovers and diagrams
      • 20 new rules for replication
      • Support for mount points (another “hooray/finally!”)
  • At the “AlwaysOn: Active Secondaries” Microsoft Session:
    • You can offload backups to any replica so as to not impact production. The LSN is communicated back to the primary which then notifies all replicas so that all log backups in all replicas form a single log chain.
    • If that sounds like a nightmare for a restore, remember the Database Recovery Advisor mentioned previously and restore is simple!
    • Differential backups are not supported on secondaries.
    • Only “Copy Only” full backup is supported on secondaries.
    • Advisable to store backups centrally (so you can just point the Database Recovery Advisor to a single location to create the restore plan).
    • Declarative policy to determine where backups occur automatically. This is advisory only, not enforced – you implement it via a system function which returns a Boolean indicating whether this replica is the preferred backup location (see the sketch after this list).
    • And, of course, probably the number one reason for multiple secondaries (besides general DBA paranoia) is to offload reporting workloads without having to use database snapshots.
  • In MVP Tim Ford’s “Periodic Table of DMOs” session:
    • Using the format of the Periodic Table of the Elements from general science, Tim gives us a creative way to organize all the DMOs (Dynamic Management Objects – views and functions) now available to DBAs for troubleshooting and general understanding of what is happening within a database engine instance. I might actually use this to remember when and what to use.
  • SQL Server guru Paul Randal (blog) busted more DBA myths in a spotlight session. As usual, not all myths were clearly true or false – there were a few “it depends”! This session lasted until 6:30pm, and I think most would have stayed until 9:30pm to listen to Paul’s in-depth but humorous explanations of why these myths were true, false, or especially “it depends”.
  • Allan Hirt, Clustering MVP, demonstrated “Denali on Windows Server Core”:
    • Currently Denali is only supported on W2K8R2 SP1, no Windows 8 support announced yet (as of October 14, 2011).
    • You’ll need to add .NET; Allan will post on his blog how to do this effectively.
    • You’ll need to add PowerShell 2.0. See KB976736 and use option 2!
    • If installing a Windows Cluster, then all nodes must be Server Core or all must allow GUI.
  • Microsoft Senior Program Manager for the Database Engine Security features, Il-Sung Lee, presented “What’s New in Security for SQL Server Code Name Denali”:
    • Windows groups can now have a default schema. If a user belongs to more than one group, the group with the lowest principal ID is chosen (see the sketch after this list).
    • User-defined server roles
    • All SKUs will now have the ability to specify server audits; database audits still require Enterprise Edition or higher.
    • User-defined audit events (via sp_audit_write – see the sketch after this list)
    • Database Authentication (for contained databases)
    • Lots of cryptography changes – deprecating older methods, supporting newer, more secure ones
    • Il-Sung’s team blogs at: http://blogs.msdn.com/b/sqlsecurity/
  • SQLCAT brought in speakers from 4 companies who are early adopters of Denali & AlwaysOn to discuss their HA/DR requirements and migration paths to AlwaysOn. I’ll definitely be revisiting their solutions in the future and suspect there will be some whitepapers posted on SQLCAT’s website soon.
  • In “Where should I be encrypting my data”, MVP Denny Cherry provided a good refresher on some encryption basics:
    • Encryption will almost always increase the size of stored data
    • Encryption will decrease the usefulness of data deduplication
    • Encryption will add CPU load
    • Enabling TDE for an application database will also enable it for tempdb (see the TDE sketch after this list)
    • TDE is for “data at rest” including backups
    • For “data on the wire” use IPSEC or SSL
    • If your storage uses MPIO, then “data at rest” protection can be applied at the LUN level: if you copy a file to another server, it would be readable; if you detach and attach the LUN elsewhere, it would not be readable.
  • I also attended one of the “Lightning Talk” sessions, where 8 speakers had 5 minutes each to convey their topic to the audience. Some were humorous, some were serious, and one was seriously humorous! Grant Fritchey’s “Backup Testing, The Rant” is not something any of us present will ever forget – nor will we ever forget the importance of actually testing your backups! I really wish that had been caught on video, because the audio on the DVD just won’t do the presentation justice. Two other speakers in the session covered topics near and dear to me – “Thinking of Hosting a SQLSaturday?” (John Sterrett) and “Build Up” (Niko Neugebauer). Both encouraged attendees to get involved in their local SQL Server community activities – and if you don’t have a local group, start one!
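
A few of the notes above lend themselves to quick T-SQL sketches. First, Bob Ward’s tempdb guidance – a minimal sketch of checking the model/tempdb sizes and adding a tempdb data file (the file path and size are hypothetical; adjust for your environment):

-- How big are model and tempdb right now? (size is in 8KB pages)
SELECT DB_NAME(database_id) AS database_name, name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id IN (DB_ID('model'), DB_ID('tempdb'));

-- Add a data file to tempdb (hypothetical path and size)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb\tempdev2.ndf', SIZE = 1024MB);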
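
Next, the AlwaysOn backup preference. The idea is that the same backup job runs on every replica and simply skips the work when that replica isn’t the preferred backup location. A sketch, assuming the system function keeps its current name of sys.fn_hadr_backup_is_preferred_replica (the database name and path are made up):

-- Run on every replica; only the preferred backup replica does the work
IF sys.fn_hadr_backup_is_preferred_replica(N'MyDb') = 1
BEGIN
    BACKUP LOG MyDb TO DISK = N'\\backupserver\sql\MyDb.trn';
END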
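
The user-defined audit events are even simpler to try. Assuming a server audit and a user audit specification are already in place (the event id and message here are made up):

-- Write a custom event to the audit: event id, succeeded flag, free-form text
EXEC sp_audit_write @user_defined_event_id = 27,
                    @succeeded = 1,
                    @user_defined_information = N'Payroll report executed';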
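
The default schema for Windows groups is just new DDL on a familiar statement – a sketch with a hypothetical group and schema:

-- Windows group users can now carry a default schema
CREATE USER [CONTOSO\SqlDevs] FOR LOGIN [CONTOSO\SqlDevs] WITH DEFAULT_SCHEMA = dev;
-- ...or change it on an existing group user
ALTER USER [CONTOSO\SqlDevs] WITH DEFAULT_SCHEMA = dev;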
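
And finally, to go with Denny Cherry’s refresher, enabling TDE is still the familiar steps below (names and password are placeholders; remember this pulls tempdb in with it):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<StrongPassword1!>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
-- back up the certificate and private key before going any further!
USE MyAppDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE MyAppDb SET ENCRYPTION ON;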

Now to start making the list of all the interesting sessions I couldn’t get to that I’ll want to watch (or re-watch) as soon as the Conference DVDs arrive! I foresee team “lunch ‘n learns” being scheduled.

My Denali First Glance – AlwaysOn Availability Groups

I’m finally finding some time to get more familiar with the new feature in Microsoft SQL Server Code-Named “Denali” called “AlwaysOn”. No, that is not a typo – capitalize “Always”, capitalize “On” and concatenate – “AlwaysOn”!

There are two flavors of AlwaysOn – failover clustering and availability groups.  AlwaysOn Failover Clustering is exactly what it sounds like – enabling a SQL Server Denali Instance to failover to another node in a Windows Server Failover Cluster. This is also known as a Failover Cluster Instance (FCI). On the other hand, AlwaysOn Availability Groups can best be described as database mirroring on steroids – and that is what I’ll cover in this post. I’ll cover the differences in Denali’s FCI implementation from prior versions in a later post.

First, think of all the things you don’t like about database mirroring:

  • Only one mirror
  • The mirror is always “in recovery” and unusable outside of a database snapshot
  • Failover is per user database (i.e., one at a time)
  • You have to choose between synchronous (high availability) and asynchronous (disaster recovery)
  • Requires a third instance as a witness if you want “high safety mode” and automatic failover

Here’s what AlwaysOn Availability Groups will provide:

  • Up to 4 replicas (the term “replica” essentially replaces “mirror”) in CTP3
  • Read-only access is provided to replicas
  • Multiple databases can be grouped together for failover – these groupings are the “Availability Groups”
  • Synchronous and asynchronous replicas at the same time – covering both high availability and disaster recovery (hence HADR)
  • Windows Server Failover Clustering detects when failover is required – no third (or fourth or fifth) server required

Yes, you read that right; the Windows Failover Clustering feature is required to be installed on your Denali servers in order to use AlwaysOn Availability Groups. It does not mean, however, that you have to actually implement a Windows failover cluster of two or more nodes. But it does mean that Windows Server 2008 Enterprise Edition is required… and that makes me believe this feature will likely only be available in Denali Enterprise Edition or higher once Microsoft announces the final SKU feature sets. However, SQL Server Enterprise Edition is already required today to use asynchronous mode for database mirroring, so this shouldn’t come as a surprise.
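
To make that concrete, here’s roughly what defining an Availability Group looks like in T-SQL as of the current CTP (a sketch only – the server names, databases, and endpoint URLs are hypothetical, and the syntax may change before RTM):

CREATE AVAILABILITY GROUP MyAG
FOR DATABASE SalesDb, PayrollDb
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
                      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = MANUAL);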

Similar to database mirroring failover options, the availability failover modes are automatic or planned manual failover for synchronous replicas (no data loss), and forced manual failover for asynchronous replicas (which may incur data loss). If a secondary replica in synchronous mode is not synchronized with the primary replica, then forced manual failover is also the only option.
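
The corresponding commands are straightforward – a sketch, run on the target secondary (“MyAG” is hypothetical):

-- Planned manual failover to a synchronized secondary (no data loss)
ALTER AVAILABILITY GROUP MyAG FAILOVER;

-- Forced manual failover (asynchronous or unsynchronized secondary; may lose data)
ALTER AVAILABILITY GROUP MyAG FORCE_FAILOVER_ALLOW_DATA_LOSS;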

For seamless client connectivity, when you establish an Availability Group you may also create a Virtual Network Name (VNN) which will be assigned a Virtual IP (VIP). Whenever the Availability Group fails over, the VNN will redirect connections to the new primary replica. Alternatively, if there are only two availability replicas in an Availability Group, then “old-style” database mirroring connection strings may be used until you can establish a VNN and update your application to use the VNN. If you want to add a third replica, then you must convert to VNN for your client connectivity.

In addition to having read-only access to secondary replicas, you can take backups from your secondary replicas. There are several options you will want to specify when setting up your Availability Groups (or alter afterwards) in order to properly control the backups of your primary and secondary replicas. Since this is still the CTP as I’m writing this, I’ll just point you to the BOL documentation for the most current info, as it may change by the time you get around to implementing this.
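
As a quick taste of the rules covered at the Summit – on a secondary replica, full backups must be copy-only, while log backups are the regular kind (a sketch; the database name and UNC path are hypothetical):

-- On a secondary replica
BACKUP DATABASE SalesDb TO DISK = N'\\backupserver\sql\SalesDb.bak' WITH COPY_ONLY;
BACKUP LOG SalesDb TO DISK = N'\\backupserver\sql\SalesDb_log.trn';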

The main purpose of this article is to make you aware of what is coming so that you can start planning for your future HADR requirements as you make the move to Denali when it is finally released. To dig deeper, check out the current BOL entry for AlwaysOn Availability Groups.

My Denali First Glance – PowerShell Enhancements

First, PowerShell 2.0 is now required as a prerequisite for installing SQL Server Denali (SQL Server 11). Hopefully, you’ll only be installing Denali in a Windows 2008 R2 environment, which has PowerShell 2.0 installed by default. But if not, you’ve now been forewarned – just in case you don’t read the “readme” prior to installing!

SQLPS is now a module (a new feature in PowerShell 2.0). The SQLPS utility has been deprecated, but if you use it, it will apparently start PowerShell 2.0 and import the SQLPS module.

Additionally, there is now a module of cmdlets for SSAS (Analysis Services). After installing Denali, if you execute the command Get-Module -ListAvailable, you’ll see that in addition to SQLPS, SQLASCMDLETS is also displayed. This module contains 11 cmdlets as shown below:

But, my focus is Database Engine, so that’s all I’ll say about SSAS. If you are a BI guy or gal – check it out!

When you first import the SQLPS module, you will get the following warning unless you import the module using the -DisableNameChecking parameter (i.e., Import-Module SQLPS -DisableNameChecking).

You’ll notice that as soon as you import the SQLPS module, your location changes to the SQLSERVER provider. To see what’s new in this hierarchy, just type: dir

XEvent, IntegrationServices and SQLAS are all new. Thus, more features and components of the SQL Server stack are now being exposed for PowerShell management.

Then, there are all the new cmdlets available in SQLPS.

Considering there were only 5 cmdlets (highlighted in yellow above) for SQL Server 2008, I now count 29! Most appear directed at new features in Denali such as High Availability\Disaster Recovery (HADR) and Availability Group configurations. I’m sure I’ll be looking at those in detail when I get around to studying AlwaysOn. However, two very traditional database commands are now provided with their own cmdlets – Backup-SqlDatabase and Restore-SqlDatabase. These two cmdlets are modeled after the Microsoft.SqlServer.Management.Smo.Backup class, so if you are used to the properties in that class, then the parameters for these cmdlets will be very familiar.

That means you can now execute commands like this as a SQLAgent PowerShell job step:

Dir SQLSERVER:\SQL\MyServer\MyInstance\Databases | Backup-SqlDatabase

This simple command will back up all user databases on an instance using all the defaults, of course. But you can start imagining the possibilities here – especially once you start integrating any additional parameters needed in your environment.

As indicated by the title, this is just a “first glance” at what is coming in Denali; I’m sure I’ll be investigating and writing more about using PowerShell with Denali in the future. What new PowerShell enhancements are you looking forward to implementing?