
Deprecated Features in SQL Server 2012

Earlier in the year, I wrote a series of posts on Deprecated Features with a promise to revisit as the final version of SQL Server “Denali”, now known as SQL Server 2012, neared. With RC0 recently released, now is the time for that review!

To recap, deprecated features are still available in a version but are targeted for removal in a future version. Thus, you should avoid using these features in any new development work and consider replacing them where currently used as soon as convenient. So, without further delay, the features that are still available in the SQL Server 2012 Database Engine but are now deprecated include:

First, there are data programmability features that should be replaced with Windows Communication Foundation (WCF) or ASP.NET.

Slipstream functionality will go away – sort of. Starting with SQL Server 2012, the command line parameters /PCUSource and /CUSource should be replaced with the Product Update feature (/UpdateSource), which I have previously discussed. Although the old parameters are still valid, I’d switch to the new parameter ASAP: it is much easier to use, and maintaining your source is simplified.
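
For comparison, here’s a minimal before/after sketch – the folder paths are hypothetical and would point at your own extracted update packages:

setup.exe /Action=Install /PCUSource=C:\SQLUpdate\SP /CUSource=C:\SQLUpdate\CU
setup.exe /Action=Install /UpdateSource=C:\SQLUpdate

The first line is the old slipstream style (still valid, but deprecated); the second is the new Product Update style, which just points at a single folder of updates.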

There are also Database Engine BACKUP and RESTORE T-SQL command changes: RESTORE { DATABASE | LOG } WITH [MEDIA]PASSWORD is deprecated, while BACKUP { DATABASE | LOG } WITH PASSWORD and BACKUP { DATABASE | LOG } WITH MEDIAPASSWORD are actually discontinued in SQL Server 2012. These options have no replacement.

Database compatibility level 90 and upgrades from SQL Server 2005 will not be supported in the version after SQL Server 2012. This is in keeping with Microsoft’s stated support model of only supporting upgrades from two down-level versions. You should be well aware of this by now, but I’m including it as a reminder for those new to SQL Server version management. For example, SQL Server 2000 (or database compatibility level 80) is not supported for upgrade to SQL Server 2012.
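
To see where each database stands, and to raise the level once your application is tested, something like this works (the database name in the ALTER is hypothetical):

SELECT name, compatibility_level FROM sys.databases;    -- anything at 90 (or below) needs attention
ALTER DATABASE [MyAppDb] SET COMPATIBILITY_LEVEL = 110; -- 110 = SQL Server 2012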

The ability to return result sets from triggers will not be supported post-SQL Server 2012.

As stronger encryption algorithms are now available, encryption using RC4 and RC4_128 has been deprecated. However, decrypting RC4 and RC4_128 has not been deprecated.
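
If you need a nudge in the right direction, here’s a minimal sketch of creating a key with AES instead of RC4 (the key name and password are illustrative):

CREATE SYMMETRIC KEY MyDataKey                       -- hypothetical key name
    WITH ALGORITHM = AES_256                         -- instead of RC4 or RC4_128
    ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase#1';  -- illustrative password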

Several stored procedures, a global variable, and a SET command related to remote servers have been deprecated. It has been recommended for several versions now to replace remote servers with linked servers. Hopefully, you have already done this.
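
If you haven’t, a minimal sketch of the linked server replacement (‘RemoteSrv’ stands in for the actual server network name):

EXEC sp_addlinkedserver @server = N'RemoteSrv', @srvproduct = N'SQL Server';
-- Then reference it with four-part names:
SELECT TOP (5) name FROM RemoteSrv.master.sys.databases;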

SET ROWCOUNT for INSERT, UPDATE, and DELETE has been replaced by the TOP keyword; if you have been using SET ROWCOUNT to limit the number of rows affected by your INSERT, UPDATE, or DELETE statements, you’ll need to switch them to use TOP instead.
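
A before/after sketch, with a hypothetical staging table:

-- Deprecated pattern:
SET ROWCOUNT 1000;
DELETE FROM dbo.StagingRows WHERE Processed = 1;
SET ROWCOUNT 0;

-- Replacement:
DELETE TOP (1000) FROM dbo.StagingRows WHERE Processed = 1;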

If you’ve been using the table hint HOLDLOCK without parentheses, then you will need to add the parentheses in the next release of SQL Server.
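
For example (table and column names illustrative):

SELECT OrderID FROM dbo.Orders HOLDLOCK;         -- deprecated: hint without parentheses
SELECT OrderID FROM dbo.Orders WITH (HOLDLOCK);  -- supported form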

On the tools side of things, the SQLMAINT utility has been deprecated with the recommendation to use SQL Server maintenance plans as the replacement.

And if you really want to get a jump on things, then in addition to the deprecated features just described (slated for removal in the “next” version of SQL Server after 2012), you can start eliminating the use of the features listed in “Features Not Supported in a Future Version of SQL Server” – both lists can be found here: http://msdn.microsoft.com/en-us/library/ms143729(v=SQL.110).aspx

If you are using the Full-Text feature of SQL Server, then you’ll want to check out the list of deprecated items specific to that feature here: http://msdn.microsoft.com/en-us/library/cc646010(v=SQL.110).aspx. These include many stored procedures (e.g. sp_fulltext_catalog) which have been replaced with new T-SQL syntax (e.g. CREATE/ALTER FULLTEXT CATALOG).

And, finally, if you are using components other than the Database Engine (i.e. SSAS, SSIS, SSRS), then you’ll need to review the deprecated/discontinued lists for them in their respective Backward Compatibility sections of Books Online.

As I pointed out in Part 3 of my earlier blog series, you can detect most deprecated features in use via the SQL Server: Deprecated Features object performance counter and trace events. A future post will discuss which previously deprecated features have actually been discontinued in SQL Server 2012 – meaning you will have to address them before upgrading.
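
For a quick check without setting up a trace, the same counter object is exposed through a DMV; each row’s instance_name is a deprecated feature, and cntr_value is how many times it has been used since the instance started:

SELECT instance_name AS deprecated_feature,
       cntr_value AS usage_count
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0;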

SQL Server 2012 RC0 – SSMS Review

Microsoft released SQL Server 2012 RC0 (Release Candidate) this week – just in time for those of us traveling over the holidays to have something to do as we head to Grandma’s (assuming we aren’t the one driving). With any new release, I suspect most of us do the same thing – run through the install process as fast as we can, then open SSMS and start playing! If you haven’t previously been playing with “Denali” CTP3, then I encourage you to get the RC0 install (on a test server, of course) – if only for the new SSMS.

The first time you connect to an instance in Object Explorer, you’ll immediately notice differences from SSMS 2008 R2.

Oh look! There’s a folder called “AlwaysOn High Availability” – let’s go click it!

Yikes! That wasn’t nice. It seems like SSMS should already know this and show a “not available” icon, like it does for my SQL Server Agent, which is currently disabled.

OK – let’s try the other new folder, “Integration Services Catalog”. That’s interesting because I do have SSIS 2008 installed but haven’t upgraded it. The only option here when I right-click is “Create Catalog”. However, I then see that I need CLR enabled, and once I do that, there are some other decisions to make.
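
For reference, enabling CLR is the standard sp_configure step:

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;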

Ok – I guess I’ll need to do some more research to know when we’ll want to use this and exactly how to use it. But that is the whole purpose of this exercise: seeing what is new and where to focus future in-depth investigation.

So, back to drilling down through Object Explorer to see what else looks new…

Looks like I’m going to have to study up on Extended Events now that they’ve made it into SSMS; and Microsoft appears to be setting up some default entries here.

Now, I’ll start drilling down to the next layer of some of these objects. We use PBM (Policy-Based Management), so let’s see if anything is new there. Yep! Looks like Microsoft is installing policies by default in preparation for AlwaysOn usage.  So, when I get around to playing with AlwaysOn, it will be interesting to revisit these and see if they are automatically enabled when setting up AlwaysOn.

I didn’t show it in the image, but there are new Conditions created upon install for the AlwaysOn policies to use. And, obviously with new features in SQL Server 2012, new facets related to those features have also been added (76 facets in SQL Server 2008 R2 versus 84 in SQL Server 2012). Interestingly, if I connect to a SQL Server 2008 or SQL Server 2008 R2 instance from SSMS 2012, I see all the facets for SQL Server 2012. I think this means I’ll be able to keep using my central PBM repository from SQL Server 2008 R2, but run policies specific for SQL Server 2012 (using our version of EPM). More to investigate!

Lastly for this initial drive-by look, I checked out the Management\Legacy folder. Here’s the comparison of SQL Server 2008 R2 SSMS (first) and SQL Server 2012 (second):

SSMS 2008 R2 Legacy Objects

SSMS 2012 Legacy Objects

Looks like DTS and SQL Mail are finally gone as promised! 

Until next time, Happy Exploring!

SSRS Log Files Retention Bug

Last October (2010), I posted the following bug on Microsoft Connect (https://connect.microsoft.com/SQLServer/feedback/details/615123/ssrs-2008-r2-keepfilesfordays-config-file-item-does-not-appear-to-work). Essentially, the SQL Server Reporting Services (SSRS) 2008 R2 KeepFilesForDays parameter in the ReportingServicesService.exe.config file isn’t working as advertised in BOL for deleting expired log files. And until today, when I went to delete them manually, I still had every log file since I installed in June 2010!

A few weeks ago, Microsoft officially closed my Connect item as “Fixed” – but no word on when or where it is fixed. I still see the problem in SQL Server 2008 R2 SP1, and I don’t see a mention of a fix for this in any of the three CUs released for SP1 so far. It also appears that this is likely a problem in SSRS 2008 and SSRS 2005, based on a handful of inquiries I spotted in various forums when researching – although in SSRS 2005 the parameter setting and log files are in different locations due to that version’s usage of IIS. I also checked my Denali CTP3 installation of SSRS, and sure enough – every log file since I installed (way more than the default 14 days) still exists.

So, just a word of warning: if you assume that this parameter is working for you – it probably isn’t, and you’ll need to find another way to delete your old log files for SSRS. If you have plenty of space on the drive where these are stored, then it isn’t an issue – yet. But beware!
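
Until there’s a confirmed fix, a scheduled PowerShell cleanup is one workaround – a minimal sketch, assuming the default SSRS 2008 R2 log folder and the default 14-day retention (adjust both for your environment):

$logDir = 'C:\Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\LogFiles'
$cutoff = (Get-Date).AddDays(-14)
Get-ChildItem -Path $logDir -Filter '*.log' |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Remove-Item    # delete log files older than the cutoff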

PASS Summit 2011 – What I Learned

In prior posts, I reflected on the things that I think make the PASS Summit such a great value – content, volunteers, Microsoft support, and networking opportunities.  This post will concentrate on the content provided at this year’s Summit and basically what I learned over those 3 days. 

In no particular order – here’s the brain dump (from my notes, of course, with occasional commentary thrown in!):

  • SQL Server Code Name “Denali” will officially be SQL Server 2012 and released in the first half of 2012.
  • Project Crescent is now Power View.
  • Microsoft is now promoting both on-premises databases and cloud-based databases as co-existing for the foreseeable future; but the cloud is getting bigger for SQL Azure (150 GB databases supported by year-end).
  • Big Data has arrived and Microsoft has taken notice – and action – by releasing the Microsoft SQL Server Connector for Apache Hadoop … with more coming in this arena.
  • I run the Houston Area SQL Server User Group so these next items are of importance to me personally:
    • PASS may finally establish a speaker bureau to help local chapters find speakers
    • PASS will be increasing services to chapter leaders, providing DNN (DotNetNuke) training, etc.
  • At Bob Ward’s (Principal Architect Escalation Engineer in Microsoft CSS) “Inside Tempdb” half-day session:
    • The model database in SQL Server 2012 will be 4MB, up from the 2MB it has been forever. Why is this important? Well, recovery isn’t complete until tempdb is started, and the default size for tempdb (if you don’t change it) is the size of model. 4MB isn’t much these days, but it is a change you should be aware of.
    • tempdb is the “garbage dump of the engine”. It is used for user objects such as temp tables, table variables, temp procs, user-defined objects and online index space. It is also used by internal objects – sorts, work tables, work files (used for hash joins) and version store. 
    • Consider moving tempdb to its own storage volume if it has heavy I/O. SSD is an option, but given the expense, make sure you can’t put it to better use somewhere else first.
    • Bob’s rule of thumb for how many data files to allocate for tempdb:
      • Less than 8 CPUs (cores), 1 file per CPU
      • 8 or more CPUs, start with 8 files and increase by sets of 4 until contention subsides
      • Also see Paul Randal’s blog for a slight variation on this theme.
  • At the “What’s New in Manageability in Denali” Microsoft session:
    • Since SSMS is now built on Visual Studio 2010, you will get multi-monitor support! Hooray!
    • There is a new Database Recovery Advisor which will be able to build a restore plan for you based on available backup files (database, log, etc.) in a folder even if you don’t have any msdb backup history. It can also handle split backups.  I’d cheer for this too, but seriously – this should have been in the product years ago!
    • Log viewer will now work with offline instances and has improved performance.
    • SCOM Management Pack (will be released at the same time as SQL Server 2012)
      • Ability to discover Availability Groups
      • Detailed knowledge of AlwaysOn tasks
      • Performance counter collections for AlwaysOn
      • Policy-Based Management integration, yes, integration – it’ll pull your PBM policies right into SCOM and alert on failures
      • Enhanced mirroring support – discovers and diagrams
      • 20 new rules for replication
      • Support for mount points (another “hooray/finally!”)
  • At the “AlwaysOn: Active Secondaries” Microsoft Session:
    • You can offload backups to any replica so as not to impact production. The LSN is communicated back to the primary, which then notifies all replicas, so that the log backups across all replicas form a single log chain.
    • If that sounds like a nightmare for a restore, remember the Database Recovery Advisor mentioned previously and restore is simple!
    • Differential backups are not supported on secondaries.
    • Only “Copy Only” full backup is supported on secondaries.
    • Advisable to store backups centrally (so you can just point the Database Recovery Advisor to a single location to create the restore plan).
    • Declarative policy to determine where backups occur automatically. This is advisory only, not enforced. Implement via a system function which returns a Boolean indicating whether this is the preferred backup location (see the sketch after this list).
    • And, of course, probably the number one reason for multiple secondaries (besides general DBA paranoia) is to offload reporting workloads without having to use database snapshots.
  • In MVP Tim Ford’s “Periodic Table of DMOs” session:
    • Using the format of the Periodic Table of the Elements from general science, Tim gives us a creative way to organize all the DMOs (Dynamic Management Objects – views and functions) now available to DBAs for troubleshooting and general understanding of what is happening within a database engine instance. I might actually use this to remember when and what to use.
  • SQL Server guru Paul Randal (blog) busted more DBA myths in a spotlight session. As usual, not all myths were blatantly true or false; there were a few “it depends”! This session lasted until 6:30pm, and I think most would have stayed until 9:30pm to listen to Paul’s in-depth but humorous explanations of why these myths were true/false or, especially, “it depends”.
  • Allan Hirt, Clustering MVP, demonstrated “Denali on Windows Server Core”:
    • Currently Denali is only supported on W2K8R2 SP1, no Windows 8 support announced yet (as of October 14, 2011).
    • You’ll need to add .NET; Allan will post on his blog how to do this effectively.
    • You’ll need to add PowerShell 2.0. See KB976736 and use option 2!
    • If installing a Windows Cluster, then all nodes must be Server Core or all must allow GUI.
  • Microsoft Senior Program Manager for the Database Engine Security features, Il-Sung Lee, presented “What’s New in Security for SQL Server Code Name Denali”:
    • Windows groups can now have a default schema. If a user belongs to more than one group, the group with the lowest principal ID will be chosen.
    • User-defined server roles
    • All SKUs will now have the ability to specify server audits; database audits still require Enterprise Edition or higher.
    • User-defined audit events (via sp_audit_write)
    • Database Authentication (for contained databases)
    • Lots of cryptography changes – deprecating older methods, supporting newer, more secure ones
    • Il-Sung’s team blogs at: http://blogs.msdn.com/b/sqlsecurity/
  • SQLCAT brought in speakers from 4 companies who are early adopters of Denali & AlwaysOn to discuss their HA/DR requirements and migration paths to AlwaysOn. I’ll definitely be revisiting their solutions in the future and suspect there will be some whitepapers posted on SQLCAT’s website soon.
  • In “Where should I be encrypting my data”, MVP Denny Cherry provided a good refresher on some encryption basics:
    • Encryption will almost always increase the size of stored data
    • Encryption will decrease the usefulness of data deduplication
    • Encryption will add CPU load
    • Enabling TDE for an application database will also enable it for tempdb
    • TDE is for “data at rest” including backups
    • For “data on the wire” use IPSEC or SSL
    • If your storage uses MPIO, then “data at rest” protection is at the LUN level. If you copy a file to another server, it would be readable; if you detach and attach the LUN elsewhere, it would not be readable.
  • I also attended one of the “Lightning Talk” sessions, where 8 speakers had 5 minutes each to convey their topic to the audience. Some were humorous, some were serious, and one was seriously humorous! Grant Fritchey’s “Backup Testing, The Rant” is not something any of us present will ever forget – nor will we ever forget the importance of actually testing your backup! I really wish that had been caught on video, because the audio on the DVD just won’t do the presentation justice. Two other speakers in the session covered topics near and dear to me – “Thinking of Hosting a SQLSaturday?” (John Sterett) and “Build Up” (Niko Neugebauer). Both encouraged attendees to get involved in their local SQL Server community activities – and if you don’t have a local group – start one!
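
One footnote to the AlwaysOn backup notes above: here’s a minimal sketch of the backup-preference check as a backup job step. sys.fn_hadr_backup_is_preferred_replica is the system function referred to in those notes (as it ships in SQL Server 2012); the database name and backup path are illustrative.

IF sys.fn_hadr_backup_is_preferred_replica(N'MyDatabase') = 1
    BACKUP LOG [MyDatabase] TO DISK = N'\\BackupShare\MyDatabase.trn';
-- On a replica that isn't the preferred backup location, the step simply does nothing.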

Now to start making the list of all the interesting sessions I couldn’t get to that I’ll want to watch (or re-watch) as soon as the Conference DVDs arrive! I foresee team “lunch ‘n learns” being scheduled.

My Denali First Glance – AlwaysOn Availability Groups

I’m finally finding some time to get more familiar with the new feature in Microsoft SQL Server Code-Named “Denali” called “AlwaysOn”. No, that is not a typo – capitalize “Always”, capitalize “On” and concatenate – “AlwaysOn”!

There are two flavors of AlwaysOn – failover clustering and availability groups.  AlwaysOn Failover Clustering is exactly what it sounds like – enabling a SQL Server Denali Instance to failover to another node in a Windows Server Failover Cluster. This is also known as a Failover Cluster Instance (FCI). On the other hand, AlwaysOn Availability Groups can best be described as database mirroring on steroids – and that is what I’ll cover in this post. I’ll cover the differences in Denali’s FCI implementation from prior versions in a later post.

First, think of all the things you don’t like about database mirroring:

  • Only 1 mirror
  • The mirror is always “in recovery” and unusable outside of a snapshot
  • Failover is per user database (i.e. one at a time)
  • Have to choose between synchronous (high availability) and asynchronous (disaster recovery)
  • Requires a 3rd instance as a witness if you want “high safety mode” and automatic failover

Here’s what AlwaysOn Availability Groups will provide:

  • Up to 4 replicas (the term “replica” essentially replaces “mirror”) in CTP3
  • Read-only access is provided to replicas
  • Multiple databases can be grouped together for failover, defined as “Availability Groups”
  • Synchronous and asynchronous replicas at the same time – covering both high availability and disaster recovery (hence HADR)
  • Windows Server Failover Clustering detects when failover is required – no third (or fourth or fifth) server required

Yes, you read that right: the Windows Failover Clustering feature is required to be installed on your Denali servers in order to use AlwaysOn Availability Groups. It does not mean, however, that you have to actually implement a Windows failover cluster of two or more nodes. But it does mean that Windows Server 2008 Enterprise Edition is required… and that makes me believe that this feature will likely only be available in the Denali Enterprise Edition or higher once the final SKU feature sets are announced by Microsoft. However, SQL Server Enterprise Edition is required today to use asynchronous mode for database mirroring, so this shouldn’t come as a surprise.

Similar to database mirroring failover options, the availability failover modes are: automatic or planned manual failover for synchronous replicas, to prevent data loss; or forced manual failover for asynchronous replicas, which may incur data loss. If the secondary replica in synchronous mode is not synchronized with the primary replica, then a forced manual failover is also the only option.

For seamless client connectivity, when you establish an Availability Group you may also create a Virtual Network Name (VNN) which will be assigned a Virtual IP (VIP). Whenever the Availability Group fails over, the VNN will redirect connections to the new primary replica. Alternatively, if there are only two availability replicas in an Availability Group, then “old-style” database mirroring connection strings may be used until you can establish a VNN and update your application to use the VNN. If you want to add a third replica, then you must convert to VNN for your client connectivity.

In addition to having read-only access to secondary replicas, you can take backups of your secondary replicas.  There are several options you will want to specify when setting up your Availability Groups (or alter afterwards) in order to properly control the backup of your primary and secondary replicas. Since this is still the CTP as I’m writing this, I’ll just point you to this BOL link for the most current info as it may change by the time you get around to implementing this.
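
For example, since only copy-only full backups are supported on a secondary (as noted in my PASS Summit recap), a full backup taken against a secondary replica would look something like this – names illustrative:

BACKUP DATABASE [MyDatabase]
    TO DISK = N'\\BackupShare\MyDatabase_copyonly.bak'
    WITH COPY_ONLY;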

The main purpose of this article is to make you aware of what is coming so that you can start planning for your future HADR requirements as you make the move to Denali when it is finally released. To dig deeper, check out the current BOL entry for AlwaysOn Availability Groups.

My Denali First Glance – SSMS

You know you are getting older when….

You realize you want to install SQL Server Denali’s SSMS (SQL Server Management Studio) on all your systems just to use the new zoom feature in the query pane!

With a couple of scroll clicks, this becomes:

And yes, you can type in your own zoom value from 20% to 400%.

Sometimes it really is the simple things in life that make us happy!

I might even resign myself to learn the changed keyboard shortcuts (due to full compatibility with Visual Studio) without too much complaining. Maybe…


My Denali First Glance – PowerShell Enhancements

First, PowerShell 2.0 is now required as a prerequisite for installing SQL Server Denali (SQL Server 11). Hopefully, you’ll only be installing Denali in a Windows Server 2008 R2 environment, which has PowerShell 2.0 installed by default. But if not, you’ve now been forewarned – just in case you don’t read the “readme” prior to installing!

SQLPS is now a module (a new feature in PowerShell 2.0). The SQLPS utility has been deprecated, but if you use it, it apparently will start PowerShell 2.0 and import the SQLPS module.

Additionally, there is now a module of cmdlets for SSAS (Analysis Services). After installing Denali, if you execute the command Get-Module -ListAvailable, you’ll see that in addition to SQLPS, SQLASCMDLETS is also displayed. This module contains 11 cmdlets, as shown below:

But, my focus is Database Engine, so that’s all I’ll say about SSAS. If you are a BI guy or gal – check it out!

When you first import the SQLPS module, you will get the following warning unless you import the module using the -DisableNameChecking parameter.

You’ll notice that as soon as you import the SQLPS module, your location changes to the SQLSERVER provider. To see what’s new in this hierarchy, just type: dir
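
Putting those two steps together, a quick sketch (the exact folder list you see may vary with what’s installed):

Import-Module SQLPS -DisableNameChecking   # avoids the unapproved-verbs warning
Get-Location                               # now SQLSERVER:\
dir                                        # SQL, SQLPolicy, XEvent, IntegrationServices, SQLAS, ...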

XEvent, IntegrationServices and SQLAS are all new. Thus, more features and components of the SQL Server stack are now being exposed for PowerShell management.

Then, there are all the new cmdlets available in SQLPS.

Considering there were only 5 cmdlets (highlighted in yellow above) for SQL Server 2008, I now count 29! Most appear directed at new features in Denali such as High Availability\Disaster Recovery (HADR) and Availability Group configurations. I’m sure I’ll be looking at those in detail when I get around to studying AlwaysOn.  However, two very traditional database commands have been provided now with their own cmdlets – Backup-SqlDatabase and Restore-SqlDatabase. These two cmdlets are modeled after the Microsoft.SqlServer.Management.Smo.Backup class, so if you are used to the properties in that class, then the parameters for these cmdlets will be very familiar.  

That means you can now execute commands like this as a SQLAgent PowerShell job step:

Dir SQLSERVER:\SQL\MyServer\MyInstance\Databases | Backup-SqlDatabase

This simple command will back up all user databases on an instance, using all the defaults, of course. But you can start imagining the possibilities here – especially if you start integrating any additional parameters needed in your environment.
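
For instance, here’s a hedged sketch layering a couple of Backup-SqlDatabase parameters onto the same pipeline (the instance path is hypothetical):

Dir SQLSERVER:\SQL\MyServer\MyInstance\Databases |
    Backup-SqlDatabase -BackupAction Database -CopyOnly   # copy-only full backups that don't disturb the backup chain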

As indicated by the title, this is just a “first glance” at what is coming in Denali; I’m sure I’ll be investigating and writing more about using PowerShell with Denali in the future. What new PowerShell enhancements are you looking forward to implementing?

My Denali First Glance – New DMVs

All it takes to know that Denali (SQL Server Version 11) will be a major upgrade from SQL Server 2008 R2 (SQL Server Version 10.5) is to compare the number of Dynamic Management Views and Functions (DMVs) available. 

select count(*) from master.sys.sysobjects where name like 'dm%'

SQL Server 2008 R2 SP1 reports 141 DMVs while Denali CTP3 counts 174! Let’s see… 174 - 141 = 33! Here’s what’s new – and, just for good measure, to ensure I didn’t leave any out, I’ll number them:

1. dm_db_database_page_allocations
2. dm_db_fts_index_physical_stats
3. dm_db_log_space_usage
4. dm_db_objects_disabled_on_compatibility_level_change
5. dm_db_uncontained_entities
6. dm_exec_describe_first_result_set
7. dm_exec_describe_first_result_set_for_object
8. dm_filestream_non_transacted_handles
9. dm_fts_index_keywords_by_property
10. dm_fts_semantic_similarity_population
11. dm_hadr_auto_page_repair
12. dm_hadr_availability_group_states
13. dm_hadr_availability_replica_states
14. dm_hadr_cluster
15. dm_hadr_cluster_members
16. dm_hadr_cluster_networks
17. dm_hadr_database_replica_cluster_states
18. dm_hadr_database_replica_states
19. dm_hadr_instance_node_map
20. dm_hadr_name_id_map
21. dm_logconsumer_cachebufferrefs
22. dm_logconsumer_privatecachebuffers
23. dm_logpool_consumers
24. dm_logpool_hashentries
25. dm_logpool_sharedcachebuffers
26. dm_logpool_stats
27. dm_logpoolmgr_freepools
28. dm_logpoolmgr_respoolsize
29. dm_logpoolmgr_stats
30. dm_os_cluster_properties
31. dm_os_memory_broker_clerks
32. dm_os_server_diagnostics_log_configurations
33. dm_tcp_listener_states

Whew!

Almost everywhere you look in the Denali Database Engine, much of what is “new” revolves around Availability Groups and HADR (High Availability / Disaster Recovery) options. Thus, many of the new DMVs (10) provide insight into these features; they follow the naming convention dm_hadr_%. I’ll investigate these further when I get around to testing AlwaysOn. But in the meantime, you’ll find most, but (as of this writing) not all, of these documented in the Denali BOL DMV section (http://msdn.microsoft.com/en-us/library/ff877943(v=SQL.110).aspx) under “AlwaysOn Availability Groups Dynamic Management Views and Functions”. You’ll also find that the new DMV dm_tcp_listener_states is listed in this group in BOL, so I guess that makes 11 new DMVs in this category.

Another 10 new DMVs appear to deal with log information, but I can’t find any documentation at present on the ones using the dm_log% naming, and most appear to be functions. Nor is information on dm_db_log_space_usage readily available. I suspect this deficiency will be remedied by RTM. However, by running a SELECT query on sys.dm_db_log_space_usage, you’ll find that it returns a single row for the current or specified database with the following columns – database_id, total_log_size_in_bytes, used_log_space_in_bytes, used_log_space_in_percent.
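
For example, run in the context of the database you care about:

SELECT total_log_size_in_bytes,
       used_log_space_in_bytes,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;  -- one row for the current database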

I like this one tremendously – it provides exactly what I expected based on the name. I’ve been working with SQL Server since version 4.21 and now – finally – I no longer have to parse DBCC results or other DMVs and do these log space calculations for myself! Yippee!

On the other hand, dm_db_objects_disabled_on_compatibility_level_change isn’t exactly what I expected based on the name. I finally found it documented under the new section of DMVs in BOL called “Spatial Related Dynamic Management Views and Functions”. Huh? I’ll save you the trouble of clicking the link… BOL states that this DMV “Lists the persisted computed columns and spatial indexes that will be disabled as a result of changing compatibility level.” OK – so I’m guessing by the name that perhaps this will be expanded in the future, but right now it is a fairly limited set of objects to which it applies.

If that list of 33 isn’t enough for you, there are the following DMVs which could also be classified as new to Denali, but were also “retroactively” released with SP1 for SQL Server 2008 R2. So if you’ll be skipping SQL Server 2008 R2 SP1, then these will definitely be new, too, when you move to Denali.  

  • dm_os_volume_stats
  • dm_os_windows_info
  • dm_server_memory_dumps
  • dm_server_services
  • dm_server_registry

My favorites from this list are dm_os_windows_info and dm_server_services. These two DMVs allow me to determine the Windows version my SQL Server instance is running on as well as information about the instance’s SQL Server services, such as start mode and service account name, without resorting to WMI or SQL-SMO queries. I can now get this information directly via a T-SQL query. The DMV dm_os_volume_stats is actually a function to which you’ll need to provide a database id and file id and it will return the size of the volume and remaining space on the volume containing that file, plus other details. I have to confess I was a little disappointed that this didn’t return a row for every volume containing any database file, so that I could quickly see if any of the SQL Server data volumes have space issues. But, I guess something has to remain for Microsoft to add in future releases!   
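
A few quick sketches of these in action; the CROSS APPLY pattern at the end is one way to approximate the per-volume view I was wishing for:

SELECT windows_release, windows_service_pack_level
FROM sys.dm_os_windows_info;

SELECT servicename, startup_type_desc, service_account
FROM sys.dm_server_services;

-- One row per volume hosting a database file in this instance:
SELECT DISTINCT vs.volume_mount_point,
       vs.total_bytes / 1048576 AS total_mb,
       vs.available_bytes / 1048576 AS available_mb
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs;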

My Denali First Glance – Product Update Feature

Based on my first experience with the Product Update feature in Denali (SQL Server 11) CTP3, I’d have to say that “the third time is the charm”.  This is the third version of SQL Server to provide a method for incorporating updates during an initial product install. Slipstreamed updates were first provided for SQL Server 2008 and also in SQL Server 2008 R2.  I was really excited about that capability until I realized that in the large, global enterprise environment in which I worked that it caused more pain than it solved.  I won’t go into all the details, but since we ran our installs over the network and from multiple data centers, slipstreamed source was not efficient for us due to our install methodology and the number of combinations of source we needed to provide to our customers. However, based on what I’ve read about the Product Update feature in Denali as well as the testing I have performed so far, this solution will provide exactly what we need to easily and successfully deploy installations with the exact updates we want at setup time.

There are three ways to incorporate the Product Updates for Denali:

  • Windows Update / Microsoft Update
  • WSUS (Windows Server Update Services)
  • File Share

In an enterprise environment, you won’t find many servers with Internet access, so the first method, Windows Update, is probably out.  Also, enterprise environments typically have specific deployment schedules for changing even initial installs to a new service pack level. So, even if your servers do have Internet access, I don’t see this option being used by large enterprises. But, I love this option for my home installs!

As for the second method, WSUS, this one might be an option pending how well your DBAs interface with the team supporting WSUS. However, considering the various requirements in an enterprise environment – there are valid scenarios where a “down-level” install is required. Thus, this method might be eliminated as it would always provide the most current approved patch level.

That brings us to the File Share method.  At first glance, this looks like slipstream.  However, in my opinion, it is an order of magnitude better than slipstream. Why?

  1. I can maintain a single RTM code base. This reduces my storage footprint.
  2. I don’t have to copy/merge any files from the update into my RTM code base. This reduces potential for errors in translating what to copy/merge.
  3. I can specify the location of the updates using either a direct local path (e.g. C:\SQLUpdate) or a UNC name (e.g. \\SourceServer\SQLUpdate). This increases my installation methodology flexibility.
  4. I can have multiple update folders to choose from (e.g. \SQLUpdate\V11.Update1 versus \SQLUpdate\V11.Update2, etc.). This increases my ability to support varying customer version requirements.

With the File Share method, you simply add the /UpdateSource parameter to your setup.exe command and tell it where that source resides.

Local Drive Source example:

setup.exe /Action=Install /UpdateSource=C:\SQLUpdate\V11.Test

Network File Share Source example using a local computer variable to designate the source server name:

setup.exe /Action=Install /UpdateSource=\\%Src%\SQLUpdate\V11.Test

When running setup using the command above, you’ll get the dialog shown below. You can see that the designated product update at the UNC location will be included in the installation.

It’s as easy as that!

But, what if you don’t want any updates included during installation?  That is easy, too.  Just add the /UpdateEnabled=False parameter to your setup command. By default, this parameter’s value is True.  And, finally, these parameters can always be incorporated into a configuration ini file, so that you can initiate the whole install unattended, but that’s a topic for another day.
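
As a teaser, here’s a minimal sketch of what such a configuration file fragment might look like – the values are illustrative and mirror the command-line examples above:

[OPTIONS]
ACTION="Install"
UpdateEnabled="True"
UpdateSource="\\SourceServer\SQLUpdate\V11.Test"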

If you’d like to test out the Product Update feature for yourself, check out these links for obtaining, implementing and commenting on the Denali CTP3 Product Update feature:

My SQL Server Denali CTP3 First Glance

CTP3 of Microsoft SQL Server Code Name “Denali” was released recently. I’ve been skimming the latest documentation to determine which new general and database engine features I will want to examine more closely.  There are many interesting features added in all the components (especially in the BI stack) of this SQL Server version, but given my focus on deploying and managing SQL Server databases in the enterprise environment, my initial concentration will start with this feature list:

  • Product Updates
  • Contained Databases
  • PowerShell Improvements
    • Uses the new PowerShell 2.0 modules feature to import the sqlps module
    • New cmdlets: backup-sqldatabase and restore-sqldatabase
  • New DMVs
    • dm_os_volume_stats
    • dm_os_windows_info
    • dm_server_memory_dumps
    • dm_server_services
    • dm_server_registry
  • AlwaysOn

If you, too, are ready to start investigating Microsoft’s next version of SQL Server, then I’d recommend starting with the current Release Notes which includes links to the CTP download, the Denali SQL Server Books Online, and Denali forums.

Which features are you looking forward to investigating?
