So you’ve deployed Exchange 2013 in a highly available configuration. Now it’s a couple of days after “patch Tuesday,” and you’re ready to update your servers. You’ve applied the patches in question to a test server, and you’re confident that they will not have adverse effects on your Exchange servers.
Now what? Do you just apply the patches, and let the high availability features in Exchange keep your users connected? What is the proper order in which to patch your servers? Do all of your servers need to be running at the same patch level, or can you apply some patches to one server, but not another?
Exchange 2013 – Steps for Updating Servers
Today I am going to outline the recommended process for updating your Exchange 2013 servers. This process is intended to apply to Exchange 2013 deployments with at least two CA servers behind a hardware load balancer, and at least two MB servers in a DAG.
- Step 1 – Testing and Deploying Patch on High Availability Exchange
- Step 2 – Identifying Servers to Update, and Order of Update
- Step 3 – Put Servers into Maintenance Mode
- Step 4 – Scripting the process
- Step 5 – Wrapping it up: Returning to Previous Server Configuration
Step 1 – Testing and Deploying Patch on High Availability Exchange
Although it is a rare occurrence, every once in a while a patch is released that can have adverse effects on your Exchange organization. As you’ve put quite a lot of time and money into deploying a highly available Exchange organization, it would be a shame to waste that effort by bringing down your messaging services with a bad patch. For that reason it is always recommended that you carefully review the release notes, and deploy patches into a test environment before moving them to production.
“Your test environment should be built to match your production environment as closely as possible.”
Step 2 – Identifying Servers to Update, and Order of Update
For previous versions of Exchange, one might recommend a complicated process to determine which servers should be patched in what order. With Exchange 2013, those decisions are much easier.
Exchange 2013 has simplified the process of patching in several ways. First, there is now a reduced number of server roles. Exchange 2007 and 2010 each had 5 server roles, but Exchange 2013 has only two. That means you don’t have to worry about Hub Transport, Unified Messaging, or Edge Transport (the Edge Transport role is expected to return to Exchange 2013, but as of this writing is not available). If you are running an Edge Transport server from a previous version, it can be patched at any point in the process, as Edge Transport servers should not be members of the same Active Directory forest as your other Exchange servers, and thus have very little direct communication with those machines.
So in what order do I recommend patching your Exchange 2013 servers? I would recommend building all your Exchange 2013 servers as Multi-Role (CA and MB) servers, which makes the order in which you patch your servers completely irrelevant. I’ll write more about the recommendation for Multi-Role servers in a future blog post about Exchange 2013 server sizing.
Step 3 – Put Servers into Maintenance Mode
Assuming you follow my advice above and build all your Exchange 2013 servers as Multi-Role servers, the instructions below will put them into maintenance mode and allow you to patch them without interrupting service for your users. If you have split your Exchange 2013 servers into separate CA and MB servers, follow the instructions below on your Mailbox servers only. How do you put your Exchange 2013 Client Access servers into maintenance mode? There is no need. Exchange 2013 Client Access servers are completely stateless, which means their only job is to proxy (or in one case redirect) your connections to the appropriate Mailbox server.
A properly deployed collection of CA servers will not require any special attention should one of them be unavailable.
Enabling Maintenance Mode for an Exchange 2013 Multi-Role or Mailbox Server
The following four commands will put an Exchange 2013 Multi-Role or Mailbox server into maintenance mode. If you are studying for Exchange 2013 exam 70-341, you may be well served to remember these commands and the order in which they are run, as that information could be relevant to you in the near future (subtle enough hint for you?).
- Set-MailboxServer -Identity <servername> -DatabaseCopyActivationDisabledAndMoveNow $True
- Set-ServerComponentState -Identity <servername> -Component HubTransport -State Draining -Requester Maintenance
- Suspend-ClusterNode -Name <servername>
- Set-MailboxServer -Identity <servername> -DatabaseCopyAutoActivationPolicy Blocked
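Before you start patching, it is worth confirming that the server actually entered maintenance mode and that its transport queues have emptied. A quick sanity check from the Exchange Management Shell might look like the following (ExServer1 is a placeholder server name):

```powershell
# Confirm the transport component is in the Draining state
Get-ServerComponentState -Identity ExServer1 -Component HubTransport

# Confirm database copy activation is blocked on this server
Get-MailboxServer -Identity ExServer1 |
    Format-List DatabaseCopyAutoActivationPolicy, DatabaseCopyActivationDisabledAndMoveNow

# Watch the transport queues empty out before taking the server down
Get-Queue -Server ExServer1
```

Wait until Get-Queue shows the queues are empty (or nearly so) before rebooting the server.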
Once you have completed patching your server, the following four commands will reverse the process.
- Set-MailboxServer -Identity <servername> -DatabaseCopyAutoActivationPolicy Unrestricted
- Resume-ClusterNode -Name <servername>
- Set-ServerComponentState -Identity <servername> -Component HubTransport -State Active -Requester Maintenance
- Set-MailboxServer -Identity <servername> -DatabaseCopyActivationDisabledAndMoveNow $False
Step 4 – Scripting the Process
The easiest way to script this process is to copy the above commands into Notepad and save the file with a .ps1 extension. PowerShell also offers plenty of other options for automating this process.
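As a starting point, here is a minimal sketch of what such a .ps1 might look like, wrapping the enter and exit commands from Step 3 behind a server-name parameter. The script name and the -End switch are my own invention, not anything shipped with Exchange:

```powershell
# MaintenanceMode.ps1 -- hypothetical wrapper around the commands from Step 3.
# Run from the Exchange Management Shell on a server with the Exchange cmdlets loaded.
param(
    [Parameter(Mandatory = $true)]
    [string]$Server,

    [switch]$End    # specify -End to take the server OUT of maintenance mode
)

if (-not $End) {
    # Enter maintenance mode
    Set-MailboxServer -Identity $Server -DatabaseCopyActivationDisabledAndMoveNow $true
    Set-ServerComponentState -Identity $Server -Component HubTransport -State Draining -Requester Maintenance
    Suspend-ClusterNode -Name $Server
    Set-MailboxServer -Identity $Server -DatabaseCopyAutoActivationPolicy Blocked
}
else {
    # Exit maintenance mode (reverse the process)
    Set-MailboxServer -Identity $Server -DatabaseCopyAutoActivationPolicy Unrestricted
    Resume-ClusterNode -Name $Server
    Set-ServerComponentState -Identity $Server -Component HubTransport -State Active -Requester Maintenance
    Set-MailboxServer -Identity $Server -DatabaseCopyActivationDisabledAndMoveNow $false
}
```

You would then call `.\MaintenanceMode.ps1 -Server ExServer1` before patching, and `.\MaintenanceMode.ps1 -Server ExServer1 -End` afterward.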
What about the StartDagServerMaintenance.ps1 and StopDagServerMaintenance.ps1 scripts that were included with Exchange 2010? Those scripts are still present in Exchange 2013, but they do not include the command to drain the transport queues. That command did not exist in Exchange 2010, and the Exchange team did not update those scripts for Exchange 2013.
Step 5 – Wrapping It Up: Returning to Previous Server Configuration
Once you have patched all of your servers, you’ll need to reactivate your databases on the servers they were originally active on.
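The Exchange scripts folder (reachable via the $exscripts variable in the Exchange Management Shell) includes RedistributeActiveDatabases.ps1, which can move each database back to its most preferred copy in one pass; alternatively, Move-ActiveMailboxDatabase lets you move databases one at a time. A sketch, with DAG1, DB01, and ExServer1 as placeholder names:

```powershell
# Rebalance all databases in the DAG according to their activation preference
cd $exscripts
.\RedistributeActiveDatabases.ps1 -DagName DAG1 -BalanceDbsByActivationPreference -Confirm:$false

# Or move a single database back to its preferred server manually
Move-ActiveMailboxDatabase -Identity DB01 -ActivateOnServer ExServer1
```

Either approach returns the DAG to the database layout you had before the maintenance window began.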
Exchange 2013 Server Maintenance: Simple and Safe
Overall, the process of taking servers out of production for maintenance is much simpler and safer in Exchange 2013. There is much less chance of something going wrong, and if something does go wrong, it’s easier to recover.