SSL and Exchange Server


In October 2014 or thereabouts it became known that the SSL protocol (specifically SSL v3) was broken and decryption of the encrypted data was possible. This blog post sets out the steps to protect your Exchange Server organization regardless of whether you have one server or many, and whether or not you use a load balancer. As load balancers can terminate the SSL session and recreate it, the changes might be needed on your load balancer or directly on the servers that run the CAS role. This blog post covers both options and looks at the settings for a Kemp load balancer and a JetNexus load balancer.

Of course, being an Exchange Server MVP, I tend to blog about Exchange related stuff, but actually this is valid for any server that you publish to the internet and probably valid for any internal server that you encrypt traffic to via the SSL suite of protocols. Microsoft outlines the configuration below at https://technet.microsoft.com/en-us/library/security/3009008.aspx.

The steps in this blog will look at turning off the SSL protocol in Windows Server and turning on the TLS protocol (which does the same job as SSL and is interchangeable with it, but more secure at the time of writing – Jan 2015). Some clients do not support TLS (such as Internet Explorer on Windows XP Service Pack 2 or earlier), so securing your servers as you need to may stop some home users connecting to your Exchange Servers; but as XP SP2 should not be in use in any business now, these changes should not affect desktops. You could always use a different browser on XP as that might mitigate this issue, but using XP is a security risk in and of itself anyway! Stopping clients from connecting to SSL v3 sites requires a client or GPO setting, which can be found via your favourite search engine; a sketch for Internet Explorer follows below.
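
As an illustration (my own example rather than part of the Microsoft guidance linked above), Internet Explorer stores its protocol selection as a bitmask in the SecureProtocols value, normally driven by the "Turn off encryption support" Group Policy setting; TLS 1.0 = 0x80, TLS 1.1 = 0x200 and TLS 1.2 = 0x800, so 0xA80 enables the TLS versions only:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"SecureProtocols"=dword:00000a80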

Note that the registry settings and updates for the load balancers in this blog post will restrict client access to your servers if your client cannot negotiate a mutual cipher and secure channel protocol. Therefore care and testing are strongly advised.

Testing and checking your changes

Before you make any changes to your servers, especially internet-facing ones, check and document what you have in place at the moment using https://www.ssllabs.com/ssltest. This service connects to an SSL/TLS protected web site and reports back on the issues found. Before making any of the changes below, see what overall rating you get and document the following:

  • Authentication section: record the signature algorithm. It is possible the certificate authority signature will be marked "SHA1withRSA WEAK SIGNATURE". This certificate, if rekeyed and issued again by your certificate authority, might be replaced with a SHA-2 certificate. The Google Chrome browser from September 2014 will report sites secured with a SHA-1 certificate as not fully trustworthy, based on the expiry date of the certificate. If your certificate expires after Jan 1st 2017 then get it rekeyed as soon as possible. As 2015 goes on, this date will move closer in time: from early 2015 the cut-off date becomes June 1st 2016, and so on. Details on the dates for this impact are at http://googleonlinesecurity.blogspot.co.uk/2014/09/gradually-sunsetting-sha-1.html. You can also use https://shaaaaaaaaaaaaa.com/ to test your certificate if the site is public facing, and this website gives details on who is now issuing SHA-2 keyed certificates. You can examine your external servers for SHA-1 certificates and the impact in Chrome (and later IE and Firefox) at https://www.digicert.com/sha1-sunset/. To do the same internally, use the DigiCert Certificate Inspector at https://www.digicert.com/cert-inspector.htm.
  • Authentication section: record the path values. Ensure that each certificate is either in the trust store or sent by the server and not an extra download.
  • Configuration section: document the cipher suites that are provided by your server.
  • Handshake simulation section: here it lists browsers and other devices (mobile phones) and their default cipher. If you do not support a cipher that the client supports then you cannot communicate. Note that you typically support more than one cipher and the client will often support more than one cipher too, so though it is shown here as a mismatch this does not mean that it will not work; if a listed client is used by your users then click the link for the client and ensure that the server offers at least one of the ciphers required by the client – unless all the ciphers are insecure, in which case do not use that client!

Once you have a document on your current configuration, and a list of the clients you need to support and the ciphers they need you to support, you can go about removing SSL v3 and insecure ciphers.

Disabling SSL v3 on the server

To disable SSL v3 on a Windows Server (2008 or later) you need to set the Enabled registry value at "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server" to 0. If this value does not exist, then create a DWORD value called "Enabled" and leave it at 0. You then need to reboot the server.
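
If you prefer PowerShell to editing the registry by hand, a minimal sketch of the same change (run as administrator; it creates the key path if missing, and a reboot is still needed afterwards):

# Create the SSL 3.0 server key if absent and disable the protocol
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server'
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null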

If you are using Windows 2008 R2 or earlier you should enable TLS v1.1 and v1.2 at the same time. Those versions of Windows Server support TLS v1.1 and v1.2 but they are not enabled (only TLS v1.0 is enabled). To enable TLS v1.1 and v1.2, set the Enabled value at "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server" to 1, then change the path to "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server" and make the same setting to support TLS v1.2. If these keys do not exist, create them. It is also documented that the "DisabledByDefault" value is required, though I have seen it noted as being just the opposite of the "Enabled" value. As I have not actually verified this, I set both Enabled to 1 and DisabledByDefault to 0.

To disable both SSL v2 and v3 (v2 can be enabled on older versions of Windows and should be disabled as well) I place the following in a .reg file and double-click it on each server, followed by a reboot for it to take effect. This .reg file content also disables the RC4 ciphers. These ciphers have been considered insecure for a few years, so when I configure my servers not to support SSL v3 I disable the RC4 ciphers as well.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 128/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 40/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 56/128]
"Enabled"=dword:00000000

Then I use the following .reg file to enable TLS v1.1 and TLS v1.2:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000
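
If you want to confirm the values have landed, a quick check in PowerShell (a sketch – adjust the protocol paths as needed):

# Read back the Schannel values written by the .reg files above
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server' | Select-Object Enabled
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server' | Select-Object Enabled, DisabledByDefault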

Once you have applied both of the above sets of registry keys you can reboot the server at your convenience. Note that the reg keys may set values that are already set; for example, TLS v1.1 and v1.2 are enabled on Exchange 2013 CAS servers and SSL v2 is disabled. The first of the below graphics comes from a test environment of mine that is running Windows Server 2012 R2 without any of the above registry keys set. You can see that Windows Server 2012 R2 is vulnerable to the POODLE attack and supports the RC4 cipher, which is weak.

image

The F grade comes from a Windows Server 2008 R2 server that is patched but unconfigured with regard to SSL

image

After setting the above registry keys and rebooting, the test at https://www.ssllabs.com/ssltest then showed the following for 2012 R2 on the left (A grade) and Windows Server 2008 R2 on the right (A- grade):

image image

Disabling SSL v3 on a Kemp LoadMaster load balancer

If you protect your servers with a load balancer, which is common in the Exchange Server world, then you need to set your SSL and cipher settings on the load balancer, unless you are only balancing at TCP layer 4 and doing SSL pass-through. In that case, even with a load balancer in place, the changes are not made on the load balancer but on the servers, via the section above. If you do SSL termination on the load balancer (TCP layer 7 load balancing) then I recommend setting the registry keys on the Exchange Servers anyway, to avoid security issues if you need to connect to a server directly; if you are going to disable SSL v3 in one location (the load balancer) there is no problem in disabling it on the server as well.

For a Kemp load balancer you need to be running version 7.1-20b to be able to do the following, and to ensure that the SSL code on the load balancer is not susceptible to issues such as Heartbleed. To configure your load balancer to disable SSL v3 you need to modify the SSL properties of the virtual server and check the "Support TLS Only" option.

To disable the weak RC4 ciphers there are a few choices, but the easiest I have seen is to select "Perfect Forward Secrecy Only" under Selection Filters and then add all the listed filters. Then from this list remove the three RC4 ciphers.

If you do not select "Support TLS Only" and leave the ciphers at the default level then your load balancer will get a C grade in the test at https://www.ssllabs.com/ssltest because it is vulnerable to the POODLE attack. Setting just the "Support TLS Only" option and leaving the default ciphers in place will result in a B grade, as RC4 is still supported. Removing the RC4 ciphers (by following the instructions above to add the Perfect Forward Secrecy ciphers and remove the RC4 ciphers from that list) as well as allowing only the TLS protocol will result in an A grade.

image

Kemp 7.1-22b also removes SSL v3 support from the API and web interface, in addition to the above protection for the virtual services that the load balancer offers.

Kemp Technologies document the above steps at https://support.kemptechnologies.com/hc/en-us/articles/201995869, and point out the unobvious behaviour that if you filter the cipher list with the "TLS 1.x Ciphers Only" setting then it will only show you the TLS 1.2 ciphers and not any TLS 1.1 or TLS 1.0 ciphers. Therefore selecting "TLS 1.x Ciphers Only" rather than filtering using "Perfect Forward Secrecy Only" will result in a reduced client list, which may be an issue.

I was able to achieve an A grade on the SSL Labs test site. My certificate uses SHA-1, but expires in 2015, so by the time SHA-1 is reported as an issue in the browser I will have changed it anyway.

image

Disabling SSL v3 on a JetNexus ALB-X load balancer

The same considerations apply here as in the Kemp section above: with layer 4 SSL pass-through the changes are made on the servers, and if you terminate SSL on the load balancer (layer 7) I still recommend setting the registry keys on the Exchange Servers as well.

For a JetNexus ALB-X load balancer you need to be running build 1553 or later. Build 1553 is a version 3 build, so any version 4 build is higher and therefore also valid. This build (version 3.54.3) or later is needed to ensure Heartbleed mitigation and to allow the following configuration changes to be applied.

To configure the JetNexus you need to upload a config file to turn off SSL v3 and the RC4 ciphers. The config file is a .txt file that is uploaded to the load balancer. In version 4, the file can be uploaded to the primary cluster node and the changes are replicated to the second node in the cluster automatically.

Before you upload a config file to make the changes required, ensure that you back up the current configuration from Advanced >> Update Software; click the button next to Download Current Configuration to save the configuration locally. Ensure you back up all nodes in a v4 cluster as appropriate.

Then select one of the three config file settings below, copy it to a text file, and upload it from Advanced >> Update Software using the Upload New Configuration option to install the file. The upload will reset all connections, so do this during a quiet period.

The three configs are: reset the default ciphers; disable SSL v3 and RC4; and disable TLS v1.0, SSL v3 and RC4.

JetNexus protocol and cipher defaults:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH"
Cipher1=""
Cipher2=""
CipherOptions="CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable SSL v3 and disable RC4 ciphers:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"
Cipher1=""
Cipher2=""
CipherOptions="NO_SSLv3,CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable TLS v1.0, SSL v3 and disable RC4 ciphers:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"
Cipher1=""
Cipher2=""
CipherOptions="NO_SSLv3,NO_TLSv1,CIPHER_SERVER_PREFERENCE"

On my test environment I was able to achieve an A- grade on the SSL Labs test website with the config that disables TLS v1.0, SSL v3 and RC4 applied. The A- is because of a lack of support for Forward Secrecy with the reference browsers used by the test site.

image

Browsers and Other Clients

There is too much to discuss with regard to clients, other than that they need to support the same ciphers as mentioned above. A good guide to clients can be found at https://www.howsmyssl.com/s/about.html and from there you can test your client as well.

Additional comment 23/1/15: One important comment to make comes courtesy of Ingo Gegenwarth at https://ingogegenwarth.wordpress.com/2015/01/20/hardening-ssltls-and-outlook-for-mac/. This post discusses the TLS Renegotiation Indication Extension update in RFC 5746. It is possible to use the AllowInsecureRenegoClients registry value at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL to ensure that only clients with the update mentioned at http://support.microsoft.com/kb/980436 are allowed to connect. If this is enabled (set to Strict Mode) and the above is done to disable SSL v2 and v3, then Outlook for Mac clients cannot connect to your Exchange Server. If this value is deleted or has a non-zero value then connections over SSL v2 and v3 can be made, but only for a renegotiation to TLS. Therefore ensure that you allow Compatibility Mode (which is the default) when you disable SSL v2 and v3, as Outlook for Mac and Outlook for Mac for Office 365 both require SSL support to then be able to start a TLS session.
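
For reference, a .reg sketch of the Compatibility Mode setting described above (my reading of KB 980436: deleting the value, or any non-zero value, gives the compatible behaviour):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL]
"AllowInsecureRenegoClients"=dword:00000001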

The Case of the Disappearing Folders


Here is an issue I have come across at one of my current clients – you create a folder in Outlook 2013 when in the "Mail" view (showing only mail folders – your typical default view) and the folder does not get created. For example, in the below picture the user is in the middle of creating a folder called "Test Inline" as a child of the "SO" folder:

image

Upon pressing Enter, the folder disappears and fails to be created:

image

So where does one see this issue? It happens when the parent folder in question, in this case the "SO" folder, was created by Microsoft's PST Capture Tool. The PST Capture Tool creates a parent folder in the Online Archive in Exchange (in this case Exchange Online, but it does not matter which Exchange Server) named after the PST file, so in this case SO.pst was uploaded by the PST Capture Tool. Any attempt to create folders inline below this parent folder fails! If you drag content into this folder it will not allow you to drop the content in, and the folder appears to be read-only.

If you change Outlook's view to Folder view (click the … on the Outlook 2013 view bar to the right) then you can create folders (using a dialog) and that works fine – this is how "Test Dialog" was made in the above pictures.

In Outlook 2010 all works as expected! In Outlook 2013 the issue appears to be the way Outlook handles folders that have a MAPI property created with a null value. In tools such as MFCMapi and OutlookSpy you can view the MAPI properties of a folder, and the folder created by the PST Capture Tool has a property called PR_CONTAINER_CLASS_W with a null (empty) value. Normally, Outlook gives folders "IPF.Note" as the value of this property if the folder is a mail and notes folder (i.e. not a calendar or contacts etc. folder). But clearly there is a problem, as Folder view allows you to create subfolders when the parent's PR_CONTAINER_CLASS_W value is null, as does Outlook 2010 and, coincidentally, OWA!

The fix, though I do not have it ready yet, is to run an EWS script to reset the PR_CONTAINER_CLASS_W property of this folder to IPF.Note (a sketch of one follows below), or to wait for an update to Outlook from Microsoft, and for that I have contacted them.
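
In the meantime, the following is a minimal sketch of what such a script might look like, using the EWS Managed API and writing the PR_CONTAINER_CLASS (0x3613) extended property; the DLL path, folder name "SO" and credentials are placeholders to swap for your own, and this is an untested sketch rather than a finished fix:

Add-Type -Path 'C:\Program Files\Microsoft\Exchange\Web Services\2.2\Microsoft.Exchange.WebServices.dll'  # assumed install path
$svc = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
$svc.Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials('user@domain.com','password')  # placeholder account
$svc.AutodiscoverUrl('user@domain.com', {$true})

# Bind to the root of the Online Archive and find the folder the PST Capture Tool made ('SO' is a placeholder)
$root = [Microsoft.Exchange.WebServices.Data.Folder]::Bind($svc, [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::ArchiveMsgFolderRoot)
$view = New-Object Microsoft.Exchange.WebServices.Data.FolderView(1)
$filter = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.FolderSchema]::DisplayName, 'SO')
$folder = $root.FindFolders($filter, $view).Folders[0]

# PR_CONTAINER_CLASS_W is MAPI property 0x3613; write IPF.Note into it and save the folder
$prContainerClass = New-Object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(0x3613, [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::String)
$folder.SetExtendedProperty($prContainerClass, 'IPF.Note')
$folder.Update()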

With thanks to fellow MVP Jaap Wesselius for double-checking this for me and testing it in Outlook 2010.

Group Policy Import To Fix Google Chrome v37 Issues With Exchange Server and Microsoft CRM


A recent update to Google Chrome (37.0.2062.120) removed the ability to support modal dialog boxes. These are dialogs that require your attention and stop you going back to the previous page until you have completed the info required – they are very useful in workflow type scenarios.

Google claim that as only 0.004% of web sites use them (from the anonymous statistics gathering that you can opt into in Chrome) they are justified in removing support for them – but they have not removed other things that have the same level of usage!

With this version of Chrome (or the Chromium open source browser) there is a workaround, available until April 30th 2015, that will allow modal dialogs to work again. Without this workaround, clicking links in OWA and ECP in Exchange 2010, and in OWA and EAC in Exchange Online and Exchange 2013, will not pop up anything. This can cause issues such as the inability to attach files in OWA and to create objects in ECP/EAC as the administrator. Popups in Microsoft CRM also do not work.

As a workaround you could use a different browser, but if Chrome works for you (or does not, in this case) and you are joined to a domain, then you can download the following GPO export file and import it into your Active Directory to enable modal dialogs to work again in Exchange Server, Office 365 and Microsoft CRM products.

To download and import this GPO file to enable Chrome modal dialog box functionality to resume (until 30th April 2015, when Google stops allowing the workaround) follow these steps:

  1. Download Google Chrome Show Modal Dialog Before 30 April 2015.zip
  2. Copy it to a domain controller and expand the zip file. Ensure the contents of the zip file are not placed directly on your desktop, as you cannot import from the desktop directly; if you expand the zip to the desktop, copy the one folder that was created into a new subfolder.
  3. Start Group Policy Management MMC admin tool.
  4. Expand Forest > Domain > Your Domain > Group Policy Objects.
  5. Right click “Group Policy Objects” and choose New
  6. Create a new GPO called “Chrome and Chromium Modal Dialog Box Allow”:
    image
  7. Right click “Chrome and Chromium Modal Dialog Box Allow” GPO that you just made and choose Import Settings
  8. Proceed through the import wizard. You do not need to backup this new GPO on the second page of the dialog as the new GPO is empty.
  9. On the third page of the wizard browse to the parent folder containing the contents of the download above:
    image
  10. Click Next and you should see one backed up GPO listed:
    image
  11. Click Next to import this. If you click View Settings first, a web page will open showing you that this GPO sets two registry keys for the computer and two registry keys for the user. These set SOFTWARE\Policies\Chromium\EnableDeprecatedWebPlatformFeatures and Software\Policies\Google\Chrome\EnableDeprecatedWebPlatformFeatures (for both the Chromium and Chrome browsers) with a string value named 1 and the data ShowModalDialog_EffectiveUntil20150430 – a registry sketch of this follows the list below.
  12. Proceed with Next and then Finish and the import will begin:
    image
  13. Click OK.
  14. Now link the GPO object to the root of your domain so it impacts all users, and to the root of any OU that blocks inheritance. Import it to other domains as above, or link it from this domain, depending upon your current policy for managing GPOs across domains.
  15. Delete the zip and folder you downloaded. They are not needed any more.
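
For reference, a .reg sketch of the computer half of what the imported GPO writes (my reconstruction from the View Settings output described in step 11; the GPO also writes the matching per-user keys):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\EnableDeprecatedWebPlatformFeatures]
"1"="ShowModalDialog_EffectiveUntil20150430"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Chromium\EnableDeprecatedWebPlatformFeatures]
"1"="ShowModalDialog_EffectiveUntil20150430"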

Speaking at TechEd Europe 2014


I’m pleased to announce that Microsoft have asked me to speak on “Everything You Need To Know About SMTP Transport for Office 365” at TechEd Europe 2014 in Barcelona. It’s going to be a busy few weeks, as from there I go straight to the MVP Summit in Redmond, WA.

image

My session is going to show how you can ensure your migration to Office 365 will be successful with regards to keeping mail flow working and not seeing any non-deliverable messages. We will cover real world scenarios for hybrid and staged migrations so that we can consider the impact on mail flow at all stages of the project. We will look at testing mail flow, SMTP to multiple endpoints, solving firewalling issues, and how email addressing and distribution group delivery is done in Office 365 so that we always know where a user is and what is going to happen when they are migrated.

Compliance and hygiene issues will be covered with regards to potentially journaling from multiple places and the impact of having anti-spam filtering in Office 365 that might not be your mail flow entry point.

We will consider the best practices for changing SMTP endpoints and when is a good time to change over from on-premise first to cloud first delivery, and if you need to maintain on-premises delivery how should you go about that process.

And finally we will cover troubleshooting the process should it go wrong or how to see what is actually happening during your test phase when you are trying out different options to see which works for your company and your requirements.

Full details of the session, once it goes live, are at http://teeu2014.eventpoint.com/topic/details/OFC-B350 (Microsoft ID login needed to see this). Room and time to be announced.

Exchange Web Services (EWS) and 501 Error


As is common with a lot that I write in this blog, it is based on noting down the answers to stuff I could not find online. For this issue, I did find something online by Michael Van “Hybrid”, but finding it was the challenge.

So rather than detailing the issue and the reason (you can read that on Michael’s blog) this talks about the steps to troubleshoot this issue.

So first the issue. When starting a migration test from an Exchange 2010 mailbox with an Exchange 2013 hybrid server (running the mailbox and CAS roles) behind a Kemp load balancer (running 7.16 – the latest release at the time of writing, but recently upgraded from version 5 through 6 to 7) I got the following error:

image

The server name will be different (thanks Michael for the screenshot). In my case this was my client’s UK datacentre. My client’s Hong Kong datacentre was behind a Kemp load balancer as well, but is only running Exchange 2010, and the New York datacentre has an F5 load balancer. Moves from HK worked, but UK and NY failed for different reasons.

The issue shown above is not easy to solve, as the migration dialog tells you nothing. In my case it was also telling me the wrong server name. It should have been returning the external EWSUrl from Autodiscover for the mailbox I was trying to move; instead it was returning the Outlook Anywhere external URL from the New York site (as the UK is proxied via NY for the OA connections). For moves to the cloud, we added the external URL for EWS to each site directly so we would move direct and not via the only site that offered internet-connected email.

So troubleshooting started with exrca.com – the Microsoft Connectivity Analyser. Autodiscover worked most of the time in the UK but Synchronization, Notification, Availability, and Automatic Replies tests (to test EWS) always failed after six and a half seconds.

Autodiscover returned the following error:

A Web exception occurred because an HTTP 400 – BadRequest response was received from Unknown.
HTTP Response Headers:
Connection: close
Content-Length: 87
Content-Type: text/html
Date: Tue, 24 Jun 2014 09:03:40 GMT

Elapsed Time: 108 ms.

And EWS, when AutoDiscover was returning data correctly, was as follows:

Creating a temporary folder to perform synchronization tests.

Failed to create temporary folder for performing tests.

Additional Details

Exception details:
Message: The request failed. The remote server returned an error: (501) Not Implemented.
Type: Microsoft.Exchange.WebServices.Data.ServiceRequestException
Stack trace:
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.ValidateAndEmitRequest(IEwsHttpWebRequest& request)
at Microsoft.Exchange.WebServices.Data.ExchangeService.InternalFindFolders(IEnumerable`1 parentFolderIds, SearchFilter searchFilter, FolderView view, ServiceErrorHandling errorHandlingMode)
at Microsoft.Exchange.WebServices.Data.ExchangeService.FindFolders(FolderId parentFolderId, SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.WebServices.Data.Folder.FindFolders(SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.Tools.ExRca.Tests.GetOrCreateSyncFolderTest.PerformTestReally()
Exception details:
Message: The remote server returned an error: (501) Not Implemented.
Type: System.Net.WebException
Stack trace:
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)

Elapsed Time: 6249 ms.

What was interesting here was the 501 and that it was always approx. 6 seconds before it failed.

Looking in the IIS logs from the 2010 servers that hold the UK mailboxes there were no 501 errors logged. The same was true for the EWS logs as well. So where is the 501 coming from? I decided to bypass Exchange 2013 for the exrca.com test (as my system is not yet live and that is easy to do) and so in the Kemp I pointed the EWS SubVDir directly to a specific Exchange 2010 server. Everything worked. So I decided it was an Exchange 2013 issue, apart from the fact I have lab environments the same as this (without Kemp) and it works fine there. So I decided to search for “Kemp EWS 501” and that was the bingo keyword combination. “EWS 501” or “Exchange EWS 501” got nothing at all.

With my environment back to Kemp > 2013 > 2010 I looked at Michael’s suggestions. The first was to run Test-MigrationServerAvailability –ExchangeRemoteMove –RemoteServer servername.domain.com. I changed this slightly, as I was not convinced that I was connecting to the correct endpoint. The migration reported the wrong server name and the exrca tests do not tell you what endpoint they are connecting to. So I tried Test-MigrationServerAvailability –ExchangeRemoteMove –Autodiscover –EmailAddress user-on-premises@domain.com

As AutoDiscover is reporting errors at times, the second of these cmdlets sometimes reported the following:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : AutoDiscover failed with a configuration error: The migration service failed to detect the
migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
or go back to the first step and retry using the Autodiscover service. Consider using the
Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
connectivity issues.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.AutoDiscoverFailedConfigurationErrorException:
AutoDiscover failed with a configuration error: The migration service failed to detect the
migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
or go back to the first step and retry using the Autodiscover service. Consider using the
Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
connectivity issues.
at Microsoft.Exchange.Migration.DataAccessLayer.MigrationEndpointBase.InitializeFromAutoDiscove
r(SmtpAddress emailAddress, PSCredential credentials)
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessExcha
ngeRemoteMoveAutoDiscover()
IsValid            : True
Identity           :
ObjectState        : New

And when AutoDiscover was working (it was intermittent) I would get this:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : The ExchangeRemote endpoint settings could not be determined from the autodiscover response. No
MRSProxy was found running at ‘outlook.domain.com’.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.MigrationRemoteEndpointSettingsCouldNotBeAutodiscovere
dException: The ExchangeRemote endpoint settings could not be determined from the autodiscover
response. No MRSProxy was found running at ‘outlook.domain.com’. —>
Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
server ‘outlook.domain.com’ could not be completed. —>
Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
https://outlook.domain.com/EWS/mrsproxy.svc’ failed. Error details: The HTTP request is
unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
from the server was ‘Basic Realm=”outlook.domain.com”‘. –> The remote server returned an
error: (401) Unauthorized.. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The HTTP request is
unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
from the server was ‘Basic Realm=”outlook.domain.com”‘. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
an error: (401) Unauthorized.
— End of inner exception stack trace —
— End of inner exception stack trace —
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
s1.<ReconstructAndThrow>b__0()
at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
row(String serverName, VersionInformation serverVersion)
at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
.<CallService>b__0()
at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
— End of inner exception stack trace —
at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
int(Boolean fromAutoDiscover)
— End of inner exception stack trace —
IsValid            : True
Identity           :
ObjectState        : New

This was returning the URL https://outlook.domain.com/EWS/mrsproxy.svc, which is not correct for this mailbox (it was the OA endpoint in a different datacentre). External Outlook access is not allowed at this company, so the TMG server in front of the F5 load balancer in the NY datacentre was not configured for OA anyway, and browsing to the above URL returned the following picture – a well broken scenario, but not the issue at hand here!

image

If OA (Outlook Anywhere) was available at this company, this is not what I would expect to see when browsing to the external EWS URL. To that end, our EWS URLs bypass TMG and go direct to the load balancer.

So now we have either no valid AutoDiscover response or EWS using the wrong URL. Back to the version of the cmdlet Michael was using as that ignores AutoDiscover: Test-MigrationServerAvailability –ExchangeRemoteMove –RemoteServer servername.domain.com

RunspaceId         : 5874c796-54ce-420f-950b-1d300cf0a64a
Result             : Failed
Message            : The connection to the server ‘ewsinukdatacentre.domain.com’ could not be completed.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
server ‘ewsinukdatacentre.domain.com’ could not be completed. —>
Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
https://ewsinukdatacentre.domain.com/EWS/mrsproxy.svc’ failed. Error details: The remote server returned
an unexpected response: (501) Invalid Request. –> The remote server returned an error: (501) Not
                     Implemented.. —> Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The
remote server returned an unexpected response: (501) Invalid Request. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
an error: (501) Not Implemented.
— End of inner exception stack trace —
— End of inner exception stack trace —
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
s1.<ReconstructAndThrow>b__0()
at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
row(String serverName, VersionInformation serverVersion)
at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
.<CallService>b__0()
at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
— End of inner exception stack trace —
at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
int(Boolean fromAutoDiscover)
IsValid            : True
Identity           :
ObjectState        : New

Now we can see the 501 error that exrca was returning. It would seem that the 501 is coming from the Kemp and not from the endpoint servers, which is why I could not locate it in the IIS or EWS logs; and so in the Kemp System Message File (logging options > system log files) I found the 501 error:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.1.2:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

Where the first IP address was a Microsoft datacentre IP and the second was the Kemp listener IP.

It turns out this is due to the Kemp load balancer not returning to Microsoft the Continue-100 status that it should get. Microsoft waits 5 seconds and then sends the data it would have sent if it had got the response back. At this point the Kemp blocks this data with a 501 error.
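
To illustrate what should happen (a simplified sketch of the HTTP exchange rather than a capture from this environment – the host and length are placeholders):

POST /EWS/mrsproxy.svc HTTP/1.1
Host: outlook.domain.com
Expect: 100-continue
Content-Length: 1270

HTTP/1.1 100 Continue   <- the interim response the LoadMaster was not passing back

...SOAP request body, sent by Microsoft after its 5 second wait anyway, then rejected with the 501...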

It is possible to turn off Kemp’s processing of Continue-100 HTTP packets so it lets them through and this is covered in http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html:


In the version at my client’s, which was upgraded to version 7.0-16 from v5 through v6 to v7, it was defaulting to RFC Conformant, but it needs to be Ignore Continue-100 to work with Office 365. In Kemp versions 7.1 and later the value needs to be changed to “RFC-7231 Compliant” from the default of “RFC-2616 Compliant”. Now EWS works, hybrid mailbox moves work, and AutoDiscover is always working on that server – so a mix of issues caused by differing interpretations of an RFC. To cover all these issues the Kemp load balancer started to include this 100-Continue Handling setting. We as IT pros need to ensure that we set the correct setting for our environment based on the software we use.

Changing AD FS 3.0 Certificates


I am quite adept at configuring certificates and changing them around, but this one took me completely by surprise as it has a bunch of oddities to consider.

First the errors: Web Application Proxy (WAP) reported 0x80075213. In the event log the following:

The federation server proxy could not establish a trust with the Federation Service.

Additional Data
Exception details:
The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

User Action
Ensure that the credentials being used to establish a trust between the federation server proxy and the Federation Service are valid and that the Federation Service can be reached.

And

Unable to retrieve proxy configuration data from the Federation Service.

Additional Data

Trust Certificate Thumbprint:
431116CCF0AA65BB5DCCD9DD3BEE3DFF33AA4DBB

Status Code:
Exception details:
System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly.
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.IdentityServer.Management.Proxy.StsConfigurationProvider.GetStsProxyConfiguration()

This was ultimately caused by the certificate on the AD FS server having been replaced in the user interface, but that did not replace the certificate that HTTP was using, nor the certificates used by the published web applications.

So here are the steps to fix this issue:

AD FS Server

  1. Ensure the certificate is installed in the computer store of all the AD FS servers in the farm
  2. Grant permissions to the digital certificate to the ADFS Service account. Do this by right-clicking the new digital certificate in the MMC snap-in for certificates and choosing All Tasks > Manage Private Keys. Grant read permission to the service account that ADFS is using (you need to click Object Types and select Service Accounts to be able to select this user). Repeat for each server in the farm.
  3. In the AD FS management console expand service > certificates and ensure that the service communications certificate is correct and that the date is valid.
  4. Open this certificate by double-clicking it and on the Details tab, check the value for the Thumbprint.
  5. Start PowerShell on the AD FS Server and run Get-AdfsSslCertificate (not Get-AdfsCertificate). You should get back a few rows of data listing localhost and your federation service name along with a PortNumber and CertificateHash. Make sure the CertificateHash matches the Thumbprint for the service communications certificate.
  6. If they do not, then run Set-AdfsSslCertificate -Thumbprint XXXXXX (where XXXXX is the thumbprint value without spaces).
  7. Then you need to restart the ADFS Server.
  8. Repeat for each member of the farm, taking them out of and back into any load balancer configuration you have. Ensure any SSL certs on the load balancer are updated as well. (A consolidated sketch of steps 5 to 7 follows this list.)
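
As a consolidated PowerShell sketch of steps 5 to 7 (the thumbprint is a placeholder; the steps above restart the whole server, though restarting the service may suffice):

$thumb = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'   # new certificate thumbprint, no spaces
Get-AdfsSslCertificate                                 # review the CertificateHash values first
Set-AdfsSslCertificate -Thumbprint $thumb
Restart-Service adfssrv                                # or reboot the AD FS server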

Web Application Proxy

  1. Ensure the certificate is installed in the computer store of the web application proxy server as well. Permissions do not need to be set for this service.
  2. Run Get-WebApplicationProxySslCertificate. You should get back a few rows of data listing localhost and your federation service name along with a PortNumber and CertificateHash. Make sure the CertificateHash matches the Thumbprint for the service communications certificate. If it does not use Set-WebApplicationProxySslCertificate -Thumbprint XXXXX to change it. Here XXXXX is the thumbprint value without spaces.
  3. Restart WAP server and it should now connect to the AD FS endpoint and everything should be working again.
  4. Check that each published web application is using the new certificate (a consolidated sketch follows this list). Do this with Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint XXXXXX (where XXXXXX is the thumbprint value without spaces). This sets all your applications to use the same certificate. If you have different certificates in use, then do them one by one with Get-WebApplicationProxyApplication | Format-Table Name,ExternalCert* to see what the existing thumbprints are, and then Get-WebApplicationProxyApplication “name” | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint XXXXXX to do just the one called “name”.
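
The WAP side as one minimal sketch, assuming a single certificate is used for the proxy and all published applications (the thumbprint is a placeholder):

$thumb = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'   # new certificate thumbprint, no spaces
Set-WebApplicationProxySslCertificate -Thumbprint $thumb
Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint $thumb
Restart-Service appproxysvc                            # the Web Application Proxy service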

Getting Exchange Message Sizing Raw Data


On the internet there are a number of resources for collecting the raw data needed to size Exchange Server deployments. These include:

This blog outlines my process for collecting the data needed for the average message size. What is missing from the above two posts is the ability to collect this data in one go for the last seven (or so) days and then get the message tracking log average over the busiest five or so days. Different countries have different working patterns and public holidays, and the scripts above only do the previous day (though Neil’s says the script uses averages across many days – it does not).

The Script

Download the script from the source and open it in Notepad. Edit the top of the script by deleting the first five lines of the code (not the comments or the blank lines) and replacing them with the alternative lines shown below:

Remove These Lines

$today = get-date
$rundate = $($today.adddays(-1)).toshortdatestring()

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Replace With These Lines

$today = get-date
$whichDay = $args[0]
if (! $whichDay) { $whichDay = 1 }
$rundate = $($today.adddays(-$whichDay))

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Before running this script you need to do two things. First if you have different Exchange Server locations around the world you need to alter the script to only include the servers in those geographies that you want to search. Do this by changing the two lines that read as follows and add in the server name or a unique differentiator that will pick up just the mailbox and hub transport servers in these regions. Edge servers do not need to be considered:

  • $mbx_servers = Get-ExchangeServer |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver |? {$_.serverrole -match "hubtransport"} |% {$_.name}

These two lines are near the top and need to be changed to something like the following:

  • $mbx_servers = Get-ExchangeServer UK* |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver UK* |? {$_.serverrole -match "hubtransport"} |% {$_.name}

In the above I altered the script to find just the Exchange Servers starting with the letters UK. Adjust as needed. If you have one location/timezone worldwide then these changes are not needed.

If you are collecting data from an Exchange 2013 server(s) then change “hubtransport” in the $hts line to read “mailbox”.

Save the file as a PowerShell script (Get-MessageTrackingLogStats.ps1) and copy it to your Exchange Server.

Before you can run the script you need to check that you have enough tracking logs to process, or you will get invalid and skewed data. Run Get-TransportServer | FL name,*trackinglog* and make sure that you have a large enough quota for each day of logs, then check each of these folders and make sure they do not exceed this value. If they do, then you need to run the below script frequently, before the log files are removed from the server, rather than at the end of 7 or 14 days.

Now you can run the script with a number after it for how many days back you want to look at the logs. This will process the message tracking logs for that single day, that number of days back. For example, if you run Get-MessageTrackingLogStats.ps1 5 then it will look at one day of data from five days ago. Repeat the running of the script until you have run it seven times (or loop it, as in the sketch after this list). You will have one CSV file of tracking log reports for each of the last seven days:

  • Get-MessageTrackingLogStats.ps1 1
  • Get-MessageTrackingLogStats.ps1 2
  • Get-MessageTrackingLogStats.ps1 3
  • Get-MessageTrackingLogStats.ps1 4
  • Get-MessageTrackingLogStats.ps1 5
  • Get-MessageTrackingLogStats.ps1 6
  • Get-MessageTrackingLogStats.ps1 7
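
If you prefer, the seven runs can be looped in one line (a sketch, assuming the edited script is in the current directory):

# One pass of the edited script per day, for each of the last seven days
1..7 | ForEach-Object { .\Get-MessageTrackingLogStats.ps1 $_ }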

Zip up these files and take them to a computer running Microsoft Excel.

Open the oldest file and process each of them as follows:
image

Hide columns C through E, H through O, and R through U:
image

image

You now have a spreadsheet with Date/User/Received Total/Received MB Total/Sent Unique Total/Sent Unique MB Total:
image

Format as a table by selecting a cell in the table and clicking the Table button in the Insert tab:
image

The Table Tools / Design tab appears. Select the Total Row check box from here. This will scroll you to the bottom of the table.
image

Select the third cell in the total row and drop-down the options and choose Average:
image

Drag this Average cell formula across all the four columns of numbers:
image

Modify the fourth column (Received MB Total) to read as follows =SUBTOTAL(101,[Received MB Total])/Table1[[#Totals],[Received Total]]*1024. This is the fourth column divided by the third column (the count of received messages) and multiplied by 1024 to convert it from MB to KB. The Exchange Bandwidth Calculator and Storage Calculator work on values in KB and not MB.
image

Repeat for the final column of figures. This time take the formula to be =SUBTOTAL(101,[Sent Unique MB Total])/Table1[[#Totals],[Sent Unique Total]]*1024, which is the last column divided by the previous one and converted to KB. This total row now shows you the average messages sent and received and the average size of these messages in KB.

At the bottom of the spreadsheet add the following:

  • Sent/Mailbox/Day
  • Received/Mailbox/Day
  • Average/MessageSize/Day (KB)

Then copy the relevant data into the cells as shown. The Average is the sum of the two averages divided by 2 (=(Table1[[#Totals],[Received MB Total]]+Table1[[#Totals],[Sent Unique MB Total]])/2):
image

Then take all your numbers and reduce the number of decimal points shown:
image

Now that you have the raw data calculated, save this file as an Excel Workbook (and not a CSV).

Repeat for each of the CSV files you have available.

Process The Daily Data

Create a new spreadsheet that contains the following three tables:

  • Messages Sent Per Day, with a row for each day and a column for each unique geographical region
  • Messages Received Per Day, with a row for each day and a column for each unique geographical region
  • Average Message Size (KB), with a row for each day and a column for each unique geographical region

This will look like the following, once all the data is copied from the source spreadsheets:

image

Now you can create a chart for each region for each table. The following shows Sent / Received and Average (KB). Each region is overlapping as a different line colour:

image
Note it’s possible here that HK (blue) is missing some data, as it is unexpectedly low

image

image

Note the HK (blue) Sunday average message size. This is probably because one or a few users sent a disproportionate number of larger emails on the quiet day of the week. For my analysis I am going to ignore it.

Now I have my peak Messages Sent Per Day for each region – and I take the highest value for the week, not the value for yesterday, which is what I would get if I had just run the above script once.

This data can now go into the Bandwidth Calculator and generate accurate figures for the business in question.

Enabling Microsoft Rights Management in SharePoint Online


This article is the fifth in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at protecting documents in SharePoint. This means your cloud users will have their data protected just by saving it to a document library.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

To enable SharePoint Online to integrate with Microsoft Rights Management you need to turn on RMS in SharePoint. You do this with the following steps:

  1. Go to service settings, click sites, and then click View site collections and manage additional settings in the SharePoint admin center:
    image
  2. Click settings and find Information Rights Management (IRM) in the list:
    image
  3. Select Use the IRM service specified in your configuration and click Refresh IRM Settings:
    image
  4. Click OK

Once this is done, you can now enable selected document libraries for RMS protection.

  1. Find the document library that you want to enforce RMS protection upon and click the PAGE tab to the top left of the SharePoint site (under the Office 365 logo).
    image
  2. Then click Library Settings:
    image
  3. If the site is not a document library – for example, the picture below shows a “document center” site – you will not see the Library Settings option. For these sites, navigate to the document library specifically, click the LIBRARY tab, and then choose Library Settings:
    image
    image
  4. Click Information Rights Management
    image
  5. Select Restrict permissions on this library on download and add your policy title and policy description. Click SHOW OPTIONS to configure additional RMS settings on the library, and then click OK.
    image
  6. The additional options allow you to enforce restrictions on the document library, such as RMS key caching (for offline use) and allowing the document to be shared with a group of users. This group must be mail-enabled (or at least have an email address in its email address attribute) and be synced to the cloud.

To start using the RMS functionality in SharePoint, upload a document to this library or create a new document in the library. Then download the document again – it will now be RMS protected.

Using Microsoft Rights Management from Microsoft Office


This article is the second last in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at protecting documents and emails in Microsoft Office 2010 or later. This means your cloud users and your on-premises users can share encrypted content, and as it is cloud based, you can send encrypted content to anyone, even if you are not using an Office 365 mailbox.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

Once you have enabled Microsoft Rights Management and your Microsoft Office user is connected to Office 365 via the accounts feature in Office:

image

In the above picture I have Microsoft Word showing that I am connected to my organizational account in Office 365.

Now that I am logged in to Office 365 I can access the RMS templates in Office 365 and protect documents with the templates I have created. To go through this process you need to follow these steps:

  1. Select the File tab in the Office application that your document was created in.
  2. You will find yourself in the Info area:
    image
  3. Click Protect Document > Restrict Access > Select your Office 365 tenant name (if you have more than one that you have used, you will see each that is enabled for RMS) > then select the protection template that you want to use:
    image
  4. The Protect Document button is highlighted and the RMS protection template is displayed, along with the description. If you are running a non-English version of Microsoft Office and your template has a matching language name, then you will see that instead. It will default to US-English for the languages that do not have a country option in the RMS template creation process:
    image
  5. Note though that if you do not have any company employees working in the USA, you could use US-English for your own default language if your actual default language is not available – for example, in the above I am on a UK-English version of Office and I could have used UK-English spellings and phrases for the name and description – who is to know that the US-English default is being shown!
  6. Back in the document the yellow banner, which can be closed, shows the protection level and the description:
    image

Though I have shown Word in the above examples, this is true for other Office documents as well, as long as you are using Office ProPlus edition, as you need that version to create RMS protected content. For example, an Outlook message is shown below:

image

Configuring Exchange On-Premises to Use Azure Rights Management

Posted on 19 CommentsPosted in 2010, 2013, 64 bit, aadrm, ADFS, ADFS 2.0, DLP, DNS, exchange, exchange online, https, hybrid, IAmMEC, load balancer, loadbalancer, mcm, mcsm, MVP, Office 365, powershell, rms, sharepoint, warm

This article is the fifth in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at enabling on-premises Exchange Servers to use this cloud based RMS service. This means your cloud users and your on-premises users can share encrypted content, and as it is cloud based, you can send encrypted content to anyone even if they do not have an Office 365 mailbox.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back or leave a comment to the first post in the series and I will let you know when new content is added.

Exchange Server integrates very nicely with on-premises RMS servers. To integrate Exchange on-premises with Windows Azure Rights Management you need to install a small connector service on-premises that connects Exchange to the cloud RMS service. On-premises file servers (classification) and SharePoint can also use this service to integrate themselves with cloud RMS.

You install this small service on-premises on servers that run Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2. After you install and configure the connector, it acts as a communications interface between the on-premises IRM-enabled servers and the cloud service. The service can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=40839

From this download link there are three files to get onto the server you are going to use for the connector.

  • RMSConnectorSetup.exe (the connector server software)
  • GenConnectorConfig.ps1 (this automates the configuration of registry settings on your Exchange and SharePoint servers)
  • RMSConnectorAdminToolSetup_x86.exe (needed if you want to configure the connector from a 32-bit client)

Once you have installed the software you need, IT and users can easily protect documents and pictures both inside and outside your organization, without having to install additional infrastructure or establish trust relationships with other organizations.

The overview of the structure of the link between on-premises and Windows Azure Rights Management is as follows:

IC721938

Notice therefore that there are some prerequisites. You need to have an Office 365 tenant and to have turned on Windows Azure Rights Management. Once you have done this you need the following:

  • Get your Office 365 tenant up and running
  • Configure Directory Synchronization between on-premises Active Directory and Windows Azure Active Directory (the Office 365 DirSync tool)
  • Optionally (recommended, but not required) enable ADFS for Office 365 to avoid having to log in to Windows Azure Rights Management when creating or opening protected content
  • Install the connector
  • Prepare credentials for configuring the software
  • Authorise the servers that will connect to the service
  • Configure load balancing to make this a highly available service
  • Configure Exchange Server on-premises to use the connector

Installing the Connector Service

  1. You need to set up an RMS administrator. This administrator is either a specific user object in Office 365 or all the members of a security group in Office 365.
    1. To do this start PowerShell and connect to the cloud RMS service by typing Import-Module aadrm and then Connect-AadrmService.
    2. Enter your Office 365 global administrator username and password.
    3. Run Add-AadrmRoleBasedAdministrator -EmailAddress <email address> -Role "GlobalAdministrator" or Add-AadrmRoleBasedAdministrator -SecurityGroupDisplayName <group Name> -Role "ConnectorAdministrator". If the administrator object does not have an email address then you can look up its ObjectID with Get-MSOLUser and use that instead of the email address.
  2. Create a DNS name for the connector in any namespace that you own. This name needs to be resolvable from your on-premises servers, so it could be in your internal (.local etc.) AD DNS zone. For example, create rmsconnector.contoso.local pointing to the IP address of the connector server (or the load balancer VIP) that you will use for the connector.
  3. Run RMSConnectorSetup.exe on the server you wish to have as the service endpoint on premises. If you are going to make a highly available solution, then this software needs installing on multiple machines and can be installed in parallel. Install a single RMS connector (potentially consisting of multiple servers for high availability) per Windows Azure RMS tenant. Unlike Active Directory RMS, you do not have to install an RMS connector in each forest. Select to install the software on this computer:
    IC001
  4. Read and accept the licence agreement!
  5. Enter your RMS administrator credentials as configured in the first step.
  6. Click Next to prepare the cloud for the installation of the connector.
  7. Once the cloud is ready, click Install. During the RMS installation process, all prerequisite software is validated and installed, Internet Information Services (IIS) is installed if not already present, and the connector software is installed and configured.
    IC002
  8. If this is the last server that you are installing the connector service on (or the first if you are not building a highly available solution) then select Launch connector administrator console to authorize servers. If you are planning on installing more servers, install those now and authorise servers afterwards:
    IC003
  9. To validate the connector quickly, connect to http://<connectoraddress>/_wmcs/certification/servercertification.asmx, replacing <connectoraddress> with the server address or name that has the RMS connector installed. A successful connection displays a ServerCertificationWebService page.
  10. For an Exchange Server organization or SharePoint farm it is recommended to create a security group (one for each) that contains the accounts that Exchange or SharePoint runs as. This way the servers all get the rights needed for RMS with minimal administrative interaction. Adding servers individually rather than to the group results in the same outcome, it just requires more work. It is important that you authorize the correct object. For a server to use the connector, the account that runs the on-premises service (for example, Exchange or SharePoint) must be selected for authorization. For example, if the service is running as a configured service account, add the name of that service account to the list. If the service is running as Local System, add the name of the computer object (for example, SERVERNAME$).
    1. For servers that run Exchange: You must specify a security group and you can use the default group (DOMAIN\Exchange Servers) that Exchange automatically creates and maintains of all Exchange servers in the forest.
    2. For SharePoint you can use the SERVERNAME$ object, but the recommended configuration is to run SharePoint by using a manually configured service account. For the steps for this see http://technet.microsoft.com/en-us/library/dn375964.aspx.
    3. For file servers that use File Classification Infrastructure, the associated services run as the Local System account, so you must authorize the computer account for the file servers (for example, SERVERNAME$) or a group that contains those computer accounts.
  11. Add all the required groups (or servers) to the authorization dialog and then click Close. Exchange Servers will get SuperUser rights to RMS (to decrypt content):
    image
    image
  12. If you are using a load balancer, then add all the IP addresses of the connector servers to the load balancer under a new virtual IP and publish it for TCP port 80 (and 443 if you want to configure it to use certificates), distributing the traffic equally across all the servers. No affinity is required. Add a health check for the success of an HTTP or HTTPS connection to http://<connectoraddress>/_wmcs/certification/servercertification.asmx so that the load balancer fails over correctly in the event of connector server failure.
  13. To use SSL (HTTPS) to connect to the connector server, on each server that runs the RMS connector, install a server authentication certificate that contains the name that you will use for the connector. For example, if your RMS connector name that you defined in DNS is rmsconnector.contoso.com, deploy a server authentication certificate that contains rmsconnector.contoso.com in the certificate subject as the common name. Or, specify rmsconnector.contoso.com in the certificate alternative name as the DNS value. The certificate does not have to include the name of the server. Then in IIS, bind this certificate to the Default Web Site.
  14. Note that any certificate chains or CRLs for the certificates in use must be reachable.
  15. If you use proxy servers to reach the internet then see http://technet.microsoft.com/en-us/library/dn375964.aspx for steps on configuring the connector servers to reach the Windows Azure Rights Management cloud via a proxy server.
  16. Finally you need to configure the Exchange or SharePoint servers on premises to use Windows Azure Rights Management via the newly installed connector.
    • To do this you can either download and run GenConnectorConfig.ps1 on the server you want to configure, or use the same tool to generate a Group Policy script or a registry key script that can be used to deploy across multiple servers.
    • Just run the tool and at the prompt enter the URL that you have configured in DNS for the connector, followed by the parameter to make the local registry settings, the registry files or the GPO import file. Enter either http:// or https:// in front of the URL depending on whether or not SSL is in use on the connector’s IIS website.
    • For example .\GenConnectorConfig.ps1 -ConnectorUri http://rmsconnector.contoso.com -SetExchange2013 will configure a local Exchange 2013 server
  17. If you have lots of servers to configure then run the script with -CreateRegEditFiles or -CreateGPOScript along with -ConnectorUri. This will make five reg files (for Exchange 2010 or 2013, SharePoint 2010 or 2013, and the File Classification service). For the GPO option it will make one GPO import script.
  18. Note that the connector can only be used by Exchange Server 2010 SP3 RU2 or later, or Exchange 2013 CU3 or later. The OS on the server also needs to include a version of the RMS client that supports RMS Cryptographic Mode 2. This is Windows Server 2008 + KB2627272, Windows Server 2008 R2 + KB2627273, Windows Server 2012, or Windows Server 2012 R2.
  19. For Exchange Server you need to manually enable IRM as you would do if you had an on-premises RMS server. This is covered in http://technet.microsoft.com/en-us/library/dd351212.aspx but in brief you run Set-IRMConfiguration -InternalLicensingEnabled $true. The rest, such as transport rules and OWA and search configuration is covered in the mentioned TechNet article.
  20. Finally you can test if RMS is working with Test-IRMConfiguration -Sender billy@contoso.com. You should get a message at the end of the test saying Pass.
  21. If you downloaded GenConnectorConfig.ps1 before May 1st 2014 then download it again, as the version before this date writes the registry keys incorrectly and you get errors such as “FAIL: Failed to verify RMS version. IRM features require AD RMS on Windows Server 2008 SP2 with the hotfixes specified in Knowledge Base article 973247” and “Microsoft.Exchange.Security.RightsManagement.RightsManagementException: Failed to get Server Info from http://rmsconnector.contoso.com/_wmcs/certification/server.asmx. —> System.Net.WebException: The request failed with HTTP status 401: Unauthorized.”. If you get these errors then turn off IRM, delete the “C:\ProgramData\Microsoft\DRM\Server” folder to remove old licences, delete the registry keys, run the latest version of GenConnectorConfig.ps1, refresh the RMS keys with Set-IRMConfiguration -RefreshServerCertificates, and reset IIS with IISRESET.
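
For convenience, the following is a minimal sketch of the main PowerShell from the steps above gathered into one sequence (the security group name is illustrative – substitute your own values):

# Grant rights to administer the RMS connector (requires the aadrm module)
Import-Module aadrm
Connect-AadrmService
Add-AadrmRoleBasedAdministrator -SecurityGroupDisplayName "RMS Connector Admins" -Role "ConnectorAdministrator"
# After installing and authorising the connector, configure each Exchange 2013 server
.\GenConnectorConfig.ps1 -ConnectorUri https://rmsconnector.contoso.com -SetExchange2013
# Enable IRM in Exchange on-premises and test
Set-IRMConfiguration -InternalLicensingEnabled $true
Test-IRMConfiguration -Sender billy@contoso.com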

Now you can encrypt messages on-premises using your AADRM licence and so not require RMS Server deployed locally.

Is Your SenderID/SPF or DKIM Record Correctly Configured?

Posted on Leave a commentPosted in DNS, domain, exchange, exchange online, IAmMEC, MX, Office 365, spam

With Microsoft having just announced that DKIM is coming to Office 365 soon (release notes here) and SenderID already available, I thought this was a good time to write a blog post on using DMARC to show whether your records are correct.

DMARC is a protocol that allows you to see the effect of your SenderID/SPF records from the viewpoint of the recipient – as long as the recipient is one of the larger email receivers in the world, but that should be enough to help you validate your SenderID/SPF records. DMARC is currently in use at Outlook.com/Hotmail, Gmail, Facebook, Yahoo and Twitter to name some of the larger users of it.

To receive the results of a DMARC report you need to add your DMARC policy as a new TXT record to DNS. It looks something like this:

_dmarc IN TXT "v=DMARC1;p=reject;pct=100;rua=mailto:postmaster@dmarcdomain.com"

This TXT record for “_dmarc” at the root of your domain tells the receiver to report back to you all (pct=100) SenderID/SPF failures. They also report DKIM failures. If you send lots of email you might want to report on a smaller percentage of the overall failures, say pct=20! The record also says who to send the report to (rua=email address).

Finally, one of the most useful parts of DMARC is that you can tell the receiver what to do with failures of your SenderID/SPF or DKIM record. For example in SenderID/SPF you can append the modifier “-all” to the record. This says that anything not covered by the rest of the record should fail SenderID. The recipient then decides what to do with your email (as it might be a genuine email of yours, because you might have got your SPF or DKIM record wrong). In the example above you tell the recipient email system you would like them to reject (p=) the email. The policy could also be quarantine or none. The policy p=none means accept the message, but report it back to the rua email address in the DMARC record.

p=none allows you to introduce SPF/DKIM and ensure no rejection of your messages at the recipient end, while getting reports back on the IP addresses your emails (or spoofed emails) are coming from.

Once you are sure your SPF/DKIM records are correct change your policy to p=quarantine and then when you are finally sure, change it to p=reject.

An example of creating a DMARC policy on your domain is shown below. The example uses AWS Route 53 DNS, but any DNS management console that supports TXT records should work:

image

Finally, as this is just a quick intro to DMARC, note that you can also use the ruf= qualifier to set the email address for per-message failure reports; an example record combining these options follows.
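
For example, a record part way through a rollout might quarantine failures, apply the policy to half of the failing mail, and request both aggregate (rua) and per-message (ruf) reports – the addresses and percentage here are illustrative:

_dmarc IN TXT "v=DMARC1;p=quarantine;pct=50;rua=mailto:postmaster@dmarcdomain.com;ruf=mailto:postmaster@dmarcdomain.com"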

Updating Exchange 2013 Anti-Malware Agent From A Non-Internet Connected Server

Posted on Leave a commentPosted in 2013, 64 bit, antivirus, exchange, Exchange Online Protection, IAmMEC, malware, mcm, mcsm, powershell, x64

In Forefront Protection for Exchange (now discontinued) for Exchange 2010 it was possible to run the script at http://support.microsoft.com/kb/2292741 to download the signatures and scan engines when the server did not have a direct connection to the download site at forefrontdl.microsoft.com.

To achieve the same with Exchange 2013 and the built-in anti-malware transport agent, you can repurpose the 2010 script: download the engine updates to a folder on a machine that has internet access, and then have Exchange Server 2013 pull the updates from a share on that machine which the Exchange Servers can reach.

So start by downloading the script at http://support.microsoft.com/kb/2292741 and saving it as Update-Engines.ps1.

Create a folder called C:\Engines (for example) and share it, granting Authenticated Users read access and the account that will run Update-Engines.ps1 full control.

Run Update-Engines.ps1 with the following parameters:

Update-Engines.ps1 -EngineDirPath C:\engines -UpdatePathUrl http://forefrontdl.microsoft.com/server/scanengineUpdate/ -Engines "Microsoft" -Platforms amd64

The above script downloads just the 64-bit Microsoft engine, as that is all you need, and places it in the local folder (which is the shared folder you created) on that machine. You can schedule this script using standard published techniques for scheduling PowerShell; a sketch follows.
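
As one example, the built-in task scheduler can run the script hourly under the SYSTEM account. The script path below is an assumption – adjust it to wherever you saved Update-Engines.ps1:

schtasks /create /tn "Update Malware Engines" /ru SYSTEM /sc hourly /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Update-Engines.ps1 -EngineDirPath C:\engines -UpdatePathUrl http://forefrontdl.microsoft.com/server/scanengineUpdate/ -Engines Microsoft -Platforms amd64"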

On your Exchange Server that has no internet connectivity, start Exchange Management Shell and run the following:

Set-MalwareFilteringServer ServerName -PrimaryUpdatePath \\dlserver\enginesShare
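
To confirm the setting took effect, you can read it back with the matching Get- cmdlet:

Get-MalwareFilteringServer ServerName | Format-List Name,PrimaryUpdatePath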

Then start a PowerShell window that is running as an administrator – you can use Exchange Management Shell, but it too needs to be started as an administrator to do this last step. In this shell run the following:

Add-PSSnapin microsoft.forefront.filtering.management.powershell

Get-EngineUpdateInformation

Start-EngineUpdate

Get-EngineUpdateInformation

Then compare the first results from Get-EngineUpdateInformation with the second results. If you have waited 30 or so seconds, the second set of results should show the current time for the LastChecked value. UpdateVersion and UpdateStatus might also have changed. If your Exchange Server has internet connectivity it will already update automatically every hour and so does not need this script run against it.
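
Putting those commands together, a minimal before-and-after comparison might look like the following sketch (the properties selected are those mentioned above, as reported by Get-EngineUpdateInformation):

Add-PSSnapin microsoft.forefront.filtering.management.powershell
$before = Get-EngineUpdateInformation
Start-EngineUpdate
# Give the update a short while to run before checking again
Start-Sleep -Seconds 60
$after = Get-EngineUpdateInformation
$before | Select-Object LastChecked, UpdateVersion, UpdateStatus
$after | Select-Object LastChecked, UpdateVersion, UpdateStatus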

Exchange DLP Rules in Exchange Management Shell

Posted on Leave a commentPosted in 2013, cloud, DLP, EOP, exchange, exchange online, Exchange Online Protection, IAmMEC, IFilter, mcm, mcsm, Office 365

This one took a while to work out, so noting it down here!

If you want to create a transport rule for a DLP policy that has one data classification (i.e. data type to look for such as ‘Credit Card Number’) then that is easy in PowerShell and an example would be as below.

New-TransportRule -name "Contoso Pharma Restricted DLP Rule (Blocked)" -DlpPolicy "ContosoPharma" -SentToScope NotInOrganization -MessageContainsDataClassifications @{Name="Contoso Pharmaceutical Restricted Content"} -SetAuditSeverity High -RejectMessageEnhancedStatusCode 5.7.1 -RejectMessageReasonText "This email contains restricted content and you are not allowed to send it outside the organization"

As you can see, the data classification parameter takes a hashtable, and here a single classification is mentioned.

To add more than one classification is much more involved:

$DataClassificationA = @{Name="Contoso Pharmaceutical Private Content"}
$DataClassificationB = @{Name="Contoso Pharmaceutical Restricted Content"}
$AllDataClassifications = @{}
$AllDataClassifications.Add("DataClassificationA",$DataClassificationA)
$AllDataClassifications.Add("DataClassificationB",$DataClassificationB)
New-TransportRule -name "Notify if email contains ContosoPharma documents 1" -DlpPolicy "ContosoPharma" -SentToScope NotInOrganization -MessageContainsDataClassifications $AllDataClassifications.Values -SetAuditSeverity High -GenerateIncidentReport administrator -IncidentReportContent "Sender","Recipients","Subject" -NotifySender NotifyOnly

And as you can see above, you need to make a hashtable of hashtables and then use the Values property of the outer hashtable in the New-TransportRule command.
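
To find the exact classification names to put in these hashtables, you can list the data classifications your organization knows about (a quick check, assuming Get-DataClassification is available in your version of Exchange):

Get-DataClassification | Sort-Object Name | Format-Table Name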

An “Inexpensive” Exchange Lab In Azure

Posted on Leave a commentPosted in 2010, 2013, Azure, cloud, DNS, exchange, exchange online, hyper-v, IAmMEC, Office 365, vhd, vm, vpn

This blog post centres around two scripts that can be used to quickly provision an Exchange Server lab in Azure and then to remove it again. The reason why the blog post is titled “inexpensive” is that Azure charges compute hours even if the virtual machines are shut down. Therefore to make my Exchange lab cheaper to operate and to not charge me when the lab is not being used, I took my already provisioned VHD files and created a few scripts to create the virtual machines and cloud service and then to remove it again if needed.

Before you start using these scripts, you need to have already uploaded or created your own VHDs in Azure and designed your lab as you need. These scripts will then take a CSV file with the relevant values in it and create a VM for each VHD in the correct subnet (which you have also created in Azure) and always in the correct order – thus ensuring they always get the same IP address from your virtual network (UPDATE: 14 March 2014 – Thanks to Bhargav, this script now reserves the IPs as well, as this is a newish feature in Azure). Without reserving an IP, when you boot your domain controller first in each subnet it will always get the fourth available IP address. This IP is the DNS IP address in Azure, and then each of the other machines are created and booted in the order of your choosing and so get the subsequent IPs. Azure never used to guarantee the IP, but updates in Feb 2014 now allow this with the latest Azure PowerShell cmdlets. This way we can ensure the private IP is always the same and machine dependencies, such as domain controllers running first, are adhered to.

These scripts are created in PowerShell and call the Windows Azure PowerShell cmdlets. You need to install the Azure cmdlets on your computer and these scripts rely on features found in version 0.7.3.1 or later. You can install the cmdlets from http://www.windowsazure.com/en-us/documentation/articles/install-configure-powershell/

Build-AzureExchangeLab.ps1

# Retrieve with Get-AzureSubscription 
$subscriptionName = "Visual Studio Premium with MSDN"

Import-AzurePublishSettingsFile 'downloaded.publishsettings.file.got.with.Get-AzurePublishSettingsFile'

# Select the subscription to work on if you have more than one subscription
Select-AzureSubscription -SubscriptionName $subscriptionName

# Name of Virtual Network to add VM's to
$VMNetName = "MCMHybrid"

# CSV File with following columns (BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort)
$CSVFile = Import-CSV 'path\filename.csv'

# Loop through the CSV file and build each VM
foreach ($VMItem in $CSVFile) {

    # Retrieve with Get-AzureStorageAccount  
    $StorageAccount = $VMItem.StorageAccount

    # Specify the storage account location containing the VHDs 
    Set-AzureSubscription -SubscriptionName $subscriptionName  -CurrentStorageAccount $StorageAccount
  
    # Not Used $location = $VMItem.Location     # Retrieve with Get-AzureLocation

    # Specify the subnet to use. Retreive with Get-AzureVNetSite | FL Subnets
    $subnetName = $VMItem.SubnetName

    $AffinityGroup = $VMItem.AffinityGroup      # From Get-AzureAffinityGroup (for association with a private network you have already created). 

    $VMName = $VMItem.VMName
    $VMOSDiskName = $VMItem.VMOSDiskName        # From Get-AzureDisk
    $VMInstanceSize = $VMItem.VMInstanceSize    # ExtraSmall, Small, Medium, Large, ExtraLarge 
    $CloudServiceName = $VMName
    $IPAddress = $VMItem.IPAddress              # Reserves a specific IP for the VM
    if ($VMItem.BringOnline -eq "Yes") {
        Write-Host "Creating VM: " $VMName
        $NewVM = New-AzureVMConfig -Name $VMName -DiskName $VMOSDiskName -InstanceSize $VMInstanceSize | Add-AzureEndpoint -Name 'Remote Desktop' -LocalPort 3389 -PublicPort $VMItem.PublicRDPPort -Protocol tcp | Add-AzureEndpoint -Protocol tcp -LocalPort 25 -PublicPort 25 -Name 'SMTP' | Add-AzureEndpoint -Protocol tcp -LocalPort 443 -PublicPort 443 -Name 'SSL' | Add-AzureEndpoint -Protocol tcp -LocalPort 80 -PublicPort 80 -Name 'HTTP' | Set-AzureSubnet -SubnetNames $subnetName | Set-AzureStaticVNetIP -IPAddress $IPAddress
        # Creates new VM and waits for it to boot if required
        if ($VMItem.WaitForBoot -eq "Yes") {New-AzureVM -ServiceName $CloudServiceName -AffinityGroup $AffinityGroup -VMs $NewVM -VNetName $VMNetName -WaitForBoot}
            else {New-AzureVM -ServiceName $CloudServiceName -AffinityGroup $AffinityGroup -VMs $NewVM -VNetName $VMNetName }
    }
}

Remove-AzureExchangeLab.ps1

# Retrieve with Get-AzureSubscription 
$subscriptionName = "Visual Studio Premium with MSDN"

Import-AzurePublishSettingsFile 'downloaded.publishsettings.file.got.with.Get-AzurePublishSettingsFile'

# Select the subscription to work on if you have more than one subscription
Select-AzureSubscription -SubscriptionName $subscriptionName

# CSV File with following columns (BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort)
$CSVFile = Import-CSV 'path\filename.csv'

# Loop through the CSV file and remove each VM and its cloud service
foreach ($VMItem in $CSVFile) {

    # Stop VM
    Stop-AzureVM -Name $VMItem.VMName -ServiceName $VMItem.VMName -Force

    # Remove VM but leave VHDs behind
    Remove-AzureVM -ServiceName $VMItem.VMName -Name $VMItem.VMName 

    # Remove Cloud Service
    Remove-AzureService $VMItem.VMName -Force
}
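
Since Remove-AzureVM leaves the VHDs behind, you can confirm afterwards that the disks are still registered and are now detached (a quick sketch using the same service management cmdlets):

Get-AzureDisk | Where-Object { $_.AttachedTo -eq $null } | Select-Object DiskName, MediaLink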

CSV File Format

The CSV file has a row per virtual machine, listed in order that the machine is booted:

BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort
Yes,mh-oxf-dc1,portalvhdsjv47jtq9qdrmb,mh-oxf-mbx2-mh-oxf-mbx2-0-201312301745030496,Small,Oxford,10.0.0.4,West Europe,C7Solutions-AG,Yes,3389
etc,…

The columns are as follows:

  • BringOnline: Yes or No
  • VMName: This name is used for the VM and the Cloud Service. It must be unique within Azure. An example might be EX-LAB-01 (if that is unique that is)
  • StorageAccount: The name of the storage account that the VHD is stored in. This might be one you created yourself or one made by Azure with a name containing random letters. For example portalvhdshr4djwe9dwcb5 would be what this value might look like. Use Get-AzureStorageAccount to find this value.
  • VMOSDiskName: This is the disk name (not the file name) and is retrieved with Get-AzureDisk
  • VMInstanceSize: ExtraSmall, Small, Medium, Large or ExtraLarge
  • SubnetName: Get with Get-AzureVNetSite | FL Subnets
  • IPAddress: Sets a specific IP address for the VM. The VM will always get this IP when it boots, and other VMs will not take it if they happen to boot before it
  • Location: Retrieve with Get-AzureLocation. This value is not used in the script as I use Affinity Groups and subnets instead.
  • AffinityGroup: From Get-AzureAffinityGroup (for association with a private network you have already created).
  • WaitForBoot: Yes or No. This will wait for the VM to come online (and thus get an IP correctly provisioned in order) and so ensure things like the domain controller are running first.
  • PublicRDPPort: Set to 3389 unless you want to use a different port. For simplicity, the script sets ports 443, 80 and 25 as open on the IP addresses of the VM
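
Before adding a row to the CSV you can check that the IP address you want to reserve is available in the virtual network (a quick sketch, requiring the Feb 2014 or later cmdlets mentioned above):

Test-AzureStaticVNetIP -VNetName "MCMHybrid" -IPAddress "10.0.0.4"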

Highly Available Office 365 to On-Premises Mail Routing

Posted on 22 CommentsPosted in 2010, 2013, cloud, DNS, EOP, exchange, exchange online, Exchange Online Protection, hybrid, IAmMEC, MX, Office 365, smarthost, smtp

This article looks at how to configure mail flow from Office 365 (via Exchange Online Protection – EOP) to your on-premises organization to ensure that it is highly available and works in disaster recovery scenarios with no impact. It is based on exactly the same principle as the one I blogged about in 2012 on creating redundant outbound connections from Exchange on premises: http://c7solutions.com/2012/05/highly-available-geo-redundancy-with-html

The best way to explain this feature is to describe it by way of an example:

For example, MCMEmail Ltd have a hybrid setup with delivery to the cloud first, so the DNS zone for mcmemail.co.uk has MX records pointing to EOP.

They then create a new DNS zone, either as a subzone (as in this example) or on a different domain if they have one available. In this example it is hybrid.mcmemail.co.uk. Into this zone they add the following records:

@ IN MX 10 oxford-a.hybrid.mcmemail.co.uk

@ IN MX 10 oxford-b.hybrid.mcmemail.co.uk

@ IN MX 20 nuneaton.hybrid.mcmemail.co.uk

The below picture shows an example of this configured in AWS Route 53 DNS (though there are other DNS providers available)

image

In the Exchange Online Protection administration pages (Office 365 Portal > Exchange Admin > Mail Flow > Connectors), modify your on-premises connector to point to the new zone. An example is shown in the below picture:

image

Then all email is delivered to the Oxford datacentres and nothing to the Nuneaton one (where the DR servers reside) unless both Oxford datacentres (A and B) are offline and so the 10 preference does not answer at all. At that time, and that time only, does the 20 preference get connected to.
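
You can verify the MX preferences resolve as expected from outside your network with a quick lookup:

nslookup -type=mx hybrid.mcmemail.co.uk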

Enabling and Configuring AADRM in Exchange Online

Posted on Leave a commentPosted in 2010, 2013, aadrm, exchange, exchange online, IAmMEC, mcm, mcsm, Office 365, rms

This article is the fourth in a series of posts looking at Microsoft’s new Rights Management product set. In the previous post we looked at turning on the feature in Office 365 and in this post we will look at how to manage the service in the cloud.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back or leave a comment to the first post in the series and I will let you know when new content is added.

Once you have turned on Azure Active Directory rights management you need to enable it in a variety of locations based on your needs. This series of blog posts will look at doing that in both Exchange and SharePoint, both online in Office 365 and on-premises as well as for desktop users and mobile and tablet users. First we will start with Exchange Online.

Exchange Online configuration for AADRM is probably the most complex one to do – and it’s not that complex really! To enable AADRM for Exchange Online at the time of writing you need to import the RMS key from AADRM. If you had installed AD RMS on premises then you might have already done this for Exchange Online to integrate it with your on-premises RMS infrastructure – if this is the case, don’t change the key online or it will break. These steps are for Exchange Online users who have never used or integrated AD RMS with Exchange on-premises.

Enabling AADRM in Exchange Online

  1. Enable AADRM in your Office 365 tenant as mentioned previously
  2. Set the RMS Key Sharing URL to the correct value as listed in http://technet.microsoft.com/en-us/library/dn151475(v=exchg.150).aspx
  3. For example, if your Office 365 tenant is based in the EU, you would set this with a PowerShell cmdlet in a remote session connected to Exchange Online (a reconstruction of this cmdlet follows this list).
  4. Then import the keys and templates for your tenant from the AADRM servers online. The keys and templates are known as the Trusted Publishing Domain. This is done in a remote PowerShell session connected to Exchange Online using the following cmdlet:
    • Import-RMSTrustedPublishingDomain -RMSOnline -Name "RMS Online"
  5. In the PowerShell response to the previous command you should see the AddedTemplates value read “Company Name – Confidential” and “Company Name – Confidential View Only” which are the default two templates. If customised templates have been created and published, they will appear here as well.
  6. To check that the key/template import has worked run Test-IRMConfiguration -RMSOnline from the Exchange Online remote PowerShell command prompt. You should see PASS listed at the end of the output.
  7. Finally, to turn on IRM protection in Exchange Online run Set-IRMConfiguration -InternalLicensingEnabled $true from an Exchange Online remote PowerShell session.
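
The cmdlet for step 3 did not survive the original formatting. The following is a reconstruction based on the TechNet article linked in step 2, showing the EU key sharing location – verify the URL for your region against that article before running it:

Set-IRMConfiguration -RMSOnlineKeySharingLocation "https://sp-rms.eu.aadrm.com/TenantManagement/ServicePartner.svc"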

Configuring AADRM in Exchange Online

  1. Once IRM is enabled with Set-IRMConfiguration -InternalLicensingEnabled $true you can run Get-IRMConfiguration and you will see the following options are enabled. You can turn any of these off (and on again) as you require (a sketch follows this list):
    • JournalReportDecryptionEnabled: Ensures the IRM protected messages stored in an Exchange Journal report are also stored in the same report in clear text.
    • ClientAccessServerEnabled: Enables OWA to offer IRM protection during email composing (click the ellipsis (…) in the new email compose screen and select the set permissions menu). OWA will also prelicence IRM protected content so that OWA users can open content they are licenced to view without needing access to the RMS infrastructure directly. Note that during testing I found it could take up to 24 hours for Exchange Online to show the RMS templates in OWA.
    • SearchEnabled: When you search your mailbox for content, anything that is IRM protected will appear in your search results if it matches the search keyword. This setting allows Exchange Search to open and index your content even if it is not listed as a valid user of the content.
    • TransportDecryptionSetting: This allows the transport pipeline in Exchange to decrypt content so that it is available for transport agents to view it. For example anti-malware agents and transport rules. The content is reprotected at the end of the transport pipeline before it leaves the server.
    • EDiscoverySuperUserEnabled: Allows discovery search administrators to query for keywords in your protected content even if they would not be able to directly open the content if they had access to your mailbox or found the email saved to a file share or other sharing location.
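
As a sketch, each of these options can be toggled with Set-IRMConfiguration and reviewed with Get-IRMConfiguration, for example:

# Reject messages that cannot be decrypted in transport
Set-IRMConfiguration -TransportDecryptionSetting Mandatory
# Stop Exchange Search indexing IRM protected content
Set-IRMConfiguration -SearchEnabled $false
# Review the current state of all the IRM settings
Get-IRMConfiguration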

What the RMS Settings in Exchange Actually Do

JournalReportDecryptionEnabled

Enabling journal report decryption allows the Journaling agent to attach a decrypted copy of a rights-protected message to the journal report. Before you enable journal report decryption, you must add the Federated Delivery mailbox to the super users group configured on your Active Directory Rights Management Services (AD RMS) server or AADRM settings.

Note: This is currently not working in Exchange Online and the above instructions are for Exchange On-Premises deployments.

ClientAccessServerEnabled

When IRM is enabled on Client Access servers, Outlook Web App users can IRM-protect messages by applying an Active Directory Rights Management Services (AD RMS) template created on your AD RMS cluster or AADRM service. Outlook Web App users can also view IRM-protected messages and supported attachments. Before you enable IRM on Client Access servers, you must add the Federation mailbox to the super users group on the AD RMS cluster or AADRM service, as this allows the server to decrypt all content for you on the server so that the user does not need access to the RMS or AADRM service.

With the CAS able to obtain licences on your behalf from the RMS service, you can use RMS inside OWA, and even if you are offline in OWA any protected content already comes with its licence and so can be read without a connection to the RMS service.

SearchEnabled

The SearchEnabled parameter specifies whether to enable searching of IRM-encrypted messages in Outlook Web App.

TransportDecryptionSetting

The TransportDecryptionSetting parameter specifies the transport decryption configuration. Valid values include one of the following:

  • Disabled   Transport decryption is disabled for internal and external messages.
  • Mandatory   Messages that can’t be decrypted are rejected, and a non-delivery report (NDR) is returned.
  • Optional   A best effort approach to decryption is provided. Messages are decrypted if possible, but delivered even if decryption fails.

Transport decryption allows RMS protected messages to be decrypted as they are processed on the Exchange Server and then encrypted again before they leave the server. This means transport agents, such as anti-virus scanners or transport rules, can process the message (i.e. scan for viruses, add signatures, or do DLP processing) as they see it in its unencrypted form.

EDiscoverySuperUserEnabled

The EDiscoverySuperUserEnabled parameter specifies whether members of the Discovery Management role group can access IRM-protected messages that were returned by a discovery search and are residing in a discovery mailbox. To enable IRM-protected message access for the Discovery Management role group, set the value to $true.