Continuing Adventures in AD FS Claims Rules

Posted in: 2013, activesync, ADFS, ADFS 3.0, exchange online, https, Office 365, Outlook, OWA for Devices, Web Application Proxy

Updated 20th April 2017

There is an excellent article at http://blogs.technet.com/b/askds/archive/2012/06/26/an-adfs-claims-rules-adventure.aspx which discusses the use of Claims Rules in AD FS to limit some of the functionality of Office 365 to specific network locations, such as being only allowed to use Outlook when on the company LAN or VPN or to selected groups of users. That article’s four examples are excellent, but can be supplemented with the following scenarios:

  • OWA for Devices (i.e. OWA for the iPhone and OWA for Android), also known as Mobile OWA
  • Outlook Mobile Apps
  • Using Outlook on a VPN, but needing to set up the profile when also on the VPN
  • Outlook restrictions when using MAPI over HTTP
  • Legacy Auth and Modern Auth Considerations

OWA for Devices and AD FS Claims Rules

OWA for Devices is an app available from the Apple or Android store that provides mobile and offline access to your email, adding to the features available with ActiveSync. Though OWA for Devices is OWA, it also uses AutoDiscover to configure the app. Therefore, if you have an AD FS claim rule in place that blocks AutoDiscover, you will find that OWA for Devices just keeps prompting for authentication and never completes, though if you click Advanced and enter the server name by hand (outlook.office365.com) then it works.

Using OWA for iPhone/iPad diagnostics (described on Steve Goodman’s blog at http://www.stevieg.org/2013/08/troubleshooting-owa-for-iphone-and-ipad/) you might find the following entries in your mowa.log file:

[NetworkDispatcher exchangeURLConnectionFailed:withError:] , “Request failed”, “Error Domain=NSURLErrorDomain Code=-1012 “The operation couldn’t be completed. (NSURLErrorDomain error -1012.)” UserInfo=0x17dd2ca0 {NSErrorFailingURLKey=https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml, NSErrorFailingURLStringKey=https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml}”

And following the failure to reach https://autodiscover-s.outlook.com/autodiscover/autodiscover.xml successfully you might see the following:

[NetworkDispatcher exchangeURLConnectionFailed:withError:] , “Request failed”, “Error Domain=ExchangeURLConnectionError Code=2 “Timeout timer expired” UserInfo=0x17dc6eb0 {NSLocalizedDescription=Timeout timer expired}”

and the following:

[PAL] PalRequestManager.OnError(): ExchangeURLConnectionError2: Timeout timer expired

[autodiscover] Autodiscover search failed. Error: ExchangeURLConnectionError2: Timeout timer expired

[actions] Action (_o.$Yd 27) encountered an error during its execution: Error: Timeout timer expired

[autodiscover] TimerCallback_PeriodicCallback_HandleFailedAutodiscoverSearch took too long (XXXms) to complete

The reason is that the first set of AutoDiscover lookups works, as they connect to the on-premises endpoint, and authentication for this endpoint does not go through AD FS. When you reach the end of the AutoDiscover redirect process you need to authenticate to Office 365, and that calls AD FS – which may be impacted by your AD FS claims rules.

The problem with OWA for Devices and claims rules can come about through the use of the AD FS Claims Rules Policy Builder wizard at http://gallery.technet.microsoft.com/office/Client-Access-Policy-30be8ae2. This tool can be adjusted quite easily for AD FS on Windows Server 2012 R2: change line 218 to read “If (($OSVersion.Major -eq 6) -and ($OSVersion.Minor -ge 2))”, so that the script looks for AD FS servers whose minor version is greater than or equal to 2, rather than exactly 2. The AD FS Claims Rules Policy Builder has a setting called “Block only external Outlook clients” – this blocks Outlook, Exchange Web Services, OAB and AutoDiscover from being used apart from on a network range that you provide. The problem is that you might want to block Outlook externally, but allow OWA for Devices to work.
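That edit to the Policy Builder script can be sketched as follows (assuming, as the wording above suggests, that the original line 218 tested the minor version with -eq):

```powershell
# Line 218 of the Claims Rules Policy Builder script, before the change
# (matches Windows Server 2012 only, i.e. OS version 6.2):
#   If (($OSVersion.Major -eq 6) -and ($OSVersion.Minor -eq 2))

# After the change (also matches Windows Server 2012 R2, OS version 6.3):
If (($OSVersion.Major -eq 6) -and ($OSVersion.Minor -ge 2))
```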

To solve this problem, make sure you do not block AutoDiscover in the claims rules. In the “Block only external Outlook clients” option, it is the second claim rule that needs to be deleted, so you end up with just the following:

image

With AutoDiscover allowed from all locations, but EWS and RPC blocked in Outlook, you still block Outlook apart from on your LAN, but you do not block OWA for Devices. Use exrca.com to test Outlook AutoDiscover after making this change, and you should find that the following error is no longer present when AutoDiscover gets to the autodiscover-s.outlook.com stage.
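As a sketch of what the remaining deny rule looks like with AutoDiscover left out of the application list, based on the rule patterns shown later in this post (the IP address below is a placeholder from the documentation range – substitute your own public egress addresses):

```
exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-proxy"])
 && exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-application", Value =~ "Microsoft.Exchange.WebServices|Microsoft.Exchange.RPC"])
 && NOT exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip", Value =~ "\b203\.0\.113\.50\b"])
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");
```

Because Microsoft.Exchange.Autodiscover does not appear in the x-ms-client-application list, AutoDiscover requests from any location are left alone.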

The Microsoft Connectivity Analyzer is attempting to retrieve an XML Autodiscover response from URL https://autodiscover-s.outlook.com/Autodiscover/Autodiscover.xml for user user@tenant.mail.onmicrosoft.com.

The Microsoft Connectivity Analyzer failed to obtain an Autodiscover XML response.

An HTTP 401 Unauthorized response was received from the remote Unknown server. This is usually the result of an incorrect username or password. If you are attempting to log onto an Office 365 service, ensure you are using your full User Principal Name (UPN).
HTTP Response Headers:

If you changed an existing claims rule, you either need to restart the AD FS service or wait a few hours for the change to take effect.
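To force the change through immediately, restart the service on each AD FS server (adfssrv is the service name used by AD FS on Windows Server 2012 R2):

```powershell
# Restart AD FS so that updated claim rules are re-read immediately,
# rather than waiting for the cached rule set to refresh
Restart-Service adfssrv
```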

OWA Apps and AD FS Claims Rules

The above section covers why MOWA (Mobile OWA) needs AutoDiscover to work. The same is true for any client application, such as ActiveSync, Outlook for iOS, Outlook for Android or Windows Mail, that needs to run AutoDiscover to determine your mailbox settings. Therefore, as in the above section, ensure you are not blocking AutoDiscover from external networks unless apps on mobile devices are also only allowed to work on your WiFi.

Outlook with AD FS Claims Rules and VPNs

As mentioned above, Outlook can be limited to working on a given set of IP ranges. If you block AutoDiscover from working apart from on these ranges (as was the issue causing OWA for Devices to fail above) then you can have issues with VPN connections.

Imagine the following scenario: web traffic goes direct from the client, but Outlook traffic to Office 365 goes via the LAN – a split tunnel VPN scenario.

In this example, you have a claims rule that allows Outlook to work on the VPN (as the public IPs for your LAN/WAN are listed in the claims rules as valid source addresses), but AutoDiscover fails because of those same rules (as the web traffic comes direct from the client and not via the LAN). So if AutoDiscover and initial profile configuration have never run, AutoDiscover fails and Outlook is never provisioned, and it appears as if Outlook is being blocked by the claim rules even though you are on the correct network for those claim rules.

Outlook with MAPI over HTTP

Updated April 20th 2017

A new connection protocol called MAPI/HTTP was released with Exchange Server 2013 SP1, and it was available in Exchange Online for some time before its on-premises release. After the protocol was released, support was added across a range of client versions, and it currently works with Outlook 2010 and later (as long as you are on a supported version – it does not work with Outlook 2010 SP1, for example). There is no support for MAPI/HTTP in Outlook 2007, which is why this client will stop working with Office 365 in October 2017 (when Microsoft turns off RPC/HTTP).

With this new protocol in place for Outlook 2010, it is important to ensure that the AD FS claim rule for x-ms-client-application with a value of Microsoft.Exchange.RPC is updated to include Microsoft.Exchange.Mapi and Microsoft.Exchange.Nspi. And come October 2017 when RPC/HTTP is turned off in Exchange Online, claim rules could be updated to remove Microsoft.Exchange.RPC from their checks.

Therefore if you are an Exchange Online user with claims rules for AD FS you need to add the following rules. These would have the same IP ranges as your Microsoft.Exchange.RPC rule and would be identical to that rule in every way; just the x-ms-client-application string needs to look for the following (one rule for each):

  • Microsoft.Exchange.Mapi
  • Microsoft.Exchange.Nspi

The Microsoft.Exchange.Mapi rule can be named “Outlook MAPI/HTTP” and the Nspi rule can be named “Outlook MAPI Address Book”. You need to keep the Exchange Web Services and Exchange Address Book rules in place, but once all users are migrated to MAPI/HTTP you can remove the Microsoft.Exchange.RPC rule.
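As a sketch, assuming your existing Microsoft.Exchange.RPC rule follows the pattern shown later in this post (the IP address is a placeholder from the documentation range), the “Outlook MAPI/HTTP” rule would look something like:

```
exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-proxy"])
 && exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-application", Value == "Microsoft.Exchange.Mapi"])
 && NOT exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip", Value =~ "\b203\.0\.113\.50\b"])
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");
```

The “Outlook MAPI Address Book” rule is identical except the Value compared is Microsoft.Exchange.Nspi.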

Note that Exchange Online caches your AD FS credentials for 24 hours for connections from a single IP address, so if you successfully connect to Exchange Online (say because you have not yet got the Microsoft.Exchange.Mapi block in place) then you will not connect back to AD FS for 24 hours, and so will not be affected by new rules added after you connected. If you want to test whether your new Microsoft.Exchange.Mapi block rules work, you need to connect to Exchange Online from a different IP address – so a trip to the nearest coffee shop, in the quest for network lockdown at your company’s expense, is called for!

Also it is possible to create a single rule rather than the multiple rules listed above. A single rule would contain the claim rule clause similar to the below:

exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-application", Value =~ "Microsoft.Exchange.WebServices|Microsoft.Exchange.RPC|Microsoft.Exchange.Mapi|Microsoft.Exchange.Nspi"])

Note that this is different from what the Rule Builder wizard creates: the wizard creates a single string value which is compared with ==, but when more than one possible string is stored in the value the comparison operator needs to be =~ (string compared to RegEx), as the | sign makes the value in the claim rule a RegEx expression.

Legacy Clients and Outlook 2016

Outlook 2016 supports modern authentication out of the box (unlike Outlook 2013 which required client side registry keys and certain hotfixes). But you still need to enable Modern Authentication on your tenant in Exchange Online.

To check if Modern Auth is enabled, run Get-OrganizationConfig | FL *OAuth2* from a remote PowerShell session connected to Exchange Online. The value of OAuth2ClientProfileEnabled will be $True if Modern Authentication is enabled. If it is not enabled, Outlook 2016 will fall back to legacy authentication protocols and so will be blocked by the claim rules discussed here. Once Modern Auth is enabled, Outlook 2010 is still impacted by claim rules, but Outlook 2013 (with the June 2015 update and the client-side registry keys) and Outlook 2016 and later are not; Outlook with Modern Auth is instead restricted with Conditional Access in Azure AD Premium. If you do not turn on Modern Auth you can keep using the legacy auth claim rule restrictions, but only until the point where Microsoft enables Modern Auth for you – so it is best to make the change on your own schedule: enable Modern Auth now, use EMS to restrict Outlook 2016, and use claim rules to restrict Outlook 2010 until Microsoft removes support for that product.
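The check and the change, run from a remote PowerShell session connected to Exchange Online:

```powershell
# Check whether Modern Authentication (OAuth2) is enabled for the tenant
Get-OrganizationConfig | FL *OAuth2*

# Enable Modern Authentication on your own schedule,
# rather than waiting for Microsoft to enable it for you
Set-OrganizationConfig -OAuth2ClientProfileEnabled $true
```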

Therefore a single claims rule for blocking legacy auth would look like the following:

exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-proxy"])
 && exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-application", Value =~ "Microsoft.Exchange.WebServices|Microsoft.Exchange.RPC|Microsoft.Exchange.Mapi|Microsoft.Exchange.Nspi"])
 && NOT exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-forwarded-client-ip", Value =~ "\b51\.141\.11\.170\b"])
 && exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-endpoint-absolute-path", Value == "/adfs/services/trust/2005/usernamemixed"])
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");

Note that in the above, though this comes from AD FS on Windows Server 2012 R2, we are still recommending the use of x-ms-proxy and x-ms-forwarded-client-ip. That is because the insidecorporatenetwork claim cannot be used for legacy authentication: all such auth requests originate from Exchange Online (active auth), which is never on-premises! This claim rule is added to the Office 365 Relying Party Trust Issuance Authorization Rules after the default Permit Access to All Users rule that the Convert-MSOLDomainToFederated cmdlet or AADConnect configures. The claim rule permits access to all users, then denies access when a legacy authentication session occurs for Outlook protocols and the forwarded client IP address is not the public IP of your LAN.

Use Claim Rules to Block Modern Auth Clients

So the final scenario here, newly added to this old post in April 2017, requires issuing deny claims when insidecorporatenetwork is false for the protocols/user agents that you want to block. For example, the following will block all modern auth requests from outside the network for all applications apart from ActiveSync and AutoDiscover (as AutoDiscover is used by ActiveSync to set up the mobile device initially):

exists([Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", Value == "false"])
 && NOT exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-client-application", Value =~ "Microsoft.Exchange.ActiveSync|Microsoft.Exchange.Autodiscover"])
 && exists([Type == "http://schemas.microsoft.com/2012/01/requestcontext/claims/x-ms-endpoint-absolute-path", Value == "/adfs/ls/"])
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/deny", Value = "true");

Intermittent Error 8004789A with AD FS and WAP 3.0 (Windows Server 2012 R2)

Posted in: 2012 R2, 2013, ADFS, ADFS 3.0, Office 365

This error appears when you attempt to authenticate with Office 365 using AD FS 3.0 – but only sometimes, and often after everything had been working fine!

I’ve found this error is typically due to one of two things, though there are other possible causes. The full list of issues is at http://blogs.technet.com/b/applicationproxyblog/archive/2014/05/28/understanding-and-fixing-proxy-trust-ctl-issues-with-ad-fs-2012-r2-and-web-application-proxy.aspx.

First, I found that this occurred if the WAP servers and the AD FS servers were in different time zones (not just set to different times).

Second, I found that if the domain schema level is not 2012 R2 then you need to run the script included in the above article to copy settings between the AD FS servers. Certificates expire every 20 days, or when they are manually changed, so this script needs running by hand at or before each of these regular changes.

The second of these two issues has been fixed in the June 2014 update for Windows Server 2012 R2. The fix is documented in http://support.microsoft.com/kb/2964735 and the update itself is at http://support.microsoft.com/kb/2962409.

Exchange Web Services (EWS) and 501 Error

Posted in: 2010, 2013, EWS, exchange, exchange online, hybrid, IAmMEC, iis, Kemp, Load Master, Office 365, tmg

As is common with a lot that I write in this blog, this post is based on noting down the answers to things I could not find online. For this issue I did find something online, by Michael Van “Hybrid”, but finding it was the challenge.

So rather than detailing the issue and the reason (you can read that on Michael’s blog) this talks about the steps to troubleshoot this issue.

So first the issue. When starting a migration test from an Exchange 2010 mailbox with an Exchange 2013 hybrid server (running the mailbox and CAS roles) behind a Kemp load balancer (running 7.16 – the latest release at the time of writing, recently upgraded from version 5 through 6 to 7), I got the following error:

image

The server name will be different (thanks Michael for the screenshot). In my case this was my client’s UK datacentre. My client’s Hong Kong datacentre was behind a Kemp load balancer as well, but is only running Exchange 2010, and the New York datacentre has an F5 load balancer. Moves from HK worked, but UK and NY failed for different reasons.

The issue shown above is not easy to solve, as the migration dialog tells you nothing. In my case it was also reporting the wrong server name. It should have been returning the external EWSUrl from AutoDiscover for the mailbox I was trying to move; instead it was returning the Outlook Anywhere external URL from the New York site (as the UK is proxied via NY for OA connections). For moves to the cloud, we added the external EWS URL to each site directly, so moves would go direct and not via the only site that offered internet-connected email.

So troubleshooting started with exrca.com – the Microsoft Connectivity Analyzer. AutoDiscover worked most of the time in the UK, but the Synchronization, Notification, Availability, and Automatic Replies tests (which test EWS) always failed after about six and a half seconds.

Autodiscover returned the following error:

A Web exception occurred because an HTTP 400 – BadRequest response was received from Unknown.
HTTP Response Headers:
Connection: close
Content-Length: 87
Content-Type: text/html
Date: Tue, 24 Jun 2014 09:03:40 GMT

Elapsed Time: 108 ms.

And EWS, when AutoDiscover was returning data correctly, was as follows:

Creating a temporary folder to perform synchronization tests.

Failed to create temporary folder for performing tests.

Additional Details

Exception details:
Message: The request failed. The remote server returned an error: (501) Not Implemented.
Type: Microsoft.Exchange.WebServices.Data.ServiceRequestException
Stack trace:
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.ValidateAndEmitRequest(IEwsHttpWebRequest& request)
at Microsoft.Exchange.WebServices.Data.ExchangeService.InternalFindFolders(IEnumerable`1 parentFolderIds, SearchFilter searchFilter, FolderView view, ServiceErrorHandling errorHandlingMode)
at Microsoft.Exchange.WebServices.Data.ExchangeService.FindFolders(FolderId parentFolderId, SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.WebServices.Data.Folder.FindFolders(SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.Tools.ExRca.Tests.GetOrCreateSyncFolderTest.PerformTestReally()
Exception details:
Message: The remote server returned an error: (501) Not Implemented.
Type: System.Net.WebException
Stack trace:
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)

Elapsed Time: 6249 ms.

What was interesting here was the 501, and the fact that it always took approximately six seconds to fail.

Looking in the IIS logs on the 2010 servers that hold the UK mailboxes, there were no 501 errors logged, and the same was true of the EWS logs. So where was the 501 coming from? I decided to bypass Exchange 2013 for the exrca.com test (my system was not yet live, so that was easy to do) and pointed the EWS SubVDir in Kemp directly at a specific Exchange 2010 server. Everything worked. So it looked like an Exchange 2013 issue, apart from the fact that I have lab environments the same as this (without Kemp) where everything works fine. I then searched for “Kemp EWS 501”, and that was the bingo keyword combination – “EWS 501” or “Exchange EWS 501” had got nothing at all.

With my environment back to Kemp > 2013 > 2010 I looked at Michael’s suggestions. The first was to run Test-MigrationServerAvailability –ExchangeRemoteMove –RemoteServer servername.domain.com. I changed this slightly, as I was not convinced that I was connecting to the correct endpoint: the migration reported the wrong server name, and the exrca tests do not tell you which endpoint they connect to. So I tried Test-MigrationServerAvailability –ExchangeRemoteMove –Autodiscover –EmailAddress user-on-premises@domain.com instead.
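Run from a remote PowerShell session connected to Exchange Online, the two variants side by side (the server name and email address are placeholders, as elsewhere in this post):

```powershell
# Variant 1: test a specific MRS proxy endpoint, bypassing AutoDiscover
Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer servername.domain.com

# Variant 2: let AutoDiscover locate the endpoint for an on-premises mailbox
Test-MigrationServerAvailability -ExchangeRemoteMove -Autodiscover -EmailAddress user-on-premises@domain.com
```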

As AutoDiscover was reporting errors at times, the second of these cmdlets sometimes reported the following:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : AutoDiscover failed with a configuration error: The migration service failed to detect the
migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
or go back to the first step and retry using the Autodiscover service. Consider using the
Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
connectivity issues.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.AutoDiscoverFailedConfigurationErrorException:
AutoDiscover failed with a configuration error: The migration service failed to detect the
migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
or go back to the first step and retry using the Autodiscover service. Consider using the
Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
connectivity issues.
at Microsoft.Exchange.Migration.DataAccessLayer.MigrationEndpointBase.InitializeFromAutoDiscove
r(SmtpAddress emailAddress, PSCredential credentials)
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessExcha
ngeRemoteMoveAutoDiscover()
IsValid            : True
Identity           :
ObjectState        : New

And when AutoDiscover was working (as it was random) I would get this:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : The ExchangeRemote endpoint settings could not be determined from the autodiscover response. No
MRSProxy was found running at ‘outlook.domain.com’.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.MigrationRemoteEndpointSettingsCouldNotBeAutodiscovere
dException: The ExchangeRemote endpoint settings could not be determined from the autodiscover
response. No MRSProxy was found running at ‘outlook.domain.com’. —>
Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
server ‘outlook.domain.com’ could not be completed. —>
Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
https://outlook.domain.com/EWS/mrsproxy.svc’ failed. Error details: The HTTP request is
unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
from the server was ‘Basic Realm=”outlook.domain.com”‘. –> The remote server returned an
error: (401) Unauthorized.. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The HTTP request is
unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
from the server was ‘Basic Realm=”outlook.domain.com”‘. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
an error: (401) Unauthorized.
— End of inner exception stack trace —
— End of inner exception stack trace —
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
s1.<ReconstructAndThrow>b__0()
at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
row(String serverName, VersionInformation serverVersion)
at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
.<CallService>b__0()
at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
— End of inner exception stack trace —
at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
int(Boolean fromAutoDiscover)
— End of inner exception stack trace —
IsValid            : True
Identity           :
ObjectState        : New

This was returning the URL https://outlook.domain.com/EWS/mrsproxy.svc, which is not correct for this mailbox (it was the OA endpoint in a different datacentre). External Outlook access is not allowed at this company, so the TMG server in front of the F5 load balancer in the NY datacentre was not configured for OA anyway, and browsing to the above URL returned the following picture – a well and truly broken scenario, but not the issue at hand here!

image

If OA (Outlook Anywhere) was available for this company, this is not what I would expect to see when browsing to the external EWS URL. For that reason our EWS URLs bypass TMG and go direct to the load balancer.

So now we have either no valid AutoDiscover response or EWS using the wrong URL. Back to the version of the cmdlet Michael was using as that ignores AutoDiscover: Test-MigrationServerAvailability –ExchangeRemoteMove –RemoteServer servername.domain.com

RunspaceId         : 5874c796-54ce-420f-950b-1d300cf0a64a
Result             : Failed
Message            : The connection to the server ‘ewsinukdatacentre.domain.com’ could not be completed.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
server ‘ewsinukdatacentre.domain.com’ could not be completed. —>
Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
https://ewsinukdatacentre.domain.com/EWS/mrsproxy.svc’ failed. Error details: The remote server returned
an unexpected response: (501) Invalid Request. –> The remote server returned an error: (501) Not
                     Implemented.. —> Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The
remote server returned an unexpected response: (501) Invalid Request. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
an error: (501) Not Implemented.
— End of inner exception stack trace —
— End of inner exception stack trace —
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
s1.<ReconstructAndThrow>b__0()
at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
row(String serverName, VersionInformation serverVersion)
at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
.<CallService>b__0()
at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
— End of inner exception stack trace —
at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
int(Boolean fromAutoDiscover)
IsValid            : True
Identity           :
ObjectState        : New

Now we can see the 501 error that exrca was returning. It would seem that the 501 was coming from the Kemp and not from the endpoint servers, which is why I could not locate it in the IIS or EWS logs. And so in the Kemp System Message File (logging options > system log files) I found the 501 error:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.1.2:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

Where the first IP address was a Microsoft datacentre IP and the second was the Kemp listener IP.

It turns out this is due to the Kemp load balancer not returning to Microsoft the HTTP 100 Continue status that it should. Microsoft waits 5 seconds and then sends the data it would have sent had it received that response, at which point the Kemp blocks the data with a 501 error.

It is possible to turn off the Kemp’s processing of 100-Continue HTTP packets so that it lets them through; this is covered in http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html:

 

In the version at my client’s, which was upgraded to 7.0-16 from v5 via v6 to v7, the setting defaulted to RFC Conformant but needs to be Ignore Continue-100 to work with Office 365. In Kemp versions 7.1 and later the value needs to be changed to RFC-7231 Compliant from the default of RFC-2616 Compliant. Now EWS works, hybrid mailbox moves work, and AutoDiscover always works on that server – a mix of issues caused by differing interpretations of an RFC. To cover all these issues the Kemp load balancer now includes this 100-Continue Handling setting; we as IT pros need to ensure that we set the correct value for our environment based on the software we use.

Configuring Exchange On-Premises to Use Azure Rights Management

Posted in: 2010, 2013, 64 bit, aadrm, ADFS, ADFS 2.0, DLP, DNS, exchange, exchange online, https, hybrid, IAmMEC, load balancer, loadbalancer, mcm, mcsm, MVP, Office 365, powershell, rms, sharepoint, warm

This article is the fifth in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at enabling on-premises Exchange servers to use this cloud-based RMS service. This means your cloud users and your on-premises users can share encrypted content, and as it is cloud based, you can send encrypted content to anyone, even recipients who are not using an Office 365 mailbox.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back or leave a comment to the first post in the series and I will let you know when new content is added.

Exchange Server integrates very nicely with on-premises RMS servers. To integrate Exchange on-premises with Windows Azure Rights Management you need to install a small connector service on-premises that connects Exchange to the cloud RMS service. On-premises file servers (classification) and SharePoint can also use this service to integrate themselves with cloud RMS.

You install this small service on-premises on servers that run Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2. After you install and configure the connector, it acts as a communications interface between the on-premises IRM-enabled servers and the cloud service. The service can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=40839

From this download link there are three files to get onto the server you are going to use for the connector.

  • RMSConnectorSetup.exe (the connector server software)
  • GenConnectorConfig.ps1 (this automates the configuration of registry settings on your Exchange and SharePoint servers)
  • RMSConnectorAdminToolSetup_x86.exe (needed if you want to configure the connector from a 32bit client)

Once you have installed the software you need, IT and users can easily protect documents and pictures both inside and outside your organization, without having to install additional infrastructure or establish trust relationships with other organizations.

The overview of the structure of the link between on-premises and Windows Azure Rights Management is as follows:

IC721938

Notice therefore that there are some prerequisites. You need an Office 365 tenant with Windows Azure Rights Management turned on. Once you have this done you need the following:

  • Get your Office 365 tenant up and running
  • Configure Directory Synchronization between on-premises Active Directory and Windows Azure Active Directory (the Office 365 DirSync tool)
  • Optionally (recommended but not required) enable AD FS for Office 365 so users do not have to log in to Windows Azure Rights Management when creating or opening protected content
  • Install the connector
  • Prepare credentials for configuring the software
  • Authorise the servers that will connect to the service
  • Configure load balancing to make this a highly available service
  • Configure Exchange Server on-premises to use the connector

Installing the Connector Service

  1. You need to set up an RMS administrator. This administrator is either a specific user object in Office 365 or all the members of a security group in Office 365.
    1. To do this start PowerShell and connect to the cloud RMS service by typing Import-Module aadrm and then Connect-AadrmService.
    2. Enter your Office 365 global administrator username and password
    3. Run Add-AadrmRoleBasedAdministrator -EmailAddress <email address> -Role “GlobalAdministrator” or Add-AadrmRoleBasedAdministrator -SecurityGroupDisplayName <group Name> -Role “ConnectorAdministrator”. If the administrator object does not have an email address then you can lookup the ObjectID in Get-MSOLUser and use that instead of the email address.
  2. Create a namespace for the connector on any DNS namespace that you own. This namespace needs to be reachable from your on-premises servers, so it could be your .local etc. AD domain namespace. For example rmsconnector.contoso.local and an IP address of the connector server or load balancer VIP that you will use for the connector.
  3. Run RMSConnectorSetup.exe on the server you wish to have as the service endpoint on premises. If you are going to make a highly available solution, then this software needs installing on multiple machines and can be installed in parallel. Install a single RMS connector (potentially consisting of multiple servers for high availability) per Windows Azure RMS tenant. Unlike Active Directory RMS, you do not have to install an RMS connector in each forest. Select to install the software on this computer:
    IC001
  4. Read and accept the licence agreement!
  5. Enter your RMS administrator credentials as configured in the first step.
  6. Click Next to prepare the cloud for the installation of the connector.
  7. Once the cloud is ready, click Install. During the RMS installation process, all prerequisite software is validated and installed, Internet Information Services (IIS) is installed if not already present, and the connector software is installed and configured
    IC002
  8. If this is the last server that you are installing the connector service on (or the first if you are not building a highly available solution) then select Launch connector administrator console to authorize servers. If you are planning on installing more servers, do them now rather than authorising servers:
    IC003
  9. To validate the connector quickly, connect to http://<connectoraddress>/_wmcs/certification/servercertification.asmx, replacing <connectoraddress> with the server address or name that has the RMS connector installed. A successful connection displays a ServerCertificationWebService page.
  10. For an Exchange Server organization or SharePoint farm it is recommended to create a security group (one for each) that contains the accounts that the Exchange or SharePoint servers run as. This way the servers all get the rights needed for RMS with a minimum of administrative interaction. Adding servers individually rather than to the group gives the same outcome; it just requires more work. It is important that you authorize the correct object. For a server to use the connector, the account that runs the on-premises service (for example, Exchange or SharePoint) must be selected for authorization. For example, if the service is running as a configured service account, add the name of that service account to the list. If the service is running as Local System, add the name of the computer object (for example, SERVERNAME$).
    1. For servers that run Exchange: You must specify a security group and you can use the default group (DOMAIN\Exchange Servers) that Exchange automatically creates and maintains of all Exchange servers in the forest.
    2. For SharePoint you can use the SERVERNAME$ object, but the recommended configuration is to run SharePoint using a manually configured service account. For the steps to do this see http://technet.microsoft.com/en-us/library/dn375964.aspx.
    3. For file servers that use File Classification Infrastructure, the associated services run as the Local System account, so you must authorize the computer account for the file servers (for example, SERVERNAME$) or a group that contains those computer accounts.
  11. Add all the required groups (or servers) to the authorization dialog and then click close. For Exchange Servers, they will get SuperUser rights to RMS (to decrypt content):
    image
    image
  12. If you are using a load balancer, then add all the IP addresses of the connector servers to the load balancer under a new virtual IP and publish it for TCP port 80 (and 443 if you want to configure it to use certificates) and equally distribute the data across all the servers. No affinity is required. Add a health check for the success of a HTTP or HTTPS connection to http://<connectoraddress>/_wmcs/certification/servercertification.asmx so that the load balancer fails over correctly in the event of connector server failure.
  13. To use SSL (HTTPS) to connect to the connector server, on each server that runs the RMS connector, install a server authentication certificate that contains the name that you will use for the connector. For example, if your RMS connector name that you defined in DNS is rmsconnector.contoso.com, deploy a server authentication certificate that contains rmsconnector.contoso.com in the certificate subject as the common name. Or, specify rmsconnector.contoso.com in the certificate alternative name as the DNS value. The certificate does not have to include the name of the server. Then in IIS, bind this certificate to the Default Web Site.
  14. Note that the certificate chains and CRLs for the certificates in use must be reachable.
  15. If you use proxy servers to reach the internet then see http://technet.microsoft.com/en-us/library/dn375964.aspx for steps on configuring the connector servers to reach the Windows Azure Rights Management cloud via a proxy server.
  16. Finally you need to configure the Exchange or SharePoint servers on premises to use Windows Azure Active Directory via the newly installed connector.
    • To do this you can either download and run GenConnectorConfig.ps1 on the server you want to configure or use the same tool to generate Group Policy script or a registry key script that can be used to deploy across multiple servers.
    • Just run the tool and at the prompt enter the URL that you have configured in DNS for the connector, followed by the parameter to make the local registry settings, the registry files or the GPO import file. Enter either http:// or https:// in front of the URL depending upon whether or not SSL is in use on the connector’s IIS website.
    • For example .\GenConnectorConfig.ps1 -ConnectorUri http://rmsconnector.contoso.com -SetExchange2013 will configure a local Exchange 2013 server
  17. If you have lots of servers to configure then run the script with -CreateRegEditFiles or -CreateGPOScript along with -ConnectorUri. This will make five reg files (for Exchange 2010 or 2013, SharePoint 2010 or 2013 and the File Classification service). For the GPO option it will make one GPO import script.
  18. Note that the connector can only be used by Exchange Server 2010 SP3 RU2 or later, or Exchange 2013 CU3 or later. The OS on the server also needs to include a version of the RMS client that supports RMS Cryptographic Mode 2. This is Windows Server 2008 + KB2627272, Windows Server 2008 R2 + KB2627273, Windows Server 2012 or Windows Server 2012 R2.
  19. For Exchange Server you need to manually enable IRM as you would do if you had an on-premises RMS server. This is covered in http://technet.microsoft.com/en-us/library/dd351212.aspx but in brief you run Set-IRMConfiguration -InternalLicensingEnabled $true. The rest, such as transport rules and OWA and search configuration is covered in the mentioned TechNet article.
  20. Finally you can test if RMS is working with Test-IRMConfiguration –Sender billy@contoso.com. You should get a message at the end of the test saying Pass.
  21. If you downloaded GenConnectorConfig.ps1 before May 1st 2014 then download it again, as the version before this date writes the registry keys incorrectly and you get errors such as “FAIL: Failed to verify RMS version. IRM features require AD RMS on Windows Server 2008 SP2 with the hotfixes specified in Knowledge Base article 973247” and “Microsoft.Exchange.Security.RightsManagement.RightsManagementException: Failed to get Server Info from http://rmsconnector.contoso.com/_wmcs/certification/server.asmx. —> System.Net.WebException: The request failed with HTTP status 401: Unauthorized.”. If you get these errors then turn off IRM, delete the “C:\ProgramData\Microsoft\DRM\Server” folder to remove old licences, delete the registry keys, run the latest version of GenConnectorConfig.ps1, refresh the RMS keys with Set-IRMConfiguration -RefreshServerCertificates and reset IIS with IISRESET.

Now you can encrypt messages on-premises using your AADRM licence and so not require RMS Server deployed locally.
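The administrator setup from step 1 can be sketched as a short PowerShell session. This is a sketch, not the only valid sequence: it assumes the Windows Azure AD Rights Management (aadrm) module is installed, and admin@contoso.com and “RMS Connector Admins” are example names.

```powershell
# Sketch: authorize an AADRM connector administrator (names below are examples).
# Requires the Windows Azure AD Rights Management administration module.
Import-Module aadrm
Connect-AadrmService                  # prompts for Office 365 global administrator credentials

# Authorize a single user by email address...
Add-AadrmRoleBasedAdministrator -EmailAddress admin@contoso.com -Role "ConnectorAdministrator"

# ...or all members of a security group instead
Add-AadrmRoleBasedAdministrator -SecurityGroupDisplayName "RMS Connector Admins" -Role "ConnectorAdministrator"

Disconnect-AadrmService
```

If the administrator object has no email address, look up its ObjectID with Get-MsolUser and pass that instead, as noted in step 1 above.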

Updating Exchange 2013 Anti-Malware Agent From A Non-Internet Connected Server

Posted on Leave a commentPosted in 2013, 64 bit, antivirus, exchange, Exchange Online Protection, IAmMEC, malware, mcm, mcsm, powershell, x64

In Forefront Protection for Exchange (now discontinued) for Exchange 2010 it was possible to run the script at http://support.microsoft.com/kb/2292741 to download the signatures and scan engines when the server did not have a direct connection to the download site at forefrontdl.microsoft.com.

To achieve the same with Exchange 2013 and the built-in anti-malware transport agent, you can repurpose the 2010 script: download the engine updates to a folder on a machine with internet access, share that folder so the Exchange Servers can reach it, and then have Exchange 2013 pull its updates from that share.

So start by downloading the script at http://support.microsoft.com/kb/2292741 and saving it as Update-Engines.ps1.

Create a folder called C:\Engines (for example) and share it, granting Authenticated Users read access and full control to the account that will run Update-Engines.ps1.

Run Update-Engines.ps1 with the following

Update-Engines.ps1 -EngineDirPath C:\engines -UpdatePathUrl http://forefrontdl.microsoft.com/server/scanengineUpdate/  -Engines “Microsoft” -Platforms amd64

The above cmdlet/script downloads just the 64 bit Microsoft engine as that is all you need and places them in the local folder (which is the shared folder you created) on that machine. You can schedule this script using standard published techniques for scheduling PowerShell.
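One way to schedule it is a repeating scheduled task. This is a sketch assuming the script is saved at C:\Scripts\Update-Engines.ps1 and the ScheduledTasks module (Windows Server 2012 or later) is available; adjust paths and the run-as account to suit.

```powershell
# Sketch: run Update-Engines.ps1 hourly via Task Scheduler (paths are examples).
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Update-Engines.ps1 -EngineDirPath C:\Engines -UpdatePathUrl http://forefrontdl.microsoft.com/server/scanengineUpdate/ -Engines "Microsoft" -Platforms amd64'

# Repeat every hour, starting now
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration ([TimeSpan]::MaxValue)

Register-ScheduledTask -TaskName 'Update Malware Engines' -Action $action -Trigger $trigger
```

Register the task under an account that has write access to the C:\Engines share.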

On your Exchange Server that has no internet connectivity, start Exchange Management Shell and run the following:

Set-MalwareFilteringServer ServerName -PrimaryUpdatePath \\dlserver\enginesShare

Then start a PowerShell window that is running as an administrator – you can use Exchange Management Shell, but it too needs to be started as an administrator to do this last step. In this shell run the following:

Add-PSSnapin microsoft.forefront.filtering.management.powershell

Get-EngineUpdateInformation

Start-EngineUpdate

Get-EngineUpdateInformation

Then compare the first results from Get-EngineUpdateInformation with the second results. If you have waited 30 or so seconds, the second set of results should be updated to the current time for the LastChecked value. UpdateVersion and UpdateStatus might also have changed. If your Exchange Server has internet connectivity it will already have updated automatically every hour and so not need this script running.
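The check, update, wait, check-again sequence above can be captured into variables so the before and after values are easy to compare side by side. A sketch, using only the property names mentioned in this post:

```powershell
# Sketch: capture engine update information before and after triggering an update.
Add-PSSnapin Microsoft.Forefront.Filtering.Management.PowerShell

$before = Get-EngineUpdateInformation
Start-EngineUpdate
Start-Sleep -Seconds 30          # give the update a chance to run

$after = Get-EngineUpdateInformation

# LastChecked should move to the current time; UpdateVersion/UpdateStatus may change too
$before | Format-Table LastChecked, UpdateVersion, UpdateStatus
$after  | Format-Table LastChecked, UpdateVersion, UpdateStatus
```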

Exchange DLP Rules in Exchange Management Shell

Posted on Leave a commentPosted in 2013, cloud, DLP, EOP, exchange, exchange online, Exchange Online Protection, IAmMEC, IFilter, mcm, mcsm, Office 365

This one took a while to work out, so noting it down here!

If you want to create a transport rule for a DLP policy that has one data classification (i.e. data type to look for such as ‘Credit Card Number’) then that is easy in PowerShell and an example would be as below.

New-TransportRule -name "Contoso Pharma Restricted DLP Rule (Blocked)" -DlpPolicy "ContosoPharma" -SentToScope NotInOrganization -MessageContainsDataClassifications @{Name="Contoso Pharmaceutical Restricted Content"} -SetAuditSeverity High -RejectMessageEnhancedStatusCode 5.7.1 -RejectMessageReasonText "This email contains restricted content and you are not allowed to send it outside the organization"

As you can see, the value passed to -MessageContainsDataClassifications is a hashtable naming the single classification.

To add more than one classification is much more involved:

$DataClassificationA = @{Name="Contoso Pharmaceutical Private Content"}
$DataClassificationB = @{Name="Contoso Pharmaceutical Restricted Content"}
$AllDataClassifications = @{}
$AllDataClassifications.Add("DataClassificationA",$DataClassificationA)
$AllDataClassifications.Add("DataClassificationB",$DataClassificationB)
New-TransportRule -name "Notify if email contains ContosoPharma documents 1" -DlpPolicy "ContosoPharma" -SentToScope NotInOrganization -MessageContainsDataClassifications $AllDataClassifications.Values -SetAuditSeverity High -GenerateIncidentReport administrator -IncidentReportContent "Sender","Recipients","Subject" -NotifySender NotifyOnly

And as you can see, you need to make a hashtable of hashtables and then pass the Values property of the outer hashtable to New-TransportRule.
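The same pattern can be generalised to any number of classifications with a loop. A sketch, reusing the example classification names from above:

```powershell
# Sketch: build the hashtable-of-hashtables for several data classifications.
$classificationNames = "Contoso Pharmaceutical Private Content",
                       "Contoso Pharmaceutical Restricted Content"

$AllDataClassifications = @{}
foreach ($name in $classificationNames) {
    $AllDataClassifications.Add($name, @{Name=$name})
}

# $AllDataClassifications.Values is what -MessageContainsDataClassifications expects
```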

An “Inexpensive” Exchange Lab In Azure

Posted on Leave a commentPosted in 2010, 2013, Azure, cloud, DNS, exchange, exchange online, hyper-v, IAmMEC, Office 365, vhd, vm, vpn

This blog post centres around two scripts that can be used to quickly provision an Exchange Server lab in Azure and then to remove it again. The reason why the blog post is titled “inexpensive” is that Azure charges compute hours even if the virtual machines are shut down. Therefore to make my Exchange lab cheaper to operate and to not charge me when the lab is not being used, I took my already provisioned VHD files and created a few scripts to create the virtual machines and cloud service and then to remove it again if needed.

Before you start using these scripts, you need to have already uploaded or created your own VHDs in Azure and designed your lab as you need. These scripts will then take a CSV file with the relevant values in it and create a VM for each VHD in the correct subnet (which you have also created in Azure) and always in the correct order, thus ensuring they always get the same IP address from your virtual network (UPDATE: 14 March 2014 – thanks to Bhargav, this script now reserves the IPs as well, as this is a newish feature in Azure). Without a reserved IP, when you boot your domain controller first in each subnet it will always get the fourth available IP address; this IP is then the DNS IP address in Azure, and each of the other machines is created and booted in the order of your choosing and so gets the subsequent IPs. Azure never used to guarantee the IP, but updates in February 2014 allow this with the latest Azure PowerShell cmdlets. This way we can ensure the private IP is always the same and machine dependencies, such as domain controllers running first, are adhered to.

These scripts are created in PowerShell and call the Windows Azure PowerShell cmdlets. You need to install the Azure cmdlets on your computer and these scripts rely on features found in version 0.7.3.1 or later. You can install the cmdlets from http://www.windowsazure.com/en-us/documentation/articles/install-configure-powershell/

Build-AzureExchangeLab.ps1

# Retrieve with Get-AzureSubscription 
$subscriptionName = "Visual Studio Premium with MSDN"

Import-AzurePublishSettingsFile 'downloaded.publishsettings.file.got.with.Get-AzurePublishSettingsFile'

# Select the subscription to work on if you have more than one subscription
Select-AzureSubscription -SubscriptionName $subscriptionName

# Name of Virtual Network to add VM's to
$VMNetName = "MCMHybrid"

# CSV File with following columns (BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort)
$CSVFile = Import-CSV 'path\filename.csv'

# Loop to build lab here. Ultimately get values from CSV file
foreach ($VMItem in $CSVFile) {

    # Retrieve with Get-AzureStorageAccount  
    $StorageAccount = $VMItem.StorageAccount

    # Specify the storage account location containing the VHDs 
    Set-AzureSubscription -SubscriptionName $subscriptionName  -CurrentStorageAccount $StorageAccount
  
    # Not Used $location = $VMItem.Location     # Retrieve with Get-AzureLocation

    # Specify the subnet to use. Retreive with Get-AzureVNetSite | FL Subnets
    $subnetName = $VMItem.SubnetName

    $AffinityGroup = $VMItem.AffinityGroup      # From Get-AzureAffinityGroup (for association with a private network you have already created). 

    $VMName = $VMItem.VMName
    $VMOSDiskName = $VMItem.VMOSDiskName        # From Get-AzureDisk
    $VMInstanceSize = $VMItem.VMInstanceSize    # ExtraSmall, Small, Medium, Large, ExtraLarge 
    $CloudServiceName = $VMName
    $IPAddress = $VMItem.IPAddress              # Reserves a specific IP for the VM
    if ($VMItem.BringOnline -eq "Yes") {
        Write-Host "Creating VM: " $VMName
    $NewVM = New-AzureVMConfig -Name $VMName -DiskName $VMOSDiskName -InstanceSize $VMInstanceSize | Add-AzureEndpoint -Name 'Remote Desktop' -LocalPort 3389 -PublicPort $VMItem.PublicRDPPort -Protocol tcp | Add-AzureEndpoint -Protocol tcp -LocalPort 25 -PublicPort 25 -Name 'SMTP' | Add-AzureEndpoint -Protocol tcp -LocalPort 443 -PublicPort 443 -Name 'SSL' | Add-AzureEndpoint -Protocol tcp -LocalPort 80 -PublicPort 80 -Name 'HTTP' | Set-AzureSubnet -SubnetNames $subnetName | Set-AzureStaticVNetIP -IPAddress $IPAddress
        # Creates new VM and waits for it to boot if required
        if ($VMItem.WaitForBoot -eq "Yes") {New-AzureVM -ServiceName $CloudServiceName -AffinityGroup $AffinityGroup -VMs $NewVM -VNetName $VMNetName -WaitForBoot}
            else {New-AzureVM -ServiceName $CloudServiceName -AffinityGroup $AffinityGroup -VMs $NewVM -VNetName $VMNetName }
    }
}

Remove-AzureExchangeLab.ps1

# Retrieve with Get-AzureSubscription 
$subscriptionName = "Visual Studio Premium with MSDN"

Import-AzurePublishSettingsFile 'downloaded.publishsettings.file.got.with.Get-AzurePublishSettingsFile'

# Select the subscription to work on if you have more than one subscription
Select-AzureSubscription -SubscriptionName $subscriptionName

# CSV File with following columns (BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort)
$CSVFile = Import-CSV 'path\filename.csv'

# Loop to build lab here. Ultimately get values from CSV file
foreach ($VMItem in $CSVFile) {

    # Stop VM
    Stop-AzureVM -Name $VMItem.VMName -ServiceName $VMItem.VMName -Force

    # Remove VM but leave VHDs behind
    Remove-AzureVM -ServiceName $VMItem.VMName -Name $VMItem.VMName 

    # Remove Cloud Service
    Remove-AzureService $VMItem.VMName -Force
}

CSV File Format

The CSV file has a row per virtual machine, listed in order that the machine is booted:

BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort
Yes,mh-oxf-dc1,portalvhdsjv47jtq9qdrmb,mh-oxf-mbx2-mh-oxf-mbx2-0-201312301745030496,Small,Oxford,10.0.0.4,West Europe,C7Solutions-AG,Yes,3389
etc,…

The columns are as follows:

  • BringOnline: Yes or No
  • VMName: This name is used for the VM and the Cloud Service. It must be unique within Azure. An example might be EX-LAB-01 (if that is unique that is)
  • StorageAccount: The name of the storage account that the VHD is stored in. This might be one you created yourself or one made by Azure with a name containing random letters. For example portalvhdshr4djwe9dwcb5 would be what this value might look like. Use Get-AzureStorageAccount to find this value.
  • VMOSDiskName: This is the disk name (not the file name) and is retrieved with Get-AzureDisk
  • VMInstanceSize: ExtraSmall, Small, Medium, Large or ExtraLarge
  • SubnetName: Get with Get-AzureVNetSite | FL Subnets
  • IPAddress: Sets a specific IP address for the VM. The VM will always get this IP when it boots, and other VMs will not take it if they happen to boot before it
  • Location: Retrieve with Get-AzureLocation. This value is not used in the script as I use Affinity Groups and subnets instead.
  • AffinityGroup: From Get-AzureAffinityGroup (for association with a private network you have already created).
  • WaitForBoot: Yes or No. This will wait for the VM to come online (and thus get an IP correctly provisioned in order) or ensure things like the domain controller is running first.
  • PublicRDPPort: Set to 3389 unless you want to use a different port. For simplicity, the script sets ports 443, 80 and 25 as open on the IP addresses of the VM
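Because both scripts trust the CSV completely, a quick pre-flight check is worth running before provisioning anything. A sketch in plain PowerShell, using the column names from the table above:

```powershell
# Sketch: validate the lab CSV before provisioning anything in Azure.
$required = 'BringOnline','VMName','StorageAccount','VMOSDiskName','VMInstanceSize',
            'SubnetName','IPAddress','Location','AffinityGroup','WaitForBoot','PublicRDPPort'

$rows = Import-Csv 'path\filename.csv'

# Every required column must be present
$columns = $rows[0].PSObject.Properties.Name
$missing = $required | Where-Object { $columns -notcontains $_ }
if ($missing) { throw "CSV is missing columns: $($missing -join ', ')" }

# VMName doubles as the cloud service name, so it must be unique
$duplicates = $rows | Group-Object VMName | Where-Object Count -gt 1
if ($duplicates) { throw "Duplicate VMName values: $($duplicates.Name -join ', ')" }
```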

Highly Available Office 365 to On-Premises Mail Routing

Posted on 22 CommentsPosted in 2010, 2013, cloud, DNS, EOP, exchange, exchange online, Exchange Online Protection, hybrid, IAmMEC, MX, Office 365, smarthost, smtp

This article looks at how to configure mail flow from Office 365 (via Exchange Online Protection – EOP) to your on-premises organization to ensure that it is highly available and works in disaster recovery scenarios with no impact. It is based on exactly the same principle as the one I blogged about in 2012: http://c7solutions.com/2012/05/highly-available-geo-redundancy-with-html on creating redundant outbound connections from Exchange on-premises.

The best way to explain this feature is to describe it in the way of an example:

For example, MCMEmail Ltd have hybrid set up, with delivery to the cloud first, so the DNS zone for mcmemail.co.uk has its MX pointing to EOP.

They then create a new DNS zone at either a subzone (as in this example) or a different domain if they have one available. In the example this could be hybrid.mcmemail.co.uk. Into this zone they add the following records:

hybrid.mcmemail.co.uk.  IN  MX  10  oxford-a.hybrid.mcmemail.co.uk.

hybrid.mcmemail.co.uk.  IN  MX  10  oxford-b.hybrid.mcmemail.co.uk.

hybrid.mcmemail.co.uk.  IN  MX  20  nuneaton.hybrid.mcmemail.co.uk.

The below picture shows an example of this configured in AWS Route 53 DNS (though there are other DNS providers available)

image

In the Exchange Online Protection administration pages (Office 365 Portal > Exchange Admin > Mail Flow > Connectors), modify your on-premises connector to point to the new zone. Example shown in the below picture:

image

Then all email is delivered to the Oxford datacentre and none to the Nuneaton one (where the DR servers reside), unless the two Oxford datacentres (A and B) are both offline and so the 10 preference does not answer at all. Only then does the 20 preference get connected to.
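You can confirm from a client that the hybrid zone returns the expected preferences with Resolve-DnsName (Windows 8/Server 2012 or later). A sketch, using the example zone name above:

```powershell
# Sketch: check that the hybrid zone returns the expected MX preferences.
Resolve-DnsName -Name hybrid.mcmemail.co.uk -Type MX |
    Sort-Object Preference |
    Format-Table NameExchange, Preference -AutoSize
```

The 10-preference Oxford hosts should list ahead of the 20-preference Nuneaton host.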

Errors in Moving Exchange Archive Mailboxes to Office 365

Posted on 2 CommentsPosted in 2010, 2013, archive, cloud, exchange, exchange online, Office 365, Outlook

I was trying to move an archive mailbox to the Office 365 service from my demo environment the other day when I came across an error I thought I would note down here for completeness. I could not find the error elsewhere on the internet.

An archive mailbox must be enabled before it can be moved

Now this sounds a stupid sort of error, because if I am moving an archive mailbox then it must already be enabled or the move mailbox wizard would not let me move it. The mailbox does have an archive in the on-premises Exchange organization, but when the move completes it reports errors.

The reason for the error is all down, in my case, to two things.

  1. It’s a demo environment and I am doing things too quickly
  2. DirSync is not working at the moment.

In my scenario the DirSync services on my server had stopped a few weeks ago and I had not restarted them – so ensure that DirSync is running. Secondly, as it was a demo, I was creating an archive mailbox and then moving it to the cloud shortly afterwards. Even if DirSync had been running, it would probably not have had a chance to sync the existence of the archive mailbox associated with the user account to Office 365’s directory. Archive mailboxes can only be moved from on-premises to the cloud if the cloud service knows about the archive to move.

To fix my issue I restarted the DirSync services and forced a full sync with the cmdlet Start-OnlineCoexistenceSync -FullSync. See http://technet.microsoft.com/en-us/library/jj151771.aspx#BKMK_SynchronizeDirectories for the steps to force a directory sync using this cmdlet.

Once the sync was completed and Office 365 directory is aware that my user has an on-premises archive I was able to move the archive to Office 365 and keep the mailbox on-premises.

Of course, if I want to create an archive in the cloud with a mailbox on premises for real (not a demo) then I would just create the archive straight in the cloud. My scenario and the three hour DirSync delay (or forcing DirSync) was only needed as I had created an archive and then moved it.

Enabling and Configuring AADRM in Exchange Online

Posted on Leave a commentPosted in 2010, 2013, aadrm, exchange, exchange online, IAmMEC, mcm, mcsm, Office 365, rms

This article is the fourth in a series of posts looking at Microsoft’s new Rights Management product set. In the previous post we looked at turning on the feature in Office 365 and in this post we will look at how to manage the service in the cloud.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back or leave a comment to the first post in the series and I will let you know when new content is added.

Once you have turned on Azure Active Directory rights management you need to enable it in a variety of locations based on your needs. This series of blog posts will look at doing that in both Exchange and SharePoint, both online in Office 365 and on-premises as well as for desktop users and mobile and tablet users. First we will start with Exchange Online.

Exchange Online configuration for AADRM is probably the most complex one to do, and it’s not that complex really! To enable AADRM for Exchange Online, at the time of writing, you need to import the RMS key from AADRM. If you had installed AD RMS on-premises then you might have already done this for Exchange Online to integrate it with your on-premises RMS infrastructure – if this is the case, don’t change the key online or it will break. These steps are for Exchange Online users who have never used or integrated AD RMS with Exchange on-premises.

Enabling AADRM in Exchange Online

  1. Enable AADRM in your Office 365 tenant as mentioned previously
  2. Set the RMS Key Sharing URL to the correct value as listed in http://technet.microsoft.com/en-us/library/dn151475(v=exchg.150).aspx
  3. For example, if your Office 365 tenant is based in the EU, you would set the key sharing location with Set-IRMConfiguration -RMSOnlineKeySharingLocation (using the EU URL from the above link) in a remote PowerShell session connected to Exchange Online
  4. Then import the keys and templates for your tenant from the AADRM servers online. The keys and templates are known as the Trusted Publishing Domain. This is done in a remote PowerShell session connected to Exchange Online using the following cmdlet:
    • Import-RMSTrustedPublishingDomain -RMSOnline -name “RMS Online”
  5. In the PowerShell response to the previous command you should see the AddedTemplates value read “Company Name – Confidential” and “Company Name – Confidential View Only” which are the default two templates. If customised templates have been created and published, they will appear here as well.
  6. To check that the key/template import has worked run Test-IRMConfiguration -RMSOnline from the Exchange Online remote PowerShell command prompt. You should see PASS listed at the end of the output.
  7. Finally, to turn on IRM protection in Exchange Online run Set-IRMConfiguration -InternalLicensingEnabled $true from an Exchange Online remote PowerShell session.
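Putting steps 2 to 7 together, the whole enabling sequence looks something like the following in a remote PowerShell session connected to Exchange Online. This is a sketch: the EU key sharing URL shown is an example, so check the TechNet link in step 2 for the correct value for your tenant’s region.

```powershell
# Sketch: enable AADRM for Exchange Online in a remote PowerShell session.
# The EU URL below is an example; use the value for your region from the TechNet article.
Set-IRMConfiguration -RMSOnlineKeySharingLocation "https://sp-rms.eu.aadrm.com/TenantManagement/ServicePartner.svc"

# Import the Trusted Publishing Domain (keys and templates) from AADRM
Import-RMSTrustedPublishingDomain -RMSOnline -Name "RMS Online"

# Verify the import - the output should end with PASS
Test-IRMConfiguration -RMSOnline

# Turn on IRM protection
Set-IRMConfiguration -InternalLicensingEnabled $true
```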

Configuring AADRM in Exchange Online

  1. Once IRM is enabled with Set-IRMConfiguration -InternalLicensingEnabled $true you can run Get-IRMConfiguration and you will see that the following options are enabled. You can turn any of these off (and on again) as you require:
    • JournalReportDecryptionEnabled: Ensures the IRM protected messages stored in an Exchange Journal report are also stored in the same report in clear text.
    • ClientAccessServerEnabled: Enables OWA to offer IRM protection during email composing (click the ellipsis (…) in the new email compose screen and select the set permissions menu). OWA will also prelicence IRM protected content so that OWA users can open content they are licenced to view without needing direct access to the RMS infrastructure. Note that during testing I found it could take up to 24 hours for Exchange Online to show the RMS templates in OWA.
    • SearchEnabled: When you search your mailbox for content, anything that is IRM protected will appear in your search results if it matches the search keyword. This setting allows Exchange Search to open and index your content even if it is not listed as a valid user of the content.
    • TransportDecryptionSetting: This allows the transport pipeline in Exchange to decrypt content so that it is available for transport agents to view it. For example anti-malware agents and transport rules. The content is reprotected at the end of the transport pipeline before it leaves the server.
    • EDiscoverySuperUserEnabled: Allows discovery search administrators to query for keywords in your protected content even if they would not be able to directly open the content if they had access to your mailbox or if they found the email saved to a file share or other sharing location.

What the RMS Settings in Exchange Actually Do

JournalReportDecryptionEnabled

Enabling journal report decryption allows the Journaling agent to attach a decrypted copy of a rights-protected message to the journal report. Before you enable journal report decryption, you must add the Federated Delivery mailbox to the super users group configured on your Active Directory Rights Management Services (AD RMS) server or AADRM settings.

Note: This is currently not working in Exchange Online and the above instructions are for Exchange On-Premises deployments.

ClientAccessServerEnabled

When IRM is enabled on Client Access servers, Outlook Web App users can IRM-protect messages by applying an Active Directory Rights Management Services (AD RMS) template created on your AD RMS cluster or AADRM service. Outlook Web App users can also view IRM-protected messages and supported attachments. Before you enable IRM on Client Access servers, you must add the Federation mailbox to the super users group on the AD RMS cluster or AADRM service, as this allows the server to decrypt content on the user’s behalf so that the user does not need access to the RMS or AADRM service.

With CAS being able to licence and get licences on your behalf from the RMS service, you have the ability to do RMS inside OWA, and even if you are offline in OWA then any protected content already comes with its licence and so can be read without a connection to the RMS service.

SearchEnabled

The SearchEnabled parameter specifies whether to enable searching of IRM-encrypted messages in Outlook Web App.

TransportDecryptionSetting

The TransportDecryptionSetting parameter specifies the transport decryption configuration. Valid values include one of the following:

  • Disabled   Transport decryption is disabled for internal and external messages.
  • Mandatory   Messages that can’t be decrypted are rejected, and a non-delivery report (NDR) is returned.
  • Optional   A best effort approach to decryption is provided. Messages are decrypted if possible, but delivered even if decryption fails.

Transport decryption allows RMS-protected messages to be decrypted as they are processed on the Exchange server and then encrypted again before they leave the server. This means transport agents, such as anti-virus agents or transport rules, can process the message (i.e. scan for viruses, add signatures or do DLP processing) as they see it in its unencrypted form.
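To change this setting, a short sketch using the same Set-IRMConfiguration cmdlet (note that Mandatory will NDR any message that cannot be decrypted, so consider testing with Optional first):

```powershell
# Require transport decryption: messages that cannot be decrypted are rejected with an NDR
Set-IRMConfiguration -TransportDecryptionSetting Mandatory
```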

EDiscoverySuperUserEnabled

The EDiscoverySuperUserEnabled parameter specifies whether members of the Discovery Management role group can access IRM-protected messages that were returned by a discovery search and are residing in a discovery mailbox. To enable IRM-protected message access for the Discovery Management role group, set the value to $true.
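For example, to switch this on (a sketch, assuming the super user prerequisites discussed elsewhere in this post are already in place):

```powershell
# Let Discovery Management role group members open IRM-protected search results
Set-IRMConfiguration -EDiscoverySuperUserEnabled $true
```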

Managing Azure Active Directory Rights Management

Posted on Leave a commentPosted in 2013, aadrm, dirsync, encryption, IAmMEC, journal, journaling, licence, mcm, mcsm, MVP, Office 365, rms, transport agent

This article is the third in a series of posts looking at Microsoft’s new Rights Management product set. In the previous post we looked at turning on the feature in Office 365 and in this post we will look at how to manage the service in the cloud.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back or leave a comment to the first post in the series and I will let you know when new content is added.

Once you have signed up for the Azure Active Directory Rights Management (AADRM) Service there are a few things that you need to manage. These are:

  • The service itself
  • Users who are allowed to create RMS protected content
  • Enable and configure Super User rights if required.

Managing AADRM

There is not a lot to do in the Office 365 admin web pages with regard to the management of the service, apart from enabling it (which we covered in the previous post) and disabling it. Disabling the service involves the same steps as enabling it – you just click the big deactivate button!

AADRM can be further managed with PowerShell though. There are lots of blog posts on connecting to Office 365 using PowerShell, and some of those include the cmdlets to connect to Exchange Online etc. as well. The code below builds on this: it loads the AADRM module and connects to the AADRM service in the cloud.

# Connect to Office 365 (Windows Azure Active Directory)
$cred = Get-Credential
Write-Host "Username: " $cred.Username
Connect-MsolService -Credential $cred
Write-Host "...connected to Office 365 Windows Azure Active Directory"

# Connect to Exchange Online with the same credentials
$s = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell -Credential $cred -Authentication Basic -AllowRedirection
$importresults = Import-PSSession $s -Verbose
Write-Host "...connected to Exchange Online"

# Connect to the AADRM service
Import-Module AADRM
Connect-AadrmService -Verbose -Credential $cred

If you save the above PowerShell code as a text file with a .ps1 extension then you can run the script and easily connect to Office 365 with the credentials you enter, then to Exchange Online with the same set of credentials, and finally to AADRM with, of course, the same credentials. This allows you to manage users, email and security from a single session.

To get the AADRM PowerShell module on your computer (so that Import-Module AADRM works) you need to download the Rights Management PowerShell administration module from http://go.microsoft.com/fwlink/?LinkId=257721 and then install it.

To install it you need to have already installed the Microsoft Online Services Sign-In Assistant 7.0 and PowerShell 2.0. The PowerShell config file needs some settings added to it, though I found on my Windows 8 PC that these had already been done. See the instructions at http://technet.microsoft.com/en-us/library/jj585012.aspx for this change to the config file.

  1. Run a PowerShell session and load the module with
    1. Import-Module AADRM
    2. Connect-AadrmService -Verbose
  2. Log in when prompted with a user with Global Admin rights in Office 365.
  3. Or, use the script above to connect to Office 365, Exchange Online and AADRM in a single console.
  4. Run Get-Aadrm to check that the service is enabled

Enabling Super User Rights

Super Users in RMS are accounts that have the ability to decrypt any content protected with that RMS system. You do not need Super User rights to use RMS, nor do you need anyone who has Super User rights to use the product. But there are times when it might be required. One example would be during a discovery or compliance process. At this time it might be required that someone is able to open any RMS protected document to look for hits on the compliance issue in question. Super User gives that right, but would be needed just for the duration of the task that requires these rights. Rights to be Super User would be granted as needed and very importantly removed as needed.

Another example for the use of Super User is when a process needs to see content in its unprotected form. The common use case for this is Exchange Server and its transport decryption process. In Exchange Server you have agents that run against each message looking for something and then acting if that something is found. For example you would not want an virus to bypass the built in AV features of Exchange Server 2013 by protecting it with RMS! Or if you had a disclaimer transport rule or agent, you would not want the disclaimer or DLP feature to not see the content and act upon it because the content was encrypted. The same goes for journaling and the ability to journal a clear text copy of the message as well as the encrypted one if you wish.

To do all this in Exchange Server, the RMS Super User feature needs to be enabled. We will come back to the specifics of doing that for Exchange in a later post, but first we need to enable it in AADRM, set the users who will be Super Users and then, when we are finished with whatever required Super User, turn it off again.

The Rights Management super users group is a special group that has full control over all rights-protected content managed by the Rights Management service. Its members are granted full owner rights in all use licenses that are issued by the subscriber organization for which the super users group is configured. This means that members of this group can decrypt any rights-protected content file and remove rights-protection from it for content previously protected within that organization.

By default, the super users feature is not enabled and no groups or users are assigned membership of it. To turn on the feature run Enable-AadrmSuperUserFeature from the AADRM PowerShell console. The opposite cmdlet exists to turn the feature off again – Disable-AadrmSuperUserFeature!

Once it is enabled you can set Office 365 users as Super Users. To do this run Add-AadrmSuperUser -EmailAddress user@domain.com where the user is either a cloud-only Office 365 account or one that you have pushed to Office 365 using DirSync from your on-premises Active Directory. You can add more than one user; each user is added with a separate run of the cmdlet.

To see your list of Super Users, run Get-AadrmSuperUser. To remove users either take them out one by one (Remove-AadrmSuperUser –EmailAddress user@tenant.onmicrosoft.com) or just turn off the Super User feature with Disable-AadrmSuperUserFeature.
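The full lifecycle described above can be sketched as follows (assuming you are already connected with Connect-AadrmService; the email address is a placeholder):

```powershell
# Turn the super users feature on
Enable-AadrmSuperUserFeature

# Grant super user rights for the duration of the discovery task
Add-AadrmSuperUser -EmailAddress discovery@tenant.onmicrosoft.com

# Review who currently has super user rights
Get-AadrmSuperUser

# Clean up when the task is complete
Remove-AadrmSuperUser -EmailAddress discovery@tenant.onmicrosoft.com
Disable-AadrmSuperUserFeature
```

Granting and then removing the rights, rather than leaving them in place, keeps the window in which any single account can decrypt all content as short as possible.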

Adding AADRM Licences to Users

Once you have AADRM activated you can give your users the rights to create protected content. This is done on the licensing page of the Office 365 web admin portal or via PowerShell. The steps for adding user licences in the shell are discussed at http://c7solutions.com/2011/07/assign-specific-licences-in-office-365-html. That article was written some time ago, so the following are the changes for AADRM:

  • The SkuPartNumber for AADRM is RIGHTSMANAGEMENT_ADHOC
  • The Service Plan for the AADRM SKU is RMS_S_ADHOC
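Following the approach in that article, assigning the AADRM SKU with the MSOnline cmdlets looks something like this (the "tenant" prefix in the AccountSkuId and the user address are placeholders for your own values):

```powershell
# Build the AccountSkuId for the AADRM SKU - replace "tenant" with your tenant name
$sku = "tenant:RIGHTSMANAGEMENT_ADHOC"

# The user must have a UsageLocation set before a licence can be assigned
Set-MsolUser -UserPrincipalName user@domain.com -UsageLocation GB
Set-MsolUserLicense -UserPrincipalName user@domain.com -AddLicenses $sku
```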

Message Classifications, Exchange 2013, Exchange Online and Outlook

Posted on 11 CommentsPosted in 2010, 2013, exchange, exchange online, Office 365, Outlook, rms

Message Classifications are a way to tag email with a property that describes the purpose of the email, for example “Internal Use Only” might be a classification to tell the recipient of the email that the message should not be forwarded. Classifications are configured by administrators and appear shortly after creation in Outlook Web App, but need further work for them to appear in Outlook. Once you have a classification system in place, you can use Transport Rules in Exchange to read the classification and act on the message, for example if the message is classified “Internal Use Only” and the recipient is in an external domain then an NDR could be returned and the message dropped.

This blog post will cover the following:

  1. Creating a message classification
  2. Localising message classifications for different geographies and language groups
  3. Classification considerations when you have multiple Exchange organizations
  4. Creating a transport rule to act on a classified message
  5. Sending messages with classification via OWA
  6. Setting up Outlook to use message classifications
  7. Sending messages with classification via Outlook

Creating a Message Classification

This needs to be done in the Exchange Management Shell. It is a single cmdlet per classification. A simple example being:

New-MessageClassification -Name "CorporateConfidential" -DisplayName "Corporate Confidential" -SenderDescription "This email is confidential for the entire company and not to be distributed outside the company"

This creates a classification called Default\Name. In the above example this would be Default\CorporateConfidential. This value is not seen by anyone other than the administrators of Exchange; users see the DisplayName value. The SenderDescription appears at the top of the message as it is being written in Outlook or OWA, and you can have a different RecipientDescription for display in the recipient's email. In the above example the SenderDescription (which is required) will also be the RecipientDescription, as that was not specifically set.

Localising Message Classifications

If you want to localise the classification continue with something similar to this (translations from Bing!) by changing the display and description text and setting the locale to match:

New-MessageClassification -Name "CorporateConfidential" -DisplayName "Société Confidentielle" -SenderDescription "Cet email est confidentiel pour l'ensemble de l'entreprise et de ne pas être distribués à l'extérieur de l'entreprise" -Locale FR-FR

Once you have the correct translation for the DisplayName and SenderDescription, you run the cmdlet, which sets the language against the previously created classification. Note that at the time of writing this cmdlet does not work in Exchange Online (see reported issue for more).

Classifications and Hybrid Mode (or multiple Exchange organizations)

If you have both an Office 365 tenant and an on-premises organization installed (i.e. Hybrid Mode) then you should create the classifications in one organization (I recommend the on-premises org) and export them for use in Outlook (see later in the blog). For the other organization (I recommend Exchange Online) you should use the same cmdlets to create the classifications again, but specify the ClassificationID that was automatically set when the classification was made on-premises – the same classification in both organizations should have a matching ClassificationID. To get the ClassificationID from Exchange on-premises before running the cmdlets again in remote PowerShell, use Get-MessageClassification | FT Name,ClassificationID
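Putting that together, recreating the on-premises classification in Exchange Online might look like this (the GUID shown is a placeholder; use the value returned by Get-MessageClassification on-premises):

```powershell
# On-premises: read the ClassificationID of the existing classification
Get-MessageClassification | FT Name,ClassificationID

# In a remote PowerShell session to Exchange Online: recreate it with the same ID
New-MessageClassification -Name "CorporateConfidential" -DisplayName "Corporate Confidential" -SenderDescription "This email is confidential for the entire company and not to be distributed outside the company" -ClassificationID 11111111-2222-3333-4444-555555555555
```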

Creating a Transport Rule That Uses Message Classifications

Create a new transport rule that implements RMS protection with the "CompanyName – Confidential" template if the message is flagged with the previously created classification. The following screenshot shows the required properties, or you could use PowerShell:

New-TransportRule -Name "Rights Protect Company Confidential Emails PS" -HasClassification CorporateConfidential -ApplyRightsProtectionTemplate "CompanyName – Confidential"

Remember to change the name of the classification and the RMS template name to suit. Also remember that if you have Hybrid Exchange mode enabled, you need to create the transport rule in both locations, and therefore you need RMS enabled in both locations (though if you are not using RMS as this example does, you do not need it enabled to do message classifications).
RMS012

Classifications and OWA

Once the rule and the classification are created, send an email using OWA, setting the classification while composing the email. Note that during testing I found it could take up to 24 hours for Exchange Online to show both the RMS templates in OWA and the message classifications.

RMS013

Note that at the time of writing there is a bug in OWA in Exchange 2013 that causes transport rules to not see the classification correctly. Tests using Outlook (see the next few steps) will work fine.

Classifications and Outlook

It is more complex to do message classifications for Outlook as you need to export the classifications from Exchange Server as an XML file, place the XML file in a location the Outlook client computer can get to and set some registry keys. The steps are as follows:

  1. To send the email using Outlook you need to first export the message classification using Export-OutlookClassification.ps1 script from Exchange on-premises (in the Exchange/v15/scripts folder of your installation). To export from Exchange Online you need to copy this script from an on-premises installation and run it in your remote PowerShell window connected to Exchange Online.
  2. Once you have the Classifications XML you need to copy this to each Outlook computer and run Regedit on these computers to enable classifications and to point to the classification file. The regedit settings are:
    1. [HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Common\Policy]
    2. “AdminClassificationPath”=”c:\\temp\\temp\\classifications.xml”
    3. “EnableClassifications”=dword:00000001
    4. “TrustClassifications”=dword:00000001
  3. You need to set the path correctly in the registry as to the location (on the local computer or always available network drive) that the classifications xml file is located. This registry set is for Outlook 2016 – change the version number from 16.0 to suit earlier versions of Outlook (2013 = 15.0, 2010 = 14.0).
  4. Once the registry is set and the classifications XML file in the location the registry will look in, restart Outlook and compose a test message, setting the classification you are looking for in transport rules.
    RMS014
  5. When the message arrives at the destination inbox, if it was sent with the classification then it should have the RMS template applied (the action of the transport rule). In the following screenshot you can see two emails in Outlook. The lower email was sent with OWA and due to an OWA bug the classification is set incorrectly and so it is not RMS protected as the transport rule does not fire. The upper email can be seen to have RMS protection, though I cannot screenshot it as it has RMS protection and that stops me using PrintScreen or OneNote screen clipping tools whilst that message is open!
    RMS015RMS016
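The export and registry steps above can be scripted; a hedged sketch follows (the paths are examples – adjust for your environment, and change 16.0 for earlier Outlook versions as noted above):

```powershell
# On the Exchange server: export the classifications to an XML file
.\Export-OutlookClassification.ps1 > C:\temp\classifications.xml

# On each Outlook 2016 client: point Outlook at the XML and enable classifications
$key = "HKCU:\Software\Microsoft\Office\16.0\Common\Policy"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name AdminClassificationPath -Value "C:\temp\classifications.xml"
Set-ItemProperty -Path $key -Name EnableClassifications -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name TrustClassifications -Value 1 -Type DWord
```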

Cannot Send Emails To Office 365 or Exchange Online Protection Using TLS

Posted on 12 CommentsPosted in 2003, 2007, 2010, 2013, exchange, exchange online, Exchange Online Protection, FOPE, hybrid, Office 365, spam, starttls, TLS

I have found this is a common issue. You set up an Exchange Online Hybrid or Exchange Online Protection (EOP) stand-alone service and follow all the instructions for creating the connectors needed, only to find that your emails queue on your Exchange Server. If you turn on protocol logging you get this error in the log: "Connector is configured to send mail only over TLS connections and remote doesn't support TLS". If you look at the SMTP protocol verbs recorded in the log, you see that Microsoft's servers do not offer STARTTLS as a verb.

STARTTLS is the SMTP verb needed to begin a secure and encrypted session using TLS. Communication between your on-premises servers and Microsoft for hybrid or EOP configurations requires TLS and if you cannot start TLS then your email will queue.

If you are not configuring hybrid or EOP standalone and need to send an email to someone on Office 365 then this is not an issue, because Exchange Server does not require TLS for normal email communication and so the lack of a STARTTLS verb means your email is sent in clear text.

The reason you are not getting STARTTLS offered is that your connecting IP address is on the Microsoft block list. If you change your connector (temporarily) to allow opportunistic TLS, or no TLS at all, then your emails will leave the queue – but will be rejected by the Microsoft servers. The NDR for the rejection will tell you to email Microsoft's delisting service. So now you have an NDR with the answer to the problem in it, and you can fix it! It takes 1 to 2 hours to get delisted from when Microsoft process your email – though they say it takes 48 hours end to end.

Therefore my recommendation when setting up Exchange Online Hybrid or stand alone EOP is to send an email over plain text to EOP before you configure your service. If you are on the blocklist then you will get back the delisting email and you can process that whilst setting up the connectors to Office 365 and so by the time you are ready to test, you are off the blocklist!

To send a test email over Telnet

  1. Install the Telnet Client feature on your Exchange Server that will be your source server for hybrid or connectivity to EOP for outbound email
  2. Type the following. This will send an email to a fake address at Microsoft, but will hit the TLS error before the message is rejected

    telnet microsoft-com.mail.protection.outlook.com 25

  3. You are now connected to Exchange Online Protection and you should get a 220 response
  4. Type the following to send the email by command line. No typos are allowed in telnet, so type carefully, replacing your email address where prompted so that you get the NDR back to you.

    ehlo yourdomainname.com
    mail from: youremailaddress@domainname.com
    rcpt to: madeupaddress@microsoft.com
    data
    from: Your Name <youremailaddress@domainname.com>
    to: madeupaddress@microsoft.com
    subject: testing to see if my IP is blocked

    type something here, it does not matter what, this is the body of the message you are sending
    .

  5. A few points about the above: the message must finish with a . (full stop) on a line by itself followed by a carriage return; there must be a blank line between the subject line and the body; and finally, for each line of data you type, the Microsoft SMTP servers will return either a 250 or a 354 response.

Rebuilding Search Catalogs on Exchange Server 2013

Posted on 2 CommentsPosted in 2013, exchange, IAmMEC, search

In Exchange 2010 there was a PowerShell script for rebuilding the search catalog. This is deprecated in Exchange 2013. TechNet contains instructions on copying the catalog from a working server in the DAG – but what if the database is not a member of a DAG, or all the catalogs in the DAG are broken?

This blog post will attempt to answer this:

Ensure all Search Services are Running

If you cannot complete a search in OWA, or you are using Outlook in Online mode (or searching in Outlook 2013 beyond the default one year of cached content), then the search takes place back on the server. Therefore, on the server that is active for your mailbox database the following services need to be running:

  • RTM: Microsoft Exchange Search Service
  • CU1 and later: Microsoft Exchange Search Service and Microsoft Exchange Search Host Controller

Ensure that these services are running before attempting to complete a search.

Once they are running, check the health of the search catalog with the following cmdlet in Exchange Management Shell: Get-MailboxDatabaseCopyStatus

The last column will display the state of the index. Anything other than Healthy needs to be checked; Crawling as a response means the index is currently being created or updated.

Checking for Corrupt Search Catalogs

Run the Test-ExchangeSearch cmdlet to get back a set of results on the health of your search catalog. This will take a short while, as Exchange will write known content into the Health Mailboxes on each database and then query search for the words that it expects to find. Errors here such as Time out for test thread and Catalog State: FailedAndSuspended should be fixed.

If the mailbox database is part of a DAG and other working copies exist then you can reseed the catalog using Update-MailboxDatabaseCopy faileddb\server -CatalogOnly

If all the copies are failed then stop the services listed in the previous section and delete or rename the catalog folder and restart the services listed above. The search catalog will be rebuilt and will take some time.

The location of the search catalog is always a subfolder of the mailbox database file, and this folder can be found by running: Get-MailboxDatabase -Server servername | fl name,edbfilepath,guid

This will return the name, the path to the mailbox database and the database GUID. All you need to do then is navigate to the folder that contains the database and rename the folder that has the GUID in its name – any new name will do. If a folder named for the GUID of the mailbox database does not exist, a new one will be created.
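As a sketch, the lookup and rename might be scripted like this (stop the search services first, as above; the database name is a placeholder, and the catalog folder name starts with the database GUID):

```powershell
# Find the folder containing the EDB file for the database
$db = Get-MailboxDatabase "Mailbox Database 01"
$edbFolder = Split-Path $db.EdbFilePath

# The catalog folder name begins with the database GUID; rename it out of the way.
# A fresh catalog is rebuilt when the search services are restarted.
Get-ChildItem $edbFolder -Directory |
    Where-Object { $_.Name -like "$($db.Guid)*" } |
    Rename-Item -NewName { $_.Name + ".old" }
```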

Start the services that you stopped above and shortly a new catalog folder will be created. Depending upon the size of the database, rebuilding the catalog could take some time. Run Get-MailboxDatabaseCopyStatus to see when the catalog has finished crawling. Crawling will take additional IO from your disks, and so it is recommended not to run these steps during the day when IO is required, unless fixing your search index is more important!

Secret NSA Listening Ports in Exchange Server 2013? Of Course Not…

Posted on Leave a commentPosted in 2013, Edge, exchange, firewall, IAmMEC, iis, networking, powershell, transport

But what do those extra ports in Exchange Server 2013 that are listening actually do.

If you bring up a command prompt on an Exchange Server 2013 machine and run netstat -ano | find ":25" you will get back a list of IP addresses that are listening on any port starting 25. The last number on the line is the process ID for that listening port. So for Mailbox-only role servers you are interested in the row that shows 0.0.0.0:25, and for a multirole (CAS and Mailbox) server, the row that shows 0.0.0.0:2525, as shown:

image

Above you can see that process 21476 is listening on 2525 and as this is a multirole server this process ID will be EdgeTransport.exe – you can verify this in Task Manager if you want.

Repeat the netstat command, this time for the process ID you have selected: netstat -ano | find "xxxxx", where xxxxx is the process ID for EdgeTransport.exe, as shown:

image

You will now see that EdgeTransport.exe is listening on a range of ports, for both IPv4 and IPv6. These ports are 25 or 2525, which are used by 2007 or 2010 Edge role servers, other 2007/2010 Hub Transport role servers, other 2013 Mailbox role servers or 2013 CAS role servers to send emails to this server via SMTP. Port 465 is the port to which 2013 CAS servers proxy the authenticated SMTP connections that they receive on port 587 from mail clients. But what about port 29952 in my example (a different port on your servers), which changes each time the service is restarted?

If you do netstat again just for this port you will see something like the following:

image

This shows that nothing is connected currently to these ports, and so they seem to be doing nothing. But if on a different Exchange 2013 server you do some viewing of the transport queues then these ports will start to show some activity.

On a different server in my environment I ran Get-Queue -Server remoteservername. If I do this on the local server, then nothing special happens as Exchange does not need to connect to these ports, but if it is run from a different server and I ask it to show the queue on the first server that we have been looking at above, then these ports become used:

image

image

Above we can see the Exchange Management Shell on mail5 connecting to mail4 (the original server in this blog post). The second picture, from mail4, shows that port 29952 has received a connection from the IPv6 address of mail5, specifically from port 65172 on that remote server.

If I look finally at the second server in this exercise and see what process is connecting from port 65172 (again, your ports will be different), I see that process 15156 is doing this (the process ID is the last column in the output).

image

Taking this process ID to Task Manager, I see that process 15156 is the IIS Worker Process, which is the process that PowerShell connects to in order to do its work.

image

Therefore, the random and changing port that EdgeTransport.exe listens on is nothing to do with PRISM, but all to do with your management of remote queues.

Ensuring Email Delivery Security with Exchange 2013

Posted on 4 CommentsPosted in 2007, 2010, 2013, encryption, exchange, IAmMEC, TLS, transport

Forcing Exchange 2013 to guarantee the secure delivery of a message can be done in a few different ways. In this version of the product, and in previous versions, it was possible to create a send connector for a given domain and enable Mutual TLS on the connector. All messages to the domain(s) that this connector serviced would then need to travel over a TLS connection where the certificate at both ends was completely valid (i.e. valid with regard to the date, had the correct subject or SAN for the domain, was issued by a trusted certificate authority, etc.). In previous versions (2007 to 2010) it was also possible to enable Domain Secure and add another level of checks to the Mutual TLS session. Domain Secure does not work in Exchange 2013.

And great though these methods of transport security are, they are limited: they are difficult to set up (requiring good knowledge of certificates) and need to be properly configured at both ends of the connection. They will also only secure email to the selected domains. If you need to send a "top secret" email to someone, you don't really want to have to configure a connector at both ends and force all email for that domain down the same path.

So, in Exchange 2013 you can create a transport rule to force the connection to use TLS, and if TLS fails then have the message queue on the sender until it retries and eventually expires. If TLS is never available, the message never goes out of Exchange – or so it would be if all you read was the description in the documentation!

The RouteMessageOutboundRequireTLS transport rule action (or the Secure the message with > TLS encryption option in the ECP transport rules wizard) allows you to craft a rule for any condition (for example the subject or body contains any of these words: top secret) which will require the email to use an encrypted session for the delivery outside of Exchange Server. Note that for this to work the TLS session does not need to be protected by a given certificate or valid etc., it just needs the receiving SMTP Server to offer STARTTLS and for the encryption to work.
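A sketch of such a rule in PowerShell (the rule name and keyword are placeholders to adjust for your own policy):

```powershell
# Queue (and eventually expire) rather than deliver in clear text
# if the receiving SMTP server cannot negotiate TLS
New-TransportRule -Name "Require TLS for top secret mail" -SubjectOrBodyContainsWords "top secret" -RouteMessageOutboundRequireTls $true
```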

And it needs a source server for sending the message in every Active Directory site within the organization. Currently (as of CU2 for Exchange 2013), if you send a message from a site that does not contain a send connector that can handle the message to the internet, then Exchange will pass it to a site that can; but the source transport server will then not enforce the TLS requirement, and will send the message unprotected if STARTTLS is not offered.

So if you want to guarantee the use of TLS for certain types of message, use the RouteMessageOutboundRequireTLS transport rule action and ensure that you do not need to do cross-site delivery of messages to reach a send connector source server that delivers the message to the internet.