Duplicate Exchange Online and Exchange Server Mailboxes


With a hybrid Exchange Online deployment, where you have Exchange Server on-premises and Exchange Online configured in the cloud, with AADConnect synchronizing the directories, you should never find that a synced user object is configured as both a mailbox in Exchange Online and a mailbox on-premises.

When Active Directory is synced to Azure Active Directory, the ExchangeGUID attribute for the on-premises user is synced to the cloud (assuming that you have not done a limited attribute sync and excluded the Exchange attributes from syncing to AAD – syncing these attributes is required for Exchange Online hybrid). Exchange Online, though, does not read attributes from Azure Active Directory: it reads its attributes from the Exchange Online Directory Service (EXODS). The Exchange Online directory receives a copy of the Exchange-related information from Azure Active Directory (Azure AD) in a process known as forward sync. This ensures that the ExchangeGUID attribute from the on-premises mailbox is synced into Exchange Online for your tenant.

When a user is given an Exchange Online licence, it becomes the job of Exchange Online to provision a mailbox for that user. Exchange Online will not provision a new mailbox where the ExchangeGUID attribute already exists: the existence of this attribute tells the provisioning process that the mailbox already exists on-premises and may be migrated later, so a conflicting mailbox must not be created. A cloud user who does not have an ExchangeGUID attribute synced from on-premises will get a mailbox created by the Exchange Online provisioning process when a licence is assigned, and on-premises users that do not have a mailbox on-premises (and so have no ExchangeGUID attribute) will also find that granting them an Exchange Online licence triggers the creation of a mailbox. Note that this last option creates a mailbox in the cloud, but all the attribute management of this mailbox must be done on-premises, as the object syncs from on-premises and that is therefore the source of the object. Therefore avoid licensing synced objects that do not have a mailbox or remote mailbox on-premises (see my session on this at Microsoft Ignite 2018 “THR2145 – Why do we need to keep an Exchange Server on-premises when we move to the cloud?“)
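If you want to find synced users at risk of this, a minimal sketch (run in Exchange Online PowerShell) lists mail users whose ExchangeGUID is empty, i.e. users with no mailbox or remote mailbox on-premises, who should therefore not be given an Exchange Online licence yet:

# A sketch: synced mail users with an all-zeros ExchangeGuid have no
# mailbox or remote mailbox on-premises - licensing them creates a
# cloud mailbox that must still be managed from on-premises
Get-MailUser -ResultSize Unlimited | where {$_.IsDirSynced -and $_.ExchangeGuid -eq "00000000-0000-0000-0000-000000000000"} | FT Name,UserPrincipalName,ExchangeGuid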

The above is what happens in most cases: the user on-premises has an ExchangeGUID value, that value is synced to the cloud, the user is licensed, and a second mailbox is not created. But there is an edge case where an on-premises user with a mailbox (and therefore a populated ExchangeGUID attribute) will also get a mailbox in Exchange Online. This happens where the organization manually created cloud mailboxes before enabling AADConnect to sync the directories, and these cloud users match the on-premises users by UserPrincipalName or primary SMTP address.

In the above case, because they are cloud users with an Exchange Online licence, they get a mailbox. Deleting the cloud user and then enabling sync will cause the original cloud mailbox to be restored to the user account, as the UserPrincipalName matches.

For example, a user called “twomailboxes@domain.com” is created in the cloud.


The user is granted a full Office 365 E3 licence, so this means the user has an Exchange Online mailbox. There is no AADConnect sync in place, but the UPN matches a user on-premises who has a mailbox.

In Exchange Online PowerShell, once the mailbox is provisioned we can see the following:


PS> Get-Mailbox twomailboxes | FL name,userprincipalname,exchangeguid


Name              : twomailboxes
UserPrincipalName : twomailboxes@domain.com
ExchangeGuid      : d893372b-bfe0-4833-9905-eb497bb81de4

Repeating the same on-premises will show a separate user (remember, no AD sync in place at this time) with the same UPN and of course a different ExchangeGUID.


[PS] > Get-Mailbox twomailboxes | FL name,userprincipalname,exchangeguid


Name              : Two Mailboxes
UserPrincipalName : twomailboxes@domain.com
ExchangeGuid      : 625d70aa-82ed-47a2-afa2-d3c091d149aa

Note that the on-premises object’s ExchangeGUID is not the same as the cloud ExchangeGUID. This is because there are two separate mailboxes.

Get-User in the cloud also reports something useful: the “PreviousRecipientTypeDetails” value, which is not modifiable by the administrator. In this case it shows there was no previous mailbox for the user, but it can reveal that a previous mailbox did exist. For completeness we also show the licence state:


PS > Get-User twomailboxes | FL name,recipienttype,previousrecipienttypedetails,*sku*


Name                         : twomailboxes
RecipientType                : UserMailbox
PreviousRecipientTypeDetails : None
SKUAssigned                  : True

Now, in preparation for the sync of the Active Directory to Azure Active Directory, the user accounts in the cloud are either left in place (and so sync will do a soft-match for those users) or they are deleted and the on-premises user account syncs to the cloud. In the first case, the clash on sync results in the cloud mailbox being merged with the settings from the on-premises mailbox (on-premises values overwrite cloud values unless the on-premises value is null or does not exist). In the second case, the cloud object deleted before sync, there is no user account to merge into, but there is a mailbox to restore against this user. And even though the newly synced user has an ExchangeGUID attribute on-premises that is synced to the cloud, and has a valid licence, Exchange Online reattaches the old mailbox, which is associated with a different GUID but the same UPN/email address.

The impact of this ranges from minor to massive. In the scenario where MX points to on-premises and you have not yet moved any mailboxes to the cloud, this cloud mailbox will only get email from other cloud mailboxes in your tenant (there are none in this scenario) or internal alerts in Office 365 (and these are reducing over time, as they start to follow correct routing). It can be a major issue though if your MX record points to Exchange Online Protection. As there is now a mailbox in the cloud for a user on-premises, inbound internet-sourced email for your on-premises user will get delivered to the cloud mailbox and never appear on-premises!

The other problem is that where there is a duplicate mailbox, move requests for those users when onboarding to Exchange Online will fail.


The error reads “a subscription for the migration user <email> couldn’t be loaded”. This occurs where the user was not licensed (and so there was no duplicate mailbox in the cloud), but the user was later licensed before the migration completed.

Where the invalid duplicate mailbox exists in the cloud and is getting valid emails delivered to it, the recovery work described below will additionally involve exporting email from this invalid mailbox and then removing the mailbox as part of the fix. Extraction of email from the duplicate mailbox needs to happen before the licence is removed.

To remove the cloud mailbox and stop it being recreated, you need to ensure that the synced user does not have an Exchange Online licence. You can grant them other licences in Office 365, but not Exchange Online. I have noticed that if you do licensing via Azure AD group-based licensing rules then this will also fail (these are still in preview at the time of writing), so you need to ensure that the user is assigned the licence directly in the Office 365 portal and that they do not get the Exchange Online licence. After licence reconciliation in the cloud occurs (a few minutes typically, though I have seen it take a few hours), the duplicate mailbox is removed. The Get-User cmdlet above will then show the RecipientType as MailUser and not UserMailbox.
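If you manage licences with the MSOnline module, a sketch of assigning the licence with just the Exchange Online service plan disabled looks like this (the SKU name ENTERPRISEPACK and plan name EXCHANGE_S_ENTERPRISE are typical E3 values, not a given; check Get-MsolAccountSku in your tenant):

# Assumption: an E3 (ENTERPRISEPACK) licence; verify the SKU and service
# plan names with Get-MsolAccountSku before running
Connect-MsolService
$options = New-MsolLicenseOptions -AccountSkuId "tenant:ENTERPRISEPACK" -DisabledPlans "EXCHANGE_S_ENTERPRISE"
Set-MsolUserLicense -UserPrincipalName "twomailboxes@domain.com" -LicenseOptions $options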

You are now in a position where your duplicate cloud mailbox is gone (which is why, if that mailbox had been receiving valid emails before now, you would need to have extracted the data via discovery and search processes first).

Running the above Get-User and Get-Mailbox (and now Get-MailUser) cmdlets in the cloud will show you that the ExchangeGUID on the cloud object now matches the on-premises object and the duplication is gone. You can now migrate that mailbox to the cloud successfully.

We can take a look at what we see in remote PowerShell here:

Recall from above that there were two different ExchangeGUIDs, one in the cloud and one on-premises. In my example these were:

Cloud duplicate ExchangeGuid      : d893372b-bfe0-4833-9905-eb497bb81de4

On-premises mailbox ExchangeGuid  : 625d70aa-82ed-47a2-afa2-d3c091d149aa

Get-User before the licence is removed in the cloud, showing a mailbox and that it was previously a mailbox as well:


PS > Get-User twomailboxes | FL name,recipienttype,previousrecipienttypedetails,*sku*


Name                         : Two Mailboxes
RecipientType                : UserMailbox
PreviousRecipientTypeDetails : UserMailbox
SKUAssigned                  : True

Get-Mailbox in the cloud showing the GUID was different from on-premises:


PS > Get-Mailbox twomailboxes | FL name,userprincipalname,exchangeguid


Name              : Two Mailboxes
UserPrincipalName : twomailboxes@domain.com
ExchangeGuid      : d893372b-bfe0-4833-9905-eb497bb81de4

Once the Exchange Online licence is removed in Office 365 and licence reconciliation completes (SKUAssigned is False), Get-User shows it is not a mailbox anymore:


PS > Get-User twomailboxes | FL name,recipienttype,previousrecipienttypedetails,*sku*


Name                         : Two Mailboxes
RecipientType                : MailUser
PreviousRecipientTypeDetails : UserMailbox
SKUAssigned                  : False

And finally Get-MailUser (not Get-Mailbox now) run in Exchange Online shows the ExchangeGUID matches the on-premises, synced, ExchangeGUID value:


PS > Get-MailUser twomailboxes | FL name,userprincipalname,exchangeguid


Name              : Two Mailboxes
UserPrincipalName : twomailboxes@domain.com
ExchangeGuid      : 625d70aa-82ed-47a2-afa2-d3c091d149aa

Note that giving these users back their Exchange Online licence will revert all of the above and restore their old mailbox. As these users cannot have an Exchange Online licence assigned in the cloud before you migrate their on-premises mailbox (as that would restore the old cloud mailbox), you need to ensure that you give them an Exchange Online licence within 30 days of their on-premises mailbox being migrated to the cloud. Giving them a licence after migration of their on-premises mailbox ensures their single, migrated mailbox remains in Exchange Online; giving them a licence before migration restores their old cloud mailbox.

For users that never had a matching UPN in the cloud and a cloud mailbox, you can license them before you migrate their mailbox, as they will work correctly within the provisioning system in Exchange Online.

DMARC Quarantine Issues


I saw the following error with a client the other day when sending emails from the client to any of the Virgin Media owned consumer ISP email addresses (virginmedia.com, ntlworld.com, blueyonder.com etc.):

mx3.mnd.ukmail.iss.as9143.net gave this error:
vLkg1v00o2hp5bc01Lkg9w DMARC validation failed with result 3.00:quarantine

In the above, the server name (…as9143.net) might change, as will the value before the error, but the message always ends with either “DMARC validation failed with result 3.00:quarantine” or “4.00:reject”.

We resolved this error by shortening the DMARC record of the sending organization. Before we made the change we had a DMARC record of 204 characters. We could not find a reference online to the maximum length of a DMARC record; we could successfully add a record of this length to Route 53 DNS provided by AWS, though a record of 277 characters was not allowed in AWS. Other references online to DNS record length seem to imply that 255 characters is the maximum (this is the limit for a single TXT character-string), but not specifically for DMARC.
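To check the published record and its length yourself, something like the following sketch works from any recent Windows machine (Resolve-DnsName comes from the DnsClient module; _dmarc.example.com is a placeholder for your sending domain):

# Query the DMARC TXT record and report its length
$record = (Resolve-DnsName -Name "_dmarc.example.com" -Type TXT).Strings -join ""
$record
"Record length: $($record.Length) characters"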

So, shortening the DMARC record to remove two of the three email addresses in each of the RUA and RUF values was the fix we needed. This change was made for two reasons. First, the above error occurred only with emails to Virgin Media, and sometimes an NDR would be received while other times the NDR would fail, but the original email never made it through. Secondly, the two removed email addresses were not actively being checked for DMARC status messages anyway, so there was no harm in removing them from the DMARC record!

The original DMARC record we had this issue with looked like this (xxx.xxxxx representing the client domain):

v=DMARC1; p=quarantine; fo=1;rua=mailto:admin@xxx.xxxxx,mailto:dmarc-rua@dmarc.service.gov.uk,mailto:dmarc@xxx.xxxxx;ruf=mailto:admin@xxx.xxxxx,mailto:dmarc-ruf@dmarc.service.gov.uk,mailto:dmarc@xxx.xxxxx;

Then we changed the record to the following to resolve it:

v=DMARC1; p=quarantine; fo=1;rua=mailto:dmarc-rua@dmarc.service.gov.uk;ruf=mailto:dmarc-ruf@dmarc.service.gov.uk;

Reducing the length of the record means DMARC analytics and forensic email no longer go to mailboxes at the client (one of which did not exist anyway) and only go to the UK government DMARC policy checking service. But most importantly, for a client that has a requirement to respond to citizens’ emails (who could easily be using Virgin Media email addresses), we resolved the issue.

XOORG, Edge and Exchange 2010 Hybrid


So you have found yourself in the position of moving to Exchange Online from a legacy version of Exchange Server, namely Exchange 2010. You are planning to move everyone, or almost everyone, to Exchange Online, and directory synchronization plays a major part (can it play a minor part?) in your plans. So you have opted to go hybrid, and then you discover that there are manual steps to making Exchange 2010 mail flow to Exchange Online work if you have Exchange Edge Servers in use.

So, what do you do? You look online and find a number of references to setting up XOORG, but nothing about what that is and nothing about what you really need to do. And then you found this article!

So, how do you configure Exchange Server 2010 with Edge Servers so that you can run hybrid mode to Exchange Online?

Why You Need These Steps

So you ran the hybrid wizard, and it completed (eventually, if you have a large number of users), and you start your testing only to find that emails never arrive in Office 365 whilst your MX record is still pointing on-premises. After a while you start to get NDRs for your test emails saying “#554 5.4.6 Hop count exceeded – possible mail loop”, and when you look at the diagnostic information for administrators at the bottom of the NDR you see that your email bounces between the hub transport servers and the edge servers and back again. About three or so hours after sending, with the various timeouts involved, the NDR arrives and the message is not delivered.

The problem is that the Edge Server sees the recipient as internal, and not in the cloud: the email has been forwarded to user@tenant.mail.onmicrosoft.com, and Exchange 2010 is authoritative for this namespace. You are missing a configuration that tells the Edge that some emails with certain properties are not internal but really external, and that others (those coming back from the cloud) are the only ones to send inward to the on-premises servers.

So what do you do?

Preparation

Before you run the hybrid wizard you need to do the following. If you have already run the wizard, that is fine; do these steps and then run it again.

  1. Install a digital certificate on all your Edge Servers that is issued by a trusted third party (e.g. GoDaddy, DigiCert and others). The private key for this certificate needs to be on each server as well, but you do not need to allow the key to be exported again.
  2. Enable the certificate for SMTP, but ensure you do not set it as the default certificate. You do this by using Exchange Management Shell to run Get-ExchangeCertificate to get the certificate’s thumbprint value and then running Enable-ExchangeCertificate –Thumbprint <thumbprintvalue> –Services SMTP (see the sketch after this list). At this point you are prompted whether you want to set this certificate as the default certificate. The answer is always No!
  3. If you answered Yes by mistake, run the Enable-ExchangeCertificate cmdlet again, but this time for the certificate thumbprint that was previously the default, to set the default back again. If you change the default you will break EdgeSync and internal mail flow for everyone. You must use the self-signed certificate for EdgeSync and this third-party issued certificate for cloud mail flow, as you cannot use the same certificate for both internal and external traffic.
  4. The certificate needs to be the same across all your Edge Servers.
  5. If you are doing multi-forest hybrid, then the certificate is only the same across all the Edge Servers in one Exchange Organization. The next organization in your multi-forest hybrid needs to use a different certificate for all its Edge Servers.
  6. Then take this same certificate and install it on a single Hub Transport server on-premises. The hybrid wizard cannot see what certificates you have on the Edge Servers, so you need to help the wizard along a bit. Again, this certificate needs enabling for SMTP, but not setting as the default certificate.
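A minimal sketch of steps 2 and 3 in Exchange Management Shell (the thumbprint is a placeholder; take the real value from your own Get-ExchangeCertificate output):

# List the certificates and note the thumbprint of the third-party one
Get-ExchangeCertificate | FL Thumbprint,Subject,Services,IsSelfSigned

# Enable it for SMTP - answer No when prompted to overwrite the default!
Enable-ExchangeCertificate -Thumbprint "0123456789ABCDEF0123456789ABCDEF01234567" -Services SMTP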

Running The Hybrid Wizard

Now you can run the hybrid wizard. The important answers you need to include here are that the hub transport server that you pick must be the one that you placed the certificate on, as you cannot pick the Edge Servers that you will use for mail flow in the wizard. But you will need to enter the IP addresses that your Edge Servers are published on the internet as, and you will need to enter the FQDN of the Edge Servers as well.

Complete the wizard, and then it is time for some manual changes.

Manual Changes

The hybrid wizard will have made a send connector on-premises called “Outbound to Office 365”. You need to change this connector to use the Edge Servers as the source servers. Note that if you run the hybrid wizard again, you might need to reset this value back to the Edge Servers. So once all these required changes are made, remember that running the wizard again could constitute an unexpected change and so should be run with care or out of hours.

Use Set-SendConnector “Outbound to Office 365” -SourceTransportServers <EDGE1>,<EDGE2> and this will cause the send connector settings to replicate to the Edge Server.

Next get a copy of the FQDN value from the receive connector that the hybrid wizard created on the hub transport server. This receive connector will be called “Inbound from Office 365” and will be tied to the public IP ranges of Exchange Online Protection. As your Edge Servers receive the inbound emails from EOP, this receive connector serves no purpose apart from the fact that its settings are the template for your receive connector on the Edge Servers, which the wizard cannot modify. The same receive connector will also have a setting called TlsDomainCapabilities, and the value of this setting will be mail.protection.outlook.com:AcceptOorgProtocol. AcceptOorgProtocol is the XOORG value that you see referenced on the internet, but it is really called AcceptOorgProtocol, and this is the value that allows the Edge Server to distinguish between inbound and outbound mail for your Office 365 tenant.

So on each Edge Server run the following cmdlet in Exchange Management Shell to modify the default receive connector: Set-ReceiveConnector *def* -TlsDomainCapabilities mail.protection.outlook.com:AcceptOorgProtocol -Fqdn <fqdnFromTheInboundReceiveConnectorOnTheHubTransportServer>.

This needs repeating on each Edge Server. The FQDN value ensures that the correct certificate is selected and the TlsDomainCapabilities setting ensures you do not loop email to Office 365 back on-premises again. Other emails using the Default Receive Connector are not affected by this change, apart from now being able to offer the public certificate as well to their inbound partners.
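To confirm the change, a quick check on each Edge Server (a sketch; the FQDN shown will be whatever you copied from the hub transport receive connector):

# Verify the default receive connector now offers the new FQDN and
# accepts the XOORG protocol from EOP
Get-ReceiveConnector *def* | FL Name,Fqdn,TlsDomainCapabilities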

You can now continue with your testing knowing that mail flow is working, so now onto AutoDiscover, clients, free/busy, public folders etc. etc. etc.

OWA and Conditional Access: Inconsistent Error Reports


Here is a good error message. It’s good because I could not find any references to it on Google, and the fault was nothing to do with the error message.


The error says “something went wrong” and “Ref A: a long string of Hex Ref B: AMSEDGE0319 Ref C: Date Time”. The server name in Ref B will change as well. It also says “more details” and if you click that there are no more details, but that text changes to “fewer details”. As far as I have seen, this only appears on Outlook Web Access (OWA).

The error appears under these conditions:

  1. You are enabled for Enterprise Mobility + Security licences in Azure AD
  2. Conditional Access rules are enabled
  3. The device you are on, or the location you are at, etc. (see the specifics of the conditional access rule) mean that you are outside the conditions allowed to access Outlook Web Access
  4. You browsed directly to https://outlook.office.com or https://outlook.office365.com

What you see in the error message is OWA’s way of telling you that you cannot get to that site from where you are: you have failed the conditional access tests.

On the other hand, if you visit the Office 365 portal or MyApps (https://portal.office.com or https://myapps.microsoft.com) and click the Mail icon in your Office 365 menu or on the portal homepage, then you get a different page, presented in the language of your browser (Welsh, in one case I saw).


This page says “you can’t get there from here” and lists the reasons why you have failed conditional access.

If you were on a device or location that allowed you to connect (such as a device managed by Intune and compliant with Intune rules) then going to OWA directly will work, as will going via the menu.

So how can you avoid this odd error message for your users? For this, you need to replace outlook.office.com with your own custom URL. For OWA you can create a DNS CNAME in your domain for (let’s say) webmail that points to outlook.office365.com (it will not work if you point the CNAME to outlook.office.com). Your users can now go to webmail.yourdomain.com. This will redirect the user via Azure AD for login and token generation, and as you are redirected via Azure AD you will always see the proper, language-relevant, conditional access page.
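If your public DNS zone happens to be hosted on Windows DNS Server, a sketch of creating that CNAME is below (webmail and yourdomain.com are placeholders; most public DNS providers offer the equivalent in their web UI):

# Create webmail.yourdomain.com as an alias of outlook.office365.com
Add-DnsServerResourceRecordCName -ZoneName "yourdomain.com" -Name "webmail" -HostNameAlias "outlook.office365.com"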

Administrators, AADConnect and AdminSDHolder Issues (or why are some accounts getting permission-issue errors)


[Scripts updated 5th October 2017 to support updates for Exchange Hybrid Writeback. If you ran earlier versions of these scripts you will need to run them again]

AdminSDHolder is something I come across a lot, but find a lot of admins are unaware of. In brief, any user that is a member of a protected group (e.g. Domain Admins) will find that the permission inheritance and access control lists on their AD object are reset every hour. Michael B. Smith did a nice write-up on this subject here.

AdminSDHolder is an AD object that determines what the permissions for all protected group members need to be. Why this matters with AADConnect and your sync to Azure Active Directory (i.e. the directory used by Office 365) is that any object that the AADConnect service cannot read cannot be synced, and any object that the AADConnect service cannot write to cannot be a target for writeback. This blog post was last updated 18th June 2017 in advance of the release of AADConnect version 1.1.553.0.

For the read permissions this is less of an issue, as default read permissions for every object are part of a standard Active Directory deployment, and so you will find that AdminSDHolder contains this permission and therefore protected objects can be read by AADConnect. This works because Authenticated Users have read permissions to lots of attributes on the AdminSDHolder object under the hidden System container in the domain. Unless your AD permissions are very locked down, or AdminSDHolder permissions have been changed to remove Authenticated Users, you should have no issue in syncing admin accounts, who of course might have dependencies on mailboxes and SharePoint sites etc. and so need to be synced to the cloud.

Writeback though is a different ball game. Unless you have installed AADConnect with Express settings you will find that protected accounts fail during the last stage of the AADConnect sync process. You often see errors in the Export profile for your Active Directory that list your admin accounts. Often the easiest way to fix this is to enable the Inheritance permission check box on the user account and sync again. The changes are now successfully written, but within the hour the inheritance checkbox will be unchecked again and the default permissions as set on AdminSDHolder reapplied to these user accounts. Later changes that need to be written back from the cloud will result in a failure to write back again, and again permission issues will be to blame.

To fix this we just need to ensure that the AdminSDHolder object has the correct permissions needed. This is nothing more than doing what the AADConnect Express wizard will do for you anyway, but if you don’t do the Express wizard I don’t think I have seen what you should do documented anywhere – so this is the first (maybe).

Often if you don’t run Express settings you are interested in the principle of least privilege, and so the rest of this blog post will outline what you will see in your Active Directory and what to do to ensure protected accounts will always sync and writeback in the Azure Active Directory sync engine. I covered the permissions to enable various types of writeback in a different blog post, but the scripts in that post never added the correct write permissions to AdminSDHolder, so this post will cover what to do for your protected accounts.

First, take a look at any protected account (i.e. one that is a member of Domain Admins).

You will see in the Advanced permissions dialog that there is an “Enable Inheritance” button (or an unchecked check box in older versions of Active Directory). You will also notice that all the permissions under the “Inherited From” column read “None” – that is, there are no inherited permissions. You will also see that, if Express settings have been run for your AADConnect sync service, an access control entry for the AADConnect service account will be listed – here this is MSOL_924f68d9ff1f (yours will be different if it exists) and has read/write for everything. This is not least privilege! If you have run the sync engine previously on different servers and later removed them (as the sync engine can only run on one server per AAD tenant, excluding staging servers) then you might see more than one MSOL account. The description field of the account will show what server it was created on.

If you compare your above admin account to a non-protected account you will see inheritance can be disabled and that the Inherited From column lists the source of the permission inheritance.

Compare the access control entries (ACEs) on the account to the list of ACEs on the AdminSDHolder object. AdminSDHolder can be found at CN=AdminSDHolder,CN=System,DC=domain,DC=local. You should find that the protected accounts match AdminSDHolder, or at least will within the hour, as someone could have just changed something.

Add a permission ACE to AdminSDHolder and it will appear on each protected account within an hour; remove an ACE and it will go within the hour as well. So you could, for example, remove the MSOL_ account(s) from older ADSync deployments and tidy up your permissions.
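For example, a sketch of removing a stale sync account’s ACEs, in the same dsacls style as the scripts later in this post (the account name is a placeholder; check the account’s description field first to confirm it is no longer in use):

# Remove all ACEs for a stale MSOL_ sync account from AdminSDHolder
$AdminSDHolder = "CN=AdminSDHolder,CN=System,DC=contoso,DC=com"
$cmd = "dsacls.exe '$AdminSDHolder' /R 'domain\MSOL_924f68d9ff1f'"
Invoke-Expression $cmd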


If I add the relevant ACEs to AdminSDHolder for the writeback permissions, then within the hour, and for syncs that happen after that time, the errors for writeback in the sync management console will go away. Note though that AdminSDHolder is per domain, so if you are syncing more than one domain you need to set these permissions in each domain.

To script these permissions, run the following in PowerShell to update AD permissions for the different hybrid writeback scenarios that you are interested in implementing.

Finding All Your AdminSDHolder Affected Users

The following PowerShell will list all the users in your domain who have an adminCount greater than 0 (typically 1), which means they are impacted by AdminSDHolder restrictions. The changes below, made directly on AdminSDHolder, will affect these users, as their permissions will be updated to allow writeback from Azure AD.

Get-ADUser -Filter {adminCount -gt 0} -Properties adminCount -ResultSetSize $null | FT DistinguishedName,Enabled,SamAccountName

SourceAnchor Writeback

This setting is needed for all installations since version 1.1.553.0.

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is an account usually in the form of AAD_number or MSOL_number].
$AdminSDHolder = "CN=AdminSDHolder,CN=System,DC=contoso,DC=com"

$cmd = "dsacls.exe '$AdminSDHolder' /G '`"$accountName`":WP;ms-ds-consistencyGuid'"
Invoke-Expression $cmd | Out-Null

Password Writeback

The following PowerShell will modify the permissions on the AdminSDHolder object so that the Self Service Password Reset (SSPR) function can work against protected accounts. Note you need to change the DC values in the script for it to function against your domain(s).

Note that if you implement this, I recommend that you use version 1.1.553 or later, as that version restricts rogue Azure AD admins from resetting other Active Directory admins’ passwords and then taking ownership of the Active Directory account. Often Azure AD admins have admin rights in AD, and so this was always possible independent of AADConnect, but versions of AADConnect prior to 1.1.553 would allow an Azure AD admin to reset a protected AD account that they did not own.

To determine the account name that permissions must be granted to, open the Synchronization Service Manager on the sync server, click Connectors and double-click the connector to the domain you are updating. Under the Connect to Active Directory Forest item you will see the Forest Name and User Name. The User Name is the name of the account you need in the script.


$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is an account usually in the form of AAD_number or MSOL_number].
$AdminSDHolder = "CN=AdminSDHolder,CN=System,DC=contoso,DC=com"

$cmd = "dsacls.exe '$AdminSDHolder' /G '`"$accountName`":CA;`"Reset Password`"'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls.exe '$AdminSDHolder' /G '`"$accountName`":CA;`"Change Password`"'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls.exe '$AdminSDHolder' /G '`"$accountName`":WP;lockoutTime'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls.exe '$AdminSDHolder' /G '`"$accountName`":WP;pwdLastSet'"
Invoke-Expression $cmd | Out-Null

Note: Take care with this permission. Though Express Mode does this, there is a possible scenario where writeback of an AD admin password change could be misused. Versions of AADConnect released in 2018 specifically block this functionality where the user is changing a different admin user’s password. These permissions allow the current user to make this change to their own account.

Exchange Hybrid Mode Writeback

The below script will set the permissions required for the service account that AADSync uses. Note that if Express mode has been used, then a group called MSOL_AD_Sync_RichCoexistence will exist that holds these permissions, rather than them being assigned directly to the sync account. You could therefore change the below to grant the permissions to MSOL_AD_Sync_RichCoexistence rather than the AAD_ or MSOL_ account and achieve the same results, with the benefit that a future change of the MSOL_ or AAD_ account is covered, because the permissions were granted via a group.

The msDS-ExternalDirectoryObjectID permission in the set is part of the Exchange Server 2016 (and maybe later Exchange Server 2013 CU) schema updates. Newer documentation on AAD Connect synchronized attributes already has this attribute listed, for example in Azure AD Connect sync: Attributes synchronized to Azure Active Directory

$accountName = "domain\aad_account"
$AdminSDHolder = "CN=AdminSDHolder,CN=System,DC=contoso,DC=com"

$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;proxyAddresses'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msExchUCVoiceMailSettings'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msExchUserHoldPolicies'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msExchArchiveStatus'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msExchSafeSendersHash'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msExchBlockedSendersHash'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msExchSafeRecipientsHash'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msDS-ExternalDirectoryObjectID'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;publicDelegates'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$AdminSDHolder' /G '`"$accountName`":WP;msExchDelegateLinkList'"
Invoke-Expression $cmd | Out-Null

Once these scripts are run against the AdminSDHolder object and you wait an hour, the permissions will be applied to your protected accounts. Then, within 30 minutes (based on the default sync time), any admin account that is failing to get cloud settings written back to Active Directory due to permission-issue errors will automatically be resolved.
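Rather than waiting for the scheduler, you can trigger a delta sync on the AADConnect server to test the fix (a sketch using the ADSync module that ships with AADConnect):

# Trigger an immediate delta synchronization
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta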

Exchange Edge Server and Common Attachment Blocking In Exchange Online Protection


Both the Exchange Server Edge role and Exchange Online Protection have an attachment filtering policy. The default in Edge Server is quite long, and the default in EOP is quite short. There are also a few values that are common to both.

So, how do you merge the lists so that your Edge Server attachment filtering policy is copied to Exchange Online in advance of changing your MX record to EOP?

You run:

Set-MalwareFilterPolicy Default -FileTypes ade,adp,cpl,app,bas,asx,bat,chm,cmd,com,crt,csh,exe,fxp,hlp,hta,inf,ins,isp,js,jse,ksh,lnk,mda,mdb,mde,mdt,mdw,mdz,msc,msi,msp,mst,ops,pcd,pif,prf,prg,ps1,ps11,ps11xml,ps1xml,ps2,ps2xml,psc1,psc2,reg,scf,scr,sct,shb,shs,url,vb,vbe,vbs,wsc,wsf,wsh,xnk,ace,ani,docm,jar

This takes both the Edge Server default list and the EOP default list, minus the duplicate values, and adds them to EOP. If you have a different custom list, then use the following PowerShell to get your two lists and then use the above cmdlet (with “Default” being the name of the policy) to update the list in the cloud:

Edge Server: Get-AttachmentFilterEntry

EOP: $variable = Get-MalwareFilterPolicy Default
$variable.FileTypes
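If your Edge list is customised, a sketch of merging it into EOP without retyping follows (the file path is a placeholder, and the file needs copying between the machines, as the Edge and Exchange Online shells are separate sessions):

# On the Edge Server: export file-name entries as bare extensions
Get-AttachmentFilterEntry | where {$_.Type -eq "FileName"} | foreach {$_.Name -replace '^\*\.',''} | Set-Content C:\Temp\EdgeFileTypes.txt

# In Exchange Online PowerShell: merge, de-duplicate and update EOP
$merged = (Get-Content C:\Temp\EdgeFileTypes.txt) + (Get-MalwareFilterPolicy Default).FileTypes | Sort-Object -Unique
Set-MalwareFilterPolicy Default -FileTypes $merged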

Exchange Online Archive–Counting Archives


If you are using Exchange Online Archive and want to get a count of the number of users with an archive, or a list of the users with an archive, then the following PowerShell scripts will give you this info:

List all users with an Exchange Online Archive:

Get-MailUser -ResultSize Unlimited | where {$_.ArchiveName -ilike "In-Place Archive*"}

Count all users with an Exchange Online Archive:

(Get-MailUser -ResultSize Unlimited | where {$_.ArchiveName -ilike "In-Place Archive*"}).Count

Both of these PowerShell cmdlets need to be run in Exchange Online via Remote PowerShell.

Exchange Server and Missing Root Certificates


I came across an issue with a client’s Exchange Server deployment today that is not well documented – or rather it is, but you need to know where to look. So I thought I would document the troubleshooting steps and the fix here.

We specifically came across this error when testing Free/Busy for an Office 365 migration, though it could happen for a variety of reasons. Free/Busy and other lookups in a cross-forest Exchange Server deployment require a working organization configuration and this was failing. Running Test-FederationTrust (a prerequisite of the organization relationship) in verbose mode (add -Verbose to the end) returned the following:

Unable to retrieve federation metadata from the security token service. Reason: Microsoft.Exchange.Management.FederationProvisioning.FederationMetadataException: Unable to access the Federation Metadata document from the federation partner. Detailed information: “The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.”.

The final result of the test will also show two errors for “Unable to retrieve federation metadata from the security token service.” and “Failed to request delegation token.”

The last part of the verbose error is the clue here. The server in question is unable to make an SSL/TLS connection to the endpoint that the federation trust needs to reach to get the federation trust metadata. That endpoint is listed right at the start of the Verbose output. It reads:

VERBOSE: [16:53:08.306 GMT] Test-FederationTrust : Requesting Federation Metadata from
https://nexus.microsoftonline-p.com/FederationMetadata/2006-12/FederationMetadata.xml.

Now that we have a URL and an error message, check that the URL is reachable from each of your Exchange Servers. At my client today we found one server could not reach this endpoint without an SSL error appearing in the browser. The problem was that the certificate the endpoint is secured with is issued by the Baltimore CyberTrust Root certificate – one that Microsoft uses for lots of services – but the root certificate was not installed on that machine. Lots of root certs were missing from that machine, as it had never had a root certificate update applied to it.
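A quick way to test from each server is to request the metadata document directly; for example (PowerShell 3.0 or later for Invoke-WebRequest; on older servers just browse to the URL):

# This fails with an SSL/TLS trust error if the root certificate is missing
Invoke-WebRequest -Uri "https://nexus.microsoftonline-p.com/FederationMetadata/2006-12/FederationMetadata.xml" -UseBasicParsing | Select StatusCode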

We installed the latest Root Certificate Update, and then the federation trust worked and free/busy etc. (MailTips, cross-forest message tracking etc.) all worked fine.

Qualifications in Exchange Signatures


In a recent project I was working with iQ.Suite from GBS, specifically the component of this software that adds signatures to emails. The client is an international organization with users in different geographies, and we needed to accommodate the users’ qualifications in their email signatures.

The problem with this is that in Germany qualifications are written in front of the name, in the USA at the end, and in other countries at the start and the end. We were doing a Notes to Exchange migration, and in Notes the iQ.Suite signature software read data from Notes that was originally pulled from Active Directory, and so the client had placed the qualifications in the DisplayName field in Active Directory.

But when we migrated to Exchange Server, the Global Address List listed the users’ DisplayName, and so the German users were all listed together with “Dipl” as the first characters of their name. The name the email came from was also written like this. The signature worked, but the other changes that became apparent meant we had to work out a different way to look at this problem.

So rather than using DisplayName for the user’s name and qualifications, we used personalTitle in Active Directory to store anything needed before their name (Dipl in the above German example, with Prof or Dr being English examples) and the generationQualifier Active Directory attribute to store any string that follows the user’s DisplayName (such as Jr in the USA, or BSc for qualifications etc.).

In iQ.Suite we created a signature that looked like the following. This has a conditional [COND] entry for personalTitle, displayName and generationQualifier. That is, if each of these is present, show the displayName with personalTitle before it and generationQualifier after it. If the user does not have values for these fields, they are not shown. The [COND] control is documented in iQ.Suite.

[COND]personalTitle;[VAR]personalTitle[/VAR] [/COND][COND]displayName;[VAR]displayName[/VAR][/COND][COND]generationQualifier; [VAR]generationQualifier[/VAR][/COND]

What was not so well documented, and why I wanted to write this blog entry, is that the personalTitle and generationQualifier attributes are not stored in the Global Catalog and so were missing from the users’ signatures. In the multi-domain deployment we had at the client, iQ.Suite read the personalTitle, displayName and generationQualifier Active Directory attributes from the Global Catalog, as Exchange was installed in a resource domain and the users in separate domains; unless the attribute was pushed to the Global Catalog it was not seen by iQ.Suite.

To promote an attribute to be visible in the Global Catalog you need to open the Schema Management MMC snap-in, find the attributes of question and tick the Replicate this attribute to the Global Catalog field. This is outlined in https://technet.microsoft.com/en-us/library/cc737521(v=ws.10).aspx.
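If you prefer PowerShell to the Schema Management snap-in, a sketch of the same change is below (run as a Schema Admin, and treat schema changes with care; Personal-Title and Generation-Qualifier are the schema object names for these two attributes):

# Mark the attributes as part of the partial attribute set so that they
# replicate to the Global Catalog
Import-Module ActiveDirectory
$schemaNC = (Get-ADRootDSE).schemaNamingContext
Set-ADObject "CN=Personal-Title,$schemaNC" -Replace @{isMemberOfPartialAttributeSet=$true}
Set-ADObject "CN=Generation-Qualifier,$schemaNC" -Replace @{isMemberOfPartialAttributeSet=$true}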

Configuring Sync and Writeback Permissions in Active Directory for Azure Active Directory Sync


[This blog post was last updated 5th October 2017 – added support to Exchange Hybrid for msExchDelegateLinkList attribute which was announced at Microsoft Ignite 2017 for the support of keeping auto-mapping working cross on-premises and the cloud]

[Updated 18th June 2017 in advance of the release of AADConnect version 1.1.553.0. This post contains updates to the below scripts to include the latest attributes synced back to on-premises including publicDelegates, which is used for supporting bi-directional sync for “Send on Behalf” of permissions in Exchange Online/Exchange Server hybrid writeback scenarios]

[Update March 2017 – added another blog post on using the below to fix permission-issue errors on admin and other protected accounts at http://c7solutions.com/2017/03/administrators-aadconnect-and-adminsdholder-issues]

Azure Active Directory has long been the read-only cousin of Active Directory for those Office 365 and Azure users who sync their directory from Active Directory to Azure Active Directory, apart from eight attributes for Exchange Server hybrid mode. Not any more: Azure Active Directory writeback is now available. This enables objects to be mastered or changed in Azure Active Directory and written back to on-premises Active Directory.

This writeback includes:

  • Devices that can be enrolled with Office 365 MDM or Intune, which will allow login to AD FS controlled resources based on user and the device they are on
  • “Modern Groups” in Office 365 can be written back to on-premises Exchange Server 2013 CU8 or later hybrid mode and appear as mail enabled distribution lists on premises. Does not require AAD Premium licences
  • Users can change their passwords via the login page or user settings in Office 365 and have that password written back to on-premises Active Directory.
  • Exchange Server hybrid writeback is the classic writeback from Azure AD and, apart from Group Writeback, is the only one of these writebacks that does not require Azure AD Premium licences.
  • User writeback from Azure AD (i.e. users made in Office 365 in the cloud for example) to on-premises Active Directory
  • Password Hash Sync (this is not really writeback, but it’s the only permission needed by default for forward sync, so it is added here)
  • Windows 10 devices for “Azure AD Domain Join” functionality

All of these features require AADConnect and not any of the earlier versions. You can add all these writeback functions from the AADConnect setup wizard, and if you have used Custom mode then you will need to implement the following permissions.

In all the below sections you need to grant permission to the connector account. You can find the connector account for your Active Directory forest from the Synchronization Service program > Connectors > double-click your domain > select Connect to Active Directory Forest. The account listed here is the connector account you need to grant permissions to.

SourceAnchor Writeback

The objectGuid value in Active Directory is used as the source for the attribute that keys your on-premises object to your synced cloud object – in AAD sync parlance this is known as the SourceAnchor. For users with (typically) multi-forest deployments, or plans for a forest migration, objectGuid is a problem, as it changes when an object moves to a different forest. If you set up AADConnect version 1.1.553.0 or later you can opt to change from objectGuid to a new source anchor attribute known as ms-ds-consistencyGuid. To be able to use this new feature, the AADConnect connector account needs to be able to read objectGUID and then write it back to ms-ds-consistencyGuid. The read permissions are typically available to the connector account without doing anything special, and if AADConnect is installed in Express Mode it will get the write permissions it needs, but as with the rest of this blog, if you are not using Express Mode you need to grant the permissions manually, so write permissions are needed on the ms-ds-consistencyGuid attribute. This can be done with this script:


$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$ForestDN = "DC=contoso,DC=com"

$cmd = "dsacls '$ForestDN' /I:S /G '`"$accountName`":WP;ms-ds-consistencyGuid;user'"
Invoke-Expression $cmd | Out-Null

Note that if you use ms-ds-consistencyGuid then there are changes required on your ADFS deployment as well. The Issuance Transform Rules for the Office 365 Relying Party Trust contain a rule that specifies the ImmutableID (aka AADConnect SourceAnchor) that the user will be identified as for login. By default this is set to objectGUID, and if you use AADConnect to set up ADFS for you then the application will update the rule. But if you set up ADFS yourself then you need to update the rule.

Issuance Transform Rules

When Office 365 is configured to federate a domain (use ADFS for authentication of that domain and not Azure AD), the following claim rules, which exist out of the box, need to be adjusted. This is to support the use of ms-ds-consistencyGuid as the immutable ID.

ADFS Management UI > Trust Relationships > Relying Party Trusts

Select Microsoft Office 365 Identity Platform > click Edit Claim Rules

You get two or three rules listed here. You get three rules if you used the -SupportMultipleDomain switch in Convert-MSOLDomainToFederated.
Rule 1:
Change objectGUID to ms-DS-ConsistencyGUID
Rule Was:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
=> issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/UPN", "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"), query = "samAccountName={0};userPrincipalName,objectGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);
New Value:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
=> issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/UPN", "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"), query = "samAccountName={0};userPrincipalName,ms-DS-ConsistencyGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);

Preparing for Device Writeback

If you do not have a 2012 R2 or later domain controller then you need to update the schema of your forest. Do this by getting a Windows Server 2012 R2 ISO image and mounting it as a drive. Copy the support/adprep folder from this image or DVD to a 64 bit domain member in the same site as the Schema Master. Then run adprep /forestprep from an admin cmd prompt when logged in as a Schema Admin. The domain member needs to be a 64 bit domain joined machine for adprep.exe to run.

Wait for the schema changes to replicate around the network.

Import the cmdlets needed to configure your Active Directory for writeback by running Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1' from an administrative PowerShell session. You need Azure AD Global Admin and Enterprise Admin permissions for Azure and the local AD forest respectively. The cmdlets for this are obtained by running the Azure AD Connect tool.


$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
Initialize-ADSyncDeviceWriteBack -AdConnectorAccount $accountName -DomainName contoso.com #[domain where devices will be created].

This will create the “Device Registration Services” node in the Configuration partition of your forest.


To see this, open Active Directory Sites and Services and from the View menu select Show Services Node. Also in the domain partition you should now see an OU called RegisteredDevices. The AADSync account now has permissions to write objects to this container as well.

In Azure AD Connect, if you get the error “This feature is disabled because there is no eligible forest with appropriate permissions for device writeback” then you need to complete the steps in this section and click Previous in the AADConnect wizard to go back to the “Connect your directories” page and then you can click Next to return to the “Optional features” page. This time the Device Writeback option will not be greyed out.

Device Writeback needs a 2012 R2 or later AD FS server and WAP to make use of the device info in Active Directory (for example, conditional access to resources based on the user and the device they are using). Once Device Writeback is prepared for with these cmdlets, and enabled on the AADConnect Synchronization Options page, device objects will appear in the RegisteredDevices container in Active Directory.


Adding the Display Name column in Active Directory Users and Computers tells you the device name. The registered owner and registered users of the device are available to view, but as they are SID values they are not really readable.

If you have multiple forests, then you need to add the SCP record for the tenant name in each separate forest. The above will do it for the forest AADConnect is installed in, and the below script can be used to add the SCP to other forests:

$verifiedDomain = "contoso.com"  # Replace this with any of your verified domain names in Azure AD
$tenantID = "27f998bf-86f2-41bf-91ab-2d7ab011df35"  # Replace this with your tenant ID
$configNC = "CN=Configuration,DC=corp,DC=contoso,DC=com"  # Replace this with your AD configuration naming context
$de = New-Object System.DirectoryServices.DirectoryEntry
$de.Path = "LDAP://CN=Services," + $configNC
$deDRC = $de.Children.Add("CN=Device Registration Configuration", "container")
$deDRC.CommitChanges()
$deSCP = $deDRC.Children.Add("CN=62a0ff2e-97b9-4513-943f-0d221bd30080", "serviceConnectionPoint")
$deSCP.Properties["keywords"].Add("azureADName:" + $verifiedDomain)
$deSCP.Properties["keywords"].Add("azureADId:" + $tenantID)
$deSCP.CommitChanges()

Preparing for Group Writeback

Writing Office 365 “Modern Groups” back to Active Directory on-premises requires Exchange Server 2013 CU8 or later schema updates and servers installed. To create the OU and permissions required for Group Writeback you need to do the following.

Import the cmdlets needed to configure your Active Directory for writeback by running Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1' from an administrative PowerShell session. You need Domain Admin permissions for the domain in the local AD forest that you will write back groups to. The cmdlets for this are obtained by running the Azure AD Connect tool.

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$cloudGroupOU = "OU=CloudGroups,DC=contoso,DC=com"
Initialize-ADSyncGroupWriteBack -AdConnectorAccount $accountName -GroupWriteBackContainerDN $cloudGroupOU

Once these cmdlets are run, the AADSync account will have permissions to write objects to this OU. You can view the permissions in Active Directory Users and Computers for this OU if you enable Advanced Features in that program. There should be a permission entry for this account that is not inherited from the parent OUs.

At the time of writing, the distribution list that is created on writeback from Azure AD will not appear in the Global Address List in Outlook etc., or allow on-premises mailboxes to send to these internal-only cloud-based groups. To fix mail flow you need to create a new subdomain, update public DNS and add send connectors to hybrid Exchange Server. This is all outlined in https://technet.microsoft.com/en-us/library/mt668829(v=exchg.150).aspx. This ensures that on-premises mailboxes can deliver to the groups as internal senders without requiring external senders to be enabled on the group. To add the group to the Global Address List you need to run Update-AddressList in Exchange Server. Once group writeback is prepared for using the cmdlets here, and has been enabled on the AADConnect Synchronization Options page, you should see the groups appearing in the selected OU.


And you should find that on-premises users can send email to these groups as well.

Preparing for Password Writeback

The option for users to change their passwords in the cloud and have them written back to on-premises (with multifactor authentication and proof of the right to change the password) is also available in Office 365 / Azure AD with the Premium Azure Active Directory or Enterprise Mobility Pack licence.

To enable password writeback for AADConnect you need to enable the Password Writeback option in AADConnect synchronization settings and then run the following three PowerShell cmdlets on the AADSync server:


Get-ADSyncConnector | fl name,AADPasswordResetConfiguration
Get-ADSyncAADPasswordResetConfiguration -Connector "contoso.onmicrosoft.com - AAD"
Set-ADSyncAADPasswordResetConfiguration -Connector "contoso.onmicrosoft.com - AAD" -Enable $true

The first of these cmdlets lists the ADSync connectors and the name and password reset state of the connector. You need the name of the AAD connector. The middle cmdlet tells you the state of password writeback on that connector and the last cmdlet enables it if needed. The name of the connector is required in these last two cmdlets.

To set the permissions on-premises for the passwords to be written back the following script is needed:

$passwordOU = "DC=contoso,DC=com" #[you can scope this down to a specific OU]
$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":CA;`"Reset Password`";user'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":CA;`"Change Password`";user'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":WP;lockoutTime;user'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":WP;pwdLastSet;user'"
Invoke-Expression $cmd | Out-Null

Finally, you need to run the above once per domain; a sketch for looping over every domain in the forest follows.
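As a hedged sketch only (it assumes the Active Directory PowerShell module is available and that the same AADConnect account is used in every domain), the per-domain run could be scripted like this:

Import-Module ActiveDirectory

$accountName = "domain\aad_account" #[assumption: the same account has rights in every domain]
foreach ($domain in (Get-ADForest).Domains) {
    # Resolve the domain head DN for each domain; scope this down to an OU if preferred
    $passwordOU = (Get-ADDomain -Identity $domain).DistinguishedName

    $cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":CA;`"Reset Password`";user'"
    Invoke-Expression $cmd | Out-Null
    $cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":CA;`"Change Password`";user'"
    Invoke-Expression $cmd | Out-Null
    $cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":WP;lockoutTime;user'"
    Invoke-Expression $cmd | Out-Null
    $cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":WP;pwdLastSet;user'"
    Invoke-Expression $cmd | Out-Null
}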

Preparing for Exchange Server Hybrid Writeback

Hybrid mode in Exchange Server requires the writeback of a number of attributes from Azure AD to Active Directory. The list of attributes written back is found here. The following script will set these permissions for you in the OU you select (or, as shown, at the root of the domain). The DirSync tool used to do all this permissioning for you, but the AADSync tool does not, so scripts such as this are required. This script sets a lot of permissions on these attributes, but for clarity when running the script its output is sent to Null. Remove the “| Out-Null” from the script to see the changes as they occur (the script then also takes a lot longer to run).

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$HybridOU = "DC=contoso,DC=com"

#Object type: user
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUCVoiceMailSettings;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUserHoldPolicies;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchArchiveStatus;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeSendersHash;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchBlockedSendersHash;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeRecipientsHash;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msDS-ExternalDirectoryObjectID;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;publicDelegates;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchDelegateLinkList;user'"
Invoke-Expression $cmd | Out-Null

#Object type: iNetOrgPerson
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUCVoiceMailSettings;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUserHoldPolicies;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchArchiveStatus;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeSendersHash;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchBlockedSendersHash;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeRecipientsHash;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msDS-ExternalDirectoryObjectID;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;publicDelegates;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchDelegateLinkList;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null

#Object type: group
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;group'"
Invoke-Expression $cmd | Out-Null

#Object type: contact
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;contact'"
Invoke-Expression $cmd | Out-Null

Finally you need to run the above once per domain.

Preparing for User Writeback

[This functionality is not in the current builds of AADConnect]

Currently in preview at the time of writing, you are able to create users in Azure Active Directory (“cloud users” as Office 365 would call them) and write them back to the on-premises Active Directory. The user’s password is not written back and so needs changing before the user can log in on-premises.

To prepare the on-premises Active Directory for the writeback of user objects you need to run the script below. It is contained in AdSyncPrep.psm1, which is installed as part of Azure AD Connect (Azure AD Connect installs Azure AD Sync, which is needed to do the writeback). To load the AdSyncPrep.psm1 module into PowerShell run Import-Module ‘C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1’ from an administrative PowerShell session.

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is an account usually in the form of AAD_number].
$cloudUserOU = "OU=CloudUsers,DC=contoso,DC=com"
Initialize-ADSyncUserWriteBack -AdConnectorAccount $accountName -UserWriteBackContainerDN $cloudUserOU

Once the next AADSync occurs you should see users in the OU used above that match the cloud users in Office 365 as shown:

image

Prepare for Password Hash Sync

This set of PowerShell commands ensures that the AADConnect account has the correct permissions to read password hashes from Active Directory when they change, so that the service can sync them to the cloud. You need this permission whenever you enable Password Hash Sync (which could be in conjunction with another authentication method as well).

$DomainDN = "DC=contoso,DC=com" 
$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].

$cmd = "dsacls.exe '$DomainDN' /G '`"$accountName`":CA;`"Replicating Directory Changes`";'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$DomainDN' /G '`"$accountName`":CA;`"Replicating Directory Changes All`";'"
Invoke-Expression $cmd | Out-Null

Prepare for Windows 10 Registered Device Writeback Sync

Windows 10 devices that are joined to your domain can be written to Azure Active Directory as registered devices, so that conditional access rules based on device ownership can be enforced. To do this you need to import the AdSyncPrep.psm1 module, which supports the following two additional cmdlets to prepare your Active Directory for Windows 10 device sync:

CD "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep"
Import-Module .\AdSyncPrep.psm1
Initialize-ADSyncDomainJoinedComputerSync
Initialize-ADSyncNGCKeysWriteBack

These cmdlets are run as follows:

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$azureAdCreds = Get-Credential #[Azure Active Directory administrator account]

CD "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep"
Import-Module .\AdSyncPrep.psm1
Initialize-ADSyncDomainJoinedComputerSync -AdConnectorAccount $accountName -AzureADCredentials $azureAdCreds 
Initialize-ADSyncNGCKeysWriteBack -AdConnectorAccount $accountName 

To successfully run these cmdlets you need the latest version of the Microsoft Online PowerShell modules installed (the V1.1 versions, not the V2.0 preview). You can get these from https://www.powershellgallery.com/packages/MSOnline (which in turn needs the Microsoft Online Services Sign-In Assistant from https://www.microsoft.com/en-us/download/details.aspx?id=41950 and the Windows Management Framework v5 from https://www.microsoft.com/en-us/download/details.aspx?id=50395). If you get errors from the above, make sure you have the correct versions installed and try the scripts again.
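A quick way to check what is installed is the following hedged sketch (module name as published on the PowerShell Gallery):

# List the installed MSOnline module versions; a V1.1.x version is needed here
Get-Module -ListAvailable -Name MSOnline | Select-Object Name, Version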

Once complete, open Active Directory Sites and Services and, from the View menu, select Show Services Node. You should then see the GUID of your domain under the Device Registration Configuration container.

image

Unable To Send Exchange Quota Message

Posted on Leave a commentPosted in 2010, 2013, exchange, IAmMEC

In Exchange 2013 you can sometimes see the following event log error (MSExchange Store Driver Submission, ID 1012):

The store driver failed to submit event <id> mailbox <guid> MDB <database guid> and couldn’t generate an NDR due to exception Microsoft.Exchange.MailboxTransport.StoreDriverCommon.InvalidSenderException
   at Microsoft.Exchange.MailboxTransport.Shared.SubmissionItem.SubmissionItemUtils.CopySenderTo(SubmissionItemBase submissionItem, TransportMailItem message)
   at Microsoft.Exchange.MailboxTransport.Submission.StoreDriverSubmission.MailItemSubmitter.GenerateNdrMailItem()
   at Microsoft.Exchange.MailboxTransport.Submission.StoreDriverSubmission.MailItemSubmitter.<>c__DisplayClass1.<FailedSubmissionNdrWorker>b__0()
   at Microsoft.Exchange.MailboxTransport.StoreDriverCommon.StorageExceptionHandler.RunUnderTableBasedExceptionHandler(IMessageConverter converter, StoreDriverDelegate workerFunction).

And this will be preceded by the following event log warning (MSexchangeIS, ID 1077):

The mailbox <guid> on database <database guid> is approaching its storage limit. A notification has been sent to the user. This warning will not be sent again for at least twenty four hours.

The mailbox in both events is the same, and the issue occurs for mailboxes that have moved to Exchange Server 2013 from Exchange Server 2010 and are close to their mailbox quota. To fix the issue move the mailbox to a different database. The easiest way to do this is New-MoveRequest <guid> using the same GUID.

If you have lots of these then this is a little more time consuming, unless you get PowerShell to the rescue.

The following two commands will query the last seven days of the event logs for MSExchangeIS events with ID 1077, get the event log message (which contains the mailbox GUID), manipulate the message string and generate a text file of just the mailbox GUIDs. The second command then runs New-MoveRequest for each mailbox listed in the text file.

Get-WinEvent -ComputerName PC1 -ProviderName MSExchangeIS | Where-Object {$_.Id -eq 1077 -and $_.TimeCreated -gt [DateTime]::Now.AddDays(-7).Date} | ForEach-Object {$_.Message.Substring(12,36)} | Out-File nearquota.txt

and then

Get-Content .\nearquota.txt | foreach {New-MoveRequest -Identity $_}

Make sure though that your Application event log is large enough to store more than seven days of events, and then run these commands per server every seven days until the issue goes away (or, over the course of say a year, move all mailboxes to different databases, which fixes it as well).

Using Office 365 PST Ingestion Service

Posted on 6 CommentsPosted in exchange, exchange online, Office 365, pst, sharepoint

[Updated 10th Nov 2015 with tips on managing bad items in PST files]
It’s been in private preview for a while, and recently entered a free preview for any Office 365 subscriber to try. So I gave it a go and have the following tips and guidance.

Preparing to upload PST files

You can upload PST files in situ from their current location on the network; there is no requirement to first copy them to a new folder for uploading. Doing this requires a few things to be considered, not least running the AzCopy process with an account that can access all the content.

AzCopy is the command line tool used to copy your PST files to Azure in advance of importing them into Office 365 mailboxes. You do not need an Azure subscription to do this, and until September 2015 this is a free service. To do this in-situ upload of PST files, without first copying them to a local network staging location, you should include the /Pattern: parameter in AzCopy. This is documented in the AzCopy help but not currently in the PST Ingestion help on TechNet (https://technet.microsoft.com/library/ms.o365.cc.IngestionHelp.aspx?v=15.1.166.0&l=1). Using AzCopy without /Pattern will upload everything in the source path; as this is a PST ingestion process, you only want *.pst as the /Pattern. When this ingestion process starts to include uploads for SharePoint, /Pattern will of course not be as useful a value to include.

In the following example, AzCopy is reading from a folder called “C:\Shares\Users” (/source:) and looking in all subdirectories (/S) and only uploading *.pst files (/Pattern:”*.pst”).


azcopy /source:"C:\Shares\Users" /Dest:https://uniqueurl.blob.core.windows.net/ingestiondata/20150101 /DestKey:uniquekey /Pattern:"*.pst" /S /V:"c:\temp\pstIngestion20150101.log"

The data is uploaded to a folder called ingestiondata/20150101 in your Azure storage blob for the PST Ingestion process (notice there is no space after the URL and before ingestiondata, as shown in TechNet). Each file is uploaded to a subfolder of this folder that matches the folder it is located in at the source. For example, if the following folder structure existed:

image

Then in Azure storage the structure would be like the following:

ingestiondata/20150101/Jenny/Outlook Files/2009/jenny2009.pst

ingestiondata/20150101/Paul/archive.pst

ingestiondata/20150101/Simon/PST Files/2009/SimonArchive.pst

ingestiondata/20150101/Simon/Archive2011.pst

Notice that the folder structure underneath the /Source: path is duplicated to Azure and, for a real world scenario, notice that Simon has two PST files in different folders. The /Pattern parameter of AzCopy will find both even though they may not be where you expect them to be. The 20150101 value is just a unique value that I have used (it’s a date) that I would change for different uploads, meaning that different uploads never clash with an existing upload. TechNet suggests a name that represents the file share you set as the source value, so that two uploads from two sources cannot overwrite each other. So in my example I might do an upload on a different day and use a different date value, or I could use CUserShares as a way to represent the local upload and FileServerHome to represent \\fileserver\home. If I used FileServer/Home (changing \ for /) then I am creating additional subdirectories in Azure storage and this needs to be taken into account.

Preparing the PST Mapping File

Once the upload is complete (note that this is best done overnight as it maximises bandwidth use), you have 30 days to import the files from Azure into your mailboxes. To do this you need to create a CSV file like the following:

Workload,FilePath,Name,Mailbox,IsArchive,TargetRootFolder,SPFileContainer,SPManifestContainer,SPSiteUrl
Exchange,20150101/Jenny/Outlook Files/2009,jenny2009.pst,jenny@contoso.com,FALSE,Archive_jenny2009,,,
Exchange,20150101/Paul,archive.pst,paul@contoso.com,FALSE,Archive_Archive,,,
Exchange,20150101/Simon/PST Files/2009,SimonArchive.pst,simon@contoso.com,FALSE,Archive_SimonArchive,,,
Exchange,20150101/Simon,Archive2011.pst,simon@contoso.com,FALSE,Archive_Archive2011,,,

In Excel, it would look as follows:

image

This has a few important elements in it. Mainly, the Name value (the PST filename) is case sensitive, which is not documented in TechNet at this time. I guess the FilePath is as well, but I did not come across that issue as I set all the case to the same as the source. The Name matches the PST filename, and the FilePath matches the value after “ingestiondata” in the URL, including the path the file was uploaded from. Therefore in my example for Jenny above, where the PST file was called “jenny2009.pst”, the path on the local file server was “C:\Shares\Users\Jenny\Outlook Files\2009\”, the /Source: was “C:\Shares\Users” and 20150101 was the value used in the /Dest: following the URL, the FilePath in the CSV becomes “20150101/Jenny/Outlook Files/2009”. That is, the CSV needs a FilePath that joins the /Dest: value (after ingestiondata) to the local path below /Source:, with \ changed to /.
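To avoid hand-building these values, a hedged sketch like the following (the paths and the 20150101 suffix are just this post’s example values) derives the FilePath for a given PST:

# Derive the CSV FilePath value from a local PST path
$source     = "C:\Shares\Users"                                   # the AzCopy /source: value
$destSuffix = "20150101"                                          # the value after ingestiondata in /Dest:
$pstPath    = "C:\Shares\Users\Jenny\Outlook Files\2009\jenny2009.pst"

# Take the folder below /source:, flip \ to / and prefix the /Dest: suffix
$relativeFolder = Split-Path ($pstPath.Substring($source.Length).TrimStart('\')) -Parent
$filePath = $destSuffix + "/" + ($relativeFolder -replace '\\','/')
$filePath   # returns 20150101/Jenny/Outlook Files/2009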

A second example, if I used the following AzCopy cmdline:


azcopy /source:"\\fileserver\home" /Dest:https://uniqueurl.blob.core.windows.net/ingestiondata/FileServer/home /DestKey:uniquekey /Pattern:"*.pst" /S /V:"c:\temp\pstIngestionFileServerHome.log"

Then I would have FilePath values in the CSV that looked like “FileServer/home/Jenny/Outlook Files/2009” (case sensitive).

Once you upload the mapping file, the PST import from Azure to the Exchange mailbox (or archive) starts. If a PST file cannot be found then you get an error in the management console quite shortly after starting. The error reads as follows:

Could not find source file {0}. Please correct the FilePath column in the mapping file and create a new job with the updated mapping file

Full file path

fileserver/home/Jenny/outlook files/2009/Jenny2009.pst

In the above error I have purposely set the FilePath and PST filename to the wrong case, as that is the cause of this error (unless you did not upload the PST or the path is completely wrong). The best source of the FilePath name is the AzCopy log file (set with the /V switch for AzCopy). This shows the full path the file was uploaded to, in the correct case for both path and file; just remember to add the string value you used after “ingestiondata” in the /Dest: switch.

All the best with removing PSTs from the network! Of course there is more to do than is mentioned here: you need to find the files, work out who the PSTs belong to and create this mapping file accurately. There are a number of PST ingestion software companies who will do this for you. You also need to ensure that the PSTs do not contain bad items and to control the import settings for the PST import process.

To ensure there are no bad items in the PST files (or try to at least) it is recommended that you scan the PST files with SCANPST.EXE (http://support.microsoft.com/en-us/kb/272227). This tool needs to be run on all PST files that you have located before you upload them, or if bandwidth is not an issue, to upload them, import them and then process only those that fail.

Once SCANPST.EXE is complete, upload the new PST file and import it again (a new mapping file is probably needed). Then also tell the PST Ingestion service to continue processing items even if it finds bad items. To do this you need to configure a custom BadItemLimit once the import starts (the current BadItemLimit default is 0, which means fail at the first bad item). You will see “TooManyBadItemsPermanentException” errors in the import log file if you need to do this. To set the BadItemLimit use either of the following:

  1. Connect to Exchange Online via remote PowerShell
  2. Get-MailboxImportRequest | FL name, mailbox, status, whencreated, requestguid
  3. This returns a list of import requests. Look for the most recent and get the requestguid value.
  4. Set-MailboxImportRequest -Identity "request-guid-found-above" -BadItemLimit unlimited -AcceptLargeDataLoss

Or you can just set the BadItemLimit the same for all imports without looking for the latest one:

$all_import_requests = Get-MailboxImportRequest
Foreach ($import_request in $all_import_requests)
{
    Set-MailboxImportRequest -Identity ($import_request).RequestGuid -BadItemLimit unlimited -AcceptLargeDataLoss
}

Managing Office 365 Groups With Remote PowerShell

Posted on 2 CommentsPosted in Azure, cloud, exchange, exchange online, groups, IAmMEC, mcm, mcsm, MVP, Office 365, owa, powershell

Announced during Microsoft Ignite 2015, there are now PowerShell administration cmdlets available for the administration of the Groups feature in Office 365.

The cmdlets are all based around “UnifiedGroup”, for example Get-UnifiedGroup.

Create a Group

Use New-UnifiedGroup to do this. An example would be New-UnifiedGroup -DisplayName "Sales" -Alias sales -EmailAddress sales@contoso.com

The use of the EmailAddress parameter is useful as it allows you to set a group that is not given an email address based on your default domain, but from one of the other domains in your Office 365 tenant.

Modify a Group’s Settings

Use Set-UnifiedGroup to change settings such as the ability to receive emails from outside the tenant (RequireSenderAuthenticationEnabled would be $false), limit email to a whitelist of senders (AcceptMessagesOnlyFromSendersOrMembers) and other Exchange distribution list settings such as hidden from address lists, MailTips and the like. AutoSubscribeNewMembers can be used to subscribe new members so that new messages to the group are emailed to them, and PrimarySmtpAddress changes the email address that the group sends from.
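A hedged example of some of these settings (the group alias “sales” is assumed from the earlier example):

# Allow senders from outside the tenant and subscribe new members to all messages
Set-UnifiedGroup -Identity sales -RequireSenderAuthenticationEnabled $false -AutoSubscribeNewMembers:$true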

Remove a Group

This is the new Remove-UnifiedGroup cmdlet.

Add Members to a Group

This cmdlet is Add-UnifiedGroupLinks. For example, Add-UnifiedGroupLinks sales -LinkType Members -Links brian,nicolas will add the two named members to the group. The LinkType value can be Members as shown, but also Owners (group administrators) or Subscribers (those who receive email sent to the group but do not get access to the group’s content). To change members to owners you do not need to remove the members; just run something like Add-UnifiedGroupLinks sales -LinkType Owners -Links brian,nicolas

You can also pipe in a user list, for example from a CSV file, to populate a group (see the sketch below). This would read Add-UnifiedGroupLinks sales -LinkType Members -Links $users, where $users = Get-Content username.csv is run before it to populate the $users variable. The source of the variable can be anything done in PowerShell.
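Put together, a minimal sketch (the CSV filename is assumed) would be:

# One username per line in the file
$users = Get-Content .\username.csv
Add-UnifiedGroupLinks -Identity sales -LinkType Members -Links $users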

Remove Members from a Group

For this use Remove-UnifiedGroupLinks and mention the group name, the LinkType (Members, Owners or Subscribers) and the user or users to remove, as the sketch below shows.
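For example, as a hedged sketch (the names are assumed):

# Remove a single member from the group without a confirmation prompt
Remove-UnifiedGroupLinks -Identity sales -LinkType Members -Links brian -Confirm:$false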

To Disable Group Creation in OWA

Set-OWAMailboxPolicy is used to configure a policy that is not allowed to create Groups, and users then have that policy applied to them. For example, Set-OWAMailboxPolicy "Students" -GroupCreationEnabled $false followed by Set-CASMailbox mary -OWAMailboxPolicy Students stops the user “mary” creating groups. After the policy is assigned and propagates around the Office 365 service, the user can join and leave groups, but not create them. A sketch follows:
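Put together (the policy and user names are assumed from the example above, and the policy itself must already exist, created beforehand with New-OwaMailboxPolicy):

# Stop the Students policy allowing Group creation, then assign it to a user
Set-OwaMailboxPolicy -Identity "Students" -GroupCreationEnabled $false
Set-CASMailbox -Identity mary -OwaMailboxPolicy "Students"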

Control Group Naming

This feature allows you to control the group name or block words from being used. This is easier to set in the Distribution Groups settings in the Exchange Control Panel rather than via PowerShell. To do this in EAC use Recipients > Groups, click the ellipses icon (…) and select Configure Group Naming Policy. This is the same policy as for distribution groups. You can add static text to the start or end of the name, as well as dynamic text such as region.

Admins creating groups are not subject to this policy but, unlike DLs, if they create groups in PowerShell the policy is also not applied, and so the -IgnoreNamingPolicy switch is not required.

Exchange OWA and Multi-Factor Authentication

Posted on 21 CommentsPosted in 2010, 2013, Azure, exchange, IAmMEC, MFA, MVP, owa, smartphone

Multi-factor authentication (MFA), that is the need to have a username, password and something else to pass authentication, is possible with on-premises servers using a service from Windows Azure and the Multi-Factor Authentication Server (an on-premises piece of software).

The Multi-Factor Authentication Server intercepts login requests to OWA; if the request is valid (that is, the username and password work) then the mobile phone of the user is called or texted (or an app starts automatically on the phone) and the user validates their login. This is typically done by pressing # (if a phone call) or clicking Verify in the app, but can require the entry of a PIN as well. Note that when the MFA server intercepts the login request in OWA, there is no user interface in OWA to tell you what is happening. This can result in user disconnect, and it stops the use of two-way MFA (the receive-a-number-by-text, type-the-number-into-the-web-application type of scenario). To that end, MFA directly on the OWA application is not supported by the Microsoft Exchange team. Steps for setting up ADFS for Exchange Server 2013 SP1 or later are at https://technet.microsoft.com/en-us/library/dn635116(v=exchg.150).aspx. Once this is in place, you need to enable MFA for ADFS rather than MFA for OWA. I have covered this in a separate post at http://c7solutions.com/2016/04/installing-azure-multi-factor-authentication-and-adfs.

To configure Multi-Factor Authentication Server for OWA (unsupported) you need to complete the following steps:

Some of these steps are the same regardless of which service you are adding MFA to, and some are slightly different. I wrote a blog on MFA and VPN at http://c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication which contains the general setup steps, so those are not repeated here; just what you need to do differently is covered.

Step 1

See http://c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication

Step 2: Install MFA Server on-premises

This is covered in http://c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication, but the difference with OWA is that it needs to be installed on the Exchange CAS server where the authentication takes place.

Ensure you have .NET 3.5 installed via Server Manager > Features. This will install the .NET 2.0 feature that is required by MFA server. If the installation of the download fails, this is the most likely reason for the failure, so install .NET 3.5 and then try the MFA Server install again.

The install of the MFA server does not take very long. After a few minutes the install will complete and then you need to run the Multi-Factor Authentication Server admin tools. These are on the Start Screen in More Apps or on the Start Menu. Note that, given time, the installer will start the admin software itself:

image

image

Do not skip the wizard; click Next. You will be asked to activate the server, which links it to your Azure MFA instance. The email address and password you need are obtained from the Azure multi-factor auth provider that was configured in Step 1: click Generate Activation Credentials on the Downloads page of the Azure MFA auth provider management page.

image

The credentials are valid for ten minutes, so yours will differ from mine. Enter them into the MFA Server configuration wizard and click Next.

MFA Server will attempt to reach Azure over TCP 443.

Select the group of servers that the configuration should replicate around. For example, if you were installing this software on each Exchange CAS server, you might enter “Exchange Servers” as the group name in the first install and then select it during the install on the remaining servers. This config will be shared amongst all servers with the same group name. If you already have a config set up with users in it and set up a new group here, then the users will have different settings per group. For example, you might have a phone call to authenticate a VPN connection but use the app for OWA logins; this would require two configs and different groups of servers. If you want the same settings for all users in the entire company, then one group (the default group) should be configured.

image

Next choose if you want to replicate your settings. If you have more than one MFA Server instance in the same group select yes.

Then choose what you want to authenticate. Here I have chosen OWA:

image

Then I need to choose the type of authentication I have in place. In my OWA installation I am using the default of Forms Based Authentication, but if you select Forms-based authentication here, the example URL for forms-based authentication shown on the next page is from Exchange Server 2003 (not 2007 or later). Therefore I select HTTP authentication.

Next I need to provide the URL to OWA. I can get this by browsing the OWA site over HTTPS. The MFA install will also use HTTPS, so you will need a certificate trusted by a third party if you want to support user managed devices. Users managing their own MFA settings (such as telephone numbers and form of authentication) reduces the support requirement; that needs the User Portal, the SDK and the Mobile App webservice installed as well, which are outside the scope of this blog. Here I am going to use https://servername/owa.

image

Finish the installation at this time and wait for the admin application to appear.

Step 3: Configure Users for MFA

Here we need to import the users who will be authenticated with MFA. Select the Users area and click Import from Active Directory. Browse the settings to import group members, or OUs, or use a search to add your user account. Once you have it working for yourself, add others. Users not listed here will not see any change in their authentication method.

Ensure that your test user has a mobile number imported from Active Directory. If not, add one, choosing the correct country code as well. The default authentication for the user is that they will get a phone call to this number and need to press # before they can be logged in. Ensure that the user is set to Enabled in the Users area of the management program.

Step 4: Configure OWA for MFA (additional steps)

On the IIS Authentication node you can adjust the default configuration for HTTP. Here you need to set Require Multi-Factor Authentication user match. This ensures that each authentication attempt is matched to a user in the users list: if the user exists and is enabled, MFA is done for them; if the user is disabled, the Succeed Authentication setting on the Advanced tab comes into play; if the user is not listed, authentication passes through without MFA.

image

Change to the Native Module tab and select OWA under Default Web Site only; do not set authentication on the Backend Web Site. Also enable the native module on ECP under the Default Web Site:

image

Then I can attempt a login to OWA or ECP. Once I successfully authenticate my phone rings and I am prompted to press #. Once I press # I am allowed into Exchange!

SSL and Exchange Server

Posted on 2 CommentsPosted in 2008 R2, 2012 R2, 2013, certificates, exchange, https, IAmMEC, JetNexus, load balancer, Load Master, loadbalancer, mobile phones, SSL, TLS, windows server, xp

In October 2014 or thereabouts it became known that the SSL protocol (specifically SSL v3) was broken and decryption of the encrypted data was possible. This blog post sets out the steps to protect your Exchange Server organization regardless of whether you have one server or many, and whether or not you use a load balancer. As load balancers can terminate the SSL session and recreate it, changes may be needed on your load balancer or directly on the servers that run the CAS role. This blog post covers both options and looks at the settings for a Kemp load balancer and a JetNexus load balancer.

Of course, being an Exchange Server MVP, I tend to blog about Exchange related stuff, but actually this is valid for any server that you publish to the internet, and probably for any internal server that you encrypt traffic to via the SSL suite of protocols. Microsoft outline the below configuration at https://technet.microsoft.com/en-us/library/security/3009008.aspx.

The steps in this blog look at turning off the SSL protocol in Windows Server and turning on the TLS protocol (which does the same job as SSL and is interchangeable with it, but is more secure at the time of writing, January 2015). Some clients do not support TLS (such as Internet Explorer on Windows XP Service Pack 2 or earlier), so securing your servers as you need to may stop some home users connecting to your Exchange Servers; but as XP SP2 should not be in use in any business now, these changes should not affect desktops. You could always use a different browser on XP, as that might mitigate the issue, but using XP is a security risk in and of itself anyway! Disabling clients from connecting to SSL v3 sites requires a client or GPO setting, which can be found via your favourite search engine.

Note that the registry settings and updates for the load balancers in this blog post will restrict client access to your servers if your client cannot negotiate a mutual cipher and secure channel protocol. Therefore care and testing are strongly advised.

Testing and checking your changes

Before you make any changes to your servers, especially internet facing ones, check and document what you have in place at the moment using https://www.ssllabs.com/ssltest. This service will connect to an SSL/TLS protected web site and report back on the issues found. Before running any of the changes below see what overall rating you get and document the following:

  • Authentication section: record the signature algorithm. It is possible the certificate authority signature will be marked “SHA1withRSA WEAK SIGNATURE”. This certificate, if rekeyed and issued again by your certificate authority, might be replaced with a SHA-2 certificate. The Google Chrome browser from September 2014 will report sites secured with a SHA-1 certificate as not fully trustworthy based on the expiry date of the certificate. If your certificate expires after Jan 1st 2017 then get it rekeyed as soon as possible. As 2015 goes on, this date will move closer in time: from early 2015 the cut-off date becomes June 1st 2016, and so on. Details of the dates for this impact are in http://googleonlinesecurity.blogspot.co.uk/2014/09/gradually-sunsetting-sha-1.html. You can also use https://shaaaaaaaaaaaaa.com/ to test your certificate if the site is public facing, and this website gives details on who is now issuing SHA-2 keyed certificates. You can examine your external servers for SHA-1 certificates and the impact in Chrome (and later IE and Firefox) at https://www.digicert.com/sha1-sunset/. To do the same internally, use the DigiCert Certificate Inspector at https://www.digicert.com/cert-inspector.htm.
  • Authentication section: record the path values. Ensure that each certificate is either in the trust store or sent by the server and not an extra download.
  • Configuration section: document the cipher suites that are provided by your server
  • Handshake simulation section: here it lists browsers and other devices (mobile phones) and their default cipher. If you do not support a cipher the client supports then you cannot communicate. Note that you typically support more than one cipher and the client will often support more than one cipher too, so though it is shown here as a mismatch this does not mean that it will not work; if this client is used by your users then click the link for the client and ensure that the server offers at least one of the ciphers required by the client (unless all the ciphers are insecure, in which case do not use that client!).

Once you have a document on your current configuration, and a list of the clients you need to support and the ciphers they need you to support, you can go about removing SSL v3 and insecure ciphers.

Disabling SSL v3 on the server

To disable SSL v3 on a Windows Server (2008 or later) you need to set the Enabled registry value at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server” to 0. If this value does not exist, create a DWORD value called “Enabled” and leave it at 0. You then need to reboot the server.

If you are using Windows 2008 R2 or earlier you should enable TLS v1.1 and v1.2 at the same time. Those versions of Windows Server support TLS v1.1 and v1.2 but they are not enabled by default (only TLS v1.0 is enabled). To enable TLS v1.1, set the Enabled value at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server” to 1; change the path to “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server” and make the same setting to support TLS v1.2. If these keys do not exist, create them. It is also documented that the “DisabledByDefault” value is required, but I have seen this noted as being the same as the “Enabled” value, just with the opposite meaning. Therefore, as I have not actually checked, I set both Enabled to 1 and DisabledByDefault to 0.

To disable both SSL v2 and v3 (v2 can be enabled on older versions of Windows and should be disabled as well) I place the following in a .reg file and double-click it on each server, followed by a reboot for it to take effect. This .reg file also disables the RC4 ciphers; these ciphers have been considered insecure for a few years, so when I configure my servers not to support SSL v3 I disable the RC4 ciphers as well.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 128/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 40/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 56/128]
"Enabled"=dword:00000000

Then I use the following .reg file to enable TLS v1.1 and TLS v1.2:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

Once you have applied both of the above sets of registry keys you can reboot the server at your convenience. Note that the regkeys may set values that are already set; for example, TLS v1.1 and v1.2 are enabled on Exchange 2013 CAS servers and SSL v2 is disabled. The first of the below graphics comes from a test environment of mine running Windows Server 2012 R2 without any of the above registry keys set. You can see that Windows Server 2012 R2 is vulnerable to the POODLE attack and supports the weak RC4 ciphers.

image

The F grade comes from a patched Windows Server 2008 R2 server that has not been configured with regards to SSL:

image

After setting the above registry keys and rebooting, the test at https://www.ssllabs.com/ssltest then showed the following for 2012 R2 on the left (A grade) and Windows Server 2008 R2 on the right (A- grade):

image image

Disabling SSL v3 on a Kemp LoadMaster load balancer

If you protect your servers with a load balancer, which is common in the Exchange Server world, then you need to set your SSL and cipher settings on the load balancer, unless you are only balancing at TCP layer 4 and doing SSL pass through. Therefore even for clients that have a load balancer, you might not need to make the changes on the load balancer, but on the server via the above section instead. If you do SSL termination on the load balancer (TCP layer 7 load balancing) then I recommend setting the registry keys on the Exchange servers anyway to avoid security issues if you need to connect to the server directly and if you are going to disable SSL v3 in one location (the load balancer) there is no problem in disabling it on the server as well.

For a Kemp load balancer you need to be running version 7.1-20b or later to be able to do the following, and to ensure that the SSL code on the load balancer is not susceptible to issues such as Heartbleed. To configure your load balancer to disable SSL v3 you need to modify the SSL properties of the virtual server and check the “Support TLS Only” option.

To disable the weak RC4 ciphers there are a few choices, but the easiest I have seen is to select “Perfect Forward Secrecy Only” under Selection Filters and then add all the listed filters. Then, from this list, remove the three RC4 ciphers.

If you do not select “Support TLS Only” and leave the ciphers at the default level then your load balancer will get a C grade in the test at https://www.ssllabs.com/ssltest because it is vulnerable to the POODLE attack. Setting just the “Support TLS Only” option and leaving the default ciphers in place will result in a B grade, as RC4 is still supported. Removing the RC4 ciphers (by following the instructions above to add the perfect forward secrecy ciphers and remove the RC4 ciphers from the list) as well as allowing only the TLS protocol will result in an A grade.

image

Kemp 7.1-22b additionally removes SSL v3 support from the API and web interface, on top of the above protection for the virtual services that the load balancer offers.

Kemp Technologies document the above steps at https://support.kemptechnologies.com/hc/en-us/articles/201995869, and point out the unobvious setting that if you filter the cipher list with the “TLS 1.x Ciphers Only” setting then it will only show you the TLS 1.2 ciphers and not any TLS 1.1 or TLS 1.0 ciphers. Therefore selecting “TLS 1.x Ciphers Only” rather than filtering using “Perfect Forward Secrecy Only” will result in a reduced client list, which may be an issue.

I was able to achieve an A grade on the SSL Labs test site. My certificate uses SHA-1, but expires in 2015, so by the time SHA-1 is reported as an issue in the browser I will have changed it anyway.

image

Disabling SSL v3 on a JetNexus ALB-X load balancer

If you protect your servers with a load balancer, which is common in the Exchange Server world, then you need to set your SSL and cipher settings on the load balancer, unless you are only balancing at TCP layer 4 and doing SSL pass through. Therefore even for clients that have a load balancer, you might not need to make the changes on the load balancer, but on the server via the above section instead. If you do SSL termination on the load balancer (TCP layer 7 load balancing) then I recommend setting the registry keys on the Exchange servers anyway to avoid security issues if you need to connect to the server directly and if you are going to disable SSL v3 in one location (the load balancer) there is no problem in disabling it on the server as well.

For a JetNexus ALB-X load balancer you need to be running build 1553 or later. Build 1553 is a version 3 build, so any version 4 build is higher and therefore valid. This build (version 3.54.3) or later is needed to ensure Heartbleed mitigation and to allow the following configuration changes to be applied.

To configure the JetNexus you need to upload a config file to turn off SSL v3 and the RC4 ciphers. The config file is a .txt file that is uploaded to the load balancer. In version 4, the primary cluster node can have the file uploaded to it, and the changes are replicated to the second node in the cluster automatically.

Before you upload a config file to make the changes required, ensure that you back up the current configuration from Advanced >> Update Software by clicking the button next to Download Current Configuration to save the configuration locally. Ensure you back up all nodes in a v4 cluster if appropriate.

Then select one of the three config file settings below, copy it to a text file and upload it from Advanced >> Update Software using the Upload New Configuration option to install the file. The upload will reset all connections, so do this during a quiet period.

The three configs are: one to reset the default ciphers; one to disable SSL v3 and RC4; and one to disable TLS v1.0 as well as SSL v3 and RC4.

JetNexus protocol and cipher defaults:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH"
Cipher1=""
Cipher2=""
CipherOptions="CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable SSL v3 and disable RC4 ciphers:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"
Cipher1=""
Cipher2=""
CipherOptions="NO_SSLv3,CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable TLS v1.0, SSL v3 and disable RC4 ciphers:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"
Cipher1=""
Cipher2=""
CipherOptions="NO_SSLv3,NO_TLSv1,CIPHER_SERVER_PREFERENCE"

On my test environment I was able to achieve an A- grade on the SSL Test website with the config that disables TLS 1.0, SSL 3 and RC4. The A- is because of a lack of support for Forward Secrecy with the reference browsers used by the test site.

image

Browsers and Other Clients

There is too much to discuss with regards to clients, apart from the fact that they need to support the same ciphers as mentioned above. A good guide to clients can be found at https://www.howsmyssl.com/s/about.html and from there you can test your client as well.

Additional comment 23/1/15: One important comment to make comes courtesy of Ingo Gegenwarth at https://ingogegenwarth.wordpress.com/2015/01/20/hardening-ssltls-and-outlook-for-mac/. This post discusses the TLS Renegotiation Indication Extension update in RFC 5746. It is possible to use the AllowInsecureRenegoClients registry value at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL to ensure that only clients with the update mentioned at http://support.microsoft.com/kb/980436 are allowed to connect. If this is enabled (set to Strict Mode) and the above is done to disable SSL 2 and 3, then Outlook for Mac clients cannot connect to your Exchange Server. If this regkey is deleted or has a non-zero value then connections to SSL 2 and 3 can be made, but only for a renegotiation to TLS. Therefore ensure that you allow Compatibility Mode (which is the default) when you disable SSL 2 and 3, as Outlook for Mac and Outlook for Mac for Office 365 both require SSL support to then be able to start a TLS session.
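As a hedged sketch based only on the behaviour described above (check KB 980436 before applying), a .reg file to explicitly set Compatibility Mode would look like:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL]
; Non-zero = Compatibility Mode, so renegotiation-capable clients such as
; Outlook for Mac can still connect; 0 would enforce Strict Mode
"AllowInsecureRenegoClients"=dword:00000001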