Exchange Online Archive–Counting Archives


If you are using Exchange Online Archive and want to get a count of the number of users with an archive, or a list of the users with an archive, then the following PowerShell scripts will give you this info:

List all users with an Exchange Online Archive:

Get-MailUser -ResultSize Unlimited | where {$_.ArchiveName -ilike "In-Place Archive*"}

Count all users with an Exchange Online Archive:

(Get-MailUser -ResultSize Unlimited | where {$_.ArchiveName -ilike "In-Place Archive*"}).Count

Both of these PowerShell cmdlets need to be run in Exchange Online via Remote PowerShell.
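If you do not already have a session open, the following is a minimal connection sketch using the classic remote PowerShell endpoint (your tenant admin credentials are assumed):

# Connect to Exchange Online remote PowerShell (classic endpoint)
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid/" -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session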

Exchange Server and Missing Root Certificates


I came across an issue with a client's Exchange Server deployment today that is not well documented – or rather it is, but you need to know where to look. So I thought I would document the troubleshooting steps and the fix here.

We specifically came across this error when testing Free/Busy for an Office 365 migration, though it could happen for a variety of reasons. Free/Busy and other lookups in a cross-forest Exchange Server deployment require a working organization relationship configuration, and this was failing. Running Test-FederationTrust (a prerequisite of the organization relationship) in verbose mode (add -Verbose to the end) returned the following:

Unable to retrieve federation metadata from the security token
service. Reason: Microsoft.Exchange.Management.FederationProvisioning.FederationMetadataException: Unable to access the
Federation Metadata document from the federation partner. Detailed information: “The underlying connection was closed:
Could not establish trust relationship for the SSL/TLS secure channel.”.

The final result of the test will also show two errors for “Unable to retrieve federation metadata from the security token service.” and “Failed to request delegation token.”

The last part of the verbose error is the clue here. The server in question is unable to make an SSL/TLS connection to the endpoint that the federation trust needs to reach to get the federation trust metadata. That endpoint is listed right at the start of the Verbose output. It reads:

VERBOSE: [16:53:08.306 GMT] Test-FederationTrust : Requesting Federation Metadata from
https://nexus.microsoftonline-p.com/FederationMetadata/2006-12/FederationMetadata.xml.

Now that we have a URL and an error message, check that the URL is reachable from each of your Exchange Servers. At my client today we found one server that could not reach this endpoint without an SSL error turning up in the browser. The problem was that the certificate the endpoint is secured with is issued by the Baltimore CyberTrust Root certificate (one that Microsoft uses for lots of services), but that root certificate was not installed on the machine. Lots of root certs were missing from that machine, as it had never had a root certificate update applied to it.
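A quick way to test this from each server is an ad-hoc PowerShell sketch like the one below (it needs PowerShell 3.0 or later; the URL is the one from the verbose output above):

# Fetch the federation metadata document and surface any SSL/TLS trust failure
$url = "https://nexus.microsoftonline-p.com/FederationMetadata/2006-12/FederationMetadata.xml"
try {
    $response = Invoke-WebRequest -Uri $url -UseBasicParsing
    Write-Host "OK - HTTP $($response.StatusCode), metadata retrieved"
}
catch {
    # A "Could not establish trust relationship" message here points at a missing root certificate
    Write-Host "FAILED - $($_.Exception.Message)"
}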

We installed the latest Root Certificate Update and then the federation trust worked, and Free/Busy and the related features (MailTips, cross-forest message tracking etc.) all worked fine.

Qualifications in Exchange Signatures


In a recent project I was working with iQ.Suite from GBS, and specifically the component of this software that adds signatures to emails. The client is an international organization with users in different geographies, and we needed to accommodate the users' qualifications in their email signatures.

The problem with this is that in Germany qualifications are written in front of the name, in the USA at the end, and in other countries at both the start and the end. We were doing a Notes to Exchange migration, and in Notes the iQ.Suite signature software read data from Notes that was originally pulled from Active Directory, so the client had placed the qualifications in the DisplayName field in Active Directory.

But when we migrated to Exchange Server, the Global Address List listed the user's DisplayName, and so the German users were all listed together with “Dipl” as the first characters of their name. The name the email came from was also written like this. The signature worked, but these other side effects meant we had to work out a different way to look at this problem.

So rather than using DisplayName for the user's name and qualifications, we used the personalTitle attribute in Active Directory to store anything needed before their name (Dipl in the above German example, with Prof or Dr being English examples) and the generationQualifier Active Directory attribute to store any string that followed the user's DisplayName (such as Jr in the USA, or BSc for qualifications etc.).

In iQ.Suite we created a signature that looked like the following. This has a conditional [COND] entry for personalTitle, displayName and generationQualifier. That is, if each of these is present, then show the displayName with personalTitle before it and generationQualifier after it; if the user does not have values for these fields, do not show them. The [COND] control is documented in iQ.Suite.

[COND]personalTitle;[VAR]personalTitle[/VAR] [/COND][COND]displayName;[VAR]displayName[/VAR][/COND][COND]generationQualifier; [VAR]generationQualifier[/VAR][/COND]

What was not so well documented, and why I wanted to write this blog entry, is that the personalTitle and generationQualifier attributes are not stored in the Global Catalog by default and so end up missing from the user's signature. In the multi-domain deployment we had at the client, iQ.Suite read the personalTitle, displayName and generationQualifier Active Directory attributes from the Global Catalog, as Exchange was installed in a resource domain and the users were in separate domains; unless an attribute was pushed to the Global Catalog it was not seen by iQ.Suite.

To promote an attribute to be visible in the Global Catalog you need to open the Schema Management MMC snap-in, find the attributes in question and tick the Replicate this attribute to the Global Catalog field. This is outlined in https://technet.microsoft.com/en-us/library/cc737521(v=ws.10).aspx.
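Before touching the schema you can confirm which attributes are already replicated to the Global Catalog; the following is a hedged sketch (it assumes the ActiveDirectory PowerShell module is available) that reads the isMemberOfPartialAttributeSet flag from the schema:

# Check whether the attributes are in the Global Catalog partial attribute set
Import-Module ActiveDirectory
$schemaNC = (Get-ADRootDSE).schemaNamingContext
foreach ($attr in "personalTitle", "generationQualifier") {
    Get-ADObject -SearchBase $schemaNC -LDAPFilter "(lDAPDisplayName=$attr)" -Properties isMemberOfPartialAttributeSet |
        Select-Object Name, isMemberOfPartialAttributeSet
}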

Configuring Sync and Writeback Permissions in Active Directory for Azure Active Directory Sync


[This blog post was last updated 5th October 2017 – added support to Exchange Hybrid for the msExchDelegateLinkList attribute, which was announced at Microsoft Ignite 2017 to keep auto-mapping working across on-premises and the cloud]

[Updated 18th June 2017 in advance of the release of AADConnect version 1.1.553.0. This post contains updates to the below scripts to include the latest attributes synced back to on-premises, including publicDelegates, which is used to support bi-directional sync of “Send on Behalf of” permissions in Exchange Online/Exchange Server hybrid writeback scenarios]

[Update March 2017 – added another blog post on using the below to fix permission-issue errors on admin and other protected accounts at http://c7solutions.com/2017/03/administrators-aadconnect-and-adminsdholder-issues]

Azure Active Directory has long been the read-only cousin of Active Directory for those Office 365 and Azure users who sync their directory from Active Directory to Azure Active Directory, apart from eight attributes for Exchange Server hybrid mode. Not any more: Azure Active Directory writeback is now available. This enables objects to be mastered or changed in Azure Active Directory and written back to on-premises Active Directory.

This writeback includes:

  • Devices that can be enrolled with Office 365 MDM or Intune, which will allow login to AD FS controlled resources based on user and the device they are on
  • “Modern Groups” in Office 365 can be written back to on-premises Exchange Server 2013 CU8 or later hybrid mode and appear as mail-enabled distribution lists on-premises. This does not require AAD Premium licences
  • Users can change their passwords via the login page or user settings in Office 365 and have that password written back to on-premises Active Directory.
  • Exchange Server hybrid writeback is the classic writeback from Azure AD and, apart from Group Writeback, is the only one of these writebacks that does not require Azure AD Premium licences.
  • User writeback from Azure AD (i.e. users made in Office 365 in the cloud for example) to on-premises Active Directory
  • Password Hash Sync (this is not really writeback, but it's the only permission needed by default for forward sync, so it is added here)
  • Windows 10 devices for “Azure AD Domain Join” functionality

All of these features require AADConnect and not any of the earlier versions (DirSync or AADSync). You can add all these writeback functions from the AADConnect setup wizard, and if you have used Custom mode then you will need to implement the following permissions.

In all the below sections you need to grant permission to the connector account. You can find the connector account for your Active Directory forest from the Synchronization Service program > Connectors > double-click your domain > select Connect to Active Directory Forest. The account listed here is the connector account you need to grant permissions to.

SourceAnchor Writeback

For those with (typically) multi-forest deployments, plans for one, or a forest migration ahead, the objectGuid value in Active Directory is used as the source for the attribute that keys your on-premises object to your synced cloud object (in AAD sync parlance this is known as the SourceAnchor). If you set up AADConnect version 1.1.553.0 or later you can opt to change from objectGuid to a new source anchor attribute known as ms-ds-consistencyGuid. To use this new feature the AADConnect connector account needs to be able to read objectGUID and write it back to ms-ds-consistencyGuid. The read permissions are typically available to the connector account without doing anything special, and if AADConnect is installed in Express Mode it will grant itself the write permissions it needs; but as with the rest of this blog, if you are not using Express Mode you need to grant the permissions manually, so write permissions are needed on the ms-ds-consistencyGuid attribute. This can be done with this script.


$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$ForestDN = "DC=contoso,DC=com"

$cmd = "dsacls '$ForestDN' /I:S /G '`"$accountName`":WP;ms-ds-consistencyGuid;user'"
Invoke-Expression $cmd | Out-Null

Note that if you use ms-ds-consistencyGuid then there are changes required on your ADFS deployment as well. The Issuance Transform Rules for the Office 365 Relying Party Trust contain a rule that specifies the ImmutableID (aka the AADConnect SourceAnchor) by which the user will be identified at login. By default this is set to objectGUID, and if you use AADConnect to set up ADFS for you then the application will update the rule. But if you set up ADFS yourself then you need to update the rule.

Issuance Transform Rules

When Office 365 is configured to federate a domain (that is, to use ADFS for authentication of that domain and not Azure AD), the following claims rules, which exist out of the box, need to be adjusted. This is to support the use of ms-ds-consistencyguid as the immutable ID.

ADFS Management UI > Trust Relationships > Relying Party Trusts

Select Microsoft Office 365 Identity Platform > click Edit Claim Rules

You get two or three rules listed here. You get three rules if you use -SupportMultipleDomain switch in Convert-MSOLDomainToFederated.
Rule 1:
Change objectGUID to ms-DS-ConsistencyGUID
Rule Was:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
=> issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/UPN", "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"), query = "samAccountName={0};userPrincipalName,objectGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);
New Value:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
=> issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/UPN", "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID"), query = "samAccountName={0};userPrincipalName,ms-DS-ConsistencyGUID;{1}", param = regexreplace(c.Value, "(?<domain>[^\\]+)\\(?<user>.+)", "${user}"), param = c.Value);
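If you would rather script the rule change than edit it in the ADFS management UI, the following is a hedged sketch (the trust name is the default shown above; review the rule set before and after running it, as a plain string replace is used across all rules):

# Swap objectGUID for ms-DS-ConsistencyGUID in the issuance transform rules
$rptName = "Microsoft Office 365 Identity Platform"
$rules = (Get-AdfsRelyingPartyTrust -Name $rptName).IssuanceTransformRules
$newRules = $rules -replace "objectGUID", "ms-DS-ConsistencyGUID"
Set-AdfsRelyingPartyTrust -TargetName $rptName -IssuanceTransformRules $newRules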

Preparing for Device Writeback

If you do not have a 2012 R2 or later domain controller then you need to update the schema of your forest. Do this by getting a Windows Server 2012 R2 ISO image and mounting it as a drive. Copy the support/adprep folder from this image or DVD to a 64-bit domain-joined member server in the same site as the Schema Master (adprep.exe will only run on a 64-bit domain-joined machine). Then run adprep /forestprep from an admin cmd prompt when logged in as a Schema Admin.

Wait for the schema changes to replicate around the network.

Import the cmdlets needed to configure your Active Directory for writeback by running Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1' from an administrative PowerShell session. You need Azure AD Global Admin and Enterprise Admin permissions for Azure and the local AD forest respectively. The cmdlets for this are obtained by running the Azure AD Connect tool.


$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
Initialize-ADSyncDeviceWriteBack -AdConnectorAccount $accountName -DomainName contoso.com #[domain where devices will be created].

This will create the “Device Registration Services” node in the Configuration partition of your forest as shown:

[Screenshot: the Device Registration Services node in the Configuration partition]

To see this, open Active Directory Sites and Services and from the View menu select Show Services Node. Also in the domain partition you should now see an OU called RegisteredDevices. The AADSync account now has permissions to write objects to this container as well.
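To double-check those permissions from a prompt, a one-line hedged check such as the following can be used (the DN and the account name fragment are examples):

# List the ACEs on the RegisteredDevices OU and look for the connector account
dsacls "OU=RegisteredDevices,DC=contoso,DC=com" | Select-String "AAD_"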

In Azure AD Connect, if you get the error “This feature is disabled because there is no eligible forest with appropriate permissions for device writeback” then you need to complete the steps in this section and click Previous in the AADConnect wizard to go back to the “Connect your directories” page and then you can click Next to return to the “Optional features” page. This time the Device Writeback option will not be greyed out.

Device Writeback needs a 2012 R2 or later AD FS server and WAP to make use of the device info in the Active Directory (for example, conditional access to resources based on the user and the device they are using). Once Device Writeback is prepared for with these cmdlets and the AADConnect Synchronization Options page is enabled for Device Writeback then the following will appear in Active Directory:

[Screenshot: registered device objects in the RegisteredDevices OU]

Not shown in the above, but adding the Display Name column in Active Directory Users and Computers tells you the device name. The registered owner and registered users of the device are available to view, but as they are SID values, they are not really readable.

If you have multiple forests, then you need to add the SCP record for the tenant name in each separate forest. The above will do it for the forest AADConnect is installed in, and the below script can be used to add the SCP to the other forests:

$verifiedDomain = "contoso.com"  # Replace this with any of your verified domain names in Azure AD
$tenantID = "27f998bf-86f2-41bf-91ab-2d7ab011df35"  # Replace this with your tenant ID
$configNC = "CN=Configuration,DC=corp,DC=contoso,DC=com"  # Replace this with your AD configuration naming context
$de = New-Object System.DirectoryServices.DirectoryEntry
$de.Path = "LDAP://CN=Services," + $configNC
$deDRC = $de.Children.Add("CN=Device Registration Configuration", "container")
$deDRC.CommitChanges()
$deSCP = $deDRC.Children.Add("CN=62a0ff2e-97b9-4513-943f-0d221bd30080", "serviceConnectionPoint")
$deSCP.Properties["keywords"].Add("azureADName:" + $verifiedDomain)
$deSCP.Properties["keywords"].Add("azureADId:" + $tenantID)
$deSCP.CommitChanges()

Preparing for Group Writeback

Writing Office 365 “Modern Groups” back to Active Directory on-premises requires Exchange Server 2013 CU8 or later schema updates and servers installed. To create the OU and permissions required for Group Writeback you need to do the following.

Import the cmdlets needed to configure your Active Directory for writeback by running Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1' from an administrative PowerShell session. You need Domain Admin permissions for the domain in the local AD forest that you will write back groups to. The cmdlets for this are obtained by running the Azure AD Connect tool.

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$cloudGroupOU = "OU=CloudGroups,DC=contoso,DC=com"
Initialize-ADSyncGroupWriteBack -AdConnectorAccount $accountName -GroupWriteBackContainerDN $cloudGroupOU

Once these cmdlets are run the AADSync account will have permissions to write objects to this OU. You can view the permissions in Active Directory Users and Computers for this OU if you enable Advanced mode in that program. There should be a permission entry for this account that is not inherited from the parent OUs.
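As a hedged alternative to clicking through Active Directory Users and Computers, the non-inherited entries can be listed via the AD: drive from the ActiveDirectory module (the OU DN is the example from above):

# List the non-inherited ACEs on the group writeback OU
Import-Module ActiveDirectory
(Get-Acl "AD:\OU=CloudGroups,DC=contoso,DC=com").Access |
    Where-Object { -not $_.IsInherited } |
    Select-Object IdentityReference, ActiveDirectoryRights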

At the time of writing, a distribution list created on writeback from Azure AD will not appear in the Global Address List in Outlook etc., nor allow on-premises mailboxes to send to these internal-only cloud based groups. To add it to the address book you need to create a new subdomain, update public DNS and add send connectors to hybrid Exchange Server. This is all outlined in https://technet.microsoft.com/en-us/library/mt668829(v=exchg.150).aspx. This ensures that on-premises mailboxes can deliver to groups as internal senders and do not require external senders to be enabled on the group. To add the group to the Global Address List you need to run Update-AddressList in Exchange Server (a short sketch follows at the end of this section). Once group writeback is prepared for using these cmdlets and AADConnect has had it enabled during the Synchronization Options page, you should see the groups appearing in the selected OU as shown:

[Screenshot: written-back groups appearing in the selected OU]

And you should find that on-premises users can send email to these groups as well.
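As a minimal sketch of the address list refresh mentioned above, run in the Exchange Management Shell (this refreshes every list; scope it down if you only need one):

# Refresh the address lists so the written-back groups show in the GAL
Get-AddressList | Update-AddressList
Get-GlobalAddressList | Update-GlobalAddressList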

Preparing for Password Writeback

The option for users to change their passwords in the cloud and have them written back to on-premises Active Directory (with multi-factor authentication and proof of the right to change the password) is also available in Office 365 / Azure AD with the Premium Azure Active Directory or Enterprise Mobility Pack licence.

To enable password writeback for AADConnect you need to enable the Password Writeback option in AADConnect synchronization settings and then run the following three PowerShell cmdlets on the AADSync server:


Get-ADSyncConnector | fl name,AADPasswordResetConfiguration
Get-ADSyncAADPasswordResetConfiguration -Connector "contoso.onmicrosoft.com - AAD"
Set-ADSyncAADPasswordResetConfiguration -Connector "contoso.onmicrosoft.com - AAD" -Enable $true

The first of these cmdlets lists the ADSync connectors and the name and password reset state of the connector. You need the name of the AAD connector. The middle cmdlet tells you the state of password writeback on that connector and the last cmdlet enables it if needed. The name of the connector is required in these last two cmdlets.

To set the permissions on-premises for the passwords to be written back the following script is needed:

$passwordOU = "DC=contoso,DC=com" #[you can scope this down to a specific OU]
$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":CA;`"Reset Password`";user'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":CA;`"Change Password`";user'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":WP;lockoutTime;user'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":WP;pwdLastSet;user'"
Invoke-Expression $cmd | Out-Null

Finally you need to run the above once per domain.
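If you have several domains, a hedged sketch such as the following (assuming the ActiveDirectory module is available) saves editing $passwordOU by hand for each one:

# Apply the four password writeback grants above to every domain in the forest
Import-Module ActiveDirectory
$accountName = "domain\aad_account"
$grants = 'CA;"Reset Password"', 'CA;"Change Password"', 'WP;lockoutTime', 'WP;pwdLastSet'
foreach ($domain in (Get-ADForest).Domains) {
    $passwordOU = (Get-ADDomain -Identity $domain).DistinguishedName
    foreach ($grant in $grants) {
        $cmd = "dsacls.exe '$passwordOU' /I:S /G '`"$accountName`":$grant;user'"
        Invoke-Expression $cmd | Out-Null
    }
}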

Preparing for Exchange Server Hybrid Writeback

Hybrid mode in Exchange Server requires the writeback of a set of attributes from Azure AD to Active Directory: originally eight, plus the publicDelegates and msExchDelegateLinkList attributes added in the updates noted at the top of this post. The list of attributes written back is found here. The following script will set these permissions for you in the OU you select (or, as shown, at the root of the domain). The DirSync tool used to do all this permissioning for you, but the AADSync tool does not; therefore scripts such as this are required. This script sets lots of permissions on these attributes, but for clarity when running the script its output is sent to Null. Remove the “| Out-Null” from the script to see the changes as they occur (the script also takes a lot longer to run).

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$HybridOU = "DC=contoso,DC=com"

#Object type: user
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUCVoiceMailSettings;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUserHoldPolicies;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchArchiveStatus;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeSendersHash;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchBlockedSendersHash;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeRecipientsHash;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msDS-ExternalDirectoryObjectID;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;publicDelegates;user'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchDelegateLinkList;user'"
Invoke-Expression $cmd | Out-Null

#Object type: iNetOrgPerson
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUCVoiceMailSettings;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchUserHoldPolicies;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchArchiveStatus;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeSendersHash;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchBlockedSendersHash;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchSafeRecipientsHash;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msDS-ExternalDirectoryObjectID;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;publicDelegates;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;msExchDelegateLinkList;iNetOrgPerson'"
Invoke-Expression $cmd | Out-Null

#Object type: group
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;group'"
Invoke-Expression $cmd | Out-Null

#Object type: contact
$cmd = "dsacls '$HybridOU' /I:S /G '`"$accountName`":WP;proxyAddresses;contact'"
Invoke-Expression $cmd | Out-Null

Finally you need to run the above once per domain.

Preparing for User Writeback

[This functionality is not in the current builds of AADConnect]

Currently in preview at the time of writing, you are able to make users in Azure Active Directory (cloud users, as Office 365 would call them) and write them back to on-premises Active Directory. The user's password is not written back and so needs changing before the user can log in on-premises.

To prepare the on-premises Active Directory to writeback user objects you need to run this script. This is contained in AdSyncPrep.psm1 and that is installed as part of Azure AD Connect. Azure AD Connect will install Azure AD Sync, which is needed to do the writeback. To load the AdSyncPrep.psm1 module into PowerShell run Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1' from an administrative PowerShell session.

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is an account usually in the form of AAD_number].
$cloudUserOU = "OU=CloudUsers,DC=contoso,DC=com"
Initialize-ADSyncUserWriteBack -AdConnectorAccount $accountName -UserWriteBackContainerDN $cloudUserOU

Once the next AADSync occurs you should see users in the OU used above that match the cloud users in Office 365 as shown:

[Screenshot: written-back cloud users in the CloudUsers OU]

Prepare for Password Hash Sync

This set of PowerShell ensures that the AADConnect account has the correct permissions to read password hashes from Active Directory when they are changed, so that the service can sync them to the cloud. You need this permission whenever you enable Password Hash Sync (which could be in conjunction with another authentication method as well).

$DomainDN = "DC=contoso,DC=com" 
$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].

$cmd = "dsacls.exe '$DomainDN' /G '`"$accountName`":CA;`"Replicating Directory Changes`";'"
Invoke-Expression $cmd | Out-Null

$cmd = "dsacls.exe '$DomainDN' /G '`"$accountName`":CA;`"Replicating Directory Changes All`";'"
Invoke-Expression $cmd | Out-Null

Prepare for Windows 10 Registered Device Writeback Sync

Windows 10 devices that are joined to your domain can be written to Azure Active Directory as registered devices, so conditional access rules based on device ownership can be enforced. To do this you need to import the AdSyncPrep.psm1 module. This module supports the following two additional cmdlets to prepare your Active Directory for Windows 10 device sync:

CD "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep"
Import-Module .\AdSyncPrep.psm1
Initialize-ADSyncDomainJoinedComputerSync
Initialize-ADSyncNGCKeysWriteBack

These cmdlets are run as follows:

$accountName = "domain\aad_account" #[this is the account that will be used by Azure AD Connect Sync to manage objects in the directory, this is often an account in the form of MSOL_number or AAD_number].
$azureAdCreds = Get-Credential #[Azure Active Directory administrator account]

CD "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep"
Import-Module .\AdSyncPrep.psm1
Initialize-ADSyncDomainJoinedComputerSync -AdConnectorAccount $accountName -AzureADCredentials $azureAdCreds 
Initialize-ADSyncNGCKeysWriteBack -AdConnectorAccount $accountName 

To successfully run these cmdlets you need to have the latest version of the Microsoft Online PowerShell modules installed (the V1.1 versions, not the V2.0 preview). You can get these from https://www.powershellgallery.com/packages/MSOnline (which in turn needs the Microsoft Online Services Sign-in Assistant from https://www.microsoft.com/en-us/download/details.aspx?id=41950 and the Windows Management Framework v5 from https://www.microsoft.com/en-us/download/details.aspx?id=50395). If you get errors in the above, make sure you have the correct version: download from the links above and try the scripts again.

Once complete, open Active Directory Sites and Services and from the View menu select Show Services Node. Then you should see the GUID of your domain under the Device Registration Configuration container.

[Screenshot: the domain GUID under the Device Registration Configuration container]

Unable To Send Exchange Quota Message


In Exchange 2013 you can sometimes see the following event log error (MSExchange Store Driver Submission, ID 1012):

The store driver failed to submit event <id> mailbox <guid> MDB <database guid> and couldn’t generate an NDR due to exception Microsoft.Exchange.MailboxTransport.StoreDriverCommon.InvalidSenderException
   at Microsoft.Exchange.MailboxTransport.Shared.SubmissionItem.SubmissionItemUtils.CopySenderTo(SubmissionItemBase submissionItem, TransportMailItem message)
   at Microsoft.Exchange.MailboxTransport.Submission.StoreDriverSubmission.MailItemSubmitter.GenerateNdrMailItem()
   at Microsoft.Exchange.MailboxTransport.Submission.StoreDriverSubmission.MailItemSubmitter.<>c__DisplayClass1.<FailedSubmissionNdrWorker>b__0()
   at Microsoft.Exchange.MailboxTransport.StoreDriverCommon.StorageExceptionHandler.RunUnderTableBasedExceptionHandler(IMessageConverter converter, StoreDriverDelegate workerFunction).

And this will be preceded by the following event log warning (MSExchangeIS, ID 1077):

The mailbox <guid> on database <database guid> is approaching its storage limit. A notification has been sent to the user. This warning will not be sent again for at least twenty four hours.

The mailbox in both errors is the same, and it occurs for mailboxes that have moved to Exchange Server 2013 from Exchange Server 2010 and are close to their mailbox quota. To fix the issue move the mailbox to a different database. The easiest way to do this is New-MoveRequest <guid>, using the same GUID from the events.

If you have lots of these then this is a little more time consuming, unless you get PowerShell to the rescue.

The following two cmdlets will query the last seven days of the event logs for MSExchangeIS-sourced events with ID 1077, take the event log message (which contains the mailbox GUID), manipulate the string and generate a text file of just the mailbox GUIDs. The second cmdlet will run a New-MoveRequest for each mailbox listed in the text file.

Get-WinEvent -ComputerName PC1 -ProviderName MSExchangeIS | where {$_.Id -eq 1077 -AND $_.TimeCreated -gt [DateTime]::Now.AddDays(-7).Date} | foreach {$_.Message.Substring(12,36)} | out-file nearquota.txt

and then

Get-Content .\nearquota.txt | foreach {New-MoveRequest -Identity $_}
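On PowerShell 3.0 or later a hedged, faster variant is to filter server-side with -FilterHashtable rather than retrieving every event first (PC1 is the example server name from above):

# Query only ID 1077 events from the last seven days and emit the mailbox GUIDs
Get-WinEvent -ComputerName PC1 -FilterHashtable @{
    ProviderName = 'MSExchangeIS'
    Id = 1077
    StartTime = (Get-Date).AddDays(-7)
} | foreach {$_.Message.Substring(12,36)} | out-file nearquota.txt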

Make sure though that your Application event log is large enough to store more than seven days of events and then run these cmdlets, per server every seven days until the issue goes away (or over the course of say a year, move all mailboxes to different databases and that fixes it as well).

Using Office 365 PST Ingestion Service


[Updated 10th Nov 2015 with tips on managing bad items in PST files]
It's been in private preview for a while, and recently entered a free preview for any Office 365 subscriber to try. So I gave it a go and have the following tips and guidance.

Preparing to upload PST files

You can upload PST files in situ from their current location on the network; there is no requirement to first copy them to a new folder for uploading. Doing this requires a few things to be considered, not least running the AzCopy process with an account that can access all the content.

AzCopy is the command line tool used to copy your PST files to Azure in advance of importing them into Office 365 mailboxes. You do not need an Azure subscription to do this, and until September 2015 this is a free service. To do this in-situ upload of PST files without first copying them to a local network staging location you should include the /Pattern: property in AzCopy. This is documented in the AzCopy help but not currently in the PST Ingestion help on TechNet (https://technet.microsoft.com/library/ms.o365.cc.IngestionHelp.aspx?v=15.1.166.0&l=1). Using AzCopy without /Pattern will upload everything in the source path. As this is a PST ingestion process, you only want *.pst as the /Pattern. When this ingestion process starts to include uploads for SharePoint, then /Pattern will of course not be as useful a value to include.

In the following example, AzCopy is reading from a folder called “C:\Shares\Users” (/source:) and looking in all subdirectories (/S) and only uploading *.pst files (/Pattern:”*.pst”).


azcopy /source:"C:\Shares\Users" /Dest:https://uniqueurl.blob.core.windows.net/ingestiondata/20150101 /DestKey:uniquekey /Pattern:"*.pst" /S /V:"c:\temp\pstIngestion20150101.log"

The data is uploaded to a folder called ingestiondata/20150101 in your Azure storage blob for the PST Ingestion process (notice there is no space after the URL and before ingestiondata, as shown in TechNet). Each file is uploaded to a subfolder of this folder that matches the folder it is located in at the source. For example, if the following folder structure existed:

[Screenshot: example source folder structure containing the PST files]

Then in Azure storage the structure would be like the following:

ingestiondata/20150101/Jenny/Outlook Files/2009/jenny2009.pst

ingestiondata/20150101/Paul/archive.pst

ingestiondata/20150101/Simon/PST Files/2009/SimonArchive.pst

ingestiondata/20150101/Simon/Archive2011.pst

Notice that the folder structure underneath the /Source: path is duplicated to Azure, and, for a real world scenario, notice that Simon has two PST files in different folders. The /Pattern property of AzCopy will find both even though they may not be where you expect them to be. The 20150101 value is just a unique value that I have used (it's a date) that I would change for different uploads, meaning that different uploads would never clash with an existing upload. TechNet suggests a name that represents the file share you set as the source value, so that two uploads from two sources cannot overwrite each other. So in my example I might do an upload on a different day and use a different date value, or I could use CUserShares to represent the local upload and FileServerHome to represent \\fileserver\home. If I used FileServer/Home (changing \ for /) then I am creating additional subdirectories in Azure storage and this needs to be taken into account.

Preparing the PST Mapping File

Once the upload is complete, and note that this is best done overnight as it maximises bandwidth use, you have 30 days to import the files from Azure into your mailboxes. To do this you need to create a CSV file like the following:

Workload,FilePath,Name,Mailbox,IsArchive,TargetRootFolder,SPFileContainer,SPManifestContainer,SPSiteUrl
Exchange,20150101/Jenny/Outlook Files/2009,jenny2009.pst,jenny@contoso.com,FALSE,Archive_jenny2009,,,
Exchange,20150101/Paul,archive.pst,paul@contoso.com,FALSE,Archive_Archive,,,
Exchange,20150101/Simon/PST Files/2009,SimonArchive.pst,simon@contoso.com,FALSE,Archive_SimonArchive,,,
Exchange,20150101/Simon,Archive2011.pst,simon@contoso.com,FALSE,Archive_Archive2011,,,

In Excel, it would look as follows:

[Screenshot: the PST mapping file open in Excel]

This has a few important elements in it. Mainly, the Name value (for the PST filename) is case sensitive, which is not documented in TechNet at this time. I guess the FilePath is as well, but I did not come across that issue as I set all the case to the same as the source. The name matches the PST filename, and the FilePath matches the value after “ingestiondata” in the URL, including the path the file was uploaded from. Therefore in my example for Jenny above, where the PST file was called “jenny2009.pst”, the path on the local file server was “C:\Shares\Users\Jenny\Outlook Files\2009\”, the /Source: was “C:\Shares\Users” and 20150101 was the value used in the /Dest: following the URL, the FilePath in the CSV becomes “20150101/Jenny/Outlook Files/2009”. That is, the CSV needs a FilePath that includes the Dest value (after ingestiondata) plus the local path below the source, with \ changed to / and not including the /Source: value itself.
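A hypothetical helper along these lines (the paths are the examples above) can list each PST with the FilePath and Name values the mapping file expects, preserving the on-disk case:

# Emit FilePath/Name pairs for every PST under the source share
$source = "C:\Shares\Users"
$dest = "20150101"
Get-ChildItem -Path $source -Recurse -Filter *.pst | foreach {
    $relative = Split-Path $_.FullName.Substring($source.Length + 1) -Parent
    [PSCustomObject]@{
        FilePath = ($dest + "/" + ($relative -replace '\\', '/')).TrimEnd('/')
        Name = $_.Name
    }
}

Pipe the output through Export-Csv and add the remaining columns (Workload, Mailbox, IsArchive, TargetRootFolder and the SharePoint columns) to complete the mapping file.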

A second example, if I used the following AzCopy cmdline:


azcopy /source:"\\fileserver\home" /Dest:https://uniqueurl.blob.core.windows.net/ingestiondata/FileServer/home /DestKey:uniquekey /Pattern:"*.pst" /S /V:"c:\temp\pstIngestionFileServerHome.log"

Then I would have FilePath values in the CSV that looked like “FileServer/home/Jenny/Outlook Files/2009” (case sensitive).

Once you upload the mapping file, the PST import from Azure to the Exchange mailbox (or archive) starts. If the PST file cannot be found then you get an error in the management console quite shortly after starting. The error reads as follows:

Could not find source file {0}. Please correct the FilePath column in the mapping file and create a new job with the updated mapping file

Full file path

fileserver/home/Jenny/outlook files/2009/Jenny2009.pst

In the above error I have purposely set the FilePath and PST file name to the wrong case, as that is the cause of this error (unless you did not upload the PST or the path is completely wrong). The best source of the correct FilePath is the AzCopy log file (set with the /V switch for AzCopy). It shows the full path each file was uploaded to, in the correct case for both path and file; it does not include the string value you used after “ingestiondata” in the /Dest: switch, so remember to add that yourself.

All the best with removing PSTs from the network! Of course there is more to do than is mentioned here: you need to find the PSTs, work out who they belong to and create this mapping file accurately. There are a number of PST ingestion software companies who will do this for you. You also need to ensure that the PSTs do not contain bad items, and to control the import settings for the PST import process.

To ensure there are no bad items in the PST files (or try to at least) it is recommended that you scan the PST files with SCANPST.EXE (http://support.microsoft.com/en-us/kb/272227). This tool needs to be run on all PST files that you have located before you upload them, or if bandwidth is not an issue, to upload them, import them and then process only those that fail.

Once SCANPST.EXE is complete, upload the new PST file and import it again (probably with a new mapping file). Then also tell the PST Ingestion service to continue processing items even if it finds bad items. To do this you need to configure a custom BadItemLimit once the import starts (the current BadItemLimit default is 0, which means fail at the first bad item). You will see “TooManyBadItemsPermanentException” errors in the import log file if you need to do this. To set the BadItemLimit use either of the following:

  1. Connect to Exchange Online via remote PowerShell
  2. Get-MailboxImportRequest | FL name, mailbox, status, whencreated, requestguid
  3. This returns a list of import requests. Look for the most recent and get the requestguid value.
  4. Set-MailboxImportRequest -Identity "request-guid-found-above" -BadItemLimit unlimited -AcceptLargeDataLoss

Or you can just set the BadItemLimit the same for all imports without looking for the latest one:

$all_import_requests = Get-MailboxImportRequest
foreach ($import_request in $all_import_requests)
{
    Set-MailboxImportRequest -Identity ($import_request).RequestGuid -BadItemLimit unlimited -AcceptLargeDataLoss
}

Managing Office 365 Groups With Remote PowerShell


Announced during Microsoft Ignite 2015, there are now PowerShell administration cmdlets available for the administration of the Groups feature in Office 365.

The cmdlets are all based around “UnifiedGroup”, for example Get-UnifiedGroup.

Create a Group

Use New-UnifiedGroup to do this. An example would be New-UnifiedGroup -DisplayName "Sales" -Alias sales -EmailAddress sales@contoso.com

The use of the EmailAddress parameter is useful as it allows you to set a group that is not given an email address based on your default domain, but from one of the other domains in your Office 365 tenant.

Modify a Group's Settings

Use Set-UnifiedGroup to change settings such as the ability to receive emails from outside the tenant (RequireSenderAuthenticationEnabled would be $false), limiting email to a whitelist of senders (AcceptMessagesOnlyFromSendersOrMembers) and other Exchange distribution list settings such as hiding from address lists, MailTips and the like. AutoSubscribeNewMembers can be used to tell the group to email all new messages to all new members, and PrimarySmtpAddress to change the email address that the group sends from.

Remove a Group

This is the new Remove-UnifiedGroup cmdlet.

Add Members to a Group

This cmdlet is Add-UnifiedGroupLinks. For example, Add-UnifiedGroupLinks sales -LinkType members -Links brian,nicolas will add the two named members to the group. The LinkType value can be members as shown, but also “owners” and “subscribers”, to add group administrators (owners) or those who receive email sent to the group but get no access to the group's content (subscribers). To change members to owners you do not need to remove the members; just run something like Add-UnifiedGroupLinks sales -LinkType owners -Links brian,nicolas

You can also pipe in a user list, for example from a CSV file, to populate a group. This would read Add-UnifiedGroupLinks sales -LinkType members -Links $users, where $users = Get-Content username.csv would be run before it to populate the $users variable. The source of the variable can be anything done in PowerShell.

Remove Members from a Group

For this use Remove-UnifiedGroupLinks and mention the group name, the LinkType (member, owner or subscriber) and the user or users to remove.

To Disable Group Creation in OWA

A policy that is not allowed to create Groups is created with New-OWAMailboxPolicy and configured with Set-OWAMailboxPolicy, and then users have that policy applied to them. For example Set-OWAMailboxPolicy "Students" -GroupCreationEnabled $false followed by Set-CASMailbox mary -OWAMailboxPolicy Students stops the user “mary” creating groups (a full sketch follows below). After the policy is assigned and propagates around the Office 365 service, the user can join and leave groups, but not create them.
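Putting those together, a hedged end-to-end sketch using the example names above:

# Create the policy, block group creation, and assign it to a user
New-OWAMailboxPolicy -Name "Students"
Set-OWAMailboxPolicy -Identity "Students" -GroupCreationEnabled $false
Set-CASMailbox -Identity mary -OWAMailboxPolicy "Students"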

Control Group Naming

This feature allows you to control the group name or block words from being used. This is easier to set in the Distribution Groups settings in the Exchange Control Panel than via PowerShell. To do this in EAC use Recipients > Groups, click the ellipses icon (…) and select Configure Group Naming Policy. This is the same policy as for distribution groups. You can add static text to the start or end of the name, as well as dynamic text such as region.

Admins creating groups are not subject to this policy. Unlike DLs, the policy is also not applied when admins create groups in PowerShell, so the -IgnoreNamingPolicy switch is not required.

Exchange OWA and Multi-Factor Authentication


Multi-factor authentication (MFA), that is the need to have a username, a password and something else to pass authentication, is possible with on-premises servers using a service from Windows Azure and the Multi-Factor Authentication Server (an on-premises piece of software).

The Multi-Factor Authentication Server intercepts login requests to OWA; if the request is valid (that is, the username and password work) then the mobile phone of the user is called or texted (or an app starts automatically on the phone) and the user validates their login. This is typically done by pressing # (if a phone call) or clicking Verify in the app, but can require the entry of a PIN as well. Note that when the MFA server intercepts the login request in OWA, there is no user interface in OWA to tell you what is happening. This can leave the user confused by the process, and it stops the use of two-way MFA (the receive-a-number-by-text, type-the-number-into-the-web-application type of scenario). For those reasons, MFA directly on the OWA application is not supported by the Microsoft Exchange team; the supported route is via ADFS. Steps for setting up ADFS for Exchange Server 2013 SP1 or later are at https://technet.microsoft.com/en-us/library/dn635116(v=exchg.150).aspx. Once this is in place, you need to enable MFA for ADFS rather than MFA for OWA. I have covered this in a separate post at http://c7solutions.com/2016/04/installing-azure-multi-factor-authentication-and-adfs.

To configure Multi-Factor Authentication Server for OWA (unsupported) you need to complete the following steps:

Some of these steps are the same regardless of which service you are adding MFA to and some are slightly different. I wrote a blog on MFA and VPN at http://c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication which contains the general setup steps, so these are not repeated here; only what you need to do differently is covered below.

Step 1

See http://c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication

Step 2: Install MFA Server on-premises

This is covered in http://c7solutions.com/2015/01/windows-rras-vpn-and-multi-factor-authentication, but the difference with OWA is that it needs to be installed on the Exchange CAS server where the authentication takes place.

Ensure you have .NET 3.5 installed via Server Manager > Features. This will install the .NET 2.0 feature that is required by MFA server. If the installation of the download fails, this is the most likely reason for the failure, so install .NET 3.5 and then try the MFA Server install again.

The install of the MFA server does not take very long. After a few minutes the install will complete and then you need to run the Multi-Factor Authentication Server admin tools. These are on the Start Screen in More Apps or on the Start Menu. Note that the installer will start the admin tools itself if given time:

[Screenshots: Multi-Factor Authentication Server setup completing and the admin tools starting]

Do not skip the wizard; click Next. You will be asked to activate the server. Activating the server links it to your Azure MFA instance. The email address and password you need are obtained from the Azure multi-factor auth provider that was configured in Step 1: click Generate Activation Credentials on the Downloads page of the Azure MFA auth provider management page.

[Screenshot: the Generate Activation Credentials option on the Azure MFA provider Downloads page]

The credentials are valid for ten minutes, so yours will differ from mine. Enter them into the MFA Server configuration wizard and click Next.

MFA Server will attempt to reach Azure over TCP 443.

Select the group of servers that the configuration should replicate around. For example, if you were installing this software on each Exchange CAS server, then you might enter “Exchange Servers” as the group name during the first install and then select it during the install on the remaining servers. This config will be shared amongst all servers with the same group name. If you already have a config set up with users in it and set up a new group here, the users will get different settings per group. For example you might use a phone call to authenticate a VPN connection but the app for OWA logins; this would require two configs and different groups of servers. If you want the same settings for all users in the entire company, then one group (the default group) should be configured.

[Screenshot: selecting the replication group in the MFA Server configuration wizard]

Next choose if you want to replicate your settings. If you have more than one MFA Server instance in the same group select yes.

Then choose what you want to authenticate. Here I have chosen OWA:

[Screenshot: choosing OWA as the application to authenticate]

Then I need to choose the type of authentication I have in place. In my OWA installation I am using the default of Forms Based Authentication, but if you select Forms-based authentication here, the example URL for forms-based authentication shown on the next page is from Exchange Server 2003 (not 2007 or later). Therefore I select HTTP authentication.

Next I need to provide the URL to OWA. I can get this by browsing the OWA site over HTTPS. The MFA install will also use HTTPS, so you will need a certificate issued by a trusted third party if you want to support user managed devices. Users managing their own MFA settings (such as telephone numbers and form of authentication) reduces the support requirement; that needs the User Portal, the SDK and the Mobile App webservice installed as well, which are outside the scope of this blog. For here I am going to use https://servername/owa.

[Screenshot: entering the OWA URL in the configuration wizard]

Finish the installation at this time and wait for the admin application to appear.

Step 3: Configure Users for MFA

Here we need to import the users who will be authenticated with MFA. Select the Users area and click Import from Active Directory. Browse the settings to import group members, or OUs, or use a search to add your user account. Once you have it working for yourself, add others. Users not listed here will not see any change in their authentication method.

Ensure that your test user has a mobile number imported from Active Directory. If not, add one, choosing the correct country code as well. The default authentication for the user is that they will get a phone call to this number and need to press # before they can be logged in. Ensure that the user is set to Enabled as well in the Users area of the management program.

Step 4: Configure OWA for MFA (additional steps)

On the IIS Authentication node you can adjust the default configuration for HTTP. Here you need to set Require Multi-Factor Authentication user match. This ensures that each authentication attempt is matched to a user in the users list. If the user exists and is enabled, then MFA is performed for them. If the user exists but is disabled, then the Succeed Authentication setting on the Advanced tab comes into play. If the user is not listed, authentication passes through without MFA.

[Screenshot: the IIS Authentication node with Require Multi-Factor Authentication user match set]

Change to the Native Module tab and select OWA under the Default Web Site only. Do not set authentication on the Exchange Back End website. Also enable the native module on ECP under the Default Web Site as well:

[Screenshot: the Native Module tab with OWA and ECP enabled on the Default Web Site]

Then I can attempt a login to OWA or ECP. Once I successfully authenticate my phone rings and I am prompted to press #. Once I press # I am allowed into Exchange!

SSL and Exchange Server


In October 2014 or thereabouts it became known that the SSL protocol (specifically SSL v3) was broken and decryption of the encrypted data was possible. This blog post sets out the steps to protect your Exchange Server organization regardless of whether you have one server or many, and whether or not you use a load balancer. As load balancers can terminate the SSL session and recreate it, changes might be needed on your load balancer or directly on the servers that run the CAS role. This blog post will cover both options and looks at the settings for a Kemp load balancer and a JetNexus load balancer.

Of course being an Exchange Server MVP, I tend to blog about Exchange related stuff, but actually this is valid for any server that you publish to the internet and probably valid of any internal server that you encrypt traffic to via the SSL suite of protocols. Microsoft outline the below configuration at https://technet.microsoft.com/en-us/library/security/3009008.aspx.

The steps in this blog will look at turning off the SSL protocol in Windows Server and turning on the TLS protocol (which does the same thing as SSL and is interchangeable with SSL, but more secure at the time of writing, Jan 2015). Some clients do not support TLS (such as Internet Explorer on Windows XP Service Pack 2 or earlier), so securing your servers as you need to do may stop some home users connecting to your Exchange Servers; but as XP SP2 should not be in use in any business now, these changes should not affect desktops. You could always use a different browser on XP as that might mitigate this issue, but using XP is a security risk in and of itself anyway! Disabling clients from connecting to SSL v3 sites requires a client or GPO setting, which can be found via your favourite search engine.

Note that the registry settings and updates for the load balancers in this blog post will restrict client access to your servers if your client cannot negotiate a mutual cipher and secure channel protocol. Therefore care and testing are strongly advised.

Testing and checking your changes

Before you make any changes to your servers, especially internet facing ones, check and document what you have in place at the moment using https://www.ssllabs.com/ssltest. This service will connect to an SSL/TLS protected web site and report back on the issues found. Before running any of the changes below see what overall rating you get and document the following:

  • Authentication section: record the signature algorithm. It is possible the certificate authority signature will be marked “SHA1withRSA WEAK SIGNATURE”. This certificate, if rekeyed and issued again by your certificate authority, might be replaced with a SHA-2 certificate. The Google Chrome browser from September 2014 will report sites secured with this SHA-1 certificate as not fully trustworthy based on the expiry date of the certificate. If your certificate expires after Jan 1st 2017 then get it rekeyed as soon as possible. As 2015 goes on this date will move closer in time: from early 2015 the cut-off date becomes June 1st 2016, and so on. Details on the dates for this impact are in http://googleonlinesecurity.blogspot.co.uk/2014/09/gradually-sunsetting-sha-1.html. You can also use https://shaaaaaaaaaaaaa.com/ to test your certificate if the site is public facing, and this website gives details on who is now issuing SHA-2 keyed certificates. You can examine your external servers for SHA-1 certificates and the impact in Chrome (and later IE and Firefox) at https://www.digicert.com/sha1-sunset/. To do the same internally, use the DigiCert Certificate Inspector at https://www.digicert.com/cert-inspector.htm.
  • Authentication section: record the path values. Ensure that each certificate is either in the trust store or sent by the server and not an extra download.
  • Configuration section: document the cipher suites that are provided by your server
  • Handshake simulation section: here it will list browsers and other devices (mobile phones) and what their default cipher is. If you do not support the cipher they support then you cannot communicate. Note that you typically support more than one cipher and the client will often support more than one cipher too, so though it is shown here as a mismatch this does not mean that it will not work. If a listed client is used by your users, click the link for the client and ensure that the server offers at least one of the ciphers required by the client – unless all the ciphers are insecure, in which case do not use that client!

Once you have a document on your current configuration, and a list of the clients you need to support and the ciphers they need you to support, you can go about removing SSL v3 and insecure ciphers.

Disabling SSL v3 on the server

To disable SSL v3 on a Windows Server (2008 or later) you need to set the Enabled registry value at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server” to 0. If this value does not exist, create a DWORD value called “Enabled” and leave it at 0. You then need to reboot the server.
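If you prefer to script the change rather than edit the registry by hand, a hedged PowerShell equivalent is:

# Disable SSL v3 server-side (reboot afterwards for it to take effect)
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server'
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name 'Enabled' -Value 0 -PropertyType DWord -Force | Out-Null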

If you are using Windows 2008 R2 or earlier you should enable TLS v1.1 and v1.2 at the same time. Those versions of Windows Server support TLS v1.1 and v1.2 but they are not enabled (only TLS v1.0 is enabled). To enable TLS v1.1, set the Enabled value at “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server” to 1; change the path to “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server” and make the same setting to support TLS v1.2. If these keys do not exist, create them. It is also documented that the “DisabledByDefault” value is required, but I have seen this noted as being just the opposite of the “Enabled” value. Therefore, as I have not actually verified this, I set both Enabled to 1 and DisabledByDefault to 0.

To disable both SSL v2 and v3 (v2 can be enabled on older versions of Windows and should be disabled as well) I place the following in a .reg file and double-click it on each server, followed by a reboot for it to take effect. This .reg file also disables the RC4 ciphers. These ciphers have been considered insecure for a few years, so when I configure my servers not to support SSL v3 I disable the RC4 ciphers at the same time.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 128/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 40/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers\RC4 56/128]
"Enabled"=dword:00000000

Then I use the following .reg file to enable TLS v1.1 and TLS v1.2:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

Once you have applied both of the above sets of registry keys you can reboot the server at your convenience. Note that the keys may set values that are already in effect – for example, TLS v1.1 and v1.2 are already enabled on Exchange 2013 CAS servers and SSL v2 is already disabled. The first of the below graphics comes from a test environment of mine running Windows Server 2012 R2 without any of the above registry keys set. You can see that Windows Server 2012 R2 is vulnerable to the POODLE attack and supports the weak RC4 ciphers.

[Image: SSL Labs test result for an unconfigured Windows Server 2012 R2 server]

The F grade below comes from a patched, but unconfigured with regard to SSL, Windows Server 2008 R2 server:

[Image: SSL Labs F grade for an unconfigured Windows Server 2008 R2 server]

After setting the above registry keys and rebooting, the test at https://www.ssllabs.com/ssltest then showed the following for 2012 R2 on the left (A grade) and Windows Server 2008 R2 on the right (A- grade):

[Images: SSL Labs A grade for Windows Server 2012 R2 and A- grade for Windows Server 2008 R2 after the changes]
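You can also confirm what ended up in the registry locally. A quick sketch to list every SChannel protocol key present and its values (keys that were never created simply will not appear):

# List each SChannel protocol key and its Enabled/DisabledByDefault values
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols'
Get-ChildItem -Path $base -Recurse | ForEach-Object {
    $values = Get-ItemProperty -Path $_.PSPath
    # Values print as blank where they are not set in that key
    '{0} : Enabled={1} DisabledByDefault={2}' -f $_.Name, $values.Enabled, $values.DisabledByDefault
}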

Disabling SSL v3 on a Kemp LoadMaster load balancer

If you protect your servers with a load balancer, which is common in the Exchange Server world, then you need to set your SSL and cipher settings on the load balancer as well – unless you are only balancing at TCP layer 4 and doing SSL pass-through, in which case the changes are made on the server via the above section and not on the load balancer. If you do SSL termination on the load balancer (TCP layer 7 load balancing) then I recommend setting the registry keys on the Exchange Servers anyway, to avoid security issues if you ever need to connect to a server directly – if you are going to disable SSL v3 in one location (the load balancer) there is no problem in disabling it on the server as well.

For a Kemp load balancer you need to be running version 7.1-20b to be able to do the following, and to ensure that the SSL code on the load balancer is not susceptible to issues such as Heartbleed. To configure your load balancer to disable SSL v3 you need to modify the SSL properties of the virtual server and check the “Support TLS Only” option.

To disable the weak RC4 ciphers there are a few choices, but the easiest I have seen is to select “Perfect Forward Secrecy Only” under Selection Filters and then add all the listed filters. Then from this list remove the three RC4 ciphers.

If you do not select “Support TLS Only” and leave the ciphers at the default level then your load balancer will get a C grade in the test at https://www.ssllabs.com/ssltest because it is vulnerable to the POODLE attack. Setting just the “Support TLS Only” option and leaving the default ciphers in place will result in a B grade, as RC4 is still supported. Removing the RC4 ciphers (by following the instructions above to add the Perfect Forward Secrecy ciphers and remove the RC4 ciphers from that list) as well as allowing only the TLS protocol will result in an A grade.

[Image: Kemp LoadMaster SSL and cipher configuration]

Kemp version 7.1-22b additionally drops SSL v3 support for the API and web interface, on top of the above steps that protect the virtual services the load balancer offers.

Kemp Technologies document the above steps at https://support.kemptechnologies.com/hc/en-us/articles/201995869, and point out the unobvious setting that if you filter the cipher list with the “TLS 1.x Ciphers Only” setting then it will only show you the TLS 1.2 ciphers and not any TLS 1.1 or TLS 1.0 ciphers. Therefore selecting “TLS 1.x Ciphers Only” rather than filtering using “Perfect Forward Secrecy Only” will result in a reduced client list, which may be an issue.

I was able to achieve an A grade on the SSL Labs test site. My certificate uses SHA-1, but it expires in 2015, so by the time SHA-1 is reported as an issue in the browser I will have changed it anyway.

[Image: SSL Labs A grade result for the Kemp-protected site]

Disabling SSL v3 on a JetNexus ALB-X load balancer

As with the Kemp above: if you are doing SSL termination on the load balancer (TCP layer 7) the protocol and cipher changes below are made on the JetNexus itself, whereas for layer 4 SSL pass-through the changes in the server section above are the ones that matter – and it does no harm to set the server registry keys in either case.

For a JetNexus ALB-X load balancer you need to be running build 1553 or later. Build 1553 is a version 3 build, so any version 4 build is higher and therefore also valid. This build (version 3.54.3) or later is needed to ensure Heartbleed mitigation and to allow the following configuration changes to be applied.

To configure the JetNexus you need to upload a config file to turn off SSL v3 and the RC4 ciphers. The config file is a .txt file that is uploaded to the load balancer. In version 4, the file can be uploaded to the primary cluster node and the changes are replicated to the second node in the cluster automatically.

Before you upload a config file to make the changes required, ensure that you back up the current configuration from Advanced >> Update Software and click the button next to Download Current Configuration to save the configuration locally. Ensure you back up all nodes in a v4 cluster as appropriate.

Then select one of the three config file settings below, copy it to a text file, and upload it from Advanced >> Update Software using the Upload New Configuration option. The upload will reset all connections, so do this during a quiet period.

The three configs are: to reset the default ciphers; to disable SSL v3 and RC4; and to disable TLS v1.0 as well as SSL v3 and RC4.

JetNexus protocol and cipher defaults:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH"
Cipher1=""
Cipher2=""
CipherOptions="CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable SSL v3 and disable RC4 ciphers:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"
Cipher1=""
Cipher2=""
CipherOptions="NO_SSLv3,CIPHER_SERVER_PREFERENCE"

JetNexus protocol and cipher changes to disable TLS v1.0, SSL v3 and disable RC4 ciphers:

#!update

[jetnexusdaemon]
Cipher004="ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:HIGH:!MD5:!aNULL:!EDH:!RC4"
Cipher1=""
Cipher2=""
CipherOptions="NO_SSLv3,NO_TLSv1,CIPHER_SERVER_PREFERENCE"

On my test environment I was able to achieve an A- grade on the SSL Labs test website with the config that disables TLS v1.0, SSL v3 and RC4 in place. The A- is because of a lack of support for Forward Secrecy with the reference browsers used by the test site.

[Image: SSL Labs A- grade for the JetNexus-protected site]

Browsers and Other Clients

There is too much to discuss with regard to clients, other than that they need to support the same ciphers as mentioned above. A good guide to clients can be found at https://www.howsmyssl.com/s/about.html, and from there you can test your client as well.

Additional comment 23/1/15: One important comment to make, courtesy of Ingo Gegenwarth at https://ingogegenwarth.wordpress.com/2015/01/20/hardening-ssltls-and-outlook-for-mac/, concerns the TLS Renegotiation Indication Extension update in RFC 5746. It is possible to use the AllowInsecureRenegoClients registry value at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL to ensure that only clients with the update mentioned at http://support.microsoft.com/kb/980436 are allowed to connect. If this is enabled (set to Strict Mode) and SSL 2 and 3 are disabled as above, then Outlook for Mac clients cannot connect to your Exchange Server. If this value is deleted or has a non-zero value then connections to SSL 2 and 3 can be made, but only for a renegotiation to TLS. Therefore ensure that you allow Compatibility Mode (which is the default) when you disable SSL 2 and 3, as Outlook for Mac and Outlook for Mac for Office 365 both require SSL support to then be able to start a TLS session.
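To check where a server currently stands before disabling SSL 2 and 3, a one-line sketch (per the behaviour described above: 0 means Strict Mode; absent or non-zero means Compatibility Mode, the default):

# Returns nothing if the value is absent (Compatibility Mode, the default); 0 = Strict Mode
$schannel = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL'
(Get-ItemProperty -Path $schannel -Name AllowInsecureRenegoClients -ErrorAction SilentlyContinue).AllowInsecureRenegoClients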

Group Policy Import To Fix Google Chrome v37 Issues With Exchange Server and Microsoft CRM

Posted on 2 CommentsPosted in 2010, 2013, Chrome, crm, Dynamics, exchange, exchange online, Group Policy, IAmMEC, Office 365, owa

A recent update to Google Chrome (37.0.2062.120) removed the ability to support modal dialog boxes. These are dialogs that require your attention and stop you going back to the previous page until you have completed the info required – they are very useful in workflow type scenarios.

Google claim that as only 0.004% of web sites use them (from the anonymous statistics gathering that you can opt into in Chrome) they are justified in removing support for them – but they have not removed other things that have the same level of usage!

With this version of Chrome (or the Chromium open source browser) there is a workaround, valid until April 30th 2015, that will allow modal dialogs to work again. Without this workaround, clicking links in OWA and ECP in Exchange 2010, and OWA and EAC in Exchange Online and Exchange 2013, will not pop up the expected dialog. This can cause issues such as the inability to attach files in OWA and to create objects in ECP/EAC as an administrator. Popups in Microsoft CRM also do not work.

As a workaround you could use a different browser, but if Chrome is your browser of choice (even though it does not work here) and your machines are joined to a domain, then you can download the following GPO export file and import it into your Active Directory to enable modal dialogs to work again in Exchange Server, Office 365 and Microsoft CRM products.

To download and import this GPO file to enable Chrome modal dialog box functionality to resume (until 30th April 2015, when Google stops allowing the workaround) follow these steps:

  1. Download Google Chrome Show Modal Dialog Before 30 April 2015.zip
  2. Copy to a domain controller and expand the zip file. Ensure the contents of the zip file are not placed directly on your desktop, as you cannot import from the desktop directly; if you expand the zip to the desktop, copy the one folder that was created into a new subfolder.
  3. Start Group Policy Management MMC admin tool.
  4. Expand Forest > Domain > Your Domain > Group Policy Objects.
  5. Right click “Group Policy Objects” and choose New.
  6. Create a new GPO called “Chrome and Chromium Modal Dialog Box Allow”:
    [Image: naming the new GPO]
  7. Right click “Chrome and Chromium Modal Dialog Box Allow” GPO that you just made and choose Import Settings
  8. Proceed through the import wizard. You do not need to back up this new GPO on the second page of the dialog as the new GPO is empty.
  9. On the third page of the wizard browse to the parent folder containing the contents of the download above:
    [Image: browsing to the backup folder]
  10. Click Next and you should see one backed up GPO listed:
    [Image: the backed up GPO listed in the import wizard]
  11. Click Next to import this. If you click View Settings first, a web page will open showing you that this GPO sets two registry keys for the computer and two registry keys for the user. These set SOFTWARE\Policies\Chromium\EnableDeprecatedWebPlatformFeatures and Software\Policies\Google\Chrome\EnableDeprecatedWebPlatformFeatures (for the Chromium and Chrome browsers respectively), each with a string value named 1 containing ShowModalDialog_EffectiveUntil20150430. A registry sketch of these values is shown after this list.
  12. Proceed with Next and then Finish and the import will begin:
    [Image: import progress]
  13. Click OK.
  14. Now link the GPO object to the root of your domain so it impacts all users, and to the root of any OU that blocks inheritance. Import to other domains as above, or link from this domain, depending upon your current policy for managing GPOs across domains.
  15. Delete the zip and folder you downloaded. They are not needed any more.
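For reference, or to test a single machine before importing the GPO, the computer-side registry data described in step 11 would look something like the following sketch of a .reg file (built only from the value names listed above; the GPO import remains the intended route):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\EnableDeprecatedWebPlatformFeatures]
"1"="ShowModalDialog_EffectiveUntil20150430"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Chromium\EnableDeprecatedWebPlatformFeatures]
"1"="ShowModalDialog_EffectiveUntil20150430"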

Exchange Online Free/Busy Issues with OAuth Authentication

Posted on Leave a commentPosted in 2010, 2013, EWS, exchange, exchange online, Free/Busy, OAuth, Office 365

Update: 10 Dec 2014: It is reported that this issue is fixed in CU7 for Exchange Server 2013

OAuth authentication is a new server-to-server authentication model available in Exchange 2013 SP1 and later and Exchange Online (Office 365). With OAuth enabled, Exchange hybrid in place, multiple on-premises Exchange endpoints, and those on-premises Exchange Servers running different versions, you might have issues getting Exchange Online to on-premises free/busy lookups to work.

Here is the scenario:

Company with Exchange 2010 servers in multiple internet connected sites, going hybrid to Exchange Online.

Exchange Online tenant created and hybrid mode put in place between Exchange Online and Exchange Server 2013 on-premises. In the site where the Exchange 2013 hybrid servers are located there are Exchange 2010 SP3 servers. As hybrid mode was set up with SP1, OAuth was enabled.

Exchange 2010 in the remote sites is configured with an ExternalURL for EWS. Therefore a free/busy lookup from an Office 365 user to a mailbox in one of these remote sites goes direct to the EWS endpoint on Exchange 2010 – it is not proxied via the 2013 hybrid server.

With OAuth enabled this configuration will fail, as Exchange Online will attempt to authenticate to Exchange 2010 on-premises with OAuth, which Exchange 2010 does not support. The IIS logs will contain entries such as this:

2014-07-22 19:39:34 10.100.28.73 POST /ews/exchange.asmx – 443 – 10.100.28.220 ASProxy/CrossForest/EmailDomain//15.00.0985.008 401 0 0 0

The 401 indicates authentication failed, and the path ASProxy/CrossForest/EmailDomain indicates OAuth is in use. There will be no entries in the IIS log for the Federation Org type of authentication.

If the EWS connection for free/busy goes via the 2013 hybrid server (the ExternalURL for the remote site is null) then the free/busy lookup works; alternatively, if the OAuth connector in Exchange Online is disabled (see the PowerShell sketch below) and the EWS lookup for free/busy goes direct to the remote Exchange 2010 server, then free/busy lookups also work.
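The connector can be inspected and disabled from an Exchange Online remote PowerShell session. A minimal sketch (re-enable later with -Enabled $true, for example once the fix mentioned in the update above is deployed):

# Check the current state of the OAuth intra-organization connector
Get-IntraOrganizationConnector | Format-List Name,Enabled,TargetAddressDomains

# Disable it so cross-premises requests fall back to the federation trust
Get-IntraOrganizationConnector | Set-IntraOrganizationConnector -Enabled $false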

So if you want OAuth and direct EWS connections to remote sites for free/busy, you need Exchange 2013 at those remote sites. If you want Exchange 2013 hybrid servers only at your primary site (for mail flow) and OAuth as well (for cross-forest eDiscovery), then you need to proxy your EWS free/busy requests via the Exchange 2013 hybrid server.

This is a known issue in Exchange and may be fixed in the future.

Speaking at TechEd Europe 2014

Posted on 4 CommentsPosted in certificates, cloud, EOP, exchange, exchange online, Exchange Online Protection, GeoDNS, hybrid, IAmMEC, journaling, mcm, mcsm, MVP, Office 365, smarthost, smtp, starttls, TechEd, TLS, transport

I’m pleased to announce that Microsoft have asked me to speak on “Everything You Need To Know About SMTP Transport for Office 365” at TechEd Europe 2014 in Barcelona. It’s going to be a busy few weeks, as I go straight from there to the MVP Summit in Redmond, WA.


My session will look at how you can ensure your migration to Office 365 is successful with regards to keeping mail flow working and not seeing any undeliverable messages. We will cover real world scenarios for hybrid and staged migrations so that we can consider the impact on mail flow at all stages of the project. We will look at testing mail flow, SMTP to multiple endpoints, solving firewalling issues, and how email addressing and distribution group delivery is done in Office 365, so that we always know where a user is and what is going to happen when they are migrated.

Compliance and hygiene issues will be covered with regards to potentially journaling from multiple places and the impact of having anti-spam filtering in Office 365 that might not be your mail flow entry point.

We will consider the best practices for changing SMTP endpoints, when is a good time to change over from on-premises-first to cloud-first delivery, and, if you need to maintain on-premises delivery, how you should go about that process.

And finally we will cover troubleshooting the process should it go wrong or how to see what is actually happening during your test phase when you are trying out different options to see which works for your company and your requirements.

Full details of the session, once it goes live, are at http://teeu2014.eventpoint.com/topic/details/OFC-B350 (Microsoft ID login needed to see this). Room and time to be announced.

Creating Mailboxes in Office 365 When Using DirSync

Posted on 18 CommentsPosted in 2008 R2, 2012, 2012 R2, 2013, Azure, cloud, dirsync, exchange, exchange online, Office 365

This blog post describes the process to create a new user in Active Directory on-premises when email is held in Office 365 and DirSync is in use. With DirSync in use the editable copy of the user object is on-premises and most attributes cannot be modified in the cloud.

Creating the User

  1. Open Active Directory Users and Computers on a Windows 2008 R2 or later server. Ensure that Advanced Features is enabled (View > Advanced Features)
    • Note that if you do not have 2008 R2 or later then use ADSI Edit to make the changes mentioned below that are made on the Attribute Editor tab in Active Directory Users and Computers 2008 R2 or later.
  2. Create an Active Directory user as you normally would. Do not complete any Exchange Server properties if you are requested to do so – completing them on-premises will make a mailbox on-premises that will then need to be migrated to Exchange Online. This document describes creating the mailbox online.
  3. Ensure that the user’s email address on the General tab of the AD properties is correct.
  4. Ensure that the user’s logon name on the Account tab is as follows:
    1. User Logon Name: The first part of their email address
    2. The Domain name drop-down: The second part of their email address (not the AD domain name if they are different)
    3. User Logon Name (Pre Windows 2000): the DOMAIN part as provided; for the username use the first part of the email address (i.e. first.last etc.). If the first part of the email is too long, enter as much as you can and ensure it is unique within the domain.

Setting the Email Address Properties

  1. On the Attribute Editor tab ensure that Filter > Show only attributes that have values is not selected. Then find and enter the following information (a PowerShell alternative is sketched after this list):
    1. proxyAddresses: SMTP:primary.email@domain for this user – SMTP needs to be in capitals. Then add additional email addresses as required, but these start with smtp: in lower case.
    2. targetAddress: SMTP:first_part_of_email@tenantname.onmicrosoft.com
    3. Note that both these addresses need to be unique within your directory – Attribute Editor will not check them for uniqueness, but they will fail to replicate to Azure with DirSync if they are not unique.
  2. Click OK and close the account creation dialog.
  3. Within three hours this object will sync to Windows Azure Active Directory.
    1. This can be sped up by logging into the DirSync server and starting PowerShell
    2. Type “Import-Module DirSync” in PowerShell
    3. Type “Start-OnlineCoexistenceSync” in PowerShell – DirSync will replicate now rather than waiting up to three hours.
  4. Check that the DirSync process was successful – if you have entered values that are not unique then DirSync will fail to replicate them and you will need to fix them on-premises and replicate them again.
  5. Licence the user in Office 365 by logging into https://portal.office.com and granting a licence to this user that contains an Exchange Online licence. The mailbox will be created automatically shortly after this.
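If you prefer to script these attribute changes rather than use the Attribute Editor tab, a minimal sketch using the RSAT ActiveDirectory module is below. The identity, addresses and tenant name are illustrative examples – substitute your own values:

# Sketch: stamp proxyAddresses and targetAddress on the on-premises user
Import-Module ActiveDirectory
Set-ADUser -Identity 'first.last' `
    -Add @{ proxyAddresses = 'SMTP:first.last@domain.com','smtp:alias@domain.com' } `
    -Replace @{ targetAddress = 'SMTP:first.last@tenantname.onmicrosoft.com' }

As above, SMTP: in capitals marks the primary address, and both values must be unique in the directory or DirSync will fail to replicate them.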

Additional Attributes

The following is a list of attributes to change in ADSI Edit or on the Attribute Editor tab to modify other settings as required:

Important Point

The above attributes are not the full and exclusive list of attributes and values that you need to set. For example, in Jan 2018 Microsoft published support for delegate access permissions across forests in a hybrid deployment – this uses values that are mentioned in the full list linked in the paragraph above but are not set here.

This document should only be used as a reference and not to create or maintain mailboxes for AD accounts that are synced to the cloud – for that you need an Exchange Server, as that is the only supported way to maintain your Exchange Online attributes. At Microsoft Ignite in 2017 it was announced that cloud management for synced accounts is coming – until then you are best advised to keep an Exchange Server installed on-premises, even if only for its admin tools.

Exchange Web Services (EWS) and 501 Error

Posted on 12 CommentsPosted in 2010, 2013, EWS, exchange, exchange online, hybrid, IAmMEC, iis, Kemp, Load Master, Office 365, tmg

As is common with a lot that I write in this blog, it is based on noting down the answers to stuff I could not find online. For this issue, I did find something online by Michael Van “Hybrid”, but finding it was the challenge.

So rather than detailing the issue and the reason (you can read that on Michael’s blog), this post covers the steps to troubleshoot the issue.

So first the issue. When starting a migration test from an Exchange 2010 mailbox with an Exchange 2013 hybrid server (running the mailbox and CAS roles) behind a Kemp load balancer (running 7.0-16 – the latest release at the time of writing, but recently upgraded from version 5 through 6 to 7) I got the following error:

[Image: Exchange migration error dialog]

The server name will be different (thanks Michael for the screenshot). In my case this was my client’s UK datacentre. The client’s Hong Kong datacentre was behind a Kemp load balancer as well, but is only running Exchange 2010, and the New York datacentre has an F5 load balancer. Moves from HK worked, but UK and NY failed for different reasons.

The issue shown above is not easy to solve as the migration dialog tells you nothing. In my case it was also reporting the wrong server name. It should have been returning the external EWSUrl from Autodiscover for the mailbox I was trying to move; instead it was returning the Outlook Anywhere external URL from the New York site (as the UK is proxied via NY for the OA connections). For moves to the cloud, we added the external URL for EWS to each site directly so we would move direct and not via the only site that offered internet connected email.

So troubleshooting started with exrca.com – the Microsoft Connectivity Analyser. Autodiscover worked most of the time in the UK, but the Synchronization, Notification, Availability, and Automatic Replies tests (which test EWS) always failed after six and a half seconds.

Autodiscover returned the following error:

A Web exception occurred because an HTTP 400 – BadRequest response was received from Unknown.
HTTP Response Headers:
Connection: close
Content-Length: 87
Content-Type: text/html
Date: Tue, 24 Jun 2014 09:03:40 GMT

Elapsed Time: 108 ms.

And EWS, when AutoDiscover was returning data correctly, was as follows:

Creating a temporary folder to perform synchronization tests.

Failed to create temporary folder for performing tests.

Additional Details

Exception details:
Message: The request failed. The remote server returned an error: (501) Not Implemented.
Type: Microsoft.Exchange.WebServices.Data.ServiceRequestException
Stack trace:
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.ValidateAndEmitRequest(IEwsHttpWebRequest& request)
at Microsoft.Exchange.WebServices.Data.ExchangeService.InternalFindFolders(IEnumerable`1 parentFolderIds, SearchFilter searchFilter, FolderView view, ServiceErrorHandling errorHandlingMode)
at Microsoft.Exchange.WebServices.Data.ExchangeService.FindFolders(FolderId parentFolderId, SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.WebServices.Data.Folder.FindFolders(SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.Tools.ExRca.Tests.GetOrCreateSyncFolderTest.PerformTestReally()
Exception details:
Message: The remote server returned an error: (501) Not Implemented.
Type: System.Net.WebException
Stack trace:
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)

Elapsed Time: 6249 ms.

What was interesting here was the 501 and that it was always approx. 6 seconds before it failed.

Looking in the IIS logs from the 2010 servers that hold the UK mailboxes there were no 501 errors logged. The same was true for the EWS logs as well. So where was the 501 coming from? I decided to bypass Exchange 2013 for the exrca.com test (as my system is not yet live and that is easy to do), and so in Kemp I pointed the EWS SubVDir directly to a specific Exchange 2010 server. Everything worked. So I decided it was an Exchange 2013 issue, apart from the fact that I have lab environments the same as this (without Kemp) and it works fine there. So I decided to search for “Kemp EWS 501” and that was the bingo keyword combination – “EWS and 501” or “Exchange EWS and 501” got nothing at all.

With my environment back to Kemp > 2013 > 2010 I looked at Michael’s suggestions. The first was to run Test-MigrationServerAvailability –ExchangeRemoteMove –RemoteServer servername.domain.com. I changed this slightly, as I was not convinced that I was connecting to the correct endpoint – the migration reported the wrong server name and the exrca tests do not tell you which endpoint they are connecting to. So I tried Test-MigrationServerAvailability –ExchangeRemoteMove –Autodiscover –EmailAddress user-on-premises@domain.com

As AutoDiscover is reporting errors at times, the second of these cmdlets sometimes reported the following:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : AutoDiscover failed with a configuration error: The migration service failed to detect the
migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
or go back to the first step and retry using the Autodiscover service. Consider using the
Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
connectivity issues.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.AutoDiscoverFailedConfigurationErrorException:
AutoDiscover failed with a configuration error: The migration service failed to detect the
migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
or go back to the first step and retry using the Autodiscover service. Consider using the
Exchange Remote Connectivity Analyzer (
https://testexchangeconnectivity.com) to diagnose the
connectivity issues.
at Microsoft.Exchange.Migration.DataAccessLayer.MigrationEndpointBase.InitializeFromAutoDiscove
r(SmtpAddress emailAddress, PSCredential credentials)
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessExcha
ngeRemoteMoveAutoDiscover()
IsValid            : True
Identity           :
ObjectState        : New

And when AutoDiscover was working (as it was random) I would get this:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : The ExchangeRemote endpoint settings could not be determined from the autodiscover response. No
MRSProxy was found running at ‘outlook.domain.com’.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.MigrationRemoteEndpointSettingsCouldNotBeAutodiscovere
dException: The ExchangeRemote endpoint settings could not be determined from the autodiscover
response. No MRSProxy was found running at ‘outlook.domain.com’. —>
Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
server ‘outlook.domain.com’ could not be completed. —>
Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
https://outlook.domain.com/EWS/mrsproxy.svc’ failed. Error details: The HTTP request is
unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
from the server was ‘Basic Realm=”outlook.domain.com”‘. –> The remote server returned an
error: (401) Unauthorized.. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The HTTP request is
unauthorized with client authentication scheme ‘Negotiate’. The authentication header received
from the server was ‘Basic Realm=”outlook.domain.com”‘. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
an error: (401) Unauthorized.
— End of inner exception stack trace —
— End of inner exception stack trace —
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
s1.<ReconstructAndThrow>b__0()
at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
row(String serverName, VersionInformation serverVersion)
at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
.<CallService>b__0()
at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
— End of inner exception stack trace —
at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
int(Boolean fromAutoDiscover)
— End of inner exception stack trace —
IsValid            : True
Identity           :
ObjectState        : New

This was returning the URL https://outlook.domain.com/EWS/mrsproxy.svc’ which is not correct for this mailbox (it was the OA endpoint in a different datacentre). External Outlook access is not allowed at this company, so the TMG server in front of the F5 load balancer in the NY datacentre was not configured for OA anyway, and browsing to the above URL returned the following picture – a well broken scenario, but not the issue at hand here!

[Image: error page returned when browsing to the mrsproxy.svc URL]

If OA (Outlook Anywhere) was available for this company, this is not what I would expect to see when browsing to the external EWS URL. To that end our EWS URLs bypass TMG and go direct to the load balancer.

So now we have either no valid AutoDiscover response or EWS using the wrong URL. Back to the version of the cmdlet Michael was using as that ignores AutoDiscover: Test-MigrationServerAvailability –ExchangeRemoteMove –RemoteServer servername.domain.com

RunspaceId         : 5874c796-54ce-420f-950b-1d300cf0a64a
Result             : Failed
Message            : The connection to the server ‘ewsinukdatacentre.domain.com’ could not be completed.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
server ‘ewsinukdatacentre.domain.com’ could not be completed. —>
Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
https://ewsinukdatacentre.domain.com/EWS/mrsproxy.svc’ failed. Error details: The remote server returned
an unexpected response: (501) Invalid Request. –> The remote server returned an error: (501) Not
                     Implemented.. —> Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The
remote server returned an unexpected response: (501) Invalid Request. —>
Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
an error: (501) Not Implemented.
— End of inner exception stack trace —
— End of inner exception stack trace —
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClas
s1.<ReconstructAndThrow>b__0()
at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndTh
row(String serverName, VersionInformation serverVersion)
at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1
.<CallService>b__0()
at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn
serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
— End of inner exception stack trace —
at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpo
int(Boolean fromAutoDiscover)
IsValid            : True
Identity           :
ObjectState        : New

Now we can see the 501 error that exrca was returning. It would seem that the 501 was coming from the Kemp and not from the endpoint servers, which is why I could not locate it in the IIS or EWS logs, and indeed in the Kemp System Message File (logging options > system log files) I found the 501 error:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.1.2:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

Where the first IP address was a Microsoft datacentre IP and the second was the Kemp listener IP.

It turns out this is due to the Kemp load balancer not returning to Microsoft the 100-Continue status that it should send. Microsoft waits 5 seconds and then sends the data it would have sent had it got the response back. At this point the Kemp blocks this data with a 501 error.

It is possible to turn off Kemp’s processing of 100-Continue HTTP packets so it lets them through, and this is covered in http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html.


In the version at my client, which was upgraded to 7.0-16 from a v5 to v6 to v7, it was defaulting to RFC Conformant, but needs to be Ignore Continue-100 to work with Office 365. In Kemp versions 7.1 and later the value needs to be changed to RFC-7231 Compliant from the default of “RFC-2616 Compliant”. Now EWS works, hybrid mailbox moves work, and AutoDiscover is always working on that server – so a mix of issues caused by differing interpretations of an RFC. To cover all these issues the Kemp load balancer started to include this 100-Continue Handling setting. We as IT pros need to ensure that we set the correct setting for our environment based on the software we use.

Getting Exchange Message Sizing Raw Data

Posted on 2 CommentsPosted in exchange, exchange online, IAmMEC, mcm, mcsm, Office 365

On the internet there are a number of resources for collecting the raw data needed to size Exchange Server deployments. These include:

This blog outlines my process for collecting the data needed for the average message size. What is missing from the above two posts is the ability to collect this data in one go for the last seven (or so) days and then get the message tracking log average over the busiest five or so days. Different countries have different working patterns and public holidays, and the scripts above only do the previous day (though Neil’s says the script uses averages across many days – it does not).

The Script

Download the script from the source and save it in Notepad. Edit the top of the script by deleting the first five lines of the code (not the comments or the blank lines) and replacing them with the alternative lines shown below:

Remove These Lines

$today = get-date
$rundate = $($today.adddays(-1)).toshortdatestring()

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Replace With These Lines

$today = get-date
$whichDay = $args[0]                  # how many days back to process, passed as the first argument
if (! $whichDay) { $whichDay = 1 }    # default to yesterday if no argument is given
$rundate = $($today.adddays(-$whichDay))

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Before running this script you need to do two things. First, if you have Exchange Server locations in different parts of the world, alter the script to only include the servers in the geographies that you want to search. Do this by changing the two lines that read as follows, adding a server name prefix or another unique differentiator that will pick up just the mailbox and hub transport servers in those regions. Edge servers do not need to be considered:

  • $mbx_servers = Get-ExchangeServer |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver |? {$_.serverrole -match "hubtransport"} |% {$_.name}

These two lines are near the top and need to be changed to something like the following:

  • $mbx_servers = Get-ExchangeServer UK* |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver UK* |? {$_.serverrole -match "hubtransport"} |% {$_.name}

In the above I altered the script to find just the Exchange Servers starting with the letters UK. Adjust as needed. If you have one location/timezone worldwide then these changes are not needed.

If you are collecting data from an Exchange 2013 server(s) then change “hubtransport” in the $hts line to read “mailbox”.
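For example, a sketch of the adjusted $hts line for Exchange 2013 (keeping the hypothetical UK* filter from the example above):

$hts = get-exchangeserver UK* |? {$_.serverrole -match "mailbox"} |% {$_.name}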

Save the file as a PowerShell script (Get-MessageTrackingLogStats.ps1) and copy it to your Exchange Server.

Before you can run the script you need to check that you have enough tracking logs to process, or you will get invalid and skewed data. Run Get-TransportServer | FL name,*trackinglog* and make sure that you have a large enough quota for each day of logs, then check each of these folders and make sure they do not exceed this value. If they do, you need to run the below script frequently, before the log files are removed from the server, rather than at the end of 7 or 14 days.
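A rough sketch to compare usage against quota (run on the server itself, as the tracking log path is a local path):

# Report tracking log folder usage against the configured quota for this server
$server = Get-TransportServer $env:COMPUTERNAME
$files = Get-ChildItem $server.MessageTrackingLogPath -Recurse | Where-Object { -not $_.PSIsContainer }
$usedMB = ($files | Measure-Object -Property Length -Sum).Sum / 1MB
'{0}: {1:N0} MB used, quota {2}' -f $server.Name, $usedMB, $server.MessageTrackingLogMaxDirectorySize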

Now you can run the script with a number after it for how many days back you want to look at the logs. This will process the message tracking logs for that single day, that number of days back. For example, if you run Get-MessageTrackingLogStats.ps1 5 then it will look at one day of data from five days ago. Repeat the running of the script until you have run it seven times. You will have one CSV file of tracking log reports for each of the last seven days (or use the one-line loop sketched after this list):

  • Get-MessageTrackingLogStats.ps1 1
  • Get-MessageTrackingLogStats.ps1 2
  • Get-MessageTrackingLogStats.ps1 3
  • Get-MessageTrackingLogStats.ps1 4
  • Get-MessageTrackingLogStats.ps1 5
  • Get-MessageTrackingLogStats.ps1 6
  • Get-MessageTrackingLogStats.ps1 7
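Or, equivalently, loop the seven runs in one line:

1..7 | ForEach-Object { .\Get-MessageTrackingLogStats.ps1 $_ }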

Zip up these files and take them to a computer running Microsoft Excel.

Open the oldest file and process each of them as follows:
[Image: the raw CSV opened in Excel]

Hide columns C through E, H through O, and R through U:
[Image: the worksheet with the extra columns hidden]

You now have a spreadsheet with Date/User/Received Total/Received MB Total/Sent Unique Total/Sent Unique MB Total:
[Image: the remaining Date/User/totals columns]

Format as a table by selecting a cell in the table and clicking the Table button on the Insert tab:
[Image: inserting a table in Excel]

The Table Tools / Design tab appears. Select the Total Row check box from here. This will scroll you to the bottom of the table.
[Image: enabling the Total Row]

Select the third cell in the total row, drop down the options and choose Average:
[Image: choosing Average in the total row]

Drag this Average cell formula across all four columns of numbers:
[Image: the Average formula dragged across the columns]

Modify the fourth column (Received MB Total) to read as follows: =SUBTOTAL(101,[Received MB Total])/Table1[[#Totals],[Received Total]]*1024. This is the fourth column divided by the third column (the count of received messages) and multiplied by 1024 to convert it from MB to KB. The Exchange Bandwidth Calculator and Storage Calculator work on values in KB and not MB.
[Image: the modified Received MB Total formula]

Repeat for the final column of figures (Sent Unique MB Total). This time the formula is =SUBTOTAL(101,[Sent Unique MB Total])/Table1[[#Totals],[Sent Unique Total]]*1024, which is the last column divided by the previous one and converted to KB. The total row now shows you the average messages sent and received and the average size of these messages in KB.

At the bottom of the spreadsheet add the following:

  • Sent/Mailbox/Day
  • Received/Mailbox/Day
  • Average/MessageSize/Day (KB)

Then copy the relevant data into the cells as shown. The Average is the sum of the two averages divided by 2: =(Table1[[#Totals],[Received MB Total]]+Table1[[#Totals],[Sent Unique MB Total]])/2 – note the parentheses, so that the whole sum is divided by 2 and not just the second value:
[Image: the summary cells at the bottom of the spreadsheet]

Then take all your numbers and reduce the number of decimal places shown:
[Image: the summary figures with fewer decimal places]

Now that you have the raw data calculated, save this file as an Excel Workbook (and not a CSV).

Repeat for each of the CSV files you have available.

Process The Daily Data

Create a new spreadsheet that contains the following three tables:

  • Messages Sent Per Day, with a row for each day and a column for each unique geographical region
  • Messages Received Per Day, with a row for each day and a column for each unique geographical region
  • Average Message Size (KB), with a row for each day and a column for each unique geographical region

This will look like the following, once all the data is copied from the source spreadsheets:

[Image: the three tables of daily data]

Now you can create a chart for each table. The following show Sent, Received and Average (KB), with each region overlaid as a different line colour:

[Image: Messages Sent Per Day chart]
Note it is possible here that HK (blue) is missing some data, as it is unexpectedly low

[Image: Messages Received Per Day chart]

[Image: Average Message Size (KB) chart]

Note the HK (blue) Sunday average message size. This is probably because one or a few users sent a disproportionate number of larger emails on the quiet day of the week. For my analysis I am going to ignore it.

Now I have my peak Messages Sent Per Day for each region – and I take the highest value for the week, not the value for yesterday, which is what I would get if I just ran the above script once.

This data can now go into the Bandwidth Calculator and generate accurate figures for the business in question.