Categories
exchange online Exchange Server mailbox move networking

Exchange Move Requests | Large Items | And Setting TCP KeepAliveTime To A Large Value

I have seen this situation a number of times. A large mailbox (or mailbox and archive) won't move to the target because the process of checking what has changed in the mailbox takes too long, the network or Exchange Server times out the user's move, and the move then reports that the mailbox is locked.

The fix for this is counter to everything else you read online about it. Often you will be told to reduce the TCP KeepAliveTime and reboot the server. Do the opposite – increase the value and do not reboot the server. Here is why:

First, make sure there are no bad items in your failed moves – this is not a fix for bad items, it is a fix for moves that time out:

Get-MoveRequest -MoveStatus failed | Get-MoveRequestStatistics | fl badite*

View the Move Request Statistics log for one of your failed mailbox moves:

Get-MoveRequestStatistics "<name>" -IncludeReport | fl | Out-File movereport.txt

Search the report you saved with the above cmdlet for “Error”. If you find the following, the mailbox is probably too large for a successful move, which means the source server or the network does not have the resources. What can happen is that the move is progressing and a check for changes to the source mailbox starts – this takes a long time to complete and something times out. When the target Exchange tries to connect again, the source has lost the TCP port, so a new move is started, but the mailbox is still locked for the old move. Therefore the move cannot continue.

I have found that increasing TCP KeepAliveTime (contrary to all the advice online) solves the issue. I need to be clear here – all I am doing is changing the registry key for this setting and restarting the MRS service on the source Exchange Server. I am NOT restarting Windows, and so I am not changing the KeepAliveTime for the entire network stack. I think MRS checks the registry key to read the KeepAliveTime and uses it as the lock time on the mailbox during the move. If I can lock the mailbox for longer, moves don't time out and fail – that is the theory behind why this works.

The error I get in the MailboxStatistics report (see above for cmdlet) reads:

Message                                : Error: Couldn’t switch the mailbox into Sync Source mode.
                                          This could be because of one of the following reasons:
                                            Another administrator is currently moving the mailbox.
                                            The mailbox is locked.
                                            The Microsoft Exchange Mailbox Replication service (MRS) doesn’t have the correct permissions.
                                            Network errors are preventing MRS from cleanly closing its session with the Mailbox server. If this is the case, MRS may continue to encounter this error for up to 2 hours – this duration is controlled by the TCP KeepAlive settings on the Mailbox server.
                                          Wait for the mailbox to be released before attempting to move this mailbox again. –> An error occurred while saving the changes on the folder “FolderID/”. Error details: Failed, Property: [0x66180003]
                                          InTransitStatus, PropertyErrorCode: AccessDenied, PropertyErrorDescription: .
                                          –> Property: [0x66180003] InTransitStatus, PropertyErrorCode: AccessDenied,
                                          PropertyErrorDescription: .

Of interest in the error is the point that says “MRS may continue to encounter this error for up to 2 hours”. This time value matches the default TCP KeepAliveTime value. Raising this in the registry and restarting the MRS service (not the server) changes the lock timeout, which means that when the long job on the target finishes (having taken longer than two hours), the source server is still waiting for the connection and does not throw the above error.

Once you have your mailboxes moved, delete the registry value (to put it back to the default of two hours) and avoid rebooting the server when this key is set to a different value. If you started with a different value return to that one instead of deleting the registry value.

The KeepAliveTime setting is found at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, and it's a DWORD value called KeepAliveTime. The value is in milliseconds, so 7200000 is two hours and 86400000 is 24 hours (which is the value I tend to use to resolve this issue). This change is made on the mailbox server, and the service is restarted on that server (or servers if you have more than one).
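For reference, here is a sketch of the change in PowerShell, run on the source mailbox server (the 24 hour value and the MSExchangeMailboxReplication service name are the ones I use; adjust to your environment):

# Set TCP KeepAliveTime to 24 hours (the value is in milliseconds) - do NOT reboot afterwards
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name KeepAliveTime -Value 86400000 -Type DWord
# Restart the Mailbox Replication service so MRS picks up the new value
Restart-Service MSExchangeMailboxReplication
# Once the moves are complete, remove the value again to return to the two hour default:
# Remove-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name KeepAliveTime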

Categories
2013 2016 exchange Exchange Server update upgrade

bin/ExSMIME.dll Copy Error During Exchange Patching

I have seen a lot of this, and there are some documents online but none that described what I was seeing. I was getting the following on an upgrade of Exchange 2013 CU10 to CU22 (yes, a big difference in versions):

     The following error was generated when “$error.Clear();
           $dllFile = join-path $RoleInstallPath “bin\ExSMIME.dll”;
           $regsvr = join-path (join-path $env:SystemRoot system32) regsvr32.exe;
          start-SetupProcess -Name:”$regsvr” -Args:”/s `”$dllFile`”” -Timeout:120000;
         ” was run: “Microsoft.Exchange.Configuration.Tasks.TaskException: Process execution failed with exit code 3.
    at Microsoft.Exchange.Management.Tasks.RunProcessBase.InternalProcessRecord()
   at Microsoft.Exchange.Configuration.Tasks.Task.<ProcessRecord>b__b()
    at Microsoft.Exchange.Configuration.Tasks.Task.InvokeRetryableFunc(String funcName, Action func, Boolean terminatePipelineIfFailed)”.

The Exchange Server setup operation didn’t complete. More details can be found
in ExchangeSetup.log located in the <SystemDrive>:\ExchangeSetupLogs folder.

In this error the file ExSMIME.dll fails to copy. You can find the correct copy of this file in the CU source files at …\CU22\setup\serverroles\common. I copy the ExSMIME.dll file from here directly into the \Program Files\Microsoft\Exchange Server\v15\bin folder and then restart the upgrade.
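A sketch of that copy in PowerShell (the CU source path below is an example; substitute the folder you extracted the CU to):

# Example paths - adjust for where the CU media sits and where Exchange is installed
$source = 'D:\CU22\setup\serverroles\common\ExSMIME.dll'
$target = 'C:\Program Files\Microsoft\Exchange Server\v15\bin'
Copy-Item -Path $source -Destination $target -Force
# Optionally re-register the DLL by hand before re-running setup
& "$env:SystemRoot\System32\regsvr32.exe" /s "$target\ExSMIME.dll"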

I have found that the upgrade fails again here if it thinks there is a pending reboot due to other installations, and I have also seen the detection for the VC++ runtime fail at this point. I have documented this elsewhere, and the workaround is found at https://c7solutions.com/2019/02/exchange-server-dependency-on-visual-c-failing-detection.

A reboot later and the installation is successful. The error somehow seems to think that the file is not where it is looking for it. In the ExchangeSetup.log file it records the issue as “Error 3”, which generally means “not found”!

Categories
ADFS ADFS 3.0 Azure Azure Active Directory Azure AD AzureAD

Decommission ADFS When Moving To Azure AD Based Authentication

I am doing a number of ADFS to Azure AD based authentication projects, where authentication is moved to Password Hash Sync + SSO or Pass Through Auth + SSO. Once that part of the project is complete it is time to decommission the ADFS and WAP servers. This guide is for Windows 2012 R2 installations of ADFS. There are guides for the other versions online.

This guide assumes you were using ADFS for one relying party trust, that is Office 365, and now that you have moved authentication to Azure AD you do not need to maintain your ADFS and WAP server farms.

Compile a list of server names

So first check that these conditions are true. Log in to the primary node in your ADFS farm. If you don't know which is the primary, run Get-ADFSSyncProperties on any one of them and it will tell you: you will either get back a list of properties where LastSyncFromPrimaryComputerName reads the name of the primary computer, or it says PrimaryComputer, meaning you are on the primary.
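If you have several nodes to check, here is a minimal sketch that queries them all remotely (the server names are placeholders and PowerShell remoting is assumed to be enabled):

# Placeholder ADFS farm member names - replace with your own
$adfsServers = 'ADFS01','ADFS02','ADFS03'
foreach ($server in $adfsServers) {
    Invoke-Command -ComputerName $server -ScriptBlock {
        $sync = Get-AdfsSyncProperties
        if ($sync.Role -eq 'PrimaryComputer') { "$env:COMPUTERNAME is the primary node" }
        else { "$env:COMPUTERNAME is secondary; primary is $($sync.LastSyncFromPrimaryComputerName)" }
    }
}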

If you don’t know all your ADFS Server Farm members then you can use tools such as found at this blog for querying AD for service account usage as ADFS is stateless and does not record the servers in the farm directly.

There is no list of the WAP servers in the farm – so you need to know these server names already, but looking in the Event Viewer on an ADFS server should show you which WAP servers have connected recently.

Get CertificateSharingContainer

On the primary ADFS server run (Get-ADFSProperties).CertificateSharingContainer. Keep a note of this DN, as you will need to delete it near the end of the decommissioning process (after a few reboots and when it is not available any more).

Check no authentication is happening and no additional relying party trusts

Log in to each ADFS box and check the event logs (Application). If any service is still using ADFS there will be logs for invalid logins. Successful logins are not recorded by default, but failures are – so if login failures are currently happening, something is still using ADFS and you will not want to uninstall it until you have discovered what that is.

On the primary ADFS farm member open the ADFS admin console and navigate to Trust Relationships > Relying Party Trusts

image

All you should see is Microsoft Office 365 Identity Platform (though it has a different name if you initially configured it years and years ago); Device Registration Service is built into ADFS, so ignore that. If you have any others, you need to work on decommissioning these before you decommission ADFS. If you have done the Azure AD authentication migration then the Office 365 Relying Party Trust will no longer be in use. Run Get-MSOLDomain from Azure AD PowerShell and check that no domain is listed as Federated. If all domains are Managed, then you can delete the relying party trust.
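As a sketch of that last check using the MSOnline module (after Connect-MsolService):

# Any domain still listed here is Federated and ADFS is still in use for it
Get-MsolDomain | Where-Object { $_.Authentication -eq 'Federated' } | Format-Table Name, Authentication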

Uninstall Additional Connectors etc.

If you have added connectors into ADFS, for example MFA Server tools, then uninstall these first. For example, if you have the Microsoft MFA Server ADFS Connector or even the full MFA Server installed, then you have this and IIS to uninstall. MFA Server is removed from the Control Panel (there are a few different things to remove, such as MFA Mobile Web App Service, MFA User Portal etc. – remove the MFA Server piece last). IIS is removed with Remove-WindowsFeature Web-Server. If you uninstall MFA Server, remember to go and remove the servers from the Azure AD Portal > MFA > Server Status area at https://aad.portal.azure.com/

Uninstall the WAP Servers

Log in to each WAP server, open the Remote Access Management Console and look for published web applications. Remove any related to ADFS that are not being used any more. Look up Azure App Proxy as a replacement technology for this service. Make a note of the URL that you are removing – it's very likely that you can remove the same name from public and private DNS as well once the service is no longer needed.

When all the published web applications are removed, uninstall WAP with Remove-WindowsFeature Web-Application-Proxy,CMAK,RSAT-RemoteAccess. You might not have CMAK installed, but the other two features need removing.

Reboot the box to complete the removal and then process the server for your decommissioning steps if it is not used for anything else.

Uninstall the ADFS Servers

Starting with the secondary nodes, uninstall ADFS with Remove-WindowsFeature ADFS-Federation,Windows-Internal-Database. After this run del C:\Windows\WID\data\adfs* to delete the database files that you have just uninstalled.

Remove Other Stuff

You can now remove the following:

  • The ADFS service account
  • The DNS entries, internal and external, for the ADFS service
  • The firewall rules for TCP 443 to WAP (from the internet), and between WAP and ADFS
  • Any load balancer configuration you have

Finally, you can:

Remove the certificate entries in Active Directory for ADFS. If you have removed ALL the ADFS instances in your organization, delete the ADFS node under CN=Microsoft,CN=Program Data,DC=domain,DC=local. If you have only removed one ADFS farm and you have others, then the value you recorded at the top for the certificate is the specific tree of items that you can delete rather than deleting the entire ADFS node.
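A sketch of that AD cleanup with the ActiveDirectory module, assuming this was your only ADFS farm and with the domain DN replaced by your own:

# Only run this when ALL ADFS farms in the forest are gone;
# if other farms remain, target the CertificateSharingContainer DN you recorded earlier instead
Remove-ADObject -Identity 'CN=ADFS,CN=Microsoft,CN=Program Data,DC=domain,DC=local' -Recursive -Confirm:$false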

Categories
Azure Active Directory Azure AD AzureAD MFA multi-factor auth Multi-Factor Authentication token2

Hardware Tokens for Office 365 and Azure AD Services Without Azure AD P1 Licences

A recent addition to the Azure AD Premium 1 (P1) licence is support for hardware tokens for multi-factor authentication (MFA). This is excellent news if your MFA deployment is stuck because users cannot use phones on the shop floor or in their work environment, or they do not want to use personal devices for work activities. But it requires a P1 licence for each user. A P1 licence gives you lots of stuff in addition to hardware token support for MFA, such as (but not exclusively) Conditional Access, which is a better way to implement MFA than without P1, where MFA is required in all circumstances, for all apps, from all locations.

But if you want MFA in all circumstances and for all apps from all locations, and also need hardware tokens, this is where programmable tokens come into play. I recently purchased a miniOTP-2 token from Token2 (www.token2.com) and they provided me with a C300 token as well so that I could write this blog post.

In the scenario that I am going to describe here, I have two different programmable tokens and I will walk through the MFA registration process for a user (using the new user interface for this service that was released at the end of Feb 2019).

IMG_20190226_095155

Enabling the new MFA Registration Process

Open the Azure AD Portal from https://aad.portal.azure.com, click Azure Active Directory from the primary menu and then select User settings from the sub-menu. Under Access panel, click Manage settings for access panel preview features. You will see the following:

image

Here I have previously turned on the preview features for registering and managing security info that was rolled out in early 2018, and now I can see a second option for the same, called refresh.

Set both of these options to All (or a selected group if you want to preview for a subset of users initially). Click Save.

For what follows, it will work even if you have None set for both options, just the screenshots will look different and the latest refresh of this feature is much easier for users to work with – so I recommend it is turned on for both options.

Configure MFA Settings for your Tenant

In the Azure AD portal sub-menu click MFA under Manage MFA Server and click Additional cloud-based MFA settings under Configure. This opens another tab in your browser where you will see the Multi-Factor Authentication / Service Settings.

Under Verification Options ensure that Verification code from mobile app or hardware token is enabled. Other options such as “app passwords”, “skip for federated users”, “trusted IPs” (available if you ever once had the AAD P1 licence on your tenant even if you do not have it now) and “remember multi-factor authentication” can be set to your requirements.

Register a Programmable MFA Token for a User

Once you have MFA settings configured you can enable the service for a user and have the token registered for them. If you have a P1 licence you upload the token serial number to Azure AD, but if you do not have a P1 licence then you need to use a programmable token, as these appear to Azure AD just like the authenticator apps you get on your phone.

In Azure AD you can register a user's token by logging in as the user (they would do this for you) and visiting https://aka.ms/mfasetup. End to end this process takes about 10 seconds – so it's very possible to add it to new-user joining procedures, or have the help desk visit the user or the other way around. The process needs an NFC burner app and device (an Android phone is good), but don't require the user to do this themselves using their phone – burn the token for them using a help desk PC or phone. If you get the end user to walk through all these steps you will confuse them totally!

The new UI for end user security settings:

image

As mentioned above, the UI you see is based upon the User settings options in Azure AD. If you have only the original preview enabled and not the refresh, you will see the following instead:

image

In the first of the above two screenshots I have already registered some MFA devices. If the user has never registered a device before then they will see the following when browsing to https://aka.ms/mfasetup from the initial login and then the MFA registration page, with the registration process ready to start:

image image

If you have already registered some security info, then you will be able to add a hardware token from the + Add Method button and selecting Authenticator App and clicking Next.

image

The initial steps will show you the following dialog, depending upon if you are a new user (on the left) or adding a new MFA method (on the right):

image image

You are walked through the process of installing the Microsoft Authenticator app on your phone. In this case though, we have a hardware token instead of the app, and so you need to click I want to use a different authenticator app instead. The Microsoft Authenticator app supports push notifications, which hardware tokens do not, and so the QR code provided for the Microsoft Authenticator app will not work for hardware tokens or other authenticator apps.

image image (QR code intentionally blurred)

Now you need to scan the QR code using the Token 2 Android app or click Can’t scan image and copy and paste the secret key into the Token 2 Windows app. Links to the apps are available from the Token 2 website software page (with the Windows app shown below):

image

Click the QR button (or Scan QR button if using the NFC Burner 2 software) and scan the QR code on the screen. This enters the seed in HEX into the app. If you need to enter the QR code by hand, click enter Base32 and type in the secret key value that you get under the Can’t scan image link.

Next, turn the hardware token on (it will remain on for 30 seconds) and hold it to the NFC reader on your Android device (usually next to the camera) or plugged into your PC.

IMG_20190302_160227

Click the Connect button (or Connect Token depending upon the app you are using) – one of the Android apps is shown below:

Screenshot_20190302-173645

Then finally click burn seed.

Screenshot_20190302-173653

Turn off the token and turn it back on again – this displays the next valid code. The code that was displayed when the token was first turned on and before the new secret was burned to the device is not valid.

Click Next on the registration wizard on the computer screen. You are asked to enter the code displayed on the token. Azure AD accepts codes within a 900 second window, so any code displayed in the last 7 or so minutes should be valid to use.

image

Success – if not, turn the token off and on again and try once more. If that still fails, go back, scan the code again and burn the seed to the device another time – you are not restricted in the number of times you do this (though doing so stops previous registrations of the token from working again).

image

Click Done and see your first method of providing MFA shown to the user.

image

I recommend you add the user's phone (for a call or text) as a second method at this point (in case they lose the token, they have a second route in). The user experience when adding a phone looks as follows:

image

It is a shame we cannot rename the MFA method – that would be useful, as we could indicate the token name/type and then login to Azure AD could ask for this token by name.

If you were adding a new token to a user with existing MFA methods already in place, you end up in a very similar place:

image

Success – a new “app” added:

image

Then at next login when going to the MFA registration page, you need to enter your code:

image

Note that you don’t need the code yet for logging into Office 365 and Azure AD generally – you have to enable MFA for that and that is the next step.

Enforcing MFA for User

For all other logins apart from the MFA registration page, you need to finish by enforcing MFA for the user. If you have a P1 licence and Conditional Access then this will happen based on the rules, but where you don't have an AAD P1 licence you need to require MFA for all logins. Do this by browsing to the multi-factor authentication users page via the Office 365 admin portal > active users > ellipses button > setup multifactor authentication:

Search for your user:

image

Once you have found the user to enable, select the user and then click the Enable hyperlink:

image

Followed by enable multi-factor authentication

image

The comment about “regularly sign in through the browser” is not valid for apps that support modern authentication, such as Microsoft Teams, or where you have enabled Modern Authentication for Skype for Business Online and Exchange Online (you need to do this if your tenant existed before August 2017).

image

Finally, for completeness, there is an MFA setting called Enforce. Enforce requires MFA for all logins, including rich client applications that do not support MFA – therefore if you have modern authentication enabled and are using Outlook 2013 (with the Modern Auth settings for Outlook 2013 turned on) or Outlook 2016 and later, then having the end user remain in Enable mode is fine. If you are using older clients that do not support MFA then Enforce mode will force them to use App Passwords for non-browser apps, and you want to try to avoid that.

Therefore we need to take the user to a minimum of Enable mode in Office 365 MFA so that MFA is triggered for all logins. This step is probably done after hardware token registration, because when we set up the token for the user, or sent the user through the registration workflow, we did not first enable MFA for them – therefore the user is registered for MFA but not yet required to use it to log in.

Set the user to Enable mode to trigger MFA for all logins:

image
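If you have a lot of users to take to Enable mode, the same change can be scripted. Here is a minimal sketch using the MSOnline PowerShell module (the UPN is an example; Connect-MsolService must have been run first):

# Build a StrongAuthenticationRequirement object and apply it to the user to set them to Enabled
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"
Set-MsolUser -UserPrincipalName user@contoso.com -StrongAuthenticationRequirements @($mfa)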

Categories
groups Microsoft Teams Office 365 Office 365 Groups Teams

Convert Office 365 Group to Microsoft Team Totally Failing

This one has been annoying me for a while – I had an Office 365 Group, created many years ago in Office 365, that I could not convert to a Microsoft Team.

This is what I see in Teams to do this process. First, click “Create a team”

image

Followed by “Create a team from an existing Office 365 group” which is found at the bottom of the creation dialog in the Teams app:

image

I get a list of Office 365 Groups but not the one I want. In my example I see six groups:

image

The rules for converting an Office 365 Group to a Team are the following:

  • Must be private
  • Must have an owner

Both of these are true for the group I want to convert, but the group still does not appear in the Teams conversion page shown above:

So I resort to PowerShell:

Get-UnifiedGroup returns all my groups and shows that the group exists (I know it does – it's got content in it!)

image

So I get the Group ID using Get-UnifiedGroup <name> | FL *id*

image

Specifically I am after the ExternalDirectoryObjectID value

Then I try to make a new Team using PowerShell using the ExternalDirectoryObjectID. This is New-Team -Group <ExternalDirectoryObjectID>

image

I get back a lot of red text. In this it reads “Message: Team owner not found”. This is odd, as the Team does have an owner – I can see this in OWA for the Office 365 Group

image

And I can see this using Get-UnifiedGroupLinks as well:

image

So I decide to set the owner back to the owner again using Add-UnifiedGroupLinks PowerShell cmdlet (Add-UnifiedGroupLinks <group_name> -Links <myemailaddress> -LinkType Owner)

image

This returns nothing, so I presume nothing has changed.
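For reference, here is a rough sketch of the whole owner re-add in PowerShell (the group name and address are examples; note that the documented LinkType values are plural, and an owner must also be a member):

# Confirm the group and note its directory ID
Get-UnifiedGroup "My Old Group" | Format-List Name, ExternalDirectoryObjectId, ManagedBy
# Re-add the existing owner to refresh the owner link on the group
Add-UnifiedGroupLinks -Identity "My Old Group" -LinkType Members -Links me@contoso.com
Add-UnifiedGroupLinks -Identity "My Old Group" -LinkType Owners -Links me@contoso.com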

I take a look in New Team > Convert Group option – and as if by magic, I can see the Office 365 Group I want to make into a Team

image

It's the one at the top in all these redacted images – the logo matches the group above, and I now have seven Office 365 Groups that are candidates for Teams.

Conversion then happens seamlessly!

Categories
crm Dynamics exchange exchange online Exchange Server router

CRM Router and Dynamics CRM V9 Online–No Emails Being Processed

This one is an interesting one – and it was only resolved by a call to Microsoft Support, who do not document that this setting is required.

The scenario is that you upgrade your CRM Router to v9 (as this is required before you upgrade Dynamics to V9) and you enable TLS 1.2 on the router server as well (also documented as required as part of the upgrade).

Dynamics is updated and all your email that is processed using the Router stops. Everything was working before and now it is not!

The fix is simple though – and complex as well. The simple thing is that it is a single check box you need to set. The complex thing is that as this is a GDPR setting, each user needs to do it themselves and it cannot be enabled in bulk!

The option each user needs to allow is “Allow other Microsoft Dynamics 365 users to send email on your behalf”, ensuring that it is checked. This option is located in CRM > Options > Email > Select whether other users can send email for you

image

Once each user does this, the router will start to process emails for this user again.

Categories
exchange Exchange Server install vc++

Exchange Server Dependency on Visual C++ Failing Detection

Exchange Server rollup updates and cumulative updates at the time of writing (Feb 2019) have a dependency on Visual C++ 2012. The link in the error message you get points you to the VC++ 2013 Redistributable though, and there are later versions of this as well.

I found that by installing all the versions (VC++ 2011, 2012 and 2014) I was able to get past the following error – which I had on only one out of many servers.

Performing Microsoft Exchange Server Prerequisite Check

    Configuring Prerequisites                                 COMPLETED
    Prerequisite Analysis                                     FAILED
     Visual C++ 2012 Redistributable Package is a required component. Please install the required binaries and re-run the setup. Use URI https://www.microsoft.com/download/details.aspx?id=30679 to download the binaries.
     For more information, visit: http://technet.microsoft.com/library(EXCHG.150)/ms.exch.setupreadiness.VC2012RedistDependencyRequirement.aspx

So regardless of what you see in the error and the download site you go to, you need another version.

I found this article lists all versions: https://stackoverflow.com/questions/12206314/detect-if-visual-c-redistributable-for-visual-studio-2012-is-installed/34209692

And I specifically installed the following versions, which then put some DLLs onto the server to get past the error:

image image image

Categories
DNS error Exchange Server

451 4.7.0 Temporary server error. Please try again later. PRX2

There are a few articles online about this error, but none were correct for the scenario I found a client's network in.

Not that I think the specifics matter, but this was Exchange Server 2016, Windows Domain Controllers running 2012 R2 and Exchange Hybrid. All the mailboxes had already moved to the cloud and the Exchange Server is used for attribute management and SMTP relay.

Sometimes, randomly it would seem, the applications fail to send email and get back the above error. So what does it mean? Let's dive into the Exchange logs to find out more.

In my example, TCP 25 is listening on a number of separate IPs on two different network cards on a server hosted in Azure (maybe all that matters for this case?)

Protocol Logs (Frontend)

In the Exchange Transport logs I turned on Protocol Logging for all connectors, sent some emails and had them rejected with the PRX2 error in the title. After five or so minutes the protocol logs contained the erroring session as shown below:

2019-01-31T13:45:09.477Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,0,10.10.10.16:25,10.150.14.108:59877,+,,
2019-01-31T13:45:09.478Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,1,10.10.10.16:25,10.150.14.108:59877,>,220 COMPANY Relay Connector SERVER,
2019-01-31T13:45:09.479Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,2,10.10.10.16:25,10.150.14.108:59877,<,HELO,
2019-01-31T13:45:09.479Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,3,10.10.10.16:25,10.150.14.108:59877,>,250 SERVER.internal.co.uk Hello [10.150.14.108],
2019-01-31T13:45:09.480Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,4,10.10.10.16:25,10.150.14.108:59877,<,MAIL FROM: <appserver@international.com>,
2019-01-31T13:45:09.480Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,5,10.10.10.16:25,10.150.14.108:59877,*,08D68772EDC476C6;2019-01-31T13:45:09.477Z;1,receiving message
2019-01-31T13:45:09.480Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,6,10.10.10.16:25,10.150.14.108:59877,>,250 2.1.0 Sender OK,
2019-01-31T13:45:09.482Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,7,10.10.10.16:25,10.150.14.108:59877,<,RCPT TO: <internal.user@international.com>,
2019-01-31T13:45:09.482Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,8,10.10.10.16:25,10.150.14.108:59877,>,250 2.1.5 Recipient OK,
2019-01-31T13:45:09.483Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,9,10.10.10.16:25,10.150.14.108:59877,<,RCPT TO: <brian@nbconsult.co>,
2019-01-31T13:45:09.483Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,10,10.10.10.16:25,10.150.14.108:59877,>,250 2.1.5 Recipient OK,
2019-01-31T13:45:09.484Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,11,10.10.10.16:25,10.150.14.108:59877,<,DATA,
2019-01-31T13:45:09.484Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,12,10.10.10.16:25,10.150.14.108:59877,>,354 Start mail input; end with <CRLF>.<CRLF>,
2019-01-31T13:45:09.498Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,13,10.10.10.16:25,10.150.14.108:59877,*,,Proxy destination(s) obtained from OnProxyInboundMessage event. Correlation Id:80e0d560-be23-4910-bcb0-43139bee131f
2019-01-31T13:45:09.501Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,14,10.10.10.16:25,10.150.14.108:59877,*,,Message or connection acked with status Retry and response 451 4.4.0 DNS query failed. The error was: DNS query failed with error InfoNoRecords -> DnsQueryFailed: InfoNoRecords
2019-01-31T13:45:09.501Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,15,10.10.10.16:25,10.150.14.108:59877,>,451 4.7.0 Temporary server error. Please try again later. PRX2 ,
2019-01-31T13:45:09.503Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,16,10.10.10.16:25,10.150.14.108:59877,<,QUIT,
2019-01-31T13:45:09.503Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,17,10.10.10.16:25,10.150.14.108:59877,>,221 2.0.0 Service closing transmission channel,
2019-01-31T13:45:09.503Z,SERVER\From Internal Servers (Relay),08D68772EDC476C6,18,10.10.10.16:25,10.150.14.108:59877,-,,Local

The protocol logs contain a number of columns to the left. The interesting ones for this are the connector name (“SERVER\From Internal Servers (Relay)”), the session ID (08D68772EDC476C6) and the sequence number. Each item in the session has an incrementing sequence number; in the above it goes from 0, where the session connects (the + near the end of that line), to 18, where it disconnects (the – at the end of the last line).

This log looks no different from a session that works (as it was random as I said above), but we see more about the error. Specifically we see the following:

Proxy destination(s) obtained from OnProxyInboundMessage event. Correlation Id:80e0d560-be23-4910-bcb0-43139bee131f
Message or connection acked with status Retry and response 451 4.4.0 DNS query failed. The error was: DNS query failed with error InfoNoRecords -> DnsQueryFailed: InfoNoRecords
451 4.7.0 Temporary server error. Please try again later. PRX2 ,

So we see that it is DNS. Online there are articles about this being to do with IPv6, AAAA records and invalid responses to those queries, and fixes include using external DNS settings or smarthost values. None of this worked in this example.

So let's follow the logs down some more.

Connectivity Logs

In the connectivity logs we search the same date/time/hour log for the session number, which in this case is 08D68772EDC476C6 from the above logs. In the connectivity logs we see a session that matches for this ID, and it is for “internalproxy”:

2019-01-31T13:45:09.499Z,08D68772EDC476C7,SMTP,internalproxy,+,Undefined 00000000-0000-0000-0000-000000000000;QueueLength=<no priority counts>. Starting outbound connection for inbound session 08D68772EDC476C6
2019-01-31T13:45:09.501Z,08D68772EDC476C7,SMTP,internalproxy,>,DNS server returned InfoNoRecords reported by 10.10.10.21. [Domain:Result] = SERVER.internal.co.uk:InfoNoRecords;
2019-01-31T13:45:09.501Z,08D68772EDC476C7,SMTP,internalproxy,-,Messages: 0 Bytes: 0 (The DNS query for 'Undefined':'internalproxy':'00000000-0000-0000-0000-000000000000' failed with error : InfoNoRecords)

Internalproxy is what Exchange uses to send email from the Frontend Transport service to the Hub Transport service. But which hub transport service will it use? It does not matter whether you have one or many Exchange Servers in your site, it will use DNS to look up the IP of one of these servers. So even if you have a single Exchange box, DNS is vital.

In the above log we see that DNS 10.10.10.21 returns InfoNoRecords when queried for the Exchange Servers own name.

So I resort to nslookup to check DNS from this Exchange server. I have two DNS servers, .20 and .21. The error appears to be related to .21 in this case.

So I entered “nslookup server.internal.co.uk 10.10.10.21”, which means look up the name of the server using the DNS server 10.10.10.21. I got back a message saying cannot find server.internal.co.uk: Query refused.

When I tried the other DNS server I got back a successful response and the IP address of the server.
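You can run the same check against both DNS servers in one go with Resolve-DnsName. A quick sketch (the name and addresses are the ones from this example):

# Query each configured DNS server for the Exchange server's own A record
foreach ($dns in '10.10.10.20','10.10.10.21') {
    "Testing DNS server $dns"
    try { Resolve-DnsName -Name server.internal.co.uk -Server $dns -Type A -ErrorAction Stop }
    catch { Write-Warning "$dns failed: $($_.Exception.Message)" }
}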

So for an immediate fix, I removed 10.10.10.21 as a DNS option for this server. Exchange immediately went back to work, PRX2 errors were no longer displayed, and email got to its destination.

Now to go and see who has broken DNS!

Categories
active directory Azure Active Directory Azure AD AzureAD MFA multi-factor auth phone factor token2

Token2 Hardware OAuth Tokens and Azure AD Access

This blog post walks through the process of logging into Azure AD resources (Office 365, other SaaS applications registered in Azure AD, and on-premises applications that utilise Azure AD App Proxy) with a hardware OATH token.

First step is to order your desired hardware. For this article we are looking at the devices manufactured by Token2 (www.token2.com). These include credit card style and dongle type devices. The options are available at https://www.token2.com/site/page/product-comparison

For the purposes of this blog post I have been using the TOTP emulator on the Token2 website. This allows you to create virtual TOTP devices and step through and test the entire process without buying a physical token. The TOTP device emulator can be found at https://www.token2.com/site/page/totp-toolset. Each time you browse to this site a new device is emulated; to reuse a device, browse with key=XXXX in the URL, where XXXX is the 32 character device seed. Longer testing will require you to keep a record of the seed and use it in the emulator, or keep the browser window open.

Step 1: Order Devices

For this, I am using the emulator mentioned above, but you would order a number of devices and upon arrival request the unique secret identifiers for each device over encrypted email. In Feb 2019 a programmable device with time sync is being released which will allow you to set the secrets yourself and ensure that clock drift on the device is not a reason for them to stop working after a few years!

The secret is required for upload to Azure AD, in the form of a CSV file with six columns:

upn,serial number,secret key,timeinterval,manufacturer,model

For example, it could look like this for two emulated devices:

upn,serial number,secret key,timeinterval,manufacturer,model
user1@token2.com,2300000000001,BMUOZOAFVH4ZYRXU7SFRIIQ3FQNKTDVL,30,Token2,miniOTP-1
user2@token2.com,2300000000002,R5AGIKGVCCGIV275IDTCI2HWDQK3PK2R,30,Token2,miniOTP-1
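If you are building this file from a list of devices in PowerShell, here is a minimal sketch that writes it in the expected layout (the rows above are reused as placeholder data):

# Placeholder device list - replace with the UPNs, serials and secrets supplied by the vendor
$tokens = @(
    @{upn='user1@token2.com'; serial='2300000000001'; secret='BMUOZOAFVH4ZYRXU7SFRIIQ3FQNKTDVL'},
    @{upn='user2@token2.com'; serial='2300000000002'; secret='R5AGIKGVCCGIV275IDTCI2HWDQK3PK2R'}
)
$lines = @('upn,serial number,secret key,timeinterval,manufacturer,model')
$lines += $tokens | ForEach-Object { "$($_.upn),$($_.serial),$($_.secret),30,Token2,miniOTP-1" }
$lines | Set-Content -Path .\HardwareTokens.csv -Encoding ASCII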

Ensure each UPN in the first column matches the device you are issuing to the user and upload the CSV file to Azure AD. This is done from Azure Portal > Azure Active Directory left menu > MFA (in Security area) > OAUTH tokens (in settings area):

image

Click Upload and browse for your CSV file. As long as there are no errors it will upload fine. Errors are displayed in the notifications area. Once the upload is complete click Refresh to see the imported hardware tokens. Tokens assigned to users that do not exist will appear after the user is created, if the user is created within 30 days.

I have in the below screenshot uploaded one token for a test user.

image

The token needs activating before it can be used. Activation confirms the token works, so you will need the next six digit code from the device, and therefore physical access to it. End user activation is planned, but as of writing in Jan 2019 the administrator needs to activate the hardware token.

Click Activate and enter the current TOTP six digit number. The CSV file contains the time interval for the device, which is 30 seconds for these Token2 devices, and you can enter the current, previous or next code (the next code is shown in the emulator but not on the hardware token).

image

Click the Activate button when the six digit code is entered. The green ticky ticky next to the number just means you have entered a six digit number, and not that it is the correct number!

After activation you will see the Activated column ticked.

image

You can now distribute the token to the user.

Step 2: Require token for login

Other blog posts and documentation on the internet cover the requirements for forcing the user to use an MFA device, so here I will just cover what is different when tokens are used. Firstly, the user does not need to register their device – the upload of the CSV file and the activation of the token by the administrator covers this step. This does mean though that the user needs the token before they are next required to log in with an MFA device, or they will not be able to log in. Therefore distribution of the token following activation, or activation before the MFA requirement applies, is currently needed. End user activation, which is coming soon, will reduce this distribution gap.

Secondly, the code from an authenticator app (such as Microsoft Authenticator or Google Authenticator) can be used as well as the code from the hardware token (if an app is registered as well). Microsoft allow multiple devices to provide codes, so the message on-screen at login will not say “Enter the code from your Token2 device” but currently says “Please type in the code displayed on your authenticator app from your device” – the current code from your hardware token will work here even if no app is registered. This is probably the most confusing stage of the process. The hardware token works wherever the authenticator app is mentioned – maybe this wording will change.

Finally, the user will only be prompted for MFA at login if either MFA is enabled for the account, or Conditional Access is enabled and the login or application access triggers an MFA requirement. The second of these is the better one to use. The admin configuration to force the user to enter their token one-time code is as follows:

  1. Ensure that MFA service settings allows verification code from mobile app or hardware token as shown:
    image
  2. Ensure that the IP range where MFA is skipped is not the range the user is currently located on, otherwise the user will not be prompted for their MFA token. This IP range is configured in the above settings as well.
  3. Ensure that the user requires MFA to login from the MFA users page as shown. Select the user and click Enable:
    image  image
  4. Or, if using Conditional Access, that the Grant option includes MFA as one of the requirements as shown:
    image
  5. In the first step above you can see that phone calls, text and notification through app are all also enabled. These do not need to be enabled for the hardware device to work.
  6. The user needs an Azure AD P1 or P2 licence to use hardware tokens.

Step 3: The user login experience

When the user logs in or accesses an app that is MFA restricted via Conditional Access they will see the MFA login prompt as shown:

image

The code entered needs to come from their hardware device, as shown below:

image

The token changes every 30 seconds and is valid for a short while either side of the time it is displayed for on the device. Over time tokens will suffer from clock skew and eventually stop working. The new token2 programmable tokens available in Feb 2019 can have their clocks resynced to fix this issue.

As mentioned above, any registered authenticator app or hardware token uploaded against the user will work – the UX asks for an app at present, maybe this wording will get more clarification when the user has a token registered as well.

Step 4: User experience

Apart from the login, the other place where a user will see their token listed is the My account > Security & Privacy option (click your picture in an Office 365 app to see the My account menu). If you have MFA enabled for the user (not just Conditional Access) then the user will see “Additional security verification” on the Security & Privacy page. Here it says “Update your phone numbers used for account security” – but this will list your hardware token as well. This is shown below:

image

In the above you can see that the user has an iPhone as well as the token, where the hardware token ID matches the serial number on the device that they hold, as well as a phone in this scenario. This allows the user to have a backup MFA method of more than one authenticator (Microsoft Authenticator on iPhone or MFA by phone call in this case) as well as the Authenticator app or hardware token. Your users can now have up to five devices in any combination of hardware or software based OATH tokens and the Microsoft Authenticator app. This gives them the ability to have backup devices ready when they need them and to use different types of credentials in different environments.

The use of a hardware token and its OAUTH code currently still requires a password before the MFA token is requested. Password-less, token-only login is expected later in 2019.

Categories
Azure Active Directory Azure AD download exchange exchange online

Read Only And Attachment Download Restrictions in Exchange Online

Microsoft have released a tiny update to Exchange Online that has massive implications. I say tiny in that it takes about 30 seconds to implement (OK, maybe 60 seconds then).

When this is enabled, and below I will describe a simple configuration for it, users of Outlook Web Access on a computer that is not compliant with a conditional access rule in Azure AD will get an OWA that is read only – attachments can be viewed in the browser only and not downloaded. There is even a mode to have attachments completely blocked.

So how to do this.

Step 1: Enable the OwaMailboxPolicy New Setting

Only users whose OWA mailbox policy has ConditionalAccessPolicy set to ReadOnly or ReadOnlyPlusAttachmentsBlocked are impacted by this feature. For example, if you wanted a subset of users to always have this restriction regardless, but not other users, then you would create a new OwaMailboxPolicy and set the ConditionalAccessPolicy setting on it. Once that is done you would apply the policy to the selected users.

In my example I am just going to update the default policy, because I want the read only view for all users who fall out of the conditions of the policy. So in Exchange Online PowerShell I run the following:

Set-OwaMailboxPolicy -Identity OwaMailboxPolicy-Default -ConditionalAccessPolicy ReadOnly

This will restrict downloads in OWA once the conditional access policy takes effect. The second option is to use ReadOnlyPlusAttachmentsBlocked instead of ReadOnly; this blocks attachment viewing as well. I understand other options, and therefore other values for this property, are coming. The value “Off” turns off the restrictions again, and “Off” is the default value.
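If you only want the restriction for a subset of users, a sketch of the dedicated-policy approach mentioned above might look like this (the policy name and mailbox are examples):

# Create a policy carrying the restriction and assign it to selected mailboxes
New-OwaMailboxPolicy -Name "ReadOnlyOWA"
Set-OwaMailboxPolicy -Identity "ReadOnlyOWA" -ConditionalAccessPolicy ReadOnly
Set-CASMailbox -Identity user@contoso.com -OwaMailboxPolicy "ReadOnlyOWA"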

Step 2: Create a Conditional Access Policy in Azure AD

You need an Azure AD Premium P1 licence for this feature.

Here I created a policy that applied to one user and no other policy settings. This would mean this user is always in ReadOnly mode. In real world scenarios you would more likely create a policy that applied to a group and not individual users, and forced ReadOnly only when other conditions, such as a non-compliant device (i.e. a home computer), were in use.

The steps for this are:

imageimageimageimage

The pictures, as you cannot create the policies in the cmdline, are as follows:

a) New policy with a name. Here it is “Limited View for ZacharyP”

b) Under “Users and Groups” I selected my one test user. Here you are more likely to pick the users for whom data leakage is an issue

c) Under “Cloud apps” select Office 365 Exchange Online. I have also selected SharePoint, as the same idea exists in that service as well

d) Under Session, and this is the important one, select “Use app enforced restrictions”. For Exchange Online, app enforced restrictions is the value of ConditionalAccessPolicy for the given user.

Step 3: View the results

Ensure the user is licenced to have a mailbox and Azure AD Premium P1 and ensure they have an email with an attachment in it for testing.

In the screenshot you can see circled where the Download link is normally found:

image

And when the attachment is clicked, there is now a greyed out Download button, and a banner is seen in both views telling the user of their limited access.

image

The new user interface to OWA looks as follows:

image

With ReadOnlyPlusAttachmentsBlocked set as the ConditionalAccessPolicy value, the attachment cannot be viewed. This is what this looks like (new OWA UI):

image

And SharePoint and OneDrive, just because it is very similar!

This is outlined in https://c7solutions.com/2019/04/read-only-and-document-download-restrictions-in-sharepoint-online

Categories
exchange exchange online Exchange Server migration Public Folders

Public Folder Migrations and the Changing Cmdlets

To complete a public folder migration from Exchange 2013/2016 to Exchange Online you need to run

Set-OrganizationConfig -PublicFolderMailboxesLockedForNewConnections $true

But if you look at lots of the documentation that is out there with their tips and tricks etc. you will see that lots of them say:

Set-OrganizationConfig –PublicFoldersLockedForMigration $true

So very near – but it's the wrong cmdlet now and it does nothing. It does not lock out the public folders, and in the cloud all you get is:

PS C:\Users\BrianReid> Complete-MigrationBatch PublicFolderMigration
The public folders in the source environment are not ready for finalizing the migration. Make sure that public folder
access is locked on the source Exchange server, and there are no active public folder mailbox moves or public folder
moves in the source.
     + FullyQualifiedErrorId : [Server=VI1PR09MB2909,RequestId=ca0ffb4a-cc9f-4195-94fd-e3dd060587e6,TimeStamp=13/12/2018 18:03:00] [FailureCategory=Cmdlet-MigrationBatchCannotBeCompletedException] 2FB8651C,Microsoft.Exchange.Management.Migration.MigrationService.Batch.CompleteMigrationBatch
     + PSComputerName        : outlook.office365.com

And there is nothing useful on the web for this error, so I wrote this to help you get out of this hole!

Run the correct cmdlet and migrations will start!
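If you want to confirm which lock is actually in place on the source organization before retrying the completion, a quick check in the Exchange Server management shell is:

# Shows both lock properties - only PublicFolderMailboxesLockedForNewConnections matters for 2013/2016 sources
Get-OrganizationConfig | Format-List PublicFoldersLockedForMigration, PublicFolderMailboxesLockedForNewConnections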

Categories
certificates exchange online Exchange Server Kemp SSL

Test Connectivity Website and TLS 1.2

An excellent resource for Microsoft Exchange Server and Exchange Online administrators and consultants is the Remote Test Connectivity website at http://exrca.com or https://testconnectivity.microsoft.com/.

Here I am going to document an error that indicated that the Exchange Server (in this case) was not working, even though we could see that the phone was connecting fine to the server. The error we saw was:

“The certificate couldn’t be validated because SSL negotiation wasn’t successful. This could have occurred as a result of a network error or because of a problem with the certificate installation.”

and also

“The Microsoft Connectivity Analyzer wasn’t able to obtain the remote SSL certificate”

The error looked like the following:

exrca tls 10 support[96033]

This error occurs when TLS 1.0 is disabled either on the end server or on a load balancer in front of the server. In my case this was the case with the Kemp load balancer we were using – TLS 1.0 was disabled under SSL Properties. Once we re-enabled TLS 1.0 on the Kemp, the Remote Connectivity Analyzer worked instantly:

TLS Kemp setting[96034]

Categories
AADConnect exchange exchange online Exchange Server migration Office 365 Public Folders

Public Folder Sync–Duplicate Name Error

I came across this error with a client today and did not find it documented anywhere – so here it is!

When running the Public Folder sync script Sync-ModernMailPublicFolders.ps1 which is part of the process of preparing your Exchange Online environment for a public folder migration, you see the following error message:

UpdateMailEnabledPublicFolder : Active Directory operation failed on O365SERVERNAME.O365DATACENTER.PROD.OUTLOOK.COM. The
object ‘CN=PublicFolderName,OU=tenant.onmicrosoft.com,OU=Microsoft Exchange Hosted
Organizations,DC=O365DATACENTER,DC=PROD,DC=OUTLOOK,DC=COM’ already exists.
At C:\ExchangeScripts\pfToO365\Sync-ModernMailPublicFolders.ps1:746 char:9
+         UpdateMailEnabledPublicFolder $folderPair.Local $folderPair.Remote;
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
     + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,UpdateMailEnabledPublicFolder

This is caused because you have a user or other object in Active Directory that has the same name as the mail enabled public folder object.

In Exchange Online PowerShell, if you run Get-User PublicFolderName you should not get anything back, as it is a public folder and not a user; but where you see the above error you do get a response to Get-User (or maybe Get-Contact, or whatever other object type it is that is not a public folder). This clash of object names (common name, or cn) means the script can create the public folder in the cloud, but not update it on subsequent runs.
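A quick way to see what is clashing in Exchange Online is to query by name (PublicFolderName is the placeholder used above):

# More than one result, or a result that is not the mail-enabled public folder, indicates the clash
Get-Recipient -Filter "Name -eq 'PublicFolderName'" | Format-List Name, RecipientType, RecipientTypeDetails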

The easiest fix is to rename the common name of the public folder object in Active Directory for all clashing public folders, unless you know you do not need the other object that clashes – as renaming that and letting AADConnect sync process the change is another way to resolve this.

To rename the mail public folder, in Exchange Server management shell run Set-MailPublicFolder PublicFolderName –Name NewPublicFolderName

I have changed my names to start with pf, so PublicFolderName becomes pfPublicFolderName and then the script runs without issue.

Categories
MFA Office 365

Configuring Multi Factor Authentication For Office 365

Given that Office 365 is a user service, the enabling of multi-factor authentication is very much an admin driven action – that is, the administrators decide that the users should have it, or it is configured via Conditional Access when limiting the login for the user to certain applications and locations.

For a more security conscious user, enabling it themselves is harder! To do this, follow these steps:

  1. Go to My Apps – https://myapps.microsoft.com
  2. Click your picture icon top right and choose Profile from the menu
  3. Click Additional Security Verification from the menu to the right
  4. Select your preferred method of second factor of authentication from the first drop-down box. You need to ensure that the option you choose is enabled below.

You will now be prompted for the second authentication factor that you chose whenever you try to do a password change or change your verification info.

Categories
Azure Azure Information Protection cloud firewall Office 365 proxy SSL

SSL Inspection and Office 365

Lots of cloud endpoint URLs break service flow if you enable SSL Inspection on the network devices between your client and the service. My most recent example of this is Enterprise State Roaming in Windows 10.

Microsoft have a list of URLs for the endpoints to their service, where they are categorised as Default, Allow or Optimize. The URLs that are Allow or Optimize should avoid SSL inspection.

The endpoint list is found at https://support.office.com/en-us/article/managing-office-365-endpoints-99cab9d4-ef59-4207-9f2b-3728eb46bf9a#webservice and the JSON for this can be downloaded, as well as a PowerShell script to return the IPs and URLs.

Within this JSON file you can look for the category and if the category is Allow or Optimize then ensure the matching URLs are not SSL inspected.
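The same filtering can be scripted against the endpoints web service that the article above links to. A minimal sketch (the worldwide instance is assumed):

# Download the current Office 365 endpoint list and keep the URLs in the Allow and Optimize categories
$endpoints = Invoke-RestMethod -Uri "https://endpoints.office.com/endpoints/worldwide?clientrequestid=$([guid]::NewGuid())"
$endpoints | Where-Object { $_.category -in 'Allow','Optimize' -and $_.urls } |
    Select-Object -ExpandProperty urls | Sort-Object -Unique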

Categories
active directory Azure Active Directory Azure AD AzureAD EM+S enterprise mobility + security microsoft Office 365 password security

Improving Password Security In the Cloud and On-Premises

Passwords are well known to be generally insecure the way users create them. Users don't like “complex” passwords such as p9Y8Li!uk%al, and so if they are forced to create a “complex” password due to a policy in, say, Active Directory, or because their password has expired and they need to generate a new one, they will go for something that is easy to remember and matches the “complexity” rules required by their IT department. This means users will go for passwords such as WorldCup2018! and Summ3r!!. Both of these exceed 8 characters, both have mixed case, both have symbols and numbers – so both are complex passwords. Except they are not – they are easy to guess. For example, you can tell the date of this blog post from my suggestions! Users will not tend to pick passwords that are really random, and malicious actors know this. So current password guidance from NIST and the UK National Cyber Security Centre is to have non-expiring, not-simple passwords that are not known in advance and are changed on compromise. Non-expiring allows the user to remember the password if they need to (though a password manager is better), and as it is unknown beforehand (or unique) it is not on any existing password guess list.

So how can we ensure that users will choose these passwords? One way is end user training, but another, just released feature in the Microsoft Cloud is to block common passwords and password lists. This feature is called Azure AD Password Protection. With the password management settings in Azure AD, cloud accounts have been blocked from common passwords for a while (passwords that Microsoft see being used to attempt non-owner access on accounts), but with the password authentication restrictions you can link this to block lists and implement it for password changes that happen on domain controllers.

So how does all this work, and what sort of changes can I expect with my passwords?

Well what to expect can look like this:

image

Note all the below is what I currently know Microsoft do. This is based on info made public in November 2018 and documented at https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-password-ban-bad and is subject to change as Microsoft’s security graph and machine learning determines change is needed to keep accounts secure.

Password Scoring

First, each password is scored when changed or set by an administrator or a user on first use. A password with a score less than 5 is not allowed. For example:

spring2018 = [spring] + [2018] = 2 points

spring2018asdfj236 = [spring] + [2018] + [asdf] + [f] + [j] + [2] + [3] + [6] = 8 points

This shows that common phrases (like spring and 2018) can be allowed as part of a password that also contains material that is hard to guess. In this example, the asdf pattern comes straight from a QWERTY keyboard and so gets a low score. In addition to the score needing to reach 5, other complexity rules such as certain characters and length are still required if you enforce those options. Passwords are also “normalized”, which lower cases them and compares them in lower case – so spRinG2018 is as weak as spring2018. Normalization also makes common character replacements such as $=s, so $PrinG2018 is the same as spring2018 and also just as weak!

Your name is not allowed in any password you set, and the matching logic also applies an edit distance of one character: if "spring" is a blocked word then "sprinh" would also be blocked, because sprinh is only a single character change away from spring.

Common and Blocked Lists

Microsoft provide the common password lists, and these change as Microsoft see different passwords being used in account attacks. You provide a custom blocked list. This can contain up to 1000 words, and Microsoft apply fuzzy logic to both your list and the common list. For example, we added all of our office locations to our custom list.

This means that both capetown and C@p3t0wn would be blocked: the @ maps to a, the 3 to e and the 0 to o. So the apparently more complex version is not really complex at all, as it only uses common character replacements.
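To make the normalization and scoring described above a little more concrete, here is a minimal sketch in PowerShell. This is purely my interpretation of the documented behaviour and not Microsoft's actual algorithm; the substitution table and banned tokens are illustrative only, and the real service also applies the edit-distance-of-one matching mentioned earlier.

# Illustrative sketch only - a rough model of the documented normalization and
# scoring behaviour, not Microsoft's actual code
$substitutions = @{ '$' = 's'; '@' = 'a'; '0' = 'o'; '1' = 'l'; '3' = 'e' }
$bannedTokens  = @('spring', '2018', 'asdf', 'capetown', 'oxford')   # sample tokens only

function ConvertTo-NormalizedText {
    param([string]$Text)
    # Lower-case the text and undo common character substitutions
    $result = $Text.ToLower()
    foreach ($key in $substitutions.Keys) {
        $result = $result.Replace($key, $substitutions[$key])
    }
    return $result
}

function Get-PasswordScore {
    param([string]$Password)
    $normalized = ConvertTo-NormalizedText $Password
    $tokens     = $bannedTokens | ForEach-Object { ConvertTo-NormalizedText $_ }
    $score = 0
    while ($normalized.Length -gt 0) {
        # A matched banned token scores one point for the whole token...
        $match = $tokens | Where-Object { $normalized.StartsWith($_) } |
                 Sort-Object Length -Descending | Select-Object -First 1
        if ($match) { $normalized = $normalized.Substring($match.Length) }
        else        { $normalized = $normalized.Substring(1) }   # ...every other character scores one point
        $score++
    }
    return $score   # a score below 5 means the password is rejected
}

Get-PasswordScore 'spring2018'           # 2 points
Get-PasswordScore 'spring2018asdfj236'   # 7 points
Get-PasswordScore 'C@p3t0wn'             # 1 point

Running the last three lines returns 2, 7 and 1 points respectively, matching the examples above: the first and third fall below 5 points and would be rejected, while the second reaches 7 and would be allowed (subject to any other complexity rules you enforce).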

In terms of licences, the banned password list that Microsoft provides is licence free for all cloud accounts. You need Azure AD Basic if you want to add your own custom banned password list. For accounts in Windows Server Active Directory you need Azure AD Premium (P1) licences for your synced users; this allows the banned password list, including your custom additions, to be downloaded so that Active Directory can apply it on-premises to block bad passwords for all users, even those not synced to Azure AD.

Hybrid Password Change Events Protected

The check on whether a password change should be stopped applies in hybrid scenarios using self-service password reset, password hash sync, and pass-through authentication, though changes to the custom banned password list may take a few hours to reach the copy of the list that is downloaded to your domain controllers.

On-Premises Changes

There is an agent that is installed on the domain controllers. Password changes are passed to this agent, which checks the password against both the common list and your blocked list, using the most recently downloaded copy of the lists from Azure AD. The on-premises password is never passed up to Azure AD; the list is downloaded from Azure AD and processed locally on the domain controller. This download is done by the Azure AD password protection proxy, and the list is downloaded once per hour per AD site to pick up the latest changes. If your Azure AD password protection proxy fails, the domain controllers just use the last list that was successfully downloaded, and password changes are still allowed even if you lose internet access.

Note that the Azure AD password protection proxy is not the same as the Pass-Through Authentication agent or the AAD Connect Health agent, though it can be installed on the same servers as the PTA or Connect Health agents. Provisioning new servers for the proxy download service is not required.

The Azure AD password protection proxy wakes up hourly, checks SYSVOL for the timestamp of the most recently downloaded copy and decides whether a new copy is needed. Therefore, if your inter-site replication completes within the hour, proxy agents in other sites might not need to download the list at all, as the latest copy is already available via DFSR replication between the domain controllers.

The Azure AD password protection proxy does not need to run on a domain controller, so your domain controllers do not need internet access to obtain the latest list. The Azure AD password protection proxy downloads the list and places it in SYSVOL so that DFSR replication can take it to the domain controller that needs it.
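If you want to check how fresh the downloaded list is on a given domain controller, a quick look at the SYSVOL folder the proxy writes to (covered in the Changes In Forest section below) is enough. A minimal sketch, assuming the default SYSVOL path; the .ppe files are the hourly policy downloads:

# Find the most recently downloaded password policy file on this domain controller
Get-ChildItem 'C:\Windows\SYSVOL\domain\AzureADPasswordProtection' -Recurse -Filter *.ppe |
    Sort-Object LastWriteTime -Descending |
    Select-Object FullName, LastWriteTime -First 1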

Getting Started

To set a custom password block list, in the Azure Portal visit the Azure AD page, click Security and then click Authentication Methods (in the Manage section). Enter your banned passwords; lower case will do, as Microsoft apply the fuzzy logic described above to match your list against similar values. Your list should include words common to your organization, such as locations, office address keywords, and the functions and features of what the company does.

For Active Directory, download the Azure AD password protection proxy (from the Microsoft Download Center) to one or more servers (for fault tolerance). These servers will download the latest list and place it in SYSVOL so that the domain controllers can process it. Two servers in two sites would probably ensure that one of them is always able to download the latest copy of the list.

The documentation is found at https://aka.ms/deploypasswordprotection.

Microsoft suggest that any deployment starts in audit mode. Audit mode is the default initial setting, in which passwords can continue to be set even if they would otherwise be blocked; passwords that would fail in enforce mode are allowed, but event log entries record the fact that the password would have failed had enforce mode been turned on. Once the proxy server(s) and DC agents are fully deployed in audit mode, monitor regularly to determine what impact enforcement would have on users and the environment.

This audit mode allows you to update in-house policy, extend training programs, offer password advice, and see what users are doing that would be considered weak. Once you are happy that users are able to respond to a password change error because the password is too weak, move to enforce mode. Enforce mode should kick in within a few hours of you changing it in the cloud.

Installing the Proxy and DC Agent

Domain controllers need to be running Windows Server 2012 or later, though there are no requirements for specific domain or forest functional levels. The proxy software needs to run on Windows Server 2012 R2 or later with .NET 4.6.2 or later installed. Visit the Microsoft Download Center to download both the DC agent and the password protection proxy. The proxy is installed and then configured on a few (two at most during the preview) servers in the forest, and the agent is installed on all domain controllers, as password changes can be enacted on any of them.

To install the proxy, run AzureADPasswordProtectionProxy.msi on a server that has internet connectivity to Azure AD. This could be a domain controller, but the domain controller would then need internet access.
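For an unattended install, something like the following should work (a sketch using standard msiexec switches; run it from an elevated prompt on the chosen server):

# Silent install of the password protection proxy; add /l*v proxyinstall.log for an install log
msiexec.exe /i AzureADPasswordProtectionProxy.msi /quiet /norestart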

To configure the proxy, run Import-Module AzureADPasswordProtection once, followed by Register-AzureADPasswordProtectionProxy, and then, once the proxy is registered, register the forest as well with Register-AzureADPasswordProtectionForest, all from an administrative PowerShell session (Enterprise Admin and Global Admin roles are required). Registering the proxy adds information to the Active Directory domain partition about the server and port where the proxy can be found, and registering the forest stores information about the service in the configuration partition.

Import-Module AzureADPasswordProtection
# Check the proxy service is installed and running
Get-Service AzureADPasswordProtectionProxy | FL
# Register the proxy, then the forest, using a Global Admin credential
$tenantAdminCreds = Get-Credential
Register-AzureADPasswordProtectionProxy -AzureCredential $tenantAdminCreds
Register-AzureADPasswordProtectionForest -AzureCredential $tenantAdminCreds

If you get an error that reads "InvalidOperation: (:) [Register-AzureADPasswordProtectionProxy], AggregateException" then this is because your Azure AD requires MFA for device join. The proxy did not support MFA for device join during the early preview, but that should now be resolved, so make sure you are using the latest download of the agent and proxy code. If you still hit the error, you need to disable this setting in Azure AD for the period covering the time you make these changes; you can turn it back on again (as on is recommended) once you have finished configuring your proxy servers. This setting, should you need to disable it, is found as follows:

  • Navigate to Azure Active Directory -> Devices -> Device settings
  • Set "Require Multi-Factor Auth to join devices" to "No"
  • Then, once the registration of your two proxies is complete, reverse this change and turn it back on again.

Once at least one proxy is installed, you can install the agent on your domain controllers. This is AzureADPasswordProtectionDCAgent.msi, and once installed it requires a restart of the domain controller before it takes its role within the password change process.
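Again, an unattended install is straightforward (a sketch; schedule the reboot for a maintenance window):

# Silent install of the DC agent, followed by the required reboot
msiexec.exe /i AzureADPasswordProtectionDCAgent.msi /quiet /norestart
Restart-Computer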

The PowerShell cmdlet Get-AzureADPasswordProtectionDCAgent will report the state of each DC agent and the date/time stamp of the most recent password block list that the agent knows about.
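For example, something like this shows the agents and how current their policy is (a sketch; the property names are what I recall from my lab, so pipe the cmdlet to Format-List * first to confirm what your agent version returns):

# List every DC agent with the timestamp of the password policy it last saw
Get-AzureADPasswordProtectionDCAgent |
    Sort-Object PasswordPolicyDateUTC |
    Select-Object ServerFQDN, SoftwareVersion, PasswordPolicyDateUTC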

Changes In Forest

Once the domain controller that the agent is installed on is rebooted, it comes back online, finds the server(s) running the proxy application and asks for the latest password block list to be downloaded. The proxy downloads this to C:\Windows\SYSVOL\domain\AzureADPasswordProtection. Older versions of the proxy and agent used a folder called {4A9AB66B-4365-4C2A-996C-58ED9927332D} under the Policies folder; these versions of the agent stop working in July 2019 and need to be updated to the latest release.

In the Configuration partition, some settings about the service are persisted at CN=Azure AD Password Protection,CN=Services,CN=Configuration,DC=domain,DC=com. If a domain controller has the agent installed, then CN=AzureADConnectPasswordPolicyDCAgent,CN=<DomainControllerName>,OU=Domain Controllers,DC=domain,DC=com is also created.
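If you want to see what has been registered in the directory, a quick query of that service container will show it. A minimal sketch using the ActiveDirectory PowerShell module:

# List the objects registered under the Azure AD Password Protection service container
Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -SearchBase "CN=Azure AD Password Protection,CN=Services,$configNC" `
    -Filter * -SearchScope Subtree |
    Select-Object Name, ObjectClass, DistinguishedName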

Then, on each domain controller, in the Event Viewer you get Application and Services Logs > Microsoft > AzureADPasswordProxy, with DCAgent logs on the DCs and ProxyService logs on the proxy servers. The Event IDs are documented at https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-password-ban-bad-on-premises-monitor.

The monitoring Event IDs distinguish between a user changing their own password and an administrator setting (resetting) it. This allows you to audit who (end user or service desk) is setting weak passwords.

Once the proxy starts to work, the folder above starts to get content. In my case, during the preview, it took over 2 hours from installing the proxy and running the installation and configuration PowerShell cmdlets listed above before the proxy downloaded anything. The Configuration subfolder contained some .cfge files, and the PasswordPolicies subfolder contains what I assume is the downloaded password block list, compressed, as it is only 12KB. This is the .ppe file, and one of these is downloaded per hour; older versions of this download are deleted by the proxy service automatically.

Audit Mode

Using the above Event IDs, you can track the users who have chosen weak passwords (in that they are on your or Microsoft's banned password list) when the user or an admin sets (or resets) the user's password. Audit mode does not stop the user choosing a password that would "normally have been rejected", but it records different Event IDs depending upon the activity and which block list it would have failed against. Event ID DCAgent/30009, for example, has the message "The reset password for the specified user would normally have been rejected because it matches at least one of the tokens present in the Microsoft global banned password list of the current Azure password policy. The current Azure password policy is configured for audit-only mode so the password was accepted.". This message is a failure against Microsoft's list on a password set (reset). A user-initiated password change that fails against the Microsoft list gets Event ID DCAgent/30010 recorded instead.

Using these two Event IDs along with DCAgent/30007 (logged when a password is set but would fail your custom list) and DCAgent/30008 (password changed, would fail your custom list) allows you to audit the impact of the new policy before you enforce it.
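To pull these audit events out of the DC agent event log with PowerShell, something like the following works (a sketch; I discover the log name rather than hard-coding it, as the exact log name may differ between agent versions):

# Find the DC agent admin log and pull the audit-mode events from it
$log = Get-WinEvent -ListLog '*AzureADPassword*DCAgent*Admin*' -ErrorAction SilentlyContinue |
       Select-Object -First 1 -ExpandProperty LogName
Get-WinEvent -FilterHashtable @{ LogName = $log; Id = 30007, 30008, 30009, 30010 } |
    Select-Object TimeCreated, Id, Message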

Enforce Mode

This mode ensures that password set or change events cannot use passwords that would fail the list policies. It is enabled in the password policy settings in Azure AD (the same Authentication Methods blade described in the Getting Started section above).

Once this is enabled it takes a few hours to be picked up by the proxy servers and then for the agent to start rejecting banned passwords.

When a user changes their password in enforce mode and hits a banned password, they get the standard Windows password rejection error (the exact dialog differs between Windows versions). An administrator resetting the password to a blocked value sees the equivalent error in Active Directory Users and Computers.

This is no different to the old error you get when your password complexity, length or history requirements are not met. There is therefore no indication to the user that the password they chose was banned rather than rejected for one of those other reasons. So although a password policy with a banned list is an excellent step forward, there needs to be help desk and end user awareness and communication (even if it is just a simple notification), as the user cannot tell what went wrong from the client error they get. Maybe Microsoft have plans to update the client error?

Password changes that fail once enforce mode is enabled get Event IDs such as DCAgent/30002, DCAgent/30003, DCAgent/30004 and DCAgent/30005, depending upon which password list the failure happened against and whether it was a password set or a password change. For example, when I used the password Oxford123, as "oxford" is in my custom banned password list, Event ID 30003 was logged with the message "The reset password for the specified user was rejected because it matched at least one of the tokens present in the per-tenant banned password list of the current Azure password policy". As mentioned above, the 123 following the banned word is not enough to bring the score up to the required 5 points, and so the password change is rejected.

On the other hand, 0xf0rdEng1and! was allowed, as England is not on my banned list, and so although my new password contained a banned word, there were enough other components in the password to make it secure enough. Based on the scoring described above, where 5 or more points are required for a password to be accepted, this breaks down as [Oxford] + [E] + [n] + [g] + [1] + [a] + [n] + [d] + [!], a total of 9 points. 9 >= 5 and so the password is accepted.
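If you want to test this yourself, an administrative reset against a test account is the quickest way (a sketch; testuser is a hypothetical account, and the passwords are the examples from above):

# Trigger password resets that the DC agent will evaluate against the policy
Import-Module ActiveDirectory
# Expect this to fail in enforce mode: 'oxford' is banned and 123 only adds 3 points
Set-ADAccountPassword -Identity testuser -Reset `
    -NewPassword (ConvertTo-SecureString 'Oxford123' -AsPlainText -Force)
# Expect this to succeed: the extra characters take the score to 9 points
Set-ADAccountPassword -Identity testuser -Reset `
    -NewPassword (ConvertTo-SecureString '0xf0rdEng1and!' -AsPlainText -Force)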

Finally, when testing with users, other password policies, such as the minimum password age and the "user cannot change password" property, take effect ahead of the banned password list. For example, if you have a minimum password age of 5 days and you reset the user's password as an administrator, that reset will succeed or fail based on the password you enter; but if you then change the password as the user within that period, the change will fail because five days have not gone by, not because the user picked a guessable password.