MFA and End User Impacts


This article will look at the various MFA settings found in Azure AD (which controls MFA for Office 365 and other SaaS services) and how those decisions impact users.

There is lots on the internet about enabling MFA, and lots about what that looks like for the user – but nothing I could find that directly laid out all the options and the impact of each one.

The options that the admin can set that I will cover in this article are:

  • Default settings for the MFA registration service
  • The enhanced registration service (now deprecated)
  • The refreshed enhanced registration service (MFA and Self-Service Password Reset registration combined)

The general impact to the user is that they need to provide a second factor to login. In this article I will not detail registration for each individual second factor, only the general registration process – your exact registration experience will depend on which second factors (app notifications, app codes, phone calls and text messages) you choose to implement.

This article will look mainly at the difference between having no MFA and what happens from the user's perspective as the admin turns on a requirement to have MFA. The various options that the admin can use to enable MFA are as follows:

  • Office 365 MFA (aka the legacy method) that is available to all users with or without a licence
  • Azure AD Conditional Access and setting a rule that requires MFA (when the user is not registered)
  • Azure AD Premium 2 licence and MFA Registration (register without requiring MFA to be enabled)
  • Azure AD Free fixed Conditional Access rules (MFA for all users) which is in preview at the time of writing (Aug 2019)

Terminology and Settings

This article refers back to a number of different settings in each of the following sections. To avoid repetition, this section outlines each of the general settings, what I mean by the description I use, and where each setting is turned on or off.

Office 365 MFA – This is the legacy MFA options set via https://admin.microsoft.com > User Management > Multi-Factor Authentication. This user experience turns on or off MFA for users regardless of app or location (unlike Conditional Access) and has settings for the different second factor methods (for example you can disable SMS from here).

Legacy MFA Admin Page

Conditional Access Based MFA – This is where you set rules for accessing cloud apps based on the user, the location (IP), the sign-in risk (P2 licence required), the device (domain joined or compliant), the device risk (MDATP licence required), compliance (Intune required) etc. If your rule requires MFA and the user logging in matches the conditions of the rule (and is not otherwise blocked) then this is what I call Conditional Access Based MFA. This is set in https://portal.azure.com > Azure Active Directory > Enterprise Applications > Conditional Access

Conditional Access Based MFA

Azure AD Premium 2 MFA Registration – This is where you can get users to register before you turn on MFA via either of the above routes. Without the P2 licence you turn on MFA and the user needs to register at their next login. With P2 you can turn on registration at next login (which the user can defer for up to 14 days) without forcing MFA, and then enable MFA later. This way the user registers even if they never hit an endpoint that requires them to do MFA – for example, if MFA is only required when external and the user never works remotely, they will never have to do MFA and therefore never be required to register. With the P2 licence you can get them to register independently of any requirement to do MFA. You access these settings via https://portal.azure.com > Azure AD Identity Protection > MFA Registration

Azure AD P2 MFA Registration

Self Service Password Reset Registration – This is shown if the user is in scope for SSPR and SSPR is enabled. This is not MFA registration – but if the user is in scope they will be asked to register for this as well. This therefore can result in two registrations at next login – one for SSPR and one for MFA. We will show this below, but it is best if you move to the combined MFA/SSPR registration wizard mentioned below.

SSPR Enablement

Enhanced Registration (Deprecated) – This was the new registration wizard in 2018 and has been replaced by the next option. If you still have users on this option you will see it; otherwise the option to enable this older wizard is now removed. This is accessed via https://portal.azure.com > Azure AD > User Settings > Manage user features preview

MFA Registration Wizard

Combined MFA and SSPR Registration – This is the current recommended MFA registration process and it includes self-service password reset registration as well. You should aim to move your settings to this. All the new MFA reporting and insights are based on this process. This is accessed via https://portal.azure.com > Azure AD > User Settings > Manage user features preview. Note that if you still have users on the previous “Enhanced Registration” shown above then this one is listed as “enhanced”. If not – if only one slider is shown – it is the new registration process. You can enable this for a group of users (for pilot) or all users:

Combined MFA and SSPR Registration Wizard

Office 365 MFA + Original Registration

This method is no longer recommended – use the Azure AD Free Conditional Access rules for all users or all admins instead. But for completeness, to show all the options: you select one or more users in the Office 365 MFA page and click Enable. In the below screenshot we can see that Cameron White is enabled for MFA. This means that it has been turned on for him, but he has not yet gone through the registration wizard:

Office 365 MFA for Single User

The video below shows the first run experience of this user – they login and are prompted to register for MFA. They register using the legacy experience and are then granted access to the application.

OFFICE 365 MFA + LEGACY REGISTRATION

Office 365 MFA + Enhanced Registration

For this scenario I have a user called Brian Johnson. He has been enabled for MFA as above (Office 365 method) but additionally has been added to a group that is configured to support the new MFA+SSPR combined registration process. Brian is not enabled for SSPR. The video shows the user experience. Note that the user needs a valid licence to be able to use this experience. If they do not have any licences they will get the old experience:

VIDEO OFFICE 365 MFA + ENHANCED REGISTRATION

Conditional Access MFA

The following video looks at the experience of two users who are enforced for MFA via Conditional Access. The login will trigger the registration for MFA as neither user is already registered. The first user (Christie) gets the old registration wizard and the second (Debra) gets the new registration wizard. The Conditional Access settings are basic – MFA in all circumstances for our two users:

Conditional Access Two Users
Conditional Access MFA Enabled
CONDITIONAL ACCESS WITH OLD AND NEW REGISTRATION

Impact of SSPR on MFA Registration and User Sign-In

When users are set up to register both their password reset security methods and MFA using the old registration wizard, they need to complete two separate registrations. Again, it is recommended that the combined registration process is used instead of this process.

For this demonstration, we are enabling SSPR for our test users. One with the old registration wizard and one with the new one:

SSPR WITH AND WITHOUT COMBINED REGISTRATION

Adding SSPR To Already Registered Users

Once a user has registered for MFA (old or new registration) there may come a time when you enable SSPR for them afterwards (rather than at the time of original registration). In this scenario, users who registered with the old registration wizard are asked to register for SSPR. For users who went through the new wizard – though they did not specifically register for SSPR – there are already enough details available for them to use the service (as long as app notifications and codes are enabled for SSPR). If SSPR is left on the default of SMS and email, then the new registration wizard has not collected your alternative email address and so SSPR is unavailable to you. The user process and flow is shown in the next video:

ENABLE SSPR AFTER REGISTRATION

Azure AD Identity Protection and MFA Registration

The Azure AD Premium 2 licensed feature called Identity Protection contains the ability to request that the user registers for MFA (and SSPR if via the new combined registration wizard) even if the user is not required to perform MFA at login – all our previous registrations happened only because the user needed to do MFA. You can ask users to pre-register via https://aka.ms/mfasetup, but Identity Protection adds this functionality with a 14 day option to defer. The video shows the settings and the user experience:

Azure AD IDENTITY PROTECTION WITH AND WITHOUT NEW REGISTRATION

Azure AD Free Conditional Access for All Users

Early in Q2 2019 Microsoft rolled out new baseline policies for Azure AD Conditional Access. These are available even without the Azure AD P1 licence needed for Conditional Access – but as they are licence free they are heavily restricted: they apply to all users and require MFA if the sign-in is risky. So though they do not require MFA on all logins (unlike the legacy Office 365 MFA settings), they do require registration, with a 14 day deferral process if the user is not ready to register. Unlike Azure AD Identity Protection mentioned above, you cannot enable this for only some users – it is enabled for all users when you enable the rule. Let's see the settings and the user experience in the video. The video also enables the "all admins" baseline policy, as that should always be turned on.

BASELINE POLICY FOR ALL USERS WITH REGISTRATION

Read Only And Document Download Restrictions in SharePoint Online



Both SharePoint Online (including OneDrive for Business) and Exchange Online allow a read only mode to be implemented based on certain user or device or network conditions.

For these settings in Exchange Online see my other post at https://c7solutions.com/2018/12/read-only-and-attachment-download-restrictions-in-exchange-online.

When this is enabled, documents can be viewed in the browser only and not downloaded. So how do you do this?

Step 1: Create a Conditional Access Policy in Azure AD

You need an Azure AD Premium P1 licence for this feature.

Here I created a policy that applied to one user and no other policy settings. This would mean this user is always in ReadOnly mode.

In real world scenarios you would more likely create a policy that applied to a group rather than individual users, and forced ReadOnly only when other conditions such as a non-compliant device (i.e. home computer) were in use. The steps for this are:


Since you cannot create these policies from the command line, the steps, as shown in the screenshots, are as follows:

  1. New policy with a name. Here it is “Limited View for ZacharyP”
  2. Under “Users and Groups” I selected my one test user. Here you are more likely to pick the users for whom data leakage is an issue
  3. Under “Cloud apps” select Office 365 SharePoint Online. I have also selected Exchange Online, as the same idea exists in that service as well
  4. Under Session, and this is the important one, select “Use app enforced restrictions”.

SharePoint Online will then implement read only viewing for all users that fall into this policy you have just created.

Step 2: View the results

Ensure the user is licensed for SharePoint Online (and a mailbox if you are testing Exchange Online) and an Azure AD Premium P1 licence and ensure there is a document library with documents in it for testing!

Login as the user under the conditions you have set in the policy (in my example the condition was for the specific user only, but you could use network or device conditions as well).

SharePoint and OneDrive Wizard Driven Setup

For reference, in the SharePoint Admin Centre, under Policies > Access Control > Unmanaged Devices, you can turn on "Allow limited web-only access" or "Block access" to have the above conditional access rule created for you, but with pre-canned conditions:

image

In the classic SharePoint Admin Center it is found under the Access Control menu, and in SharePoint PowerShell use Set-SPOTenant -ConditionalAccessPolicy AllowLimitedAccess

Turning the settings on in SharePoint creates the Conditional Access policies for you, so for my demo I disabled those, as the policy I created had different conditions and included Exchange Online as well as SharePoint. This is as shown for SharePoint – the banner is across the top and the Download link on the ribbon is missing:

image

And for OneDrive, which is automatic when you turn it on for SharePoint:

image

Improving Password Security In the Cloud and On-Premises


Passwords are well known to be generally insecure in the way users create them. Users don't like "complex" passwords such as p9Y8Li!uk%al, so if they are forced to create a "complex" password due to a policy in, say, Active Directory, or because their password has expired and they need to generate a new one, they will go for something that is easy to remember and matches the "complexity" rules required by their IT department. This means users will pick passwords such as WorldCup2018! and Summ3r!!. Both exceed 8 characters, both have mixed case, both have symbols and numbers – so both are complex passwords. Except they are not – they are easy to guess. For example, you can tell the date of this blog post from my suggestions! Users do not tend to pick passwords that are really random, and malicious actors know this. So current password guidance from NIST and the UK National Cyber Security Centre is to have non-expiring, non-simple passwords that are not known in advance and are changed on compromise. Non-expiry allows the user to remember the password if they need to (though a password manager is better), and being unknown beforehand (or unique) means it is not on any existing password guess list.

So how can we ensure that users choose passwords like these? One way is end user training, but another, just-released feature in the Microsoft Cloud is to block common passwords and password lists. This feature is called Azure AD Password Protection. With the password management settings in Azure AD, cloud accounts have been blocked from common passwords for a while (passwords that Microsoft sees being used to attempt non-owner access on accounts), but with Azure AD Password Protection you can link this to block lists and implement it for password changes that happen on domain controllers.

So how does all this work, and what sort of changes can I expect with my passwords?

Well what to expect can look like this:

image

Note all the below is what I currently know Microsoft do. This is based on info made public in November 2018 and documented at https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-password-ban-bad and is subject to change as Microsoft’s security graph and machine learning determines change is needed to keep accounts secure.

Password Scoring

First, each password is scored when it is changed, or when it is set by an administrator or by a user on first use. A password with a score of less than 5 is not allowed. For example:

spring2018 = [spring] + [2018] = 2 points

spring2018asdfj236 = [spring] + [2018] + [asdf] + [j] + [2] + [3] + [6] = 7 points

This shows that common phrases (like spring and 2018) can be allowed as part of a password that also contains material that is hard to guess. Here, the asdf pattern comes straight from a QWERTY keyboard and so gets a low score. In addition to the score needing to reach at least 5, other complexity rules, such as certain characters and length, are still required if you enforce those options. Passwords are also "normalized", which lower-cases them and compares them in lower case – so spRinG2018 is as weak as spring2018. Normalization also covers common character replacements such as $=s, so $PrinG2018 is the same as spring2018 and just as weak!
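The normalization and token-based scoring described above can be sketched in code. This is purely illustrative – Microsoft's actual algorithm and token lists are not public – so the token list, substitution map and greedy matching below are all assumptions for demonstration:

```python
# Illustrative sketch of banned-password scoring; NOT Microsoft's implementation.
# Token list and substitution map are hypothetical samples.

SUBSTITUTIONS = str.maketrans({"$": "s", "@": "a", "0": "o", "3": "e"})

# Hypothetical known tokens: common words, years and keyboard patterns.
KNOWN_TOKENS = ["capetown", "spring", "oxford", "2018", "asdf"]

def normalize(password: str) -> str:
    """Lower-case the password and undo common character substitutions."""
    return password.lower().translate(SUBSTITUTIONS)

def score(password: str) -> int:
    """Each matched token and each leftover character scores 1 point."""
    raw = password.lower()
    norm = normalize(password)
    points, i = 0, 0
    while i < len(raw):
        for token in sorted(KNOWN_TOKENS, key=len, reverse=True):
            if raw.startswith(token, i) or norm.startswith(token, i):
                points += 1          # a whole known token scores only 1 point
                i += len(token)
                break
        else:
            points += 1              # an unmatched character scores 1 point
            i += 1
    return points

print(score("spring2018"))      # 2 points ([spring] + [2018]) -> rejected (< 5)
print(score("C@p3t0wn"))        # 1 point: normalizes to the single token [capetown]
print(score("0xf0rdEng1and!"))  # 9 points -> accepted
```

Anything scoring below 5 would be rejected, mirroring the examples in the text.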

Your name is not allowed in any password you set, and the matching logic applies to an edit distance of 1 character – that is, if "spring" is a blocked word then "sprinh" would also be blocked, h being one character away from g.
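The edit-distance rule can be illustrated with a small, self-contained check – a sketch of the idea, not Microsoft's code:

```python
def within_one_edit(candidate: str, banned: str) -> bool:
    """True if candidate equals banned or differs by one substitution,
    insertion or deletion (edit distance <= 1)."""
    if candidate == banned:
        return True
    if abs(len(candidate) - len(banned)) > 1:
        return False
    if len(candidate) == len(banned):
        # Same length: allow exactly one substituted character.
        return sum(a != b for a, b in zip(candidate, banned)) == 1
    # Lengths differ by one: allow one inserted/deleted character.
    shorter, longer = sorted((candidate, banned), key=len)
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

print(within_one_edit("sprinh", "spring"))  # True: h is one edit away from g
```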

Common and Blocked Lists

Microsoft provides the common password lists, and these change as Microsoft sees different passwords getting used in account attacks. You provide a custom blocked list. This can contain up to 1000 words, and Microsoft applies fuzzy logic to both your list and the common list. For example, we added all our office locations as shown:

image

This means that both capetown and C@p3t0wn would be blocked: @=a, 3=e and 0=o, so the more complex-looking one is really not complex at all, as it only contains common replacements.

In terms of licences, the banned password list that Microsoft provides is licence free to all cloud accounts. You need AAD Basic if you want to add your own custom banned password list. For accounts in Windows Server Active Directory you need the Azure AD Premium (P1) licence for all synced users to allow downloading of the banned password list as well as customising it with your words so that Active Directory can apply it to all users on-premises to block bad passwords (even those users not synced to AzureAD).

Hybrid Password Change Events Protected

The checks on whether a password change should be stopped are included in hybrid scenarios using self-service password reset, password hash sync and pass-through authentication, though changes to the custom banned password list may take a few hours to be applied to the list that is downloaded to your domain controllers.

On-Premises Changes

There is an agent installed on each domain controller. Password changes are passed to the agent, which checks the password against the common list and your blocked list – specifically, against the most recently downloaded copy of the list from Azure AD. The password is not passed up to Azure AD; the list is downloaded from Azure AD by the Azure AD password protection proxy and processed locally on the domain controller. The list is downloaded once per hour per AD site to include the latest changes. If your Azure AD password protection proxy fails, the last successfully downloaded list is used, and password changes are still allowed even if you lose internet access.

Note that the Azure AD password protection proxy is not the same as the Pass-Through Authentication agent or the AAD Connect Health agent, though it can be installed on the same servers as the PTA or Connect Health agents. Provisioning new servers just for the proxy download service is not required.

The Azure AD password protection proxy wakes up hourly, checks SYSVOL to see the timestamp of the most recently downloaded copy of the list and decides if a new copy is needed. Therefore, if replication between your sites completes within the hour, proxy agents in other sites might not need to download the list, as the latest copy is already available via DFSR between the domain controllers.
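The proxy's hourly decision can be pictured as a simple freshness check – an assumed sketch of the logic, not the actual service code:

```python
from datetime import datetime, timedelta

def needs_new_list(sysvol_timestamp: datetime, now: datetime,
                   max_age: timedelta = timedelta(hours=1)) -> bool:
    """Download a fresh password block list only if the copy already
    replicated into SYSVOL is more than an hour old."""
    return now - sysvol_timestamp > max_age

# A copy replicated in from another site 30 minutes ago is still fresh:
print(needs_new_list(datetime(2019, 8, 1, 10, 0), datetime(2019, 8, 1, 10, 30)))  # False
```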

The Azure AD password protection proxy does not need to run on a domain controller, so your domain controllers do not need internet access to obtain the latest list. The Azure AD password protection proxy downloads the list and places it in SYSVOL so that DFSR replication can take it to the domain controller that needs it.

Getting Started

To set a custom password block list, in the Azure Portal visit the Azure AD page, click Security and then click Authentication Methods (in the Manage section). Enter your banned passwords, lower case will do as Microsoft apply fuzzy logic as described above to match your list to similar other values. Your list should include common words to your organization, such as location, office address keywords, functions and features of what the company does etc.

For Active Directory, download the proxy software (from the Microsoft Download Center) to one or more servers (for fault tolerance). These will download the latest list and place it in SYSVOL so that the domain controllers can process it. Two servers in two sites would probably ensure one of them is always able to download the latest copy of the list.

The documentation is found at https://aka.ms/deploypasswordprotection.

Microsoft suggests that any deployment start in audit mode. Audit mode is the default initial setting where passwords can continue to be set even if they would be blocked. Those that would fail in Enforce mode are allowed in Audit mode, but when in audit mode entries in the event log record the fact that the password would fail if enforce was turned on. Once proxy server(s) and DC agents are fully deployed in audit mode, regular monitoring should be done in order to determine what impact password policy enforcement would have on users and the environment if the policy was enforced.

This audit mode allows you to update in-house policy, extend training programs, offer password advice and see what users are doing that would be considered weak. Once you are happy that users are able to respond to a password change error because the password is too weak, move to enforce mode. Enforce mode should kick in within a few hours of you changing it in the cloud.

Installing the Proxy and DC Agent

Domain Controllers need to be running Windows Server 2012 and later, though there are no requirements for specific domain or forest functional levels. The Proxy software needs to run on a Windows Server 2012 R2 or later server and be running .NET 4.6.2 or later. Visit the Microsoft Download Center to download both the agent and the password protection proxy. The proxy is installed and then configured on a few (two at most during preview) servers in a forest. The agent is installed on all domain controllers as password changes can be enacted on any of them.

To install the proxy, run AzureADPasswordProtectionProxy.msi on a server that has internet connectivity to Azure AD. This could be your domain controller, but it would then need internet access.

To configure the proxy, run Import-Module AzureADPasswordProtection once, followed by Register-AzureADPasswordProtectionProxy, and then, once the proxy is registered, register the forest as well with Register-AzureADPasswordProtectionForest – all from an administrative PowerShell session (Enterprise Admin and Global Admin roles required). Registering the server adds information to the Active Directory domain partition about the server and port where the proxy can be found, and registering the forest stores information about the service in the configuration partition.

# Check the proxy service is running after installing the MSI
Import-Module AzureADPasswordProtection
Get-Service AzureADPasswordProtectionProxy | FL
# Register the proxy, then the forest, using Global Admin credentials
$tenantAdminCreds = Get-Credential
Register-AzureADPasswordProtectionProxy -AzureCredential $tenantAdminCreds
Register-AzureADPasswordProtectionForest -AzureCredential $tenantAdminCreds

If you get an error that reads "InvalidOperation: (:) [Register-AzureADPasswordProtectionProxy], AggregateException" then this is because your Azure AD requires MFA for device join. The proxy did not support MFA for device join during the early preview, but that should now be resolved – make sure you are using the latest download of the agent and proxy code. If you still hit this error, you need to disable the setting in Azure AD for the period covering the time you make these changes – you can turn it back on again (as on is recommended) once you have finished configuring your proxy servers. This setting, should you need to disable it, is found at:

  • Navigate to Azure Active Directory -> Devices -> Device settings
  • Set “Require Multi-Factor Auth to join devices” to “No”
  • As shown
    image
  • Then once the registration of your two proxies is complete, reverse this change and turn it back on again.

Once at least one proxy is installed, you can install the agent on your domain controllers. This is the AzureADPasswordProtectionDCAgent.msi and once installed requires a restart of the server to take its role within the password change process.

The PowerShell cmdlet Get-AzureADPasswordProtectionDCAgent will report the state of the DCAgent and the date/time stamp of the last downloaded password block list that the agent knows about.

image

Changes In Forest

Once the domain controller the agent is installed on is rebooted, it comes back online, finds the server(s) running the proxy application and asks it to download the latest password block list. The proxy downloads this to C:\Windows\SYSVOL\domain\AzureADPasswordProtection. Older versions of the proxy and agent used a folder called {4A9AB66B-4365-4C2A-996C-58ED9927332D} under the Policies folder. These versions of the agent stop working in July 2019 and need to be updated to the latest release.

In the Configuration partition, at CN=Azure AD Password Protection,CN=Services,CN=Configuration,DC=domain,DC=com, some settings about the service are persisted. If the domain controller has the agent installed then CN=AzureADConnectPasswordPolicyDCAgent,CN=&lt;DomainControllerName&gt;,OU=Domain Controllers,DC=domain,DC=com is created.

And then on each domain controller, in the Event Viewer, you get Application and Services Logs > Microsoft > AzureADPasswordProxy, with DCAgent logs on the DCs and ProxyService logs on the proxy servers. The Event IDs are documented at https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-password-ban-bad-on-premises-monitor.

The monitoring Event IDs distinguish a user password change from an admin password set. This allows you to audit who (end user or service desk) is setting weak passwords.

Once the proxy starts to work, the above folder starts to get content. In my case, in the preview, it took over 2 hours from installing the proxy (and running the documented installation and configuration PowerShell cmdlets listed above) for the proxy to download anything. The Configuration container listed above contained some cfge files, and the AzureADPasswordProtection folder contains what I assume is the downloaded password block list, compressed as it's only 12KB. This is the .ppe file, and one of these is downloaded per hour. Older versions of this download are deleted by the proxy service automatically.

Audit Mode

Using the above Event IDs you can track the users who have changed to weak passwords (in that they are on your or Microsoft's banned password list) when the user or an admin sets (or resets) the user's password. Audit mode does not stop the user choosing a password that would "normally have been rejected", but it records different Event IDs depending upon the activity and which block list the password would have failed against. Event ID DCAgent/30009, for example, has the message "The reset password for the specified user would normally have been rejected because it matches at least one of the tokens present in the Microsoft global banned password list of the current Azure password policy. The current Azure password policy is configured for audit-only mode so the password was accepted.". This message is a failure against Microsoft's list on a password set. A user doing a password change against the Microsoft list gets Event ID DCAgent/30010 recorded instead.

Using these two Event IDs along with DCAgent/30007 (logged when a password is set but fails your custom list) and/or DCAgent/30008 (password changed, fails your custom list) allows you to audit the impact of the new policy before you enforce it.
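The audit-mode Event IDs above can be summarized in a small lookup table, which is handy when scripting log analysis (the description strings are mine; the IDs are as documented above):

```python
# DCAgent audit-mode Event IDs described above, keyed to what they mean.
AUDIT_EVENTS = {
    30007: ("password set", "custom (per-tenant) banned password list"),
    30008: ("password change", "custom (per-tenant) banned password list"),
    30009: ("password set", "Microsoft global banned password list"),
    30010: ("password change", "Microsoft global banned password list"),
}

def describe(event_id: int) -> str:
    """Render a one-line summary of an audit-mode event."""
    operation, block_list = AUDIT_EVENTS[event_id]
    return f"Audit-only: {operation} would have been rejected by the {block_list}"

print(describe(30009))
# Audit-only: password set would have been rejected by the Microsoft global banned password list
```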

Enforce Mode

This mode ensures that password set or change events cannot have passwords that would fail the list policies. This is enabled in the password policy in Azure AD as shown:

image

Once this is enabled it takes a few hours to be picked up by the proxy servers and then for the agent to start rejecting banned passwords.

When a user changes their password in enforce mode they get the following error (different graphic depending upon Windows versions). If the admin changing the password uses a blocked password then they see the left hand graphic as well (Active Directory Users and Computers).

image
image


This is no different to the old error you get when your password complexity, length or history is not met. Therefore there is no indication to the user that the password they chose might be banned rather than not allowed for the given reasons. So though password policy with a banned list is an excellent step forward, there needs to be help desk and end user awareness and communications (even if they are just a simple notification) as the user would not be able to tell from the client error they get. Maybe Microsoft have plans to update the client error?

Password changes that fail once enforce mode is enabled get Event IDs such as DCAgent/30002, DCAgent/30003, DCAgent/30004 and DCAgent/30005, depending upon which password list the failure happened against and whether it was a password set or change. For example, when I used the password Oxford123, as "oxford" is in the custom banned password list, Event ID 30003 returns "The reset password for the specified user was rejected because it matched at least one of the tokens present in the per-tenant banned password list of the current Azure password policy". As mentioned above, the sequence of 123 following the banned word is not enough to bring the score to 5 points ([oxford] + [1] + [2] + [3] = 4 points) and so the password change is rejected.

On the other hand, 0xf0rdEng1and! was allowed, as England was not on my banned list, and so although my new password contained a banned word, there were enough other components in the password to make it secure enough. Based on the above-mentioned requirement for a score of 5 or more: [Oxford] + [E] + [n] + [g] + [1] + [a] + [n] + [d] + [!] gives a total of 9, and 9 ≥ 5, so the password is accepted.

Finally, when testing users, other password policies, such as the date the password can next be changed and the "user cannot change password" property, take effect over the banned password list. For example, if you have a "cannot change password for 5 days" setting and you set the user's password as an administrator, that will succeed or fail based on the password you enter; but if you change the password as the user within that time period, it will fail because five days have not gone by, not because the user picked a guessable password.

Azure AD Single Sign-On Basic Auth Popup


When configuring Azure AD SSO as part of Pass-Through Authentication (PTA) or Password Hash Authentication (PHA), you now (since March 2018) need to configure only a single URL in the Intranet Zone in Windows. That URL is https://autologon.microsoftazuread-sso.com and this can be rolled out as a registry preference via Group Policy. Before March 2018 there was a second URL that was needed in the intranet zone, but that is no longer required (see notes).
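As a sketch of that registry preference, the standard IE ZoneMap key can place the URL in the Local Intranet zone (zone value 1). The exact path below is an assumption based on the normal ZoneMap layout – verify it against your own Group Policy setup:

```reg
Windows Registry Editor Version 5.00

; Place https://autologon.microsoftazuread-sso.com in the Local Intranet zone (1)
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\microsoftazuread-sso.com\autologon]
"https"=dword:00000001
```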

This short blog post covers how to fix SSO when you do see a popup for this second URL, even though it is no longer required. The popup looks like:

image

It has OK and Cancel buttons on it as well, but the screengrab I made when I saw the issue was not brilliant, so I “fixed” the bottom of the image so it is approximately correct!

The URL in question is aadg.windows.net.nsatc.net. Adding this to the Local Intranet zone, even though it is not needed, does not fix the issue. The issue is caused because on Windows 10 (version 1703, and maybe others) someone has enabled Enhanced Protected Mode in Internet Explorer. Azure AD SSO does not work when Enhanced Protected Mode is enabled; this is not a setting that is enabled on client machines by default.

Enhanced Protected Mode provides additional protection against malicious websites by using 64-bit processes on 64-bit versions of Windows. For computers running at least Windows 8, Enhanced Protected Mode also limits the locations Internet Explorer can read from in the registry and the file system.

It is probable that Enhanced Protected Mode was enabled via Group Policy. It will have the value Isolation (or Isolation64Bit) set to PMEM at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main or HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\Main, or at the policy equivalents HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Internet Explorer\Main or HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\Internet Explorer\Main when set via GPO settings.

This issue is listed in the Azure AD SSO known issues page at https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-troubleshoot-sso. The reason why Enhanced Protected Mode does not work with Azure AD SSO is that whilst Enhanced Protected Mode is enabled, Internet Explorer has no access to corporate domain credentials.

Exchange Online Migration Batches–How Long Do They Exist For

Posted on 5 CommentsPosted in exchange, exchange online, Exchange Server, hybrid, microsoft, migration, Office 365

When you create a migration batch in Exchange Online, the default setting for a migration is to start the batch immediately and complete manually. So how long can you leave this batch before you need to complete it?

As you can see from the below screenshot, the migration batch here was created on Feb 19th, which was only yesterday as I write this blog.

image

The batch was created on the morning of the 19th Feb and set to manual start (rather than the default of automatic start), as I did not want to migrate lots of data during the business day; it was then started close to 5:30pm that evening. By 11:25pm the batch had completed its initial sync of all 28 mailboxes with no failures. There were other batches syncing at the same time, so this is not indicative of any expected or determined migration performance speeds.

So what happens next? In the background, a new mailbox move request was created for each mailbox in the batch, and each individual mailbox was synced to Exchange Online and associated with the synced Mail User object created in the cloud by the AADSync process. When each move reached 95% complete, the move was suspended, to be resumed around 24 hours later so that each mailbox is kept up to date once a day automatically.

If you leave the migration running but not completed, you will see from the migration batch status above that the batch will complete in 7,981 years (on 31st Dec 9999, one second before the next millennium bug hits). In reality, the migration batch will stop doing its daily incremental syncs after two months.

After two months of syncing to the cloud without being completed, Exchange Online assumes you are still no closer to migrating and stops keeping the on-premises mailbox and the cloud mailbox in sync. You can restart this two-month window by interacting with the migration batch before it expires, or, if syncing does stop, by just clicking the Resume icon, which restarts it for a further period of time.

The New Rights Management Service

Posted on 3 CommentsPosted in aadrm, active directory, certificates, cloud, compliance, dirsync, exchange, exchange online, https, hybrid, journal, journaling, mcm, mcsm, microsoft, Office 365, Outlook, pki, policy, rms, smarthost, transport, unified messaging, voicemail

This blog is the start of a series of articles I will write over the next few months on how to ensure that your data is encrypted and secured to only the people you want to access it, and only for the level of rights you want to give them.

The technology that we will look at to do this is Microsoft’s recently released Windows Azure Active Directory Rights Management product, also known as AADRM or Microsoft Rights Management, or “the new RMS”.

In this series of articles we will look at the following:

The items above will become links as each article is released – so check back, or leave a comment on this post and I will let you know when new content is added to this series.

What is “rights management”

Simply, this is the ability to ensure that your content is used only by whom you want it to be used by, and only in the ways you grant. It's known in various guises, the most common of which is Digital Rights Management (DRM), as applied to the music and films you have been downloading for years.

With the increase in sharing music and other MP3 content over the last ten-plus years, the recording companies and music sellers started to protect music. It did not go down well, and I would say this is mainly because the content had been bought, and so the owner wanted to do with it as they liked – yet even when what they liked was legal, they were prevented from doing it. I have music I bought that I cannot use because the music retailer is out of business or I tried to transfer it too many times. I now buy all my music DRM free.

But if the content is something I created and sold, rather than something I bought, I see it very differently. When the program was running, I was one of the instructors for the Microsoft Certified Master program; I wrote and delivered part of the Exchange Server training. Following the reuse of my and other people's content outside of the classroom, the content was rights protected – it could be read only by those I had taught. Those I taught think differently about this, but usually because of the hassle of getting a new copy of the content when it expires!

But this is what rights management is, and this series of articles will look at enabling Azure Active Directory Rights Management – a piece of Office 365 that you already have if you are an E3 or E4 subscriber, and that you can buy for £2/user/month if you have a lower level of subscription or none at all. It will allow you to protect the content that you create, so that it can be used only by those you want to read it (regardless of where you or they put it) and, if you want, can expire after a given time.

In this series we will look at enabling the service and connecting various technologies to it, from our smartphones to PCs to servers, and then distributing our protected content to those who need to see it. Those who receive it will be able to use the content for free; you only pay to create protected content. We will also look at protecting content automatically: for example, content that is classified in a given way by Windows Server, or emails that match certain conditions (for example, they contain credit cards or other personally identifiable information (PII) such as passport or tax IDs). And though I am not a SharePoint guru, we will look at protecting content downloaded from SharePoint document libraries.

Finally, we will look at users protecting their own content – either photographs they take on their phones of information they need to share (documents, i.e. using the phone's camera as a scanner) or photos of whiteboards in meetings where the contents of the board should not be shared too widely.

Stick around – it's a new technology and it's going to have a big impact on the way we share data, regardless of whether we share it with Dropbox and the like, or email, or whatever comes next.

Installing and Configuring AD RMS and Exchange Server

Posted on 4 CommentsPosted in 2007, 2008 R2, 2010, active directory, certificates, exchange, exchange online, microsoft, networking, Office 365, organization relationships, owa, rms, server administrator

Earlier this week at the Microsoft Exchange Conference (MEC 2012) I led a session titled Configuring Rights Management Server for Office 365 and Exchange On-Premises [E14.314]. This blog shows three videos covering installation, configuration and integration of RMS with Exchange 2010 and Office 365. For Exchange 2013, the steps are mostly identical.

Installing AD RMS

This video looks at the steps to install AD RMS. For the purposes of the demonstration, this is a single server lab deployment running Windows Server 2008 R2, Exchange Server 2010 (Mailbox, CAS and Hub roles) and is the domain controller for the domain. As it is a domain controller, a few of the install steps are slightly different (those that are to do with user accounts) and these changes are pointed out in the video, as the recommendation is to install AD RMS on its own server or set of servers behind an IP load balancer.

Configuring AD RMS for Exchange 2010

The second video looks at the configuration of AD RMS for use in Exchange. For the purposes of the demonstration, this is a single server lab deployment running Windows Server 2008 R2, Exchange Server 2010 (Mailbox, CAS and Hub roles) and is the domain controller for the domain. This video looks at the default ‘Do Not Forward’ restriction as well as creating new templates for use in Exchange Server (OWA and Transport Rules) and then publishing these templates so they can be used in Outlook and other Microsoft Office products.

 

Integrating AD RMS with Office 365

The third video looks at the steps needed to ensure that your Office 365 mailboxes can use the RMS server on premises. The steps include exporting and importing the Trusted Publishing Domain (the TPD) and then marking the templates as distributed (i.e. available for use). The video finishes with a demo of the templates in action.

Creating GeoDNS with Amazon Route 53 DNS

Posted on 3 CommentsPosted in 2013, cloud, exchange, GeoDNS, https, load balancer, mcm, microsoft, MX, networking, owa, smtp, transport

UPDATE: 13 Aug 2014 – Amazon Route 53 now does native GeoDNS within the product – see Amazon Route 53 GeoDNS Routing Policy

A new feature to Exchange 2013 is supported use of a single namespace for your global email infrastructure. For example mail.contoso.com rather than different ones for each region such as uk-mail.contoso.com; usa-mail.contoso.com and apac-mail.contoso.com.
GeoDNS means that you are given the IP address of a server that is in or close to the region that you are in. For example if you work in London and your mailbox is also in London then most of the time you will want to be connected to the London CAS servers as that gives you the best network response. So in Exchange 2010 you would use your local URL of uk-mail.contoso.com and if you used the others you would be told to use uk-mail.contoso.com. For GeoDNS support you use mail.contoso.com and as you are in the UK you get the IP address of the CAS array in London. When you travel to the US (occasionally) you would get the US CAS array IP address, but this CAS array is able to proxy your OWA, RPC/HTTP etc traffic to the UK mailbox servers.
The same is true for email delivery via SMTP. Email that comes from UK-sourced IP addresses is, on balance, statistically likely to be destined for a UK mailbox. So when a UK company looks up the MX record for contoso.com, it gets the UK CAS array and the email gets delivered to the CAS array in the same site as the target mailbox. If the email is for a user in a different region and it hits the UK CAS array, it is proxied to the other region seamlessly.
GeoDNS is a feature provided by some high-scale DNS providers, but not something Amazon Web Services (AWS) Route 53 provides – so how do I configure GeoDNS with the AWS Route 53 DNS service?
Quite easily, is the answer. Route 53 does not offer true GeoDNS, but it does offer latency-based DNS that directs you towards the closest AWS datacentre. If your datacentres are in regions similar to AWS's, then the DNS redirection that AWS offers is probably accurate.
To set it up, open your Route 53 DNS console, or move your DNS to AWS (it costs $0.50/month for a zone at time of writing, AWS Route 53 pricing here) and then create your global Exchange 2013 namespace record in DNS:

  1. Click Create Record Set and enter the name. In the below example I’m using geo.c7solutions.com as I don’t actually have a globally distributed email infrastructure!
  2. Select A – IPv4 or if you are doing IPv6 select AAAA.
  3. Set Alias to No and enter the IP address of one of your datacentres
  4. Select the AWS region that is closest to this Exchange server(s) and enter a unique description for the Set ID value.
  5. The entry will look something like this:
    image
  6. Save the Record Set and create additional entries for other regions. For the purposes of this blog I have created geo.c7solutions.com in four regions with the following IP addresses:
    Region          IP Address  Location
    us-east-1       1.2.3.4     Northern Virginia
    us-west-1       6.7.8.9     Northern California
    eu-west-1       2.3.4.5     Ireland
    ap-southeast-1  3.4.5.6     Singapore
    sa-east-1       4.5.6.7     Sao Paulo
    ap-southeast-2  5.6.7.8     Sydney
  7. The configuration in AWS for the remaining entries looks like the following:
    imageimageimage
  8. And also, once created, it appears like this:
    image
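Conceptually, the latency-based routing that Route 53 applies to these record sets can be sketched as follows. The latency figures are hypothetical, and Route 53 measures latency from its own edge network rather than from your client, but the selection principle is the same:

```python
def pick_record(record_sets, region_latency_ms):
    """Return the IP of the record set whose AWS region currently has the
    lowest measured latency to the resolving client. Illustrative only."""
    best = min(record_sets, key=lambda r: region_latency_ms[r["region"]])
    return best["ip"]

records = [
    {"region": "us-east-1", "ip": "1.2.3.4"},
    {"region": "eu-west-1", "ip": "2.3.4.5"},
]
# A client near London measures lower latency to eu-west-1, so it gets 2.3.4.5
print(pick_record(records, {"us-east-1": 95, "eu-west-1": 12}))
```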

In addition to this blog, I've left the record described above on my c7solutions.com DNS zone. So depending upon your location in the world, if you open a command prompt and ping geo.c7solutions.com, you should get back the IP address for the AWS region closest to you, and so get back an IP that represents an Exchange resource in your global region. Of course, the IPs I have used are not mine to use and probably will not respond to ping requests – but all you need to do is see that DNS returns whichever of the IPs above best matches the region that you are in.
I wrote this blog when in a hotel in Orlando and as you can see from the image below, it returns 1.2.3.4 which is the IP address associated with us-east-1.
image
But when I connected to a server in the UK and ran the same ping geo.c7solutions.com, I got the following, which shows GeoDNS working when equating GeoDNS to AWS latency-based DNS.
image
What do you get for your regions? Add comments and let us know where you are (approximately) and what region you got. If enough people respond from enough places, we can see whether AWS can serve as GeoDNS without massive cost.
[Updated 13 Nov 2012] Added Sydney (ap-southeast-2) and fake IP address of 5.6.7.8
[Updated 27 April 2013] Added Northern California (us-west-1) and fake IP of 6.7.8.9

How To Speed Up Exchange Server Transport Logging

Posted on 1 CommentPosted in 2007, 2010, 2013, exchange, mcm, microsoft, transport

In Exchange 2010 SP1 and later any writing to the transport log files for activity logging (not the transaction logging on the mail.que database) is cached in RAM and written to disk every five minutes.

In a lab environment you might be impacted by this, as you might have sent an email and want to check the logs for the diagnostic information they contain. The problem is you might need to wait up to five minutes for this information.

Exchange 2010

To reduce the memory cache time to 30 seconds, set the following two entries in the EdgeTransport.exe.config file (found in \Program Files\Microsoft\Exchange Server\v14\bin) within the appSettings section:

<add key="SmtpSendLogFlushInterval" value="0:00:30" />
<add key="SmtpRecvLogFlushInterval" value="0:00:30" />

The two values above control different log files. Each transport log file has a different setting, so it's possible to set Receive Connector protocol logging to a different value from Send Connector protocol logging if you want to. Once you make your changes to EdgeTransport.exe.config, you need to restart the Microsoft Exchange Transport service for the changes to be picked up.

Here is a list of the properties that I know about that can be changed:

  • SmtpSendLogFlushInterval – Timespan value on how often to write the Send Connector protocol logging log to disk
  • SmtpRecvLogFlushInterval – Timespan value on how often to write the Receive Connector protocol logging log to disk
  • ConnectivityLogFlushInterval – Timespan value on how often the Connectivity log is written to disk.

In addition to the above, which are all timespan values for how often to write to disk, if the memory buffer that contains the log entries fills up then it will be written to disk as well. The default memory buffers are 1MB. So on a very busy server you might find that the log writing is not every five minutes exactly but of a more “random” nature as the buffer is filled. The following settings control the size of the buffer for the above timespans:

  • SmtpSendLogBufferSize
  • SmtpRecvLogBufferSize
  • ConnectivityLogBufferSize

Exchange 2013 CU1 and Later

The process for this version of Exchange is similar, just different log files because of the different services in use.

In Exchange 2013 you have transport services for mailbox (submission and delivery), transport core and CAS (frontend transport). The config files for these services are:

  • EdgeTransport.exe.config (transport core)
  • MSExchangeFrontEndTransport.exe.config (Frontend Transport on the CAS role)
  • MSExchangeDelivery.exe.config (for mailbox delivery on the Mailbox role)
  • MSExchangeSubmission.exe.config (for mailbox submission on the Mailbox role)

You will find these config files in \Program Files\Microsoft\Exchange Server\v15\bin.

 

Highly Available Geo Redundancy with Outbound Send Connectors in Exchange 2003 and Later

Posted on 6 CommentsPosted in 2003, 2007, 2010, cloud, DNS, domain, door, exchange, exchange online, load balancer, loadbalancer, mcm, microsoft, MX, Office 365, smarthost

This is something I’ve been meaning to write down for a while. I wrote an answer for this question to LinkedIn about a week ago and I’ve just emailed a MCM Exchange consultant with this – so here we go…

If you configure a Send Connector (Exchange 2007 and 2010) or an Exchange 2003 SMTP Connector with multiple smarthosts to deliver to, Exchange will round-robin across them all equally. This gives high availability – if a smarthost is unavailable, Exchange will pick the next one and mail will get delivered – but it does not give redundancy across sites. If you add a smarthost in a remote site to the send connector, Exchange will use it in turn, equally.

So how can you get geographical redundancy with outbound smarthosts? Quite easily, it turns out, and it all uses a feature of Exchange that has been around for a while. But first, these important points:

  • This works for smarthost delivery and not MX (i.e. DNS) delivery.
  • This is only useful for companies with multiple sites, internet connections in these sites and smarthosts in those sites.
  • This is typically done on your internet send connectors, the ones using the * address space.

You do this by creating a fake domain in DNS, let's say smarthost.local, and then creating A records in this zone for each SMTP smarthost (i.e. mail.oxford.smarthost.local). Then create an MX record for your first site (oxford.smarthost.local MX 10 mail.oxford.smarthost.local). Repeat for each site, where oxford is the site name of the first site in this example.

Then you create second MX records at a lower priority in each site, using the A record of a smarthost in a different site (oxford.smarthost.local MX 20 mail.cambridge.smarthost.local).
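Put together, the records described above might look like this BIND-style zone sketch. The host names and RFC 5737 example addresses are mine, purely illustrative:

```
; illustrative smarthost.local zone – names and addresses are examples
mail.oxford.smarthost.local.     IN A     192.0.2.10
mail.cambridge.smarthost.local.  IN A     192.0.2.20

oxford.smarthost.local.     IN MX 10  mail.oxford.smarthost.local.
oxford.smarthost.local.     IN MX 20  mail.cambridge.smarthost.local.

cambridge.smarthost.local.  IN MX 10  mail.cambridge.smarthost.local.
cambridge.smarthost.local.  IN MX 20  mail.oxford.smarthost.local.
```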

Then add oxford.smarthost.local as the target smarthost in the send connector. Exchange looks up the address in DNS (as an MX record first, an A record second and an IP address last), so it will find the MX record, resolve the A records at the highest priority for the domain and then round-robin across these A records.

If you have more than one smarthost in a site, add more than one MX 10 record – one per smarthost. Exchange will round-robin across the 10s. When all the 10s are offline, Exchange will automatically route to mail.cambridge.smarthost.local (MX priority 20 for the oxford site) without needing to disable the connector and retry the queues.

If you used server names and not MX records, Exchange would round-robin amongst all entries, and so send email to Cambridge for delivery just as often. The MX option keeps mail in-site for delivery until it cannot, and then sends it automatically to the failover site.
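The failover behaviour described above can be sketched in Python. The host names are hypothetical and Exchange's actual implementation is internal to the transport service, but the priority-then-round-robin logic is the point:

```python
def pick_smarthosts(mx_records, is_up):
    """Select targets the way MX priority works: all reachable hosts at the
    lowest (best) priority value; fail over to the next priority only when
    every host above it is down. Illustrative sketch only."""
    for priority in sorted({p for p, _ in mx_records}):
        hosts = [h for p, h in mx_records if p == priority and is_up(h)]
        if hosts:
            return hosts  # Exchange round-robins across this list
    return []

oxford = [(10, "mail1.oxford"), (10, "mail2.oxford"), (20, "mail.cambridge")]
# All up: both local MX 10 hosts are used, Cambridge is ignored
print(pick_smarthosts(oxford, lambda h: True))
# Both Oxford hosts down: delivery fails over to the MX 20 host
print(pick_smarthosts(oxford, lambda h: h == "mail.cambridge"))
```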

Publishing ADFS Through ISA or TMG Server

Posted on 2 CommentsPosted in 2010, 2013, 64 bit, active directory, ADFS, ADFS 2.0, certificates, exchange, exchange online, https, isa, mcm, microsoft, Office 365, pki, tmg

To enable single sign-on in Office 365 and a variety of other applications you need to provide a federated authentication system. Microsoft's free server software for this is currently Active Directory Federation Services 2.0 (ADFS), which can be downloaded from Microsoft's website.

ADFS is installed on a server within your organisation, and a trust (utilising trusted digital certificates) is set up with your partners. If you want to authenticate to the partner system from within your environment, it is usual that your application connects to your ADFS server (as part of a bigger process that is better described here: http://blogs.msdn.com/b/plankytronixx/archive/2010/11/05/primer-federated-identity-in-a-nutshell.aspx). But if you are outside of your organisation, or the connection to ADFS is made by the partner rather than the application (and in Office 365 both of these take place), then you either need to install the ADFS Proxy or publish the ADFS server through a firewall.

The subject of this blog is how to do this via ISA Server or TMG Server. In addition to configuring a standard HTTPS publishing rule, you need to disable Link Translation and high-bit filtering on the HTTP filter to get it to work.

Here are the full steps to set up ADFS inside your organisation and have it published via ISA Server – TMG Server is, to all intents and purposes, the same; the UI just looks slightly different:

  1. New Web Site Publishing Rule. Provide a name.
  2. Select the Action (allow).
  3. Choose either a single website or load balancer or use ISA’s load balancing feature depending on the number of ADFS servers in your farm.
  4. Use SSL to connect:
    image
  5. Enter the Internal site name (which must be on the SSL certificate on the ADFS server and must be the same as the externally published name as well). Also enter the IP address of the server or configure the HOSTS files on the firewall to resolve this name as you do not want to loop back to the externally resolved name:
    image
  6. Enter /adfs/* as the path.
  7. Enter the ADFS published endpoint as the Public name (which will be subject or SAN on the certificate and will be the same certificate on the ADFS server and the ISA Server):
    image
  8. Select or create a suitable web listener. The properties of this will include listening on the external IP address that your ADFS namespace resolves to, over SSL only, using the certificate on your ADFS server (exported with private key and installed on ISA Server), no authentication.
  9. Allow the client to authenticate directly with the endpoint server:
    image
  10. All Users and then click Finish.
  11. Before you save your changes though, you need to make the following two changes
  12. Right-click the rule and select Configure HTTP:
    image
  13. Uncheck Block high bit characters and click OK.
  14. Double-click the rule to bring up its properties and change to the Link Translation tab. Uncheck Apply link translation to this rule:
    image
  15. Click OK and save your changes.

ADFS should now work through ISA or TMG assuming you have configured ADFS and your partner organisations correctly!

To test your ADFS service, connect to your published ADFS endpoint from outside of TMG and visit https://fqdn-for-adfs/adfs/ls/idpinitiatedsignon.aspx to get a login screen.

Creating Subject Alternative Name Certificates with Microsoft Certificate Server

Posted on Leave a commentPosted in 2007, certificates, exchange, iis, microsoft, pkcs, powershell, web

A new feature in digital certificates is the Subject Alternative Name property. This allows you to have a certificate for more than one URI (i.e. www.c7solutions.com and www.c7solutions.co.uk) in the same certificate. It also means that in web servers such as IIS you can bind this certificate to the site and use only one IP address.

A number of commercial companies now sell certificates with the Subject Alternative Name field set, but this article describes how to use the Exchange Server 2007 command line to create certificate requests for other web sites. These requests can then be uploaded to Microsoft Certificate Server (which does not support this property in its own web pages) to create certificates for web servers such as IIS (which also does not support this property in the requests that it makes).

The command that you need to run is via PowerShell, and specifically via the Microsoft Exchange Server 2007 extensions to PowerShell. So start up the Exchange Management Shell and enter the following (replacing the domain names as appropriate):

New-ExchangeCertificate -GenerateRequest:$true -Path c:\newCert.req -DomainName www.domain.com,sales.domain.com,support.domain.com -PrivateKeyExportable:$true -FriendlyName "My New Certificate" -IncludeAcceptedDomains:$false -Force:$true

The DomainName property is set to each URL that you want the certificate to be valid for, with the first value in the string being the value for the Subject field and all the values each being used in the Subject Alternative Name field.

Once you have executed the command above you will have a file with the name set in the Path property. This file can be opened in Notepad and used in Microsoft Certificate Services:

  1. Browse to your Microsoft Certificate Services URL and click Request a certificate
  2. Click advanced certificate request
  3. Click submit a certificate…
  4. Copy and paste the entire text of the certificate request from Notepad into the Saved Request field on this page and select Web Server as the Certificate Template, then click Submit. (With a default installation the Web Server template will not be present; it needs to be enabled by your Certificate Services administrator for your user account.)
  5. With the default installation of Certificate Services, the certificate will now be ready to download. Click Download certificate (or Download certificate chain if the end server does not trust your issuer) to save the certificate to the computer.
  6. Install the certificate on the same computer that you issued the request from (this is a very important step); you can then export the certificate and import it on your web server or firewalls.

To install the certificate, run the Import-ExchangeCertificate PowerShell command on the same computer that the request was issued from (this is very important: it must be on the same computer). This is a simpler command to run than the creation of the request above.

The syntax of this command is (where the filename is the name of the file downloaded above):

Import-ExchangeCertificate c:\newCert.cer

To export the certificate to your web server or firewall, open the local computer certificate store in the Microsoft Management Console: run mmc, add a snap-in, and choose Certificates > Computer account. You will find your certificates under the Personal store. You can right-click these certificates and export them (with the private key) to a .pfx file. This file can then be imported on the web server or firewall using an MMC with the Certificates (Computer account) snap-in loaded into it.