Improving Password Security In the Cloud and On-Premises


Passwords, as users typically create them, are well known to be insecure. Users don't like "complex" passwords such as p9Y8Li!uk%al, so if they are forced to create one due to a policy in, say, Active Directory, or because their password has expired and they need a new one, they will go for something that is easy to remember and still matches the "complexity" rules required by their IT department. This means users pick passwords such as WorldCup2018! and Summ3r!!. Both exceed 8 characters, both have mixed case, and both have symbols and numbers, so both pass as complex passwords. Except they are not: they are easy to guess. For example, you can tell the date of this blog post from my suggestions! Users tend not to pick genuinely random passwords, and malicious actors know this. So the current password guidance from NIST and the UK National Cyber Security Centre is to have non-expiring, unique, non-simple passwords that are changed on compromise. Non-expiring allows the user to remember the password if they need to (though a password manager is better), and being unique means it is not on any existing password-guess list.

So how can we ensure that users choose such passwords? One way is end-user training, but another, a just-released feature in the Microsoft Cloud, is to block common passwords and password lists. This feature is called Azure AD Password Protection. With the password management settings in Azure AD, cloud accounts have been blocked from common passwords for a while (passwords that Microsoft sees being used to attempt non-owner access on accounts), but with Azure AD Password Protection you can extend this with block lists and apply it to password changes that happen on domain controllers.

So how does all this work, and what sort of changes can I expect with my passwords?


Note: all of the below is what I currently know Microsoft to do. It is based on information made public in June 2018 and is subject to change as Microsoft's security graph and machine learning determine that change is needed to keep accounts secure.

Password Scoring

First, each password is scored when it is changed, or when set by an administrator or a user on first use. A password scoring fewer than 5 points is not allowed. For example:

Spring2018 = [spring] + [2018] = 2 points

Spring2018asdfj236$ = [spring] + [2018] + [asdf] + [j] + [2] + [3] + [6] + [$] = 8 points

This shows that common tokens (like Spring and 2018) can be allowed as part of a password that also contains material that is hard to guess. Here the asdf pattern is straight off a QWERTY keyboard, so the whole run scores only one point. In addition to the score needing to reach 5, other complexity rules, such as required character classes and minimum length, still apply if you enforce those options.
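Microsoft has not published the full algorithm, but a minimal PowerShell sketch of the scoring model described above could look like the following. The greedy tokenizer and the sample token list are my assumptions, not Microsoft's implementation:

function Get-PasswordScore {
    param([string]$Password, [string[]]$BannedTokens)
    $score = 0
    $remaining = $Password.ToLowerInvariant()
    while ($remaining.Length -gt 0) {
        # Find the longest banned token that the remaining text starts with, if any
        $match = $BannedTokens |
            Where-Object { $remaining.StartsWith($_) } |
            Sort-Object Length -Descending |
            Select-Object -First 1
        if ($match) {
            $score += 1                                   # a whole banned token scores 1 point
            $remaining = $remaining.Substring($match.Length)
        }
        else {
            $score += 1                                   # any other single character scores 1 point
            $remaining = $remaining.Substring(1)
        }
    }
    return $score
}

# 2 points: [spring] + [2018]
Get-PasswordScore -Password 'Spring2018' -BannedTokens 'spring','2018','asdf'
# 8 points: [spring] + [2018] + [asdf] + [j] + [2] + [3] + [6] + [$]
Get-PasswordScore -Password 'Spring2018asdfj236$' -BannedTokens 'spring','2018','asdf'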

Common and Blocked Lists

Microsoft provides the common password list, and it changes as Microsoft sees different passwords being used in account attacks. You provide the custom blocked list. This can contain up to 1000 words, and Microsoft applies fuzzy logic to both your list and the common list. For example, we added all our office locations to our custom list.


This means that both capetown and C@p3t0wn would be blocked: the @ is normalised to a, the 3 to e and the 0 to o. So the more complex-looking password is really not complex at all, as it only contains common character replacements.
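Again as a sketch only: the substitution table below is an assumption (Microsoft's real normalisation table is not published), but it shows the kind of check being applied:

function Test-BannedPassword {
    param([string]$Password, [string[]]$BannedWords)
    # Assumed substitution table; the real one may differ
    $map = @{ '@'='a'; '4'='a'; '3'='e'; '1'='l'; '0'='o'; '$'='s'; '5'='s'; '7'='t' }
    # Lower-case the password and replace common substitutions character by character
    $normalised = -join ($Password.ToLowerInvariant().ToCharArray() | ForEach-Object {
        $c = [string]$_
        if ($map.ContainsKey($c)) { $map[$c] } else { $c }
    })
    foreach ($word in $BannedWords) {
        if ($normalised.Contains($word)) { return $true }  # matched a banned word after normalisation
    }
    return $false
}

Test-BannedPassword -Password 'C@p3t0wn' -BannedWords 'capetown'   # True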

In terms of licences, the banned password list that Microsoft provides is licence-free for all cloud accounts. You need Azure AD Basic if you want to add your own custom banned password list. For accounts in Windows Server Active Directory you need the Azure AD Premium (P1) licence for all synced users; this allows the banned password list, including your custom words, to be downloaded so that Active Directory can apply it on-premises to all users, even those not synced to Azure AD.

Hybrid Password Change Events Protected

The checks on whether a password change should be stopped are included in hybrid scenarios using self-service password reset, password hash sync, and pass-through authentication, though changes to the custom banned password list may take a few hours to reach the list that is downloaded to your domain controllers.

On-Premises Changes

An agent is installed on the domain controllers. Password changes are passed to the agent, which checks the password against the common list and your blocked list, using the most recently downloaded copy from Azure AD. The password itself is never passed up to Azure AD; the list is downloaded from Azure AD by the Azure AD password protection proxy and processed locally on the domain controller. The list is downloaded once per hour per AD site to pick up the latest changes. If your Azure AD password protection proxy fails, the domain controllers just use the last list that was successfully downloaded, so password changes are still processed even if you lose internet access.

Note that the Azure AD password protection proxy is not the same as the Pass-Through Authentication agent or the Azure AD Connect Health agent, though it can be installed on the same servers as either of them. Provisioning new servers for the proxy download service is not required.

The Azure AD password protection proxy wakes up hourly, checks SYSVOL for the timestamp of the most recently downloaded copy, and decides whether a new copy is needed. Therefore, if your inter-site replication completes within the hour, proxy agents in other sites might not need to download the list, as the latest copy is already available via DFSR between the domain controllers.

The Azure AD password protection proxy does not need to run on a domain controller, so your domain controllers do not need internet access to obtain the latest list. The Azure AD password protection proxy downloads the list and places it in SYSVOL so that DFSR replication can take it to the domain controller that needs it.

Getting Started

To set a custom password block list, visit the Azure AD page in the Azure Portal, click Security and then click Authentication Methods (in the Manage section). Enter your banned passwords; lower case will do, as Microsoft applies the fuzzy logic described above to match your list against similar values. Your list should include words common to your organization, such as locations, office address keywords, and functions and features of what the company does.

For Active Directory, download the proxy software (from the Microsoft Download Center) to one or more servers (for fault tolerance). These will download the latest list and place it in SYSVOL so that the domain controllers can process it. Two servers in two sites should ensure one of them is always able to download the latest copy of the list.

The documentation is found at https://aka.ms/deploypasswordprotection.

Microsoft suggests that any deployment starts in audit mode. Audit mode is the default initial setting: passwords that would be rejected in enforce mode are still accepted, but each one is recorded in the event log as a password that would have failed had enforcement been turned on. Once the proxy server(s) and DC agents are fully deployed in audit mode, monitor regularly to determine what impact enforcement would have on users and the environment.

Audit mode allows you to update in-house policy, extend training programmes, offer password advice and see what users are doing that would be considered weak. Once you are happy that users can respond to a password change error caused by a password being too weak, move to enforce mode. Enforce mode should kick in within a few hours of you changing the setting in the cloud.

Installing the Proxy and DC Agent

Domain controllers need to be running Windows Server 2012 or later, though there are no requirements for specific domain or forest functional levels. Visit the Microsoft Download Center to download both the DC agent and the password protection proxy. The proxy is installed and then configured on a few servers in the forest (two at most during the preview). The agent is installed on all domain controllers, as a password change can be enacted on any of them.

To install the proxy, run AzureADPasswordProtectionProxy.msi on a server that has internet connectivity to Azure AD. This could be your domain controller, but the domain controller would then need internet access.

To configure the proxy, run Import-Module AzureADPasswordProtection followed by Register-AzureADPasswordProtectionProxy, and then, once the proxy is registered, register the forest as well with Register-AzureADPasswordProtectionForest, all from an administrative PowerShell session (Enterprise Admin and Global Admin roles are required). Registering the server adds information to the Active Directory domain partition about the server and port the proxy can be found at, and registering the forest stores information about the service in the configuration partition.

# Load the module installed by the proxy MSI
Import-Module AzureADPasswordProtection
# Check the proxy service is installed and running
Get-Service AzureADPasswordProtectionProxy | FL
# Capture Azure AD global administrator credentials
$tenantAdminCreds = Get-Credential
# Register this proxy server with Azure AD
Register-AzureADPasswordProtectionProxy -AzureCredential $tenantAdminCreds
# Register the forest (once per forest)
Register-AzureADPasswordProtectionForest -AzureCredential $tenantAdminCreds

If you get an error that reads "InvalidOperation: (:) [Register-AzureADPasswordProtectionProxy], AggregateException", this is because your Azure AD requires MFA for device join. The proxy does not support MFA for device join during the preview, so you need to disable this setting in Azure AD for the period covering the time you make these changes; you can turn it back on again (as on is recommended) once you have finished configuring your proxy servers. The setting is found at:

  • Navigate to Azure Active Directory -> Devices -> Device settings
  • Set “Require Multi-Factor Auth to join devices” to “No”
  • Then once the registration of your two proxies is complete, reverse this change and turn it back on again.

Once at least one proxy is installed, you can install the agent on your domain controllers. This is AzureADPasswordProtectionDCAgent.msi, and once it is installed the server requires a restart to take its role in the password change process.

The PowerShell cmdlet Get-AzureADPasswordProtectionDCAgent reports the state of the DC agent and the date/time stamp of the last downloaded password block list that the agent knows about.


Changes in the Forest

Once the domain controller with the agent installed is rebooted, it comes back online, finds the server(s) running the proxy application and asks for the latest password block list to be downloaded. The proxy downloads this to C:\Windows\SYSVOL\domain\Policies\{4A9AB66B-4365-4C2A-996C-58ED9927332D}. Within this location I see the AzureADPasswordProtection folder, containing three subfolders called Azure (empty), Configuration (initially empty) and PasswordPolicies (also initially empty).

In the Configuration partition, at CN=Azure AD Password Protection,CN=Services,CN=Configuration,DC=domain,DC=com, some settings about the service are persisted. If the domain controller has the agent installed, then CN=AzureADConnectPasswordPolicyDCAgent,CN=<DomainControllerName>,OU=Domain Controllers,DC=domain,DC=com is created.

And then, on each domain controller, in Event Viewer you get Application and Services Logs > Microsoft > AzureADPasswordProxy, with DCAgent on the DCs and ProxyService on the proxy servers. The following event IDs have been seen:

  • DCAgent/30016: Forest registration has not happened yet
  • DCAgent/30001: The password for the specified user was accepted because an Azure password policy is not available yet.
  • DCAgent/30009: [audit mode] The password reset was allowed, but would have been rejected as the password used was on Microsoft’s block list
  • DCAgent/10014: Password compliant with the current Azure password policy
  • DCAgent/10015: The reset password for the specified user was validated as compliant with the current Azure password policy.
  • DCAgent/10017: Password reset rejected because it did not comply with the current Azure password policy
  • DCAgent/30016: The service is now enforcing the following Azure password policy along with the date/time stamp of the policy file it is using. This entry will state if audit or enforce mode is in play as well.

 

  • ProxyService/30000: A proxy registration message was sent to Azure and a successful response was received.
  • ProxyService/30001: A new proxy certificate credential was successfully persisted.
  • ProxyService/20000: A new password block list was downloaded.

Further event IDs and their meanings can be found at https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-password-ban-bad-on-premises-troubleshoot.

Once the proxy starts to work, the empty folders above begin to populate. In my case, in the preview, it took over 2 hours from installing the proxy and running the documented installation and configuration PowerShell cmdlets listed above before the proxy downloaded anything. The Configuration folder listed above contained some .cfge files, and the PasswordPolicies folder contains what I assume is the downloaded password block list, compressed, as it is only 12 KB. This is the .ppe file, and one is downloaded per hour. Older versions of this download are deleted automatically by the proxy service.
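A quick way to see what the proxy has downloaded, and when, is to list the files in that SYSVOL location. A sketch, assuming the default path shown above:

# List the downloaded policy and configuration files with their timestamps
Get-ChildItem "C:\Windows\SYSVOL\domain\Policies\{4A9AB66B-4365-4C2A-996C-58ED9927332D}\AzureADPasswordProtection" -Recurse -File |
    Select-Object FullName, Length, LastWriteTime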

Audit Mode

Using the above event IDs you can track users who have changed to weak passwords (in that they are on your or Microsoft's banned password list) when the user or an admin sets (or resets) the password. Audit mode does not stop the user choosing a password that would "normally have been rejected", but it records different event IDs depending upon the activity and which block list it would have failed against. Event ID DCAgent/30009, for example, has the message "The reset password for the specified user would normally have been rejected because it matches at least one of the tokens present in the Microsoft global banned password list of the current Azure password policy. The current Azure password policy is configured for audit-only mode so the password was accepted.". That message is a failure against Microsoft's list on a password set or reset; a password change by the user that fails against the Microsoft list records event ID DCAgent/30010 instead.

Using these two event IDs, along with DCAgent/30007 (logged when a password set fails your custom list) and DCAgent/30008 (a password change that fails your custom list), allows you to audit the impact of the new policy before you enforce it.
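As a sketch, these audit events can be pulled from a domain controller with Get-WinEvent. The log name here is my assumption; verify it on your DCs with Get-WinEvent -ListLog *AzureADPassword*:

# Audit-mode "would have been rejected" events (set/change, Microsoft/custom list)
$auditIds = 30007, 30008, 30009, 30010
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-AzureADPasswordProtection-DCAgent/Admin'   # assumed log name
    Id      = $auditIds
} | Select-Object TimeCreated, Id, Message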

Enforce Mode

This mode ensures that password set or change events cannot use passwords that fail the list policies. It is enabled in the password policy in Azure AD.


Once this is enabled it takes a few hours to be picked up by the proxy servers and then for the agent to start rejecting banned passwords.

When a user changes their password in enforce mode and the password is rejected, they get the standard password-rejection error dialog (the exact graphic differs between Windows versions). An administrator resetting a password to a blocked value sees the equivalent error in Active Directory Users and Computers.


This is no different from the old error you get when password complexity, length or history requirements are not met. Therefore there is no indication to the user that the password they chose might be banned, rather than rejected for the usual reasons. So although a password policy with a banned list is an excellent step forward, there needs to be help desk and end-user awareness and communication (even just a simple notification), as the user cannot tell from the client error they get. Maybe Microsoft has plans to update the client error?

Password changes that fail once enforce mode is enabled record event IDs such as DCAgent/30002, DCAgent/30003, DCAgent/30004 and DCAgent/30005, depending upon which password list the failure happened against and whether it was a password set or a change. For example, when I used the password Oxford123 (as "oxford" is in my custom banned password list), event ID 30003 reported "The reset password for the specified user was rejected because it matched at least one of the tokens present in the per-tenant banned password list of the current Azure password policy". As mentioned above, the sequence 123 following the banned word is not enough to bring the score to the required 5 points, so the password change is rejected.

On the other hand, 0xf0rdEng1and! was allowed: England was not on my banned list, and so although my new password contained a banned word, there were enough other components to make it score well. Based on the scoring described above, where 5 or more points are required, this password scores [oxford] + [E] + [n] + [g] + [1] + [a] + [n] + [d] + [!], a total of 9 points, and 9 >= 5, so the password is accepted.

Finally, when testing users, other password policies, such as the earliest date the password can next be changed and the "user cannot change password" property, take precedence over the banned password list. For example, if you have a "cannot change password for 5 days" setting and you set the user's password as an administrator, that will work or fail based on the password you enter; but if you then change the password as the user within that period, it will fail because five days have not gone by, not because the user picked a guessable password.

Azure AD Single Sign-On Basic Auth Popup


When configuring Azure AD SSO as part of Pass-Through Authentication (PTA) or with Password Hash Authentication (PHA), you now (since March 2018) need to configure only a single URL in the Intranet zone in Windows. That URL is https://autologon.microsoftazuread-sso.com, and it can be rolled out as a registry preference via Group Policy. Before March 2018 a second URL was needed in the Intranet zone, but that is no longer required (see notes).
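For reference, a per-user sketch of what that Group Policy preference amounts to in the registry; the site-to-zone value of 1 maps the URL into the Local Intranet zone:

$base = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains'
# Create the domain/subdomain keys for autologon.microsoftazuread-sso.com
New-Item -Path "$base\microsoftazuread-sso.com\autologon" -Force | Out-Null
# Map the https scheme for this host into zone 1 (Local Intranet)
Set-ItemProperty -Path "$base\microsoftazuread-sso.com\autologon" -Name 'https' -Type DWord -Value 1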

This short blog post covers how to fix SSO when you do see a popup for that second URL, even though it is no longer required.



The URL in the popup is aadg.windows.net.nsatc.net. Adding this to the Local Intranet zone, even though it is no longer needed, does not fix the issue. The issue is caused because someone has enabled Enhanced Protected Mode, in this case on Windows 10 version 1703 (and maybe other versions). Azure AD SSO does not work when Enhanced Protected Mode is enabled, and this is not a setting that is enabled on client machines by default.

Enhanced Protected Mode provides additional protection against malicious websites by using 64-bit processes on 64-bit versions of Windows. For computers running at least Windows 8, Enhanced Protected Mode also limits the locations Internet Explorer can read from in the registry and the file system.

It is probable that Enhanced Protected Mode is enabled via Group Policy. It will have the value Isolation (or Isolation64Bit) set to PMEM at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main or HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\Main, or at the policy equivalents HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Internet Explorer\Main or HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\Internet Explorer\Main when set via GPO.
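A sketch to check all four locations on a client (a value of PMEM means Enhanced Protected Mode is enabled):

$paths = @(
    'HKLM:\SOFTWARE\Microsoft\Internet Explorer\Main'
    'HKCU:\SOFTWARE\Microsoft\Internet Explorer\Main'
    'HKLM:\SOFTWARE\Policies\Microsoft\Internet Explorer\Main'
    'HKCU:\SOFTWARE\Policies\Microsoft\Internet Explorer\Main'
)
foreach ($path in $paths) {
    foreach ($name in 'Isolation','Isolation64Bit') {
        # Read the value if present; ignore paths/values that do not exist
        $value = (Get-ItemProperty -Path $path -Name $name -ErrorAction SilentlyContinue).$name
        if ($value) { '{0}\{1} = {2}' -f $path, $name, $value }   # PMEM = EPM enabled
    }
}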

This issue is listed in the Azure AD SSO known issues page at https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-troubleshoot-sso. The reason why Enhanced Protected Mode does not work with Azure AD SSO is that whilst Enhanced Protected Mode is enabled, Internet Explorer has no access to corporate domain credentials.

Exchange Online Migration Batches–How Long Do They Exist For


When you create a migration batch in Exchange Online, the default setting for a migration is to start the batch immediately and complete manually. So how long can you leave this batch before you need to complete it?

In my example, the migration batch was created on Feb 19th, which was only yesterday as I write this blog.


The batch was created on the morning of the 19th Feb and set to manual start (rather than the default of automatic start, as I did not want to migrate lots of data during the business day), and it was then started close to 5:30pm that evening. By 11:25pm the batch had completed its initial sync of all 28 mailboxes with no failures. Other batches were syncing at the same time, so this is not indicative of any expected or determined migration performance.

So what happens next? In the background a mailbox move request is created for each mailbox in the batch, and each individual mailbox is synced to Exchange Online and associated with the synced mail user object created in the cloud by the AADSync process. When each move reaches 95% complete, it is suspended. It is then resumed around 24 hours later, so that each mailbox is automatically kept up to date once a day.
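You can see this state in the underlying move requests. A sketch, assuming a connected Exchange Online PowerShell session; StatusDetail typically shows AutoSuspended for moves parked at 95%:

# Show each move request with its suspension state and progress
Get-MoveRequest | Get-MoveRequestStatistics |
    Select-Object DisplayName, StatusDetail, PercentComplete, LastUpdateTimestamp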

If you leave the migration running but not completed, you will see from the migration batch status that the batch will complete in 7,981 years (on the 31st Dec 9999, one second before the next millennium bug hits). In practice, though, the migration batch will stop doing its daily incremental syncs after two months.

After two months of syncing to the cloud without being completed, Exchange Online assumes you are still no closer to migrating and stops keeping the on-premises mailbox and the cloud mailbox in sync. You can restart the clock by interacting with the migration batch before this time or, if syncing does stop, by clicking the Resume icon, which restarts it for a further period.

The New Rights Management Service


This blog is the start of a series of articles I will write over the next few months on how to ensure that your data is encrypted and secured to only the people you want to access it, and only for the level of rights you want to give them.

The technology that we will look at to do this is Microsoft’s recently released Windows Azure Active Directory Rights Management product, also known as AADRM or Microsoft Rights Management, or “the new RMS”.

In this series of articles we will look at the following:

The items above will get lit up as each article is released, so check back, or leave a comment on this post and I will let you know when new content is added to this series.

What is “rights management”?

Simply, it is the ability to ensure that your content is used only by whom you want it to be used by, and only in the ways you grant. It's known in various guises, the most common being Digital Rights Management (DRM), as applied to the music and films you have been downloading for years.

With the increase in sharing music and other MP3 content over the last ten-plus years, recording companies and music sellers started to protect music. It did not go down well, mainly, I would say, because the content had been bought, and so the owner wanted to do with it as they liked; even when what they wanted was legal, they were prevented from doing it. I have music I bought that I cannot use because the music retailer is out of business or I tried to transfer it too many times. I now buy all my music DRM-free.

But if the content is something I created and sold, rather than something I bought, I see it very differently. When the programme was running I was one of the instructors for the Microsoft Certified Master programme, and I wrote and delivered part of the Exchange Server training. Following the reuse of my and other people's content outside the classroom, the content was rights-protected so that it could be read only by those I had taught. Those I taught may think differently about this, usually because of the hassle of getting a new copy of the content when it expires!

But this is what rights management is, and this series of articles will look at enabling Azure Active Directory Rights Management, a piece of Office 365 that you already have if you are an E3 or E4 subscriber, and which you can buy for £2/user/month if you have a lower level of subscription or none at all. It will allow you to protect the content that you create so that it can be used only by those you want to read it (regardless of where you or they put it) and, if you want, so that it expires after a given time.

In this series we will look at enabling the service and connecting various technologies to it, from our smartphones to PCs to servers, and then distributing our protected content to those who need to see it. Those who receive it will be able to use the content for free; you only pay to create protected content. We will also look at protecting content automatically, for example content that is classified in a given way by Windows Server, or emails that match certain conditions (for example, those containing credit cards or other personally identifiable information (PII) such as passport or tax IDs), and though I am not a SharePoint guru, we will look at protecting content downloaded from SharePoint document libraries.

Finally, we will look at users protecting their own content: the photographs they take on their phones of information they need to share (documents, i.e. using the phone's camera as a scanner), or photos of whiteboards in meetings where the contents of the board should not be shared too widely.

Stick around: it's a new technology and it's going to have a big impact on the way we share data, whether we share it with Dropbox or the like, or email, or whatever comes next.

Installing and Configuring AD RMS and Exchange Server


Earlier this week at the Microsoft Exchange Conference (MEC 2012) I led a session titled Configuring Rights Management Server for Office 365 and Exchange On-Premises [E14.314]. This blog shows three videos covering installation, configuration and integration of RMS with Exchange 2010 and Office 365. For Exchange 2013, the steps are mostly identical.

Installing AD RMS

This video looks at the steps to install AD RMS. For the purposes of the demonstration, this is a single-server lab deployment running Windows Server 2008 R2 and Exchange Server 2010 (Mailbox, CAS and Hub roles), and it is the domain controller for the domain. As it is a domain controller, a few of the install steps are slightly different (those to do with user accounts) and these changes are pointed out in the video, as the recommendation is to install AD RMS on its own server, or set of servers, behind an IP load balancer.

Configuring AD RMS for Exchange 2010

The second video looks at the configuration of AD RMS for use in Exchange. For the purposes of the demonstration, this is a single server lab deployment running Windows Server 2008 R2, Exchange Server 2010 (Mailbox, CAS and Hub roles) and is the domain controller for the domain. This video looks at the default ‘Do Not Forward’ restriction as well as creating new templates for use in Exchange Server (OWA and Transport Rules) and then publishing these templates so they can be used in Outlook and other Microsoft Office products.

 

Integrating AD RMS with Office 365

The third video looks at the steps needed to ensure that your Office 365 mailboxes can use the RMS server on premises. The steps include exporting and importing the Trusted Publishing Domain (the TPD) and then marking the templates as distributed (i.e. available for use). The video finishes with a demo of the templates in action.

Creating GeoDNS with Amazon Route 53 DNS


UPDATE: 13 Aug 2014 – Amazon Route 53 now does native GeoDNS within the product – see Amazon Route 53 GeoDNS Routing Policy

A new feature in Exchange 2013 is supported use of a single namespace for your global email infrastructure, for example mail.contoso.com rather than a different name for each region, such as uk-mail.contoso.com, usa-mail.contoso.com and apac-mail.contoso.com.

GeoDNS means that you are given the IP address of a server that is in, or close to, the region you are in. For example, if you work in London and your mailbox is also in London, then most of the time you will want to be connected to the London CAS servers, as that gives you the best network response. In Exchange 2010 you would use your local URL of uk-mail.contoso.com, and if you used one of the others you would be told to use uk-mail.contoso.com. With GeoDNS support you use mail.contoso.com and, as you are in the UK, you get the IP address of the CAS array in London. When you travel to the US (occasionally) you get the US CAS array IP address, but that CAS array is able to proxy your OWA, RPC/HTTP and other traffic to the UK mailbox servers.

The same is true for email delivery via SMTP. Email that comes from UK-sourced IP addresses is, on balance, statistically likely to be destined for a UK mailbox. So when a UK company looks up the MX record for contoso.com, it gets the UK CAS array, and the email is delivered to the CAS array in the same site as the target mailbox. If the email is for a user in a different region and it hits the UK CAS array, it is proxied to the other region seamlessly.

GeoDNS is a feature provided by some high-scale DNS providers, but not something Amazon Web Services (AWS) Route 53 provides. So how do I configure GeoDNS with the AWS Route 53 DNS service?

Quite easily, it turns out. Route 53 does not offer GeoDNS, but it does offer latency-based DNS that directs you towards the closest AWS datacentre. If your datacentres are in regions similar to AWS's, then the DNS redirection that AWS offers is probably accurate.
To set it up, open your Route 53 DNS console, or move your DNS to AWS (it costs $0.50/month for a zone at time of writing, AWS Route 53 pricing here) and then create your global Exchange 2013 namespace record in DNS:

  1. Click Create Record Set and enter the name. In the example below I'm using geo.c7solutions.com, as I don't actually have a globally distributed email infrastructure!
  2. Select A – IPv4, or if you are doing IPv6 select AAAA.
  3. Set Alias to No and enter the IP address of one of your datacentres.
  4. Select the AWS region that is closest to this Exchange server (or servers) and enter a unique description for the Set ID value.
  5. Save the Record Set and create additional entries for the other regions. For the purposes of this blog I have created geo.c7solutions.com in the following regions with the following (fake) IP addresses:

     Set ID (AWS region)   IP address   Location
     us-east-1             1.2.3.4      Northern Virginia
     us-west-1             6.7.8.9      Northern California
     eu-west-1             2.3.4.5      Ireland
     ap-southeast-1        3.4.5.6      Singapore
     ap-southeast-2        5.6.7.8      Sydney
     sa-east-1             4.5.6.7      Sao Paulo
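If you script your DNS rather than use the console, the same latency-based record can be created via the Route 53 API, for example with aws route53 change-resource-record-sets --hosted-zone-id <your zone ID> --change-batch file://geo-record.json. A sketch of the change batch for the us-east-1 entry (one change per region; the name and IP are the illustrative values from the table above):

{
  "Comment": "Latency-based record for geo.c7solutions.com (us-east-1 entry)",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "geo.c7solutions.com",
        "Type": "A",
        "SetIdentifier": "us-east-1",
        "Region": "us-east-1",
        "TTL": 300,
        "ResourceRecords": [ { "Value": "1.2.3.4" } ]
      }
    }
  ]
}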

In addition to this blog, I've left the record described above on my c7solutions.com DNS zone. So, depending upon your location in the world, if you open a command prompt and ping geo.c7solutions.com you should get back the IP address for the AWS region closest to you, and so get back an IP that represents an Exchange resource in your region. Of course, the IPs I have used are not mine to use and will probably not respond to ping requests; all you need to check is that DNS returns the IP above that best matches the region you are in.
I wrote this blog from a hotel in Orlando, and the ping returned 1.2.3.4, the IP address associated with us-east-1. But when I connected to a server in the UK and ran the same ping of geo.c7solutions.com, I got back the eu-west-1 (Ireland) address, which shows GeoDNS working, when equating GeoDNS to AWS latency-based DNS.
What do you get for your region? Add a comment and let us know where you are (approximately) and which region you got. If enough people respond from enough places, we can see whether AWS can do GeoDNS without massive cost.
[Updated 13 Nov 2012] Added Sydney (ap-southeast-2) and fake IP address of 5.6.7.8
[Updated 27 April 2013] Added Northern California (us-west-1) and fake IP of 6.7.8.9

How To Speed Up Exchange Server Transport Logging


In Exchange 2010 SP1 and later any writing to the transport log files for activity logging (not the transaction logging on the mail.que database) is cached in RAM and written to disk every five minutes.

In a lab environment you might be impacted by this: you might have sent an email and want to check the logs for the diagnostic information they contain, but you may need to wait up to five minutes for that information to be written.

Exchange 2010

To reduce the memory cache time to 30 seconds, set the following two entries in the EdgeTransport.exe.config file (found in \Program Files\Microsoft\Exchange Server\v14\bin) within the appSettings section:

<add key="SmtpSendLogFlushInterval" value="0:00:30" />
<add key="SmtpRecvLogFlushInterval" value="0:00:30" />

The two values above control different log files. Each transport log file has a different setting, so it is possible to set Receive Connector protocol logging to a different value from Send Connector protocol logging if you want to. Once you make your changes to EdgeTransport.exe.config, you need to restart the Microsoft Exchange Transport service for the changes to be picked up.

Here is a list of the properties that I know about that can be changed:

  • SmtpSendLogFlushInterval – Timespan value on how often to write the Send Connector protocol logging log to disk
  • SmtpRecvLogFlushInterval – Timespan value on how often to write the Receive Connector protocol logging log to disk
  • ConnectivityLogFlushInterval – Timespan value on how often the Connectivity log is written to disk.

In addition to the above timespan values, if the memory buffer that contains the log entries fills up, it is written to disk as well. The default buffers are 1 MB, so on a very busy server you might find that the log writing is not exactly every five minutes but more "random" in nature, as the buffer fills. The following settings control the size of the buffer for each of the corresponding logs:

  • SmtpSendLogBufferSize
  • SmtpRecvLogBufferSize
  • ConnectivityLogBufferSize
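Putting it all together, a sketch of the appSettings section with both sets of values. The flush-interval keys and format are from above; treating the buffer sizes as byte counts is my assumption, so test before relying on it:

<appSettings>
  <!-- Flush the activity logs every 30 seconds rather than every 5 minutes -->
  <add key="SmtpSendLogFlushInterval" value="0:00:30" />
  <add key="SmtpRecvLogFlushInterval" value="0:00:30" />
  <add key="ConnectivityLogFlushInterval" value="0:00:30" />
  <!-- Smaller buffers flush sooner on busy servers (assumed byte counts; default 1 MB) -->
  <add key="SmtpSendLogBufferSize" value="262144" />
  <add key="SmtpRecvLogBufferSize" value="262144" />
  <add key="ConnectivityLogBufferSize" value="262144" />
</appSettings>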

Exchange 2013 CU1 and Later

The process for this version of Exchange is similar, just different log files because of the different services in use.

In Exchange 2013 you have transport services for mailbox (submission and delivery), transport core and CAS (frontend transport). The config files for these services are:

  • EdgeTransport.exe.config (transport core)
  • MSExchangeFrontEndTransport.exe.config (Frontend Transport on the CAS role)
  • MSExchangeDelivery.exe.config (for mailbox delivery on the Mailbox role)
  • MSExchangeSubmission.exe.config (for mailbox submission on the Mailbox role)

You will find these config files in \Program Files\Microsoft\Exchange Server\v15\bin.

 

Highly Available Geo Redundancy with Outbound Send Connectors in Exchange 2003 and Later


This is something I’ve been meaning to write down for a while. I wrote an answer to this question on LinkedIn about a week ago, and I’ve just emailed an MCM Exchange consultant with the same advice, so here we go…

If you configure a Send Connector (Exchange 2007 and 2010) or an Exchange 2003 SMTP Connector with multiple smarthosts to deliver to, Exchange will round-robin across them all equally. This gives high availability: if a smarthost is unavailable, Exchange picks the next one and mail gets delivered. But it does not give redundancy across sites: if you add a smarthost in a remote site to the send connector, Exchange will use it in turn, equally.

So how can you get geographical redundancy with outbound smarthosts? Quite easily, it turns out, using a feature of Exchange that has been around for a while. But first, these important points:

  • This works for smarthost delivery and not MX (i.e. DNS) delivery.
  • This is only useful for companies with multiple sites, internet connections in these sites and smarthosts in those sites.
  • This is typically done on your internet send connectors, the ones using the * address space.

You do this by creating a fake domain in DNS, let's say smarthost.local, and then creating A records in this zone for each SMTP smarthost (i.e. mail.oxford.smarthost.local). Then create an MX record for your first site (oxford.smarthost.local MX 10 mail.oxford.smarthost.local). Repeat for each site, where oxford is the name of the first site in this example.

Then you create second MX records, lower priority, in any site but use the A record of a smarthost in a different site (oxford.smarthost.local MX 20 mail.cambridge.smarthost.local).

Then add oxford.smarthost.local as the target smarthost on the send connector. Exchange looks up the address in DNS (MX first, A record second, IP address last), so it will find the MX record, resolve the A records for the highest-priority entries for the domain, and then round-robin across those A records.

If you have more than one smarthost in a site, add more than one MX 10 record, one per smarthost; Exchange will round-robin across the 10s. When all the 10s are offline, Exchange automatically routes to mail.cambridge.smarthost.local (MX priority 20 for the oxford site) without you needing to disable the connector and retry the queues.
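For illustration, a sketch of the smarthost.local zone in standard zone-file syntax for the two-site example, using RFC 5737 documentation addresses in place of real smarthost IPs (however you host the zone, the records amount to this):

; A records, one per smarthost
mail.oxford.smarthost.local.      IN A     192.0.2.10
mail.cambridge.smarthost.local.   IN A     192.0.2.20
; Oxford site: prefer the local smarthost, fail over to Cambridge
oxford.smarthost.local.           IN MX 10 mail.oxford.smarthost.local.
oxford.smarthost.local.           IN MX 20 mail.cambridge.smarthost.local.
; Cambridge site: prefer the local smarthost, fail over to Oxford
cambridge.smarthost.local.        IN MX 10 mail.cambridge.smarthost.local.
cambridge.smarthost.local.        IN MX 20 mail.oxford.smarthost.local.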

If you used server names and not MXs, Exchange would round-robin among all the entries and so send email to Cambridge for delivery just as often. The MX option keeps mail in-site for delivery while it can, and only sends it to the failover site when it cannot.

Publishing ADFS Through ISA or TMG Server


To enable single sign-on in Office 365 and a variety of other applications you need to provide a federated authentication system. Microsoft's free server software for this is currently Active Directory Federation Services 2.0 (ADFS), which is downloaded from Microsoft's website.

ADFS is installed on a server within your organisation, and a trust (utilising trusted digital certificates) is set up with your partners. If you want to authenticate to the partner system from within your environment, your application usually connects to your ADFS server (as part of a bigger process that is better described here: http://blogs.msdn.com/b/plankytronixx/archive/2010/11/05/primer-federated-identity-in-a-nutshell.aspx). But if you are outside your organisation, or the connection to ADFS is made by the partner rather than by the application (and in Office 365 both of these take place), then you either need to install the ADFS Proxy or publish the ADFS server through a firewall.

The subject of this blog is how to do the latter via ISA Server or TMG Server. In addition to configuring a standard HTTPS publishing rule, you need to disable Link Translation and high-bit filtering on the HTTP filter to get it to work.

Here are the full steps to set up ADFS inside your organisation and publish it via ISA Server (TMG Server is to all intents and purposes the same; the UI just looks slightly different):

  1. Create a New Web Site Publishing Rule and provide a name.
  2. Select the Action (Allow).
  3. Choose either a single website or load balancer, or use ISA's load-balancing feature, depending on the number of ADFS servers in your farm.
  4. Choose Use SSL to connect to the published server.
  5. Enter the Internal site name (which must be on the SSL certificate on the ADFS server and must be the same as the externally published name). Also enter the IP address of the server, or configure the HOSTS file on the firewall to resolve this name, as you do not want to loop back to the externally resolved name.
  6. Enter /adfs/* as the path.
  7. Enter the ADFS published endpoint as the Public name (this will be the subject or a SAN on the certificate, and it is the same certificate on the ADFS server and the ISA Server).
  8. Select or create a suitable web listener. The properties of this will include listening on the external IP address that your ADFS namespace resolves to, over SSL only, using the certificate from your ADFS server (exported with its private key and installed on the ISA Server), with no authentication.
  9. Allow the client to authenticate directly with the endpoint server.
  10. Choose All Users and then click Finish.
  11. Before you save your changes, though, you need to make the following two changes.
  12. Right-click the rule and select Configure HTTP.
  13. Uncheck Block high bit characters and click OK.
  14. Double-click the rule to bring up its properties and change to the Link Translation tab. Uncheck Apply link translation to this rule.
  15. Click OK and save your changes.

ADFS should now work through ISA or TMG assuming you have configured ADFS and your partner organisations correctly!

To test your ADFS service, connect to your ADFS published endpoint from outside of TMG and visit https://fqdn-for-adfs/adfs/ls/idpinitiatedsignon.aspx to get a login screen.
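A scripted version of the same check, assuming PowerShell 3.0 or later on the external test client (expect a 200 status code if publishing is working):

# Request the sign-on page through the firewall and report the HTTP status
Invoke-WebRequest -Uri 'https://fqdn-for-adfs/adfs/ls/idpinitiatedsignon.aspx' -UseBasicParsing |
    Select-Object StatusCode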

Creating Subject Alternative Name Certificates with Microsoft Certificate Server


A new feature in digital certificates is the Subject Alternative Name property. This allows you to have a certificate for more than one URI (e.g. www.c7solutions.com and www.c7solutions.co.uk) in the same certificate. It also means that in web servers such as IIS you can bind this certificate to the site and use up only one IP address.

A number of commercial companies now sell certificates with the Subject Alternative Name field set, but this article describes how to use the Exchange Server 2007 command line to create certificate requests for other web sites. These requests can be submitted to Microsoft Certificate Server (which does not support this property in its own web pages) to create certificates for web servers such as IIS (which also does not support this property in the requests it makes).

The command that you need to run is via PowerShell, specifically via the Microsoft Exchange Server 2007 extensions to PowerShell. So start up the Exchange Management Shell and enter the following (replacing the domain names as appropriate):

New-ExchangeCertificate -GenerateRequest:$true -Path c:\newCert.req -DomainName www.domain.com,sales.domain.com,support.domain.com -PrivateKeyExportable:$true -FriendlyName "My New Certificate" -IncludeAcceptedDomains:$false -Force:$true

The DomainName property is set to each URL that you want the certificate to be valid for, with the first value in the string being the value for the Subject field and all the values each being used in the Subject Alternative Name field.

Once you have executed the command above you will have a file with the name set in the Path property. This file can be opened in Notepad and used in Microsoft Certificate Services:

  1. Browse to your Microsoft Certificate Services URL and click Request a certificate.
  2. Click advanced certificate request.
  3. Click submit a certificate…
  4. Copy and paste the entire text of the certificate request from Notepad into the Saved Request field on this page and select Web Server as the Certificate Template. Click Submit. Note that with a default installation the Web Server template value will not be present; it needs to be enabled for your user account by your Certificate Services administrator.
  5. With the default installation of Certificate Services, the certificate will now be ready to download. Click Download certificate (or Download certificate chain if the end server does not trust your issuer) to save the certificate to the computer.
  6. Install the certificate onto the same computer that you issued the request from (this is a very important step); you can then export the certificate and import it on your web server or firewalls.

To install the certificate, run the Import-ExchangeCertificate PowerShell command on the same computer as the request was issued from (this is very important: it must be on the same computer). This is a simpler command to run than the request-creation command above.

The syntax of this command is (where the filename is the name of the file downloaded above):

Import-ExchangeCertificate c:\newCert.cer

To export the certificate to your web server or firewall, open the local computer certificate store in the Microsoft Management Console: run mmc, add a snap-in and choose Certificates, then Computer account. You will find your certificate under the Personal store. You can right-click the certificate and export it (with the private key) to a .pfx file. This file can then be imported on the web server or firewall using an MMC with the Certificates (Computer account) snap-in loaded into it.