The New Rights Management Service


This blog post is the start of a series of articles I will write over the next few months on how to ensure that your data is encrypted and secured so that only the people you want can access it, and only with the level of rights you want to give them.

The technology that we will look at to do this is Microsoft’s recently released Windows Azure Active Directory Rights Management product, also known as AADRM or Microsoft Rights Management, or “the new RMS”.

In this series of articles we will look at the following:

The items above will be linked as each article is released – so check back, or leave a comment on this post and I will let you know when new content is added to this series.

What is “rights management”?

Simply put, rights management is the ability to ensure that your content is used only by the people you want to use it, and only in the ways you allow. It’s known in various guises, the most common being Digital Rights Management (DRM) as applied to the music and films you have been downloading for years.

With the increase in sharing of music and other MP3 content over the last ten or more years, the recording companies and music sellers started to protect their music. It did not go down well, mainly, I would say, because the content had been bought and so the owner wanted to do with it as they liked – even when what they wanted to do was legal, they were prevented from doing it. I have music I bought that I cannot use because the music retailer is out of business or I have transferred it too many times. I now buy all my music DRM free.

But if the content is something I created and sold, rather than something I bought, I see it very differently. When the program was running I was one of the instructors for the Microsoft Certified Master program, and I wrote and delivered part of the Exchange Server training. Following the reuse of my and other people’s content outside of the classroom, the content was rights protected – it could be read only by those I had taught. Those I taught think differently about this, but usually because of the hassle of getting a new copy of the content when it expires!

But this is what rights management is, and this series of articles will look at enabling Azure Active Directory Rights Management, a piece of Office 365 that you already have if you are an E3 or E4 subscriber, and that you can buy for £2/user/month if you have a lower level of subscription or none at all. It allows you to protect the content that you create so that it can be used only by those you want to read it (regardless of where you or they put it) and, if you want, so that it expires after a given time.

In this series we will look at enabling the service and connecting various technologies to it, from our smartphones to PCs to servers, and then distributing our protected content to those who need to see it. Those who receive it will be able to use the content for free; you only pay to create protected content. We will also look at protecting content automatically, for example content that is classified in a given way by Windows Server, or emails that match certain conditions (for example, they contain credit card numbers or other personally identifiable information (PII) such as passport or tax IDs), and, though I am not a SharePoint guru, we will look at protecting content downloaded from SharePoint document libraries.
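As a small taste of the automatic protection to come, a transport rule along the following lines can apply a rights management template to matching email. This is only a sketch using Exchange 2013 / Exchange Online syntax: the rule name is made up, and the “Do Not Forward” template and the “Credit Card Number” data classification must exist in your organisation.

# Sketch: rights-protect outbound email that appears to contain credit card numbers
New-TransportRule -Name "Protect credit card emails" `
    -SentToScope NotInOrganization `
    -MessageContainsDataClassifications @{Name="Credit Card Number"} `
    -ApplyRightsProtectionTemplate "Do Not Forward"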

Finally, we will look at users protecting their own content – whether that is photographs they take on their phones of information they need to share (documents, that is, using the phone’s camera as a scanner) or photos of whiteboards in meetings where the content on the board should not be shared too widely.

Stick around – it’s a new technology and it’s going to have a big impact on the way we share data, regardless of whether we share it via Dropbox or the like, via email, or via whatever comes next.

Installing and Configuring AD RMS and Exchange Server


Earlier this week at the Microsoft Exchange Conference (MEC 2012) I led a session titled Configuring Rights Management Server for Office 365 and Exchange On-Premises [E14.314]. This blog shows three videos covering installation, configuration and integration of RMS with Exchange 2010 and Office 365. For Exchange 2013, the steps are mostly identical.

Installing AD RMS

This video looks at the steps to install AD RMS. For the purposes of the demonstration, this is a single-server lab deployment running Windows Server 2008 R2, Exchange Server 2010 (Mailbox, CAS and Hub roles) and is also the domain controller for the domain. As it is a domain controller, a few of the install steps (those to do with user accounts) are slightly different, and these changes are pointed out in the video, as the recommendation is to install AD RMS on its own server, or set of servers, behind an IP load balancer.

Configuring AD RMS for Exchange 2010

The second video looks at the configuration of AD RMS for use in Exchange. For the purposes of the demonstration, this is a single server lab deployment running Windows Server 2008 R2, Exchange Server 2010 (Mailbox, CAS and Hub roles) and is the domain controller for the domain. This video looks at the default ‘Do Not Forward’ restriction as well as creating new templates for use in Exchange Server (OWA and Transport Rules) and then publishing these templates so they can be used in Outlook and other Microsoft Office products.

 

Integrating AD RMS with Office 365

The third video looks at the steps needed to ensure that your Office 365 mailboxes can use the RMS server on premises. The steps include exporting and importing the Trusted Publishing Domain (the TPD) and then marking the templates as distributed (i.e. available for use). The video finishes with a demo of the templates in action.
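For reference, the Exchange Online side of what is shown in the third video boils down to a handful of cmdlets run in a remote PowerShell session connected to Exchange Online. This is only a sketch: the file path, TPD name, template name and licensing URLs below are examples and should be replaced with the values from your own AD RMS cluster.

# Import the TPD exported from the on-premises AD RMS cluster (example names and URLs)
$tpd = [System.IO.File]::ReadAllBytes("C:\Export\contoso-tpd.xml")
Import-RMSTrustedPublishingDomain -FileData $tpd -Name "Contoso on-premises TPD" `
    -IntranetLicensingUrl "https://rms.contoso.local/_wmcs/licensing" `
    -ExtranetLicensingUrl "https://rms.contoso.com/_wmcs/licensing"

# Check the templates have arrived, then mark each one as distributed (available for use)
Get-RMSTemplate
Set-RMSTemplate -Identity "Contoso Confidential" -Type Distributed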

Creating GeoDNS with Amazon Route 53 DNS


UPDATE: 13 Aug 2014 – Amazon Route 53 now does native GeoDNS within the product – see Amazon Route 53 GeoDNS Routing Policy

A new feature of Exchange 2013 is supported use of a single namespace for your global email infrastructure – for example mail.contoso.com rather than a different name for each region, such as uk-mail.contoso.com, usa-mail.contoso.com and apac-mail.contoso.com.
GeoDNS means that you are given the IP address of a server that is in, or close to, the region you are in. For example, if you work in London and your mailbox is also in London, then most of the time you will want to connect to the London CAS servers, as that gives you the best network response. So in Exchange 2010 you would use your local URL of uk-mail.contoso.com, and if you used one of the others you would be told to use uk-mail.contoso.com instead. With GeoDNS support you use mail.contoso.com and, as you are in the UK, you get the IP address of the CAS array in London. When you travel to the US you would get the US CAS array IP address, but that CAS array is able to proxy your OWA, RPC/HTTP and other traffic back to the UK mailbox servers.
The same is true for email delivery via SMTP. Email that comes from UK-sourced IP addresses is, on balance, statistically likely to be destined for a UK mailbox. So when a UK company looks up the MX record for contoso.com it gets the UK CAS array, and the email is delivered to the CAS array in the same site as the target mailbox. If the email is for a user in a different region and it hits the UK CAS array, it is proxied to the other region seamlessly.
GeoDNS is a feature provided by some high-scale DNS providers, but not something the Amazon Web Services (AWS) Route 53 service provides – so how do I configure GeoDNS with the AWS Route 53 DNS service?
Quite easily, is the answer. Route 53 does not offer GeoDNS, but it does offer latency-based DNS that directs you towards the closest AWS datacentre. If your datacentres are in regions similar to AWS's, then the redirection that AWS offers is probably accurate enough.
To set it up, open your Route 53 DNS console, or move your DNS to AWS (a zone costs $0.50/month at the time of writing – see the AWS Route 53 pricing page), and then create your global Exchange 2013 namespace record in DNS:

  1. Click Create Record Set and enter the name. In the below example I’m using geo.c7solutions.com as I don’t actually have a globally distributed email infrastructure!
  2. Select A – IPv4 address or, if you are using IPv6, select AAAA.
  3. Set Alias to No and enter the IP address of one of your datacentres.
  4. Select the AWS region that is closest to this Exchange server(s) and enter a unique description for the Set ID value.
  5. The entry will look something like this:
    [screenshot: the completed record set]
  6. Save the Record Set and create additional entries for the other regions. For the purposes of this blog I have created geo.c7solutions.com in six regions with the following IP addresses:
    Set ID (AWS region)   IP address   Location
    us-east-1             1.2.3.4      Northern Virginia
    us-west-1             6.7.8.9      Northern California
    eu-west-1             2.3.4.5      Ireland
    ap-southeast-1        3.4.5.6      Singapore
    sa-east-1             4.5.6.7      Sao Paulo
    ap-southeast-2        5.6.7.8      Sydney
  7. The configuration in AWS for the remaining entries looks like the following:
    [screenshots: the record sets for the remaining regions]
  8. And once created, the full set of records appears like this:
    [screenshot: the list of geo.c7solutions.com record sets]

In addition to writing this blog, I have left the record described above on my c7solutions.com DNS zone. So, depending upon your location in the world, if you open a command prompt and ping geo.c7solutions.com you should get back the IP address for the AWS region closest to you, and therefore the IP that represents an Exchange resource in your part of the world. Of course, the IPs I have used are not mine to use and probably will not respond to ping requests – all you need to check is that DNS returns the IP from the table above that best matches the region you are in.
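If ping is blocked from where you are, you can query DNS directly instead, for example from PowerShell on Windows 8 or Server 2012 and later (nslookup works on older systems):

# Ask DNS which regional IP address you are given (no ICMP required)
Resolve-DnsName geo.c7solutions.com -Type A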
I wrote this blog in a hotel in Orlando and, as you can see from the image below, it returned 1.2.3.4, which is the IP address associated with us-east-1.
[screenshot: ping geo.c7solutions.com from Orlando returning 1.2.3.4]
But when I connected to a server in the UK and ran the same ping geo.c7solutions.com I got the following, which shows GeoDNS working, at least when equating GeoDNS to AWS latency-based DNS.
[screenshot: ping geo.c7solutions.com from a UK server]
What do you get for your region? Add a comment and let us know where you are (approximately) and which region's address you got. If enough people respond from enough places we can see whether AWS latency-based DNS can stand in for GeoDNS without massive cost.
[Updated 13 Nov 2012] Added Sydney (ap-southeast-2) and the fake IP address 5.6.7.8
[Updated 27 April 2013] Added Northern California (us-west-1) and fake IP of 6.7.8.9

How To Speed Up Exchange Server Transport Logging


In Exchange 2010 SP1 and later any writing to the transport log files for activity logging (not the transaction logging on the mail.que database) is cached in RAM and written to disk every five minutes.

In a lab environment you might be impacted by this, as you might have sent an email and want to check the logs for the diagnostic information they contain. The problem is that you might need to wait up to five minutes for this information to appear.

Exchange 2010

To reduce the memory cache time to 30 seconds, set the following two entries in the EdgeTransport.exe.config file (found in \Program Files\Microsoft\Exchange Server\v14\bin) within the <appSettings> section:

<add key="SmtpSendLogFlushInterval" value="0:00:30" />
<add key="SmtpRecvLogFlushInterval" value="0:00:30" />

The two values above control different log files. Each transport log has its own setting, so it is possible to set Receive Connector protocol logging to a different interval from Send Connector protocol logging if you want to. Once you have made your changes to EdgeTransport.exe.config you need to restart the Microsoft Exchange Transport service for them to be picked up.
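For example, from an elevated PowerShell prompt on the transport server:

# Restart the transport service so the new flush intervals are read from the config file
Restart-Service MSExchangeTransport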

Here is a list of the properties that I know about that can be changed:

  • SmtpSendLogFlushInterval – Timespan value on how often to write the Send Connector protocol logging log to disk
  • SmtpRecvLogFlushInterval – Timespan value on how often to write the Receive Connector protocol logging log to disk
  • ConnectivityLogFlushInterval – Timespan value on how often the Connectivity log is written to disk.

In addition to the above, which are all timespan values controlling how often the logs are written to disk, the log entries are also written to disk if the memory buffer that holds them fills up. The default memory buffer is 1MB, so on a very busy server you might find that log writing does not happen exactly every five minutes but at more “random” intervals as the buffer fills. The following settings control the size of the buffer for each of the above logs:

  • SmtpSendLogBufferSize
  • SmtpRecvLogBufferSize
  • ConnectivityLogBufferSize

Exchange 2013 CU1 and Later

The process for this version of Exchange is similar, just different log files because of the different services in use.

In Exchange 2013 you have transport services for mailbox (submission and delivery), transport core and CAS (frontend transport). The config files for these services are:

  • EdgeTransport.exe.config (transport core)
  • MSExchangeFrontEndTransport.exe.config (Frontend Transport on the CAS role)
  • MSExchangeDelivery.exe.config (for mailbox delivery on the Mailbox role)
  • MSExchangeSubmission.exe.config (for mailbox submission on the Mailbox role)

You will find these config files in \Program Files\Microsoft\Exchange Server\v15\bin.
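As a sketch, and assuming the same appSettings keys apply as in the Exchange 2010 example above, you would edit the relevant config file and then restart the matching service:

# Default Exchange 2013 installation path; adjust if you installed elsewhere
$bin = "C:\Program Files\Microsoft\Exchange Server\v15\bin"
notepad "$bin\EdgeTransport.exe.config"        # add the <add key="..." /> lines shown earlier

# Restart the service that owns the config file you changed
Restart-Service MSExchangeTransport            # EdgeTransport.exe.config (transport core)
Restart-Service MSExchangeFrontEndTransport    # MSExchangeFrontEndTransport.exe.config (CAS)
Restart-Service MSExchangeDelivery             # MSExchangeDelivery.exe.config (mailbox delivery)
Restart-Service MSExchangeSubmission           # MSExchangeSubmission.exe.config (mailbox submission)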

 

Highly Available Geo Redundancy with Outbound Send Connectors in Exchange 2003 and Later


This is something I’ve been meaning to write down for a while. I wrote an answer to this question on LinkedIn about a week ago and I’ve just emailed an MCM Exchange consultant with the same, so here we go…

If you configure a Send Connector (Exchange 2007 and 2010) or an Exchange 2003 SMTP Connector with multiple smarthosts to deliver to, Exchange will round-robin across them all equally. This gives high availability: if a smarthost is unavailable, Exchange will pick the next one and mail will get delivered. But it does not give redundancy across sites, because if you add a smarthost in a remote site to the send connector, Exchange will use it in turn just as often.

So how can you get geographical redundancy with outbound smarthosts? Quite easily, it turns out, and it all uses a feature of Exchange that has been around for a while. But first, these important points:

  • This works for smarthost delivery and not MX (i.e. DNS) delivery.
  • This is only useful for companies with multiple sites, internet connections in these sites and smarthosts in those sites.
  • This is typically done on your internet send connectors, the ones using the * address space.

You do this by creating a fake domain in DNS, let's say smarthost.local, and then creating A records in this zone for each SMTP smarthost (e.g. mail.oxford.smarthost.local). Then create an MX record for your first site (oxford.smarthost.local MX 10 mail.oxford.smarthost.local). Repeat for each site, where oxford is the name of the first site in this example.

Then create a second, lower-priority MX record for each site, but point it at the A record of a smarthost in a different site (oxford.smarthost.local MX 20 mail.cambridge.smarthost.local).

Then add oxford.smarthost.local as the target smarthost on the send connector. Exchange looks up the address in DNS as MX first, A record second and IP address last, so it will find the MX record, resolve the A records for the highest-priority entries for that domain and then round-robin across those A records.

If you have more than one smarthost in a site, add more than one MX 10 record, one per smarthost, and Exchange will round-robin across the 10s. When all the 10s are offline, Exchange will automatically route to mail.cambridge.smarthost.local (MX priority 20 for the oxford site) without you needing to disable the connector and retry the queues.

If you used server names rather than MX records, Exchange would round-robin amongst all the entries and so send email to Cambridge for delivery just as often as to Oxford. The MX option keeps mail in the local site for delivery until it cannot, and only then sends it automatically to the failover site.
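To pull the pieces together, here is a sketch of the Oxford side of the configuration. The zone name, host names, IP addresses and connector name are all made up, and the DnsServer cmdlets assume your DNS runs on Windows Server 2012 or later (on older servers use the DNS console or dnscmd to create the same records):

# Create the fake zone and an A record for each site's smarthost
Add-DnsServerPrimaryZone -Name "smarthost.local" -ReplicationScope "Forest"
Add-DnsServerResourceRecordA -ZoneName "smarthost.local" -Name "mail.oxford" -IPv4Address "192.168.10.25"
Add-DnsServerResourceRecordA -ZoneName "smarthost.local" -Name "mail.cambridge" -IPv4Address "192.168.20.25"

# Oxford prefers its own smarthost (MX 10) and fails over to Cambridge (MX 20)
Add-DnsServerResourceRecordMX -ZoneName "smarthost.local" -Name "oxford" -MailExchange "mail.oxford.smarthost.local" -Preference 10
Add-DnsServerResourceRecordMX -ZoneName "smarthost.local" -Name "oxford" -MailExchange "mail.cambridge.smarthost.local" -Preference 20

# Point the Oxford internet send connector at the MX name rather than at individual servers
Set-SendConnector -Identity "Oxford to Internet" -SmartHosts "oxford.smarthost.local"

Repeat the pair of MX records for each site, swapping the preferred and failover smarthosts around.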

Publishing ADFS Through ISA or TMG Server


To enable single sign-on in Office 365 and a variety of other applications you need to provide a federated authentication system. Microsoft’s free server software for this is currently Active Directory Federation Services 2.0 (ADFS), which can be downloaded from Microsoft’s website.

ADFS is installed on a server within your organisation, and a trust (utilising trusted digital certificates) is set up with your partners. If you want to authenticate to the partner system from within your environment, it is usual that your application connects to your ADFS server (as part of a bigger process that is better described here: http://blogs.msdn.com/b/plankytronixx/archive/2010/11/05/primer-federated-identity-in-a-nutshell.aspx). But if you are outside of your organisation, or the connection to ADFS is made by the partner rather than by the application (and in Office 365 both of these take place), then you either need to install the ADFS Proxy or publish the ADFS server through a firewall.

The subject of this blog is how to do the latter via ISA Server or TMG Server. In addition to configuring a standard HTTPS publishing rule, you need to disable Link Translation and high-bit filtering on the HTTP filter to get it to work.

Here are the full steps to set up ADFS inside your organisation and have it published via ISA Server – TMG Server is to all intents and purposes the same; the UI just looks slightly different:

  1. New Web Site Publishing Rule. Provide a name.
  2. Select the Action (allow).
  3. Choose either a single website or load balancer or use ISA’s load balancing feature depending on the number of ADFS servers in your farm.
  4. Use SSL to connect:
    [screenshot]
  5. Enter the Internal site name (which must be on the SSL certificate on the ADFS server and must be the same as the externally published name as well). Also enter the IP address of the server, or configure the HOSTS file on the firewall to resolve this name, as you do not want to loop back to the externally resolved name:
    [screenshot]
  6. Enter /adfs/* as the path.
  7. Enter the ADFS published endpoint as the Public name (this will be the subject or a SAN on the certificate, and the same certificate will be used on both the ADFS server and the ISA Server):
    [screenshot]
  8. Select or create a suitable web listener. The properties of this will include listening on the external IP address that your ADFS namespace resolves to, over SSL only, using the certificate on your ADFS server (exported with private key and installed on ISA Server), no authentication.
  9. Allow the client to authenticate directly with the endpoint server:
    [screenshot]
  10. Select All Users and then click Finish.
  11. Before you save your changes, though, you need to make the following two changes:
  12. Right-click the rule and select Configure HTTP:
    [screenshot]
  13. Uncheck Block high bit characters and click OK.
  14. Double-click the rule to bring up its properties and change to the Link Translation tab. Uncheck Apply link translation to this rule:
    [screenshot]
  15. Click OK and save your changes.

ADFS should now work through ISA or TMG assuming you have configured ADFS and your partner organisations correctly!

To test your ADFS service, connect to your published ADFS endpoint from outside of TMG and visit https://fqdn-for-adfs/adfs/ls/idpinitiatedsignon.aspx to get a login screen.
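The same check can be scripted from an external machine; a minimal sketch with PowerShell 3.0 or later (replace the FQDN with your own ADFS name):

# A 200 response confirms the publishing rule is passing traffic to the sign-in page
Invoke-WebRequest "https://fqdn-for-adfs/adfs/ls/idpinitiatedsignon.aspx" -UseBasicParsing |
    Select-Object StatusCode, StatusDescription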

Creating Subject Alternative Name Certificates with Microsoft Certificate Server


A new feature in digital certificates is the Subject Alternative Name property. This allows you to have a certificate for more than one URI (e.g. www.c7solutions.com and www.c7solutions.co.uk) in the same certificate. It also means that in web servers such as IIS you can bind this certificate to the site and use up only one IP address.

A number of commercial companies now sell certificates with the Subject Alternative Name field set, but this article describes how to use the Exchange Server 2007 command line to create certificate requests for other web sites, which can then be submitted to Microsoft Certificate Services (which does not support this property in its own web pages) to create certificates for web servers such as IIS (which also does not support this property in the requests that it makes).

The command that you need to run is a PowerShell command, specifically one of the Microsoft Exchange Server 2007 extensions to PowerShell. So start the Exchange Management Shell and enter the following (replacing the domain names as indicated):

New-ExchangeCertificate -GenerateRequest:$true -Path c:\newCert.req -DomainName www.domain.com,sales.domain.com,support.domain.com -PrivateKeyExportable:$true -FriendlyName "My New Certificate" -IncludeAcceptedDomains:$false -Force:$true

The DomainName property is set to each URL that you want the certificate to be valid for, with the first value in the list becoming the Subject field and all of the values being used in the Subject Alternative Name field.

Once you have executed the command above you will have a file with the name set in the Path property. This file can be opened in Notepad and used in Microsoft Certificate Services:

  1. Browse to your Microsoft Certificate Services URL and click Request a certificate
  2. Click advanced certificate request
  3. Click submit a certificate…
  4. Copy and paste the entire text of the certificate request from Notepad into the Saved Request field on this page and select Web Server as the Certificate Template. Click Submit.
     Note: with a default installation the Web Server template will not be present; it needs to be enabled for your user account by your Certificate Services administrator.
  5. With the default installation of Certificate Services, the certificate will now be ready to download. Click Download certificate (or Download certificate chain if the end server does not trust your issuer) to save your certificate to the computer.
  6. Install the certificate on the same computer that you issued the request from (this is a very important step); you can then export the certificate and import it on your web server or firewalls.

To install the certificate, run the Import-ExchangeCertificate PowerShell command on the same computer as the request was issued from (this is very important; it must be on the same computer). This is a simpler command to run than the one that created the request above.

The syntax of this command is (where the filename is the name of the file downloaded above):

Import-ExchangeCertificate c:\newCert.cer

To export the certificate to your web server or firewall, you need to open the local computer certificate store in the Microsoft Management Console: run mmc, add a snap-in and choose Certificates, Computer account. You will find your certificates under the Personal store. You can right-click these certificates and export them (with the private key) to a .pfx file. This file can then be imported on the web server or firewall using an MMC console with the same Certificates (Computer account) snap-in loaded into it.
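If you prefer to stay in the Exchange Management Shell, the export can also be scripted. This is a sketch using the Exchange 2007 version of Export-ExchangeCertificate (later versions of Exchange changed this cmdlet's parameters), and the thumbprint placeholder comes from the Get-ExchangeCertificate output:

# Find the thumbprint of the certificate you imported above
Get-ExchangeCertificate -DomainName "www.domain.com"

# Export it, with its private key, to a password-protected .pfx file
Export-ExchangeCertificate -Thumbprint "<thumbprint>" -BinaryEncoded:$true -Path "C:\newCert.pfx" -Password (Get-Credential).Password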