Exchange Web Services (EWS) and 501 Error


As is common with a lot of what I write on this blog, this post is based on noting down the answers to things I could not find online. For this issue I did find something online, by Michael Van “Hybrid”, but finding it was the challenge.

So rather than detailing the issue and its cause (you can read that on Michael’s blog), this post covers the steps to troubleshoot the issue.

So first, the issue. When starting a migration test from an Exchange 2010 mailbox via an Exchange 2013 hybrid server (running the Mailbox and CAS roles) behind a Kemp load balancer (running 7.0-16 – the latest release at the time of writing, recently upgraded from version 5 through 6 to 7), I got the following error:

[screenshot of the migration error dialog]

The server name will be different (thanks Michael for the screenshot). In my case this was my client’s UK datacentre. My client’s Hong Kong datacentre was behind a Kemp load balancer as well but only running Exchange 2010, and the New York datacentre had an F5 load balancer. Moves from HK worked, but UK and NY failed for different reasons.

The issue shown above is not easy to solve, as the migration dialog tells you nothing. In my case it was also reporting the wrong server name: it should have been returning the external EWS URL from Autodiscover for the mailbox I was trying to move, but instead it was returning the Outlook Anywhere external URL from the New York site (the UK is proxied via NY for Outlook Anywhere connections). For moves to the cloud, we had added the external EWS URL to each site directly, so moves would go direct and not via the only site that offered internet-connected email.

So troubleshooting started with exrca.com – the Microsoft Remote Connectivity Analyzer. Autodiscover worked most of the time in the UK, but the Synchronization, Notification, Availability, and Automatic Replies tests (which exercise EWS) always failed after about six seconds.

Autodiscover returned the following error:

A Web exception occurred because an HTTP 400 – BadRequest response was received from Unknown.
HTTP Response Headers:
Connection: close
Content-Length: 87
Content-Type: text/html
Date: Tue, 24 Jun 2014 09:03:40 GMT

Elapsed Time: 108 ms.

And EWS, when AutoDiscover was returning data correctly, was as follows:

Creating a temporary folder to perform synchronization tests.

Failed to create temporary folder for performing tests.

Additional Details

Exception details:
Message: The request failed. The remote server returned an error: (501) Not Implemented.
Type: Microsoft.Exchange.WebServices.Data.ServiceRequestException
Stack trace:
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.ValidateAndEmitRequest(IEwsHttpWebRequest& request)
at Microsoft.Exchange.WebServices.Data.ExchangeService.InternalFindFolders(IEnumerable`1 parentFolderIds, SearchFilter searchFilter, FolderView view, ServiceErrorHandling errorHandlingMode)
at Microsoft.Exchange.WebServices.Data.ExchangeService.FindFolders(FolderId parentFolderId, SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.WebServices.Data.Folder.FindFolders(SearchFilter searchFilter, FolderView view)
at Microsoft.Exchange.Tools.ExRca.Tests.GetOrCreateSyncFolderTest.PerformTestReally()
Exception details:
Message: The remote server returned an error: (501) Not Implemented.
Type: System.Net.WebException
Stack trace:
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.EwsHttpWebRequest.Microsoft.Exchange.WebServices.Data.IEwsHttpWebRequest.GetResponse()
at Microsoft.Exchange.WebServices.Data.ServiceRequestBase.GetEwsHttpWebResponse(IEwsHttpWebRequest request)

Elapsed Time: 6249 ms.

What was interesting here was the 501 error, and that it always took approximately six seconds to fail.

Looking in the IIS logs on the 2010 servers that hold the UK mailboxes, there were no 501 errors logged, and the same was true of the EWS logs. So where was the 501 coming from? I decided to bypass Exchange 2013 for the exrca.com test (easy to do, as my system was not yet live) and so, in Kemp, pointed the EWS SubVDir directly at a specific Exchange 2010 server. Everything worked. So I decided it was an Exchange 2013 issue – except that I have lab environments built the same way (without Kemp) and it all works fine there. So I searched for “Kemp EWS 501”, and that was the bingo keyword combination. Searching for “EWS 501” or “Exchange EWS 501” got nothing at all.

With my environment back to Kemp > 2013 > 2010, I looked at Michael’s suggestions. The first was to run Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer servername.domain.com. I changed this slightly, as I was not convinced that I was connecting to the correct endpoint: the migration dialog reported the wrong server name, and the exrca tests do not tell you what endpoint they connect to. So I tried Test-MigrationServerAvailability -ExchangeRemoteMove -Autodiscover -EmailAddress user-on-premises@domain.com instead.
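
For reference, here are the two forms of the cmdlet side by side. This is a sketch using the placeholder names from above; depending on where you run them, you may also need to supply on-premises credentials via the -Credentials parameter:

# Test a named MRS proxy endpoint directly, bypassing AutoDiscover
Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer servername.domain.com

# Let AutoDiscover find the endpoint for a given on-premises mailbox
Test-MigrationServerAvailability -ExchangeRemoteMove -Autodiscover -EmailAddress user-on-premises@domain.com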

As AutoDiscover was intermittently failing, the Autodiscover form of the cmdlet sometimes reported the following:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : AutoDiscover failed with a configuration error: The migration service failed to detect the
                     migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
                     or go back to the first step and retry using the Autodiscover service. Consider using the
                     Exchange Remote Connectivity Analyzer (https://testexchangeconnectivity.com) to diagnose the
                     connectivity issues.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.AutoDiscoverFailedConfigurationErrorException:
                     AutoDiscover failed with a configuration error: The migration service failed to detect the
                     migration endpoint using the Autodiscover service. Please enter the migration endpoint settings
                     or go back to the first step and retry using the Autodiscover service. Consider using the
                     Exchange Remote Connectivity Analyzer (https://testexchangeconnectivity.com) to diagnose the
                     connectivity issues.
                     at Microsoft.Exchange.Migration.DataAccessLayer.MigrationEndpointBase.InitializeFromAutoDiscover(SmtpAddress emailAddress, PSCredential credentials)
                     at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessExchangeRemoteMoveAutoDiscover()
IsValid            : True
Identity           :
ObjectState        : New

And when AutoDiscover was working (the failures were random), I would get this:

RunspaceId         : a711bdd3-b6a1-4fb8-96b8-f669239ea534
Result             : Failed
Message            : The ExchangeRemote endpoint settings could not be determined from the autodiscover response. No
                     MRSProxy was found running at 'outlook.domain.com'.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : internal error:Microsoft.Exchange.Migration.MigrationRemoteEndpointSettingsCouldNotBeAutodiscoveredException:
                     The ExchangeRemote endpoint settings could not be determined from the autodiscover
                     response. No MRSProxy was found running at 'outlook.domain.com'. --->
                     Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
                     server 'outlook.domain.com' could not be completed. --->
                     Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
                     'https://outlook.domain.com/EWS/mrsproxy.svc' failed. Error details: The HTTP request is
                     unauthorized with client authentication scheme 'Negotiate'. The authentication header received
                     from the server was 'Basic Realm="outlook.domain.com"'. --> The remote server returned an
                     error: (401) Unauthorized.. --->
                     Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The HTTP request is
                     unauthorized with client authentication scheme 'Negotiate'. The authentication header received
                     from the server was 'Basic Realm="outlook.domain.com"'. --->
                     Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
                     an error: (401) Unauthorized.
                     --- End of inner exception stack trace ---
                     --- End of inner exception stack trace ---
                     at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClass1.<ReconstructAndThrow>b__0()
                     at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
                     at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndThrow(String serverName, VersionInformation serverVersion)
                     at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1.<CallService>b__0()
                     at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
                     at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
                     --- End of inner exception stack trace ---
                     at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
                     at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpoint(Boolean fromAutoDiscover)
IsValid            : True
Identity           :
ObjectState        : New

This was returning the URL https://outlook.domain.com/EWS/mrsproxy.svc, which is not correct for this mailbox (it was the Outlook Anywhere endpoint in a different datacentre). External Outlook access is not allowed at this company, so the TMG server in front of the F5 load balancer in the NY datacentre was not configured for Outlook Anywhere anyway, and browsing to the above URL returned the following picture – a well and truly broken scenario, but not the issue at hand here!

[screenshot of the error returned when browsing to the URL]

If Outlook Anywhere was available at this company, this is not what I would expect to see when browsing to the external EWS URL. To that end, our EWS URLs bypass TMG and go direct to the load balancer.

So now we have either no valid AutoDiscover response or EWS using the wrong URL. Back to the version of the cmdlet Michael was using, as that ignores AutoDiscover: Test-MigrationServerAvailability -ExchangeRemoteMove -RemoteServer servername.domain.com

RunspaceId         : 5874c796-54ce-420f-950b-1d300cf0a64a
Result             : Failed
Message            : The connection to the server 'ewsinukdatacentre.domain.com' could not be completed.
ConnectionSettings :
SupportsCutover    : False
ErrorDetail        : Microsoft.Exchange.Migration.MigrationServerConnectionFailedException: The connection to the
                     server 'ewsinukdatacentre.domain.com' could not be completed. --->
                     Microsoft.Exchange.MailboxReplicationService.RemoteTransientException: The call to
                     'https://ewsinukdatacentre.domain.com/EWS/mrsproxy.svc' failed. Error details: The remote server
                     returned an unexpected response: (501) Invalid Request. --> The remote server returned an
                     error: (501) Not Implemented.. --->
                     Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
                     an unexpected response: (501) Invalid Request. --->
                     Microsoft.Exchange.MailboxReplicationService.RemotePermanentException: The remote server returned
                     an error: (501) Not Implemented.
                     --- End of inner exception stack trace ---
                     --- End of inner exception stack trace ---
                     at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.<>c__DisplayClass1.<ReconstructAndThrow>b__0()
                     at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(Action operation)
                     at Microsoft.Exchange.MailboxReplicationService.MailboxReplicationServiceFault.ReconstructAndThrow(String serverName, VersionInformation serverVersion)
                     at Microsoft.Exchange.MailboxReplicationService.WcfClientWithFaultHandling`2.<>c__DisplayClass1.<CallService>b__0()
                     at Microsoft.Exchange.Net.WcfClientBase`1.CallService(Action serviceCall, String context)
                     at Microsoft.Exchange.Migration.MigrationExchangeProxyRpcClient.CanConnectToMrsProxy(Fqdn serverName, Guid mbxGuid, NetworkCredential credentials, LocalizedException& error)
                     --- End of inner exception stack trace ---
                     at Microsoft.Exchange.Migration.DataAccessLayer.ExchangeRemoteMoveEndpoint.VerifyConnectivity()
                     at Microsoft.Exchange.Management.Migration.TestMigrationServerAvailability.InternalProcessEndpoint(Boolean fromAutoDiscover)
IsValid            : True
Identity           :
ObjectState        : New

Now we can see the 501 error that exrca was returning. It would seem that the 501 was coming from the Kemp and not from the endpoint servers, which is why I could not locate it in the IIS or EWS logs. And indeed, in the Kemp System Message File (logging options > system log files) I found the 501 error:

kernel: L7: badrequest-client_read [157.56.251.92:61541->192.168.1.2:443] (-501): <s:Envelope ? , 0 [hlen 1270, nhdrs 8]

Where the first IP address was a Microsoft datacentre IP and the second was the Kemp listener IP.

It turns out this is due to the Kemp load balancer not returning the HTTP 100 Continue status that Microsoft expects. Microsoft waits five seconds and then sends the data it would have sent had it received the response, at which point the Kemp blocks this data with a 501 error.
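
Put another way, based on the behaviour described above, the conversation looks something like this (a simplified sketch, not an actual capture):

Client (Microsoft):  POST /EWS/mrsproxy.svc HTTP/1.1 with the header Expect: 100-continue
Kemp:                (no "HTTP/1.1 100 Continue" is returned to the client)
Client (Microsoft):  waits ~5 seconds, then sends the request body anyway
Kemp:                rejects the body with "HTTP/1.1 501 Not Implemented" and logs badrequest-client_read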

It is possible to turn off Kemp’s processing of 100 Continue HTTP packets so that it lets them through; this is covered in http://blog.masteringmsuc.com/2013/10/kemp-load-balancer-and-lync-unified.html:

[screenshot of the Kemp 100-Continue Handling setting]

In the version at my client’s site, which had been upgraded to 7.0-16 from v5 via v6, this was defaulting to RFC Conformant, but it needs to be Ignore Continue-100 to work with Office 365. In Kemp version 7.1 and later, the value needs to be changed to RFC-7231 Compliant from the default of RFC-2616 Compliant. Now EWS works, hybrid mailbox moves work, and AutoDiscover works every time on that server – a mix of issues caused by differing interpretations of an RFC. It is to cover these differences that the Kemp load balancer includes the 100-Continue Handling setting; we as IT pros need to ensure we choose the correct setting for our environment based on the software we use.

Changing AD FS 3.0 Certificates


I am quite adept at configuring certificates and changing them around, but this one took me completely by surprise, as it has a bunch of oddities to consider.

First the errors: Web Application Proxy (WAP) reported error 0x80075213, and the event log showed the following:

The federation server proxy could not establish a trust with the Federation Service.

Additional Data
Exception details:
The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

User Action
Ensure that the credentials being used to establish a trust between the federation server proxy and the Federation Service are valid and that the Federation Service can be reached.

And

Unable to retrieve proxy configuration data from the Federation Service.

Additional Data

Trust Certificate Thumbprint:
431116CCF0AA65BB5DCCD9DD3BEE3DFF33AA4DBB

Status Code:
Exception details:
System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly.
at System.Net.HttpWebRequest.GetResponse()
at Microsoft.IdentityServer.Management.Proxy.StsConfigurationProvider.GetStsProxyConfiguration()

This was ultimately caused by the certificate on the AD FS server having been replaced in the user interface; doing so does not replace the certificate that the HTTP service is using, nor the certificates used by the published web applications.

So here are the steps to fix this issue:

AD FS Server

  1. Ensure the certificate is installed in the computer store of all the AD FS servers in the farm
  2. Grant permissions to the digital certificate to the ADFS Service account. Do this by right-clicking the new digital certificate in the MMC snap-in for certificates and choosing All Tasks > Manage Private Keys. Grant read permission to the service account that ADFS is using (you need to click Object Types and select Service Accounts to be able to select this user). Repeat for each server in the farm.
  3. In the AD FS management console expand service > certificates and ensure that the service communications certificate is correct and that the date is valid.
  4. Open this certificate by double-clicking it and on the Details tab, check the value for the Thumbprint.
  5. Start PowerShell on the AD FS Server and run Get-AdfsSslCertificate (not Get-AdfsCertificate). You should get back a few rows of data listing localhost and your federation service name along with a PortNumber and CertificateHash. Make sure the CertificateHash matches the Thumbprint for the service communications certificate.
  6. If they do not match, then run Set-AdfsSslCertificate -Thumbprint XXXXXX (where XXXXXX is the thumbprint value without spaces), as consolidated in the sketch after this list.
  7. Then you need to restart the ADFS Server.
  8. Repeat for each member of the farm, taking them out and in from any load balancer configuration you have. Ensure any SSL certs on the load balancer are updated as well.
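
As a condensed sketch of steps 4 to 7, run per AD FS server (the thumbprint shown is only the example value from the event log above):

# Compare the service communications certificate with the SSL bindings
Get-AdfsCertificate -CertificateType Service-Communications | Format-List Thumbprint
Get-AdfsSslCertificate

# If the CertificateHash values do not match the new certificate, update the bindings
Set-AdfsSslCertificate -Thumbprint 431116CCF0AA65BB5DCCD9DD3BEE3DFF33AA4DBB

# The steps above restart the whole server; the AD FS service itself is adfssrv
Restart-Service adfssrv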

Web Application Proxy

  1. Ensure the certificate is installed in the computer store of the web application proxy server as well. Permissions do not need to be set for this service.
  2. Run Get-WebApplicationProxySslCertificate. You should get back a few rows of data listing localhost and your federation service name along with a PortNumber and CertificateHash. Make sure the CertificateHash matches the Thumbprint for the service communications certificate. If it does not use Set-WebApplicationProxySslCertificate -Thumbprint XXXXX to change it. Here XXXXX is the thumbprint value without spaces.
  3. Restart WAP server and it should now connect to the AD FS endpoint and everything should be working again.
  4. Check that each published web application is using the new certificate, as consolidated in the sketch below. Do this with Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint XXXXXX (where XXXXXX is the thumbprint value without spaces). This sets all your applications to use the same certificate. If you have different certificates in use, then do them one by one: use Get-WebApplicationProxyApplication | Format-Table Name,ExternalCert* to see the existing thumbprints, and then Get-WebApplicationProxyApplication "name" | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint XXXXXX to do just the one called "name".
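
Consolidating the WAP steps into one sketch (again, the thumbprint is only an example value):

# On the WAP server: check the SSL binding used by the proxy service
Get-WebApplicationProxySslCertificate

# Update the binding if the CertificateHash does not match the new certificate
Set-WebApplicationProxySslCertificate -Thumbprint 431116CCF0AA65BB5DCCD9DD3BEE3DFF33AA4DBB

# Review the certificate bound to each published application, then update them (all at once here)
Get-WebApplicationProxyApplication | Format-Table Name,ExternalCert*
Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint 431116CCF0AA65BB5DCCD9DD3BEE3DFF33AA4DBB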

Getting Exchange Message Sizing Raw Data


On the internet there are a number of resources for collecting the raw data needed to size Exchange Server deployments. These include:

This blog post outlines my process for collecting the data needed for the average message size. What is missing from the above two posts is the ability to collect this data in one go for the last seven (or so) days and then take the message tracking log average over the busiest five or so days. Different countries have different working patterns and public holidays, and the scripts above only process the previous day (though Neil’s says the script uses averages across many days – it does not).

The Script

Download the script from the source and open it in Notepad. Edit the top of the script by deleting the first five lines of the code (not the comments or the blank lines) and replacing them with the alternative lines shown below:

Remove These Lines

$today = get-date
$rundate = $($today.adddays(-1)).toshortdatestring()

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Replace With These Lines

$today = get-date
$whichDay = $args[0]
if (! $whichDay) { $whichDay = 1 }
$rundate = $($today.adddays(-$whichDay))

$outfile_date = ([datetime]$rundate).tostring("yyyy_MM_dd")
$outfile = "email_stats_" + $outfile_date + ".csv"

Before running this script you need to do two things. First, if you have Exchange servers in different locations around the world, you need to alter the script to include only the servers in the geographies you want to measure. Do this by changing the two lines below, adding a server name prefix or another unique differentiator that will pick up just the mailbox and hub transport servers in those regions. Edge servers do not need to be considered:

  • $mbx_servers = Get-ExchangeServer |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver |? {$_.serverrole -match "hubtransport"} |% {$_.name}

These two lines are near the top and need to be changed to something like the following:

  • $mbx_servers = Get-ExchangeServer UK* |? {$_.serverrole -match "Mailbox"}|% {$_.fqdn}
  • $hts = get-exchangeserver UK* |? {$_.serverrole -match "hubtransport"} |% {$_.name}

In the above I altered the script to find just the Exchange Servers starting with the letters UK. Adjust as needed. If you have one location/timezone worldwide then these changes are not needed.

If you are collecting data from Exchange 2013 servers, then change "hubtransport" in the $hts line to read "mailbox".

Save the file as a PowerShell script (Get-MessageTrackingLogStats.ps1) and copy it to your Exchange Server.

Before you can run the script, check that you have enough tracking logs to process, or you will get invalid and skewed data. Run Get-TransportServer | FL Name,*TrackingLog* and make sure the quota is large enough for each day of logs, then check each of the log folders and make sure they do not exceed this value. If they do, you will need to run the script below frequently, before the log files are purged from the server, rather than once at the end of 7 or 14 days.
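
A slightly expanded version of that check, listing the path, quota, and retention side by side so you can compare them with the actual folder sizes on disk:

# Show where the tracking logs live, the directory quota, and how long logs are kept
Get-TransportServer | Format-List Name,MessageTrackingLogPath,MessageTrackingLogMaxDirectorySize,MessageTrackingLogMaxAge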

Now run the script with a number after it for how many days back you want to look at the logs; it will process the message tracking logs for that single day, that many days back. For example, if you run Get-MessageTrackingLogStats.ps1 5 it will process one day of data from five days ago. Repeat until you have run it seven times (or loop it, as sketched after this list) and you will have one CSV file of tracking log data for each of the last seven days:

  • Get-MessageTrackingLogStats.ps1 1
  • Get-MessageTrackingLogStats.ps1 2
  • Get-MessageTrackingLogStats.ps1 3
  • Get-MessageTrackingLogStats.ps1 4
  • Get-MessageTrackingLogStats.ps1 5
  • Get-MessageTrackingLogStats.ps1 6
  • Get-MessageTrackingLogStats.ps1 7
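
Or, as a one-line alternative to the seven manual runs above:

# Process each of the last seven days of tracking logs in turn
1..7 | ForEach-Object { .\Get-MessageTrackingLogStats.ps1 $_ }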

Zip up these files and take them to a computer running Microsoft Excel.

Open the oldest file and process each of the files as follows:
[screenshot]

Hide columns C through E, H through O, and R through U:
[screenshot]

You now have a spreadsheet with Date/User/Received Total/Received MB Total/Sent Unique Total/Sent Unique MB Total:
[screenshot]

Format as a table by selecting a cell in the table and clicking the Table button in the Insert tab:
[screenshot]

The Table Tools / Design tab appears. Select the Total Row check box here; this will scroll you to the bottom of the table.
[screenshot]

Select the third cell in the total row, drop down the options, and choose Average:
[screenshot]

Drag this Average cell formula across all four columns of numbers:
[screenshot]

Modify the fourth column (Received MB Total) to read =SUBTOTAL(101,[Received MB Total])/Table1[[#Totals],[Received Total]]*1024. This is the fourth column’s average divided by the third column’s (the count of received messages) and multiplied by 1024 to convert it from MB to KB. The Exchange Bandwidth Calculator and Storage Calculator work on values in KB and not MB.
[screenshot]

Repeat for the last column of figures (Sent Unique MB Total). This time the formula is =SUBTOTAL(101,[Sent Unique MB Total])/Table1[[#Totals],[Sent Unique Total]]*1024, which is the last column divided by the previous one and converted to KB. The total row now shows you the average number of messages sent and received and the average size of those messages in KB.

At the bottom of the spreadsheet add the following:

  • Sent/Mailbox/Day
  • Received/Mailbox/Day
  • Average/MessageSize/Day (KB)

Then copy the relevant data into the cells as shown. The Average is the sum of the two averages divided by 2: =(Table1[[#Totals],[Received MB Total]]+Table1[[#Totals],[Sent Unique MB Total]])/2
[screenshot]

Then take all your numbers and reduce the number of decimal places shown:
[screenshot]

Now that you have the raw data calculated, save this file as an Excel Workbook (and not a CSV).

Repeat for each of the CSV files you have available.

Process The Daily Data

Create a new spreadsheet that contains the following three tables:

  • Messages Sent Per Day, with a row for each day and a column for each unique geographical region
  • Messages Received Per Day, with a row for each day and a column for each unique geographical region
  • Average Message Size (KB), with a row for each day and a column for each unique geographical region

This will look like the following, once all the data is copied from the source spreadsheets:

[screenshot of the three tables]

Now you can create a chart for each table, covering each region. The following show Sent, Received, and Average (KB), with each region overlaid as a different line colour:

[chart: messages sent per day by region]
Note: it is possible that HK (blue) is missing some data here, as it is unexpectedly low.

[chart: messages received per day by region]

[chart: average message size (KB) by region]

Note the HK (blue) Sunday average message size. This is probably because one or a few users sent a disproportionate number of larger emails on the quiet day of the week. For my analysis I am going to ignore it.

Now I have my peak Messages Sent Per Day for each region – and I take the highest value for the week, not the value for yesterday, which is all I would get if I ran the above script once.

This data can now go into the Bandwidth Calculator and generate accurate figures for the business in question.

Enabling Microsoft Rights Management in SharePoint Online


This article is the fifth in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at protecting documents in SharePoint. This means your cloud users will have their data protected just by saving it to a document library.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

To enable SharePoint Online to integrate with Microsoft Rights Management you need to turn on RMS in SharePoint. You do this with the following steps:

  1. Go to service settings, click sites, and then click View site collections and manage additional settings in the SharePoint admin center:
    [screenshot]
  2. Click settings and find Information Rights Management (IRM) in the list:
    [screenshot]
  3. Select Use the IRM service specified in your configuration and click Refresh IRM Settings:
    [screenshot]
  4. Click OK

Once this is done, you can now enable selected document libraries for RMS protection.

  1. Find the document library that you want to enforce RMS protection upon and click the PAGE tab at the top left of the SharePoint site (under the Office 365 logo).
    [screenshot]
  2. Then click Library Settings:
    [screenshot]
  3. If the site is not a document library – for example, the picture below shows a “document center” site – you will not see the Library Settings option. For these sites, navigate to the document library itself, click the LIBRARY tab, and then choose Library Settings:
    [screenshots]
  4. Click Information Rights Management
    [screenshot]
  5. Select Restrict permissions on this library on download and add your policy title and policy description. Click SHOW OPTIONS to configure additional RMS settings on the library, and then click OK.
    [screenshot]
  6. The additional options allow you to enforce restrictions on the document library, such as RMS key caching (for offline use), and to allow documents to be shared with a group of users. This group must be mail enabled (or at least have an email address in its email address attribute) and be synced to the cloud.

To start using the RMS functionality in SharePoint, upload a document to this library or create a new document in the library. Then download the document again – it will now be RMS protected.

Using Microsoft Rights Management from Microsoft Office


This article is the second to last in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at protecting documents and emails in Microsoft Office 2010 or later. This means your cloud users and your on-premises users can share encrypted content, and as it is cloud based, you can send encrypted content to anyone, even if you are not using an Office 365 mailbox.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

Once you have enabled Microsoft Rights Management and your Microsoft Office user is connected to Office 365 via the accounts feature in Office:

[screenshot]

In the above picture I have Microsoft Word showing that I am connected to my organizational account in Office 365.

Now that I am logged in to Office 365 I can access the RMS templates in Office 365 and protect documents with the templates I have created. To go through this process you need to follow these steps:

  1. Select the File tab in the Office application that your document has been created in.
  2. You will find yourself in the Info area:
    [screenshot]
  3. Click Protect Document > Restrict Access > Select your Office 365 tenant name (if you have more than one that you have used, you will see each that is enabled for RMS) > then select the protection template that you want to use:
    [screenshot]
  4. The Protect Document button is highlighted and the RMS protection template is displayed, along with the description. If you are running a non-English version of Microsoft Office and your template has a matching language name, then you will see that instead. It will default to US-English for the languages that do not have a country option in the RMS template creation process:
    [screenshot]
  5. Note, though, that if you do not have any company employees working in the USA, you could use the US-English default for your own language if your language is not available – for example, above I am on a UK-English version of Office and could have used UK-English spellings and phrases for the name and description – who is to know that the US-English default is being shown!
  6. Back in the document the yellow banner, which can be closed, shows the protection level and the description:
    [screenshot]

Though I have shown Word in the above examples, this is true for other Office documents as well, as long as you are using the Office ProPlus edition, as you need that version to create RMS-protected content. For example, an Outlook message is shown below:

[screenshot of an RMS-protected Outlook message]

Configuring Exchange On-Premises to Use Azure Rights Management


This article is the fifth in a series of posts looking at Microsoft’s new Rights Management product set. In an earlier post we looked at turning on the feature in Office 365, and in this post we will look at enabling on-premises Exchange Servers to use this cloud-based RMS server. This means your cloud users and your on-premises users can share encrypted content, and as it is cloud based, you can send encrypted content to anyone, even if you are not using an Office 365 mailbox.

In this series of articles we will look at the following:

The items above will get lit up as the articles are released – so check back, or leave a comment on the first post in the series and I will let you know when new content is added.

Exchange Server integrates very nicely with on-premises RMS servers. To integrate Exchange on-premises with Windows Azure Rights Management you need to install a small connector service on-premises that connects Exchange to the cloud RMS service. On-premises file servers (classification) and SharePoint can also use this service to integrate with cloud RMS.

You install this small service on-premises on servers that run Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2. After you install and configure the connector, it acts as a communications interface between the on-premises IRM-enabled servers and the cloud service. The service can be downloaded from http://www.microsoft.com/en-us/download/details.aspx?id=40839

From this download link there are three files to get onto the server you are going to use for the connector.

  • RMSConnectorSetup.exe (the connector server software)
  • GenConnectorConfig.ps1 (this automates the configuration of registry settings on your Exchange and SharePoint servers)
  • RMSConnectorAdminToolSetup_x86.exe (needed if you want to configure the connector from a 32-bit client)

Once you have installed the software you need, IT and users can easily protect documents and pictures both inside and outside your organization, without having to install additional infrastructure or establish trust relationships with other organizations.

The overview of the structure of the link between on-premises and Windows Azure Rights Management is as follows:

[diagram: RMS connector architecture]

Notice therefore that there are some prerequisites. You need to have an Office 365 tenant and to have turned on Windows Azure Rights Management. Once you have done this you need the following:

  • Get your Office 365 tenant up and running
  • Configure directory synchronization between on-premises Active Directory and Windows Azure Active Directory (the Office 365 DirSync tool)
  • Optionally (recommended but not required), enable AD FS for Office 365 to avoid having to log in to Windows Azure Rights Management when creating or opening protected content
  • Install the connector
  • Prepare credentials for configuring the software
  • Authorise the servers that will connect to the service
  • Configure load balancing to make this a highly available service
  • Configure Exchange Server on-premises to use the connector

Installing the Connector Service

  1. You need to set up an RMS administrator. This administrator is either a specific user object in Office 365 or all the members of a security group in Office 365.
    1. To do this start PowerShell and connect to the cloud RMS service by typing Import-Module aadrm and then Connect-AadrmService.
    2. Enter your Office 365 global administrator username and password
    3. Run Add-AadrmRoleBasedAdministrator -EmailAddress <email address> -Role "GlobalAdministrator" or Add-AadrmRoleBasedAdministrator -SecurityGroupDisplayName <group Name> -Role "ConnectorAdministrator". If the administrator object does not have an email address, then you can look up the ObjectID with Get-MSOLUser and use that instead of the email address.
  2. Create a DNS name for the connector in any namespace that you own. This name needs to be reachable from your on-premises servers, so it could be in your internal (.local etc.) AD domain namespace – for example rmsconnector.contoso.local – pointing at the IP address of the connector server or the load balancer VIP that you will use for the connector.
  3. Run RMSConnectorSetup.exe on the server you wish to have as the service endpoint on-premises. If you are going to build a highly available solution, then this software needs installing on multiple machines, and it can be installed in parallel. Install a single RMS connector (potentially consisting of multiple servers for high availability) per Windows Azure RMS tenant. Unlike Active Directory RMS, you do not have to install an RMS connector in each forest. Select to install the software on this computer:
    [screenshot]
  4. Read and accept the licence agreement!
  5. Enter your RMS administrator credentials as configured in the first step.
  6. Click Next to prepare the cloud for the installation of the connector.
  7. Once the cloud is ready, click Install. During the RMS installation process, all prerequisite software is validated and installed, Internet Information Services (IIS) is installed if not already present, and the connector software is installed and configured
    [screenshot]
  8. If this is the last server that you are installing the connector service on (or the first if you are not building a highly available solution) then select Launch connector administrator console to authorize servers. If you are planning on installing more servers, do them now rather than authorising servers:
    [screenshot]
  9. To validate the connector quickly, connect to http://<connectoraddress>/_wmcs/certification/servercertification.asmx, replacing <connectoraddress> with the server address or name that has the RMS connector installed. A successful connection displays a ServerCertificationWebService page.
  10. For an Exchange Server organization or SharePoint farm it is recommended to create a security group (one for each) that contains the computer or service accounts that Exchange or SharePoint runs as. This way the servers all get the rights needed for RMS with minimal administrative interaction. Adding servers individually rather than to the group gives the same outcome; it just requires more work. It is important that you authorize the correct object. For a server to use the connector, the account that runs the on-premises service (for example, Exchange or SharePoint) must be selected for authorization. For example, if the service is running as a configured service account, add the name of that service account to the list. If the service is running as Local System, add the name of the computer object (for example, SERVERNAME$).
    1. For servers that run Exchange: You must specify a security group and you can use the default group (DOMAIN\Exchange Servers) that Exchange automatically creates and maintains of all Exchange servers in the forest.
    2. For SharePoint you can use the SERVERNAME$ object, but the recommendation configuration is to run SharePoint by using a manually configured service account. For the steps for this see http://technet.microsoft.com/en-us/library/dn375964.aspx.
    3. For file servers that use File Classification Infrastructure, the associated services run as the Local System account, so you must authorize the computer account for the file servers (for example, SERVERNAME$) or a group that contains those computer accounts.
  11. Add all the required groups (or servers) to the authorization dialog and then click close. For Exchange Servers, they will get SuperUser rights to RMS (to decrypt content):
    [screenshots of the authorization dialog]
  12. If you are using a load balancer, then add all the IP addresses of the connector servers to the load balancer under a new virtual IP and publish it for TCP port 80 (and 443 if you want to configure it to use certificates) and equally distribute the data across all the servers. No affinity is required. Add a health check for the success of a HTTP or HTTPS connection to http://<connectoraddress>/_wmcs/certification/servercertification.asmx so that the load balancer fails over correctly in the event of connector server failure.
  13. To use SSL (HTTPS) to connect to the connector server, on each server that runs the RMS connector, install a server authentication certificate that contains the name that you will use for the connector. For example, if your RMS connector name that you defined in DNS is rmsconnector.contoso.com, deploy a server authentication certificate that contains rmsconnector.contoso.com in the certificate subject as the common name. Or, specify rmsconnector.contoso.com in the certificate alternative name as the DNS value. The certificate does not have to include the name of the server. Then in IIS, bind this certificate to the Default Web Site.
  14. Note that any certificate chains or CRLs for the certificates in use must be reachable.
  15. If you use proxy servers to reach the internet then see http://technet.microsoft.com/en-us/library/dn375964.aspx for steps on configuring the connector servers to reach the Windows Azure Rights Management cloud via a proxy server.
  16. Finally you need to configure the Exchange or SharePoint servers on premises to use Windows Azure Active Directory via the newly installed connector.
    • To do this you can either download and run GenConnectorConfig.ps1 on the server you want to configure or use the same tool to generate Group Policy script or a registry key script that can be used to deploy across multiple servers.
    • Just run the tool and, at the prompt, enter the URL that you configured in DNS for the connector, followed by the parameter to make the local registry settings, the registry files, or the GPO import file. Enter either http:// or https:// in front of the URL depending on whether or not SSL is in use on the connector’s IIS website.
    • For example, .\GenConnectorConfig.ps1 -ConnectorUri http://rmsconnector.contoso.com -SetExchange2013 will configure a local Exchange 2013 server
  17. If you have lots of servers to configure, then run the script with -CreateRegEditFiles or -CreateGPOScript along with -ConnectorUri. This will make five .reg files (for Exchange 2010 and 2013, SharePoint 2010 and 2013, and the File Classification service). For the GPO option it will make one GPO import script.
  18. Note that the connector can only be used by Exchange Server 2010 SP3 RU2 or later, or Exchange 2013 CU3 or later. The OS on the server also needs to include a version of the RMS client that supports RMS Cryptographic Mode 2. This means Windows Server 2008 + KB2627272, Windows Server 2008 R2 + KB2627273, Windows Server 2012, or Windows Server 2012 R2.
  19. For Exchange Server you need to manually enable IRM, as you would if you had an on-premises RMS server. This is covered in http://technet.microsoft.com/en-us/library/dd351212.aspx, but in brief you run Set-IRMConfiguration -InternalLicensingEnabled $true (see the consolidated sketch after this list). The rest, such as transport rules and OWA and search configuration, is covered in the mentioned TechNet article.
  20. Finally you can test whether RMS is working with Test-IRMConfiguration -Sender billy@contoso.com. You should get a message at the end of the test saying Pass.
  21. If you downloaded GenConnectorConfig.ps1 before May 1st 2014, then download it again, as the version before this date writes the registry keys incorrectly and you get errors such as “FAIL: Failed to verify RMS version. IRM features require AD RMS on Windows Server 2008 SP2 with the hotfixes specified in Knowledge Base article 973247” and “Microsoft.Exchange.Security.RightsManagement.RightsManagementException: Failed to get Server Info from http://rmsconnector.contoso.com/_wmcs/certification/server.asmx. ---> System.Net.WebException: The request failed with HTTP status 401: Unauthorized.”. If you get these, then turn off IRM, delete the C:\ProgramData\Microsoft\DRM\Server folder to remove old licences, delete the registry keys, run the latest version of GenConnectorConfig.ps1, refresh the RMS keys with Set-IRMConfiguration -RefreshServerCertificates, and reset IIS with IISRESET.
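
Pulling the Exchange-specific steps together, a minimal configuration and test sequence looks like this (the connector URL is the example value used above; run the IRM cmdlets from Exchange Management Shell):

# Point this Exchange 2013 server at the connector (per step 16)
.\GenConnectorConfig.ps1 -ConnectorUri http://rmsconnector.contoso.com -SetExchange2013

# Enable IRM licensing (per step 19) and test it (per step 20)
Set-IRMConfiguration -InternalLicensingEnabled $true
Test-IRMConfiguration -Sender billy@contoso.com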

Now you can encrypt messages on-premises using your AADRM licence, without requiring an RMS server deployed locally.

Is Your SenderID/SPF or DKIM Record Correctly Configured


With Microsoft having just announced that DKIM is coming to Office 365 soon (release notes here) and SenderID already available, I thought this was a good time to write a blog post on using DMARC to show whether your records are correct.

DMARC is a protocol that allows you to see the effect of your SenderID/SPF records from the viewpoint of the recipient – as long as the recipient is one of the larger email receivers in the world, but that should be enough to help you validate your records. DMARC is currently in use at Outlook.com/Hotmail, Gmail, Facebook, Yahoo, and Twitter, to name some of the larger users.

To receive the results of a DMARC report you need to add your DMARC policy as a new TXT record to DNS. It looks something like this:

_dmarc IN TXT "v=DMARC1;p=reject;pct=100;rua=mailto:postmaster@dmarcdomain.com"

This TXT record for "_dmarc" at the root of your domain tells the receiver to report back to you all (pct=100) SenderID/SPF failures; DKIM failures are reported as well. If you send lots of email you might want to report on a smaller share of the overall failures, say pct=20! The record also says who to send the report to (rua=email address).

Finally, one of the most useful bits of DMARC is that you can tell the receiver what to do with failures of your SenderID/SPF or DKIM record. For example, in SenderID/SPF you can append the "-all" modifier, which says that anything not covered by the rest of the record should fail SenderID. The recipient then decides what to do with your email (as it might genuinely be your email, because you might have got your SPF or DKIM record wrong). In the example above you tell the recipient email system you would like them to reject (p=) the email. The policy could also be quarantine or none; p=none means accept the message, but report it back to the rua email address in the DMARC record.

p=none allows you to introduce SPF/DKIM and ensure none of your messages are rejected at the recipient end, while still getting reports on the IP addresses from which your emails (or spoofed emails) are being sent.

Once you are sure your SPF/DKIM records are correct, change your policy to p=quarantine and then, when you are finally sure, change it to p=reject.
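
Using the example record from above, a staged rollout would therefore look something like this, changing only the p= value over time (one record at a time, not all three at once):

_dmarc IN TXT "v=DMARC1;p=none;pct=100;rua=mailto:postmaster@dmarcdomain.com"        ; stage 1: monitor only
_dmarc IN TXT "v=DMARC1;p=quarantine;pct=100;rua=mailto:postmaster@dmarcdomain.com"  ; stage 2: once the reports look right
_dmarc IN TXT "v=DMARC1;p=reject;pct=100;rua=mailto:postmaster@dmarcdomain.com"      ; stage 3: when you are sure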

An example of creating a DMARC policy on your domain is shown below. The example uses AWS Route 53 DNS, but any DNS management console that supports TXT records should work:

[screenshot of the DMARC TXT record in AWS Route 53]

Finally, as this is just a quick intro to DMARC, note that you can also use the ruf= qualifier to set the email address for per-message failure reports.

Updating Exchange 2013 Anti-Malware Agent From A Non-Internet Connected Server


In Forefront Protection for Exchange (now discontinued) for Exchange 2010, it was possible to run the script at http://support.microsoft.com/kb/2292741 to download the signatures and scan engines when the server did not have a direct connection to the download site at forefrontdl.microsoft.com.

To achieve the same with Exchange 2013 and the built-in anti-malware transport agent, you can repurpose the 2010 script to download the engine updates to a folder on a machine with internet access, and then configure Exchange Server 2013 to pull the updates from a share on that machine which the Exchange servers can reach.

So start by downloading the script at http://support.microsoft.com/kb/2292741 and saving it as Update-Engines.ps1.

Create a folder called C:\Engines (for example) and share it, granting Authenticated Users read access and full control to the account that will run Update-Engines.ps1.

Run Update-Engines.ps1 with the following:

Update-Engines.ps1 -EngineDirPath C:\Engines -UpdatePathUrl http://forefrontdl.microsoft.com/server/scanengineUpdate/ -Engines "Microsoft" -Platforms amd64

The above script downloads just the 64-bit Microsoft engine, as that is all you need, and places it in the local folder (which is the shared folder you created) on that machine. You can schedule this script using standard published techniques for scheduling PowerShell.
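
For example, a scheduled task could be registered like this on Windows Server 2012 or later (the script path and schedule are hypothetical – adjust to your environment, and consider running the task as a dedicated account):

# Register a task that runs the engine download nightly at 02:00
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File C:\Scripts\Update-Engines.ps1 -EngineDirPath C:\Engines -UpdatePathUrl http://forefrontdl.microsoft.com/server/scanengineUpdate/ -Engines "Microsoft" -Platforms amd64'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'Update Exchange Anti-Malware Engines' -Action $action -Trigger $trigger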

On your Exchange Server that has no internet connectivity, start Exchange Management Shell and run the following:

Set-MalwareFilteringServer ServerName -PrimaryUpdatePath \\dlserver\enginesShare

Then start a PowerShell window running as an administrator – you can use Exchange Management Shell, but it too needs to be started as an administrator for this last step. In this shell run the following:

Add-PSSnapin microsoft.forefront.filtering.management.powershell

Get-EngineUpdateInformation

Start-EngineUpdate

Get-EngineUpdateInformation

Then compare the first results from Get-EngineUpdateInformation with the second. If you have waited 30 or so seconds, the second set of results should show the current time for the LastChecked value; UpdateVersion and UpdateStatus might also have changed. If your Exchange Server has internet connectivity, it will already be updating automatically every hour and so will not need this script.

Exchange DLP Rules in Exchange Management Shell


This one took a while to work out, so I am noting it down here!

If you want to create a transport rule for a DLP policy that has one data classification (i.e. one data type to look for, such as ‘Credit Card Number’), that is easy in PowerShell. An example would be as below:

New-TransportRule -name "Contoso Pharma Restricted DLP Rule (Blocked)" -DlpPolicy "ContosoPharma" -SentToScope NotInOrganization -MessageContainsDataClassifications @{Name="Contoso Pharmaceutical Restricted Content"} -SetAuditSeverity High -RejectMessageEnhancedStatusCode 5.7.1 -RejectMessageReasonText "This email contains restricted content and you are not allowed to send it outside the organization"

As you can see from the MessageContainsDataClassifications parameter, the data classification is a hashtable naming the single classification.

To add more than one classification is much more involved:

$DataClassificationA = @{Name="Contoso Pharmaceutical Private Content"}
$DataClassificationB = @{Name="Contoso Pharmaceutical Restricted Content"}
$AllDataClassifications = @{}
$AllDataClassifications.Add("DataClassificationA",$DataClassificationA)
$AllDataClassifications.Add("DataClassificationB",$DataClassificationB)
New-TransportRule -name "Notify if email contains ContosoPharma documents 1" -DlpPolicy "ContosoPharma" -SentToScope NotInOrganization -MessageContainsDataClassifications $AllDataClassifications.Values -SetAuditSeverity High -GenerateIncidentReport administrator -IncidentReportContent "Sender","Recipients","Subject" -NotifySender NotifyOnly

And as you can see above, you need to build a hashtable of hashtables and then pass the Values property of the outer hashtable to New-TransportRule.
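
To confirm both classifications ended up on the rule, you can read them back (a quick check, assuming the rule name used above):

# List the DLP policy and data classifications attached to the rule
Get-TransportRule "Notify if email contains ContosoPharma documents 1" | Format-List Name,DlpPolicy,MessageContainsDataClassifications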

An “Inexpensive” Exchange Lab In Azure


This blog post centres around two scripts that can be used to quickly provision an Exchange Server lab in Azure and then to remove it again. The reason the post is titled “inexpensive” is that Azure charges compute hours even if the virtual machines are shut down. Therefore, to make my Exchange lab cheaper to operate and to avoid being charged when the lab is not in use, I took my already provisioned VHD files and created a few scripts to create the virtual machines and cloud service, and to remove them again when needed.

Before you start using these scripts, you need to have already uploaded or created your own VHDs in Azure and designed your lab as you need it. These scripts will then take a CSV file with the relevant values in it and create a VM for each VHD in the correct subnet (which you have also created in Azure) and always in the correct order – thus ensuring they always get the same IP address from your virtual network (UPDATE: 14 March 2014 – thanks to Bhargav, this script now reserves the IPs as well, as this is a newish feature in Azure). Without reserving an IP, when you boot your domain controller first in each subnet it will always get the fourth available IP address: this IP is the DNS IP address in Azure, and then each of the other machines is created and booted in the order of your choosing and gets the subsequent IPs. Azure never used to guarantee the IP, but updates in February 2014 allow this with the latest Azure PowerShell cmdlets. This way we can ensure the private IP is always the same and that machine dependencies, such as domain controllers running first, are adhered to.

These scripts are written in PowerShell and call the Windows Azure PowerShell cmdlets. You need to install the Azure cmdlets on your computer; these scripts rely on features found in version 0.7.3.1 or later. You can install the cmdlets from http://www.windowsazure.com/en-us/documentation/articles/install-configure-powershell/

Build-AzureExchangeLab.ps1

# Retrieve with Get-AzureSubscription 
$subscriptionName = "Visual Studio Premium with MSDN"

Import-AzurePublishSettingsFile 'downloaded.publishsettings.file.got.with.Get-AzurePublishSettingsFile'

# Select the subscription to work on if you have more than one subscription
Select-AzureSubscription -SubscriptionName $subscriptionName

# Name of Virtual Network to add VM's to
$VMNetName = "MCMHybrid"

# CSV File with following columns (BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort)
$CSVFile = Import-CSV 'path\filename.csv'

# Loop to build lab here. Ultimately get values from CSV file
foreach ($VMItem in $CSVFile) {

    # Retrieve with Get-AzureStorageAccount  
    $StorageAccount = $VMItem.StorageAccount

    # Specify the storage account location containing the VHDs 
    Set-AzureSubscription -SubscriptionName $subscriptionName  -CurrentStorageAccount $StorageAccount
  
    # Not Used $location = $VMItem.Location     # Retrieve with Get-AzureLocation

    # Specify the subnet to use. Retreive with Get-AzureVNetSite | FL Subnets
    $subnetName = $VMItem.SubnetName

    $AffinityGroup = $VMItem.AffinityGroup      # From Get-AzureAffinityGroup (for association with a private network you have already created). 

    $VMName = $VMItem.VMName
    $VMOSDiskName = $VMItem.VMOSDiskName        # From Get-AzureDisk
    $VMInstanceSize = $VMItem.VMInstanceSize    # ExtraSmall, Small, Medium, Large, ExtraLarge 
    $CloudServiceName = $VMName
    $IPAddress = $VMItem.IPAddress              # Reserves a specific IP for the VM
    if ($VMItem.BringOnline -eq "Yes") {
        Write-Host "Creating VM: " $VMName
        $NewVM = New-AzureVMConfig -Name $VMName -DiskName $VMOSDiskName -InstanceSize $VMInstanceSize | Add-AzureEndpoint -Name 'Remote Desktop' -LocalPort 3389 -PublicPort $VMItem.PublicRDPPort -Protocol tcp | Add-AzureEndpoint -Protocol tcp -LocalPort 25 -PublicPort 25 -Name 'SMTP' | Add-AzureEndpoint -Protocol tcp -LocalPort 443 -PublicPort 443 -Name 'SSL' | Add-AzureEndpoint -Protocol tcp -LocalPort 80 -PublicPort 80 -Name 'HTTP' | Set-AzureSubnet -SubnetNames $subnetName | Set-AzureStaticVNetIP -IPAddress $IPAddress
        # Creates new VM and waits for it to boot if required
        if ($VMItem.WaitForBoot -eq "Yes") {New-AzureVM -ServiceName $CloudServiceName -AffinityGroup $AffinityGroup -VMs $NewVM -VNetName $VMNetName -WaitForBoot}
            else {New-AzureVM -ServiceName $CloudServiceName -AffinityGroup $AffinityGroup -VMs $NewVM -VNetName $VMNetName }
    }
}

Remove-AzureExchangeLab.ps1

# Retrieve with Get-AzureSubscription 
$subscriptionName = "Visual Studio Premium with MSDN"

Import-AzurePublishSettingsFile 'downloaded.publishsettings.file.got.with.Get-AzurePublishSettingsFile'

# Select the subscription to work on if you have more than one subscription
Select-AzureSubscription -SubscriptionName $subscriptionName

# CSV File with following columns (BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort)
$CSVFile = Import-CSV 'path\filename.csv'

# Loop to build lab here. Ultimately get values from CSV file
foreach ($VMItem in $CSVFile) {

    # Stop VM
    Stop-AzureVM -Name $VMItem.VMName -ServiceName $VMItem.VMName -Force

    # Remove VM but leave VHDs behind
    Remove-AzureVM -ServiceName $VMItem.VMName -Name $VMItem.VMName 

    # Remove Cloud Service
    Remove-AzureService $VMItem.VMName -Force
}

CSV File Format

The CSV file has a row per virtual machine, listed in the order the machines should boot:

BringOnline,VMName,StorageAccount,VMOSDiskName,VMInstanceSize,SubnetName,IPAddress,Location,AffinityGroup,WaitForBoot,PublicRDPPort
Yes,mh-oxf-dc1,portalvhdsjv47jtq9qdrmb,mh-oxf-mbx2-mh-oxf-mbx2-0-201312301745030496,Small,Oxford,10.0.0.4,West Europe,C7Solutions-AG,Yes,3389
etc.

The columns are as follows:

  • BringOnline: Yes or No
  • VMName: This name is used for the VM and the Cloud Service. It must be unique within Azure. An example might be EX-LAB-01 (if that is unique that is)
  • StorageAccount: The name of the storage account that the VHD is stored in. This might be one you created yourself or one made by Azure with a name containing random letters. For example portalvhdshr4djwe9dwcb5 would be what this value might look like. Use Get-AzureStorageAccount to find this value.
  • VMOSDiskName: This is the disk name (not the file name) and is retrieved with Get-AzureDisk
  • VMInstanceSize: ExtraSmall, Small, Medium, Large or ExtraLarge
  • SubnetName: Get with Get-AzureVNetSite | FL Subnets
  • IPAddress: Sets a specific IP address for the VM. The VM will always get this IP when it boots, and other VMs will not take it if they happen to boot before it
  • Location: Retrieve with Get-AzureLocation. This value is not used in the script as I use Affinity Groups and subnets instead.
  • AffinityGroup: From Get-AzureAffinityGroup (for association with a private network you have already created).
  • WaitForBoot: Yes or No. This will wait for the VM to come online (and thus get an IP correctly provisioned in order) or ensure that machines like the domain controller are running first.
  • PublicRDPPort: Set to 3389 unless you want to use a different port. For simplicity, the script sets ports 443, 80 and 25 as open on the IP addresses of the VM