Category Archives: SharePoint Server 2013

SharePoint 2013 Hybrid Disaster Recovery Approach with Azure IaaS

Over the past few months I’ve been working with the content team in Redmond to publish a hybrid disaster recovery approach for SharePoint 2013 using SQL Server AlwaysOn Availability Groups with Microsoft Azure. Today, the article was published and can be found below.

Plan for SQL Server AlwaysOn and Microsoft Azure for SharePoint Server 2013 Disaster Recovery
https://technet.microsoft.com/en-us/library/mt607084.aspx

I’d like to acknowledge the following people, who provided valuable assistance in testing, reviewing and publishing the article on TechNet with me over the past few months to get it into the detailed form it is in today.

Neil Hodgkinson, Senior Program Manager

Dan Wesley, Senior Content Editor

Steve Hord, Senior Content Editor

David Williams, SQL Server Expert

Matthew Robertshaw, SQL Server Expert

Azure IaaS: Workflow Manager Farm

Recently, I had a requirement to host and test a Workflow Manager farm in Azure IaaS with SharePoint 2013. The key element was to ensure high availability for both the SharePoint farm and the Workflow Manager farm as part of the design. This can be achieved by spreading each farm tier across availability sets, so that the required roles remain available during host patching. This, of course, includes the database tier, where you could use SQL Server AlwaysOn Availability Groups in synchronous commit mode to achieve maximum uptime.

Cloud Services

Generally, you would want separation between the SharePoint farm, the SQL Server infrastructure and the Workflow Manager farm. I would personally recommend using the new regional virtual networks in Azure so that you can use the internal load balancers required for Workflow Manager high availability. The picture below depicts a very basic farm with high-level components, including Workflow Manager within its own cloud service.

[Figure SPWFHAAzure: SharePoint and Workflow Manager high availability topology in Azure]

An internal load balancer is used for SQL Server AlwaysOn; this currently supports a single Availability Group in Windows Azure, because the Internal Load Balancer (ILB) supports only one IP address. The SharePoint farm would ideally access SQL Server via a DNS record pointing to the load-balanced IP, and the SQL Servers would expose the SQL Server endpoint port to the internal load balancer used by the Availability Group for connectivity. Synchronous commit mode is used on the Availability Group to maximize uptime.

Since this topic is about Workflow Manager high availability, we would have three instances hosted within a cloud service, exposed via an internal load balancer. This should ideally be on port 12290 (HTTPS) so the SharePoint farm can communicate with the Workflow Manager endpoint. It’s recommended to use a domain certification authority (CA) so that a trusted certificate is shared between all the Workflow Manager nodes and Workflow clients. A common question is: why not just have two servers in the Workflow Manager farm? Because Service Bus uses a quorum, a third node is required for high availability.
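The Workflow Manager ILB setup is walked through in the configuration steps below; the SQL Server side follows the same pattern, sketched here with the classic Azure PowerShell module. The cloud service, VM names, subnet and probe port are placeholder values, and -DirectServerReturn is enabled so the listener IP can float between replicas.

# Minimal sketch: ILB and listener endpoint for the SQL Server Availability Group
# (service, VM and subnet names plus the probe port are example values)
$SQLServiceName = "SQLCSVC"
$SQLILBName = "ILBSQLAG"
$SQLILBSubnet = "SQLSubNet"
$AGListenerPort = "1433"
$AGProbePort = "59999"

# Create the internal load balancer in the SQL cloud service
Add-AzureInternalLoadBalancer -InternalLoadBalancerName $SQLILBName -SubnetName $SQLILBSubnet -ServiceName $SQLServiceName

# Expose the listener endpoint on each replica; -DirectServerReturn allows the
# listener IP to move between replicas on failover
foreach ($VM in "SQL01", "SQL02")
{
    Get-AzureVM -ServiceName $SQLServiceName -Name $VM | Add-AzureEndpoint -Name "SQLAGEndPoint" -LBSetName $SQLILBName -Protocol tcp -LocalPort $AGListenerPort -PublicPort $AGListenerPort -ProbePort $AGProbePort -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -InternalLoadBalancerName $SQLILBName -DirectServerReturn $true | Update-AzureVM
}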

Configuration

  • Prior to configuring the Workflow Manager farm, ensure all the Azure VMs are provisioned as part of a cloud service, and install Workflow Manager and Service Bus using the refresh versions (including Service Bus 1.1). All nodes in the farm should have the same bits installed; otherwise your configuration may fail.
  • Ensure each host has a certificate issued by a domain CA and that the certificate is installed on all three nodes (a quick way to verify this is sketched after this list). The certificate must be trusted by the SharePoint servers; otherwise you will get a JSON download error message when you try to register and configure the Workflow Manager proxy.
  • Ensure you have a DNS Host (A) entry created in the domain, pointing to the ILB IP address and matching the common name on the certificate issued by the domain CA (create this entry after you create the Internal Load Balancer).
  • The Workflow Manager installer account must be a local administrator on all three farm nodes.
  • The Workflow Manager installer account must have the sysadmin role in SQL Server. Although I have made this work with the dbcreator and securityadmin roles in the past, the Workflow Manager articles still suggest sysadmin.
  • The Workflow Manager RunAs account must be a domain account and have the "Log on as a service" right (make sure your domain policies do not override this security policy).
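
As a quick sanity check before configuration, you can confirm the certificate is present on every node. The following is a minimal sketch, assuming PowerShell remoting is enabled on the nodes; the node names and common name are placeholders.

# Minimal sketch: confirm the domain CA certificate is installed on each node
# (node names and common name are placeholders)
$WFNodes = "WF01", "WF02", "WF03"
$CommonName = "CN=workflow.contoso.com"

Invoke-Command -ComputerName $WFNodes -ScriptBlock {
    param($CN)
    Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -eq $CN } | Select-Object Subject, Thumbprint, NotAfter
} -ArgumentList $CommonName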

Create the Internal Load Balancer in Azure

For this step you’ll require:

  • The Cloud Service Name
  • The VNet Subnet (for assigning the IP address to the ILB)

Run the following PowerShell to create the ILB:

# Internal Load Balancer settings - replace these with your own environment settings
$ILBName = "ILBSBWorkflow"
$ILBSubnet = "WFSubNet"
$EndPointName = "IntSBWFEndPoint"
$WFHTTPSPort = "12290"
$ProbeInterval = "20"
$ServiceName = "WORKFLOWCSVC"
$WFVM1 = "WF01"
$WFVM2 = "WF02"
$WFVM3 = "WF03"

# Add the Internal Load Balancer to the cloud service
Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -SubnetName $ILBSubnet -ServiceName $ServiceName
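
If you want the ILB to receive a specific IP address rather than one assigned by DHCP, later builds of the Azure PowerShell module accept a -StaticVNetIPAddress parameter on Add-AzureInternalLoadBalancer; verify its availability against your module version.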

Expose and Assign the Endpoints to the ILB

Run the following PowerShell to create and expose the endpoint on the virtual machines that participate in the load-balanced set.

# Add load balanced endpoints to ILB for Workflow Manager

Get-AzureVM -ServiceName $ServiceName -Name $WFVM1 | Add-AzureEndpoint -Name $EndPointName -LBSetName $ILBName -Protocol tcp -LocalPort $WFHTTPSPort -PublicPort $WFHTTPSPort -ProbePort $WFHTTPSPort -ProbeProtocol tcp -ProbeIntervalInSeconds $ProbeInterval -InternalLoadBalancerName $ILBName | Update-AzureVM
Get-AzureVM -ServiceName $ServiceName -Name $WFVM2 | Add-AzureEndpoint -Name $EndPointName -LBSetName $ILBName -Protocol tcp -LocalPort $WFHTTPSPort -PublicPort $WFHTTPSPort -ProbePort $WFHTTPSPort -ProbeProtocol tcp -ProbeIntervalInSeconds $ProbeInterval -InternalLoadBalancerName $ILBName | Update-AzureVM
Get-AzureVM -ServiceName $ServiceName -Name $WFVM3 | Add-AzureEndpoint -Name $EndPointName -LBSetName $ILBName -Protocol tcp -LocalPort $WFHTTPSPort -PublicPort $WFHTTPSPort -ProbePort $WFHTTPSPort -ProbeProtocol tcp -ProbeIntervalInSeconds $ProbeInterval -InternalLoadBalancerName $ILBName | Update-AzureVM

Create your DNS host entry to point to the load-balanced address for Workflow Manager. To determine the IP address of the ILB (if you did not assign one yourself when you created it), you can run the following PowerShell:

Get-AzureInternalLoadBalancer -ServiceName $ServiceName
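
If you manage DNS on a Windows Server DNS server, the record can also be created from PowerShell. This is a minimal sketch assuming the DnsServer module; the zone, record name and DNS server name are placeholders, and you can just as easily create the record in the DNS console.

# Minimal sketch: create the Host (A) record for the Workflow Manager ILB
# (zone, record and server names are placeholders)
$ILBIP = (Get-AzureInternalLoadBalancer -ServiceName $ServiceName).IPAddress
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "workflow" -IPv4Address $ILBIP -ComputerName "DC01"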

Install and configure the first Workflow Manager node, then join the remaining nodes to the Workflow Manager farm. Verify that the farm is operational by running the following PowerShell:

Get-SBFarmStatus

Get-WFFarmStatus

Register the Workflow Manager Proxy

$SPWebApp = "https://webapplication"
$SPWorkflowHostURI = "https://WORKFLOWADDRESS:12290"
Register-SPWorkflowService -SPSite $SPWebApp -WorkflowHostUri $SPWorkflowHostURI
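
To confirm the registration took, you can ask the workflow service application proxy which host it resolved for the site. A minimal check from the SharePoint 2013 Management Shell; it should return the load-balanced Workflow Manager address.

# Minimal check: confirm the proxy resolves the load balanced Workflow Manager address
$site = Get-SPSite $SPWebApp
$proxy = Get-SPWorkflowServiceApplicationProxy
$proxy.GetHostname($site)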

Running SharePoint 2013 in Azure IaaS

Moving your SharePoint Infrastructure to the Cloud? Have a requirement to host SharePoint in an IaaS solution? Look no further.

Microsoft has several virtual machine sizes in Windows Azure IaaS that suit the requirements of SharePoint Server 2013, and has also certified Microsoft SharePoint Server 2013 to run in Windows Azure IaaS.

The whitepaper linked below details the best practices for deploying a SharePoint infrastructure in Windows Azure:
SharePoint 2013 on Windows Azure Infrastructure Services
I have highlighted the important aspects of running SharePoint Server 2013 in Windows Azure IaaS below:

Supporting Infrastructure

  • Deploy a virtual network and a gateway in Windows Azure and create a site-to-site VPN, if you have not done so already.
  • Ensure that you have at least two domain controllers running in Azure IaaS to support the IaaS-deployed infrastructure. They should belong to their own availability set.
  • If you plan to deploy a new forest you can review the Windows Azure documentation Install a new Active Directory forest in Windows Azure.
  • Never shut down (deallocate) your domain controller in Windows Azure; shut down and restart from within the guest only. A deallocated VM gives up its DHCP lease, so the domain controller can come back from a cold boot with a new IP address. One way to guard against this is sketched below.
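
For the domain controller point above, you can reserve a static VNet IP address so the machine keeps its address even across a deallocation. A minimal sketch with the classic Azure PowerShell module; the VNet, cloud service, VM name and IP address are placeholders.

# Check the address is free in the virtual network first
# (VNet name and IP are placeholders)
Test-AzureStaticVNetIP -VNetName "AzureVNet" -IPAddress "10.0.0.4"

# Reserve the static VNet IP address for the domain controller
# (service and VM names are placeholders)
Get-AzureVM -ServiceName "ADCSVC" -Name "DC01" | Set-AzureStaticVNetIP -IPAddress "10.0.0.4" | Update-AzureVM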

SharePoint 2013

  • Ensure that each SharePoint role belongs to an appropriate availability set for high availability, so that the roles are separated across fault domains and do not all go down during host maintenance (see the sketch after this list).
  • Be aware of the network throughput and virtual disk limits for each VM size; this is important for getting the correct throughput and the right number of data disks per role.
  • Never store anything on the temporary space drive (D:).
  • Ensure that your infrastructure points to the DNS servers defined in your VNet configuration.
  • Use the Windows Azure load balancer; SharePoint 2013 no longer requires sticky sessions, so a load balancer without session affinity is supported.
  • Create gold images of your SharePoint servers so that you can deploy them from the Windows Azure Virtual Machine Gallery.
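
For the availability set point above, VMs for a given role can be placed into the same availability set when they are created or retrofitted afterwards. A minimal sketch with the classic Azure PowerShell module; the service, VM and availability set names are placeholders.

# Minimal sketch: place the web front ends in one availability set
# (service, VM and availability set names are placeholders)
foreach ($VM in "SPWFE01", "SPWFE02")
{
    Get-AzureVM -ServiceName "SPCSVC" -Name $VM | Set-AzureAvailabilitySet -AvailabilitySetName "SPWFEAVSet" | Update-AzureVM
}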

SQL Server 2012

This information should provide you with the basics of getting started on your journey to deploying a SharePoint farm in Windows Azure.

If you already have a private cloud built on Microsoft Windows Server Hyper-V, I would highly recommend deploying System Center Virtual Machine Manager and App Controller, as a minimum, to manage your private and public cloud infrastructure. The other System Center products should also be reviewed, planned and deployed accordingly, since few solutions in the marketplace provide the comprehensive functionality that System Center does for deploying and managing Microsoft public and private clouds.