Category Archives: Cloud Computing

CarShop .NET Core Blazor Project – Part 1

Over the past few months I have been working on a sample project, which will eventually be published to my GitHub repo. The project focuses on Microsoft Entity Framework with an Azure SQL database and uses Blazor as the underlying UI and logic layer, including the DevExpress Blazor UI components. This initial post describes the project and its capabilities.

The CarShop project came from wanting to build something new this year and to write a series of articles around it, rather than just one or two posts. This will let me provide updates at various intervals and, at some stage, publish the code.

Why a CarShop?

I have been a car fan for many years, so this seemed like an exciting project to work on this year. With Blazor, .NET and DevExpress being some of my favourite development frameworks, it felt like an ideal fit.

The database schema

Since I am using Microsoft Entity Framework in the Blazor Visual Studio project, the schema was exactly where I wanted to start. As the iterations developed, I decided to go straight into Azure SQL to provision my tables, relationships, primary keys and foreign keys. The project needs to store car details (at a basic level), customer details, car manufacturers, car models, fuel types, engine sizes and so on. Whilst it is a simple model to start with, it is easy to expand the schema as I see fit, both in the SQL backend and in the data classes in code.

For this post I will show an example of the schema, which is below, produced by dbForgeStudio 2022 for SQL Server.

CarShop Schema – Developed using dbForgeStudio 2022 for SQL Server

Since this is a relatively simple sample project, the data is held in a single Azure SQL database. As you can see, the Vehicles table has the most relationships, linking to fuel types, vehicle status, engine sizes, colours, models and manufacturers. For the Customers table, I've kept the design simple for now, although I intend to expand it into a scenario with data quality checks and periodic checks on when the customer data was last updated, for reasons I will cover in a future post.
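To give a feel for how the schema maps onto Entity Framework data classes, here is a minimal, hypothetical sketch of a few of the entities; the class and property names are illustrative only and the real model will differ.

using System.Collections.Generic;

// Hypothetical CarShop entities mirroring part of the schema above (illustrative only).
public class Vehicle
{
    public int VehicleId { get; set; }
    public int ManufacturerId { get; set; }
    public Manufacturer Manufacturer { get; set; }
    public int FuelTypeId { get; set; }
    public FuelType FuelType { get; set; }
    public decimal EngineSize { get; set; }
    public decimal Price { get; set; }
}

public class Manufacturer
{
    public int ManufacturerId { get; set; }
    public string Name { get; set; }
    public ICollection<Vehicle> Vehicles { get; set; }
}

public class FuelType
{
    public int FuelTypeId { get; set; }
    public string Name { get; set; }
    public ICollection<Vehicle> Vehicles { get; set; }
}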

Part 2 will focus on the Transact-SQL, so that the schema can be provisioned.


Blazor File Uploads – to Azure Blob Storage

The Intro…

You might wonder: you're an Architect, so why are you posting blogs about coding? Good question, I would say 🙂 Working as an Architect is my day job, but I have many different ideas in my head and some of them just require a bit more thought, where I might need to build a PoC. Sometimes, as with this post, I like to try out new frameworks and libraries, or just build samples for fun. Having already coded in .NET for well over a decade doesn't stop me from wanting to keep my coding skills up to date, especially C#, or from trying out different architectural patterns to understand how they can be applied in software applications, hybrid architectures and cloud-native architectures, or to help a project I am working on at a specific point in time. From a work perspective, all things Microsoft are highly important to me, especially since I consider myself lucky enough to have spent a fair bit of time working at Microsoft Canada, Microsoft UK and in Seattle at Microsoft HQ, and at Microsoft conferences being trained and kept up to date on technologies, which were not always about Microsoft Azure of course.

Now back to this post…

With the recent updates announced at .NET Conf 2020 last week, I decided to try out the new InputFile UI component in Blazor. Steve Sanderson had already posted a blog about an InputFile component here back in September 2019, and it is now included with the launch of .NET 5. I have also been using the DevExpress Blazor UI component library and its <DXUpload/> component, which I also highly recommend.

In this post I wanted to try out the new native InputFile component and upload some files to Azure blob storage for testing purposes. I am working on a project which will require basic file upload and download functionality to and from Azure blob storage, and I may write a future post about one of the new and exciting projects I am working on.

Note: This is a very basic implementation for the purposes of this blog post; it is not at all production ready.

Nuget Packages

With any .NET project, the first thing you may want to do is decide how you will build out your components. You may already have shared libraries, component libraries or NuGet packages you work with regularly, or you may prefer building your own supporting toolsets and templates. You may not even need to include anything at the outset, and can simply add packages as you build out your solution in an agile way.

In my sample project, a Blazor Server application called BlazorFileUploader, I used the following NuGet packages: the Azure.Storage.Blobs client library and DevExpress.Blazor.

Note: As I mentioned earlier in this article, I have been testing the DevExpress <DXUpload> UI component, but it is not used in this post.
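If you prefer the command line, the same packages can be added with the dotnet CLI from the project folder. This is just a sketch with version numbers omitted; note that DevExpress.Blazor is served from the DevExpress NuGet feed, which needs to be registered first.

# Run from the Blazor Server project folder
dotnet add package Azure.Storage.Blobs
dotnet add package DevExpress.Blazor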

The code…

Appsettings.json

The following configuration was added for the Azure blob storage connection string, under the ConnectionStrings section so that GetConnectionString can resolve it in Startup.cs. Of course, in an enterprise scenario, if you're hosting your application in Azure App Service, using Azure Key Vault instead is recommended.

"ConnectionStrings": {
  "StorageAccountConnectionString": "DefaultEndpointsProtocol=https;AccountName=[YourStorageAccountName];AccountKey=[YourStorageAccountKey];EndpointSuffix=core.windows.net"
},
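As a rough sketch of the Key Vault approach (an assumption on my part, using the Azure.Identity and Azure.Extensions.AspNetCore.Configuration.Secrets packages and a placeholder vault name), the secret could be pulled in as a configuration source when the host is built, instead of living in appsettings.json. A secret named ConnectionStrings--StorageAccountConnectionString maps to the same configuration key, so the rest of the code keeps working unchanged.

using System;
using Azure.Identity;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                // "my-keyvault-name" is a placeholder; DefaultAzureCredential picks up a
                // managed identity in App Service, or your developer credentials locally.
                config.AddAzureKeyVault(
                    new Uri("https://my-keyvault-name.vault.azure.net/"),
                    new DefaultAzureCredential());
            })
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}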

Startup.cs

Using the default configuration provider, the connection string is read at startup and stored in a static config class.

public class Startup
{
    public IConfiguration Configuration { get; }
    readonly string MyAllowSpecificOrigins = "_myAllowSpecificOrigins";
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
        Config.AzureStorageConnectionString = Configuration.GetConnectionString("StorageAccountConnectionString");
    }
....

Config.cs

The AzureStorageConnectionString property is set by the code above, so it can be used when the upload is handled on the server side.

namespace BlazorAzureStorageFileLoader.Data
{
    // Holds the storage connection string read from configuration in Startup.cs
    public class Config
    {
        public static string AzureStorageConnectionString { get; set; }
    }
}

New Razor page

A new Razor page is created to handle the file uploads; it contains the bulk of the code, including the <InputFile/> component.

Note: I hard coded a container named “files” in the code. This is not necessary of course; you could build your own blob storage browser and add methods to retrieve files and create containers (a small sketch of this follows the page code below).

@page "/fileloader"
@using System.IO
@using Azure.Storage.Blobs

<h4>Blob Storage File Loader</h4>

<InputFile OnChange="@UploadFiletoAzBlobStorage" />

<p>@status</p>

@if (fileSelected)
{
<p>
    <div class="spinner-border" /><h5>Uploading file...</h5>
</p>
}

@code {

    string status;
    bool fileSelected = false;


    async Task UploadFiletoAzBlobStorage(InputFileChangeEventArgs e)
    {
        var file = e.File;

        if (file != null)
        {

            fileSelected = true;

            string connectionString = Data.Config.AzureStorageConnectionString;

            // Max file size ~ 50 MB
            long maxFileSize = 1024 * 1024 * 50;

            BlobContainerClient container = new BlobContainerClient(connectionString, "files");

            try
            {
                BlobClient blob = container.GetBlobClient(file.Name);
                using (Stream fs = file.OpenReadStream(maxFileSize))
                {
                    await blob.UploadAsync(fs);
                }
                status = $"Finished loading {file.Size / 1024 / 1024} MB from {file.Name}";
            }
            catch (Exception ex)
            {
                // Report failures (e.g. file too large or storage errors) instead of swallowing them
                status = $"Upload failed: {ex.Message}";
            }
            finally
            {
                fileSelected = false;
                StateHasChanged();
            }
        }
        else
        {
            status = $"No file selected!";
        }
    }
}
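As a small optional extension of the note above, and purely as a sketch (these helper methods are not part of the page shown), the container could be created on demand and existing blobs listed with the same Azure.Storage.Blobs client:

using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static class BlobHelpers
{
    // Create the "files" container if it does not already exist.
    public static async Task EnsureContainerAsync(string connectionString, string containerName)
    {
        var container = new BlobContainerClient(connectionString, containerName);
        await container.CreateIfNotExistsAsync();
    }

    // Return the names of all blobs currently in the container.
    public static async Task<List<string>> ListBlobNamesAsync(string connectionString, string containerName)
    {
        var container = new BlobContainerClient(connectionString, containerName);
        var names = new List<string>();
        await foreach (var blobItem in container.GetBlobsAsync())
        {
            names.Add(blobItem.Name);
        }
        return names;
    }
}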

Shared\NavMenu.razor

The navigation menu is updated to remove the default links, the application display name is changed, and a new navigation link is added for the fileloader Razor page.

<div class="top-row pl-4 navbar navbar-dark">
    <a class="navbar-brand" href="">Azure Storage FileLoader</a>
    <button class="navbar-toggler" @onclick="ToggleNavMenu">
        <span class="navbar-toggler-icon"></span>
    </button>
</div>

<div class="@NavMenuCssClass" @onclick="ToggleNavMenu">
    <ul class="nav flex-column">
        <li class="nav-item px-3">
            <NavLink class="nav-link" href="" Match="NavLinkMatch.All">
                <span class="oi oi-home" aria-hidden="true"></span> Home
            </NavLink>
        </li>
        @*<li class="nav-item px-3">
            <NavLink class="nav-link" href="counter">
                <span class="oi oi-plus" aria-hidden="true"></span> Counter
            </NavLink>
        </li>
        <li class="nav-item px-3">
            <NavLink class="nav-link" href="fetchdata">
                <span class="oi oi-list-rich" aria-hidden="true"></span> Fetch data
            </NavLink>
        </li>*@
        <li class="nav-item px-3">
            <NavLink class="nav-link" href="fileloader">
                <span class="oi oi-list-rich" aria-hidden="true"></span> File Loader
            </NavLink>
        </li>
    </ul>
</div>

@code {
    private bool collapseNavMenu = true;

    private string NavMenuCssClass => collapseNavMenu ? "collapse" : null;

    private void ToggleNavMenu()
    {
        collapseNavMenu = !collapseNavMenu;
    }
}

The Test

The InputFile UI component is shown on the fileloader.razor page.

I choose a local file to upload.

The spinner is shown with the filename. The code which displays this in the razor page is highlighted below.

<p>@status</p>

@if (fileSelected)
{
<p>
    <div class="spinner-border" /><h5>Uploading file...</h5>
</p>
}

The file load to Azure blob storage is completed.

The blob is shown in the Azure storage account blob container.

That's the simplest and quickest way to use the new <InputFile/> UI component in Blazor with .NET 5 together with Azure blob storage.

I launched Cloud Release on Google Play

On the 8th of November I submitted an Android app named Cloud Release to Google Play. Google published the app to the store on the 11th of November and it is currently available in 10 countries.

As a Microsoft Azure Cloud Architect, it is important for me to keep up to date with all the latest previews, developments and releases on the Microsoft Azure cloud platform, which is why I created the Android app: so that I can easily stay informed of all Microsoft Azure updates. I've kept the layout simple, with a list view and a web browser view that launches directly into the published article from Microsoft. This lets me keep up to date with all recent developments in Microsoft Azure in one place. I developed the app during my train journeys and in some of my spare time in the evenings, and I'm delighted to release it free on the Google Play store. I'll be releasing new features over time, but for now this is what the app provides:

  • Synchronisation of the latest articles on the application launch
  • A local database is used for all downloaded articles
  • Quick search feature to query the local database
  • Direct Web browser view, to review the full article on the Microsoft docs web site

You can download the app free from the Google Play store here.

Enjoy!

Azure VNet Peering

I've recently been reviewing VNet peering for Azure in detail. If you have VNets connected together with the old VNet-to-VNet VPN gateway method today, it's time to switch to VNet peering, especially now that it is generally available. This only applies to VNets in the same region, as VNet peering is not currently available across Azure regions, e.g. between UK West and UK South.

VNet peering allows you to connect two or more Azure VNets together in a few simple steps, versus the old method of provisioning gateway subnets and VPN gateways. You also don't have to provision a VPN gateway for VNet peering, as the traffic travels across the Azure backbone rather than through a site-to-site (S2S) IPsec VPN tunnel. This has numerous benefits, since regional virtual networks provide the infrastructure for high-bandwidth use, whereas VNet-to-VNet connections were always limited by the type of gateway that was provisioned.

You can follow the Microsoft Azure article below if you still need to provision VNet-to-VNet connections using VPN gateways, e.g. because you need to connect two VNets together in different Azure regions.

Configure a VNet-to-VNet connection using the Azure portal

Here's my example of how to create two Azure VNets and peer them using PowerShell. I'll wrap these into my custom Azure PowerShell module as a couple of functions, with the appropriate parameter input types, for future use.


# Login to Azure
Login-AzureRmAccount
# Use Select-AzureRmSubscription to select the subscription you want to create the VNets into

# New resource group in the location where we will hold all the resources
New-AzureRmResourceGroup -Name RG-VNETS -Location ukwest

# Create Virtual Network A with a subnet in the UK West Region
$NSGSubNetA = New-AzureRmNetworkSecurityGroup -Name NSG-SubNetA -ResourceGroupName RG-VNETS -Location ukwest
$SubNetA = New-AzureRmVirtualNetworkSubnetConfig -Name SubnetA -AddressPrefix 10.1.1.0/24 -NetworkSecurityGroup $NSGSubNetA
$VNetA = New-AzureRmVirtualNetwork -Name VNETA -ResourceGroupName RG-VNETS -Location ukwest -AddressPrefix 10.1.0.0/16 -Subnet $SubNetA

# Create Virtual Network B with a subnet in the UK West Region
$NSGSubNetB = New-AzureRmNetworkSecurityGroup -Name NSG-SubNetB -ResourceGroupName RG-VNETS -Location ukwest
$SubNetB = New-AzureRmVirtualNetworkSubnetConfig -Name SubnetB -AddressPrefix 10.2.1.0/24 -NetworkSecurityGroup $NSGSubNetB
$VNetB = New-AzureRmVirtualNetwork -Name VNETB -ResourceGroupName RG-VNETS -Location ukwest -AddressPrefix 10.2.0.0/16 -Subnet $SubNetB

# Add peering VNETA to VNETB (this initiates the peering)
Add-AzureRmVirtualNetworkPeering -Name Peering-VNETA-to-VNETB -VirtualNetwork $VNETA -RemoteVirtualNetworkId $VNETB.Id

Notice that the peering status is initiated.

[Screenshot: the VNet peering status shows Initiated]

We now need to create the peering from VNETB to VNETA.


# Add peering VNETB to VNETA (this completes the peering)
Add-AzureRmVirtualNetworkPeering -Name Peering-VNETB-to-VNETA -VirtualNetwork $VNETB -RemoteVirtualNetworkId $VNETA.Id

The peering is now complete.

[Screenshot: the VNet peering status shows Connected]

Be aware of the additional settings that are available to peering connections shown below.

[Screenshot: VNet peering connection settings]

You can enable or disable virtual network access from a peered VNet.

Allow forwarded traffic: allows traffic that was forwarded by the peered VNet (i.e. traffic that did not originate inside it) into the local VNet.

Allow gateway transit: allows the peered VNet to use the gateway in the local VNet. The peered network must have the “Use remote gateways” option enabled, and this option is only available if the local VNet has a gateway configured.

Use remote gateways: the VNet will use the gateway of the remote peered VNet, but the remote VNet must have the “Allow gateway transit” option enabled.
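For reference, the same options can be set from PowerShell when a peering is created. The sketch below is illustrative only, reusing the VNets from earlier; the gateway-related switches only make sense if VNETA actually has a gateway provisioned.

# Illustrative only: peer VNETA to VNETB allowing forwarded traffic and offering gateway transit
Add-AzureRmVirtualNetworkPeering -Name Peering-VNETA-to-VNETB -VirtualNetwork $VNetA -RemoteVirtualNetworkId $VNetB.Id -AllowForwardedTraffic -AllowGatewayTransit

# The reverse peering can then use the remote gateway in VNETA
Add-AzureRmVirtualNetworkPeering -Name Peering-VNETB-to-VNETA -VirtualNetwork $VNetB -RemoteVirtualNetworkId $VNetA.Id -AllowForwardedTraffic -UseRemoteGateways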

Important

You cannot daisy-chain VNets and expect them all to act as one address space with routing between them, regardless of which of the options above you enable. Daisy-chaining isn't good networking practice in any case. I recommend implementing a hub-and-spoke topology with the appropriate security controls, and placing Network Virtual Appliances (NVAs) in the hub, as that gives you the most flexibility for controlling network traffic between the VNets and subnets. Managing lots of NSGs can be very cumbersome, but this depends on your release mechanism and experience, as you could still use ARM templates to manage updates to NSGs.

The most common NVAs are CheckPoint vSEC, Barracuda NextGen and Fortinet FortiGate. For further information see the Microsoft Azure Docs article below.

Deploying high availability network virtual appliances

For further information on VNet Peering, you can review Microsoft Azure Docs overview article below.

VNet Peering

 

Shared Responsibilities: Cloud Computing

Whilst implementing security controls in Microsoft Azure, it is also important to understand the responsibilities that are shared between the cloud service provider and the customer, and what the customer can configure and control in terms of networking and security for the services they consume. Responsibilities change when you work with SaaS, PaaS and IaaS. It's also important to understand how Microsoft handles security response and the process that is followed.

Alice Rison, Senior Director, Microsoft Azure, has just published details of two whitepapers recently released to provide insight into shared responsibility and security response at Microsoft.

The published papers can be found linked to this announcement here: Microsoft Incident Response and shared responsibility for cloud computing

Azure NSG Rules: Beware

Recently, I reviewed an issue with a load balancer that was not working correctly in Azure IaaS. The load balancer was created in Azure Resource Manager (ARM) and was load balancing a SQL Server AlwaysOn Availability Group (AOAG) listener. Client connections failed to reach the standard SQL Server TCP port 1433 through the load balancer, but could connect when pointed directly at the host.

After a fair amount of troubleshooting, it turned out that Network Security Group (NSG) rules were preventing the load balancer from working correctly. The default rules in NSGs sit at the lowest priority (the highest numeric values) and do allow load balancers to reach the local virtual networks. This is defined in the default rule highlighted below, where the source tag AzureLoadBalancer is allowed to any destination on any service/port.

[Screenshot: Azure NSG default inbound rules]

The rule which blocked the load balancer from functioning correctly is shown below.

[Screenshot: the inbound NSG rule blocking the Internet source tag]

Creating a rule that blocked the Internet source tag, with a destination of any and any service and source/target port ranges, rendered the load balancer inoperable. The rule is actually not required, since the default DenyAllInBound rule already blocks internet traffic; in any case, you would need to load balance or NAT traffic from the internet to the local subnet or host for that traffic to pass through at all.
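If the intention was simply to lock down inbound SQL traffic, a narrower rule does the job without a blanket Internet deny. The sketch below is illustrative only (the names, priority and resource group are placeholders); it allows TCP 1433 from within the virtual network and leaves the default AllowAzureLoadBalancerInBound and DenyAllInBound rules to handle the rest.

# Allow SQL (TCP 1433) from the virtual network only; load balancer probes are
# covered by the default AllowAzureLoadBalancerInBound rule and everything else,
# including the internet, is blocked by the default DenyAllInBound rule.
$sqlRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-SQL-From-VNet" -Description "SQL listener traffic from within the VNet" -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 -SourceAddressPrefix VirtualNetwork -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 1433

New-AzureRmNetworkSecurityGroup -Name "NSG-SQL-Subnet" -ResourceGroupName "MyResourceGroup" -Location "ukwest" -SecurityRules $sqlRule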

When you define your NSGs and apply these to subnets or host network interfaces, be aware that you have the capability to block Azure services from working correctly.

Manage the security architecture carefully, design your NSGs based on your requirements, and be aware of the default NSG rules that are applied implicitly.

Azure IaaS Guidance, the basics…

Planning and design in Azure IaaS is just as important as it was in the days when you designed your own on-premises infrastructure. Azure has many capabilities, which all rely on dependent resource models, when you design your application infrastructure to support your projects. With any platform, private or public cloud, you need to ensure that you follow the appropriate guidance and the principles that represent best practice for successful cloud projects. As you would with any infrastructure design project, there are common components which are key to how you design your applications to achieve operational excellence and performance. Some of the key basic elements are outlined below.

  • Data Center → Azure IaaS
  • Physical Machine/Virtual Machine → Azure Virtual Machine
  • SAN → Available through Storage Accounts (Tables, Queues, Blobs)
  • LAN → Multiple Virtual Networks with Subnets
  • WAN/VPN enablement → ExpressRoute/VPN Gateway
  • Network Security Policies → Network Security Groups

 

Ingredient 1: VM Sizing

With all the current VM sizes in Azure, it can be complicated to size your requirements up front. It's important to review your application requirements and how you intend to deploy your overall architecture and instances to support your needs throughout the project and application lifecycle. If you don't need load balancing and you have simple VM requirements, you can start with the A-series VMs. If you need something more powerful, with additional features like load balancing, you'll need to move to the Standard tier VMs. For compute-intensive applications like SQL Server, especially if you are expecting high transactional loads, you should move up to at least the D- or DS-series VMs. The G- and GS-series offer the most performance, with more disks, larger temporary disks, more RAM and the most powerful processors available today. Each VM tier has different capabilities in terms of the number and speed of CPU cores, disk bandwidth, and the maximum number of attached data disks and virtual network interfaces. You also need to be aware of the disk and I/O limits of each VM size: the number of data disks you can attach and the maximum IOPS each VM can support with the appropriate storage account type.

It's important to note that Premium Storage is only available with the DS- and GS-series VMs; these VMs also use solid state drives locally for improved I/O, just like Premium Storage itself.
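To check exactly what each size offers in a given region (cores, memory and the maximum number of data disks), the Azure PowerShell module can list the sizes; a quick illustrative example:

# List the VM sizes available in UK West with cores, memory and max data disks
Get-AzureRmVMSize -Location "ukwest" | Sort-Object Name | Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount -AutoSize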

For further information, see https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-size-specs/.

Ingredient 2: Storage

Once you've sized your VMs, storage is key to application performance and is driven by your IOPS requirements. If you have an idea of the IOPS requirements for your applications, then designing your storage is much simpler. Azure storage comes with different capabilities: tables, queues and blobs. For VMs, you will use storage accounts with blob storage, and each Virtual Hard Disk (VHD) is stored as a page blob. With Azure storage you have the choice of Locally Redundant Storage (LRS), Zone Redundant Storage (ZRS), Geo-Redundant Storage (GRS) and Read-Access Geo-Redundant Storage (RA-GRS). All of these redundancy options need to be taken into consideration when you design your storage strategy in Azure for your VMs.

You also need to be aware of how many VHDs a storage account can sensibly support for your VMs. A standard storage account can hold up to 500 TB, which could accommodate many VHDs, but you still need to determine how many storage accounts you need as part of your design. For example, each VM tier has a maximum IOPS per disk, and a standard storage account can only support a limited number of highly utilised disks; if you place all the VHDs for multiple VMs and data disks into one storage account, you will end up throttling your throughput. Premium Storage with the DS- and GS-series VMs considerably improves latency, but the capacity is currently limited to 35 TB per storage account.

With standard tier storage you pay for the data actually written to the VHD, regardless of the VHD size (each up to 1023 GB). With Premium Storage, you create the data disks and are charged for the full provisioned size up front, unlike standard storage.

For further information, see https://azure.microsoft.com/en-gb/documentation/articles/storage-scalability-targets/ and https://azure.microsoft.com/en-gb/documentation/articles/storage-introduction/.

Ingredient 3: Network

How you design your virtual networks and subnets is important from a number of aspects. You need to think about isolating your resources, how they will be accessed and by whom. The most common approach I have used is to group resources into subnets that make sense from an application perspective and tie these to security policies. For example, you wouldn't expose VMs to a public IP address in a subnet if they don't need to be part of a traditional DMZ in your data centre; you would do exactly the same in Azure virtual networks and segment your virtual network design around your applications and their accessibility requirements. Another way to think about segmentation is to have front-end, middle-tier and back-end subnets for each group of resources, as sketched below.
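A minimal sketch of that segmentation, using the same AzureRM cmdlets as in earlier posts and assuming one VNet carved into front-end, middle-tier and back-end subnets (the names and address ranges are illustrative):

# One VNet, three application-aligned subnets (illustrative address space)
$frontEnd = New-AzureRmVirtualNetworkSubnetConfig -Name "FrontEnd" -AddressPrefix 10.10.1.0/24
$middle = New-AzureRmVirtualNetworkSubnetConfig -Name "MiddleTier" -AddressPrefix 10.10.2.0/24
$backEnd = New-AzureRmVirtualNetworkSubnetConfig -Name "BackEnd" -AddressPrefix 10.10.3.0/24

New-AzureRmVirtualNetwork -Name "APP-VNET" -ResourceGroupName "MyResourceGroup" -Location "ukwest" -AddressPrefix 10.10.0.0/16 -Subnet $frontEnd, $middle, $backEnd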

For further information, see https://azure.microsoft.com/en-gb/documentation/articles/virtual-networks-overview/.

For on-premises connectivity you should consider ExpressRoute or VPN options with VPN gateways. Each solution offers its own benefits.

For further information, see https://azure.microsoft.com/en-gb/documentation/articles/vpn-gateway-about-vpngateways/

Ingredient 4: Security

Security can be managed in different ways in Azure: with virtual appliances available through the VM Depot, network security groups, endpoints, load balancers, firewalls, group policies and so on. This is a very large area to consider and is of the highest importance in any environment. As part of your blueprint and building out your logical design, give deep consideration to network security groups, which can be applied to VMs, network interfaces and subnets. A series of Access Control Lists (ACLs) can be applied to these resources as part of a defence-in-depth strategy. Cloud security should be layered, from the VMs and the VM firewall through network isolation and access control lists to any protection that virtual appliances can provide. You can also force-tunnel connections through your on-premises environment if you are extending to Azure from your data centre, and manage internet-bound traffic with on-premises network security appliances.

For further information, see https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-nsg/.

Ingredient 5: Deployment

The deployment aspects of the cloud can be achieved in different ways. You can use the Azure portal, the Azure PowerShell cmdlets, the Azure Command Line Interface (CLI) and, in combination with any of these, the highly recommended deployment method of Azure Resource Manager (ARM) templates. In Azure IaaS v1 we focused on cloud services as a container for resources; in ARM (IaaS v2) you focus on the whole application, or on the pieces of the deployment that make up your application architecture. There are a number of ARM templates available on GitHub to get you started. I would recommend building your own ARM templates first, to understand how the templates glue together, before using published templates which can be complex. ARM allows you to define your blueprint and deploy it in a repeatable, consistent manner, which is perfect for testing your deployments throughout the application lifecycle. There are certain elements to be aware of when you use ARM; for example, your storage account deployment will fail if the name is not 3-24 lowercase alphanumeric characters, something which isn't flagged by Visual Studio when you develop the template, so be aware of the naming rules you need to follow.
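As an illustrative sketch of a repeatable ARM deployment (the template and parameter file names below are placeholders), the same template can be deployed again and again into a resource group; any naming rule violations, such as an invalid storage account name, surface at this point rather than in Visual Studio.

# Deploy an ARM template and parameters into a resource group
New-AzureRmResourceGroup -Name "MyAppRG" -Location "ukwest"

New-AzureRmResourceGroupDeployment -Name "MyAppDeployment" -ResourceGroupName "MyAppRG" -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json"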

For more information, see https://azure.microsoft.com/en-gb/documentation/articles/resource-group-overview/.

For Quickstart ARM templates to get you up and running, see https://azure.microsoft.com/en-gb/documentation/templates/.

That's the end of the IaaS basics starter post for today; I hope you found it informative.

SharePoint 2013 Hybrid Disaster Recovery Approach with Azure IaaS

Over the past few months I've been working with the content team in Redmond to publish a hybrid disaster recovery approach for SharePoint 2013, using SQL Server AlwaysOn Availability Groups with Microsoft Azure. Today the article was published and can be found below.

Plan for SQL Server AlwaysOn and Microsoft Azure for SharePoint Server 2013 Disaster Recovery
https://technet.microsoft.com/en-us/library/mt607084.aspx

I'd like to acknowledge the following people, who provided valuable assistance in testing, reviewing and publishing the article on TechNet with me over the past few months, in order to get it published in the detailed form it is in today.

Neil Hodgkinson, Senior Program Manager

Dan Wesley, Senior Content Editor

Steve Hord, Senior Content Editor

David Williams, SQL Server Expert

Matthew Robertshaw, SQL Server Expert

Azure IaaS: Workflow Manager Farm

Recently, I had a requirement to host and test a Workflow Manager farm in Azure IaaS with SharePoint 2013. The key element here was to ensure high availability for the SharePoint farm and the Workflow Manager farm as part of the design. This can be achieved by spreading your farm tiers appropriately across availability sets, so that the appropriate roles remain available during host patching. This, of course, includes the database tier, where you could utilize SQL Server AlwaysOn Availability Groups in synchronous commit mode to achieve maximum uptime.

Cloud Services

Generally, you would want to have separation between the SharePoint farm, the SQL infrastructure and the Workflow Manager farm. I would personally recommend using the new regional virtual networks in Azure so that you can use the internal load balancers required for Workflow Manager high availability. The picture below depicts a very basic farm with the high-level components, including Workflow Manager within its own cloud service.

[Diagram: SharePoint, SQL Server and Workflow Manager high availability in Azure cloud services]

Internal load balancers are used for SQL Server AlwaysOn, currently supporting one availability group in Windows Azure since the Internal Load Balancer (ILB) supports only one IP address. The SharePoint farm would ideally access SQL Server via a DNS record pointing to the load-balanced IP. The SQL Servers expose the SQL Server endpoint port to the internal load balancer used by the availability group for connectivity, and synchronous commit mode is used on the availability group to maximise uptime. Since this topic is about Workflow Manager high availability, we would have three instances hosted within a cloud service, exposed via an internal load balancer. This should ideally be on port 12290 (HTTPS) for the farm to communicate with the Workflow Manager endpoint. It's recommended to use a domain certification authority (CA), so that there is a trusted certificate between all the Workflow Manager nodes and the Workflow clients. A common question is: why not have just two servers in the Workflow Manager farm? Service Bus uses a quorum, so a third node is required for high availability.

Configuration

  • Prior to configuring the Workflow Manager farm, ensure you have all the Azure VMs provisioned as part of a cloud service and install Workflow Manager (with the refresh version) and Service Bus 1.1. All nodes in the farm should have the same bits installed, otherwise your configuration may fail.
  • Ensure each host has a certificate issued by a domain CA and that the certificate is installed on all three nodes. The certificate must be trusted by the SharePoint servers, otherwise you'll get a JSON download error message when you try to register and configure the Workflow Manager proxy.
  • Ensure you have a DNS host (A) entry created in the domain, pointing to the ILB IP address and matching the common name on the certificate issued by the domain CA (create this after you create the internal load balancer).
  • The Workflow Manager installer account must be a local administrator on all three farm nodes.
  • The Workflow Manager installer account must have the sysadmin role in SQL Server; although I have made this work with dbcreator and securityadmin in the past, the Workflow Manager articles still suggest sysadmin.
  • The Workflow Manager RunAs account must be a domain account and have the “log on as a service” right (make sure your domain policies do not override this security policy).

Create the Internal Load Balancer in Azure

For this step you’ll require:

  • The Cloud Service Name
  • The VNet Subnet (for assigning the IP address to the ILB)

Run the following PowerShell to create the ILB:

# Internal Load Balancer settings – replace these with your own environment settings
$ILBName = "ILBSBWorkflow"
$ILBSubnet = "WFSubNet"
$EndPointName = "IntSBWFEndPoint"
$WFHTTPSPort = "12290"
$PoleInterval = "20"
$ServiceName = "WORKFLOWCSVC"
$WFVM1 = "WF01"
$WFVM2 = "WF02"
$WFVM3 = "WF03"

# Add the Internal Load Balancer to the cloud service
Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -SubnetName $ILBSubnet -ServiceName $ServiceName

Expose and Assign the Endpoints to the ILB

Run the following PowerShell to create and expose the endpoint on the virtual machines that participate in the load-balanced set.

# Add load balanced endpoints to ILB for Workflow Manager

Get-AzureVM -ServiceName $ServiceName -Name $WFVM1 | Add-AzureEndpoint -Name $EndPointName -LBSetName $ILBName -Protocol tcp -LocalPort $WFHTTPSPort -PublicPort $WFHTTPSPort -ProbePort $WFHTTPSPort -ProbeProtocol tcp -ProbeIntervalInSeconds $PoleInterval -InternalLoadBalancerName $ILBName | Update-AzureVM

Get-AzureVM -ServiceName $ServiceName -Name $WFVM2 | Add-AzureEndpoint -Name $EndPointName -LBSetName $ILBName -Protocol tcp -LocalPort $WFHTTPSPort -PublicPort $WFHTTPSPort -ProbePort $WFHTTPSPort -ProbeProtocol tcp -ProbeIntervalInSeconds $PoleInterval -InternalLoadBalancerName $ILBName | Update-AzureVM

Get-AzureVM -ServiceName $ServiceName -Name $WFVM3 | Add-AzureEndpoint -Name $EndPointName -LBSetName $ILBName -Protocol tcp -LocalPort $WFHTTPSPort -PublicPort $WFHTTPSPort -ProbePort $WFHTTPSPort -ProbeProtocol tcp -ProbeIntervalInSeconds $PoleInterval -InternalLoadBalancerName $ILBName | Update-AzureVM

Create your DNS host entry to point to the load-balanced address for Workflow Manager. To determine the IP address of the ILB, if you did not assign one yourself when you created it, you can run the following PowerShell:

Get-AzureInternalLoadBalancer -ServiceName $ServiceName

Install and configure the first Workflow Manager node and join the remaining nodes to the Workflow Manager farm. Check the farm is operational by running the following PowerShell:

Get-SBFarmStatus

Get-WFFarmStatus

Register the Workflow Manager Proxy

$SPWebApp = "https://webapplication"
$SPWorkFlowHostURI = "https://workflowaddress:12290"
Register-SPWorkflowService -SPSite $SPWebApp -WorkflowHostUri $SPWorkFlowHostURI

Running SharePoint 2013 in Azure IaaS

Moving your SharePoint Infrastructure to the Cloud? Have a requirement to host SharePoint in an IaaS solution? Look no further.

Microsoft has several virtual machine sizes in Windows Azure IaaS which suit the requirements of SharePoint Server 2013, and has also certified SharePoint Server 2013 to run in Windows Azure IaaS.

The link to the whitepaper below details the best practices for deploying a SharePoint Infrastructure in Windows Azure:
SharePoint 2013 on Windows Azure Infrastructure Services
I have highlighted the important aspects of running SharePoint Server 2013 in Windows Azure IaaS below:

Supporting Infrastructure

  • Deploy a Virtual Network and deploy a gateway in Windows Azure and create a site-to-site VPN if you have not done so already.
  • Ensure that you have at least two domain controllers running in Azure IaaS to support the IaaS deployed infrastructure. They should belong to their own availability set.
  • If you plan to deploy a new forest you can review the Windows Azure documentation Install a new Active Directory forest in Windows Azure.
  • Never shut down (deallocate) your domain controller in Windows Azure from the portal; shut down and restart from within the guest OS only. Otherwise the effectively infinite DHCP lease is released and your domain controller will get a new IP address when it starts from a cold boot.

SharePoint 2013

  • Ensure that each SharePoint role belongs to the appropriate availability set for high availability. This is so that there is a separation in fault domains and the roles do not go down during maintenance.
  • Be aware of the network throughput and virtual disk limits for each VM size; this is important to get the correct throughput and number of disks for each role.
  • Never store anything on the temporary space drive (D:).
  • Ensure that your infrastructure points to the DNS servers defined in your VNet configuration.
  • Use a Windows Azure load balancer; SharePoint 2013 no longer requires sticky sessions, so load balancers that do not support them can now be used.
  • Create gold images of your SharePoint servers so that you can deploy them from the Windows Azure Virtual Machine Gallery.

SQL Server 2012

This information should provide you with the basics of getting started on your journey to deploying a SharePoint farm in Windows Azure.

If you already have a private cloud built on Microsoft Windows Server Hyper-V, I would highly recommend deploying System Center Virtual Machine Manager and App Controller as a minimum to manage your private and public cloud infrastructure. The other System Center products should also be reviewed, planned and deployed accordingly, since few other solutions in the marketplace provide the comprehensive functionality that System Center does for deploying and managing Microsoft public and private clouds.