This post is a short guide through all the steps, without covering the HCI setup itself.
Requirements
After you have successfully deployed your HCI cluster, the cluster resource shows up in the Azure portal. Once you click on that cluster you will see the following overview.
As you can see, all prerequisites are met. These prerequisites are:
Deployment
Now you can click on “DEPLOY” to start a custom deployment:
Most of the information is clear, but these three fields were a bit tricky for me 😉
LOCATION
You will find the location within your Azure Arc resources (Azure Portal > Azure Arc > Custom Location > Properties > ID).
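If you prefer the CLI, a hedged alternative for finding that ID (this assumes the “customlocation” Azure CLI extension; the names in the output are your own):

# list custom locations and their resource IDs
az extension add --name customlocation
az customlocation list --query "[].{name:name, id:id}" --output table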
IMAGE
To find the image ID, you first have to add at least one image to Azure Stack HCI.
You have three options to add an image:
The easiest way to get started is to add an Azure Marketplace image. I have already added “Windows 11” and “Windows Server” to my list. After adding an image, go to Azure Portal > Azure Stack HCI > VM image > “windows11” and copy the URL from your browser. It should look like this:
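Since I can only show a screenshot here, a hypothetical sketch of the pattern the image ID follows (every segment is a placeholder, and the provider path may differ between releases):

/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AzureStackHCI/galleryimages/<image-name>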
My first deployments failed and I wasn’t sure why. After checking the deployments within my resource group and comparing my inputs to the last failed one,
I found that my VM was trying to access the following URL. That access was blocked in my environment, so I copied the script and hosted it at my own HTTPS URL as a workaround.
Note that only a “redeploy” of one of the previous deployments gives you the option to change that URL.
After Deployment
After that deployment I had my VM up and running on my Azure Stack HCI. It was domain-joined, but the AVD agent was missing, so I installed the AVD agent manually. After that I was able to see the host within the Azure portal.
Recently I migrated some Linux systems from a VMware environment to Azure with Azure Migrate. We also used Azure Backup to take a daily backup of all VMs and of all databases, but we did not have an application-consistent one. It took me some troubleshooting time to figure out how it works. This step-by-step guide shows an example of how I did it and how to prepare a test environment, including how to install MySQL, create a database, and configure Azure Backup for an app-consistent backup.
Install MySQL
Create a Database
Configure Azure Backup
Install MySQL
Prerequisites
To follow this guide you need to use (because I did 😉): – Ubuntu 20.04
sysop@linux01:/$ sudo apt update
output:
sysop@linux01:/$ sudo apt install mysql-server
sysop@linux01:/$ systemctl status mysql.service
output:
Create Test DB
sysop@linux01:/$ sudo mysql
mysql> create database techguysdb;
mysql> show databases;
output:
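To give the backup test something to protect, a minimal hedged example of filling the database (table name and values are made up):

mysql> use techguysdb;
mysql> create table demo (id int primary key, note varchar(64));
mysql> insert into demo values (1, 'hello backup');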
Configure Azure Backup
To configure Azure Backup you need to do the following:
I changed “script location” and “continueBackupOnFailure” (the latter change helped me to see an error message within the Azure Backup jobs if one of the scripts fails).
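For reference, a hedged sketch of what the config looked like for me after those changes (the script paths match the /scripts folder used below; all values are illustrative, not authoritative):

{
    "pluginName": "ScriptRunner",
    "preScriptLocation": "/scripts/pre-script.sh",
    "postScriptLocation": "/scripts/post-script.sh",
    "preScriptParams": [],
    "postScriptParams": [],
    "preScriptNoOfRetries": 0,
    "postScriptNoOfRetries": 0,
    "timeoutInSeconds": 30,
    "continueBackupOnFailure": false,
    "fsFreezeEnabled": true
}

With continueBackupOnFailure set to false, a failing script surfaces as an error in the backup job instead of being silently ignored.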
VMSnapshotPluginConfig.json needs to be copied to “/etc/azure”. If this directory does not exist, simply create it. After that we need to change the permissions on that file so that only “root” has read and write permissions.
Both scripts must be copied to the Linux system; I copied them to /scripts. The next important task is to set permissions 600 on both files, otherwise Azure Backup will fail.
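A minimal sketch of the copy and permission steps (the script file names are placeholders for your own pre/post scripts):

# create the config directory and copy the plugin config
sudo mkdir -p /etc/azure
sudo cp VMSnapshotPluginConfig.json /etc/azure/
sudo chown root:root /etc/azure/VMSnapshotPluginConfig.json
sudo chmod 600 /etc/azure/VMSnapshotPluginConfig.json

# copy the pre/post scripts and lock them down, otherwise the backup fails
sudo mkdir -p /scripts
sudo cp pre-script.sh post-script.sh /scripts/
sudo chown root:root /scripts/pre-script.sh /scripts/post-script.sh
sudo chmod 600 /scripts/pre-script.sh /scripts/post-script.sh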
If the backup is enabled it looks like this. It is only configured and has never been executed, and the restore points overview shows no backup yet.
1st Backup
Very important: the first backup needs to be done while the virtual machine is deallocated!
Then run the backup job as configured.
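If you prefer the CLI over the portal, a hedged sketch of both actions (resource group, vault, and VM names are placeholders):

# deallocate the VM before the very first backup
az vm deallocate --resource-group rg-name --name linux01

# trigger the on-demand backup of the protected VM
az backup protection backup-now --resource-group rg-name --vault-name vault-name --container-name linux01 --item-name linux01 --backup-management-type AzureIaasVM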
The backup consists of two steps: first take a snapshot, then copy the data to the vault.
When the snapshot task is done, the Linux system can be started, and our vault shows a crash-consistent backup.
2nd backup
If the VM is up and running and all scripts and config files are in place, we can trigger the second backup. Now the service should use our configuration, and the result should be an app-consistent backup 🙂
and here we go…
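To double-check from the CLI, a hedged way to list the jobs and restore points (same placeholder names as before); the latest restore point should now show an application-consistent type:

az backup job list --resource-group rg-name --vault-name vault-name --output table
az backup recoverypoint list --resource-group rg-name --vault-name vault-name --container-name linux01 --item-name linux01 --backup-management-type AzureIaasVM --workload-type VM --output table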
I hope this step-by-step guide helps you to get this working.
While most of you already have container workloads deployed in different flavors on-premises, the ability to deploy cloud PaaS services onto your on-prem containers is a relatively new thing. With the announced support for Azure Arc-enabled Data Services on Karbon, it is possible to deploy Azure-managed SQL instances or PostgreSQL Hyperscale services on your Arc-managed Nutanix Karbon Kubernetes cluster.
In this post I will guide you through the process of deploying a Karbon cluster, registering it with Azure Arc, and creating a data controller, a custom location, and a PostgreSQL instance on your on-premises infrastructure.
Create a Karbon Cluster
To create your Karbon cluster you have to enable Karbon on your Prism Central instance. Note that an IPAM-enabled network is required; Prism needs to control the network where the Kubernetes clusters are deployed.
An example for a production cluster; you can choose the Dev option as well.
Name the cluster and choose the version and host OS image.
Choose the Nutanix-managed network and decide how many worker and etcd resources you need. If you have an external load balancer you can use it, or go with the active-passive control plane.
I used the default values here.
Fill out the required data to provide storage services to your cluster.
The fully deployed cluster in the Karbon console.
Register Karbon Cluster to Azure Arc
To link your Kubernetes cluster to Azure you need a subscription in which you are able to deploy resources. The service user needs Contributor and Monitoring Metrics Publisher rights.
The Prerequisites are:
A new or existing Kubernetes cluster. The cluster must use Kubernetes version 1.13 or later (including OpenShift 4.2 or later and other Kubernetes derivatives).
Access to ports 443 and 9418. Make sure the cluster has access to these ports and the required outbound URLs.
Azure CLI
CLI extensions. Install the latest connectedk8s and k8sconfiguration CLI extensions (see the snippet after this list).
Helm 3
Kubeconfig file with cluster admin permissions (you can download the config from the actions section in the Karbon Portal)
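As referenced above, the two extensions can be added like this (note that k8sconfiguration has since been renamed to k8s-configuration in newer CLI versions):

az extension add --name connectedk8s
az extension add --name k8sconfiguration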
Select the subscription/resource group and choose a cluster name.
Connect to Arc Service
To connect the Karbon cluster to Arc you need an elevated shell with the prerequisites installed and the cluster config to connect to your K8s cluster. You should see the following success page in Azure after verification.
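As a hedged sketch, the connect step looks like this (the kubeconfig file name and resource names are placeholders):

# point kubectl/az at the kubeconfig downloaded from the Karbon portal
export KUBECONFIG=./karbon-cluster-kubeconfig.yaml

# onboard the cluster to Azure Arc
az connectedk8s connect --name arc-cluster-name --resource-group rg-name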
The next step is to create a namespace on your cluster for the following steps. Set a custom namespace with: kubectl create namespace namespace-name --cluster arc-cluster-name
The next step is to create a data controller and deploy it to your Arc-managed cluster. In this example I connect with direct connectivity mode. There is also an option to connect in indirect connectivity mode.
Fill out the required fields and the data controller name, and create a custom location. Select “azure-arc-kubeadm” as the Kubernetes configuration template and select “onpremise” as the infrastructure.
To get the correct data storage class from your Kubernetes cluster, run “kubectl get storageclass” in an elevated prompt. In my case I have “default-storageclass”.
For the service type, choose Node Port.
At the end we need a service principal to upload usage data and logs.
To create it use:
az ad sp create-for-rbac --name SP-Name --role Contributor --scopes /subscriptions/subscription-id/resourceGroups/resourcegroup-name
and
az role assignment create --assignee SP-ID --role 'Monitoring Metrics Publisher' --scope /subscriptions/subscription-id/resourceGroups/resourcegroup-name
To get the client secret from your service principal use:
az ad sp credential reset --name SP-Name
The deployment takes a while until the controller is up and in a ready state, so grab a cup of coffee 😀
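While you wait, a hedged way to watch from the cluster side (the namespace is the one created earlier; the datacontrollers resource is assumed to come with the Arc data services deployment):

# watch the data controller pods come up
kubectl get pods --namespace namespace-name --watch

# the controller itself reports its state; wait for "Ready"
kubectl get datacontrollers --namespace namespace-name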
When your data controller is ready, you can create SQL managed instances or a PostgreSQL Hyperscale server group. In this example I create a Postgres instance.
This will take a few minutes. You can watch the progress with the Kibana instance that Karbon automatically deployed to your cluster. Navigate to the cluster, and under Add-ons you can launch Kibana. With LogTrail you can view and filter real-time events and see what’s going on with your cluster and the deployment of your instance.
The deployed instance, in ready state.
As you can see, we got an external endpoint to connect to the instance, and we can see the health of the service, the server group nodes the server group runs on, and the node configuration.
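To test the external endpoint from a client machine, a hedged psql example (host, node port, and password come from your own deployment; “postgres” as the admin user is an assumption):

# connect to the instance via the external endpoint and node port
psql "host=<external-endpoint-ip> port=<node-port> user=postgres dbname=postgres"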
Next we hop over to Azure Data Studio and connect to the data controller to manage the instance.
To add a data controller, just click Connect Controller, fill out the required fields (namespace and kube config file path), and give it a name. After discovery you can right-click the instance and manage it.
Connected Azure Data Studio
You can view your connection strings and worker node parameters, edit the compute + storage settings of your server group, or jump to Kibana or Grafana to get insights from your instance. Some metrics are also available in the Azure portal on the Metrics blade.
Metrics in the Azure Portal
Now you can play around: scale up worker nodes, push data to the database, or try whatever else you would like to see.
I hope this short walkthrough helps a little bit to get this up and running for testing.