Nutanix has updated its core software. AOS 6.5 is designed to meet the demands of power-hungry applications and databases with more performance, integrated cyber resilience, and granular snapshot and replication services.
AOS 6.5 includes NVMe tiering for Intel Optane SSDs, multithreaded vDisks, and metadata cache rewarming. AOS 6.5 provides networking and security enhancements for encrypted disaster recovery (DR) and inter-cluster traffic, virtual networks for logical network isolation, AWS subnets visible from the Nutanix Cloud Manager console, and IPv6 certification. In addition, the upgraded OS includes built-in storage-layer snapshots with policy-based virtual machine (VM)- or datastore-specific snapshots at low overhead.
Other new features include:
– AHV memory overcommit and VM templates;
– Enhanced resiliency monitoring;
– Capacity planning for non-Nutanix VMware ESXi clusters;
– Revised memory management and visibility;
– Improved maintenance mode support.
Nutanix .NEXT is always the starting point for new product announcements and features, and this time was no exception. I have summarized the most important information about the announcements.
While most of you already run container workloads in different flavors on-premises, the ability to deploy cloud PaaS services onto your on-prem container platform is a relatively new thing. With the announced support for Azure Arc enabled Data Services on Karbon, it is possible to deploy Azure SQL Managed Instances or PostgreSQL Hyperscale services to your Arc-managed Nutanix Karbon Kubernetes cluster.
In this post I will guide you through the process of deploying a Karbon cluster, registering it with Azure Arc, and creating a Data Controller, a custom location, and a PostgreSQL instance on your on-premises infrastructure.
Create a Karbon Cluster
To create your Karbon cluster you have to enable Karbon on your Prism Central instance. Note that an IPAM-enabled network is required, because Prism needs to control the network where the Kubernetes clusters are deployed.
Example of a Production cluster; you can choose the Dev option as well.
Name the cluster and choose the version and host OS image.
Choose the Nutanix-managed network and decide how many worker and etcd resources you need. If you have an external load balancer you can use it, or go with the active-passive control plane.
I used the default values here.
Fill out the required data to provide storage services to your cluster.
The deployed cluster in the Karbon console.
Register Karbon Cluster to Azure Arc
To link your Kubernetes cluster to Azure you need a subscription in which you are able to deploy resources. The service user needs Contributor and Monitoring Metrics Publisher rights.
The Prerequisites are:
A new or existing Kubernetes cluster. The cluster must use Kubernetes version 1.13 or later (including OpenShift 4.2 or later and other Kubernetes derivatives).
Access to ports 443 and 9418. Make sure the cluster has access to these ports and to the required outbound URLs.
Azure CLI
CLI extensions. Install the latest connectedk8s and k8sconfiguration CLI extensions.
Helm 3
Kubeconfig file with cluster admin permissions (you can download the config from the actions section in the Karbon Portal)
Select the Subscription/Resource Group and choose a Cluster name
Connect to Arc Service
To connect the Karbon cluster to Arc you need an elevated shell with the prerequisites installed and the cluster config to reach your K8s cluster. After verification you should see the following success page in Azure.
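If you prefer the CLI over the onboarding script generated in the portal, the connection comes down to the connectedk8s extension. A minimal sketch, assuming the kubeconfig downloaded from Karbon and placeholder resource names:

# log in and make sure the required CLI extensions are installed
az login
az extension add --name connectedk8s
az extension add --name k8sconfiguration

# point kubectl at the Karbon cluster using the downloaded kubeconfig (placeholder path)
export KUBECONFIG=./karbon-cluster-kubeconfig.yaml

# onboard the cluster to Azure Arc (cluster name and resource group are placeholders)
az connectedk8s connect --name arc-karbon-cluster --resource-group rg-arc-demo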
The next step is to create a namespace on your cluster for the following steps. Set a custom namespace with: kubectl create namespace namespace-name --cluster arc-cluster-name
After that, create a Data Controller and deploy it to your Arc-managed cluster. In this example I connect in direct connectivity mode; there is also an option to connect in indirect connectivity mode.
Fill out the required fields such as the Data controller name and create a custom location. Select “azure-arc-kubeadm” as the Kubernetes configuration template and “onpremise” as the infrastructure.
To get the correct data storage class from your Kubernetes cluster, run “kubectl get storageclass” in an elevated prompt. In my case it is “default-storageclass”.
For the service type, choose NodePort.
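The steps above use the portal wizard. For reference, roughly the same deployment can be scripted with the arcdata CLI extension; the exact flag set differs between preview versions, so treat the following only as a sketch in which the data controller name, resource group, location, custom location and cluster name are all placeholders:

# add the Azure Arc data services CLI extension (sketch only, flags vary by version)
az extension add --name arcdata
az arcdata dc create \
  --name arc-dc-karbon \
  --resource-group rg-arc-demo \
  --location westeurope \
  --connectivity-mode direct \
  --profile-name azure-arc-kubeadm \
  --infrastructure onpremises \
  --storage-class default-storageclass \
  --custom-location karbon-custom-location \
  --cluster-name arc-karbon-cluster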
Finally, we need a service principal to upload usage data and logs.
To create it use:
az ad sp create-for-rbac --name SP-Name --role Contributor --scopes /subscriptions/subscription-id/resourceGroups/resourcegroup-name
and
az role assignment create --assignee SP-ID --role 'Monitoring Metrics Publisher' --scope /subscriptions/subscription-id/resourceGroups/resourcegroup-name
To get the client secret from your service principal use:
az ad sp credential reset --name SP-Name
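Because the data controller wizard asks for the client ID, tenant ID and client secret of this service principal, it can be handy to capture them straight from the JSON output; a small sketch using jq with the same placeholder names:

# create the service principal and keep the JSON output
SP=$(az ad sp create-for-rbac --name SP-Name --role Contributor --scopes /subscriptions/subscription-id/resourceGroups/resourcegroup-name)

# appId, password and tenant are the values the wizard asks for
SPN_CLIENT_ID=$(echo "$SP" | jq -r '.appId')
SPN_CLIENT_SECRET=$(echo "$SP" | jq -r '.password')
SPN_TENANT_ID=$(echo "$SP" | jq -r '.tenant')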
The deployment takes a while until the controller is up and in the ready state, so grab a cup of coffee 😀
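If you do not want to keep refreshing the portal while you wait, you can also watch the rollout with kubectl; the namespace below is a placeholder for the one created earlier, and the data controller shows up as a custom resource once the Arc data services CRDs are installed:

# watch the data controller pods come up (placeholder namespace)
kubectl get pods -n arc-data --watch

# the data controller itself is a custom resource and should eventually report a Ready state
kubectl get datacontrollers -n arc-data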
When your Data Controller is ready, you can create SQL Managed Instances or a PostgreSQL Hyperscale server group. In this example I create a Postgres instance.
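For completeness, the server group can roughly be created from the CLI as well; this is only a sketch assuming a current arcdata extension, and the instance name, worker count and namespace are placeholders:

# create a PostgreSQL Hyperscale server group with two workers (sketch, flags vary by version)
az postgres arc-server create --name pg-demo --workers 2 --k8s-namespace arc-data --use-k8s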
This will take a few minutes. You can watch the progress with the Kibana instance which Karbon automatically deployed to your cluster. Navigate to the cluster, and under Add-ons you can launch Kibana. With LogTrail you can view and filter real-time events and see what's going on in your cluster and in the deployment of your instance.
The deployed instance
As you can see, we got an external endpoint to connect to the instance, and we can see the health of the service, the server group nodes the server group runs on, and the node configuration.
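Because the services use the NodePort service type, the external endpoint can also be read directly from Kubernetes (placeholder namespace again):

# the external service of the server group exposes the NodePort you connect to, reachable via any worker node IP
kubectl get svc -n arc-data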
Next we hop over to Azure Data Studio and connect to the Data Controller to manage the instance.
To add a Data Controller just click Connect Controller, fill out the required fields (namespace and kubeconfig file path) and give it a name. After discovery you can right-click the instance and manage it.
Connected Azure Data Studio
You can view your connection strings and worker node parameters, edit the compute and storage settings of your server group, or jump to Kibana or Grafana to get insights from your instance. Some metrics are also available in the Azure portal under Metrics.
Metrics in the Azure Portal
Now you can play around: scale up worker nodes, push data to the database, or whatever else you would like to try.
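Scaling can be scripted too; assuming the same arcdata CLI sketch as above, something along these lines would grow the server group to three workers (placeholders throughout, flag names vary between preview versions):

# scale the PostgreSQL Hyperscale server group to three worker nodes (sketch only)
az postgres arc-server edit --name pg-demo --workers 3 --k8s-namespace arc-data --use-k8s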
I hope this short walkthrough helps a little to get this up and running for testing.
With the freshly released version pc.2021.8 of Prism Central, Nutanix introduces new features to license clusters under Prism Central management.
With cluster-based licensing you are able to choose the license level of each managed cluster, but with this update some changes apply to how you have to apply licenses in the future.
Licence Options Example
The benefit of this new way of applying licenses to your managed environment is that you can choose a different license tier for each cluster under management. For example, you might need Ultimate features for cluster X but only Pro features for cluster Y. For dev or testing clusters you can even leave the cluster unlicensed, but every node in a cluster must have the same license tier.
Nutanix changed the way it handles license features with cluster-based licensing in this release: if you access a feature that belongs to a higher license tier, the feature is disabled and Prism Central displays a “Feature Disabled” message. Data pulled from a managed cluster into widgets or reports that rely on a higher-tier feature is filtered out.
The available metering types are capacity and nodes, applied per cluster and per node. Flow and Calm are also available with core-based metering.
The feature comes with the following limitations (copied from the License Manager Guide on the Nutanix portal):
Cluster-based licensing is not available for dark site clusters or deployments where clusters are not connected to the Internet.
To use Prism Central cluster-based licensing, Prism Element AOS clusters registered with Prism Central must be licensed with an AOS Starter, AOS Pro, or AOS Ultimate license.
If you have not implemented cluster-based licensing for your managed clusters, you have access to features provided by your existing Prism Central license tier for all clusters registered to Prism Central as usual.
When using Prism Central cluster-based licensing, a Prism Element cluster is considered unlicensed if no cluster-based license is applied. Only Prism Central Starter features are available to manage a cluster without a cluster-based license applied.
If you have not implemented cluster-based licensing for your managed clusters when it is available in your Prism Central version, the next time you update your license from Prism Central (for example, applying a new license or consuming unused existing licenses), the Licensing page at the Nutanix Support portal will present the cluster-based licensing workflow tasks. That is, you must now use cluster-based licensing for eligible registered clusters.
Flow Pro and Starter Option
If you crawl through the information on the Nutanix portal, some screenshots show some of the upcoming add-on tiers.