Monday, November 15, 2021

New capabilities available for node pools in AKS on Azure Stack HCI!

In a previous release of Azure Kubernetes Service (AKS) on Azure Stack HCI, we announced node pool support so that you can deploy multiple different nodes with mixed virtual machine sizes and mixed operating systems. At Microsoft, we’re all about growth mindset – so we have some more improvements for your AKS on Azure Stack HCI node pool experience! 




There are two main updates in the October release that I'll cover: taint support and max pod controls are now available in AKS on Azure Stack HCI! Both features give you even finer-grained control over your Kubernetes cluster deployments. To optimize IP address allocation, you may want to change the number of pods that run on each node. With max pod controls, you can set how many pods can be scheduled on each node in a node pool. A new parameter on New-AksHciCluster and New-AksHciNodePool lets you do this.


Create a new Kubernetes cluster with a changed max pod count 

New-AksHciCluster -name mycluster `
    -nodePoolName nodepool1 `
    -count 1 `
    -osType linux `
    -nodeMaxPodCount 70


Add a node pool with a changed max pod count to an existing cluster

New-AksHciNodePool -clusterName mycluster `
    -name nodepool2 `
    -count 1 `
    -osType linux `
    -maxPodCount 70


*Note: The value of `-nodeMaxPodCount` and `-maxPodCount` must be greater than 50 so that the max pod count does not prevent the infrastructure pods and the user pods from deploying.
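If you want to confirm that the new limit took effect, one quick way (a sketch, assuming `kubectl` is already pointed at the cluster) is to read each node's reported pod capacity:

```shell
# List each node alongside the maximum number of pods it reports.
# Assumes kubectl is configured for the target cluster.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.pods}{"\n"}{end}'
```

Nodes in the node pool you created above should report a pod capacity of 70.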


But wait, there's more! You can also differentiate node pools by adding taints to them with a new parameter on the cluster creation and node pool creation commands. Do you have a special node pool with special hardware configured? To avoid wasting those resources on pods that don't need them, you can taint that node pool so that only pods with a matching toleration are scheduled onto it. Not only do we love 'zero waste' for our environment, we love it for your Kubernetes environment too!


Taints and tolerations work in tandem: when a node is tainted, only a workload with a toleration for that specific taint will be scheduled onto that node. A taint consists of a key, an operator, a value, and an effect. The key and value are arbitrary strings, and the operator is either Equal or Exists. The effect can be NoSchedule, NoExecute, or PreferNoSchedule. To read more details about how taints and tolerations work, see the Kubernetes documentation.
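For reference, this is the same taint syntax Kubernetes uses natively; before node pool taint support, you would have applied it to each node by hand with `kubectl` (the node name below is hypothetical):

```shell
# Manually taint a single node (the old, per-node approach).
# "my-node" is a hypothetical node name - substitute your own.
kubectl taint nodes my-node sku=gpu:NoSchedule

# Remove the taint again by appending "-" to the effect.
kubectl taint nodes my-node sku=gpu:NoSchedule-
```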


Reference the tables below to read about what each of these values means.

Operators:

| Operator | Description |
|---|---|
| `Equal` | The value is required and must match the value on the taint. |
| `Exists` | The value should be omitted; the toleration will match any taint with the specified key name. |

Effects:

| Effect | Description |
|---|---|
| `NoSchedule` | Prohibits the scheduler from scheduling intolerant pods to the tainted node. |
| `NoExecute` | Tells the scheduler to evict intolerant pods that are already running there. |
| `PreferNoSchedule` | A "soft" version of `NoSchedule`. Tells the scheduler to try to avoid placing a pod that does not tolerate the taint on the node, but it is not required. |

Here’s an example of how you can start using taints and tolerations:


Add a tainted node pool to an existing cluster 

New-AksHciNodePool -clusterName mycluster `
    -name taintnp `
    -count 1 `
    -osType linux `
    -taints sku=gpu:NoSchedule


Make sure that the new node pool was created with the taint 

Get-AksHciNodePool -clusterName mycluster `
    -name taintnp


Add the toleration to your YAML file 


tolerations:
- key: "sku"
  operator: "Equal"
  value: "gpu"
  effect: "NoSchedule"
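In context, the toleration sits under the pod spec's `tolerations` field. Here is a minimal pod manifest (names and image are illustrative placeholders) that would be allowed onto the tainted pool:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload          # hypothetical name
spec:
  containers:
  - name: app
    image: mcr.microsoft.com/oss/kubernetes/pause:3.6   # placeholder image
  tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```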


After this, you can deploy your pods with `kubectl apply` and watch them land on the nodes you want! Before this feature, you had to apply taints to each node individually, and the taints did not persist through an upgrade. Now you can taint the whole node pool with one simple parameter, and both the taints and the max pod setting will persist through an upgrade!
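To confirm the scheduling behavior, one quick check (a sketch; output depends on your cluster) is to list pods with their assigned nodes and inspect the taints on the pool's nodes:

```shell
# Show each pod and the node it was scheduled onto.
kubectl get pods -o wide

# Confirm the taint is present on the node pool's nodes.
kubectl describe nodes | grep -i taints
```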


To learn more about node pools in AKS on Azure Stack HCI, please visit here. 


To learn more about AKS on Azure Stack HCI in general, please start here. 


Useful links: 

Try for free: 
Tech Docs: 
Issues and Roadmap: 
Evaluate on Azure: 
