Background: Getting started with and learning Azure Kubernetes Service (AKS) involves a steep learning curve. In addition, planning an AKS setup comes with a ton of design and architecture considerations.
Objective: Go through some essentials for planning a simple Azure Kubernetes cluster to host your containerized or microservices application in a development/test or proof of concept (POC) environment, and lay out a foundation to build upon towards a production-grade environment.
Intended Audience: You are a novice with Azure Kubernetes Service and want to deploy a containerized application. You may be an application developer, system administrator, or solutions architect.
AKS Cluster
The AKS cluster architecture consists broadly of two major components: the control plane, which provides the core Kubernetes services and orchestration, and a set of nodes that actually run your applications.

Nodes
The number of nodes can be 1 or more. For dev/test, 1 node may be enough to start with, and you can scale out manually as needed. Note that to operate reliably you should have at least 2 nodes; in my experience, however, 1 node has worked fine for dev/test since I can manually scale when needed. For production, it is recommended to have at least 3 nodes.
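As a minimal sketch of that approach with the Azure CLI (MyResourceGroup and MyAKSCluster are placeholder names), you could start with a single node and manually scale out later:

```bash
# Create a dev/test cluster with a single node
az aks create \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --node-count 1 \
  --generate-ssh-keys

# Later, manually scale the node pool out to 3 nodes
az aks scale \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --node-count 3
```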
Operating Systems: Linux or Windows
By default, the nodes are Linux based. However, AKS supports Windows containers in their own node pool, where all nodes in a node pool share the same configuration. If you need Windows containers, you will therefore end up supporting both operating systems, each in its own node pool, because the default node pool must be Linux; you can't have Windows-only container support.
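As a hedged sketch (placeholder names again; a Windows node pool also assumes the cluster was created with the Azure CNI network plugin and Windows admin credentials), a Windows node pool can be added alongside the default Linux pool:

```bash
# Add a Windows Server node pool to an existing cluster
# (requires the Azure CNI network plugin and Windows admin credentials on the cluster)
az aks nodepool add \
  --resource-group MyResourceGroup \
  --cluster-name MyAKSCluster \
  --name npwin \
  --os-type Windows \
  --node-count 1
```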
Networking
New or existing Virtual Network
An AKS cluster is deployed into a virtual network, specifically into a subnet. Either a VNet is autogenerated at deployment time, or the AKS cluster is associated with an existing virtual network. The second option allows for more control, modularity, segregation, and manageability.
I would recommend creating a virtual network first and deploying AKS into one of its subnets, which means some IP address planning is recommended. On the other hand, if you just require an AKS cluster with its own standalone virtual network and its default configuration has little importance, then autogenerate a virtual network upon AKS deployment via the Azure Portal or Azure CLI.
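A rough sketch of the recommended approach follows (the resource names and address ranges are placeholder assumptions): create the virtual network and subnet first, then point the cluster at the subnet.

```bash
# Create a virtual network and a dedicated subnet for the AKS nodes
az network vnet create \
  --resource-group MyResourceGroup \
  --name MyVNet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name aks-subnet \
  --subnet-prefixes 10.0.1.0/24

# Look up the subnet resource ID
SUBNET_ID=$(az network vnet subnet show \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name aks-subnet \
  --query id -o tsv)

# Deploy the AKS cluster into the existing subnet
az aks create \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --node-count 1 \
  --vnet-subnet-id $SUBNET_ID \
  --generate-ssh-keys
```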
Public or Private Network Access
Depending on whether your application needs to be accessed from within your virtual network or from the internet, you can choose either a public or an internal Azure Load Balancer with the respective public or private IP address. Just note that a load balancer is typically the first step towards providing application access in an AKS cluster.
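To illustrate the internal option, here is a minimal sketch of a Kubernetes Service of type LoadBalancer (the my-app name and port are placeholders); the documented azure-load-balancer-internal annotation asks AKS to provision an internal load balancer with a private IP instead of a public one.

```bash
# Expose an application through an internal Azure Load Balancer (private IP)
# Omit the annotation to get a public load balancer with a public IP instead
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: my-app
EOF
```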

Network Models: Kubenet or Azure CNI plugin
In addition to the virtual network configuration, you need to choose the networking model that is managed within the AKS cluster resources:
- Kubenet networking
- Azure Container Networking Interface (CNI) networking

With Azure CNI, each pod within a node gets an IP address from the VNet subnet address space. If you use the Azure Application Gateway as an ingress, it may require this networking plugin so that its backend pools can direct traffic to the pods' IP addresses.
Kubenet is simpler and involves less control and management of IP addresses. Azure CNI, on the other hand, needs more planning of IP addressing and allocation in the subnet, since each pod consumes an address as you scale out the nodes and the AKS cluster.
Generally, I recommend Azure CNI for more manageability and control as you work towards a production environment.
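The networking model is chosen at cluster creation time. As a sketch of the Azure CNI option (reusing the placeholder $SUBNET_ID from the earlier snippet; the service CIDR and DNS service IP values are assumptions and must not overlap with your VNet address space):

```bash
# Deploy AKS with the Azure CNI network plugin into an existing subnet
# --service-cidr and --dns-service-ip are used for cluster-internal services
# and must not overlap with the VNet address ranges (values here are placeholders)
az aks create \
  --resource-group MyResourceGroup \
  --name MyAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --service-cidr 10.2.0.0/24 \
  --dns-service-ip 10.2.0.10 \
  --node-count 1 \
  --generate-ssh-keys
```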
To read further details:
- Network concepts for applications in Azure Kubernetes Service (AKS) – Virtual Networks
- Best practices for network connectivity and security in Azure Kubernetes Service (AKS)
Continue reading further for Part 2 of Planning Essentials for an Azure Kubernetes Cluster.