In this first configuration profile, I will walk through the resulting configuration of AKS and its effect on the Load Balancer, Virtual Network, and VM network interface card, and then deploy and test a web application in the Azure Kubernetes Service (AKS) cluster. This configuration profile is built around the kubenet network model. Kubenet is a very basic, simple network plugin, available on Linux only. It does not, by itself, implement more advanced features such as cross-node networking or network policy.
Please read the Part 1 intro of this blog series if you haven’t already. It explains the background and what each configuration setting means.
Configuration Profile 1
- Network Model/Type: Basic (Kubenet)
- HTTP application routing: Disabled
- VM Scale Sets: Disabled
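A cluster matching this profile can be created with the Azure CLI. The following is a hedged sketch, not the exact command used for this post; the resource group and cluster names are placeholders:

```shell
# Sketch: create an AKS cluster with the kubenet (basic) network plugin.
# Names below are placeholders, not from the original deployment.
az group create --name myResourceGroup --location eastus

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin kubenet \
  --vm-set-type AvailabilitySet \
  --node-count 1 \
  --generate-ssh-keys
```

The `--vm-set-type AvailabilitySet` flag reflects the "VM Scale Sets: Disabled" setting in this profile.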
The following are screenshots showing the resulting configuration across the related Azure resources.
AKS Resource – Networking Profile
AKS Infrastructure Resource Group
When this AKS resource is deployed, a separate infrastructure resource group, along with these supporting Azure resources, is created behind the scenes.
You can find this resource group by going to the AKS properties pane.
The load balancer is required to support the AKS cluster, and the Standard SKU is the default. The load balancer is automatically set up with one IP address in its frontend. Here, I had configured another IP address after the fact.
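You can inspect these frontend IP configurations from the CLI as well. A sketch, assuming the default load balancer name `kubernetes` in the AKS infrastructure resource group (the resource group name below is a placeholder):

```shell
# List the frontend IP configurations of the AKS-managed load balancer.
# The MC_* resource group name is a placeholder for your infrastructure group.
az network lb frontend-ip list \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --lb-name kubernetes \
  --output table
```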
Load Balancer – Backend pools
The load balancer directs traffic to one VM (the AKS node) as part of its backend pool.
Load balancing rules
These rules direct traffic from the frontend IP to a backend pool.
Virtual Network – Connected devices
The VM (AKS node) network interface’s private IP address appears as a connected device in the virtual network subnet.
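You can cross-check this from inside the cluster; the node's `INTERNAL-IP` should match the NIC private IP shown in the virtual network's connected devices:

```shell
# The INTERNAL-IP column should match the node NIC's private IP
# seen in the virtual network subnet.
kubectl get nodes -o wide
```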
AKS Node VM Network Interface Card (NIC)
Because the network type (plugin) is basic kubenet, only one private IP address is assigned to the VM. This IP address applies at the node level.
An ingress controller is required in AKS, so I deployed the NGINX ingress controller using Helm. The steps to do this are outlined in the documentation Create an ingress controller. In a new cluster, you also need to install and set up the Helm client and Tiller on the server side in AKS.
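The install can be sketched as follows, assuming the Helm 2 workflow (with Tiller) that this post references; the service account name and chart values are assumptions:

```shell
# Helm 2 era sketch: initialize Tiller server-side, then install the
# NGINX ingress controller from the stable chart repository.
helm init --service-account tiller

helm install stable/nginx-ingress \
  --namespace kube-system \
  --set controller.replicaCount=1
```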
I created the Ingress resource to set a routing path to the Azure Voting app service, which was already deployed.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: azure-voting-frontend
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: azure-voting-frontend
          servicePort: 80
        path: /(.*)
```
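The manifest can be applied and verified with kubectl; the filename below is a placeholder:

```shell
# Apply the ingress manifest (filename is a placeholder) and confirm
# the resource was created in the default namespace.
kubectl apply -f azure-voting-ingress.yaml
kubectl get ingress azure-voting-frontend
```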
And this ingress resource is shown in the Kubernetes Dashboard.
Here is the public IP address served by the load balancer and the NGINX ingress controller.
Testing the application:
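One way to test it from the command line is a sketch like the following; the label selector is an assumption based on the default chart labels, and `<EXTERNAL_IP>` is the public IP from the load balancer frontend:

```shell
# Find the ingress controller's public (EXTERNAL-IP) address.
kubectl get service --namespace kube-system -l app=nginx-ingress

# Request the voting app through the ingress path, substituting the IP.
curl http://<EXTERNAL_IP>/
```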
In summary, this configuration profile predominantly shows the effects of using the kubenet plugin and what the virtual network settings look like. This blog post can serve as a point of reference for the resulting setup in screenshots and for how the Azure resources depend on and relate to one another.
As defined in Kubenet (basic) networking:
The kubenet networking option is the default configuration for AKS cluster creation. With kubenet, nodes get an IP address from the Azure virtual network subnet. Pods receive an IP address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT’d to the node’s primary IP address. A benefit is that this approach greatly reduces the number of IP addresses that you need to reserve in your network space for pods to use.
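You can see this separate pod address space on the node objects themselves; with kubenet, AKS assigns each node a pod CIDR carved from a range (10.244.0.0/16 by default) that is logically distinct from the VNet subnet:

```shell
# Show the pod CIDR assigned to each node; with kubenet this comes from
# a range (default 10.244.0.0/16) separate from the VNet subnet.
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
```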
Read the next blog post for Part 3 on Azure CNI (coming soon).