AKS and Ingress, again

Published: Mar 28, 2020 by Isaac Johnson

I wrote most of my Azure Kubernetes guides a while back, in a time when Helm 2 was standard and RBAC was an optional add-on.  Recently, when doing AKS work for work, I realized just how much has changed.  For instance, all AKS clusters now have RBAC turned on by default, and most Helm charts, including the standard “stable” ones, have moved to Helm 3.  AKS 1.16 fundamentally changed things by expiring many “beta” and pre-release APIs in favour of their stable replacements.  So let’s get back to AKS and Ingress.

The reason I wish to sort out Ingress is that many of the guides I’ve seen from Microsoft tech blogs still reference pre-RBAC clusters and Helm 2.  They don’t work with the current charts that are out there, and ingress is probably the single most important piece of actually using a cluster to host applications.

Creating an AKS cluster

Az CLI and Login

First, we better upgrade our Azure CLI (if you lack it, you can use apt-get install instead of upgrade):

$ sudo apt-get update
$ sudo apt-get upgrade azure-cli

You can use --version to see if you are out of date:

builder@DESKTOP-JBA79RT:~$ az --version
azure-cli 2.2.0

command-modules-nspkg 2.0.3
core 2.2.0
nspkg 3.0.4
telemetry 1.0.4

Python location '/opt/az/bin/python3'
Extensions directory '/home/builder/.azure/cliextensions'

Python (Linux) 3.6.5 (default, Mar 6 2020, 14:41:24)
[GCC 7.4.0]

Legal docs and information: aka.ms/AzureCliLegal



Your CLI is up-to-date.

Please let us know how we are doing: https://aka.ms/clihats

You’ll want to login:

$ az login
$ az account set --subscription (your sub id)

Let’s just verify by showing what clusters are there now:

$ az aks list
[]

Cluster create

$ az group create --name idjaks02rg --location centralus
{
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/idjaks02rg",
  "location": "centralus",
  "managedBy": null,
  "name": "idjaks02rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

Next we should create a Service Principal (so we can later handle creating an ACR or enabling RBAC-based IAM to Azure resources):

builder@DESKTOP-JBA79RT:~$ az ad sp create-for-rbac -n idjaks02sp --skip-assignment --output json > my_sp.json
Changing "idjaks02sp" to a valid URI of "http://idjaks02sp", which is the required format used for service principal names
builder@DESKTOP-JBA79RT:~$ cat my_sp.json | jq -r .appId
5bbad7af-0559-411c-a5c6-c33874cbbd5b

We can save the ID and Password for the next step

builder@DESKTOP-JBA79RT:~$ export SP_PASS=`cat my_sp.json | jq -r .password`
builder@DESKTOP-JBA79RT:~$ export SP_ID=`cat my_sp.json | jq -r .appId`

Now we can create the cluster

builder@DESKTOP-JBA79RT:~$ az aks create --resource-group idjaks02rg --name idjaks02 --location centralus --node-count 3 --enable-cluster-autoscaler --min-count 2 --max-count 4 --generate-ssh-keys --network-plugin azure --network-policy azure --service-principal $SP_ID --client-secret $SP_PASS
{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "count": 3,
      "enableAutoScaling": true,
      "enableNodePublicIp": null,
      "maxCount": 4,
      "maxPods": 30,
      "minCount": 2,
      "name": "nodepool1",
      "nodeLabels": null,
      "nodeTaints": null,
      "orchestratorVersion": "1.15.10",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "provisioningState": "Succeeded",
      "scaleSetEvictionPolicy": null,
      "scaleSetPriority": null,
      "tags": null,
      "type": "VirtualMachineScaleSets",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "apiServerAccessProfile": null,
  "dnsPrefix": "idjaks02-idjaks02rg-70b42e",
  "enablePodSecurityPolicy": null,
  "enableRbac": true,
  "fqdn": "idjaks02-idjaks02rg-70b42e-bd99caae.hcp.centralus.azmk8s.io",
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourcegroups/idjaks02rg/providers/Microsoft.ContainerService/managedClusters/idjaks02",
  "identity": null,
  "identityProfile": null,
  "kubernetesVersion": "1.15.10",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLzysqDWJpJ15Sho/NYk3ZHzC36LHw5zE1gyxhEQCH53BSbgA39XVXs/8TUjrkoVi6/YqlliYVg7TMQSjG51d3bLuelMh7IGIPGqSnT5rQe4x9ugdi+rLeFgP8+rf9aGYwkKMd98Aj2i847/deNLFApDoTtI54obZDuhu2ySW23BiQqV3lXuIe/0WwKpG0MFMoXU9JrygPXyNKbgJHR7pLR9U8WVLMF51fmUEeKb5johgrKeIrRMKBtiijaJO8NP6ULuOcQ+Z0VpUUbZZpIqeo8wqdMbDHkyFqh5a5Z1qrY5uDSpqcElqR5SiVesumUfMTBxz83/oprz23e747h8rP"
        }
      ]
    }
  },
  "location": "centralus",
  "maxAgentPools": 10,
  "name": "idjaks02",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "loadBalancerProfile": {
      "allocatedOutboundPorts": null,
      "effectiveOutboundIps": [
        {
          "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/MC_idjaks02rg_idjaks02_centralus/providers/Microsoft.Network/publicIPAddresses/2ad8aacf-3c25-4eea-922e-24b1341b0f87",
          "resourceGroup": "MC_idjaks02rg_idjaks02_centralus"
        }
      ],
      "idleTimeoutInMinutes": null,
      "managedOutboundIps": {
        "count": 1
      },
      "outboundIpPrefixes": null,
      "outboundIps": null
    },
    "loadBalancerSku": "Standard",
    "networkPlugin": "azure",
    "networkPolicy": "azure",
    "outboundType": "loadBalancer",
    "podCidr": null,
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_idjaks02rg_idjaks02_centralus",
  "privateFqdn": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "idjaks02rg",
  "servicePrincipalProfile": {
    "clientId": "5bbad7af-0559-411c-a5c6-c33874cbbd5b",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters",
  "windowsProfile": {
    "adminPassword": null,
    "adminUsername": "azureuser"
  }
}

I wanted to verify both kubenet and Azure CNI networking, so I’ll create another cluster that we’ll be using later

builder@DESKTOP-JBA79RT:~$ az group create --name idjaks03rg --location centralus
{
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/idjaks03rg",
  "location": "centralus",
  "managedBy": null,
  "name": "idjaks03rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

And I’ll just make a quick SP for this cluster as well

builder@DESKTOP-JBA79RT:~$ az ad sp create-for-rbac -n idjaks03sp --skip-assignment
Changing "idjaks03sp" to a valid URI of "http://idjaks03sp", which is the required format used for service principal names
{
  "appId": "9a5eac0f-ea43-4791-9dc1-d2226b35de7d",
  "displayName": "idjaks03sp",
  "name": "http://idjaks03sp",
  "password": "159ef2f8-xxxx-xxxx-xxxx-4ff051177429",
  "tenant": "d73a39db-6eda-495d-8000-7579f56d68b7"
}

And now create the cluster

builder@DESKTOP-JBA79RT:~$ az aks create -g idjaks03rg -n idjaks03 --location centralus --node-count 3 --generate-ssh-keys --network-plugin kubenet --service-principal 9a5eac0f-xxxx-xxxx-xxxx-d2226b35de7d --client-secret 159ef2f8-384f-45bd-9a20-4ff051177429
{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "count": 3,
      "enableAutoScaling": null,
      "enableNodePublicIp": null,
      "maxCount": null,
      "maxPods": 110,
      "minCount": null,
      "name": "nodepool1",
      "nodeLabels": null,
      "nodeTaints": null,
      "orchestratorVersion": "1.15.10",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "provisioningState": "Succeeded",
      "scaleSetEvictionPolicy": null,
      "scaleSetPriority": null,
      "tags": null,
      "type": "VirtualMachineScaleSets",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "apiServerAccessProfile": null,
  "dnsPrefix": "idjaks03-idjaks03rg-70b42e",
  "enablePodSecurityPolicy": null,
  "enableRbac": true,
  "fqdn": "idjaks03-idjaks03rg-70b42e-4ef6b7dc.hcp.centralus.azmk8s.io",
  "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourcegroups/idjaks03rg/providers/Microsoft.ContainerService/managedClusters/idjaks03",
  "identity": null,
  "identityProfile": null,
  "kubernetesVersion": "1.15.10",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLzysqDWJpJ15Sho/NYk3ZHzC36LHw5zE1gyxhEQCH53BSbgA39XVXs/8TUjrkoVi6/YqlliYVg7TMQSjG51d3bLuelMh7IGIPGqSnT5rQe4x9ugdi+rLeFgP8+rf9aGYwkKMd98Aj2i847/deNLFApDoTtI54obZDuhu2ySW23BiQqV3lXuIe/0WwKpG0MFMoXU9JrygPXyNKbgJHR7pLR9U8WVLMF51fmUEeKb5johgrKeIrRMKBtiijaJO8NP6ULuOcQ+Z0VpUUbZZpIqeo8wqdMbDHkyFqh5a5Z1qrY5uDSpqcElqR5SiVesumUfMTBxz83/oprz23e747h8rP"
        }
      ]
    }
  },
  "location": "centralus",
  "maxAgentPools": 10,
  "name": "idjaks03",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "loadBalancerProfile": {
      "allocatedOutboundPorts": null,
      "effectiveOutboundIps": [
        {
          "id": "/subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/MC_idjaks03rg_idjaks03_centralus/providers/Microsoft.Network/publicIPAddresses/ef81397e-f752-4b8e-9b20-842aa72fbc9e",
          "resourceGroup": "MC_idjaks03rg_idjaks03_centralus"
        }
      ],
      "idleTimeoutInMinutes": null,
      "managedOutboundIps": {
        "count": 1
      },
      "outboundIpPrefixes": null,
      "outboundIps": null
    },
    "loadBalancerSku": "Standard",
    "networkPlugin": "kubenet",
    "networkPolicy": null,
    "outboundType": "loadBalancer",
    "podCidr": "10.244.0.0/16",
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_idjaks03rg_idjaks03_centralus",
  "privateFqdn": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "idjaks03rg",
  "servicePrincipalProfile": {
    "clientId": "9a5eac0f-ea43-4791-9dc1-d2226b35de7d",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters",
  "windowsProfile": null
}

Verify they both stood up just fine

builder@DESKTOP-JBA79RT:~$ az aks list -o table
Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
-------- ---------- --------------- ------------------- ------------------- -----------------------------------------------------------
idjaks02 centralus idjaks02rg 1.15.10 Succeeded idjaks02-idjaks02rg-70b42e-bd99caae.hcp.centralus.azmk8s.io
idjaks03 centralus idjaks03rg 1.15.10 Succeeded idjaks03-idjaks03rg-70b42e-4ef6b7dc.hcp.centralus.azmk8s.io

Ingress with Nginx

We will now show using both network types.  For NGINX, since the Microsoft docs are a bit out of date, at least with regard to Helm, let’s use the current GitHub page for ingress-nginx: https://kubernetes.github.io/ingress-nginx/deploy/

First, we need to login to the cluster with Admin credentials

builder@DESKTOP-JBA79RT:~$ az aks get-credentials -n idjaks02 -g idjaks02rg --admin
Merged "idjaks02-admin" as current context in /home/builder/.kube/config

And if we want a sanity check that our kubectl context points at the right cluster, we can always check the provider ID (and confirm the resource group):

builder@DESKTOP-JBA79RT:~$ kubectl get nodes -o json | jq '.items[0].spec | .providerID'
"azure:///subscriptions/70b42e6a-6faf-4fed-bcec-9f3995b1aca8/resourceGroups/mc_idjaks02rg_idjaks02_centralus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-nodepool1-13062035-vmss/virtualMachines/0"

First, we apply the mandatory manifest (which applies to all k8s providers except minikube):

builder@DESKTOP-JBA79RT:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created

Then the cloud-generic provider manifest (which works for AKS):

builder@DESKTOP-JBA79RT:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
service/ingress-nginx created
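As an aside, if you would rather install the controller with Helm 3 than raw manifests, the stable chart of the era could be used instead of the two kubectl apply steps above. This is just a sketch, not what we use in this walkthrough, and chart values may differ for you:

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo update
# the controller needs a namespace to land in
$ kubectl create ns ingress-nginx
$ helm install nginx-ingress stable/nginx-ingress --namespace ingress-nginx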

We should see an Nginx controller now running:

builder@DESKTOP-JBA79RT:~$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-7fcf8df75d-vrwft 1/1 Running 0 102s

And we can see an LB was created for us:

builder@DESKTOP-JBA79RT:~$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.0.74.42 52.141.208.74 80:31389/TCP,443:30406/TCP 6m47s

Hitting that endpoint should show a 404 page:

404 shows Nginx is up and replying
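If you would rather check from the terminal than a browser, a quick curl against the external IP (yours will differ) should likewise come back with a 404 from the controller:

$ curl -I http://52.141.208.74/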

Next let’s install a hello-world app

builder@DESKTOP-JBA79RT:~$ helm repo add azure-samples https://azure-samples.github.io/helm-charts/
"azure-samples" has been added to your repositories
builder@DESKTOP-JBA79RT:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "azure-samples" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-JBA79RT:~$ kubectl create ns ingress-basic
namespace/ingress-basic created
builder@DESKTOP-JBA79RT:~$ helm install aks-helloworld azure-samples/aks-helloworld --namespace ingress-basic
NAME: aks-helloworld
LAST DEPLOYED: Fri Mar 27 11:29:40 2020
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None

Let’s get the pod and forward traffic to it

builder@DESKTOP-JBA79RT:~$ kubectl get pods -n ingress-basic
NAME READY STATUS RESTARTS AGE
acs-helloworld-aks-helloworld-5d6f57bdb5-5nx95 1/1 Running 0 42s
builder@DESKTOP-JBA79RT:~$ kubectl port-forward acs-helloworld-aks-helloworld-5d6f57bdb5-5nx95 -n ingress-basic 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Handling connection for 8080

Create the Ingress and apply it.  Note we had to turn off validation

builder@DESKTOP-JBA79RT:~$ cat simple-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /mysimplehelloworld
        pathType: Prefix
        backend:
          serviceName: aks-helloworld
          servicePort: 80

builder@DESKTOP-JBA79RT:~$ kubectl apply -f simple-ingress.yaml
error: error validating "simple-ingress.yaml": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0]): unknown field "pathType" in io.k8s.api.networking.v1beta1.HTTPIngressPath; if you choose to ignore these errors, turn validation off with --validate=false
builder@DESKTOP-JBA79RT:~$ !v
vi simple-ingress.yaml
builder@DESKTOP-JBA79RT:~$ kubectl apply -f simple-ingress.yaml --validate=false
ingress.networking.k8s.io/aks-helloworld created
Properly serves traffic

This works, but no matter what sub-path we pass on the URL, it all gets rewritten to “/”
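For example, with rewrite-target set to “/”, every sub-path we request is rewritten to the application root, so both of these (using the external IP from earlier) should return the same hello-world page:

$ curl http://52.141.208.74/mysimplehelloworld
$ curl http://52.141.208.74/mysimplehelloworld/anything/else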

If we wanted to pass sub-paths through (more common with web services), we would use a capture-group rewrite

builder@DESKTOP-JBA79RT:~$ cat simple-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /mysimplehelloworld(/|$)(.*)
        backend:
          serviceName: aks-helloworld
          servicePort: 80
builder@DESKTOP-JBA79RT:~$ kubectl apply -f simple-ingress.yaml --validate=false
ingress.networking.k8s.io/aks-helloworld configured

We can see that reflected now:

Showing the base URL directing, but the Pod doesn't serve /asdf
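We can sanity-check the capture-group behaviour with curl; the first request serves the app root, while the second is rewritten to /asdf, which (as the caption notes) the pod does not serve:

$ curl -i http://52.141.208.74/mysimplehelloworld/
$ curl -i http://52.141.208.74/mysimplehelloworld/asdf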

I should point out that the cluster IP can be used by containers inside the cluster. We can test that with a headless VNC pod running in the cluster.

builder@DESKTOP-JBA79RT:~$ kubectl apply -f https://raw.githubusercontent.com/ConSol/docker-headless-vnc-container/master/kubernetes/kubernetes.headless-vnc.example.deployment.yaml
deployment.apps/headless-vnc created
$ kubectl port-forward headless-vnc-54f58c69f9-zxwbd 5901:5901
(password is “vncpassword”)
Here we see traffic from Pod to Pod
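If you want a purely command-line check of the same thing, you could exec into any pod and curl the service’s cluster IP. This is a sketch; the VNC image may or may not ship curl, and your pod name will differ:

# look up the hello-world service's cluster IP
$ kubectl get svc aks-helloworld -n ingress-basic -o jsonpath='{.spec.clusterIP}'
# then hit it from inside the VNC pod
$ kubectl exec -it headless-vnc-54f58c69f9-zxwbd -- curl -s http://<cluster-ip-from-above>/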

Another question we want answered: what happens when we upgrade? Which pods are going to change?

Let’s move the cluster to 1.16 (and throw caution to the wind)

Showing upgrade process

And the node pool
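The screenshots above show the portal flow, but the CLI equivalent is roughly this (a sketch; list the available versions first, and expect the nodes to cordon, drain, and cycle):

$ az aks get-upgrades -g idjaks02rg -n idjaks02 -o table
$ az aks upgrade -g idjaks02rg -n idjaks02 --kubernetes-version 1.16.7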

We can use $ watch kubectl get nodes to see when the nodes flip over to the new version…

Every 2.0s: kubectl get nodes --all-namespaces DESKTOP-JBA79RT: Fri Mar 27 12:48:34 2020
NAME STATUS ROLES AGE VERSION
aks-nodepool1-13062035-vmss000000 Ready agent 117m v1.15.10
aks-nodepool1-13062035-vmss000002 Ready agent 117m v1.15.10

builder@DESKTOP-JBA79RT:~$ kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
aks-nodepool1-13062035-vmss000000 Ready agent 130m v1.16.7
aks-nodepool1-13062035-vmss000002 Ready agent 130m v1.16.7

Let’s get the services

builder@DESKTOP-JBA79RT:~$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default headless-vnc NodePort 10.0.100.216 <none> 6901:32001/TCP,5901:32002/TCP 21m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 133m
ingress-basic aks-helloworld ClusterIP 10.0.128.75 <none> 80/TCP 91m
ingress-nginx ingress-nginx LoadBalancer 10.0.74.42 52.141.208.74 80:31389/TCP,443:30406/TCP 106m
kube-system dashboard-metrics-scraper ClusterIP 10.0.121.188 <none> 8000/TCP 13m
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 132m
kube-system kubernetes-dashboard ClusterIP 10.0.148.139 <none> 443/TCP 12m
kube-system metrics-server ClusterIP 10.0.35.19 <none> 443/TCP 132m

builder@DESKTOP-JBA79RT:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default headless-vnc-54f58c69f9-s8l9g 0/1 ContainerCreating 0 66s
ingress-basic acs-helloworld-aks-helloworld-5d6f57bdb5-vmwkt 1/1 Running 0 3m40s
ingress-nginx nginx-ingress-controller-7fcf8df75d-fjpql 1/1 Running 0 3m40s
kube-system azure-cni-networkmonitor-mj45l 1/1 Running 0 130m
kube-system azure-cni-networkmonitor-whvxf 1/1 Running 0 130m
kube-system azure-ip-masq-agent-5vqvl 1/1 Running 0 130m
kube-system azure-ip-masq-agent-x6xx7 1/1 Running 0 130m
kube-system azure-npm-cpk6m 1/1 Running 0 130m
kube-system azure-npm-zwv9x 1/1 Running 0 130m
kube-system coredns-698c77c5d7-4grnl 1/1 Running 0 3m40s
kube-system coredns-698c77c5d7-hqpc6 0/1 ContainerCreating 0 66s
kube-system coredns-autoscaler-7dcd5c4456-4xh28 1/1 Running 0 3m40s
kube-system dashboard-metrics-scraper-69d57d47b8-24bkx 1/1 Running 0 3m40s
kube-system kube-proxy-fxl25 1/1 Running 0 8m30s
kube-system kube-proxy-ndc9v 1/1 Running 0 8m16s
kube-system kubernetes-dashboard-7f7676f7b5-zg575 1/1 Running 0 66s
kube-system metrics-server-ff58ffc74-v67sx 1/1 Running 0 66s
kube-system tunnelfront-557955c4df-98mp5 1/1 Running 0 66s

We can actually see it kept the same cluster IPs - the services stayed put.

Another way to think about it: if a scaling event moves the pods around, they’ll get new IPs, but the services stay put.  Let’s delete a pod and verify that…

builder@DESKTOP-JBA79RT:~$ kubectl describe pod nginx-ingress-controller-7fcf8df75d-fjpql -n ingress-nginx | grep IP
IP: 10.240.0.14
builder@DESKTOP-JBA79RT:~$ kubectl delete pod nginx-ingress-controller-7fcf8df75d-fjpql -n ingress-nginx
pod "nginx-ingress-controller-7fcf8df75d-fjpql" deleted
builder@DESKTOP-JBA79RT:~$ kubectl get pods --all-namespaces | grep nginx
ingress-nginx nginx-ingress-controller-7fcf8df75d-sr5t5 1/1 Running 0 43s
builder@DESKTOP-JBA79RT:~$ kubectl describe pod nginx-ingress-controller-7fcf8df75d-sr5t5 -n ingress-nginx | grep IP
IP: 10.240.0.10

Services still work

builder@DESKTOP-JBA79RT:~$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default headless-vnc NodePort 10.0.100.216 <none> 6901:32001/TCP,5901:32002/TCP 26m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 138m
ingress-basic aks-helloworld ClusterIP 10.0.128.75 <none> 80/TCP 96m
ingress-nginx ingress-nginx LoadBalancer 10.0.74.42 52.141.208.74 80:31389/TCP,443:30406/TCP 111m
kube-system dashboard-metrics-scraper ClusterIP 10.0.121.188 <none> 8000/TCP 18m
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 138m
kube-system kubernetes-dashboard ClusterIP 10.0.148.139 <none> 443/TCP 18m
kube-system metrics-server ClusterIP 10.0.35.19 <none> 443/TCP 138m

Using Native Load Balancers directly

First, let’s fire up that sample service:

builder@DESKTOP-2SQ9NQM:~$ helm repo add azure-samples https://azure-samples.github.io/helm-charts/
"azure-samples" has been added to your repositories
builder@DESKTOP-2SQ9NQM:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "azure-samples" chart repository
...Successfully got an update from the "kedacore" chart repository
...Successfully got an update from the "banzaicloud-stable" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
builder@DESKTOP-2SQ9NQM:~$ kubectl create ns ingress-basic
namespace/ingress-basic created
builder@DESKTOP-2SQ9NQM:~$ helm install aks-helloworld azure-samples/aks-helloworld --namespace ingress-basic

NAME: aks-helloworld
LAST DEPLOYED: Fri Mar 27 19:11:55 2020
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None

Let’s check it out

builder@DESKTOP-2SQ9NQM:~$ kubectl get pods -n ingress-basic
NAME READY STATUS RESTARTS AGE
acs-helloworld-aks-helloworld-5d6f57bdb5-kjx2t 1/1 Running 0 32m
builder@DESKTOP-2SQ9NQM:~$ kubectl port-forward acs-helloworld-aks-helloworld-5d6f57bdb5-kjx2t -n ingress-basic 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Now let’s check our service

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 8h
ingress-basic aks-helloworld ClusterIP 10.0.17.88 <none> 80/TCP 60s
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8h
kube-system kubernetes-dashboard ClusterIP 10.0.45.224 <none> 80/TCP 8h
kube-system metrics-server ClusterIP 10.0.15.11 <none> 443/TCP 8h

We can force the aks-helloworld service to attach to an internal load balancer.  As you recall, when we checked our nodes they were 10.240.0.4, .5, and .6, so let’s pick a private IP in that CIDR.
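If you want to re-check those node addresses before picking a free IP, the INTERNAL-IP column of a wide node listing shows them:

$ kubectl get nodes -o wide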

builder@DESKTOP-2SQ9NQM:~$ cat ingress2.yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
  - port: 80
  selector:
    app: aks-helloworld

We can force this onto the existing service by redefining it outside of the chart (though kubectl will warn us):

builder@DESKTOP-2SQ9NQM:~$ kubectl apply -f ingress2.yaml -n ingress-basic
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/aks-helloworld configured

We see that did apply an internal load balancer:

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 8h
ingress-basic aks-helloworld LoadBalancer 10.0.17.88 10.240.0.25 80:30462/TCP 40m
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 8h
kube-system kubernetes-dashboard ClusterIP 10.0.45.224 <none> 80/TCP 8h
kube-system metrics-server ClusterIP 10.0.15.11 <none> 443/TCP 8h

We can also see a load balancer with that private IP in the node resource group in the Azure portal:

But since this is an internal LB, we’ll need a pod inside the vnet to check it.

builder@DESKTOP-2SQ9NQM:~$ kubectl apply -f https://raw.githubusercontent.com/ConSol/docker-headless-vnc-container/master/kubernetes/kubernetes.headless-vnc.example.deployment.yaml
deployment.apps/headless-vnc created
service/headless-vnc created
builder@DESKTOP-2SQ9NQM:~$ kubectl get pods --all-namespaces | grep vnc
default headless-vnc-54f58c69f9-w5n26 0/1 ContainerCreating 0 51s

But hitting the internal load balancer from that pod didn’t work.

The key issue is twofold: the Service’s namespace needs to match the app’s, and the selector must target the label on the pod, not the name of the service.

builder@DESKTOP-2SQ9NQM:~$ cat ingress2.yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: acs-helloworld-aks-helloworld
builder@DESKTOP-2SQ9NQM:~$ kubectl apply -f ingress2.yaml
service/aks-helloworld created

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default echoserver-lb LoadBalancer 10.0.146.124 10.240.0.8 8080:30702/TCP 24m
default headless-vnc NodePort 10.0.180.196 <none> 6901:32001/TCP,5901:32002/TCP 85m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10h
ingress-basic aks-helloworld LoadBalancer 10.0.99.210 10.240.0.25 8080:31206/TCP 2m47s
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 10h
kube-system kubernetes-dashboard ClusterIP 10.0.45.224 <none> 80/TCP 10h
kube-system metrics-server ClusterIP 10.0.15.11 <none> 443/TCP 10h
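With the namespace and selector corrected, the internal load balancer should now answer from inside the vnet, for example from the headless-vnc pod (pod name from above; this assumes curl is available in that image):

$ kubectl exec -it headless-vnc-54f58c69f9-w5n26 -- curl -s http://10.240.0.25:8080/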

We can do the same for External IPs as well:

builder@DESKTOP-2SQ9NQM:~$ cat ingress2.yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  namespace: ingress-basic
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: acs-helloworld-aks-helloworld

Now let’s get the services again

builder@DESKTOP-2SQ9NQM:~$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default echoserver-lb LoadBalancer 10.0.146.124 10.240.0.8 8080:30702/TCP 37m
default headless-vnc NodePort 10.0.180.196 <none> 6901:32001/TCP,5901:32002/TCP 98m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10h
ingress-basic aks-helloworld LoadBalancer 10.0.9.144 13.86.3.38 8080:31193/TCP 7m19s
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 10h
kube-system kubernetes-dashboard ClusterIP 10.0.45.224 <none> 80/TCP 10h
kube-system metrics-server ClusterIP 10.0.15.11 <none> 443/TCP 10h
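And since this variant got a public IP, we can check it from anywhere (IP taken from the listing above; yours will differ):

$ curl -I http://13.86.3.38:8080/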

Cleaning up

builder@DESKTOP-2SQ9NQM:~$ az aks delete -n idjaks03 -g idjaks03rg
Are you sure you want to perform this operation? (y/n): y
 - Running 
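That removes the kubenet cluster.  To tear everything else down, we would also delete the CNI cluster, both resource groups, and the service principals we created (a sketch):

# remove the second cluster and both resource groups
$ az aks delete -n idjaks02 -g idjaks02rg
$ az group delete -n idjaks02rg
$ az group delete -n idjaks03rg
# remove the service principal we exported earlier
$ az ad sp delete --id $SP_ID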

Summary

There are multiple ways one can route ingress traffic into your cluster.  You can serve traffic with NGINX or just tie a service directly to a cloud load balancer.  The advantage of the NGINX approach is that it is more cloud-agnostic - that setup will work with minor changes across various clouds and on-premises.  It does mean having an NGINX controller serving up Layer 7 traffic, but it can also serve as a TLS termination endpoint.

Using the Azure Load Balancer via a Kubernetes service with annotations takes out the NGINX middleman.  It should serve Layer 4 and Layer 7 traffic equally well.  However, the annotations are Azure-specific, so they will need to change per provider (not too hard; for instance, the AWS documentation has things like service.beta.kubernetes.io/aws-load-balancer-type: nlb).
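For instance, a rough AWS equivalent of the internal-LB Service we used might just swap the annotation. This is a sketch only, in a hypothetical aws-service-example.yaml; see the AWS cloud provider docs for the full set of options:

$ cat aws-service-example.yaml
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
  namespace: ingress-basic
  annotations:
    # AWS analogue of the Azure internal-LB annotation used above
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  selector:
    app: acs-helloworld-aks-helloworld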

The key things to recall are to verify your spec selectors and match your namespaces.

aks ingress k8s tutorial


Isaac Johnson

Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).
