Automating AKS Deployments like a Boss: Part 9 (AKS AAD Refinements)

Published: Aug 8, 2019 by Isaac Johnson

In our last post we covered setting up AKS with AAD integration as well as namespaces and service accounts. While we did launch the cluster with Terraform, almost all of the post-launch configuration was done manually via YAML files. Ideally we want as much of that automated as possible, and we can do most of it with Terraform's kubernetes provider. Also, if all you have is a namespace and a token, how can you use that with Helm? It is actually possible to create a working kubeconfig that Helm can use.

Terraform the YAMLs

Most, though not all, of this configuration can be handled in Terraform using the “kubernetes” provider.  You’ll see some guides use a “null_resource” to invoke kubectl manually, but where we can, let’s stick with the main provider.

This Terraform assumes you’ve already logged into the cluster with --admin credentials.  It is quite possible a further refinement would be to bundle this code in with the AKS cluster code to make it one logical deployment.
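For reference, a typical way to fetch those admin credentials is with the Azure CLI (the resource group and cluster names below are just placeholders):

az aks get-credentials --resource-group MY-AKS-RG --name MY-AKS-CLUSTER --admin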

Let’s break down what we plan to cover:

  1. DevOps service account setup:
  • create cluster role “my-devops-role”
  • create namespace “devops”
  • create service account “my-devops-sa” in the devops namespace
  • create role binding “devops-my-devops-sa-rolebinding” to connect the above service account with the cluster role
  • create a matching role binding in the kube-system namespace so the devops service account has the same access there
  2. AD group configuration:
  • create cluster role binding “my-clusteradmin-clusterrolebinding” to grant the “MY-Azure-DevOps” AD group cluster-admin permissions
  3. Sandbox (dev sandbox) setup:
  • create namespace “sandbox”
  • create service account “my-sandbox-sa” in the sandbox namespace
  • create role binding “sandbox-my-sandbox-sa-rolebinding” to associate the service account with the devops role
  • create role binding “sandbox-my-sandbox-sa-devops-rolebinding” to associate the AD groups “MY-Azure-DevOps” and “MY-Azure-Developers” with the cluster-admin role on the sandbox namespace
  4. Helm/Tiller setup:
  • create service account “tiller”
  • create cluster role binding “tiller-cluster-role-binding” to associate the tiller service account with the cluster-admin role

The Terraform

First we just need to set the provider - this is easy since we assume a kubeconfig is already in place:

provider "kubernetes" {}

Next, let’s create the devops-role and namespace:

resource "kubernetes_cluster_role" "devops-role" {
  metadata {
    name = "my-devops-role"
  }

  rule {
    api_groups = [""]
    resources = ["deployments","pods","pods/exec","pods/portforward","secrets","nodes","services","replicasets","daemonsets"]
    verbs = ["create","delete","deletecollection","get","list","patch","update","watch","logs"]
  }

  rule {
    api_groups = ["apps"]
    resources = ["deployments"]
    verbs = ["create","get","list","watch"]
  }

  rule {
    api_groups = ["extensions"]
    resources = ["deployments"]
    verbs = ["create","get","list","watch"]
  }
}

resource "kubernetes_namespace" "devops" {
  metadata {
    annotations = {
      name = "devops-annotation"
    }

    labels = {
      mylabel = "devops-namespace-value"
    }

    name = "devops"
  }
}

resource "kubernetes_service_account" "my-devops-sa" {
  metadata {
    name = "my-devops-sa"
    namespace = "${kubernetes_namespace.devops.metadata.0.name}"
  }
}

Now that we have a role, namespace and service account, let’s start hooking them together with role bindings.

resource "kubernetes_role_binding" "example" {
  metadata {
    name = "${kubernetes_namespace.devops.metadata.0.name}-${kubernetes_service_account.my-devops-sa.metadata.0.name}-rolebinding"
    namespace = "${kubernetes_namespace.devops.metadata.0.name}"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "${kubernetes_cluster_role.devops-role.metadata.0.name}"
  }

  subject {
    kind = "User"
    name = "system:serviceaccount:${kubernetes_namespace.devops.metadata.0.name}:${kubernetes_service_account.my-devops-sa.metadata.0.name}"
    api_group = ""
  }
  subject {
    kind = "ServiceAccount"
    name = "${kubernetes_service_account.my-devops-sa.metadata.0.name}"
    namespace = "${kubernetes_namespace.devops.metadata.0.name}"
  }
}

resource "kubernetes_role_binding" "tiller-devops-sa-rolebinding" {
  metadata {
    name = "${kubernetes_namespace.devops.metadata.0.name}-${kubernetes_service_account.my-devops-sa.metadata.0.name}-rolebinding"
    namespace = "kube-system"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "${kubernetes_cluster_role.devops-role.metadata.0.name}"
  }

  subject {
    kind = "User"
    name = "system:serviceaccount:${kubernetes_namespace.devops.metadata.0.name}:${kubernetes_service_account.my-devops-sa.metadata.0.name}"
    api_group = ""
  }
  subject {
    kind = "ServiceAccount"
    name = "${kubernetes_service_account.my-devops-sa.metadata.0.name}"
    namespace = "${kubernetes_namespace.devops.metadata.0.name}"
  }
}

Another thing we can tackle now is adding the MY-Azure-DevOps group to the cluster-admin role.  This will allow users in that AD group to use their own logins to administer this cluster in the future.

# DevOps AD Group user setup

resource "kubernetes_cluster_role_binding" "MY-Azure-Devops-CRB" {
  metadata {
    name = "my-clusteradmin-clusterrolebinding"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "cluster-admin"
  }
  subject {
    # MY-Azure-DevOps
    kind = "Group"
    name = "84c49f53-1bdf-48fb-a85c-46646b93823d"
    api_group = "rbac.authorization.k8s.io"
  }
}
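Note that the subject name above is the AD group’s object ID, not its display name. If you need to look one up, the Azure CLI can help (the group display name here is just our example group; newer CLI versions return the field as “id” rather than “objectId”):

az ad group show --group "MY-Azure-DevOps" --query objectId -o tsv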

Sandbox

Similar to our devops work, we need to create a sandbox namespace and service account.  We could reuse the roles we already made for devops if we want tighter controls, or just give the users cluster-admin access to their own sandbox namespace.  For now, let’s allow the users full access to sandbox.  However, we will still apply the devops role to the service account (so they can mimic and validate access functionality).

 resource "kubernetes_namespace" "sandbox" {
  metadata {
    annotations = {
      name = "sandbox-ns-annotation"
    }

    labels = {
      mylabel = "sandbox-namespace-value"
    }

    name = "sandbox"
  }
}

resource "kubernetes_service_account" "my-sandbox-sa" {
  metadata {
    name = "my-sandbox-sa"
    namespace = "${kubernetes_namespace.sandbox.metadata.0.name}"
  }
}


resource "kubernetes_role_binding" "my-sandbox-clusterrolebinding" {
  metadata {
    name = "${kubernetes_namespace.sandbox.metadata.0.name}-${kubernetes_service_account.my-sandbox-sa.metadata.0.name}-rolebinding"
    namespace = "${kubernetes_namespace.sandbox.metadata.0.name}"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "${kubernetes_cluster_role.devops-role.metadata.0.name}"
  }

  subject {
    kind = "User"
    name = "system:serviceaccount:${kubernetes_namespace.sandbox.metadata.0.name}:${kubernetes_service_account.my-sandbox-sa.metadata.0.name}"
    api_group = ""
  }
  subject {
    kind = "ServiceAccount"
    name = "${kubernetes_service_account.my-sandbox-sa.metadata.0.name}"
    namespace = "${kubernetes_namespace.sandbox.metadata.0.name}"
  }
}

resource "kubernetes_role_binding" "my-sandboxdevops-clusterrolebinding" {
  metadata {
    name = "${kubernetes_namespace.sandbox.metadata.0.name}-${kubernetes_service_account.my-sandbox-sa.metadata.0.name}-devops-rolebinding"
    namespace = "${kubernetes_namespace.sandbox.metadata.0.name}"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "cluster-admin"
  }

  subject {
    # MY-Azure-DevOps
    api_group = "rbac.authorization.k8s.io"
    kind = "Group"
    name = "84c49f53-1bdf-48fb-a85c-46646b93823d"
  }
  subject {
    # MY-Azure-Developers
    api_group = "rbac.authorization.k8s.io"
    kind = "Group"
    name = "c08b0d21-caed-4f56-8ca6-3b33dd23550b"
  }
}

Helm/Tiller

This next block could be used anywhere, not just AKS with AAD.  In all my past guides, when we had an RBAC-enabled cluster, I provided a block of cut-and-paste YAML; here is the equivalent of the Helm/Tiller RBAC rules in Terraform.

resource "kubernetes_service_account" "tiller-sa" {
  metadata {
    name = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "tiller-cluster-role-binding" {
  metadata {
    name = "${kubernetes_service_account.tiller-sa.metadata.0.name}-cluster-role-binding"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "cluster-admin"
  }
  subject {
    kind = "ServiceAccount"
    name = "tiller"
    namespace = "kube-system"
  }
}

Lastly, let’s use a null_resource to run helm init, which installs Tiller.  One could also use “helm init --upgrade” to upgrade Tiller if it already exists.

resource "null_resource" "helm_init" {
  provisioner "local-exec" {
    command = "helm init"
  }
}
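If you want Tiller to actually run under the tiller service account we created above (and to handle the case where Tiller is already installed), a variant of that same resource could look like this instead:

resource "null_resource" "helm_init" {
  provisioner "local-exec" {
    # install (or upgrade) Tiller and bind it to the tiller service account created above
    command = "helm init --service-account ${kubernetes_service_account.tiller-sa.metadata.0.name} --upgrade"
  }
}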

At this point you have Terraform to do the work.  You can run:

  1. terraform init
  2. terraform plan -out tfplan
  3. terraform apply tfplan

Notes

If you want to get the service account token after install, you can use this one-liner (“base64 -D” is the macOS flag; on Linux use lowercase “-d”):

kubectl -n devops get secret $(kubectl -n devops get secret | grep my-devops | awk '{print $1}') -o json | jq -r '.data.token' | base64 -D
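Alternatively, you can ask the service account object directly for the name of its token secret rather than grepping the secret list:

SECRET_NAME=$(kubectl -n devops get sa my-devops-sa -o jsonpath='{.secrets[0].name}')
kubectl -n devops get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -D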

Part 2: Getting a kubeconfig for Helm

Perhaps the biggest challenge in using Helm and Tiller with a service account is that Helm really insists on a kubeconfig.  Often we only have a token, and that can be a barrier to enjoying the functionality Helm provides.

Here are the steps you can use to create the kubeconfig from the service account we made above.

Pre-requisites

First we need some prerequisites - namely the cluster name, endpoint and token.  We also need the ca.crt (so we don’t have to pass the insecure-skip-tls-verify option).

mkdir tmp
kubectl get secret -n devops $(kubectl -n devops get secret | grep my-devops | awk '{print $1}') -o json | jq -r '.data["ca.crt"]' | base64 -D > tmp/ca.crt
kubectl get secret -n devops $(kubectl -n devops get secret | grep my-devops | awk '{print $1}') -o json | jq -r '.data["token"]' | base64 -D > tmp/token
export context=$(kubectl config current-context)
export CLUSTER_NAME=$(kubectl config get-contexts "$context" | awk '{print $3}' | tail -n 1)
export ENDPOINT=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"${CLUSTER_NAME}\")].cluster.server}")

If we did that right, running export with no arguments will show the vars properly set:

declare -x CLUSTER_NAME="MY-AKS-CLUSTER"
declare -x ENDPOINT="https://my-aks-cluster-a7ca578.hcp.northcentralus.azmk8s.io:443"
declare -x context="MY-AKS-CLUSTER-admin"

Creating the kubeconfig

First set the cluster name:

$ kubectl config set-cluster "$CLUSTER_NAME" --kubeconfig=tmp/my-devops-sa-config --server="$ENDPOINT" --certificate-authority="tmp/ca.crt" --embed-certs=true
Cluster "MY-AKS-CLUSTER" set.

Then set the token credentials:

$ kubectl config set-credentials "my-devops-sa" --kubeconfig=tmp/my-devops-sa-config --token=`cat tmp/token`
User "my-devops-sa" set

Lastly, create the context entry, then set it as the current context:

$ kubectl config set-context "my-devops-sa" --kubeconfig=tmp/my-devops-sa-config --cluster="$CLUSTER_NAME" --user="my-devops-sa" --namespace="devops"
Context "my-devops-sa" created.

$ kubectl config use-context "my-devops-sa" --kubeconfig=tmp/my-devops-sa-config
Switched to context "my-devops-sa".
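Before swapping it in as your default config, you can sanity-check the new file directly with the --kubeconfig flag:

$ kubectl --kubeconfig=tmp/my-devops-sa-config get pods -n devops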

Testing

Now swap the new kubeconfig in as the default (back up your existing ~/.kube/config first if you want your admin context back later) and try both kubectl and helm:

$ cp tmp/my-devops-sa-config ~/.kube/config
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
docker-deploy-764cfc8dc9-54vdm 1/1 Running 0 34h
$ helm install stable/sonarqube
NAME: liquid-mongoose
LAST DEPLOYED: Wed Jul 31 15:56:28 2019
NAMESPACE: azdo
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
liquid-mongoose-sonarqube-config 0 0s
liquid-mongoose-sonarqube-copy-plugins 1 0s
liquid-mongoose-sonarqube-install-plugins 1 0s
liquid-mongoose-sonarqube-tests 1 0s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
liquid-mongoose-postgresql Pending default 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
liquid-mongoose-postgresql-c08b0d21-caed 0/1 Pending 0 0s
liquid-mongoose-sonarqube-abb75162-97f8 0/1 Init:0/1 0 0s

==> v1/Secret
NAME TYPE DATA AGE
liquid-mongoose-postgresql Opaque 1 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
liquid-mongoose-postgresql ClusterIP 10.0.31.34 <none> 5432/TCP 0s
liquid-mongoose-sonarqube LoadBalancer 10.0.14.207 <pending> 9000:31788/TCP 0s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
liquid-mongoose-postgresql 0/1 1 0 0s
liquid-mongoose-sonarqube 0/1 1 0 0s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w liquid-mongoose-sonarqube'
  export SERVICE_IP=$(kubectl get svc --namespace azdo liquid-mongoose-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9000

Cleaning up

$ helm delete liquid-mongoose
release "liquid-mongoose" deleted

Notes

As far as using this kubeconfig goes, I would recommend base64-encoding it and storing it in a secret store.  The CI/CD system you use can then pull and decode it on the fly.

e.g.

echo VGhpcyBpcyB0aGUgd2F5IHRoZSB3b3JsZCBlbmRzClRoaXMgaXMgdGhlIHdheSB0aGUgd29ybGQgZW5kcwpUaGlzIGlzIHRoZSB3YXkgdGhlIHdvcmxkIGVuZHMKTm90IHdpdGggYSBiYW5nIGJ1dCBhIHdoaW1wZXIu | base64 -D > ~/.kube/config
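For example, with Azure Key Vault (the vault name here is hypothetical), you could store and later retrieve the encoded config roughly like this:

# store the base64-encoded kubeconfig as a Key Vault secret
az keyvault secret set --vault-name my-devops-kv --name devops-kubeconfig --value "$(base64 < tmp/my-devops-sa-config | tr -d '\n')"

# later, in the pipeline, pull it down and decode it
az keyvault secret show --vault-name my-devops-kv --name devops-kubeconfig --query value -o tsv | base64 -D > ~/.kube/config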


Isaac Johnson


Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).
