Automating AKS Deployments like a Boss: Part 1

Published: Mar 10, 2019 by Isaac Johnson

Azure’s managed Kubernetes service (AKS) is surprisingly easy to get started with.  Let’s start our series by clearing up any FUD and firing up a little cluster of our own.

Creating the service

Like most things, just go to the portal and choose “+ Create Resource”.  Enter “Kubernetes Service” and choose Create.

Resource Creation Window
Creation settings

Let’s go through the settings a bit.

  1. The Cluster name is what will show up in the portal and be referred to in command-line arguments.
  2. The DNS prefix name is what shows up in the URLs for the public-facing endpoints.
  3. By default the node count is 3, which is reasonable, but for a demo a one-node cluster will serve our purposes.

Authorization:

Allow AKS to create a new Service Principal, and for now we will not enable RBAC. Role-Based Access Control is a topic we can dive into later; when you are in a position where many people will interact with the cluster directly, you’ll want to enable it.  If your cluster is private or the interactions are entirely driven through pipelines, then there is no need.

Networking:

  1. HTTP application routing: this is a quick start for ingress controllers. I always leave this off; however, for this demo I actually don’t mind the public DNS and exposing my service, so I’ll allow it.

Monitoring:

You can choose to disable Azure Monitor, but it’s pretty cheap. For instance, in East US it’s $2.30 per GB of ingested data AFTER the first 5 GB, and 10¢ per GB to retain data beyond 30 days.  Put another way, if you keep it under 5 GB and don’t retain beyond 30 days, the service is free.  For this demo I’ll leave it enabled.  We will cover this later.
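For example, at those rates a month in which you ingested 8 GB with the default 30-day retention would run roughly (8 − 5) × $2.30 ≈ $6.90, while a small demo cluster like this one should stay comfortably inside the free 5 GB.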

Tags:

These are often required by your organization for billing (such as usage with Cloud Custodian).

Our last step is to create the cluster.

Final creation screen
  • Download a Template will let you download the ARM template for re-use.
  • Click Create to create the cluster.
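If you prefer scripting the whole thing, the portal settings above map to a couple of CLI calls. This is a minimal sketch assuming the idj-aks names used later in this post, an East US region, and illustrative tag values; adjust it to your own names and your organization’s tagging conventions:

$ az group create --name idj-aks --location eastus --tags owner=isaac env=demo
$ az aks create --resource-group idj-aks --name idj-aks \
    --node-count 1 \
    --dns-name-prefix idj-aks \
    --enable-addons monitoring,http_application_routing \
    --generate-ssh-keys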

Interacting with your AKS cluster

First, install the Azure CLI and then log in to Azure on the command line.
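On a Mac, Homebrew can handle the install (the formula is simply azure-cli); other platforms have their own packages:

$ brew install azure-cli

Then log in with the device-code flow: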

$ az login --use-device-code
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FD2BKM76B to authenticate.
[
  {
    "cloudName": "AzureCloud",
    "id": "d955c0ba-13dc-44cf-a29a-8fed74cbb22d",
    "isDefault": true,
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "28c575f6-ade1-4838-8e7c-7e6d1ba0eb4a",
    "user": {
      "name": "isaac.johnson@gmail.com",
      "type": "user"
    }
  }
]

Next, install the Kubernetes CLI (kubectl).  You can use the Azure CLI to make it that much easier:

$ az aks install-cli
Downloading client to /usr/local/bin/kubectl from https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/darwin/amd64/kubectl
Please ensure that /usr/local/bin is in your search PATH, so the `kubectl` command can be found.
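A quick sanity check that kubectl landed on your PATH (the reported client version will vary):

$ kubectl version --client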

Next, get the credentials to access our cluster:

$ az aks get-credentials --name idj-aks --resource-group idj-aks
Merged "idj-aks" as current context in /Users/isaac.johnson/.kube/config

Lastly, check for any pods:

$ kubectl get pods
No resources found.
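No pods yet, which is expected on a fresh cluster. We can also confirm that our single agent node registered and is in a Ready state:

$ kubectl get nodes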

Helm

While it is not required to install Helm to use Kubernetes, Helm (and Tiller, the server-side component that Helm installs and interacts with) makes life a bit easier.

$ brew install kubernetes-helm
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> New Formulae
cafeobj gloo-ctl h3 i386-elf-grub libopenmpt reprepro sd stolon zydis
dockerize gnunet homeassistant-cli kcov re-flex riff serve v2ray-plugin
==> Updated Formulae
azure-cli ✔ boxes diffoscope goreleaser metricbeat roswell
git ✔ brew-php-switcher diffstat grafana micronaut rtags
node ✔ bro digdag grpc minio-mc rust
terraform ✔ buildifier dita-ot grpcurl mkl-dnn rustup-init
abcde burp django-completion gwyddion mosh sbcl
abcmidi bwm-ng dmd gx mosquitto scrcpy
activemq-cpp bzt dmenu gx-go mpd scw
aescrypt-packetizer c-blosc docfx handbrake mysql@5.6 sfml
afflib caffe docker hdf5 ncompress shadowsocks-libev
afio calabash docker-completion helmfile needle shellz
agedu calcurse double-conversion hub netcdf ship
akamai calicoctl dovecot i386-elf-binutils netpbm siege
algernon cargo-completion dpkg ibex nginx simutrans
ammonite-repl carrot2 draco imagemagick nifi singular
amqp-cpp cash-cli dscanner immortal node-build sip
angular-cli cassandra@2.1 dtc ipfs node@10 skaffold
annie cassandra@2.2 dub ispc node@6 skinny
ansible cataclysm duck jadx node@8 skopeo
apache-arrow cdk dwdiff jdupes nspr sn0int
apache-arrow-glib cdogs-sdl dwm jenkins nss solr
apache-flink certbot dyld-headers jenv ntopng solr@5.5
apache-zeppelin cfengine dynamips jetty ntp sourcery
apktool cflow eccodes jetty-runner numpy sphinx-doc
app-engine-java cfr-decompiler elasticsearch jhipster nwchem spotbugs
apt-dater cglm elasticsearch@5.6 jo ocrmypdf sqlmap
arangodb chakra elektra joplin odpi step
aravis check_postgres emscripten jruby offlineimap supersonic
arm-linux-gnueabihf-binutils checkstyle erlang kibana@5.6 ohcount svgo
armadillo chkrootkit erlang@20 kitchen-sync openssl@1.1 swagger-codegen
arpack cli53 eslint knot opentracing-cpp swagger-codegen@2
artifactory click exiftool kompose osquery swiftformat
asciidoctorj closure-compiler exploitdb kops packer swiftlint
asdf cmark-gfm faas-cli kube-ps1 paket syncthing
asio cocoapods fabio kubeprod pandoc telegraf
ask-cli cointop firebase-cli kubernetes-cli pandoc-citeproc terragrunt
at-spi2-atk collada-dom flake8 kubernetes-helm parallel tgui
at-spi2-core collector-sidecar flann kustomize parallelstl thors-serializer
atkmm commandbox flow lcov passenger thrift
ats2-postiats conan fluid-synth ldc pcre tile38
auditbeat configen fn lean-cli pdftoedn tippecanoe
aurora confluent-oss freetds leiningen petsc tmux-xpanes
autogen consul-template frugal libbi petsc-complex tomcat-native
avfs convox fx libcerf pgweb topgrade
aws-sdk-cpp coturn gambit-scheme libdvdread phoronix-test-suite tor
awscli couchdb gcc libgweather php typescript
axel cpprestsdk gdal libheif php@7.1 ucloud
azure-storage-cpp cproto gdk-pixbuf libosinfo picard-tools unrar
babeld crc32c geckodriver libphonenumber planck unzip
babl cromwell gecode libpq plank vagrant-completion
backupninja cryptominisat geocode-glib libpulsar platformio vala
bacula-fd cryptopp gerbil-scheme libqalculate pmd vault
balena-cli cscope get_iplayer librealsense ponyc wabt
batik csfml ghc libsecret pre-commit wartremover
bazel cython git-flow-avh libtensorflow precomp weaver
bee czmq git-lfs libuninameslist presto weboob
befunge93 darcs gitlab-runner libxlsxwriter primesieve wildfly-as
bettercap dartsim glib linkerd profanity wireguard-tools
bgpstream dav1d glslang liquibase prometheus wtf
bibtexconv davix gmsh lmod protobuf xmrig
bigloo dcd gmt logtalk protobuf-c xtensor
binaryen ddrescue gmt@4 lsd protobuf-swift yara
bind deark gnome-latex lxc protoc-gen-go ydcv
bindfs debianutils gnu-tar lzlib pulumi yle-dl
bit deja-gnu go mariadb python@2 you-get
blastem deployer godep mat2 redis@4.0 youtube-dl
bluetoothconnector dhex golang-migrate maxwell rhash zbackup
bmake dialog gomplate mesa rke
==> Renamed Formulae
ark -> velero
==> Deleted Formulae
gdnsd

==> Downloading https://homebrew.bintray.com/bottles/kubernetes-helm-2.13.0.mojave.bottle.tar.gz
######################################################################## 100.0%
==> Pouring kubernetes-helm-2.13.0.mojave.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
🍺 /usr/local/Cellar/kubernetes-helm/2.13.0: 51 files, 84.1MB
==> `brew cleanup` has not been run in 30 days, running now...
Removing: /usr/local/Cellar/gdbm/1.17... (20 files, 581.4KB)
Removing: /Users/isaac.johnson/Library/Caches/Homebrew/node--11.9.0.mojave.bottle.tar.gz... (13.4MB)
Removing: /usr/local/Cellar/openssl/1.0.2o_2... (1,792 files, 12.3MB)
Removing: /usr/local/Cellar/openssl/1.0.2p... (1,793 files, 12.3MB)
Removing: /usr/local/Cellar/python/3.7.0... (4,875 files, 103.2MB)
Removing: /usr/local/Cellar/readline/7.0.5... (46 files, 1.5MB)
Removing: /usr/local/Cellar/sqlite/3.24.0... (11 files, 3.5MB)
Removing: /Users/isaac.johnson/Library/Logs/Homebrew/gettext... (64B)
Removing: /Users/isaac.johnson/Library/Logs/Homebrew/pcre2... (64B)
Removing: /Users/isaac.johnson/Library/Logs/Homebrew/openssl... (64B)
Removing: /Users/isaac.johnson/Library/Logs/Homebrew/git... (64B)
Removing: /Users/isaac.johnson/Library/Logs/Homebrew/httrack... (64B)
Pruned 0 symbolic links and 2 directories from /usr/local


$ helm init
Creating /Users/isaac.johnson/.helm 
Creating /Users/isaac.johnson/.helm/repository 
Creating /Users/isaac.johnson/.helm/repository/cache 
Creating /Users/isaac.johnson/.helm/repository/local 
Creating /Users/isaac.johnson/.helm/plugins 
Creating /Users/isaac.johnson/.helm/starters 
Creating /Users/isaac.johnson/.helm/cache/archive 
Creating /Users/isaac.johnson/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /Users/isaac.johnson/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
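As an aside, this plain helm init works here because we created the cluster without RBAC. On an RBAC-enabled cluster, Tiller needs a service account with sufficient permissions first; a common (if permission-heavy) sketch looks like this:

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller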

Using our cluster

Let’s do a quick install of a two-tier app, SonarQube (OSS), just to see things work.

$ helm install stable/sonarqube
NAME: coiling-bee
LAST DEPLOYED: Sun Mar 10 10:13:05 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
coiling-bee-sonarqube-config 0 1s
coiling-bee-sonarqube-copy-plugins 1 1s
coiling-bee-sonarqube-install-plugins 1 1s
coiling-bee-sonarqube-tests 1 1s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
coiling-bee-postgresql Pending default 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
coiling-bee-postgresql-5d7b8fd7f7-srqhk 0/1 Pending 0 1s
coiling-bee-sonarqube-76d8dcc899-kxbbm 0/1 ContainerCreating 0 1s
==> v1/Secret
NAME TYPE DATA AGE
coiling-bee-postgresql Opaque 1 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coiling-bee-postgresql ClusterIP 10.0.83.230 <none> 5432/TCP 1s
coiling-bee-sonarqube LoadBalancer 10.0.103.5 <pending> 9000:30510/TCP 1s
==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
coiling-bee-postgresql 0/1 1 0 1s
coiling-bee-sonarqube 0/1 1 0 1s
NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w coiling-bee-sonarqube'
  export SERVICE_IP=$(kubectl get svc --namespace default coiling-bee-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9000

Let’s get that IP address:

$ kubectl get svc --namespace default coiling-bee-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
168.61.164.98

Checking http://168.61.164.98:9000 times out. Why?

Checking the state of our Deployment

First we can check whether the pods have been created yet (again, we chose the tiniest cluster, so I expect things might take a bit longer as a result):

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
coiling-bee-postgresql-5d7b8fd7f7-srqhk 0/1 ContainerCreating 0 4m33s
coiling-bee-sonarqube-76d8dcc899-kxbbm 0/1 ContainerCreating 0 4m33s

Checking a bit later:
AHD-MBP13-048:current isaac.johnson$ kubectl get pods
NAME READY STATUS RESTARTS AGE
coiling-bee-postgresql-5d7b8fd7f7-srqhk 1/1 Running 0 7m33s
coiling-bee-sonarqube-76d8dcc899-kxbbm 0/1 Running 1 7m33s

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
coiling-bee-postgresql-5d7b8fd7f7-srqhk 1/1 Running 0 8m7s
coiling-bee-sonarqube-76d8dcc899-kxbbm 1/1 Running 1 8m7s

Let’s check our ingress IP and see if Sonar is up:

Helm-installed SonarQube running: always change the admin password immediately when installing this way (default password: admin)
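If you would rather verify from the terminal before opening a browser, a quick curl against the service IP we retrieved above should return an HTTP 200 once SonarQube has finished starting up:

$ curl -sI http://168.61.164.98:9000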

Cleanup and Costs:

Perhaps the biggest reason I am a bit of an Azure fanboy, despite working in all the public cloud providers, is how incredibly easy it is to check on costs and tear down the things you create.

We can go to the resource group and check costs, which, in the 30 minutes it took to create a cluster, launch Tiller and Sonar, and write these instructions, have amounted to $0.00:

Resource Cost Window

Deleting a Helm-installed release

If we wish to just delete a Helm-installed release, we can free up resources on our cluster using the command line.

Find existing running releases:

$ helm list
NAME         REVISION  UPDATED                   STATUS    CHART             APP VERSION  NAMESPACE
coiling-bee  1         Sun Mar 10 10:13:05 2019  DEPLOYED  sonarqube-0.15.0  7.6          default

Then delete the named release:

$ helm delete coiling-bee
release "coiling-bee" deleted
AHD-MBP13-048:current isaac.johnson$ kubectl get pods
NAME READY STATUS RESTARTS AGE
coiling-bee-postgresql-5d7b8fd7f7-srqhk 0/1 Terminating 0 58m
coiling-bee-sonarqube-76d8dcc899-kxbbm 0/1 Terminating 1 58m
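Note that with Helm 2 a plain delete keeps the release’s history around for rollbacks; if you want the name freed up completely, the --purge flag removes that record as well:

$ helm delete --purge coiling-bee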

If we are totally done with this work, we can easily remove the entire resource group which should remove the cluster and anything deployed within it.

Just pull up the resource group, then choose Overview and click “Delete resource group”.  

Resource Group Delete from Overview
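The same teardown works from the CLI if you prefer, again assuming the idj-aks resource group name from earlier:

$ az group delete --name idj-aks --yes --no-wait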

Summary:

In a few minutes we created an entire AKS cluster in Azure.  We installed the command-line tools, including Helm, and launched a stable chart.   That chart acquired a public IP behind a load balancer, and we verified we could access a usable two-tier app.

When we were done, we showed how to delete a release installed by Helm and, lastly, how to remove the entire resource group so as not to incur any further costs.

tutorial k8s


Isaac Johnson

Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).
