The Other Clouds: Linode and Kubernetes

Published: May 8, 2019 by Isaac Johnson

So far we have dug in quite a bit into Azure's Kubernetes Service (AKS). This is primarily because I'm a bit of a fan and use it daily.  But what other offerings are out there?  Certainly Amazon has its EKS and Google its GKE, and we'll get to those eventually, but what else is there?  While some certainly use minikube for local testing and kubespray (which we covered already) for a cloud-agnostic mix of servers, there are some really great cloud providers outside the big three.  Today we'll dig into one of them: Linode.

Linode was founded by Chris Aker back in 2003 with a focus on Linux virtualization, first on UML, then Xen in 2008, and finally KVM by 2015.  They have datacenters worldwide operating in 9 distinct regions, from Tokyo to Frankfurt to Dallas.  Their core is compute, but they also offer Linode Backup (2009), NodeBalancers, and, as of 2013, Linode Longview for server analysis.  It's a 16-year-old company that books over $100 million in revenue.  People generally like them because they are stable and run good hardware.

I became familiar with them only after visiting their vendor booth, next to Google's, at HashiConf '18.  They were unassuming and friendly and didn't try to oversell their offering: essentially a fast, inexpensive, Linux-based cloud.

Getting Started

First, much props to them: their linode-cli is just a wrapper around Terraform (and there is a Markdown guide in the GitHub repo that I used to get started).  It therefore requires Terraform (TF) to be downloaded and available on your PATH (https://www.terraform.io/downloads.html).
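
If you want to confirm Terraform is actually on your PATH before going further, a quick sanity check from the shell (the output will of course differ on your machine):

$ which terraform
$ terraform version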

Once TF is installed, you can then install the linode-cli:

$ brew tap linode/cli
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 3 taps (homebrew/core, homebrew/cask and caskroom/versions).
==> New Formulae
$ brew install linode-cli
Updating Homebrew...
==> Installing linode-cli from linode/cli
==> Downloading https://github.com/linode/cli/archive/v1.4.8.tar.gz
==> Downloading from https://codeload.github.com/linode/cli/tar.gz/v1.4.8
…
==> perl Makefile.PL PREFIX=/usr/local/Cellar/linode-cli/1.4.8
==> make install
🍺 /usr/local/Cellar/linode-cli/1.4.8: 740 files, 8.1MB, built in 1 minute 59 seconds


$ sudo pip install linode-cli
Password:
The directory '/Users/isaac.johnson/Library/Logs/pip' or its parent directory is not owned by the current user and the debug log has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
…
  Running setup.py install for terminaltables
  Running setup.py install for colorclass
  Running setup.py install for PyYAML
Successfully installed PyYAML-5.1 certifi-2019.3.9 chardet-3.0.4 colorclass-2.2.0 enum34-1.1.6 linode-cli-2.3.0 requests-2.21.0 terminaltables-3.1.0 urllib3-1.24.3
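
Before moving on, it's worth a quick check that the CLI actually landed on your PATH (the --version flag should work with the pip-installed linode-cli; if not, linode-cli --help will confirm the install either way):

$ which linode-cli
$ linode-cli --version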

Next we do a Terraform init:

$ terraform init
Initializing modules...
- module.k8s
  Found version 0.1.0 of linode/k8s/linode on registry.terraform.io
  Getting source "linode/k8s/linode"
- module.k8s.masters
  Getting source "./modules/masters"
- module.k8s.nodes
  Getting source "./modules/nodes"
- module.k8s.masters.master_instance
  Getting source "../instances"
- module.k8s.nodes.node
  Getting source "../instances"

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "null" (2.1.2)...
- Downloading plugin for provider "linode" (1.4.0)...
- Downloading plugin for provider "external" (1.0.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.null: version = "~> 2.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
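
Note the hint in that output about version constraints. If you want to avoid surprise provider upgrades on a later init, you can pin the provider it calls out in a small .tf file; a minimal sketch, using the constraint the output suggested (do the same for any other providers init lists for you):

$ cat versions.tf
provider "null" {
  version = "~> 2.1"
}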

And create a new workspace:

$ terraform workspace new linode
Created and switched to workspace "linode"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

Before we move on, we are going to need our API key.  Assuming you have an account, get your API key from the Profile section of the Linode Manager under the "API Keys" tab:

You can now use that token in a main.tf we’ll create:

$ cat main.tf
module "k8s" {
  source = "linode/k8s/linode"
  version = "0.1.0"

  linode_token = "1ab1234123412341234123412341234123412341234254"
}
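
A quick aside: I've pasted the token inline for clarity, but you probably don't want a live API token sitting in a .tf file you might commit. One option, sketched below using Terraform's standard TF_VAR_ environment-variable mechanism, is to declare a variable and feed the token from your shell instead:

$ cat main.tf
variable "linode_token" {}

module "k8s" {
  source  = "linode/k8s/linode"
  version = "0.1.0"

  linode_token = "${var.linode_token}"
}

$ export TF_VAR_linode_token=<your token>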

We have only a few more steps left.  First, you can decide on a region (though I'll note that, at the moment, both the CLI and the Terraform module ignore it) and a VM size.

Here is a quick list of VM sizes:
 1 - g6-nanode-1
 2 - g6-standard-1
 3 - g6-standard-2
 4 - g6-standard-4
 5 - g6-standard-6
 6 - g6-standard-8
 7 - g6-standard-16
 8 - g6-standard-20
 9 - g6-standard-24
 10 - g6-standard-32
 11 - g6-highmem-1
 12 - g6-highmem-2
 13 - g6-highmem-4
 14 - g6-highmem-8
 15 - g6-highmem-16
 16 - g6-dedicated-2
 17 - g6-dedicated-4
 18 - g6-dedicated-8
 19 - g6-dedicated-16
 20 - g6-dedicated-32
 21 - g6-dedicated-48
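
If you'd rather pull the current plan (and region) list yourself instead of trusting my snapshot above, the CLI can query the API for it. The subcommand names below are from memory, so double-check them against linode-cli --help:

$ linode-cli linodes types
$ linode-cli regions list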

Lastly, make sure to add your SSH key (both approaches below will require it):

$ ssh-add /Users/isaac.johnson/.ssh/id_rsa
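
You can confirm the key is loaded into the agent with:

$ ssh-add -l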

Now we can launch this one of two ways:

[1]. Use the linode-cli

linode-cli k8s-alpha create --node-type g6-standard-2 --nodes 2 myFirstCluster

However, the first time through I had some serious issues with the CLI (around PVCs), so I might suggest just skipping the CLI and using Terraform directly.

[2]. Use Terraform directly

First, do a Terraform plan:

$ terraform plan -var region=us-central -var server_type_master=g6-standard-2 -var nodes=2 -var server_type_node=g6-standard-2 -out plan.tf
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.linode_instance_type.type: Refreshing state...
data.linode_instance_type.type: Refreshing state...

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  + module.k8s.null_resource.local_kubectl
      id: <computed>

  + module.k8s.null_resource.preflight-checks
      id: <computed>
      triggers.%: "1"
      triggers.key: "b7cdd23c-d447-681e-3ae3-eaa2439f5e3e"

 <= module.k8s.module.masters.data.external.kubeadm_join
      id: <computed>
      program.#: "1"
      program.0: "/Users/isaac.johnson/Workspaces/linode-k8s/.terraform/modules/cd0a831a87484215950b49d7082d5d95/scripts/local/kubeadm-token.sh"
      query.%: <computed>
      result.%: <computed>

  + module.k8s.module.masters.null_resource.masters_provisioner
      id: <computed>

  + module.k8s.module.nodes.null_resource.kubeadm_join[0]
      id: <computed>

  + module.k8s.module.nodes.null_resource.kubeadm_join[1]
      id: <computed>

  + module.k8s.module.nodes.null_resource.kubeadm_join[2]
      id: <computed>

  + module.k8s.module.masters.module.master_instance.linode_instance.instance
      id: <computed>
      alerts.#: <computed>
      backups.#: <computed>
      backups_enabled: <computed>
      boot_config_label: <computed>
      config.#: "1"
      config.0.devices.#: "1"
      config.0.devices.0.sda.#: "1"
      config.0.devices.0.sda.0.disk_id: <computed>
      config.0.devices.0.sda.0.disk_label: "boot"
      config.0.devices.0.sdb.#: <computed>
      config.0.devices.0.sdc.#: <computed>
      config.0.devices.0.sdd.#: <computed>
      config.0.devices.0.sde.#: <computed>
      config.0.devices.0.sdf.#: <computed>
      config.0.devices.0.sdg.#: <computed>
      config.0.devices.0.sdh.#: <computed>
      config.0.helpers.#: <computed>
      config.0.kernel: "linode/direct-disk"
      config.0.label: "master"
      config.0.root_device: <computed>
      config.0.run_level: "default"
      config.0.virt_mode: "paravirt"
      disk.#: "1"
      disk.0.authorized_keys.#: "1"
      disk.0.authorized_keys.0: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
      disk.0.filesystem: <computed>
      disk.0.id: <computed>
      disk.0.image: "linode/containerlinux"
      disk.0.label: "boot"
      disk.0.read_only: <computed>
      disk.0.size: "81920"
      disk.0.stackscript_data.%: <computed>
      disk.0.stackscript_id: <computed>
      ip_address: <computed>
      ipv4.#: <computed>
      ipv6: <computed>
      label: "linode-master-1"
      private_ip: "true"
      private_ip_address: <computed>
      region: "eu-west"
      specs.#: <computed>
      status: <computed>
      swap_size: <computed>
      type: "g6-standard-2"
      watchdog_enabled: "true"

  + module.k8s.module.nodes.module.node.linode_instance.instance[0]
      id: <computed>
      alerts.#: <computed>
      backups.#: <computed>
      backups_enabled: <computed>
      boot_config_label: <computed>
      config.#: "1"
      config.0.devices.#: "1"
      config.0.devices.0.sda.#: "1"
      config.0.devices.0.sda.0.disk_id: <computed>
      config.0.devices.0.sda.0.disk_label: "boot"
      config.0.devices.0.sdb.#: <computed>
      config.0.devices.0.sdc.#: <computed>
      config.0.devices.0.sdd.#: <computed>
      config.0.devices.0.sde.#: <computed>
      config.0.devices.0.sdf.#: <computed>
      config.0.devices.0.sdg.#: <computed>
      config.0.devices.0.sdh.#: <computed>
      config.0.helpers.#: <computed>
      config.0.kernel: "linode/direct-disk"
      config.0.label: "node"
      config.0.root_device: <computed>
      config.0.run_level: "default"
      config.0.virt_mode: "paravirt"
      disk.#: "1"
      disk.0.authorized_keys.#: "1"
      disk.0.authorized_keys.0: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
      disk.0.filesystem: <computed>
      disk.0.id: <computed>
      disk.0.image: "linode/containerlinux"
      disk.0.label: "boot"
      disk.0.read_only: <computed>
      disk.0.size: "81920"
      disk.0.stackscript_data.%: <computed>
      disk.0.stackscript_id: <computed>
      ip_address: <computed>
      ipv4.#: <computed>
      ipv6: <computed>
      label: "linode-node-1"
      private_ip: "true"
      private_ip_address: <computed>
      region: "eu-west"
      specs.#: <computed>
      status: <computed>
      swap_size: <computed>
      type: "g6-standard-2"
      watchdog_enabled: "true"

  + module.k8s.module.nodes.module.node.linode_instance.instance[1]
      id: <computed>
      alerts.#: <computed>
      backups.#: <computed>
      backups_enabled: <computed>
      boot_config_label: <computed>
      config.#: "1"
      config.0.devices.#: "1"
      config.0.devices.0.sda.#: "1"
      config.0.devices.0.sda.0.disk_id: <computed>
      config.0.devices.0.sda.0.disk_label: "boot"
      config.0.devices.0.sdb.#: <computed>
      config.0.devices.0.sdc.#: <computed>
      config.0.devices.0.sdd.#: <computed>
      config.0.devices.0.sde.#: <computed>
      config.0.devices.0.sdf.#: <computed>
      config.0.devices.0.sdg.#: <computed>
      config.0.devices.0.sdh.#: <computed>
      config.0.helpers.#: <computed>
      config.0.kernel: "linode/direct-disk"
      config.0.label: "node"
      config.0.root_device: <computed>
      config.0.run_level: "default"
      config.0.virt_mode: "paravirt"
      disk.#: "1"
      disk.0.authorized_keys.#: "1"
      disk.0.authorized_keys.0: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
      disk.0.filesystem: <computed>
      disk.0.id: <computed>
      disk.0.image: "linode/containerlinux"
      disk.0.label: "boot"
      disk.0.read_only: <computed>
      disk.0.size: "81920"
      disk.0.stackscript_data.%: <computed>
      disk.0.stackscript_id: <computed>
      ip_address: <computed>
      ipv4.#: <computed>
      ipv6: <computed>
      label: "linode-node-2"
      private_ip: "true"
      private_ip_address: <computed>
      region: "eu-west"
      specs.#: <computed>
      status: <computed>
      swap_size: <computed>
      type: "g6-standard-2"
      watchdog_enabled: "true"

  + module.k8s.module.nodes.module.node.linode_instance.instance[2]
      id: <computed>
      alerts.#: <computed>
      backups.#: <computed>
      backups_enabled: <computed>
      boot_config_label: <computed>
      config.#: "1"
      config.0.devices.#: "1"
      config.0.devices.0.sda.#: "1"
      config.0.devices.0.sda.0.disk_id: <computed>
      config.0.devices.0.sda.0.disk_label: "boot"
      config.0.devices.0.sdb.#: <computed>
      config.0.devices.0.sdc.#: <computed>
      config.0.devices.0.sdd.#: <computed>
      config.0.devices.0.sde.#: <computed>
      config.0.devices.0.sdf.#: <computed>
      config.0.devices.0.sdg.#: <computed>
      config.0.devices.0.sdh.#: <computed>
      config.0.helpers.#: <computed>
      config.0.kernel: "linode/direct-disk"
      config.0.label: "node"
      config.0.root_device: <computed>
      config.0.run_level: "default"
      config.0.virt_mode: "paravirt"
      disk.#: "1"
      disk.0.authorized_keys.#: "1"
      disk.0.authorized_keys.0: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
      disk.0.filesystem: <computed>
      disk.0.id: <computed>
      disk.0.image: "linode/containerlinux"
      disk.0.label: "boot"
      disk.0.read_only: <computed>
      disk.0.size: "81920"
      disk.0.stackscript_data.%: <computed>
      disk.0.stackscript_id: <computed>
      ip_address: <computed>
      ipv4.#: <computed>
      ipv6: <computed>
      label: "linode-node-3"
      private_ip: "true"
      private_ip_address: <computed>
      region: "eu-west"
      specs.#: <computed>
      status: <computed>
      swap_size: <computed>
      type: "g6-standard-2"
      watchdog_enabled: "true"


Plan: 10 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: plan.tf

To perform exactly these actions, run the following command to apply:
    terraform apply "plan.tf"

Then you can apply:

$ terraform apply "plan.tf"
module.k8s.null_resource.preflight-checks: Creating...
  triggers.%: "" => "1"
  triggers.key: "" => "866b8709-9b74-eaa8-5a4b-f4b9006a55ec"
module.k8s.null_resource.preflight-checks: Provisioning with 'local-exec'...
module.k8s.null_resource.preflight-checks (local-exec): Executing: ["/bin/sh" "-c" "/Users/isaac.johnson/Workspaces/linode-k8s/.terraform/modules/ef30511175d44539abfe9e49cec6c979/linode-terraform-linode-k8s-cf68130/scripts/local/preflight.sh"]
module.k8s.null_resource.preflight-checks: Creation complete after 0s (ID: 5608863322754883472)
module.k8s.module.nodes.module.node.linode_instance.instance[1]: Creating...
  alerts.#: "" => "<computed>"
  backups.#: "" => "<computed>"
  backups_enabled: "" => "<computed>"
  boot_config_label: "" => "<computed>"
  config.#: "" => "1"
  config.0.devices.#: "" => "1"
  config.0.devices.0.sda.#: "" => "1"
  config.0.devices.0.sda.0.disk_id: "" => "<computed>"
  config.0.devices.0.sda.0.disk_label: "" => "boot"
  config.0.devices.0.sdb.#: "" => "<computed>"
  config.0.devices.0.sdc.#: "" => "<computed>"
  config.0.devices.0.sdd.#: "" => "<computed>"
  config.0.devices.0.sde.#: "" => "<computed>"
  config.0.devices.0.sdf.#: "" => "<computed>"
  config.0.devices.0.sdg.#: "" => "<computed>"
  config.0.devices.0.sdh.#: "" => "<computed>"
  config.0.helpers.#: "" => "<computed>"
  config.0.kernel: "" => "linode/direct-disk"
  config.0.label: "" => "node"
  config.0.root_device: "" => "<computed>"
  config.0.run_level: "" => "default"
  config.0.virt_mode: "" => "paravirt"
  disk.#: "" => "1"
  disk.0.authorized_keys.#: "" => "1"
  disk.0.authorized_keys.0: "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
  disk.0.filesystem: "" => "<computed>"
  disk.0.id: "" => "<computed>"
  disk.0.image: "" => "linode/containerlinux"
  disk.0.label: "" => "boot"
  disk.0.read_only: "" => "<computed>"
  disk.0.size: "" => "81920"
  disk.0.stackscript_data.%: "" => "<computed>"
  disk.0.stackscript_id: "" => "<computed>"
  ip_address: "" => "<computed>"
  ipv4.#: "" => "<computed>"
  ipv6: "" => "<computed>"
  label: "" => "linode-node-2"
  private_ip: "" => "true"
  private_ip_address: "" => "<computed>"
  region: "" => "eu-west"
  specs.#: "" => "<computed>"
  status: "" => "<computed>"
  swap_size: "" => "<computed>"
  type: "" => "g6-standard-2"
  watchdog_enabled: "" => "true"
module.k8s.module.masters.module.master_instance.linode_instance.instance: Creating...
  alerts.#: "" => "<computed>"
  backups.#: "" => "<computed>"
  backups_enabled: "" => "<computed>"
  boot_config_label: "" => "<computed>"
  config.#: "" => "1"
  config.0.devices.#: "" => "1"
  config.0.devices.0.sda.#: "" => "1"
  config.0.devices.0.sda.0.disk_id: "" => "<computed>"
  config.0.devices.0.sda.0.disk_label: "" => "boot"
  config.0.devices.0.sdb.#: "" => "<computed>"
  config.0.devices.0.sdc.#: "" => "<computed>"
  config.0.devices.0.sdd.#: "" => "<computed>"
  config.0.devices.0.sde.#: "" => "<computed>"
  config.0.devices.0.sdf.#: "" => "<computed>"
  config.0.devices.0.sdg.#: "" => "<computed>"
  config.0.devices.0.sdh.#: "" => "<computed>"
  config.0.helpers.#: "" => "<computed>"
  config.0.kernel: "" => "linode/direct-disk"
  config.0.label: "" => "master"
  config.0.root_device: "" => "<computed>"
  config.0.run_level: "" => "default"
  config.0.virt_mode: "" => "paravirt"
  disk.#: "" => "1"
  disk.0.authorized_keys.#: "" => "1"
  disk.0.authorized_keys.0: "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
  disk.0.filesystem: "" => "<computed>"
  disk.0.id: "" => "<computed>"
  disk.0.image: "" => "linode/containerlinux"
  disk.0.label: "" => "boot"
  disk.0.read_only: "" => "<computed>"
  disk.0.size: "" => "81920"
  disk.0.stackscript_data.%: "" => "<computed>"
  disk.0.stackscript_id: "" => "<computed>"
  ip_address: "" => "<computed>"
  ipv4.#: "" => "<computed>"
  ipv6: "" => "<computed>"
  label: "" => "linode-master-1"
  private_ip: "" => "true"
  private_ip_address: "" => "<computed>"
  region: "" => "eu-west"
  specs.#: "" => "<computed>"
  status: "" => "<computed>"
  swap_size: "" => "<computed>"
  type: "" => "g6-standard-2"
  watchdog_enabled: "" => "true"
module.k8s.module.nodes.module.node.linode_instance.instance[0]: Creating...
  alerts.#: "" => "<computed>"
  backups.#: "" => "<computed>"
  backups_enabled: "" => "<computed>"
  boot_config_label: "" => "<computed>"
  config.#: "" => "1"
  config.0.devices.#: "" => "1"
  config.0.devices.0.sda.#: "" => "1"
  config.0.devices.0.sda.0.disk_id: "" => "<computed>"
  config.0.devices.0.sda.0.disk_label: "" => "boot"
  config.0.devices.0.sdb.#: "" => "<computed>"
  config.0.devices.0.sdc.#: "" => "<computed>"
  config.0.devices.0.sdd.#: "" => "<computed>"
  config.0.devices.0.sde.#: "" => "<computed>"
  config.0.devices.0.sdf.#: "" => "<computed>"
  config.0.devices.0.sdg.#: "" => "<computed>"
  config.0.devices.0.sdh.#: "" => "<computed>"
  config.0.helpers.#: "" => "<computed>"
  config.0.kernel: "" => "linode/direct-disk"
  config.0.label: "" => "node"
  config.0.root_device: "" => "<computed>"
  config.0.run_level: "" => "default"
  config.0.virt_mode: "" => "paravirt"
  disk.#: "" => "1"
  disk.0.authorized_keys.#: "" => "1"
  disk.0.authorized_keys.0: "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
  disk.0.filesystem: "" => "<computed>"
  disk.0.id: "" => "<computed>"
  disk.0.image: "" => "linode/containerlinux"
  disk.0.label: "" => "boot"
  disk.0.read_only: "" => "<computed>"
  disk.0.size: "" => "81920"
  disk.0.stackscript_data.%: "" => "<computed>"
  disk.0.stackscript_id: "" => "<computed>"
  ip_address: "" => "<computed>"
  ipv4.#: "" => "<computed>"
  ipv6: "" => "<computed>"
  label: "" => "linode-node-1"
  private_ip: "" => "true"
  private_ip_address: "" => "<computed>"
  region: "" => "eu-west"
  specs.#: "" => "<computed>"
  status: "" => "<computed>"
  swap_size: "" => "<computed>"
  type: "" => "g6-standard-2"
  watchdog_enabled: "" => "true"
module.k8s.module.nodes.module.node.linode_instance.instance[2]: Creating...
  alerts.#: "" => "<computed>"
  backups.#: "" => "<computed>"
  backups_enabled: "" => "<computed>"
  boot_config_label: "" => "<computed>"
  config.#: "" => "1"
  config.0.devices.#: "" => "1"
  config.0.devices.0.sda.#: "" => "1"
  config.0.devices.0.sda.0.disk_id: "" => "<computed>"
  config.0.devices.0.sda.0.disk_label: "" => "boot"
  config.0.devices.0.sdb.#: "" => "<computed>"
  config.0.devices.0.sdc.#: "" => "<computed>"
  config.0.devices.0.sdd.#: "" => "<computed>"
  config.0.devices.0.sde.#: "" => "<computed>"
  config.0.devices.0.sdf.#: "" => "<computed>"
  config.0.devices.0.sdg.#: "" => "<computed>"
  config.0.devices.0.sdh.#: "" => "<computed>"
  config.0.helpers.#: "" => "<computed>"
  config.0.kernel: "" => "linode/direct-disk"
  config.0.label: "" => "node"
  config.0.root_device: "" => "<computed>"
  config.0.run_level: "" => "default"
  config.0.virt_mode: "" => "paravirt"
  disk.#: "" => "1"
  disk.0.authorized_keys.#: "" => "1"
  disk.0.authorized_keys.0: "" => "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
  disk.0.filesystem: "" => "<computed>"
  disk.0.id: "" => "<computed>"
  disk.0.image: "" => "linode/containerlinux"
  disk.0.label: "" => "boot"
  disk.0.read_only: "" => "<computed>"
  disk.0.size: "" => "81920"
  disk.0.stackscript_data.%: "" => "<computed>"
  disk.0.stackscript_id: "" => "<computed>"
  ip_address: "" => "<computed>"
  ipv4.#: "" => "<computed>"
  ipv6: "" => "<computed>"
  label: "" => "linode-node-3"
  private_ip: "" => "true"
  private_ip_address: "" => "<computed>"
  region: "" => "eu-west"
  specs.#: "" => "<computed>"
  status: "" => "<computed>"
  swap_size: "" => "<computed>"
  type: "" => "g6-standard-2"
  watchdog_enabled: "" => "true"
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (10s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (10s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (10s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (10s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (20s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (20s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (20s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (20s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (30s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (30s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (30s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (30s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (40s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (40s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (40s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (40s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (50s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (50s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (50s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (50s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (1m0s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (1m0s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (1m0s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (1m0s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (1m10s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (1m10s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (1m10s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (1m10s elapsed)
module.k8s.module.nodes.module.node.linode_instance.instance[1]: Provisioning with 'file'...
module.k8s.module.masters.module.master_instance.linode_instance.instance: Provisioning with 'file'...
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (1m20s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (1m20s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (1m20s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (1m20s elapsed)
module.k8s.module.masters.module.master_instance.linode_instance.instance: Provisioning with 'remote-exec'...
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): Connecting to remote host via SSH...
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): Host: 212.71.247.39
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): User: core
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): Password: false
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): Private key: false
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): SSH Agent: true
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): Checking Host Key: false
module.k8s.module.nodes.module.node.linode_instance.instance[1]: Provisioning with 'remote-exec'...
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): Connecting to remote host via SSH...
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): Host: 109.74.202.35
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): User: core
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): Password: false
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): Private key: false
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): SSH Agent: true
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): Checking Host Key: false
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): Connected!
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): Connected!
module.k8s.module.nodes.module.node.linode_instance.instance[0]: Provisioning with 'file'...
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): ip_vs_sh
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): ip_vs
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): ip_vs_rr
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): ip_vs_wrr
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): nf_conntrack_ipv4
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): ip_vs_sh
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): ip_vs
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): ip_vs_rr
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): ip_vs_wrr
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): nf_conntrack_ipv4
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (1m30s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (1m30s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still creating... (1m30s elapsed)
module.k8s.nodes.node.linode_instance.instance.1: Still creating... (1m30s elapsed)
module.k8s.module.nodes.module.node.linode_instance.instance[2]: Provisioning with 'file'...
module.k8s.module.nodes.module.node.linode_instance.instance[1] (remote-exec): Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
module.k8s.module.masters.module.master_instance.linode_instance.instance (remote-exec): Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
module.k8s.module.masters.module.master_instance.linode_instance.instance: Creation complete after 1m39s (ID: 13892088)
module.k8s.module.masters.null_resource.masters_provisioner: Creating...
module.k8s.module.masters.null_resource.masters_provisioner: Provisioning with 'file'...
module.k8s.module.nodes.module.node.linode_instance.instance[1]: Creation complete after 1m39s (ID: 13892089)
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (1m40s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (1m40s elapsed)
module.k8s.module.masters.null_resource.masters_provisioner: Provisioning with 'remote-exec'...
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Connecting to remote host via SSH...
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Host: 212.71.247.39
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): User: core
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Password: false
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Private key: false
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): SSH Agent: true
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Checking Host Key: false
module.k8s.module.nodes.module.node.linode_instance.instance[0]: Provisioning with 'remote-exec'...
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): Connecting to remote host via SSH...
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): Host: 109.74.198.242
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): User: core
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): Password: false
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): Private key: false
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): SSH Agent: true
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): Checking Host Key: false
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Connected!
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [init] Using Kubernetes version: v1.13.2
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [preflight] Running pre-flight checks
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): Connected!
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [preflight] Pulling images required for setting up a Kubernetes cluster
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [preflight] This might take a minute or two, depending on the speed of your internet connection
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
module.k8s.module.nodes.module.node.linode_instance.instance[2]: Provisioning with 'remote-exec'...
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): Connecting to remote host via SSH...
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): Host: 109.74.192.154
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): User: core
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): Password: false
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): Private key: false
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): SSH Agent: true
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): Checking Host Key: false
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): ip_vs_sh
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): ip_vs
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): ip_vs_rr
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): ip_vs_wrr
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): nf_conntrack_ipv4
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): Connected!
module.k8s.masters.null_resource.masters_provisioner: Still creating... (10s elapsed)
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): ip_vs_sh
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): ip_vs
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): ip_vs_rr
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): ip_vs_wrr
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): nf_conntrack_ipv4
module.k8s.nodes.node.linode_instance.instance.0: Still creating... (1m50s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still creating... (1m50s elapsed)
module.k8s.module.nodes.module.node.linode_instance.instance[0] (remote-exec): Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
module.k8s.module.nodes.module.node.linode_instance.instance[0]: Creation complete after 1m54s (ID: 13892091)
module.k8s.module.nodes.module.node.linode_instance.instance[2] (remote-exec): Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
module.k8s.module.nodes.module.node.linode_instance.instance[2]: Creation complete after 1m56s (ID: 13892090)
module.k8s.masters.null_resource.masters_provisioner: Still creating... (20s elapsed)
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubelet-start] Activating the kubelet service
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Using certificateDir folder "/etc/kubernetes/pki"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "etcd/ca" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "etcd/peer" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] etcd/peer serving cert is signed for DNS names [linode-master-1 localhost] and IPs [192.168.137.24 127.0.0.1 ::1]
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "etcd/healthcheck-client" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "apiserver-etcd-client" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "etcd/server" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] etcd/server serving cert is signed for DNS names [linode-master-1 localhost] and IPs [192.168.137.24 127.0.0.1 ::1]
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "front-proxy-ca" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "front-proxy-client" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "ca" certificate and key
module.k8s.masters.null_resource.masters_provisioner: Still creating... (30s elapsed)
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "apiserver" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] apiserver serving cert is signed for DNS names [linode-master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.137.24 212.71.247.39]
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "apiserver-kubelet-client" certificate and key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [certs] Generating "sa" key and public key
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubeconfig] Writing "admin.conf" kubeconfig file
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubeconfig] Writing "kubelet.conf" kubeconfig file
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubeconfig] Writing "controller-manager.conf" kubeconfig file
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubeconfig] Writing "scheduler.conf" kubeconfig file
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [control-plane] Using manifest folder "/etc/kubernetes/manifests"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [control-plane] Creating static Pod manifest for "kube-apiserver"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [control-plane] Creating static Pod manifest for "kube-controller-manager"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [control-plane] Creating static Pod manifest for "kube-scheduler"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
module.k8s.masters.null_resource.masters_provisioner: Still creating... (40s elapsed)
module.k8s.masters.null_resource.masters_provisioner: Still creating... (50s elapsed)
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [apiclient] All control plane components are healthy after 20.502023 seconds
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "linode-master-1" as an annotation
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [mark-control-plane] Marking the node linode-master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [mark-control-plane] Marking the node linode-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [bootstrap-token] Using token: 1jusxn.eyvko1s4w4fsw4ao
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [addons] Applied essential addon: CoreDNS
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): [addons] Applied essential addon: kube-proxy

module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Your Kubernetes master has initialized successfully!

module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): To start using your cluster, you need to run the following as a regular user:

module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): mkdir -p $HOME/.kube
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): sudo chown $(id -u):$(id -g) $HOME/.kube/config

module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): You should now deploy a pod network to the cluster.
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): https://kubernetes.io/docs/concepts/cluster-administration/addons/

module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): You can now join any number of machines by running the following on each node
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): as root:

module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): kubeadm join 192.168.137.24:6443 --token 1jusxn.eyvko1s4w4fsw4ao --discovery-token-ca-cert-hash sha256:943806176c5c8dfa8e5f8b78ac3e7741e5c3458f5b01abb6546fbd46c3954762

module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/calico-node created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/calico-node created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): configmap/calico-config created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): service/calico-typha created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): deployment.apps/calico-typha created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): daemonset.extensions/calico-node created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): serviceaccount/calico-node created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): secret/linode created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): serviceaccount/ccm-linode created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/system:ccm-linode created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): daemonset.apps/ccm-linode created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): customresourcedefinition.apiextensions.k8s.io/csidrivers.csi.storage.k8s.io created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): serviceaccount/csi-node-sa created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/driver-registrar-role created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/driver-registrar-binding created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): serviceaccount/csi-controller-sa created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/external-provisioner-role created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/csi-controller-provisioner-binding created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/external-attacher-role created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/csi-controller-attacher-binding created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/external-snapshotter-role created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/csi-controller-snapshotter-binding created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): csidriver.csi.storage.k8s.io/linodebs.csi.linode.com created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): storageclass.storage.k8s.io/linode-block-storage created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): statefulset.apps/csi-linode-controller created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): daemonset.extensions/csi-linode-node created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): serviceaccount/external-dns created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/external-dns created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): deployment.extensions/external-dns created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): secret/kubernetes-dashboard-certs created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): serviceaccount/kubernetes-dashboard created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): deployment.apps/kubernetes-dashboard created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): service/kubernetes-dashboard created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): serviceaccount/metrics-server created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): deployment.extensions/metrics-server created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): service/metrics-server created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrole.rbac.authorization.k8s.io/system:metrics-server created
module.k8s.module.masters.null_resource.masters_provisioner (remote-exec): clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
module.k8s.module.masters.null_resource.masters_provisioner: Creation complete after 57s (ID: 5397593024671584831)
module.k8s.module.masters.data.external.kubeadm_join: Refreshing state...
module.k8s.null_resource.local_kubectl: Creating...
module.k8s.module.nodes.null_resource.kubeadm_join[2]: Creating...
module.k8s.null_resource.local_kubectl: Provisioning with 'local-exec'...
module.k8s.module.nodes.null_resource.kubeadm_join[0]: Creating...
module.k8s.module.nodes.null_resource.kubeadm_join[1]: Creating...
module.k8s.module.nodes.null_resource.kubeadm_join[2]: Provisioning with 'remote-exec'...
module.k8s.module.nodes.null_resource.kubeadm_join[0]: Provisioning with 'remote-exec'...
module.k8s.null_resource.local_kubectl (local-exec): Executing: ["/bin/sh" "-c" "/Users/isaac.johnson/Workspaces/linode-k8s/.terraform/modules/ef30511175d44539abfe9e49cec6c979/linode-terraform-linode-k8s-cf68130/scripts/local/kubectl-conf.sh linode 212.71.247.39 192.168.137.24 ~/.ssh/id_rsa.pub"]
module.k8s.module.nodes.null_resource.kubeadm_join[1]: Provisioning with 'remote-exec'...
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): Connecting to remote host via SSH...
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): Host: 109.74.192.154
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): User: core
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): Password: false
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): Private key: false
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): SSH Agent: true
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): Checking Host Key: false
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): Connecting to remote host via SSH...
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): Host: 109.74.202.35
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): User: core
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): Password: false
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): Private key: false
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): SSH Agent: true
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): Checking Host Key: false
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): Connecting to remote host via SSH...
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): Host: 109.74.198.242
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): User: core
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): Password: false
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): Private key: false
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): SSH Agent: true
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): Checking Host Key: false
module.k8s.null_resource.local_kubectl (local-exec): Warning: Permanently added '212.71.247.39' (ECDSA) to the list of known hosts.
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): Connected!
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): Connected!
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): Connected!
module.k8s.null_resource.local_kubectl: Creation complete after 1s (ID: 6240730930346092628)
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [preflight] Running pre-flight checks
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [preflight] Running pre-flight checks
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [preflight] Running pre-flight checks
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Failed to connect to API Server "192.168.137.24:6443": token id "4nbtz9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Failed to connect to API Server "192.168.137.24:6443": token id "4nbtz9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Failed to connect to API Server "192.168.137.24:6443": token id "4nbtz9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Failed to connect to API Server "192.168.137.24:6443": token id "4nbtz9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Failed to connect to API Server "192.168.137.24:6443": token id "4nbtz9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Failed to connect to API Server "192.168.137.24:6443": token id "4nbtz9" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
module.k8s.nodes.null_resource.kubeadm_join.2: Still creating... (10s elapsed)
module.k8s.nodes.null_resource.kubeadm_join.0: Still creating... (10s elapsed)
module.k8s.nodes.null_resource.kubeadm_join.1: Still creating... (10s elapsed)
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Requesting info from "https://192.168.137.24:6443" again to validate TLS against the pinned public key
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Requesting info from "https://192.168.137.24:6443" again to validate TLS against the pinned public key
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Trying to connect to API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [discovery] Successfully established connection with API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [join] Reading configuration from the cluster...
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Requesting info from "https://192.168.137.24:6443" again to validate TLS against the pinned public key
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [discovery] Successfully established connection with API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [join] Reading configuration from the cluster...
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [discovery] Successfully established connection with API Server "192.168.137.24:6443"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [join] Reading configuration from the cluster...
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [kubelet-start] Activating the kubelet service
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [kubelet-start] Activating the kubelet service
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [kubelet-start] Activating the kubelet service
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "linode-node-2" as an annotation
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "linode-node-3" as an annotation
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "linode-node-1" as an annotation



module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): This node has joined the cluster:
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): This node has joined the cluster:
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): This node has joined the cluster:
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): * Certificate signing request was sent to apiserver and a response was received.
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): * Certificate signing request was sent to apiserver and a response was received.
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): * Certificate signing request was sent to apiserver and a response was received.
module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): * The Kubelet was informed of the new secure connection details.
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): * The Kubelet was informed of the new secure connection details.
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): * The Kubelet was informed of the new secure connection details.



module.k8s.module.nodes.null_resource.kubeadm_join[0] (remote-exec): Run 'kubectl get nodes' on the master to see this node join the cluster.
module.k8s.module.nodes.null_resource.kubeadm_join[2] (remote-exec): Run 'kubectl get nodes' on the master to see this node join the cluster.
module.k8s.module.nodes.null_resource.kubeadm_join[1] (remote-exec): Run 'kubectl get nodes' on the master to see this node join the cluster.



module.k8s.module.nodes.null_resource.kubeadm_join[2]: Creation complete after 15s (ID: 1431711151197785685)
module.k8s.module.nodes.null_resource.kubeadm_join[0]: Creation complete after 16s (ID: 6813310552091977437)
module.k8s.module.nodes.null_resource.kubeadm_join[1]: Creation complete after 16s (ID: 4661218998672419198)

Apply complete! Resources: 10 added, 0 changed, 0 destroyed.

Either way (terraform or linode-cli) you should now have a working cluster!

A quick note: the Terraform install will not modify ~/.kube/config (the linode-cli will). So to run your kubectl commands, you’ll want to copy the generated conf into place (or point kubectl at it explicitly):

$ cp linode.conf ~/.kube/config 
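
If you’d rather not overwrite an existing kube config, you can point kubectl at the generated file instead - either per command or via the KUBECONFIG environment variable:

$ kubectl --kubeconfig=./linode.conf get nodes
$ export KUBECONFIG=$(pwd)/linode.conf
$ kubectl get nodes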

Let’s check out our cluster and see what pods are running:

$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-8kpv9 2/2 Running 0 43m
kube-system calico-node-dp2xj 2/2 Running 0 43m
kube-system calico-node-rbtbs 2/2 Running 0 43m
kube-system calico-node-xvf9k 2/2 Running 0 43m
kube-system ccm-linode-rv46t 1/1 Running 0 43m
kube-system coredns-86c58d9df4-2vg92 1/1 Running 0 43m
kube-system coredns-86c58d9df4-vq79q 1/1 Running 0 43m
kube-system csi-linode-controller-0 3/3 Running 0 43m
kube-system csi-linode-node-c5vg8 2/2 Running 0 42m
kube-system csi-linode-node-dcg8p 2/2 Running 0 42m
kube-system csi-linode-node-s5rcp 2/2 Running 0 42m
kube-system etcd-linode-master-1 1/1 Running 0 42m
kube-system external-dns-d4cfd5855-s8cgq 1/1 Running 0 43m
kube-system kube-apiserver-linode-master-1 1/1 Running 0 42m
kube-system kube-controller-manager-linode-master-1 1/1 Running 0 42m
kube-system kube-proxy-7dd5z 1/1 Running 0 43m
kube-system kube-proxy-bq42m 1/1 Running 0 43m
kube-system kube-proxy-jfbzr 1/1 Running 0 43m
kube-system kube-proxy-vpszq 1/1 Running 0 43m
kube-system kube-scheduler-linode-master-1 1/1 Running 0 42m
kube-system kubernetes-dashboard-57df4db6b-9zqqj 1/1 Running 0 43m
kube-system metrics-server-68d85f76bb-dzqvh 1/1 Running 0 43m

We can also check out our dashboard (kubernetes-dashboard-57df4db6b-9zqqj):

$ kubectl port-forward kubernetes-dashboard-57df4db6b-9zqqj -n kube-system 8443:8443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
Handling connection for 8443
Handling connection for 8443
viewing workloads in the dashboard
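
The dashboard will likely prompt for a login. One common approach - assuming the stock kubernetes-dashboard service account this module deploys (the exact token secret name will differ on your cluster) - is to pull its token and paste it into the Token field:

$ kubectl -n kube-system get secrets | grep kubernetes-dashboard-token
$ kubectl -n kube-system describe secret kubernetes-dashboard-token-<suffix>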

Scaling

We can check out our cluster in the Linode control panel:

The Resize menu lets us scale the cluster vertically (up/down), albeit one node at a time:

changing the size of a node

To scale horizontally (in/out), one can use either Terraform directly (a sketch follows the CLI run below) or the linode-cli, which leverages TF under the hood. E.g.

$ linode-cli k8s-alpha create --node-type g6-standard-2 --nodes 3 myFirstCluster
Workspace "myFirstCluster" already exists
Initializing modules...
- module.k8s

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.null: version = "~> 2.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
null_resource.preflight-checks: Refreshing state... (ID: 7232703061644315079)
data.linode_instance_type.node: Refreshing state...
data.linode_instance_type.master: Refreshing state...
linode_instance.k8s_master: Refreshing state... (ID: 13878345)
linode_instance.k8s_node[0]: Refreshing state... (ID: 13878353)
linode_instance.k8s_node[1]: Refreshing state... (ID: 13878352)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement
 <= read (data resources)

Terraform will perform the following actions:

 <= module.k8s.data.external.kubeadm_join
      id: <computed>
      program.#: "1"
      program.0: "/Users/isaac.johnson/.k8s-alpha-linode/myFirstCluster/.terraform/modules/64f5a0e04ed2256c4dc48e49e4c7d8a7/scripts/kubeadm-token.sh"
      query.%: "1"
      query.host: "45.79.24.137"
      result.%: <computed>

  + module.k8s.linode_instance.k8s_node[2]
      id: <computed>
      alerts.#: <computed>
      backups.#: <computed>
      backups_enabled: <computed>
      boot_config_label: <computed>
      config.#: "1"
      config.0.devices.#: "1"
      config.0.devices.0.sda.#: "1"
      config.0.devices.0.sda.0.disk_id: <computed>
      config.0.devices.0.sda.0.disk_label: "boot"
      config.0.devices.0.sdb.#: <computed>
      config.0.devices.0.sdc.#: <computed>
      config.0.devices.0.sdd.#: <computed>
      config.0.devices.0.sde.#: <computed>
      config.0.devices.0.sdf.#: <computed>
      config.0.devices.0.sdg.#: <computed>
      config.0.devices.0.sdh.#: <computed>
      config.0.helpers.#: <computed>
      config.0.kernel: "linode/direct-disk"
      config.0.label: "node"
      config.0.root_device: <computed>
      config.0.run_level: "default"
      config.0.virt_mode: "paravirt"
      disk.#: "1"
      disk.0.authorized_keys.#: "1"
      disk.0.authorized_keys.0: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8kZzEtk7J7Mvv4hJIE1jcQ0q6h41g5hUwPtOUPjNWPIKm4djmy4+C4+Gtsxxh5jUFooAbwl+DubFZogbU1Q5aLOGKSsD/K4XimTyOhr90DO47naCnaSS0Rg0XyZlvQsHKwcXGuGOleCMhB2gQ70QAK4X/N1dvGfqCDdKBbTORKQyz0WHWo7YGA6YAgtvzn1C5W0l7cT0AXgOfFEAGF31nqqTuRVBbBmosq1qhXJlVt+PO32MqmxZv44ZuCP1jWjyTz1rbQ1OLHCxP/+eDIlpOlkYop4XgwiHHMRn/rxHFTKOAxtFOccFw9KEnDM0j0M5FRBj5qU1BCa/6jhnu7LIz"
      disk.0.filesystem: <computed>
      disk.0.id: <computed>
      disk.0.image: "linode/containerlinux"
      disk.0.label: "boot"
      disk.0.read_only: <computed>
      disk.0.size: "81920"
      disk.0.stackscript_data.%: <computed>
      disk.0.stackscript_id: <computed>
      group: "kaAQoK8Pk11-myFirstCluster"
      ip_address: <computed>
      ipv4.#: <computed>
      ipv6: <computed>
      label: "myFirstCluster-node-3"
      private_ip: "true"
      private_ip_address: <computed>
      region: "us-central"
      specs.#: <computed>
      status: <computed>
      swap_size: <computed>
      type: "g6-standard-2"
      watchdog_enabled: "true"

-/+ module.k8s.null_resource.preflight-checks (new resource required)
      id: "7232703061644315079" => <computed> (forces new resource)
      triggers.%: "1" => "1"
      triggers.key: "f20a3edc-08a3-cf6d-5229-88bef51d49ed" => "3dfcccf0-c258-d6f2-9d1b-a4ca822cfc95" (forces new resource)


Plan: 2 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
….
module.k8s.linode_instance.k8s_node[2]: Creation complete after 2m16s (ID: 13878619)

Apply complete! Resources: 2 added, 0 changed, 1 destroyed.
Switched to context "myFirstCluster-kaAQoK8Pk11@myFirstCluster".
Your cluster has been created and your kubectl context updated.

Try the following command: 
kubectl get pods --all-namespaces

Come hang out with us in #linode on the Kubernetes Slack! http://slack.k8s.io/

When done, we can launch the dashboard and see the new node was added (and the old ones stayed put - see the Age column):

we can see one node is 3 minutes old but the other two are 31min
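
If you prefer to drive the scaling purely with Terraform, the equivalent is to bump the worker count in the generated module configuration and re-apply. A minimal sketch, assuming the module exposes a node count the way the CLI’s --nodes flag suggests (check the module’s variables.tf for the exact variable name):

# in main.tf, inside the module "k8s" block, change the node count, e.g. nodes = 3 -> nodes = 4
$ terraform plan     # should show a single linode_instance to add
$ terraform apply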

Running things in k8s

Let’s install my favourite app: Sonarqube.

First, we need to install Helm/Tiller in our RBAC-enabled cluster:

$ helm init
$HELM_HOME has been configured at /Users/isaac.johnson/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched
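
An alternative ordering that skips the patch step is to create the service account and binding first, then tell helm init to use it:

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller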

Next we can install our chart:

$ helm install stable/sonarqube --tiller-namespace kube-system
NAME: icy-swan
LAST DEPLOYED: Mon May 6 20:40:07 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
icy-swan-sonarqube-config 0 1s
icy-swan-sonarqube-copy-plugins 1 1s
icy-swan-sonarqube-install-plugins 1 1s
icy-swan-sonarqube-tests 1 1s

==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
icy-swan-postgresql Pending linode-block-storage 1s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
icy-swan-postgresql-68dc449cb-wjzpv 0/1 Pending 0 1s
icy-swan-sonarqube-79df7f7564-678bb 0/1 ContainerCreating 0 1s

==> v1/Secret
NAME TYPE DATA AGE
icy-swan-postgresql Opaque 1 1s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
icy-swan-postgresql ClusterIP 10.106.210.82 <none> 5432/TCP 1s
icy-swan-sonarqube LoadBalancer 10.98.155.69 <pending> 9000:31724/TCP 1s

==> v1beta1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
icy-swan-postgresql 0/1 1 0 1s
icy-swan-sonarqube 0/1 1 0 1s


NOTES:
1. Get the application URL by running these commands:
     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
           You can watch the status of by running 'kubectl get svc -w icy-swan-sonarqube'
  export SERVICE_IP=$(kubectl get svc --namespace default icy-swan-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:9000
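
As an aside, if you want to see (or override) the chart’s defaults before installing, Helm will show you what’s tunable - any keys you override are whatever the chart’s own values.yaml defines, so check there first:

$ helm inspect values stable/sonarqube
$ helm install stable/sonarqube --tiller-namespace kube-system --set <key>=<value>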

And then get our public IP when it’s all up and running (might take a minute or so):

$ kubectl get svc --namespace default icy-swan-sonarqube -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
178.79.175.99

Here I want to take a quick pause.  My first time through, I had real persistent volume claim issues.  The requests would hang:

$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-example-pvc Pending linode-block-storage 5m7s
messy-fly-postgresql Pending linode-block-storage 4m21s

Running kubectl describe pvc showed the error:

waiting for a volume to be created, either by external provisioner "linodebs.csi.linode.com" 
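
Before swapping CSI driver versions, a few generic checks can help narrow down where a Pending claim is actually stuck (nothing here is Linode-specific; the container names inside the controller pod vary):

$ kubectl describe pvc messy-fly-postgresql             # events on the stuck claim
$ kubectl -n kube-system get pods | grep csi            # is the controller/node plugin healthy?
$ kubectl -n kube-system logs csi-linode-controller-0 --all-containers   # provisioner logs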

I tried a few things, including both the 0.3.0 and 0.1.0 CSI drivers:

$ kubectl apply -f https://raw.githubusercontent.com/linode/linode-blockstorage-csi-driver/master/pkg/linode-bs/deploy/releases/linode-blockstorage-csi-driver-v0.1.0.yaml
customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io unchanged
customresourcedefinition.apiextensions.k8s.io/csidrivers.csi.storage.k8s.io unchanged
serviceaccount/csi-node-sa unchanged
clusterrole.rbac.authorization.k8s.io/driver-registrar-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/driver-registrar-binding unchanged
serviceaccount/csi-controller-sa unchanged
clusterrole.rbac.authorization.k8s.io/external-provisioner-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/csi-controller-provisioner-binding unchanged
clusterrole.rbac.authorization.k8s.io/external-attacher-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/csi-controller-attacher-binding unchanged
clusterrole.rbac.authorization.k8s.io/external-snapshotter-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/csi-controller-snapshotter-binding unchanged
csidriver.csi.storage.k8s.io/linodebs.csi.linode.com unchanged
storageclass.storage.k8s.io/linode-block-storage unchanged
statefulset.apps/csi-linode-controller configured
daemonset.extensions/csi-linode-node configured

Nothing solved it.  However, the second time through, using the latest Terraform, I had no PVC issues:

$ kubectl describe pvc
Name: icy-swan-postgresql
Namespace: default
StorageClass: linode-block-storage
Status: Bound
Volume: pvc-0b860de0-7069-11e9-874b-f23c918d8c8f
Labels: app=icy-swan-postgresql
               chart=postgresql-0.8.3
               heritage=Tiller
               release=icy-swan
Annotations: pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: linodebs.csi.linode.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal ExternalProvisioning 17s persistentvolume-controller waiting for a volume to be created, either by external provisioner "linodebs.csi.linode.com" or manually created by system administrator
  Normal Provisioning 17s linodebs.csi.linode.com_csi-linode-controller-0_8d487179-7062-11e9-9342-9e09f2fde301 External provisioner is provisioning volume for claim "default/icy-swan-postgresql"
  Normal ProvisioningSucceeded 13s linodebs.csi.linode.com_csi-linode-controller-0_8d487179-7062-11e9-9342-9e09f2fde301 Successfully provisioned volume pvc-0b860de0-7069-11e9-874b-f23c918d8c8f
Mounted By: icy-swan-postgresql-68dc449cb-wjzpv

Moving on…

Let’s check our pods:

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
icy-swan-postgresql-68dc449cb-wjzpv 1/1 Running 0 2m36s
icy-swan-sonarqube-79df7f7564-678bb 1/1 Running 1 2m36s
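
A quick sanity check from the terminal never hurts - Sonarqube can take a minute to come up even after the pod shows Running:

$ curl -sI http://178.79.175.99:9000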

And then hit the public IP for our Sonarqube:

We can also circle back to check out the cluster in the Linode Manager:

Cleaning up

To scrub up, you can of course manually delete it all from the Linode Manager, but better yet, do a helm delete first (it takes care of the public IP / NodeBalancer):

$ helm delete icy-swan
release "icy-swan" deleted
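
If you also want the release record gone (so the name can be reused later), Helm 2’s --purge flag handles that:

$ helm delete --purge icy-swan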

Then a TF destroy:

$ mv plan.tf plantf
$ terraform destroy
null_resource.preflight-checks: Refreshing state... (ID: 5608863322754883472)
data.linode_instance_type.type: Refreshing state...
data.linode_instance_type.type: Refreshing state...
linode_instance.instance: Refreshing state... (ID: 13892088)
linode_instance.instance[0]: Refreshing state... (ID: 13892091)
linode_instance.instance[1]: Refreshing state... (ID: 13892089)
linode_instance.instance[2]: Refreshing state... (ID: 13892090)
null_resource.masters_provisioner: Refreshing state... (ID: 5397593024671584831)
null_resource.local_kubectl: Refreshing state... (ID: 6240730930346092628)
null_resource.kubeadm_join[0]: Refreshing state... (ID: 6813310552091977437)
null_resource.kubeadm_join[2]: Refreshing state... (ID: 1431711151197785685)
null_resource.kubeadm_join[1]: Refreshing state... (ID: 4661218998672419198)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - module.k8s.null_resource.local_kubectl

  - module.k8s.null_resource.preflight-checks

  - module.k8s.module.masters.null_resource.masters_provisioner

  - module.k8s.module.nodes.null_resource.kubeadm_join[0]

  - module.k8s.module.nodes.null_resource.kubeadm_join[1]

  - module.k8s.module.nodes.null_resource.kubeadm_join[2]

  - module.k8s.module.masters.module.master_instance.linode_instance.instance

  - module.k8s.module.nodes.module.node.linode_instance.instance[0]

  - module.k8s.module.nodes.module.node.linode_instance.instance[1]

  - module.k8s.module.nodes.module.node.linode_instance.instance[2]


Plan: 0 to add, 0 to change, 10 to destroy.

Do you really want to destroy?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

module.k8s.null_resource.preflight-checks: Destroying... (ID: 5608863322754883472)
module.k8s.null_resource.local_kubectl: Destroying... (ID: 6240730930346092628)
module.k8s.null_resource.local_kubectl: Destruction complete after 0s
module.k8s.null_resource.preflight-checks: Destruction complete after 0s
module.k8s.module.nodes.null_resource.kubeadm_join[0]: Destroying... (ID: 6813310552091977437)
module.k8s.module.nodes.null_resource.kubeadm_join[2]: Destroying... (ID: 1431711151197785685)
module.k8s.module.nodes.null_resource.kubeadm_join[1]: Destroying... (ID: 4661218998672419198)
module.k8s.module.nodes.null_resource.kubeadm_join[0]: Destruction complete after 0s
module.k8s.module.nodes.null_resource.kubeadm_join[2]: Destruction complete after 0s
module.k8s.module.nodes.null_resource.kubeadm_join[1]: Destruction complete after 0s
module.k8s.module.masters.null_resource.masters_provisioner: Destroying... (ID: 5397593024671584831)
module.k8s.module.masters.null_resource.masters_provisioner: Destruction complete after 0s
module.k8s.module.nodes.module.node.linode_instance.instance[2]: Destroying... (ID: 13892090)
module.k8s.module.masters.module.master_instance.linode_instance.instance: Destroying... (ID: 13892088)
module.k8s.module.nodes.module.node.linode_instance.instance[0]: Destroying... (ID: 13892091)
module.k8s.module.nodes.module.node.linode_instance.instance[1]: Destroying... (ID: 13892089)
module.k8s.nodes.node.linode_instance.instance.1: Still destroying... (ID: 13892089, 10s elapsed)
module.k8s.masters.master_instance.linode_instance.instance: Still destroying... (ID: 13892088, 10s elapsed)
module.k8s.nodes.node.linode_instance.instance.0: Still destroying... (ID: 13892091, 10s elapsed)
module.k8s.nodes.node.linode_instance.instance.2: Still destroying... (ID: 13892090, 10s elapsed)
module.k8s.module.masters.module.master_instance.linode_instance.instance: Destruction complete after 16s
module.k8s.module.nodes.module.node.linode_instance.instance[2]: Destruction complete after 16s
module.k8s.module.nodes.module.node.linode_instance.instance[1]: Destruction complete after 20s
module.k8s.nodes.node.linode_instance.instance.0: Still destroying... (ID: 13892091, 20s elapsed)
module.k8s.module.nodes.module.node.linode_instance.instance[0]: Destruction complete after 25s

Destroy complete! Resources: 10 destroyed.

We can confirm they are all gone in the manager as well:

Note: the first time through, I neglected to do the helm delete first, so I had a load balancer hanging out:

an unused NodeBalancer I removed manually
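
A quick way to catch stragglers like that from the terminal (rather than clicking around the Manager) is to list anything still billing with the linode-cli we installed earlier:

$ linode-cli nodebalancers list
$ linode-cli volumes list
$ linode-cli linodes list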

Summary

Over several hours I launched multiple clusters and installed Sonarqube (and some sample apps).  In that time, my spend was $1.25 (including an extra couple of days with a 10GB volume I forgot about).  That makes it pretty comparable to the other cloud providers in cost.  Because it treats Terraform as its primary driver, scripting an IaC pipeline to manage a cluster would be quite easy.

Where Linode has some catching up to do is in IAM (federated accounts) - for instance, I just have an API token to use with TF to create things instead of an account/secret pair.  One can add users, but the level of access one can restrict is just the following:

That said, Linode is a great, fast, unpretentious provider.  They won’t have multicoloured sockets at a vendor booth or IoT devices to hand out - they have stickers and free compute credits.  They do a couple of things and do them well.  And in a world that is moving from “the Cloud” to “any Cloud”, it’s nice to have another choice for a really solid k8s provider.

PS. I do this blog for fun and learning. No monetizing, tracking or adverts. If you want some free credits (and hook me up as well), you can use my referral code (bd545d1d59306f604933cc2694bba9b1c266df4c) or this link.

k8s tutorial getting-started


Isaac Johnson


Cloud Solutions Architect

Isaac is a CSA and DevOps engineer who focuses on cloud migrations and devops processes. He also is a dad to three wonderful daughters (hence the references to Princess King sprinkled throughout the blog).
