This is an Alpha version of Kubermatic management via GitOps, which may become the default way to manage KKP in the future. This article explains how you can kickstart that journey with ArgoCD right now. However, this feature may still change significantly and in backward-incompatible ways. Use this setup in production at your own risk.
Kubermatic Kubernetes Platform (KKP) is a versatile solution for creating and managing Kubernetes clusters (user clusters) across a plethora of cloud providers and on-prem virtualization platforms. But this flexibility also means that there is a good number of moving parts. KKP provides various tools to manage user clusters across various regions and clouds.
This is why managing KKP and its upgrades via a GitOps solution gives KKP administrators better peace of mind. We now provide an alpha release of ArgoCD-based management of KKP master and seed clusters.
This page outlines how to install ArgoCD and team-provided Apps to manage KKP seeds and their upgrades.
In order to set up and manage KKP using a GitOps solution, you must first know some basics about KKP. The following links can be useful to gain that knowledge if you are new to KKP.
We will install KKP along the way, so a running KKP installation is not required. But if you already have a KKP installation running, you can still use this guide to onboard it to ArgoCD.
The diagram below shows the general concept of the setup. It shows how KKP installations and ArgoCD will be deployed and what KKP components will be managed by ArgoCD in each seed.
For the demonstration,
A high-level procedure to get ArgoCD to manage the seed would be as follows:
The Folder and File Structure section in the README.md of the ArgoCD Apps Component explains which files should be present in which folders for each seed, and how to customize the behavior of the ArgoCD apps installation.
Note: Configuring values for all the components of KKP is a humongous task, and each KKP installation might prefer a different directory structure. This ArgoCD-Apps-based approach is an opinionated attempt to provide a standard structure that can be used in most KKP installations. If you need a different directory structure, refer to the README.md of the ArgoCD Apps Component to understand how to customize it.
We will install ArgoCD on both clusters and deploy the following components to both of them via ArgoCD. In a non-GitOps scenario, some of these components are managed via kubermatic-installer, and the rest are left for the KKP administrator to manage in the master/seed clusters themselves. With ArgoCD, everything except the kubermatic-operator can be managed via ArgoCD. Which apps ArgoCD should manage remains the KKP administrator's choice.
You can find the code for this tutorial, with sample values, in this git repository. For ease of installation, a `Makefile` has been provided to make the commands easier to read. Internally, it just depends on the Helm, kubectl, and kubermatic-installer binaries. You will need to look at the `make` target definitions in the `Makefile` to adjust DNS names. While the provided files will work for the demo, you should look through each file under the `dev` folder and customize the values to your needs.
This step installs two Kubernetes clusters in AWS using KubeOne. You can skip this step if you already have access to two Kubernetes clusters.
Use KubeOne to create two clusters in the DEV environment: a master-seed combo (c1) and a regular seed (c2). The steps below are generic to any KubeOne installation: a) we create basic VMs in AWS using Terraform, and b) we use KubeOne to bootstrap the control plane on these VMs and then roll out the worker node machines.
Note: The sample code provided here creates Kubernetes clusters with a single-VM control plane. This is NOT recommended for production. Always use an HA control plane for any production-grade Kubernetes installation.
You should look at the `terraform.tfvars` and `kubeone.yaml` files to customize these folders to your needs.
# directory structure
kubeone-install
├── dev-master
│ ├── kubeone.yaml
│ ├── main.tf
│ ├── output.tf
│ ├── terraform.tfvars
│ ├── variables.tf
│ ├── versions.tf
├── dev-seed
│ ├── kubeone.yaml
│ ├── main.tf
│ ├── output.tf
│ ├── terraform.tfvars
│ ├── variables.tf
│ ├── versions.tf
└── kubeone
# KubeOne needs SSH keys to communicate with machines.
# make sure to use SSH keys which do not need any passphrase
# Setup CP machines and other infra
export AWS_PROFILE=<your AWS profile>
make k1-apply-masters
make k1-apply-seed
This same folder structure can be further expanded to add KubeOne installations for additional environments like staging and prod.
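For orientation, a `kubeone.yaml` in one of these folders might look roughly like the minimal sketch below. All values here are assumed placeholders, not the demo's actual file; consult the KubeOne documentation for the full schema.

```yaml
# Minimal illustrative KubeOneCluster manifest (placeholder values).
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: dev-master            # assumed: match the environment/folder name
versions:
  kubernetes: "1.29.4"      # placeholder Kubernetes version
cloudProvider:
  aws: {}
  external: true            # run the external AWS cloud controller manager
```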
The demo codebase assumes `argodemo.lab.kubermatic.io` as the base URL for KKP. The KKP dashboard is available at this URL. This also means that the master-seed's ArgoCD and all tools like Prometheus, Grafana, etc. are accessible at `*.argodemo.lab.kubermatic.io`.
The seed needs its own DNS prefix, which is configured as `self.seed`. This prefix must be configured in Route53 or a similar DNS provider in your setup.
Similarly, the demo creates a second seed named `india-seed`. The second seed's ArgoCD, Prometheus, Grafana, etc. are therefore accessible at `*.india.argodemo.lab.kubermatic.io`, and this seed's DNS prefix is `india.seed`.
These names will come in handy for understanding the references to them below and for customizing the values to your setup.
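To make the pattern explicit, each per-seed hostname is simply the service name, the seed's DNS prefix, and the base domain joined together (values from this walkthrough):

```shell
BASE="argodemo.lab.kubermatic.io"
PREFIX="india.seed"                      # "self.seed" for the master-seed combo
echo "grafana-user.${PREFIX}.${BASE}"    # → grafana-user.india.seed.argodemo.lab.kubermatic.io
```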
cd <root directory of this repo>
make deploy-argo-dev-master deploy-argo-apps-dev-master
# get ArgoCD admin password via below command
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
The `make` target below creates a git tag with a pre-configured name, `dev-kkp-<kkp-version>`, and pushes it to your git repository. This way, when you want to upgrade the KKP version, you just need to update the KKP version at the top of the `Makefile` and run this make target again.
make push-git-tag-dev
# Apply the DNS CNAME records below manually in AWS Route53:
# argodemo.lab.kubermatic.io
# *.argodemo.lab.kubermatic.io
# grafana-user.self.seed.argodemo.lab.kubermatic.io
# alertmanager-user.self.seed.argodemo.lab.kubermatic.io
# You can get load balancer details from `k get svc -n nginx-ingress-controller nginx-ingress-controller`
# After DNS setup, you can access ArgoCD at https://argocd.argodemo.lab.kubermatic.io
make install-kkp-dev
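For illustration, the tagging scheme used by `make push-git-tag-dev` above can be sketched as follows. This is an assumed reconstruction (the authoritative recipe lives in the `Makefile`), run here against a throwaway scratch repository so it is self-contained:

```shell
# Scratch repo so the sketch is self-contained; the real target runs in your checkout.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

KKP_VERSION="2.25.0"          # placeholder: the version pinned at the top of the Makefile
TAG="dev-kkp-${KKP_VERSION}"
git tag "${TAG}"              # the real target would also push it: git push origin "${TAG}"
git tag -l "dev-kkp-*"        # → dev-kkp-2.25.0
```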
Next, create the long-lived kubeconfig used to register the master cluster as the seed named `self`:
make create-long-lived-master-seed-kubeconfig
# commit changes to git and push latest changes
make push-git-tag-dev
# Apply DNS record manually in AWS Route53
# *.self.seed.argodemo.lab.kubermatic.io
# Loadbalancer details from `k get svc -n kubermatic nodeport-proxy`
Fetch the certificate from `https://argodemo.lab.kubermatic.io/dex/` via browser or openssl, and insert it in `dev/common/custom-ca-bundle.yaml` for the secret `letsencrypt-staging-ca-cert`, under the key `ca.crt`, in base64-encoded format. After saving the file, commit the change to git, re-apply the tag via `make push-git-tag-dev`, and sync the ArgoCD App.

Note: You can follow the steps below only if you have a KKP EE license. With a KKP CE license, you can only work with one seed (the master-seed combo above).
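If you prefer the command line over the browser for the certificate step above, openssl and base64 can produce the value for the `ca.crt` key. The snippet below is a sketch: the live retrieval is shown as a comment (it requires the demo to be reachable), and a throwaway self-signed certificate stands in so the encoding step can be demonstrated end to end:

```shell
HOST="argodemo.lab.kubermatic.io"
# Live retrieval (requires the demo DNS and ingress to be up):
#   openssl s_client -connect "${HOST}:443" -servername "${HOST}" </dev/null 2>/dev/null \
#     | openssl x509 -outform PEM > ca.crt
# Stand-in for illustration: generate a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=${HOST}" -days 1 -out ca.crt 2>/dev/null
# Encode as a single base64 line, as required for the ca.crt key of the secret:
base64 -w0 ca.crt > ca.crt.b64
head -c 20 ca.crt.b64     # → LS0tLS1CRUdJTiBDRVJU ("-----BEGIN CERT..." encoded)
```

Paste the contents of `ca.crt.b64` under the `ca.crt` key of the secret in `dev/common/custom-ca-bundle.yaml`.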
We follow a similar procedure as for the master-seed combo, but with slightly different commands. Unless noted otherwise, we execute the commands below in a second shell where the kubeconfig of the dev-seed cluster above has been exported.
cd <root directory of this repo>
make deploy-argo-dev-seed deploy-argo-apps-dev-seed
# get ArgoCD admin password via below command
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# Apply the DNS CNAME records below manually in AWS Route53
# *.india.argodemo.lab.kubermatic.io
# grafana-user.india.seed.argodemo.lab.kubermatic.io
# alertmanager-user.india.seed.argodemo.lab.kubermatic.io
# You can get load balancer details from `k get svc -n nginx-ingress-controller nginx-ingress-controller`
# After DNS setup, you can access the seed ArgoCD at https://argocd.india.argodemo.lab.kubermatic.io
make create-long-lived-seed-kubeconfig
# commit changes to git and push the latest changes
make push-git-tag-dev
# Apply DNS record manually in AWS Route53
# *.india.seed.argodemo.lab.kubermatic.io
# Loadbalancer details from `k get svc -n kubermatic nodeport-proxy`
NOTE: If you receive timeout errors, restart the node-local-dns daemonset and/or the coredns and cluster-autoscaler deployments to resolve them.