Experimenting with Helm Operator with minikube
Helm
Helm is a package manager for Kubernetes. It abstracts the multiple resources required to install a system, Prometheus for instance, and wraps them in a convenient, distributable artifact called a chart.
It provides tunables (values) so that Helm users can tweak their installs to their own requirements.
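For example, with the plain Helm client you can override a chart's tunables at install time (a quick sketch; the chart reference and value path below are illustrative):
# Install a chart and override one of its values at install time.
# The chart reference (stable/prometheus) and the value path are illustrative.
$ helm install my-prometheus stable/prometheus --set server.service.type=NodePort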
A word on Helm V3
Helm V3 was released in November 2019 and differentiates itself from the previous major release (v2) in that it no longer requires an in-cluster backend (Tiller) in order to act on the target cluster.
Helm now ships as a single binary, a client, that can manage packages and deploy them without requiring any further setup.
See here for more details.
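You can quickly check which client you have; a v3 binary is all that's needed for the rest of this walkthrough:
# No Tiller to deploy: the client alone talks to the cluster.
$ helm version --short   # should report a v3.x.y version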
Helm Operator
Helm Operator is a Kubernetes operator developed by Weaveworks; it is part of the FluxCD project and the CNCF family.
It does what it says on the tin: it's an operator that deploys Helm charts from a CRD.
There's a high-level walkthrough of the Helm Operator here, demonstrated coupled with Flux, though Flux is not required.
The CRD specification can be found here.
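To give an idea of the shape, a HelmRelease essentially wraps a chart reference and a set of values (a minimal sketch; the names below are placeholders, and complete examples follow later in this post):
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-release              # placeholder name
  namespace: default
spec:
  releaseName: my-release
  chart:
    name: some-chart            # chart name within the repository
    repository: https://example.com/charts
    version: 1.0.0
  values: {}                    # overrides, as you would pass to the helm client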
Installation on minikube
Minikube
The current version of the charts still provides resources from an older version of the Kubernetes API (current k8s = 1.18).
We have to create a cluster whose version is a bit behind so that the right versions of the API resources are available.
minikube start --kubernetes-version 1.15.1 --cpus 4 --memory 8192
Helm Operator
We're following the official guide; the first step is to install the CRD.
$ kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/1.0.1/deploy/crds.yaml
customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.fluxcd.io created
Then we add the flux helm repository to our local environment:
$ helm repo add fluxcd https://charts.fluxcd.io
"fluxcd" has been added to your repositories
Then installing the operator in its own namespace:
$ kubectl create namespace flux
namespace/flux created
$ helm upgrade -i helm-operator fluxcd/helm-operator \
--namespace=flux \
--set helm.versions=v3
Release "helm-operator" does not exist. Installing it now.
NAME: helm-operator
LAST DEPLOYED: Sat Apr 25 14:39:46 2020
NAMESPACE: flux
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Flux Helm Operator docs https://docs.fluxcd.io
[...]
Check your deploy:
$ helm status helm-operator --namespace flux
NAME: helm-operator
LAST DEPLOYED: Fri Apr 24 16:56:19 2020
NAMESPACE: flux
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
Flux Helm Operator docs https://docs.fluxcd.io
Helm is kind enough to give you some notes and guidance on how to use the Helm Operator.
At the end of the install, it tells you that you can use kubectl get hr.
hr stands for HelmReleases, the CRD we created earlier.
Let’s have a look:
$ kubectl get hr
No resources found.
We've not deployed anything yet, so that makes sense.
Installing Prometheus
Now let's move on to the fun bit: let's deploy something. We could deploy Prometheus, for instance. Here is the HelmRelease manifest:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: prometheus
  namespace: default
spec:
  releaseName: prometheus
  chart:
    name: prometheus
    repository: https://kubernetes-charts.storage.googleapis.com
    version: 10.4.0
  values:
    service:
      type: NodePort
$ cat <<EOF | kubectl apply -f -
apiVersion: helm.fluxcd.io/v1
[...]
EOF
helmrelease.helm.fluxcd.io/prometheus created
Let's see!
$ kubectl get hr
NAME         RELEASE      PHASE       STATUS     MESSAGE                                                               AGE
prometheus   prometheus   Succeeded   deployed   Release was successful for Helm release 'prometheus' in 'default'.   44s
We notice that the CRD is very similar to what you'd expect to pass to the helm client, which makes sense. The feedback you get is:
- Phase (where you are in the helm deployment process)
- Status (current state of your deploy)
- Message (feedback from helm in the last phase)
You can get all the details of the previous phases; it's all stored in the CRD.
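For instance, the listing below was produced with something along these lines:
$ kubectl get hr -o json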
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "helm.fluxcd.io/v1",
"kind": "HelmRelease",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"helm.fluxcd.io/v1\",\"kind\":\"HelmRelease\",\"metadata\":{\"annotations\":{},\"name\":\"prometheus\",\"namespace\":\"default\"},\"spec\":{\"chart\":{\"name\":\"prometheus\",\"repository\":\"https://kubernetes-charts.storage.googleapis.com\",\"version\":\"10.4.0\"},\"releaseName\":\"prometheus\",\"values\":{\"service\":{\"type\":\"NodePort\"}}}}\n"
},
"creationTimestamp": "2020-04-25T13:49:51Z",
"generation": 1,
"name": "prometheus",
"namespace": "default",
"resourceVersion": "1636",
"selfLink": "/apis/helm.fluxcd.io/v1/namespaces/default/helmreleases/prometheus",
"uid": "e23ff1fd-8d61-46ee-8527-40c4a8315cc6"
},
"spec": {
"chart": {
"name": "prometheus",
"repository": "https://kubernetes-charts.storage.googleapis.com",
"version": "10.4.0"
},
"releaseName": "prometheus",
"values": {
"service": {
"type": "NodePort"
}
}
},
"status": {
"conditions": [
{
"lastTransitionTime": "2020-04-25T13:50:00Z",
"lastUpdateTime": "2020-04-25T13:50:00Z",
"message": "Chart fetch was successful for Helm release 'prometheus' in 'default'.",
"reason": "ChartFetched",
"status": "True",
"type": "ChartFetched"
},
{
"lastTransitionTime": "2020-04-25T13:50:01Z",
"lastUpdateTime": "2020-04-25T13:50:01Z",
"message": "Release was successful for Helm release 'prometheus' in 'default'.",
"reason": "Succeeded",
"status": "True",
"type": "Released"
}
],
"observedGeneration": 1,
"phase": "Succeeded",
"releaseName": "prometheus",
"releaseStatus": "deployed",
"revision": "10.4.0"
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
Installing Chartmuseum
For the next bit we'll install ChartMuseum, an open-source repository for Helm charts. Note: it's important for the Helm Operator to have direct access to ChartMuseum for future use; that's why we're colocating it in the same namespace in this example.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: chartmuseum
  namespace: flux
spec:
  releaseName: chartmuseum
  chart:
    name: chartmuseum
    repository: https://kubernetes-charts.storage.googleapis.com
    version: 2.12.0
  values:
    service:
      type: NodePort
    env:
      open:
        DISABLE_API: false
Pushing this CRD to the Helm Operator gives us a running ChartMuseum.
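We apply it like the other releases (a minimal sketch; the filename below is illustrative):
$ kubectl apply -f chartmuseum.yaml   # illustrative filename for the manifest above
$ kubectl get hr --namespace flux     # wait for the release to report deployed
Once the release is deployed, the ChartMuseum welcome page should be reachable through its NodePort service: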
$ curl `minikube service chartmuseum-chartmuseum --url`
<!DOCTYPE html>
<html>
<head>
<title>Welcome to ChartMuseum!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to ChartMuseum!</h1>
<p>If you see this page, the ChartMuseum web server is successfully installed and
working.</p>
<p>For online documentation and support please refer to the
<a href="https://github.com/helm/chartmuseum">GitHub project</a>.<br/>
<p><em>Thank you for using ChartMuseum.</em></p>
</body>
</html>
Looping the loop
We now have ChartMuseum, a Helm chart repository hosted in Kubernetes, desperately waiting to do something. Let's push something there. We can use one of my personal projects here.
$ git clone https://github.com/sledigabel/3-tier-python.git
Cloning into '3-tier-python'...
remote: Enumerating objects: 55, done.
remote: Counting objects: 100% (55/55), done.
remote: Compressing objects: 100% (39/39), done.
remote: Total 55 (delta 22), reused 45 (delta 13), pack-reused 0
Unpacking objects: 100% (55/55), done.
$ cd 3-tier-python/app3tierpython
$ helm dependency build && helm package .
Hang tight while we grab the latest from your chart repositories...
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading redis from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts
Successfully packaged chart and saved it to: /Users/sebastienledigabel/dev/temp/3-tier-python/app3tierpython/app3tierpython-0.1.0.tgz
Now we have a Helm chart for the app (which has a built-in dependency on redis), ready to be pushed to ChartMuseum.
$ curl -XPOST --data-binary "@app3tierpython-0.1.2.tgz" $(minikube service chartmuseum-chartmuseum --url)/api/charts
{"saved":true}
Now we can deploy the app3tierpython chart hosted on ChartMuseum.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: app3tierpython
  namespace: default
spec:
  releaseName: app3tierpython
  chart:
    name: app3tierpython
    repository: http://chartmuseum-chartmuseum:8080
    version: 0.1.2
What does Helm Operator say?
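First we apply the manifest (a minimal sketch; the filename is illustrative):
$ kubectl apply -f app3tierpython.yaml   # illustrative filename for the manifest above
Then we can inspect the resulting HelmRelease: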
$ kubectl describe hr app3tierpython
Name: app3tierpython
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"helm.fluxcd.io/v1","kind":"HelmRelease","metadata":{"annotations":{},"name":"app3tierpython","namespace":"default"},"spec":...
API Version: helm.fluxcd.io/v1
Kind: HelmRelease
Metadata:
Creation Timestamp: 2020-04-28T09:15:53Z
Generation: 1
Resource Version: 101480
Self Link: /apis/helm.fluxcd.io/v1/namespaces/default/helmreleases/app3tierpython
UID: bf8712e5-d083-40c0-ac33-e671f9572f00
Spec:
Chart:
Name: app3tierpython
Repository: http://chartmuseum-chartmuseum:8080
Version: 0.1.2
Release Name: app3tierpython
Status:
Conditions:
Last Transition Time: 2020-04-28T09:15:53Z
Last Update Time: 2020-04-28T09:15:53Z
Message: Release was successful for Helm release 'app3tierpython' in 'default'.
Reason: Succeeded
Status: True
Type: Released
Observed Generation: 1
Phase: Succeeded
Release Name: app3tierpython
Release Status: deployed
Revision: 0.1.2
Conclusions
The Helm Operator works pretty nicely, with a simple CRD that matches more or less what you can do with the helm client.
An interesting thing is that you can still view the state of your helm deployments with the classic helm CLI. But you cannot interactively change what has been deployed through the Helm Operator using the helm CLI, as the operator is declarative and would reconcile the custom resources with the real world.
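For example, the releases created by the operator still show up in the regular helm client (a quick sketch; the namespaces match where we deployed things above):
$ helm ls --namespace default   # should list the prometheus and app3tierpython releases
$ helm ls --namespace flux      # should list helm-operator and chartmuseum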