Using TLS-enabled Helm 2.x with multiple clusters

Helm 2.x uses a server-side component (Tiller), deployed in the Kubernetes cluster, which "interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources. It also stores the objects that represent releases".

Tiller runs in-cluster with privileges granted to it by its service account, which is suboptimal: anyone with cluster access can connect to the gRPC endpoint Tiller exposes.
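
To illustrate, here is a minimal sketch of that exposure, assuming Tiller was installed with defaults into the kube-system namespace:

```sh
# Forward Tiller's gRPC port to localhost -- no Tiller-side auth involved:
kubectl -n kube-system port-forward svc/tiller-deploy 44134:44134 &

# Point the helm CLI at the forwarded endpoint and you have full access
# to every release Tiller manages:
helm --host localhost:44134 list
```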

The upcoming Helm 3.x release solves the issue entirely by removing Tiller: the helm CLI communicates directly with the Kubernetes API, relying on Kubernetes RBAC for authentication and authorization. Check out the post A Gentle Farewell to Tiller on the Helm blog, and The State of Helm 3 by Thorsten Hans.

While waiting for the Helm 3 release, we have to deal with Tiller. There are many articles discussing Helm 2.x security, and two approaches stand out:

- One of the better options is to use Tillerless Helm v2.
- Another is to enable TLS-based auth, issuing TLS certificates to every user authorized to connect to a specific Tiller instance (since there can be multiple Tiller instances running, each with its own set of privileges); a reference sketch follows below.
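
For reference, the TLS setup follows roughly this shape; the certificate file names are placeholders, and the full certificate-generation steps are in Helm's "Using SSL Between Helm and Tiller" guide:

```sh
# Install Tiller with TLS enabled, requiring client-certificate verification:
helm init \
  --tiller-tls \
  --tiller-tls-verify \
  --tiller-tls-cert tiller.cert.pem \
  --tiller-tls-key tiller.key.pem \
  --tls-ca-cert ca.cert.pem

# Each authorized user then presents their own client certificate:
helm ls --tls \
  --tls-ca-cert ca.cert.pem \
  --tls-cert helm.cert.pem \
  --tls-key helm.key.pem
```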

Working with multiple TLS-enabled Helm 2.x clusters is somewhat of a pain: selecting the certificates for a given cluster requires environment variables or CLI options on every invocation, with no way to put them in a configuration file.
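
Concretely, each cluster switch means re-exporting a set of variables like the following (the per-cluster directory layout here is just an example; the HELM_TLS_* variables are supported by recent Helm 2.x releases):

```sh
# Per-cluster TLS settings -- all of this must change when switching clusters:
export HELM_TLS_ENABLE=true
export HELM_TLS_CA_CERT=~/.helm/tls/prod/ca.cert.pem
export HELM_TLS_CERT=~/.helm/tls/prod/helm.cert.pem
export HELM_TLS_KEY=~/.helm/tls/prod/helm.key.pem

helm --kube-context prod list
```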

To make it easier, I've made a wrapper script, naturally. When --tls is passed to the wrapper, it sets the necessary environment variables based on the --kube-context argument, and then calls the helm binary.
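
A minimal sketch of the idea (not the actual script) might look like this, assuming certificates are stored under ~/.helm/tls/<context>/, a layout chosen purely for illustration:

```sh
#!/usr/bin/env bash
set -euo pipefail

context=""
tls=false

# Scan the arguments for --tls and --kube-context without consuming them;
# everything is passed through to the real helm binary untouched.
args=("$@")
for i in "${!args[@]}"; do
  case "${args[$i]}" in
    --tls)             tls=true ;;
    --kube-context)    context="${args[$((i + 1))]:-}" ;;
    --kube-context=*)  context="${args[$i]#--kube-context=}" ;;
  esac
done

# With --tls and a known context, point Helm at that cluster's certificates.
if [[ "$tls" == true && -n "$context" ]]; then
  certs="$HOME/.helm/tls/$context"
  export HELM_TLS_CA_CERT="$certs/ca.cert.pem"
  export HELM_TLS_CERT="$certs/helm.cert.pem"
  export HELM_TLS_KEY="$certs/helm.key.pem"
fi

exec helm "$@"
```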

The script also addresses the annoyance of having to keep the Helm CLI version in sync with Tiller (the server component): it detects the Tiller version and downloads the matching helm CLI from Helm's GitHub releases page.
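
The detection itself can be as simple as reading Tiller's image tag; here is a sketch, assuming a default kube-system install and a linux-amd64 client (the release tarballs linked from the GitHub releases page are served from get.helm.sh):

```sh
# Read the Tiller version from its deployment's image tag, e.g. "v2.14.3":
tiller_version=$(kubectl -n kube-system get deploy tiller-deploy \
  -o jsonpath='{.spec.template.spec.containers[0].image}' | cut -d: -f2)

# Fetch the matching helm CLI release and extract just the binary:
curl -fsSL "https://get.helm.sh/helm-${tiller_version}-linux-amd64.tar.gz" \
  | tar -xzOf - linux-amd64/helm > "helm-${tiller_version}"
chmod +x "helm-${tiller_version}"
```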