Get Your First App Up and Running with Helm Charts
To those not familiar with it, Helm is an application package manager for Kubernetes; it assists in quickly and reliably provisioning container applications through easy installation, upgrades, and removal. Helm does this through the use of Charts, which define an application as a collection of related Kubernetes resources.
Helm makes managing the deployment of your applications inside of Kubernetes easier by using a "templated approach." Every Helm chart follows the same structure, while still being flexible enough to represent any type of application you need on Kubernetes. Helm also supports versioning, taking into consideration the fact that deployment needs change over time. The alternative to using Helm charts is to use multiple configuration files manually applied to your Kubernetes cluster to launch an application. The downside to this is that manual processes inevitably lead to errors.
In this article, we'll be walking through using Helm with Minikube, a single-node testing environment for Kubernetes. We will be making a small web server application with Nginx.
For this example, I am using Minikube version 1.13.1 and Helm version 3.0.0 installed locally.
To get set up, make sure to:
Download and configure Minikube; the official documentation walks through this: https://minikube.sigs.k8s.io/docs/start/
Download and configure Helm using your preferred package manager, or manually from the releases page: https://github.com/helm/helm/releases
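For example, on macOS both tools can be installed with Homebrew (assuming Homebrew is already present; use your platform's equivalent otherwise):
$ brew install minikube
$ brew install helm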
Create a Helm chart
We start off by confirming we have all prerequisites installed:
$ which helm
$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Starting a new Helm chart requires one simple command. For this tutorial, I'm naming the chart chartbuild:
$ helm create chartbuild
Creating chartbuild
$ ls chartbuild/
Chart.yaml charts/ templates/ values.yaml
Reviewing a Chart’s Structure
Now that you have created a chart, study its structure to see what's inside. The first two files you see, Chart.yaml and values.yaml, define the chart: what it is and the values it will have when deployed.
When looking at Chart.yaml, you can see the outline of a Helm chart's structure:
apiVersion: v2
name: chartbuild
type: application
description: Kubernetes Helm chart
version: 0.1.0
appVersion: 1.19.0
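To confirm what Helm reads from this file, you can print the parsed chart metadata at any time:
$ helm show chart chartbuild/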
One of the most important areas of the chart is the templates/ directory. This holds all of the configuration for your application that will be deployed into the cluster.
Our application's templates define a deployment, an ingress, a service account, a service, and a test directory.
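The exact file names depend on your Helm version, but with the version used here, listing the directory should look roughly like this:
$ ls chartbuild/templates/
NOTES.txt  _helpers.tpl  deployment.yaml  ingress.yaml  service.yaml  serviceaccount.yaml  tests/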
There is another directory, charts/, which is empty. Charts allow you to add dependent charts that are necessary to deploy your application; some Helm charts need several additional charts deployed alongside the main application. In those cases, the values file is updated with the values for each dependent chart, making it possible to configure and deploy all of them simultaneously. This is a more advanced configuration (sketched briefly below), so we are leaving the charts/ folder empty.
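For reference, Helm 3 declares dependencies in Chart.yaml and pulls them into charts/ with helm dependency update. A minimal sketch, using a hypothetical PostgreSQL dependency for illustration only:
# Added to Chart.yaml (hypothetical dependency)
dependencies:
  - name: postgresql
    version: "9.1.1"
    repository: "https://charts.bitnami.com/bitnami"
$ helm dependency update chartbuild/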
Understanding and Editing Values
The template files contain placeholders that collect deployment information from values.yaml. So, to customize your Helm chart, you need to edit the values file. By default, the values.yaml file looks like this:
replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name:

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []

resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
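To see how these values are consumed, open templates/deployment.yaml. The container image, for example, is assembled from the image values above (this excerpt is approximate; the exact template varies by Helm version):
# Excerpt from templates/deployment.yaml (approximate)
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}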
Basic Chart Configurations
Starting from the top of the file, you can see that replicaCount is set to 1 by default, meaning that only one pod will come up. We only need one pod for this example, but this value is a great way to see how easy it is to have Kubernetes run multiple pods for redundancy.
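For instance, scaling to three pods would be a one-line change (a hypothetical value, for illustration):
replicaCount: 3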
The image section has two areas we want to concentrate on: the repository you are pulling your image from, and the pullPolicy. The pullPolicy in this example is set to IfNotPresent, meaning the image is downloaded only if it isn't already present on the node. There are two other options: Always, which pulls the image on every pod start or restart (a good choice when a tag such as latest can point to a newer build, or to recover from a corrupted local image), and Never, which only uses an image already present locally and never pulls.
In this example, we will be changing the value to Always.
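After that edit, the image section of values.yaml reads:
image:
  repository: nginx
  pullPolicy: Always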
Naming and secrets
For this next section, inspect the overrides in the chart. The first override is imagePullSecrets, a setting that references a Kubernetes secret, such as registry credentials (a password or an API key you've generated), needed to pull images from a private repository. Our Nginx image is public, so it stays empty here.
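If you did need one, you would create the secret with kubectl and then reference it by name in values.yaml; a hedged sketch with hypothetical registry details:
$ kubectl create secret docker-registry regcred \
    --docker-server=registry.example.com \
    --docker-username=myuser \
    --docker-password=mypassword
imagePullSecrets:
  - name: regcred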
Below this, you will see nameOverride and fullnameOverride. When you ran helm create, the chart's name was added to a number of configuration files, like the YAML file above. If you need to rename your chart, these overrides are the best place to do it, ensuring you do not miss any config files.
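For example, renaming the deployed resources without recreating the chart might look like this (hypothetical names):
nameOverride: "webserver"
fullnameOverride: "my-webserver"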
Assigning System Resources
Helm permits you to explicitly allocate hardware resources. You can configure both the amount of resources the application requests from the scheduler and the upper limits it is allowed to consume.
For a small application, we could set the following resources by uncommenting the suggested values, turning the empty block into explicit requests and limits.
Before:
resources: {}
After:
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
Tolerations, Node Selectors, and Affinities
These last three are based on node configurations.
The nodeSelector is helpful when you want to assign part of your application to specific nodes in your cluster. For example, if you have infrastructure-specific applications, you can label a node and then match that label in the Helm chart, as sketched below. When the application is deployed, it will be scheduled onto a node that matches the selector.
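A minimal sketch, using a hypothetical node name and label: the first line labels the node, and the nodeSelector entry in values.yaml then restricts scheduling to nodes carrying that label.
$ kubectl label nodes worker-1 disktype=ssd
nodeSelector:
  disktype: ssd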
Tolerations, taints, and affinities work in conjunction to control which nodes pods run on. Node affinity is a property of pods; it ties them to a set of nodes (either as a preference or a requirement). Taints are the opposite: they give a node the ability to repel a set of pods.
If a node is tainted, it may not be working properly or may not have enough resources to hold the deployment. Tolerations are key/value pairs watched by the scheduler to confirm that a pod may be placed on a tainted node.
Node affinity is conceptually similar to nodeSelector: it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.
nodeSelector: {}
tolerations: []
affinity: {}
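All three default to empty in this chart. A hedged sketch of a matching taint and toleration pair (the key and node name are hypothetical): with the taint in place, only pods carrying this toleration may be scheduled onto worker-1.
$ kubectl taint nodes worker-1 dedicated=infra:NoSchedule
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "infra"
    effect: "NoSchedule"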
Deploying Helm Charts
After you've made the necessary modifications, you can deploy the chart with a single Helm command: give the release a name, point to the chart, pass a values file, and, if needed, target a namespace. The release name my-chart is used here:
$ helm install my-chart chartbuild/ --values chartbuild/values.yaml
The command's output will provide you with the next steps necessary to connect to the application; these include how to set up port forwarding, which gives you the ability to reach the app from your localhost.
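You can also confirm the release and its pod at any point:
$ helm list
$ kubectl get pods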
Follow those instructions and connect to the Nginx server:
$ export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=chartbuild,app.kubernetes.io/instance=my-chart" -o jsonpath="{.items[0].metadata.name}")
$ echo "Visit http://127.0.0.1:8080 to use your application"
Visit http://127.0.0.1:8080 to use your application
$ kubectl port-forward $POD_NAME 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Viewing a Deployed Application
To view your app, open your web browser and go to http://127.0.0.1:8080.
When you see the Nginx welcome page, you've successfully deployed an Nginx web server by using a Helm chart!
Conclusion
In this article, we walked through building a Helm chart and highlighted how it can speed up the delivery of applications on Kubernetes. Starting from a generated directory, we built up a Helm chart with simple commands, deployed it to the cluster, and then accessed an Nginx server.
About Stone Door Group
Stone Door Group is a DevOps solutions integrator specializing in every flavor of Docker and Kubernetes. Our Docker CE-to-EE Accelerator℠ transforms your development instance of Docker CE into a compliant, enterprise container platform. For more information, drop us a line at letsdothis@stonedoorgroup.com.
About the Author
Amber Ernst is a Docker Certified Associate and Docker Accredited Instructor for Stone Door Group, a Mirantis Value Added Reseller. Amber is a Docker and Kubernetes expert who currently teaches all courses in Docker’s official training catalogue and is based in San Antonio, TX.