Kevin Ashcraft

Linux & Radio Tutorials

An Example Kubernetes Application

This is an example of how to get an application up and running on a Kubernetes cluster.

Cluster Layout

A Kubernetes cluster is home to several different tools, all working together to orchestrate containerized applications. So before we set up an application, let's briefly look at the big picture, starting at the base.

Nodes are servers (typically running a minimal OS such as CoreOS) that are part of the cluster. They house the Pods, which house the Containers.

Containers are created from images (hosted on Docker Hub, GCR, etc). They live inside of Pods.

Pods are groups of related Containers (most often just one Container, a 1-to-1 relationship).

Deployments are scalable Pod templates: they define the Pod structure and the number of replicas.

Services provide a single point of access to a set of Pods (from within the cluster).

Ingresses provide external points of access to Services.

Load Balancers (separate from Kubernetes) take incoming requests to a single IP address and distribute them to multiple servers.

For example: a request is made on port 80 for the host www.example.com. This request first hits the Load Balancer which sends it to any one of the nodes. The Ingress, listening on all of the nodes, receives the request and sends it to a Service, which then forwards it to the correct Pod on the correct Node.

Load Balancer -> Ingress -> Service -> Pod

To reiterate it all one more time, Containers live in Pods which live on Nodes. Services provide an internal, single point of access to Pods, and Ingresses listen on the external network, sending incoming requests to the appropriate service.

Okay, now that we understand the structure, let's set up an example application.

Config Files

There are a few different ways to interact with Kubernetes, from individual commands to configuration files and Helm.

We're going to use the kubectl apply -f command to apply configuration files. This has the advantage of creating reusable template files as we go, and it also provides an easy way to update things later.

You can run kubectl apply -f to both create and update a configuration.
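As a sketch of that workflow (the file name follows the examples later in this tutorial):

```shell
# Create everything declared in a config file
kubectl apply -f example-com.deployment.yaml

# Check what the apply produced
kubectl get deployments

# After editing the file, the same command updates the objects in place
kubectl apply -f example-com.deployment.yaml
```

Because apply is declarative, re-running it is always safe: Kubernetes reconciles the cluster toward whatever the file currently says.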

Create a Deployment

Create a simple Deployment of an nginx server.

example-com.deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: example-com-controller
  labels:
    app: example-com
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-com
  template:
    metadata:
      name: example-com-pod
      labels:
        app: example-com
    spec:
      containers:
      - name: example-com-nginx
        image: nginx
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80

Most configuration files contain this same layout of kind, apiVersion, metadata, and spec.

The spec of a Deployment defines the selector, replicas, and template.

The selector (via its matchLabels) specifies which Pods this Deployment will control.

The replicas field states the number of Pods which should be running. This is the number you change to scale the Deployment. You could run kubectl scale deployment example-com-controller --replicas=3 to change the number of Pods.
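Equivalently, with the declarative approach you would edit the replicas value in the config file and re-apply it. A sketch of the relevant excerpt:

```yaml
# example-com.deployment.yaml (excerpt)
spec:
  replicas: 3   # was 1; re-run `kubectl apply -f example-com.deployment.yaml`
```

Editing the file keeps your configuration in sync with the cluster, which is harder to guarantee with one-off scale commands.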

The template is the Pod configuration. It has the metadata and spec which will be used when the Pods are created.

The label in the template's metadata must match the selector's matchLabels.

The spec.template.spec states the containers which will exist inside of the pod.

The containers are an array of definitions, each including a name and image; they can also include ports to expose and a livenessProbe used to check that the Container is still working.
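The probe's timing can also be tuned. A sketch, where the specific values are illustrative assumptions rather than recommendations:

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5   # wait before the first check
  periodSeconds: 10        # check every 10 seconds
  failureThreshold: 3      # restart the Container after 3 consecutive failures
```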

Here we've created a Deployment with 1 Pod containing a Container built from the Nginx image with port 80 exposed.

Create a Service

Create a Service for the example-com Pods.

example-com.service.yaml

kind: Service
apiVersion: v1
metadata:
  name: example-com-service
  labels:
    app: example-com
spec:
  selector:
    app: example-com
  ports:
  - port: 80
    targetPort: 80

Here we've created a Service which will expose port 80 and forward it to Pods with the label app=example-com.

As with the Deployment, this file has the same main properties, a kind, apiVersion, metadata, and a spec.

The spec has a selector which states that this Service should look for Pods with the app=example-com label.

The spec also has a ports property which states the port of the Service and the targetPort to send requests to on the Pod.

The Service can be accessed from anywhere within the cluster.

Run kubectl describe service example-com-service to see the ClusterIP, which is where the Service can be reached.

Kubernetes also runs an internal DNS, so the Service can be accessed via example-com-service.default.svc.cluster.local.

To test this we can run a busybox container and use wget.

kubectl run busybox --image=busybox --restart=Never -it -- sh
wget -qO- example-com-service.default.svc.cluster.local

Create an Ingress

Now that we've got some scalable Pods running and a Service to reach them from within Kubernetes, it's time to provide some access to the outside world. To do that we'll create an Ingress.

An Ingress in Kubernetes forwards HTTP(S) requests to the appropriate Service as defined by hosts and paths. It can also terminate TLS, providing a single point for SSL certificates.

example-com.ingress.yaml

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: example-com-ingress
  labels:
    app: example-com
spec:
  defaultBackend:
    service:
      name: example-com-service
      port:
        number: 80

Our Ingress is a simple example which forwards all traffic on port 80 to the example-com-service.

The spec states the backend Service's name and port that incoming requests will be sent to.

The spec could also include rules with hosts and paths mapping to different backends, as well as tls information for HTTPS requests.
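A sketch of what such rules might look like, using the networking.k8s.io/v1 schema (the host and secret name here are assumptions):

```yaml
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: example-com-tls   # a Secret holding the TLS certificate and key
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-com-service
            port:
              number: 80
```

Requests for www.example.com would then be routed to example-com-service, with TLS terminated at the Ingress.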

You should now be able to request port 80 from any Node on the Cluster and have it answered from the example-com Pods.

An Ingress is different from exposing a Service with type NodePort: rather than simply opening a high-numbered port on every Node, the Ingress controller listens on all of the Nodes and routes HTTP requests by host and path.
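For comparison, exposing the same Pods with a NodePort Service would look roughly like this (the nodePort value is an assumption; it must fall within the default 30000-32767 range):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: example-com-nodeport
spec:
  type: NodePort
  selector:
    app: example-com
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # reachable on every Node at this high port
```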

On some cloud platforms an Ingress controller already exists, but on most you'll have to set up one yourself, such as the NGINX Ingress Controller.