
# Kubernetes - Basic concepts

This article is also available in Spanish.

## What is Kubernetes?

Kubernetes is an awesome tool that allows you to manage containerized applications in minutes, making it possible to:

  • Deploy your applications quickly and in a predictable way.

  • Scale your applications on the fly.

  • Seamlessly roll out new features.

  • Optimize use of your hardware by using only the resources you need.
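
In practice, these capabilities map to a handful of kubectl commands. The following is a minimal sketch assuming a deployment named my-app already exists; the deployment name, container name and image tag are hypothetical placeholders, not something defined in this article.

```
# Scale an existing deployment on the fly (my-app is a hypothetical name)
$ kubectl scale deployment my-app --replicas=5

# Roll out a new version by swapping the container image (tag is illustrative)
$ kubectl set image deployment/my-app my-app=my-app:2.0.0

# Follow the rollout, and undo it if something goes wrong
$ kubectl rollout status deployment/my-app
$ kubectl rollout undo deployment/my-app
```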


To perform all these activities, Kubernetes relies on the operating system kernel and on a feature of the supported operating systems known as operating-system-level virtualization, i.e. containers. This is what makes it possible to modularize and isolate an application so that it can then be run on multiple hosts without considerable loss of performance.

Given this architecture and following the separation of concerns pattern, each container must have a specific responsibility, so that faults can be identified more easily and containers can also be replaced by others that perform the same functionality with practically no friction.

Likewise, each container should have a team responsible for it, so that teams stay focused on their own responsibilities.

On the other hand, it is Kubernetes that is responsible for starting, monitoring and maintaining the health of the entire project and the nodes involved, always seeking to preserve the desired state. This means that it can not only identify that something is wrong, but also fix it, or at least try to.

As an example, it is possible to define a container for an application that we have built and also a container for each of its dependencies:

  • Node.js container

  • PostgreSQL container

  • Redis container

  • Data container

Kubernetes uses a concept called pods to group those containers into a unit that represents the application. All containers in a pod run on the same host and share:

  • IP address (localhost)

  • memory

  • volumes

This greatly simplifies integrating the dependencies and distributing them. We can think of a pod as our complete application.
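
To make that grouping concrete, the following is a minimal sketch of a pod manifest for the example above; the container names, image tags, port and volume are assumptions for illustration, not taken from a real project.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                    # hypothetical pod name
spec:
  containers:
  - name: web                     # our Node.js application
    image: node:8                 # assumed image tag
    ports:
    - containerPort: 3000
  - name: postgres
    image: postgres:9.6
    volumeMounts:
    - name: data                  # the "data container" role, here played by a shared volume
      mountPath: /var/lib/postgresql/data
  - name: redis
    image: redis:3.2
  volumes:
  - name: data
    emptyDir: {}
```

Since all the containers share the pod's network namespace, the Node.js container can reach PostgreSQL and Redis simply on localhost.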

Once we have our pods configured, the last minimal step is to configure the deployment file, which is where we specify the number of replicas in the system plus some other metadata parameters such as the deployment name, version and labels.

With these concepts in mind, we are ready to run our first server with Kubernetes.

The deployment file uses the YAML format. Working through a simple example with an nginx server, the file myserver.yaml would be as follows:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```

Here, since we only have one pod, it is specified directly under the "template" key, and the pods will inherit the deployment name. The same happens with the container, since in this example we use the official nginx image.

To run our example we must execute:

```
$ kubectl create -f ./myserver.yaml
deployment "my-nginx" created
```

With this we have our first container running thanks to Kubernetes! And much more, since Kubernetes also monitors it and, in case of failure, keeps the desired number of instances running, besides many other features that we will look at in the future, such as labels, the dashboard, etc.
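
As a quick way to see that supervision in action, the commands below are a sketch using standard kubectl subcommands; the pod name shown is an illustrative placeholder, and the actual names and output will differ.

```
# Inspect the deployment and its two pods
$ kubectl get deployments
$ kubectl get pods -l run=my-nginx

# Delete one pod; the desired state (2 replicas) is no longer met,
# so Kubernetes immediately schedules a replacement
$ kubectl delete pod my-nginx-xxxxxxxxxx-yyyyy
$ kubectl get pods -l run=my-nginx
```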


*Article by: Alfredo Levy at Bixlabs, Uruguay*
---