Introduction to OpenShift: Part 1
In this article series, I am going to explain some fundamental concepts of containerization, Kubernetes, and Red Hat OpenShift.
Containerized Applications
Software applications are traditionally deployed as a single set of libraries and configuration files to a runtime environment. That environment is usually an operating system with a set of supporting services running, such as a database server or an HTTP server, but it can be any environment that provides the same services, such as a virtual machine or a physical host.

The major drawback of this model is that the application is entangled with its runtime environment, so any update or patch applied to the base OS might break the application. For example, an OS update might include multiple dependency updates, including shared libraries used by several programming languages, that introduce incompatible changes for the running application. Moreover, if another application shares the same host OS and the same set of libraries, an update that fixes the first application's libraries might break the second application. For a company developing such applications, any maintenance on the running environment therefore requires a full round of tests to guarantee that an OS update does not affect the application. Depending on the complexity of the application, that regression verification is not an easy task and might require a major project.

Furthermore, an update normally requires a full application stop. Minimizing the impact of this downtime typically implies an environment with high-availability features enabled, which increases the complexity of the deployment process. Maintenance becomes cumbersome, and any deployment or update becomes a complex process.
Alternatively, a system administrator can work with containers, which are a kind of isolated partition inside a single operating system. Containers provide many of the same benefits as virtual machines, such as security, storage, and network isolation, while requiring far fewer hardware resources and being quicker to launch and terminate. They also isolate the libraries and runtime resources (such as CPU and storage) used by an application, minimizing the impact of any host OS update on the application.
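To make this concrete, here is a minimal sketch using the Docker CLI (assuming Docker is installed and using the official nginx image from Docker Hub; the container name `web` is illustrative):

```
# Start an isolated web server container in the background;
# no separate OS installation is needed on the host
docker run -d --name web -p 8080:80 nginx

# The container carries its own filesystem and libraries,
# independent of the host OS
docker exec web cat /etc/os-release

# Remove the container in seconds; the host OS is untouched
docker rm -f web
```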
Advantages of Containers
- Containers help not only with the efficiency, elasticity, and reusability of the hosted applications, but also with the portability of the platform and the applications themselves. There are many container providers available, such as Rocket (rkt), Drawbridge, and LXC, but one of the major providers is Docker.
- A container works in a closed environment where changes made to the host OS or to other applications do not affect it. Because the libraries a container needs are self-contained, the application can run without disruption. For example, each application can live in its own container with its own set of libraries, so an update made to one container does not affect other containers that might not work with that update.
- Containers deploy quickly because there is no need for a full OS install or restart. Normally, to achieve the same isolation without containers, a new OS installation is required on a physical host or VM, and even a simple update might require a full OS restart. A container only requires a restart of the container itself, without stopping any services on the host OS.
- In a traditional deployment scenario using a single host, any differences between environments can break the application. With containers, those differences and incompatibilities are mitigated because the same container image is used in every environment.
- The same container image can be reused by multiple applications without the need to set up a full OS. For example, a database container can be used to create a set of tables for a software application, then quickly destroyed and recreated without running a set of housekeeping tasks; the same database image can also be used to deploy the application in production (see the sketch after this list).
- Containers support the microservices development approach because they provide a lightweight and reliable environment for creating and running services that can be deployed to development or production without the complexity of a multi-machine environment.
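As referenced in the reusability point above, a disposable database container might look like this (a sketch assuming the official mysql image from Docker Hub; the name `testdb` and the password are illustrative):

```
# Start a throwaway MySQL instance for development or testing
docker run -d --name testdb -e MYSQL_ROOT_PASSWORD=secret mysql

# ...run the application's schema setup and tests against it...

# Destroy it with no housekeeping tasks on the host
docker rm -f testdb

# Recreate a pristine instance from the same image at any time
docker run -d --name testdb -e MYSQL_ROOT_PASSWORD=secret mysql
```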
Docker Architecture
Docker is one of the container implementations available for deployment, and it is supported by multiple companies. Docker Hub provides a large set of container images developed by the community. Docker uses a client-server architecture, described below:
- Client: The command-line tool (docker) is responsible for communicating with a server using a RESTful API to request operations.
- Server: This service, which runs as a daemon on an operating system, does the heavy lifting of building, running, and downloading container images. The daemon can run either on the same system as the docker client or remotely.
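A quick way to see both halves of this architecture (assuming a local Docker installation; the remote host below is hypothetical):

```
# Prints version information for both the client and the server (daemon)
docker version

# By default the client talks to the local daemon over a Unix socket;
# it can also target a remote daemon (hypothetical host and port)
export DOCKER_HOST=tcp://remote-host:2375
docker info
```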
Docker depends on three major elements:
- Images: Images are read-only templates that contain a runtime environment, including application libraries and the applications themselves. Images are used to create containers, and they can be created, updated, or downloaded for immediate consumption.
- Registries: Registries store images for public or private use. The best-known public registry is Docker Hub, which hosts multiple images developed by the community, but private registries can be created to support internal image development at a company's discretion.
- Containers: Containers are segregated user-space environments for running applications isolated from other applications sharing the same host OS.
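These three elements come together in a typical workflow like the following sketch (`registry.example.com:5000` stands in for a hypothetical private registry):

```
# Image: download a read-only template from a registry (Docker Hub by default)
docker pull httpd

# Container: create an isolated user-space environment from that image
docker run -d --name myapp httpd

# Registry: tag and push the image to a private registry
docker tag httpd registry.example.com:5000/httpd
docker push registry.example.com:5000/httpd
```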
Each image in Docker consists of a series of layers that are combined into what the containerized application sees as a single virtual file system. Docker images are immutable; any extra layer added on top of the pre-existing layers overrides their contents without changing them directly. Therefore, any change made inside a running container is lost when the container is removed, unless a new image is generated that incorporates the extra read/write layer.
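The layering is visible with `docker history`, and the throwaway nature of a container's read/write layer can be demonstrated directly (using the nginx image as an example; the names are illustrative):

```
# List the read-only layers stacked into the image's virtual file system
docker history nginx

# Any file created here lives only in the container's read/write layer
docker run --name demo nginx touch /tmp/scratch-file

# Removing the container discards that layer, and the file with it
docker rm demo
```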
In a nutshell, to create a new image, there are two approaches:
- Using a running container: An immutable image is used to start a new container instance, and any changes or updates needed by that container are made to an extra read/write layer. Docker commands can be issued to store that read/write layer over the existing image and generate a new image (sketched below). Due to its simplicity, this is the easiest way to create images, but it is not a recommended approach, because the image size can grow large with unnecessary files, such as temporary files and logs.
- Using a Dockerfile: Alternatively, container images can be built from a base image using a set of steps called instructions. Each instruction creates a new layer in the image that is used to build the final container image. This is the suggested approach to building images, because it controls exactly which files are added to each layer (see the Dockerfile sketch below).
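A sketch of the first approach (the image and container names are illustrative):

```
# Start a container from an immutable base image and make changes inside it
docker run -it --name mycontainer ubuntu bash
# ...install packages, edit files, then exit...

# Store the container's read/write layer over the base image as a new image
docker commit mycontainer myimage:v1
```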
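And a sketch of the second approach, a minimal Dockerfile (the base image and paths follow the official httpd image documentation; the application directory is hypothetical):

```
# Each instruction below adds one layer to the final image

# Start from the official Apache HTTP server base image
FROM httpd:2.4

# Copy the (hypothetical) application files into the document root
COPY ./public-html/ /usr/local/apache2/htdocs/
```

The image is then built with `docker build -t myapp:v1 .`, where `myapp:v1` is an illustrative tag; only the files named in the instructions end up in the image layers.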
Thank you for reading…
Source & References:
- Docker official website
- Kubernetes official website
- Red Hat OpenShift website
- edX Learning Platform