At a time when flexibility and reliability of software delivery dominate the industry, OpenShift stands out as a platform that combines these advantages. It’s an open-source container platform powered by Kubernetes that takes developers into a streamlined realm of application development, deployment, and management.
In this article, I will describe the basics of OpenShift and its core components. Whether you’re a developer looking to streamline your application deployment process, an IT professional looking to understand container orchestration, or simply someone interested in cloud-native technologies, this article will give you an overview of OpenShift’s capabilities.
Before we dive into OpenShift, it’s essential to have a basic grasp of Kubernetes (in short: K8s), the platform on which OpenShift is built. If you feel that you know K8s like the back of your hand, you can of course skip this part.
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It groups containers into logical units, making container management across infrastructure environments seamless. With Kubernetes, you can automate the distribution and scheduling of applications in the cluster, manage workloads, and ensure high availability and failure mitigation. It provides a solid foundation for cloud applications, ensuring they run efficiently and reliably in various environments.
This brief introduction to Kubernetes sets the stage for understanding the improvements that OpenShift brings, providing a more developer-centric and enterprise-ready solution by building on and extending the orchestration capabilities of Kubernetes.
OpenShift, Red Hat’s enterprise Kubernetes platform, provides an attractive, enterprise-ready container application management solution by extending Kubernetes with additional development and operational capabilities. Through the lens of OpenShift, developers and administrators can leverage the orchestration capabilities of Kubernetes while taking advantage of the additional features, streamlined workflows, and robust support infrastructure provided by OpenShift.
Below is an overview of the most significant differences between the two solutions, highlighting the benefits that OpenShift brings to meet the needs of both developers and enterprises.
In the following paragraphs, I will cover the core components and architecture of OpenShift, setting the stage for a deeper understanding of deploying, managing, and scaling applications in an OpenShift environment.
OpenShift’s architecture is a testament to its robustness and flexibility. At its core are several components that are key to deploying and managing applications.
Pods are the smallest units you can deploy in OpenShift. They contain one or more containers that operate as a single unit.
Pod example:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: hello-openshift
    image: openshift/hello-openshift
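If you save this manifest to a file (the name pod.yaml below is just an example), you can create the pod and check that it’s running with a few basic commands:

oc create -f pod.yaml   # Create the pod from the manifest file
oc get pods             # Verify that example-pod reaches the Running state
oc logs example-pod     # View the output of the hello-openshift container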
Services in OpenShift allow you to connect pods to a network, ensuring smooth communication between various application components. Routes, on the other hand, expose these services to external traffic.
Service example:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Route example:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  to:
    kind: Service
    name: example-service
To clarify, in this example, the Service uses a selector that matches all pods labeled `app: example-app` and exposes them on port 80, forwarding traffic to port 8080 on the containers. The Route then “connects” external traffic to the Service called `example-service`.
Managing external access to services is a key concern in both OpenShift and Kubernetes. It’s handled mainly by Routes in OpenShift and by Ingress in Kubernetes. So how do they differ?
# OpenShift Route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  to:
    kind: Service
    name: example-service
---
# Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
The choice between Routes and Ingress can significantly impact how you manage external access to your application. While Routes provide a simplified approach that is particularly beneficial in less complex scenarios, Ingress offers a flexible path-based routing mechanism that can be beneficial in more complex ones.
The difference between Routes and Ingress highlights the flexibility and options available to developers and administrators in the OpenShift and Kubernetes ecosystems. Understanding routing mechanisms and their capabilities is fundamental to designing and implementing externally accessible applications. This knowledge enriches the understanding of how OpenShift and Kubernetes manage network traffic, further preparing developers and administrators for networking challenges in container orchestration environments.
A DeploymentConfig is an essential OpenShift resource designed to manage application deployment and updates. It encapsulates the desired application state, including the container image, the number of replicas, and other key configuration, and manages the application lifecycle to ensure this desired state is maintained.
DeploymentConfigs provide a declarative way to manage deployments, including support for rolling updates, rollbacks, and triggers that automate deployments in response to events. With DeploymentConfigs, developers and administrators can easily deploy, update, and manage applications in the OpenShift environment.
A sample DeploymentConfig with comments explaining each section:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-deploymentconfig        # Name of the DeploymentConfig
spec:
  strategy:
    type: Rolling                       # Deployment strategy type
  triggers:
  - type: ConfigChange                  # Trigger a new deployment when config changes
  replicas: 3                           # Number of pod replicas
  selector:
    app: example                        # Selector to match pods
  template:
    metadata:
      labels:
        app: example                    # Labels to apply to pods
    spec:
      containers:
      - name: hello-openshift           # Container name
        image: openshift/hello-openshift   # Container image
Secrets and ConfigMaps are critical resources for managing configuration data and sensitive information in OpenShift and Kubernetes. Using them, developers can create easier-to-maintain and more secure applications by ensuring a clean separation of configuration data from application code and images.
Secrets are used to store sensitive information such as passwords, SSH keys, and tokens. They ensure that sensitive data is stored securely and only accessible to authorized pods.
Sample configuration:
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
data:
  password: cGFzc3dvcmQ=   # Base64 encoded password ('password')
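As a side note, you don’t have to Base64-encode values by hand. The same Secret can be created imperatively; the value below is just an illustration:

oc create secret generic example-secret --from-literal=password=password   # oc handles the encoding for you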
ConfigMaps store non-sensitive configuration data in key-value pairs. They are ideal for storing configuration settings and other non-sensitive data.
Sample configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  log_level: INFO
In DeploymentConfig, you can reference Secrets and ConfigMaps to inject configuration data into pods.
Sample configuration:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-deploymentconfig
spec:
  template:
    spec:
      containers:
      - name: example-container
        image: example-image
        env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: example-configmap
              key: log_level
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: example-secret
              key: password
In this DeploymentConfig example, the LOG_LEVEL environment variable is injected from the example-configmap ConfigMap, while PASSWORD is injected from the example-secret Secret, so neither value has to be hard-coded in the image or the manifest.
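If you want to verify which environment variables a DeploymentConfig injects, including the ConfigMap and Secret references, you can list them with a single command (assuming the DeploymentConfig above has been created):

oc set env dc/example-deploymentconfig --list   # Show env vars, including configMapKeyRef/secretKeyRef sources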
When it comes to application scaling, there are two main options: manual and automatic. Let’s take a look at both. Manual scaling comes down to a single command:
oc scale dc/example-deploymentconfig --replicas=5 # Set 5 replicas
OpenShift’s HorizontalPodAutoscaler (HPA) feature automatically adjusts the number of pod replicas in a deployment based on observed metrics, such as CPU utilization. By dynamically scaling the number of replicas, HPA helps applications efficiently handle traffic growth, ensuring efficient resource use while maintaining performance and availability.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                    # Name of the Horizontal Pod Autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: example-deploymentconfig     # Target DeploymentConfig
  minReplicas: 3                       # Minimum number of replicas
  maxReplicas: 10                      # Maximum number of replicas
  targetCPUUtilizationPercentage: 80   # Target CPU utilization percentage
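As an alternative to writing the manifest, the same autoscaler can be created with one command, assuming the DeploymentConfig above already exists:

oc autoscale dc/example-deploymentconfig --min=3 --max=10 --cpu-percent=80   # Creates an HPA targeting the DeploymentConfig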
ReplicationControllers are a key Kubernetes functionality designed to ensure that the desired number of pod replicas are running in a cluster at any given time. They monitor the number of running pod instances and create/remove them to match the requested number. ReplicationControllers work to maintain the desired state by replacing failed pods, thus ensuring high availability and fault tolerance. Additionally, they support rolling configuration updates, enabling smoother transitions with minimal disruption.
Despite their usefulness, ReplicationControllers have been largely replaced by Deployments, which offer more fine-grained, declarative control over the desired state of an application, including the ability to easily roll back to previous versions and more complex update strategies. Nevertheless, understanding ReplicationControllers provides a basic grasp of the evolution of workload management in Kubernetes and OpenShift environments.
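For reference, a minimal standalone ReplicationController manifest could look like the sketch below; in practice, you will usually let a DeploymentConfig (or a Deployment, via ReplicaSets) create and manage it for you:

apiVersion: v1
kind: ReplicationController
metadata:
  name: example-rc                     # Name of the ReplicationController
spec:
  replicas: 3                          # Desired number of pod replicas
  selector:
    app: example                       # Pods matching this label are counted
  template:
    metadata:
      labels:
        app: example                   # Labels applied to pods created by this controller
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift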
The number of replicas is configured via the `replicas` field in a DeploymentConfig (which creates ReplicationControllers) or a Deployment (which creates the newer ReplicaSets).
spec:
  replicas: 3   # Ensure three replicas of the pod are always running
These configurations illustrate the declarative nature of OpenShift, where the desired state is described, and OpenShift ensures that the actual state of the system corresponds to the desired state. The interplay of these core components and configurations forms the basis of the OpenShift operating model. It provides ease of deployment, scaling, and application management, making OpenShift a powerful platform for developers and administrators.
The replicas in DeploymentConfig and ReplicationController are associated via a mechanism that ensures a specific number of pod replicas within an OpenShift or Kubernetes cluster. This relationship illustrates how OpenShift and Kubernetes work together to provide self-healing and declarative deployment capabilities, ensuring that applications remain resilient and maintain the desired state, even in the face of pod or node failures.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-deploymentconfig
spec:
  replicas: 3   # Desired number of pod replicas
  ...
Understanding the difference between Deployment (Kubernetes) and DeploymentConfigs (OpenShift) is critical to successfully managing applications. Both Deployments and DeploymentConfigs are used to deploy applications and maintain a certain number of pod replicas, but they differ in functionality and flexibility.
The choice between them often comes down to the project’s requirements and environment. DeploymentConfigs offer more granular control and additional functionality that can benefit complex OpenShift deployment scenarios, while Kubernetes Deployments offer a simplified, standard approach that may be preferred in Kubernetes-centric environments.
You’ll find an outline of the main differences below.
# OpenShift DeploymentConfig with triggers
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-dc
spec:
  triggers:
  - type: ConfigChange        # Automatic deployment on config change
  - type: ImageChange         # Automatic deployment on image change
  replicas: 2
  selector:
    app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift
---
# Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: hello-kubernetes
        image: k8s.gcr.io/echoserver:1.4
Rollbacks are also handled slightly differently on each platform. In OpenShift:

oc rollback <deploymentconfig-name> # Replace <deploymentconfig-name> with the name of your DeploymentConfig

And in Kubernetes:

kubectl rollout undo deployment/<deployment-name> --to-revision=<revision-number> # Replace <deployment-name> with the name of your Deployment and <revision-number> with the revision you want to roll back to
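Before rolling back, it’s often useful to inspect the revision history first:

oc rollout history dc/<deploymentconfig-name>          # List revisions of a DeploymentConfig
kubectl rollout history deployment/<deployment-name>   # List revisions of a Deployment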
In OpenShift, a PodDisruptionBudget (PDB) is a resource that helps ensure a certain number of pod instances are maintained, even during interruptions such as node maintenance or updates. It allows developers and administrators to specify the minimum number of available instances of a replicated application, improving its resiliency and availability during voluntary disruptions.
The key aspects of a PodDisruptionBudget are the minAvailable (or, alternatively, maxUnavailable) threshold, which defines how many pod instances must stay up (or may be down at once), and the selector, which determines which pods the budget applies to. Note that a PDB only protects against voluntary disruptions, such as node drains or rolling updates; it cannot prevent involuntary failures like node crashes.
PodDisruptionBudgets play a key role in managing pod availability and application resiliency, making them an essential feature for production-grade OpenShift deployments.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-pdb        # Name of the PodDisruptionBudget
spec:
  minAvailable: 2          # At least two instances must remain available
  selector:
    matchLabels:
      app: example-app     # Applies to pods with this label
When the required availability specified in the PodDisruptionBudget (PDB) cannot be satisfied, voluntary evictions are simply refused: commands such as oc adm drain wait and retry the eviction until enough replicas are available again, and the Eviction API returns an error instead of removing the pod.
These mechanisms ensure that PodDisruptionBudget is a mission-critical safeguard that helps maintain the desired level of application availability – even during disruptions – and provides clear feedback and logs.
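For example, when you drain a node for maintenance, evictions that would drop the application below the budget are refused and retried until replicas are available elsewhere (the node name below is just a placeholder):

oc adm drain node-1 --ignore-daemonsets   # Evictions blocked by a PDB are retried until they can proceed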
Here’s how we need to configure everything to run our example-app application on OpenShift. It will use a database password, have the appropriate log level and a sufficient number of replicas, and expose ports 80 and 8080.
Secret:
apiVersion: v1
kind: Secret
metadata:
  name: example-app-secret   # Name of the Secret
type: Opaque
data:
  db_password: c2VjcmV0      # Base64 encoded password ('secret')
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-app-config   # Name of the ConfigMap
data:
  log_level: INFO
DeploymentConfig:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-app                    # Name of the DeploymentConfig
spec:
  replicas: 2                          # Desired number of Pod replicas
  template:
    metadata:
      labels:
        app: example-app               # Label to identify the Pods
    spec:
      affinity:
        podAntiAffinity:               # Anti-affinity to distribute pods across different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - example-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: example-container
        image: example-image           # Image to be used
        ports:
        - containerPort: 80
        - containerPort: 8080
        env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: example-app-config
              key: log_level
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: example-app-secret
              key: db_password
Service:
apiVersion: v1
kind: Service
metadata:
  name: example-app-service   # Name of the Service
spec:
  selector:
    app: example-app          # Selector to identify the Pods
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: http-alt
    protocol: TCP
    port: 8080
    targetPort: 8080
Route:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-app-route       # Name of the Route
spec:
  to:
    kind: Service
    name: example-app-service   # Name of the Service to route to
  port:
    targetPort: http            # Port to be exposed externally
PodDisruptionBudget:
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-app-pdb       # Name of the PodDisruptionBudget
spec:
  minAvailable: 1             # At least one instance must remain available
  selector:
    matchLabels:
      app: example-app        # Applies to pods with this label
HorizontalPodAutoscaler:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa          # Name of the HorizontalPodAutoscaler
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: example-app            # Name of the DeploymentConfig to scale
  minReplicas: 1                 # Minimum number of replicas
  maxReplicas: 10                # Maximum number of replicas
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # Target average CPU utilization percentage
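Assuming each of the manifests above is saved to its own file (the file names below are just examples), the whole stack can be created or updated declaratively and then reviewed:

oc apply -f secret.yaml -f configmap.yaml -f deploymentconfig.yaml -f service.yaml -f route.yaml -f pdb.yaml -f hpa.yaml
oc get all   # Review the resources created in the current project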
Creating containers in OpenShift is easier thanks to several innovative tools and processes, among which the Source-to-Image (S2I) process stands out. S2I is a framework that transforms application source code into a container format, ready to be deployed to OpenShift or any other container platform. The appeal of S2I lies in its simplicity and speed – it allows developers to simply provide source code while S2I handles the rest of the compilation and deployment process.
The Source-to-Image process consists of three key elements: the application source code, a builder image that provides the build tools and runtime for a given language, and S2I scripts (such as assemble and run) that control how the code is built and started inside the resulting image.
The process works as follows: S2I takes the specified source code, uses the builder image to compile and build the code, and then packages the built code along with the runtime into a final, runnable container image.
Here is a simplified example showing how to use an S2I process to build and deploy a container from source code in OpenShift:
oc new-project example-project
oc new-app java:latest~https://github.com/spring-projects/spring-petclinic --name=example-app
In the above command, java:latest points to the Java builder image stream, the ~ separates the builder image from the Git repository containing the application source code, and --name=example-app sets the name used for the resulting application resources.
When you run the S2I command, it downloads the source code from the specified URL, compiles it using the Java builder image, and then creates a new image containing the built application. In the next step, OpenShift deploys this image as a new application within the sample project.
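While the build is running, you can follow its progress; bc is the shorthand for the BuildConfig that oc new-app created:

oc logs -f bc/example-app   # Follow the logs of the latest build
oc get builds               # List builds and their status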
After the compilation and deployment process, the application is available within OpenShift. We can create a Route that exposes the application to external traffic:
oc expose svc/example-app
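You can then check the hostname that OpenShift generated for the new Route:

oc get route example-app   # Shows the externally reachable host of the application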
Source-to-Image demonstrates the versatility and ease of deploying applications on OpenShift in various programming languages and frameworks. The streamlined process minimizes developer workload by making building and deploying applications in the OpenShift environment easier. Languages and frameworks commonly used with S2I include Java (for example Spring Boot, as in the example above), Node.js, Python, Ruby, PHP, Perl, and .NET Core.
Each of these languages and frameworks can be used with S2I builder images, which provide a ready-made environment for creating and assembling applications into container images. These images are adapted to each language and framework, taking into account the necessary dependencies and runtime environments, making the process of running applications on OpenShift seamless and accessible.
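To see which builder images are available in your cluster, you can list the image streams in the openshift namespace or search for a specific technology:

oc get imagestreams -n openshift   # List the builder images shipped with the cluster
oc new-app --search java           # Search images and templates matching 'java'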
As we finish our Introduction to OpenShift, it becomes clear that this powerful platform offers a comprehensive solution for managing containerized applications. It not only simplifies the complexity of container orchestration, but also enriches it with a robust feature set, making OpenShift an ideal choice for enterprises looking to adopt modern, cloud-native development practices.
Here are the most important conclusions from our journey through the basics of OpenShift: the platform builds on Kubernetes and extends it with developer- and enterprise-oriented capabilities; core resources such as Pods, Services, Routes, DeploymentConfigs, Secrets, and ConfigMaps are the building blocks of every application; scaling and resilience are handled declaratively with replicas, HorizontalPodAutoscalers, and PodDisruptionBudgets; and Source-to-Image lets you go from source code to a running, externally accessible application with just a few commands.
Pretius has a great deal of experience with designing complex architecture and infrastructure. If you need help with this, feel free to contact us at hello@pretius.com.
Starting your OpenShift journey can be both exciting and challenging, especially if you have never worked with cloud environments before. To help you on this journey, there are many resources available that can provide additional information, guidance, and support.
Whether you’re looking to deepen your technical knowledge, solve specific problems, or connect with the OpenShift community, these resources can be invaluable.
Finally, if you have any questions, you can always contact me at jwojewoda@pretius.com. Good luck with using the platform in your projects!