OpenShift is a great way to extend Kubernetes with plenty of additional development and operational capabilities that will be useful – especially in enterprise projects. This OpenShift tutorial will show you how to make the most of this technology.

At a time when flexibility and reliability of software implementation dominate the industry, OpenShift appears as the quintessential platform that combines these advantages. It’s an open-source container platform powered by Kubernetes that takes developers into a streamlined realm of application development, deployment, and management.

In this article, I will describe the basics of OpenShift and its core components. Whether you’re a developer looking to streamline your application deployment process, an IT professional looking to understand container orchestration, or simply someone interested in cloud-native technologies, this article will give you an overview of OpenShift’s capabilities.

Kubernetes and OpenShift – introduction

Before we dive into OpenShift, it’s essential to have a basic grasp of Kubernetes (in short: K8s), the platform on which OpenShift is built. If you feel that you know K8s like the back of your hand, you can of course skip this part.

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of container applications. It groups containers into logical units, making container management across infrastructure environments seamless. With Kubernetes, you can automate the distribution and scheduling of applications in the cluster, manage workloads, and ensure high availability and failure mitigation. It provides a solid foundation for cloud applications, ensuring they run efficiently and reliably in various environments.

This brief introduction to Kubernetes sets the stage for understanding the improvements that OpenShift brings, providing a more developer-centric and enterprise-ready solution by building on and extending the orchestration capabilities of Kubernetes.

OpenShift – main advantages

OpenShift, Red Hat’s enterprise Kubernetes platform, provides an attractive, enterprise-ready container application management solution by extending Kubernetes with additional development and operational capabilities. Through the lens of OpenShift, developers and administrators can leverage the orchestration capabilities of Kubernetes while taking advantage of the additional features, streamlined workflows, and robust support infrastructure provided by OpenShift. 

Kubernetes vs OpenShift

Below is an overview of the most significant differences between the two solutions. They highlight the benefits that OpenShift brings to meet the needs of both developers and enterprises.

  1. Developer-friendly environment – OpenShift provides a developer-centric environment with a web console and CLI (Command Line Interface), easing application deployment. Kubernetes, on the other hand, is primarily operations-centric and requires additional configuration or tooling to become developer-friendly.
  2. Built-in CI/CD pipelines – OpenShift has integrated CI/CD pipelines, while Kubernetes requires configuration and integration of external CI/CD tools.
  3. Security and compliance – OpenShift has strict default security policies, ensuring a secure container environment. It also includes features such as Security Context Constraints (SCC) and automatic updates. While Kubernetes has solid security features, it may require additional configuration to meet your company’s security standards.
  4. Integrated development tools – OpenShift comes bundled with numerous development tools, including a source-to-image (S2I) feature that simplifies creating and deploying container images. Kubernetes requires manual configuration or third-party tools to achieve similar functionality.
  5. Routing and networks – OpenShift has a built-in HAProxy-based router to handle network routes, thus simplifying the management of network resources. Kubernetes requires manual configuration or additional ingress controllers for routing.
  6. Support and documentation – OpenShift, a Red Hat product, provides professional support and extensive documentation, which can be crucial for enterprise implementation. Although it has an active community, Kubernetes may lack the professional support required for critical enterprise operations.
  7. Pricing – Kubernetes is free and open source, but managing a Kubernetes cluster may involve operational costs. OpenShift, on the other hand, offers a subscription-based pricing model that includes support and other features. The tool’s website mentions that “reserved instances of Red Hat OpenShift are available for as little as $0.076/hour”, but you’ll need to contact the company directly to get information on the exact pricing in your particular case.

Basic components and architecture

In the following paragraphs, I will cover the core components and architecture of OpenShift, setting the stage for a deeper understanding of deploying, managing, and scaling applications in an OpenShift environment.

OpenShift’s architecture is a testament to its robustness and flexibility. At its core are several components that are key to deploying and managing applications.

Pods

Pods are the smallest units you can deploy in OpenShift. They contain one or more containers that operate as a single unit.

Pod example:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: hello-openshift
    image: openshift/hello-openshift

Services and Routes

Services in OpenShift allow you to connect pods to a network, ensuring smooth communication between various application components. Routes, on the other hand, expose these services to external traffic.

Service example:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Route example:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  to:
    kind: Service
    name: example-service

To clarify, in this example, the Service uses a selector to find all pods labeled `app: example-app` and maps its own port 80 to the pods’ internal port 8080. The Route then “connects” to the Service called `example-service`.

Routes vs Ingress

Managing external access to services is a key concern in both OpenShift and Kubernetes. It’s handled mainly by Routes in OpenShift and Ingress in Kubernetes. So how do they differ?

Simplicity and ease of use

  • Routes in OpenShift are designed to be simple and direct, providing a quick way to expose services outside the cluster.
  • Conversely, Ingress provides a more complex yet flexible routing system with URL path-based routing and more.
# OpenShift Route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  to:
    kind: Service
    name: example-service

---

# Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

Path-based routing

  • Ingress stands out with its ability to support URL-based routing, allowing you to route HTTP and HTTPS traffic to different services based on URL paths.
  • Routes are simpler but may require additional configuration for path-based routing.
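That said, Routes do offer a simple `path` field for basic path matching. A minimal sketch (reusing the hypothetical `example-service` from the earlier examples) – only requests whose URL path starts with the given prefix are routed here:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-api-route
spec:
  host: www.example.com
  path: /api  # only requests whose path starts with /api are matched
  to:
    kind: Service
    name: example-service
```

Unlike Ingress rules, a single Route matches one path prefix; covering several paths means creating several Routes.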

TLS

  • Both Routes and Ingress support TLS termination, enabling secure communication with backend services.
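For illustration, here is a minimal sketch of edge TLS termination on a Route (names reused from the earlier hypothetical examples). The router terminates TLS and can redirect plain HTTP to HTTPS:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-tls-route
spec:
  host: www.example.com
  tls:
    termination: edge  # TLS is terminated at the router
    insecureEdgeTerminationPolicy: Redirect  # plain HTTP is redirected to HTTPS
  to:
    kind: Service
    name: example-service
```

Routes also support `passthrough` and `reencrypt` termination when the backend itself should handle TLS.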

Support for subdomains with wildcards

  • Routes natively support subdomains with wildcards, making it easier to manage routing for dynamic subdomains.
  • Ingress may require additional configurations or Ingress controllers to support wildcard subdomains.
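A minimal sketch of a wildcard Route (hypothetical names; note that the OpenShift router must be configured to admit wildcard routes):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-wildcard-route
spec:
  host: app.example.com
  wildcardPolicy: Subdomain  # matches *.example.com rather than just app.example.com
  to:
    kind: Service
    name: example-service
```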

Routes and Ingress – use scenarios

The choice between Routes and Ingress can significantly impact how you manage external access to your application. While Routes provide a simplified approach that is particularly beneficial in less complex scenarios, Ingress offers a flexible path-based routing mechanism that can be beneficial in more complex ones.

The difference between Routes and Ingress highlights the flexibility and options available to developers and administrators in the OpenShift and Kubernetes ecosystems. Understanding routing mechanisms and their capabilities is fundamental to designing and implementing externally accessible applications. This knowledge enriches the understanding of how OpenShift and Kubernetes manage network traffic, further preparing developers and administrators for networking challenges in container orchestration environments.

DeploymentConfigs

The DeploymentConfig is an essential OpenShift resource designed to manage application deployment and updates. It encapsulates the desired application state, including the container image, replicas, and other key configurations, and manages the application lifecycle to ensure this desired state is maintained.

DeploymentConfigs provides a declarative way to manage deployments, including support for rolling updates, rollbacks, and triggers to automate deployments in response to events. With DeploymentConfigs, developers and administrators can easily deploy, update, and manage applications in the OpenShift environment.

A sample DeploymentConfig with comments explaining each section:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-deploymentconfig  # Name of the DeploymentConfig
spec:
  strategy:
    type: Rolling  # Deployment strategy type
  triggers:
    - type: ConfigChange  # Trigger a new deployment when config changes
  replicas: 3  # Number of pod replicas
  selector:
    app: example  # Selector to match pods
  template:
    metadata:
      labels:
        app: example  # Labels to apply to pods
    spec:
      containers:
      - name: hello-openshift  # Container name
        image: openshift/hello-openshift  # Container image

Secrets and ConfigMaps

Secrets and ConfigMaps are critical resources for managing configuration data and sensitive information in OpenShift and Kubernetes. Using them, developers can create easier-to-maintain and more secure applications by ensuring a clean separation of configuration data from application code and images. 

Secrets

Secrets are used to store sensitive information such as passwords, SSH keys, and tokens. They ensure that sensitive data is stored securely and only accessible to authorized pods.

Sample configuration:

apiVersion: v1
kind: Secret
metadata:
  name: example-secret
type: Opaque
data:
  password: cGFzc3dvcmQ=  # Base64 encoded password ('password')
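The `password: cGFzc3dvcmQ=` value above is simply the plain-text password base64-encoded. A quick way to produce (and verify) such values from a shell:

```shell
# Secret values must be base64-encoded; -n prevents encoding a trailing newline
echo -n 'password' | base64
# cGFzc3dvcmQ=

# Decode it back to verify:
echo 'cGFzc3dvcmQ=' | base64 --decode
# password
```

Keep in mind that base64 is an encoding, not encryption – anyone who can read the Secret can decode it.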

ConfigMaps

ConfigMaps store non-sensitive configuration data in key-value pairs. They are ideal for storing configuration settings and other non-sensitive data.

Sample configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  log_level: INFO

Using Secrets and ConfigMaps in DeploymentConfig

In DeploymentConfig, you can reference Secrets and ConfigMaps to inject configuration data into pods.

Sample configuration:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-deploymentconfig
spec:
  template:
    spec:
      containers:
      - name: example-container
        image: example-image
        env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: example-configmap
              key: log_level
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: example-secret
              key: password

In this DeploymentConfig example:

  • The LOG_LEVEL environment variable is set based on the key in ConfigMap.
  • The PASSWORD environment variable is set based on the key in Secrets.
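Referencing each key individually can get verbose. As a sketch (reusing the same hypothetical `example-configmap` and `example-secret` names), the `envFrom` field injects every key of a ConfigMap or Secret as an environment variable in one step:

```yaml
spec:
  template:
    spec:
      containers:
      - name: example-container
        image: example-image
        envFrom:
        - configMapRef:
            name: example-configmap  # each key becomes an environment variable
        - secretRef:
            name: example-secret     # e.g. the 'password' key becomes $password
```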

Application scaling

When it comes to application scaling, there are two main options: manual and automatic. Let’s take a look at both.

Manual scaling

oc scale dc/example-deploymentconfig --replicas=5  # Set 5 replicas

Automatic scaling

OpenShift’s HorizontalPodAutoscaler (HPA) feature automatically adjusts the number of pod replicas in a deployment based on observed metrics, such as CPU utilization. By dynamically scaling the number of replicas, HPA helps applications efficiently handle traffic growth, ensuring efficient resource use while maintaining performance and availability.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa  # Name of the Horizontal Pod Autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: example-deploymentconfig  # Target DeploymentConfig
  minReplicas: 3  # Minimum number of replicas
  maxReplicas: 10  # Maximum number of replicas
  targetCPUUtilizationPercentage: 80  # Target CPU utilization percentage

Replication

ReplicationControllers are a key Kubernetes functionality designed to ensure that the desired number of pod replicas are running in a cluster at any given time. They monitor the number of running pod instances and create/remove them to match the requested number. ReplicationControllers work to maintain the desired state by replacing failed pods, thus ensuring high availability and fault tolerance. Additionally, they support rolling configuration updates, enabling smoother transitions with minimal disruption. 

Despite their usefulness, ReplicationControllers have been largely replaced by Deployments, which offer more fine-grained, declarative control over the desired state of an application, including the ability to easily roll back to previous versions and more complex update strategies. Nevertheless, understanding ReplicationControllers provides a basic grasp of the evolution of workload management in Kubernetes and OpenShift environments.

ReplicationControllers can be configured via the “replicas” field in DeploymentConfig or Deployment.

spec:
  replicas: 3  # Ensure three replicas of the pod are always running

These configurations illustrate the declarative nature of OpenShift, where the desired state is described, and OpenShift ensures that the actual state of the system corresponds to the desired state. The interplay of these core components and configurations forms the basis of the OpenShift operating model. It provides ease of deployment, scaling, and application management, making OpenShift a powerful platform for developers and administrators.

How are DeploymentConfig and ReplicationController associated?

The replicas in DeploymentConfig and ReplicationController are associated via a mechanism that ensures a specific number of pod replicas within an OpenShift or Kubernetes cluster. This relationship illustrates how OpenShift and Kubernetes work together to provide self-healing and declarative deployment capabilities, ensuring that applications remain resilient and maintain the desired state, even in the face of pod or node failures.

Replicas in DeploymentConfig

  • In OpenShift, DeploymentConfig is a higher-level abstraction that manages application deployment. The replicas field in DeploymentConfig specifies the desired number of pod replicas.
  • When DeploymentConfig is created or updated, it triggers a new deployment, creating a new ReplicationController to manage the pod lifecycle for that particular deployment.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-deploymentconfig
spec:
  replicas: 3  # Desired number of pod replicas
  ...

Replicas in ReplicationController

  • The ReplicationController in Kubernetes (and, by extension, OpenShift) ensures that a certain number of pod replicas are running at any given time. It monitors the current number of pod replicas and adjusts as necessary to bring them into the desired state.
  • In the context of OpenShift, each DeploymentConfig creates a new ReplicationController for each deployment, which then manages a specified number of pod replicas based on the replicas field in DeploymentConfig.
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-deploymentconfig-1  # Generated automatically for the first deployment
spec:
  replicas: 3  # Inherited from the DeploymentConfig's replicas field
  ...

Deployments vs DeploymentConfigs

Understanding the difference between Deployments (Kubernetes) and DeploymentConfigs (OpenShift) is critical to successfully managing applications. Both are used to deploy applications and maintain a certain number of pod replicas, but they differ in functionality and flexibility.

The choice between them often comes down to the project’s requirements and environment. DeploymentConfigs offer more granular control and additional functionality that can benefit complex OpenShift deployment scenarios, while Kubernetes Deployments offer a simplified, standard approach that may be preferred in Kubernetes-centric environments.

You’ll find an outline of the main differences below.

Triggers

  • DeploymentConfigs offer a wider range of triggers for automatic deployments, such as ConfigChange and ImageChange triggers.
  • Deployments in Kubernetes are typically initiated by changes to the Pod template or manual updates.
# OpenShift DeploymentConfig with triggers
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-dc
spec:
  triggers:
  - type: ConfigChange  # Automatic deployment on config change
  - type: ImageChange   # Automatic deployment on image change
  replicas: 2
  selector:
    app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift
 
---

# Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: hello-kubernetes
        image: k8s.gcr.io/echoserver:1.4

Rollback

  • DeploymentConfigs provide an easier rollback mechanism for returning to previous versions of your application.
oc rollback <deploymentconfig-name>  # Replace <deploymentconfig-name> with the name of your DeploymentConfig
  • Deployments also support rollbacks, although this may require additional kubectl (Kubernetes command-line tool) commands:
kubectl rollout undo deployment/<deployment-name> --to-revision=<revision-number>  # Replace <deployment-name> with the name of your Deployment and <revision-number> with the revision number you want to rollback to

Lifecycle Hooks

  • DeploymentConfigs provide lifecycle elements that enable code execution at various points in the deployment process.
  • Deployments do not have built-in Lifecycle Hooks, requiring additional configurations or tools.
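As a sketch of what such a hook looks like in a DeploymentConfig (the names and command are hypothetical), a `pre` hook runs in a new pod before the rollout starts and can abort it on failure:

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-dc
spec:
  strategy:
    type: Rolling
    rollingParams:
      pre:  # executed before the new deployment is rolled out
        failurePolicy: Abort  # abort the rollout if the hook fails
        execNewPod:
          containerName: hello-openshift  # run the hook using this container's image
          command: ["/bin/sh", "-c", "echo running pre-deployment checks"]
```

A `post` hook works analogously after the rollout; the Recreate strategy additionally supports a `mid` hook.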

PodDisruptionBudgets

In OpenShift, a PodDisruptionBudget (PDB) is a resource that helps ensure that a certain number of pod instances are maintained, even during interruptions such as node maintenance or updates. It allows developers and administrators to specify the minimum number of available instances of a replicated application, improving its resiliency and availability during voluntary disruptions.

Here are the key aspects of PodDisruptionBudgets in OpenShift:

  • Definition – PodDisruptionBudget is defined in OpenShift to represent the number of disruptions a class of pods can tolerate at a given time. If a disruption causes the number of pod instances to drop below the specified budget, the operation will be suspended until the budget is maintained again.
  • Status representation – PodDisruptionBudgetStatus represents the status of the PodDisruptionBudget. It includes fields such as disruptionsAllowed, currentHealthy, desiredHealthy, and expectedPods. This status can track the system’s state, providing a snapshot of the current state of the PDB.
  • Communicating operational requirements – PodDisruptionBudgets enable application teams to communicate enforceable operational requirements to clusters. This ensures that even during maintenance or upgrades, a certain number of pod replicas will remain available, thus maintaining application availability and reliability.
  • Version support – PodDisruptionBudgets were introduced as a technical preview in OpenShift 3.4 and are fully supported in OpenShift 3.6, providing a robust way to manage pod availability during disruptions.
  • Configuration – PodDisruptionBudgets are configured via a YAML file. It specifies a selector identifying pods and the minimum available or maximum unavailable number of pod instances.
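To illustrate the status fields listed above, a PDB retrieved from a cluster might report something like this (the numbers are purely illustrative):

```yaml
status:
  disruptionsAllowed: 1  # evictions currently permitted without violating the budget
  currentHealthy: 3      # pods that are healthy right now
  desiredHealthy: 2      # minimum number that must stay healthy
  expectedPods: 3        # total pods counted by the budget
```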

The role of PodDisruptionBudgets

PodDisruptionBudgets play a key role in managing pod availability and application resiliency, making them an essential feature for production-grade OpenShift deployments.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb  # Name of the PodDisruptionBudget
spec:
  minAvailable: 2  # At least two instances must remain available
  selector:
    matchLabels:
      app: example-app  # Applies to pods with this label

When the required availability specified in the PodDisruptionBudget (PDB) is not met, several things happen:

  1. Preventing interference – Any further attempts to disrupt the pods, for example, via a delete request or drain node operation, will be blocked. The system prevents operations that would violate the PDB and cause the number of available pods to drop below a specified minimum.
  2. Pausing operations – If an operation already in progress would result in a PDB violation, it is paused until minimum availability is restored.
  3. Notification to Administrators – Operators receive feedback indicating that the disruption is not allowed because it violates the PDB. This feedback is crucial because it informs operators of the potential risks associated with the operation and the need to ensure a sufficient supply of pods.
  4. Manual intervention – Manual intervention may be required to adjust the PDB or resolve issues causing reduced availability. Administrators should investigate the cause, whether it is pod failures, node issues, or something else, and take corrective actions to restore the desired level of availability.
  5. Monitoring and alerting – In the event of a PDB violation, monitoring systems and warning mechanisms can be triggered. This provides operators with another layer of information about the state of the system and the need for possible intervention.
  6. Logging – PDB breach events are logged in the system, providing a historical record of when a PDB breach occurred and what actions were taken.

These mechanisms ensure that PodDisruptionBudget is a mission-critical safeguard that helps maintain the desired level of application availability – even during disruptions – and provides clear feedback and logs.

OpenShift – an example of a complete configuration

Here’s how we need to configure everything to run our example-app application on OpenShift. It will use a database password, have the appropriate logging level and a sufficient number of replicas, and expose ports 80 and 8080.

Secret:

apiVersion: v1
kind: Secret
metadata:
  name: example-app-secret  # Name of the Secret
type: Opaque
data:
  db_password: c2VjcmV0  # Base64 encoded password ('secret')

ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-app-config  # Name of the ConfigMap
data:
  log_level: INFO

DeploymentConfig:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-app  # Name of the DeploymentConfig
spec:
  replicas: 2  # Desired number of Pod replicas
  template:
    metadata:
      labels:
        app: example-app  # Label to identify the Pods
    spec:
      affinity:
        podAntiAffinity:  # Anti-affinity to distribute pods across different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - example-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: example-container
        image: example-image  # Image to be used
        ports:
        - containerPort: 80
        - containerPort: 8080
        env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: example-app-config
              key: log_level
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: example-app-secret
              key: db_password

Service:

apiVersion: v1
kind: Service
metadata:
  name: example-app-service  # Name of the Service
spec:
  selector:
    app: example-app  # Selector to identify the Pods
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: http-alt
    protocol: TCP
    port: 8080
    targetPort: 8080

Route:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-app-route  # Name of the Route
spec:
  to:
    kind: Service
    name: example-app-service  # Name of the Service to route to
  port:
    targetPort: http  # Port to be exposed externally

PodDisruptionBudget:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-app-pdb  # Name of the PodDisruptionBudget
spec:
  minAvailable: 1  # At least one instance must remain available
  selector:
    matchLabels:
      app: example-app  # Applies to pods with this label

HorizontalPodAutoscaler:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa  # Name of the HorizontalPodAutoscaler
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: example-app  # Name of the DeploymentConfig to scale
  minReplicas: 1  # Minimum number of replicas
  maxReplicas: 10  # Maximum number of replicas
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50  # Target average CPU utilization percentage

Building containers in OpenShift using S2I

Creating containers in OpenShift is easier thanks to several innovative tools and processes, among which the Source-to-Image (S2I) process stands out. S2I is a framework that transforms application source code into a container format, ready to be deployed to OpenShift or any other container platform. The appeal of S2I lies in its simplicity and speed – it allows developers to simply provide source code while S2I handles the rest of the compilation and deployment process.

The Source-to-Image process consists of three key elements:

  1. Builder image – It’s a special container image type with all the tools and dependencies necessary to compile and build your application code.
  2. Source code – The application source code that needs to be containerized.
  3. Output image – The final container image created in the S2I process that is ready for deployment.

The process works as follows: S2I takes the specified source code, uses builder-image to compile and build the code, and then packages the built code along with the runtime into a final, runnable container image.

Using S2I – example

Here is a simplified example showing how to use an S2I process to build and deploy a container from source code in OpenShift:

  • Creating a new OpenShift project
oc new-project example-project
  • Application implementation using S2I:
oc new-app java:latest~https://github.com/spring-projects/spring-petclinic --name=example-app

In the above command:

  • java:latest – points to the image to be used to build the application
  • https://github.com/spring-projects/spring-petclinic – indicates the source code repository to build
  • --name=example-app – gives a name to the application

When you run the S2I command, it downloads the source code from the specified URL, compiles it using the Java builder image, and then creates a new image containing the built application. In the next step, OpenShift deploys this image as a new application within the sample project.

After the compilation and deployment process, the application is available within OpenShift. We can create a Route that exposes the application to external traffic:

oc expose svc/example-app

Source-to-Image demonstrates the versatility and ease of deploying applications on OpenShift in various programming languages and frameworks. The streamlined process minimizes developer workload by making building and deploying applications in the OpenShift environment easier. Here are some examples of languages and frameworks commonly used in S2I:

  • Java: Spring Boot, WildFly/Swarm
  • Python: Django, Flask
  • JavaScript: Node.js, Angular, Express.js, React
  • Ruby: Rails, Sinatra
  • PHP: Laravel, Symfony

Each of these languages and frameworks can be used with S2I builder images, which provide a ready-made environment for creating and assembling applications into container images. These images are adapted to each language and framework, taking into account the necessary dependencies and runtime environments, making the process of running applications on OpenShift seamless and accessible.

Conclusion

As we finish our Introduction to OpenShift, it becomes clear that this powerful platform offers a comprehensive solution for managing containerized applications. It not only simplifies the complexity of container orchestration, but also enriches it with a robust feature set, making OpenShift an ideal choice for enterprises looking to adopt modern, cloud-native development practices.

Here are the most important conclusions from our journey through the basics of OpenShift:

  • Improved container management – OpenShift extends Kubernetes by providing a more user-friendly interface and additional features that make it easier to deploy, manage, and scale containerized applications.
  • Greater security – Built-in security features such as security context restrictions (SCC) and role-based access control (RBAC) make OpenShift a secure environment for deploying applications.
  • Integrated tools and ecosystem – Platform integration with a wide range of tools and services, from CI/CD pipelines to monitoring and logging solutions, provides a complete environment for the entire application lifecycle.
  • Friendly to developers and administrators – OpenShift is friendly to both developers and administrators, integrating both environments compatible with DevOps methodologies.
  • Support for a variety of workloads – The platform’s compatibility with various programming languages, frameworks and external integrations provides flexibility to support different types of applications and workloads.
  • Strong community support and enterprise support – OpenShift, powered by Red Hat, benefits from strong community engagement and enterprise support, ensuring reliability and continuous innovation.

Pretius has a great deal of experience with designing complex architecture and infrastructure. If you need help with this, feel free to contact us at hello@pretius.com

Sources and additional information

Starting your OpenShift journey can be both exciting and challenging, especially if you’ve never worked with cloud environments before. Fortunately, there are many resources available that can provide additional information, guidance, and support.

Whether you’re looking to deepen your technical knowledge, solve specific problems, or connect with the OpenShift community, these resources can be invaluable.

  1. Official OpenShift documentation – A comprehensive resource covering all aspects of the platform, with detailed guides, tutorials and reference materials for both beginners and advanced users.
  2. OpenShift Interactive Learning Portal – Offers interactive browser-based tutorials that allow you to experiment with OpenShift without requiring any configuration.
  3. OpenShift Blog – Contains articles, tips and guides from OpenShift experts and the community. This is a good resource to keep you up to date with the latest trends and best practices.
  4. OpenShift Commons – Brings together users from around the world to collaborate and share knowledge.
  5. OpenShift on GitHub – provides a deeper understanding of its components and how they work.
  6. Stack Overflow –  contains a wealth of information and discussions related to specific issues and questions about OpenShift.

Finally, if you have any questions, you can always contact me at jwojewoda@pretius.com. Good luck with using the platform in your projects!
