
Advanced Optimization Techniques for Red Hat OpenShift

Roman Yegorov

Solutions Engineer, Intel Granulate

Red Hat OpenShift brings together tested and trusted services to reduce the friction of developing, modernizing, deploying, running, and managing applications. Intel Tiber App-Level Optimization aligns with this mission by providing continuous, autonomous optimization that requires no R&D efforts or code changes.

OpenShift is a turnkey hybrid enterprise Kubernetes platform, which can mean unexpected costs if it is not optimized properly, particularly during traffic peaks or high-load events. The same features that make managed container services lower maintenance and simpler to use also leave fewer opportunities to tune resource utilization to each unique cluster, instance, or pod. While the benefits of a managed container platform are clear and worthwhile, these slight inefficiencies can lead to overutilization and waste, consuming budget that could otherwise go toward new products and innovation.


Intel Tiber App-Level Optimization’s continuous, autonomous runtime and capacity optimizations will ensure that you’re only paying for the usage that is actually occurring. By automatically optimizing your workloads to only utilize the amount of resources that are needed to operate your applications and dynamically updating those optimizations to compensate for usage in real-time, Intel Tiber App-Level Optimization allows DevOps teams to apply their time and budgets to improving their product, rather than manual configurations and tuning.

Use the following Red Hat OpenShift techniques to make sure your clusters are benefiting from efficient management.

NodeSelector

NodeSelector is a feature in Kubernetes that allows a pod to be scheduled on specific nodes based on labels assigned to the nodes. It is used to control the placement of a pod on a particular node in a cluster. A node is a worker machine in Kubernetes that runs the pods.

For example, if you have a cluster of nodes with varying hardware configurations, you may want to schedule a pod on a node with specific hardware characteristics such as CPU, memory, or GPU.

To achieve this, you can use a NodeSelector to specify the label(s) that correspond to the desired hardware characteristics, and Kubernetes will schedule the pod only on the nodes that match those labels. For example, a node with GPU hardware might carry the label hardware-type: gpu, and a pod that requires a GPU would select for that label.
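Here's a minimal sketch of that pattern. The node name, label, and image are illustrative placeholders: first label the node, then reference the label from the pod's nodeSelector field:

oc label node node-name hardware-type=gpu

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
  nodeSelector:
    hardware-type: gpu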

Affinity

Affinity is a feature in Kubernetes that allows you to specify rules for pod scheduling based on the characteristics of the nodes in the cluster. Affinity can be used to ensure that related pods are co-located on the same node or spread across different nodes.

There are two types of affinity rules: nodeAffinity and podAffinity. nodeAffinity specifies rules for scheduling pods on specific nodes based on node labels, similar to NodeSelector. podAffinity specifies rules for scheduling pods on nodes that already have pods with certain labels.
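As a sketch, a nodeAffinity rule can express the same GPU requirement as the NodeSelector example above, but with more expressive operators (the hardware-type label and my-image are assumed placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: hardware-type
                operator: In
                values:
                  - gpu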

Here’s an example podAffinity configuration in a pod specification file:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app
          topologyKey: "kubernetes.io/hostname"

In this example, the pod is scheduled on a node that already has a pod with the label app: my-app, and the affinity is based on the hostname of the node.

AntiAffinity

AntiAffinity is a feature in Kubernetes that allows you to specify rules for pod scheduling with the opposite intent of Affinity: instead of co-locating related pods, it keeps them apart. podAntiAffinity prevents a pod from being scheduled on a node that already runs pods with certain labels, which is useful for spreading replicas across different nodes for resilience.

Here’s an example AntiAffinity configuration in a pod specification file:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app
          topologyKey: "kubernetes.io/hostname"

In this example, the pod is scheduled on a node that does not have a pod with the label app: my-app, and the AntiAffinity is based on the hostname of the node.

Taints and Tolerations

A taint is a key-value pair, combined with an effect, that is applied to a node to indicate that the node should not accept pods unless they have a matching toleration. This is a way to ensure that certain nodes are reserved for specific workloads or to limit resource usage on certain nodes.

For example, let’s say you have a node in your cluster that has a limited amount of CPU resources, and you want to ensure that only certain pods are scheduled on that node to avoid overloading it. You can apply a taint to the node that indicates that it should only accept pods with a matching toleration.

Here’s an example of how you can apply a taint to a node:

oc adm taint nodes node-name app=my-app:NoSchedule

This command applies a taint to the node with the name node-name that has the key-value pair app=my-app and the effect NoSchedule, which means that pods without a matching toleration cannot be scheduled on that node.
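If you want to confirm that the taint was applied, or later remove it, the following commands (using the same placeholder node name and key) may be helpful; the trailing hyphen removes the taint:

oc describe node node-name | grep Taints
oc adm taint nodes node-name app=my-app:NoSchedule-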

Now, let’s say you have a pod that requires a specific node to run on, but that node has a taint applied. You can add a toleration to the pod’s specification that matches the taint on the node, so that the pod can be scheduled on that node despite the taint.

Here’s an example of how you can add a toleration to a pod’s specification:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "my-app"
      effect: "NoSchedule"

In this example, the pod’s specification includes a toleration that matches the taint on the node with the key-value pair app=my-app and the effect NoSchedule. This means that the pod can be scheduled on that node despite the taint, as long as no other factors prevent it from being scheduled.


Next Steps

Now that you’ve applied these advanced Red Hat OpenShift techniques, your Kubernetes clusters should be running more efficiently and wasting fewer resources. However, there is still an opportunity to take your optimization to the next level. Red Hat OpenShift customers can optimize their workloads and save up to 45% on compute costs with Intel Tiber App-Level Optimization’s autonomous, continuous solutions.

Intel Tiber App-Level Optimization is listed on the Red Hat Ecosystem Catalog and Red Hat Marketplace as a certified independent software vendor (ISV), running its optimization solutions on Red Hat OpenShift. Its container certification allows customers to more confidently adopt Intel Tiber App-Level Optimization as a fully containerized software — portable, security enhanced, and easier to deploy in any compute environment.

Learn more about Intel Tiber App-Level Optimization for Red Hat OpenShift here, check out the container image on the Container Catalog, or reach out to an Intel Tiber App-Level Optimization expert here.
