What's new in OpenShift Container Platform 4.1
When Google open sourced Kubernetes in 2014, a container orchestrator inspired by its internal cluster manager, Borg, the world that developers and administrators lived in was turned on its head. However, challenges soon presented themselves: getting a container orchestration tool like Kubernetes to play well with containerization tools like Docker could be tricky.
A solution was needed that would let complex open source projects designed for on-premises or in-cloud use integrate with familiar, easy-to-use platforms designed for streamlined application packaging and delivery.
When Red Hat OpenShift Enterprise 3 (later renamed OpenShift Container Platform, or OCP) was released in 2015, built on Docker and Kubernetes, it revolutionized Platform as a Service (PaaS) management. OCP integrated the features of the Docker platform with the management of Kubernetes, and then went further by integrating the networking and security layers that make pure Kubernetes clusters so complex to manage and secure.
OCP 3 gave developers a platform that let them focus on code and gave administrators a platform that didn't keep them up at night worrying about resiliency and security. Through its releases up to 3.11, OCP 3 continued to take the innovations of the upstream Kubernetes ecosystem and make them ready for the enterprise.
Now it is 2019 and OpenShift Container Platform 4.1 is here. What has Red Hat brought to the table in its new flagship PaaS offering?
OCP 4.1 Platform Features
OCP 4.1 represents a revolutionary leap forward that builds on new developments in the upstream projects of the Kubernetes ecosystem and on Red Hat's acquisition of CoreOS.
Cluster Installation
OCP 4.1 offers installer-provisioned infrastructure, in which the installer controls every stage of the installation process on AWS. For administrators installing on AWS, this is an excellent feature that lets you provision an OCP 4.1 cluster from scratch in minutes. Expect other cloud providers to become available in this space soon.
For user-provisioned infrastructure, administrators can deploy on platforms the installer does not provision directly: the installer generates the cluster configuration and Ignition files, and administrators supply and connect their own machines and supporting infrastructure.
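To make the installer-provisioned flow concrete, the installer consumes a small install-config.yaml file and drives everything else from it. The sketch below is a minimal, illustrative example for AWS; the domain, cluster name, region, and replica counts are placeholders and your values will differ.

```yaml
# Minimal illustrative install-config.yaml for an installer-provisioned
# AWS deployment. All names and values below are placeholders.
apiVersion: v1
baseDomain: example.com            # hypothetical base DNS domain
metadata:
  name: demo-cluster               # hypothetical cluster name
platform:
  aws:
    region: us-east-1              # target AWS region
controlPlane:
  name: master
  replicas: 3                      # control plane nodes
compute:
- name: worker
  replicas: 3                      # worker nodes
pullSecret: '<your Red Hat pull secret>'
sshKey: '<your public SSH key>'
```

With a file like this in place, running the installer (for example, openshift-install create cluster) provisions the AWS infrastructure and stands up the cluster end to end.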
Cluster Upgrades
Upgrading in place from OCP 3.x to 4.x is not currently available; a 4.x cluster must be installed into a fresh environment.
The OCP update service offers a simple interface that shows administrators the available updates, analyzes whether an update is safe for the cluster, and verifies each downloaded update before it is applied.
Administrators can now perform upgrades from the web console. After logging in, they can see the available updates and, with a few clicks, begin upgrading the cluster. This turns a cluster upgrade from a major outage event into a simple task that can be done whenever it fits into your schedule.
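Under the hood, the cluster's update channel and desired version are recorded in the cluster-scoped ClusterVersion resource, which the update service and the web console act on. A hedged sketch follows; the channel and version shown are illustrative only.

```yaml
# Sketch of the ClusterVersion resource the update service works against.
# Channel and version values are illustrative.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version                  # the cluster-wide singleton object
spec:
  channel: stable-4.1            # update channel the cluster follows
  # Selecting an update in the web console records the requested release,
  # for example:
  # desiredUpdate:
  #   version: 4.1.2
```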
Trusted enterprise Kubernetes
At the core of OCP is certified, conformant Kubernetes as validated by the Cloud Native Computing Foundation. This means migration to OCP from another Kubernetes platform, such as Google's GKE, is a fairly straightforward operation. The difference with OCP is that Red Hat has hardened the codebase for security and stability.
Red Hat Enterprise Linux CoreOS
The changes start at the operating system level with Red Hat Enterprise Linux CoreOS (RHCOS). Red Hat has leveraged its acquisition of CoreOS for this change, and RHCOS replaces Red Hat Enterprise Linux Atomic Host as the underlying thin host operating system of choice. What does that mean for administrators? It means hosts in a cluster can finally be treated as livestock instead of pets.
In previous platform models, individual hosts needed regular care. Hosts required patching and other routine maintenance, and when a host misbehaved, administrators put in the effort to repair it, much as one would a pet. With RHEL CoreOS, this mentality shifts toward treating hosts like livestock: when one is failing, you simply replace it with another. Because the operating system image is immutable, every host is guaranteed to run an identical operating system.
Cluster configuration is stored in a central, distributed service, so new hosts start working the moment they join the cluster. Instead of being patched, hosts receive a new image. During the upgrade process, each host's container workload is migrated to another host, the host receives the new image, and it rejoins the cluster in a rolling update that requires no downtime.
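In OCP 4, host configuration is delivered by the Machine Config Operator, which rolls MachineConfig resources out to pools of hosts. The sketch below is a minimal, illustrative example that writes one file to every worker host; the resource name, file path, and contents are hypothetical.

```yaml
# Illustrative MachineConfig: the Machine Config Operator applies this to
# every host in the worker pool. Name, path, and contents are hypothetical.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file
  labels:
    machineconfiguration.openshift.io/role: worker   # target the worker pool
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/example.conf
        filesystem: root
        mode: 420                                    # decimal 420 = octal 0644
        contents:
          source: data:,example%20setting%3Dtrue     # URL-encoded file body
```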
SELinux
At the host level, OCP takes advantage of the added security benefits of SELinux categories to secure containers. SELinux labels each container with a unique kernel-level context on the host. These contexts isolate containers, preventing direct container-to-container access. Even if a container is compromised and root-level rights are obtained, the attack is walled off to that container alone. Put differently, with SELinux categories, even someone with root credentials cannot break out of a container, because they lack the correct allowed context.
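OpenShift assigns these category pairs automatically, but the same mechanism is visible in (and can be pinned through) a pod's security context. The snippet below is purely illustrative; the pod name, image, and category pair are placeholders.

```yaml
# Illustrative pod snippet: OpenShift normally assigns the SELinux (MCS)
# categories automatically; "c123,c456" is an arbitrary example pair.
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo                              # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest        # placeholder image
    securityContext:
      seLinuxOptions:
        level: "s0:c123,c456"                     # categories isolating this container
```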
Cloud Automation
OCP 4.1 is designed for diverse, hybrid cloud environments spanning traditional on-premises and cloud platforms. OCP has automation providers for on-premises bare metal and for virtualization platforms such as VMware, Red Hat Virtualization, and OpenStack, in addition to the Amazon Web Services (AWS) support already available. Providers that integrate with Alibaba Cloud, Google Cloud, IBM Cloud, and Microsoft Azure will become available in the coming months.
These integrations give developers a unified platform experience and let companies take advantage of shifting price points across the various clouds while keeping the experience seamless for end users. A company can, for example, host non-critical development environments on cheaper spot instances while maintaining a multi-cloud production workload for the highest possible availability and redundancy.
Kubernetes Operators
As defined by the CoreOS team, "An Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and tooling." That's a lot of jargon. To simplify that definition, Operators are the runtime that manages your application on Kubernetes, allowing your code to directly interface with the Kubernetes system to get work done more efficiently and dynamically.
Red Hat now offers the Red Hat OperatorHub for Red Hat OpenShift, giving customers a curated and tested repository of trusted Operators and taking the guesswork out of Operator implementation. This allows companies to implement automation capabilities, such as self-service provisioning, self-tuning, data replication, automated backups, and automated updates, for their respective services.
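Installing an Operator from OperatorHub ultimately comes down to creating a Subscription resource that Operator Lifecycle Manager acts on. The sketch below uses a hypothetical Operator and channel name; the catalog source shown is the curated Red Hat catalog.

```yaml
# Sketch of a Subscription asking Operator Lifecycle Manager to install an
# Operator from a catalog. Operator and channel names are hypothetical.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  name: example-operator               # package name in the catalog
  channel: stable                      # update channel to track
  source: redhat-operators             # curated catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic       # apply Operator updates automatically
```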
Operator-enabled integration with Red Hat Middleware
Using the OpenShift Certified Operators mentioned above, OCP can directly integrate with Red Hat Middleware for an unprecedented level of integration between platform and middleware. By unifying development environments around Operator capabilities, developers can focus on delivering next-gen services instead of worrying about underlying tooling.
OpenShift Service Mesh
According to OpenShift documentation, "OpenShift Service Mesh combines Istio, Jaeger, and Kiali projects as a single capability that encodes communication logic for microservices-based application architectures, freeing developer teams to focus on business-add logic." Let’s break down that jargon.
Istio - Istio controls the complexity of microservice network connectivity allowing secure communication between containers in a controlled and monitored manner.
Jaeger - Jaeger provides distributed tracing, transaction monitoring, service analysis, and root cause analysis giving you unprecedented visibility into your platform.
Kiali - Kiali provides visibility into the microservices integrated into your service mesh, giving topology, health status, metrics, configuration validation, and distributed tracing.
OpenShift Service Mesh is one of the most exciting changes to come to the OpenShift ecosystem, addressing some of the greatest weaknesses of previous OCP releases.
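As a concrete example of the Istio layer described above, a VirtualService can split traffic between two versions of a service for a canary rollout, all in configuration rather than application code. The host and subset names below are hypothetical, and the subsets would be defined in a companion DestinationRule.

```yaml
# Illustrative Istio VirtualService: send 90% of traffic to v1 of a service
# and 10% to v2. Host and subset names are hypothetical.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-canary
spec:
  hosts:
  - reviews                        # hypothetical in-mesh service name
  http:
  - route:
    - destination:
        host: reviews
        subset: v1                 # subsets defined in a DestinationRule
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```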
OpenShift 4.1 Features
OCP 4.1 ships with a slew of new innovations that are either in tech preview or soon to be released.
Knative
Knative is in Developer Preview for OCP 4.1 and is ideal for building Function-as-a-Service (FaaS) workloads. It is designed for building, deploying, and managing serverless workloads that can scale down to zero when not in use and scale up on demand to meet your needs. This lets you create on-demand functions in a fashion similar to AWS Lambda, but internal to your OCP cluster.
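A minimal Knative Service illustrates the model: you declare a container image, and Knative handles routing, revisioning, and scale-to-zero. The image and names below are placeholders, and the API version depends on the Knative release in use (earlier previews exposed alpha versions of this API).

```yaml
# Minimal Knative Service sketch: scales to zero when idle and back up on
# demand. Image and names are placeholders; API version varies by release.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-fn                                    # hypothetical function name
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/hello-fn:latest # placeholder image
        env:
        - name: TARGET
          value: "OpenShift"
```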
KEDA (Kubernetes-based event-driven autoscaling)
Developed in a collaboration between Microsoft and Red Hat and available in Developer Preview, KEDA supports deploying serverless, event-driven containers on Kubernetes and enables Azure Functions to run in OpenShift. This allows for accelerated development of event-driven, serverless functions across hybrid cloud and on-premises environments with Red Hat OpenShift.
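KEDA's central object is the ScaledObject, which ties a workload to an event source. The sketch below scales a deployment on the depth of an Azure Storage queue; the deployment, queue, and environment variable names are hypothetical, and the API group and field names have changed across KEDA releases, so treat this as illustrative only.

```yaml
# Sketch of a KEDA ScaledObject scaling a deployment by Azure queue depth.
# Names are hypothetical; API group/fields differ between KEDA versions.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-processor-scaler
spec:
  scaleTargetRef:
    name: queue-processor            # deployment to scale
  minReplicaCount: 0                 # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
  - type: azure-queue
    metadata:
      queueName: orders              # hypothetical queue name
      queueLength: "5"               # target messages per replica
      connectionFromEnv: AZURE_STORAGE_CONNECTION   # env var holding the connection string
```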
Operator-enabled Red Hat OpenShift Container Storage 4
Currently under development, Red Hat OpenShift Container Storage 4 will offer highly scalable persistent storage for cloud-native applications, with encryption, replication, and availability designed into the solution. It will allow application teams to dynamically provision secure, fast, and stable storage for workloads such as SQL/NoSQL databases, CI/CD pipelines, and AI/machine learning.
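From the application team's point of view, consuming that storage remains the familiar Kubernetes pattern of claiming dynamically provisioned volumes from a storage class. The claim below is illustrative; the claim name, size, and storage class name are placeholders.

```yaml
# Illustrative PersistentVolumeClaim: an application team requests storage
# dynamically from a storage class. Names and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                              # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: example-block-storage    # placeholder storage class name
```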
What does all of this mean?
The release of OCP 4.1 represents a quantum leap in PaaS technologies. OCP offers a platform with unprecedented levels of control and visibility for administrators and tighter integration between services that allow developers to simply focus on innovating with code. The current feature set is excellent and the coming features represent new heights in enterprise container management.
Next Steps
Stone Door Group is an early adopter of all things Red Hat OpenShift. Our OpenShift Container Platform Accelerator℠ solution takes all the guesswork out of transitioning brownfield applications to OpenShift by executing an industry best practice services migration methodology that delivers tangible and valuable outcomes.
About the author
James Kersbergen is a Senior Architect, AWS Professional Architect, and Red Hat OpenShift Certified Delivery Consultant for Stone Door Group, a DevOps Solutions Integrator. James, along with many other SDG consultants, works with enterprises of all sizes to help them execute on their DevOps and digital transformation strategies. To learn more about Stone Door Group, drop us a line at letsdothis@stonedoorgroup.com.