Red Hat OpenShift 4 Production Cluster Administration

Mindwatering Incorporated

Author: Tripp W Black

Created: 10/13 at 09:09 PM

 

Category:
Linux
RH OpenShift

Overview:
- OpenShift cluster, applications, and users administration
- Maintenance and Troubleshooting of Kubernetes clusters
- Security management of the cluster
- User Provisioned Infrastructure (UPI)
- Cluster applications consist of multiple resources that are configured together, and each resource has a definition document and a configuration applied.
- Declarative paradigm of resource management for specifying desired states that the system will configure, vs. imperative commands that manually configure the system step-by-step

Software Alphabet Soup:
- OpenShift Container Platform (OCP), and the OpenShift CLI client (oc)
- Red Hat OpenShift Container Platform (RHOCP, also abbreviated ROCP in these notes), based on Kubernetes
- Single-node OpenShift (SNO), a single-node implementation, meaning an RHOCP cluster running on a single node (host server), typically on bare metal
- The Project resource type is roughly equivalent to a namespace (more later)


Using Deployment Strategies:
- Deployment strategies change or upgrade applications/instances with or without downtime so that users barely notice a change
- Users generally access applications through a route handled by a router, so updates can focus on the DeploymentConfig object features or routing features.
- Most deployment strategies are supported through the DeploymentConfig object, and additional strategies are supported through router features.
- - Object features impact all routes that use the application
- - Router features impact targeted individual routes
- If a deployment readiness check fails, the DeploymentConfig object retries running the pod until it times out. The default timeout is 10 minutes (600 seconds), set in dc.spec.strategy.*params --> timeoutSeconds

Rolling Deployment Updates:
- Default deployment strategy when none specified in the DeploymentConfig object
- Replaces instances of previous application/deployment with new versions by deploying new pods and waiting for them to become "ready" before scaling down the old version instances.
- - Waiting on the new versions to be "ready" is called a "canary test", and this method, a "canary deployment".
- If the new pods never become ready, the rollout is aborted and the deployment rolls back to its previous version.
- Should not be used when the old and new application versions are incompatible and cannot run alongside each other. The application should be designed for "N-1" compatibility.
- The rollingParams defaults:
- - updatePeriodSeconds - wait time for individual pod updates: 1
- - intervalSeconds - wait time after update for polling deployment status: 1
- - timeoutSeconds (optional) - wait time for scaling up/down event before timeout: 600
- - maxSurge (optional) - maximum percentage or number of instance rollover at one time: "25%"
- - maxUnavailable (optional) - maximum percentage or number of instances down/in-process at one time: "25%" (or 1 in OC)
- - pre and post - default to {}, are lifecycle hooks to be done before and after the rolling update
- If you want faster rollouts, use maxSurge with a higher value. If you want tight resource quotas and partial unavailability is acceptable, limit with maxUnavailable.
- If you implement complex checks (such as end-to-end workload workflows against the new instances), a custom deployment or blue-green deployment strategy is preferable to a simpler rolling update. (A sketch of the rollingParams in a DeploymentConfig spec follows this list.)
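
A minimal sketch of how these defaults would sit in a DeploymentConfig spec (values are the defaults listed above; treat the exact snippet as illustrative):
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxSurge: "25%"
      maxUnavailable: "25%"
      pre: {}
      post: {}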

Important:
In RHOCP, maxUnavailable is 1 for all machine config pools. Red Hat recommends not changing this value for the control plane pool; update one control plane node at a time.

Rolling Deployment Updates Order:
1. Executes pre lifecycle hook
2. Scales up new replication controller-based instances by the maxSurge count/percentage
3. Scales down old replication controller-based instances by the maxUnavailable count/percentage
4. Repeats #2-3 scaling until the new replication controller has reached the desired replica count and the old replication controller count has reached 0
5. Executes post lifecycle hook

Example rolling deployment demo from RH documentation:
Set-up a application to rollover:
[admin@rocp ~]$ oc new-app quay.io/openshifttest/deployment-example:latest
[admin@rocp ~]$ oc expose svc/deployment-example
[admin@rocp ~]$ oc scale dc/deployment-example --replicas=3

The following tag command will cause a rollover:
[admin@rocp ~]$ oc tag deployment-example:v2 deployment-example:latest
Watch the v1 to v2 rollover with:
[admin@rocp ~]$ oc describe dc deployment-example
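
To follow the rollout progress another way (a hedged alternative to repeatedly describing the dc):
[admin@rocp ~]$ oc rollout status dc/deployment-example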

Perform a rolling deployment update using the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> select/highlight application node --> Overview (tab/panel)
- In the Overview panel, confirm Update Strategy: Rolling, click Actions (dropdown) --> select Start Rollout

Edit Deployment configuration, image settings, and environmental variables in the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> click/open application --> Details (panel)
- In the Details panel, click Actions (dropdown) --> select Edit Deployment
- In the Edit Deployment window, edit the options desired:
- - Click Pause rollouts to temporarily disable updated application rollouts
- - Click Scaling to change the number of instance replicas
- - Click Save (button)

Recreate Deployment Update:
- Recreate deployment strategy
- Incurs downtime because, for a brief period, no instances of your application are running.
- Old code and new code do not run at the same time.
- Basic rollout behavior
- Use when:
- - Migration data transformation hooks have to be run before the new deployment starts
- - Application doesn't support old and new versions of code running in a rolling deployment
- - Application requires a ReadWriteOnce (RWO) volume, which cannot be shared between multiple replicas
- Supports pre, mid, and post lifecycle hooks (see the sketch after this list)
- The recreateParams are all optional
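
A minimal sketch (illustrative container name and command) of a Recreate strategy with a mid hook that runs a migration step in a new pod before the new version scales up:
spec:
  strategy:
    type: Recreate
    recreateParams:
      mid:
        failurePolicy: Abort
        execNewPod:
          containerName: mysql
          command: ["/bin/sh", "-c", "echo run data migration here"]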

Recreate Deployment Updates Order:
1. Executes pre lifecycle hook
2. Scales down previous deployment to 0 instances
3. Executes mid lifecycle hook
4. Scales up new deployment
5. Executes post lifecycle hook

Note:
- If the number of replicas is greater than 1, the first instance is validated for readiness (wait for "ready") before the rest of the instance count scales up. If the first replica fails, the recreate deployment fails and aborts.

Perform a recreate deployment update using the ROCP Developer Perspective:
- web console --> Developer perspective --> Topology view --> click/open application node --> Details (panel)
- In the Details panel, click Actions (dropdown) --> select Edit Deployment Config
- - In the YAML editor, change the spec.strategy.type to Recreate
- - Click Save (button)
- web console --> Developer perspective --> Topology view --> highlight/select application node --> Overview (tab/panel)
- In the Overview panel, confirm Update Strategy: Recreate, click Actions (dropdown) --> select Start Rollout


Imperative Commands vs. Declarative Commands:
Imperative commands in Kubernetes directly manipulate the state of the system by executing specific commands, while declarative commands define the desired state in a configuration file that Kubernetes then works to achieve.

The imperative approach lets the administrator issue step-by-step commands, where the result of each command gives the administrator the flexibility to adapt based on the previous response. The declarative approach uses written instructions, called manifests, which Kubernetes reads and applies as cluster changes to reach the state the resource manifest defines. The industry generally prefers the latter due to:
- Increased reproducibility/consistency
- Better version control
- Better GitOps methodology

Resource Manifest:
- a file in YAML or JSON format, and thus a single document which can readily be version-controlled
- simplifies administration by encapsulating all the attributes of an application in a file, or a set of related files, which can then be applied repeatedly with consistent results, enabling the CI/CD pipelines of GitOps.

Imperative command example:
[admin@rocp ~]$ kubectl create deployment mysql-pod --port 3306 --image registry.ocp4.mindwatering.net:8443/mysql:latest --env="MYSQL_USER='dbuser'" --env="MYSQL_PASSWORD='hardpassword'" --env="MYSQL_DATABASE='dbname'"
deployment.apps/mysql-pod created

Adding the --save-config and --dry-run=client options, together with -o yaml, renders the resource that would have been created in its configuration nomenclature so the output can be saved into a manifest file.
[admin@rocp ~]$ kubectl create deployment mysql-pod --namespace=mysql-manifest --port 3306 --image registry.ocp4.mindwatering.net:8443/mysql:latest --replicas=1 --env="MYSQL_USER='dbuser'" --env="MYSQL_PASSWORD='hardpassword'" --env="MYSQL_DATABASE='dbname'" --save-config --dry-run=client -o yaml > ~/mysql-deployment.yaml

[admin@rocp ~]$ cat mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: mysql-manifest
  annotations:
    ...
  creationTimestamp: null
  labels:
    app: mysql-pod
  name: mysql-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-pod
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql-pod
    spec:
      containers:
      - image: registry.ocp4.mindwatering.net:8443/mysql:latest
        name: mysql
        env:
        - name: MYSQL_USER
          value: dbuser
        - name: MYSQL_PASSWORD
          value: hardpassword
        - name: MYSQL_DATABASE
          value: dbname
        ports:
        - containerPort: 3306
        resources: {}
status: {}

Notes:
- The order of parameters matters. For example, if the --env options are moved earlier in the command, they are not added.
- Never include the password in plain text like this; abstract the credential with a Secret.
- Add the service resource manifest to this one as one file separated by the --- delimiter, or keep them in separate files which are loaded together (see the sketch after these notes).
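
A minimal sketch (values mirror the deployment above and are illustrative) of a matching Service manifest appended to the same file after the --- delimiter:
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-pod
  namespace: mysql-manifest
spec:
  selector:
    app: mysql-pod
  ports:
  - port: 3306
    targetPort: 3306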

The declarative command syntax:
[admin@rocp ~]$ kubectl create -f ~/mysql-deployment.yaml

IMPORTANT:
- The kubectl create command above does not take into account the current running state of the resource, in this case, the mysql-pod resource. Executing kubectl create -f against a manifest for a live resource gives an error because the mysql-pod is already running. When using kubectl create to create/deploy a resource, the --save-config option produces the required annotations for future kubectl apply commands to operate.
- In contrast, the kubectl apply -f command is declarative and tries to apply updates without causing issues; it considers the difference between the current resource state in the cluster and the intended resource state that is expressed in the manifest. If the specified resource in the manifest file does not exist, then the kubectl apply command creates the resource. If any fields in the last-applied-configuration annotation of the live resource are not present in the manifest, then the command removes those fields from the live configuration. After applying changes to the live resource, the kubectl apply command updates the last-applied-configuration annotation of the live resource to account for the change.
- The kubectl apply command compares three sources: the manifest file, the live configuration of the resource(s) in the cluster, and the configuration stored in the last-applied-configuration annotation.

To help verify syntax and whether an applied manifest update would succeed, use the --dry-run=server and --validate=true flags. The --dry-run=client option does not include the validation that the cluster's resource controllers perform during a server-side dry run.
[admin@rocp ~]$ kubectl apply -f ~/mysql-deployment.yaml --dry-run=server --validate=true
deployment.apps/mysql-pod created (server dry-run)

Diff Tools vs Kubectl Diff:
Kubernetes resource controllers automatically add annotations and attributes to the live resource that make the output of generic OS text-based diff tools report many differences that have no impact on the resource configuration, causing confusion and wasted time. Using the kubectl diff command confirms whether a live resource matches the resource configuration that a manifest provides. Because other tools cannot know all the details about how controllers might change a resource, the kubectl diff tool lets the cluster determine whether a change is meaningful. Moreover, GitOps tools depend on the kubectl diff command to determine whether anyone changed resources outside the GitOps workflow.

OC Diff Update:
Applying manifest changes may not generate new pods for changes in secrets and configuration maps, because these elements are only read at deployment/pod start-up. Like the kubectl diff command, oc diff compares the running deployment's configuration against the file specified in the diff. If the configuration changes require a restart, it has to be done separately. The pods could be deleted, but the oc rollout restart command stops and replaces pods to minimize downtime.
[admin@rocp ~]$ oc diff -f mysql-pod.yaml
or
[admin@rocp ~]$ cat mysqlservice.yaml | oc diff -f -
[admin@rocp ~]$ oc rollout restart deployment mysql-pod

OC Patch Update:
The oc patch command allows partial YAML or JSON snippets to be applied to live resources in a repeatable, declarative way. The patch applies to a deployment/pod regardless of whether the patched key already exists in the manifest YAML file - existing configuration is updated, and new configuration is added.
[admin@rocp ~]$ oc patch deployment mysql-pod -p '{ <insert JSON snippet> }'
deployment/mysql-pod patched

[admin@rocp ~]$ oc patch deployment mysql-pod --patch-file ~/mysql-deploypatch.yaml
deployment/mysql-pod patched
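
For example, a hedged strategic-merge patch (snippet and replica count are illustrative) that only changes the replica count:
[admin@rocp ~]$ oc patch deployment mysql-pod -p '{"spec":{"replicas":2}}'
deployment.apps/mysql-pod patched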

CLI Reference:
docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/cli_tools/index#cli-developer-commands


Create Deployment via UI:
- OCP login --> Workloads (left menu category/twistie) --> Deployments (left menu) --> click Create Deployment (button)
- Choose either Form view, or YAML view
- If Form view, enter the deployment name in the Name field and complete the form; otherwise, edit the YAML.
- Click Create (button)
- Wait a few seconds for the Deployment details page to display for the deployment just created.
- - Under Details (tab), confirm the number of Pods running.
- - Under the Pods (tab), confirm the Status column becomes Running for the deployment pod(s).
- - Still under Pods (tab), continue scrolling down to the Containers heading, click "container" text.
- - - Under Container details (heading), confirm State is Running, and review Resource requests and Resource limits (if applied)
- - - - Dash "-" indicates none applied

View the YAML resource definition of an existing deployment (like oc get deployment <deploymentname> -o yaml) via UI:
- OCP login --> Workloads (left menu category/twistie) --> Deployments (left menu)
- Under Name column, click <deploymentname> desired
- On the Details page, click YAML (tab)
- - If desired, you can edit, and click Save (button) to perform a rolling apply to the deployment.


Creating Manifests from Git:
Maintaining application manifests in Git provides version control and the ability to deploy new versions of apps from Git. When you set up your Git access, you typically create a folder structure in a specific location where the repository was cloned.
In this example, our git folder/project is: ~/gitlab.mindwatering.net/mwdev/mysql-deployment/
The version numbers are set when you commit; this git repo has v1.0, v1.1.
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful ...

b. Create a new OC project:
[admin@rocp ~]$ oc new-project mysql-deployment
Now using project "mysql-deployment" on server ...

Note:
- To switch projects, use the command: oc project <project-name>

c. Switch to the v1.1 of the Git repo:
[admin@rocp ~]$ cd ~/gitlab.mindwatering.net/mwdev/

[admin@rocp ~]$ git clone https://github.mindwatering.net/mwdev/mysql-deployment.git
Cloning into 'mysql-deployment' ...

[admin@rocp ~]$ git log --oneline
... (HEAD -> main, tag: <branchversion>, origin ...
<Note the tag version number. That's the version of the application manifest for the mysql-deployment app>

[admin@rocp ~]$ cd mysql-deployment/

[admin@rocp ~]$ git checkout v1.1
branch 'v1.1' set up to track 'origin/v1.1' ...

d. In the app's folder, validate the v1.1 version of the mysql-deployment application can be deployed:
[admin@rocp ~]$ oc apply -f . --validate=true --dry-run=server
<confirm dry run>

e. After a successful dry-run, deploy the application:
[admin@rocp ~]$ oc apply -f .

f. Watch the deployments and pod instances and confirm the new app becomes available and its pods have a running state:
(Technically, this watch command looks at all deployments and pods in the current project, not just the one just deployed, so we will likely see more than just the new app.)
[admin@rocp ~]$ watch oc get deployments,pods
Every 2.0s: oc get deployments,pods ...

NAME READY UP-TO-DATE AVAILABLE AGE
...
deployment.apps/mysql-pod 1/1 1 1 60s
...

NAME READY STATUS RESTARTS AGE
...
pod/mysql-pod-6fddbbf94f-2pghj 1/1 Running 0 60s
...
<cntrl+c>, to end the watch

g. Review the current deployment manifest:
[admin@rocp ~]$ oc get deployment mysql-pod -o yaml
<view output>

h. Confirm still in the git working folder, and delete the current running deployment:
[admin@rocp ~]$ pwd
.../gitlab.mindwatering.net/mwdev/mysql-deployment/
[admin@rocp ~]$ oc delete -f .
<view components deleted>


OpenShift CLI (oc) Common Commands and Options:
Login to OpenShift:
[admin@rocp ~]$ oc login <server_url> --token=<your_token>

Get Cluster Information
[admin@rocp ~]$ oc status

Get Projects
[admin@rocp ~]$ oc projects

Switch to a Project
[admin@rocp ~]$ oc project <project_name>

List All Pods
[admin@rocp ~]$ oc get pods

Describe a Pod
[admin@rocp ~]$ oc describe pod <pod_name>

Get Pod Logs
[admin@rocp ~]$ oc logs <pod_name>

Execute a Command Inside a Pod
[admin@rocp ~]$ oc exec <pod_name> -- <command>

List All Deployments
[admin@rocp ~]$ oc get deployments

View Deployment Details
[admin@rocp ~]$ oc describe deployment <deployment_name>

List All DeploymentConfigs
[admin@rocp ~]$ oc get deploymentconfigs

View DeploymentConfig Details
[admin@rocp ~]$ oc describe deploymentconfig <deployment_config_name>

Trigger a New Deployment Rollout (Kubernetes Deployment)
- Restart a Kubernetes Deployment to trigger a new rollout.
[admin@rocp ~]$ oc rollout restart deployment/<deployment_name>

Trigger a New Deployment Rollout (OpenShift DeploymentConfig)
- Trigger a new rollout using an OpenShift DeploymentConfig.
[admin@rocp ~]$ oc rollout latest <deployment_config_name>

Scale Down and Up to Rollout (Kubernetes Deployment)
-Trigger a new rollout by scaling down and then back up.
[admin@rocp ~]$ oc scale deployment <deployment_name> --replicas=0
[admin@rocp ~]$ oc scale deployment <deployment_name> --replicas=1

List Services
[admin@rocp ~]$ oc get svc

Describe a Service
[admin@rocp ~]$ oc describe svc <service_name>

List Routes
[admin@rocp ~]$ oc get routes

Describe a Route
[admin@rocp ~]$ oc describe route <route_name>

List Builds
[admin@rocp ~]$ oc get builds

Start a New Build
[admin@rocp ~]$ oc start-build <build_name>

List Image Streams
[admin@rocp ~]$ oc get is

Describe an Image Stream
[admin@rocp ~]$ oc describe is <image_stream_name>

Scale a Deployment
[admin@rocp ~]$ oc scale --replicas=<number_of_replicas> deployment/<deployment_name>

Scale a DeploymentConfig
[admin@rocp ~]$ oc scale --replicas=<number_of_replicas> deploymentconfig/<deployment_config_name>

Delete an Application
[admin@rocp ~]$ oc delete all -l app=<application_name>

Expose a Service as a Route
[admin@rocp ~]$ oc expose svc/<service_name>


Kubernetes Kustomize:
Kustomize Overview:
- As a standalone tool, customizes Kubernetes objects through a kustomization file
- Makes declarative changes to application configurations and components and preserve the original base YAML files
- Group in a directory the Kubernetes resources that constitute your application, and then use Kustomize to copy and adapt these resource files to your environments and clusters.
- Starting with Kubernetes 1.14, kubectl supports declarative management of Kubernetes objects using kustomization files. oc integrates the Kustomize tool as well.
- Features:
- - Generating resources from other sources
- - Setting cross-cutting fields for resources
- - Composing and customizing collections of resources

Kustomize File Structure:
Add a kustomization.yaml file to your code's base directory. The kustomization.yaml file has a resources field that lists all resource files. As the name implies, all resources in the base directory form a common resource set. The kustomization.yaml file can create a base application by composing all common resources, referring to one or more directories as bases.

Below is a very basic example showing the contents of a kustomization:

[admin@rocp ~]$ tree ~/kustomcodedemo/base/
base/
├── configmap.yaml
├── deployment.yaml
├── secret.yaml
├── service.yaml
├── route.yaml
└── kustomization.yaml

[admin@rocp ~]$ cat ~/kustomcodedemo/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- configmap.yaml
- deployment.yaml
- secret.yaml
- service.yaml
- route.yaml

To render/view resources contained in kustomization:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodedemo/base/
<review resources/yaml>

To apply the kustomation(s), use kubectl apply:
[admin@rocp ~]$ kubectl apply -k ~/kustomcodedemo/base/

Kustomize Overlays:
- Overlays are declarative YAML artifacts, or patches, that override the general settings without modifying the original base files

Overlay example:
Notes:
- Overlay folder is peer to the base folder
- The overlay folders contain relative references to the base code (e.g. ../../base).
- The dev overlay kustomization.yaml changes the namespace to dev-env.
- The test overlay kustomization.yaml changes the namespace to test-env, and contains the testing patches
- The prod overlay kustomization.yaml contains one patch loading the file patch.yaml, and has allowNameChange: true, so that patch.yaml can change the name.

[admin@rocp ~]$ tree ~/kustomcodedemo/
base (folder)
├── configmap.yaml
├── deployment.yaml
├── secret.yaml
├── service.yaml
├── route.yaml
└── kustomization.yaml
overlay (folder)
├── development
│   └── kustomization.yaml
├── testing
│   └── kustomization.yaml
└── production
    ├── kustomization.yaml
    └── patch.yaml

[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/development/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev-env
resources:
- ../../base

[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/testing/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test-env
patches:
- patch: |-
    - op: replace
      path: /metadata/name
      value: mysql-pod-test
  target:
    kind: Deployment
    name: mysql-pod
- patch: |-
    - op: replace
      path: /spec/replicas
      value: 15
  target:
    kind: Deployment
    name: mysql-pod
resources:
- ../../base
commonLabels:
  env: test

[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod-env
patches:
- path: patch.yaml
  target:
    kind: Deployment
    name: mysql-pod
  options:
    allowNameChange: true
resources:
- ../../base
commonLabels:
  env: prod

[admin@rocp ~]$ cat ~/kustomcodedemo/overlay/production/patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-pod-prod
spec:
  replicas: 5

Apply the production overlay:
[admin@rocp ~]$ kubectl apply -k ~/kustomcodedemo/overlay/production
deployment.apps/mysql-pod-prod created
...

Delete the testing overlay:
[admin@rocp ~]$ oc delete -k ~/kustomcodedemo/overlay/testing
<view pod containers deleted>

Configuration and Sensitive Data with Kustomize:
ConfigMaps and Secrets store configuration or sensitive data that other Kubernetes objects, such as Deployment pods, consume. The source of truth for ConfigMaps or Secrets is usually external to a cluster, such as a .properties file or an SSH keyfile. Kustomize has secretGenerator and configMapGenerator fields, which generate a Secret and a ConfigMap from files or literals.

Using ConfigMaps Notes:
- All entries in an application.properties become a single key in the ConfigMap generated.
- Each variable in the .env file becomes a separate key in the ConfigMap generated.
- ConfigMaps can also be generated from literal key-value pairs; add an entry to the literals list in configMapGenerator.

Using Secrets Notes:
- Generate Secrets from files or literal key-value pairs.
- To generate a Secret from a file, add an entry to the files list in secretGenerator

Using ConfigMaps:
Example configMapGenerator that loads from an application.properties file:
[admin@rocp ~]$ vi ~/kustomcodemap/base/application.properties
FOO=Bar
<esc>:wq (to save)

[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
configMapGenerator:
- name: example-configmap-1
  files:
  - application.properties
<esc>:wq (to save)

To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  application.properties: |
    FOO=Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-1abcd7891e

Example configMapGenerator that loads from an .env file:
[admin@rocp ~]$ vi ~/kustomcodemap/base/.env
FOO=Bar
<esc>:wq (to save)

[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
configMapGenerator:
- name: example-configmap-1
  envs:
  - .env
<esc>:wq (to save)

To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  FOO: Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-23abcd123a

Example of a configMap from a literal key-value pair:
[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
configMapGenerator:
- name: example-configmap-2
  literals:
  - FOO=Bar
<esc>:wq (to save)

To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  FOO: Bar
kind: ConfigMap
metadata:
  name: example-configmap-2-a1abcde2ab

Example of application.properties, deployment, with a generated ConfigMap:
[admin@rocp ~]$ vi ~/kustomcodemap/base/application.properties
FOO=Bar
<esc>:wq (to save)

[admin@rocp ~]$ vi ~/kustomcodemap/base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: example-configmap-1
<esc>:wq (to save)

[admin@rocp ~]$ vi ~/kustomcodemap/base/kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-1
  files:
  - application.properties
<esc>:wq (to save)

To validate your configMap generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodemap/base/
apiVersion: v1
data:
  application.properties: |
    FOO=Bar
kind: ConfigMap
metadata:
  name: example-configmap-1-g4hk9g2ff8
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: my-app
        name: app
        volumeMounts:
        - mountPath: /config
          name: config
      volumes:
      - configMap:
          name: example-configmap-1-g4hk9g2ff8
        name: config

Example of configMapGenerator with all three types (file, envs, and literal key):
[admin@rocp ~]$ cd ~/kustomconfigmaps/
[admin@rocp ~]$ vi application.properties
Day=Monday
Enabled=True
<esc>:wq (to save)

[admin@rocp ~]$ vi configmap2.env
Greet=Welcome
Enable=True
<esc>:wq (to save)

[admin@rocp ~]$ cat kustomization.yaml
...
configMapGenerator:
- name: configmap-props
  files:
  - application.properties
- name: configmap-envs
  envs:
  - configmap2.env
- name: configmap-literals
  literals:
  - name="configmap-literal"
  - description="literal key-value pair"
...

[admin@rocp ~]$ kubectl kustomize .
apiVersion: v1
data:
  application.properties: |
    Day=Monday
    Enabled=True
kind: ConfigMap
metadata:
  name: configmap-props-5g2mh569b5
---
apiVersion: v1
data:
  Enable: "True"
  Greet: Welcome
kind: ConfigMap
metadata:
  name: configmap-envs-92m84tg9kt
---
apiVersion: v1
data:
  description: literal key-value pair
  name: configmap-literal
kind: ConfigMap
metadata:
  name: configmap-literals-k7g7d5bffd
---
...

Using Secrets:
Example Secret that loads from a password.txt file:
[admin@rocp ~]$ vi ~/kustomcodeshh/base/password.txt
username=admin
password=reallygoodpassword
<esc>:wq (to save)

[admin@rocp ~]$ vi ~/kustomcodeshh/base/kustomization.yaml
secretGenerator:
- name: example-secret-1
  files:
  - password.txt
<esc>:wq (to save)

Example Secret that loads from literal list:
[admin@rocp ~]$ vi ~/kustomcodeshh/base/kustomization.yaml
secretGenerator:
- name: example-secret-2
  literals:
  - username=admin
  - password=reallygoodpassword
<esc>:wq (to save)

To validate your secret generation:
[admin@rocp ~]$ kubectl kustomize ~/kustomcodeshh/base/
apiVersion: v1
data:
  password: cmVhbGx5Z29vZHBhc3N3b3Jk
  username: YWRtaW4=
kind: Secret
metadata:
  name: example-secret-2-t52t6g96d8
type: Opaque
(Note: Secret data values appear base64-encoded in the generated output.)

To apply:
[admin@rocp ~]$ kubectl apply -k ~/kustomcodeshh/base/
<confirm created okay>

View the created pod:
[admin@rocp ~]$ oc get all
<view containers>

Example of secret generators with all three types (file, envs, and literal key):
[admin@rocp ~]$ cd ~/kustomsecrets2/
[admin@rocp ~]$ cat kustomization.yaml
...
secretGenerator:
- name: secret-file
  files:
  - password.txt
- name: secret-envs
  envs:
  - secret-mysql.env
- name: secret-literal
  literals:
  - MYSQL_ADMIN_PASSWORD=postgres
  - MYSQL_DB=mysql
  - MYSQL_USER=user
  - MYSQL_PASS=reallygoodpassword
configMapGenerator:
- name: db-config
  literals:
  - DB_HOST=database
  - DB_PORT=5432
...

ConfigMap generatorOptions:
- Alter the default behavior of Kustomize generators.
- Workload resources, e.g. deployments, do not detect changes to configuration maps and secrets unless the name changes.
- By default, a kustomize configMapGenerator and a secretGenerator append a hash suffix to the name of the generated resource(s), which means the name changes each apply, which means the deployment updates.
- Set disableNameSuffixHash: true under generatorOptions to disable the hash suffix

Example of configMapGenerator without a hash each time:
[admin@rocp ~]$ vi ~/kustomdisablegeneratedhash/kustomization.yaml
...
configMapGenerator:
- name: my-configmap
  literals:
  - name="configmap-nohash"
  - description="literal key-value pair"
generatorOptions:
  disableNameSuffixHash: true
  labels:
    type: generated-disabled-suffix
  annotations:
    note: generated-disabled-suffix
...

[admin@rocp ~]$ kubectl kustomize ~/kustomdisablegeneratedhash/
apiVersion: v1
data:
  description: literal key-value pair
  name: configmap-nohash
kind: ConfigMap
metadata:
  annotations:
    note: generated-disabled-suffix
  labels:
    type: generated-disabled-suffix
  name: my-configmap
...


Namespace and Project
K8s Namespace Review:
- K8s uses namespaces to isolate workloads
- Policy controls via project/namespace labels limit capabilities (e.g limits or quotas)
- Policy controls via project/namespace labels limit who can edit the namespace definitions - which is typically good, because otherwise users could edit their namespace to increase their capabilities, which would be bad.

K8s Namespace Security Limitation:
- Listing resources and viewing individual resources are different privileges.
- - Example: Granting the ability to view resources within a specific namespace is different from granting the ability to list namespaces.
- Namespaces themselves are not namespaced, so granting users the ability to edit only specific namespaces in K8s is a catch-22.
- - Example: Allowing users to list their namespaces gives them the ability to list all namespaces, not just theirs.

Project:
- OCP API introduces the Project resource type
- Improves security and user experience of working with namespaces
- Allows filtering of namespaces so that a user views only his/her visible/allowed namespaces, in the project format (very similar to namespace output)

ProjectRequest:
- OCP API also introduces the ProjectRequest resource type
- Creates a Project and its namespace via a template

Project Template:
- Creates new Projects
- Incorporates any namespaced resource into the project template:
- - Roles and role bindings: Add to templates to grant specific permissions in new projects. The default template grants the admin role to the user who requests the project and its namespace; add different granular permissions over specific resource types.
- - Resource quotas and limits: Add resource quotas to the project template to ensure that all new projects have resource limits, and limit ranges to reduce the effort for workload creation.
- - Network policies: Add network policies to the template to enforce organizational network isolation requirements.
- Allows customization of new Projects and their namespaces to have specific permissions, resource quotas, limit ranges, etc.
- Allows OCP to have something that K8s doesn't currently have: self-service management of namespaces.
- Cluster administrators:
- - Allow users to create namespaces without allowing users to modify namespace metadata/definitions
- - Customize creation of namespaces to follow organizational requirements (see the project template example after this list)
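
A hedged example of customizing the project template (commands follow the OpenShift documentation; the file name and the resources you add are up to you):
[admin@rocp ~]$ oc adm create-bootstrap-project-template -o yaml > ~/project-template.yaml
<edit ~/project-template.yaml to add quotas, limit ranges, network policies, role bindings, etc.>
[admin@rocp ~]$ oc create -f ~/project-template.yaml -n openshift-config
[admin@rocp ~]$ oc edit project.config.openshift.io/cluster
<set spec.projectRequestTemplate.name to the name of the new template>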

Important:
- Even with quotas in all namespaces, users can create projects to continue adding workloads to a cluster. If this scenario is a concern, then consider adding cluster resource quotas to the cluster (a sketch follows this note).
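
A minimal sketch (name, selector, and limits are illustrative) of a ClusterResourceQuota that caps everything created across all projects requested by one user:
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: ocpuser1-total
spec:
  selector:
    annotations:
      openshift.io/requester: ocpuser1
  quota:
    hard:
      pods: "10"
      requests.cpu: "4"
      requests.memory: 8Gi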


Deploy Packaged Templates
Templates Overview:
- A Kubernetes custom resource that describes a set of Kubernetes resource configurations
- Have varied use cases, and can create any Kubernetes resource
- Can have parameters
- Processing templates and provided parameter values creates a set of Kubernetes resources.
- - To run oc process on templates in a namespace, you must have write permissions on that namespace.
- The Template resource type is a Kubernetes extension that Red Hat OpenShift (OCP) provides.
- - The Cluster Samples Operator populates templates and image streams in the "openshift" namespace.
- - The operator can be set during installation to opt-out of adding templates.
- - The operator can be set to restrict the list of templates provided.
- - Unprivileged users can read the templates in the "openshift" namespace by default; they can extract a template from the openshift namespace and create a copy in their own projects, where they have wider permissions. By copying a template to a project, they can use the oc process command on the template in that project's namespace (see the copy example after this list).
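
A hedged example (template and project names assumed from later sections) of copying a template into your own project so it can be processed there:
[admin@rocp ~]$ oc get template mysql-template -o yaml -n openshift > ~/mysql-template.yaml
[admin@rocp ~]$ oc create -f ~/mysql-template.yaml -n mysql-fromtemplate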

Methods to Create Kubernetes Related Resources Using Templates:
- Create using the CLI
- Upload a template to a project or the global template library using the web console

Login and display the list of templates in the openshift namespace:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ oc get templates -n openshift
NAME DESCRIPTION PARAMETERS OBJECTS
...

Evaluate a template displayed:
[admin@rocp ~]$ oc describe template <template-name> -n openshift

Notes:
- The describe includes:
- - Name, Namespace, Created, Labels, Description, Annotations, and other meta-data
- - Parameters
- - - Name, Description, Required (or not)
- - Object Labels
- - Message (notes)
- - Objects (resources that make up the template, e.g. configMaps, secrets, services, etc.)

View just the parameters needed for the template from the list:
[admin@rocp ~]$ oc process --parameters <template-name> -n openshift
NAME ... GENERATOR VALUE
...

View just the parameters needed for a template contained in a file (folder):
[admin@rocp ~]$ oc process --parameters -f template-name.yaml
NAME ... GENERATOR VALUE
...

To view the manifest file for the template:
[admin@rocp ~]$ oc get template <template-name> -o yaml -n openshift
apiVersion: template.openshift.io/v1
kind: Template
labels:
  template: <template-name>
metadata:
...

Create a new project and an app/deployment in the new project:
a. Create the new project:
[admin@rocp ~]$ oc new-project mysql-fromtemplate
Now using project "packaged-templates" on server ...

b. Create the new app from template with passed credentials (parameters):
Create a new app from a template and passing parameters like:
oc new-app --template=<template-name> -p PARAMETER_ONE=valueone -p PARAMETER_TWO=valuetwo

Notes:
- The oc new-app command cannot update an app deployed previously from an earlier version of a template.
- The oc process command can apply parameters to a template to produce the manifests needed to deploy the template with a given set of parameters; it also works against local template files on the workstation.

[admin@rocp ~]$ oc new-app --template=mysql-template -p MYSQL_USER=user1 -p MYSQL_PASSWORD=mypasswd
--> Deploying template "mysql-fromtemplate/mysql-template" to project mysql-fromtemplate
...
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/mysql'
Run 'oc status' to view your app.

b-alt.: Create the new app from template with credentials in a file as the parameter:
[admin@rocp ~]$ vi mysqlUserCred.env
MYSQL_USER=user1
MYSQL_PASSWORD=mypasswd
IMAGE=registry.ocp4.mindwatering.net:8443/mysql:latest

<esc>:wq (to save)

[admin@rocp ~]$ oc new-app --template=mysql-template --param-file=mysqlUserCred.env
--> Deploying template "mysql-fromtemplate/mysql-template" to project mysql-fromtemplate
...
--> Success
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/mysql'
Run 'oc status' to view your app.

c. Wait for deployment to become ready:
[admin@rocp ~]$ watch oc get pods
NAME READY STATUS RESTARTS AGE
...
mysql-1-deploy 0/1 Completed 0 84s
...
<cntrl>c (to exit watch)

d. Expose the service and view the app in a browser:
[admin@rocp ~]$ oc expose service/mysql
<view result port>

[admin@rocp ~]$ oc status
<review app info>

[admin@rocp ~]$ oc get routes
<view host and port - we can then bring this host and port in a web browser>

e. Update the application from a new test2 version of the application image:
[admin@rocp ~]$ cp mysqlUserCred.env mysqlUserCredTest.env
[admin@rocp ~]$ vi mysqlUserCredTest.env
...
IMAGE=registry.ocp4.mindwatering.net:8443/mysql:test2

<esc>:wq (to save)

f. Process the template manifest with the new test2 image and confirm the new image version with a diff:
[admin@rocp ~]$ oc process mysql-template --param-file=mysqlUserCredTest.env | oc diff -f -
...
- name: INIT_DB
+ value: "false"
...
+ image: registry.ocp4.mindwatering.net:8443/mysql:test2
...

Notes:
- The template image has been switched from latest to test2.
- The db will not be reinitialized since the INIT_DB parameter was not included. (The default is false.)

g. Apply the new test2 image to the existing app deployed:
[admin@rocp ~]$ oc process mysql-template --param-file=mysqlUserCredTest.env | oc apply -f -
secret/mysql configured
deployment.apps/do123-mysql-app configured
service/do123-mysql-app unchanged
route.route.openshift.io/do123-mysql-app unchanged

h. Use watch to verify the deployment is running:
[admin@rocp ~]$ watch oc get pods
NAME READY STATUS RESTARTS AGE
do123-mysql-app-a5f101bc2-abcde 1/1 Running 0 60s
mysql-1-abcde 1/1 Running 0 53m
mysql-1-deploy 0/1 Completed 0 53m



Helm Applications - Helm Charts
Overview: Deploying and updating applications from resource manifests packaged as Helm charts.

Helm:
- Open-source application for managing K8s app lifecycles
- Helm Chart:
- - A package that describes a set of K8s resources to be deployed
- - Defines values customizable during deployment
- - Contains hooks executed at different points during installation and updates.
- - - Automate tasks with more complex applications than purely manifest-based files
- Includes functions to distribute charts and updates
- Not as complex a model as K8s Operators
- Release: the deployment / app result of deploying/running the chart
- Versions: the chart can have multiple versions for upgrades and app fixes
- Minimum parameters for chart release/installation:
- - Deployment target namespace
- - Default values to override
- - Release name
- Helm charts distributed by/as:
- - Folders/files
- - Archives
- - Container images
- - Repository URLs

Notes:
- Typically, a Helm chart release does not create a namespace, and namespaced resources in the chart omit a namespace declaration.
- Helm uses the namespace passed (parameter) for the deployment, and Helm creates namespaced resources within "this" namespace.
- When installing a release, Helm creates a secret with the release details. If the secret is deleted or corrupted, Helm cannot operate on that release anymore. (The secret is of type helm.sh/release.v1; see the example after these notes.)
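
A hedged way to list those release secrets (namespace from the example below; assumes your oc/kubectl supports the type field selector on secrets):
[admin@rocp ~]$ oc get secrets -n mwsql --field-selector type=helm.sh/release.v1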


Chart Structure:
example/
├── Chart.yaml
├── templates
│   └── example.yaml
└── values.yaml

Chart.yaml: Contains chart metadata, including name, version, maintainer, and repository source of the chart.
- View this info with the command helm show chart <chartname> (e.g. example)

templates folder: Contains the resources/manifest files that make up the app / deployment
- can contain any K8s resources
- can include a namespace, or non-namespaced
- typically use the release name with the type of resource as a suffix releasename-deploy, releasename-sa, releasename-i

values.yaml: Contains the default values (like parameters) for the chart
- View these values with the command: helm show values <chartname> (e.g. example)

[admin@rocp ~]$ helm show chart mwMySQL
apiVersion: v1
description: A Helm chart for MW MySQL
name: mwMySQL
version: 0.0.1
maintainers:
- email: devaccount@mindwatering.net
  name: MW Developer
sources:
- https://git.mindwatering.net/mwmysql

[admin@rocp ~]$ helm show values mwMySQL
...
image:
  repository: "mwmysql"
  tag: "1.1.10"
  pullPolicy: IfNotPresent
...
route:
  enabled: true
  host: null
  targetPort: http
...
resources: {}
...

Notes:
- All chart values can be overridden / configured using a yaml file. (e.g. --values valuesoverride.yaml )
- If you want to customize the route, set the route.host key in your values override file.

Dry run a release, and install a Helm chart release:
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful ...

b. Create a new OC project:
[admin@rocp ~]$ oc new-project mysql-chartsdeployment
Now using project "mysql-chartsdeployment" on server ...

c. Create values override file:
[admin@rocp ~]$ vi valuesoverride.yaml
image:
  repository: registry.ocp4.mindwatering.net:8443/mysql
  tag: test2
route:
  host: mysql-test2.apps.mindwatering.net
<esc>:wq (to save)

d. Perform dry-run:
[admin@rocp ~]$ helm install mwmysql mwMySQL --namespace mwsql --dry-run --values valuesoverride.yaml

The release preview includes 3 sections:
- The Metadata: name, last deployed date/time, namespace, status, revision, hooks, manifest, etc.
- The K8s resources: Deployment, ConfigMaps, ReplicaSets, Service, ServiceAccount, Ingress, etc.
- Notes : Instructions from the developer/owner for deployment processes for release, upgrades, management ports, etc.

Note:
- The install, and the upgrade command below, install the latest chart version unless overridden with the --version option
- The helm install syntax supports --values valuesoverride.yaml or -f valuesoverride.yaml

e. If the dry-run looks correct, install w/o the dry-run parameter:
[admin@rocp ~]$ helm install mwmysql mwMySQL --namespace mwsql --values valuesoverride.yaml

f. View the new Helm release along with previous ones in the mwsql namespace:
[admin@rocp ~]$ helm list --namespace mwsql
NAME NAMESPACE REVISION ... STATUS CHART APP VERSION
mwmysql mwsql 1 ... deployed mwMySQL-0.0.1 1.1.10

g. Confirm all pods running:
[admin@rocp ~]$ oc get pods -n mwsql
<confirm status column shows RUNNING>

h. View route:
[admin@rocp ~]$ oc get route --namespace mwsql
NAME HOST/PORT
mwMySQL ...

Note:
- Use -n or --namespace to limit
- Use -A or --all-namespaces to view all namespaces; without a namespace flag, oc get pods and oc get route show only the current project

Bring up the app in a browser and verify working.


Upgrade a Helm Release:
- Upgrade using the <release-name> used above
- Upgrade defaults to the latest version if not overwritten
- Always use the dry-run as conflicts or issues upgrading may occur

[admin@rocp ~]$ helm upgrade mwmysql mwMySQL --dry-run --values valuesoverride.yaml
<confirm output is desired, then re-run w/o the --dry-run again>


View Helm Release History:
- View releases by the <release-name> used above

[admin@rocp ~]$ helm history mwmysql
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 <date/time> superseded ...
2 <date/time> deployed ...


Revert to previous Helm release (number):
- Revert using a <release-name> and a <revision-number>
- Use after a history view so you have the revision number needed

Warning:
- Use warily - rollbacks can be very bad when the new version of the app is not compatible with the previous application version (e.g. db schema changes, app login changes, etc.)

[admin@rocp ~]$ helm rollback mwmysql 1
Rollback was a success! Happy Helming!


Helm Repositories:
- Use the helm repo command to set-up a Helm Chart repository
- Repo commands:
- - helm repo add: add a new repository
- - helm repo list: list repositories
- - helm repo update: update repository(s)
- - helm repo remove: remove a repository
- - helm search repo: searches all configured repos and lists all available charts (by default, the command only displays the latest version of each chart if the chart contains multiple, use --versions to override and list all versions)

Note:
- The helm repo command updates the local configuration and does not affect any running cluster resources
- The helm repo add and helm repo remove commands update the following config file: ~/.config/helm/repositories.yaml on the administrative workstation

Use the following syntax:
helm repo add <repo-name> <repo-url>

[admin@rocp ~]$ helm repo add mwmysql-charts https://mysqlcharts.mindwatering.net
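
Once added, charts in the repository are referenced as <repo-name>/<chart-name>; a hedged example (the chart name inside the repository is assumed):
[admin@rocp ~]$ helm search repo mwmysql-charts
[admin@rocp ~]$ helm install mwmysql mwmysql-charts/mwmysql --namespace mwsql --values valuesoverride.yaml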



OCP Authentication and Authorization:
Authentication Components:
User:
- Entities that interact with the API server; a User identifies a person and is assigned roles either directly or via group membership
- Unauthenticated users are assigned by the authentication layer the system:anonymous virtual user

Identity:
- Keeps a record of successful authentication attempts from a user and his/her identity provider
- Data about the source of authentication is stored on the identity resource

Service Account:
- Allows application communication with the API independent of regular user credentials
- Access is given to applications or other components without the need to borrow a user's credentials

Group:
- Represents a specific set of users
- Has Users as members of the group
- Authorization policies map groups to assign permissions
- OCP has system groups
- OCP sets up virtual groups provisioned automatically by the cluster
- Unauthenticated users are assigned by the authentication layer the system:unauthenticated virtual group

Role:
- Defines API operations that a user has permissions to perform on specific resource types
- Assigning roles grants permissions to users, groups, and service accounts
- K8s and OCP use role-based-access-control (RBAC) policies to determine user privileges

OCP API Authentication Methods Supported:
- OAuth access tokens
- X.509 client certificates

The OCP Authentication Operator
- OCP provides the Authentication operator, which runs an integrated OAuth server.
- Provides OAuth access tokens to users when they attempt to authenticate to the API. An identity provider must be configured and available to the OAuth server.
- Uses an identity provider to validate the identity of the requester, reconciles the local user with the identity provider, and creates the OAuth access token for the user.
- Automatically creates identity and user resources after a successful login.

Note:
- In native K8s, OAuth and OpenID Connect (OIDC) authentication is provided by an OAuth proxy and an identity provider (e.g. Keycloak)

Identity Providers:
- The OCP OAuth server can be configured to use one or multiple identity providers at the same time.
- An OAuth custom resource is updated with the identity provider(s)
- Includes the more common identity providers:
- - HTPasswd: Validates usernames and passwords against a secret that stores credentials that are generated by using the htpasswd command.
- - Keystone: Enables shared authentication with an OpenStack Keystone v3 server.
- - LDAP: Configures the LDAP identity provider to validate usernames and passwords against an LDAPv3 server via simple bind authentication.
- - GitHub: Configures a GitHub identity provider to validate usernames and passwords against GitHub or the GitHub Enterprise OAuth authentication server.
- - OpenID Connect: Integrates with an OpenID Connect identity provider by using an Authorization Code Flow.

Authenticating as a Cluster Administrator:
- OCP provides two ways to authenticate API requests with cluster administrator privileges:
- - Use the kubeconfig file which embeds an X.509 client certificate that never expires
- - Authenticate as the kubeadmin virtual user. Successful authentication grants an OAuth access token.

Steps:
- Configure an identity provider
- Create any additional users and/or groups w/in the identity provider
- Grant them different access levels by assigning roles to the users/groups

Authenticating with the X.509 Certificate
- During installation, the OpenShift installer creates a unique kubeconfig file in the auth directory.
- The kubeconfig file contains specific details and parameters for the CLI to connect a client to the correct API server, including an X.509 certificate.
- Add KUBECONFIG to the user start-up file (.bashrc, etc.) to make the K8s kubectl and OCP oc commands available

Export path and X.509 authentication using KUBECONFIG file:
[admin@rocp ~]$ export KUBECONFIG=/home/admin/auth/kubeconfig
[admin@rocp ~]$ oc get nodes
<no password prompt - nodes information is presented>

Alternately, you can pass the path as part of the command:
(But other than an exam question, why would you?)
[admin@rocp ~]$ oc --kubeconfig /home/admin/auth/kubeconfig get nodes
<no password prompt - nodes information is presented>

Authenticating with the kubeadmin Virtual User
- The kubeadmin virtual user is created at the end of the OCP installation
- The installer dynamically generates a unique kubeadmin password for the cluster
- The installer stores the kubeadmin secret in the kube-system namespace; the secret contains the hashed password for the kubeadmin user
- The kubeadmin user has cluster administrator privileges.
- The login path, username, and password for console access are printed near the end of the installation log

...
INFO Access the OpenShift web-console here:
https://api.ropc.mindwatering.net
INFO Login to the console with user: kubeadmin, password: abCD_EfgH_1234_9876_dcba

After configuring an identity provider, the local kubeadmin user can be deleted "for better security":
[admin@rocp ~]$ oc delete secret kubeadmin -n kube-system
<view confirmation>

Warning:
If you lose the KUBECONFIG file's X.509 certificate and you delete the kubeadmin user, there is no other way to administer your cluster when the identity provider has an outage. You will have 100% security in your security analysis, and 0% cluster management productivity.


HTPasswd Identity Provider:
- Uses the Linux htpasswd utility to create a temporary htpasswd file, and applies the file to an OCP/K8s secret
- Requires httpd-tools and the oc binaries installed


Create HTPasswd User Steps:
a. Prepare htpasswd file
b. Create secret in OCP
c. Create htpasswd identity provider custom resource (cr)

a. Create htpasswd file:
[admin@rocp ~]$ cd ~
[admin@rocp ~]$ htpasswd -c -B -b ocpuser1.htpasswd ocpuser1 SuperHumanPwd123

Notes:
-c = create the file (omit -c when adding users to an existing file; see the example below)
-B = bcrypt password hashing
-b = use the password from command line rather than prompting (like a normal user account)
ocpuser1.htpasswd = name of the file (for the -c)
ocpuser1 = login id of the user
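
To add another user to the same file later (a hedged example; the second user and password are illustrative), omit -c so the existing file is updated instead of recreated:
[admin@rocp ~]$ htpasswd -B -b ocpuser1.htpasswd ocpuser2 AnotherGoodPwd456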

b. Create the secret:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ oc create secret generic htpass-ocpuser1-secret --from-file htpasswd=ocpuser1.htpasswd -n openshift-config
secret/htpass-ocpuser1-secret created
htpass-ocpuser1-secret = name of the secret created, used in the mapping CR created next.

c. Create the CR file:
Note:
- Two methods to approach updating the identity provider, either:
- - Download the current OAuth config as YAML, edit and add the new HTPasswd section
-or-
- - If a new cluster, create a file with just what's needed for HTPasswd authentication

[admin@rocp ~]$ oc get oauth cluster -o yaml > ~/htpassocpuser1.yaml

Edit the downloaded yaml adding the new spec identityProviders section, or create a file with these contents.
[admin@rocp ~]$ vi htpassocpuser1.yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: mwlocal
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-ocpuser1-secret
<esc>:wq (to save)

Note:
name: mwlocal = the identity provider name, used as the prefix of the identities created in OCP (e.g. mwlocal:ocpuser1)
mappingMethod: claim = with the default claim value, a username cannot be claimed by more than one identity provider
fileData name: htpass-ocpuser1-secret = the secret just created above

If the yaml config was downloaded and edited, then do a replace:
[admin@rocp ~]$ oc replace -f ~/htpassocpuser1.yaml
- or -
If the yaml file is new, you can alternately do an apply:
[admin@rocp ~]$ oc apply -f ~/htpassocpuser1.yaml

The system will perform a redeployment of the openshift-authentication pods, watch:
[admin@rocp ~]$ watch oc get pods -n openshift-authentication

Try out the new login:
Browser --> api.ropc.mindwatering.net --> Instead of kube:admin (button), choose mwlocal (button), and login as the ocpuser1 user.


Update HTPasswd User Steps:
a. Update the htpasswd file
b. Update the secret in OCP
c. Watch the openshift-authentication pods redeployment

a. Update the htpasswd file:
[admin@rocp ~]$ cd ~
[admin@rocp ~]$ htpasswd -b ocpuser1.htpasswd ocpuser1 SuperHumanPwd1234

b. Update the secret in OCP:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ oc set data secret/htpass-ocpuser1-secret --from-file htpasswd=ocpuser1.htpasswd -n openshift-config

Note:
- After updating the secret in OCP, the openshift-authentication namespace pods are redeployed
- Monitor redeployment as desired/needed

c. Watch the openshift-authentication pods redeploy:
[admin@rocp ~]$ watch oc get pods -n openshift-authentication


Delete HTPasswd User Steps:
a. Delete user from htpasswd
b. Delete the password from the secret
c. Remove the user resource
d. Remove the identity resource

a. Delete a user HTPasswd file (credential):
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
[admin@rocp ~]$ htpasswd -D ocpuser1.htpasswd ocpuser1

b. Delete the password from the secret:
[admin@rocp ~]$ oc set data secret/htpass-ocpuser1-secret --from-file htpasswd=ocpuser1.htpasswd -n openshift-config

c. Remove the user resource:
[admin@rocp ~]$ oc delete user ocpuser1
user.user.openshift.io "ocpuser1" deleted

d. Remove the identity resource:
- Confirm the identity provider name/prefix:
[admin@rocp ~]$ oc get identities | grep ocpuser1
mwlocal:ocpuser1 mwlocal ocpuser1 ocpuser1 ...

- Remove ocpuser1
[admin@rocp ~]$ oc delete identity mwlocal:ocpuser1
identity.user.openshift.io "mwlocal:ocpuser1" deleted


Add a user to the cluster-admin role (cluster administration privileges):
[admin@rocp ~]$ oc adm policy add-cluster-role-to-user cluster-admin ocpuser1
clusterrole.rbac.authorization.k8s.io/cluster-admin added: "ocpuser1"

Note:
- When you execute the oc adm policy add-cluster-role-to-user cluster-admin new-admin command, a naming collision can occur with an existing cluster role binding object.
- In that case, the system creates a new object and appends -n to the name, where n is an incrementing number that starts at 0.

To view the new cluster role binding:
- Get all cluster-admin bindings:
[admin@rocp ~]$ oc get clusterrolebinding | grep ^cluster-admin

- Display the last binding for -n:
[admin@rocp ~]$ oc describe clusterrolebinding <cluster-admin-n>



Role-based Access Control (RBAC):
Overview:
- Technique for managing access to resources in K8s/OCP
- Determines whether a user can perform certain actions within the cluster or project
- Role types:
- - cluster
- - local (by project)
- Two-level hierarchy role types enables:
- - Reuse across multiple projects through the cluster roles
- - Enables customization inside individual projects through local roles
- - Authorization evaluation uses both the cluster role bindings and the local role bindings to allow or deny an action on a resource


Authorization Process:
The authorization process is managed by three RBAC Objects: rules, roles, and bindings.

RBAC Objects:
Rule
- Allowed actions for objects or groups of objects

Role
- Sets of rules. Users and groups can be associated with multiple roles

Binding
- Assignment of users or groups to a role
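
A minimal sketch of how the three objects fit together, using a local (namespaced) role; the role name, namespace, and user below are illustrative only:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-viewer
  namespace: mwwordpress
rules:
# one rule: allowed actions (verbs) on a group of objects (resources)
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-viewer-binding
  namespace: mwwordpress
# binding: assigns the role above to a user (could also be a group or service account)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-viewer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ocpuser1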


RBAC Scope
Red Hat OpenShift Container Platform (RHOCP) defines two groups of roles and bindings depending on the user's scope and responsibility: cluster roles and local roles.

RBAC Level Description
Cluster RBAC Roles and bindings that apply across all projects.
Local RBAC Roles and bindings that are scoped to a given project. Local role bindings can reference both cluster and local roles.


Managing RBAC with the CLI:
- Cluster administrators perform these commands
- Rules defined by an action and a resource
- Use the oc adm policy command to add and remove cluster roles and namespace roles
- - add a cluster role to a user, use the add-cluster-role-to-user subcommand
- - add a new user, use oc create user
- Query access with the who-can subcommand

Add ocpmgr1 as a cluster admin:
[admin@rocp ~]$ oc adm policy add-cluster-role-to-user cluster-admin ocpmgr1

To remove ocpmgr1 from being a cluster admin:
[admin@rocp ~]$ oc adm policy remove-cluster-role-from-user cluster-admin ocpmgr1

To query who can perform an <action> on a resource (for example, who can delete the ocpuser1 user):
[admin@rocp ~]$ oc adm policy who-can delete user ocpuser1


Out-of-the-box Default Roles:
cluster-admin
- Have superuser access to the cluster resources. These users can perform any action on the cluster, and have full control of all projects

cluster-status
- Have access to cluster status information.

cluster-reader
- Have access to view most of the objects, but cannot modify them

self-provisioner (cluster role)
- Can create their own projects (projectrequests resources)
- By default, the self-provisioners cluster role binding associates the self-provisioner cluster role with the system:authenticated:oauth group
- Users who authenticate through OAuth are automatically members of the system:authenticated:oauth group

admin (technically a cluster role, but limited by -n to a project namespace)
- Manage all project resources, including granting access to other users to access "this" project
- Uses the oc policy command to add and remove "this" project's namespace roles
- Gives access to project resources including quotas and limit ranges in "this" project
- Gives ability to create/deploy applications in "this" project

basic-user (technically a cluster role, but limited by -n to a project namespace)
- Have read access to "this" project

edit (technically a cluster role, but limited by -n to a project namespace)
- Gives a user sufficient access to act as a developer inside the project, but working under the access limits that a project admin (role) configured
- Can create, change, and delete common application resources on "this" project, such as services and deployments
- Cannot act on management resources such as limit ranges and quotas
- Cannot manage access permissions to "this" project

view (technically a cluster role, but limited by -n to a project namespace)
- Can view "this" project resources
- Cannot modify "this" project resources.

As project admin, give a user basic-user access to my current project:
- oc policy add-role-to-user <role-name> <username> -n <project-namespace>
[admin@rocp ~]$ oc policy add-role-to-user basic-user ocpuser1 -n mwwordpress

Note:
- The self-provisioner cluster role is NOT the self-provisioners cluster role binding


User Types:
- User object represents a user who is granted permissions by adding roles to that user, or that user's group via role bindings
- User must authenticate before they can access OpenShift Container Platform
- API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user
- After successful authentication, a policy determines what the user is authorized to do

Three Types:
Regular users
- Represents a Person with access to the platform
- Interactive users who are represented with the User object

System users
- Created automatically when the infrastructure is defined, mainly for the infrastructure to securely interact with the API
- Include:
- - A cluster administrator (with access to everything)
- - A per-node user
- - Users for routers and registries and various others
- - An anonymous system user is used (by default) for unauthenticated requests
- Have a system: prefix
- - Examples: system:admin, system:openshift-registry, and system:node:rocp.mindwatering.net

Service accounts
- System users that are associated with projects
- Consequently, they also have the system: prefix, along with a project namespace - system:serviceaccount:<project-namespace>:<serviceaccount>
- Typically used by workloads to invoke Kubernetes APIs
- Some created automatically during project creation
- Project administrators create additional service accounts to grant extra privileges to workloads
- By default, service accounts have no roles - grant roles to service accounts to enable workloads to use specific APIs
- Represented with the ServiceAccount object
- - Examples: system:serviceaccount:default:deployer and system:serviceaccount:mwit:builder
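
A short, hedged example of creating a service account and granting it a role so that its workload can call specific APIs (the names and namespace are illustrative):
[admin@rocp ~]$ oc create serviceaccount mwapp-sa -n mwwordpress
serviceaccount/mwapp-sa created

[admin@rocp ~]$ oc policy add-role-to-user view -z mwapp-sa -n mwwordpress
clusterrole.rbac.authorization.k8s.io/view added: "mwapp-sa"

Note:
- The -z flag refers to a service account in the given namespace, which avoids typing the full system:serviceaccount:mwwordpress:mwapp-sa name.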


Group Management
- Represents a set of users
- Cluster administrators use the oc adm groups command to add groups or to add users to groups

Add new cluster mwdevelopers group:
[admin@rocp ~]$ oc adm groups new mwdevelopers

Adds the ocpdevuser1 user to the mwdevelopers group:
[admin@rocp ~]$ oc adm groups add-users mwdevelopers ocpdevuser1

Get all cluster role bindings for users who can provision new project namespaces (the self-provisioner cluster role):
[admin@rocp ~]$ oc get clusterrolebinding -o wide | grep -E 'ROLE|self-provisioner'
NAME ROLE ... GROUPS ...
self-provisioners ClusterRole/self-provisioner ... system:authenticated:oauth


Setup Wordpress Project RBAC Example:
- As the cluster admin:
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful.

b. Create project
[admin@rocp ~]$ oc new-project mwwordpress
Now using project "mwwordpress" on server "https://api.ropc.mindwatering.net:6443".
...

c. Give the team leader, mwdevadmin, the admin role for the project:
[admin@rocp ~]$ oc policy add-role-to-user admin mwdevadmin
clusterrole.rbac.authorization.k8s.io/admin added: "mwdevadmin"

d. Create the developers group and add the developer:
[admin@rocp ~]$ oc adm groups new mwwpdevgroup
group.user.openshift.io/mwwpdevgroup created

[admin@rocp ~]$ oc adm groups add-users mwwpdevgroup ocpdevuser1
group.user.openshift.io/mwwpdevgroup added: "ocpdevuser1"

e. Create the testing group and add the testers:
[admin@rocp ~]$ oc adm groups new mwwptestgroup
group.user.openshift.io/mwwptestgroup created

[admin@rocp ~]$ oc adm groups add-users mwwptestgroup ocptestuser1
group.user.openshift.io/mwwptestgroup added: "ocptestuser1"


f: Review the groups:
[admin@rocp ~]$ oc get groups -n mwwordpress
NAME USERS
...
mwwpdevgroup ocpdevuser1
mwwptestgroup ocptestuser1
...

- As the project admin:
a. Login:
[admin@rocp ~]$ oc login -u mwdevadmin -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful.
...
Using project "mwwordpress".

b. Set edit privileges for the developers:
[admin@rocp ~]$ oc policy add-role-to-group edit mwwpdevgroup
clusterrole.rbac.authorization.k8s.io/edit added: "mwwpdevgroup"

c. Set view only/read privileges to the testers:
[admin@rocp ~]$ oc policy add-role-to-group view mwwptestgroup
clusterrole.rbac.authorization.k8s.io/view added: "mwwptestgroup"

d. Review the RBAC assignments:
[admin@rocp ~]$ oc get rolebindings -o wide -n mwwordpress | grep -v '^system:'
NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS
admin ClusterRole/admin 60s admin
admin-0 ClusterRole/admin 45s mwdevadmin
edit ClusterRole/edit 30s mwwpdevgroup
view ClusterRole/view 15s mwwptestgroup

- As developer, deploy an Apache HTTP Server:
[admin@rocp ~]$ oc login -u ocpdevuser1 -p <password>
Login successful.
...
Using project "mwwordpress".

[admin@rocp ~]$ oc new-app --name mwwordpress httpd:2.4
...
--> Creating resources ...
deployment.apps "mwwordpress" created
service "mwwordpress" created
--> Success
...

Note:
- When a person does not have access, the command fails:
"Error from server (Forbidden): ..."



Network Security:
Summary:
- Allow and protect network connections to applications inside an OCP cluster
- - Protect External Traffic with TLS
- - Protect Internal Traffic with TLS
- - - Configure Network Policies (internal between apps or between projects)
- Restrict network traffic between projects and their pods
- Configure and use service certificates

External Network Methods:
- service types: NodePort and LoadBalancer
- API types: Ingress and Route

Project App --> API -->
- Round Robin load balancing --> Service --> Internet
(or)
- Route --> Internet


Encrypting Routes
- Routes can be either encrypted or unencrypted
- - Unencrypted routes are the simplest to configure, because they require no key or certificates
- - Encrypted routes encrypt traffic to and from the pods.
- - Encrypted routes support several types of transport layer security (TLS) termination to serve certificates to the client
- - Encrypted routes specify the type of TLS termination to use for the route.

TLS Termination Types (OpenShift Platform Route Encryption):
- Edge
- Passthrough
- Re-encryption


Edge Termination Type:
- TLS termination occurs at the router, before the traffic is routed to the pods
- The router serves the TLS certificate to clients; the routers are configured with the TLS certificate and key
- If TLS is not set up for the route, OCP assigns its own (default ingress) certificate to the router for TLS termination
- Because TLS is terminated at the router, connections from the router to the internal network endpoints are not encrypted
- For better performance, routers send requests directly to the pods (the service's endpoints), based on the service configuration, rather than through the service network

Client --> HTTPS --> Edge route (router - tls.crt / tls.key encryption) --> HTTP --> Container/Pod (Application)

Edge Example:
[admin@rocp ~]$ oc create route edge --service api-frontend --hostname api.apps.mindwatering.net --key api.key --cert api.crt

Notes:
- If --key and --cert are omitted, the OCP ingress operator provides a certificate from the internal CA.
- - Describing the route will not reference the certificate; to view the certificate provided: oc get secrets/router-ca -n openshift-ingress-operator -o yaml


Passthrough Termination Type:
- Encrypted traffic is sent straight to the destination pod without TLS termination from the router
- Application is responsible for serving certificates for the traffic
- Passthrough is a common method for supporting mutual authentication between the application and a client that accesses it

Client --> HTTPS --> Pass-through route (router) --> HTTPS --> Container/Pod --> Application (tls.crt / tls.key encryption)
Mounts: /usr/local/etc/ssl/certs from tls-certs (ro)

volumeMounts:
- name: tls-certs
readOnly: true
mountPath: /usr/local/etc/ssl/certs
...


Re-encryption Termination Type:
- Re-encryption is a variation on edge termination, whereby the router terminates TLS with a certificate, and then re-encrypts its connection to the endpoint, which likely has a different certificate
- - External certificate uses public certificate: my-app.mindwatering.net
- - Internal connection uses an internal certificate: my-app.project-namespace.svc.cluster.local
- The full path of the connection is encrypted, even over the internal network
- The router uses health checks to determine the authenticity of the host
- The internal certificate is created by the service-ca controller, which generates and signs service certificates for internal traffic
- - Creates a secret populated with a signed certificate and key
- - Deployment mounts the secret as a volume to use the signed certificate/key

Client --> HTTPS --> Edge route (router - tls.crt / tls.key encryption) --> HTTPS (.local certificate) --> Container/Pod --> Application (tls.crt / tls.key .local encryption - service-ca certificate and key)
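
Re-encryption Example (a hedged sketch; the route name, hostname, and file names are illustrative):
[admin@rocp ~]$ oc create route reencrypt my-app --service my-app --hostname my-app.mindwatering.net --cert api.crt --key api.key --dest-ca-cert internal-ca.crt

Notes:
- The --dest-ca-cert file is the CA that signed the internal (service) certificate, so the router can validate the pod's certificate when it re-encrypts.
- When the service uses a service-ca generated serving certificate, --dest-ca-cert can typically be omitted, because the router already trusts the internal service CA.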


Expose an unencrypted Apache HTTP app:
[admin@rocp ~]$ oc expose svc mwwordpress-http --hostname mwwordpress-http.apps.mindwatering.net
route.route.openshift.io/mwwordpress-http exposed

[admin@rocp ~]$ oc get routes
NAME HOST/PORT PATH SERVICES PORT ...
mwwordpress-http mwwordpress-http.apps.mindwatering.net mwwordpress-http 8080 ...

browser --> mwwordpress-http.apps.mindwatering.net (http)


Expose an encrypted Apache HTTP app:
(Reuses the previous unsecure service created above)
[admin@rocp ~]$ oc create route edge mwwordpress-https --service mwwordpress-http --hostname mwwordpress-https.apps.mindwatering.net
route.route.openshift.io/mwwordpress-https created

browser --> mwwordpress-https.apps.mindwatering.net (https)

Notes:
- The certificate is via the OCP internal CA, so it will not be trusted unless the CA parent certificate is imported to the workstation/browser
- The traffic is encrypted at the route edge, but the service port is still not encrypted

Delete the encrypted route:
[admin@rocp ~]$ oc delete route mwwordpress-https
route.route.openshift.io "mwwordpress-https" deleted


Expose Passthrough Encrypted Apache HTTP app:
a. Create the certificate and key files:
- Create the key file:
[admin@rocp ~]$ openssl genrsa -out mwwordpressinternal.key 4096

- Create the certificate signing request (CSR):
[admin@rocp ~]$ openssl req -new -key mwwordpressinternal.key -out mwwordpressinternal.csr -subj "/C=US/ST=North Carolina/L=Wake Forest/O=Mindwatering/CN=mwwordpress-https.apps.mindwatering.net"

- Create a passphrase file and populate it with the CA key's password:
[admin@rocp ~]$ vi mwwordpressinternalpassphrase.txt
abcd123abcd123abcd123...
<esc>:wq (to save)

- Create the signed certificate file:
[admin@rocp ~]$ openssl x509 -req -in mwwordpressinternal.csr -passin file:mwwordpressinternalpassphrase.txt -CA mwinternal-CA.pem -CAkey mwinternal-CA.key -CAcreateserial -out mwwordpressinternal.crt -days 1825 -sha256 -extfile mwwordpress.mindwatering.net
Certificate request self-signature ok
subject=C = US, ST = North Carolina, L = Wake Forest, O = Mindwatering, CN = mwwordpress-https.apps.mindwatering.net

b. Create the K8s secret that stores the crt and key files:
[admin@rocp ~]$ oc create secret tls mwwordpressinternal-certs --cert mwwordpressinternal.crt --key mwwordpressinternal.key
secret/mwwordpressinternal-certs created

c. Update the mwwordpress deployment for the new passthrough ingress:
- Export the current deployment to a yaml file:
[admin@rocp ~]$ oc get deployment mwwordpress -o yaml > mwwordpress.yaml

- Edit mwwordpress.yaml and update the volume mounts section and adjust the service port as needed:
[admin@rocp ~]$ vi mwwordpress.yaml
apiVersion: apps/v1
kind: Deployment
...
volumeMounts:
- name: mwwordpressinternal-certs
readOnly: true
mountPath: /usr/local/etc/ssl/certs
...
volumes:
- name: mwwordpressinternal-certs
secret:
secretName: mwwordpressinternal-certs
---
apiVersion: v1
kind: Service
...
ports:
- name: mwwordpress-https
port: 8443
protocol: TCP
targetPort: 8443
...
<esc>:wq (to save)

d. Apply (redeploy) the deployment:
[admin@rocp ~]$ oc apply -f mwwordpress.yaml

Note:
- If creating a new deployment, instead do:
[admin@rocp ~]$ oc create -f mwwordpress.yaml

e. Verify the mwwordpressinternal-certs secret is mounted:
[admin@rocp ~]$ oc set volumes deployment/mwwordpress
deployment/mwwordpress
secret/mwwordpressinternal-certs as mwwordpressinternal-certs
mounted at /usr/local/etc/ssl/certs

f. Create the passthrough route:
[admin@rocp ~]$ oc create route passthrough mwwordpress-https --service mwwordpress-https --port 8443 --hostname mwwordpress-https.apps.mindwatering.net
route.route.openshift.io/mwwordpress-https created



Configure Network Policies:
Review:
- Restricts network traffic between projects and pods and other projects and pods in the cluster
- Restricts by configuring isolation policies for individual pods
- Control network traffic between pods by using labels instead of IP addresses
- Create logical zones in the SDN that map to your organization network zones
- - With logical zones, the location of running pods becomes irrelevant, because with network policies, you can separate traffic regardless of where it originates
- To manage network communication between pods in two namespaces:
- - Assign a label to the namespace that needs access to another namespace
- - Create a network policy that selects these labels
- - Reuse a network policy to select labels on individual pods to create ingress or egress rules
- Use selectors under spec to assign which destination pods are affected by the policy, and selectors under spec.ingress to assign which source pods are allowed.
- Do not require administrative privileges; thereby, giving developers more control over the applications within their project(s)
- Are K8s native resources, and managed with the oc command
- Do not block traffic between pods that use host networking when those pods run on the same node

Important:
- If a pod does not match any network policies, then OpenShift does NOT restrict traffic to that pod.
- When creating an environment to allow network traffic only explicitly, you must include a deny-all policy.
- If a deny-all policy exists and the apps have an OCP ingress and OCP monitoring, then OCP ingress and monitoring will be blocked, as well.

a. Assign network labels to the namespace:
Notes:
- Assumes namespaces specify a VLAN network, and internal departments are segmented/isolated on VLANS.
- Example:
mwnet02 namespace = mwnet02 network VLAN
mwnet03 namespace = mwnet03 network VLAN

[admin@rocp ~]$ oc label namespace mwnet02 network=mwnet02
[admin@rocp ~]$ oc label namespace mwnet03 network=mwnet03
...

b. Allow two internal department's apps to communicate with each other:
- mwitapp on mwnet02
- mwcsrapp on mwnet03

Notes:
- podSelector: if empty, the policy applies to all pods in the namespace; if specified, only the matching deployment pods use this policy
- ingress: Defines a list of ingress traffic rules to filter and use this policy
- from: Limits traffic source; and the source is NOT limited to only "this" project/namespace
- ports: Limits traffic destination ports of the selected pods

Important:
- This is similar in concept to an internal NAT and the old DMZ idea, where the internal network has access to the DMZ, but the DMZ has no access back (except for replies to traffic/requests coming from the internal network)
- To allow traffic both ways, the other project/namespace owner, or developer, needs to create the companion policy
- However, network policies manage security between namespaces that typically sit on the same underlying network segment. So technically they share the same VLAN or network in reality, but network policies still isolate their traffic.

c1. Allow pods labeled role=mwinternal-apps in the #3 Mindwatering tenant (mwnet03) to reach the mwitapp deployment pods in the #2 tenant (mwnet02), and only on port 9443:
[admin@rocp ~]$ vi mwnet02-mwnet03-netpolicy_mwitapp.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: mwnet02-policy
namespace: mwnet02
spec:
podSelector:
matchLabels:
deployment: mwitapp
ingress:
- from:
- namespaceSelector:
matchLabels:
network: mwnet03
podSelector:
matchLabels:
role: mwinternal-apps
ports:
- port: 9443
protocol: TCP

c2. Allow all traffic from the Mindwatering #3 network (mwnet03) into the pods of the #2 network (mwnet02):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: mwnet02-policy
namespace: mwnet02
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
network: mwnet03

c3. Allow traffic into the mwdev project's apps from pods labeled app=mobile in namespaces on the mwintdev network:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: mwdev-policy
namespace: mwdev
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
network: mwintdev
podSelector:
matchLabels:
app: mobile

c4a. Deny all traffic between all pods:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: default-deny
spec:
podSelector: {}

c4b. After the deny-all traffic policy above, the policies below re-allow traffic from the OCP router (ingress) and from OCP monitoring:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-ingress
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
policy-group.network.openshift.io/ingress: ""
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-monitoring
spec:
podSelector: {}
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: monitoring
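
Egress example (a hedged sketch): egress rules use the same structure; the policy below restricts the pods in its namespace to egress within that same namespace only. Cluster DNS and any other required destinations would also need explicit egress rules in a real deny-by-default setup.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-same-namespace-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    # an empty podSelector with no namespaceSelector = pods in this same namespace
    - podSelector: {}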


Zero-Trust Environments:
- Assume every interaction begins in an untrusted state where users can only access files and objects specifically allowed
- All communication must be encrypted (TLS)
- Client applications must verify authenticity of servers
- A trusted CA (typically internal) signs the certificates that are used to encrypt traffic
- - Apps recognize other apps authenticity because they both share the same parent signed certificate (CA)

Important:
- Native K8s does not encrypt internal traffic by default
- OCP encrypts network traffic between the nodes and the control plane themselves
- OCP does not automatically encrypt all traffic between applications


OCP Certificate Authority (CA):
- OCP uses the service-ca controller to generate and sign service certificates for internal traffic
- The service-ca controller creates a secret that it populates with a signed certificate and key; the deployment mounts the secret as a volume (see service-ca further above).


Review: Deployment TLS Set-up Steps:
a. Create the secret
b. Mount the secret in the deployment

a. Create an app secret:
- login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful ...

- switch to the project/namespace:
[admin@rocp ~]$ oc project app-helloworld
Now using project "app-helloworld" on server ...

- create the secret:
[admin@rocp ~]$ oc annotate service app-helloworld service.beta.openshift.io/serving-cert-secret-name=app-helloworld-secret

- validate created okay:
[admin@rocp ~]$ oc describe service app-helloworld
<review and verify the following lines>
...
Annotations: service.beta.openshift.io/serving-cert-secret-name: app-helloworld-secret
service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1234567899
...

[admin@rocp ~]$ oc describe secret app-helloworld-secret
<review and verify the following lines>
Name: app-helloworld-secret
Namespace: app-helloworld
...output omitted...
Type: kubernetes.io/tls

Data
====
tls.key: 1234 bytes
tls.crt: 2345 bytes
...

b. Mount in the app-helloworld deployment:
- Export the current deployment to a yaml file:
[admin@rocp ~]$ oc get deployment app-helloworld -o yaml > app-helloworld.yaml

- Edit app-helloworld.yaml and update the volume mounts section:
[admin@rocp ~]$ vi app-helloworld.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: app-helloworld
annotations:
...
creationTimestamp: null
labels:
app: app-helloworld-pod
name: app-helloworld
spec:
template:
spec:
containers:
- name: helloworld
ports:
- containerPort: 9443
volumeMounts:
- name: app-helloworld-volume
mountPath: /etc/pki/nginx/
volumes:
- name: app-helloworld-volume
secret:
defaultMode: 420
secretName: app-helloworld-secret
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: private/server.key

[admin@rocp ~]$ oc patch deployment/app-helloworld --patch-file app-helloworld.yaml
- or -
[admin@rocp ~]$ oc apply -f app-helloworld.yaml
<view and optionally watch the redeploy>

[admin@rocp ~]$ oc get deployment/app-helloworld
<view and confirm ready>

c. Verify using a web browser or openssl client:
- get service external IP:
[admin@rocp ~]$ oc get svc/app-helloworld
<view external IP and verify against DNS with ping, etc.>

[admin@rocp ~]$ oc exec no-ca-bundle -- openssl s_client -connect app-helloworld.app-helloworld.svc:443
<verify the certificate chain is presented; output similar to the following>
depth=1 CN = openshift-service-serving-signer@1234567899
CONNECTED(00000004)
---
Certificate chain
0 s:CN = server.network-svccerts.svc
i:CN = openshift-service-serving-signer@1234567899
1 s:CN = openshift-service-serving-signer@1234567899
i:CN = openshift-service-serving-signer@1234567899
---
...output omitted...
verify error:num=19:self signed certificate in certificate chain
DONE

d. Swap the previous configuration by reconfiguring the app deployment (see step b above) to use a configuration map. Create the ca-bundle configmap:
[admin@rocp ~]$ oc create configmap ca-bundle
configmap/ca-bundle created

[admin@rocp ~]$ oc annotate configmap ca-bundle service.beta.openshift.io/inject-cabundle=true
configmap/ca-bundle annotated

[admin@rocp ~]$ oc get configmap ca-bundle -o yaml
<confirm that the configmap contains the CERTIFICATE>
...
data:
service-ca.crt: |
-----BEGIN CERTIFICATE-----
...

e. Swap out the end of the spec volume section, and add the configmap to the deployment:
[admin@rocp ~]$ vi app-helloworld.yaml
...
spec:
...
volumes:
- name: trusted-ca
  configMap:
    defaultMode: 420
    name: ca-bundle
    items:
    - key: service-ca.crt
      path: tls-ca-bundle.pem
<esc>:wq (to save)

f. Apply again the new way of delivering the certs:
[admin@rocp ~]$ oc apply -f app-helloworld.yaml
<view and optionally watch the redeploy>

g. Again, verify using a web browser or openssl client:
[admin@rocp ~]$ oc exec no-ca-bundle -- openssl s_client -connect app-helloworld.app-helloworld.svc:443
<verify the certificate chain is presented; output similar to the following>
depth=1 CN = openshift-service-serving-signer@1234567899
CONNECTED(00000004)
---
Certificate chain
0 s:CN = server.network-svccerts.svc
i:CN = openshift-service-serving-signer@1234567899
1 s:CN = openshift-service-serving-signer@1234567899
i:CN = openshift-service-serving-signer@1234567899
---
...output omitted...
verify error:num=19:self signed certificate in certificate chain
DONE


CA Certificates - Client Service Application Configuration:
- For a client service application to verify the validity of a certificate, the application needs the CA bundle that signed that certificate.
- Use the service-ca controller to inject the CA bundle
- - Apply the service.beta.openshift.io/inject-cabundle=true annotation to an object
- - Apply this annotation to configuration maps, API services, custom resource definitions (CRD), mutating webhooks, and validating webhooks
- Certificates are valid for 26 months by default, and are automatically rotated after 13 months
- - Restarting a pod's service automatically injects the newly rotated CA bundle
- - After rotation is a 13-month grace period where the original CA certificate is still valid while awaiting a service restart
- - Each pod that is configured to trust the original CA certificate must be restarted in some way


Alternatives to Service Certificates:
- service mesh to encrypt service-to-service communication (e.g. Red Hat OpenShift Service Mesh)
- certmanager operator to delegate the certificate signing process to a trusted external service, and renew those certificates


Applying to a Configuration Map:
- Apply the above annotation to a configuration map to inject the CA bundle into the data: { service-ca.crt } field
- The service-ca controller replaces all data in the selected configuration map with the CA bundle
- - Use a dedicated configuration map to prevent overwriting existing data

Example:
[admin@rocp ~]$ oc annotate configmap ca-bundle service.beta.openshift.io/inject-cabundle=true
configmap/ca-bundle annotated

Applying to an API Service:
Apply the above annotation to an API service to inject the CA bundle into the spec.caBundle field.

Applying to a Custom Resource Definition (CRD):
Apply the above annotation to a CRD to inject the CA bundle into the spec.conversion.webhook.clientConfig.caBundle field.

Applying to a Mutating or Validating Webhook:
Apply the above annotation to a mutating or validating webhook to inject the CA bundle into the clientConfig.caBundle field.
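
Hedged examples of applying the same annotation to the other object types (the <name> values are placeholders):
[admin@rocp ~]$ oc annotate apiservice <name> service.beta.openshift.io/inject-cabundle=true
[admin@rocp ~]$ oc annotate crd <name> service.beta.openshift.io/inject-cabundle=true
[admin@rocp ~]$ oc annotate mutatingwebhookconfiguration <name> service.beta.openshift.io/inject-cabundle=true
[admin@rocp ~]$ oc annotate validatingwebhookconfiguration <name> service.beta.openshift.io/inject-cabundle=true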


Manual Key Rotation:
- Process immediately invalidates the former service CA certificate
- Immediately restart all pods after performing a manual rollover

[admin@rocp ~]$ oc delete secret mycertificate-secret
secret/mycertificate-secret deleted

[admin@rocp ~]$ oc delete secret/mysigning-key -n openshift-service-ca
secret/mysigning-key deleted

[admin@rocp ~]$ oc rollout restart deployment myapp -n myappnamespace
<confirm deployment pods restart with watch as desired>


Adding Liveness and Readiness Probes for Passthrough TLS:
Notes:
- Reusing the app-helloworld example from earlier
- Adds a secret containing the TLS certificate and key (from the ~/certs subdirectory) to the deployment

a. Create the secret:
[admin@rocp ~]$ oc create secret tls app-helloworld-cert --cert certs/helloworld.pem --key certs/helloworld.key
secret/app-helloworld-cert created

b. Update the app to use this secret and add the additional secret volume mount:
[admin@rocp ~]$ vi app-helloworld.yaml
...
spec:
template:
spec:
containers:
- name: helloworld
ports:
- containerPort: 9443
volumeMounts:
- name: app-helloworld-cert
mountPath: /etc/pki/nginx/helloworld
- name: trusted-ca
mountPath: /etc/pki/ca-trust/extracted/pem
volumes:
- name: app-helloworld-cert
secret:
defaultMode: 420
secretName: app-helloworld-cert
- name: trusted-ca
  configMap:
    defaultMode: 420
    name: trusted-ca
    items:
    - key: service-ca.crt
      path: tls-ca-bundle.pem
<esc>:wq (to save)

c. Update the app again with the liveness and readiness probe additions:
...
spec:
template:
spec:
containers:
- name: helloworld
ports:
- containerPort: 9443
readinessProbe:
httpGet:
port: 9443
path: /readyz
scheme: HTTPS
livenessProbe:
httpGet:
port: 9443
path: /livez
scheme: HTTPS
env:
- name: TLS_ENABLED
value: "true"
- name: PAGE_URL
value: "https://<page_url_target>:9443"
volumeMounts:
- name: app-helloworld-cert
mountPath: /etc/pki/nginx/helloworld
- name: trusted-ca
mountPath: /etc/pki/ca-trust/extracted/pem
volumes:
- name: app-helloworld-cert
secret:
defaultMode: 420
secretName: app-helloworld-cert
- name: trusted-ca
  configMap:
    defaultMode: 420
    name: trusted-ca
    items:
    - key: service-ca.crt
      path: tls-ca-bundle.pem
<esc>:wq (to save)

d. Apply to update/rolling redeployment of the deployment:
[admin@rocp ~]$ oc apply -f app-helloworld.yaml
<view and optionally watch the redeploy>

e. Expose the passthrough route for the helloworld service outside the cluster:
[admin@rocp ~]$ oc create route passthrough app-helloworld-https --service app-helloworld --port 9443 --hostname helloworld.apps.ocp4.mindwatering.net
route.route.openshift.io/app-helloworld-https created

f. Again, verify using a web browser or openssl client:
[admin@rocp ~]$ oc exec no-ca-bundle -- openssl s_client -connect helloworld.apps.ocp4.mindwatering.net:443
<verify the certificate chain is presented; output similar to the following>
depth=1 CN = openshift-service-serving-signer@1234567899
CONNECTED(00000004)
---
Certificate chain
0 s:CN = server.network-svccerts.svc
i:CN = openshift-service-serving-signer@1234567899
1 s:CN = openshift-service-serving-signer@1234567899
i:CN = openshift-service-serving-signer@1234567899
---
...output omitted...
verify error:num=19:self signed certificate in certificate chain
DONE


Non HTTP/HTTPS TCP SNI Applications:
- Like HTTPS before SNI, non-HTTP protocols typically require a separate IP address or port for each service, because SNI-style host-based multiplexing is not available

Review HTTP/HTTPS TLS:
- Expose Services with Ingresses and Routes when HTTP/HTTPS
- - Cluster IP (available within the cluster to pods)
- - Cluster IP external IP feature (outside the cluster)
- - Ingress and Routes - External cluster access (NodePort and LoadBalancer)

MetalLB Component:
- LoadBalancer service provider for clusters that do not run on a cloud provider (on-prem / bare-metal clusters, or cluster nodes running on VMs)
- Use ping and traceroute commands for testing reachability of external cluster IPs and internal cluster IPs
- MetalLB modes:
- - Layer 2
- - Border Gateway Protocol (BGP)
- Installed with Operator Lifecycle Manager:
- - Install the operator
- - Configure using custom resource definitions (typically includes IP address range)
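
A hedged sketch of the custom resources that a typical layer 2 MetalLB configuration uses after the operator is installed (the instance, pool name, and address range are illustrative):
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: mw-pool
  namespace: metallb-system
spec:
  # addresses handed out to Services of type LoadBalancer
  addresses:
  - 192.168.22.30-192.168.22.50
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: mw-l2adv
  namespace: metallb-system
spec:
  # announce the pool above on the local layer 2 segment (ARP/NDP)
  ipAddressPools:
  - mw-pool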

Example Generic LB YAML web server:
[admin@rocp ~]$ cat ./websvr/websvr-lb.yaml
apiVersion: v1
kind: Service
metadata:
name: websvr-service
namespace: websvr
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
name: websvr
type: LoadBalancer

Example Generic LB imperative web server:
[admin@rocp ~]$ kubectl expose deployment websvr --port=8080 --target-port=8080 --name=websvr-service --type=LoadBalancer
-or-
[admin@rocp ~]$ oc expose deployment/websvr --port=8080 --target-port=8080 --name=websvr-service --type=LoadBalancer

To view run one of the following commands:
[admin@rocp ~]$ kubectl describe services websvr-service
<view IPs, Port, TargetPort, NodePort, and Endpoints>
-or-
[admin@rocp ~]$ kubectl get service -n websvr
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
websvr-service LoadBalancer 172.99.11.77 192.168.22.33 8080:31234/TCP
-or-
[admin@rocp ~]$ oc get service websvr-service -o jsonpath="{.status.loadBalancer.ingress}"
[{"ip":"192.168.22.33"}]

.spec.externalTrafficPolicy:
- A routing policy that preserves the client IP can be specified for applications that need state maintained
- Changes the default "Cluster" routing policy to "Local"
- Cluster obscures the client source IP and may cause a second hop to another node, but typically has good overall load-spreading
- Local preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading

[admin@rocp ~]$ cat ./websvr/websvr-lb-local.yaml
apiVersion: v1
kind: Service
metadata:
name: websvr-service
namespace: websvr
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
name: websvr
externalTrafficPolicy: Local
type: LoadBalancer

Delete a Service with:
[admin@rocp ~]$ oc delete service/websvr-service
service "websvr-service" deleted

Note:
- When a service is deleted, the IP(s) it was using are available to other services.


Multus CNI (Network) Secondary Networks:
- Uses the K8s CNI network plugin
- Allows application pods to use custom internal networks rather than the default ones for performance or isolation reasons (or both the default and an additional CNI network)
- Allows application pods to be exposed externally using a secondary network (overlay)


Network Attachment Custom Resource Types:
- Host device: Attaches a network interface to a single pod.
- Bridge: Uses an existing bridge interface on the node, or configures a new bridge interface. The pods that are attached to this network can communicate with each other through the bridge, and to any other networks that are attached to the bridge.
- IPVLAN: Creates an IPVLAN-based network that is attached to a network interface.
- MACVLAN: Creates a MACVLAN-based network that is attached to a network interface.


Bridges Notes:
- Network interfaces that can forward packets between different network interfaces that are attached to the bridge
- Virtualization environments often use bridges to connect the network interfaces of virtual machines to the network.

VLAN Notes:
- IPVLAN and MACVLAN are Linux network drivers that are designed for container environments.
- Container environments often use these network drivers to connect pods to the network.
- Although bridge interfaces, IPVLAN, and MACVLAN have similar purposes, they have different characteristics:
- - Including different usage of MAC addresses, filtering capabilities, and other features
- - For example, use IPVLAN instead of MACVLAN in networks with a limit on the number of MAC addresses, because IPVLAN uses fewer MAC addresses. However, IPVLAN shares the host's MAC address(es), which is incompatible with DHCP setups where each IP is assigned to a unique MAC.
- The DHCP-CNI plugin CNIConfig uses a bridged "dhcp-shim" so that the dynamic IP is allocated from the network DHCP server.


Configuring Secondary Networks:
- Make available the network on cluster nodes
- Use operators to customize node network configuration
- - Kubernetes NMState network operator, or
- - SR-IOV (Single Root I/O Virtualization) network operator
- - - The SR-IOV network operator configures SR-IOV network devices for improved bandwidth and latency on certain platforms and devices
- With the operators, you define custom resources to describe the specified network configuration, and the operator applies the configuration

Example Multus CNI NetworkAttachmentDefinition:
[admin@rocp ~]$ cat ./websvr/multis-vlan-networkattachmentdefinition.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: macvlan
namespace: websvr
spec:
config: '{
"cniVersion": "0.3.0",
"type": "macvlan",
"master": "eth0",
"mode": "bridge",
"ipam": {
"type": "kube-ovn",
"server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
"provider": "macvlan.websvr"
}
}'

Example NetworkAttachmentDefinition using host-device:
[admin@rocp ~]$ cat ./websvr/hostdev-websvr-networkattachmentdefinition.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: hostdev.websvr
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "hostdev.websvr",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "dhcp"
}
}

Note:
- The metadata name and the config name should match.

Example Network adding the NetworkAttachmentDefinition (NAD) with the spec:
[admin@rocp ~]$ cat ./websvr/hostdev-websvr-network-spec-nad.yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
...
additionalNetworks:
- name: websvr
namespace: websvr
rawCNIConfig: |-
{
"cniVersion": "0.3.1",
"name": "websvr",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "dhcp"
}
}
type: Raw

Note:
- To assign static addresses, change the ipam type from dhcp to static, like:
[admin@rocp ~]$ cat ./websvr/hostdev-websvr-network-spec-nad2.yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
...
additionalNetworks:
- name: websvr
namespace: websvr
rawCNIConfig: |-
{
"cniVersion": "0.3.1",
"name": "websvr",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "static",
"addresses": [
{"address": "192.168.123.10/24"}
]
}
}
type: Raw

- or -

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: websvr
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "websvr",
"type": "host-device",
"device": "ens4",
"ipam": {
"type": "static",
"addresses": [
{"address": "192.168.123.100/24"}
]
}
}

- To configure a Whereabouts bridge:
[admin@rocp ~]$ cat ./websvr/hostdev-websvr-network-spec-nad-whereabout-bridge.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: whereabouts-macvlan
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "whereabouts-nad-shim",
"type": "bridge",
"device": "ens4",
"ipam": {
"type": "whereabouts"
}
}

- To configure a Whereabouts passthrough:
[admin@rocp ~]$ cat ./websvr/hostdev-websvr-network-spec-nad-whereabout-passthrough.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
name: macvlan-whereabouts-pass
namespace: httpd
spec:
config: |-
{
"cniVersion": "0.3.1",
"name": "macvlan-whereabouts",
"type": "macvlan",
"master": "enp9s0",
"mode": "passthru",
"ipam": {
"type": "whereabouts",
"range": "192.168.123.0/24",
"range_start": "192.168.123.201",
"range_end": "192.168.123.250",
"gateway": "192.168.123.1",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}

Get all IP Reservations on the whereabouts:
[admin@rocp ~]$ oc get overlappingrangeipreservations.whereabouts.cni.cncf.io -A

Add a metadata annotation to the deployment/CNI config and patch deployment:
[admin@rocp ~]$ oc patch deployment websvr -n websvr --type merge --patch '{"spec": {"template": {"metadata": {"annotations": {"k8s.v1.cni.cncf.io/networks": "websvr"}}}}}'


Attaching Secondary Networks to Pods:
- To configure secondary networks, create either:
- - A NetworkAttachmentDefinition resource, or
- - Update the configuration of the cluster network operator to add a secondary network
- Network attachment definitions can create and manage virtual network devices, including virtual bridges
- Network attachment definitions can also perform additional network configuration
- - Virtual network devices attach to existing networks that are configured and managed outside OCP, or
- - Other network attachment definitions use existing network interfaces on the cluster nodes
- Network attachment resources are namespaced, and available only to deployment pods using that namespace

Deployment/pods using a network on the cluster nodes:
[admin@rocp ~]$ cat ./websvr/websvr-lb-cni-clusternetwork.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: websvr
namespace: websvr
spec:
selector:
matchLabels:
app: websvr
name: websvr
template:
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: websvr
labels:
app: websvr
name: websvr
spec:
...

Deployment using both the default K8s network and an additional one:
[admin@rocp ~]$ cat ./websvr/websvr-lb-cni-clusterandadditionalnetwork.yaml
apiVersion: apps/v1
...
spec:
...
template:
metadata:
annotations:
k8s.v1.cni.cncf.io/networks: websvr
ovn.kubernetes.io/ip_address: 10.12.13.14
ovn.kubernetes.io/mac_address: 00:00:00:34:6A:B6
macvlan.websvr.kubernetes.io/ip_address: 172.15.0.115
macvlan.websvr.kubernetes.io/mac_address: 00:00:00:15:5A:B5
labels:
app: websvr
name: websvr
spec:
...



K8s Role-Based Access Control (RBAC) Cluster and Project Quotas:
Overview:
- Use RBAC to limit resources cluster-wide, or by project namespaces
- Types:
- - Resource Limits: Limit the resources that a workload consumes via upper bound limits
- - Resource Requests: the resources that the workload itself asks for up front; insufficient available resources can prevent deployment creation/rollover
- Resource Limits:
- - limits.cpu
- - limits.memory
- - requests.cpu
- - requests.memory
- Resource quotas generate the kube_resourcequota metric. In the OCP UI, review for trend analysis.
- Cluster-wide (not project) quotas nest their limits in a quota subkey under spec, instead of directly under spec (e.g. spec --> quota --> hard ...)

Example Project/namespace-based resource quota:
[admin@rocp ~]$ cat ./websvr/websvr-resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: websvr-memory
namespace: websvr
spec:
hard:
limits.memory: 4Gi
requests.memory: 2Gi
scopes: []
scopeSelector: {}

Example resource quota for the project/namespace:
[admin@rocp ~]$ cat ./websvr/websvr-projectnamespacepods.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: websvr-scaling
namespace: websvr
spec:
hard:
count/pods: "20"

[admin@rocp ~]$ oc apply -f ./websvr/websvr-projectnamespacepods.yaml
- or -
[admin@rocp ~]$ oc create resourcequota websvr --hard=count/pods=20

View core (legacy group) resources by passing an empty --api-group="" parameter:
[admin@rocp ~]$ oc api-resources --api-group="" --namespaced=true
<view resources for the project namespace>

View quota of pods:
[admin@rocp ~]$ oc get quota
NAME AGE REQUEST LIMIT
websvr 999m01s count/pods: 2/20
- or -
[admin@rocp ~]$ oc get quota websvr -o yaml
<view long format of hard and used pods>

[admin@rocp ~]$ oc scale deployment websvr --replicas=21
<view deployment websvr scaled msg>

[admin@rocp ~]$ oc get pod,deployment
<confirm that 1 of the pods READY column shows 0/1>

Example Error Message Creating Deployment:
[admin@rocp ~]$ oc create deployment --image=nginx websvr -n websvr
error: failed to create deployment: deployments.apps "websvr" is forbidden: exceeded quota: example, requested: count/deployments.apps=1, used: count/deployments.apps=20, limited: count/deployments.apps=20

Example Error Message Scaling/Expanding Deployment or Performing Rollover:
[admin@rocp ~]$ oc get event --sort-by .metadata.creationTimestamp
LAST SEEN TYPE REASON OBJECT MESSAGE
...
10s Normal ScalingReplicaSet deployment/websvr Scaled up replica set websvr-5abcd9e876 to 1
9s Warning FailedCreate replicaset/websvr-5abcd9e876 Error creating: pods "websvr-5abcd9e876-zyxw1" is forbidden: exceeded quota: websvr, requested: count/pods=1, used: count/pods=20, limited: count/pods=20
5s Warning FailedCreate replicaset/websvr-5abcd9e876 (combined from similar events): Error creating: pods "websvr-5abcd9e876-a2bc4" is forbidden: exceeded quota: websvr, requested: count/pods=1, used: count/pods=20, limited: count/pods=20
...
- or -
Use UI:
OCP --> login --> Administration --> ResourceQuotas

Notes:
- The OCP perspective project pages also show developers/admins the quotas that apply to a specific project
- On the ResourceQuotas page, objects of the ClusterResourceQuota type are viewable; resources of the ResourceQuota type can be created there, but objects of the ClusterResourceQuota type cannot.

Example cluster resource quota across projects/namespaces:
[admin@rocp ~]$ cat ./websvr/websvr-cluster-resourcequota.yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
name: websvr
spec:
quota:
hard:
limits.cpu:
selector:
annotations: {}
labels:
matchLabels:
kubernetes.io/metadata.name: websvr

Notes:
- The selector selects projects/namespaces by their labels or annotations; matching kubernetes.io/metadata.name: websvr limits the quota to only the websvr namespace.
- To apply the quota across multiple projects, match a label or annotation that those namespaces share, instead of the namespace name.
- Users who are project/namespace limited, may not be able to view cluster-wide resource groups.
- The --all-namespaces argument to oc commands such as the get and describe commands does not work with AppliedClusterResourceQuota resources. These resources are listed only when you select a namespace.

Example cluster resource quota, with status showing per-namespace usage:
[admin@rocp ~]$ cat ./websvr/websvr-cluster-nmresourcequota.yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
name: webapps
spec:
quota:
hard:
requests.cpu: "10"
selector:
annotations: null
labels:
matchLabels:
group: dev
status:
namespaces:
- namespace: websvr
status:
hard:
requests.cpu: "10"
used:
requests.cpu: 500m
- namespace: appsvr
status:
hard:
requests.cpu: "5"
used:
requests.cpu: 250m
...
total:
hard:
requests.cpu: "10"
used:
requests.cpu: 2250m

Perform imperative update quota via command line:
[admin@rocp ~]$ oc create namespace namespace-quota-test
namespace/namespace-quota-test created

[admin@rocp ~]$ oc create quota memory --hard=requests.memory=4Gi,limits.memory=16Gi -n namespace-quota-test
resourcequota/memory created


Using LimitRange Resource for Pods and Projects/Namespaces:
Overview:
- Configures default and maximum compute and memory resources for pods per project/namespace
- Does NOT affect currently running pods; recreate or roll over the pods to apply a limit
- Limits include rollouts/rollovers where old/existing pods are running while new replacement pods are being deployed/applied - if combined replica sets exceed quota/limits, rollout cannot complete.
- Do not create multiple conflicting limit ranges in a namespace/project as it will be unclear which one is applied

Options:
- default: specify default limits for deployment pod workloads
- defaultRequest: specify default request limits at creation
- max: specify max cpu or memory
- min: specify min cpu or memory (like VM reservation - give at least)

Notes:
- max value must be higher or equal to the default value
- default value must be higher or equal to the defaultRequest value
- defaultRequest must be higher than or equal to the min value
- Use oc describe pod <podname> to confirm the quotas and limits applied after oc create.

Example for the webapps namespace:
[admin@rocp ~]$ cat ./websvr/websvr-cluster-nmrangelimits.yaml
apiVersion: v1
kind: LimitRange
metadata:
name: webapps-limit-range
namespace: webapps
spec:
limits:
- default:
cpu: 500m
memory: 512Mi
defaultRequest:
cpu: 250m
memory: 256Mi
max:
cpu: "2"
memory: 4Gi
min:
cpu: 125m
memory: 128Mi
type: Container
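
To confirm the values after the limit range is created (a quick, hedged check):
[admin@rocp ~]$ oc describe limitrange webapps-limit-range -n webapps
<view the Type, Resource, Min, Max, Default Request, and Default Limit columns>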


Example Error:
[admin@rocp ~]$ oc get event --sort-by .metadata.creationTimestamp
LAST SEEN TYPE REASON OBJECT MESSAGE
...
5m43s Warning FailedCreate replicaset/webapp-1a2bcd3ef4 Error creating: pods "example-1a2bcd3ef4-a5z98" is forbidden: maximum cpu usage per Container is 1, but limit is 1200m
...

Creating Limit Range via UI:
- OCP login --> Administration (left menu category/twistie) --> LimitRanges (left menu) --> click Create LimitRange (button)
- Type or paste the YAML or JSON definition into the editor.
- Click Create (button)


Creating a Project Template:
- Use oc adm create-bootstrap-project-template to print out a template that you can export to create your own project template.
- Until modified, this printed template has the same behavior as the default project template in OCP: it grants the admin cluster role over the new project/namespace to the requesting user.
- Users with the self-provisioner cluster role can create projects. By default, all authenticated users have the role assigned to them.
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful ...

b. Export the current project template:
[admin@rocp ~]$ oc adm create-bootstrap-project-template -o yaml > ~/newprojecttemplate/newtemplate.yaml
[admin@rocp ~]$ cat ~/newprojecttemplate/newtemplate.yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
creationTimestamp: null
name: project-request
objects:
- apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
creationTimestamp: null
name: ${PROJECT_NAME}
spec: {}
status: {}
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: admin
namespace: ${PROJECT_NAME}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${PROJECT_ADMIN_USER}
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER

Notes:
- The Project object has ${VARIABLES} set by the parameters (at the bottom of the YAML file); this specific template can handle these 5 parameters.
- Customize the resources and test until you get the desired template behavior.
- Remove elements/keys that do not apply to resource creation: creationTimestamp, resourceVersion, uid, and status.
- Remove extraneous annotations and other resources that should not be automatically included in the new project template.
- If you create the same template name, you are replacing the old version of the template.
- To revert, change the spec back to the spec: {} starting point.

Alternately, instead of patching the project configuration later (see the sketch after step d), the projects.config.openshift.io cluster resource can be edited in place:
[admin@rocp ~]$ oc edit projects.config.openshift.io cluster
...
<update spec: {} so that spec.projectRequestTemplate.name references the new template>
...

c. Update the template:
Customize the exported template: rename it (the default project template name is project-request; change it as desired) and add any resources that every new project should receive - this example adds a LimitRange and a ResourceQuota. The template is made active later, by updating the projects.config.openshift.io/cluster resource to reference it.
[admin@rocp ~]$ cat ~/newprojecttemplate/newtemplate.yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
creationTimestamp: null
name: newproject-request
objects:
- apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
creationTimestamp: null
name: ${PROJECT_NAME}
spec: {}
status: {}
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: admin
namespace: ${PROJECT_NAME}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${PROJECT_ADMIN_USER}
- apiVersion: v1
kind: LimitRange
metadata:
name: memory
namespace: ${PROJECT_NAME}
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
max:
memory: 1Gi
min:
memory: 128Mi
type: Container
- apiVersion: v1
kind: ResourceQuota
metadata:
name: memory
namespace: ${PROJECT_NAME}
spec:
hard:
limits.memory: 16Gi
requests.memory: 4Gi
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER

d. Add the template to OCP:
[admin@rocp ~]$ oc create -f ~/newprojecttemplate/newtemplate.yaml -n openshift-config
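
- Point the cluster project configuration at the new template; a hedged sketch using oc patch (the template name must match the metadata name used above):
[admin@rocp ~]$ oc patch projects.config.openshift.io cluster --type merge --patch '{"spec":{"projectRequestTemplate":{"name":"newproject-request"}}}'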

Watch:
[admin@rocp ~]$ watch oc get pod -n openshift-apiserver
<confirm running>

To change the subject(s) of who can self-provision, to members of the webprovisioners group, edit the yaml within OCP:
[admin@rocp ~]$ oc edit clusterrolebinding/self-provisioners
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
...
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: self-provisioner
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: webprovisioners

<save the change>


Removing Cluster Role Bindings:
e. If you remove all the subjects (e.g. '{"subjects":null}'), the cluster role binding itself can be removed with the following command (shown here for the self-provisioners binding):
[admin@rocp ~]$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth

However:
- The remove-cluster-role-from-group command deletes the cluster role binding when the last subject is removed.
- Use extra caution and avoid using the command against protected role bindings.
- The command removes the permission, but only until the API server restarts and restores the default binding (autoupdate) - it is NOT permanent.
- Removing the permission permanently after deleting the binding takes more than just the "book" command above removing the subject(s); see below.


Permanently (even after API restart) disable self-provisioning in the cluster:
[admin@rocp ~]$ oc annotate clusterrolebinding/self-provisioners --overwrite rbac.authorization.kubernetes.io/autoupdate=false
clusterrolebinding.rbac.authorization.k8s.io/self-provisioners annotated

[admin@rocp ~]$ oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'
clusterrolebinding.rbac.authorization.k8s.io/self-provisioners patched


K8s Operators and the Operator Lifecycle Manager
Operator Overview:
- Deploy workloads to Kubernetes with resources such as deployments, replica sets, stateful sets, daemon sets, jobs, and cron jobs.
- Created resources create a workload that runs software that is packaged as a container image, in different modalities.
- The operator pattern is a way to implement reusable software to manage such complex workloads.
- By using operators, cluster administrators create CRs that describe a complex workload, and the operator creates and manages the workload.

Examples:
- Jobs execute a one-off task
- Cron jobs execute tasks periodically
- Other resources create persistent workloads: deployments, stateful sets, or daemon sets differ on how the workload is distributed in a cluster.
- Complex workloads involve different (sub) component workloads: such as database server, back-end service, or front-end service.
- Maintenance task workloads: data back-up, updating/restoring workloads


Operator Pattern:
An operator typically defines custom resources (CRs).
- The operator CRs contain the needed information to deploy and manage the workload.
- The operator watches the cluster for instances of the CRs, and then creates the Kubernetes resources to deploy the custom workload.

For example, an operator that deploys database servers:
a. defines a database resource where you can specify the database name, sizing requirements, and other parameters
b. when a database resource is created, the database operator creates a stateful set and a persistent volume that provide the database that is described in the database resource.
c. If the database resource describes a backup schedule and target, then the operator creates a cron job that backs up the database to the target according to the schedule.
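
A purely hypothetical sketch of what such a database custom resource might look like (the API group, kind, and fields are invented for illustration; a real database operator defines its own schema):
apiVersion: databases.example.com/v1
kind: Database
metadata:
  name: mw-inventory-db
  namespace: mwwordpress
spec:
  engineVersion: "15"       # desired database server version (illustrative)
  storage: 20Gi             # drives the size of the persistent volume the operator creates
  backup:
    schedule: "0 2 * * *"   # the operator would create a cron job from this schedule
    target: s3://mw-backups/inventory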


Deploying Operators:
K8s software implements the operator pattern in different ways:
- Cluster operators:
- - Cluster operators provide the platform services of OpenShift, such as the web console and the OAuth server.

- Add-on operators:
- - OpenShift includes the Operator Lifecycle Manager (OLM).
- - The OLM helps users to install and update operators in a cluster.
- - Operators that the OLM manages are also known as add-on operators, in contrast with cluster operators that implement platform services.

- Other operators:
- - Software providers can create software that follows the operator pattern, and then distribute the software as manifests, Helm charts, or any other software distribution mechanism.

- Cluster Operators:
- - The Cluster Version Operator (CVO) installs and updates cluster operators as part of the OpenShift installation and update processes.
- - The CVO provides cluster operator status information as resources of the ClusterOperator type.
- - Inspect the cluster operator resources to examine cluster health.

To view the ClusterOperators:
a. By oc command:
[admin@rocp ~]$ oc get clusteroperator
NAME VERSION AVAILABLE PROGRESSING DEGRADED ... MESSAGE
authentication 4.14.0 True False False ...
baremetal 4.14.0 True False False ...
cloud-controller-manager 4.14.0 True False False ...
...

b. By UI:
- OCP login --> Administration (left menu category/twistie) --> Cluster Settings (left menu) --> click ClusterOperators


The Operator Lifecycle Manager (OLM) and the OperatorHub:
- The OLM follows the operator pattern, and so the OLM provides CRs to manage operators with the Kubernetes API.
- Administrators can use the OLM to install, update, and remove operators.
- - Administrators use the OLM to install, update, and remove operators independently from cluster updates.
- - Administrators can switch an operator to any desired update channel available
- Interact with the OLM via the web console.
- - The web console also displays available operators and provides a wizard to install operators.
- Also install operators by using the Subscription CR and other CRs.
- The OLM uses operator catalogs to find available operators to install.
- - Operator catalogs are container images that provide information about available operators, such as descriptions and available versions.
- The OLM creates a resource of the PackageManifest type for each available operator.
- Operators that are installed with the OLM have a different lifecycle from cluster operators.
- The Cluster Version Operator (CVO) installs and updates cluster operators in lockstep with the cluster.

OpenShift includes several default operator catalogs:
- Red Hat:
Red Hat packages, ships, and supports operators in this catalog.

- Certified:
Independent software vendors support operators in this catalog.

- Community:
Operators without official support.

- Marketplace:
Commercial operators that you can buy from Red Hat Marketplace.

Cluster contains two workloads for each operator:
- The operator workload managed by the OLM
- The workloads that are associated with the custom resources (CRs), and which the operator manages

To automate manual K8s tasks, implement an operator that fits the task into the operator pattern.
SDKs provide components and frameworks for developing operators:
- Operator SDK:
- - Contains tools to develop operators with the Go programming language, and Ansible.
- - Also contains tools to package Helm charts as operators.

- Java Operator SDK:
- - Contains tools to develop operators with the Java programming language.
- - Has a Quarkus extension to develop operators with the Quarkus framework.

Definitions Review:
- Operator Pattern: Way to implement reusable software to manage complex workloads
- Cluster Operator: Component that provides cluster platform services
- Cluster Version Operator (CVO): Component that installs and updates the cluster operators
- Operator Lifecycle Manager (OLM): Component that manages add-on operators (non cluster operators)
- Add-on Operator: Component that the OLM installs


Managing/Using Operators:
- Custom resources are the most common way to interact with operators.
- Create custom resources by using the custom resource tabs on the Installed Operators page:
- - Select the tab to correspond to the custom resource type to create, and then click Create (button).
- Custom resources (CRs) use the same creation page as other Kubernetes resources.
- - Choose either the YAML (tab) view or the Form view to configure a new resource.
- - - In the YAML view, you use the YAML editor to compose the custom resource. The editor provides a starting template that you can customize.
- - - - The YAML view also displays documentation about the custom resource schema.
- - - - The oc explain command provides the same documentation.
- - - In the Form view, a set of fields are presented for the resource.
- - - - Edit the fields individually. Fields may provide a drop-down list with the available values.
- - - - Fields might provide documentation/help text and further configuration assistance.
- - - - May not contain all fields possible for the custom resource's customization
- - When complete, OCP creates a resource from the values in the form.


OLM Operator Resources:
OLM uses the following resource types:
- Catalog source:
- - Each catalog source resource references an operator repository. Periodically, the OLM examines the catalog sources in the cluster and retrieves information about the operators in each source.

- Package manifest:
- - The OLM creates a package manifest for each available operator. The package manifest contains the required information to install an operator, such as the available channels.

- Operator group:
- - Operator groups define how the OLM presents operators across namespaces.

- Subscription:
- - Cluster administrators create subscriptions to install operators.

- Operator:
- - The OLM creates operator resources to store information about installed operators.

- Install plan:
- - The OLM creates install plan resources as part of the installation and update process.
- - When requiring approvals, administrators must approve install plans.

- Cluster Service Version (CSV):
- - Each version of an operator has a corresponding CSV.
- - The CSV contains the information that the OLM requires to install the operator.

When installing an operator, an administrator must create:
- The Subscription
- The Operator Group.
The OLM generates all other resources automatically.
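To see these resource types on a live cluster, a minimal sketch (substitute the namespace of an installed operator for the placeholder):
[admin@rocp ~]$ oc get catalogsource,packagemanifest -n openshift-marketplace
<view the catalogs and the operators they advertise>

[admin@rocp ~]$ oc get operatorgroup,subscription,installplan,csv -n <operator-namespace>
<view the OLM resources that belong to an installed operator>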


Troubleshooting Operators:
When the OLM fails to install or update operators, or operators stop working:
- First, identify which operator workload failed (see the sketch after this list).
- Examine the status and conditions of the CSV, subscription, and install plan resources.
- - The Operator Deployments field in the Operator details page shows operator workloads.
- - Operators might create further workloads, including workloads that follow the definitions that you provide in custom resources.
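A minimal troubleshooting sketch, assuming the operator lives in <operator-namespace>; the resource names differ per operator:
[admin@rocp ~]$ oc get csv,subscription,installplan -n <operator-namespace>
<check for Failed or Pending phases>

[admin@rocp ~]$ oc describe csv <csv-name> -n <operator-namespace>
<review the status conditions and events>

[admin@rocp ~]$ oc get events -n <operator-namespace> --sort-by=.lastTimestamp
<review recent events for install or upgrade errors>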


Managing Operators via the UI:

View available operators (to install) in the OperatorHub:
- OCP login --> Administrator (left menu category/twistie) --> Operators (category twistie) --> OperatorHub (left menu)

View installed operators:
- OCP login --> Administrator (left menu category/twistie) --> Operators (category twistie) --> Installed Operators (left menu)

Install an Operator in OperatorHub:
a. Selecting an Operator to install:
- The OperatorHub page has the following filters on the left of the page and informational chiclets (icon selections) on the right for selection
- - With no heading, filter by types of operators: All Items, Developer Tools, Monitoring, Networking, Security, Storage, Other
- - Source (catalog groupings)
- - Provider (vendor groupings)
- Has a Filter by keyword search bar at the top

Notes:
- Before installing an operator, review its full documentation and the amount of post-installation configuration that it requires.
- The installation mode and installed namespace options are related. Learn the supported namespace and installation mode options before starting installation.
- Do not confuse namespaces in the installation wizard:
- - The operator installation mode determines which namespaces the operator monitors for custom resources (e.g. all). This mode is a distinctly separate option from the installed namespace option, which determines the operator workload namespace.

b. Open a chiclet (icon)
- Review/select the Channel source (e.g. Stable)
- Review/select the Version desired from the options available for installation.
- Review the Capability level check marks
- Review the Source and Provider.
- When done, click Install (button) to start the Install Operator wizard.

c. In the Install Operator wizard:
- Update channel (heading):
- - Select update channel (e.g. stable) and version if not done already.

- Installation mode (heading):
- - All namespaces on the cluster (default)
- - - Typically suitable for most operators.
- - - Default mode configures the operator to monitor all namespaces for custom resources (CRs).
- - - Not visible if the operator wants a specific operator namespace for itself (which is common)
- - Select Namespace
- - - OLM installs to the selected namespace
- - - Some operators install to their own specific namespace (for example, the openshift-operators namespace). If the namespace does not yet exist, a note states that it will be created automatically.
- - - Some operators suggest creating a new namespace of the administrator's choosing.

All Namespaces Example:
- An operator that deploys database servers defines a custom resource that describes a database server.
- When using the All namespaces on the cluster (default) installation mode, users can create those custom resources in their namespaces. Then, the operator deploys database servers in the same namespaces, along with other user workloads.
- Cluster administrators can combine this mode with self-service features and other namespace-based features, such as role-based access control and network policies, to control user usage of operators.

- Update approval:
- - By default, operators upgrade automatically when new versions are available.
- - Choose Manual updates to prevent automatic updates.

- Other possible prompts on the installation page(s):
- - For an operator that includes monitoring in its definition, the wizard displays a further option to enable the monitoring.
- - WARNING: Adding monitoring from non-Red Hat operators is not supported.

- Click Install (button)

The web console creates subscription and operator group resources according to the selected options in the wizard. After the installation starts, the web console displays progress information.

After completion, click View Operator to display the Operator details page.
To get there again, choose Installed Operators (page) and locate the installed operator in the list.


Check an installed Operator via Web Console UI:
- The Installed Operators page has the installed operators listed with Status (e.g. Install Succeeded, and an "Upgrade available" link, or "Up to date" text displayed)
- Click an installed operator to view its Operator Details page:
- - Details: displays detailed information about the CSV.
- - YAML: displays the CSV details in YAML format
- - Subscription: View or change the installation options, such as the update channel and update approval. View links to the install plans of the operator. For Manual updates, they are performed here.
- - Events: Events related to the operator
- - Tabs for CRs:
- - - For each CR the operator defines, a tab is added that lists all the resources of that CR type
- - All Instances: Aggregates all resources of types that the operator defines

Use the web console to Review and Approve an upgrade plan to perform an operator upgrade:
- Installed Operators (page) --> click Upgrade available (link) --> click Preview InstallPlan --> review the install plan, click Approve (button) to update the operator.


Ways to configure Operator upgrades/updates:
- Automatic: With automatic updates, the OLM updates an operator as soon as the configured channel has a later version of the operator.
- Manual: With manual updates, the OLM updates an operator when the configured channel has a later version of the operator, and an administrator approves the update.
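The approval mode is stored in the subscription's installPlanApproval field; a minimal sketch (operator name and namespace are placeholders) to switch an installed operator to manual approvals from the command line:
[admin@rocp ~]$ oc patch subscription <operator-name> -n <operator-namespace> --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'
subscription.operators.coreos.com/<operator-name> patched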


Uninstall an Operator via Installed Operators (page):
Note:
- Depending on the operator various manual operations may be needed besides uninstalling the operator itself. Review operator documentation for the actual steps.

Example:
a. Uninstall the operator:
- Installed Operators (page) --> open the operator to delete, Actions (dropdown) --> Uninstall Operator (selection)

b. Delete the namespace:
- Home (left menu) --> Projects (view/page) --> In the Name filter, type an unique keyword to locate the namespace --> Click namespace/project to open --> Click Delete (button)

c. Manual resource clean-up:
- Operator removal may require manual cleanup of resources. See the operator's documentation for the resources that require manual cleanup and the steps required.


Performing an operator upgrade by API command:
a. Get the install plan and current upgrade status:
- The oc get sub command:
[admin@rocp ~]$ oc get sub -n openshift-operators <operator-name> -o yaml
<view very long page of output>

Notes:
- currentCSV: displays latest available version of operator in the channel
- installPlanRef: displays the install plan reference name, version, and uid.
- installedCSV: key displays the current version of the operator installed

Important:
- The installPlanRef name is used to retrieve the install plan that controls how the install and upgrades are applied.
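A minimal sketch to pull only the installed and latest available CSVs out of the subscription status (same fields noted above), instead of reading the full YAML:
[admin@rocp ~]$ oc get sub -n openshift-operators <operator-name> -o jsonpath='{.status.installedCSV}{" -> "}{.status.currentCSV}{"\n"}'
<installed version> -> <latest version available in the channel>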

b. Get the installPlan:
- Assuming, the installPlanRef name is install-12abc, use the oc get installplan command:
[admin@rocp ~]$ oc get installplan -n openshift-operators install-12abc -o yaml

Notes:
- approval: displays approval type needed (e.g. Manual by setting spec during a patch command)
- approved: displays if a change upgrade is approved (e.g. false, change to true during a patch command)
- clusterServiceVersionNames: displays the updated target version

c. Perform the installation:
To upgrade by API command, modify the install plan resource and approve:
[admin@rocp ~]$ oc patch installplan install-12abc --type merge --patch '{"spec":{"approved":true}}' -n openshift-operators
installplan.operators.coreos.com/install-12abc patched

View available Operators:
[admin@rocp ~]$ oc get catalogsource -n openshift-marketplace
NAME DISPLAY TYPE ... AGE
do280-catalog-cs do280 Operator Catalog Cs grpc ... 1d2h

View available PackageManifests for operators:
[admin@rocp ~]$ oc get packagemanifests
NAME CATALOG AGE
lvms-operator do280 Operator Catalog Cs 1d2h
kubevirt-hyperconverged do280 Operator Catalog Cs 1d2h
file-integrity-operator do280 Operator Catalog Cs 1d2h
compliance-operator do280 Operator Catalog Cs 1d2h
metallb-operator do280 Operator Catalog Cs 1d2h

Use oc describe on the package manifest to get its catalog source, confirm the namespace, evaluate the available channels and CSVs for upgrades, check the supported install modes, and review the default subscription approval (e.g. Automatic or Manual).
[admin@rocp ~]$ oc describe packagemanifest lvms-operator -n openshift-marketplace
Name: lvms-operator
...
Spec:
Status:
Catalog Source: do280-catalog-cs
Catalog Source Display Name: do280 Operator Catalog Cs
Catalog Source Namespace: openshift-marketplace
Catalog Source Publisher:
Channels:
Current CSV: lvms-operator.v4.14.1
Current CSV Desc:
Annotations:
...

Get the custom resource definitions (CRDs) owned by the MetalLB Operator:
[admin@rocp ~]$ oc get csv metallb-operator.v4.14.0-202401151553 -o jsonpath="{.spec.customresourcedefinitions.owned[*].name}{'\n'}"
addresspools.metallb.io addresspools.metallb.io bfdprofiles.metallb.io bgpadvertisements.metallb.io bgppeers.metallb.io bgppeers.metallb.io communities.metallb.io ipaddresspools.metallb.io l2advertisements.metallb.io metallbs.metallb.io

Hint:
- Use oc describe or oc explain again to view details on each CR definition, for example:
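A minimal sketch, assuming the MetalLB operator is installed so that its CRDs exist in the cluster:
[admin@rocp ~]$ oc explain metallbs.spec
<view the documented fields of the MetalLB custom resource spec>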

Example creating an Operator's components individually:
- Create a namespace for an Operator if manually required before Operator installation:
[admin@rocp ~]$ cat ~/operator-demo/operator-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
  name: openshift-operator-demo

[admin@rocp ~]$ oc create -f ~/operator-demo/operator-namespace.yaml
namespace/openshift-operator-demo created

- Create an operator group for an Operator if manually required before Operator installation:
[admin@rocp ~]$ cat ~/operator-demo/operator-group.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operator-demo
  namespace: openshift-operator-demo
spec:
  targetNamespaces:
  - openshift-operator-demo

[admin@rocp ~]$ oc create -f ~/operator-demo/operator-group.yaml
operatorgroup.operators.coreos.com/openshift-operator-demo created

- Create a subscription for the operator:
[admin@rocp ~]$ cat ~/operator-demo/operator-subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-operator-demo
  namespace: openshift-operator-demo
spec:
  channel: "stable"
  installPlanApproval: Manual
  name: openshift-operator-demo
  source: do280-catalog-cs
  sourceNamespace: openshift-marketplace

[admin@rocp ~]$ oc create -f ~/operator-demo/operator-subscription.yaml
subscription.operators.coreos.com/openshift-operator-demo created

- Review Operator resource as configured:
[admin@rocp ~]$ oc describe operator openshift-operator-demo
<review output>

[admin@rocp ~]$ oc get installplan -n openshift-operator-demo install-1abc23 -o jsonpath='{.spec}{"\n"}'
{"approval":"Manual","approved":false,"clusterServiceVersionNames":["openshift-operator-demo.v1.3.3","openshift-operator-demo.v1.3.3"],"generation":1}

- Approve the install plan to perform the installation:
[admin@rocp ~]$ oc patch installplan install-1abc23 --type merge -p '{"spec":{"approved":true}}' -n openshift-operator-demo
installplan.operators.coreos.com/install-1abc23 patched
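- After the approval, a minimal check (namespace from the example above) confirms that the operator finished installing:
[admin@rocp ~]$ oc get csv -n openshift-operator-demo
<confirm the PHASE column shows Succeeded>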


Uninstall an operator via command line:
Note:
- Requires the cluster-admin role
a. Verify the operator and its namespace, and display its version installed:
[admin@rocp ~]$ oc get subscription.operators.coreos.com <operator-name> -n <operator-namespace> -o yaml | grep currentCSV
currentCSV: <operator-name>.v1.23.0

b. Delete the operator subscription:
[admin@rocp ~]$ oc delete subscription.operators.coreos.com <operator-name> -n <operator-namespace>
subscription.operators.coreos.com "<operator-name>" deleted

c. Delete the cluster service version from above:
[admin@rocp ~]$ oc delete clusterserviceversion <operatorname>.v1.23.0 -n <operator-namespace>
clusterserviceversion.operators.coreos.com "operatorname.v1.23.0" deleted
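A minimal follow-up check (same placeholders as above) to confirm that no OLM resources for the operator remain in the namespace:
[admin@rocp ~]$ oc get subscription,csv,installplan -n <operator-namespace>
No resources found in <operator-namespace> namespace.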



Security Context Constraints:
Security Context Constraints (SCCs):
- A security mechanism that limits the access that a pod running in OCP has to the host environment
- Uses a service account K8s object, within a Project, to apply a SCC to a deployment pod
- - A service account represents the identity of an application that runs in a pod.
- - If the pod definition does not specify a service account, then the pod uses the default service account

Notes:
- OpenShift grants no rights to the default service account used for typical business workloads.
- Do not grant additional permissions to the default service account, as it grants those additional permissions to all pods using default in the project.

SCCs include constraining the following host resources:
- Running privileged containers
- Requesting extra capabilities for a container
- Using host directories as volumes
- Changing the SELinux context of a container
- Changing the user ID


OCP default provided SCCs:
- anyuid
- hostaccess
- hostmount-anyuid
- hostnetwork
- hostnetwork-v2
- lvms-topolvm-node
- lvms-vgmanager
- machine-api-termination-handler
- node-exporter
- nonroot
- nonroot-v2
- privileged
- restricted
- restricted-v2 (most OCP pods use v2)
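To list these SCCs and their key settings on a cluster:
[admin@rocp ~]$ oc get scc
<view the SCC names with their UID strategy, SELinux strategy, and allowed capabilities/volumes>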

To get information on a SCC:
[admin@rocp ~]$ oc describe scc nonroot
Name: nonroot
...

Some containers may require relaxed SCCs to access:
- file systems
- sockets
- SELinux context

Hint:
- Use the scc-subject-review subcommand to list all the security context constraints that can overcome the limitations that hinder the container:
[admin@rocp ~]$ oc get deployment deployment-name -o yaml | oc adm policy scc-subject-review -f -
<view output>

- or you can use a deployment's saved yaml file:
[admin@rocp ~]$ oc adm policy scc-subject-review -f ~/deployment-name.yaml


Review the SCCs applied:
[admin@rocp ~]$ oc describe pod console-1ab1cdefg1-12a34 -n openshift-console | grep scc
openshift.io/scc: restricted-v2


Gotcha with restricted-v2:
Container images that are downloaded from public container registries may fail to run when using the restricted-v2 SCC.
Example:
- Container image that requires running as a specific user ID can fail because the restricted-v2 SCC runs the container by using a random user ID.
- Container image that listens on port 80 or on port 443 can fail because the random user ID that the restricted-v2 SCC uses cannot start a service that listens on a privileged network port (port numbers that are less than 1024).

UID Workaround with anyuid:
- Defines the run as user strategy to be RunAsAny, which means that the pod can run as any available user ID in the container
- With anyuid, containers that require a specific user can run with that UID.

Access Beyond the Container with privileged:
- Used when containers need to access the runtime environment of the host.
- For example, the S2I builder class of privileged containers requires access beyond the limits of its own containers.
- Can pose security risks, because they can use any resources on an OpenShift node.
- Use SCCs to enable access for privileged containers by creating service accounts with privileged access.


Changing Pods to Run with a Different SCC:
- Create a service account that is bound to a pod
- Use the oc create serviceaccount command to create the service account
- - Use the -n option if the service account must be created in a different namespace from the current one
- Associate the created service account with a SCC using the oc adm policy command
- - Identify a service account by using the -z option
- - Use the -n option if the service account exists in a different namespace from the current one

Notes:
- Only cluster administrators can assign an SCC to a service account or remove an SCC from a service account
- Allowing pods to run with a less restrictive SCC can make your cluster less secure. Use with caution

Create a new service account and add the account to a SCC:
[admin@rocp ~]$ oc create serviceaccount service-account-name

[admin@rocp ~]$ oc adm policy add-scc-to-user SCC -z service-account-name

Switch a SCC for an existing deployment:
[admin@rocp ~]$ oc set serviceaccount deployment/deployment-name service-account-name
deployment.apps/deployment-name service-account-name updated

Edit pod yaml file to add the service account to apply later:
[admin@rocp ~]$ vi ~/deployment-name/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: deployment-name
  namespace: mwapps
spec:
  restartPolicy: Never
  serviceAccountName: deployment-name-sa
  containers:
  - name: deployment-name
    ...
<esc>:wq (to save)
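A minimal sketch of applying the edited definition and confirming which SCC admitted the pod (file path and names follow the example above):
[admin@rocp ~]$ oc apply -f ~/deployment-name/pod.yaml
pod/deployment-name created

[admin@rocp ~]$ oc describe pod deployment-name -n mwapps | grep scc
<confirm the expected SCC, e.g. anyuid, is applied>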



Securing K8s APIs:
Overview:
- User or an application can query and modify the cluster state
- OCP restricts access to the OCP cluster by default to protect from malicious interactions; therefore, admins must grant access to the different Kubernetes APIs.
- - Role-based access control (RBAC) authorization is preconfigured in OpenShift.
- - An application requires explicit RBAC authorization to access restricted Kubernetes APIs.


K8s API Use Case:
- Regular business applications successfully use the default service account without requiring access to the Kubernetes APIs
- Infrastructure applications require access to monitor or to modify the cluster resources

Infrastructure applications use cases:
- Monitoring Applications:
- - Applications in this category need read access to watch cluster resources or to verify cluster health.
- - For example, a service such as Red Hat Advanced Cluster Security (ACS) needs read access to scan your cluster containers for vulnerabilities.

- Controllers:
- - Controllers applications constantly watch and try to reach the intended state of a resource.
- - For example, GitOps tools, such as ArgoCD, have controllers that watch cluster resources that are stored in a repository, and update the cluster to react to changes in that repository.
- - Use a controller to update a Kubernetes resource by reacting to changes as an alternative to using GitOps; however, do not use both a controller and GitOps for such changes because of the conflicts created.

- Operators:
- - Operators automate creating, configuring, and managing instances of Kubernetes-native applications.
- - Operators need permissions for configuration and maintenance tasks.
- - For example, a database operator might create a deployment when it detects a CR that defines a new database.


[admin@rocp ~]$ cat ~/app-secret-reader/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: app-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

Notes:
- apiGroups: Empty string represents the core API
- resources: the resource target type(s) for this cluster role
- verbs: Verbs/actions that are given (allowed) for the application to perform on a resource

Important:
- Scope matters:
- - You can use the default cluster roles that OCP defines, but they grant wider permissions across the cluster than a purpose-built role such as app-secret-reader in the clusterrole.yaml above.
- - The less restrictive edit cluster role allows an application to create or update most objects.


Review Binding Roles to Service Accounts:
(See the generic role section earlier.)

Overview:
- For an application to use role permissions, you must bind the role or cluster role to the application service account.
- To bind a role or cluster role to a service account in a namespace, you can use the oc adm policy command with the add-role-to-user subcommand.

Example:
[admin@rocp ~]$ oc adm policy add-role-to-user some-cluster-role -z service-account

Note:
- optionally use -z to avoid specifying the system:serviceaccount:project prefix when you assign the role to a service account that exists in the current project.


Assigning an Application Service Account to Pods:
- OCP uses RBAC authorization by using the roles that are associated to the service account to grant or deny access to the resource.
- Specify the service account name in the spec.serviceAccountName pod definition field.
- Applications must use the service account token internally when accessing a Kubernetes API.

- Implementation:
- - Before OCP 4.11, OpenShift generated a secret with a token when creating a service account.
- - Starting with OpenShift 4.11, tokens are no longer generated automatically; use the TokenRequest API to generate the service account token, and then mount the token as a pod volume for the application to access it.


Scoping Application Access to Kubernetes API Resources:
An application might require access to a resource in the same namespace, or in a different namespace, or in all namespaces.
- Same Namespace:
- - Use a role or a cluster role and a service account in that namespace.
- - Create a role binding that associates the service account with the actions that the role grants on the resource.
- - Using a role binding with a cluster role grants access to the resource only within the namespace of the binding.

- Different Namespace:
- - Create the role binding in the project with the resource.
- - The subject for the binding references the application service account that is in a different namespace from the binding.
- - Syntax: system:serviceaccount:project:service-account

Example for an application pod in the project-1 project that requires access to project-2 secrets:
- Create an app-sa service account in the project-1 project.
- Assign the app-sa service account to your application pod.
- Create a role binding on the project-2 project that references the app-sa service account and the secret-reader role or cluster role (system:serviceaccount:project-1:app-sa)

- Accessing API Resources in All Namespaces:
- - Grant your application service account the cluster role by using a cluster role binding
- - The cluster role binding grants the application cluster access to the API.
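A minimal sketch, reusing the app-secret-reader cluster role and app-sa service account names from the examples above, for granting cluster-wide access with a cluster role binding:
[admin@rocp ~]$ oc adm policy add-cluster-role-to-user app-secret-reader -z app-sa -n project-1
<view output; a cluster role binding now grants app-sa read access to secrets in all namespaces>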

Steps to configure an existing webapi project so that a webapp deployment in a different namespace can access it:
a. Login:
[admin@rocp ~]$ oc login -u myadminid -p <mypassword> https://api.ropc.mindwatering.net:6443
Login successful ...

b. Switch to the webapp project/namespace:
[admin@rocp ~]$ oc project webapp
Now using project "webapp" on server ...

c. Create the service account:
[admin@rocp ~]$ oc create sa configmap-webapp-sa
serviceaccount/configmap-webapp-sa created

d. Update the webapp-deployment.yaml file with the service account and re-apply it:
( spec --> template --> spec section)
[admin@rocp ~]$ cat ~/webapp/webapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  selector:
  ...
  template:
    metadata:
    ...
    spec:
      serviceAccountName: configmap-webapp-sa
      containers:
      ...

e. Apply/redeploy the webapp deployment:
[admin@rocp ~]$ oc apply -f ~/webapp/webapp-deployment.yaml
<view apply result>

f. Switch to the webapi project/namespace:
[admin@rocp ~]$ oc project webapi
Now using project "webapi" on server ...

g. Grant the new service account the edit role in the webapi project, so that the webapp deployment can use it.
Note:
- The edit role access will allow the webapp deployment to edit most objects in the webapi project

[admin@rocp ~]$ oc policy add-role-to-user edit system:serviceaccount:webapp:configmap-webapp-sa --rolebinding-name=webapp-edit -n webapi
clusterrole.rbac.authorization.k8s.io/edit added: "system:serviceaccount:webapp:configmap-webapp-sa"
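A quick check (names from the steps above) that the binding was created with the intended subject:
[admin@rocp ~]$ oc describe rolebinding webapp-edit -n webapi
<confirm the role is ClusterRole/edit and the subject is the configmap-webapp-sa service account from the webapp namespace>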



K8s Cron Jobs:
Overview:
- Automates regular cluster and application management tasks
- Used by cluster administrators to schedule automated tasks
- Used by other users to schedule application maintenance tasks
- When a cron job is due to execute, a job resource gets created from a template in the cron job definition.
- K8s jobs and cron jobs are also workload resource types, alongside deployments and daemon sets.

Types:
- - Job: K8s job specifies a task that is run once only.
- - Cron Job: Like OS cron jobs, K8s cron jobs are scheduled to execute regular tasks.

Preview the YAML generated for a job created from a template. The YAML contains a job pod template, its pod spec, and its pod container with its image, which runs the curl command.
[admin@rocp ~]$ oc create job --dry-run=client -o yaml job-dryrun --image=registry.access.redhat.com/ubi8/ubi:8.6 -- curl https://www.mindwatering.net
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: job-dryrun
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - curl
        - https://www.mindwatering.net
        image: registry.access.redhat.com/ubi8/ubi:8.6
        name: job-dryrun
        resources: {}
      restartPolicy: Never
status: {}
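To actually run the one-off job (dropping --dry-run) and check its result, a minimal sketch using the same image and command; the test-curl job name is arbitrary:
[admin@rocp ~]$ oc create job test-curl --image=registry.access.redhat.com/ubi8/ubi:8.6 -- curl https://www.mindwatering.net
job.batch/test-curl created

[admin@rocp ~]$ oc get jobs
<confirm the job shows completions 1/1>

[admin@rocp ~]$ oc logs job/test-curl
<view the curl output>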

Preview the YAML generated for a cron job created from a template. The YAML contains a job template, its pod spec, and its pod container with its image, which runs the curl command. Unlike the job above, the cron job has three spec sections (one more than the job), a restartPolicy of OnFailure, and the schedule added.
[admin@rocp ~]$ oc create cronjob --dry-run=client -o yaml job-dryrun --image=registry.access.redhat.com/ubi8/ubi:8.6 --schedule '0 0 * * *' -- curl https://www.mindwatering.net
apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  name: job-dryrun
spec:
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: job-dryrun
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - curl
            - https://www.mindwatering.net
            image: registry.access.redhat.com/ubi8/ubi:8.6
            name: job-dryrun
            resources: {}
          restartPolicy: OnFailure
  schedule: 0 0 * * *
status: {}

Cron Job Syntax:
- Same as a standard Linux cron job:
minute (0 - 59)
hour (0 - 23)
day of month (1 - 31)
month (1 - 12, or three-letter abbreviations: jan, feb, mar, apr, may, jun, jul, aug, sep, oct, nov, dec)
day of week (0 - 7, or three-letter abbreviations: sun, mon, tue, wed, thu, fri, sat; 0 and 7 are both sun)

A step value in the hour column (2nd column) means every n hours: */2 indicates every 2 hours, */4 indicates every 4 hours.
Run every 4 hours: 0 */4 * * *
Run daily at midnight: 0 0 * * *
Run Sunday at midnight: 0 0 * * 0 or 0 0 * * 7

Friday night 10 PM backup cron job example:
a. Test the command:
[admin@rocp ~]$ bash -xc 'wp maintenance-mode activate ; wp db export | gzip > database.sql.gz ; wp maintenance-mode deactivate ; rclone --dry-run copy database.sql.gz remote://backupappliance/backups/ ; rm -v database.sql.gz ;'

b. Convert the command to YAML:
Notes:
- YAML folded style uses the ">" symbol to convert following new lines to spaces during parsing
- Commands are separated with ";"
- Args are passed to the last command entry "-xc", in this case
- rclone supports lots of back-end targets

[admin@rocp ~]$ cat ~/backupscripts/mwwordpress-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mwwordpress-backup
spec:
  schedule: "0 22 * * 5"
  jobTemplate:
    spec:
      template:
        spec:
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          containers:
          - name: wp-cli
            image: registry.io/wp-maint-bkup-appliance/wp-cli:2.7
            resources: {}
            command:
            - bash
            - -xc
            args:
            - >
              wp maintenance-mode activate ;
              wp db export | gzip > database.sql.gz ;
              wp maintenance-mode deactivate ;
              rclone copy database.sql.gz remote://backupappliance/backups/ ;
              rm -v database.sql.gz ;
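A minimal sketch for loading the cron job and triggering a one-off test run from it; the manual job name is arbitrary:
[admin@rocp ~]$ oc create -f ~/backupscripts/mwwordpress-backup.yaml
cronjob.batch/mwwordpress-backup created

[admin@rocp ~]$ oc create job mwwordpress-backup-test --from=cronjob/mwwordpress-backup
job.batch/mwwordpress-backup-test created

[admin@rocp ~]$ oc logs job/mwwordpress-backup-test
<view the backup command output>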


Privileged pod example using a CRI tool (crictl) to clean up images on all nodes in the cluster:
Notes:
- OCP automatically cleans up images via its garbage collection: by default it triggers at 85% disk usage and clears images until usage drops to 80%. This script really just shows how to loop over nodes.
- debug runs a debug pod on each node to perform the execution

[admin@rocp ~]$ cat ~/maintscripts/image.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: maintenance
  labels:
    app: crictl
data:
  maintenance.sh: |
    #!/bin/bash
    NODES=$(oc get nodes -o=name)
    for NODE in ${NODES}
    do
      echo ${NODE}
      oc debug ${NODE} -- chroot /host /bin/bash -xc 'crictl images ; crictl rmi --prune'
      echo $?
    done
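A minimal sketch for loading the ConfigMap and reading the embedded script back out (the jsonpath backslash escapes the dot in the maintenance.sh key):
[admin@rocp ~]$ oc create -f ~/maintscripts/image.yaml
configmap/maintenance created

[admin@rocp ~]$ oc get configmap maintenance -o jsonpath='{.data.maintenance\.sh}'
<view the script contents>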


Application maintenance example that runs a task hourly:
[admin@rocp ~]$ cat ~/maintscripts/runapp.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mwdb-update
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mwdb-statusupdate
            image: python:3.6.2-slim
            command: ["/bin/bash"]
            args: ["-c", "python /maintscripts/statusupdate.py"]
            volumeMounts:
            - name: maintscripts
              mountPath: /maintscripts/
          restartPolicy: OnFailure
          volumes:
          - name: maintscripts
            persistentVolumeClaim:
              claimName: maintscripts-pv-claim
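A minimal sketch for applying the cron job and watching its runs (resource names from the YAML above):
[admin@rocp ~]$ oc apply -f ~/maintscripts/runapp.yaml
cronjob.batch/mwdb-update created

[admin@rocp ~]$ oc get cronjob mwdb-update
<view the schedule, last schedule time, and active jobs>

[admin@rocp ~]$ oc get jobs -w
<watch the job instances that the cron job creates>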



Over-the-Air (OTA) OCP 4 Updates:

Overview:
Distribution system:
- Red Hat Enterprise Linux CoreOS --> Software Distribution System (SDS) --> OCP 4
- K8s --> OCP (e.g. K8s 1.25 --> OCP 4.12)
- Manages controller manifests, cluster roles, other OCP resources to upgrade/update the cluster to specific versions.
- Hosted at console.redhat.com/openshift
- Cluster images hosted at: quay.io
- Allows updates skipping over previous minor point releases

Process Overview:
- The local OCP 4 cluster uses a Prometheus-based Telemeter component to report on the state of each cluster operator
- - The data is anonymized and sent up to Red Hat SDS
- OTA watcher detects when new images are pushed to quay.io
- OTA generates all possible update paths for your cluster.
- OTA, using gathered information about the cluster and entitlements, determines the available upgrade paths.
- - The web console sends a notification when a new update is available.
- The local OCP Cluster Version Operator (CVO) receives update status from the watcher
- - Downloads new image versions
- - Updates the cluster components (cluster and platform services) via their cluster operators
- - Delegates any extra components that the Operator Lifecycle Manager (OLM) manages

Important:
- Starting with OCP 4.10, the OTA system requires a persistent connection to redhat.com and quay.io
- If running a disconnected cluster is needed, review the RH documentation "Updating a Restricted Network Cluster"
- Currently, OTA updates do not extend to Independent Software Vendor (ISV) operators


Update Channels:
- Updates sorted by types of upgrade paths.
- OTA policy engine uses the desired upgrade paths to perform updates
- Candidate channel:
- - Updates for testing feature acceptance and assisting qualifying the next version of OCP
- - After further checks/updates, are promoted to the Fast or Stable channels
- - Not supported by Red Hat (unless also promoted)
- Fast channel:
- - Updates delivered as announced for General Availability (GA) release
- - OCP clusters set/opted-into the Red Hat Connected Customers program are in the Fast channel
- - Supported by Red Hat
- Stable channel:
- - Red Hat Site Reliability Engineering (SRE) teams with Red Hat Support monitor the Connected Customers program clusters.
- - If operational issues are observed with a Fast channel update, that update is skipped for the Stable channel.
- - After additional testing and validation, Fast channel updates are then enabled in the Stable channel.
- - Recommended for production environments
- - Supported by Red Hat

Extended Update Support (EUS) Releases:
- Starts with Red Hat OCP 4.8, onward
- Even-numbered EUS releases and Stable releases are equivalent. (e.g. eus-4.12 and stable-4.12)
- EUS channel:
- - When switching to the eus-4.x channel, the cluster does not receive the stable-4.x z-stream updates until the next EUS version becomes available.

Recommendations:
- Stable channel for production
- Not recommended to switch OCP from either stable or fast to candidate
- Okay to switch OCP from stable to fast, or fast to stable


Changing the Update Channel:
- UI Method:
OCP login --> Administration (left menu category/twistie) --> Cluster Settings (left menu) --> on Cluster Settings page, Details (tab) --> under Channel (heading/field), click Pencil icon (Edit) --> In the Select channel (pop-up) page, under Channel (heading), choose the channel in dropdown option: candidate-4.20, candidate-4.21, eus-4.20, fast-4.20, stable-4.20 )

- Command Line Method:
[admin@rocp ~]$ oc patch clusterversion version --type="merge" --patch '{"spec":{"channel":"stable-4.20"}}'
clusterversion.config.openshift.io/version patched


Pausing Health Checks So Rebooting Nodes Are Not Falsely Identified as Unhealthy During the Update:
a. Using the machinehealthcheck resources, verify cluster is currently healthy:
[admin@rocp ~]$ oc get machinehealthcheck -n openshift-machine-api
<confirm 100%>

b. Add a pause annotation to the API termination handler:
[admin@rocp ~]$ oc annotate machinehealthcheck -n openshift-machine-api machine-api-termination-handler cluster.x-k8s.io/paused=""
machinehealthcheck.machine.openshift.io/machine-api-termination-handler annotated

c. After the update/upgrade, remove the pause annotation:
[admin@rocp ~]$ oc annotate machinehealthcheck -n openshift-machine-api machine-api-termination-handler cluster.x-k8s.io/paused-


Installing an update:
- UI Method:
OCP login --> Administration (left menu category/twistie) --> Cluster Settings (left menu) --> on Cluster Settings page, Details (tab) --> Select a version (button - visible if updates are available) --> in the Update cluster (pop-up) page, under Select new version (heading/drop-down field), select the target version to update (e.g. stable-4.22), under Update options (heading), leave or toggle the radio button between Full cluster update to Partial cluster update as desired, click Update (button) to start the upgrade.

WARNING:
- Rolling back is NOT supported. If the update/upgrade fails, open a Red Hat support ticket for them to help you resolve it.

- Command Line Method:
a. Get the current version:
[admin@rocp ~]$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.20.0 True False 2d Cluster version is 4.20.0

[admin@rocp ~]$ oc get clusterversion -o jsonpath='{.items[0].spec.channel}{"\n"}'
stable-4.20

b. Query the updates/upgrades available (which also gives you the current version):
[admin@rocp ~]$ oc adm upgrade
<view output, current version will be presented, and then under "Recommended updates:" note the version(s) available in your channel (e.g. VERSION 4.20.10)

c. Perform the update/upgrade:
- If you want the latest version, you don't need to specify the exact version:
[admin@rocp ~]$ oc adm upgrade --to-latest=true
- or -
- If you want a specific version, specify the version:
[admin@rocp ~]$ oc adm upgrade --to=4.20.10

d. Watch the upgrade status:
- Watch the overall:
[admin@rocp ~]$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.20.0 True True 1m Working towards 4.20.10 ...

- Watch the operators get updated:
[admin@rocp ~]$ oc get clusteroperators
NAME VERSION AVAILABLE PROGRESSING DEGRADED ...
authentication 4.20.0 True False False ...
baremetal 4.20.10 False True False ...
cloud-controller-manager 4.20.10 True False True ...
...

- Check for any failures/partials:
[admin@rocp ~]$ oc describe clusterversion
...
State: Partial
...

e. If no failures, when done, the cluster status will show the new version:
[admin@rocp ~]$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.20.10 True False 2d Cluster version is 4.20.10


Review:
- The CVO downloads image updates from quay.io repository.
- The CVO updates the cluster components (cluster and platform services) via their cluster operators
- The CVO delegates any extra components that the Operator Lifecycle Manager (OLM) manages, and they are updated.


Updating Operators with the Operator Lifecycle Manager (OLM):
- Updates OLM-managed add-on operators using the OCP web console or the oc CLI
- Each installed operator is either set to have OLM auto-applied updates, or set to require administrator approval.
- Operator providers may set-up multiple channels for updates, the administrator selects a channel and whether to auto-update.
- Administrators can create custom catalogs that specify which versions of operators to include in the catalog, and then set the operators to auto-update, and they update to the versions only in the catalog.
- Administrators can also install and upgrade operators outside the OLM by using Helm charts or YAML resource files. The OLM will NOT manage these resources.
- When using the CLI, the oc command includes an installPlanApproval property that is set to Automatic or Manual.


REVIEW: To change an Install Operator's update settings:
- OCP login --> Administrator (left menu category/twistie) --> Operators (left menu sub-category/twistie) --> Installed Operators (left menu)
- On the Installed Operators page, click Upgrade available (text link) for any operators to manually approve.



Kubernetes API Deprecation Policy
- Categorized based on feature maturity (experimental, pre-release, and stable)
- Over time, alpha(s) become beta(s) and beta(s) become v1
- Deprecated API versions removed after 3 releases

API version - Category - Description:
- v1alpha1 - Alpha - experimental features
- v1beta1 - Beta - pre-release features
- v1 - Stable - stable features, generally available


List the API Version of a Job Resource:
[admin@rocp ~]$ oc api-resources | egrep '^NAME|job'
<view results, jobs and cronjobs are batch/v1>

If you use an old job file where the API version still has batch/v1beta1, OCP will let you know the version it was deprecated and when it will be removed (if not already).
e.g. Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
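A quick check of which batch API versions the cluster currently serves:
[admin@rocp ~]$ oc api-versions | grep '^batch'
<view the served versions, e.g. batch/v1>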


List any workloads that are using APIs that are deprecated:
Note:
- An empty/blank value in the REMOVEDINRELEASE column indicates that there is no current sunset for that resource's API Version

[admin@rocp ~]$ oc get apirequestcounts | awk '{if(NF==4){print $0}}'
<view the REMOVEDINRELEASE column>


List any installed resource APIs that are deprecated:
[admin@rocp ~]$ FILTER='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.status.requestCount}{"\t"}{.metadata.name}{"\n"}{end}'
[admin@rocp ~]$ oc get apirequestcounts -o jsonpath="${FILTER}" | column -t -N "RemovedInRelease,RequestCount,Name"
RemovedInRelease RequestCount Name
1.25 <count> cronjobs.v1beta1.batch
...

If you get a result, list the clients (verbs, usernames, and user agents) that are calling a deprecated API:
[admin@rocp ~]$ FILTER2='{range .status.currentHour..byUser[*]}{..byVerb[*].verb}{","}{.username}{","}{.userAgent}{"\n"}{end}'
[admin@rocp ~]$ oc get apirequestcounts cronjobs.v1beta1.batch -o jsonpath="${FILTER2}" | column -t -s, -N "Verbs,Username,UserAgent"
<view list>


View Deprecation Alert Settings:
- OCP login --> Observe (category/twistie) --> Alerting (left menu) --> Alerting rules --> Alerting rule details
- Select alert:
- - APIRemovedInNextReleaseInUse
- - APIRemovedInNextEUSReleaseInUse


OCP to K8s Mapping:
4.12 = 1.25 K8s
4.13 = 1.26
4.14 = 1.27
...
4.20 = 1.33 K8s

Ready, Rejected, Accepted releases can be viewed at:
- multi.ocp.releases.ci.openshift.org





