Troubleshooting - OCI Service Operator for Kubernetes
Identify the causes of, and fixes for, problems with the OCI Service Operator for Kubernetes on Service Mesh.
By default, operator-sdk installs the OCI Service Operator for Kubernetes bundle in the 'default' namespace. For most use cases, a namespace is specified (for example, olm) when OCI Service Operator for Kubernetes is installed. Therefore, the kubectl commands in this topic might require the namespace parameter: -n $NAMESPACE.
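For example, if the operator was installed in the olm namespace, list its ClusterServiceVersions with the namespace flag (olm here is only an example; substitute your installation namespace):
## list ClusterServiceVersions in the installation namespace
$ kubectl get csv -n olm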
Install: Verify That the Operator Lifecycle Manager (OLM) Installation Was Successful
Issue
Successful OLM installation needs verification.
Solution
To verify the successful OLM installation, run the status command:
## status of olm
$ operator-sdk olm status
INFO[0007] Fetching CRDs for version "0.20.0"
INFO[0007] Fetching resources for resolved version "v0.20.0"
INFO[0031] Successfully got OLM status for version "0.20.0"
NAME NAMESPACE KIND STATUS
operatorgroups.operators.coreos.com CustomResourceDefinition Installed
operatorconditions.operators.coreos.com CustomResourceDefinition Installed
olmconfigs.operators.coreos.com CustomResourceDefinition Installed
installplans.operators.coreos.com CustomResourceDefinition Installed
clusterserviceversions.operators.coreos.com CustomResourceDefinition Installed
olm-operator-binding-olm ClusterRoleBinding Installed
operatorhubio-catalog olm CatalogSource Installed
olm-operators olm OperatorGroup Installed
aggregate-olm-view ClusterRole Installed
catalog-operator olm Deployment Installed
cluster OLMConfig Installed
operators.operators.coreos.com CustomResourceDefinition Installed
olm-operator olm Deployment Installed
subscriptions.operators.coreos.com CustomResourceDefinition Installed
aggregate-olm-edit ClusterRole Installed
olm Namespace Installed
global-operators operators OperatorGroup Installed
operators Namespace Installed
packageserver olm ClusterServiceVersion Installed
olm-operator-serviceaccount olm ServiceAccount Installed
catalogsources.operators.coreos.com CustomResourceDefinition Installed
system:controller:operator-lifecycle-manager ClusterRole Installed
The output shows a list of installed components. Each entry in the STATUS
column must be Installed. If any entries aren't listed as Installed,
perform the following steps.
- Uninstall OLM.
## Uninstall the OLM
$ operator-sdk olm uninstall
- Reinstall OLM.
## Install the OLM
$ operator-sdk olm install --version 0.20.0
Install: OCI Service Operator for Kubernetes OLM Installation Fails with Error
Issue
After installing OLM, checking the installation status returns an error.
Solution
Steps to reproduce:
- After installation, verify the successful OLM installation by running
the status command:
## status of olm
$ operator-sdk olm status
- Error returned:
## FATA[0034] Failed to install OLM version "latest": detected existing OLM resources: OLM must be completely uninstalled before installation
To resolve the error, uninstall OLM completely before reinstalling.
## Uninstall the OLM
$ operator-sdk olm uninstall
If the command fails, run the status command to get version information:
## status of olm
$ operator-sdk olm status
Next, try the following options:
- Option 1: Run the following command to uninstall OLM, specifying the version.
$ operator-sdk olm uninstall --version <OLM_VERSION>
- Option 2: Run the following commands to uninstall OLM and its related
components.
$ export OLM_RELEASE=<OLM_VERSION>
$ kubectl delete apiservices.apiregistration.k8s.io v1.packages.operators.coreos.com
$ kubectl delete -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/${OLM_RELEASE}/crds.yaml
$ kubectl delete -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/${OLM_RELEASE}/olm.yaml
- Option 3: If the OLM uninstall still fails, run the following commands to uninstall OLM and its related components.
$ kubectl delete apiservices.apiregistration.k8s.io v1.packages.operators.coreos.com
$ kubectl delete -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml
$ kubectl delete -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml
Verify that the uninstall was successful with the following commands.
- Verify that the olm namespace was removed.
$ kubectl get namespace olm
Error from server (NotFound): namespaces "olm" not found
- Verify that the OLM-owned CustomResourceDefinitions were removed. The following command should return no results.
$ kubectl get crd | grep operators.coreos.com
- Check that the OLM deployments are terminated.
$ kubectl get deploy -n olm
No resources found.
Install: OCI Service Operator for Kubernetes OLM Installation Fails with Timeout Message
Issue
OCI Service Operator for Kubernetes OLM installation fails with the following message.
## FATA[0125] Failed to run bundle upgrade: error waiting for CSV to install: timed out waiting for the condition
Explanation
The error signifies the installer timed out waiting for an installation condition to complete. The error message might be misleading because, given enough time, the installer eventually reports Succeeded.
Solution
To ensure that OCI Service Operator for Kubernetes OLM installation succeeded, follow these steps.
- Verify the status of the CSV.
$ kubectl get csv
NAME                          DISPLAY                VERSION   REPLACES   PHASE
oci-service-operator.vX.X.X   oci-service-operator   X.X.X                Succeeded
- If the phase does not reach Succeeded, delete the bundle pod of the OCI Service Operator for Kubernetes version you are deploying.
$ kubectl get pods | grep oci-service-operator-bundle
$ kubectl delete pod <POD_FROM_ABOVE_COMMAND>
- After the bundle pod is deleted, reinstall the OSOK bundle.
$ operator-sdk run bundle iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X -n oci-service-operator-system --timeout 5m
## or for Upgrade
$ operator-sdk run bundle-upgrade iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X -n oci-service-operator-system --timeout 5m
Note
Replace X.X.X with the current version of the OCI Service Operator for Kubernetes. To get the current version, go to the GitHub release site at: https://github.com/oracle/oci-service-operator/releases
Note
Users must be logged in to the Oracle Registry at iad.ocir.io in Docker to run the command. To ensure you are logged in, see Pulling Images Using the Docker CLI.
- Verify that the OCI Service Operator for Kubernetes is deployed
successfully.
$ kubectl get deployments | grep "oci-service-operator-controller-manager"
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
oci-service-operator-controller-manager   1/1     1            1           2d9h
Install: OSOK Operator Installation Fails with Run Bundle
Issue
OSOK Operator installation fails with the following message:
FATA[0121] Failed to run bundle: install plan is not available for the subscription
Debug
The error signifies that the installation failed to run the operator bundle. The error is caused by one of the following issues: a failure to pull the image, a network issue, or an upstream OPM issue.
- List the pods in the operator installation namespace.
kubectl get pods -n oci-service-operator-system
- View the logs of the failing pod in the operator namespace.
kubectl logs iad-ocir-io-oracle-oci-service-operator-bundle-X-X-X -n oci-service-operator-system
The command produces the following output:
mkdir: can't create directory '/database': Permission denied
The container is running with UID 1001, and that user can't create the directory. The problem is an upstream OPM issue.
Solution
To install OCI Service Operator for Kubernetes successfully, follow these steps.
- Clean up the existing operator installation in the corresponding
namespace.
operator-sdk cleanup oci-service-operator -n oci-service-operator-system
- Install the OCI Service Operator for Kubernetes Operator in the Kubernetes cluster in your namespace (oci-service-operator-system) using the following command.
operator-sdk run bundle --index-image quay.io/operator-framework/opm:v1.23.1 iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X -n oci-service-operator-system --timeout 5m
Note
Replace X.X.X with the current version of the OCI Service Operator for Kubernetes. To get the current version, go to the GitHub release site at: https://github.com/oracle/oci-service-operator/releases
Note
Users must be logged in to the Oracle Registry at iad.ocir.io in Docker to run the command. To ensure you're logged in, see Pulling Images Using the Docker CLI.
- The command produces output similar to the following:
INFO[0036] Successfully created registry pod: iad-ocir-io-oracle-oci-service-operator-bundle-X-X-X
INFO[0036] Created CatalogSource: oci-service-operator-catalog
INFO[0037] OperatorGroup "operator-sdk-og" created
INFO[0037] Created Subscription: oci-service-operator-vX-X-X-sub
INFO[0040] Approved InstallPlan install-tzk5f for the Subscription: oci-service-operator-vX-X-X-sub
INFO[0040] Waiting for ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" to reach 'Succeeded' phase
INFO[0040] Waiting for ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" to appear
INFO[0048] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Pending
INFO[0049] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: InstallReady
INFO[0053] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Installing
INFO[0066] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Succeeded
INFO[0067] OLM has successfully installed "oci-service-operator.vX.X.X"
Install: OSOK Operator Installation Fails with Authorization Error
Issue
OSOK Operator installation fails with the following message:
Error: failed to authorize: failed to fetch oauth token: unexpected status: 401 Unauthorized
Sample install command:
$ operator-sdk run bundle iad.ocir.io/oracle/oci-service-operator-bundle:X.Y.Z -n oci-service-operator-system --timeout 5m
Here's the complete error message.
INFO[0002] trying next host error="failed to authorize: failed to fetch
oauth token: unexpected status: 401 Unauthorized" host=iad.ocir.io
FATA[0002] Failed to run bundle: pull bundle image: error pulling image
iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X: error resolving
name : failed to authorize: failed to fetch oauth token: unexpected
status: 401 Unauthorized
Explanation
This error occurs because Docker isn't logged into the Oracle Registry (iad.ocir.io).
Solution
To refresh the token used for pulling images from OCIR, run the following command:
oci raw-request --region us-ashburn-1 --http-method GET --target-uri "https://iad.ocir.io/20180419/docker/token" | jq -r .data.token | docker login -u BEARER_TOKEN --password-stdin iad.ocir.io
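After the login succeeds, you can optionally confirm registry access by pulling the bundle image directly (replace X.X.X with the OSOK version, as elsewhere in this topic):
## confirm that the image can be pulled from the Oracle Registry
$ docker pull iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X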
Upgrade: OSOK Operator Upgrade Fails with Run Bundle
Issue
OSOK Operator upgrade fails with the following message.
FATA[0307] Failed to run bundle upgrade: error waiting for CSV to install
Explanation
The error signifies that the upgrade of the operator bundle failed. The error has one of the following causes: a failure to pull the image, a network issue, or an upstream OPM issue.
Solution
To upgrade OCI Service Operator for Kubernetes successfully, follow these steps.
- Uninstall OLM.
operator-sdk olm uninstall
- Reinstall OLM.
operator-sdk olm install --version X.X.X
- Install OSOK operator.
operator-sdk run bundle iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X -n oci-service-operator-system --timeout 5m
Note
Replace X.X.X with the current version of the OCI Service Operator for Kubernetes. To get the current version, go to the GitHub release site at: https://github.com/oracle/oci-service-operator/releases
Note
Users must be logged in to the Oracle Registry at iad.ocir.io in Docker to run the command. To ensure you are logged in, see Pulling Images Using the Docker CLI.
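After the operator is reinstalled, you can confirm that its ClusterServiceVersion reached the Succeeded phase, for example:
## check the operator CSV phase in the installation namespace
$ kubectl get csv -n oci-service-operator-system | grep oci-service-operator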
Cluster: Verify that OCI Service Operator for Kubernetes Pods are Running
Check pods
Verify that the OSOK pods are running successfully.
$ kubectl get pods | grep "oci-service-operator-controller-manager"
oci-service-operator-controller-manager-6797f45589-hhn9s 1/1 Running 0 2d9h
Solution
If the pods aren't running, check the pod logs for the specific issue using the following command.
$ kubectl logs pod/<POD_FROM_ABOVE_COMMAND> -f
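If the logs don't reveal the cause, the pod's events often do; for example:
## review pod events for scheduling, image-pull, or security-policy problems
$ kubectl describe pod <POD_FROM_ABOVE_COMMAND>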
Cluster: Authorization Failed or Requested Resource Not Found
Issue
Received an authorization failed or requested resource not found when deploying the service.
Example
"message": Failed to create or update resource: Service error:NotAuthorizedOrNotFound. Authorization failed or requested resource not found.. http status code: 404.
Solution
The error occurs because of missing or incorrect authorization. To resolve the issue, review the following.
Check that the instance principals are configured correctly for the provisioned OCI resource. See Policies when Managing Service Mesh with kubectl.
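The following statement is only a hedged illustration of the general policy shape; the exact dynamic group rules and resource types required are listed in the linked documentation, and the bracketed values are placeholders.
Allow dynamic-group <dynamic-group-name> to manage <service-mesh-resource-types> in compartment <compartment-name>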
Cluster: Custom Resource Creation and Deletion Errors
Description
You have one of the following error messages or issues with custom resources.
- Issue: Custom resource creation fails.
- Error: Cannot delete custom resource as the state is unknown.
- Error: Custom Resource state is not Active.
Solution
To troubleshoot the error, use the following options.
- Option 1: Review the OCI Service Operator for Kubernetes controller pod log as described in the preceding section: Cluster: Verify that OCI Service Operator for Kubernetes Pods are Running.
- Option 2: Run the following commands.
- Get the CUSTOMRESOURCE.
$ kubectl get crds | grep "servicemesh.oci.oracle.com" | awk -F. '{print $1}'
accesspolicies
ingressgatewaydeployments
ingressgatewayroutetables
ingressgateways
meshes
virtualdeploymentbindings
virtualdeployments
virtualserviceroutetables
virtualservices
Use the following command to get the error code from the status field of individual custom resources.
$ kubectl get <CUSTOMRESOURCE> <NAME> -n <NAMESPACE> -o json | jq '.status'
Here's a sample command.
$ kubectl get virtualserviceroutetables pet-details-route-table -n pet-rescue -o json | jq '.status'
The following sample output for a virtual service route table contains one Unknown condition. For a successful custom resource creation, all the statuses are True.
{
  "conditions": [
    {
      "lastTransitionTime": "2022-01-07T08:35:43Z",
      "message": "Dependencies resolved successfully",
      "observedGeneration": 2,
      "reason": "Successful",
      "status": "True",
      "type": "ServiceMeshDependenciesActive"
    },
    {
      "lastTransitionTime": "2022-01-09T05:15:30Z",
      "message": "Invalid RouteRules, route rules target weight sum should be 100 for the resource!",
      "observedGeneration": 2,
      "reason": "BadRequest",
      "status": "Unknown",
      "type": "ServiceMeshConfigured"
    },
    {
      "lastTransitionTime": "2022-01-07T08:36:03Z",
      "message": "Resource in the control plane is Active, successfully reconciled",
      "observedGeneration": 1,
      "reason": "Successful",
      "status": "True",
      "type": "ServiceMeshActive"
    }
  ],
  "virtualDeploymentIdForRules": [
    [
      "ocid1.meshvirtualdeployment.oc1.iad.amaaaaaamgzdkjyak7tam2jdjutbend4h7surdj4t5yv55qukvx43547kbnq"
    ]
  ],
  "virtualServiceId": "ocid1.meshvirtualservice.oc1.iad.amaaaaaamgzdkjyagkax7xtz7onqlb65ec32dtu67erkmka2x6tlst4xigsa",
  "virtualServiceName": "pet-details",
  "virtualServiceRouteTableId": "ocid1.meshvirtualserviceroutetable.oc1.iad.amaaaaaamgzdkjyavcphbc5iobjqa6un7bqq2ki2xyfpvzfd6jzmdsv4l6va"
}
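To scan every Service Mesh custom resource in a namespace at once, you can loop over the CRD short names from the earlier command. This is a minimal sketch that assumes bash and jq are available; <NAMESPACE> is a placeholder.
## print the condition types and statuses for every mesh custom resource in a namespace
$ for cr in $(kubectl get crds | grep "servicemesh.oci.oracle.com" | awk -F. '{print $1}'); do
    echo "== $cr =="
    kubectl get $cr -n <NAMESPACE> -o json | jq -r '.items[] | .metadata.name + ": " + (.status.conditions[]? | .type + "=" + .status)'
  done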
Cluster: Custom Resource Operations Fail with Webhook Errors
Issue
Custom resource operations fail with webhook errors such as the following.
Error from server (InternalError): error when creating "service_mesh/mesh_create.yaml": Internal error occurred: failed calling webhook "mesh-validator.servicemesh.oci.oracle.cloud.com": failed to call webhook: Post "https://oci-service-operator-controller-manager-service.oci-service-operator-system.svc:443/validate-servicemesh-oci-oracle-com-v1beta1-mesh?timeout=10s": service "oci-service-operator-controller-manager-service" not found
Explanation
An improper installation causes this error, and as a result the operator has issues calling the webhooks.
Solution
To install OCI Service Operator for Kubernetes successfully, follow these steps.
- Clean up the existing operator installation in the corresponding
namespace.
operator-sdk cleanup oci-service-operator -n oci-service-operator-system
- Install the OCI Service Operator for Kubernetes Operator in the Kubernetes cluster in your namespace (oci-service-operator-system) using the following command.
operator-sdk run bundle --index-image quay.io/operator-framework/opm:v1.23.1 iad.ocir.io/oracle/oci-service-operator-bundle:X.X.X -n oci-service-operator-system --timeout 5m
Note
Replace X.X.X with the current version of the OCI Service Operator for Kubernetes. To get the current version, go to the GitHub release site at: https://github.com/oracle/oci-service-operator/releases
Note
Users must be logged in to the Oracle Registry at iad.ocir.io in Docker to run the command. To ensure you're logged in, see Pulling Images Using the Docker CLI.
- The command produces output similar to the following:
INFO[0036] Successfully created registry pod: iad-ocir-io-oracle-oci-service-operator-bundle-X-X-X
INFO[0036] Created CatalogSource: oci-service-operator-catalog
INFO[0037] OperatorGroup "operator-sdk-og" created
INFO[0037] Created Subscription: oci-service-operator-vX-X-X-sub
INFO[0040] Approved InstallPlan install-tzk5f for the Subscription: oci-service-operator-vX-X-X-sub
INFO[0040] Waiting for ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" to reach 'Succeeded' phase
INFO[0040] Waiting for ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" to appear
INFO[0048] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Pending
INFO[0049] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: InstallReady
INFO[0053] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Installing
INFO[0066] Found ClusterServiceVersion "oci-service-operator-system/oci-service-operator.vX.X.X" phase: Succeeded
INFO[0067] OLM has successfully installed "oci-service-operator.vX.X.X"
- Verify the successful installation of webhooks corresponding to all the Service Mesh resources with kubectl. First, check the validating webhooks.
$ kubectl get validatingwebhookconfiguration | grep "servicemesh.oci.oracle.cloud.com" | awk -F. '{print $1}'
ap-validator
ig-validator
igd-validator
igrt-validator
mesh-validator
vd-validator
vdb-validator
vs-validator
vsrt-validator
Next, check mutating webhooks.
$ kubectl get mutatingwebhookconfiguration | grep "servicemesh.oci.oracle.com" | awk -F. '{print $1}'
proxy-injector
Cluster: Operator / Application Pod Fails with "Error: runAsNonRoot and image will run as root"
Issue
The operator or application pod fails with the following error.
Error: runAsNonRoot and image will run as root
Solution
This error occurs when podSecurityPolicies are enforced on the cluster and privileged access isn't given to the operator and application pods. To resolve the issue, provide privileged access to all pods in the namespaces where the operator and the application pods are running.
To run the operator with pod security policies enabled, apply a policy similar to the following:
# pod security policy to allow non-privileged access for all volumes and run as any user
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: operator-psp
spec:
  privileged: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
---
# Cluster role which grants access to use pod security policy
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: operator-psp-crole
rules:
- apiGroups:
  - policy
  resourceNames:
  - operator-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: operator-psp-crole-binding
roleRef:
  kind: ClusterRole
  name: operator-psp-crole
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize all service accounts in oci-service-operator-system namespace
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:<operator's namespace>
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:<application namespace>
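Save the manifest to a file and apply it to the cluster; the file name here is only an example:
## create the pod security policy, cluster role, and cluster role binding
$ kubectl apply -f operator-psp.yaml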
Cluster: VDB Is Active but Pod Doesn't Contain Sidecar Proxies
Issue
VDB is Active but pod doesn't contain sidecar proxies.
Solution
To resolve the issue, perform the following checks.
- Check whether sidecar injection is enabled at the namespace level or at the pod level (a quick way to inspect the namespace is shown after this list). If not, enable it by following the steps here: Sidecar Injection on Pods.
- Check whether SIDECAR_IMAGE is present in the oci-service-operator-servicemesh-config map:
$ kubectl get configmap oci-service-operator-servicemesh-config -n NAMESPACE -o json | jq '.data.SIDECAR_IMAGE'
"iad.ocir.io/iaaaaaaa/oci-service-mesh-proxy:0.x.x"
- If SIDECAR_IMAGE isn't present, check whether an OCI policy exists for MESH_PROXY_DETAILS_READ for the dynamic group used to create the service mesh resources. See Policies when Managing Service Mesh with kubectl.
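To check whether injection is enabled on a namespace, you can inspect its labels and annotations; a minimal sketch (the exact label or annotation key to look for is described in Sidecar Injection on Pods, and <NAMESPACE> is a placeholder):
## show the labels and annotations currently set on the application namespace
$ kubectl get namespace <NAMESPACE> -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'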
Cluster: Pods Restarting Continuously
Issue
The pods in your cluster are restarting continuously.
Solution
Check the following settings to resolve the issue. A quick way to review the restart counts follows the list.
- Check whether SIDECAR_IMAGE is present in the oci-service-operator-servicemesh-config map using the following command. If the sidecar isn't present, restart the pods.
$ kubectl get configmap oci-service-operator-servicemesh-config -n NAMESPACE -o json | jq '.data.SIDECAR_IMAGE'
"iad.ocir.io/iaaaaaaa/oci-service-mesh-proxy:0.x.x"
- Check the logs of the pod by using the following command:
$ kubectl logs -f pod/POD_NAME -n NAMESPACE_NAME -c oci-sm-proxy
- To enable debug logs for the pods, see Proxy Logging Level.
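To see how often each container in a pod has restarted and why it last terminated, the following jsonpath query can help (a minimal sketch; POD_NAME and NAMESPACE_NAME are placeholders, as above):
## print the restart count and last termination reason for each container in the pod
$ kubectl get pod POD_NAME -n NAMESPACE_NAME -o jsonpath='{range .status.containerStatuses[*]}{.name}{" restarts="}{.restartCount}{" lastReason="}{.lastState.terminated.reason}{"\n"}{end}'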
Cluster: OCI Service Operator for Kubernetes Doesn't Uninstall Because of Finalizers
Issue
When uninstalling, CRDs aren't deleted because finalizers on their custom resources don't complete clean-up. The system returns the following error message.
FATA[0123] Cleanup operator: wait for customresourcedefinition deleted: Get "https://x.x.x.x:6443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/ingressgateways.servicemesh.oci.oracle.com": context deadline exceeded
The example uses ingress gateways, but similar issues can happen with any mesh resource.
Solution
To resolve this issue, perform the following steps.
- Fetch all the CRDs that are yet to be deleted.
Tip
The operator SDK performs deletion in alphabetical order. Because the preceding error occurred at ingressgateways, you see all resources that follow it alphabetically, including non-service-mesh resources.
$ kubectl get crd | grep oci.oracle.com
ingressgateways.servicemesh.oci.oracle.com             2022-01-09T14:13:26Z
meshes.servicemesh.oci.oracle.com                      2022-01-09T14:13:26Z
mysqldbsystems.oci.oracle.com                          2022-01-09T14:13:27Z
streams.oci.oracle.com                                 2022-01-09T14:13:26Z
virtualdeploymentbindings.servicemesh.oci.oracle.com   2022-01-09T14:13:24Z
virtualdeployments.servicemesh.oci.oracle.com          2022-01-09T14:13:27Z
virtualserviceroutetables.servicemesh.oci.oracle.com   2022-01-09T14:13:26Z
virtualservices.servicemesh.oci.oracle.com             2022-01-09T14:13:27Z
- Delete all the objects present in the preceding CRDs.
Tip
Try deleting child resources first and then proceed to the parent resources. Otherwise, the deletion step gets stuck. If an object stays stuck in Terminating, see the finalizer sketch after this list.
## Delete all the objects for that custom resource
$ kubectl delete CUSTOM_RESOURCE --all --all-namespaces
## Once deleted, verify there are no more objects present for that custom resource
$ kubectl get CUSTOM_RESOURCE --all-namespaces
- Delete all the remaining CRDs using the following commands. CRDs that were already deleted might produce not-found messages.
$ kubectl delete crd virtualdeploymentbindings.servicemesh.oci.oracle.com
$ kubectl delete crd ingressgatewaydeployments.servicemesh.oci.oracle.com
$ kubectl delete crd ingressgatewayroutetables.servicemesh.oci.oracle.com
$ kubectl delete crd ingressgateways.servicemesh.oci.oracle.com
$ kubectl delete crd accesspolicies.servicemesh.oci.oracle.com
$ kubectl delete crd virtualserviceroutetables.servicemesh.oci.oracle.com
$ kubectl delete crd virtualdeployments.servicemesh.oci.oracle.com
$ kubectl delete crd virtualservices.servicemesh.oci.oracle.com
$ kubectl delete crd meshes.servicemesh.oci.oracle.com
$ kubectl delete crd autonomousdatabases.oci.oracle.com
$ kubectl delete crd mysqldbsystems.oci.oracle.com
$ kubectl delete crd streams.oci.oracle.com
- Undeploy OCI Service Operator for Kubernetes as described here: Clean up OCI Service Operator for Kubernetes.
- If OCI Service Operator for Kubernetes is installed in the default
namespace, then run the following command to get rid of other resources
created during installation.
$ kubectl delete role,clusterrolebinding,clusterrole,secrets,pods,rs,deploy,svc,cm,ing -l operators.coreos.com/oci-service-operator.default=
- If OCI Service Operator for Kubernetes is installed using -n NAMESPACE, then delete the namespace to get rid of other resources created during installation.
$ kubectl delete ns NAMESPACE
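If a custom resource stays stuck in Terminating because its finalizer never completes, a common workaround is to clear the finalizers on that object so deletion can proceed. This is a hedged sketch, not part of the standard clean-up flow: it skips the operator's own clean-up logic, and CUSTOM_RESOURCE, NAME, and NAMESPACE are placeholders.
## remove the finalizers from a stuck object so Kubernetes can finish deleting it
$ kubectl patch CUSTOM_RESOURCE NAME -n NAMESPACE --type=merge -p '{"metadata":{"finalizers":[]}}'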