Updating a Virtual Node Pool
Find out how to update a virtual node pool using Kubernetes Engine (OKE).
For general information about updating node pools, see Modifying Node Pool and Worker Node Properties.
- Open the navigation menu and click Developer Services. Under Containers & Artifacts, click Kubernetes Clusters (OKE).
- Select the compartment that contains the cluster.
- On the Clusters page, click the name of the cluster that you want to modify.
- On the Cluster details page, under Resources, click Node pools.
- Click the name of the virtual node pool that you want to modify.
On the Virtual node pool details tab, information about the virtual node pool is displayed, including the following details:
- The status of the node pool.
- The node pool's OCID.
- The type of the worker nodes in the node pool (virtual).
- The configuration currently used when starting new virtual nodes in the node pool, including the following details:
- The version of Kubernetes to run on worker nodes.
- The shape to use for worker nodes.
- The availability domains, fault domains, and regional subnets (recommended) or AD-specific subnets hosting worker nodes.
- Change virtual node pool and virtual node properties as follows:
- Click Edit and specify:
- Name: A different name for the node pool. Avoid entering confidential information.
- Node count: A different number of virtual nodes to create in the virtual node pool, placed in the availability domains you select, and in the regional subnet (recommended) or AD-specific subnet you specify for each availability domain. See Scaling Node Pools.
- Node Placement Configuration:
- Availability domain: An availability domain in which to place virtual nodes.
- Fault domains: (Optional) One or more fault domains in the availability domain in which to place virtual nodes.
Optionally click Another Row to select more domains and subnets in which to place virtual nodes.
When the virtual nodes are created, they're distributed as evenly as possible across the availability domains and fault domains you select. If you don't select any fault domains for a particular availability domain, the virtual nodes are distributed as evenly as possible across all the fault domains in that availability domain.
- Virtual Node Communication:
- Subnet: A different regional subnet (recommended) or AD-specific subnet configured to host virtual nodes. If you specified load balancer subnets, the virtual node subnets must be different. The subnets you specify can be private (recommended) or public, and can be regional (recommended) or AD-specific. We recommend that the pod subnet and the virtual node subnet are the same subnet (in which case, the virtual node subnet must be private). For more information, see Subnet Configuration.
- Use security rules in Network Security Group (NSG): Control access to the virtual node subnet using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists (NSGs are recommended). For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes and Pods.
- Pod Communication:
- Subnet: A different regional subnet configured to host pods. The pod subnet you specify for virtual nodes must be private. We recommend that the pod subnet and the virtual node subnet are the same subnet (in which case, Oracle recommends defining security rules in network security groups rather than in security lists). For more information, see Subnet Configuration.
- Use security rules in Network Security Group (NSG): Control access to the pod subnet using security rules defined for one or more network security groups (NSGs) that you specify (up to a maximum of five). You can use security rules defined for NSGs instead of, or as well as, those defined for security lists (NSGs are recommended). For more information about the security rules to specify for the NSG, see Security Rules for Worker Nodes and Pods.
For more information about pod communication, see Pod Networking.
- Kubernetes labels and taints: (Optional) Enable the targeting of workloads at specific node pools by adding labels and taints to virtual nodes:
- Labels: One or more labels (in addition to a default label) to add to virtual nodes in the virtual node pool to enable the targeting of workloads at specific node pools.
- Taints: One or more taints to add to virtual nodes in the virtual node pool. Taints enable virtual nodes to repel pods, and so ensure that pods don't run on virtual nodes in a particular virtual node pool. You can apply taints only to virtual nodes.
For more information, see Assigning Pods to Nodes in the Kubernetes documentation. For a sketch of a pod spec that targets the labels and taints you add here, see the example after these console steps.
- Click Save Changes to save the updated properties.
- Use the Node pool tags tab to add or modify the tags applied to the virtual node pool. Tagging enables you to group disparate resources across compartments, and enables you to annotate resources with your own metadata. For more information, see Tagging Kubernetes Cluster-Related Resources.
- Under Resources, click the following resources to perform more actions:
- Click Virtual Nodes to see information about specific worker nodes in the virtual node pool.
- Click Work requests to perform the following tasks:
- Get the details of a particular work request for the virtual node pool resource.
- List the work requests for the virtual node pool resource.
For more information, see Viewing Work Requests.
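As an illustration of how the labels and taints added to a virtual node pool are consumed, the following is a minimal sketch of a pod spec that targets them with kubectl. The label (pool: monitoring) and taint (dedicated=monitoring:NoSchedule) are hypothetical values standing in for whatever you specified in the console:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-virtual-nodes
spec:
  # Schedule only onto virtual nodes carrying the (hypothetical) pool label.
  nodeSelector:
    pool: monitoring
  # Tolerate the (hypothetical) taint so the pod isn't repelled by the pool.
  tolerations:
    - key: dedicated
      operator: Equal
      value: monitoring
      effect: NoSchedule
  containers:
    - name: nginx
      image: nginx:latest
EOF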
Use the oci ce virtual-node-pool update command and required parameters to update a virtual node pool:
oci ce virtual-node-pool update --virtual-node-pool-id <virtual-node-pool-ocid> [OPTIONS]
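For example, the following minimal sketch renames a virtual node pool and changes its node count. The --display-name and --size parameters are assumptions based on common OCI CLI conventions, and the OCID is a placeholder; verify both against the CLI Command Reference before use:

oci ce virtual-node-pool update \
  --virtual-node-pool-id ocid1.virtualnodepool.oc1..<unique_ID> \
  --display-name "monitoring-pool" \
  --size 3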
For a complete list of parameters and values for CLI commands, see the CLI Command Reference.
Run the UpdateVirtualNodePool operation to update a virtual node pool.
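As a rough sketch, you can call the UpdateVirtualNodePool operation with a signed request, for example through the CLI's raw-request support. The region in the endpoint, the API version path, and the request body fields shown here are assumptions; confirm them against the API reference before relying on them:

oci raw-request \
  --http-method PUT \
  --target-uri "https://containerengine.us-phoenix-1.oraclecloud.com/20180222/virtualNodePools/ocid1.virtualnodepool.oc1..<unique_ID>" \
  --request-body '{"displayName": "monitoring-pool", "size": 3}'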