Use Sandbox with gVisor for GKE Cluster Nodes

Trend Cloud One™ – Conformity is a continuous assurance tool that provides peace of mind for your cloud infrastructure, delivering over 1000 automated best practice checks.

Risk Level: Medium (should be achieved)

To enhance security in multi-tenant Google Kubernetes Engine (GKE) environments, ensure that your cluster nodes are using GKE Sandbox with gVisor to isolate untrusted workloads. This provides an extra layer of pod-level isolation and requires nodes to use the Container-Optimized OS with containerd (cos_containerd) image.

Security

GKE Sandbox provides an extra layer of isolation between containers and the underlying host kernel, mitigating the risk of container escape vulnerabilities. This prevents malicious code within a container from affecting the node's operating system, accessing sensitive data, or impacting other containers on the same node. This feature is crucial for scenarios where users upload and execute code, as it significantly reduces the attack surface. In conclusion, enabling GKE Sandbox with gVisor is highly recommended for GKE cluster nodes, especially in multi-tenant environments or when running untrusted workloads.
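
Note that enabling GKE Sandbox on a node pool is not sufficient on its own: each Kubernetes Pod must explicitly request the gVisor runtime through the gvisor RuntimeClass that GKE makes available once the feature is enabled (GKE also taints sandbox-enabled nodes with sandbox.gke.io/runtime=gvisor so that only such Pods are scheduled onto them). Below is a minimal sketch that deploys a sandboxed Pod with kubectl; the Pod name and container image are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
EOF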


Audit

To determine if your Google Kubernetes Engine (GKE) cluster nodes are using GKE Sandbox with gVisor, perform the following operations:

Using GCP Console

01 Sign in to the Google Cloud Management Console.

02 Select the Google Cloud Platform (GCP) project that you want to examine from the console top navigation bar.

03 Navigate to Kubernetes Engine console available at https://console.cloud.google.com/kubernetes.

04 In the left navigation panel, under Resource Management, choose Clusters and select the OVERVIEW tab to access the list of GKE clusters provisioned for the selected GCP project.

05 Click on the name (link) of the GKE cluster that you want to examine.

06 Select the NODES tab to access the node pools created for the selected cluster.

07 Click on the name (link) of the GKE cluster node pool that you want to examine.

08 In the Security section, check the Sandbox with gVisor attribute value to determine if GKE Sandbox with gVisor is used for secure Pod isolation. If Sandbox with gVisor is set to Disabled, the nodes managed by the selected cluster node pool are not protected using GKE Sandbox with gVisor.

09 Repeat steps no. 7 and 8 for each node pool provisioned for the selected GKE cluster.

10 Repeat steps no. 5 – 9 for each GKE cluster provisioned within the selected GCP project.

11 Repeat steps no. 2 – 10 for each GCP project deployed in your Google Cloud account.

Using GCP CLI

01 Run projects list command (Windows/macOS/Linux) with custom output filters to list the ID of each GCP project available in your Google Cloud account:

gcloud projects list
	--format="table(projectId)"

02 The command output should return the requested GCP project IDs:

PROJECT_ID
cc-web-project-123123
cc-dev-project-112233

03 Run container clusters list command (Windows/macOS/Linux) with the ID of the GCP project that you want to examine as the identifier parameter and custom output filters to describe the name and the region of each GKE cluster provisioned for the selected project:

gcloud container clusters list
	--project cc-web-project-123123
	--format="table(NAME,ZONE)"

04 The command output should return the requested cluster names and their regions:

NAME: cc-gke-backend-cluster
ZONE: us-central1

NAME: cc-gke-frontend-cluster
ZONE: us-central1

05 Run container node-pools list command (Windows/macOS/Linux) with the name of the GKE cluster that you want to examine as the identifier parameter, to describe the name of each node pool provisioned for the selected cluster:

gcloud container node-pools list
	--cluster=cc-gke-backend-cluster
	--region=us-central1
	--format="(NAME)"

06 The command output should return the requested GKE node pool names:

NAME:
cc-gke-backend-pool-001
cc-gke-backend-pool-002
cc-gke-backend-pool-003

07 Run container node-pools describe command (Windows/macOS/Linux) with the name of the cluster node pool that you want to examine as the identifier parameter and custom output filters to determine if GKE Sandbox with gVisor is used for secure Pod isolation:

gcloud container node-pools describe cc-gke-backend-pool-001
	--cluster=cc-gke-backend-cluster
	--region=us-central1
	--format="json(config.sandboxConfig)"

08 The command output should return the GKE Sandbox configuration available for the selected node pool:

null

If the container node-pools describe command output returns null, as shown in the example above, the nodes managed by the selected cluster node pool are not protected using GKE Sandbox with gVisor.
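
Conversely, if the sandbox feature is enabled for the node pool, the same command should return a sandbox configuration similar to the one shown below, where GVISOR is the sandbox type reported by the GKE API:

{
	"config": {
		"sandboxConfig": {
			"type": "GVISOR"
		}
	}
}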

09 Repeat steps no. 7 and 8 for each node pool provisioned for the selected GKE cluster.

10 Repeat steps no. 5 - 9 for each GKE cluster provisioned for the selected GCP project.

11 Repeat steps no. 3 – 10 for each GCP project deployed in your Google Cloud account.
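
To audit at scale, the checks above can be combined into a script. The following is a minimal bash sketch, assuming an authenticated gcloud session and regional clusters (for zonal clusters, replace --region with --zone); it prints every node pool for which no sandbox configuration is returned:

# List node pools that are not protected by GKE Sandbox with gVisor.
for project in $(gcloud projects list --format="value(projectId)"); do
	gcloud container clusters list --project "$project" --format="value(name,location)" |
	while read -r cluster location; do
		for pool in $(gcloud container node-pools list --project "$project" --cluster "$cluster" --region "$location" --format="value(name)"); do
			sandbox=$(gcloud container node-pools describe "$pool" --project "$project" --cluster "$cluster" --region "$location" --format="value(config.sandboxConfig.type)")
			if [ -z "$sandbox" ]; then
				echo "No GKE Sandbox: $project/$cluster/$pool"
			fi
		done
	done
done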

Remediation / Resolution

To enable GKE Sandbox with gVisor as an additional layer of isolation for your Kubernetes Pods, perform the following operations:

Using GCP Console

01 Sign in to the Google Cloud Management Console.

02 Select the Google Cloud Platform (GCP) project that you want to access from the console top navigation bar.

03 Navigate to Kubernetes Engine console available at https://console.cloud.google.com/kubernetes.

04 In the left navigation panel, under Resource Management, choose Clusters and select the OVERVIEW tab to access the list of GKE clusters deployed for the selected GCP project.

05 Click on the name (link) of the GKE cluster that you want to configure.

06 Select the NODES tab to access the node pools created for the selected cluster.

07 Click on the name (link) of the GKE node pool that you want to re-create and collect all the configuration information available for the selected resource.

08 Navigate back to the NODES tab, choose ADD NODE POOL, and perform the following actions to create a new GKE node pool:

  1. For Node pool details, provide the following information:
    1. Provide a unique name for the new node pool in the Name box.
    2. Enter the number of nodes for the new pool in the Number of nodes box.
    3. Choose whether or not to enable the cluster autoscaler. Must match the node pool configuration collected at step no. 7.
    4. Choose whether or not to enable private nodes. Selecting the Enable private nodes checkbox will configure the selected GKE cluster to provision nodes with only internal IP addresses (i.e., private nodes), preventing external clients from accessing the cluster nodes.
    5. If required, check the Specify node locations setting checkbox and choose additional node zones.
    6. For Node pool upgrade strategy, configure the Surge Upgrade or Blue-Green Upgrade feature for the new node pool. Must match the node pool configuration collected at step no. 7.
  2. For Nodes, provide the following information:
    1. Select Container-Optimized OS with containerd (cos_containerd) (default) from the Image type dropdown list.
    2. Choose the machine family, type, and series for the new node pool. Select the appropriate boot disk type and size. Must match the node pool configuration collected at step no. 7.
    3. For Boot disk encryption, check the Cloud KMS key setting checkbox and select the Customer-Managed Encryption Key (CMEK) that you want to use for encryption from the Select a Cloud KMS key dropdown list. If the console displays the warning "The service-<project-number>@compute-system.iam.gserviceaccount.com service account does not have the cloudkms.cryptoKeyEncrypterDecrypter role. Verify the service account has permission to encrypt/decrypt with the selected key", choose GRANT to grant the specified service account the required IAM role on the selected CMEK.
  3. For Networking, specify the maximum number of Kubernetes Pods per node in the Maximum Pods per node box, and choose whether or not to automatically create secondary ranges.
  4. For Security, provide the following information:
    1. Choose the service account required by the cluster node pool from the Service account dropdown list.
    2. Select the appropriate access scopes. Must match the node pool configuration collected at step no. 7.
    3. Check the Enable sandbox with gVisor setting checkbox to enable GKE Sandbox with gVisor as an additional layer of isolation for the Kubernetes Pods.
    4. Under Shielded options, perform the following actions:
      1. Check the Enable secure boot setting checkbox to enable the Secure Boot feature for all the cluster nodes within the new node pool.
      2. Check the Enable integrity monitoring setting checkbox to enable the Integrity Monitoring feature for the new cluster node pool.
      3. Check the Enable Confidential GKE Nodes setting checkbox if you want to encrypt your Kubernetes workload data in-use, using confidential GKE nodes.
  5. For Metadata, add any required resource labels (tags), and configure the metadata settings such as GCE instance metadata based on the configuration information taken from the source node pool at step no. 7.
  6. Choose CREATE to create the new GKE cluster node pool.
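
Before removing the source node pool (step no. 9 below), you may want to shift the running workloads onto the new, sandbox-enabled node pool. The following is a minimal kubectl sketch, assuming cluster credentials are already configured and using the illustrative source pool name from the CLI examples in this article:

# Cordon and drain every node in the source node pool so that
# workloads reschedule onto the remaining node pools.
kubectl cordon -l cloud.google.com/gke-nodepool=cc-gke-backend-pool-001
kubectl drain -l cloud.google.com/gke-nodepool=cc-gke-backend-pool-001
	--ignore-daemonsets --delete-emptydir-data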

09 (Optional) Once the new cluster node pool is operating successfully, you can remove the source node pool in order to stop adding charges to your Google Cloud monthly bill. Go back to the NODES tab and perform the following actions:

  1. Click on the name (link) of the source node pool that you want to delete.
  2. Choose DELETE from the console top menu to initiate the removal process.
  3. In the confirmation box, type the node pool name in the required text box, and choose DELETE to confirm the node pool removal.

10 Repeat steps no. 7 - 9 to enable GKE Sandbox with gVisor for other node pools provisioned within the selected GKE cluster.

11 Repeat steps no. 5 – 10 for each GKE cluster that you want to configure, created for the selected GCP project.

12 Repeat steps no. 2 – 11 for each GCP project available in your Google Cloud account.

Using GCP CLI

01 Run container node-pools describe command (Windows/macOS/Linux) with the name of the GKE node pool that you want to re-create as the identifier parameter and custom output filters to describe the configuration information available for the selected node pool:

gcloud container node-pools describe cc-gke-backend-pool-001
	--cluster=cc-gke-backend-cluster
	--region=us-central1
	--format=json

02 The command output should return the requested configuration information:

{
	"config": {
		"diskSizeGb": 150,
		"diskType": "pd-standard",
		"imageType": "COS",
		"metadata": {
			"disable-legacy-endpoints": "true"
		},
		"serviceAccount": "default",
		"shieldedInstanceConfig": {
			"enableSecureBoot": true
		}
	},
	"locations": [
		"us-central1-b",
		"us-central1-c"
	],

	...

	"management": {
		"autoRepair": true,
		"autoUpgrade": true
	},
	"maxPodsConstraint": {
		"maxPodsPerNode": "110"
	},
	"name": "cc-gke-backend-pool-001",
	"podIpv4CidrSize": 24,
	"status": "RUNNING",
	"upgradeSettings": {
		"maxSurge": 1
	},
	"version": "1.30.6-gke.1"
}

03 Run container node-pools create command (Windows/macOS/Linux) with the information returned at the previous step as the configuration data for the command parameters, to create a new node pool with the GKE Sandbox with gVisor feature, by including the --sandbox="type=gvisor" parameter in the command request:

gcloud container node-pools create cc-new-backend-pool-001
	--cluster=cc-gke-backend-cluster
	--region=us-central1
	--image-type=cos_containerd
	--node-locations=us-central1-b,us-central1-c
	--machine-type=e2-standard-2
	--disk-type=pd-standard
	--disk-size=150
	--shielded-secure-boot
	--sandbox="type=gvisor"

04 The command output should return the full URL of the new GKE node pool:

Created [https://container.googleapis.com/v1/projects/cc-web-project-123123/zones/us-central1/clusters/cc-gke-backend-cluster/nodePools/cc-new-backend-pool-001].
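
To confirm that the new node pool has the sandbox feature enabled, you can re-run the container node-pools describe command from the Audit section against the new pool:

gcloud container node-pools describe cc-new-backend-pool-001
	--cluster=cc-gke-backend-cluster
	--region=us-central1
	--format="json(config.sandboxConfig)"

Instead of null, the output should now include a sandboxConfig attribute with the type set to GVISOR.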

05 (Optional) Once the new node pool is operating successfully, you can remove the source node pool in order to stop adding charges to your GCP bill. Run container node-pools delete command (Windows/macOS/Linux) using the name of the resource that you want to remove as the identifier parameter, to remove the specified node pool from your GKE cluster:

gcloud container node-pools delete cc-gke-backend-pool-001
	--cluster=cc-gke-backend-cluster
	--region=us-central1

06 Type Y and press Enter to confirm the cluster node pool removal:

The following node pool will be deleted.
[cc-gke-backend-pool-001] in cluster [cc-gke-backend-cluster] in [us-central1]
Do you want to continue (Y/n)?  Y

07 The output should return the container node-pools delete command request status:

Deleting node pool cc-gke-backend-pool-001... done.
Deleted [https://container.googleapis.com/v1/projects/cc-web-project-123123/zones/us-central1/clusters/cc-gke-backend-cluster/nodePools/cc-gke-backend-pool-001].

08 Repeat steps no. 1 - 4 to enable GKE Sandbox with gVisor for other node pools provisioned for the selected GKE cluster.

09 Repeat steps no. 1 - 8 for each GKE cluster that you want to configure, available within the selected GCP project.

10 Repeat steps no. 1 – 9 for each GCP project deployed in your Google Cloud account.

Publication date Jan 6, 2025