
Enable Integrity Monitoring for Cluster Nodes


Risk Level: Medium (should be achieved)

Ensure that the Integrity Monitoring feature is enabled for your Google Kubernetes Engine (GKE) cluster nodes in order to monitor and automatically check the runtime boot integrity of your shielded cluster nodes using the Google Cloud Monitoring service.

Security

Integrity Monitoring enables monitoring and attestation of the boot integrity for your GKE cluster nodes. The attestation is performed against the integrity policy baseline. This baseline is initially derived from the implicitly trusted boot image when the cluster node is created. To protect your application data and ensure that the boot loader on your GKE cluster nodes remains untampered, it is strongly recommended to enable Integrity Monitoring for all cluster nodes.
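
When Integrity Monitoring is enabled, the boot integrity measurements of the underlying Shielded VM instances are validated against the baseline and the results are reported through Cloud Logging. As an illustrative check only (the log name below is the standard Shielded VM integrity log and <project-id> is a placeholder for your own GCP project ID), the most recent integrity events can be listed with the gcloud CLI:

gcloud logging read 'logName:"compute.googleapis.com%2Fshielded_vm_integrity"'
  --project=<project-id>
  --limit=10
  --format=json

Each returned entry typically contains an earlyBootReportEvent or lateBootReportEvent payload indicating whether the measured boot values passed validation against the integrity policy baseline.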


Audit

To determine if the Integrity Monitoring feature is enabled for all your GKE cluster nodes, perform the following operations:

Using GCP Console

01 Sign in to the Google Cloud Management Console.

02 Select the GCP project that you want to examine from the console top navigation bar.

03 Navigate to Google Kubernetes Engine (GKE) console at https://console.cloud.google.com/kubernetes.

04 In the main navigation panel, under Kubernetes Engine, select Clusters to access the list with the GKE clusters provisioned within the selected project.

05 Click on the name (link) of the GKE cluster that you want to examine.

06 Select the NODES tab to access the node pools created for the selected cluster.

07 Click on the name (link) of the cluster node pool that you want to examine.

08 In the Security section, check the Integrity monitoring feature status. If Integrity monitoring is set to Disabled, the Integrity Monitoring feature is not enabled for the nodes running within the selected Google Kubernetes Engine (GKE) cluster node pool.

09 Repeat steps no. 7 and 8 for each node pool provisioned for the selected GKE cluster.

10 Repeat steps no. 5 – 9 for each GKE cluster created for the selected GCP project.

11 Repeat steps no. 2 – 10 for each project deployed within your Google Cloud account.

Using GCP CLI

01 Run projects list command (Windows/macOS/Linux) with custom output filters to list the ID of each project available in your Google Cloud account:

gcloud projects list
  --format="table(projectId)"

02 The command output should return the requested GCP project ID(s):

PROJECT_ID
cc-bigdata-project-123123
cc-appdata-project-112233

03 Run container clusters list command (Windows/macOS/Linux) using the ID of the GCP project that you want to examine as the identifier parameter and custom output filters to describe the name and the region of each GKE cluster created for the selected project:

gcloud container clusters list
  --project cc-bigdata-project-123123
  --format="(NAME,LOCATION)"

04 The command output should return the requested GKE cluster names and their regions:

NAME                       LOCATION
cc-gke-analytics-cluster   us-central1
cc-gke-operations-cluster  us-central1

05 Run container node-pools list command (Windows/macOS/Linux) using the name of the GKE cluster that you want to examine as the identifier parameter, to describe the name of each node pool provisioned for the selected cluster:

gcloud container node-pools list
  --cluster=cc-gke-analytics-cluster
  --region=us-central1
  --format="(NAME)"

06 The command output should return the requested cluster node pool name(s):

NAME
cc-gke-dev-pool-001
cc-gke-dev-pool-002

07 Run container node-pools describe command (Windows/macOS/Linux) using the name of the cluster node pool that you want to examine as the identifier parameter and custom output filtering to describe the Integrity Monitoring feature status for the selected node pool:

gcloud container node-pools describe cc-gke-dev-pool-001
  --cluster=cc-gke-analytics-cluster
  --region=us-central1
  --format="yaml(config.shieldedInstanceConfig.enableIntegrityMonitoring)"

08 The command output should return the requested feature configuration status:

config:
  shieldedInstanceConfig: {}

If the container node-pools describe command output returns null, or an empty object for the config.shieldedInstanceConfig configuration attribute (i.e. {}), as shown in the output example above, the Integrity Monitoring feature is not enabled for the nodes running within the selected Google Kubernetes Engine (GKE) cluster node pool.
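
For reference, when the Integrity Monitoring feature is enabled for the verified node pool, the same container node-pools describe command would typically return the following configuration:

config:
  shieldedInstanceConfig:
    enableIntegrityMonitoring: true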

09 Repeat steps no. 7 and 8 for each node pool provisioned for the selected GKE cluster.

10 Repeat steps no. 5 – 9 for each GKE cluster created for the selected GCP project.

11 Repeat steps no. 3 – 10 for each GCP project deployed in your Google Cloud account.
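
To speed up the audit, steps no. 3 – 8 can be scripted across all projects, clusters, and node pools. The following Bash sketch is illustrative only: it assumes regional GKE clusters (replace --region with --zone for zonal clusters) and reuses the gcloud commands shown above; an empty enableIntegrityMonitoring value indicates that the feature is not enabled:

#!/bin/bash
# Audit the Integrity Monitoring status for every GKE node pool in every accessible project.
for project in $(gcloud projects list --format="value(projectId)"); do
  gcloud container clusters list --project "$project" --format="value(name,location)" |
  while read -r cluster location; do
    for pool in $(gcloud container node-pools list --cluster "$cluster" --region "$location" --project "$project" --format="value(name)"); do
      # An empty value means config.shieldedInstanceConfig.enableIntegrityMonitoring is not set,
      # i.e. the Integrity Monitoring feature is not enabled for the node pool.
      status=$(gcloud container node-pools describe "$pool" --cluster "$cluster" --region "$location" --project "$project" --format="value(config.shieldedInstanceConfig.enableIntegrityMonitoring)")
      echo "$project / $cluster / $pool : enableIntegrityMonitoring=${status:-NOT ENABLED}"
    done
  done
done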

Remediation / Resolution

To enable the Integrity Monitoring feature for your Google Kubernetes Engine (GKE) cluster nodes, you have to re-create the existing cluster node pools with the appropriate monitoring configuration by performing the following operations:

Using GCP Console

01 Sign in to the Google Cloud Management Console.

02 Select the GCP project that you want to examine from the console top navigation bar.

03 Navigate to Google Kubernetes Engine (GKE) console at https://console.cloud.google.com/kubernetes.

04 In the main navigation panel, under Kubernetes Engine, select Clusters.

05 Click on the name (link) of the GKE cluster that you want to reconfigure.

06 Select the NODES tab to access the node pools created for the selected cluster.

07 Click on the name (link) of the cluster node pool that you want to re-create and collect all the configuration information available for the selected resource.

08 Go back to the NODES tab and choose ADD NODE POOL to initiate the setup process.

09 On the Add a node pool setup page, perform the following actions:

  1. For Node pool details, provide the following information:
    • Provide a unique name for the new node pool in the Name box.
    • Enter the number of nodes for the new pool in the Number of nodes box.
    • Choose whether or not to enable cluster auto-scaler. Must match the node pool configuration collected at step no. 7.
    • (Optional) If required, select the Specify node locations checkbox and choose additional zones for the pool nodes.
    • (Optional) If required, configure the Surge Upgrade feature for the new node pool. Must match the node pool configuration collected at step no. 7.
  2. For Nodes, provide the following information:
    • Select the type of the node image from the Image type dropdown list.
    • Choose the machine family, type, and series for the new node pool. Select the appropriate boot disk type and size. Must match the node pool configuration collected at step no. 7.
    • (Optional) If required, select the Enable customer-managed encryption for boot disk checkbox and choose the Customer-Managed Key (CMK) that you want to use for data encryption from the Select a customer-managed key dropdown list. If your CMK does not appear in the dropdown list, select DON'T SEE YOUR KEY? ENTER KEY RESOURCE NAME and provide the full resource ID of your key. If the console displays a warning stating that the service-<project-number>@compute-system.iam.gserviceaccount.com service account does not have the "cloudkms.cryptoKeyEncrypterDecrypter" role, choose GRANT to grant the specified service account the required IAM role on the selected CMK.
    • Enter the maximum number of Kubernetes Pods per node in the Maximum Pods per node box.
  3. For Security, provide the following information:
    • Choose the service account required by the cluster node pool from the Service account dropdown list.
    • Select the appropriate access scope(s). Must match the node pool configuration collected at step no. 7.
    • Under Shielded options, select the Enable integrity monitoring checkbox to enable the Integrity Monitoring feature for all the cluster nodes operating within the new node pool. Select Enable secure boot to enable the Secure Boot feature as well for the new node pool.
  4. For Metadata, add any required resource labels (tags), and configure the metadata settings such as GCE instance metadata based on the configuration information taken from the source node pool at step no. 7.
  5. Choose CREATE to create the new cluster node pool.

10 (Optional) Once the new cluster node pool is operating successfully, you can remove the source node pool in order to stop adding charges to your Google Cloud bill. Go back to the NODES tab and perform the following actions:

  1. Click on the name (link) of the source node pool that you want to delete.
  2. Choose DELETE from the console top menu to initiate the removal process.
  3. In the confirmation box, choose DELETE to confirm the node pool deletion.

11 Repeat steps no. 7 – 10 to enable the Integrity Monitoring feature for other node pools provisioned within the selected GKE cluster.

12 Repeat steps no. 5 – 11 for each GKE cluster that you want to reconfigure, created for the selected GCP project.

13 Repeat steps no. 2 – 12 for each GCP project available in your Google Cloud account.

Using GCP CLI

01 Run container node-pools describe command (Windows/macOS/Linux) using the name of the cluster node pool that you want to re-create as the identifier parameter and custom output filtering to describe the configuration information available for the selected node pool:

gcloud container node-pools describe cc-gke-dev-pool-001
  --cluster=cc-gke-analytics-cluster
  --region=us-central1
  --format=json

02 The command output should return the requested configuration information:

{
  "config": {
    "diskSizeGb": 150,
    "diskType": "pd-standard",
    "imageType": "COS",
    "metadata": {
      "disable-legacy-endpoints": "true"
    },
    "serviceAccount": "default",
    "shieldedInstanceConfig": {
      "enableSecureBoot": true
    }
  },
  "locations": [
    "us-central1-b",
    "us-central1-c"
  ],

  ...

  "management": {
    "autoRepair": true,
    "autoUpgrade": true
  },
  "maxPodsConstraint": {
    "maxPodsPerNode": "110"
  },
  "name": "cc-gke-dev-pool-001",
  "podIpv4CidrSize": 24,
  "status": "RUNNING",
  "upgradeSettings": {
    "maxSurge": 1
  },
  "version": "1.15.12-gke.2"
}

03 Run container node-pools create command (Windows/macOS/Linux) using the information returned at the previous step as the configuration data for the command parameters, to create a new GKE cluster node pool and enable the Integrity Monitoring feature for the new GKE resource by including the --shielded-integrity-monitoring parameter in the command request:

gcloud beta container node-pools create cc-gke-new-dev-pool-001
  --cluster=cc-gke-analytics-cluster
  --region=us-central1
  --node-locations=us-central1-b,us-central1-c
  --machine-type=e2-standard-2
  --disk-type=pd-standard
  --disk-size=150
  --shielded-integrity-monitoring

04 The command output should return the full URL of the new cluster node pool:

Created [https://container.googleapis.com/v1/projects/cc-bigdata-project-123123/zones/us-central1/clusters/cc-gke-analytics-cluster/nodePools/cc-gke-new-dev-pool-001].
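
Note that the command shown at step no. 3 carries over only part of the source node pool configuration. Depending on the configuration information returned at step no. 2, additional parameters may be required. The following sketch, with illustrative values taken from the example output above, shows how other settings such as the node image type, instance metadata, Pod limit, surge upgrade, and node management options can be mapped to command flags:

gcloud beta container node-pools create cc-gke-new-dev-pool-001
  --cluster=cc-gke-analytics-cluster
  --region=us-central1
  --node-locations=us-central1-b,us-central1-c
  --machine-type=e2-standard-2
  --image-type=COS
  --disk-type=pd-standard
  --disk-size=150
  --metadata=disable-legacy-endpoints=true
  --max-pods-per-node=110
  --max-surge-upgrade=1
  --enable-autorepair
  --enable-autoupgrade
  --shielded-secure-boot
  --shielded-integrity-monitoring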
 

05 (Optional) Once the new node pool is operating successfully, you can remove the source node pool in order to stop adding charges to your GCP bill. Run container node-pools delete command (Windows/macOS/Linux) using the name of the resource that you want to remove as the identifier parameter, to remove the specified node pool from your GKE cluster:

gcloud container node-pools delete cc-gke-dev-pool-001
  --cluster=cc-gke-analytics-cluster
  --region=us-central1

06 Type Y to confirm the cluster node pool removal:

The following node pool will be deleted.
[cc-gke-dev-pool-001] in cluster [cc-gke-analytics-cluster] in [us-central1]
Do you want to continue (Y/n)?  Y

07 The output should return the container node-pools delete command request status:

Deleting node pool cc-gke-dev-pool-001...done.
Deleted [https://container.googleapis.com/v1/projects/cc-bigdata-project-123123/zones/us-central1/clusters/cc-gke-analytics-cluster/nodePools/cc-gke-dev-pool-001].
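
(Optional) To confirm that the remediation was successful, re-run the container node-pools describe command from the Audit section against the newly created node pool (the pool name below matches the example used at step no. 3):

gcloud container node-pools describe cc-gke-new-dev-pool-001
  --cluster=cc-gke-analytics-cluster
  --region=us-central1
  --format="yaml(config.shieldedInstanceConfig.enableIntegrityMonitoring)"

The command output should now return enableIntegrityMonitoring: true for the config.shieldedInstanceConfig configuration attribute.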

08 Repeat steps no. 1 – 7 to enable the Integrity Monitoring feature for other node pools available within the selected GKE cluster.

09 Repeat steps no. 1 – 8 for each GKE cluster that you want to reconfigure, created for the selected GCP project.

10 Repeat steps no. 1 – 9 for each GCP project deployed in your Google Cloud account.
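
Additionally, for new GKE clusters, the Integrity Monitoring feature can be enabled for the default node pool at creation time. The following example is illustrative (the cluster name is a placeholder and the flags require a reasonably recent gcloud release); the --shielded-integrity-monitoring parameter is passed directly to the container clusters create command:

gcloud container clusters create cc-gke-new-cluster
  --region=us-central1
  --shielded-secure-boot
  --shielded-integrity-monitoring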


Publication date May 10, 2021