End of Life Notice: For Trend Cloud One™ - Conformity customers, Conformity will reach its End of Sale on July 31st, 2025 and End of Life on July 31st, 2026. The same capabilities and much more are available in TrendAI Vision One™ Cloud Risk Management. For details, please refer to Upgrade to TrendAI Vision One™.

Check for Kubelet Configuration File Ownership

TrendAI Vision One™ provides continuous assurance that gives peace of mind for your cloud infrastructure, delivering over 1400 automated best practice checks.

Risk Level: Medium (should be achieved)

Ensure that the Kubelet configuration file ownership is set to "root:root" as only the root user and group should be able to read or modify the kubelet.conf file. This prevents unauthorized changes to the Kubelet configuration file, thereby maintaining the integrity and security of the worker node.

Security

kubelet.conf is the Kubeconfig file on a Kubernetes cluster worker node. The file contains the credentials and API server details the Kubelet uses to authenticate and communicate with the Kubernetes control plane. Because the Kubelet runs with elevated privileges, the kubelet.conf file contains sensitive authentication credentials, and restricting ownership to root:root prevents unauthorized modification, reduces the risk of privilege escalation, and strengthens the node's overall security posture. For security and compliance, kubelet.conf must only be writable by system administrators (typically root).


Audit

To determine the file ownership set for the Kubelet configuration file (kubelet.conf), perform the following operations:

Using OCI Console

  1. Sign in to your Oracle Cloud Infrastructure (OCI) account.

  2. Navigate to Kubernetes Clusters (OKE) console available at https://cloud.oracle.com/containers/clusters.

  3. For Applied filters, choose an OCI compartment from the Compartment dropdown menu to list the OCI Kubernetes Engine (OKE) clusters provisioned in the selected compartment.

  4. Click on the name (link) of the OCI Kubernetes Engine (OKE) cluster that you want to examine, listed in the Name column.

  5. Select the Node pools tab and click on the name (link) of the node pool that you want to examine.

  6. Select the Nodes tab and click on the name (link) of the node (instance) that you want to examine.

  7. Select the Details tab and choose Copy next to Public IP address, in the Instance access section, to get the public IP address of your OKE cluster node.

  8. Use your preferred method to open an SSH connection to the selected cluster node. For the public IP address, use the IP address copied in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  9. Once connected to your OKE cluster worker node, run the commands listed below to determine the Kubelet configuration file ownership:

    1. Run the following command to determine if the Kubelet service is running:
      	sudo systemctl status kubelet
      	
    2. The command output should return Active: active (running).
    3. Run the following command to find the Kubelet configuration file for your node:
      	ps -ef | grep kubelet
      	
    4. If the --config flag is present in the command output, its value is the path to the Kubelet configuration file, such as /etc/kubernetes/kubelet.conf.
    5. Run the following command to obtain the kubelet.conf file ownership:
      	stat -c %U:%G /etc/kubernetes/kubelet.conf
      	
    6. The output should return the Kubelet configuration file's ownership. For compliance, the kubelet.conf file ownership must be set to root:root.
  10. Repeat steps no. 6 - 9 for each worker node running within the selected node pool.

  11. Repeat steps no. 5 - 10 for each node pool created for the selected OKE cluster.
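
The per-node checks in step no. 9 can be combined into a single helper script run on each worker node after connecting over SSH. This is an illustrative sketch, not part of any official tooling: the function name is made up, and the default path is an assumption — pass the path reported by ps -ef | grep kubelet if yours differs.

```shell
#!/bin/sh
# Illustrative audit helper (hypothetical name): reports whether a kubelet
# configuration file is owned by root:root. Run it on the worker node after
# confirming the service is active with `sudo systemctl status kubelet`.
check_kubelet_conf_ownership() {
    conf="${1:-/etc/kubernetes/kubelet.conf}"   # assumed default path
    owner="$(stat -c %U:%G "$conf" 2>/dev/null)" || {
        echo "ERROR: cannot stat $conf"
        return 2
    }
    if [ "$owner" = "root:root" ]; then
        echo "COMPLIANT: $conf is owned by $owner"
    else
        echo "NON-COMPLIANT: $conf is owned by $owner (expected root:root)"
        return 1
    fi
}
```

A non-zero return value flags a node that needs remediation, which makes the helper easy to use from a loop or a CI job.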

Using OCI CLI

  1. Run iam compartment list command (Windows/macOS/Linux) with output query filters to list the ID of each compartment available in your Oracle Cloud Infrastructure (OCI) account:

    oci iam compartment list
    	--all
    	--include-root
    	--query 'data[]."id"'
    
  2. The command output should return the requested OCI compartment identifiers (OCIDs):

    [
    	"ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.compartment.oc1..abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  3. Run ce cluster list command (Windows/macOS/Linux) with the ID of the OCI compartment that you want to examine as the identifier parameter, to list the ID of each OCI Kubernetes Engine (OKE) cluster available in the selected OCI compartment:

    oci ce cluster list
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--all
    	--query 'data[]."id"'
    
  4. The command output should return the requested OKE cluster IDs:

    [
    	"ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.cluster.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  5. Run ce node-pool list command (Windows/macOS/Linux) with the ID of the OKE cluster that you want to examine as the identifier parameter, to list the ID of each node pool created for your OKE cluster:

    oci ce node-pool list
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--cluster-id 'ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--query 'data[]."id"'
    
  6. The command output should return the OKE node pool IDs:

    [
    	"ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.nodepool.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  7. Run ce node-pool get command (Windows/macOS/Linux) to describe the public IP address of each worker node running within the selected OKE node pool:

    oci ce node-pool get
    	--node-pool-id 'ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--query 'data.nodes[]."public-ip"'
    
  8. The command output should return the public IP address of each OKE cluster worker node (instance):

    [
    	"<public-ip-node-1>",
    	"<public-ip-node-2>",
    	"<public-ip-node-3>"
    ]
    
  9. Use your preferred method to open an SSH connection to your OKE cluster worker node. For the public IP address, use the IP address returned in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  10. Once connected to your OKE cluster node, run the commands listed below to determine the Kubelet configuration file ownership:

    1. Run the following command to determine if the Kubelet service is running:
      	sudo systemctl status kubelet
      	
    2. The output should return Active: active (running).
    3. Run the following command to find the Kubelet configuration file for your node:
      	ps -ef | grep kubelet
      	
    4. If the --config flag is present in the command output, its value is the path to the Kubelet configuration file, such as /etc/kubernetes/kubelet.conf.
    5. Run the following command to obtain the kubelet.conf file ownership:
      	stat -c %U:%G /etc/kubernetes/kubelet.conf
      	
    6. The output should return the Kubelet configuration file's ownership. For compliance, the kubelet.conf file ownership must be set to root:root.
  11. Repeat steps no. 9 and 10 for each worker node running within the selected node pool.

  12. Repeat steps no. 7 - 11 for each node pool created for the selected OKE cluster.
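
Steps no. 7 - 10 above can also be chained into one loop so that every node in a pool is checked in a single pass. The sketch below assumes key-based SSH access as the opc user and the default /etc/kubernetes/kubelet.conf path; both are assumptions to adjust for your environment, and the function name is illustrative.

```shell
#!/bin/sh
# Illustrative sketch: list the public IPs of a node pool (step 7), then
# query the kubelet.conf ownership on each node over SSH (steps 9-10).
audit_node_pool() {
    pool_id="$1"
    # Strip the JSON brackets/quotes/commas so the IPs iterate as words.
    ips="$(oci ce node-pool get \
        --node-pool-id "$pool_id" \
        --query 'data.nodes[]."public-ip"' | tr -d '[]",')"
    for ip in $ips; do
        owner="$(ssh "opc@$ip" stat -c %U:%G /etc/kubernetes/kubelet.conf)"
        echo "$ip $owner"   # compliant nodes report root:root
    done
}
```

Each output line pairs a node's public IP with the reported ownership; any line not ending in root:root identifies a node that needs remediation.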

Remediation / Resolution

To ensure that the file ownership for the Kubelet configuration file defined for your OKE cluster worker nodes is set to root:root, perform the following operations:

Using OCI Console

  1. Sign in to your Oracle Cloud Infrastructure (OCI) account.

  2. Navigate to Kubernetes Clusters (OKE) console available at https://cloud.oracle.com/containers/clusters.

  3. For Applied filters, choose an OCI compartment from the Compartment dropdown menu to list the OCI Kubernetes Engine (OKE) clusters provisioned in the selected compartment.

  4. Click on the name (link) of the OCI Kubernetes Engine (OKE) cluster that you want to configure, listed in the Name column.

  5. Select the Node pools tab and click on the name (link) of the node pool that you want to access.

  6. Select the Nodes tab and click on the name (link) of the node (instance) that you want to configure.

  7. Select the Details tab and choose Copy next to Public IP address, in the Instance access section, to get the public IP address of your OKE cluster node.

  8. Use your preferred method to open an SSH connection to the selected cluster node. For the public IP address, use the IP address copied in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  9. Once connected to your OKE cluster worker node, run the following command to set the file ownership for the Kubelet configuration file to root:root (recommended):

    sudo chown root:root /etc/kubernetes/kubelet.conf
    
  10. Repeat steps no. 6 - 9 for each worker node running within the selected node pool.

  11. Repeat steps no. 5 - 10 for each node pool deployed for the selected OKE cluster.
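
Step no. 9 can be made idempotent with a small script that changes ownership only when it is wrong and then verifies the result. This is a sketch with a hypothetical function name and an assumed default path; run it as root (for example via sudo) on each worker node.

```shell
#!/bin/sh
# Illustrative remediation helper (hypothetical name): sets root:root
# ownership on the kubelet configuration file only when it is wrong,
# then verifies the change. Run as root (e.g. via sudo) on the node.
fix_kubelet_conf_ownership() {
    conf="${1:-/etc/kubernetes/kubelet.conf}"   # assumed default path
    current="$(stat -c %U:%G "$conf" 2>/dev/null)" || return 2
    if [ "$current" != "root:root" ]; then
        echo "Changing $conf ownership from $current to root:root"
        chown root:root "$conf" || return 1
    fi
    # Confirm the remediation took effect.
    [ "$(stat -c %U:%G "$conf")" = "root:root" ]
}
```

Because the script only calls chown when the ownership is wrong, it is safe to re-run on nodes that are already compliant.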

Using OCI CLI

  1. Run iam compartment list command (Windows/macOS/Linux) with output query filters to list the ID of each compartment available in your Oracle Cloud Infrastructure (OCI) account:

    oci iam compartment list
    	--all
    	--include-root
    	--query 'data[]."id"'
    
  2. The command output should return the requested OCI compartment identifiers (OCIDs):

    [
    	"ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.compartment.oc1..abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  3. Run ce cluster list command (Windows/macOS/Linux) with the ID of the OCI compartment that you want to examine as the identifier parameter, to list the ID of each OCI Kubernetes Engine (OKE) cluster available in the selected OCI compartment:

    oci ce cluster list
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--all
    	--query 'data[]."id"'
    
  4. The command output should return the requested OKE cluster IDs:

    [
    	"ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.cluster.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  5. Run ce node-pool list command (Windows/macOS/Linux) with the ID of the OKE cluster that you want to examine as the identifier parameter, to list the ID of each node pool created for your OKE cluster:

    oci ce node-pool list
    	--compartment-id 'ocid1.tenancy.oc1..aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--cluster-id 'ocid1.cluster.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--query 'data[]."id"'
    
  6. The command output should return the OKE node pool IDs:

    [
    	"ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd",
    	"ocid1.nodepool.oc1.ap-sydney-1.aaaabbbbccccddddabcd1234abcd1234abcd1234abcd1234abcd1234abcd"
    ]
    
  7. Run ce node-pool get command (Windows/macOS/Linux) to describe the public IP address of each worker node running within the selected OKE node pool:

    oci ce node-pool get
    	--node-pool-id 'ocid1.nodepool.oc1.ap-sydney-1.abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd'
    	--query 'data.nodes[]."public-ip"'
    
  8. The command output should return the public IP address of each OKE cluster worker node (instance):

    [
    	"<public-ip-node-1>",
    	"<public-ip-node-2>",
    	"<public-ip-node-3>"
    ]
    
  9. Use your preferred method to open an SSH connection to your OKE cluster worker node. For the public IP address, use the IP address returned in the previous step. The default username is opc for Oracle Linux and Red Hat Enterprise Linux compatible images, as well as Windows platform images. For Ubuntu images, the default username is ubuntu. See Connecting to an Instance for more details.

  10. Once connected to your OKE cluster worker node, run the following command to set the file ownership for the Kubelet configuration file to root:root (recommended):

    sudo chown root:root /etc/kubernetes/kubelet.conf
    
  11. Repeat steps no. 9 and 10 for each worker node running within the selected node pool.

  12. Repeat steps no. 7 - 11 for each node pool deployed for the selected OKE cluster.
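
As with the audit, steps no. 7 - 11 can be chained into one loop that remediates every node in a pool. The sketch below again assumes key-based SSH access as opc and the default configuration path; adjust both for your environment, and note that the function name is illustrative.

```shell
#!/bin/sh
# Illustrative sketch: fetch each node's public IP (step 7), then set the
# kubelet.conf ownership to root:root over SSH (step 10) on every node.
remediate_node_pool() {
    pool_id="$1"
    # Strip the JSON brackets/quotes/commas so the IPs iterate as words.
    ips="$(oci ce node-pool get \
        --node-pool-id "$pool_id" \
        --query 'data.nodes[]."public-ip"' | tr -d '[]",')"
    for ip in $ips; do
        # sudo is required when connecting as the non-root opc user.
        ssh "opc@$ip" sudo chown root:root /etc/kubernetes/kubelet.conf \
            && echo "$ip remediated"
    done
}
```

After the loop completes, re-run the audit steps to confirm every node now reports root:root.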


Publication date Dec 1, 2025