Enabling Auditing in Kubernetes with kubeadm

28 December, 2020

If you are going to be provisioning and hosting a production Kubernetes cluster with kubeadm, whether for your company or for yourself, you should be considering how you will secure it. One of the most important parts of a Kubernetes cluster to secure is the API server: it sits at the core of the Kubernetes control plane and allows users and other parts of the cluster to query and manipulate the state of API objects in Kubernetes (for example: Pods, Namespaces, ConfigMaps, and Events).

As part of securing the API you will want to enable auditing. Auditing allows cluster administrators to answer questions such as: what happened, when did it happen, who initiated it, on what did it happen, where was it observed, from where was it initiated, and to where was it going?

The Kubernetes docs provide a fairly straightforward guide on how to configure this manually, but they lack the steps needed to achieve it when deploying a Kubernetes cluster with a kubeadm config file; using the config file makes it easy to deploy a cluster with a tool like Ansible. In this guide I will provide the steps and example configuration you need to enable auditing in your own Kubernetes cluster.
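For example, if you are provisioning with Ansible, the copy-then-init flow might look roughly like the sketch below. The file locations and task structure here are assumptions for illustration, not part of any standard role:

- name: Ensure the policies directory exists
  file:
    path: /etc/kubernetes/policies
    state: directory
    mode: "0700"

- name: Copy the audit policy to the control-plane node
  copy:
    src: audit-policy.yaml
    dest: /etc/kubernetes/policies/audit-policy.yaml
    mode: "0600"

- name: Copy the kubeadm configuration
  copy:
    src: config.yaml
    dest: /root/config.yaml
    mode: "0600"

- name: Initialise the control plane
  command: kubeadm init --config=/root/config.yaml
  args:
    creates: /etc/kubernetes/admin.conf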

Audit Policy

At the heart of auditing in Kubernetes is the audit policy. An audit policy defines rules about which events should be recorded and what data they should include. I won't go into too much detail about audit policies here, but you can find an example on the Kubernetes docs website, and I provide one below (see the example rule that follows, and the full policy later in this guide) for you to get started with.
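To give a flavour of what a rule looks like, the hypothetical snippet below records full request and response bodies for changes to Deployments in a single namespace; the namespace and resource choices are only examples:

apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Record request and response bodies for Deployment changes in the "production" namespace.
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    namespaces: ["production"]
    resources:
    - group: "apps"
      resources: ["deployments"]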

kubeadm

kubeadm init provides a --config flag which can be used to pass a configuration file containing all of your custom control-plane configuration, rather than specifying each flag on the command line. You can find out more about this in the Kubernetes docs. Below I will provide the basic configuration you need to enable auditing.
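If you don't already have a config file, kubeadm can print a set of defaults to use as a starting point; redirecting the output to config.yaml is just the convention I use here:

$ kubeadm config print init-defaults > config.yaml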

Deploying Auditing with kubeadm

First we need to create a file called audit-policy.yaml containing the policy below. This file needs to be copied to your Kubernetes master node (or nodes); if you are using Ansible or a similar tool, be sure to copy this file over before you run kubeadm init --config=config.yaml.

$ mkdir /etc/kubernetes/policies
$ touch /etc/kubernetes/policies/audit-policy.yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Do not log from kube-system accounts
  - level: None
    userGroups:
    - system:serviceaccounts:kube-system
  - level: None
    users:
    - system:apiserver
    - system:kube-scheduler
    - system:volume-scheduler
    - system:kube-controller-manager
    - system:node

  # Do not log from collector
  - level: None
    users:
    - system:serviceaccount:collectorforkubernetes:collectorforkubernetes

  # Don't log node communications
  - level: None
    userGroups:
    - system:nodes

  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
    - /healthz*
    - /version
    - /swagger*

  # Log requests for secrets and configmaps in all namespaces at the Metadata level.
  - level: Metadata
    resources:
    - resources: ["secrets", "configmaps"]

  # A catch-all rule to log all other requests at the request level.
  - level: Request

The configuration below sets audit-log-maxage and audit-log-maxsize, which ensure that log files are rotated once they reach 200MB and that only 30 days of logs are kept.

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    audit-log-path: /var/log/apiserver/audit.log
    audit-policy-file: /etc/kubernetes/policies/audit-policy.yaml
    audit-log-maxage: "30"
    audit-log-maxsize: "200"
  extraVolumes:
  - name: policies
    path: "/etc/kubernetes/policies"
    mountPath: "/etc/kubernetes/policies"
    hostPath: "/etc/kubernetes/policies"
    readOnly: true
    pathType: DirectoryOrCreate
  - name: log
    path: "/var/log/apiserver"
    mountPath: "/var/log/apiserver"
    hostPath: "/var/log/apiserver"
    pathType: DirectoryOrCreate
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
Save the kubeadm configuration above as config.yaml on the master node, then run kubeadm init and verify that the audit log is being written:

$ kubeadm init --config=config.yaml
$ ls /var/log/apiserver/audit.log
$ tail -f /var/log/apiserver/audit.log
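Each line of audit.log is a single JSON-encoded audit event. Pretty-printed for readability, an event recorded at the Metadata level looks roughly like the following; the user, IP, auditID, and timestamps are made up for illustration:

{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "0c4dbb87-0000-0000-0000-000000000000",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/default/secrets/my-secret",
  "verb": "get",
  "user": {
    "username": "kubernetes-admin",
    "groups": ["system:masters", "system:authenticated"]
  },
  "sourceIPs": ["192.168.1.10"],
  "objectRef": {
    "resource": "secrets",
    "namespace": "default",
    "name": "my-secret",
    "apiVersion": "v1"
  },
  "responseStatus": { "code": 200 },
  "requestReceivedTimestamp": "2020-12-28T12:00:00.000000Z",
  "stageTimestamp": "2020-12-28T12:00:00.002000Z"
}

Because each event is written as one JSON object per line, you can filter the log with a tool like jq, for example: tail -n 100 /var/log/apiserver/audit.log | jq 'select(.verb == "delete")'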

You will now be able to see exactly what actions are being performed in your cluster.

The following articles/authors helped me when writing this blog post: