
01/20/2020 in DevOps

Kubernetes AppOps Security Part 2: Using Network Policies (2/2) - Advanced Topics and Tricks


Johannes Schnatterer

Technical Lead


This article is part 2 of the series "Kubernetes AppOps Security".
Read the first part now.

When you deploy applications to a managed Kubernetes cluster, security is still the responsibility of operations, right? Not entirely! Although Kubernetes abstracts from the hardware, its API offers developers many opportunities to improve the security of the applications running on it beyond the default settings. This article addresses advanced topics around CNI network policies: testing, debugging, restrictions, alternatives, and pitfalls.

In a Kubernetes cluster, everything (nodes, pods, kubelets, etc.) can communicate with everything else by default. If an attacker succeeds in exploiting a security vulnerability in one of the applications, they can easily expand their attack to all other systems in the same cluster. You can restrict this using Kubernetes' built-in network policy feature. The first part of this article series recommends whitelisting incoming and outgoing traffic. If you want to try it yourself in a defined environment, you will find complete examples with instructions in the “cloudogu/k8s-security-demos” repository on GitHub.
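As a brief recap of the first part: whitelisting starts with a policy that denies all traffic of a given direction in a namespace, to which explicit allow rules are then added. A minimal sketch of such a deny-all ingress policy could look like this (the namespace name is only an example):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: team-a
spec:
  # An empty pod selector matches all pods in the namespace
  podSelector: {}
  policyTypes:
  - Ingress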

So what does this look like in practice? What are the pitfalls of network policies? Where does the mechanism reach its limits? Are there alternatives? How can you make sure that the network policies work as intended? And what should you do when something goes wrong? This installment of the series provides answers to these advanced questions.

CNI plugin support

As mentioned in the first part of this series, network policies are specified by Kubernetes, but they are enforced by the Container Network Interface (CNI) plugin. Specification and implementation can therefore differ. For example,

  • the flannel CNI plugin generally does not support any network policies,
  • the API was extended several times (egress has been available since Kubernetes 1.8, and namespaceSelector and podSelector can be combined since 1.11), which is why these features are only implemented in newer versions of the CNI plugins, and
  • a systematic test of various CNI plugins showed that WeaveNet interprets many network policies differently than Calico does.

It is therefore advisable to thoroughly evaluate a CNI plugin before using it. The Kubernetes conformance test tool Sonobuoy can be helpful here. It can be used to check whether a Kubernetes cluster, in its current configuration, supports all features in accordance with the specification, including network policies.
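A sketch of how such a check might look, assuming Sonobuoy is installed and the flag names match your version, is to focus the end-to-end tests on the network policy cases:

# Run only the end-to-end tests whose names contain "NetworkPolicy"
$ sonobuoy run --e2e-focus "NetworkPolicy" --e2e-skip ""
# Check progress and download the results once the run has finished
$ sonobuoy status
$ sonobuoy retrieve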

Users of a managed cluster have no influence on the CNI plugin that is used. They therefore depend on whether the provider offers the network policy feature at all. Typically, the feature must also be activated explicitly for the cluster; otherwise, network policies are not enforced. Major providers often rely on the Calico CNI plugin, for example Google’s GKE (available since March 2018), Amazon’s EKS (available since June 2018), and Microsoft’s AKS (available since May 2019).
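On GKE, for example, enabling enforcement might look like the following sketch (the cluster name is only a placeholder, and flags can change between versions):

# Create a new cluster with network policy enforcement enabled
$ gcloud container clusters create my-cluster --enable-network-policy
# Or enable it on an existing cluster (two steps: add-on first, then enforcement)
$ gcloud container clusters update my-cluster --update-addons=NetworkPolicy=ENABLED
$ gcloud container clusters update my-cluster --enable-network-policy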

Testing and debugging

It is therefore advisable to validate on the running Kubernetes cluster that the network policies are enforced as specified. So far, there is no tool for doing this with automated tests. However, there are several options for manual testing, which can also be used for debugging.

In the simplest case, the user opens a shell in the container using kubectl exec. Of course, this only works if the associated image includes a shell. However, it is very likely that there are not many tools in the container that can be used to access the network. In order to minimize the attack surface as much as possible, it is a good practice to install as few packages as possible in container images that are intended for production environments.
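If the image does contain a shell and, for example, curl, a quick connection test might look like this (the target service app-b in namespace team-b is only a hypothetical example):

$ kubectl exec -ti $(kubectl get pod -l app=a -n team-a -o jsonpath="{.items[0].metadata.name}") \
    -n team-a -- sh
# Inside the container: check whether the target is reachable despite the network policies
$ curl -v --max-time 5 http://app-b.team-b.svc.cluster.local:8080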

A simple way to debug such minimal containers is to run temporary containers that include the necessary tools. The nicolaka/netshoot image is useful when it comes to networking matters; it includes many network utilities (such as curl, ifconfig, nmap, ngrep, socat, and tcpdump). Technically, one container can be started in the same environment as another by running both in the same Linux namespaces (PID, network, etc.). With Docker, this can easily be done for the network namespace using docker run --net container:<container_name> nicolaka/netshoot. In Kubernetes this is more challenging, because pods abstract away direct access to the containers. All containers within the same pod share the Linux namespaces. With kubectl 1.22, it is not yet readily possible to launch short-lived new containers in an existing pod. In the future, however, the kubectl debug command will allow such ephemeral containers to be launched for debugging. Until then, the following options are available (see Listing 1 for specific examples):

  • Temporary pods can be started with the same labels as the actual pod. This is easy to do, and it does not affect the actual pod. However, the pods will not be in the same Linux namespace. Yet this is sufficient for many network policy test cases.
  • Additional containers (which are often also called sidecars) can be explicitly added to a deployment. Implementation via YAML is cumbersome and requires the creation of a new pod. It is therefore only of limited suitability for production. The new container has to be removed manually after debugging, which requires another restart.
  • If there is access to nodes, additional containers can be started directly in the namespace of the desired container via the container runtime. For Docker, this can be accomplished via the docker run command specified above.
  • If there is no access to the nodes, the container runtime can be accessed via another pod. If you use Docker as a container runtime, you can start additional containers by mounting the Docker socket. There is also a third-party tool that automates this process. However, this is only possible if no PodSecurityPolicy prevents the execution of containers with the user root and the mounting of the Docker socket.
  • In addition, a temporary pod can be started in the host’s network namespace, i.e., that of the Kubernetes node. All of the node interfaces, for example, can be accessed from here, and packets can be sent from within the cluster but outside the pod network. This is also only possible if it is not prevented by a PodSecurityPolicy.
# Start a temporary pod with specific labels
$ kubectl run tmp-shell --rm -i --tty \
  --labels "app=a" -n team-a --image nicolaka/netshoot -- /bin/bash

# Start a temporary pod in the network namespace of the host (Kubernetes node)
$ kubectl run tmp-shell --rm -i --tty \
    --overrides='{"spec": {"hostNetwork": true}}' -n team-a --image nicolaka/netshoot -- /bin/bash

# Add a debug container to a deployment and open a shell in one of its pods
$ kubectl patch deployment app-a -n team-a -p "$(cat <<EOF
spec:
  template:
    spec:
      containers:
      - image: nicolaka/netshoot
        name: netshoot
        args:
        - sleep
        - '9999999999999999999'
EOF
)"
$ kubectl exec -ti $(kubectl get pod -l app=a -n team-a -o jsonpath="{.items[0].metadata.name}") \
  -n team-a -c netshoot -- bash

Listing 1: Launch the Netshoot container in Kubernetes

Pitfalls and tips when using network policies

The learning curve for network policies is not exactly flat. There are therefore some pitfalls lurking here, some of which we have already mentioned. The following section summarizes them and provides some further tips.

  • Network policies are enforced using the CNI plugin. Their specification and implementation can therefore differ. It is therefore advisable to validate that the network policies are also enforced according to the specification on the running Kubernetes cluster.
  • If a CNI plugin is used that does not support network policies, existing network policies are not enforced. This can lead to a false sense of security, which you are also warned about in the Kubernetes Security Audit.
  • In order to select namespaces with a namespaceSelector, labels must first be added to the namespaces (see the example after this list).
  • If you whitelist the ingress, you run the risk of forgetting monitoring tools, ingress controllers, and DNS.
  • When you whitelist the egress, you can easily forget to allow the connection to the Kubernetes API server. In addition, egress is generally only available in the API starting with Kubernetes version 1.8.
  • The simultaneous use of namespaceSelector and podSelector is only possible starting with Kubernetes version 1.11.
  • After changing network policies, it is recommended to restart all potentially affected pods. For example, after a change, Prometheus can in some cases continue to fetch metrics over existing connections; the error only becomes visible after a restart. With Traefik, the connection to the API server likewise remains open after changing the network policy, but an error occurs immediately on restart. With long-running applications, it can be surprising when they no longer work as expected at the next restart, even though no obvious changes have been made. It is therefore advisable to restart immediately after changing the network policies, for example using kubectl rollout restart deployment (available since kubectl 1.15).
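To illustrate the namespace label pitfall from above, the following sketch (namespace, label, and app names are only examples) first labels a namespace and then allows ingress from it via a namespaceSelector; the restart command from the last point is shown as well:

# Without this label, the namespaceSelector below would not match anything
$ kubectl label namespace team-b team=b

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-team-b
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: a
  ingress:
  - from:
    # Selects pods in all namespaces that carry the label team=b
    - namespaceSelector:
        matchLabels:
          team: b

# Restart the affected pods so that connection-related errors show up immediately
$ kubectl rollout restart deployment app-a -n team-a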

Limitations and alternatives/extensions

This series of articles shows how built-in options such as network policies can help you secure a Kubernetes cluster. Nevertheless, there are some requirements that you cannot satisfy using network policies, for example:

  • enforcing cluster-wide policies,
  • allowing egress at the domain name level, or
  • filtering on ISO/OSI layer 7 (such as HTTP or gRPC).

There are two options for satisfying these requirements: using proprietary extensions of the CNI plugins (Cilium and Calico, for example, offer the options mentioned above) or using a service mesh such as Istio or Linkerd.

The barrier to entry for the proprietary CNI plugin extensions is relatively low. These extensions are provided as custom resource definitions, i.e., as extensions of the Kubernetes API, and can simply be applied to the cluster in the usual YAML syntax. However, caution is required when mixing these extensions with standard network policies; Cilium, for example, advises against this. Using proprietary network policies also couples you more closely to the respective CNI plugin, which can make it harder to replace the plugin later if, for example, you need better performance or additional features (such as encryption).
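As an illustration of such a proprietary extension, the following sketch shows what a Cilium policy that only allows egress to a specific domain name might look like (assuming Cilium with its DNS proxy is in use; all names and labels are only examples):

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-example-com
  namespace: team-a
spec:
  endpointSelector:
    matchLabels:
      app: a
  egress:
  # DNS queries must be allowed so that the FQDN rule below can be resolved
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # Egress is only allowed to this domain name
  - toFQDNs:
    - matchName: "example.com"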

In addition to policies on ISO/OSI layer 7, a service mesh offers many other functions, such as resilience patterns, end-to-end encryption, and observability, which are outside the scope of this article. However, a service mesh also increases the overall complexity of the infrastructure enormously. So, if you only want to filter on the HTTP level, you can reach that goal with less effort using the proprietary features of a CNI plugin. On the other hand, anyone who already operates a service mesh can achieve even more security by combining Kubernetes network policies with it.
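With Istio as the service mesh, for example, HTTP-level filtering might look roughly like the following sketch (names, labels, and paths are only examples):

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-get-only
  namespace: team-a
spec:
  selector:
    matchLabels:
      app: a
  action: ALLOW
  rules:
  # Only HTTP GET requests to paths below /api are allowed; everything else is denied
  - to:
    - operation:
        methods: ["GET"]
        paths: ["/api/*"]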

Conclusion

The learning curve of Kubernetes' built-in network policies is not flat, but their benefit for the security of applications running on Kubernetes clusters is high. The first article in this series shows a pragmatic approach that keeps the effort within limits. This second part helps limit the effort further by pointing out pitfalls and providing tips on debugging. It also shows why it is generally recommended to test network policies in the cluster. Even if you are thinking about introducing a service mesh, starting with network policies alone is not wasted effort, since their advantages can still be used in addition to a service mesh later on.

Download this article (PDF)

You can download the original article (German), published in JavaSPEKTRUM 06/2019.

This article is part 2 of the series "Kubernetes AppOps Security".
Read all articles now: