May 14, 2018 — https://sf.gl/1553

6 takeaways from Kubecon Europe 2018

I attended Kubecon/CloudNative Con last week, and it was a great way to see how various large and small companies – all 4000+ participants – are using Kubernetes in their systems architecture, what problems they're having and how they're solving them. Interestingly enough, a lot of the issues we've been having at work are the same ones we saw at Kubecon.

Everyone's standardising on Kubernetes

What's increasingly clear to me, though, is that Kubernetes is it. It's what everyone is standardising on, especially large organisations, and it's what the big cloud providers are building hosted versions of. If you aren't managing your infrastructure with Kubernetes yet, it's time to get going.

Here's a roundup of the trends at Kubecon as well as some of my learnings.

Wasn't there?

If you didn't have a chance to go, take a look at Alen Komljen's list of 10 recommended talks, and go view the full list of videos here.

1. Monitoring

As for Kubernetes itself, I often feel like I'm barely scratching the surface of the things it can do, and it's hard to get a good picture of everything since Kubernetes is so complex.

What's more, once you are running it in production, you don't just need to know what Kubernetes can do, you also start seeing the things Kubernetes can't do.

One of those things is monitoring, and general insight into what happens behind the scenes.

Coming from administering servers the classic way, I feel that a lot of the things I used to do have been abstracted away – and hidden away.

A good monitoring solution takes care of that. One very popular tool is Sysdig, which gives you a complete picture of what's happening with the services running on your cluster.

These tools typically use a Linux kernel extension that lets them track what each container is doing: network connections, process executions, filesystem access, etc. They're typically integrated with Kubernetes itself, so you don't just see the Docker containers – you also see pods, namespaces, etc.

Sysdig even allows you to set up rules based on container activity, and you can then capture a "Sysdig Trace", so you can go back in time and see exactly which files a container downloaded or which commands were run. A feature like that is great for debugging, but also for security.

Open-source monitoring tools like Prometheus were also talked about a lot, but they seem like a lot of work to set up and manage compared to the huge amount of functionality commercial software gives you out of the box. It's definitely something I'll be looking at.
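
For reference, here's a minimal sketch of what a Prometheus scrape configuration that discovers pods through the Kubernetes API can look like. The job name and the opt-in annotation are common conventions I'm assuming here, not something taken from the talks:

    # prometheus.yml (sketch) – scrape pods that opt in via an annotation
    scrape_configs:
      - job_name: kubernetes-pods              # hypothetical job name
        kubernetes_sd_configs:
          - role: pod                          # discover targets via the Kubernetes API
        relabel_configs:
          # keep only pods annotated with prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"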

2. Security

Like monitoring, another thing that reveals the young age of Kubernetes is the security story. I feel like security is something that's often overlooked when deploying Kubernetes, mainly because it's difficult.

The myth about Docker containers being secure by default is starting to go away. As the keynote by Liz Rice about Docker containers running as root showed, it's easy to make yourself vulnerable by configuring your Kubernetes deployments wrong – 86% of containers on DockerHub run as root.
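
As a rough illustration of the fix (the pod name and image below are made up), a pod spec can simply refuse to run as root via a securityContext:

    # sketch: a pod that the kubelet will reject if its image wants to run as UID 0
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app                # hypothetical name
    spec:
      containers:
        - name: app
          image: example/app:1.0       # hypothetical image
          securityContext:
            runAsNonRoot: true
            runAsUser: 10001
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true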

What can you do about it?

  • Take a look at the CIS Kubernetes security guidelines
  • Have a good monitoring solution that lets you discover intruders
  • Use RBAC (a minimal example follows this list)
  • Be wary of external access to your cluster API and dashboard
  • Add further sandboxing with gVisor or Kata Containers, though it comes at a performance cost
  • Mind the path of data. As an example, container logs go directly from stdout/stderr to kubelet to be read by the dashboard or CLI, which could be vulnerable.
  • Segment your infrastructure: In the case of kernel vulnerabilities like Meltdown, there's not much you can do but segment different workloads via separate clusters and firewalls.
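
To make the RBAC point concrete, here's a minimal sketch of a namespaced Role and RoleBinding that only allow reading pods; the namespace and service account names are made up:

    # sketch: read-only access to pods in one namespace, bound to one service account
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: team-a                # hypothetical namespace
      name: pod-reader
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: team-a
      name: read-pods
    subjects:
      - kind: ServiceAccount
        name: ci-deployer              # hypothetical service account
        namespace: team-a
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io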

3. Git Push Workflows

Everything old is new again. An idea that has gained a lot of traction at Kubecon is Git Push Workflows, which is mostly about testing, building and deploying services based on actions carried out in Git via hooks.

Gitkube

You don't even need a classic tool like Jenkins. Just push to Kubernetes: with gitkube, you can do just that, and Kubernetes takes care of the rest. Have a Dockerfile for running unit tests and a Dockerfile for building a production image, and you're close to running your whole CI pipeline directly on Kubernetes.

Jenkins X

Meanwhile, the next generation of cloud-native CI tools has emerged, the latest being Jenkins X, which takes the complexity out of building a fully Kubernetes-based CI pipeline, complete with test environments, GitHub integration and Kubernetes cluster creation. It's pretty neat if you're starting out from scratch.

Some things still aren't straightforward, like secrets management: where do secrets go, and how are they managed? What about Kubernetes or Helm templates – do they live in your service's repository, or somewhere else?
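
However you answer those questions, at the cluster level secrets usually end up as Kubernetes Secret objects that pods reference. A minimal sketch, with made-up names and a placeholder value:

    # sketch: a Secret created by the pipeline (not committed to Git)
    apiVersion: v1
    kind: Secret
    metadata:
      name: app-credentials            # hypothetical name
    type: Opaque
    stringData:
      DATABASE_PASSWORD: change-me     # placeholder; injected out-of-band in practice

    # consumed from a pod spec as an environment variable (fragment)
    containers:
      - name: app
        image: example/app:1.0         # hypothetical image
        env:
          - name: DATABASE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: app-credentials
                key: DATABASE_PASSWORD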

4. DevOps & teams structure

A cool thing about Kubecon is that you get an insight into how companies are running their Kubernetes clusters, structuring their teams and running their services.

In the case of Zalando, most teams have one or more Kubernetes clusters at their disposal, managed by a dedicated team that maintains them – including tasks like testing and upgrading to the newest version every few months, something that might otherwise be overlooked by busy teams focused on writing software.

The way to go, it seems, is to give each team as much freedom and flexibility as possible so they can concentrate on their work, and let dedicated teams focus on the infrastructure. Let's be honest: Kubernetes, and the complexity it brings, can be a large time sink for a development team that's trying to get work done.

5. Cluster Organisation

It goes without saying, but I wasn't aware of it when I first started using Kubernetes: You can have a lot of clusters!

One per team, one per service, or several per service: it's up to you. At CERN, there are currently about 210 clusters.

While there's some additional overhead involved, it can help you improve security by segregating your environments, and it makes it easier to upgrade to newer Kubernetes versions.

6. Service Mesh

While Kubernetes was designed for running any arbitrary workload in a scalable fashion, it wasn't designed explicitly for running a microservice architecture, which is why, once your architecture starts getting more complex, you see the need for Service Mesh software like Istio, Linkerd and the newer, lightweight Conduit.

Why use a service mesh? Microservices are hard! In a microservice world, failures are most often found in the interaction between the microservices. Service Mesh software is designed to help you deal with inter-service issues such as discoverability, canary deployments and authentication.
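
To make the canary-deployment point concrete, here's a rough sketch of an Istio VirtualService splitting traffic 90/10 between two versions of a service. The service name, subsets and weights are made up, and the subsets would be defined in a matching DestinationRule:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews-canary             # hypothetical name
    spec:
      hosts:
        - reviews                      # hypothetical service
      http:
        - route:
            - destination:
                host: reviews
                subset: v1             # stable version gets 90% of traffic
              weight: 90
            - destination:
                host: reviews
                subset: v2             # canary gets 10%
              weight: 10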

