Archive for the ‘Articles’ Category

Sep 30 0 Tips and tricks for htop

If you’ve ever logged in to a Linux server to check what’s going on, you’ve probably used htop, a text-based system monitoring tool for Unix-based systems.

It runs on most Unix systems, including OS X via Homebrew: brew install htop

htop running in a byobu session.

Basic Usage

  1. Displays the CPU usage (each CPU core gets a meter; my CPU has 4 hardware threads, so 4 meters).
    In the CPU meters: green = user processes, red = kernel, blue = low-priority processes. In the memory meter: green = used, blue = buffers, yellow = cache.
  2. Displays the number of processes and threads, the load average (1, 5 and 15 minutes) and the system uptime.
  3. The process table columns; how to add more is shown below.
  4. List item for each process (or thread, if enabled):
    1. PID is the process ID
    2. PRI (“Priority”) ranges from -20 (highest) to 19 (lowest)
    3. VIRT is the total amount of memory the process can allocate (including virtual memory)
    4. RES (“Resident”) is the actual physical memory used; S (status): R = running, S = sleeping
    5. CPU% is the process’s share of CPU usage
    6. TIME is the total CPU time the process has used
  5. The process path and name. See below for shortcuts to display environment, etc.
  6. Menu items: mouse can be used in addition to F keys, if enabled.

Use the mouse

While htop is a text-mode application, on most terminals, you can actually use the mouse to select processes, press the menu keys and navigate the Setup menu.

Selecting one or more processes

With the SPACE key, you can select multiple processes. You can then kill them via F9.

Other things to do with a selected process:

  • To view the environment variables of a specific process, just navigate to the process via the arrow keys and press E.
  • Set IO priority via I.
  • List open files with lsof with L.
  • Trace syscalls with S.
  • Toggle path with P.

Hide threads

By default, htop shows threads of non-system programs, but this can make the list very verbose (with lots of duplicate program names) and hard to navigate. To turn that off, simply go to Setup > Display Options and enable both “Hide kernel threads” and “Hide userland process threads”.

Alternatively, kernel and user threads can be toggled with K and H, respectively.

Add some more columns

By default, htop doesn’t show all its information. To add more columns, go to Setup > Columns and choose some new ones. Which to choose? Here are the ones I commonly use:

  • PERCENT_CPU, PERCENT_MEMORY – show the program’s CPU and memory usage as percentages
  • IO_RATE – shows how much disk IO the process is using

Filter by users

To select and view a specific user’s processes, press U.

Aug 5 0 An introduction to Byobu

Byobu advertises itself as a terminal multiplexer and a terminal window manager. But what do those words actually mean?

If we start from scratch, a terminal is the text-based interface to computers. On Windows computers, it’s called the Command Prompt. While it could be seen as the “old-fashioned” way to interact with computers, for programmers, system administrators and in many other technical fields, terminals (and text-based user interfaces) are still used widely due to their efficiency and speed of use.

As a layer on top of the terminal, programs such as screen and, later, tmux were created to allow users to better manage their terminals: with them, you could disconnect from a server, and still have your program running, and then connect to it later as if you were never gone. You can split the screen in several parts (that’s the multiplex part), so you can change a configuration file while streaming a log file and watching your changes take effect in real time, just like you’re dragging windows around on your Mac or PC. You can also have several desktops – or windows – that can be switched between, just like on your Mac. That’s the window manager part.

Byobu is again a layer on top of Screen and tmux. Think of it as an extension: Byobu is a collection of scripts and utilities that enhances the behaviour of these programs.

Installing byobu

On Debian, Ubuntu and similar Linux distributions: apt-get install byobu

For other distributions, see the official site.

On Mac: first get Homebrew, then brew install byobu

First run

When running byobu for the first time, it will start with just your shell in a single window. The bottom of your screen has the status bar, which displays your OS and version, a list of open windows, and various system metrics like pending updates, RAM usage and time and date.

To change these, press F9, choose Toggle Status Notifications, and select/unselect the ones you want.

Something you should do first is choose Byobu’s escape sequence: a special key that triggers Byobu’s functionality, like a shortcut key. Press CTRL-A, or run byobu-ctrl-a. If in doubt, use “Emacs mode”, which lets you keep using CTRL-A to navigate text. Byobu’s default escape sequence is then F12, which you’ll use in a minute.

You can also make Byobu start automatically with byobu-enable. That’s useful on servers, where you probably don’t have a lot of different terminal windows open and want your terminal history and programs to keep running between sessions. To disable it again, use byobu-disable.

Basic window management

Creating a window: F2

Create a horizontal split pane: SHIFT-F2 (or F12 then |)

Create a vertical split pane: CTRL-SHIFT-F2 (or F12 then %)

Go back and forward through window list: F3 and F4

Go back and forward through split panes: SHIFT-F3 and SHIFT-F4

More window management

Close a window, or a pane: CTRL-D

Toggle between layout grid templates: SHIFT-F8

Scrolling: SHIFT-ALT-PageUp/PageDown

Search down (while scrolling): /

Search up (while scrolling): ?

Naming a window: F8

Fullscreen a pane: F12 then Z

To visually navigate your windows, with previews: F12 then S, then arrow keys, numbers

Mouse mode

Press F12, then : (to open the internal command prompt), type set mouse on (for other commands, see list-commands), then press ENTER to enable mouse mode. With it, you can do several actions with the mouse:

  • Switch between active panes and windows. Click on a window name or pane to switch.
  • Scroll, with the mouse wheel or trackpad
  • Resize panes by dragging their borders

Display the time

To display the time in big letters, press F12 then T.


To exit byobu, leaving your session running in the background (and logging out, if you’re in an SSH session), press F6. (To avoid logging out, use SHIFT-F6.)

To completely kill your session, and byobu in the background, press F12, then :, and type kill-server.


Jul 17 3 What is a webhook?

The Webhook is the Web’s way to integrate completely different systems in semi-real time.

As time has passed, the Web (or more precisely, HTTP, the protocol used for requesting and fetching the Web site you’re currently reading) has become the default delivery mechanism for almost anything that’s transferred over the Internet.

Refrigerators, industrial control systems, lightbulbs, speakers, routers, anti-virus programs – everything is controlled via the Web these days. That’s not because HTTP is perfect, fast or fault-tolerant; rather, it’s because everybody speaks it. It’s very easy to make and receive an HTTP request – in fact, many programming languages can do it in a single line of code.
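In Python, for example (one illustrative choice; the URL below is just a placeholder), the whole HTTP call really is one line:

```python
import urllib.request

def fetch(url):
    # The entire HTTP request: open the URL and read the response body.
    return urllib.request.urlopen(url).read()

# fetch("https://example.com/api/status") would return the raw response bytes
```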

With a URL, you can send and receive information to and from any other system, often in the form of an API – an Application Programming Interface, a defined set of URLs that act on a system.

But what if a regular API isn’t enough? What if I don’t want to poll the API for new data every minute – what if it could just let me know when something changed? As that need grew, a word emerged for applications exchanging information automatically like this: the Webhook. Something happened? Nudge my webhook and I’ll know!

A general method of setting them up also started taking form: subscriptions. On most systems, this involves setting these subscriptions up by – you guessed it – a fax to the system operator.

Sorry, just kidding. Webhooks are of course also set up via the Web, and many sites provide either a visual user interface or an API for setting them up: that way, other applications can subscribe and unsubscribe as they please.
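What such a subscription store boils down to can be sketched in a few lines (all names here are illustrative, not any standard):

```python
# Maps callback URL -> subscriber id. A real application would keep this
# in a database rather than in memory.
subscriptions = {}

def subscribe(subscriber, url):
    """Called from your UI or API when another app registers a webhook."""
    subscriptions[url] = subscriber

def unsubscribe(url):
    """Remove a registration; ignore URLs that were never subscribed."""
    subscriptions.pop(url, None)
```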

Implementation types

While Webhooks aren’t a standard, most people implement them the same way: when an event happens in your program that another program is subscribed to, loop through each subscription, check whether the subscriber actually has access to view that thing, and if so, send an HTTP request to the subscriber’s URL.
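That loop might look like this sketch (the subscription format and the `can_view` access check are illustrative assumptions, not part of any standard):

```python
import json
from urllib import request

def eligible_targets(event, subscriptions, can_view):
    """Filter subscriptions down to the URLs allowed to see this event."""
    return [s["url"] for s in subscriptions if can_view(s["subscriber"], event)]

def deliver(event, subscriptions, can_view):
    """POST the event as JSON to every eligible subscriber URL."""
    body = json.dumps(event).encode("utf-8")
    for url in eligible_targets(event, subscriptions, can_view):
        req = request.Request(url, data=body,
                              headers={"Content-Type": "application/json"})
        try:
            request.urlopen(req, timeout=5)
        except OSError:
            pass  # a real system would queue the failure for a retry
```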

The content of the request can be either the actual content (usually in JSON or XML format), or a reference to the content that can be fetched later: something happened, but call this other URL to see exactly what. The latter can save bandwidth if the subscriber batches its requests (fetching only what’s new since last time).

Obviously – and this is the whole reason we’re doing this – it is then up to the subscriber to act on the Webhook if needed.

Did my toaster send a Webhook about my toast finishing? Better send a text via Trello to remind me there’s breakfast! (I have a bad short term memory)

But what happens if problems occur along the way?

Error Handling

If you don’t take certain precautions, Webhooks are very prone to failure and lost data.

If I’m suddenly unable to send webhooks to a subscriber, it might be a good idea to re-send them later. But how long should I wait? If I retry a second later, the subscriber is probably still down. Maybe 10 seconds or 10 minutes is better for a specific use case. Maybe they’ll be completely bombarded by my retries and be worse off. Maybe random retry times, or constantly rising ones, would solve the problem.

You might also not want to keep sending a Webhook an infinite amount of times. That is a long time, after all.

Speaking of time: it might be worth it to consider timeouts for your use case. Even though you’re great at implementing fast software, maybe your customers aren’t, so set a timeout that’s reasonable for all parties – otherwise you risk delivering duplicate Webhooks to your customers when retrying due to timeouts, or maybe they’ll never get their data.
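A retry schedule combining those ideas – rising delays, random jitter and a retry cap – could be sketched like this (the specific numbers are arbitrary examples, not recommendations):

```python
import random

def retry_delays(base=10.0, factor=2.0, max_attempts=6, jitter=0.5):
    """Return the delays (in seconds) to wait between webhook retries.

    Delays rise exponentially (10s, 20s, 40s, ...) so a struggling
    subscriber isn't bombarded; each delay is nudged by up to +/-50% so
    many failing subscribers don't retry in lockstep; and the list is
    finite, so we eventually give up.
    """
    delays = []
    for attempt in range(max_attempts):
        delay = base * factor ** attempt
        delay += random.uniform(-jitter, jitter) * delay
        delays.append(delay)
    return delays
```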

Maybe you want to give control to your subscribers: let them view a log of the failed requests, and let them re-send specific requests at their leisure.

All of this requires a way to queue your calls: don’t send them while you’re in the middle of delivering another request, for example.

Webhooks should always be queued.

That includes the recipient side. You can’t be sure that the systems you subscribe to will wait forever to hand over their requests; you need to answer webhooks quickly – a couple of seconds should be more than enough to queue your action and return a response. So queue your actions, too!

As they say: your webhook should do almost nothing.
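On the receiving end, that queue-and-acknowledge pattern can be sketched like this (the in-memory queue and the `process` function are illustrative stand-ins for a real job queue and your actual work):

```python
import queue

# Stand-in for a real job queue (Redis, RabbitMQ, ...).
incoming = queue.Queue()

def handle_webhook(payload):
    """The webhook endpoint: enqueue the payload and answer immediately.

    Returns an HTTP-style status code; in a real app this would be a
    route in your web framework.
    """
    incoming.put(payload)
    return 202  # "Accepted" -- acknowledged before doing any real work

def worker():
    """A background worker drains the queue and does the slow part."""
    while True:
        payload = incoming.get()
        process(payload)  # hypothetical: whatever your app actually does
        incoming.task_done()
```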

Testing your Webhook

What if you want to get started with Webhooks, but don’t have a system set up to receive them? What if you’re developing a Webhook-based system and just want to see whether it works?

For that use case, I made Webhook Tester for fun back in March 2016 – I often needed to test webhooks, so I decided to build my own site that worked the way I wanted. It’s become really popular: it’s now on page 1 of Google if you search for webhook.

What can I use Webhooks for?

Almost every cloud service has some sort of Webhook functionality, but if you’re not a developer, they’re hard to use. For that, check out IFTTT (if this, then that), or Zapier – under the hood, they use Webhooks.

What do you use Webhooks for? Let me know in the comments!

May 14 1 6 takeaways from Kubecon Europe 2018

I attended Kubecon/CloudNative Con last week, and it was a great way to see how various large and small companies – all 4000+ participants – are using Kubernetes in their systems architecture, what problems they’re having and how they’re solving them. Interestingly enough, a lot of the issues we’ve been having at work are the same we saw at Kubecon.

Everyone’s standardising on Kubernetes

What’s increasingly clear to me, though, is that Kubernetes is it. It’s what everyone is standardising on, especially large organisations, and it’s what the big cloud providers are building hosted versions of. If you aren’t managing your infrastructure with Kubernetes yet, it’s time to get going.

Here’s a roundup of the trends at Kubecon as well as some of my learnings.

Wasn’t there?

If you didn’t have a chance to go, take a look at Alen Komljen’s list of 10 recommended talks, and go view the full list of videos here.

1. Monitoring

As for Kubernetes itself, I often feel like I’m barely scratching the surface of the things it can do, and it’s hard to get a good picture of everything since Kubernetes is so complex.

What’s more, once you are running it in production, you don’t just need to know what Kubernetes can do, you also start seeing the things Kubernetes can’t do.

One of those things is monitoring, and general insight into what happens behind the scenes.

Coming from administering servers the classic way, I feel that a lot of the things I used to do have been abstracted away – and hidden away.

A good monitoring solution takes care of that. One of the most popular tools is Sysdig, which gives you a complete picture of what’s happening with the services running on your cluster.

These tools typically use a Linux kernel extension that tracks what each container is doing: network connections, executions, filesystem access, etc. They’re usually integrated with Kubernetes itself, so you don’t just see Docker containers – you also see pods, namespaces, etc.

Sysdig even allows you to set up rules based on container activity, and you can then capture a “Sysdig Trace”, so you can go back in time and see exactly which files a container downloaded or which commands were run. A feature like that is great for debugging, but also for security.

Open-source monitoring tools like Prometheus were also talked about a lot, but they seem like a lot of work to set up and manage compared to the huge amount of functionality commercial software gives you out of the box. It’s definitely something I’ll be looking at.

2. Security

Like monitoring, another thing that reveals the young age of Kubernetes is the security story. I feel like security is something that’s often overlooked when deploying Kubernetes, mainly because it’s difficult.

The myth that Docker containers are secure by default is starting to go away. As Liz Rice’s keynote about Docker containers running as root showed, it’s easy to make yourself vulnerable by configuring your Kubernetes deployments wrong – 86% of containers on DockerHub run as root.

What can you do about it?

  • Take a look at the CIS Kubernetes security guidelines
  • Have a good monitoring solution that lets you discover intruders
  • Use RBAC
  • Be wary of external access to your cluster API and dashboard
  • Sandbox further with gVisor or Kata Containers, though this comes at a performance cost
  • Mind the path of your data. For example, container logs go directly from stdout/stderr to the kubelet to be read by the dashboard or CLI – a path that could be vulnerable.
  • Segment your infrastructure: In the case of kernel vulnerabilities like Meltdown, there’s not much you can do but segment different workloads via separate clusters and firewalls.

3. Git Push Workflows

Everything old is new again. An idea that gained a lot of traction at Kubecon is Git Push Workflows: testing, building and deploying services based on actions carried out in Git via hooks.


You don’t even need a classic tool like Jenkins. Just push to Kubernetes: with gitkube, you can do just that, and Kubernetes takes care of the rest. Have a Dockerfile for running unit tests, a Dockerfile for building a production image and you’re close to running your whole CI pipeline directly on Kubernetes.

Jenkins X

Meanwhile, the next generation of cloud-native CI tools has emerged, the latest one being Jenkins X, which removes much of the complexity of building a fully Kubernetes-based CI pipeline, complete with test environments, Github integration and Kubernetes cluster creation. It’s pretty neat if you’re starting from scratch.

Some things still aren’t straightforward, like secrets management. Where do they go, and how are they managed? What about Kubernetes or Helm templates – do they live in your service’s repository, or somewhere else?

4. DevOps & teams structure

A cool thing about Kubecon is that you get an insight into how companies are running their Kubernetes clusters, structuring their teams and running their services.

In the case of Zalando, most teams have one or more Kubernetes clusters at their disposal, maintained by a dedicated team whose tasks include testing and upgrading to the newest version every few months – something that might be overlooked by busy teams focused on writing software.

The way to go, it seems, is to give each team as much freedom and flexibility as possible, so they can concentrate on their work, and let dedicated teams focus on the infrastructure. Let’s be honest: Kubernetes, and the complexity it brings, can be a large time sink for a development team that’s trying to get some work done.

5. Cluster Organisation

It goes without saying, but I wasn’t aware of it when I first started using Kubernetes: You can have a lot of clusters!

One per team, one per service, or several per service: it’s up to you. At CERN, there’s currently about 210 clusters.

While there’s some additional overhead involved, it can help you improve security by segregating your environments, and it makes it easier to upgrade to newer Kubernetes versions.

6. Service Mesh

While Kubernetes was designed for running any arbitrary workload in a scalable fashion, it wasn’t designed explicitly for running a microservice architecture, which is why, once your architecture starts getting more complex, you see the need for Service Mesh software like Istio, Linkerd and the newer, lightweight Conduit.

Why use a service mesh? Microservices are hard! In a microservice world, failures are most often found in the interaction between the microservices. Service Mesh software is designed to help you deal with inter-service issues such as discoverability, canary deployments and authentication.

May 7 8 Automating AWS infrastructure with Terraform

When you start using cloud hosting solutions like Amazon Web Services, Microsoft Azure or Rackspace Cloud, it doesn’t take long to feel overwhelmed by the choice and abundance of features of the platforms. Even worse, the initial setup of your applications or Web sites on a cloud platform can be very cumbersome; it involves a lot of clicking, configuring and discovering how the different parts fit together.

With tools like Terraform, building your infrastructure becomes a whole lot easier and more manageable. Terraform lets system administrators sit down and script their whole infrastructure stack, connecting the different parts together just like assigning variables in a programming language – except with Terraform, you’re assigning a load balancer’s backend hosts to a list of servers, for example.

In this tutorial I’ll walk you through a configuration example of how to set up a complete load-balanced infrastructure with Terraform, and at the end you can download all the files and modify them to your own needs. I’ll also talk a little about where you can go from here if you want to go further with Terraform.

You can download all the files needed for this how-to on Github.

Getting up and running

To start using Terraform, you’ll need to install it. It’s available as a single binary for most platforms, so download the zip file and place it somewhere in your PATH, like /usr/local/bin. Terraform runs completely on the command-line, so you’ll need a little experience executing commands on the terminal.


A core part of Terraform is the variables file, conventionally named variables.tf (Terraform automatically includes every .tf file in the directory). It’s a place where you define the hard dependencies of your setup, and in this case we have two:

  1. a path to a SSH public key file,
  2. the name of the AWS region we wish to create our servers in.

Both of these variables have defaults, so Terraform won’t ask you to define them when running the planning step which we’ll get to in a minute.

Create a folder somewhere on your hard drive, create a new file called variables.tf, and add the following:

variable "public_key_path" {
  description = "Enter the path to the SSH Public Key to add to AWS."
  default     = "~/.ssh/id_rsa.pub"
}

variable "aws_region" {
  description = "AWS region to launch servers."
  default     = "eu-central-1"
}

Main file

Terraform’s main entrypoint is a file called main.tf, which you’ll need to create. Add the following:

provider "aws" {
  region = "${var.aws_region}"
}

This clause defines the provider. Terraform comes bundled with support for several providers, like Amazon Web Services, which we’re using in this example. One of the things you can configure is the default region, which we’re getting from the variables file we just created (remember, Terraform includes it automatically). You can also configure AWS in other ways, like explicitly adding an AWS Access Key and Secret Key, but in this example we’ll add those as environment variables – we’ll get to those later.


Next we’ll start adding some actual infrastructure, in Terraform parlance that’s called a resource:

resource "aws_vpc" "vpc_main" {
  cidr_block           = "10.0.0.0/16" # example range; adjust to taste
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags {
    Name = "Main VPC"
  }
}

resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.vpc_main.id}"
}

resource "aws_route" "internet_access" {
  route_table_id         = "${aws_vpc.vpc_main.main_route_table_id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.default.id}"
}

# Create a public subnet to launch our load balancers
resource "aws_subnet" "public" {
  vpc_id                  = "${aws_vpc.vpc_main.id}"
  cidr_block              = "10.0.0.0/24" # 256 addresses
  map_public_ip_on_launch = true
}

# Create a private subnet to launch our backend instances
resource "aws_subnet" "private" {
  vpc_id                  = "${aws_vpc.vpc_main.id}"
  cidr_block              = "10.0.16.0/20" # 4096 addresses
  map_public_ip_on_launch = true
}

Network setup

To contain our setup, an AWS Virtual Private Cloud is created and configured with an internal IP range, as well as DNS support and a name. After the resource keyword comes aws_vpc, the type of resource we’re creating, followed by the identifier, vpc_main, which is how we’ll refer to it later.

We’re also creating a gateway, a route and two subnets: a public one for internet-facing services like the load balancers, and a private one for services that don’t need incoming network access.

As you can see, different parts are neatly interlinked by referencing them like variables.

Trying it out

At this point, we can start testing our setup. You’ll have two files in a folder, variables.tf and main.tf, with the content that was just listed. Now it’s time to actually create it all in AWS.

To start, enter your AWS access keys as environment variables in the console by typing the following two lines:

export AWS_ACCESS_KEY_ID="Your access key"
export AWS_SECRET_ACCESS_KEY="Your secret key"

Next, we’ll create the Terraform plan file. Using your AWS credentials, Terraform will check the status of the different resources you’ve defined, like the VPC and the gateway. Since this is the first time you’re running it, Terraform will mark everything for creation in the resulting plan file. Just running the plan command won’t touch or create anything in AWS.

terraform plan -out=terraform.plan

You’ll see an overview of the resources to be created, and with the -out=terraform.plan argument, the plan is saved to a file, ready for execution with apply.

terraform apply terraform.plan

Executing this command makes Terraform start running commands against AWS to create the resources, and you’ll see the results as they run. If there’s an error – for example, you already created a VPC with the same name before – Terraform will stop.

After running apply, you’ll also see a new file in your project folder: terraform.tfstate – a cache file that maps your resources to the actual ones on Amazon. You should commit this file to git if you want to version control your Terraform project.

So now Terraform knows that your resources were created on Amazon. They were created with the AWS API, and the IDs of the different resources are saved in the tfstate file – running terraform plan again will result in nothing – there’s nothing new to create.

If you change your file – say, the VPC’s CIDR block – Terraform will figure out the changes necessary to update the resources. That may result in your resources (and their dependents) being destroyed and re-created.

More resources

Having learnt a little about how Terraform works, let’s go ahead and add some more things to our project.

We’ll add 2 security groups, which we’ll use to limit network access to our servers, and open up for public load balancers using the AWS ELB service.

# A security group for the ELB so it is accessible via the web
resource "aws_security_group" "elb" {
  name        = "sec_group_elb"
  description = "Security group for public facing ELBs"
  vpc_id      = "${aws_vpc.vpc_main.id}"

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTPS access from anywhere
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Our default security group to access the instances over SSH and HTTP
resource "aws_security_group" "default" {
  name        = "sec_group_private"
  description = "Security group for backend servers and private ELBs"
  vpc_id      = "${aws_vpc.vpc_main.id}"

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from the VPC
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${aws_vpc.vpc_main.cidr_block}"]
  }

  # Allow all from private subnet
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["${aws_subnet.private.cidr_block}"]
  }

  # Outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
Our elb security group is reachable only on ports 80 and 443 (HTTP and HTTPS), while the default one only has public access on port 22 (SSH). The latter also allows access from the whole VPC (including public-facing load balancers) on port 80, as well as full access from other servers in the private subnet. Both allow all outgoing traffic.

After the security groups, we need to define a public key, which is placed on the instances we create later. Here, we use the variable defined earlier to specify the path on the local filesystem.

resource "aws_key_pair" "auth" {
  key_name   = "default"
  public_key = "${file(var.public_key_path)}"
}


You probably noticed that there was a lot of duplicate code in those two security groups, and you’re right. To combat that, Terraform provides custom modules, which are basically like include files.

Since we need to configure quite a few things on our EC2 instances, and the configuration is almost always the same across them, we’ll create a module for our instances. To do that, create a new folder called instance.

In the instance folder, create 3 new files: variables.tf, main.tf and outputs.tf.

# variables.tf
variable "private_key_path" {
  description = "Enter the path to the SSH Private Key to run provisioner."
  default     = "~/.ssh/id_rsa"
}

variable "aws_amis" {
  default = {
    "eu-central-1" = "ami-060cde69"
  }
}

variable "disk_size" {
  default = 8
}

variable "count" {
  default = 1
}

variable "group_name" {
  description = "Group name becomes the base of the hostname of the instance"
}

variable "aws_region" {
  description = "AWS region to launch servers."
  default     = "eu-central-1"
}

variable "instance_type" {
  description = "AWS instance type to launch."
  default     = "t2.small"
}

variable "subnet_id" {
  description = "ID of the AWS VPC subnet to use"
}

variable "key_pair_id" {
  description = "ID of the keypair to use for SSH"
}

variable "security_group_id" {
  description = "ID of the VPC security group to use for network"
}

# main.tf
resource "aws_instance" "instance" {
  count = "${var.count}"

  instance_type          = "${var.instance_type}"
  ami                    = "${lookup(var.aws_amis, var.aws_region)}"
  key_name               = "${var.key_pair_id}"
  vpc_security_group_ids = ["${var.security_group_id}"]
  subnet_id              = "${var.subnet_id}"

  root_block_device {
    volume_size = "${var.disk_size}"
  }

  tags {
    Name  = "${format("%s%02d", var.group_name, count.index + 1)}" # -> "backend02"
    Group = "${var.group_name}"
  }

  lifecycle {
    create_before_destroy = true
  }

  # Provisioning
  connection {
    user        = "ubuntu"
    private_key = "${file(var.private_key_path)}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
    ]
  }
}

# outputs.tf
# Used for configuring ELBs.
output "instance_ids" {
  value = ["${aws_instance.instance.*.id}"]
}

In the variables file, we have a few things worth mentioning:

  • a default path to the private key matching the public key – we’ll need the private key for connecting via SSH and launching the provisioner,
  • we define a list of AMIs – more specifically, a map. Since we’re only focusing on Amazon’s EU Central 1 region, we’ve only defined an AMI for that region (it’s Ubuntu 16.04 LTS). You’ll need to browse Amazon’s AMI library if you use another region or another operating system,
  • some defaults are defined, like the number of instances, disk size, etc. These can be overridden when invoking the module,
  • some variables don’t have defaults – oddly, Terraform doesn’t let modules automatically inherit variables, which is why I’ve chosen to place the private key path here. Otherwise I’d have to pass the main Terraform variable to every module.

The outputs file allows the module to export some properties – you have to explicitly define outputs for everything you want to reference later. The only thing we need to reference is the actual instance IDs (for use in the ELBs), so that’s the only output.

Using the tags block, we can add some info to our instances. I’m using one of Terraform’s built-in functions, format, to generate a friendly hostname based on the group name and a 1-indexed number. The provisioner clause is also a little bare – typically, you’d reference a Chef or Ansible playbook, or run some commands to set up your environment and bootstrap your application.

Back in your main Terraform file, you can now start referencing your AWS EC2 instance module:

module "backend_api" {
    source            = "./instance"
    subnet_id         = "${aws_subnet.private.id}"
    key_pair_id       = "${aws_key_pair.auth.id}"
    security_group_id = "${aws_security_group.default.id}"
    count             = 2
    group_name        = "api"
}

module "backend_worker" {
    source            = "./instance"
    subnet_id         = "${aws_subnet.private.id}"
    key_pair_id       = "${aws_key_pair.auth.id}"
    security_group_id = "${aws_security_group.default.id}"
    count             = 2
    group_name        = "worker"
    instance_type     = "t2.medium"
}

module "frontend" {
    source            = "./instance"
    subnet_id         = "${aws_subnet.private.id}"
    key_pair_id       = "${aws_key_pair.auth.id}"
    security_group_id = "${aws_security_group.default.id}"
    count             = 2
    group_name        = "frontend"
}

module "db_mysql" {
    source            = "./instance"
    subnet_id         = "${aws_subnet.private.id}"
    key_pair_id       = "${aws_key_pair.auth.id}"
    security_group_id = "${aws_security_group.default.id}"
    count             = 3
    disk_size         = 30
    group_name        = "mysql"
    instance_type     = "t2.medium"
}

Instead of resource, the modules are referenced using the module clause. Every module must have a source argument pointing to the directory where the module’s files are located.

Again, since modules can’t automatically inherit or reference parent resources, we’ll have to explicitly pass the subnet, key pair and security groups to the module.

This example consists of 9 instances:

  • 2x backend,
  • 2x backend workers,
  • 2x frontend servers,
  • 3x MySQL servers.

Load balancers

To finish our terraform file, we add the remaining component: load balancers.

# Public Backend ELB
resource "aws_elb" "backend" {
  name = "elb-public-backend"

  subnets         = ["${}", "${}"]
  security_groups = ["${}"]
  instances       = ["${module.backend_api.instance_ids}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/healthcheck.php"
    interval            = 30
  }
}

# Public Frontend ELB
resource "aws_elb" "frontend" {
  name = "elb-public-frontend"

  subnets         = ["${}", "${}"]
  security_groups = ["${}"]
  instances       = ["${module.frontend.instance_ids}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/healthcheck.php"
    interval            = 30
  }
}

# Private ELB for MySQL cluster
resource "aws_elb" "db_mysql" {
  name = "elb-private-galera"

  subnets         = ["${}"]
  security_groups = ["${}"]
  instances       = ["${module.db_mysql.instance_ids}"]
  internal        = true

  listener {
    instance_port     = 3306
    instance_protocol = "tcp"
    lb_port           = 3306
    lb_protocol       = "tcp"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:9222/" # Galera Clustercheck listens on HTTP/9222
    interval            = 30
  }
}

The load balancers provide the entry points for our application. One thing to note here is how the instances are referenced [1].

Main output file

To put a cherry on top, we’ll create an output file for our main project. Again, because of the filename, Terraform will automatically pick it up.

# Public Load Balancers

output "api_address" {
  value = "${aws_elb.backend.dns_name}"
}

output "frontend_address" {
  value = "${aws_elb.frontend.dns_name}"
}

# Private Load Balancers

output "galera_address" {
  value = "${aws_elb.db_mysql.dns_name}"
}

This will display the hostnames of our ELBs in a friendly format after running terraform apply, which is handy for copying into a configuration file or your browser.

You can now run terraform plan again like before, but since you’re using modules, you’ll have to run terraform get first to include them.

Then you can see that it will create the remaining infrastructure when you do terraform apply.
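The full loop then looks something like this – a sketch, to be run from the directory containing your main Terraform file:

```
terraform get     # fetch the modules referenced by the module blocks
terraform plan    # preview the remaining infrastructure changes
terraform apply   # create the instances and load balancers
```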

You can clone, fork or download the full project over on GitHub.

Next steps

Where can you go from here? I have a couple ideas:

  • Move your DNS to Amazon Route53 and automate your DNS entries with the outputs from the ELBs.
  • In addition to Route53, see what other AWS services you can provision using Terraform, like S3 buckets, autoscaling groups, AMIs, IAM groups/policies…
  • Further use modules to simplify your main file, for example by nesting multiple resources in one file. You could, for example, have all your network setup in a single module to make the base file more concise.
  • Integrate with provisioning software like Ansible, using their EC2 inventory to easily provision new instances.


  1. Yes, the instance IDs are inside a string – that’s how all resources and modules are referenced, even though they technically are arrays and (in my opinion) shouldn’t be encapsulated in a string. But that’s how it is.

Apr 13 20 How to use Apple’s SF Mono font in your editor

At WWDC 2015, Apple unveiled a brand new font called San Francisco. The font went on to become the default font in macOS and iOS, replacing Helvetica Neue (which had replaced Lucida Grande). On watchOS, a special Compact variant of San Francisco is used.

Later, Apple introduced yet another variant, a monospaced one, which I think simply looks fantastic – especially on a high-resolution display like the MacBook’s. It has replaced my previous favourite monospace font, Anonymous Pro.

Weirdly enough, these fonts are not available for selection in macOS – you just can’t use San Francisco for editing a document in Pages, for example.

Currently, though, the standard and Compact versions of San Francisco are available on Apple’s developer portal, but unfortunately the monospaced version is not.

Fortunately, if you have macOS Sierra, the font is included inside the app that ships with macOS.

Here’s how you extract the font from and install it on your computer so you can use it in your text editor, for example:

  1. Go to’s Resources folder:
    1. Right click the Finder icon in the Dock
    2. Click ‘Go to Folder…’
    3. Enter this path: /Applications/Utilities/
    4. Click Go
  2. You’ll see a list of fonts in the folder.
    1. Select all of the fonts in the folder.
    2. Right click on them and click ‘Open’
  3. A window will pop-up previewing the font. Click Install Font.
  4. You’ll perhaps get a window saying there are problems with the fonts. I did too.
    1. Go ahead and click ‘Select all fonts’
    2. Click ‘Install Checked’
    3. You’ll get another dialog
    4. Click ‘Install’
  5. Font Book will show the new font as installed. You’ll now be able to select the SF Mono font in your editor. 🎉

Here’s a GIF of the whole process:

Jan 31 2 Back up Elasticsearch with S3 compatible providers

ElasticSearch is a popular search engine and database used in applications where search and analytics are important. It’s been used as a primary database in applications such as HipChat, storing billions of messages while making them searchable.

While very feature-complete for use cases like that, ElasticSearch is young compared to other popular datastores like MySQL, and as a permanent datastore it has one particular disadvantage: backups.

In the early days of ElasticSearch, backup was crude: you shut down your node, or flushed its contents to disk, and copied the data storage directory on the hard drive. Copying a data directory isn’t very convenient for high-uptime applications, however.

Later versions of ES introduced snapshots, which let you make a complete copy of an index. As of version 2, there are several different snapshot repository plugins available:

  • HDFS
  • Amazon S3
  • Azure
  • File system/Directory

File System

For the file system repository type, ElasticSearch requires that the same directory is mounted on all nodes in the cluster. This gets inconvenient fast as your ES cluster grows.

The mount type could be NFS, CIFS, SSHFS or similar, and you can use a program like AutoFS to keep the mount available.

On clusters with a few nodes, I haven’t had good luck with it – even using AutoFS, the connection can be unstable and lead to errors from ElasticSearch, and I’ve also experienced nodes crashing when the repository mount came offline.


Then there’s S3 and Azure. They work great – provided that there isn’t anything preventing you from storing your data with a 3rd party, American-owned cloud provider. It’s plug and play.

S3 Compatible

If you can’t use S3 for some reason, other providers offer cloud storage services that are compatible with the S3 API.

If you prefer an on-prem solution, you can use a storage engine that supports the S3 API. Minio is a server written in Go that’s very easy to get started with; more complex options include Riak S2 and Ceph.

Creating an S3-compatible repository works the same way as creating an Amazon S3 repository. You need to install the cloud-aws plugin in ES, and in the elasticsearch.yml config file you need to set the signer type to: S3SignerType

Not adding this line will result in errors like these: 
null (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: null)


The request signature we calculated does not match the signature you provided

Per default, it’s AWSS3SignerType, and that prevents you from using an S3 compatible storage repository.
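A minimal sketch of the elasticsearch.yml change, assuming the ES 2.x cloud-aws plugin’s cloud.aws.signer setting – the exact key name is from memory of the plugin docs, so verify it against your plugin version:

```yaml
# elasticsearch.yml – override the request signer for S3-compatible endpoints
cloud.aws.signer: S3SignerType
```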

Setting the repository up in ES is similar to the AWS type, except you also specify an endpoint. For example, with an S3-compatible provider, you’d add a repository like this:

POST http://es-node:9200/_snapshot/backups
{
  "type": "s3",
  "settings": {
    "bucket": "backups",
    "base_path": "/snapshots",
    "endpoint": "",
    "protocol": "https",
    "access_key": "Ze5Zepu0Cofax8",
    "secret_key": "Qepi7Pe0Foj2RuNat2Fox8Zos7YuNat2Fox8Zos7Yu"
  }
}

To learn more about the snapshot endpoints, here’s a link to the ES documentation.

If you take a lot of different backups, I’d also recommend taking a look at the kopf ES plugin, which has a nice web interface for creating, restoring and otherwise administering snapshots.

Periodical snapshots

I’ve had success setting up snapshots using cronjobs. Here’s an example of how to take snapshots automatically.

On one of the ES nodes, simply add a cronjob which fires a simple request to ES, like this, which creates a snapshot with the current date:

0,30 * * * * curl -XPUT ''$(date +\%d-\%m-\%Y-\%H-\%M-\%S)''

This will create a snapshot in the backups repository with a name like “20-12-2016-11-30-00” – the current date and time. You can also use a similar command to create a new ES repository every month, for example, so you can periodically take a complete snapshot of the cluster.
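As a sanity check, the date expression from the crontab above can be run on its own. The backslashes before each % are only needed inside the crontab, because cron treats an unescaped % as a newline:

```shell
# Build the snapshot name the same way the cron job does
name=$(date +%d-%m-%Y-%H-%M-%S)
echo "snapshot name: $name"
```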

If you want a little more control, Elastic provides a nice tool called Curator which lets you easily organise repositories and snapshots, delete old indexes, and more. Instead of doing a curl request in a cronjob, you write a Curator script which you run from a cronjob – it gives you more flexibility.

Concurrency errors with snapshots

This section isn’t S3 specific, but I’ve run into these issues so often that I thought I’d write a little about them.

ElasticSearch can be extremely finicky when there are network timeouts while doing snapshots, for example, and you won’t get much help from the official ES documentation.

For example, you may find that a snapshot is stuck: it’s IN_PROGRESS, but it never finishes. You can then do a DELETE <repository_name>/<snapshot_name>, after which it will have status ABORTED. Then you might find you’re really stuck: it stays ABORTED forever, and when trying to DELETE it again, you’ll get this:

{
  "error": {
    "root_cause": [
      {
        "type": "concurrent_snapshot_execution_exception",
        "reason": "[<repository_name>:<snapshot_name>] another snapshot is currently running cannot delete"
      }
    ],
    "type": "concurrent_snapshot_execution_exception",
    "reason": "[<repository_name>:<snapshot_name>] another snapshot is currently running cannot delete"
  },
  "status": 503
}

Now, trying to create another snapshot gets you this:

{
  "error": {
    "root_cause": [
      {
        "type": "concurrent_snapshot_execution_exception",
        "reason": "[<repository_name>:<snapshot_name>] a snapshot is already running"
      }
    ],
    "type": "concurrent_snapshot_execution_exception",
    "reason": "[<repository_name>:<snapshot_name>] a snapshot is already running"
  },
  "status": 503
}

The only way to fix this is either a rolling restart (restart one node, then the next) or a complete restart of the whole cluster. That’s it.

Jan 15 1 Simple Mac window management with BetterTouchTool

As a software developer, I work not only with lots of different windows on my screen, but with lots of different sets of windows. I depend on windows being in particular places and at particular sizes, so I need some way to manage them all.

For example, I often need to have 3 browser windows open. Maybe one for documentation, one for a project management tool and one for testing. And then I’d of course want a text editor. Maybe for a while I’d like one of the windows to take up more space, so I move one to a different screen and make the other window larger.

It would take me a while to manually drag these windows to their right places.

Luckily, a Mac program called BetterTouchTool lets me define sets of hotkeys that carry out all this moving and resizing of windows. I find that it speeds up my workflow a lot – I can organise my desktop easily.

It’s even preferable to the Windows 7-style drag-to-maximize Snap feature since I don’t have to use my mouse at all.

Here are the shortcuts I’ve defined:

Use the link below to download a BTT preset of these shortcuts.

Did you create any cool sets of shortcuts or workflow improvements with BetterTouchTool you want to share? Let us know in the comments.

Sep 27 2 How to extend a LVM volume group

Extending a logical volume group usually needs to happen after the size of a VMware disk has been increased for a Linux VM. When you resize a disk, the volume isn’t extended automatically, so you need to extend the logical volume in the VM’s volume group.

This article assumes that:

  • You have an LVM volume group with a logical volume
  • You’ve added free space in the virtualizer, e.g. VMware
  • You’re running Ubuntu. Might also work with other distributions
  • You have basic knowledge of partitions and Linux

Creating new partition with Gparted

  1. Start by creating a new partition from the free space. I prefer doing this with a GUI using gparted. You need XQuartz if you’re on a Mac.
    1. SSH into the box with -X, e.g. ssh -X myserver
    2. Install gparted: apt-get install -y gparted and run gparted
    3. Find the unallocated space (a grey bar)
    4. Select and create the partition. Choose lvm2 pv as the “file system”
    5. Click OK
    6. Click Apply in the toolbar and again in the dialog
    7. Note the disk name in the Partition column, e.g. /dev/sda3
  2. You should see the disk with fdisk -l
  3. Run pvcreate <disk>, e.g. pvcreate /dev/sda3
  4. Find the volume group: run vgdisplay (name is where it says VG Group)
  5. Extend the VG with the disk: vgextend <vg name> <disk>, e.g. vgextend VolumeGroup /dev/sda3
  6. Run vgscan and pvscan
  7. Run lvdisplay to find the LV Path, e.g. /dev/VolumeGroup/root
  8. Extend the logical volume: lvextend <lv path> <disk>, e.g. lvextend /dev/VolumeGroup/root /dev/sda3
  9. Resize the file system: resize2fs <lv path>, e.g. resize2fs /dev/VolumeGroup/root
  10. Finally, verify that the size of the partition has been increased with df -h
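The command-line portion of the steps above, collected into one sketch using the article’s example names. It only echoes the commands, since the real ones need root and an actual new partition:

```shell
disk=/dev/sda3            # new partition created in gparted
vg=VolumeGroup            # volume group name from `vgdisplay`
lv=/dev/$vg/root          # LV Path from `lvdisplay`

# Print the commands instead of running them
for cmd in \
    "pvcreate $disk" \
    "vgextend $vg $disk" \
    "lvextend $lv $disk" \
    "resize2fs $lv"
do
    echo "$cmd"
done
```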

Aug 30 0 Office Dashboards with Raspberry Pi

If you’re in need of a simple computer to drive an infoscreen, which usually just consists of showing a website in fullscreen, Raspberry Pi computers are a great choice. They’re cheap, newer versions have WiFi and HDMI output, and they’re small – so they’re easy to mount on the back of a TV.

Even better, most newer TVs have a USB port nowadays, so for a power source, just plug your Pi into the TV.

Sample dashboard

One problem, however, is that it gets increasingly hard to control the infoscreens the more you add. For example, if you have 6, you don’t want to have to manage them independently, and you want to be able to change the setup quickly.

At the office, we’ve set up 6 Samsung TVs, each with their own Pi. On each is a different dashboard:

What I ended up with is a simple provisioning script that configures the Pi:

Quite a bit is happening here – in order:

  1. WiFi power management is disabled. I’ve found that it makes WiFi very unstable on the Pis.
  2. We need to set the hostname; every Pi needs a unique one. We’ll see why later.
  3. The desktop wallpaper is changed. You can remove this part if you like raspberries!
  4. Chromium is installed. I tried Midori, and others. Chromium just works.
  5. We set up a script that starts when the desktop session loads; this will run Chromium.
  6. Then we reboot to apply the hostname change.

In the script, there are two things you must modify: the URL of the desktop wallpaper and the directory of the dashboard files.

So what are the dashboard files?

The way it works is this: if the hostname of the Pi is raspberry-pi-1, the Pi will request the page for raspberry-pi-1 under the dashboard_files_url you set – that way, if you want to change the URL of one of your screens, you just have to change a file on your Web server.

While you could do something fancier with a server-side script, I just went with simple HTML files containing a meta-refresh redirect.
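A sketch of one such file, assuming the web server simply serves <hostname>.html – the base URL and the dashboard URL are placeholders here, not values from the post:

```shell
host=raspberry-pi-1    # the Pi's unique hostname

# Each file is nothing but a meta-refresh redirect to the real dashboard,
# so changing a screen means editing one small file on the web server.
cat > "${host}.html" <<'EOF'
<html><head>
<meta http-equiv="refresh" content="0; url=http://dashboards.example/kibana">
</head></html>
EOF
```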

I feel that it’s easier to manage this stuff from a central place rather than SSHing into several different machines.

Do you have a better way of managing your dashboards? Do you have a cool dashboard you’ve designed? Tell us in the comments!

Update regarding Chromium and disable-session-crashed-bubble switch

Newer updates of Chromium removed support for the --disable-session-crashed-bubble switch that suppressed the “Restore pages?” pop-up.

This is annoying since we cut power to the Raspberry Pis to shut them down, and a power cut triggers the pop-up.

Even more annoying, alternative browsers on the Raspberry Pi like Midori or kweb can’t run the newest Kibana – a dashboard used for monitoring – at all, so I had to find a workaround for this.

The alternative I found is the --incognito switch, which prevents the pop-up, but then you can’t use dashboards that require a cookie to be present (i.e. because of a login), like Zabbix or Jenkins.

If --incognito won’t do, the best solution I’ve found so far is to use xte to simulate a click on the X button of the dialog. It’s stupid, I know, but since the Chromium developers don’t think anyone is using that feature, there you go.

Note that you might want to change the mouse coordinates to where the X button is.


# Close that fucking session crashed bubble
sleep 60
xte "mousemove 1809 20" -x:0
sleep 1
xte "mouseclick 1" -x:0

If you have a better solution, don’t hesitate to write in the comments!

Nov 27 0 Strikethroughs in the Safari Web Inspector styles? Here’s why

Safari uses strikethroughs to mark invalid properties in style sheets. This is not documented, and there are no tooltips to explain the multicoloured lines.

There are two known strikethrough colours: red and black.

Styles that are overridden by other styles are struck out in black:

Strike (overridden)


But when a property is invalid or unsupported, or its value can’t be parsed, the strikethrough in the style sidebar is red:



Nov 18 19 300,000 login attempts and 5 observations

About a year ago, I developed a WordPress extension called WP Login Attempt Log. All it does is log every incorrect login attempt to your WordPress page and display some graphics and a way to search the logs. It logs the username, the password, the IP address and also the user agent, e.g. the browser version.

Observation number 1: attacks come and go



Screenshot of login attempts from the past 2 weeks, from the plugin

One striking thing about this graph is how much the number of attacks differs per day. Some days I get tens of thousands of attempts; on others, under 100. On average, though, I get about 2,200 attempts per day, 15,000 per week and 60,000 per month. It suggests that my site is part of a rotation – or maybe someone really wants to hack my blog on Mondays.

Observation number 2: passwords are tried multiple times

All in all, about 36,000 unique passwords have been used to brute-force my WordPress blog. With a total of around 360,000 attacks, each password is used about 10 times on average. Of course, some are used more than others, as you can see in the table below.

What’s interesting is that there aren’t more distinct passwords. Given the large password database leaks of the past few years – we’re talking tens of millions of passwords – one could expect the number of distinct passwords to more closely match the total number of attempts.

Of course, there might also just be 10 different people out to hack my blog, and they all have the same password list. :-)
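The back-of-the-envelope average from the numbers above is simply:

```shell
# ~360,000 attempts spread over ~36,000 unique passwords
echo $(( 360000 / 36000 ))   # attempts per unique password, on average
```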

Observation number 3: the most common password is “admin”

An empty password was tried around 5,300 times. Here’s a list of the most used passwords, along with how many times they were used:

Attempts Password
5314 (blank)
523 admin
284 password
269 123456
233 admin123
230 12345
215 123123
213 12345678
207 1234
205 admin1
203 internet
202 pass
201 qwerty
198 mercedes
194 abc123
191 123456789
191 111111
191 password1
190 freedom
190 eminem
190 cheese
187 test
187 1234567
186 sandra
184 123
182 metallica
181 simonfredsted
180 friends
179 jeremy
178 1qaz2wsx
176 administrator

This is not a list of recommended passwords. :-) Definitely don’t use any of those.

Observation number 4: 100 IP addresses account for 83% of the attempts

The top IP address that has tried hacking my blog originates from a location you wouldn’t suspect: Amazon. That IP has attacked my blog a whopping 45,000 times, four times as much as the second IP on the list.

I took the top 25 offenders and did a WHOIS on them. I guess if you’re looking for a server company to do your WordPress hacking, here you go:

Attempts IP Address ISP Country Code
45465 Amazon Technologies US
15287 CenturyLink US
10842 VDC VN
10425 Green Web Samaneh Novin Co IR
10423 Turk Telekom TR
10048 Webfusion Internet Solutions GB
10040 OVH SAS FR
10040 Hetzner Online AG DE
10040 US
10040 Kaunas University of Technology LT
10036 012 Smile Communications IL
10035 Private Joint Stock Company datagroup UA
10030 Joint Stock Company TYVASVIAZINFORM RU
10030 VDC VN
10029 Amt Services srl IT
9328 IHNetworks, LLC US
9327 Inetmar internet Hizmetleri San. Tic. Ltd. Sti TR
9327 PlusServer AG DE
9208 TIME dotCom Berhad MY
8804 UKfastnet Ltd GB
8201 INTERWERK – Rotorfly Europa GmbH & Co. KG DE
7598 K Telecom RU
6952 INternetdienste GmbH DE
5231 PrivateSystems Networks US
3546 Kyivstar PJSC UA
3202 Hetzner Online AG DE
2099 Fastweb IT

Another interesting thing is the number of IPs hovering at around 10,000 attempts. It seems like there’s a limit where the attacker gave up and moved on to the next target. Maybe they’re all part of a single botnet, and each machine in it is only allowed to attack 10,000 times. Who knows.

Observation number 5: protect yourself by using a unique username

WordPress hackers are quite sure that you’ll use a fairly standard username, or at least something to do with the name of your blog. A total of just 165 different usernames were tried, compared to the tens of thousands of passwords.

Therefore my final takeaway is to choose an obscure username as well as an obscure password. Only 11 usernames were used more than a hundred times, which was kind of surprising to me.

Attempts Username
164360 admin
119043 simon
15983 administrator
10787 test
10429 adm
10416 user
9871 user2
9253 tester
8147 support
1818 simonfredsted
189 simo
57 root
57 login
56 admin1
3 qwerty
2 aaa

That’s a lotta attacks, what do I have to fear?

WordPress is one of the most targeted platforms for hackers; many sites use it, from big news organisations to small blogs like this one. If someone can get a login and start fiddling with your links, they can boost traffic to their own viagra-peddling sites.

But as long as you keep your software updated (WordPress makes this very, very easy) and keep the following two rules in mind, you’re pretty safe.

Bottom line: set a lengthy password consisting of random letters, digits and symbols, and use a username that’s not part of your name, your site’s URL or “admin”. Maybe just use some random letters – your password manager will remember it, after all.

If you do those two things, there’s nothing to worry about.


If you have your own take on this data, or think I’ve misinterpreted something, feel free to leave a comment below or on Hacker News – I’d love to hear your thoughts.

Sep 27 2 Update to my Roundcube skin

In my last post I introduced the release of my personal Roundcube skin. It’s been a few months, and in the meantime a new version of Roundcube arrived, bringing changes to the way skins are handled. As it turns out, my skin wasn’t compatible with the new version.

Therefore I’ve updated the skin – now version 1.0 – with the necessary fixes, mainly compatibility with Roundcube 1.0. For example, you can actually write emails now.

There’s also a new beautiful login screen along with some UI improvements – in the Settings area, especially.

Screenshot of the Fredsted Roundcube skin version 1.0

It’s available on good ole GitHub, but you can also just download the zipball if you prefer. Installation instructions are included.

Don’t hesitate to create pull requests or reach out about problems in the comments – there’s still lots more work to be done, and I’ll be making many more design tweaks and cleaning up the code in the coming weeks.

Jun 18 9 How to reverse engineer a wireless router

The Zyxel WRE2205

The Zyxel WRE2205 (a rebranded Edimax EW-7303APN V2) is a plug-shaped wireless extender. What’s interesting to me about this device is its extremely small size: many of my standard power bricks are larger than this unit – but they don’t contain a small Linux minicomputer and advanced wireless functionality.

Trying out the WRE2205 for its intended purpose, I discovered that its wireless performance was quite subpar – slower than my actual Internet connection, but still very usable. That’s understandable, of course: it has no antenna. So I replaced it with a faster AirPort Express, which can also act as a wireless bridge.

No longer needing the device for its intended purpose, I thought about how cool it would be to have an actual Linux plug PC I could SSH into and use for all sorts of home automation purposes, or leave somewhere public, name the SSID “FreeWifi” and install Upside-Down-Ternet. The possibilities are endless!

So I started getting the desire to hack this thing. And having seen some bad router software in the many devices I’ve owned, I thought that there could be a chance of rooting this thing.

As anyone who’s poked around with consumer network equipment knows, a good place to start is binwalk, a utility that lists and extracts filesystems embedded in files like router firmware updates. What these “update” files actually do is replace the whole contents of the operating system partition with a completely new version. This is why these devices may “brick” when the power is cut during an upgrade: they may not boot without all the files.

To my delight, binwalk came up with a squashfs filesystem embedded in the latest firmware update from Zyxel’s Web site.

simon@workstation:~$ binwalk -v wre2205.bin

Scan Time:     2014-06-18 22:44:24
Signatures:    212
Target File:   wre2205.bin
MD5 Checksum:  e2aa557aa38e70f376d3a3b7dfb4e36f

509           0x1FD         LZMA compressed data, properties: 
                            0x88, dictionary size: 1048576 bytes, 
                            uncompressed size: 65535 bytes
11280         0x2C10        LZMA compressed data, properties: 0x5D, 
                            dictionary size: 8388608 bytes, 
                            uncompressed size: 2019328 bytes
655360        0xA0000       Squashfs filesystem, big endian, 
                            version 2.0, size: 1150773 bytes, 445 inodes, 
                            blocksize: 65536 bytes, 
                            created: Wed Mar 26 04:14:59 2014


binwalk is so great that it can even extract it for us:

simon@workstation:~$ binwalk -Me wre2205.bin
Target File:   _wre2205.bin.extracted/2C10 
MD5 Checksum:  a47fd986435b2f3b0af9db1a3e666cf1 
1626688       0x18D240      Linux kernel version "2.4.18-MIPS-01.00 
                            (root@localhost.localdomain) 
                            (gcc version 3.4.6-1.3.6) 
                            #720 Wed Mar 26 11:10"


It’s all the files for the park… It tells me everything!

We can see it’s a Linux 2.4 MIPS kernel. Good. “I know this”, as they say in Jurassic Park.

What we get is a directory containing the whole Linux system. What’s interesting is that you can see the configuration and especially all the shell scripts – there are so many shell scripts. The source for the Web interface is of course also included.

However, most of the functionality is actually not written in whatever scripting language it’s using; it comes from within the Web server, which apparently is heavily modified. The Web files mainly display some variables and send some forms. Not that exciting yet.

The Web server, by the way, is something called boa, an open source http server. Studying the config file, something interesting is located in the file /etc/boa/boa.passwd. The contents:


An MD5-hashed password, it seems. Kind of creepy, because the default username for the admin user is admin, not root – and this file is referenced in the Auth directive of boa’s config file. So Zyxel has their own little backdoor. I didn’t get around to cracking that password, because I moved on to the /web directory, which contains all the web files.

The Web Interface for the WRE2205

The WRE2205 Web Interface

The standard things are there, of course: the jQuery library (version 1.7), some JavaScript, some graphics and some language files. The standard header/footer pages (in this case, because Zyxel is stuck in the 1990s, a frameset), and so on.

Looking through the filenames, two interesting ones turned up: /web/debug.asp and /web/mp.asp. Neither is referenced in the “public” Web interface. Having access to debug files is always a good thing when trying to break into something.

The first file, debug.asp, looks like a password prompt of some sort.

Screenshot from 2014-06-18 23:11:50
One might reasonably assume it has something to do with showing different log files, despite the weird sentence structure. No clues in the config file, and typing some random passwords didn’t work (1). Let’s move on.

The next file, mp.asp, looks much more interesting:

Screenshot from 2014-06-18 23:17:08
There are several good signs here despite the rather minimalist interface. First, it actually looks like a command prompt: the text box selects itself upon loading the page, and there’s a # sign, usually an indicator of a system shell. There was also a clue in the source code: the input field’s name is actually command. Simply entering nothing and pressing GO yields the following result:

Screenshot from 2014-06-18 23:23:08
Bingo. It seems to launch a shell script where the input box is the parameter. Let’s take a look at this fellow:

Screenshot from 2014-06-18 23:26:01
Lots of different debug commands that output different things. Entering ENABLEWIRELESS in the prompt runs /bin/ ENABLEWIRELESS and returns the output as HTML. (I have no idea what the “interface” and yes/no switches do; entering eth0 doesn’t work, so maybe it’s an example?)

At the bottom there’s even a COMMAND command that allows us to execute any command. At least they tried to make this a little secure by limiting the applications you can execute:

         if [ "$2" = "ifconfig" ] || [ "$2" = "flash" ] || [ "$2" = "cat" ] \
            || [ "$2" = "echo" ] || [ "$2" = "cd" ] || [ "$2" = "sleep" ] \
            || [ "$2" = "kill" ] || [ "$2" = "iwpriv" ] || [ "$2" = "reboot" ] \
            || [ "$2" = "ated" ] || [ "$2" = "AutoWPA" ]; then
             $2 $3 $4 $5 $6
         fi

But at this point it hardly matters, since a whitelist like this is trivial to bypass, and we can just do something like this:

And so we have full control. Since || means OR, and the command fails when there’s no valid command, the last command will be run.
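The screenshot with the exact input isn’t preserved here, so this is only a hypothetical illustration of the mechanism: when the first command fails (for instance because it isn’t a valid command at all), the shell falls through to whatever comes after the ||:

```shell
# The bypass in miniature: the first command fails (it doesn't exist),
# so the shell runs the command after || instead -- which can be anything.
notarealcommand 2>/dev/null || echo "arbitrary command executed"
```

On the plug computer the payload would of course be something more useful than echo, like starting telnetd.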

As we can see from the above screenshot, the web server is running as root so now we have full control of the little plug computer. Software like wget comes preinstalled, so you can just go ahead and download shell scripts, an SSH server, or otherwise have fun on the 1.5 megabytes of space you have to play with. Good luck!

I kind of expected that I would have to use an exploit or buffer overflow, get out a disassembler like IDA, or at least do a port scan and some more digging. But just below the surface there are some quite big security issues. Of course, you need to know the admin password, since the whole /web directory is behind Basic authentication.

However, since the boa web server is an old version with exploits available, you probably won’t even need that. Since the page is hidden, we can assume it’s not an intended feature. With such a big hole, I wonder what else lies below the surface?



  1.  I later found, by running strings on the HTTP server executable, that typing report.txt shows some debug information.

Dec 18 3 Monitoring ’dd’ progress

On Linux, you can view the progress of the file/disk copy tool dd by sending it the USR1 signal, which makes it print a progress report. This doesn’t work on Apple’s OS, where dd responds to SIGINFO (Ctrl-T in the terminal) instead.
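A minimal demonstration on Linux (GNU coreutils dd; the copy here is just a harmless /dev/zero to /dev/null transfer for illustration):

```shell
# Start a long-running dd in the background, then send USR1 to make it
# report progress. GNU dd prints the report to stderr.
dd if=/dev/zero of=/dev/null bs=1M count=200000 2>progress.log &
DD_PID=$!
sleep 1
kill -USR1 "$DD_PID"   # dd appends a "... bytes ... copied ..." line
sleep 1
kill "$DD_PID" 2>/dev/null
grep 'bytes' progress.log
```

In real use you would find the PID of an already-running dd with something like kill -USR1 $(pgrep '^dd$').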

However, with Activity Monitor, it’s easy to see the progress of dd when, for example, copying an operating system image onto a USB drive (which can take a while…). Simply compare the size of the image with the “bytes written” column to get a good idea of how far along it is:

dd progress with Activity Monitor

If you need more detailed progress, or use dd a lot, you can try installing pv, a utility which displays the amount of data piped through it. You would use it with dd like this:

dd if=/file/image | pv | dd of=/dev/disk8

That would render something like this, letting you know the progress:

1,38MiB 0:00:08 [  144kiB/s]

Also, with pv, you can specify the --size parameter to get an estimate of the time remaining. pv can be installed with, for example, Homebrew.

Apr 28 2 Fixing slow ProFTPd logins

Recently a few users on a Virtualmin server experienced issues with slow FTP logins. Logging in took a long time, and often didn’t succeed at all.

To correct this, first log in to the Webmin interface at http://yourserver:10000. At the top left, click Webmin.

A bit further down, under Servers, select ProFTPD Server.

Under Global Configuration, select the Networking icon.


Then you’ll see a screen with a whole bunch of settings. Set the following options to No:

  1. Lookup remote Ident username?
  2. Do reverse DNS lookups of client addresses?


Now click Save, and on the ProFTPd page press Apply settings at the bottom. Your logins should now be instant.
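If you’d rather edit the configuration file directly, the same two settings correspond to these ProFTPD directives (the config path varies by distribution; /etc/proftpd/proftpd.conf is common):

```apache
# Disable ident and reverse DNS lookups, which cause the login delay
IdentLookups   off
UseReverseDNS  off
```

Reload or restart ProFTPD afterwards for the change to take effect.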

Feb 26 7 A native KeePass app for Mac

Password storage is incredibly important to me. Since I began seeing friends and others get their identities and online lives taken away because of reusing and/or using weak passwords, I started taking password security extremely seriously.

When I chose the utility to use for this, I had a couple basic requirements.

  1. It had to be open source, for obvious reasons
  2. I had to be able to access my passwords on all my devices (iPad, iPhone, MacBook, workstation)

Things like 1Password and LastPass didn’t fulfill the first requirement, although they’re very handy because of their browser integration and mobile apps. So I ended up choosing a combination of the KeePass framework and Safari + the Mac OS X keychain for my password storage needs, with KeePassX as my client, along with a mobile app, MiniKeePass, that syncs my KeePass database using Dropbox. As an added bonus, the iOS app is open source as well.

I use KeePass as my primary password storage database, and Safari’s password saving feature for sites I access often, like my blog and reddit account.

I’m very happy with this solution, but unfortunately the Mac KeePassX currently has a very ugly, un-Mac-like user interface. I’ve been waiting for something that uses native Mac user interface controls.

And finally, today I stumbled across this KeePass Mac client developed by Michael Starke from Hick’n’Hack Software. It seems to be in very early alpha, but it can load KeePass files and display their contents, so the basic functionality is almost there. It appears to use the MiniKeePass framework for its backend. I cloned and ran it immediately, as I’ve been wanting this ever since I started using KeePass for storing my passwords.

Unfortunately I can’t seem to copy passwords yet, and there’s no detail dialog when you click a password entry.

But since, as of writing, the last commit was 13 hours ago, I’m sure this functionality will be added soon. I’m just so happy someone is making this. It definitely makes me want to learn Objective-C properly so I can contribute to this project! If you know ObjC, you should definitely go submit some pull requests!


Here’s a screenshot from the release I just built:

Screen Shot 2013-02-26 at 5.59.35 PM

Compare this to the current KeePass:

Screen Shot 2013-02-26 at 6.20.28 PM

Feb 23 4 Sync SSH config across computers with Dropbox

Here’s a little time-saving tip for Mac OS X/Linux users: if you work with lots of different Macs and servers daily, store your SSH configuration file in Dropbox and create a symbolic link to it, so it stays in sync across your computers.

With this, once I add a new machine to my SSH config, it’s immediately available on all of my computers: my workstation, laptop, work machine, etc. I’m terrible at remembering hostnames and IP addresses, so this comes in handy as I gain access to more and more servers.

Also, you can of course extend this method to sync other types of configuration files, like your git config or bash profile. Dropbox is a neat tool!

Step 1

Create a folder in your Dropbox to store files like these.

mkdir ~/Dropbox/configs

Step 2

Move your SSH config to this folder. I call it ssh-config.txt instead of simply config for easier access, and so as not to mix it up with other configuration files.

mv ~/.ssh/config ~/Dropbox/configs/ssh-config.txt

Step 3

Create a symbolic link to the new configuration file.

ln -s ~/Dropbox/configs/ssh-config.txt ~/.ssh/config
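The three steps can be rolled into one small script; this is just a convenience wrapper around the commands above (it skips the move if the config is already a symlink):

```shell
#!/bin/sh
# Move ~/.ssh/config into Dropbox and symlink it back into place.
mkdir -p ~/Dropbox/configs
if [ -f ~/.ssh/config ] && [ ! -L ~/.ssh/config ]; then
    mv ~/.ssh/config ~/Dropbox/configs/ssh-config.txt
fi
ln -sf ~/Dropbox/configs/ssh-config.txt ~/.ssh/config
```

Run it once on each machine after Dropbox has synced the file.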

Apr 13 11 InstaDJ – a quick way to assemble YouTube playlists

I made a website that lets you create YouTube playlists easily – and share them, too.

Everybody is online nowadays; nobody uses CDs anymore. So at parties it’s common to see a laptop hooked up to a stereo, with people walking up to select songs on YouTube throughout the night. It kind of sucks, though:

  • Music starts and stops randomly as people get drunk and start searching for songs while another is playing.
  • You need to get up and change the track when it stops.
  • It’s too hard to make a playlist on YouTube. You can’t really make one on the fly.
  • What’s more, you have to be logged in with your Google ID to make playlists. I don’t want random people to mess with my account (e.g. Gmail), especially drunk people.

Sure, there’s Grooveshark. But people who aren’t nerds can’t figure out how to use it and will just go to YouTube instead. It’s too easy to interrupt a playlist, especially when you’re drunk, and the add-to-playlist button is easily missed.

Grooveshark is also missing many songs due to silly record companies.

Other sites exist, I know. But no matter which one you use, people will inevitably go to YouTube because it’s got all the content and it’s what people know and love.

Other “YouTube DJ” sites exist, too. I’ve been through a few. They either a) require login, b) are hard to use, c) can’t autoplay, or d) don’t work.

So I got fed up with all this and made InstaDJ. It’s a dead-simple Web site where you can add YouTube videos to a playlist on the fly. Even drunk people get it.

InstaDJ allows you to search and queue YouTube videos, using a simple interface everybody understands, in a way which doesn’t interrupt the music.

What it does

  • Search YouTube videos
  • View user uploads and favorites
  • Queue YouTube videos
  • Auto-selects HD video if available
  • Generate URL to playlists
  • Share playlist
  • It’s free and there’s no ads
  • Easy to use, minimalist interface

I even find myself just using InstaDJ instead of playing music from my iTunes library.

Don’t you want to try it out? Just click here.

For the technically interested, it’s built with the YouTube API, Twitter Bootstrap and jQuery. Enjoy.

Feb 9 0 Apple Predictions for 2012 – 2013

  • All Apple products with screens will begin to have Retina-grade displays, starting with the iPad 3 coming in the first half of 2012, then the MacBooks and finally the Thunderbolt display and iMacs.
  • iPad 3 will have, besides the Retina display, double the RAM, a quad-core processor and a better 8MP camera; it will be thinner but keep the same 10-hour battery life. The design will be similar to the iPad 2’s. Oh, and Siri. Coming Q1 2012.
  • Mac OS X will merge with iOS in the next version coming late 2013, potentially removing a lot of functionality, upsetting professionals. There will be no 10.8. It will simply be called iOS. (They’ve run out of cat names)
  • iMacs will never have touch-screens though.
  • Apple’s 42″ television set will premiere before 2013. It will look like a large Thunderbolt Display. It will feature iOS. Apple will also partner with TV stations to offer more on-demand programming. It will probably feature Siri so you can channel surf without moving your body. It will be available in black and white.
  • Mac Pro will be discontinued after one final generation, which will arrive when Intel’s new chipset is ready in mid 2012; so the line ends sometime in late 2013. Super-fast Thunderbolt-equipped iMacs will take over the Mac Pro’s market.
  • Apple’s server offerings will be replaced with cloud services and the increasingly powerful Mac Mini. (Apple doesn’t even use their own hardware for servers anymore)
  • Final Cut Pro X will receive rigorous updates. Video professionals will regret their shift to Media Composer as digital formats replace tape.
  • We will see the transition to ARM-based Macs in late 2013 or the beginning of 2014, starting with the MacBook Air.
  • As battery life will be longer, we’ll begin to see security features in the MacBook power adapters like Apple recently patented.
  • iPhone 5, coming mid-to-late 2012, will have a new, thinner design that goes back to rounded corners, and Siri will leave beta at the same time. It will have LTE support. iPhone 5 won’t have any exceptional features, but it will be the best-selling iPhone ever.
  • iPod Classic will be discontinued and replaced with the iPod touch which will be renamed iPod. It will be expensive because of the 128GB SSD.
  • iPod Shuffle and iPod Nano will merge.

Apple will become more consumer and entertainment oriented and will slim its product line accordingly.

Right now, Apple is preparing and conditioning us for the iOS merge with their Lion operating system.

As we’ve seen with the release of Lion and the 10.7.3 update, retina support on Macs is coming soon.

ARM-based Macs are further away; it’s simply not fast enough yet. But it’ll come unless something drastic happens at Intel.

Apple’s TV will become the best-selling TV ever, no doubt about it. Right now, TVs suck, with their slow menus and bloated designs with huge bezels and bright LED lights. Consumers want an easy-to-use, minimalist, internet-connected television with an Apple logo on it. And they’ll pay a premium for it.

Mac Pro is becoming a legacy machine. Apple will probably have to release a new one to please the professional market, but they’re not happy about it. The Thunderbolt Display will mostly become a dock for MacBooks.

Right now is an exciting time at Apple. It seems like their growth can’t stop. But will their changes succeed? Will the competition finally pull itself together and release products worth buying?