Archive for the ‘Articles’ Category

May 7 4 Automating AWS infrastructure with Terraform

When you start using cloud hosting solutions like Amazon Web Services, Microsoft Azure or Rackspace Cloud, it doesn’t take long to feel overwhelmed by the choice and abundance of features of the platforms. Even worse, the initial setup of your applications or Web sites on a cloud platform can be very cumbersome; it involves a lot of clicking, configuring and discovering how the different parts fit together.

With tools like Terraform, building your infrastructure becomes a whole lot easier and more manageable. Terraform lets system administrators sit down and script their whole infrastructure stack and connect the different parts together, much like assigning variables in a programming language: with Terraform, you might assign a load balancer's backend hosts to a list of servers, for example.

In this tutorial I'll walk you through a configuration example of how to set up a complete load balanced infrastructure with Terraform. At the end you can download all the files and modify them to your own needs. I'll also talk a little about where you can go from here if you want to take Terraform further.

You can download all the files needed for this how-to on Github.

Getting up and running

To start using Terraform, you’ll need to install it. It’s available as a single binary for most platforms, so download the zip file and place it somewhere in your PATH, like /usr/local/bin. Terraform runs completely on the command-line, so you’ll need a little experience executing commands on the terminal.
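On Linux, for example, the installation could look something like this (the file name and version are just an illustration; grab whichever release is current for your platform):

unzip terraform_0.9.4_linux_amd64.zip   # the zip downloaded from the Terraform website
sudo mv terraform /usr/local/bin/       # put the single binary somewhere on your PATH
terraform version                       # verify the binary is found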

Variables

A core part of a Terraform project is the variables file, variables.tf, which Terraform picks up automatically (it loads every .tf file in the working directory). It's the place to define the hard dependencies for your setup, and in this case we have two:

  1. a path to a SSH public key file,
  2. the name of the AWS region we wish to create our servers in.

Both of these variables have defaults, so Terraform won’t ask you to define them when running the planning step which we’ll get to in a minute.

Create a folder somewhere on your hard drive, create a new file called variables.tf, and add the following:

variable "public_key_path" {
  description = "Enter the path to the SSH Public Key to add to AWS."
  default = "~/.ssh/id_rsa.pub"
}

variable "aws_region" {
  description = "AWS region to launch servers."
  default     = "eu-central-1"
}
variables.tf

Main file

Terraform’s main entrypoint is a file called main.tf, which you’ll need to create. Add the following 3 lines:

provider "aws" {
  region = "${var.aws_region}"
}

This clause defines the provider. Terraform comes bundled with support for a number of providers, like Amazon Web Services, which we're using in this example. One of the things you can configure it with is the default region, and we're getting that from the variables file we just created, which Terraform includes automatically. You can also configure the AWS provider in other ways, like explicitly adding an AWS Access Key and Secret Key, but in this example we'll pass those as environment variables; we'll get to that later.
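If you did want to hard-code the credentials instead of using environment variables (generally a bad idea for anything you commit to version control; the values below are placeholders), the provider block accepts them directly:

provider "aws" {
  region     = "${var.aws_region}"
  access_key = "AKIA..."          # placeholder
  secret_key = "your-secret-key"  # placeholder
}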

Network

Next we'll start adding some actual infrastructure; in Terraform parlance, that's called a resource:

resource "aws_vpc" "vpc_main" {
  cidr_block = "10.0.0.0/16"
  
  enable_dns_support = true
  enable_dns_hostnames = true
  
  tags {
    Name = "Main VPC"
  }
}

resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.vpc_main.id}"
}

resource "aws_route" "internet_access" {
  route_table_id          = "${aws_vpc.vpc_main.main_route_table_id}"
  destination_cidr_block  = "0.0.0.0/0"
  gateway_id              = "${aws_internet_gateway.default.id}"
}

# Create a public subnet to launch our load balancers
resource "aws_subnet" "public" {
  vpc_id                  = "${aws_vpc.vpc_main.id}"
  cidr_block              = "10.0.1.0/24" # 10.0.1.0 - 10.0.1.255 (256)
  map_public_ip_on_launch = true
}

# Create a private subnet to launch our backend instances
resource "aws_subnet" "private" {
  vpc_id                  = "${aws_vpc.vpc_main.id}"
  cidr_block              = "10.0.16.0/20" # 10.0.16.0 - 10.0.31.255 (4096)
  map_public_ip_on_launch = true
}
Network setup

To contain our setup, an AWS Virtual Private Cloud is created and configured with an internal IP range, as well as DNS support and a name. Right after the resource keyword is aws_vpc, the type of resource we're creating. After that comes the identifier, vpc_main, which is how we'll refer to it later.

We're also creating a gateway, a route and two subnets: a public one for internet-facing services like the load balancers, and a private one for backend instances that don't need incoming network access.

As you can see, different parts are neatly interlinked by referencing them like variables.

Trying it out

At this point, we can start testing our setup. You’ll have two files in a folder, variables.tf and main.tf with the content that was just listed. Now it’s time to actually create it in AWS.

To start, pass your AWS access keys to Terraform as environment variables by typing the following two lines in your console:

export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="Your secret key"

Next, we'll create the Terraform plan file. Using your AWS credentials, Terraform checks the status of the different resources you've defined, like the VPC and the gateway. Since it's the first time you're running it, Terraform will mark everything for creation in the resulting plan file. Just running the plan command won't touch or create anything in AWS.

terraform plan -out terraform.plan

You'll see an overview of the resources to be created, and with the -out terraform.plan argument, the plan is saved to a file, ready for execution with apply.

terraform apply terraform.plan

Executing this command makes Terraform start running commands against AWS to create the resources, and you'll see the results as they come in. If there's an error, for example because you already created a VPC with the same name, Terraform will report it and stop.

After running apply, you’ll also see a new file in your project folder: terraform.tfstate – a cache file that maps your resources to the actual ones on Amazon. You should commit this file to git if you want to version control your Terraform project.

So now Terraform knows that your resources were created on Amazon. They were created with the AWS API, and the IDs of the different resources are saved in the tfstate file, so running terraform plan again will show there's nothing new to create.

If you change your main.tf file, for example changing the VPC subnet to 192.168.0.0/24 instead of 10.0.0.0/16, Terraform will figure out the changes necessary to update the resources. That may result in your resources (and their dependents) being destroyed and re-created.

More resources

Having learnt a little about how Terraform works, let’s go ahead and add some more things to our project.

We'll add two security groups: one to limit network access to our servers, and one to open up the public load balancers we'll create with the AWS ELB service.

# A security group for the ELB so it is accessible via the web
resource "aws_security_group" "elb" {
  name        = "sec_group_elb"
  description = "Security group for public facing ELBs"
  vpc_id      = "${aws_vpc.vpc_main.id}"

  # HTTP access from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  # HTTPS access from anywhere
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Our default security group to access the instances over SSH and HTTP
resource "aws_security_group" "default" {
  name        = "sec_group_private"
  description = "Security group for backend servers and private ELBs"
  vpc_id      = "${aws_vpc.vpc_main.id}"

  # SSH access from anywhere
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP access from the VPC
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }
  
  # Allow all from private subnet
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["${aws_subnet.private.cidr_block}"]
  }

  # Outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Our elb security group is only reachable on ports 80 and 443 (HTTP and HTTPS), while the default one only has public access on port 22 (SSH). The default group also allows access from the whole VPC (including the public facing load balancers) on port 80, as well as full access from other servers in the private subnet. Both allow all outgoing traffic.

After the security groups, we need to define a key pair whose public key is placed on the instances we create later. Here, we use the variable defined earlier to specify the path on the local filesystem.

resource "aws_key_pair" "auth" {
  key_name   = "default"
  public_key = "${file(var.public_key_path)}"
}

Modules

You probably thought that there was a lot of duplicate code in those two security groups, and you're right. To combat that, Terraform provides custom modules, which work a bit like include files.

Since we need to configure quite a few things on our EC2 instances, and that configuration is almost always the same across them, we'll create a module for our instances. To do that, create a new folder called instance.

In the instance folder, create 3 new files:

variable "private_key_path" {
  description = "Enter the path to the SSH Private Key to run provisioner."
  default = "~/.ssh/id_rsa"
}

variable "aws_amis" {
  default = {
    eu-central-1 = "ami-060cde69"
  }
}

variable "disk_size" {
  default = 8
}

variable "count" {
  default = 1
}

variable "group_name" {
  description = "Group name becomes the base of the hostname of the instance"
}

variable "aws_region" {
  description = "AWS region to launch servers."
  default     = "eu-central-1"
}

variable "instance_type" {
  description = "AWS region to launch servers."
  default     = "t2.small"
}

variable "subnet_id" {
  description = "ID of the AWS VPC subnet to use"
}

variable "key_pair_id" {
  description = "ID of the keypair to use for SSH"
}

variable "security_group_id" {
  description = "ID of the VPC security group to use for network"
}
instance/variables.tf
resource "aws_instance" "instance" {
  count = "${var.count}"

  instance_type          = "${var.instance_type}"
  ami                    = "${lookup(var.aws_amis, var.aws_region)}"
  key_name               = "${var.key_pair_id}"
  vpc_security_group_ids = ["${var.security_group_id}"]
  subnet_id              = "${var.subnet_id}"
  
  root_block_device {
      volume_size = "${var.disk_size}"
  }
  
  tags {
      Name = "${format("%s%02d", var.group_name, count.index + 1)}" # -> "backend02"
      Group = "${var.group_name}"
  }
  
  lifecycle {
    create_before_destroy = true
  }
  
  # Provisioning
  
  connection {
    user = "ubuntu"
    private_key = "${file(var.private_key_path)}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
    ]
  }
}
instance/main.tf
# Used for configuring ELBs.
output "instance_ids" {
    value = ["${aws_instance.instance.*.id}"]
}
instance/output.tf

In the variables file, we have a few things worth mentioning:

  • a default path to the private key matching the public key – we'll need the private key for connecting via SSH and running the provisioner,
  • we define a list of AMIs, or more specifically a map. Since we're only focusing on Amazon's EU Central 1 region, we've only defined an AMI for that region (it's Ubuntu 16.04 LTS). You'll need to browse Amazon's AMI library if you use another region or want another operating system,
  • some defaults are defined, like the count of instances, disk size, etc. These can be overwritten when invoking the module,
  • some variables don't have defaults – oddly, Terraform doesn't let modules automatically inherit variables, which is why I've chosen to place the private key path here. Otherwise I'd have to pass the main Terraform variable to every module.

The output file allows the module to export some properties – you have to explicitly define outputs for everything you want to reference later. The only thing I have to reference is the actual instance IDs (for use in the ELBs), so that’s the only output.

Using the tags block, we can add some info to our instances. I'm using one of Terraform's built-in functions, format, to generate a friendly hostname based on the group name and a 1-indexed number. The provisioner clause here is deliberately bare; in practice you'd typically reference a Chef or Ansible playbook, or run some commands to set up your environment and bootstrap your application, as sketched below.
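For example, a slightly fuller remote-exec provisioner might install and start a web server (a minimal sketch; the nginx package and commands are just an illustration, not part of the setup above):

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get -y update",
      "sudo apt-get -y install nginx",   # hypothetical example package
      "sudo systemctl enable nginx",
      "sudo systemctl start nginx",
    ]
  }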

Back in your main Terraform file, main.tf, you can now start referencing your AWS EC2 Instance module:

module "backend_api" {
    source                 = "./instance"
    subnet_id              = "${aws_subnet.private.id}"
    key_pair_id            = "${aws_key_pair.auth.id}"
    security_group_id      = "${aws_security_group.default.id}"
    
    count                  = 2
    group_name             = "api"
}

module "backend_worker" {
    source                 = "./instance"
    subnet_id              = "${aws_subnet.private.id}"
    key_pair_id            = "${aws_key_pair.auth.id}"
    security_group_id      = "${aws_security_group.default.id}"
    
    count                  = 2
    group_name             = "worker"
    instance_type          = "t2.medium"
}

module "frontend" {
    source                 = "./instance"
    subnet_id              = "${aws_subnet.private.id}"
    key_pair_id            = "${aws_key_pair.auth.id}"
    security_group_id      = "${aws_security_group.default.id}"
    
    count                  = 2
    group_name             = "frontend"
}

module "db_mysql" {
    source                 = "./instance"
    subnet_id              = "${aws_subnet.private.id}"
    key_pair_id            = "${aws_key_pair.auth.id}"
    security_group_id      = "${aws_security_group.default.id}"
    
    count                  = 3
    disk_size              = 30
    group_name             = "mysql"
    instance_type          = "t2.medium"
}

Instead of resource, modules are referenced using the module clause. Every module has to have a source argument pointing to the directory where the module's main.tf file is located.

Again, since modules can’t automatically inherit or reference parent resources, we’ll have to explicitly pass the subnet, key pair and security groups to the module.

This example consists of 9 instances:

  • 2x backend,
  • 2x backend workers,
  • 2x frontend servers,
  • 3x MySQL servers.

Load balancers

To finish our Terraform configuration, we add the remaining component: the load balancers.

# Public Backend ELB
resource "aws_elb" "backend" {
  name = "elb-public-backend"

  subnets         = ["${aws_subnet.public.id}", "${aws_subnet.private.id}"]
  security_groups = ["${aws_security_group.elb.id}"]
  instances       = ["${module.backend_api.instance_ids}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
  
  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/healthcheck.php"
    interval            = 30
  }
}

# Public Frontend ELB
resource "aws_elb" "frontend" {
  name = "elb-public-frontend"

  subnets         = ["${aws_subnet.public.id}", "${aws_subnet.private.id}"]
  security_groups = ["${aws_security_group.elb.id}"]
  instances       = ["${module.frontend.instance_ids}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
  
  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/healthcheck.php"
    interval            = 30
  }
}

# Private ELB for MySQL cluster
resource "aws_elb" "db_mysql" {
  name = "elb-private-galera"

  subnets         = ["${aws_subnet.private.id}"]
  security_groups = ["${aws_security_group.default.id}"]
  instances       = ["${module.db_mysql.instance_ids}"]
  internal        = true

  listener {
    instance_port     = 3306
    instance_protocol = "tcp"
    lb_port           = 3306
    lb_protocol       = "tcp"
  }
  
  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:9222/" # Galera Clustercheck listens on HTTP/9222
    interval            = 30
  }
}

The load balancers provide the entrypoints for our application. One thing to note here is how the instances are referenced[Footnote 1].

Main output file

To put a cherry on top, we'll create an output file for our main project, output.tf. Like the rest of our .tf files, Terraform will pick it up automatically.

# Public Load Balancers

output "api_address" {
  value = "${aws_elb.backend.dns_name}"
}

output "frontend_address" {
  value = "${aws_elb.frontend.dns_name}"
}

# Private Load Balancers

output "galera_address" {
  value = "${aws_elb.db_mysql.dns_name}"
}
output.tf

This will display the hostnames of our ELBs in a friendly format after running terraform apply, which is handy for copying into a configuration file or your browser.
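You can also print a single output on demand with the terraform output command, for example:

terraform output api_address
terraform output galera_address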

You can now run terraform plan again like before, but since you’re using modules, you’ll have to run terraform get first to include them.

Then you can see that it will create the remaining infrastructure when you do terraform apply.
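So the full cycle with modules looks something like this:

terraform get                       # fetch the local modules referenced in main.tf
terraform plan -out terraform.plan  # review the changes
terraform apply terraform.plan      # create the remaining infrastructure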

You can clone, fork or download the full project over on Github.

Next steps

Where can you go from here? I have a couple ideas:

  • Move your DNS to Amazon Route53 and automate your DNS entries with the outputs from the ELBs.
  • In addition to Route53, see what other AWS services you can provision using Terraform, like S3 buckets, autoscaling groups, AMIs, IAM groups/policies…
  • Further use modules to simplify your main file, for example by nesting multiple resources in one file. You could, for example, have all your network setup in a single module to make the base main.tf file more concise.
  • Integrate with provisioning software like Ansible, using their EC2 inventory to easily provision new instances.

Footnotes

  1. Yes, the instance IDs are inside a string, which is how all resources and modules are referenced, even though they technically are arrays and (in my opinion) shouldn't be encapsulated in a string. But that's how it is.

Apr 13 10 How to use Apple’s SF Mono font in your editor

At WWDC 2016, Apple unveiled a brand new font which was called San Francisco. The font went on to become the default font in macOS and iOS, replacing Helvetica (which replaced Lucida Sans). On watchOS, a special Compact variant of San Francisco, was used.

Later, Apple introduced yet another variant, a monospaced variant, which I think simply looks fantastic – especially on a high-resolution display like the MacBook. It has gone and replaced my previous favourite monospace font, Anonymous Pro.

Weirdly enough, the fonts are not available for selection in macOS; you can't, for example, use San Francisco when editing a document in Pages.

Currently, the standard and Compact versions of San Francisco are available on Apple's developer portal, but unfortunately the monospaced version is not.

Fortunately, if you have macOS Sierra, the monospaced version is bundled inside Terminal.app.

Here’s how you extract the font from Terminal.app and install it on your computer so you can use it in your text editor, for example:

  1. Go to Terminal.app’s resources folder:
    1. Right click the Finder icon in the Dock
    2. Click ‘Go to Folder…’
    3. Enter this path: /Applications/Utilities/Terminal.app/Contents/Resources/Fonts
    4. Click Go
  2. You’ll see a list of fonts in the folder.
    1. Select all of the fonts in the folder.
    2. Right click on them and click ‘Open’
  3. A window will pop-up previewing the font. Click Install Font.
  4. You'll perhaps get a window that says there are problems with the fonts. I did too.
    1. Go ahead and click ‘Select all fonts’
    2. Click ‘Install Checked’
    3. You’ll get another dialog
    4. Click ‘Install’
  5. Font Book will show the new font as installed. You’ll now be able to select the SF Mono font in your editor. 🎉
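If you'd rather do it from the command line, copying the font files into your user font library should accomplish the same thing (a quick sketch; this skips Font Book's validation step):

cp /Applications/Utilities/Terminal.app/Contents/Resources/Fonts/* ~/Library/Fonts/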

Here’s a GIF of the whole process:

Jan 31 1 Back up Elasticsearch with S3 compatible providers

ElasticSearch is a popular search engine and database that’s being used in applications where search and analytics is important. It’s been used as a primary database in such applications as HipChat, storing billions of messages while making them searchable.

While it's very feature-complete for use cases like that, ElasticSearch is also comparatively new (next to popular datastores like MySQL), and that shows in one area when it's used as a permanent datastore: backups.

In the early days of ElasticSearch, backup was crude. You shut down your node, or flushed its contents to disk, and copied the data storage directory on the hard drive. Copying a data directory isn't very convenient for high-uptime applications, however.

In later versions, ES introduced snapshots, which let you make a complete copy of an index. As of version 2, there are several different snapshot repository plugins available:

  • HDFS
  • Amazon S3
  • Azure
  • File system/Directory

File System

For the file system repository type, ElasticSearch requires that the same directory is being mounted on all nodes in the cluster. This starts getting inconvenient fast as your ES cluster grows.

The mount type could be NFS, CIFS, SSHFS or similar, and you can use a program like AutoFS to make sure the mount is always available.

On clusters with a few nodes, I haven’t had good luck with it – even using AutoFS, the connection can be unstable and lead to errors from ElasticSearch, and I’ve also experienced nodes crashing when the repository mount came offline.

S3/Azure

Then there’s S3 and Azure. They work great – provided that there isn’t anything preventing you from storing your data with a 3rd party, American-owned cloud provider. It’s plug and play.

S3 Compatible

If you for some reason can’t use S3, there’s other providers that provide storage cloud services that are compatible with the S3 API.

If you prefer an on-prem solution, you can use a storage engine that supports the S3 API. Minio is a server written in Go that's very easy to get started with; more complex options include Riak S2 and Ceph.

Creating an S3 compatible repository is the same as creating an Amazon S3 repository. You need to install the cloud-aws plugin in ES, and in the elasticsearch.yml config file, you need to add the following line:

cloud.aws.signer: S3SignerType

Not adding this line will result in errors like these:

com.amazonaws.services.s3.model.AmazonS3Exception: 
null (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: null)

and

The request signature we calculated does not match the signature you provided

By default, the signer is AWSS3SignerType, and that prevents you from using an S3 compatible storage repository.

Setting the repository up in ES is similar to the AWS type, except you also specify an endpoint. For example, with the provider Dunkel.de, you’d add a repository like this:

POST http://es-node:9200/_snapshot/backups
{
  "type": "s3",
  "settings": {
    "bucket": "backups",
    "base_path": "/snapshots",
    "endpoint": "s3-compatible.example.com",
    "protocol": "https",
    "access_key": "Ze5Zepu0Cofax8",
    "secret_key": "Qepi7Pe0Foj2RuNat2Fox8Zos7YuNat2Fox8Zos7Yu"
  }
}

To learn more about the snapshot endpoints, here’s a link to the ES documentation.

If you take a lot of different backups, I’d also recommend to take a look at the kopf ES plugin, which has a nice web interface for creating, restoring and otherwise administering snapshots.

Periodical snapshots

I've had success setting up snapshots using cronjobs. Here's an example of how to take snapshots automatically.

On one of the ES nodes, simply add a cronjob which fires a simple request to ES, like this, which creates a snapshot with the current date:

0,30 * * * * curl -XPUT 'http://127.0.0.1:9200/_snapshot/backups/'$(date +\%d-\%m-\%Y-\%H-\%M-\%S)''

This will create a snapshot in the backups repository with a name like “20-12-2016-11-30-00” – the current date and time. You can also use a similar command to create a new ES repository every month, for example, so you can periodically take a complete snapshot of the cluster.
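For example, a monthly cronjob along these lines could create a fresh, month-stamped repository (a sketch only; the bucket, base_path and the remaining settings such as endpoint and keys should match your own repository setup from earlier):

0 0 1 * * curl -XPUT 'http://127.0.0.1:9200/_snapshot/backups-'$(date +\%Y-\%m)'' -d '{"type": "s3", "settings": {"bucket": "backups", "base_path": "/snapshots/'$(date +\%Y-\%m)'"}}'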

If you want a little more control, Elastic provides a nice tool called Curator which lets you easily organise repositories, snapshots, deleting old indexes, and more. Instead of doing a curl request in a cronjob, you write a Curator script which you can run in a cronjob – it gives you more flexibility.

Concurrency errors with snapshots

This section isn’t S3 specific, but I’ve run into these issues so often that I thought I’d write a little about them.

ElasticSearch can be extremely finicky when there are network timeouts while doing snapshots, for example, and you won't get any help from the official ES documentation.

For example, you may find that a snapshot is stuck: it's IN_PROGRESS, but it never finishes. You can then do a DELETE <repository_name>/<snapshot_name>, and it will change to status ABORTED. Then you might find you're stuck: it stays ABORTED forever, and when trying to DELETE it again, you'll get this:

{
 "error": {
 "root_cause": [
   {
     "type": "concurrent_snapshot_execution_exception",
     "reason": "[<repository_name>:<snapshot_name>] another snapshot is currently running cannot delete"
   }
 ],
 "type": "concurrent_snapshot_execution_exception",
 "reason": "[<repository_name>:<snapshot_name>] another snapshot is currently running cannot delete"
 },
 "status": 503
}
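For reference, the DELETE call that triggers this response is just (same placeholders as in the text):

curl -XDELETE 'http://127.0.0.1:9200/_snapshot/<repository_name>/<snapshot_name>'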

Now, trying to create another snapshot gets you this:

{
 "error": {
 "root_cause": [
   {
     "type": "concurrent_snapshot_execution_exception",
     "reason": "[<repository_name>:<snapshot_name>] a snapshot is already running"
   }
 ],
 "type": "concurrent_snapshot_execution_exception",
 "reason": "[<repository_name>:<snapshot_name>] a snapshot is already running"
 },
 "status": 503
}

The only way to fix this is to do either a rolling upgrade (e.g. restart one node, then the next), or a complete restart of the whole cluster. That’s it.

Jan 15 0 Simple Mac window management with BetterTouchTool

As a software developer, I not only work with lots of different windows on my computer screen, but with lots of different sets of windows. Not only am I dependent on windows being in different places, but in different sizes. As such, I need to manage all these windows in some way.

For example, I often need to have 3 browser windows open. Maybe one for documentation, one for a project management tool and one for testing. And then I’d of course want a text editor. Maybe for a while I’d like one of the windows to take up more space, so I move one to a different screen and make the other window larger.

It would take me a while to manually drag these windows to their right places.

Luckily, a Mac program called BetterTouchTool allows me to easily define sets of hotkeys that carry out all this moving and resizing of windows. I find that it speeds up my workflow a lot – I can easily organise my desktop.

It’s even preferable to the Windows 7-style drag-to-maximize Snap feature since I don’t have to use my mouse at all.

Here’s the shortcuts I’ve defined:

Use the link below to download a BTT preset of these shortcuts.

Did you create any cool sets of shortcuts or workflow improvements with BetterTouchTool you want to share? Let us know in the comments.

Sep 27 0 How to extend an LVM volume group

Extending a logical volume group usually needs to be done when the size of a VMware disk has been increased for a Linux VM. When resizing a disk, the volume isn’t extended automatically, so you need to extend the logical volume in the VM’s volume group.

This article assumes that:

  • You have an LVM volume group with a logical volume
  • You’ve added free space in the virtualizer, e.g. VMware
  • You’re running Ubuntu. Might also work with other distributions
  • You have basic knowledge of partitions and Linux

Creating new partition with Gparted

  1. Start by creating a new partition from the free space. I prefer doing this with a GUI using gparted. You need XQuartz if you’re on a Mac.
    1. SSH into the box with -X, e.g. ssh -X myserver
    2. Install gparted: apt-get install -y gparted and run gparted
    3. Find the unallocated space (a grey bar)
    4. Select and create partition. Choose lvm2 pv  as the “file system”
    5. Click OK
    6. Click Apply in the toolbar and again in the dialog
    7. Note the disk name in the Partition column, e.g. /dev/sda3
  2. You should see the disk with fdisk -l
  3. Run pvcreate <disk>, e.g. pvcreate /dev/sda3
  4. Find the volume group: run vgdisplay (name is where it says VG Group)
  5. Extend the VG with the disk: vgextend <vg name> <disk>, e.g. vgextend VolumeGroup /dev/sda3
  6. Run vgscan and pvscan
  7. Run lvdisplay to find the LV Path, e.g. /dev/VolumeGroup/root
  8. Extend the logical volume: lvextend <lv path> <disk>, e.g. lvextend /dev/VolumeGroup/root /dev/sda3
  9. Resize the file system: resize2fs <lv path>, e.g. resize2fs /dev/VolumeGroup/root
  10. Finally, verify that the size of the partition has been increased with df -h
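For reference, here are the commands from the steps above collected in one place, using the same example names (if lvextend complains about a missing size, try a size option such as -l +100%FREE instead):

pvcreate /dev/sda3                         # initialise the new partition as a physical volume
vgextend VolumeGroup /dev/sda3             # add it to the volume group
vgscan && pvscan                           # rescan volume groups and physical volumes
lvextend /dev/VolumeGroup/root /dev/sda3   # grow the logical volume
resize2fs /dev/VolumeGroup/root            # grow the filesystem
df -h                                      # verify the new size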

Aug 30 0 Office Dashboards with Raspberry Pi

If you’re in need of a simple computer to drive an infoscreen, which usually just consists of showing a website in fullscreen, Raspberry Pi computers are a great choice. They’re cheap, newer versions have WiFi and HDMI output, and they’re small – so they’re easy to mount on the back of a TV.

Even better, most newer TVs have a USB port nowadays, so for a power source you can just plug your Pi into the TV.

Sample dashboard

One problem, however, is that it gets increasingly hard to control the infoscreens the more you add. For example, if you have 6, you don’t want to have to manage them independently, and you want to be able to change the setup quickly.

At the office, we’ve set up 6 Samsung TVs, each with their own Pi. On each is a different dashboard:

What I ended up with is a simple provisioning script that configures the Pi:

Quite a bit is happening here – in order:

  1. WiFi power management is disabled. I’ve found that it makes WiFi very unstable on the Pis.
  2. We need to set the hostname; every pi needs a unique one. We’ll see why later.
  3. The desktop wallpaper is changed. You can remove this part if you like raspberries!
  4. Chromium is installed. I tried Midori, and others. Chromium just works.
  5. We set up a script that starts when the desktop session loads, startup.sh. This will run Chromium.
  6. Then we reboot to apply the hostname change.

In the script, there are two things you must modify: the URL of the desktop wallpaper and the URL of the directory with the dashboard files.

So what are the dashboard files?

The way it works is this: if the hostname of the Pi is raspberry-pi-1 and you set dashboard_files_url to https://mycorp.com/files/dashboard/, the Pi will go to https://mycorp.com/files/dashboard/raspberry-pi-1.html – that way, if you want to change the URL of one of your screens, you just have to change a file on your Web server.
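A minimal startup.sh in that spirit might look like this (a rough sketch only: the chromium-browser flags and the mycorp.com base URL are assumptions, not the exact contents of my script):

#!/bin/sh
# Build the URL for this particular Pi from its hostname and open it fullscreen
URL="https://mycorp.com/files/dashboard/$(hostname).html"
chromium-browser --kiosk "$URL"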

While you could do more about it and make a script on your server, I just went with simple html files with a meta-refresh redirect.
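Such a file can be as small as a single meta-refresh tag (the target URL here is made up):

<!-- raspberry-pi-1.html -->
<meta http-equiv="refresh" content="0; url=https://kibana.mycorp.com/dashboard">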

I feel that it’s easier to manage this stuff from a central place rather than SSHing into several different machines.

Do you have a better way of managing your dashboards? Do you have a cool dashboard you’ve designed? Tell us in the comments!

Update regarding Chromium and disable-session-crashed-bubble switch

Newer updates of Chromium removed support for the --disable-session-crashed-bubble switch that disabled the “Restore pages?” pop-up.

This is annoying since we cut off power to the Raspberry Pis to shut them down, and a power cut triggers the popup.

Even more annoying, alternative browsers on the Raspberry Pi like Midori or kweb can’t run the newest Kibana – a dashboard used for monitoring – at all, so I had to find a workaround for this.

The alternative I found was the --incognito switch, which prevents the popup, but then you can't use dashboards that require a cookie to be present (i.e. because of a login), like Zabbix or Jenkins.

If --incognito won't do, the best solution I've found so far is to use xte to simulate a click on the X button of the dialog. It's stupid, I know, but since the Chromium developers don't think anyone is using that feature, there you go.

Note that you might want to change the mouse coordinates to where the X button is.

#!/bin/sh

# Close that fucking session crashed bubble
sleep 60
xte "mousemove 1809 20" -x:0
sleep 1
xte "mouseclick 1" -x:0

If you have a better solution, don’t hesitate to write in the comments!

Nov 27 0 Strikethroughs in the Safari Web Inspector styles? Here’s why

Safari uses strikethroughs to mark invalid properties in style sheets. This is not documented, and there are no tooltips to explain the multicolored lines.

There are two different strikethroughs: red and black.

Styles that are overridden by other styles are struck out in black:

Strike (overridden)

 

But when a property is invalid or unsupported, or the value can't be parsed, the strikethrough in the style sidebar is red:


 

Nov 18 18 300,000 login attempts and 5 observations

About a year ago, I developed a WordPress plugin called WP Login Attempt Log. All it does is log every incorrect login attempt to your WordPress site and display some graphs along with a way to search the logs. It logs the username, the password, the IP address and also the user agent, e.g. the browser version.

Observation number 1: attacks come and go

 


Screenshot of login attempts from the past 2 weeks, from the plugin

One thing that is striking about this graph is how much the number of attacks differs per day. On some days I get tens of thousands of attempts; on others, fewer than 100. On average, though, I get about 2,200 attempts per day, 15,000 per week and 60,000 per month. It suggests that my site is part of a rotation, or maybe that someone really wants to hack my blog on Mondays.

Observation number 2: passwords are tried multiple times

All in all, about 36,000 unique passwords have been used to brute-force my WordPress blog. With a total of around 360,000 attacks, each password is used 10 times on average. But of course, some are used more than others, as you can see in the table below.

What's interesting is that there isn't a larger number of different passwords. Given the large password database leaks of the past few years – we're talking tens of millions of passwords – one could expect the number of different passwords to more closely match the total number of attempts.

Of course, there might also just be 10 different people out to hack my blog, and they all have the same password list. :-)

Observation number 3: the most common password is “admin”

An empty password was tried around 5,300 times. Here’s a list of the most used passwords, along with how many times they were used:

Attempts Password
5314 (blank)
523 admin
284 password
269 123456
233 admin123
230 12345
215 123123
213 12345678
207 1234
205 admin1
203 internet
202 pass
201 qwerty
198 mercedes
194 abc123
191 123456789
191 111111
191 password1
190 freedom
190 eminem
190 cheese
187 test
187 1234567
186 sandra
184 123
182 metallica
181 simonfredsted
180 friends
179 jeremy
178 1qaz2wsx
176 administrator

This is not a list of recommended passwords. :-) Definitely don’t use any of those.

Observation number 4: 100 IP addresses account for 83% of the attempts

The top IP address that has tried hacking my blog, 54.215.171.123, originates from a source you wouldn't suspect: Amazon. That IP has attacked my blog a whopping 45,000 times, 4 times as much as the second IP on the list.

I took the top 25 offenders and did a WHOIS on them. I guess if you’re looking for a server company to do your WordPress hacking, here you go:

Attempts IP Address ISP Country Code
45465 54.215.171.123 Amazon Technologies US
15287 63.237.52.153 CenturyLink US
10842 123.30.212.140 VDC VN
10425 185.4.31.190 Green Web Samaneh Novin Co IR
10423 95.0.223.134 Turk Telekom TR
10048 46.32.252.123 Webfusion Internet Solutions GB
10040 94.23.203.18 OVH SAS FR
10040 46.4.38.83 Hetzner Online AG DE
10040 108.168.129.26 iub.net US
10040 193.219.50.2 Kaunas University of Technology LT
10036 84.95.255.154 012 Smile Communications IL
10035 80.91.189.22 Private Joint Stock Company datagroup UA
10030 94.230.240.23 Joint Stock Company TYVASVIAZINFORM RU
10030 123.30.187.149 VDC VN
10029 89.207.106.19 Amt Services srl IT
9328 67.222.98.36 IHNetworks, LLC US
9327 85.95.237.218 Inetmar internet Hizmetleri San. Tic. Ltd. Sti TR
9327 62.75.238.104 PlusServer AG DE
9326 5.39.8.195 OVH SAS FR
9326 5.135.206.157 OVH SAS FR
9208 211.25.228.71 TIME dotCom Berhad MY
9168 176.31.115.184 OVH SAS FR
8804 78.137.113.44 UKfastnet Ltd GB
8201 134.255.230.21 INTERWERK – Rotorfly Europa GmbH & Co. KG DE
7598 5.199.192.70 K Telecom RU
6952 85.195.91.10 velia.net INternetdienste GmbH DE
5231 67.222.10.33 PrivateSystems Networks US
3546 5.248.87.146 Kyivstar PJSC UA
3202 78.46.11.250 Hetzner Online AG DE
2099 93.45.151.167 Fastweb IT
1940 92.222.16.54 OVH SAS FR

Another interesting thing is the number of IPs hovering at around 10,000 attempts. It seems like there's a limit where the attacker gave up and moved on to the next target. Maybe all these are part of a single botnet, and each machine in it is only allowed to attack 10,000 times. Who knows.

Observation number 5: protect yourself by using a unique username

WordPress hackers are really sure that you’ll use a pretty standard username, or at least something to do with the name of your blog. A total of just 165 different usernames were tried, compared to the tens of thousands of passwords.

Therefore my final takeaway is to choose an obscure username as well as an obscure password. Only 11 usernames have been used more than a hundred times, which was kind of surprising to me.

Attempts Username
164360 admin
119043 simon
15983 administrator
10787 test
10429 adm
10416 user
9871 user2
9253 tester
8147 support
1818 simonfredsted
189 simo
57 root
57 login
56 admin1
3 qwerty
3 simonfredsted@simonfredsted.dk
3 simonfredsted.com
2 aaa

That’s a lotta attacks, what do I have to fear?

WordPress is one of the most targeted platforms for hackers; many sites use it, from big news organisations to small blogs like this one. If someone can get a login and start fiddling with your links, they can boost traffic to their own viagra-peddling sites.

But, as long as you keep your software updated (WordPress makes this very, very easy) and keep the following two rules in mind, you’re totally safe.

Bottom line: Set a lengthy password consisting of random characters, letters and digits, and use a username that’s not a part of your name, site URL or “admin”. Maybe just use some random letters, your password manager will remember it after all.

If you do those two things, there’s nothing to worry about.

 

If you have your own take on this data, or think I’ve misinterpreted something, feel free to leave a comment below or on Hacker News – I’d love to hear your thoughts.

Sep 27 0 Update to my Roundcube skin

In my last post I introduced you to the release of my personal Roundcube skin. It’s been a few months, and in the meantime a new version of Roundcube arrived bringing changes to the way skins are handled. As it turns out, my skin wasn’t compatible with the new version.

Therefore I’ve updated the skin – now version 1.0 – with the necessary fixes, mainly compatibility with Roundcube 1.0 — for example you can actually write emails now.

There’s also a new beautiful login screen along with some UI improvements – in the Settings area, especially.

Screenshot of the Fredsted Roundcube skin version 1.0

It’s available on good ole github, but you can also just download the zipball if you prefer. Installation instructions are included.

Don't hesitate to create pull requests or reach out about problems in the comments – there's still lots more work to be done, and in the coming weeks I'll make many more design tweaks and clean up the code.

Jun 18 9 How to reverse engineer a wireless router


The Zyxel WRE2205

The Zyxel WRE2205 (a rebranded Edimax EW-7303APN V2) is a plug-formed wireless extender. What's interesting to me about this device is its extremely small size. Many of my standard power bricks are larger than this unit – but they don't contain a small Linux minicomputer and advanced wireless functionality.

Trying out the WRE2205 for its intended purpose, I discovered that its wireless performance was quite subpar, slower than my actual Internet connection, but still very usable. Of course, that’s understandable. It has no antenna. So I replaced it with a faster AirPort Express, which can also act as a wireless bridge.

No longer needing the device for its intended purpose, I thought about how cool it would be to have an actual Linux plug PC I could SSH into and use for all sorts of home automation purposes, or leave somewhere public, name the SSID “FreeWifi” and install Upside-Down-Ternet. The possibilities are endless!

So I started getting the desire to hack this thing. And having seen some bad router software in the many devices I’ve owned, I thought that there could be a chance of rooting this thing.

As anyone who's poked around with consumer network equipment knows, a good place to start is binwalk, a utility that lists and extracts filesystems embedded in files like router firmware updates. What these “update” files actually do is replace the whole contents of the operating system partition with a completely new version. This is why these devices may “brick” when the power is cut during an upgrade: they may not boot without all the files.

To my delight, binwalk came up with a squashfs filesystem embedded in the latest firmware update from Zyxel’s Web site.

simon@workstation:~$ binwalk -v wre2205.bin

Scan Time:     2014-06-18 22:44:24
Signatures:    212
Target File:   wre2205.bin
MD5 Checksum:  e2aa557aa38e70f376d3a3b7dfb4e36f

DECIMAL       HEX           DESCRIPTION
-------------------------------------------------------------
509           0x1FD         LZMA compressed data, properties: 
                            0x88, dictionary size: 1048576 bytes, 
                            uncompressed size: 65535 bytes
11280         0x2C10        LZMA compressed data, properties: 0x5D, 
                            dictionary size: 8388608 bytes, 
                            uncompressed size: 2019328 bytes
655360        0xA0000       Squashfs filesystem, big endian, 
                            version 2.0, size: 1150773 bytes, 445 inodes, 
                            blocksize: 65536 bytes, 
                            created: Wed Mar 26 04:14:59 2014

 

binwalk is so great that it can even extract it for us:

simon@workstation:~$ binwalk -Me wre2205.bin
Target File:   _wre2205.bin.extracted/2C10 
MD5 Checksum:  a47fd986435b2f3b0af9db1a3e666cf1 
DECIMAL       HEX           DESCRIPTION 
------------------------------------------------------------- 
1626688       0x18D240      Linux kernel version "2.4.18-MIPS-01.00
                            (root@localhost.localdomain)
                            (gcc version 3.4.6-1.3.6)
                            #720 Wed Mar 26 11:10"

 


It’s all the files for the park… It tells me everything!

We can see it’s a Linux 2.4 MIPS kernel. Good. “I know this”, as they say in Jurassic Park.

What we get is a directory containing the whole Linux system. What’s interesting is you can see the configuration and especially all the shell scripts. There are so many shell scripts. Also the source for the Web interface is of course included.

However, most of the functionality is actually not written with whatever scripting language it’s using. It comes from within the Web server, which apparently is heavily modified. The Web files mainly display some variables and send some forms. Not yet that exciting.

The Web server, by the way, is something called boa, an open source http server. Studying the config file, something interesting is located in the file /etc/boa/boa.passwd. The contents:

root:$1$iNT/snisG/y7YBVbw0tQaaaA

An MD5-hashed password, it seems. A kind of creepy thing because the default username for the admin user is admin, not root. And it’s referenced in the Auth directive of boa’s config file.  So Zyxel has their own little backdoor. I didn’t get to cracking that password, because I moved on to the /web directory, containing all the web files.


The WRE2205 Web Interface

The standard things are there, of course. The jQuery library (version 1.7), some JavaScript, some graphics and some language files. The standard header/footer pages (in this case, though, because Zyxel is stuck in the 1990s, a frameset), and so on.

Looking through the filenames, two interesting ones turned up: /web/debug.asp and /web/mp.asp. Neither is referenced in the “public” Web interface. Having access to debug files is always a good thing when trying to break into something.

The first file, debug.asp, looks like a password prompt of some sort.

Screenshot from 2014-06-18 23:11:50
One might reasonably assume it has something to do with showing some different log files, despite the weird sentence structure. No clues in the config file, and typing some random passwords didn't work (1). Let's move on.

The next file, mp.asp, looks much more interesting:

Screenshot from 2014-06-18 23:17:08
There are several good signs here despite the rather minimalist interface. First, it actually looks like a command prompt: the text box selects itself upon loading the page, and there's a # sign, usually an indicator of a system shell. There was also a clue in the source code: the input field's name is actually command. Simply entering nothing and pressing GO yields the following result:

Screenshot from 2014-06-18 23:23:08
Bingo. It seems to launch a shell script where the input box is the parameter. Let’s take a look at this rftest.sh fellow:

Screenshot from 2014-06-18 23:26:01
Lots of different debug commands that yield different things. So, entering ENABLEWIRELESS in the prompt would run /bin/rftest.sh ENABLEWIRELESS and return the output in HTML. (I have no idea what the “interface” and yes/no switches do; entering eth0 doesn't work, so maybe it's just an example?)

At the bottom there’s even a COMMAND command that allows us to execute any command. At least they tried to make this a little secure by limiting the applications you can execute:

    "COMMAND")    
         if [ "$2" = "ifconfig" ] || [ "$2" = "flash" ] || [ "$2" = "cat" ] 
            || [ "$2" = "echo" ] || [ "$2" = "cd" ] || [ "$2" = "sleep" ]  
            || [ "$2" = "kill" ] || [ "$2" = "iwpriv" ] || [ "$2" = "reboot" ] 
            || [ "$2" = "ated" ] || [ "$2" = "AutoWPA" ]; then
             $2 $3 $4 $5 $6
         fi
     ;;

But at this point there's really no point, since filtering like this is trivially broken anyway, and we can just do something like this:
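For instance, a made-up payload like the following does the trick: any bogus command, followed by ||, followed by whatever you actually want to run.

FOO || cat /etc/passwd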


And so we have full control. Since || means OR, and the rftest.sh command fails when there’s no valid command, the last command will be run.

As we can see from the above screenshot, the web server is running as root so now we have full control of the little plug computer. Software like wget comes preinstalled, so you can just go ahead and download shell scripts, an SSH server, or otherwise have fun on the 1.5 megabytes of space you have to play with. Good luck!

I kind of expected that I had to use an exploit or buffer overflow, get out a disassembler like IDA, or do a port scan, or do some more digging — but just below the surface there are some quite big security issues. Of course, you need to know the admin password since the whole /web directory is behind Basic authentication.

However, since the boa webserver is an old version with exploits available, you probably won’t even need that. We can assume it’s not a feature since it’s hidden. So with such a big hole, I wonder what else lies below the surface?

 

Footnotes:

  1.  I later found, by running strings on the http server executable, that typing report.txt shows some debug information.

Dec 18 3 Monitoring ’dd’ progress

On Linux, to view the progress of the file/disk copy tool dd, you can send it the USR1 signal to get a progress output. This apparently doesn't work on Apple's OS.

However, with Activity Monitor it's easy to see the progress of dd when, for example, copying an operating system image onto a USB drive (which can take a while…). Simply compare the size of the image with the “bytes written” column to get a good idea of how far it has gotten:

dd progress with Activity Monitor

If you need more detailed progress, or use dd a lot, you can try installing pv, a utility which reports the amount of data piped through it. You would use it with dd like this:

dd if=/file/image | pv | dd of=/dev/disk8

That would render something like this, letting you know the progress:

1,38MiB 0:00:08 [  144kiB/s]

Also, with pv, you can specify the --size parameter to get an estimate of the time it will take to finish. pv can be installed with, for example, Homebrew.
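For example, giving pv the image size (the 2g figure is just an illustration) gets you a progress bar and an ETA:

dd if=/file/image | pv --size 2g | dd of=/dev/disk8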

Apr 28 2 Fixing slow ProFTPd logins

Recently a few users on a Virtualmin server have experienced issues with slow FTP logins. It took a long time to login and often wouldn’t log in at all.

To correct this, first log in to the Webmin interface at http://yourserver:10000. At the top left, click Webmin.

A bit further down, under Servers, select ProFTPD Server.

Under Global Configuration, select the Networking icon.


Then you’ll see a screen with a whole bunch of settings. Set the following options to No:

  1. Lookup remote Ident username?
  2. Do reverse DNS lookups of client addresses?

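If you'd rather skip Webmin, the two settings above should correspond to these ProFTPD directives, which you can set directly in proftpd.conf before restarting the service:

IdentLookups off
UseReverseDNS off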

Now click Save, and on the ProFTPD page press Apply settings at the bottom. Your logins should now be instant.

Feb 26 7 A native KeePass app for Mac

Password storage is incredibly important to me. Since I began seeing friends and others get their identities and online lives taken away because of reusing and/or using weak passwords, I started taking password security extremely seriously.

When I chose the utility to use for this, I had a couple basic requirements.

  1. It had to be open source, for obvious reasons
  2. I had to be able to access my passwords on all my devices (iPad, iPhone, MacBook, workstation)

Things like 1Password and LastPass didn't fulfill the first requirement, although they're very handy because of browser integration and the mobile apps. So I ended up choosing a combination of the KeePass format and Safari plus the Mac OS X keychain for my password storage needs, with KeePassX as my client, along with a mobile app, MiniKeePass, that syncs my KeePass database using Dropbox. As an added bonus, the iOS app is open source as well.

I use KeePass as my primary password storage database, and Safari’s password saving feature for sites I access often, like my blog and reddit account.

I’m very happy with this solution, but unfortunately the Mac KeePassX currently has a very ugly, un-Mac-like user interface. I’ve been waiting for something which incorporates the native Mac user interface controls.

And then, finally, today I stumbled across this KeePass Mac client developed by Michael Starke from Hick'n'Hack Software. It seems to be in very early alpha, but it can load KeePass files and display their contents, so the basic functionality is almost there. It appears to use the MiniKeePass framework library for its backend functionality. I cloned and ran it immediately, as I've been wanting this ever since I started using KeePass for storing my passwords.

Unfortunately I can’t seem to be able to copy passwords yet, and there’s no detail dialog when you click on a password entry.

But since, as of writing, the last commit was 13 hours ago, I'm sure this functionality will be added soon. I'm just so happy someone is making this. It definitely makes me want to learn Objective-C properly so I can contribute to this project! If you know ObjC, you should definitely go add some pull requests!

 

Here’s a screenshot from the release I just built:

Screen Shot 2013-02-26 at 5.59.35 PM

Compare this to the current KeePass:

Screen Shot 2013-02-26 at 6.20.28 PM

Feb 23 4 Sync SSH config across computers with Dropbox

Here's a little time-saving tip for Mac OS X/Linux users: if you work with lots of different Macs and servers daily, store your SSH configuration file in Dropbox and create a symbolic link to it so you can sync it across your computers.

With this, once I add a new machine to my SSH config, it’s immediately available across all of my computers, my workstation, laptop, work machine, etc. I’m terrible at remembering hostnames and IP-addresses, so this comes in handy as I acquire control over more and more servers.

Also, you can of course extend this method to sync other types of configuration files, like your git config or bash profile. Dropbox is a neat tool!

Step 1

Create a folder in your Dropbox to store files like these.

mkdir ~/Dropbox/configs

Step 2

Move your SSH config to this folder. I call it ssh-config.txt instead of simply config for easier access and so as not to mix it up with other configuration files.

mv ~/.ssh/config ~/Dropbox/configs/ssh-config.txt

Step 3

Create a symbolic link to the new configuration file.

ln -s ~/Dropbox/configs/ssh-config.txt ~/.ssh/config

Apr 13 11 InstaDJ – a quick way to assemble YouTube playlists

I made a website that lets you create YouTube playlists easily – and share them, too.

Everybody is online nowadays. Nobody uses CDs anymore. So at parties it’s common to see a laptop hooked up to a stereo where people go up and select songs on YouTube during the night. It kinda sucks though:

  • Music starts and stops randomly as people get drunk and start searching for songs while another is playing.
  • You need to get up and change the track when it stops.
  • It’s too hard to make a playlist on YouTube. You can’t really make one on the fly.
  • What’s more, you have to be logged in with your Google ID to make playlists. I don’t want random people to mess with my account (e.g. Gmail), especially drunk people.

Sure, there’s Grooveshark. But people who aren’t nerds can’t figure out how to use Grooveshark and will just go to YouTube instead. It’s too easy to interrupt a playlist, especially when you’re drunk. The add to playlist button is easily missed.

Grooveshark is also missing many songs due to silly record companies.

Other sites exist, I know. But no matter which one you use, people will inevitably go to YouTube because it’s got all the content and it’s what people know and love.

Even other “Youtube DJ” sites exist. I’ve been through a few. They either a) require login, b) are hard to use, c) can’t autoplay, d) don’t work.

So I got fed up with all this and made InstaDJ. It’s a dead-simple Web site where you can add YouTube videos to a playlist on the fly. Even drunk people get it.

InstaDJ allows you to search and queue YouTube videos, using a simple interface everybody understands, in a way which doesn’t interrupt the music.

What it does

  • Search YouTube videos
  • View user uploads and favorites
  • Queue YouTube videos
  • Auto-selects HD video if available
  • Generate URL to playlists
  • Share playlist
  • It’s free and there’s no ads
  • Easy to use, minimalist interface

I even find myself just using InstaDJ instead of playing music from my iTunes library.

Don’t you want to try it out? Just click here to go to InstaDJ.com.

For the technically interested, it’s built with the YouTube API, Twitter Bootstrap and jQuery. Enjoy.

Feb 9 0 Apple Predictions for 2012 – 2013

  • All Apple products with screens will begin to have Retina-grade displays, starting with the iPad 3 coming in the first half of 2012, then the MacBooks and finally the Thunderbolt display and iMacs.
  • iPad 3 will have, other than retina display, double the ram, quad core processor, better 8MP camera, thinner, but same 10hr battery life. The design will be similar to that of iPad 2. Oh, and Siri. Coming Q1 2012.
  • Mac OS X will merge with iOS in the next version coming late 2013, potentially removing a lot of functionality, upsetting professionals. There will be no 10.8. It will simply be called iOS. (They’ve run out of cat names)
  • iMacs will never have touch-screens though.
  • Apple’s 42″ television set will premiere before 2013. It will look like a large Thunderbolt Display. It will feature iOS. Apple will also partner with TV stations to offer more on-demand programming. It will probably feature Siri so you can channel surf without moving your body. It will be available in black and white.
  • Mac Pro will be discontinued after the next and final generation comes when Intel’s new chipset is ready in mid 2012, so sometime in late 2013. Super fast Thunderbolt-equipped iMacs will take over Mac Pro’s market.
  • Apple’s server offerings will be replaced with cloud services and the increasingly powerful Mac Mini. (Apple doesn’t even use their own hardware for servers anymore)
  • Final Cut Pro X will receive rigorous updates. Video professionals will regret their shift to Media Composer as digital formats replace tape.
  • We will see the transition to ARM-based Macs in late 2013 or the beginning of 2014, starting with the MacBook Air.
  • As battery life will be longer, we’ll begin to see security features in the MacBook power adapters like Apple recently patented.
  • iPhone 5 mid-late 2012 will have a new, thinner design going back to round corners plus Siri will end its beta phase at the same time. It will have LTE support. iPhone 5 won’t have any exceptional features, but it will be the best selling iPhone ever.
  • iPod Classic will be discontinued and replaced with the iPod touch which will be renamed iPod. It will be expensive because of the 128GB SSD.
  • iPod Shuffle and iPod Nano will merge.

Apple will become more consumer and entertainment oriented and will slim its product line accordingly.

Right now, Apple is preparing and conditioning us for the iOS merge with their Lion operating system.

As we’ve seen with the release of Lion and the 10.7.3 update, retina support on Macs is coming soon.

ARM-based Macs are further away; it’s simply not fast enough yet. But it’ll come unless something drastic happens at Intel.

Apple’s TV will become the best selling  TV ever, no doubt about it. Right now, TVs suck with their slow menus and bloated designs with huge bezels and bright LED lights. Consumers want an easy to use, minimalist designed, internet-connected television with an Apple logo on it. And they’ll pay a premium for it.

Mac Pro is becoming a legacy machine. Apple will probably have to release a new one to please their professional market, but they’re not happy about it. Thunderbolt display will mostly be a dock for Macbooks.

Right now is an exciting time at Apple. It seems like their growth can’t stop. But will their changes succeed? Will the competition finally pull itself together and release products worth buying?

Aug 23 38 Mediacenter PC Review: Zotac ZBOX ID41

In this article I’ll be reviewing the Zotac ZBOX ID41, which is an inexpensive mini PC from Zotac. The thing about this PC is that it’s particularly appealing to media center owners and budget-constrained customers due to its price and small size.

In this review I’ll look at some of the factors that are important to me for a HTPC: noise, HD playback, expansion features and power usage.

Read the rest of this entry »

Apr 1 2 Google’s Aprils Fools 2011: Helvetica

So, if you search for Helvetica on Google today, this is what you’ll get:

Dec 29 1 Fix: Securing the DD-WRT location vulnerability

My Internet router uses a software called DD-WRT instead of the default firmware. DD-WRT is an open-source alternative to the factory-installed firmware for some routers.

Basically, it allows me to do more and have more control over my router.

Today, however, I read about a location vulnerability in the DD-WRT Web administration interface.

Using a DNS rebinding attack, malicious Web sites can track your location fairly accurately using the router's MAC address. For example, when you visit a malicious Web site, its operators can find out where you live.

How to enable password protection of the Info-site under Administration > Management inside the router administration page

Securing DD-WRT by enabling password-protection of the info-site

I don't want anyone to know my location without my permission, so I found out how to disable the information page where the router's MAC address is shown.

By accessing the administration interface, and enabling password protection of the info-site, you can shut malicious users out.

Click the screenshot to learn how to enable password-protection.