Since the original CoreOS on Kimsufi post, things have changed slightly. The good news: OVH now provides a CoreOS installation template, which makes getting CoreOS running on a Kimsufi server easier than ever.

Even better is that it comes with etcd, fleet and all of CoreOS' good stuff pre-configured and ready to go out of the box.

I'll be writing about using fleet to manage docker containers in a CoreOS cluster very soon! Stay tuned.


Installing CoreOS

  • (Optional) In the Kimsufi manager, add your public SSH key if you haven't already and set it as the default. This allows password-less logins to your server.

  • In the dashboard, click Reinstall, choose the CoreOS template and select Custom installation.

  • Skip past the partitions screen (the CoreOS install template will ignore any custom partition settings anyway) and select your SSH key from the dropdown in the next screen.

  • Wait for the installation messages to finish.

  • SSH to your server with the core user... You're finished!
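Logging in is nothing special; it's just SSH as the core user (swap in your own server's hostname or IP, of course):

$ ssh core@your-kimsufi-server.example.com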

Warning! This guide is out of date. Please visit the newer version!

I recently bought a new dedicated server from Kimsufi, which is much more powerful than my old one. Rather than just installing Ubuntu, Debian or Arch and setting up services, I thought I'd try and go down the Docker route with CoreOS. This turned out to be much easier than I anticipated! So here's a quick rundown of how I got it working.


Kimsufi do not provide a CoreOS template to install, so we will be using the netboot rescue feature to install it manually on our server.

Installing CoreOS

  • In your Kimsufi dashboard, click 'Netboot', and select 'Rescue'. You will then be asked to reboot your server.

  • Once rebooted, you will receive an email with an IP, username and password to use with SSH.

  • Log in via SSH, and create a cloud-config.yaml file like below

# vi cloud-config.yaml

#cloud-config

users:  
  - name: "..."
    groups:
      - "sudo"
      - "docker"
    ssh-authorized-keys:
      - "ssh-rsa "..."

Replace name with the username you wish to use to log in to CoreOS, and the ssh-rsa value with your public SSH key.
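If you're not sure what your public key looks like, it's typically the contents of ~/.ssh/id_rsa.pub on your own machine:

$ cat ~/.ssh/id_rsa.pub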

  • Next, download the CoreOS installer, and run it with the cloud-config.yaml you just created.

# wget https://raw.github.com/coreos/init/master/bin/coreos-install
# chmod +x coreos-install
# ./coreos-install -d /dev/sda -C stable -c cloud-config.yaml

  • When CoreOS has been installed, go back to the Kimsufi dashboard, and change your netboot setting back to 'Hard Drive' and reboot again.

  • You should now be able to SSH to your CoreOS install with the username and private key provided.

  • From here on, you can set up docker containers, which will be another post. For now, you can read about Getting Started with Docker on the CoreOS website.
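If you want a quick sanity check that Docker is happy on the fresh install before then, running the tiny hello-world image is an easy way to do it:

$ docker run --rm hello-world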

Sonos, sadly, has never had a great reputation with podcasts. They just plain don't work. The proposed workarounds, using services such as Stitcher, tend to fall short on both available shows and usability.

If you're an avid podcast consumer and use an Android device, chances are you use Pocket Casts by Shiftyjelly. Pocket Casts recently added a feature to stream straight to Chromecast from the app. Great. But sadly it does not play ball with Sonos (yet).

There have been many suggestions on how to get podcasts working nicely through Sonos via Android. This is by far the best setup I've come across, especially if you already use Pocket Casts...


Prerequisites

  1. Sonos setup + Sonos Controller for Android... Duh.
  2. Pocket Casts

The Android Sonos Controller will read any audio files stored in /sdcard/Music and show them in the menu as 'On this Mobile Device'.

The controller will also read audio files stored in /sdcard/Podcasts if there is at least one file in /sdcard/Music. This is usually why things appear not to work if you have no local music on the phone. A quick fix is to put a blank MP3 file into the Music directory.

Once you have a populated Music directory, Pocket Casts can be configured to download Podcasts into the appropriate directory automatically as follows:

  • Pocket Casts
    • Settings
      • Storage
        • Store podcasts on: Custom Folder
        • Custom folder location: /sdcard/Podcasts

Pocket Casts settings for Sonos Podcasts

Once this is done, you'll need to rescan both the /sdcard/Music and /sdcard/Podcasts directories so Android's media scanner picks up the new files. There are a few ways to trigger a rescan.

You'll need to force a rescan each time you download a new podcast for it to appear in the Sonos Controller app.
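For example, if you happen to have adb and USB debugging set up (entirely optional, and not something the Sonos app itself needs), you can ask Android's media scanner to index a freshly downloaded episode straight away. The filename below is just a placeholder:

$ adb shell am broadcast -a android.intent.action.MEDIA_SCANNER_SCAN_FILE -d file:///sdcard/Podcasts/example-episode.mp3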

After these steps, you'll be able to open the Sonos Controller and see your podcasts listed under the 'On this Mobile Device' menu option.

Sonos Podcasts on Android

I've been looking for a tiny little program for ages. It seems really simple, but I can't for the life of me find one that does what I want!

I wanted a countdown utility in my OS X menubar. Always visible, always looming. I wanted it to panic me when working towards a deadline.

Timers existed, but nothing could count down towards an absolute date or time, just a relative time.

I quickly set out to make my own. What else was I expected to do whilst writing a dissertation?

I'll briefly write about my experience in trying to write a Cocoa application and my three approaches until I got something working.


RubyCocoa

I already know Ruby. I've written Rails applications for my university course, so I thought this would be a solid choice. Absolutely not. RubyCocoa is a pain to get working properly, and not everything is supported in it. I was fighting dependency hell more than getting anything working.

I quickly moved onto Objective-C before wasting any more time.


Objective-C

I'm a big fan of C-like languages, but Objective-C is just an entirely different kettle of fish. Its syntax is bizarre, especially when combined with Cocoa's verbose naming scheme.

I got things vaguely working in Objective-C, in the sense that I could create an NSStatusItem and place it in my status bar, but the syntax held me back from doing anything more substantial with it.

Remembering that Apple released Swift not too long ago, I decided to give that a go instead after hearing favourable things.


Swift

Swift was actually really easy to get going with. It seemed like a horrific mixture of C and JavaScript that somehow works well.

Within 2 hours and in less than 90 lines, I was able to make my countdown clock, albeit with a hardcoded title and target date. But it works! And I can toggle the title on and off.

OS X Statusbar Countdown


I have the code up in a repo on GitHub, licensed under MIT. I'll be adding a GUI config to change the name and target date soon!

https://github.com/bbrks/osx-statusbar-countdown

Updated: 2015-02-23


Following up from my previous post, 'Dropbox, I love you but you're bringing me down', I have been trying out SpiderOak to back up and sync my files rather than settling, rather unhappily, for Google Drive.

The whole premise of SpiderOak is that it encrypts files locally on your machine before it uploads them. By doing this, in theory, only you are able to read your files.

There is one thing to note upfront: SpiderOak does not have user-friendliness in mind. It does, however, place a huge emphasis on security and privacy, and I'd be willing to sacrifice some UX and pay slightly more in order to store my files securely. Rather than a single directory for your files to sit in, it has a concept of backups and sync folders.

The two are separate, which means you can back up files without sharing them across multiple machines. It does take some setup, but once it's done, it's a much better system than Dropbox/Google Drive.

For example, take my home directory. I am backing up my archived documents, ebooks, design, dev, uni and work directories, as these contain my important files. Anything else I can afford to lose.

So, those are backed up, but in order to sync them to another machine I need to set up a Sync folder. By default, one called 'Hive' exists and is synced to all machines. This is similar to Dropbox/Google Drive, and you could just use that, but if you wish to have more control you may want to set up additional sync folders.


First, let's have a look at what SpiderOak offers in comparison to Google Drive.

| Service    | SpiderOak                       | Google Drive                           |
| ---------- | ------------------------------- | -------------------------------------- |
| Encryption | Full client-side encryption     | Server-side encryption, but data-mined |
| Client     | Closed source, no ARM client    | Closed source, no Linux client         |
| 2GB        | $0                              | $0                                     |
| 5GB        | $0 ('hurricanesafe' promo code) | $0                                     |
| 15GB       | -                               | $0                                     |
| 30GB       | $79/yr or $7/mo                 | -                                      |
| 100GB      | $75/yr ('spring' promo code)    | $1.99/mo                               |
| 1TB        | $129/yr or $12/mo               | $9.99/mo                               |
| 5TB        | $279/yr or $25/mo               | -                                      |
| 10TB       | -                               | $99.99/mo                              |
| 20TB       | -                               | $199.99/mo                             |
| 30TB       | -                               | $299.99/mo                             |

As you can see, price-wise SpiderOak is the expensive option until you start looking above 1TB. If you want to back up more than 5TB online, I'd definitely consider other options, potentially even self-hosting, if the alternative is paying $3,600 a year.

Unfortunately both clients are closed source, which means any security promises they make have to be taken with a pinch of salt. Sadly there is also no ARM client for SpiderOak, so you can rule out using it with your Raspberry Pi or ARM-based file server.

For now, I am using SpiderOak, but patiently waiting for the ARM client before I decide to pour any money into it. I could really do with it working on my NAS.


Update: 2015-02-23

Following up on the post above, I have been using SpiderOak for a week now and I have one criticism of it: it is incredibly slow at uploading many files, as it encrypts them in small groups before uploading, meaning it can only upload one set of encrypted files at a time. If this process were parallelised it would be much faster, as the bottleneck is definitely not my connection's upload speed.

I have been running a Sync operation for over a week solid, so 170+ hours on a dataset of approximately 500GB, mostly consisting of files about 20-30MB in size. If I were uploading them at full speed, it would take a little under 60 hours to complete. But it is currently only 45% done at 170 hours.

So, at the minute, uploads to SpiderOak are running at approximately 3Mb/s, which is just 16% of my connection's upload speed.

Once this initial upload is done it will be fine, as only changed (not duplicated) files need to be uploaded again. With that said, the initial upload is painfully slow!

Skip the prose and get straight to the cloud storage comparison data here.

Also read my follow-up post on SpiderOak here.


Dear Dropbox,

We've had a long and fruitful six year relationship, you've helped me through some tough times. Times of disaster, times of worry, times in the wee hours approaching a deadline.

You've been there in the background, quietly and gently delivering my files into the cloud without question. You've really set me free from the traditional practice of offline file storage.

You have opened my eyes to the cloud and shown me all it has to offer. It has been thoroughly sublime.

It started off as a simple 2GB. We took it slow, but as time went on, my needs grew. I needed more out of this relationship. You agreed and offered me an extra 2GB, 500MB at a time through referrals.

Life was good. I went to university and you were there to support me. You let me have space, lots more in fact.

We had a thing going. You gave me 12GB and I embraced it with open arms. It was all rosy for a couple of years...

And then you snatched it away from me. Just like that. Asking for a ransom to the tune of £79 if I wanted to stay for one more year, along with an alluring 30% off.

Fearing for my files, I sought others. I wanted to go solo with my dedicated server and its 500GB HDD, but I feared it didn't offer the safety net you and your big cloud did.

The next best option was with Google, despite my lack of trust. I get 15GB to start, can upgrade to 100GB for not a lot of money, and can upgrade again when I need the space. This is much better than your 1TB all-or-nothing affair, one which I cannot afford. If you'd had lower-priced tiers, I may have stuck around.

Yours,
Ben

For now, this is where my story ends. Tied up to yet another Google product. Below is a tabular comparison of popular cloud storage providers.


Cloud Storage Providers Comparison

In all seriousness though, here's the data I've gathered about cloud file storage services so you can make up your own mind! I think it's clear that whilst Google may not be your favourite company, they're likely to stick around, at least as a company if not as a cloud storage provider, and they offer the most upgradable storage plans at the minute.

Upgrade costs are per month.

| Service      | Free Storage | Max Filesize | Encryption                      | 30GB  | 100GB          | 200GB         | 1TB            | 5TB    | 10TB   | 20TB    | 30TB    |
| ------------ | ------------ | ------------ | ------------------------------- | ----- | -------------- | ------------- | -------------- | ------ | ------ | ------- | ------- |
| Google Drive | 15GB         | 5TB          | Server-side 128-bit AES         | -     | $1.99          | -             | $9.99          | -      | $99.99 | $199.99 | $299.99 |
| Dropbox      | 2GB*         | None         | Server-side 256-bit AES         | -     | -              | -             | £7.99 ($12.19) | -      | -      | -       | -       |
| SpiderOak    | 2GB          | None         | Client-side layered 256-bit AES | $7.00 | -              | -             | $12.00         | $25.00 | -      | -       | -       |
| MS OneDrive  | 15GB         | 10GB         | Transfer only 256-bit AES       | -     | £1.99 ($3.03)  | £3.99 ($6.08) | £5.99 ($9.13)  | -      | -      | -       | -       |
| box          | 10GB         | 250MB§       | Server-side 256-bit AES         | -     | £7.00 ($10.67) | -             | -              | -      | -      | -       | -       |
* Extensible up to 18GB through referrals
† Extensible up to 20GB through referrals
‡ Extensible to 5GB upon upgrade
§ Extensible up to 10GB through referrals

PS: I am fully aware that, as a non-paying customer, I should expect nothing. Unfortunately the pricing model was not there for me to become a paying customer. Google, unfortunately, now has (more of) my business.

I have a lot on my plate at the moment, a brief description of which is down below. However, I am looking for part-time web-dev work to squeeze in and do remotely, in order to keep me fed and keep the lights on! If anybody has any leads or wishes to chat about projects, drop me a line at ben@bbrks.me.


First on my to-do list, and something I've just embarked upon, is my final year project at Aberystwyth University. The dissertation is to produce (and possibly implement) a major piece of technical work, and to write a report to the tune of 15-20,000 words to go alongside it.

My dissertation is AWESOME, or "Aberystwyth Web Evaluation Surveys Of Module Experiences"... It's pretty awesome! It aims to solve the problems faced when collecting module feedback at universities, including low student response rates, poor UI/UX and concerns over student anonymity.

So that's currently an ongoing project, and you can read more about my process at the devblog over at http://diss.bbrks.me.

AWESOME Repository


Additionally, I have been thinking about finally updating TF2Dingalings, which has been in stasis for over a year. That is now also an ongoing project, little of which is public at the minute.

TF2Dingalings

As well as those, there are also numerous tiny side projects to distract me from my actual work. I'm really into graphical stuff at the minute. Being able to code something interactive and visually pretty is quite amazing. Check out my recent WebGL Orrery/Solar System for a nice example!

WebGL Orrery

I really should have mentioned this in a blog post sooner, probably while I was planning the trip. But alas. As you may or may not know, I drove my car, and my girlfriend, around Europe for nearly two months over summer.

I can honestly say it was the most amazing, daunting, exhausting, exhilarating and fantastic trip I've ever had. It had many ups, many downs and the occasional wish to come home early! But I did it, and have a lot of photos to show for it!

Feel free to have a look at the blog we kept over at wanderings.in or at the photo album on Facebook.

Another quick guide to show you how to set up a Raspberry Pi as a DLNA/UPnP media server using an external hard drive as storage.

My home network has long been rather fragmented. Videos, pictures and music were spread across any one of 4 or 5 computers on the home network. Some of these computers were laptops, and any of them could be on or off at any given time, so things might or might not be available when I wanted them.

If this sounds remotely familiar, you may want to think about setting up a home media server. One place that stores all of your media. One that is cheap to keep running 24/7 and one that is stable and reliable and ready whenever you need it.

For a long time now, I've had my Pi sitting serving the occasional web page, a torrent daemon, Samba shares and SSH. This is fine until you want to stream movies to smart TVs or game consoles. Samba just doesn't cut it for that use.

The following guide assumes you are running a Debian-based distribution (Ubuntu, Raspbian, etc.). Other distros will be similar, but not exactly the same.


Mounting your external drive

What use is a media server if you have nowhere to store your media? This will walk you through auto-mounting an external drive upon boot.

Plug in your USB drive and run the following to get a list of storage devices.

# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   1.8T  0 disk
└─sda1        8:1    0   1.8T  0 part
mmcblk0     179:0    0  29.7G  0 disk
├─mmcblk0p1 179:1    0    56M  0 part /boot
└─mmcblk0p2 179:2    0  29.7G  0 part /

You should get something like that. Above you can see I have my 32GB SD card (mmcblk0) with the OS installed, and my external HDD (sda) with a 2TB partition at /dev/sda1.

We'll go ahead and create a mount point for our drive and mount it.

# mkdir -p /mnt/ext
# mount /dev/sda1 /mnt/ext

Great, we have access to our files at /mnt/ext now. To get it to automatically mount on boot, we need to add it to our /etc/fstab file.

Next we'll find our device UUID and filesystem type.

# blkid
/dev/sda1: LABEL="2TB WD" UUID="5102-AA4B" TYPE="exfat"

Add this line to the bottom of your fstab, substituting your own UUID, mount point and filesystem. Pay attention to the formatting of this file!

UUID=5102-AA4B    /mnt/ext    exfat   defaults    0   0
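Before rebooting, it's worth checking the new entry actually works. If your drive is exFAT like mine, Raspbian may also need the exfat-fuse package installed first (skip that line for ext4 or NTFS drives):

# apt-get install exfat-fuse
# umount /mnt/ext
# mount -a

mount -a mounts everything listed in fstab, so an error here means a typo in the entry.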

Once we have that done, our drive is auto-mounted whenever we next reboot. Now to serve our media files...


Introducing MiniDLNA

MiniDLNA is a lightweight media server designed to support the DLNA and UPnP protocols. This works pretty nicely with almost any device you may want to consume media content on and it's so lightweight it's an ideal match for the Pi.

So, let's get started. First, run a full update and then install the minidlna package.

# apt-get update
# apt-get upgrade
# apt-get install minidlna

Once installed, we'll go ahead and edit the configuration file as follows.

/etc/minidlna.conf
---
media_dir=A,/mnt/ext/Music
media_dir=P,/mnt/ext/Pictures
media_dir=V,/mnt/ext/Videos
friendly_name=Raspberry Pi
inotify=yes

Fairly self-explanatory, I hope. Set your media directories, set a name for your media server, and enable inotify so the media library is automatically refreshed.

Now we start minidlna and we should have a working media server!

# service minidlna start

Upon the first run, the media library will be built. This may take a while depending on how many files you have, but once it's done you shouldn't have to do it again.
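MiniDLNA also serves a bare-bones status page, by default on port 8200, where you can watch the file counts climb while it scans. Substitute your Pi's address:

$ curl http://192.168.1.10:8200/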

Finally, ask minidlna to start up automatically upon boot.

# update-rc.d minidlna defaults

Tada! Enjoy your media :)

Oh, and by the way, it streams 1080p to Xboxes, PlayStations, smart TVs and other computers flawlessly...


This will be a super simple guide on how to quickly install multiple Ghost blogs on a Linux server, using Lighttpd as an internal proxy. It also takes you through running Ghost under forever, which ensures it is always running.

You don't need the proxying, but if you don't want to be messing around with port numbers it's the easiest way to set stuff up.

Okay so, first of all, you'll need to install Node.js. This is covered on the Ghost docs, but I'll reiterate here so it's all in one place.

I'm going to assume you're running a Debian based distro such as Ubuntu for this, however Node is easily available through many package managers.

# run as root (sudo)
$ run as non-privileged user

Fire up a shell and update your apt repositories and install Node.js:

# apt-get update
# apt-get install nodejs

Quickly check that Node installed correctly by running the following:

$ node -v; npm -v
v0.10.24
1.3.21

You may not get these exact versions; as long as it returns a number, you're okay.

Now we're going to download, extract and install Ghost into two directories for our blogs.

$ curl -L https://ghost.org/zip/ghost-latest.zip -o ghost.zip

$ mkdir first-blog/
$ unzip ghost.zip -d first-blog/
$ cd first-blog/
$ npm install --production
$ cd ..

$ mkdir second-blog/
$ unzip ghost.zip -d second-blog/
$ cd second-blog/
$ npm install --production
$ cd ..

We need to configure each of the Ghost installs, so for each one, change into its directory, copy the example config and open it in your favourite text editor.

$ cd first-blog/
$ cp config.example.js config.js
$ nano config.js

At this stage, you can optionally configure the Mail settings, just follow the guide on the Ghost docs.

For each of the blogs, you need to set a unique port number, so find the section that looks like the below and change the port.

first-blog/config.js
---
production: {
    url: 'http://my-first-blog.com',
    ...
    server: {
        host: '127.0.0.1',
        port: '2368'
    }
}

Switch to your second install and do the same. Increment the port number by two for your second blog, for example:

second-blog/config.js
---
production: {
    url: 'http://my-second-blog.com',
    ...
    server: {
        host: '127.0.0.1',
        port: '2370'
    }
}

That's Ghost installed. Both blogs can be reached via the port numbers you set, but we want to use the normal domains, right?
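If you'd like to check each install before adding the proxy, you can start one in the foreground (Ctrl-C to stop it; forever takes over later):

$ cd first-blog/
$ NODE_ENV=production node index.js

Then, from a second shell, poke the port directly:

$ curl -I http://127.0.0.1:2368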

So next we need to set up an internal proxy server with Lighttpd. This forwards requests from a domain to a specific port.

Go ahead and open up your Lighttpd config and add the following, making sure to check your domains and ports.

# nano /etc/lighttpd/lighttpd.conf

server.modules += ( "mod_proxy" )

$HTTP["host"] == "my-first-blog.com" {
    proxy.server = ( "" => ( (
        "host" => "127.0.0.1",
        "port" => "2368" )
    ) )
}

$HTTP["host"] == "my-second-blog.com" {
    proxy.server = ( "" => ( (
        "host" => "127.0.0.1",
        "port" => "2370" )
    ) )
}

Reload your Lighttpd config with the following

# service lighttpd force-reload

You've got everything you need to run multiple Ghost blogs through Lighttpd now. Finally, we're going to install forever, which keeps the blogs running in case of any crashes.

First, install Forever through the Node Package Manager.

# npm install forever -g

Now that's done, we're going to run Ghost and hopefully everything should be working!

$ NODE_ENV=production forever start first-blog/index.js
$ NODE_ENV=production forever start second-blog/index.js
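A couple of quick checks (just suggestions) will confirm everything is wired up: forever should list both processes, and curling localhost with a faked Host header should return each blog through Lighttpd:

$ forever list
$ curl -I -H "Host: my-first-blog.com" http://127.0.0.1/
$ curl -I -H "Host: my-second-blog.com" http://127.0.0.1/

One thing worth knowing: forever keeps the blogs alive if they crash, but it won't restart them after a reboot by itself, so you may want to put the two start commands somewhere like a crontab @reboot entry.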

If you have any problems, drop me an email at ben@bbrks.me

Online pseudonyms are great. You are in complete control of your identity online, but there are several pitfalls I have faced which can be easily avoided if you're aware of them before choosing an alias.

Long before I was thinking about my personal brand online, my career, or even entering that big scary world of high school, I quickly learned through narcissistic, self-obsessed Googling that a lot of Ben Brookses exist in the world. Some are novelists, politicians, property managers and Australian cyclists, most of whom have had far longer to build up their online presence and identity. The exception is the novelist: he's the same age as me, and that's quite scary.

Your mother might think you're one in a million, but that still leaves 7,000 people just like you. How do you compete with them in the desperate race for the number one spot on Google?!

Luckily for me, I had the genius idea of using a nickname which nobody else was using. Brilliant in theory, but there were several aspects I didn't fully think through before diving in and picking one.

Common mistakes when picking a pseudonym

  1. Picking a name that is entirely unpronounceable

    Probably my number one regret in picking my "bbrks" nickname. Not even I know how to say it, which can make it quite awkward in phone or face-to-face conversations, and it usually comes out as a mumble, something similar to buh-brooks.

    Sure it's nice and short, but that's no use if you have to spell it out each time you say it. I usually tell people verbally to visit benbrooks.co.uk, which redirects them here.

  2. Picking a ridiculously unsuitable name

    No, your future employer isn't going to be impressed reading "XxL33TBlazeUp420xX" on your résumé. This should be obvious.

  3. Picking a really specific name

    Picking a name related to your area of work seems great at first; you want to be the guy known for being an awesome mop bucket designer. But what happens if you decide to change your career path?

    In the past two years I've gone from aspiring to be a programmer working for NASA and SpaceX, to being a back-end web developer, a front-end developer, UI/UX designer, photographer and of course the super amazing start-up I'm going to create that will make me millions. Not all of these dreams are reachable realistically, and you're pretty much guaranteed to change your mind at some point.

    Don't be the awesome mop bucket designer wanting to be a chef.

Alternatives to pseudonyms

Your real name may be commonly used, but you're probably forgetting you have a middle name or two. Those alone can be enough to differentiate you from the other people with an identical name. If you're after anonymity and separation of your real life self, this may not be for you. But it is an entirely reasonable thing for somebody who is just after uniqueness.

Ben Brooks returns about 90 million results on Google. Not ideal.

Ben Keith Brooks though? Just under 3 million. An impressive reduction. Don't hide your middle names, embrace them.

I hope I've at least opened your eyes to the mistakes you could run into when choosing an online pseudonym.

I am still undecided on mine, but beware the sunk cost fallacy: if you pick a bad name and stick with it purely because of the effort you've already put into it, you're going to have a bad time.

We don't need page numbers on the web; you can scroll your way through a piece of text at the flick of a finger. But how do you know how far through an article you are?

I came across a discussion or two on Hacker News today about making indicators to display your progress while reading an article. Something that looked a little like this…

Horizontal progress bar

The idea is nice, but it does not translate well. The progress bar is horizontal, while the article you are reading is (presumably) vertical. This is poor natural mapping. You scroll down to move the indicator bar across. It just doesn't make sense.

Poor natural mapping

Kindles & other E-Book readers have something similar. A horizontal bar along the bottom of the screen which displays how far through a book you are, along with chapter markers.

Kindle progress bar

This sort of makes sense on a reader: you have defined pages, even if the content does reflow. You don't scroll vertically through a book on them. The page turns are horizontal, thus horizontal progress indicators make sense & have good mapping.


Thing is though, we already have a vertical progress bar for the web that has been around for decades. It's called the scroll bar; you may have heard of it.

Scrollbar History

"But Ben", I hear you say…

"What about the comments section below an article which takes up two times the amount of space‽ Won't that throw the proportions off?"

Yes, yes it will, random dude (or dudette). There is an easy way around this though!

Simply hide all of the content which is not the article until the user reaches it. BOOM. You have a universal progress bar which everybody has been using since the 80s. (okay maybe not everyone)

"But how do you implement this" you ask? Easy. Gianni Chiappetta has kindly written a jQuery plugin for us :)


"But Ben!", - Hello again random dude/dudette.

"What happens when a user is reading on an iOS or OS X device and the scrollbars hide when not in use?"

Frankly, I think it sucks that they hide when not in use, at least on large screens. The least they could do is offer a few pixels as a visual indication. But alas, Apple knows best. /s

Apple Scrollbars

One alternative is to ignore iOS and OS X users. But that's not very nice. There are many scrolling libraries available that you can use, such as NiceScroll.

Or you can just make your own indicator like the one seen at the very start of the article. Just don't use a horizontal indicator for vertical progression :)

"But what happens when the article is split across several pages?"

Well then you've probably got bigger problems. An article shouldn't be split across multiple pages on the web! We have an infinitely long canvas to write on, along with an elegant way of moving through it. Why would you need to split it up?

"But ad revenue!"

Maybe there are better ways of monetising than forcing multiple pages of ads down your users' throats? :)