Simple Liveblog to demonstrate WebSockets in Go

Yesterday I had some spare time and thought it was the perfect moment to write a very simple liveblog to learn more about WebSockets (in Go).


WebSockets are actually very easy in Go: a websocket handler works just like an http handler. The required websocket package is not installed by default, but you can easily install the Go-maintained package with go get.

Time to create a websocket handler:

func main() {
    http.Handle("/", websocket.Handler(HandleSocket))

    if err := http.ListenAndServe(":1234", nil); err != nil {
        log.Fatal("ListenAndServe:", err)
    }
}

In the above snippet, HandleSocket is the name of your own handler function. For this simple example I created a handler function that just sends every received message back over the socket:

func HandleSocket(ws *websocket.Conn) {

    // Wait for incoming websocket messages
    for {
        var reply string

        if err := websocket.Message.Receive(ws, &reply); err != nil {
            log.Println("Can't receive from socket:", err)
            break
        }

        log.Println("Received back from client: " + reply)

        msg := "Received:  " + reply
        log.Println("Sending to client: " + msg)

        if err := websocket.Message.Send(ws, msg); err != nil {
            log.Println("Can't send to socket:", err)
            break
        }
    }
}

Of course I enhanced this Go code to create an actual liveblog demo.


Now that we have our websocket, we must actually do something with it in Javascript. The simplest implementation prepends any new message to an HTML list with id messages:

var sock = null;
var wsuri = "ws://localhost:1234";

window.onload = function() {

    sock = new WebSocket(wsuri);

    sock.onopen = function() {
        console.log("connected to " + wsuri);
    }

    sock.onclose = function(e) {
        console.log("connection closed (" + e.code + ")");
    }

    sock.onmessage = function(e) {
        // Prepend the received message to the #messages list
        $('<li>' + e.data + '</li>').hide().prependTo('#messages').fadeIn(1000);
    }
}

We must also be able to send messages from an admin panel:

function send() {
    var msg = document.getElementById('new_message').value;
    sock.send(msg);
    document.getElementById('new_message').value = '';
}


I combined the above little pieces to create a simple liveblog. The blog has an admin page on which someone can add messages, which are then broadcast to all the other sockets.
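The broadcast code itself isn't shown above, so here is a minimal sketch of how such a fan-out could work. The Hub type, its methods, and the channel-per-client design are my own illustration, not the demo's actual implementation; in the real handler, a goroutine per connection would forward messages from its channel to websocket.Message.Send.

```go
package main

import "sync"

// Hub tracks connected clients and fans messages out to them.
// Each client is represented by a buffered channel.
type Hub struct {
	mu      sync.Mutex
	clients map[chan string]bool
}

func NewHub() *Hub {
	return &Hub{clients: make(map[chan string]bool)}
}

// Register adds a client and returns the channel it will receive on.
func (h *Hub) Register() chan string {
	ch := make(chan string, 16)
	h.mu.Lock()
	h.clients[ch] = true
	h.mu.Unlock()
	return ch
}

// Unregister removes a client and closes its channel.
func (h *Hub) Unregister(ch chan string) {
	h.mu.Lock()
	delete(h.clients, ch)
	h.mu.Unlock()
	close(ch)
}

// Broadcast queues msg for every connected client, dropping the
// message for clients whose buffer is full rather than blocking.
func (h *Hub) Broadcast(msg string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	for ch := range h.clients {
		select {
		case ch <- msg:
		default:
		}
	}
}
```

HandleSocket would then call Register when a client connects, and the admin handler would call Broadcast for each new message.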


Simple admin page


The source code of the demo is available on Github:

Antwerp from the Panorama of the KBC Boerentoren

This week I got the chance to take some pictures from the top floor of the KBC Boerentoren.

The result: a number of nice panorama photos of Antwerp…

View of the Antwerp cathedral and the Scheldt. To the right of the cathedral you can see the Vleeshuis, and on the far right the Sint-Pauluskerk. In the distance you can see the cooling towers of Doel. The photo was taken from the panorama floor of the KBC Boerentoren.

A bend in the Scheldt, with the Sint-Pauluskerk in the foreground and the MAS on the right. In the distance, on the left bank, you can catch a glimpse of Oosterweel. At the front left you can also see the Vleeshuis.

View of the Scheldt with the Sint-Andrieskerk in the center of the picture

The police tower "Den Oudaan", with the Sint-Augustinuskerk to its right and the new courthouse in the background

From left to right: Sint-Antonius and the Sportpaleis (both in the distance), the Sint-Jacobskerk, the Theater Building, the Antwerp Tower, and on the far right Antwerpen-Centraal

LaTeX and Fuzzy Logics

Fuzzy logics (especially fuzzy numbers and fuzzy intervals) can be beautifully plotted on a graph… aaaand of course, you can also do this using LaTeX and pgfplots!
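As a quick refresher (my own addition, using the standard textbook definition): a trapezoidal fuzzy interval with support [a, d] and core [b, c] has the piecewise-linear membership function

```latex
\mu(x) =
\begin{cases}
0                & \text{if } x \le a \\
\dfrac{x-a}{b-a} & \text{if } a < x < b \\
1                & \text{if } b \le x \le c \\
\dfrac{d-x}{d-c} & \text{if } c < x < d \\
0                & \text{if } x \ge d
\end{cases}
```

This trapezoid is exactly the shape the plot draws.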

\begin{tikzpicture}
    \begin{axis}[
            area style,
            axis x line=bottom,
            axis y line=left,
            enlarge x limits=false]
        % The coordinates below are illustrative
        \addplot[fill, red, opacity=0.2] coordinates
            {(2, 0) (4, 1) (6, 1) (8, 0)};
        \addplot[red, mark=none,
                 nodes near coords=\textbf{Fuzzy interval},
                 every node near coord/.style={anchor=south, align=center}]
            coordinates {(5, 1.1)};
    \end{axis}
\end{tikzpicture}



Fuzzy interval plotted with pgfplots in LaTeX

Set up an SSH tunnel on Mac OS X

There are some apps available to set up an SSH tunnel on OS X, but you can do it very easily in the terminal.

Just start a SOCKS web proxy using this SSH command:

$ ssh -D 8080 -C -N -p 22

Once your proxy is running you must tell OS X to use it. Go to System Preferences, Network, Advanced. Open the Proxies tab and select SOCKS proxy.
Set the server to 127.0.0.1 (the tunnel listens locally) and the port to 8080. Save and apply the settings, and everything should work!

Installing MAMP (Mac OS X Apache MariaDB PHP) using MacPorts

MacPorts is a BSD ports like package management system for OS X.

The MacPorts Project is an open-source community initiative to design an easy-to-use system for compiling, installing, and upgrading either command-line, X11 or Aqua based open-source software on the OS X operating system.

MacPorts is very handy when it comes to installing command line tools on a Mac. In this guide I will use it to install Apache, MariaDB and PHP. You could also install them using Homebrew, or use the packages that come with your Mac, but I prefer MacPorts… So if you don't have MacPorts installed yet, follow the installation instructions on their website.

Before installing any ports, make sure you have the latest version of the ports tree:

$ sudo port selfupdate


Apache

If you have web sharing enabled on your Mac, you should disable it before continuing. Web sharing can be found under ‘System preferences’, ‘Sharing’, …

Time to install Apache:

$ sudo port install apache2

Once the installation is complete, you can edit Apache's configuration file: /opt/local/apache2/conf/httpd.conf. You probably want to point DocumentRoot to your local Sites folder: change /opt/local/apache2/htdocs to your local sites folder, e.g. /Users/Mathias/Sites.
Don't forget to verify your changes after every modification you make to httpd.conf:

$ /opt/local/apache2/bin/apachectl -t

When everything is configured, you can start Apache using MacPorts services:

$ sudo port load apache2

Stopping a service can be done with the corresponding unload command.

Apache should be running now; more configuration details can be found all over the internet, so I'm not gonna explain the whole config file here…

MariaDB (MySQL)

Again, we use MacPorts:

$ sudo port install mariadb-server

Once MariaDB is installed, we need to create the main databases:

$ sudo -u _mysql /opt/local/lib/mariadb/bin/mysql_install_db

Time to start MariaDB:

$ sudo port load mariadb-server

Next we need to set a password for the root user; don't forget this step! This procedure will interactively ask you some security questions:

$ /opt/local/lib/mariadb/bin/mysql_secure_installation

If you work a lot with sockets for MySQL/MariaDB, you can create a symbolic link from the default socket path to MacPort’s path:

$ sudo ln -s /opt/local/var/run/mariadb/mysqld.sock /tmp/mysql.sock

You can also specify the socket path in your PHP config file: see below…

Note: MacPorts' MariaDB has skip-networking enabled by default in /opt/local/etc/mariadb/macports-default.cnf. If you want to connect to MySQL over the network instead of the socket, you should comment out that line.

If you want to use mysql on the command line, you can link mysql to MariaDB:

$ sudo port select --set mysql mariadb


PHP

The last step is installing PHP:

$ sudo port install php56-apache2handler
$ sudo port install php56-mysql

Set up your PHP configuration files. For development purposes use:

$ cd /opt/local/etc/php56
$ sudo cp php.ini-development php.ini

For production use:

$ cd /opt/local/etc/php56
$ sudo cp php.ini-production php.ini

Enable the PHP module in Apache

$ cd /opt/local/apache2/modules
$ sudo /opt/local/apache2/bin/apxs -a -e -n "php5"

In Apache's config file /opt/local/apache2/conf/httpd.conf, add index.php to the DirectoryIndex:

<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>

Make sure that Apache includes the PHP config, check your httpd.conf file for the following lines:

# Include PHP configurations
Include conf/extra/mod_php56.conf

Also verify that the .so shared object for PHP is included:

# Load the PHP module
LoadModule php5_module modules/

Before we can use MySQL in our PHP code, we must set the default socket path in /opt/local/etc/php56/php.ini. Search for mysql.default_socket, mysqli.default_socket and pdo_mysql.default_socket and assign the MariaDB socket to them: /opt/local/var/run/mariadb/mysqld.sock.
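With the default MacPorts paths, the three settings in php.ini would then look like this (directive names are PHP's standard ones; double-check them against your php.ini):

```ini
mysql.default_socket = /opt/local/var/run/mariadb/mysqld.sock
mysqli.default_socket = /opt/local/var/run/mariadb/mysqld.sock
pdo_mysql.default_socket = /opt/local/var/run/mariadb/mysqld.sock
```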

If you regularly use PHP from the command line, you'll also want to link the php command to the MacPorts PHP version:

$ sudo port select --set php php56

If you want colored PHP CLI output, you must enable it by installing the php56-posix port:

$ sudo port install php56-posix


Verify your Apache config, restart Apache, restart MariaDB and everything should work correctly!

PHP: Unit tests with Travis, PHPUnit and Composer

In a perfect world, every software developer writes tons of unit tests, and uses continuous integration to make sure everything keeps working. Travis, PHPUnit and Composer are there to save you a lot of time!

In this blogpost I will explain how to set up Travis and PHPUnit to run unit tests in a PHP project that uses Composer.

Composer

Composer is a super awesome dependency manager for PHP. If you don't use it yet, you are doing it wrong! :) For this guide I assume you already have a working composer.json file and know the basics of Composer.

Before you can actually test something you must of course run composer install to download all the dependencies.


PHPUnit

PHPUnit is the ultimate unit testing framework for PHP. Installation instructions can be found here.

Next thing to do is create the phpunit.xml file. This file will contain your PHPUnit configuration and will make it very easy to run the unit tests from the command line.

<?xml version="1.0" encoding="UTF-8"?>

<phpunit backupGlobals="false"
         bootstrap="vendor/autoload.php">
    <testsuites>
        <testsuite name="My package's test suite">
            <!-- Adjust this to the directory that holds your tests -->
            <directory>./tests/</directory>
        </testsuite>
    </testsuites>
</phpunit>

In this phpunit.xml file I have set some basic variables which might be different for your project. But note the bootstrap argument: assign the path to your Composer autoload.php file to it.

Next you must specify the directory where your PHP test files are located.

Run your unit tests (and check if PHPUnit is correctly configured): phpunit

This should yield an output like this:

~/S/MyPackage (master) $ phpunit
PHPUnit 4.5.0 by Sebastian Bergmann and contributors.

Configuration read from /Path/To/My/Package/phpunit.xml


Time: 92 ms, Memory: 3.50Mb

OK (7 tests, 24 assertions)


Travis

With Composer and PHPUnit configured, it's time to create the .travis.yml file to configure Travis.

language: php

php:
  - 5.4
  - 5.5
  - 5.6

before_script:
  - composer install --no-interaction

The Travis configuration is just a normal PHP config; the only addition is that Travis must run composer install before running the tests.

Push everything to Github and let Travis run your unit tests.

If you want strict synchronization between your local Composer dependencies and remote installations, you must also commit the composer.lock file to your repo. Most people place it in their .gitignore, but it is actually meant to be committed to Git.


If you need a live example of such a configuration, take a look at my ORM (Github / Travis) package. In this package I specified another Composer vendor directory (core/lib), so I also changed the bootstrap variable in the PHPUnit configuration. You can ignore the database section in the .travis.yml file…

CSS: semi-transparent border on images

When you display images on a white background (but also on other colors), you may want to add a border to the image. But adding a ‘normal’ CSS border doesn't always look good.

The solution: semi-transparent borders that are positioned not around, but on the images. This subtle design element will make your galleries look way better. (Facebook, Flickr, Google, … they all use it.)


No border around/on the images


Semi-transparent border on images

Adding these borders can be done very easily using CSS.

Technique 1: outline

A first solution is using the outline and outline-offset property in CSS:

img.image {
    outline: 1px solid rgba(0, 0, 0, 0.1);
    outline-offset: -1px;
}

This is very easy and works perfectly in a lot of browsers, but sadly it isn't supported in Internet Explorer 9 and 10, which together have a desktop market share of over 15%.

Technique 2: pseudo element

Using the :after pseudo element has much better browser support. All modern browsers support it.

div.image-container:after {
    border: 1px solid rgba(0, 0, 0, 0.1);
    content: '';

    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    left: 0;
}

The above CSS code uses a pseudo element to position the border right on top of the image.

Even though the pseudo element approach is better supported, it can be a bit harder to get working. People often forget how CSS really works: mind that the parent div needs position: relative, and that :after doesn't work on img elements. (Look it up, because explaining all this would lead us way too far from the original topic.)

Create beautiful photo galleries with ‘Justified Gallery’

Organizing pictures of different sizes into a gallery can be a hard task, but using ‘Justified Gallery’, you can make good-looking — and responsive — photo galleries in minutes. ‘Justified Gallery’ is written in javascript, and renders the photos the same way as Flickr.

Example of ‘Justified Gallery’

First of all, you need to include the script and stylesheet, which you can find on the project homepage. Since this is a jQuery plugin, you must also include jQuery…

Now you’re ready to go.
The basic HTML code for the gallery looks like this:

<div id="gallery">
    <a href="path/to/image1.jpg">
        <img alt="Caption for my image" src="path/to/image1_thumbnail.jpg" />
    </a>
    <a href="path/to/image2.jpg">
        <img alt="Another caption" src="path/to/image2_thumbnail.jpg" />
    </a>
</div>

After you've created the HTML for your gallery, you need to run justifiedGallery on that div.
Somewhere on the page you put a script tag, or you make a separate javascript file for it.

$( document ).ready(function() {
    $('#gallery').justifiedGallery({
        rowHeight : 250,
        lastRow : 'nojustify',
        margins : 5
    });
});
rowHeight defines the height that ‘Justified Gallery’ will aim for. Depending on the set of images, the actual height may be a bit different:

However, the justification may resize the images, and, as a consequence, the row height may be a little bit different than 160px. This means that the row height is intended as your preferred height, and it is not an exact measure. If you want that the row height remains strictly fixed, you can use the fixedHeight option: this option will crop the images a little bit to make sure that the row height doesn’t change.

lastRow defines how the last row will be handled. If you want empty space after the last image, use nojustify. If you want the last row to fill the whole page width, use justify. You can also hide the row using hide.

Obviously margins sets the margin (in pixels) between the images.

‘Justified Gallery’ is very well documented on the project homepage. Take a look there for the various other options!

Backup your databases in Git

Storing backups of the database is important for any service on the internet. Git can be the right tool to back up databases.

Like other version control systems, Git tracks changes, and will only push the changes in files to the remote. So if one line in a one-million-line database dump changes, we don't need to transfer the whole dump to our backup server. This economization is done by Git's delta compression mechanism.1

Configuring Git

Generating SSH keys:

$ ssh-keygen -t rsa -C ""
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
Your identification has been saved in /Users/you/.ssh/id_rsa.
Your public key has been saved in /Users/you/.ssh/
The key fingerprint is:

If you want to execute the backup script automatically, leave the passphrase blank so SSH won't ask for it. Note that this might be insecure!

Now create a remote Git repository, and add the public key to this Git service, e.g. Github, Gogs, ….

Init a new local repo with the SSH remote address, and commit/push an initial commit.

Backup script

#! /bin/sh

# Paths and credentials; adjust these to your own setup
MYSQL=/usr/bin/mysql
MYSQLDUMP=/usr/bin/mysqldump
MYSQL_USER="root"
MYSQL_PASSWORD="secret"
BACKUP_DIR=/home/mathias/backup/mysql_git

TIMESTAMP=$(date +"%F")

cd $BACKUP_DIR

echo "Backing up databases"

databases=`$MYSQL --user=$MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema)"`

for db in $databases; do

        echo "  > Dumping $db to disk"
        $MYSQLDUMP --force --opt --user=$MYSQL_USER -p$MYSQL_PASSWORD --skip-extended-insert --databases $db > "$BACKUP_DIR/$db.sql"

        echo "  > Saving $db changes on Git server"
        git add $BACKUP_DIR/$db.sql
        git commit -m "$db `date +"%m-%d-%Y"`"
        git push
done

echo "Done"

The script loops through all MySQL databases and dumps each of them to a .sql file named after the database. After a database is dumped, the file is added to the local git repo and committed.

After each commit, the changes are pushed to the remote repo. This avoids very big pushes when working with large databases. If you want to push only once, just place the push at the end of the script.

Running the backup script

Running this script manually isn't the best solution. Turning it into an automated backup service is straightforward: just create a cronjob that executes the script every day (or at any interval you want).

Type crontab -e in the console; this will open your personal cron configuration in your favorite editor. Now add the cronjob to the crontab:

30 2 * * * /home/mathias/backup/mysql_git/ >> /home/mathias/backup_git_cron.log

This particular example runs the backup script every day at 2:30 AM, and appends the output of the script to a backup_git_cron.log file in my home directory. (Of course you are absolutely free to create any exotic cronjob that runs the backup script at your desired moment.)

Big data and low-end hardware

Git works very well for small programming source files and small text files. Those database dumps, however, aren't always that small. On my VPS I have 200MB of database dumps, each of which has to be compressed and packed for every commit. This takes a lot of time on a machine with 512MB of RAM, and sometimes even crashes on the largest files. While pushing, I've seen this error way too often: error: pack-objects died of signal 9.

Some other Git users with large files have reduced Git's packing limits, which resulted in fewer problems packing those files:

git config --global pack.windowMemory "100m"
git config --global pack.packSizeLimit "100m"
git config --global pack.threads "1"

On my server that didn't really seem to work (apparently I still don't have enough free RAM): I noticed fewer problems while compressing the objects, but the large dumps still took quite some time (and still crashed).
The solution for me was turning off delta compression.2

echo '*.sql -delta' > .gitattributes

The above command writes the setting to the .gitattributes file. If you commit this file, delta compression will be turned off in any clone of the repo.

Another solution would be to migrate from Git to Mercurial. From what I've read, Mercurial stores diffs instead of object packs.

There's one huge difference between git and mercurial: the way they represent each commit. git represents commits as snapshots, while mercurial represents them as diffs.

Changesets (diffs) advantage is in taking up less space. Git recovers the space used for commits by using compression, but this requires an occasional explicit recompress step (“git pack”).

When the history of the repository becomes too large, it is useful to do a shallow clone of the remote: git clone --depth <depth> <remote-url>. This way you don't keep a large local history, but let the remote keep it.


Git might not be the perfect system for backups, but used with care (and on good hardware) it can provide a decent backup system.

And it’s always better than having no backup at all!


1 Later in this blogpost I point out that disabling delta compression is better on low-memory machines. Note that without delta compression Git needs to send a lot more data to the remote.2

2 Note that disabling delta compression means Git needs to push the full packs of large files, since it can no longer rely on deltas. So if you change one line of a 500MB file that is packed into a 100MB pack, you will always need to send that 100MB pack to the remote (instead of just 15KB) when delta compression is turned off. Without delta compression Git also needs to store all the objects of the files. After 65 commits (of some large databases), I had a repo of almost 1GB. Running git gc shrank that repo to less than 100MB. (Unfortunately, running git gc on my 512MB RAM server results in those well-known issues: error: pack-objects died of signal 9, warning: suboptimal pack - out of memory and fatal: inflateInit: out of memory.)