# LaTeX and Fuzzy Logics

Fuzzy logics (especially fuzzy numbers and fuzzy intervals) can be beautifully plotted on a graph. And of course, you can also do this using LaTeX and pgfplots!

\begin{tikzpicture}
\begin{axis}[
height=3.5cm,
width=\textwidth/2,
ytick={0,1},
xtick={4,6},
area style,
xlabel={$$}, xmin=0,xmax=10, axis x line=bottom, axis y line=left, %ylabel={$$},
enlarge x limits=false
]

\addplot coordinates {(2.5,0)(4,1)(6,1)(7.5,0)}
\closedcycle;
\addplot [red, mark=none, nodes near coords=\textbf{Fuzzy interval}, every node near coord/.style={rotate=0,anchor=south,align=center}] coordinates {(5,1.1)};

\end{axis}
\end{tikzpicture}

Fuzzy interval plotted with pgfplots in LaTeX
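The plotted shape is the classic trapezoidal membership function. Reading the breakpoints straight from the coordinates used above (2.5, 4, 6 and 7.5), it can be written piecewise as:

\[
\mu(x) =
\begin{cases}
0 & x \le 2.5 \\[4pt]
\dfrac{x - 2.5}{1.5} & 2.5 < x < 4 \\[4pt]
1 & 4 \le x \le 6 \\[4pt]
\dfrac{7.5 - x}{1.5} & 6 < x < 7.5 \\[4pt]
0 & x \ge 7.5
\end{cases}
\]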

# Setup an SSH tunnel on Mac OS X

There are some apps available for setting up an SSH tunnel on OS X, but you can also do it very easily in the terminal.

Just start a SOCKS web proxy using this SSH command:

$ ssh -D 8080 -C -N username@myserver.com -p 22

Once your proxy is running, you must tell OS X to use this web proxy. Go to System Preferences, Network, Advanced. Open the Proxies tab and select SOCKS proxy. Set the server to 127.0.0.1 and the port to 8080. Save and apply the settings, and everything should work!

# Installing MAMP (Mac OS X Apache MariaDB PHP) using MacPorts

MacPorts is a BSD-ports-like package management system for OS X. The MacPorts Project is an open-source community initiative to design an easy-to-use system for compiling, installing, and upgrading either command-line, X11 or Aqua based open-source software on the OS X operating system. The tool is very handy when it comes to installing command line tools for Mac.

In this guide I will use it to install Apache, MariaDB and PHP. You could also install them using Homebrew, or use the packages that come with your Mac, but I prefer MacPorts… So if you don't have MacPorts installed, follow the installation instructions on their website.

Before installing any ports, make sure you have the latest version of the ports tree:

$ sudo port selfupdate

## Apache

If you have web sharing enabled on your Mac, you should disable it before continuing. Web sharing can be found under ‘System preferences’, ‘Sharing’, …

Time to install Apache:

$ sudo port install apache2

When the installation is completed, you can edit Apache's configuration file: /opt/local/apache2/conf/httpd.conf. You probably want to set DocumentRoot to your local Sites folder. To do this, change /opt/local/apache2/htdocs to your local Sites folder, e.g. /Users/Mathias/Sites. Don't forget to verify your changes after every modification you make to httpd.conf!

$ /opt/local/apache2/bin/apachectl -t

When everything is configured, you can start Apache using MacPorts services:

$ sudo port load apache2

Stopping services can be done using the unload statement. Apache should be functioning right now; more configuration details can be found all over the internet, so I'm not going to explain the whole config file here…

### MariaDB (MySQL)

Again, we use MacPorts:

$ sudo port install mariadb-server

Once MariaDB is installed, we need to create the main databases:

$ sudo -u _mysql /opt/local/lib/mariadb/bin/mysql_install_db

Time to start MariaDB:

$ sudo port load mariadb-server

Next we need to create a password for the root user; don't forget this step! This procedure will interactively ask you some security details:

$ /opt/local/lib/mariadb/bin/mysql_secure_installation

If you work a lot with sockets for MySQL/MariaDB, you can create a symbolic link from the default socket path to the MacPorts path:

$ sudo ln -s /opt/local/var/run/mariadb/mysqld.sock /tmp/mysql.sock

You can also specify the socket path in your PHP config file: see below…

Note: MacPorts MariaDB has skip-networking enabled by default in /opt/local/etc/mariadb/macports-default.cnf. If you want to use 127.0.0.1 for your MySQL connections, you should comment out that line.
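For reference, the relevant part of that file looks roughly like this (a sketch; check your installed copy, as the exact contents may differ between MacPorts versions):

[mysqld]
# comment out the following line to allow TCP connections on 127.0.0.1
skip-networking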

If you want to use mysql on the command line, you can link mysql to MariaDB:

$ sudo port select --set mysql mariadb

## PHP

The last step is installing PHP:

$ sudo port install php56-apache2handler
$ sudo port install php56-mysql

Set up your PHP configuration files. For development purposes use:

$ cd /opt/local/etc/php56
$ sudo cp php.ini-development php.ini

For production use:

$ cd /opt/local/etc/php56
$ sudo cp php.ini-production php.ini

Enable the PHP module in Apache:

$ cd /opt/local/apache2/modules
$ sudo /opt/local/apache2/bin/apxs -a -e -n "php5" mod_php56.so

In Apache's config file /opt/local/apache2/conf/httpd.conf, add index.php to the DirectoryIndex:

<IfModule dir_module>
    DirectoryIndex index.php index.html
</IfModule>

Make sure that Apache includes the PHP config; check your httpd.conf file for the following line:

Include conf/extra/mod_php56.conf

Also verify that the .so shared object for PHP is loaded:

LoadModule php5_module modules/mod_php56.so

Before we can use MySQL in our PHP code, we must set the default socket path in /opt/local/etc/php56/php.ini. Search for mysql.default_socket, mysqli.default_socket and pdo_mysql.default_socket, and assign the MariaDB socket to each of them: /opt/local/var/run/mariadb/mysqld.sock.

If you regularly use PHP from the command line, you may also want to link the php command to the MacPorts PHP version:

$ sudo port select --set php php56
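The three socket settings in php.ini then end up looking like this (a sketch of just the relevant lines):

mysql.default_socket = /opt/local/var/run/mariadb/mysqld.sock
mysqli.default_socket = /opt/local/var/run/mariadb/mysqld.sock
pdo_mysql.default_socket = /opt/local/var/run/mariadb/mysqld.sock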

If you want colored PHP CLI output, you must enable it by installing the php56-posix port:

$ sudo port install php56-posix

Verify your Apache config, restart Apache, restart MariaDB, and everything should work correctly!

# PHP: Unit tests with Travis, PHPUnit and Composer

In a perfect world, every software developer writes tons of unit tests and uses continuous integration to make sure everything keeps working. Travis, PHPUnit and Composer are there to save you a lot of time! In this blogpost I will explain how to set up Travis and use PHPUnit to run unit tests in a PHP project with Composer.

## Composer

Composer is a super awesome dependency manager for PHP. If you don't use it yet, you are doing it wrong! For this guide I assume you already have a working composer.json file and you know the basics of Composer. Before you can actually test something, you must of course run composer install to download all the dependencies.

## PHPUnit

PHPUnit is the ultimate unit testing framework for PHP. Installation instructions can be found here.

The next thing to do is create the phpunit.xml file. This file will contain your PHPUnit configuration and will make it very easy to run the unit tests from the command line.

<?xml version="1.0" encoding="UTF-8"?>
<phpunit backupGlobals="false"
         backupStaticAttributes="false"
         bootstrap="vendor/autoload.php"
         colors="true"
         convertErrorsToExceptions="true"
         convertNoticesToExceptions="true"
         convertWarningsToExceptions="true"
         processIsolation="false"
         stopOnFailure="false"
         syntaxCheck="false">
    <testsuites>
        <testsuite name="My package's test suite">
            <directory>./tests/</directory>
        </testsuite>
    </testsuites>
</phpunit>

In this phpunit.xml file I have set some basic variables which might be different for your project. Note the bootstrap argument: assign the path to your Composer autoload.php file to it. Next you must specify the directory where your PHP test files are located.
Run your unit tests (and check if PHPUnit is correctly configured) by running phpunit. This should yield an output like this:

~/S/MyPackage (master)$ phpunit
PHPUnit 4.5.0 by Sebastian Bergmann and contributors.

.......

Time: 92 ms, Memory: 3.50Mb

OK (7 tests, 24 assertions)
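For completeness, a minimal (hypothetical) test class that PHPUnit would pick up from the ./tests/ directory could look like this; the class and method names are placeholders, and PHPUnit 4.x tests extend PHPUnit_Framework_TestCase:

<?php
// tests/ExampleTest.php -- hypothetical example test

class ExampleTest extends PHPUnit_Framework_TestCase
{
    public function testStringsAreConcatenated()
    {
        // a trivial assertion, just to show the structure
        $this->assertSame('foobar', 'foo' . 'bar');
    }
}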

## Travis

With Composer and PHPUnit configured, it's time to create the .travis.yml file to configure Travis.

language: php
php:
- 5.4
- 5.5
- 5.6

install:
- composer install --no-interaction


The Travis configuration is just a normal PHP config; the install step tells Travis to run composer install before testing.
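Travis's PHP builds run phpunit by default when no script step is given. If you prefer to be explicit (this is optional, not part of the config above), you can add a script section to .travis.yml:

script: phpunit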

Push everything to Github and let Travis run your unit tests.

If you want strict synchronization between your local Composer dependencies and remote installations, you must also commit the composer.lock file to your repo. Most people place it in their .gitignore, but it is actually meant to be committed to Git.

## Example

If you need a live example of such a configuration, take a look at my ORM (Github / Travis) package. In this package I specified another Composer vendor directory (core/lib), so I also changed the bootstrap variable in the PHPUnit configuration. You can ignore the database section in the .travis.yml file…

# Trainspotting (Port of Antwerp)

After the exams, I went to the port to unwind and photograph some trains (accompanied by a 'professional' trainspotter!).

# CSS: semi-transparent border on images

When you display images on a white background (but also on other colors), you may want to add a border to the image. But adding a 'normal' CSS border doesn't always look very good.

The solution: semi-transparent borders that are positioned not around, but on the images. This subtle design element will make your galleries look way better. (Facebook, Flickr, Google, … they all use it.)

No border around/on the images

Semi-transparent border on images

Adding these borders can be done very easily using CSS.

### Technique 1: outline

A first solution is using the outline and outline-offset property in CSS:

img.image {
outline: 1px solid rgba(0, 0, 0, 0.1);
outline-offset: -1px;
}

This is very easy and works perfectly in a lot of browsers, but sadly it isn't supported in Internet Explorer 9 & 10, which together have a market share of over 15% (on the desktop).

### Technique 2: pseudo element

Using the :after pseudo element has much better browser support. All modern browsers support it.

div.image-container:after {
border: 1px solid rgba(0, 0, 0, 0.1);
content: '';

position: absolute;
top: 0;
right: 0;
bottom: 0;
left: 0;
}

The above CSS code uses a pseudo element to position the border right on top of the image.

Even though working with the pseudo element is better supported, it can be a bit harder to get it working. People often tend to forget how CSS really works. So mind that the parent div should have position: relative and that :after doesn’t work on img elements. (Look it up, because explaining all this would lead us way too far from the original topic.)
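Putting the pieces together, a minimal self-contained sketch could look like this; the class names match the snippet above, and the image path is just a placeholder:

<style>
  div.image-container {
    position: relative;    /* required: the absolutely positioned pseudo element is anchored to this */
    display: inline-block; /* shrink-wrap the container to the image size */
  }
  div.image-container:after {
    content: '';
    border: 1px solid rgba(0, 0, 0, 0.1);
    position: absolute;
    top: 0;
    right: 0;
    bottom: 0;
    left: 0;
  }
</style>

<div class="image-container">
  <img src="photo.jpg" alt="A photo with a subtle inner border" />
</div>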

# Create beautiful photo galleries with ‘Justified Gallery’

Organizing pictures of different sizes into a gallery can be a hard task, but using ‘Justified Gallery’, you can make good-looking — and responsive — photo galleries in minutes. ‘Justified Gallery’ is written in javascript, and renders the photos the same way as Flickr.

Example of ‘Justified Gallery’

First of all, you need to include the script and stylesheet, which you can find on the project homepage. Since this is a jQuery plugin, you must also include jQuery…

The basic HTML code for the gallery looks like this:

<div id="gallery">
<a href="path/to/image1.jpg">
<img alt="Caption for my image" src="path/to/image1_thumbnail.jpg" />
</a>
<a href="path/to/image2.jpg">
<img alt="Another caption" src="path/to/image2_thumbnail.jpg" />
</a>
...
</div>

After you’ve created the HTML for your gallery,  you need to run justifiedGallery on that div.
Somewhere on the page you put a script tag, or you make a separate javascript file for it.

$(document).ready(function() {
    $("#gallery").justifiedGallery({
        rowHeight : 250,
        lastRow : 'nojustify',
        margins : 5
    });
});

The rowHeight defines the height that ‘Justified Gallery’ will match. Depending on the set of images it may be a bit different:

However, the justification may resize the images, and, as a consequence, the row height may be a little bit different than 160px. This means that the row height is intended as your preferred height, and it is not an exact measure. If you want that the row height remains strictly fixed, you can use the fixedHeight option: this option will crop the images a little bit to make sure that the row height doesn’t change.

lastRow defines how the last row will be handled. If you want empty space after the last image, use nojustify. If you want the last row to fill the whole page width, use justify. You can also hide the row using hide.

Obviously margins sets the margin (in pixels) between the images.

‘Justified Gallery’ is very well documented on the project homepage. Take a look there for the various other options!

# Backup your databases in Git

Storing backups of the database is important for any service on the internet. Git can be the right tool to back up databases.

Like other version control systems, Git tracks changes and will only push the changed parts of files to the remote. So if one line in a one-million-line database dump is changed, we don't need to transfer the whole dump to our backup server. This economization is done by Git's delta compression mechanism.1

### Configuring Git

Generating SSH keys:

$ ssh-keygen -t rsa -C "your_email@example.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]
Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]
Your identification has been saved in /Users/you/.ssh/id_rsa.
Your public key has been saved in /Users/you/.ssh/id_rsa.pub.
The key fingerprint is:
01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db your_email@example.com

If you want to execute the backup script automatically, leave the passphrase blank so SSH won't ask for it. Note that this might be insecure!

Now create a remote Git repository, and add the public key to this Git service, e.g. Github, Gogs, …. Init a new local repo with the SSH remote address, and commit/push an initial commit.

### Backup script

#! /bin/sh

TIMESTAMP=$(date +"%F")
BACKUP_DIR="/home/mathias/backup/mysql_git"
MYSQL_USER="Mathias"
MYSQL_PASSWORD="your_password" # placeholder: set your own MySQL password here
MYSQL=/usr/bin/mysql
MYSQLDUMP=/usr/bin/mysqldump

cd $BACKUP_DIR

echo "Backing up databases"
databases=$($MYSQL --user=$MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema|performance_schema)")

for db in $databases; do
    echo "  > Dumping $db to disk"
    $MYSQLDUMP --force --opt --user=$MYSQL_USER -p$MYSQL_PASSWORD --skip-extended-insert --databases $db > "$BACKUP_DIR/$db.sql"

    echo "  > Saving $db changes on Git server"
    git add $BACKUP_DIR/$db.sql
    git commit -m "$db $(date +"%m-%d-%Y")"
    git push
done

echo "Done"

The script loops through all MySQL databases and dumps each to a .sql file (named after the database). After a database is dumped, the file is added to the local Git repo and committed.

After each commit, the changes are pushed to the remote repo. This avoids having very big pushes to do, when working with large databases. If you want to push only once, just place the push at the end of the script.

### Running the backup script

Running this script manually isn’t the best solution. Making an automated backup service of this is straightforward, just make a cronjob that executes the script every day (or any timespan you want).

Type crontab -e in the console, this will open your personal cron configuration in your favorite editor. Now add the cronjob to the crontab:

30 2 * * * /home/mathias/backup/mysql_git/backup.sh >> /home/mathias/backup_git_cron.log

This particular example will run the backup script every day at 2h30, and append the output of the script to a backup_git_cron.log file in my home directory. (Of course you are absolutely free to create any exotic cronjob that runs the backup script at your desired moment)

### Big data and low-end hardware

Git works very well for small programming source files and small text files. Those database dumps, however,  aren’t always that small. On my VPS I have 200MB of database dumps, which each have to be compressed and packed for every commit. This takes a lot of time on a machine with 512MB ram, and even crashes sometimes on the largest files. While pushing I’ve seen this error way too much: error: pack-objects died of signal 9.

Some other Git users with larger files have reduced the limits regarding packing, which resulted in fewer problems packing those files:

git config --global pack.windowMemory "100m"
git config --global pack.packSizeLimit "100m"


On my server that didn't really seem to work (apparently I still don't have enough free RAM): I noticed fewer problems while compressing the objects, but the large dumps still took quite some time (and the occasional crash).
The solution for me was turning off delta compression.2

echo '*.sql -delta' > .gitattributes

The above command writes the setting to the .gitattributes file. If you commit this file, delta compression for .sql files will be turned off in any clone of the repo.

Another solution would be to migrate from Git to Mercurial. From what I've read, Mercurial stores diffs instead of object packs:

There’s one huge difference between git and mercurial; the way the represent each commit. git represents commits as snapshots, while mercurial represents them as diffs.

Changesets (diffs) advantage is in taking up less space. Git recovers the space used for commits by using compression, but this requires an occasional explicit recompress step (“git pack”).

When the history of the repository becomes too large, it is useful to do a shallow clone of the remote: git clone --depth <depth> <remote-url>. This way you don't keep a large local history, but let the remote keep it.

### Conclusion

Git might not be the perfect system for backups, but using it with care (and good hardware) it can provide a decent backup system.

And it’s always better than having no backup at all!

1 Later in this blogpost I point out that disabling delta compression is better on low-memory machines, because it uses less memory. Note that without delta compression Git needs to send a lot more data to the remote.2

2 Note that disabling delta compression means Git has to push the full packs of large files. So if you change one line of a 500MB file that is packed into a 100MB pack, you will always need to send that 100MB pack to the remote (instead of just 15KB) when delta compression is turned off. Without delta compression Git also needs to store all the objects of the files. After 65 commits (of some large databases), I had a repo of almost 1GB. Running git gc shrank that repo to less than 100MB. (Unfortunately, running git gc on my 512MB RAM server results in those well-known issues: error: pack-objects died of signal 9, warning: suboptimal pack - out of memory and fatal: inflateInit: out of memory.)

# And then there were servers…

As of today I have a Supermicro server at my disposal. It was high time to get my own hardware and start working with it myself!
I went for a server with a power-efficient Intel Atom D510 and 4GB of RAM.

# PHP fragments in Markdown

Ever wanted to use PHP variables and functions in a markdown file and use that file to generate HTML code from the Markdown syntax? Well, it isn’t too hard…

Assume you want to parse the following file as Markdown, after executing the PHP fragment:

<?php echo $var; ?> 
-------------------

Some text...

> Maecenas sed diam eget risus varius blandit sit amet non magna.

If you simply use file_get_contents($file);, it will just show the PHP string instead of replacing it with the value of the variable. So we want to grab the executed code in a variable and use that variable to generate HTML.

This is done using Output Buffering Control in PHP. We start an output buffer, include the file, and close the output buffer. Everything that would normally be outputted, is now stored in the output buffer.

<?php
// start output buffer
ob_start();

// include the markdown file
// working with the output buffer allows us to use php code in the md file.
include $file;

// get the generated content from the output buffer
$md = ob_get_clean();

// parse markdown
$html = md_to_html($md);
?>

Note: If you need newlines in Markdown, you must add a space after the PHP closing tag. I have e.g. added a space after the ?> on the first line to make the markdown parser work.
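The md_to_html() call above stands in for whatever Markdown parser you use. As a sketch, here is the same buffering trick wired up to the Parsedown library (assumptions: Parsedown was installed via Composer, and post.md is a hypothetical Markdown file containing PHP fragments):

<?php
require 'vendor/autoload.php';

// buffer the output of the PHP fragments inside the Markdown file
ob_start();
include 'post.md';
$md = ob_get_clean();

// convert the resulting Markdown to HTML with Parsedown
$parsedown = new Parsedown();
echo $parsedown->text($md);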