Recover Discourse from a backup, adjust domain name

Roughly based on the "Move your Discourse instance to a different server" post, but without using the web UI, because sysadmins prefer the terminal.

IMPORTANT: Make SURE the backup dump was generated from the same Discourse version as the one you'll import it into.

Copy the backup from the host into the shared folder. Assume you uploaded it via SSH to your home directory on the host.

cp ~/snapshot.tar.gz /srv/webplatform/shared/backups/

Note that the folder /srv/webplatform/discuss/shared/standalone/backups/ on the Docker host ends up as /shared/backups/ inside the container.

Enter the container and enable restores with the discourse CLI utility

./launcher enter app
discourse enable_restore

Find the mounted backup file from within the container

ll /shared/backups/default/

Make sure /shared/backups/default/foo.tar.gz is readable by others, then run the restore

chmod o+r /shared/backups/default/foo.tar.gz
discourse restore foo.tar.gz
discourse disable_restore

Remap domain name

discourse remap discourse.specifiction.org discuss.webplatform.org

Then, clean up user uploads and rebake posts to regenerate assets:

rake uploads:clean_up
rake posts:rebake


Thoughts about improving load resiliency for CMS driven Websites

The following is an article I wrote based on a note I sent to the Mozilla Developer Network mailing list about an idea that crossed my mind to help scale a CMS driven website.

I'm convinced I am not the first one who came up with a similar idea, but I thought it would be useful to share anyway.

What’s common on a CMS driven Website?

What affects the page load is what happens in the backend before HTML gets back to the web browser. Sometimes it's about converting the text stored in the database back into HTML, sometimes it's about building specific views, and the list goes on.

The problem is that we spend a lot of CPU cycles serving the same content time and time again, generated as if it were unique to each user, when about 90% of it could be exactly the same for everybody.

What makes a page unique for the current user compared to what an anonymous visitor gets? The problem with web apps is that we make backend servers generate HTML as if it were unique when, in truth, most of it could be cached.

What if we separated out what's common, cached it, and served it to everybody?

How HTTP cache works, roughly

Regardless of which software does it (Squid, Varnish, NGINX, Zeus), caching works the same way.

In the end, the HTTP caching layer keeps the generated HTTP response body in RAM, based on the headers it saw when it passed the original request through. Only GET responses without cookies are cacheable; response bodies coming from PUT, DELETE, or POST requests aren't.
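
As a rough illustration (the URL and the exact headers are hypothetical), you can eyeball whether a given page is cacheable by looking at its response headers:

    # A cacheable page typically shows a Cache-Control max-age and no Set-Cookie header;
    # a Set-Cookie in the output is usually what prevents the caching layer from keeping it.
    curl -sI https://docs.example.org/en/css/flexbox | grep -iE 'cache-control|vary|set-cookie'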

To come back to my previous example, what part is unique to the current user compared to what an anonymous visitor gets on a documentation website page?

What does a website view serve us, then? The content of the page, the "chrome" (what's always there), links to edit or view details for the current page, account settings, the username, and a logout link.

It means we are making unique content for things that could be cached and wasting cycles.

Even then, most of the content is cacheable because it is generally served from the same paths in the CMS.

How about we separate dynamic from static?

This makes me wonder if we could improve site resiliency by leveraging HTTP caching, stripping off any cookies, and factoring out what's unique on a page so that we get the same output in any context.

As for the contextual parts of the site ‘chrome’, how about we expose a context-root which would take care of serving dynamically generated HTML to use as partials.

One way of doing it would be to make that context-root generate simple HTML strings that we can pass to a JavaScript manager that’ll convert it into DOM and inject it in the ‘chrome’.

Since we can isolate cookies to specific context-roots, we can keep the statefulness of the session on MDN and have a clear separation of what's dynamic and what's not.
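
A rough way to verify that kind of split, with hypothetical paths standing in for the cacheable pages and the stateful context-root:

    # Cacheable documentation page: expect no Set-Cookie header at all
    curl -sI https://docs.example.org/docs/Web/CSS | grep -i set-cookie
    # Stateful context-root serving the user partials: a session cookie is expected here
    curl -sI https://docs.example.org/user-context/whoami | grep -i set-cookie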

Managing my PGP/OpenPGP keys and share across many machines

With my previous position just finished, I’m cleaning up my credential data I’ve stored on my computer. While doing so, I needed to update my PGP keys and I learned something that I’m glad existed.

When you use PGP to encrypt and/or sign your messages, you can only read messages sent to you on the machine that has the private key. Right?

The obvious solution would be to copy that private key everywhere: your mobile, servers, laptops.

WRONG!

The problem is that security experts would say you shouldn't do that unless you are certain things are "safe". But what is safe anyway? Once somebody else, or a zombie process, has accessed your machine, it's possible somebody gained access to your keys.

OpenPGP has a way to address this issue: it's called "subkeys". Quoting the gnupg.org manual:

By default, a DSA master signing key and an ElGamal encryption subkey are generated when you create a new keypair. This is convenient, because the roles of the two keys are different, (…). The master signing key is used to make digital signatures (…). The encryption key is used only for decrypting encrypted documents sent to you.

This made me think I should create a subkey, back up my main identity, erase any trace of it from my main computer, then import the new subkey.

That way, I would only need to use the main key to update the data on key servers and to edit my keys.
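
As a rough sketch of that workflow (the key ID is a placeholder, and this deletes the master secret key from the local keyring, so back everything up first):

    # Add a subkey interactively: type "addkey" then "save" at the gpg> prompt
    gpg --edit-key you@example.org
    # Back up the full secret key (master + subkeys) somewhere offline
    gpg --export-secret-keys --armor you@example.org > master-secret-backup.asc
    # Export only the secret subkeys, remove the whole secret key, re-import the subkeys
    gpg --export-secret-subkeys --armor you@example.org > subkeys.asc
    gpg --delete-secret-keys you@example.org
    gpg --import subkeys.asc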

A nice effect of this way of working is that we can revoke a subkey. Since subkeys, signatures and revocations are part of the public key that you sync with keyservers, you can make the compromised subkey invalid.

The compromised key will be able to decrypt messages sent for that private subkey, provided the attacker can also gain access to where the messages are stored, but you won’t lose the most important part of the identity system. You’ll be able to create another subkey and keep the same identity.

I hope I got it right. If I’m wrong, don’t hesitate to tell me and I’ll adjust!

I got to read more about all of this, but I am glad I learned about subkeys.

Here are some notes I found that helped me understand this all better.

Leaving W3C

Two years ago I announced I was joining W3C as a full-time staff to work on the WebPlatform project.

A detail I didn't share was that, like many of my W3C teammates, I was a freelancer attached to one of the W3C host sites; mine was MIT. Like any contract, it has an end date, and by tomorrow, mine will be over.

I’ve spent two amazing years improving the WebPlatform.org website. It was really a dream that came true.

I've worked on many projects, such as improving the server deployment strategy: we can now basically shut down every component of the site and rebuild it from scratch using only source-controlled configuration management scripts.

One of the best things about being part of W3C, even though I was mostly not working within a team, was the great opportunity to collaborate with my wonderful colleagues and my (now former) manager, Doug Schepers.

It's been a pleasure and a privilege to get to work with you all, and I won't forget the great moments, the conversations, the travels, the challenges. But my time is up now; I have to hand in my [ssh] keys.

I hope our paths will cross again.

The W3C Team taken in 2013 in Shenzhen, China.

Photo credits: Taken by Richard Ishida, during TPAC 2013 in Shenzhen, China.
(source: W3C team gallery).

Add OpenStack instance meta-data info in your salt grains

During a work session on my salt-states for WebPlatform.org I wanted to be able to query the OpenStack cluster meta-data so that I can adjust my salt configuration more efficiently.

What are grains? Grains are structured data describing what a minion has, such as which version of GNU/Linux it's running, what the network adapters are, etc.

The following is a Python script that adds data to Salt Stack's internal database called grains.

I have to confess that I didn’t write the script but adapted it to work within an OpenStack cluster. More precisely on DreamHost’s DreamCompute cluster. The original script came from saltstack/salt-contrib and the original file was ec2_info.py to read data from EC2.

The original script wasn't getting any data in the cluster, most likely due to API changes and the fact that the EC2 API exposes dynamic meta-data that the DreamCompute/OpenStack cluster doesn't.

In the end, I edited the file to make it work on DreamCompute and also truncated some data that the grains subsystem already has.

My original objective was to get a list of security-groups the VM was assigned. Unfortunately the API doesn’t give that information yet. Hopefully I’ll find a way to get that information some day.

Get OpenStack instance detail using Salt

Locally

salt-call grains.get dreamcompute:uuid
local:
    10a4f390-7c55-4dd3-0000-a00000000000

Or for another machine

salt app1 grains.get dreamcompute:uuid
app1:
    510f5f24-217b-4fd2-0000-f00000000000

What size did we create a particular VM?

salt app1 grains.get dreamcompute:instance_type
app1:
    lightspeed

What data you can get

Here is a sample of the grain data that will be added to every salt minion you manage.

You might notice that some data will be repeated such as the ‘hostname’, but the rest can be very useful if you want to use the data within your configuration management.

dreamcompute:
    ----------
    availability_zone:
        iad-1
    block_device_mapping:
        ----------
        ami:
            vda
        ebs0:
            /dev/vdb
        ebs1:
            vda
        root:
            /dev/vda
    hostname:
        salt.novalocal
    instance_action:
        none
    instance_id:
        i-00000000
    instance_type:
        lightspeed
    launch_index:
        0
    local_ipv4:
        10.10.10.11
    name:
        salt
    network_config:
        ----------
        content_path:
            /content/0000
        name:
            network_config
    placement:
        ----------
        availability_zone:
            iad-1
    public_ipv4:
        203.0.113.11
    public_keys:
        ----------
        someuser:
            ssh-rsa ...an rsa public key... [email protected]
    ramdisk_id:
        None
    reservation_id:
        r-33333333
    security-groups:
        None
    uuid:
        10a4f390-7c55-4dd3-0000-a00000000000

What does the script do?

The script basically scrapes the OpenStack meta-data service and serializes the data it gets into the Salt Stack grains system.

OpenStack’s meta-data service is similar to what you’d get from AWS, but doesn’t expose exactly the same data. This is why I had to adapt the original script.

To get data from an instance you simply (really!) need to make an HTTP call to an internal IP address that OpenStack nova answers.

For example, from an AWS/OpenStack VM, you can know the instance hostname by doing

curl http://169.254.169.254/latest/meta-data/hostname
salt.novalocal

To see which endpoints the script calls, you can add a line to the _call_aws(url) method like so

diff --git a/_grains/dreamcompute.py b/_grains/dreamcompute.py
index 682235d..c3af659 100644
--- a/_grains/dreamcompute.py
+++ b/_grains/dreamcompute.py
@@ -25,6 +25,7 @@ def _call_aws(url):

     """
     conn = httplib.HTTPConnection("169.254.169.254", 80, timeout=1)
+    LOG.info('API call to ' + url )
     conn.request('GET', url)
     return conn.getresponse()

When you run saltutil.sync_all (i.e. refresh grains and other data), the log will tell you which endpoints it queried.

In my case they were:

[INFO    ] API call to /openstack/2012-08-10/meta_data.json
[INFO    ] API call to /latest/meta-data/
[INFO    ] API call to /latest/meta-data/block-device-mapping/
[INFO    ] API call to /latest/meta-data/block-device-mapping/ami
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs0
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs1
[INFO    ] API call to /latest/meta-data/block-device-mapping/root
[INFO    ] API call to /latest/meta-data/hostname
[INFO    ] API call to /latest/meta-data/instance-action
[INFO    ] API call to /latest/meta-data/instance-id
[INFO    ] API call to /latest/meta-data/instance-type
[INFO    ] API call to /latest/meta-data/local-ipv4
[INFO    ] API call to /latest/meta-data/placement/
[INFO    ] API call to /latest/meta-data/placement/availability-zone
[INFO    ] API call to /latest/meta-data/public-ipv4
[INFO    ] API call to /latest/meta-data/ramdisk-id
[INFO    ] API call to /latest/meta-data/reservation-id
[INFO    ] API call to /latest/meta-data/security-groups
[INFO    ] API call to /openstack/2012-08-10/meta_data.json
[INFO    ] API call to /latest/meta-data/
[INFO    ] API call to /latest/meta-data/block-device-mapping/
[INFO    ] API call to /latest/meta-data/block-device-mapping/ami
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs0
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs1
[INFO    ] API call to /latest/meta-data/block-device-mapping/root
[INFO    ] API call to /latest/meta-data/hostname
[INFO    ] API call to /latest/meta-data/instance-action
[INFO    ] API call to /latest/meta-data/instance-id
[INFO    ] API call to /latest/meta-data/instance-type
[INFO    ] API call to /latest/meta-data/local-ipv4
[INFO    ] API call to /latest/meta-data/placement/
[INFO    ] API call to /latest/meta-data/placement/availability-zone
[INFO    ] API call to /latest/meta-data/public-ipv4
[INFO    ] API call to /latest/meta-data/ramdisk-id
[INFO    ] API call to /latest/meta-data/reservation-id
[INFO    ] API call to /latest/meta-data/security-groups

It's quite heavy.

Hopefully the script respects HTTP headers and doesn't bypass 304 Not Modified responses; otherwise it'll add load to nova. Maybe I should check that (note to self).

Install

You can add this feature by adding a file in the _grains/ folder of your salt states repository. The file can have any name ending in .py.

You can grab the grain python code in this gist.
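
To give an idea of the deployment flow (assuming your file_roots is /srv/salt and the file is named dreamcompute.py; adjust to your layout):

    mkdir -p /srv/salt/_grains
    cp dreamcompute.py /srv/salt/_grains/
    # Push the custom grain module to the minions and verify
    salt '*' saltutil.sync_grains
    salt '*' grains.get dreamcompute:uuid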

enjoy!

Converting a dynamic site into static HTML documents

Twice now I've been asked to take a website running on a CMS and make it static.

This is a useful practice if you want to keep the site content for posterity without having to maintain the underlying CMS. It also makes migrations easier, since a site you know you won't add content to anymore becomes simply a bunch of HTML files in a folder.

My end goal was to make an EXACT copy of what the CMS generates, but stored as simple HTML files. When I say EXACT, I mean it: documents keep their original locations even though they are now static files. Each link in the HTML keeps its original value, but a matching file exists and the web server finds it. For example, if a link points to /foo, the link in the page remains as-is even though the content is now a static file at /foo.html, and the web server serves /foo.html anyway.

Here are the steps I took to achieve just that. Your mileage may vary; these steps worked for me. I've done it once for a WordPress blog and another time for the [email protected] website that was running on ExpressionEngine.

Steps

1. Browse and get all pages you think could be lost in scraping

We want a simple file with one web page per line with its full address.
This will help the crawler to not forget pages.

  • Use a web browser developer tool Network inspector, keep it open with “preserve log”.
  • Once you browsed the site a bit, from the network inspector tool, list all documents and then export using the “Save as HAR” feature.
  • Extract urls from har file using underscore-cli

    npm install underscore-cli
    cat site.har | underscore select '.entries .request .url' > workfile.txt

  • Remove the first and last lines (it's a JSON array and we want one document per line)

  • Remove the hostname from each line (i.e. lines start with /path); in vim you can do %s/http:\/\/www\.example\.org//g
  • Remove the " and ", from each line; in vim you can do %s/",$//g
  • On the last line, make sure the trailing " is removed too, because the previous regex missed it
  • Remove duplicate lines; in vim you can do :sort u
  • Save this file as list.txt for the next step (a non-interactive sketch of this whole cleanup follows after this list).
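
For reference, here is a non-interactive sketch of the same cleanup (www.example.org is a placeholder hostname; the sed expressions mirror the vim substitutions above):

    cat site.har | underscore select '.entries .request .url' \
      | sed -e '/^\[$/d' -e '/^\]$/d' \
            -e 's/^ *"//' -e 's/",$//' -e 's/"$//' \
            -e 's|http://www\.example\.org||' \
      | sort -u > list.txt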

2. Let’s scrape it all

We’ll do two scrapes. First one is to get all assets it can get, then we’ll go again with different options.

The following are the commands I ran on the last successful attempt to replicate the site I was working on.
This is not a statement that this method is the most efficient technique.
Please feel free to improve the document as you see fit.

First a quick TL;DR of wget options

  • -m is the same as --mirror
  • -k is the same as --convert-links
  • -K is the same as --backup-converted which creates .orig files
  • -p is the same as --page-requisites, which makes wget fetch ALL of a page's requirements
  • -nc ensures we don't download the same file twice and end up with duplicates (e.g. file.html AND file.1.html)
  • --cut-dirs would prevent creating directories and mix things up; do not use it.

Notice that we're sending headers as if we were a web browser. It's up to you.

wget -i list.txt -nc --random-wait --mirror -e robots=off --no-cache -k -E --page-requisites \
     --user-agent='User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.94 Safari/537.36' \
     --header='Accept-Language: fr-FR,fr;q=0.8,fr-CA;q=0.6,en-US;q=0.4,en;q=0.2' \
     --header='Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' \
     http://www.example.org/

Then, another pass

wget -i list.txt --mirror -e robots=off -k -K -E --no-cache --no-parent \
     --user-agent='User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.94 Safari/537.36' \
     --header='Accept-Language: fr-FR,fr;q=0.8,fr-CA;q=0.6,en-US;q=0.4,en;q=0.2' \
     --header='Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' \
     http://www.example.org/

3. Do some cleanup on the fetched files

Here are a few commands I ran to clean the files a bit

  • Strip carriage returns from every .orig file. They're the ones we'll use in the end, after all

    find . -type f -regextype posix-egrep -regex '.*\.orig$' -exec sed -i 's/\r//' {} \;
    
  • Rename the .orig files to .html

    find . -name '*orig' | sed -e "p;s/orig/html/" | xargs -n2 mv
    find . -type f -name '*\.html\.html' | sed -e "p;s/\.html//" | xargs -n2 mv
    
  • Many folders might have only an index.html file in them. Let's replace each such directory with a single file

    find . -type f -name 'index.html' | sed -e "p;s/\/index\.html/.html/" | xargs -n2 mv
    
  • Remove files that have a .1 (or any number) in them; they are most likely duplicates anyway

    find . -type f -name '*\.1\.*' -exec rm -rf {} \;
    

Run a NodeJS process through forever from within a Docker container

One of the components I had to manage, Publican, has many moving parts. The end product of that component is basically static HTML documents that end up on specs.webplatform.org.

Since we need many packages installed at very specific versions, and automating the installation wouldn't bring any more benefit than being self-contained, I thought it would be best to convert it into a Docker container.

The following is a procedure I wrote to teach my colleague, Robin Berjon, how to run his system called Publican from within a Docker container. Publican is basically a GitHub hook listener that generates specs written to be parsed by ReSpec or Bikeshed

Run publican inside Docker

What this'll do is basically build a VM that'll run a Docker container. The container will write files outside of itself.

You'll quickly notice that the paths look the same inside and outside the container; it's confusing, sorry about that. Fortunately for us, the paths in the procedure are the ones mounted as Docker volumes (the -v option when you call docker) and will, in the end, be the same files.

Once you have a Docker container running on a VM, it'll replicate how a production VM will run the tasks. Since we know where the container will write files, we'll have our frontend servers forward requests to Publican and serve the files it generated.

Doing all this removes the need for any rsync. NGINX within the VM that runs Docker will take care of serving static files, and the frontend servers will expose them to the public.

Steps

  1. Have Vagrant and VirtualBox installed

  2. Follow what’s in renoirb/salt-basesystem README.md

  • Make sure you follow Vagrant Sandbox utilities part

        vagrant ssh
        sudo salt-call state.highstate
        sudo salt-call state.sls vagrantsandbox.docker
        exit
    
  • Reboot the VM by doing vagrant reload

        vagrant reload
    
  3. No need to follow what's in the webplatform/publican DOCKER.md file. Those are notes on how to build the container. This time, we'll use a container I already built and pushed to Docker Hub!

  4. Set up what's required to run the container

        vagrant ssh
    
  • Prepare the folders;

        sudo -s
        su webapps
        id
    
  • You should see

        uid=990(webapps) gid=990(webapps) groups=990(webapps),33(www-data),998(docker)
    
  • Prepare the folders

        cd /srv/webapps
        mkdir publican/data
        cd publican
    
    • If all went well so far, you should be able to run docker ps as the webapps user. Otherwise, reboot and/or run salt-call with both the state.highstate and state.sls vagrantsandbox.docker states. There should be nothing left to do.

      docker ps
      CONTAINER ID        IMAGE                   COMMAND                CREATED...
      
  • Pull the publican Docker image I built (it’ll take about 10 minutes)

        docker pull webspecs/publican:wip
    
  5. Copy the other files in this Gist to your local computer, where you cloned the salt-basesystem repository. From that folder you can move them inside the Vagrant VM where needed.
  • Copy publican config

        cp /vagrant/config.json data/
    
  • Download the Bikeshed data (I haven't figured out yet which parts are important to keep) and extract it in /srv/webapps/publican/spec-data/

        wget https://renoirboulanger.com/spec-data.tar.bz2
        tar xfj spec-data.tar.bz2
    
  • You can open up another terminal session and connect to the Vagrant VM with vagrant ssh (e.g. if you don't use tmux or screen)

        mkdir -p spec-data/readonly/
        mkdir -p data/{gits,logs,publish,queue,temp}
    
  6. Run the container

        docker run -it --rm -v "$(pwd)/data":/srv/webapps/publican/data \
                   -v "$(pwd)/spec-data":/opt/bikeshed/bikeshed/spec-data \
                   -p 7002:7002 webspecs/publican:wip
    
  • If you see the following, you’re in the Docker container!!

        webapps@<container id>:~$
    
  • Initiate the empty shell we just created (it’ll create stuff in the data/ folder outside of the container)

        publican.js init
    
  • It should look like this

    (screenshot: publican-init)

  • Once done, exit the container. Notice that by doing this, you lose the state of the container and anything that was written inside it. But since we use volumes (notice the -v /host/path:/container/path), we actually wrote outside of the container.

  • We can exit the container

        exit
    
  • At this stage, we had Publican and Bikeshed generate files (we may call this a "cache warmup" of sorts). Now, let's prepare the Vagrant VM to serve the static content. Notice that the next commands are only for a local workspace; in production this step will also be managed automatically.

  • Let’s get back as the root user, and create a quick web server;

        exit
        apt-get -yqq install nginx
        mv /vagrant/default.conf /etc/nginx/sites-available/default
        service nginx restart
    
  • Let's switch back to the webapps user and launch the runner

        su webapps
        cd /srv/webapps/publican/
    
  • Launch the container; this will also be managed automatically in production.

        docker run -it --rm -v "$(pwd)/data":/srv/webapps/publican/data \
                       -v "$(pwd)/spec-data":/opt/bikeshed/bikeshed/spec-data \
                       -p 7002:7002 webspecs/publican:wip bin/run.sh
    

    It should look like this

    (screenshot: publican-run-hook)

  • get your Vagrant VM IP address

        ifconfig
    
  • It should start with 172... or 192...; visit that address in a browser

Gists

Here are the files mentioned in this post

config.json

Publican expects this file as data/config.json.

{
    "bikeshed":     "/opt/bikeshed/bikeshed.py"
,   "rsyncPath":    "/srv/webapps/publican/"
,   "python":       "python2"
,   "logFile":      "logs/all.log"
,   "email":        {
        "to":       "[email protected]"
    ,   "from":     "[email protected]"
    ,   "host":     "localhost"
    ,   "level":    "error"
    ,   "handleExceptions": true
    }
,   "purgeAllURL":  "https://api.fastly.com/service/fooo/purge_all"
,   "purgeAllKey":  "baar"
}

default.conf

A minimal NGINX web server serving the static content that Publican generates.

# file: /etc/nginx/sites-enabled/default

server {
  listen 80 default_server;
  root /srv/webapps/publican/data/publish;
  index index.html index.htm;
  server_name localhost;
  location / { try_files $uri $uri/ =404; }
}

Dockerfile

Here is the project’s Dockerfile I created. I think it should be smaller, but Publican works with the following script.

Each step in a Dockerfile creates a layer ("commit"); make sure you have as few of them as possible, and also make sure you clean up after yourself. Remember that a Docker container is re-deployable: the smaller the container, the better!

Notice a few details;

  • ENV DEBIAN_FRONTEND=noninteractive helps with dialogs
  • USER webapps tells from "where" the rest of the script will run commands, as a different user than root. Make sure whatever requires root is done before that point!
  • COPY ...: this is basically how you import content into the container (i.e. it makes the container heavier)

#
# Publican Docker runner
#
# See also:
#   * https://github.com/nodesource/docker-node/blob/master/ubuntu/trusty/node/0.10.36/Dockerfile

FROM nodesource/trusty:0.10.36

MAINTAINER Renoir Boulanger <[email protected]>

ENV DEBIAN_FRONTEND=noninteractive

# Dependencies: Bikeshed, PhantomJS, Bikeshed’s lxml
RUN apt-get update && apt-get -y upgrade && \
    apt-get install -yqq git python2.7 python-dev python-pip libxslt1-dev libxml2-dev zlib1g-dev && \
    apt-get install -yqq libfontconfig1 libfreetype6 curl && \
    apt-get autoremove -yqq --purge && \
    pip install --upgrade lxml

# Copy everything we have locally into the container
# REMINDER: Make sure you run `make clone-bikeshed`, we prefer to keep a copy locally outside
# of the data volume. Otherwise it would make problems saying that bikeshed clone is not in the
# same filesystem.
COPY . /srv/webapps/publican/

# Make sure we have a "non root" user and
# delete any local workbench data/ directory
RUN /usr/sbin/groupadd --system --gid 990 webapps && \
    /usr/sbin/useradd --system --gid 990 --uid 990 -G sudo --home-dir /srv/webapps --shell /bin/bash webapps && \
    sed -i '/^%sudo/d' /etc/sudoers && \
    echo '%sudo ALL=NOPASSWD: ALL' >> /etc/sudoers && \
    mv /srv/webapps/publican/bikeshed /opt && \
    rm -rf data && \
    mkdir -p data/temp && \
    rm -rf Dockerfile Makefile .git .gitignore DOCKER.md && \
    chown -R webapps:webapps /srv/webapps/publican && \
    chown -R webapps:webapps /opt/bikeshed

# Switch from root to webapps system user
# It **HAS to be** the SAME uid/gid as the owner on the host path we’ll use as a volume
USER webapps

# Where the session will start from
WORKDIR /srv/webapps/publican

# Environment variables
ENV PATH /srv/webapps/publican/node_modules/.bin:/srv/webapps/publican/bin:/srv/webapps/publican/.local/bin:$PATH
ENV HOME /srv/webapps/publican
ENV TMPDIR /srv/webapps/publican/data/temp
ENV NODE_ENV production
ENV GIT_DISCOVERY_ACROSS_FILESYSTEM true

# Run what `make deps` would do
RUN pip install --upgrade --user --editable /opt/bikeshed && \
    mkdir -p node_modules && npm install

# Declare which port we expect to expose
EXPOSE 7002

# Allow cli entry for debug, but make sure docker-compose.yml uses "command: bin/run.sh"
ENTRYPOINT ["/bin/bash"]

# Note leftover: Ideally, it should exclusively run
#ENTRYPOINT ["/bin/bash", "/srv/webapps/publican/bin/run.sh"]

# Note leftover: What it ends up doing
#CMD ["node_modules/forever/bin/forever", "--fifo", "logs", "0"]

Forever start script

If you look at the Docker run command, you'll notice I call a file bin/run.sh; here is the command again, and the script itself follows below.

docker run -it --rm -p 7002:7002 \
           webspecs/publican:latest bin/run.sh

Publican runs its process using Forever. The objective of Forever is to keep a process running at all times.

While this isn't ideal for NodeJS services, in the present use case of a Docker container whose only purpose is to run a process, Forever is apt for the job!

#!/bin/bash

export RUNDIR="/srv/webapps/publican"

cd $RUNDIR

node_modules/forever/bin/forever start $RUNDIR/bin/server.js
node_modules/forever/bin/forever --fifo logs 0

More to come

I have more notes to put up, but not enough time to give more context. Come back later for more!

Make Discourse “long polling” work behind Fastly

While working on deploying Discourse, I came across a statement that took me some time to understand. Discourse has a subsystem similar to WebSockets and Server-Sent Events that takes care of updating the page asynchronously. Note that this post is the canonical version of my answer.

The confusing part was;

@sam
To server “long polling” requests from a different domain, set the Site Setting long polling base url to the origin server.
For example, if your CDN is pulling from “http://some-origin.com” be sure to plug in http://some-origin.com/ into the site setting. If you don’t your site will be broken.

This post is about clarifying why and how to work around this particular problem.

The confusing part is the "http://some-origin.com/" bit. If you are behind Fastly, you have to use a CNAME entry, which means you need a subdomain name and not the top-level (apex) domain.

Background: in DNS, a zone apex (i.e. "some-origin.com") can only have A records, not a CNAME. Since Fastly requires us to use a CNAME entry, we have no choice but to use a subdomain name.

Let’s say that we will then use “http://discourse.some-origin.com/” to serve our Discourse forum so we can use Fastly.

Now there's this thing called "long polling", which is basically an HTTP request the server holds open for a long time before returning anything. If we send it through the Fastly or Varnish address, as Discourse would by default, Varnish will time out and "long polling" won't work.

More background: Varnish has an option to bypass the cache in known contexts through vcl_pipe, which is roughly a raw TCP socket. But Fastly doesn't offer it because of the size of their setup.

Proposed setup

Let's enable long polling and expose our site under Fastly. We'll need two names, one pointing to Fastly's edge and the other to the IP addresses we configure within the service dashboard.

  1. discourse.some-origin.com that’s our desired Discourse site domain name
  2. discoursepolling.some-origin.com (pick any name) that we'll configure in Discourse to hit our public-facing frontend web servers directly

In my case, I generally have many web apps running that are only accessible from my internal network. I refer to them as "upstreams", the same term NGINX uses in its config. Since the number of web apps hosted on a site can fluctuate, you might still want the public IP addresses to remain stable. That's why I set up NGINX servers in front that proxy to the internal web app servers. I refer to those as "frontends".

Let’s say you have two public facing frontends running NGINX.

Those would be the ones you setup in Fastly like this.

Fastly service configuration, at Hosts tab

Here we see two backends in the Fastly panel at Configure -> Hosts.

Notice that in this example I'm using port 443 because my backends are configured so Fastly and my frontends communicate over TLS. But you don't need to.

Quoting again @sam;

@sam
To server “long polling” requests from a different domain, set the Site Setting long polling base url to the origin server.

What this really means is that we have to point the Discourse setting at one of those frontend addresses.

What I’d recommend is to create a list of A entries for all your frontends.

In the end we need three things:

  1. What’s the public name that Fastly will serve
  2. Which IPs are the frontends
  3. Which hostname we want to use for long polling and we’ll add it to our VirtualHost

The zone file would look like this;

# The public facing URL
discourse.some-origin.com.  IN CNAME global.prod.fastly.net.

# The list of IP addresses you’d give to Fastly as origins/backends
frontends.some-origin.com.  IN A 8.8.8.113
frontends.some-origin.com.  IN A 8.8.8.115

# The long polling URL entry
discoursepolling.some-origin.com.  IN CNAME frontends.some-origin.com.

That way you can setup the “long polling base url” correctly without setting a single point of failure.
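
Once the zone is published, a quick sanity check from the terminal (using the example names above) could look like this:

    dig +short discoursepolling.some-origin.com   # should list the frontend A records
    dig +short discourse.some-origin.com          # should resolve through Fastly's edge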

In Discourse admin, adjust long polling base url setting

Then, we can go to the Discourse admin area and adjust the "long polling base url" setting to our other domain name. On the frontends, the NGINX virtual host serves both hostnames:

# /etc/nginx/sites-enabled/10-discourse

# Let’s redirect to SSL, in case somebody tries to access the direct IP with
# host header.
server {
    listen      80;
    server_name discoursepolling.some-origin.com discourse.some-origin.com;
    include     common_params;
    return      301 https://$server_name$request_uri;
}

server {
    listen      443 ssl;
    server_name discoursepolling.some-origin.com discourse.some-origin.com;
    # Rest of NGINX server block
    # Also, I would add a condition to reject requests on discoursepolling
    # that aren't for anything specific to polling.
    # #TODO; find paths specific to polling
}

To see if it works, look at your web browser's developer tools "Network inspector" for /poll calls on discoursepolling.some-origin.com, and check that you get a 200 OK status code.

Use your web browser developer tools and inspect network traffic to see if requests made to discoursepolling worked

Note that this screenshot shows webplatform.org, but that's beside the point I'm trying to illustrate.

Hope this helped.

Setting up Discourse with Fastly as a CDN provider and SSL

The following is a copy of what I published in answer to a question on meta.discourse.org about "Enable a CDN for your Discourse", while working on discuss.webplatform.org.

Setup detail

Our setup uses Fastly and leverages their SSL feature. Note that in order for you to use SSL too, you'll have to contact them to have it enabled on your account.

SEE ALSO this post about Make Discourse “long polling” work behind Fastly. This step is required and is a logical next step to this procedure.

In summary;

  • SSL between users and Fastly
  • SSL between Fastly and the "frontend" servers (those are the IPs we put into the Fastly hosts configuration, also referred to as "origins" or "backends" in CDN-speak)
  • Docker Discourse instance (“upstream“) which listens only on private network and port (e.g. 10.10.10.3:8000)
  • More than two publicly exposed web servers (“frontend“), with SSL, that we use as “backends” in Fastly
  • Frontend servers running NGINX with an upstream block proxying to the internal upstream web servers that the Discourse Docker instances provide
  • We use NGINX's upstream keepalive in the frontend to make sure we minimize connections

Using this method, if we need to scale, we only need to add more internal Discourse Docker instances and add the matching NGINX upstream entries.

Note that I recommend using direct private IP addresses instead of internal names. It removes complexity and the need to rewrite Host: HTTP headers.

Steps

Everything is the same as a basic Fastly configuration; refer to their documentation to set up your domain.

Here are the differences;

  1. Set up your domain name with the CNAME Fastly will provide you (you will have to contact them for your account, though); ours looks like this:

    discuss.webplatform.org.  IN  CNAME  webplatform.map.fastly.net.
    
  2. In the Fastly panel at Configure -> Hosts, we list the publicly available frontend IPs

    Notice we use port 443, so SSL is between Fastly and our frontends. Also, you can setup Shielding (which is how you activate the CDN behavior within Fastly) by enabling it on only one. I typically set it on the one I call “first”.

    Fastly service configuration, at Hosts tab

  3. In the Fastly panel at Configure -> Settings -> Request Settings, we make sure we forward the X-Forwarded-For header. (You DON'T actually need this; you can remove it.)

    Fastly service configuration, at Settings tab

  4. Frontend NGINX server has a block similar to this.

    In our case, we use Salt Stack as the configuration management system; it generates the virtual hosts for us using the Salt reactor system. Every time a Docker instance becomes available, the configuration is rewritten using this template.

    • {{ upstream_port }} would be at 8000 in this example

    • {{ upstreams }} would be an array of current internal Docker instances, e.g. ['10.10.10.3','10.10.10.4']

    • {{ tld }} would be webplatform.org in production, but can be anything else we need in other deployment, it gives great flexibility.
    • Notice the use of discoursepolling alongside the discourse subdomain name. Refer to this post about Make Discourse “long polling” work behind Fastly to understand its purpose

      upstream upstream_discourse {
      {%- for b in upstreams %}
          server    {{ b }}:{{ upstream_port }};
      {%- endfor %}
          keepalive 16;
      }
      
      server {
          listen      443 ssl;
          server_name discoursepolling.{{ tld }} discourse.{{ tld }};
      
          root    /var/www/html;
          include common_params;
          include ssl_params;
      
          ssl                 on;
          ssl_certificate     /etc/ssl/2015/discuss.pem;
          ssl_certificate_key /etc/ssl/2015/201503.key;
      
          # Use internal Docker runner instance exposed port
          location / {
              proxy_pass             http://upstream_discourse;
              include                proxy_params;
              proxy_intercept_errors on;
      
              # Backend keepalive
              # ref: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
              proxy_http_version 1.1;
              proxy_set_header Connection "";
          }
      }
      

    Note that I removed the include proxy_params; line. If you have lines similar to proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;, you don’t need them (!)

How to run your own OAuth Identity provider service

Generally, we connect our application to a provider so it can share details about a user. Most of the documentation you'll find online explains how to use their service, but very little outlines concisely what it takes to be your own provider and share state across applications you also manage.

Documentation generally shows how to let users of your third-party app use their information from another site such as Facebook, GitHub, Twitter, etc.

But what if you wanted to share information across your web applications in a similar way?

This post is a quick summary of how things works so you can get acquainted with the basics.

Whitelist

Big sites generally aren't one big code repository but a set of separate components. A way to make each component share your account details is to distinguish between their own infrastructure and third parties.

If your app were to use an external resource such as Google, the process would end up asking Google users whether they really want to share their information with you. This is why they would get a dialog similar to this:

(screenshot: OAuth consent dialog)

While it's OK to ask a user for confirmation before sharing their details with an external site, two components of the same site can share this information transparently.

If you are your own identity provider, you can configure your relying parties as "whitelisted" so that your accounts system doesn't display such a dialog.

Becoming your own Identity provider

In the case of WebPlatform.org, we wanted to become our own identity provider and found that deploying our own fork of Firefox Accounts ("FxA") would allow us to do so.

The way it's designed is that we have an OAuth-protected "profile" endpoint that holds user details (email, full name, etc.) as a "source of truth". Each relying party (your own wiki, discussion forum, etc.) gathers information from it and ensures it has the same information locally.

In order to do so, we have to make a module/plugin for each web application so it can query the accounts system and bootstrap users locally. We call those "relying parties".

Once a relying party adapter is in place, a web app can check with the accounts server by itself to see if there's a session for the current browser. If there is, the accounts server will either hand over an already generated OAuth Bearer token or generate one for the service in question; that's the "SSO" behavior.

With the OAuth Bearer token in hand, a relying party (i.e. the WebPlatform.org annotation service) can go read details from the “profile” endpoint and then check locally if it already has a user available.
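
To illustrate (the endpoint URL and token below are made up for the example), the relying party's call boils down to a plain HTTP request carrying the Bearer token:

    # Hypothetical profile endpoint; expect a JSON document with the user's uid, email, display name, etc.
    curl -s -H 'Authorization: Bearer 0123456789abcdef' https://profile.accounts.example.org/v1/profile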

If the relying party doesn’t have a user, it’ll create one.

Each relying party is responsible to sync its local user and session state based on what the accounts service gives.

Upgrade to Python 2.7.9 on Ubuntu 14.04 LTS, and make your own .deb package for deployment

I had this post hanging in my drafts on how I attempted to install a valid Python 2.7.9 runtime environment on Ubuntu 14.04 and make my own .deb package for easy re-deployment.

IMPORTANT This procedure isn’t complete as I had to shift focus elsewhere. I might rework this article to adjust what’s missing.

While I understand that Ubuntu 14.04 will keep using Python 2.7.6 internally, applications we run can be configured to use another Python environment. It's what virtualenv is all about, after all, isn't it?

This post attempts to install Python 2.7.9 and make an installable .deb package of it, meant to be used by web applications without touching the system's Python runtime.

Why not replace the internal Python version?

The reason is simple: if you replace the internal Python version, other software within the OS will have broken dependencies.

I realized this when I wanted to upgrade the version and broke a hard dependency I have on Salt Stack. Since many components within a given Ubuntu version rely on Python, it could break anything else too. This is why I stopped working on the idea of replacing it internally and instead configured virtualenv to use another version.

If you see procedures telling you to use update-alternatives to replace python, don't do it! Instead, learn how to run your own Python version in virtualenv.

Procedure

  1. Install build dependencies

    These were the ones I last ran before a successful build on Ubuntu 14.04 LTS; if you aren't using the same distribution, you might need a different list.

      apt-get install -y gcc-multilib g++-multilib libffi-dev libffi6 libffi6-dbg python-crypto python-mox3 python-pil python-ply libssl-dev zlib1g-dev libbz2-dev libexpat1-dev libbluetooth-dev libgdbm-dev dpkg-dev quilt autotools-dev libreadline-dev libtinfo-dev libncursesw5-dev tk-dev blt-dev libssl-dev zlib1g-dev libbz2-dev libexpat1-dev libbluetooth-dev libsqlite3-dev libgpm2 mime-support netbase net-tools bzip2
    
  2. Get Python sources and compile

    wget https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz
    tar xfz Python-2.7.9.tgz
    cd Python-2.7.9/
    ./configure --prefix /usr/local/lib/python2.7.9 --enable-ipv6
    make
    make install
    
  3. Test if the version works

    /usr/local/lib/python2.7.9/bin/python -V
    Python 2.7.9
    
  4. Then prepare package through FPM

    apt-get install -y ruby-dev gcc
    gem install fpm
    ...
    

It's basically about creating a .deb based on your new runtime setup. You can do that by using fpm ("Fabulous Package Manager"); I used this technique in a post I published recently about installing a PHP library.
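
For what it's worth, here is a sketch of what that fpm invocation could look like for this build (the package name, version and dependency list are my own placeholders; adjust to your setup):

    fpm -s dir -t deb -n python2.7.9-custom -v 2.7.9 \
        --description 'Python 2.7.9 built under /usr/local/lib/python2.7.9' \
        -d libssl1.0.0 -d zlib1g \
        /usr/local/lib/python2.7.9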

Incomplete scratchpad

But that's as far as my notes go for now. Sorry about that.

Setuptools

As recommended in the Setuptools instructions, we can run easy_install through a wget, like so;

  wget https://bootstrap.pypa.io/ez_setup.py -O - | /usr/local/lib/python2.7.9/bin/python
  /usr/local/lib/python2.7.9/bin/easy_install pip
  /usr/local/lib/python2.7.9/bin/pip install virtualenv

Then, create symbolic links

  ln -s /usr/local/lib/python2.7.9/bin/easy_install /usr/bin/easy_install
  ln -s /usr/local/lib/python2.7.9/bin/pip /usr/bin/pip

You can try if it worked

  pip list
  pip (6.0.8)
  setuptools (14.3)
  virtualenv (12.0.7)

Resources

  • http://davebehnke.com/python-pyenv-ubuntu.html
  • https://github.com/yyuu/pyenv
  • https://www.python.org/downloads/release/python-279/
  • http://aboutsimon.com/2012/04/16/building-a-python-deb-in-a-bootstrapped-ubuntu-chroot-with-fpm/
  • http://serverfault.com/questions/669859/how-can-i-upgrade-python-to-2-7-9-on-ubuntu-14-4
  • http://askubuntu.com/questions/101591/how-do-i-install-python-2-7-2-on-ubuntu
  • https://wiki.debian.org/Debootstrap
  • http://www.stylesen.org/python_27_debian_squeeze_60

Install Discourse and Docker on Ubuntu 14.04 with aufs enabled

While working to run Discourse on Docker I was getting numerous errors about a mount problem related to device mapper.

If you are using Ubuntu 14.04 LTS, want to solve the problem with aufs, and use Salt Stack to control your VMs, here’s how I did it.

After digging for some time, I found others who also had problems running Discourse in Docker, and learned that we had to use "aufs".

I had no idea that we could use different mount subsystems in Docker.

So I had to read some more on how to find out if we have aufs (whatever that is), and how to tell whether or not Docker is configured to use it.

The issue docker/docker#10161 proved very helpful in figuring out what my problem was. I never thought I could tell Docker to use another storage engine anyway. In the end, all you have to do is add a -s option to change the storage engine.

Turns out that aufs is a Kernel module, and that latest Ubuntu 14.04 has it but not in the default kernel package. You have to make sure that linux-image-extra-virtual and aufs-tools are installed.
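
A few quick checks I'd run to confirm the pieces are in place (a sketch; output varies by host):

    modinfo aufs | head -n 1            # is the aufs kernel module available?
    grep aufs /proc/filesystems         # is it registered as a filesystem?
    docker info | grep -i 'storage'     # which storage driver is Docker currently using?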

Documentation and notes recommend downloading and running a shell script to install Docker instead of using the distribution packages. But I refuse to follow this advice because I already manage every component of the infrastructure via Salt Stack. I just needed to put the right pieces together.

Since those changes are specific to Discourse, I didn't want to make a pull request to saltstack-formulas/docker-formula right away; I'd need to follow the Salt formula conventions and add more logic for my PR to be usable by other server runtimes too. Not something I had in my plans today.

Procedure

On the salt master, make sure you have the following:

  1. Add saltstack-formulas/docker-formula to your gitfs entries

    # /etc/salt/master.d/gitfs.conf
    fileserver_backend:
      - roots
      - git
    
    gitfs_remotes:
      - https://github.com/saltstack-formulas/docker-formula.git
    
  2. Ensure you have docker in your base top file

    # /srv/salt/top.sls
    base:
      'backend-discourse*':
        - discourse
    
  3. Add those lines in /srv/salt/discourse/init.sls

    # https://github.com/saltstack-formulas/docker-formula
    {% set kernelrelease = salt['grains.get']('kernelrelease') -%}
    {%- set dir = '/srv/webplatform/discourse' -%}
    
    include:
      - docker
    
    /etc/default/docker:
      file.managed:
        - contents: |
            # Docker Upstart and SysVinit configuration file
            #
            # Managed by Salt Stack, at {{ source }}. Do NOT edit manually!
            # Docker dependencies: Refer to the script at https://get.docker.com/
            # Available cli options: http://docs.docker.com/reference/commandline/cli/
            DOCKER_OPTS="--dns 8.8.8.8 -s aufs"
        - watch_in:
          - service: docker-service
        - require:
          - pkg: lxc-docker
          - pkg: linux-kernel-deps
    
    linux-kernel-deps:
      pkg.installed:
        - pkgs:
          - linux-image-extra-{{ kernelrelease }}
          - aufs-tools
      cmd.run:
        - name: modprobe aufs
        - unless: modinfo aufs > /dev/null 2>&1
    
    clone-discourse:
      pkg.installed:
        - name: git
      git.latest:
        - name: https://github.com/discourse/discourse_docker.git
        - user: ubuntu
        - target: {{ dir }}
        - unless: test -f {{ dir }}/containers/app.yml
        - require:
          - file: {{ dir }}
          - pkg: git
      file.directory:
        - name: {{ dir }}
        - user: ubuntu
        - group: ubuntu
        - recurse:
          - user
          - group
    
  4. Restart your salt master service, sync everything and run highstate

    service salt-master restart
    salt \* saltutil.sync_all
    salt \* service.restart salt-minion
    salt discourse-backend\* state.highstate
    
  5. Go to the VM and run the installer

    Note that as long as there is no /srv/webplatform/discourse/containers/app.yml the states will update the git repository to the latest version. In my projects, I make sure that salt also generates a configuration file with my current environment details (e.g. DNS, Salt master, Redis, SMTP relay, Postgres’ private IP addresses).

A few useful GNU/Linux truth tests while creating salt states

Nothing amazing here; it's just that sometimes I want Salt Stack to run commands only when certain conditions hold.

Those tests are written within Salt Stack states and YAML, but the tests themselves are plain GNU/Linux tests.

Add an additional group membership to a user

usermod -a -G amavis clamav:
  cmd.run:
    - unless: 'grep "^amavis" /etc/group | grep -q -e clamav'

A similar alternative could also be

usermod -a -G amavis clamav:
  cmd.run:
    - unless: grep -q -e '^amavis.*clamav$' /etc/group

Change ownership on a folder only if it's not what we expect

{% for es_folder in ['/usr/share/elasticsearch/data','/var/lib/elasticsearch'] %}
chown -R app-user:app-user {{ es_folder }}:
  cmd.run:
    - onlyif: "test `stat -c %G {{ es_folder }}` != app-user"
{% endfor %}

Creating a new Ubuntu Salt master from the terminal using Cloud-Init

If you run Virtual Machines on a provider that runs OpenStack, you can also leverage a component that's made to automatically install software at creation time. With this, you can create any new node in your cluster, including the salt master, in a few terminal commands.

Before starting out, you need to make sure your cloud provider runs OpenStack and has Cloud-Init enabled. To check, look for a tab titled "Post-Creation" in the "Launch Instance" dialog used to create a new VM; if it's there, it might just work.

(screenshot: the OpenStack "Post-Creation" tab)

Cloud-Init basically reads a manifest that declares the specifics of the new VM; it's part of the conversion from the initial OpenStack image into the specific instance you will be using. You can follow those two articles that describe well how Cloud-Init works.

[Cloud-Init] is made in a way that handles distribution-specific package installation details automatically.

The following is specific to an Ubuntu server VM, but you might need to adjust the package names to match your current server distribution as those tools are getting more and more popular in the industry.

Before testing on a new VM, you could also check from an existing instance and ask, through an HTTP request made with cURL, what the current instance's post-creation script was.

Note that the IP address you see below is a virtual interface provided by OpenStack that can be navigated over HTTP; try it out like this:

curl http://169.254.169.254/openstack/
2012-08-10
2013-04-04
2013-10-17

If you see similar output, you can ask what post-creation configuration ("userdata") it used at creation time. You can dig through the tree; here's how you can find it in an OpenStack (CURRENT VERSION NICKNAME) cluster.

For instance, my salt master would have the following configuration;

curl http://169.254.169.254/openstack/2013-10-17/user_data

#cloud-config
manage_etc_hosts: false
manage-resolv-conf: false
locale: en_US.UTF-8
timezone: America/New_York
package_upgrade: true
package_update: true
package_reboot_if_required: true

ssh_import_id: [saltstack]
apt_sources:
  - source: "ppa:saltstack/salt"

packages:
  - salt-minion
  - salt-common
  - salt-master
  - python-software-properties
  - software-properties-common
  - python-novaclient

To boot an instance from the terminal, you can use the “nova” command like this;

nova boot --image Ubuntu-14.04-Trusty --user-data /srv/cloudconfig.yml --key_name keyname --flavor subsonic --security-groups default salt

This assumes that you have the following available in your OpenStack dashboard:

  1. An SSH public key called “keyname” in your tenant
  2. A flavor called “subsonic” that has a predefined configuration of vCPU, vRAM, etc.
  3. A security group called "default"; you could use more than one by separating them with commas, e.g. default,foo,bar
  4. A text file in /srv/cloudconfig.yml in YAML format that holds your Cloud-Init (a.k.a. “userdata”) configuration.
  5. You have your nova configuration available (look for the "Download OpenStack RC File" link under "Access & Security" and "API Access" in your cloud provider dashboard) and sourced from your server's /etc/profile.d/ folder.
  6. You have “python-novaclient” (or its equivalent) installed

To test it out yourself, you could use the block I gave earlier, create a file at /srv/cloudconfig.yml, and give the nova command a try.

In this case, you might want to call the new VM "salt", as the default Salt Stack minion configuration will try to reach a host by that name as its salt master. Here, it'll be itself.

The salt master's userdata could also list a few git repositories to be cloned at creation time, making your salt master as easily replaceable as any other component in your "cloud".

A set of sample scripts I use to create a new salt master off of a few git repositories can be found in the following Gist

Read more

The following articles describe in more detail what I introduced in this article.

Install PHP5 Memcached PECL extension and have it support igbinary

I was trying to figure out why my PHP setup would never use igbinary to serialize sessions in Memcached with the current Memcached PECL extension.

Before: session handlers shows memcached
Before: igbinary support no?

After some research I found a procedure in an answer on StackOverflow.

But it didn't address my main requirement: since I do automated deployment, I MUST be able to move packages around. Since all my VMs use the same distribution and I already have my own apt repo, I could just add one more .deb file.

My objective now was to package it for deployment. To do this, I discovered Jordan Sissel's project called fpm, which stands for "Freaking Package Manager" (sic).

My target deployment runs on Ubuntu 14.04 LTS and I want to replace the upstream php5-memcached package with a simple .deb file.

Build from PECL source

NOTE The following was run on an Ubuntu 14.04 VM with @rynop’s procedure.

  1. Setting the machine up to make a package.

    mkdir /tmp/php5-memcached
    cd /tmp/php5-memcached
    apt-get install -y php5-dev pkg-config php-pear
    
  2. Follow the steps from the procedure. These were taken from the original procedure, up to just before issuing ./configure.

    pecl_memcached_ver="2.2.0"
    pecl download memcached-${pecl_memcached_ver}
    tar xzvf memcached-${pecl_memcached_ver}.tgz
    cd memcached-${pecl_memcached_ver}/
    phpize
    
  3. I realized that under Ubuntu 14.04 we also need to disable Memcached SASL support, so I had to run configure differently

    ./configure --enable-memcached-igbinary --disable-memcached-sasl
    
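    The original procedure stops at ./configure in the excerpt above, but a build step has to follow so that modules/memcached.so exists for the packaging steps below; presumably just:

    make
    ls modules/   # the compiled memcached.so should show up here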

Make a .deb package

  1. Install jordansissel/fpm

    apt-get install -y ruby-dev gcc
    gem install fpm
    
  2. Check the contents of the package you want to replace, so we can replicate it for our own purposes.

    dpkg --list | grep php5-memcached
    find /var/cache/apt -type f -name '*php5-memcached*'
    dpkg -c /var/cache/apt/archives/php5-memcached_2.1.0-6build1_amd64.deb
    
  3. I figured out in the output that I only needed a few folders, etc/php5/mods-available/ and usr/lib/php5/foo, so I created them manually.

    mkdir -p etc/php5/mods-available/
    # Adjust memcached.ini to suit your tastes, then prepare it for packaging
    cp memcached.ini etc/php5/mods-available/
    # Make sure the usr/lib/php5/foo path matches what you saw
    # in the output of the `dpkg -c` you issued
    mkdir -p usr/lib/php5/20121212/
    cp modules/memcached.so usr/lib/php5/20121212/
    
  4. Magic will happen

    fpm -s dir -t deb -n php5-memcached -v 2.2.0-wpd -m '<[email protected]>' --description 'PHP 5.5 PECL igbinary + memcached support' -d libmemcached10 etc/ usr/
    

    I could have used --replaces REPLACES in the fpm options, but when I built this package I didn’t know which syntax to use. It’s an optional argument anyway.

  5. Test if the package works

    dpkg -i php5-memcached_2.2.0-wpd_amd64.deb
    [email protected]:/srv/webplatform/buggenie# dpkg -i /srv/webplatform/apt/php5-memcached_2.2.0-wpd_amd64.deb
    (Reading database ... 118781 files and directories currently installed.)
    Preparing to unpack .../php5-memcached_2.2.0-wpd_amd64.deb ...
    Unpacking php5-memcached (2.2.0-wpd) over (2.1.0-6build1) ...
    Setting up php5-memcached (2.2.0-wpd) ...
    

    Success!

  6. Look at the phpinfo

After: registered session handlers

Update your private apt repository (or create one)

  1. Then, in your own apt repository (if you do have one), here’s how I rebuild the index. Note that it’s not more complex than a folder with a bunch of deb files

    mkdir -p /srv/apt
    cp php5-memcached_2.2.0-wpd_amd64.deb /srv/apt
    cd  /srv/apt
    apt-get install -y dpkg-dev
    dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
    echo 'deb file:/srv/apt ./' > /etc/apt/sources.list.d/foo.list
    
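    Once that sources list entry is in place, the usual refresh-and-install should pick the package up. Roughly (this little repository isn't signed, so apt will ask for confirmation unless you pass --allow-unauthenticated):

    apt-get update
    apt-get install php5-memcached
    php5enmod memcached   # in case the module isn't already enabled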

Done!

Create a MariaDB cluster with replication over SSL with Salt Stack

While reworking WebPlatform infrastructure I had to rebuild a new database cluster.

The objective of the cluster is to have more than one database server so that our web applications can make reads on any node in the cluster.

While the system has replication and I can send reads to any node in the cluster, there is a flaw in it too: any node can also accept writes; nothing is blocking it.

My plan is to change this so that it would be OK to send writes to any node in the cluster. There is now something called “Galera” that would allow me to do that, but that’s outside the scope of this article.

In the current configuration, I’m purposefully not fixing it, because my configuration management makes sure only the current master receives writes. So in this setup, I decided that the VM that gets the writes has “masterdb” in its hostname.

That way, it’s easy to see, and it gives me the ability to change the master at any time if an emergency requires me to.

Changing MariaDB replication master

Changing the master could be done in the following order (a rough check sketch follows the list):

  • Lock writes on the masterdb databases
  • Wait for replication to catch up
  • On the secondary database servers, remove the replication configuration
  • Tell all web apps to use the new database master
  • Remove the database lock
  • Set up the new replication configuration to use the new master
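
A rough sketch of the “wait for replication to catch up” check, run from the salt master; cmd.run is the same Salt function used later in this article, and it assumes the mysql client can connect without an explicit password, as in the prompts shown further below:

salt -G 'roles:masterdb' cmd.run 'mysql -e "SHOW MASTER STATUS"'
salt db2 cmd.run 'mysql -e "SHOW SLAVE STATUS\G"'

Once Exec_Master_Log_Pos on the secondary matches the master’s Position and Seconds_Behind_Master is 0, replication has caught up.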

Thanks to the fact that I manage everything through configuration management –including the web app configuration files– it’s only a matter of applying the states everywhere in the cluster. That makes it fairly easy to do such a heavy move, even under stress.

This post will be updated once I have completed the multi writes setup.

Procedure

Assumptions

The rest of this article will assume the following:

  1. You are running VMs on OpenStack, and do have credentials to make API calls to it
  2. You have a Salt master already running
  3. Your salt master has at least python-novaclient (nova commands) available on it
  4. You have your OpenStack credentials already loaded in your salt master’s /etc/profile.d/ so you can use nova directly

From the salt-master, initiate a few VMs to use for your database cluster

  1. Before booting, ensure you have the following details in your OpenStack cluster and salt master;

    • You have an SSH key in your OpenStack cluster. Mine is called “renoirb-production” and my salt master user has the private key preinstalled
    • You have a userdata.txt file with settings that point to your salt master

      cat /srv/opsconfigs/userdata.txt
      
      #cloud-config
      
      manage_etc_hosts: false # Has to be set to false for everybody. Otherwise we need a DNS
      manage-resolv-conf: false
      locale: en_US.UTF-8
      timezone: America/New_York
      package_upgrade: true
      package_update: true
      package_reboot_if_required: true
      
      #
      # This is run at EVERY boot, good to ensure things are at the right place
      # IMPORTANT, make sure that `10.10.10.22` is a valid local DNS server.
      bootcmd:
        - grep -q -e 'nameserver' /etc/resolvconf/resolv.conf.d/head || printf "nameserver 10.10.10.22\n" >> /etc/resolvconf/resolv.conf.d/head
        - grep -q -e 'wpdn' /etc/resolvconf/resolv.conf.d/base || printf "search production.wpdn\ndomain production.wpdn\nnameserver 8.8.8.8" > /etc/resolvconf/resolv.conf.d/base
        - grep -q -e 'wpdn' /etc/resolv.conf || resolvconf -u
      
      runcmd:
        - sed -i "s/127.0.0.1 localhost/127.0.1.1 $(hostname).production.wpdn $(hostname)\n127.0.0.1 localhost/" /etc/hosts
        - apt-get install software-properties-common python-software-properties
        - add-apt-repository -y ppa:saltstack/salt
        - apt-get update
        - apt-get -y upgrade
        - apt-get -y autoremove
      
      packages:
        - salt-minion
        - salt-common
      
      # vim: et ts=2 sw=2 ft=yaml syntax=yaml
      
  2. Create two db-type VMs

    nova boot --image Ubuntu-14.04-Trusty --user-data /srv/opsconfigs/userdata.txt --key_name renoirb-production --flavor lightspeed --security-groups default db1-masterdb
    nova boot --image Ubuntu-14.04-Trusty --user-data /srv/opsconfigs/userdata.txt --key_name renoirb-production --flavor supersonic --security-groups default db2
    
  3. Accept their keys on the salt master

    salt-key -y -a db1-masterdb
    salt-key -y -a db2
    

    As an aside: imagine you want to run dependencies automatically once a VM is part of your salt master, for example adding its private IP address to a local Redis or Etcd live configuration object. One could create a Salt “Reactor” and make sure the data is refreshed. This gist is a good starting point

  4. Wait for the VM builds to finish and get their private IP addresses

    nova list | grep db
    | ... | db1-masterdb | ACTIVE  | Running     | private-network=10.10.10.73 |
    | ... | db2          | ACTIVE  | Running     | private-network=10.10.10.74 |
    
  5. Add them to the pillars.
    Note that the “masterdb” part of the name is what the Salt states use to know which node gets the writes.
    Note that, in the end, the web app configs will use the private IP address;
    it’s quicker to generate pages if the backend doesn’t need to do name resolution every time it makes a database query.
    This is why we have to reflect this in the pillars. Ensure the following structure exists in the file.

    # Edit /srv/pillar/infra/init.sls at the following blocks
    infra:
      hosts_entries:
        masterdb: 10.10.10.73
    
  6. Refer to the right IP address in the configuration file with a similar salt pillar.get reference in the states.

      /srv/webplatform/blog/wp-config.php:
        file.managed:
          - source: salt://code/files/blog/wp-config.php.jinja
          - template: jinja
          - user: www-data
          - group: www-data
          - context:
              db_creds: {{ salt['pillar.get']('accounts:wiki:db') }}
              masterdb_ip: {{ salt['pillar.get']('infra:hosts_entries:masterdb') }}
          - require:
            - cmd: rsync-blog
    

    … and the wp-config.php.jinja

    <?php
    
    ## Some PHP configuration file that salt will serve on top of a deployed web application
    
    ## Managed by Salt Stack, please DO NOT TOUCH, or ALL CHANGES WILL be LOST!
    ## source {{ source }}
    
    define('DB_CHARSET',  "utf8");
    define('DB_COLLATE',  "");
    define('DB_HOST',     "{{ masterdb_ip|default('127.0.0.1')    }}");
    define('DB_NAME',     "{{ db_creds.database|default('wpblog') }}");
    define('DB_USER',     "{{ db_creds.username|default('root')   }}");
    define('DB_PASSWORD', "{{ db_creds.password|default('')       }}");
    
  7. Refresh the pillars, run the salt master’s state.highstate, and test it out.

    salt-call saltutil.sync_all
    salt salt state.highstate
    
    salt-call pillar.get infra:hosts_entries:masterdb
    > local:
    >     10.10.10.73
    
  8. Make sure the VMs have the same version of Salt as you do

    salt-call test.version
    > local:
    >     2014.7.0
    
    salt db\* test.version
    > db2:
    >     2014.7.0
    > db1-masterdb:
    >     2014.7.0
    
  9. Kick off the VMs’ installation

    salt db\* state.highstate
    
  10. Highstate takes a while to run, but once it is done, you should be able to work with the VMs for the remainder of this tutorial

    salt -G 'roles:db' mysql.version
    > db2:
    >     10.1.2-MariaDB-1~trusty-wsrep-log
    > db1-masterdb:
    >     10.1.2-MariaDB-1~trusty-wsrep-log
    

    Each db-type VM’s MySQL/MariaDB/Percona server will have different database maintenance users defined in /etc/mysql/debian.cnf.

    Make sure you don’t overwrite them unless you import everything all at once, including the users and their grants.

  11. Check that each db VM has its SSL certificates generated by Salt

    salt -G 'roles:db' cmd.run 'ls /etc/mysql | grep pem'
    > db2:
    >     ca-cert.pem
    >     ca-key.pem
    >     client-cert.pem
    >     client-key.pem
    >     client-req.pem
    >     server-cert.pem
    >     server-key.pem
    >     server-req.pem
    > db1-masterdb:
    >     ca-cert.pem
    >     ca-key.pem
    >     client-cert.pem
    >     client-key.pem
    >     client-req.pem
    >     server-cert.pem
    >     server-key.pem
    >     server-req.pem
    

    Each of these files is part of the certificate set the nodes will use for replication over SSL.

Now on each database server;

  1. Connect to both db nodes, using the salt master as a jump host (an example follows below)

    ssh masterdb.production.wpdn
    ssh db2.production.wpdn
    
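    Since the db VMs live on the internal network only, the connection presumably hops through the salt master. A sketch using OpenSSH’s ProxyCommand (the salt master host name here is an assumption; adjust to yours):

    ssh -o ProxyCommand='ssh -W %h:%p salt.production.wpdn' masterdb.production.wpdn
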
  2. Get to the MySQL/MariaDB/Percona prompt on each VM.

    If you are used to terminal multiplexers that keep sessions running
    even if you get disconnected, that would be ideal; we never know when a connection will hang.

    On WebPlatform systems we do have screen, but tmux can do the job too.

    mysql
    
  3. Check if SSL is enabled on both MySQL/MariaDB/Percona servers

    > MariaDB [(none)]> SHOW VARIABLES like '%ssl%';
    > +---------------+----------------------------+
    > | Variable_name | Value                      |
    > +---------------+----------------------------+
    > | have_openssl  | YES                        |
    > | have_ssl      | YES                        |
    > | ssl_ca        | /etc/mysql/ca-cert.pem     |
    > | ssl_capath    |                            |
    > | ssl_cert      | /etc/mysql/server-cert.pem |
    > | ssl_cipher    | DHE-RSA-AES256-SHA         |
    > | ssl_crl       |                            |
    > | ssl_crlpath   |                            |
    > | ssl_key       | /etc/mysql/server-key.pem  |
    > +---------------+----------------------------+
    
  4. Generate SSL certificates for the MySQL/MariaDB/Percona servers; see this gist on how to do it.

  5. Places to double-check: to see which config keys set what’s shown in the previous screen, take a look in the VMs’ /etc/mysql/conf.d/ folders for entries similar to these.

    • bind-address is what allows us to communicate between servers; before MySQL 5.5 we used skip-networking, but now a bind-address is sufficient. Make sure that your security groups allow only local network connections, though.
    • server_id MUST be a different number for each node. Make sure no two servers share the same number.

      [mysqld]
      bind-address = 0.0.0.0
      log-basename=mariadbrepl
      log-bin
      binlog-format=row
      server_id=1
      ssl
      ssl-cipher=DHE-RSA-AES256-SHA
      ssl-ca=/etc/mysql/ca-cert.pem
      ssl-cert=/etc/mysql/server-cert.pem
      ssl-key=/etc/mysql/server-key.pem
      
      [client]
      ssl
      ssl-cert=/etc/mysql/client-cert.pem
      ssl-key=/etc/mysql/client-key.pem
      
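      To double-check that the bind-address took effect, a quick look at the listening sockets on a db VM should show mysqld bound to 0.0.0.0:3306 (netstat ships with Ubuntu 14.04):

      netstat -tlnp | grep mysqld
      # expect something roughly like: tcp  0  0 0.0.0.0:3306  0.0.0.0:*  LISTEN  <pid>/mysqld
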
  6. From the database master (a.k.a. “masterdb”), get the replication log position;
    we’ll need the File and Position values to set up the replication node.

    MariaDB [(none)]> show master status;
    > +------------------------+----------+--------------+------------------+
    > | File                   | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    > +------------------------+----------+--------------+------------------+
    > | mariadbrepl-bin.000002 |      644 |              |                  |
    > +------------------------+----------+--------------+------------------+
    
  7. Configure the masterdb to accept replication users. From the salt master

     salt -G 'roles:masterdb' mysql.user_create replication_user '%' foobarbaz
    

    NOTE: My salt states script creates a grain in /etc/salt/grains with the following data;

    roles:
      - masterdb
    

    Alternatively, since the VM is already called db1-masterdb, you could use a small Python script that parses that name for you and sets the grain automatically.
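
    As a quick example of that shortcut, the same role grain could also be assigned by hand from the salt master with Salt’s grains module (the values mirror the /etc/salt/grains sample above):

    salt db1-masterdb grains.append roles masterdb
    salt db1-masterdb grains.get roles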

  8. Back to the masterdb VM, check if the user exists, ensure SSL is required

    MariaDB [(none)]> show grants for 'replication_user';
    > +-----------------------------------------------------------------------------------------+
    > | Grants for replication_user@%                                                           |
    > +-----------------------------------------------------------------------------------------+
    > | GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%' IDENTIFIED BY PASSWORD '...'   |
    > +-----------------------------------------------------------------------------------------+
    
    MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%.local.wpdn' REQUIRE SSL;
    MariaDB [(none)]> GRANT USAGE ON *.* TO 'replication_user'@'%' REQUIRE SSL;
    
    MariaDB [(none)]> SELECT User,Host,Repl_slave_priv,Repl_client_priv,ssl_type,ssl_cipher from mysql.user where User = 'replication_user';
    > +------------------+--------------+-----------------+------------------+----------+
    > | User             | Host         | Repl_slave_priv | Repl_client_priv | ssl_type |
    > +------------------+--------------+-----------------+------------------+----------+
    > | replication_user | %.local.wpdn | Y               | N                | ANY      |
    > +------------------+--------------+-----------------+------------------+----------+
    
  9. On the secondary db VM, at the mysql prompt, set up the initial CHANGE MASTER statement;

    CHANGE MASTER TO
      MASTER_HOST='masterdb.local.wpdn',
      MASTER_USER='replication_user',
      MASTER_PASSWORD='foobarbaz',
      MASTER_PORT=3306,
      MASTER_LOG_FILE='mariadbrepl-bin.000002',
      MASTER_LOG_POS=644,
      MASTER_CONNECT_RETRY=10,
      MASTER_SSL=1,
      MASTER_SSL_CA='/etc/mysql/ca-cert.pem',
      MASTER_SSL_CERT='/etc/mysql/client-cert.pem',
      MASTER_SSL_KEY='/etc/mysql/client-key.pem';
    
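    One step the transcript above doesn’t spell out: unless the replication threads were already running, the slave has to be started before the status check below will report anything. From the secondary’s shell (or run the same statement at the MariaDB prompt):

    mysql -e "START SLAVE;"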

Checking if it worked

From one of the secondary servers, look for success indicators:

  • Seconds_Behind_Master says 0,
  • Slave_IO_State says Waiting for master to send event

    ```
    MariaDB [wpstats]> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: masterdb.local.wpdn
                      Master_User: replication_user
                      Master_Port: 3306
                    Connect_Retry: 10
                  Master_Log_File: mariadbrepl-bin.000066
              Read_Master_Log_Pos: 19382112
                   Relay_Log_File: mariadbrepl-relay-bin.000203
                    Relay_Log_Pos: 19382405
            Relay_Master_Log_File: mariadbrepl-bin.000066
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 19382112
                  Relay_Log_Space: 19382757
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: Yes
               Master_SSL_CA_File: /etc/mysql/ca-cert.pem
               Master_SSL_CA_Path:
                  Master_SSL_Cert: /etc/mysql/client-cert.pem
                Master_SSL_Cipher:
                   Master_SSL_Key: /etc/mysql/client-key.pem
            Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error:
                   Last_SQL_Errno: 0
                   Last_SQL_Error:
      Replicate_Ignore_Server_Ids:
                 Master_Server_Id: 1
                   Master_SSL_Crl: /etc/mysql/ca-cert.pem
               Master_SSL_Crlpath:
                       Using_Gtid: No
                      Gtid_IO_Pos:
          Replicate_Do_Domain_Ids:
      Replicate_Ignore_Domain_Ids:
    1 row in set (0.00 sec)
    ```
    
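    A quick way to pull out just those indicators from the shell (the grep pattern here is mine):

    mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_State|Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'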

Managing users

In the end, since replication is active, you can add users to your system and all nodes will get the privileges.

The way I work, I can either use Salt Stack states to add the privileges (more details soon),
or run a few salt commands from my salt master and send them to the masterdb database VM.

salt -G 'roles:masterdb' mysql.db_create 'accounts_oauth' 'utf8' 'utf8_general_ci'
salt -G 'roles:masterdb' mysql.user_create 'accounts' '%' 'barfoo'
salt -G 'roles:masterdb' mysql.grant_add 'ALL PRIVILEGES' 'accounts_oauth.*' 'accounts' '%'
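
To verify the result from the salt master, the same Salt mysql module also has read-only helpers, for instance mysql.user_grants (double-check the function name against your Salt version):

salt -G 'roles:masterdb' mysql.user_grants 'accounts' '%'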

References

  • http://dev.mysql.com/doc/refman/5.7/en/replication-solutions-ssl.html
  • https://mariadb.com/kb/en/mariadb/documentation/managing-mariadb/replication/standard-replication/setting-up-replication/

A few bits of code to automate deployment

Automate ALL THE THINGS!

This post is just a simple link dump to gather several scattered notes. I intend to eventually publish the whole of my work as public projects on GitHub once the loop is complete. All of it without exposing the private data, obviously.

Making the jump to automation requires a lot of preparation, and I am taking the time to publish here a few bits of code I wrote to complete the task.

In the end, my project will deploy a site backed by a MariaDB cluster, Memcached, a LAMP (“prefork”) stack where there is no other choice, and a stack of [HHVM/php5-fpm, Python, nodejs] app servers for the rest, all served by an NGINX frontend. My scripts will deploy a series of web applications, each with the dependencies that adapt them managed in their own parent git repo. In my case that will be: WordPress, MediaWiki, Discourse, and a few others.

Requirements

  • Instantiation from nova commands in the terminal creates a new, up-to-date VM, and its name defines its role on the internal network
  • The VMs are only reachable through a jump box (i.e. internal network only)
  • A system watches whether a git clone directory has had changes on the “master” branch, and fires an event if so
  • Each machine is built from a minimal base VM; in this case, Ubuntu 14.04 LTS
  • The system must ensure that ALL updates are applied regularly
  • The system must ensure that its internal services are functional
  • If a VM hits the critical OOM threshold, the VM reboots automatically
  • The VM’s name describes its role, and the installation scripts install the requirements assigned to it
  • Configurations use the details (e.g. private and public IP addresses) of each pool (e.g. redis, memcache, mariadb) and automatically adjust the configuration of each application
  • … etc.

Bits of code

Inspiring posts on the subject

Thoughts about learning in the web developer job, what managers might be missing

After reading an article titled What the ‘Learn to Code’ movement is forgetting: Existing developers, I couldn’t resist commenting. Sadly, I couldn’t add my own comment because their commenting system only allows paying members, so I am sharing it here.

Developers do not simply convert paragraphs of text requirements into “morse”. Obviously. It’s a complex craft and, like anything else, it’s all about the people you are working with. There are passionate programmers and also those who only want to do the bare minimum. In between, it’s all about leaving room for creativity.

What a programmer’s day looks like

Let’s remind ourselves that what a programmer is required to do has a lot of complexity and has to deal with legacy, unknowns, and delivery dates. Not everything can be foreseen: how the team members will work together, their capabilities, their egos, and most importantly, whether or not each of them will be in “the zone” long enough during work hours.

The time they are at their optimum is what you want to get the most out of; it’s the time when they can do amazing things, isn’t it? Being a developer is about solving puzzles, reviewing code or existing components they can leverage or would need to refactor. Being interrupted by numerous meetings to talk about something they are already trying to figure out isn’t helping them.

A good way to help them is to set up asynchronous communication channels such as IRC, along with code review practices. In some open source communities, merging to the master branch requires code review, and a common practice is to have a bot announce commits on an IRC channel. That’s the best of both worlds: you let developers stay in their zone AND collaborate. As for pair programming, they can just announce at the morning SCRUM meeting that they would like to spend time on it, or simply ask for it on IRC.

Learning

As for learning, passionate developers already keep themselves in the loop on best practices and how to improve their craft. The experienced ones might even have ideas on how to do things that you never thought of.

I’ve seen companies that allow participation in open source projects, and/or that share their knowledge openly on sites such as Stack Overflow and documentation sites. This is another great way to keep developers engaged in their role, and to keep them in your company.

When I think of learning resources for web developers in general, there is an independent, vendor-neutral resource where web developers can learn. The site is convened by the W3C and sponsored by industry leaders to create just what we’ve been missing all along. The site is webplatform.org, and I feel it isn’t put forward enough by the industry. Maybe because it’s not complete, but it’s a wiki; anybody can participate.

Full disclosure: I work full time on the webplatform.org project, but note that regardless of my current position, I would participate in the site whether or not I were hired to do so.

Photo credits: Anthony Catalano, Morse Code Straight Key J-38

Notes of my purchase experience with ASUS slim DVD-RW reader SDRW-08D2S-U

How do you feel when you purchase a tech product from a respected brand, it feels sketchy, and you figure out, after receiving the desired piece of hardware, that it doesn’t fit at all?

What got me even more disappointed is the fact that I went through product reviews, read the tech specs, shopped around for other products, and decided to give it a chance.

Before getting into the depth of the topic, I’d like to apologize for the tone of this message. My “gauge of goodwill” (ref: Steve Krug) is very low, and I hope this gets somewhere so it can become an incentive to improve the site and the product support material.

NOTE: I have (some) theoretical knowledge about user-interaction design due to my numerous conversations on the topic with relatives and professional contacts who are in the field of software ergonomics. It’s in the spirit of removing pain points that I’m outlining them. Since I spent some time writing this essay, I decided to also share it on my site.

0. The product

My original expectation wasn’t very big. I wanted a lightweight, small DVD recorder for my backups. You know, to store files you do not need often. I also wanted to encrypt what I’m burning on disc, so that if I ever have to let go of the disc, I can be reassured that the data is not accessible without cracking it.

So, what I was expecting:

  • Small
  • USB2 powered (only one cable)
  • Lightweight

My purchase was then a ASUS SDRW-08D2S-U

And support for encryption is what got me to close the deal. Sadly, ASUS didn’t do anything for Mac OS users.

The following are problems separated by scenarios I went through while getting acquainted with my new purchase.

1. Product registration

During product registration, it FORCED me to fill in ALL the fields, including my date of birth. What if I do not want to give my underwear size along with my full name and the region of the world I live in?

NOTE: After going into my account afterward, I could adjust what I wanted to disclose.

2. Searching for drivers

It says it supports Mac OS, right? After popping the bundled CD into the new drive attached to my CD-drive-less MacBook Pro, I expected to see some (hopefully bloatware-free) software to burn CDs and enjoy the disc crypto offering (more on this later). I found nothing.

I know Mac OS and its underlying BSD architecture mean we have no need to install any drivers. But since the package said it supported Mac OS and provided software, I thought I would see what there was to install. I found none.

After this, I went to asus.com. It’s nice looking, full of marketing flash and pizzazz. But not useful.

It’s only after 10 minutes of searching the site, finding only ways to compare other products to buy, that I gave up on finding an appropriate way to navigate to the product’s downloads/drivers. I got only promotional navigation. This makes the human who just purchased something frustrated.

The funny thing is that I had to use a search engine that is NOT asus.com to get useful results, again. I found plenty of other places to buy, and then I found the page where I left this comment, using the comment form at the bottom.

The fact that the page had no images made me think of two things: one more detail that makes it look sketchy, and that maybe the page has outdated information and therefore would not apply to my case.

3. Disc Encryption

Now that I have spent some more time explaining all this, I am very disappointed by the product, because it sells crypto, but I guess only under Windows. My last resort will be to create Mac OS native encrypted DMG files, then burn them. No thanks to ASUS. :(

4. Y-shaped USB

Really?

That’s a first for me. I understand the electrical needs and the fact that many laptop USB ports don’t always supply enough voltage/amps, but I was surprised.

I saw NO reviews about this.

5. Reading the manual

I saw NO reviews about this either. The interesting part is that, unlike most people, I actually read the manual, and it was a pain to figure out which boxes were in my own language.

It basically came down to me skipping each box, in case it was in either French or English, just to realize, after flipping the full “map” around, that I only needed to plug in the cable.

Good manuals don’t require long text. Have a look at how IKEA deals with multilingual instructions.

6. Conclusion

Needless to say, I’m disappointed by the purchase. But in the end, I have a not-too-big, not-too-clumsy device that I can use to read and burn DVDs.

I really hope that ASUS gears toward improving their software products.

Image credits:
* IKEA instructions, by Mike Sacks
* Product picture is promotional material from ASUS website.

Answers I gave for an article about the impacts of Heartbleed


Last spring, I was invited to answer questions for an article about Heartbleed. It was an invitation extended by my friends at SaltStack. Since I extensively use automated deployment techniques with Salt Stack, they thought I’d have things to say.

I agreed to answer the questions on my personal time, as the DevOps lead of WebPlatform.org rather than directly as a W3C team member, even though I had big help from my manager, Doug Schepers, and a few colleagues from the team. Their help was very useful for reviewing and adding stats to my arguments.

The article has been published on Dell.com’s Tech Page One blog, but it only quotes a few pieces of my original contribution. Here are the full answers I gave to the questions.

What made Heartbleed such a tremendous threat?

It hit the very epicenter of our whole Internet infrastructure, it proved every sysadmin’s worst nightmare true, and on top of that, it impacted everybody.

OpenSSL is so widely used (an estimated 2/3 of the web) that many organizations were impacted. The vulnerability had been introduced, as a programming mistake, about two years earlier. What gives the chills is that, since the person exploiting the flaw did not need to be a “man in the middle” nor to gain terminal access to the servers, there were no traces left behind. We therefore had no idea how much it was used, or by whom.

It’s important to remember that the attacker couldn’t know what data he would get either, or whether he would get anything interesting at all. And if the “good guys” didn’t know about Heartbleed for so long, probably not many “bad guys” did, either.

As a consequence, the world had to make a major effort to update a lot of software, and people had to replace many passwords, but the major impact is probably more psychological: we don’t know what was stolen and we don’t know if there is another “heartbleed” hidden in our servers.

On the bright side, the old advice we give to end users, to change your password often, is still valid. This was a nuisance, for sure, but at least Heartbleed didn’t change anything there.

Is there anything about the current computing landscape that made the threat greater than it might have been in the past?

The time it takes to gather information through such a vulnerability has shrunk dramatically, and the set of tools available today has grown. There are network port scanners, for instance, that can report results with accuracy in the high 90-percent range in less than 10 minutes.

When attackers get wind of a vulnerability, it’s not hard for them to get the information they are searching for. In the case of Heartbleed, it was as simple as searching for machines listening on TCP port 443 and keeping a list of servers running a version of OpenSSL known to have the mistake. Not something hard to do at all for the attackers.

Today’s computing landscape has better tools, but so do those trying to crack them. A better solution is to strengthen the infrastructure and educate end users. Otherwise, the chain will remain only as strong as its weakest link.

If there is one thing our society should do more of, it is to continue teaching sane privacy techniques, such as strong and varied passwords.

Hopefully we’ll get to a point where companies rely on “harder to gain” information than your mother’s maiden name or your date of birth.

What were the most important things that companies needed to do in the face of Heartbleed?

Companies that don’t maintain adequate server management techniques and tools had a lot of (manual) work to do.

The priority was to upgrade all affected parts of the infrastructure to the patched version, and to ensure that every system that stores passwords got new passwords as soon as possible.

Of course, the issue had to be fixed as quickly as possible, without interrupting the services… or delaying the deliverables that people were already working on.

Therefore, one of the most appreciated things a system admin has is a way to manage his fleet of servers remotely. In the old days, we only had SSH —hopefully that one doesn’t have its own “heartbleed” problem too— but we now have remote execution tools such as Ansible and Salt Stack that can help us quickly get an accurate picture of the whole infrastructure.

Could a Heartbleed-level/style hack happen again?

Before Heartbleed, I was an optimistic person. Besides giving everybody the chills, what Heartbleed did was issue a loud wake-up call.

It is only a matter of time before something else arises. Computer security is still based on “hard enough to crack” secrets and an accepted set of annoyances. This is where education and better automation tools are of great help. They’d make it easier to justify the existence, maintenance, and deployment of such measures to everybody.

How can companies best protect themselves and their customers moving forward?

Education, and automation; in that order. It’s better to have a good answer —ideally quickly— than a quick wrong answer.

I hope to see training about security practices more often. It’s not the lack of training that is the problem, but getting people to take the time to learn the techniques. Organizations such as OWASP —a group that teaches web developers about common security mistakes— are educating about security vulnerabilities.

There are dozens of other potential mistakes. One of the most common types of security breach is what we call “SQL injection”: yet another programming mistake, where user input is sent directly to the database server without filtering.

(Comic strip: XKCD #327)

In the case of Heartbleed, it was a similar kind of programming error. Ways to achieve peace of mind include, among others: testing, proof techniques, “sandboxing” (protected memory), and making software open source. They all help to catch those errors.

While it’s still a hard problem to find the errors in all that code, using those techniques has the merit of putting more than a limited set of eyes on the code.

But all security can be completely overcome by a small human mistake: giving private information to the wrong person. Educating people on how to tell a real collaborator from a potential thief is another challenge in itself.

Many people have been talking about this for ages. The recent events make their teachings more relevant than ever.

Credits

Heartbleed graphic: Codenomicon
Comic strip: XKCD #327