Recover Discourse from a backup, adjust domain name

Roughly based on the “Move your Discourse instance to a different server” post, but without using the web UI, because sysadmins prefer the terminal.

IMPORTANT: Make SURE the backup dump was generated from the same Discourse version as the one you’ll import it into.

Copy the backup from the host into the shared folder. This assumes you uploaded it via SSH into your home directory on the host.

cp ~/snapshot.tar.gz /srv/webplatform/shared/backups/

Note that the folder /srv/webplatform/discuss/shared/standalone/backups/ on the Docker host ends up at /shared/backups/ inside the container.

Enter the app container, then run enable_restore from the discourse CLI utility:

./launcher enter app
discourse enable_restore

Find the mounted backup file from within the container

ll /shared/backups/default/

Make sure /shared/backups/default/foo.tar.gz is world-readable, then run the restore:

chmod o+r /shared/backups/default/foo.tar.gz
discourse restore foo.tar.gz
discourse disable_restore

Remap domain name

discourse remap discourse.specifiction.org discuss.webplatform.org

Then, work on user uploads and regenerate assets. That’ll make sure orphaned uploads are cleaned up and every post is re-rendered with the new domain:

rake uploads:clean_up
rake posts:rebake

Thoughts about improving load resiliency for CMS driven Websites

The following is an article I wrote based on a note I sent to the Mozilla Developer Network mailing list about an idea that crossed my mind to help scale a CMS driven website.

I’m convinced I’m not the first one to come up with a similar idea, but I thought it would be useful to share anyway.

What’s common on a CMS driven Website?

What affects page load is what happens in the backend before the HTML gets back to the web browser: converting the text stored in the database into HTML, building specific views, and the list goes on.

The problem is that we spend a lot of CPU cycles to serve the same content time and time again as if it were unique for each user, when about 90% of the generated content could be exactly the same for everybody.

What makes a content site page unique compared to what an anonymous visitor gets?
The problem with web apps is that we make backend servers generate HTML as if it were unique when, in truth, most of it could be cached.

What if we could separate out what’s common, cache it, and serve it to everybody?

How HTTP cache works, roughly

Regardless of which software does it (Squid, Varnish, NGINX, Zeus), caching works the same way.

In the end, the HTTP caching layer basically keeps a generated HTTP response body in RAM, based on the headers it saw when it passed the original request through. Only GET responses without cookies are cacheable; response bodies coming from PUT, DELETE or POST requests aren’t.
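
As a rough illustration (this wasn’t in the original note), here is one way to check whether a given response would be cacheable; the URL is a placeholder:

    # Look at the headers that decide cacheability; a Set-Cookie or a
    # "Cache-Control: private" makes a shared cache skip the response.
    curl -sI http://www.example.org/docs/css/selectors \
      | grep -iE '^(cache-control|set-cookie|vary|expires):'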

To come back to my previous example: what part is actually unique for the current user compared to what an anonymous visitor gets on a documentation website page?

What does a website view serve us, then? The content of the page, the “chrome” (what’s always there), links to edit or view details for the current page, account settings, the username, and a logout link.

It means we are making unique content for things that could be cached and wasting cycles.

Even then, most of the content is cacheable because it generally lives at the same paths in the CMS.

How about we separate dynamic from static?

This makes me wonder if we could improve site resiliency by leveraging HTTP caching, stripping off any cookies, and factoring out what’s unique on a page so that we get the same output regardless of context.

As for the contextual parts of the site ‘chrome’, how about we expose a context-root that takes care of serving dynamically generated HTML to use as partials?

One way of doing it would be to make that context-root generate simple HTML strings that we can pass to a JavaScript manager, which converts them into DOM nodes and injects them into the ‘chrome’.
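
As a sketch of what that separation could look like (the paths below are made up for illustration), the cached document and the per-user partial would be fetched independently:

    # The documentation page itself carries no cookies and can be cached...
    curl -sI http://docs.example.org/css/selectors | grep -i '^cache-control'
    # ...while the user-specific partial lives under its own context-root,
    # is requested with the session cookie, and is never cached.
    curl -s -H 'Cookie: sessionid=PLACEHOLDER' http://docs.example.org/user/partial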

Since cookies can be scoped to specific context-roots, we can keep the session stateful on MDN and still have a clear separation between what’s dynamic and what’s not.

Managing my PGP/OpenPGP keys and share across many machines

With my previous position just ended, I’m cleaning up the credential data I’ve stored on my computer. While doing so, I needed to update my PGP keys, and I learned something I’m glad exists.

When you use PGP to encrypt and/or sign your messages, you can only read messages sent to you from a machine that has your private key. Right?

The obvious solution would be to copy that private key everywhere: your mobile, servers, laptops.

WRONG!

The problem is that security experts will tell you not to do that unless you are certain things are “safe”. But what is safe anyway? Once somebody else, or a zombie process, has accessed your machine, it’s possible somebody has gained access to your keys.

OpenPGP has a way to address this issue, called “subkeys”. Quoting the gnupg.org manual:

By default, a DSA master signing key and an ElGamal encryption subkey are generated when you create a new keypair. This is convenient, because the roles of the two keys are different, (…). The master signing key is used to make digital signatures (…). The encryption key is used only for decrypting encrypted documents sent to you.

This made me think I should create a subkey, back up my main identity, erase any trace of it from my main computer, then import only the new subkey.

That way, I would only need to use the main key to update the data on key servers and to edit my keys.
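
Here is a rough sketch of that workflow with GnuPG; KEYID is a placeholder, and the exact prompts vary between GnuPG versions:

    # Add an extra subkey interactively (type "addkey", then "save")
    gpg --edit-key KEYID

    # Back up the complete secret key somewhere offline
    gpg --export-secret-keys KEYID > secret-master-backup.gpg

    # Keep only the subkeys on this machine: export them, delete the full
    # secret key locally, then re-import just the subkeys
    gpg --export-secret-subkeys KEYID > secret-subkeys.gpg
    gpg --delete-secret-keys KEYID
    gpg --import secret-subkeys.gpg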

A nice effect of this way of working is that we can revoke a subkey. Since subkeys, signatures and revocations are part of the public key that you sync with keyservers, you can make a compromised subkey invalid.

A compromised subkey can still decrypt messages that were encrypted to it, provided the attacker also gains access to where the messages are stored, but you won’t lose the most important part of the identity system. You’ll be able to create another subkey and keep the same identity.

I hope I got it right. If I’m wrong, don’t hesitate to tell me and I’ll adjust!

I still have to read more about all of this, but I am glad I learned about subkeys.

Here are some notes I found that helped me understand this all better.

Converting a dynamic site into static HTML documents

Twice now, I’ve been asked to take a website running on a CMS and make it static.

This is a useful practice if you want to keep a site’s content for posterity without having to maintain the underlying CMS. It also makes migrations easier, since a site you know you won’t add content to anymore becomes simply a bunch of HTML files in a folder.

My end goal was to make an EXACT copy of what the site looks like when generated by the CMS, BUT stored as simple HTML files. When I say EXACT, I mean it, down to keeping documents at their original locations. Each link had to keep its original value, yet a static file has to exist and the web server has to find it. For example, if a link points to /foo, the link in the page remains as-is even though the content is now a static file at /foo.html, and the web server serves /foo.html anyway.

Here are the steps I took to achieve just that. Your mileage may vary; these steps worked for me. I’ve done this once for a WordPress blog and once for the [email protected] website, which was running on ExpressionEngine.

Steps

1. Browse and get all pages you think could be lost in scraping

We want a simple text file with one web page per line, each with its full address.
This helps the crawler not to miss pages.

  • Use your web browser developer tools’ Network inspector and keep it open with “Preserve log” enabled.
  • Once you have browsed the site a bit, list all documents in the Network inspector and export them using the “Save as HAR” feature.
  • Extract the URLs from the HAR file using underscore-cli:

    npm install underscore-cli
    cat site.har | underscore select '.entries .request .url' > workfile.txt

  • Remove the first and last lines (it’s a JSON array and we want one document per line)

  • Remove the hostname from each line so that each starts with /path; in vim you can do %s/http:\/\/www\.example\.org//g
  • Remove the leading " and the trailing ", from each line; in vim you can do %s/",$//g
  • On the last line, make sure the closing " is removed too, since the previous regex missed it
  • Remove duplicate lines; in vim you can do :sort u
  • Save this file as list.txt for the next step (a scripted alternative to these vim steps is sketched right after this list).
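
If you would rather script that cleanup than do it in vim, something along these lines should produce the same list.txt (assuming the site is www.example.org):

    # strip the JSON decoration and the hostname, then de-duplicate
    grep -o '"http://www\.example\.org[^"]*"' workfile.txt \
      | sed -e 's/"//g' -e 's|http://www\.example\.org||' \
      | sort -u > list.txt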

2. Let’s scrape it all

We’ll do two scrapes. The first one grabs all the assets it can; then we’ll go over the site again with different options.

The following are the commands I ran on my last successful attempt to replicate the site I was working on.
This is not a claim that this method is the most efficient technique.
Please feel free to improve the document as you see fit.

First a quick TL;DR of wget options

  • -m is the same as --mirror
  • -k is the same as --convert-links
  • -K is the same as --backup-converted, which creates .orig files
  • -p is the same as --page-requisites, which makes wget fetch ALL of a page’s requirements
  • -nc ensures we don’t download the same file twice and end up with duplicates (e.g. file.html AND file.1.html)
  • --cut-dirs would prevent creating directories and would mix things up; do not use it.

Notice that we’re sending headers as if we were a web browser; that part is up to you.

wget -i list.txt -nc --random-wait --mirror -e robots=off --no-cache -k -E --page-requisites \
     --user-agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.94 Safari/537.36' \
     --header='Accept-Language: fr-FR,fr;q=0.8,fr-CA;q=0.6,en-US;q=0.4,en;q=0.2' \
     --header='Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' \
     http://www.example.org/

Then, another pass

wget -i list.txt --mirror -e robots=off -k -K -E --no-cache --no-parent \
     --user-agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.94 Safari/537.36' \
     --header='Accept-Language: fr-FR,fr;q=0.8,fr-CA;q=0.6,en-US;q=0.4,en;q=0.2' \
     --header='Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' \
     http://www.example.org/

3. Do some cleanup on the fetched files

Here are a few commands I ran to clean the files a bit

  • Strip carriage returns from every .orig file. They’re the ones we’ll use in the end, after all

    find . -type f -regextype posix-egrep -regex '.*\.orig$' -exec sed -i 's/\r//' {} \;
    
  • Rename the .orig files to .html

    find . -name '*orig' | sed -e "p;s/orig/html/" | xargs -n2 mv
    find . -type f -name '*\.html\.html' | sed -e "p;s/\.html//" | xargs -n2 mv
    
  • Many folders might contain only an index.html file. Let’s turn each of those into a plain file without the directory

    find . -type f -name 'index.html' | sed -e "p;s/\/index\.html/.html/" | xargs -n2 mv
    
  • Remove files that have a .1 (or any other number) in their name; they are most likely duplicates anyway

    find . -type f -name '*\.1\.*' -exec rm -rf {} \;
    

Setting up Discourse with Fastly as a CDN provider and SSL

The following is a copy of what I published in a question on meta.discourse.org about “Enable a CDN for your Discourse” while working on discuss.webplatform.org.

Setup detail

Our setup uses Fastly and leverages their SSL feature. Note that in order for you to use SSL too, you’ll have to contact them to have it enabled on your account.

SEE ALSO this post about making Discourse “long polling” work behind Fastly. That step is required and is a logical next step to this procedure.

In summary;

  • SSL between users and Fastly
  • SSL between Fastly and the “frontend” servers. (Those are the IPs we put into Fastly’s hosts configuration; they are also referred to as “origins” or “backends” in CDN-speak)
  • Docker Discourse instance (“upstream“) which listens only on private network and port (e.g. 10.10.10.3:8000)
  • More than two publicly exposed web servers (“frontend“), with SSL, that we use as “backends” in Fastly
  • Each frontend server runs NGINX with an upstream block proxying to the internal web servers that the Discourse Docker instances provide
  • We use NGINX’s upstream keepalive in the frontend to make sure we minimize the number of connections

Using this method, if we need to scale, we only need to add more internal Discourse Docker instances and add matching NGINX upstream entries.

Note that I recommend using direct private IP addresses instead of internal names. It removes complexity and the need to rewrite Host: HTTP headers.
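
Once this is in place, a quick probe of the response headers helps confirm that requests actually flow through Fastly (Fastly normally adds headers such as X-Served-By and X-Cache):

    curl -sI https://discuss.webplatform.org/ \
      | grep -iE '^(via|x-served-by|x-cache):'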

Steps

Everything is the same as a basic Fastly configuration; refer to their documentation to set up your domain.

Here are the differences;

  1. Set up your domain name with the CNAME Fastly will provide you (you will have to contact them for your account, though); ours looks like this:

    discuss.webplatform.org.  IN  CNAME  webplatform.map.fastly.net.
    
  2. In the Fastly panel, at Configure -> Hosts, we declare the publicly available frontend IPs.

    Notice we use port 443, so SSL is also used between Fastly and our frontends. You can set up Shielding (which is how you activate the CDN behavior within Fastly) by enabling it on only one host; I typically set it on the one I call “first”.

    Fastly service configuration, at Hosts tab

  3. In the Fastly panel, at Configure -> Settings -> Request Settings, we make sure the X-Forwarded-For header is forwarded. (In fact, you DON’T need this one; you can remove it.)

    Fastly service configuration, at Settings tab

  4. Frontend NGINX server has a block similar to this.

    In our case, we use Salt Stack as the configuration management system; it generates the virtual hosts for us using the Salt reactor system. Every time a Docker instance becomes available, the configuration is rewritten using this template.

    • {{ upstream_port }} would be at 8000 in this example

    • {{ upstreams }} would be an array of current internal Docker instances, e.g. ['10.10.10.3','10.10.10.4']

    • {{ tld }} would be webplatform.org in production, but can be anything else we need in other deployment, it gives great flexibility.
    • Notice the use of discoursepolling alongside the discourse subdomain name. Refer to this post about Make Discourse “long polling” work behind Fastly to understand its purpose

      upstream upstream_discourse {
      {%- for b in upstreams %}
          server    {{ b }}:{{ upstream_port }};
      {%- endfor %}
          keepalive 16;
      }
      
      server {
          listen      443 ssl;
          server_name discoursepolling.{{ tld }} discourse.{{ tld }};
      
          root    /var/www/html;
          include common_params;
          include ssl_params;
      
          ssl                 on;
          ssl_certificate     /etc/ssl/2015/discuss.pem;
          ssl_certificate_key /etc/ssl/2015/201503.key;
      
          # Use internal Docker runner instance exposed port
          location / {
              proxy_pass             http://upstream_discourse;
              include                proxy_params;
              proxy_intercept_errors on;
      
              # Backend keepalive
              # ref: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
              proxy_http_version 1.1;
              proxy_set_header Connection "";
          }
      }
      

    Note that I removed the include proxy_params; line. If you have lines similar to proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;, you don’t need them (!)

Install Discourse and Docker on Ubuntu 14.04 with aufs enabled

While working to run Discourse on Docker I was getting numerous errors about a mount problem related to device mapper.

If you are using Ubuntu 14.04 LTS, want to solve the problem with aufs, and use Salt Stack to control your VMs, here’s how I did it.

After digging for some time, I found others who also had problems running Discourse in Docker, and learned that we had to use “aufs”.

I had no idea that we could use different mount subsystems in Docker.

So I had to read some more to find out whether we have aufs (whatever that is), and how to tell whether or not Docker is configured to use it.

The issue docker/docker#10161 proved very helpful in figuring out what my problem was. I never thought I could tell Docker to use another storage engine; in the end, all you have to do is add a -s flag and it changes the storage engine.

It turns out that aufs is a kernel module, and that the latest Ubuntu 14.04 has it, but not in the default kernel package. You have to make sure that linux-image-extra-virtual and aufs-tools are installed.
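
A quick sanity check on the Docker host (not part of the Salt states below) can confirm both points:

    # Is the aufs kernel module available on this kernel?
    modinfo aufs > /dev/null 2>&1 && echo "aufs available" || echo "aufs missing"

    # Which storage driver is the Docker daemon currently using?
    docker info 2>/dev/null | grep -i 'storage driver'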

The documentation and various notes recommend downloading and running a shell script to install Docker instead of using the distribution packages. I refuse to follow that advice because I already manage every component of the infrastructure via Salt Stack; I just needed to put the right pieces together.

Since those changes are specific to Discourse, I didn’t want to make a pull request to saltstack-formulas/docker-formula right away; I’d need to follow the Salt formula conventions and add more logic for my PR to be usable by other server runtimes too. Not something I had in my plans for today.

Procedure

On the salt master, make sure you have the following:

  1. Add saltstack-formulas/docker-formula to your gitfs entries

    # /etc/salt/master.d/gitfs.conf
    fileserver_backend:
      - roots
      - git
    
    gitfs_remotes:
      - https://github.com/saltstack-formulas/docker-formula.git
    
  2. Ensure the discourse state (which includes docker) is in your base top file

    # /srv/salt/top.sls
    base:
      'backend-discourse*':
        - discourse
    
  3. Add these lines to /srv/salt/discourse/init.sls

    # https://github.com/saltstack-formulas/docker-formula
    {% set kernelrelease = salt['grains.get']('kernelrelease') -%}
    {%- set dir = '/srv/webplatform/discourse' -%}
    
    include:
      - docker
    
    /etc/default/docker:
      file.managed:
        - contents: |
            # Docker Upstart and SysVinit configuration file
            #
            # Managed by Salt Stack, at {{ source }}. Do NOT edit manually!
            # Docker dependencies: Refer to the script at https://get.docker.com/
            # Available cli options: http://docs.docker.com/reference/commandline/cli/
            DOCKER_OPTS="--dns 8.8.8.8 -s aufs"
        - watch_in:
          - service: docker-service
        - require:
          - pkg: lxc-docker
          - pkg: linux-kernel-deps
    
    linux-kernel-deps:
      pkg.installed:
        - pkgs:
          - linux-image-extra-{{ kernelrelease }}
          - aufs-tools
      cmd.run:
        - name: modprobe aufs
        - unless: modinfo aufs > /dev/null 2>&1
    
    clone-discourse:
      pkg.installed:
        - name: git
      git.latest:
        - name: https://github.com/discourse/discourse_docker.git
        - user: ubuntu
        - target: {{ dir }}
        - unless: test -f {{ dir }}/containers/app.yml
        - require:
          - file: {{ dir }}
          - pkg: git
      file.directory:
        - name: {{ dir }}
        - user: ubuntu
        - group: ubuntu
        - recurse:
          - user
          - group
    
  4. Restart your salt master service, sync everything and run highstate

    service salt-master restart
    salt \* saltutil.sync_all
    salt \* service.restart salt-minion
    salt discourse-backend\* state.highstate
    
  5. Go to the VM and run the installer

    Note that as long as there is no /srv/webplatform/discourse/containers/app.yml the states will update the git repository to the latest version. In my projects, I make sure that salt also generates a configuration file with my current environment details (e.g. DNS, Salt master, Redis, SMTP relay, Postgres’ private IP addresses).

Creating a new Ubuntu Salt master from the terminal using Cloud-Init

If you run Virtual Machines on a provider that uses OpenStack, you can also leverage a component made to automatically install software at creation time. With this, you can create any new node in your cluster, including the salt master, in a few terminal commands.

Before starting out, you need to make sure your cloud provider runs OpenStack and has Cloud-Init enabled. To check, look in the “Launch instance” dialog used to create a new VM for a tab titled “Post-Creation”; if it’s there, it might just work.


Cloud-Init basically reads a manifest that declares the specifics of the new VM, and is part of the process that turns the initial OpenStack image into the specific instance you will be using. You can follow these two articles, which describe well how Cloud-Init works.

Cloud-Init is made in a way that it handles distribution-specific package installation details automatically.

The following is specific to an Ubuntu server VM, but you might need to adjust the package names to match your current server distribution as those tools are getting more and more popular in the industry.

Before testing on a new VM, you can also check from an existing instance, asking through an HTTP request, using cURL, what the current instance’s post-creation script was.

Note that the IP address you see below is a virtual interface provided by OpenStack, but it can be queried over HTTP; try it out like this:

curl http://169.254.169.254/openstack/
2012-08-10
2013-04-04
2013-10-17

If you see similar output, you can ask what post-creation configuration (“userdata”) was used at creation time. You can dig through the tree; here’s how to find it in an OpenStack (CURRENT VERSION NICKNAME) cluster.

For instance, my salt master has the following configuration:

curl http://169.254.169.254/openstack/2013-10-17/user_data

#cloud-config
manage_etc_hosts: false
manage-resolv-conf: false
locale: en_US.UTF-8
timezone: America/New_York
package_upgrade: true
package_update: true
package_reboot_if_required: true

ssh_import_id: [saltstack]
apt_sources:
  - source: "ppa:saltstack/salt"

packages:
  - salt-minion
  - salt-common
  - salt-master
  - python-software-properties
  - software-properties-common
  - python-novaclient

To boot an instance from the terminal, you can use the “nova” command like this;

nova boot --image Ubuntu-14.04-Trusty --user-data /srv/cloudconfig.yml --key_name keyname --flavor subsonic --security-groups default salt

This assumes that you have the following available in your OpenStack dashboard:

  1. An SSH public key called “keyname” in your tenant
  2. A flavor called “subsonic” that has a predefined configuration of vCPU, vRAM, etc.
  3. A security group called “default”; you could use more than one by separating them with commas, e.g. default,foo,bar
  4. A text file in /srv/cloudconfig.yml in YAML format that holds your Cloud-Init (a.k.a. “userdata”) configuration.
  5. You have your nova configuration available (look for the “Download OpenStack RC File” link under “Access & Security”, “API Access” in your cloud provider dashboard) and sourced from your server’s /etc/profile.d/ folder.
  6. You have “python-novaclient” (or its equivalent) installed

To test it out yourself, you could use the block I gave earlier, create a file at /srv/cloudconfig.yml, and give the nova command a try.

You might want to call the new VM “salt”, since the default Salt minion configuration will try to reach a host by that name to use as its salt master. In this case, that will be itself.

The salt master’s userdata could also list a few git repositories to be cloned at creation time, making your salt master as easily replaceable as any other component of your “cloud”.

A set of sample scripts I use to create a new salt master off of a few git repositories can be found in the following Gist

Read more

The following articles describe in more detail what I introduced in this article.

Install PHP5 Memcached PECL extension and have it support igbinary

I was trying to figure out why my PHP setup would never use igbinary to serialize sessions stored in Memcached with the current Memcached PECL extension.

Before: the registered session handlers show memcached
Before: igbinary support says “no”

After some research I found a procedure in an answer on StackOverflow.

But it didn’t solve my main requirement: since I do automated deployment, I MUST be able to move packages around. Since all my VMs use the same distribution and I already have my own apt repo, I could just add one more .deb file.

My objective was now to package it for deployment. To do this, I discovered Jordan Sissel’s project called fpm, which stands for “Freaking Package Manager” (sic).

My target deployment runs on Ubuntu 14.04 LTS and I want it to replace upstream php5-memcached package as a simple .deb file.

Build from PECL source

NOTE The following was run on an Ubuntu 14.04 VM with @rynop’s procedure.

  1. Setting the machine up to make a package.

    mkdir /tmp/php5-memcached
    cd /tmp/php5-memcached
    apt-get install -y php5-dev pkg-config php-pear
    
  2. Follow the steps from the procedure. They were taken from the original procedure and stop just before issuing ./configure.

    pecl_memcached_ver="2.2.0"
    pecl download memcached-${pecl_memcached_ver}
    tar xzvf memcached-${pecl_memcached_ver}.tgz
    cd memcached-${pecl_memcached_ver}/
    phpize
    
  3. I realized that under Ubuntu 14.04 we also need to disable Memcached SASL support, so the configure step differs:

    ./configure --enable-memcached-igbinary --disable-memcached-sasl
    

Make a .deb package

  1. Install jordansissel/fpm

    apt-get install -y ruby-dev gcc
    gem install fpm
    
  2. Check the contents of the package you want to replace, so we can replicate it for our own purposes:

    dpkg --list | grep php5-memcached
    find /var/cache/apt -type f -name '*php5-memcached*'
    dpkg -c /var/cache/apt/archives/php5-memcached_2.1.0-6build1_amd64.deb
    
  3. I figured out in the output that I only needed a few folders, etc/php5/mods-available/ and usr/lib/php5/foo, so I created them manually.

    mkdir -p etc/php5/mods-available/
    # Adjust memcached.ini to suit your tastes, then prepare it for packaging
    cp memcached.ini etc/php5/mods-available/
    # Make sure the usr/lib/php5/foo path matches
    # the result of the `dpkg -c` you issued
    mkdir -p usr/lib/php5/20121212/
    cp modules/memcached.so usr/lib/php5/20121212/
    
  4. Magic will happen

    fpm -s dir -t deb -n php5-memcached -v 2.2.0-wpd -m '<[email protected]>' --description 'PHP 5.5 PECL igbinary + memcached support' -d libmemcached10 etc/ usr/
    

    I could have used --replaces REPLACES in the fpm options, but when I built this package, I didn’t know which syntax to use. It’s an optional argument anyway.

  5. Test if the package works

    dpkg -i php5-memcached_2.2.0-wpd_amd64.deb
    [email protected]:/srv/webplatform/buggenie# dpkg -i /srv/webplatform/apt/php5-memcached_2.2.0-wpd_amd64.deb
    (Reading database ... 118781 files and directories currently installed.)
    Preparing to unpack .../php5-memcached_2.2.0-wpd_amd64.deb ...
    Unpacking php5-memcached (2.2.0-wpd) over (2.1.0-6build1) ...
    Setting up php5-memcached (2.2.0-wpd) ...
    

    Success!

  6. Look at the phpinfo

After: registered session handlers
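
From the command line, the same check can be done without loading a phpinfo() page; these greps are just one way to eyeball it:

    # both greps should now report igbinary/memcached support as enabled
    php -i | grep -i 'igbinary support'
    php -m | grep -i memcached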

Update your private apt repository (or create one)

  1. Then, in your own apt repository (if you do have one), here’s how I rebuild the index. Note that it’s not more complex than a folder with a bunch of .deb files:

    mkdir -p /srv/apt
    cp php5-memcached_2.2.0-wpd_amd64.deb /srv/apt
    cd  /srv/apt
    apt-get install -y dpkg-dev
    dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
    echo 'deb file:/srv/apt ./' > /etc/apt/sources.list.d/foo.list
    

Done!

Create a MariaDB cluster with replication over SSL with Salt Stack

While reworking the WebPlatform infrastructure, I had to rebuild the database cluster.

The objective of the cluster is to have more than one database server so that our web applications can make reads on any node in the cluster.

The system has replication, and I can send reads to any node in the cluster. But there is a flaw too: any node can also accept writes; nothing blocks them.

My plan is to change this so that it would be OK to send writes to any node in the cluster. There is now something called “Galera” that would allow me to do that, but it’s outside the scope of this article.

In the current configuration, I’m purposely not fixing it, because my configuration management makes sure only the current master receives writes. In this setup, I decided that the VM that gets the writes has “masterdb” in its hostname.

That way, it’s easy to see, and it gives me the ability to change masters at any time if an emergency requires it.

Changing MariaDB replication master

Changing master could be done in the following order:

  • Lock writes on the masterdb databases
  • Wait for replication to catch up
  • On the secondary database servers, remove the replication configuration
  • Tell all web apps to use the new database master
  • Remove the database lock
  • Set up a new replication configuration that uses the new master (a rough sketch of the first steps is shown after this list)
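
A minimal sketch of the “lock writes” part, run against the current master from its shell (not a full runbook):

    # block writes on the current master (users with SUPER can still write)
    mysql -e "SET GLOBAL read_only = 1;"
    # note the binlog position; the replicas have caught up once SHOW SLAVE STATUS
    # on each of them reports this file/position and Seconds_Behind_Master: 0
    mysql -e "SHOW MASTER STATUS\G"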

Thanks to the fact that I manage everything through configuration management (including the web app configuration files), it’s only a matter of applying the states everywhere in the cluster. That makes such a heavy move fairly easy to do, even under stress.

This post will be updated once I have completed the multi writes setup.

Procedure

Assumptions

The rest of this article will assume the following:

  1. You are running VMs on OpenStack, and do have credentials to make API calls to it
  2. You have a Salt master already running
  3. Your salt master has at least python-novaclient (nova commands) available on it
  4. You have your OpenStack credentials already loaded in your salt master’s /etc/profile.d/ so you can use nova directly

From the salt-master, initiate a few VMs to use for your database cluster

  1. Before booting, ensure you have the following details in your OpenStack cluster and salt master;

    • You have a SSH key in your OpenStack cluster. Mine is called “renoirb-production” and my salt master user has the private key preinstalled
    • You have a userdata.txt file that has settings that points to your salt master

      cat /srv/opsconfigs/userdata.txt
      
      #cloud-config
      
      manage_etc_hosts: false # Has to be set to false for everybody. Otherwise we need a DNS
      manage-resolv-conf: false
      locale: en_US.UTF-8
      timezone: America/New_York
      package_upgrade: true
      package_update: true
      package_reboot_if_required: true
      
      #
      # This is run at EVERY boot, good to ensure things are at the right place
      # IMPORTANT, make sure that `10.10.10.22` is a valid local DNS server.
      bootcmd:
        - grep -q -e 'nameserver' /etc/resolvconf/resolv.conf.d/head || printf "nameserver 10.10.10.22\n" >> /etc/resolvconf/resolv.conf.d/head
        - grep -q -e 'wpdn' /etc/resolvconf/resolv.conf.d/base || printf "search production.wpdn\ndomain production.wpdn\nnameserver 8.8.8.8" > /etc/resolvconf/resolv.conf.d/base
        - grep -q -e 'wpdn' /etc/resolv.conf || resolvconf -u
      
      runcmd:
        - sed -i "s/127.0.0.1 localhost/127.0.1.1 $(hostname).production.wpdn $(hostname)\n127.0.0.1 localhost/" /etc/hosts
        - apt-get install software-properties-common python-software-properties
        - add-apt-repository -y ppa:saltstack/salt
        - apt-get update
        - apt-get -y upgrade
        - apt-get -y autoremove
      
      packages:
        - salt-minion
        - salt-common
      
      # vim: et ts=2 sw=2 ft=yaml syntax=yaml
      
  2. Create two db-type VMs

    nova boot --image Ubuntu-14.04-Trusty --user-data /srv/opsconfigs/userdata.txt --key_name renoirb-production --flavor lightspeed --security-groups default db1-masterdb
    nova boot --image Ubuntu-14.04-Trusty --user-data /srv/opsconfigs/userdata.txt --key_name renoirb-production --flavor supersonic --security-groups default db2
    
  3. Accept their keys on the salt master

    salt-key -y -a db1-masterdb
    salt-key -y -a db2
    

    As an aside: imagine you want to run dependent actions automatically once a VM is part of your salt master, for example adding its private IP address to a local Redis or Etcd live configuration object. One could create a Salt “Reactor” and make sure the data is refreshed. This gist is a good starting point.

  4. Wait for the VM builds to finish, then get their private IP addresses

    nova list | grep db
    | ... | db1-masterdb | ACTIVE  | Running     | private-network=10.10.10.73 |
    | ... | db2          | ACTIVE  | Running     | private-network=10.10.10.74 |
    
  5. Add them to the pillars.
    Note that the “masterdb” part of the name is what the Salt states use to know which node gets the writes.
    Note that, in the end, the web app configs will use the private IP address:
    it’s quicker to generate pages if the backend doesn’t need to do name resolution for every database query.
    This is why we reflect it in the pillars. Ensure the following structure exists in the file.

    # Edit /srv/pillar/infra/init.sls at the following blocks
    infra:
      hosts_entries:
        masterdb: 10.10.10.73
    
  6. Refer to the right IP address in the configuration file with a pillar.get reference in the states, similar to this:

      /srv/webplatform/blog/wp-config.php:
        file.managed:
          - source: salt://code/files/blog/wp-config.php.jinja
          - template: jinja
          - user: www-data
          - group: www-data
          - context:
              db_creds: {{ salt['pillar.get']('accounts:wiki:db') }}
              masterdb_ip: {{ salt['pillar.get']('infra:hosts_entries:masterdb') }}
          - require:
            - cmd: rsync-blog
    

    … and the wp-config.php.jinja

    <?php
    
    ## Some PHP configuration file that salt will serve on top of a deployed web application
    
    ## Managed by Salt Stack, please DO NOT TOUCH, or ALL CHANGES WILL be LOST!
    ## source {{ source }}
    
    define('DB_CHARSET',  "utf8");
    define('DB_COLLATE',  "");
    define('DB_HOST',     "{{ masterdb_ip|default('127.0.0.1')    }}");
    define('DB_NAME',     "{{ db_creds.database|default('wpblog') }}");
    define('DB_USER',     "{{ db_creds.username|default('root')   }}");
    define('DB_PASSWORD', "{{ db_creds.password|default('')       }}");
    
  7. Refresh the pillars, run highstate on the salt master, and test it out.

    salt-call saltutil.sync_all
    salt salt state.highstate
    
    salt-call pillar.get infra:hosts_entries:masterdb
    > local:
    >     10.10.10.73
    
  8. Make sure the VMs have the same version of salt as you do

    salt-call test.version
    > local:
    >     2014.7.0
    
    salt db\* test.version
    > db2:
    >     2014.7.0
    > db1-masterdb:
    >     2014.7.0
    
  9. Kick off the VMs’ installation

    salt db\* state.highstate
    
  10. Highstate takes a while to run, but once it is done, you should be able to work with the nodes for the remainder of this tutorial

    salt -G 'roles:db' mysql.version
    > db2:
    >     10.1.2-MariaDB-1~trusty-wsrep-log
    > db1-masterdb:
    >     10.1.2-MariaDB-1~trusty-wsrep-log
    

    Each db-type VM’s MySQL/MariaDB/Percona server will have a different database maintenance user defined in /etc/mysql/debian.cnf.

    Make sure you don’t overwrite them unless you import everything all at once, including the users and their grants.

  11. Check that each db VM has its SSL certificates generated by Salt

    salt -G 'roles:db' cmd.run 'ls /etc/mysql | grep pem'
    > db2:
    >     ca-cert.pem
    >     ca-key.pem
    >     client-cert.pem
    >     client-key.pem
    >     client-req.pem
    >     server-cert.pem
    >     server-key.pem
    >     server-req.pem
    > db1-masterdb:
    >     ca-cert.pem
    >     ca-key.pem
    >     client-cert.pem
    >     client-key.pem
    >     client-req.pem
    >     server-cert.pem
    >     server-key.pem
    >     server-req.pem
    

    Those files are the certificates and keys the servers will use to do replication over SSL.

Now on each database server;

  1. Connect to both db nodes, using the salt master as a jump host

    ssh masterdb.production.wpdn
    ssh db2.production.wpdn
    
  2. Get to the MySQL/MariaDB/Percona prompt on each VM.

    If you are used to terminal multiplexers that keep sessions running
    even if you get disconnected, now is the time to use one; we never know when a connection will hang.

    On the WebPlatform systems we have screen, but tmux works too.

    mysql
    
  3. Check if SSL is enabled on both MySQL/MariaDB/Percona servers

    > MariaDB [(none)]> SHOW VARIABLES like '%ssl%';
    > +---------------+----------------------------+
    > | Variable_name | Value                      |
    > +---------------+----------------------------+
    > | have_openssl  | YES                        |
    > | have_ssl      | YES                        |
    > | ssl_ca        | /etc/mysql/ca-cert.pem     |
    > | ssl_capath    |                            |
    > | ssl_cert      | /etc/mysql/server-cert.pem |
    > | ssl_cipher    | DHE-RSA-AES256-SHA         |
    > | ssl_crl       |                            |
    > | ssl_crlpath   |                            |
    > | ssl_key       | /etc/mysql/server-key.pem  |
    > +---------------+----------------------------+
    
  4. Generate SSL certificates for MySQL/MariaDB/Percona server, see this gist on how to do it.

  5. Places to double-check: to see which config keys set what’s shown in the previous output, take a look in the VMs’ /etc/mysql/conf.d/ folder for entries similar to these.

    • bind-address is what allows the servers to communicate with each other; before MySQL 5.5 we had skip-networking, but now a bind-address is sufficient. Make sure your security groups allow only local network connections, though.
    • server_id MUST be a different number on each node. Make sure no two servers share the same number.

      [mysqld]
      bind-address = 0.0.0.0
      log-basename=mariadbrepl
      log-bin
      binlog-format=row
      server_id=1
      ssl
      ssl-cipher=DHE-RSA-AES256-SHA
      ssl-ca=/etc/mysql/ca-cert.pem
      ssl-cert=/etc/mysql/server-cert.pem
      ssl-key=/etc/mysql/server-key.pem
      
      [client]
      ssl
      ssl-cert=/etc/mysql/client-cert.pem
      ssl-key=/etc/mysql/client-key.pem
      
  6. From the database master (a.k.a. “masterdb”), get the replication log position;
    we’ll need the File and Position values to set up the replication node.

    MariaDB [(none)]> show master status;
    > +------------------------+----------+--------------+------------------+
    > | File                   | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    > +------------------------+----------+--------------+------------------+
    > | mariadbrepl-bin.000002 |      644 |              |                  |
    > +------------------------+----------+--------------+------------------+
    
  7. Configure the masterdb to accept a replication user. From the salt master:

     salt -G 'roles:masterdb' mysql.user_create replication_user '%' foobarbaz
    

    NOTE: My salt states script creates a grain in /etc/salt/grains with the following data;

    roles:
      - masterdb
    

    Alternatively, since the VM is called db1-masterdb, you could use a small Python script that parses that name for you and sets the grain automatically.

  8. Back on the masterdb VM, check that the user exists and ensure SSL is required

    MariaDB [(none)]> show grants for 'replication_user';
    > +---------------------------------------------------------------------------------------+
    > | Grants for replication_user@%                                                         |
    > +---------------------------------------------------------------------------------------+
    > | GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%' IDENTIFIED BY PASSWORD '...' |
    > +---------------------------------------------------------------------------------------+
    
    MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%.local.wpdn' REQUIRE SSL;
    MariaDB [(none)]> GRANT USAGE ON *.* TO 'replication_user'@'%' REQUIRE SSL;
    
    MariaDB [(none)]> SELECT User,Host,Repl_slave_priv,Repl_client_priv,ssl_type,ssl_cipher from mysql.user where User = 'replication_user';
    > +------------------+--------------+-----------------+------------------+----------+------------+
    > | User             | Host         | Repl_slave_priv | Repl_client_priv | ssl_type | ssl_cipher |
    > +------------------+--------------+-----------------+------------------+----------+------------+
    > | replication_user | %.local.wpdn | Y               | N                | ANY      |            |
    > +------------------+--------------+-----------------+------------------+----------+------------+
    
  9. On the secondary db VM, at the mysql prompt, run the initial CHANGE MASTER statement:

    CHANGE MASTER TO
      MASTER_HOST='masterdb.local.wpdn',
      MASTER_USER='replication_user',
      MASTER_PASSWORD='foobarbaz',
      MASTER_PORT=3306,
      MASTER_LOG_FILE='mariadbrepl-bin.000002',
      MASTER_LOG_POS=644,
      MASTER_CONNECT_RETRY=10,
      MASTER_SSL=1,
      MASTER_SSL_CA='/etc/mysql/ca-cert.pem',
      MASTER_SSL_CERT='/etc/mysql/client-cert.pem',
      MASTER_SSL_KEY='/etc/mysql/client-key.pem';
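
The CHANGE MASTER statement only configures replication; the replication threads still have to be started. A small addition, run from the same secondary VM’s shell (the next section shows what a healthy replica looks like):

    # start the replication threads on the secondary
    mysql -e "START SLAVE;"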
    

Checking if it worked

From one of the secondary servers, look for success indicators:

  • Seconds_Behind_Master says 0,
  • Slave_IO_State says Waiting for master to send event

    ```
    MariaDB [wpstats]> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: masterdb.local.wpdn
                      Master_User: replication_user
                      Master_Port: 3306
                    Connect_Retry: 10
                  Master_Log_File: mariadbrepl-bin.000066
              Read_Master_Log_Pos: 19382112
                   Relay_Log_File: mariadbrepl-relay-bin.000203
                    Relay_Log_Pos: 19382405
            Relay_Master_Log_File: mariadbrepl-bin.000066
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 19382112
                  Relay_Log_Space: 19382757
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: Yes
               Master_SSL_CA_File: /etc/mysql/ca-cert.pem
               Master_SSL_CA_Path:
                  Master_SSL_Cert: /etc/mysql/client-cert.pem
                Master_SSL_Cipher:
                   Master_SSL_Key: /etc/mysql/client-key.pem
            Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error:
                   Last_SQL_Errno: 0
                   Last_SQL_Error:
      Replicate_Ignore_Server_Ids:
                 Master_Server_Id: 1
                   Master_SSL_Crl: /etc/mysql/ca-cert.pem
               Master_SSL_Crlpath:
                       Using_Gtid: No
                      Gtid_IO_Pos:
          Replicate_Do_Domain_Ids:
      Replicate_Ignore_Domain_Ids:
    1 row in set (0.00 sec)
    ```
    

Managing users

In the end, since replication is active, you can add users to your system and all nodes will get the privileges.

The way I work is that I can either use Salt Stack states to add privileges (more details soon)
or use a few salt commands from my salt master and send them to the masterdb VM.

salt -G 'roles:masterdb' mysql.db_create 'accounts_oauth' 'utf8' 'utf8_general_ci'
salt -G 'roles:masterdb' mysql.user_create 'accounts' '%' 'barfoo'
salt -G 'roles:masterdb' mysql.grant_add 'ALL PRIVILEGES' 'accounts_oauth.*' 'accounts' '%'

References

  • http://dev.mysql.com/doc/refman/5.7/en/replication-solutions-ssl.html
  • https://mariadb.com/kb/en/mariadb/documentation/managing-mariadb/replication/standard-replication/setting-up-replication/

How to create a patch and ensure it is applied within Salt Stack

Quick tutorial on how to create a patch and ensure it is applied using salt stack.

Procedure

Creating a patch

  1. Create a copy of the original file

    cp file file.orig

  2. Modify the file, and test

  3. Create an md5 sum of the modified file for later use

    cat file | md5sum

  4. Revert the modification, then prepare the patch

    mv file file.mod
    cp file.orig file

  5. Create the diff

    diff -Naur file file.mod > file.patch
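
Before wiring it into Salt, it doesn’t hurt to confirm the patch applies cleanly against a pristine copy (a quick check, not part of the original steps):

    # apply against the pristine file without actually modifying it
    patch --dry-run file file.patch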
    

Create Salt stack manifest block in appropriate state file

Add a block similar to this as a patch state definition. Make sure it is referenced at least in your top.sls.

    /usr/share/ganglia-webfrontend/auth.php:
      file.patch:
        - source: salt://monitor/auth.php.patch
        - hash: md5=480ef2ae17fdfee85585ab887fa1ae5f

A procedure for an easy-to-configure local development environment with Apache

I don’t know about you, but I can no longer program without having the server environment locally on my machine. Changing or adding a VirtualHost file for every new project is quite repetitive. Surely there must be an automatic way to do it?

Yes.

It’s called VirtualDocumentRoot.

This tutorial has been sitting in my personal wiki for ages, and it’s now, as I start migrating my projects to NGINX, that I decided to put it online. It’s never too late to publish.

This configuration method answers exactly one specific need: not having to configure an Apache virtual host for every project.

With this procedure, you will only have to maintain your hosts file; the rest follows on its own.

You can apply this technique with any version of the Apache HTTP server. It can even be done if you develop on Windows or Mac OS with Apache HTTP server distributions such as MAMP, XAMPP, or EasyPHP.

With a local web server, this type of configuration has been possible for a long time; you simply have to know what it’s called: VirtualDocumentRoot.

Here is how I have been configuring my LAMP environment for a while now.

Procedure

Establishing the convention

Everything starts with a convention. With it in place, everything else should follow automatically.

The idea is to be able to access the workspace of project A for client B on my local machine. The local address is no longer localhost, but something more explicit.

What I like most about this method is that it keeps everything specific to the project and the client in a parent folder. Having the code to execute in a sub-folder just makes sense.

For example, a project called projectname for the client client would be filed in a folder under the path /home/renoirb/workspace/client/projectname.

The project’s web code would be served at http://projectname.client.dev/, which points to the local workstation’s IP address.

The project workspace

IMPORTANT
Folder names must be lowercase, with no spaces and no accented characters, otherwise Apache may not find the folder. That’s mainly because the address typed in the browser is converted to lowercase, and any self-respecting operating system makes a difference between, for example, ‘Allo’ and ‘allo’.

The suggested convention goes like this:

  • each project is filed under a predictable path, similar to /home/renoirb/workspace/client/projectname
  • the project has a web/ folder
  • the other folders at the same level as web/ can be anything else.

Ideally, the application logic shouldn’t be publicly visible anyway; only the main file calls the “autoloader”, which lives outside the DocumentRoot.

This way you can group everything from the same client together, separated by project.

The procedure also takes into account that:
* the current user can write to their workspace/ folder, with Apache2 acting as that user thanks to mpm-itk
* the domain name used determines which of the user’s folders to look in

Procedure

  • Make sure the required modules are loaded
     sudo a2enmod vhost_alias
  • Adjust Apache’s default ports configuration
     sudo vi /etc/apache2/ports.conf
  • Check that it contains this:
    NameVirtualHost *:80
    Listen 80
    UseCanonicalName Off
  • Modify the default VirtualHost configuration file
  • The magic configuration file:
    sudo vi /etc/apache2/sites-available/default
  • Check that this block is present inside <VirtualHost ...>:
    <IfModule mpm_itk_module>
        AssignUserId renoirb users
    </IfModule>
  • Replace the DocumentRoot directive with this format:
    VirtualDocumentRoot /home/renoirb/workspaces/%1/%0/web
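
With that in place, the only manual step left for each new project is a hosts entry pointing the project’s name at your workstation; for example:

    # add a local name for a new project (run once per project)
    echo "127.0.0.1 projectname.client.dev" | sudo tee -a /etc/hosts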

Who else is using feature flipping thing on their web applications?

I am currently reading and collecting ideas on how to present and propose an implementation in my projects.

I want to use:

  • Continuous integration
  • Automated builds
  • Feature flipping

And make all of this quick and easy for anybody in the team.

Feature flipping

This is fairly new to me, but I like the idea. The concept is that components declare, in code, which features they contribute to.

This way, we can completely hide a feature from users until it is ready.

Source control branching

I am currently searching and preparing to introduce to my client ways to work with a few elements in our project.

The idea is to stop managing a complex branching scheme and trim things down to the essential.

A trend was to use GitFlow; then the project grows, developers no longer have time to manage everything, and things get out of hand.

It may then end up looking something like this:

Quoting a slide from Zach Holman about branching

It doesn’t seem bad at first, but even though Git makes branching easy, it can add overhead if you want to adapt quickly.

At least, that’s what Flickr, GitHub, Twitter, and Facebook (so I’ve heard) do.

I’ll keep you posted on what I find on the idea soon-ish.

Raising awareness about unsolicited email

Tonight I decided to write about things that matter to me.

I see messages from long-time friends circulating chain letters and other attempts at “web scams”, such as “prevent Hotmail from shutting down”, claims that it will become a paid service, or even warnings to avoid contacting someone because they send viruses.

I have been holding back from writing openly on the subject for years, and I decided to break the silence.

The message I received that inspired me to reply announced that Hotmail would become a paid service.

What stunned me was that the person had put more than a hundred addresses in carbon copy.

I replied to that friend, as well as to every email address that was in carbon copy, with the following message:

The email I sent

From: (my-spam-address)@msn.com (not the real one)
Subject: Raising awareness about unsolicited email
Date: Wed, 3 Dec 2008 01:58:53 -0500

For those who don’t know me, my name is Renoir. I’m 29 years old. Professionally, I have been a webmaster by passion since 1999. I have built over 150 websites in my career and I work for a (company I will not name) that is a consulting firm offering a suite of web-related IT services.

A bit more about me…

Before getting into it, let me introduce myself.

In my work I have had a lot of experience with computers, and I can round out my profile by saying that I have not used software made by Microsoft since 2003.

In my day-to-day life I use a Mac and machines running Ubuntu Linux.

I dislike Microsoft not only because it may seem cool to say so, but mainly because I spent a large part of the beginning of my career writing code twice for software that should have been compatible: once for the Internet Explorer browser, and once for browsers that follow web standards.

Why am I writing this email?

I am writing all of this simply to raise a bit of awareness about unsolicited email.

In fact, further down (after my signature) you will see a typical HOAX message (a con). I will even break it down for you.

I also wanted to do a good deed, because I see many people getting fooled by this type of message, and if I teach you something, well, I will be delighted.

The recipients

One of the first points I would like to raise awareness about is forwarding an email to everybody without cleaning up all the email addresses displayed in it.

When you hit “forward”, everybody can see the other recipients who received the message (except those in BCC / CCI); there were more than a hundred of you (!!).

BCC (or CCI in French) stands for blind carbon copy (“copie conforme invisible”), which protects the privacy of the people who received the message before you.

As for this message, I do not know how many people will receive it, but so much the better if it taught you something.

The potential danger of leaving the other recipients’ addresses wide open is that people and/or programs can harvest your email address and make your inbox’s unsolicited mail problem even worse. Not to mention that some guy can come out of nowhere and write to everybody, doing exactly what I am doing.

The cons

This is also called phishing, or a HOAX, and it works in several ways, always to obtain information or something of value without the victim’s knowledge.

Basically, it is about spinning a tale and providing (sometimes very little) evidence for what is being claimed.

Crooks go as far as completely recreating the home page of “some bank”’s website, then sending emails pretending to be “some bank”, and then waiting for you to click the email they crafted, hoping you will enter your username and password… BOOM!

Never trust an email telling you to log in by “clicking here” (with a link so generously provided (sarcasm)) without checking whether the address is valid. You must ALWAYS make sure you are on the original site.

Example of a potential fraud: http://www.cibc.a.com/olbtxn/authentication/PreSignOn.cibc?locale=en_CA, whereas a valid address is: https://www.cibconline.cibc.com/olbtxn/authentication/PreSignOn.cibc?locale=en_CA

Notice the …cibc.a.com… Whatever sits right before the .com (or .net, .org, .edu, .ca, .gc.ca…) is the TOP LEVEL on the internet. So in my first example, somebody bought a.com, while CIBC bought cibc.com…. Which one is the REAL bank, in the end? ;)

Also, http versus https matters a LOT!

The “s” stands for “secure”, because it encrypts the data between the server and the browser you are using.

If you are on a bad site, a good browser will tell you that the site does not have a valid certificate. Even better, it can tell you whether a site has been flagged as a fraud.

To be continued…

This post was originally a single, much longer post; it has been split into several parts because it covered far too many topics and was confusing.

You can continue with the follow-up posts: “a few clues to tell whether it is a chain letter”, and “Web browsers, you have a choice”.

Read more on the same theme

This post is part of a series of articles about my observations on web scams.

You can see all the articles in the Projects section, under the part devoted to internet scams.

This post was revised on June 6, 2013.

Completely and securely erasing a hard drive

When you care about your personal information and want to get rid of a computer or an old hard drive, you should, ideally, wipe it. I have a small method to do so that is not too complicated and completely safe for your data.

My method requires

  • A hard drive you want to wipe (erase)
  • A Linux LiveCD
  • Time
  • A computer tower to run the process on, ideally an unused one… otherwise, know where to look or what to use so it does not get in your way
    • A CD-ROM drive

Careful, this is a bit technical… but highly recommended :)

STEP ONE… backups?!

It is a bit silly to have to say it… but make sure the drive truly holds none of your data anymore before doing anything!

I recommend NOT HAVING ANY OTHER DISKS INSTALLED while running this script. As long as you do not run the script at step six, you risk nothing (!).
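
The script itself is in the rest of the original post, which is not reproduced in this excerpt. As a general idea, a wipe from a Linux LiveCD usually boils down to something like this (sdX is a placeholder for the drive to erase; double-check it first):

    lsblk                        # identify the right disk before anything else
    sudo shred -v -n 1 /dev/sdX  # DANGER: irreversibly overwrites the whole drive
    # or, to simply zero it out instead:
    # sudo dd if=/dev/zero of=/dev/sdX bs=4M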

Continue reading “Completely and securely erasing a hard drive”

Usability fact #3: we don’t figure out how things work, we muddle through

Cover of the book “Don’t Make Me Think”

Here is the next part of my review of a book I read recently about usability. The book is called Don’t Make Me Think: A Common Sense Approach to Web Usability; here is a third fact worth keeping in mind.

Quoting the original English:
Fact of life #3 > We don’t figure out how things work. We muddle through.

Continue reading “Usability fact #3: we don’t figure out how things work, we muddle through”

Adding value to a password: a few tips

While catching up on my podcasts, my favourite podcast explained a few concepts for improving password security.

In short:

  • Not based on dictionary words
  • Shrink a phrase down into one
  • Multi-factor authentication

This will be a series of posts on the subject that I will update from time to time. I have not decided how many posts there will be yet, but I created a new tag for them: “saferpasswords”.
Continue reading “Adding value to a password: a few tips”

Image conversion and resampling

Has it ever happened to you that you receive a bunch of images too big to view, and those roughly one hundred images weigh as much as a whole DVD (!!!)?

That is what happened to me: a DVD full of photos, all in TIF format, weighing 3 GB! Every one of them large enough to print a billboard the size of the ones along our highways!

Resizing the photos by hand… No thanks!

Here is how I did it on Linux: converting the TIFs to JPG, then shrinking the photos down to a more reasonable width.
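
The detailed commands are in the full post; as a sketch, the general ImageMagick approach (not necessarily the author’s exact commands) looks like this:

    # convert every TIF to JPG, then shrink anything larger than 1600px
    for f in *.tif; do convert "$f" "${f%.tif}.jpg"; done
    mogrify -resize '1600x1600>' *.jpg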

Continue reading “Image conversion and resampling”

Installing VMware on Ubuntu Server with Canonical’s repository

The idea is to have a minimal rack-mount server made to host the network’s infrastructure VMs. Right now, several of our *physical* servers already have X Windows on their hosts (SuSE v10, RHEL 4 U5, Gentoo 2007.0, Ubuntu, etc.), but those installations were done with everything they shipped with, without any thought for saving resources. This server will be minimal installation-wise and maximized for VMware… here are the steps I followed.
Continue reading “Installing VMware on Ubuntu Server with Canonical’s repository”
