Add OpenStack instance meta-data info in your salt grains

During a work session on my salt-states for WebPlatform.org I wanted to be able to query the OpenStack cluster meta-data so that I could adjust my salt configuration more efficiently.

What are grains? Grains are structured data describing what a minion has, such as which version of GNU/Linux it's running, what its network adapters are, etc.

The following is a Python script that adds data to Salt Stack's internal database, called grains.

I have to confess that I didn't write the script but adapted it to work within an OpenStack cluster, more precisely on DreamHost's DreamCompute cluster. The original script came from saltstack/salt-contrib; the original file was ec2_info.py, which reads data from EC2.

The original script wasn't getting any data in the cluster, most likely due to API changes and because the EC2 API exposes dynamic meta-data that the DreamCompute/OpenStack cluster doesn't.

In the end, I edited the file to make it work on DreamCompute and also truncated some data that the grains subsystem already has.

My original objective was to get the list of security groups the VM was assigned to. Unfortunately, the API doesn't give that information yet. Hopefully I'll find a way to get it some day.

Get OpenStack instance detail using Salt

Locally

salt-call grains.get dreamcompute:uuid
local:
    10a4f390-7c55-4dd3-0000-a00000000000

Or for another machine

salt app1 grains.get dreamcompute:uuid
app1:
    510f5f24-217b-4fd2-0000-f00000000000

What size (flavor) did we create a particular VM with?

salt app1 grains.get dreamcompute:instance_type
app1:
    lightspeed
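
Because grains can be used for targeting, this data also lets you run commands against just a subset of VMs. A couple of examples, with grain values taken from the sample output shown below:

salt -G 'dreamcompute:instance_type:lightspeed' test.ping
salt -G 'dreamcompute:availability_zone:iad-1' cmd.run 'uptime'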

What data you can get

Here is a sample of the grain data that will be added to every salt minion you manage.

You might notice that some data will be repeated such as the ‘hostname’, but the rest can be very useful if you want to use the data within your configuration management.

dreamcompute:
    ----------
    availability_zone:
        iad-1
    block_device_mapping:
        ----------
        ami:
            vda
        ebs0:
            /dev/vdb
        ebs1:
            vda
        root:
            /dev/vda
    hostname:
        salt.novalocal
    instance_action:
        none
    instance_id:
        i-00000000
    instance_type:
        lightspeed
    launch_index:
        0
    local_ipv4:
        10.10.10.11
    name:
        salt
    network_config:
        ----------
        content_path:
            /content/0000
        name:
            network_config
    placement:
        ----------
        availability_zone:
            iad-1
    public_ipv4:
        203.0.113.11
    public_keys:
        ----------
        someuser:
            ssh-rsa ...an rsa public key... someuser-comment@example.org
    ramdisk_id:
        None
    reservation_id:
        r-33333333
    security-groups:
        None
    uuid:
        10a4f390-7c55-4dd3-0000-a00000000000

What does the script do?

The script basically scrapes the OpenStack meta-data service and serializes the data it gets into the Salt Stack grains system.

OpenStack’s meta-data service is similar to what you’d get from AWS, but doesn’t expose exactly the same data. This is why I had to adapt the original script.

To get data from an instance, you simply (really!) need to make an HTTP call to an internal IP address that OpenStack's nova service answers on.

For example, from an AWS/OpenStack VM, you can get the instance hostname by running

curl http://169.254.169.254/latest/meta-data/hostname
salt.novalocal
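
You can explore the rest of the meta-data tree the same way; the endpoints below are among the ones the script ends up calling (see the log excerpt further down):

curl http://169.254.169.254/latest/meta-data/
curl http://169.254.169.254/latest/meta-data/block-device-mapping/
curl http://169.254.169.254/openstack/2012-08-10/meta_data.json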

To see which endpoints the script calls, you can add a log line to the _call_aws(url) method like so

diff --git a/_grains/dreamcompute.py b/_grains/dreamcompute.py
index 682235d..c3af659 100644
--- a/_grains/dreamcompute.py
+++ b/_grains/dreamcompute.py
@@ -25,6 +25,7 @@ def _call_aws(url):

     """
     conn = httplib.HTTPConnection("169.254.169.254", 80, timeout=1)
+    LOG.info('API call to ' + url )
     conn.request('GET', url)
     return conn.getresponse()

When you run saltutil.sync_all (i.e. refresh grains and other data), the log will tell you which endpoints it queried.

In my case they were:

[INFO    ] API call to /openstack/2012-08-10/meta_data.json
[INFO    ] API call to /latest/meta-data/
[INFO    ] API call to /latest/meta-data/block-device-mapping/
[INFO    ] API call to /latest/meta-data/block-device-mapping/ami
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs0
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs1
[INFO    ] API call to /latest/meta-data/block-device-mapping/root
[INFO    ] API call to /latest/meta-data/hostname
[INFO    ] API call to /latest/meta-data/instance-action
[INFO    ] API call to /latest/meta-data/instance-id
[INFO    ] API call to /latest/meta-data/instance-type
[INFO    ] API call to /latest/meta-data/local-ipv4
[INFO    ] API call to /latest/meta-data/placement/
[INFO    ] API call to /latest/meta-data/placement/availability-zone
[INFO    ] API call to /latest/meta-data/public-ipv4
[INFO    ] API call to /latest/meta-data/ramdisk-id
[INFO    ] API call to /latest/meta-data/reservation-id
[INFO    ] API call to /latest/meta-data/security-groups
[INFO    ] API call to /openstack/2012-08-10/meta_data.json
[INFO    ] API call to /latest/meta-data/
[INFO    ] API call to /latest/meta-data/block-device-mapping/
[INFO    ] API call to /latest/meta-data/block-device-mapping/ami
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs0
[INFO    ] API call to /latest/meta-data/block-device-mapping/ebs1
[INFO    ] API call to /latest/meta-data/block-device-mapping/root
[INFO    ] API call to /latest/meta-data/hostname
[INFO    ] API call to /latest/meta-data/instance-action
[INFO    ] API call to /latest/meta-data/instance-id
[INFO    ] API call to /latest/meta-data/instance-type
[INFO    ] API call to /latest/meta-data/local-ipv4
[INFO    ] API call to /latest/meta-data/placement/
[INFO    ] API call to /latest/meta-data/placement/availability-zone
[INFO    ] API call to /latest/meta-data/public-ipv4
[INFO    ] API call to /latest/meta-data/ramdisk-id
[INFO    ] API call to /latest/meta-data/reservation-id
[INFO    ] API call to /latest/meta-data/security-groups

It's quite heavy.

Hopefully the script respects HTTP caching headers and doesn't bypass 304 Not Modified responses; otherwise it'll add load to nova. Maybe I should check that (note to self).

Install

You can add this feature by adding a file to the _grains/ folder of your salt states repository. The file can have any name ending in .py.

You can grab the grain python code in this gist.
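
If it helps, here is roughly how the installation goes, assuming the default /srv/salt file_roots and that you saved the script as dreamcompute.py (the name used in the diff above):

cp dreamcompute.py /srv/salt/_grains/dreamcompute.py

# push the custom grain module to every minion and reload their grains
salt '*' saltutil.sync_grains

# verify
salt '*' grains.get dreamcompute:availability_zone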

enjoy!

Creating a new Ubuntu Salt master from the terminal using Cloud-Init

If you run Virtual Machines on a provider that runs OpenStack, you can also leverage a component that's made to automatically install software at creation time. With this, you can build any new node in your cluster, including the salt master, in a few terminal commands.

Before starting out, you need to make sure your cloud provider runs OpenStack and has Cloud-Init enabled. To check, look for a tab titled "Post-Creation" in the "Launch instance" dialog you use to create a new VM; if it's there, it might just simply work.

Cloud-Init basically reads a manifest that declares the specifics of the new VM, and it is part of the process that converts the initial OpenStack image into the specific instance you will be using. You can follow these two articles, which describe well how Cloud-Init works.

[Cloud-Init] is made in a way that it handles distribution-specific package installation details automatically.

The following is specific to an Ubuntu server VM; you might need to adjust the package names to match your server's distribution, as those tools are getting more and more popular in the industry.

Before testing it out on a new VM, you could also ask an existing instance, through an HTTP request made with cURL, what its post-creation script was.

Note that the IP address you see below is a virtual interface provided by OpenStack, but it can be navigated through HTTP; try it out like this;

curl http://169.254.169.254/openstack/
2012-08-10
2013-04-04
2013-10-17

If you see a similar output, you can ask what post-creation configuration ("userdata") it used at creation time. You can dig the tree; here's how you can find it in an OpenStack (CURRENT VERSION NICKNAME) cluster.

For instance, my salt master would have the following configuration;

curl http://169.254.169.254/openstack/2013-10-17/user_data

#cloud-config
manage_etc_hosts: false
manage-resolv-conf: false
locale: en_US.UTF-8
timezone: America/New_York
package_upgrade: true
package_update: true
package_reboot_if_required: true

ssh_import_id: [saltstack]
apt_sources:
  - source: "ppa:saltstack/salt"

packages:
  - salt-minion
  - salt-common
  - salt-master
  - python-software-properties
  - software-properties-common
  - python-novaclient

To boot an instance from the terminal, you can use the “nova” command like this;

nova boot --image Ubuntu-14.04-Trusty --user-data /srv/cloudconfig.yml --key_name keyname --flavor subsonic --security-groups default salt

This assumes that you have the following available in your OpenStack dashboard:

  1. An SSH public key called “keyname” in your tenant
  2. A flavor called “subsonic” that has a predefined configuration of vCPU, vRAM, etc.
  3. A security group called "default"; you could use more than one by separating them with commas, e.g. default,foo,bar
  4. A text file in /srv/cloudconfig.yml in YAML format that holds your Cloud-Init (a.k.a. “userdata”) configuration.
  5. You have your nova configuration available (look in your cloud provider dashboard for the "Download OpenStack RC File" link under "Access & Security" and "API Access") and loaded from your server's /etc/profile.d/ folder; see the sketch after this list
  6. You have “python-novaclient” (or its equivalent) installed
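
For point 5, a minimal sketch of loading the credentials and double-checking the names used in the boot command; the RC file name is hypothetical, use whatever you saved yours as:

source /etc/profile.d/openstack-rc.sh

# sanity-check the credentials and the names the boot command relies on
nova image-list      # should list Ubuntu-14.04-Trusty
nova flavor-list     # should list the "subsonic" flavor
nova keypair-list    # should list "keyname"
nova secgroup-list   # should list "default"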

To test it out yourself, you could use the block I gave earlier, create a file in /srv/cloudconfig.yml, and give the nova command a try.

In this case, you might want to call the new VM "salt", as the default Salt Stack minion configuration will try to reach a master named "salt"; in this case, it'll be itself.
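
Once the instance is up, a quick way to confirm Cloud-Init did its job is to look at its log on the new VM; the paths below are the Ubuntu defaults:

tail -n 30 /var/log/cloud-init-output.log

# the packages listed in the cloud-config should now be installed
dpkg -l salt-master salt-minion | grep ^ii
service salt-master status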

The creation of the salt master could also include a few git repositories to be cloned at creation time, making your salt master as easily replaceable as any other component in your "cloud".

A set of sample scripts I use to create a new salt master off of a few git repositories can be found in the following Gist

Read more

The following articles were found to describe in more detail what I introduced in this article.

Create a MariaDB cluster with replication over SSL with Salt Stack

While reworking WebPlatform infrastructure I had to rebuild a new database cluster.

The objective of the cluster is to have more than one database server so that our web applications can make reads on any node in the cluster.

While the system has replication and I can send reads to any node in the cluster, there is a flaw in it too: any node can also accept writes; nothing is blocking that.

My plan is to change this so that it would be OK to send writes to any node in the cluster. There is now something called "Galera" that would allow me to do that, but that's outside the scope of this article.

In the current configuration, I'm purposefully not fixing it because my configuration management makes sure only the current master gets the writes. So in this setup, I decided that the VM that gets the writes has a specific mention of "masterdb" in its hostname.

That way, it's easy to see, and it gives me the ability to change masters at any time if an emergency requires me to.

Changing MariaDB replication master

Changing master could be done in the following order (a rough sketch of the corresponding statements follows this list):

  • Lock writes on the masterdb databases
  • Wait for replication to catch up
  • On the secondary database servers, remove the replication configuration
  • Tell all web apps to use the new database master
  • Remove the database lock
  • Set up the new replication configuration to use the new master
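
As an outline only, here is roughly what those steps translate to at the MariaDB prompt. This is a hedged sketch: the user and host names assume the replication setup described later in this article, and each block has to be run on the server named in the comment.

-- On the current master: block writes (keep this session open)
FLUSH TABLES WITH READ LOCK;

-- On each secondary: wait until Seconds_Behind_Master reaches 0
SHOW SLAVE STATUS\G

-- On the secondary being promoted: drop its replication configuration
-- and note its own File/Position for the other nodes
STOP SLAVE;
RESET SLAVE ALL;
SHOW MASTER STATUS;

-- Point the web apps to the new master (configuration management), then,
-- back on the old master: release the lock
UNLOCK TABLES;
-- and finally make the old master follow the new one with the same kind of
-- CHANGE MASTER TO statement shown later in this article, followed by START SLAVE;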

Thanks to the fact that I manage everything through configuration management (including the web app configuration files), it's only a matter of applying the states everywhere in the cluster. That makes it fairly easy to do such a heavy move, even under stress.

This post will be updated once I have completed the multi writes setup.

Procedure

Assumptions

The rest of this article will assume the following:

  1. You are running VMs on OpenStack, and do have credentials to make API calls to it
  2. You have a Salt master already running
  3. Your salt master has at least python-novaclient (nova commands) available on it
  4. You have your OpenStack credentials already loaded in your salt master's /etc/profile.d/ so you can use nova directly

From the salt-master, initiate a few VMs to use for your database cluster

  1. Before booting, ensure you have the following details in your OpenStack cluster and salt master;

    • You have an SSH key in your OpenStack cluster. Mine is called "renoirb-production" and my salt master user has the private key preinstalled
    • You have a userdata.txt file that has settings that points to your salt master

      cat /srv/opsconfigs/userdata.txt
      
      #cloud-config
      
      manage_etc_hosts: false # Has to be set to false for everybody. Otherwise we need a DNS
      manage-resolv-conf: false
      locale: en_US.UTF-8
      timezone: America/New_York
      package_upgrade: true
      package_update: true
      package_reboot_if_required: true
      
      #
      # This is run at EVERY boot, good to ensure things are at the right place
      # IMPORTANT, make sure that `10.10.10.22` is a valid local DNS server.
      bootcmd:
        - grep -q -e 'nameserver' /etc/resolvconf/resolv.conf.d/head || printf "nameserver 10.10.10.22\n" >> /etc/resolvconf/resolv.conf.d/head
        - grep -q -e 'wpdn' /etc/resolvconf/resolv.conf.d/base || printf "search production.wpdn\ndomain production.wpdn\nnameserver 8.8.8.8" > /etc/resolvconf/resolv.conf.d/base
        - grep -q -e 'wpdn' /etc/resolv.conf || resolvconf -u
      
      runcmd:
        - sed -i "s/127.0.0.1 localhost/127.0.1.1 $(hostname).production.wpdn $(hostname)\n127.0.0.1 localhost/" /etc/hosts
        - apt-get install software-properties-common python-software-properties
        - add-apt-repository -y ppa:saltstack/salt
        - apt-get update
        - apt-get -y upgrade
        - apt-get -y autoremove
      
      packages:
        - salt-minion
        - salt-common
      
      # vim: et ts=2 sw=2 ft=yaml syntax=yaml
      
  2. Create two db-type VMs

    nova boot --image Ubuntu-14.04-Trusty --user-data /srv/opsconfigs/userdata.txt --key_name renoirb-production --flavor lightspeed --security-groups default db1-masterdb
    nova boot --image Ubuntu-14.04-Trusty --user-data /srv/opsconfigs/userdata.txt --key_name renoirb-production --flavor supersonic --security-groups default db2
    
  3. Accept their keys on the salt master

    salt-key -y -a db1-masterdb
    salt-key -y -a db2
    

    As an aside: imagine you want to run dependencies automatically once a VM is part of your salt master, for example adding its private IP address to a local Redis or Etcd live configuration object. One could create a Salt "Reactor" and make sure the data is refreshed. This gist is a good starting point

  4. Wait for the VM builds to finish and get their private IP addresses

    nova list | grep db
    | ... | db1-masterdb | ACTIVE  | Running     | private-network=10.10.10.73 |
    | ... | db2          | ACTIVE  | Running     | private-network=10.10.10.74 |
    
  5. Add them to the pillars.
    Note that the "masterdb" part of the name is what the Salt states use to know which node gets the writes.
    Note that, in the end, the web app configs will use the private IP address;
    it's quicker to generate pages if the backend doesn't need to do a name resolution each time it makes database queries.
    This is why we have to reflect it in the pillars. Ensure the following structure exists in the file.

    # Edit /srv/pillar/infra/init.sls at the following blocks
    infra:
      hosts_entries:
        masterdb: 10.10.10.73
    
  6. Refer to the right IP address in the configuration file with a similar salt pillar.get reference in the states.

      /srv/webplatform/blog/wp-config.php:
        file.managed:
          - source: salt://code/files/blog/wp-config.php.jinja
          - template: jinja
          - user: www-data
          - group: www-data
          - context:
              db_creds: {{ salt['pillar.get']('accounts:wiki:db') }}
              masterdb_ip: {{ salt['pillar.get']('infra:hosts_entries:masterdb') }}
          - require:
            - cmd: rsync-blog
    

    … and the wp-config.php.jinja

    <?php
    
    ## Some PHP configuration file that salt will serve on top of a deployed web application
    
    ## Managed by Salt Stack, please DO NOT TOUCH, or ALL CHANGES WILL be LOST!
    ## source {{ source }}
    
    define('DB_CHARSET',  "utf8");
    define('DB_COLLATE',  "");
    define('DB_HOST',     "{{ masterdb_ip|default('127.0.0.1')    }}");
    define('DB_NAME',     "{{ db_creds.database|default('wpblog') }}");
    define('DB_USER',     "{{ db_creds.username|default('root')   }}");
    define('DB_PASSWORD', "{{ db_creds.password|default('')       }}");
    
  7. Refresh the pillars, re-run state.highstate on the salt master, and test it out.

    salt-call saltutil.sync_all
    salt salt state.highstate
    
    salt-call pillar.get infra:hosts_entries:masterdb
    > local:
    >     10.10.10.73
    
  8. Make sure the VMs have the same version of salt as you do

    salt-call test.version
    > local:
    >     2014.7.0
    
    salt db\* test.version
    > db2:
    >     2014.7.0
    > db1-masterdb:
    >     2014.7.0
    
  9. Kick off the VMs' installation

    salt db\* state.highstate
    
  10. Highstate takes a while to run, but once it's done, you should be able to work with the VMs for the remainder of this tutorial

    salt -G 'roles:db' mysql.version
    > db2:
    >     10.1.2-MariaDB-1~trusty-wsrep-log
    > db1-masterdb:
    >     10.1.2-MariaDB-1~trusty-wsrep-log
    

    Each db-type VM's MySQL/MariaDB/Percona server will have different database maintenance users defined in /etc/mysql/debian.cnf.

    Make sure you don’t overwrite them unless you import everything all at once, including the users and their grants.

  11. Check that each db VM has its SSL certificates generated by Salt

    salt -G 'roles:db' cmd.run 'ls /etc/mysql | grep pem'
    > db2:
    >     ca-cert.pem
    >     ca-key.pem
    >     client-cert.pem
    >     client-key.pem
    >     client-req.pem
    >     server-cert.pem
    >     server-key.pem
    >     server-req.pem
    > db1-masterdb:
    >     ca-cert.pem
    >     ca-key.pem
    >     client-cert.pem
    >     client-key.pem
    >     client-req.pem
    >     server-cert.pem
    >     server-key.pem
    >     server-req.pem
    

    Each file is a certificate or key they can use to do replication over SSL.

Now on each database server;

  1. Connect to both db nodes using the salt master as a jump host

    ssh masterdb.production.wpdn
    ssh db2.production.wpdn
    
  2. Get to the MySQL/MariaDB/Percona prompt on each VM.

    If you are used to terminal multiplexers that keep sessions running
    even if you get disconnected, this would be a good time to use one; we never know when a connection might hang.

    On WebPlatform systems we do have screen, but tmux can do the job too.

    mysql
    
  3. Check if SSL is enabled on both MySQL/MariaDB/Percona servers

    > MariaDB [(none)]> SHOW VARIABLES like '%ssl%';
    > +---------------+----------------------------+
    > | Variable_name | Value                      |
    > +---------------+----------------------------+
    > | have_openssl  | YES                        |
    > | have_ssl      | YES                        |
    > | ssl_ca        | /etc/mysql/ca-cert.pem     |
    > | ssl_capath    |                            |
    > | ssl_cert      | /etc/mysql/server-cert.pem |
    > | ssl_cipher    | DHE-RSA-AES256-SHA         |
    > | ssl_crl       |                            |
    > | ssl_crlpath   |                            |
    > | ssl_key       | /etc/mysql/server-key.pem  |
    > +---------------+----------------------------+
    
  4. Generate SSL certificates for the MySQL/MariaDB/Percona server; see this gist on how to do it, or the minimal sketch below.
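
    If you just want the outline, the standard procedure from the MySQL documentation looks roughly like this (a sketch only; the gist may differ, and the file names simply match what Salt drops in /etc/mysql):

    # certificate authority
    openssl genrsa 2048 > ca-key.pem
    openssl req -new -x509 -nodes -days 3650 -key ca-key.pem -out ca-cert.pem

    # server certificate (give it a Common Name different from the CA's)
    openssl req -newkey rsa:2048 -days 3650 -nodes -keyout server-key.pem -out server-req.pem
    openssl rsa -in server-key.pem -out server-key.pem
    openssl x509 -req -in server-req.pem -days 3650 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem

    # client certificate (again, a distinct Common Name)
    openssl req -newkey rsa:2048 -days 3650 -nodes -keyout client-key.pem -out client-req.pem
    openssl rsa -in client-key.pem -out client-key.pem
    openssl x509 -req -in client-req.pem -days 3650 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem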

  5. Places to double-check: to see which config keys set what's shown in the previous screen, take a look in the VMs' /etc/mysql/conf.d/ folders for entries similar to these.

    • bind-address is what allows the servers to communicate with each other; before MySQL 5.5 we had skip-networking, but now a bind-address is sufficient. Make sure that your security groups allow only local network connections, though.
    • server_id MUST be a different number for each node. Make sure no two servers share the same number.

      [mysqld]
      bind-address = 0.0.0.0
      log-basename=mariadbrepl
      log-bin
      binlog-format=row
      server_id=1
      ssl
      ssl-cipher=DHE-RSA-AES256-SHA
      ssl-ca=/etc/mysql/ca-cert.pem
      ssl-cert=/etc/mysql/server-cert.pem
      ssl-key=/etc/mysql/server-key.pem
      
      [client]
      ssl
      ssl-cert=/etc/mysql/client-cert.pem
      ssl-key=/etc/mysql/client-key.pem
      
  6. From the database master (a.k.a. "masterdb"), get the replication log position;
    we'll need the File and Position values to set up the replication node.

    MariaDB [(none)]> show master status;
    > +------------------------+----------+--------------+------------------+
    > | File                   | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    > +------------------------+----------+--------------+------------------+
    > | mariadbrepl-bin.000002 |      644 |              |                  |
    > +------------------------+----------+--------------+------------------+
    
  7. Configure the masterdb to accept replication users. From the salt master

     salt -G 'roles:masterdb' mysql.user_create replication_user '%' foobarbaz
    

    NOTE: My salt states script creates a grain in /etc/salt/grains with the following data;

    roles:
      - masterdb
    

    Alternatively, you could call the VM db1-masterdb and use a small python script that parses the information for you and creates the grain automatically.

  8. Back on the masterdb VM, check that the user exists and ensure SSL is required

    MariaDB [(none)]> show grants for 'replication_user';
    > +-----------------------------------------------------------------------------------------+
    > | Grants for replication_user@%                                                            |
    > +-----------------------------------------------------------------------------------------+
    > | GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%' IDENTIFIED BY PASSWORD '...'    |
    > +-----------------------------------------------------------------------------------------+
    
    MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%.local.wpdn' REQUIRE SSL;
    MariaDB [(none)]> GRANT USAGE ON *.* TO 'replication_user'@'%' REQUIRE SSL;
    
    MariaDB [(none)]> SELECT User,Host,Repl_slave_priv,Repl_client_priv,ssl_type,ssl_cipher from mysql.user where User = 'replication_user';
    > +------------------+--------------+-----------------+------------------+----------+
    > | User             | Host         | Repl_slave_priv | Repl_client_priv | ssl_type |
    > +------------------+--------------+-----------------+------------------+----------+
    > | replication_user | %.local.wpdn | Y               | N                | ANY      |
    > +------------------+--------------+-----------------+------------------+----------+
    
  9. On the secondary db VM, at the mysql prompt, set up the initial CHANGE MASTER statement;

    CHANGE MASTER TO
      MASTER_HOST='masterdb.local.wpdn',
      MASTER_USER='replication_user',
      MASTER_PASSWORD='foobarbaz',
      MASTER_PORT=3306,
      MASTER_LOG_FILE='mariadbrepl-bin.000002',
      MASTER_LOG_POS=644,
      MASTER_CONNECT_RETRY=10,
      MASTER_SSL=1,
      MASTER_SSL_CA='/etc/mysql/ca-cert.pem',
      MASTER_SSL_CERT='/etc/mysql/client-cert.pem',
      MASTER_SSL_KEY='/etc/mysql/client-key.pem';
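
    Then start replication; CHANGE MASTER TO alone does not start the slave threads:

    START SLAVE;
    SHOW SLAVE STATUS\G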
    

Checking if it worked

From one of the secondary servers, look for success indicators:

  • Seconds_Behind_Master says 0,
  • Slave_IO_State says Waiting for master to send event

    ```
    MariaDB [wpstats]> show slave status\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: masterdb.local.wpdn
                      Master_User: replication_user
                      Master_Port: 3306
                    Connect_Retry: 10
                  Master_Log_File: mariadbrepl-bin.000066
              Read_Master_Log_Pos: 19382112
                   Relay_Log_File: mariadbrepl-relay-bin.000203
                    Relay_Log_Pos: 19382405
            Relay_Master_Log_File: mariadbrepl-bin.000066
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 19382112
                  Relay_Log_Space: 19382757
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: Yes
               Master_SSL_CA_File: /etc/mysql/ca-cert.pem
               Master_SSL_CA_Path:
                  Master_SSL_Cert: /etc/mysql/client-cert.pem
                Master_SSL_Cipher:
                   Master_SSL_Key: /etc/mysql/client-key.pem
            Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error:
                   Last_SQL_Errno: 0
                   Last_SQL_Error:
      Replicate_Ignore_Server_Ids:
                 Master_Server_Id: 1
                   Master_SSL_Crl: /etc/mysql/ca-cert.pem
               Master_SSL_Crlpath:
                       Using_Gtid: No
                      Gtid_IO_Pos:
          Replicate_Do_Domain_Ids:
      Replicate_Ignore_Domain_Ids:
    1 row in set (0.00 sec)
    ```
    

Managing users

In the end, since replication is active, you can add users to your system and all nodes will get the privileges.

The way I work is that I can either use Salt Stack states to add privileges in my states (more details soon),
or use a few salt commands from my salt master and send them to the masterdb database VM.

salt -G 'roles:masterdb' mysql.db_create 'accounts_oauth' 'utf8' 'utf8_general_ci'
salt -G 'roles:masterdb' mysql.user_create 'accounts' '%' 'barfoo'
salt -G 'roles:masterdb' mysql.grant_add 'ALL PRIVILEGES' 'accounts_oauth.*' 'accounts' '%'
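
To confirm replication really carried the new user and its grants over, a quick check on the secondary; the first form assumes the salt mysql module is configured on db2 the same way it is used against masterdb above:

salt db2 mysql.user_exists accounts '%'

# or, directly on db2
mysql -e "SHOW GRANTS FOR 'accounts'@'%'"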

References

  • http://dev.mysql.com/doc/refman/5.7/en/replication-solutions-ssl.html
  • https://mariadb.com/kb/en/mariadb/documentation/managing-mariadb/replication/standard-replication/setting-up-replication/

A few code snippets to automate deployment

Automate ALL THE THINGS!

This post is just a simple "link dump" so I can find my way back through several scattered notes. I eventually plan to publish the whole of my work as public projects on GitHub once the loop is closed. All without the private data, obviously.

Making the jump to automation requires a lot of preparation, and I'm taking the time to publish here a few bits of code I wrote to complete the task.

In the end, my project will deploy a site that relies on a MariaDB cluster, Memcached, a LAMP ("prefork") stack where there is no other choice, and a stack of [HHVM/php5-fpm, Python, nodejs] app servers for the rest, served by an NGINX frontend. My scripts will deploy a series of web applications, each with the adaptations and dependencies it needs managed in its own parent "git repo". In my case that will be: WordPress, MediaWiki, Discourse, and a few others.

Requirements

  • Instantiation from nova commands in the terminal; creates a new, up-to-date VM whose name defines its role in the internal network
  • The VMs are only accessible through a jump box (i.e. internal network only)
  • A system watches whether a git clone directory has had changes on the "master" branch, and fires an event if so
  • Each machine is built from a minimal VM image; in this case, Ubuntu 14.04 LTS
  • The system must make sure ALL updates are applied regularly
  • The system must make sure its internal services are functional
  • If a VM reaches the critical OOM threshold, the VM reboots automatically
  • The VM's name describes its role, and the installation scripts install the requirements assigned to it
  • The configurations use the details (e.g. private and public IP addresses) of each pool (e.g. redis, memcache, mariadb) and automatically adjust the configuration files in each application
  • … etc.

Code snippets

Inspiring posts on the subject

Process for creating a VM that is part of a fleet managed by Salt Stack

At my current job I manage a fleet of virtual machines (VMs) that is automated by a configuration management system called Salt Stack.

Salt Stack is a tool that lets you describe what the desired state of a server is. Like other tools with a similar purpose, Salt Stack can be used in heterogeneous environments running GNU/Linux, Mac OS, Windows, FreeBSD, etc.

The environment we use is a server with "blades". Each blade provides the services that make up an OpenStack cluster. In the future we may have more than one OpenStack provider. To automate things the way we like, we make heavy use of the command line with the python-novaclient package.

Each virtual machine runs an LTS ("Long Term Support") version of Ubuntu.

Absolutely all configuration is applied via Salt Stack; the only thing currently done manually is creating
the new instance and adding it to the Salt Stack master.

Even that is likely to change once we have deployed Salt Cloud.

Procedure

Update, March 2015: a new article on the same subject has been written (in English) and shows how to build a new VM with even fewer steps

  1. Boot a new node with Nova

    joe@master:~$ nova boot --image lucid-minion-template --flavor wpdn.large --key-name renoirb app6
    
  2. Give it a name based on the type of server to deploy, with a number at the end. Example: app6

    NOTE: In my case I have, among others: app, db, memcached, etc.

  3. Add the floating IP address in /srv/pillar/nodes/init.sls like the others

    nodes:
      master:
        public:  ####PUBLIC IP HIDDEN####
        private: 10.0.0.1
    
      app1:
        public:  ####PUBLIC IP HIDDEN####
        private: 10.0.0.7
    
      memcache2:
        public:  ####PUBLIC IP HIDDEN####
        private: 10.0.0.4
    
      app5:
        public:  ####PUBLIC IP HIDDEN####
        private: 10.0.0.3
    
  4. Take the /home/ubuntu/runme file from any other server and paste it into the new machine. Then execute it (sudo /bin/bash runme)

  5. On the new server, add a line in /etc/salt/minion.d/master.conf

    id: app6
    

    … See the other nodes

  6. Restart salt-minion

    ubuntu@node6:~$ sudo service salt-minion restart
    
  7. Add the key to the master

    joe@master:~$ sudo salt-key -a app6
    

    … The '-a foo' is optional, and you can list the nodes.

  8. Run state.highstate

    joe@master:~$ sudo salt app6 state.highstate
    
  9. Upload the code via rsync to the new app node, then re-run state.highstate (some scripts take for granted that the code is already deployed)

    joe@master:~$ sudo salt app6 state.sls code.node_app
    joe@master:~$ sudo salt app6 state.highstate
    

    As I was saying, sometimes the first state.highstate doesn't work because the code isn't deployed yet.

  10. Refresh the authorizations for storage

    joe@master:~$ sudo salt 'storage*' state.highstate
    joe@master:~$ sudo salt 'monitor*' state.highstate
    
  11. Update the hosts file of a few nodes

    joe@master:~$ sudo salt 'app*' state.sls hosts
    joe@master:~$ sudo salt 'db*' state.sls hosts
    joe@master:~$ sudo salt 'memcache*' state.sls hosts
    

Project idea: Creating a home made OpenStack cluster for development purposes

Think about it: how about using spare computers to create a homemade OpenStack cluster for development?

We can do that from our cloud provider, or create a separate project or even use Wikimedia’s OpenStack infrastructure allowance for the project.

With such a setup, one could work locally with his Salt Stack (or Puppet, or Ansible) deployment schemes, try them, trash VMs, and rebuild.

The beauty of it is that it could be done in a fashion that would not even modify the computer running the VMs. The cluster member running the OpenStack hypervisor would be seeded through net boot; simply not booting from the network would revert the computer back as if it had never been used.

Here is what I think would be required to make this happen.

Limitations

  • Not use Computer/Laptop local hard drive
  • Rely only on net boot

Material

  • 1..n Computers/laptop supporting netboot
  • 1 Storage device supporting one or more storage protocol (nfs, samba, sshfs)

Hardware requirements

  • 1 VM providing tftp, dhcp, and dns to serve as a net boot server, which should run outside of the cluster ("Networking node")
  • 1 VM image of OpenStack controller (“OpS controller”)
  • 1 LiveCD + persistent image with OpenStack preinstalled, configured to use the storage device credentials for its root filesystem ("OpS Hypervisor")

Distribution choice factors

  • The Networking node can be the smallest Linux possible, on a Raspberry Pi, or a modified router or network-attached storage device?
  • The OpS Hypervisor has to be among the supported OpenStack distributions (I think an Ubuntu Precise 12.04 LTS, or a variant such as Puppy Linux, might work too)

To be continued…

I will keep you posted whenever possible on the outcome of my research.

Did you ever do this in your infra? Leave a comment.

How to create a patch and ensure it is applied within Salt Stack

A quick tutorial on how to create a patch and ensure it is applied using Salt Stack.

Procedure

Creating a patch

  1. Create a copy of original file

    cp file file.orig

  2. Modify file, and test

  3. Create an md5 sum of the modified file for later use

    cat file | md5sum

  4. Revert modification, then prepare patch

    mv file file.mod
    cp file.orig file

  5. Create diff

    diff -Naur file file.mod > file.patch
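
    To double-check the patch before wiring it into Salt, you can apply it to a scratch copy and compare with the checksum noted at step 3 (the /tmp path is only an example):

    cp file.orig /tmp/file.check
    patch /tmp/file.check < file.patch
    md5sum /tmp/file.check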
    

Create a Salt Stack manifest block in the appropriate state file

Add a block similar to this as a patch state definition. Make sure it is referenced at least in your top.sls.

    /usr/share/ganglia-webfrontend/auth.php:
      file.patch:
        - source: salt://monitor/auth.php.patch
        - hash: md5=480ef2ae17fdfee85585ab887fa1ae5f

References