Pablo Iranzo Gómez's blog

oct 26, 2017

i18n and 'bash8' in bash

Introduction

To improve code quality in Citellus and Magui, we implemented some unit testing.

The tests were written in Python and, with some changes, it was also possible to validate the actual tests.

We also prepared the strings in Python using the gettext library, so the messages can be translated to the language of choice (defaults to en, but can be changed via the --lang modifier of citellus).

Bashate for bash code validation

One of the things I missed was having some kind of 'pep8'-like tool for bash to validate format and locate some errors. After some research I came across bashate, and as it is written in Python, it was very easy to integrate:

  • Update test-requirements.txt to request bashate for 'tests'
  • Edit tox.ini to add a new section:

    ~~~ini
    [testenv:bashate]
    commands = bash -c 'find citellus -name "*.sh" -type f -print0 | xargs -0 bashate -i E006'
    ~~~

With this change, running tox also pulls in the output of bashate, so all the integration already done for CI was automatically updated to check bash formatting too :-)
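Once defined, the new environment can also be run on its own; a quick usage example:

tox -e bashate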

Bash i18n

Another interesting topic is the ability to easily write code in one language and, via poedit or equivalent editors, localize it.

In Python it is more or less easy, as we did for the citellus code, but I wasn't aware of any way of doing that for bash scripts (such as the plugins we use for citellus).

A simple man bash gives some somewhat hidden hints:

--dump-po-strings
    Equivalent to -D, but the output is in the GNU gettext po (portable object) file format.

So bash has a way to dump 'po' strings (to be edited with poedit or your editor of choice); only a bit more searching was required to find out how to actually do it.

Apparently it is a lot easier than I expected, as long as we take some considerations into account:

  • LANG shouldn't be C as it disables i18n
  • Environment variable TEXTDOMAIN should indicate the filename containing the translated strings.
  • Environment variable TEXTDOMAINDIR should contain the path to the root of the folder containing the translations, for example:
    • TEXTDOMAINDIR=citellus/locale
    • And the language file for en as:
      • citellus/locale/en/LC_MESSAGES/$TEXTDOMAIN.mo
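For example, in the citellus case the exports could look like this (a minimal sketch; the actual values depend on where the translation files live):

# tell bash/gettext which catalog to use and where to find it
export TEXTDOMAIN=citellus
export TEXTDOMAINDIR=${PWD}/citellus/locale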

Now, the "trickier" part was to prepare scripts...

# Legacy way
echo "String"
# i18n way
echo $"String"
# Difficult... isn't it?

This change makes 'bash' look for the string inside $TEXTDOMAINDIR/$LANG/LC_MESSAGES/$TEXTDOMAIN.mo and replace it on the fly with the translated one (or fall back to the string echoed).

In citellus we implemented it by exporting the extra variables defined above, so the scripts, as well as the framework, are ready for translation!

Just in case, a remark: I found some complaints when the same script outputs the same string in several places; what I did is create a variable and echo that variable.

  • As we have strings in citellus.py, magui.py, etc. as well as in the bash files, I updated a script to extract the required strings:
# Extract python strings
python setup.py extract_messages -F babel.cfg -k _L
# Extract bash strings
find citellus -name "*.sh" -exec bash --dump-po-strings "{}" \; > citellus/locale/citellus-plugins.pot
# Merge bash and python strings
msgcat -F citellus/locale/citellus.pot citellus/locale/citellus-plugins.pot > citellus/locale/citellus-new.pot
# Move file to destination
cat citellus/locale/citellus-new.pot > citellus/locale/citellus.pot
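Once a .po file has been translated for a given language, it still needs to be compiled into the binary .mo catalog that bash reads at runtime; a minimal sketch assuming the layout described above:

# compile the translated catalog into the .mo format gettext reads
msgfmt citellus/locale/en/LC_MESSAGES/citellus.po -o citellus/locale/en/LC_MESSAGES/citellus.mo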

In this way, we're ready to use an editor to translate all the strings for the whole citellus + plugins.

Enjoy!


aug 17, 2017

Jenkins for running CI tests

Why?

While working on Citellus and Magui it soon became evident that unit testing to validate the changes was a requirement.

Initially, using a .travis.yml file contained in the repo and the free service provided by https://travis-ci.org, we soon had the https://github.com repo reporting whether the builds succeeded or not.

When it was decided to move to https://gerrithub.io to work in a way more similar to what is done upstream, we improved on code commenting (peer review), but we lost the ability to run the tests in an automated way until the change was merged into github.

After some research, it became more or less evident that another tool, like Jenkins, was required to automate the UT process and report the status back to individual reviews.

Setup

Some initial steps are required for integration:

  • Create an ssh keypair for jenkins to use (an example follows this list)
  • Create a github account to be used by jenkins and configure the above ssh keypair on it
  • Log into gerrithub with that account
  • Set up Jenkins and the build jobs
  • On the parent project, grant the jenkins github account permission to +1/-1 on Verify
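For the first step, something like this is enough (a sketch; the key path and comment are just examples):

# dedicated keypair for the jenkins user; empty passphrase for unattended use
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/jenkins_gerrithub -C "jenkins"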

In order to set up the Jenkins environment, a new VM was spawned on one of our RHV servers.

This VM was installed with:

  • 20 GB of HDD
  • 2 GB of RAM
  • 2 vCPU
  • Red Hat Enterprise Linux 7 'base install'

Tuning the OS

RHEL7 provides a stable environment to run on, but at the same time we were lacking some of the latest tools we're using for the builds.

As a dirty hack, the OS was altered in a way that is not recommended, but it helped to quickly check, as a proof of concept, whether it would work or not.

Once the OS was installed, some commands (do not run in production) were used:

pip install -U pip # to upgrade pip
pip install -U tox # To upgrade to 2.x version

# Install python 3.5 on the system
yum -y install openssl-devel gcc
wget https://www.python.org/ftp/python/3.5.0/Python-3.5.0.tgz
tar xvzf Python-3.5.0.tgz
cd Python*
./configure

# This will install into an alternate folder so as not to replace the system-wide python version
make altinstall

# this is required to later allow tox to find the command as 'jenkins' user
ln -s /usr/local/bin/python3.5 /usr/bin/
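A quick sanity check that both tools are now reachable (a suggested verification; versions will vary):

python3.5 --version
tox --version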

Installing Jenkins

The jenkins installation is easier: there's a 'stable' repo for RHEL and the procedure is documented:

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
yum install jenkins java
chkconfig jenkins on
service jenkins start
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --reload

This will install and start jenkins, and open the firewall so it can be accessed.

If you can reach the URL of your server on port 8080, you'll be presented with the initial procedure for installing Jenkins.

Jenkins dashboard

During it, you'll be asked for a password stored in a file on disk, and you'll be prompted to create a user that we'll use from now on to configure Jenkins.

We'll also be offered the most common set of plugins to deploy; choose that option, and later we'll add the Gerrit and Python plugins.

Configure Jenkins

Once we can log into Jenkins, we need to enter the administration area, go to install new plugins, and install Gerrit Trigger.

Manage Jenkins

The above link details how to do most of the setup; in this case, for gerrithub, we required:

  • Hostname: our hostname
  • Frontend URL: https://review.gerrithub.io
  • SSH Port: 29418
  • Username: our-github-jenkins-user
  • SSH keyfile: path_to_private_sshkey

Gerrit trigger configuration

Once done, click on Test Connection and validate that it works.

At the time of this writing, the version reported by the plugin when connected to gerrithub.io was 2.13.6-3044-g7e9c06d.

Gerrit servers

Creating a Job

Now, we need to create a Job (the first option in the Jenkins list of jobs).

  • Name: Citellus
  • Discard older executions:
    • Max number of executions to keep: 10
  • Source code Origin: Git
    • URL: ssh://@review.gerrithub.io:29418/zerodayz/citellus
    • Credentials: jenkins (Created based on the ssh keypair defined above)
    • Branches to build: $GERRIT_BRANCH
    • Advanced
      • Refspec: $GERRIT_REFSPEC
    • Add additional behaviours
      • Strategy for choosing what to build:
        • Choosing strategy Gerrit Trigger
  • Triggers for launch:
    • Change Merged
    • Comment added with regexp: .recheck.
    • Patchset created
    • Ref Updated
    • Gerrit Project:
      • Type: plain
      • Pattern: zerodayz/citellus
    • Branches:
      • Type: Path
      • Pattern: **
  • Execute:
    • Python script:
import os
import tox

os.chdir(os.getenv('WORKSPACE'))

# environment is selected by ``TOXENV`` env variable
tox.cmdline()

Jenkins Job configuration
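As the script's comment notes, tox picks the environment via the TOXENV environment variable, so the job's behavior can be mimicked from a shell like this (the env names are assumptions; check tox.ini for the real ones):

export TOXENV=py27,py35,pep8  # hypothetical env names
tox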

From this point on, any new push (review) made against gerrit will trigger a Jenkins build (in this case, running tox). Additionally, the job can be triggered manually to validate the behavior.

Manual trigger

Checking execution

In our project, tox runs some UTs on Python 2.7 and Python 3.5, as well as checking pep8 compliance.

Now, Jenkins will build and post messages on the review, stating that the build has started and reporting its results, also setting the 'Verified' flag.

Gerrithub comments by Jenkins

Enjoy having automated validation of new reviews before accepting them into your code!


jul 31, 2017

Magui for analysis of issues across several hosts.

Background

Citellus allows checking a sosreport against known problems identified in the provided tests.

This approach is easy to implement and easy to test, but it has limitations when a problem spans several hosts and only reveals itself when a general analysis is performed.

Magui tries to solve that by running the citellus analysis functions across a set of sosreports, unifying the data obtained per citellus plugin.

At the moment, Magui just does the grouping and visualization of the data; for example, give it a try with the seqno plugin of citellus to report the sequence number of the galera database:

[user@host folder]$ magui.py * -f seqno # (filtering for 'seqno' plugins).
{'/home/remote/piranzo/citellus/citellus/plugins/openstack/mysql/seqno.sh': {'ctrl0.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0},
                                                                             'ctrl1.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0},
                                                                             'ctrl2.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0}}}

Here we can see that the sequence number in the logs is the same for all hosts.

The goal, once this has been discussed and determined, is to write plugins that get the raw data from citellus and apply logic on top of it, parsing the raw data obtained by the increasing number of citellus plugins to detect issues like, for example (a rough sketch of such a check follows the list):

  • galera seqno
  • cluster status
  • ntp synchronization across nodes
  • etc
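As a rough illustration of the kind of logic such a plugin could apply (a hypothetical sketch, not Magui's actual plugin API), the galera seqno case boils down to checking that all hosts report the same value:

# hypothetical input: one "hostname:seqno" line per node in seqnos.txt
awk -F: '{ seen[$2]++ } END { n = 0; for (s in seen) n++; exit (n == 1 ? 0 : 1) }' seqnos.txt \
  && echo "seqno consistent across hosts" \
  || echo "seqno mismatch detected"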

Hope it's helpful for you! Pablo

PS: We've proposed this as a talk for the upcoming OSP Summit 2017 in Sydney, so if you want to see us there, don't forget to vote on https://www.openstack.org/summit/sydney-2017/vote-for-speakers#/19095


jul 26, 2017

Citellus: framework for detecting known issues in systems.

Background

Since I became a Technical Account Manager for Cloud, and later a Software Maintenance Engineer for OpenStack, I've been officially part of Red Hat Support.

We usually diagnose issues based on data from the affected systems: sometimes from one system, and most of the time from several at once.

It might be controller nodes for OpenStack, computes running instances, IdM, etc.

In order to make it easier to grab the required information, we rely on sosreport.

Sosreport has a set of plugins for grabbing the required information from the system, ranging from networking configuration, installed packages, running services and processes, and, for some components, even API checks, database queries, etc.

But that's all: it gathers the data and packages it in a tarball, but nothing else.

In OpenStack we've already identified common issues, so we create kbases for them, ranging from covering documentation gaps to specific use cases or configuration options.

Many times a missed (but documented) configuration setting causes headaches and can be caught with a simple check, like the TTL in ceilometer or the stonith configuration in pacemaker.

Here is where Citellus comes into play.

Citellus

The Citellus project https://github.com/zerodayz/citellus/, created by my colleague Robin, aims at creating a set of tests that can be executed against a live system or an uncompressed sosreport tarball (whether it applies to one or the other depends on the test).

The philosophy behind it is very easy:

  • There's a wrapper, citellus.py, which allows selecting the plugins to use (or a folder containing plugins), verbosity, etc., and a sosreport folder to act against.
  • The wrapper checks the plugins available (they can be anything executable from linux, so bash, python, etc. can be used).
  • It then sets up some environment variables, like the path to find the data, and proceeds to execute the plugins, recording their output.
  • The plugins, on their side:
    • Determine if the plugin should be run or skipped depending on whether it's a live system or a sosreport
    • Determine if the plugin should be skipped because a required file or package is missing
    • Provide a return code of:
      • $RC_OKAY for success
      • $RC_FAILED for failure
      • $RC_SKIPPED for skip
      • anything else (undetermined error)
    • Provide 'stderr' with relevant messages:
      • Reason to be skipped
      • Reason for failure
      • etc.
  • The wrapper then sorts the output and prints it based on settings (grouping skipped and ok by default) and detailing failures.

You can check the provided plugins on the github repo (and hopefully also collaborate by sending yours).

Our target is to keep plugins easy to write, so we can extend the plugin set as much as possible, highlighting where focus should be put first; once typical issues are ruled out, deeper analysis can follow.

Even if we've started with OpenStack plugins (that's what we do for a living), the software is open to checking whatever is there, and we've reached out to colleagues in different specialty areas for feedback or contributions to make it even more useful.

As Citellus works with sosreports, it is easy to have it installed locally and to test new checks.
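For instance, getting a local copy running takes just a couple of commands (a usage sketch; the sosreport path is a placeholder):

git clone https://github.com/zerodayz/citellus.git
cd citellus
./citellus.py /path/to/extracted/sosreport/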

Writing a new test

Leading by example is probably easier, so let's illustrate how to create a basic plugin that checks whether a system is a RHV hosted engine:

#!/bin/bash

if [ "$CITELLUS_LIVE" = "0" ];  ## Checks if we're running live or not
then
        grep -q ovirt-hosted-engine-ha $CITELLUS_ROOT/installed-rpms ## checks package
        returncode=$?  #stores return code
        if [ "x$returncode" == "x0" ];
        then
            exit $RC_OKAY
        else
            echo "ovirt-hosted-engine is not installed" >&2  # outputs info
            exit $RC_FAILED  # returns code to wrapper
        fi
else
        echo "Not running on Live system" >&2
        exit $RC_SKIPPED
fi

The above example is a bit 'hacky': we count on the wrapper not printing output when the return code is $RC_OKAY, so strictly it should have another conditional deciding whether to write output or not.

How to debug?

The easiest way to do trial and error is to create a new folder for your plugins and use something like this:

[user@host mytests]$ ~/citellus/citellus.py /cases/01884438/sosreport-20170724-175510/ycrta02.rd1.rf1 ~/mytests/  [-d debug]


DEBUG:__main__:Additional parameters: ['/cases/sosreport-20170724-175510/hostname', '/home/remote/piranzo/mytests/']
DEBUG:__main__:Found plugins: ['/home/remote/piranzo/mytests/ovirt-engine.sh']
_________ .__  __         .__  .__                
\_   ___ \|__|/  |_  ____ |  | |  |  __ __  ______
/    \  \/|  \   __\/ __ \|  | |  | |  |  \/  ___/
\     \___|  ||  | \  ___/|  |_|  |_|  |  /\___ \
 \______  /__||__|  \___  >____/____/____//____  >
        \/              \/                     \/
found #1 tests at /home/remote/piranzo/mytests/
mode: fs snapshot /cases/sosreport-20170724-175510/hostname
DEBUG:__main__:Running plugin: /home/remote/piranzo/mytests/ovirt-engine.sh
# /home/remote/piranzo/mytests/ovirt-engine.sh: failed
    “ovirt-hosted-engine is not installed “

DEBUG:__main__:Plugin: /home/remote/piranzo/mytests/ovirt-engine.sh, output: {'text': u'\x1b[31mfailed\x1b[0m', 'rc': 1, 'err': '\xe2\x80\x9covirt-hosted-engine is not installed \xe2\x80\x9c\n', 'out': ''}

That debug information comes from the python wrapper. If you need more detail inside your test, you can try set -x to have bash show more information about its progress.

Always keep in mind that all the functionality is based on return codes and stderr messages, to keep it simple.

Hope it's helpful for you! Pablo

PS: We've proposed this as a talk for the upcoming OSP Summit 2017 in Sydney, so if you want to see us there, don't forget to vote on https://www.openstack.org/summit/sydney-2017/vote-for-speakers#/19095


feb 23, 2017

InfraRed for deploying OpenStack

InfraRed is a tool that allows installing/provisioning OpenStack. You can find the documentation for the project at http://infrared.readthedocs.io.

Also, developers and users are online on FreeNode in the #infrared channel.

Why InfraRed?

Deploying OSP with OSP-d (TripleO) requires several setup steps for preparation, deployment, etc. InfraRed simplifies them by automating most of those steps and their configuration with ansible.

  • It allows deploying several OSP versions
  • It eases connecting to the installed VM roles (Ceph, Computes, Controllers, Undercloud)
  • It allows defining working environments, so one InfraRed-running host can be used to manage different environments
  • and much more...

Setup of InfraRed-running host

Setting up InfraRed is quite easy; at the moment, version 2 (a branch on github) is working pretty well.

We'll start with:

  • Clone the GIT repo: git clone https://github.com/redhat-openstack/infrared.git
  • Create a virtualenv so we can proceed with the installation; later we'll need to source it before each use: cd infrared ; virtualenv .venv && source .venv/bin/activate
  • Proceed with the upgrade of pip and setuptools (required) and the installation of InfraRed:
    • pip install --upgrade pip
    • pip install --upgrade setuptools
    • pip install .

Remote host setup

Once done, we need to set up the requirements on the host we'll use to virtualize; this includes having the system registered against a repository providing the required packages.

  • Register RHEL7 and update:
    • subscription-manager register (provide your credentials)
    • subscription-manager attach --pool= (check pool number first)
    • subscription-manager repos --disable=*
    • for canal in rhel-7-server-extras-rpms rhel-7-server-fastrack-rpms rhel-7-server-optional-fastrack-rpms rhel-7-server-optional-rpms rhel-7-server-rh-common-rpms rhel-7-server-rhn-tools-rpms rhel-7-server-rpms rhel-7-server-supplementary-rpms rhel-ha-for-rhel-7-server-rpms;do subscription-manager repos --enable=$canal; done

NOTES

  • OSP7 did not contain an RPM-packaged version of the images, so a repo with the images needs to be defined like:
    • time infrared tripleo-undercloud --version $VERSION --images-task import --images-url $REPO_URL
    • NOTE: --images-task import and --images-url
  • Ceph failed to install unless --storage-backend ceph was provided (there's an open bug for that)

Error reporting

  • IRC or github

RFE/BUGS

Some bugs/RFEs on their way to being implemented some day:

  • Allow use of localhost to launch the installation against the local host
  • Multi-env creation, so several osp-d versions can be deployed on the same hypervisor with only one of them launched
  • Automatically add --storage-backend ceph when ceph nodes are defined

Using Ansible to deploy InfraRed

This is something I began testing to automate the basic setup; it still needs a decision on which version to use, and the deployment of the infrastructure VMs, but it does automate setting up the hypervisors.

---
- hosts: all
  user: root

  tasks:
    - name: Install git
      yum:
        name:
          - "git"
          - "python-virtualenv"
          - "openssl-devel"
        state: latest

    - name: "Checkout InfraRed to /root/infrared folder"
      git:
        repo: https://github.com/redhat-openstack/infrared.git
        dest: /root/infrared

    - name: Initialize virtualenv
      pip:
        virtualenv: "/root/infrared/.venv"
        name:
          - setuptools
          - pip

    - name: Upgrade virtualenv pip
      pip:
        virtualenv: "/root/infrared/.venv"
        name: pip
        extra_args: --upgrade

    - name: Upgrade virtualenv setuptools
      pip:
        virtualenv: "/root/infrared/.venv"
        name: setuptools
        extra_args: --upgrade

    - name: Install InfraRed
      pip:
        virtualenv: "/root/infrared/.venv"
        name: file:///root/infrared/.

This playbook will check out the git repo, upgrade the virtualenv's pip and setuptools via the extra pip tasks, install InfraRed, etc.
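A usage sketch (the inventory host and playbook filename are placeholders):

# the trailing comma makes ansible treat the argument as an inline inventory
ansible-playbook -i "hypervisor.example.com," infrared-setup.yml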

Deploy environment examples

This section shows the commands that might be used to deploy some environments, with sample timings on a host with 64 GB of RAM.

Common requirements

export HOST=myserver.com
export HOST_KEY=~/.ssh/id_rsa
export ANSIBLE_LOG_PATH=deploy.log

Cleanup

time infrared virsh --cleanup True --host-address $HOST --host-key $HOST_KEY

OSP 9 (3 + 2)

Define version to use

export VERSION=9

time infrared virsh --host-address $HOST --host-key $HOST_KEY --topology-nodes "undercloud:1,controller:3,compute:2"

real    11m19.665s
user    3m7.013s
sys     1m27.941s

time infrared tripleo-undercloud --version $VERSION --images-task rpm

real    48m8.742s
user    10m35.800s
sys     5m23.126s

time infrared tripleo-overcloud --deployment-files virt --version 9 --introspect yes --tagging yes --post yes

real    43m44.424s
user    9m36.592s
sys     4m39.188s

OSP 8 (3+2)

export VERSION=8

time infrared virsh --host-address $HOST --host-key $HOST_KEY --topology-nodes "undercloud:1,controller:3,compute:2"

real    11m29.478s
user    3m10.174s
sys     1m28.276s

time infrared tripleo-undercloud --version $VERSION --images-task rpm

real    40m47.387s
user    9m14.151s
sys     4m24.820s

time infrared tripleo-overcloud --deployment-files virt --version $VERSION --introspect yes --tagging yes --post yes

real    42m57.315s
user    9m2.412s
sys     4m25.840s

OSP 10 (3+2)

export VERSION=10

time infrared virsh --host-address $HOST --host-key $HOST_KEY --topology-nodes "undercloud:1,controller:3,compute:2"

real    10m54.710s
user    2m42.761s
sys     1m12.844s

time infrared tripleo-undercloud --version $VERSION --images-task rpm

real    43m10.474s
user    8m34.905s
sys     4m3.732s

time infrared tripleo-overcloud --deployment-files virt --version $VERSION --introspect yes --tagging yes --post yes

real    54m1.111s
user    11m55.808s
sys     6m1.023s

OSP 7 (3+2+3)

export VERSION=7

time infrared virsh --host-address $HOST --host-key $HOST_KEY --topology-nodes "undercloud:1,controller:3,compute:2,ceph:3"

real    13m46.205s
user    3m46.753s
sys     1m47.422s

time infrared tripleo-undercloud --version $VERSION --images-task import    --images-url $URLTOIMAGES

real    43m14.471s
user    9m45.479s
sys     4m53.126s

time infrared tripleo-overcloud --deployment-files virt --version $VERSION --introspect yes --tagging yes --post yes     --storage-backend ceph

real    86m47.471s
user    20m2.582s
sys     9m42.577s

Wrapping-up

Please do refer to the InfraRed documentation to dig deeper into its possibilities and, if interested, consider contributing!
