Pablo Iranzo Gómez's blog

A bunch of unrelated data

Jan 16, 2018

Recent changes in Magui and Citellus

Table of contents

  1. What's new?
    1. Citellus
    2. Magui
  2. Wrap up!

What's new?

During recent weeks we've been coding and performing several changes to Citellus and Magui.

Checking the latest logs or the list of issues opened and closed on github is probably not an easy task, nor the best way to get 'up to date' with changes, so I'll try to compile a few here.

First of all, we're going to present it in 2018, so come by and say hi if you're attending :-)

Citellus

Some of the changes include...


  • New functions for bash scripts!
    • We've created a lot of functions to check different things:
      • installed rpm
      • rpm over a specific version
      • compare dates over X days
      • regexp in file
      • etc.
    • These functions allow quicker plugin development.
  • save/restore options so they can be loaded automatically for each execution
    • Think of enabled filters, excluded, etc
  • metadata added for plugins and returned as dictionary
  • plugin has a unique ID for all installations based on plugin relative path and plugin name
    • We use that ID in magui to select the plugin data we'll act on
  • plugin priority!
    • Plugins are assigned a number between 0 and 1000 that represents how likely the issue is to affect your environment, and you can also filter on it with --prio
  • extended via 'extensions' to provide support for other plugins
    • moved prior plugins to be core extension
    • ansible playbook support via ansible-playbook command
    • metadata plugins that just generate metadata (hostname, date for sosreport, etc)
  • Web Interface!!
    • David Valee Delisle did a great job preparing an html page that loads citellus.json and shows it graphically.
    • Thanks to his work, we extended some other features, like priority and categories, that are calculated by citellus and consumed by citellus-www.
    • The interface can also load magui.json (with ?json=magui.json) and show its output.
    • We extended citellus with --web to automatically create the json file named citellus.json in the folder specified with -o and copy the citellus.html file there. So if you serve sosreports over http, you can point to citellus.html to see the graphical status! (check the latest image at the citellus website as www.png)
  • Increased plugin count!
    • We now have more than 119 plugins across different categories
    • A new plugin in python that checks for unexpected reboots
    • Spectre/Meltdown security checks!
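To give an idea of what such helper functions can look like, here is a minimal sketch. The function names and behavior are illustrative guesses, not the exact citellus implementations:

```shell
#!/bin/bash
# Illustrative sketches only: the real citellus helpers may differ in
# naming and behavior.

# Check whether an rpm name appears in a sosreport's installed-rpms listing.
is_rpm_installed() {   # usage: is_rpm_installed <rpm-name> <installed-rpms-file>
    grep -q "^$1" "$2"
}

# Check whether a file contains a line matching a regexp.
is_lineinfile() {      # usage: is_lineinfile <regexp> <file>
    grep -qE "$1" "$2"
}
```

With helpers like these, a plugin body often reduces to a couple of calls plus the return-code convention.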


Magui

  • If there's an existing citellus.json, magui loads it to speed up processing across multiple sosreports.
  • Magui can also use ansible-playbook to copy the citellus program to a remote host, run the command there, and bring back the generated citellus.json, so you can quickly run citellus across several hosts without having to manually perform operations or generate sosreports.
  • Moved prior data to two plugins:
    • citellus-outputs
      • Citellus plugins output arranged by plugin and sosreport
    • citellus-metadata
      • Outputs metadata gathered by metadata plugins in citellus arranged by plugin and sosreport
  • First plugins that compare data received from citellus at a global level
    • Plugins are written in python and use each plugin's id to work only on the data they know how to process
    • pipeline-yaml
      • Checks pipeline.yaml and warns if it differs across hosts
    • seqno
      • Checks latest galera seqno on hosts
    • release
      • Reports the RHEL release and warns if it differs across hosts
  • Enable quiet mode on the data received from citellus as well as on local plugins, so only outputs with ERROR, or output that differs across sosreports, are shown, even for magui plugins.

Wrap up!

As you can see, we've been busy trying to improve the plugins, the Citellus framework and Magui as well.

We've also been busy demonstrating its value to others, raising a lot of new issues and closing them with our commits (294 requests closed so far).

So come and tell us what else you're missing or how we can improve it to suit your needs (or code it yourself and submit a review!)


Oct 26, 2017

i18n and 'bash8' in bash

Table of contents

  1. Introduction
  2. Bashate for bash code validation
  3. Bash i18n


In order to improve Citellus and Magui, we implemented some unit testing to improve code quality.

The tests were written in python and, with some changes, it was also possible to validate the actual tests.

Also, we prepared the strings in python using the gettext library, so the actual messages can be translated to the language of choice (defaults to en, but can be changed via the --lang modifier of citellus).

Bashate for bash code validation

One of the things I missed was having some kind of pep8-like checker for bash to validate format and locate errors. After some research I came across bashate, and as it is written in python it was very easy to integrate:

  • Update test-requirements.txt to request bashate for 'tests'
  • Editing tox.ini to add a new section

~~~ini
[testenv:bashate]
commands = bash -c 'find citellus -name "*.sh" -type f -print0 | xargs -0 bashate -i E006'
~~~

This change means that running tox also pulls in the output of bashate, so all the integration already done for CI automatically got bash format checking too :-)

Bash i18n

Another interesting topic is the ability to easily write code in one language and, via poedit or equivalent editors, be able to localize it.

In python it is more or less easy, as we did for the citellus code, but I wasn't aware of any way of doing that for bash scripts (such as the plugins we use in citellus).

Doing a simple man bash gives some somewhat hidden hints:

    Equivalent to -D, but the output is in the GNU gettext po (portable object) file format.

So bash has a way to dump 'po' strings (to be edited with poedit or your editor of choice); only a bit more searching was required to find out how to really do it.

Apparently it is a lot easier than I expected, as long as we take some considerations into account:

  • LANG shouldn't be C, as that disables i18n
  • The environment variable TEXTDOMAIN should indicate the filename containing the translated strings.
  • The environment variable TEXTDOMAINDIR should contain the path to the root of the folder containing the translations, for example:
    • TEXTDOMAINDIR=citellus/locale
    • And the language file for en as:
      • citellus/locale/en/LC_MESSAGES/$

Now, the "trickier" part was to prepare scripts...

# Legacy way
echo "String"
# i18n way
echo $"String"
# Difficult... isn't it?

This change makes bash look for the string inside $TEXTDOMAINDIR/$LANG/LC_MESSAGES/$ and replace the strings on the fly with the translated ones (or fall back to the ones echoed).

In citellus we implemented it by exporting the extra variables defined above, so the scripts, as well as the framework, are ready for translation!
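Putting the pieces together, a minimal setup could look like this. The domain name and path below are placeholders for illustration, not the exact values citellus exports:

```shell
#!/bin/bash
# Placeholder values: citellus exports its own TEXTDOMAIN/TEXTDOMAINDIR.
export TEXTDOMAIN="myplugin"
export TEXTDOMAINDIR="$PWD/locale"   # expects locale/<lang>/LC_MESSAGES/myplugin.mo

# $"..." marks the string as translatable; when no .mo catalog is found,
# bash simply falls back to the literal text.
echo $"Hello world"
```

Running this without a compiled catalog just prints the original English string, which is exactly the fallback behavior described above.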

Just in case, some remarks:

  • I found some complaints when the same script outputs the same string in several places; what I did is create a VAR and echo that var.
  • As we have strings in the python code, etc. as well as in the bash files, I updated a script to extract the required strings:
# Extract python strings
python extract_messages -F babel.cfg -k _L
# Extract bash strings
find citellus -name "*.sh" -exec bash --dump-po-strings "{}" \; > citellus/locale/citellus-plugins.pot
# Merge bash and python strings
msgcat -F citellus/locale/citellus.pot citellus/locale/citellus-plugins.pot > citellus/locale/citellus-new.pot
# Move file to destination
cat citellus/locale/citellus-new.pot > citellus/locale/citellus.pot

This way, we're ready to use an editor to translate all the strings for the whole citellus + plugins.



Aug 17, 2017

Jenkins for running CI tests

Table of contents

  1. Why?
  2. Setup
    1. Tuning the OS
  3. Installing Jenkins
  4. Configure Jenkins
    1. Creating a Job
    2. Checking execution


While working on Citellus and Magui it soon became evident that Unit testing for validating the changes was a requirement.

Initially, using a .travis.yml file contained in the repo and the free service provided by Travis CI, we soon got the repo reporting whether the builds succeeded or not.

When it was decided to move to gerrithub, to work in a way more similar to what is done upstream, we improved on code commenting (peer review), but we lost the ability to run the tests in an automated way until the change was merged into github.

After some research, it became more or less evident that another tool, like Jenkins, was required to automate the UT process and report the status to individual reviews.


Some initial steps are required for integration:

  • Create an ssh keypair for jenkins to use
  • Create a github account to be used by jenkins and configure the above ssh keypair
  • Log in to gerrithub with that account
  • Set up Jenkins and the build jobs
  • On the parent project, grant the jenkins github account permission to +1/-1 on Verify

In order to set up the Jenkins environment, a new VM was spawned on one of our RHV servers.

This VM was installed with:

  • 20 Gb of HDD
  • 2 Gb of RAM
  • 2 VCPU
  • Red Hat Enterprise Linux 7 'base install'

Tuning the OS

RHEL7 provides a stable environment to run on, but at the same time we were lacking some of the latest tools we use for the builds.

As a dirty hack, it was altered in a way that is not recommended, but this helped to quickly check, as a proof of concept, whether it would work or not.

Once the OS was installed, some commands (do not run in production) were used:

pip install -U pip # to upgrade pip
pip install -U tox # to upgrade to the 2.x version

# Install python 3.5 on the system
yum -y install openssl-devel gcc
tar xvzf Python-3.5.0.tgz
cd Python*
./configure

# This will install in an alternate folder in the system so it doesn't replace the system-wide python version
make altinstall

# this is required to later allow tox to find the command as the 'jenkins' user
ln -s /usr/local/bin/python3.5 /usr/bin/

Installing Jenkins

The jenkins installation is easier: there's a 'stable' repo for RHEL and the procedure is documented:

wget -O /etc/yum.repos.d/jenkins.repo
rpm --import
yum install jenkins java
chkconfig jenkins on
service jenkins start
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --reload

This will install and start jenkins and open the firewall so you can access it.

If you can get to the url of your server on port 8080, you'll be presented with an initial procedure for installing Jenkins.

Jenkins dashboard

During it, you'll be asked for a password stored in a file on disk, and you'll be prompted to create a user that we'll use from now on to configure Jenkins.

Also, we'll be offered to deploy the most common set of plugins; choose that option, and later we'll add the Gerrit plugin and the Python one.

Configure Jenkins

Once we can log in to Jenkins, we need to enter the administration area, install new plugins, and install Gerrit Trigger.

Manage Jenkins

The link above details how to do most of the setup; in this case, for gerrithub, we required:

  • Hostname: our hostname
  • Frontend URL:
  • SSH Port: 29418
  • Username: our-github-jenkins-user
  • SSH keyfile: path_to_private_sshkey

Gerrit trigger configuration

Once done, click on Test Connection and validate that it works.

At the time of this writing, the version reported by the plugin was 2.13.6-3044-g7e9c06d when connected to gerrithub.

Gerrit servers

Creating a Job

Now, we need to create a Job (first option in Jenkins list of jobs).

  • Name: Citellus
  • Discard older executions:
    • Max number of executions to keep: 10
  • Source code Origin: Git
    • URL: ssh://
    • Credentials: jenkins (Created based on the ssh keypair defined above)
    • Branches to build: $GERRIT_BRANCH
    • Advanced
      • Refspec: $GERRIT_REFSPEC
    • Add additional behaviours
      • Strategy for choosing what to build:
        • Choosing strategy Gerrit Trigger
  • Triggers for launch:
    • Change Merged
    • Comment added with regexp: .recheck.
    • Patchset created
    • Ref Updated
    • Gerrit Project:
      • Type: plain
      • Pattern: citellusorg/citellus
    • Branches:
      • Type: Path
      • Pattern: **
  • Execute:
    • Python script:
import os
import tox

# environment is selected by the ``TOXENV`` env variable
tox.cmdline()  # runs tox and exits with its status code

Jenkins Job configuration

From this point, any new push (review) made against gerrit will trigger a Jenkins build (in this case, running tox). Additionally, a manual trigger of the job can be executed to validate the behavior.

Manual trigger

Checking execution

In our project, tox runs some unit tests on python 2.7 and python 3.5, as well as checking python PEP8 compliance.

Now, Jenkins will build and post messages on the review, stating that the build has started and reporting its results, also setting the 'Verified' flag.

Gerrithub comments by Jenkins

Enjoy having automated validation of new reviews before accepting them into your code!


Jul 31, 2017

Magui for analysis of issues across several hosts


Citellus allows checking a sosreport against known problems identified by the provided tests.

This approach is easy to implement and easy to test, but has limitations when a problem spans several hosts and only reveals itself when a general analysis is performed.

Magui tries to solve that by running the analysis functions inside citellus across a set of sosreports, unifying the data obtained per citellus plugin.

At the moment, Magui just does the grouping and visualization of the data; for example, give it a try with the seqno plugin of citellus to report the sequence number in the galera database:

[user@host folder]$ * -f seqno # (filtering for 'seqno' plugins).
{'/home/remote/piranzo/citellus/citellus/plugins/openstack/mysql/': {'ctrl0.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0},
                                                                             'ctrl1.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0},
                                                                             'ctrl2.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0}}}

Here we can see that the sequence number in the logs is the same for all hosts.

The goal, once this has been discussed and determined, is to write plugins that take the raw data from citellus and apply logic on top: parsing the raw data obtained by the increasing number of citellus plugins to detect issues like, for example:

  • galera seqno
  • cluster status
  • ntp synchronization across nodes
  • etc
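Most of these global checks reduce to "is this value identical on every host?". Magui's plugins are written in python, but the core idea can be sketched in bash (a hypothetical helper, not magui code):

```shell
#!/bin/bash
# Hypothetical sketch: given "host value" pairs (one per line), succeed
# only when every host reports the same value.
check_same() {   # usage: check_same <file>
    [ "$(awk '{print $2}' "$1" | sort -u | wc -l)" -le 1 ]
}
```

Applied to the galera seqno output above, all hosts reporting the same uuid:seqno pair would pass, while any divergence would flag a warning.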

Hope it's helpful for you! Pablo

PS: We've proposed this as a talk at the upcoming OSP Summit 2017 in Sydney, so if you want to see us there, don't forget to vote on


Jul 26, 2017

Citellus: framework for detecting known issues in systems.

Table of contents

  1. Background
  2. Citellus
  3. Writing a new test
  4. How to debug?


Since I became a Technical Account Manager for Cloud, and later a Software Maintenance Engineer for OpenStack, I've officially been part of Red Hat Support.

We usually diagnose issues based on data from the affected systems, sometimes from one system, and most of the time from several at once.

These might be controller nodes for OpenStack, computes running instances, IdM, etc.

In order to make it easier to grab the required information, we rely on sosreport.

Sosreport has a set of plugins for grabbing the required information from the system, ranging from networking configuration, installed packages, running services and processes to, for some components, API checks, database queries, etc.

But that's all: it does data gathering and packaging into a tarball, but nothing else.

In OpenStack we've already identified common issues, so we create kbases for them, ranging from covering some documentation gaps to specific use cases or configuration options.

Many times, a missed (but documented) configuration causes headaches and can be caught with simple checks, like TTL in ceilometer or stonith configuration in pacemaker.

Here is where Citellus comes to play.


The Citellus project, created by my colleague Robin, aims at creating a set of tests that can be executed against a live system or an uncompressed sosreport tarball (whether it applies to one or the other depends on the test).

The philosophy behind it is very easy:

  • There's a wrapper which allows selecting the plugins to use (or a folder containing plugins), verbosity, etc., and a sosreport folder to act against.
  • The wrapper checks the plugins available (they can be anything executable from linux, so bash, python, etc. can be used).
  • It then sets up some environment variables, like the path where the data can be found, and proceeds to execute the plugins, recording their output.
  • The plugins, on their side, determine if:
    • the plugin should be run or skipped depending on whether it's a live system or a sosreport
    • the plugin should run or be skipped because of a required file or package missing
  • Plugins provide a return code of:
    • $RC_OKAY for success
    • $RC_FAILED for failure
    • $RC_SKIPPED for skip
    • anything else (undetermined error)
  • Plugins provide 'stderr' with relevant messages:
    • reason to be skipped
    • reason for failure
    • etc.
  • The wrapper then sorts the output and prints it based on settings (grouping skipped and ok by default) and detailing failures.

You can check the provided plugins on the github repo (and hopefully also collaborate by sending yours).

Our target is to keep plugins easy to write, so we can extend the plugin set as much as possible, highlighting where focus should be put first; once typical issues are ruled out, deeper analysis can start.

Even if we've started with OpenStack plugins (that's what we do for a living), the software is open to checking whatever is there, and we've reached out to colleagues in different speciality areas for more feedback or contributions to make it even more useful.

As Citellus works with sosreports, it is easy to have it installed locally and try out new checks.

Writing a new test

Leading by example is probably easier, so let's illustrate how to create a basic plugin that checks if a system is a RHV hosted engine:


#!/bin/bash
if [ "$CITELLUS_LIVE" = "0" ]; then  ## Checks if we're running live or not
    grep -q ovirt-hosted-engine-ha "${CITELLUS_ROOT}/installed-rpms"  ## checks package
    returncode=$?  # stores return code
    if [ "x$returncode" == "x0" ]; then
        exit $RC_OKAY
    else
        echo "ovirt-hosted-engine is not installed" >&2  # outputs info
        exit $RC_FAILED  # returns code to wrapper
    fi
else
    echo "Not running on Live system" >&2
    exit $RC_SKIPPED
fi

The above example is a bit 'hacky': we count on the wrapper not outputting information when the return code is $RC_OKAY, so it should have another conditional to decide whether to write output or not.
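One way to make it less hacky is to refactor the check into a function so each branch is explicit. This is a sketch under the same environment-variable conventions; the RC_* numbers below are placeholders that the wrapper normally exports:

```shell
#!/bin/bash
# Placeholder return codes: the wrapper normally exports RC_OKAY,
# RC_FAILED and RC_SKIPPED along with CITELLUS_LIVE and CITELLUS_ROOT.
RC_OKAY=10; RC_FAILED=20; RC_SKIPPED=30

check_hosted_engine() {
    if [ "$CITELLUS_LIVE" = "0" ]; then
        if grep -q ovirt-hosted-engine-ha "${CITELLUS_ROOT}/installed-rpms"; then
            return "$RC_OKAY"
        fi
        echo "ovirt-hosted-engine is not installed" >&2
        return "$RC_FAILED"
    fi
    echo "Not running on Live system" >&2
    return "$RC_SKIPPED"
}
```

A plugin would then just call the function and `exit $?`, keeping the success path silent as the wrapper expects.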

How to debug?

The easiest way to do trial and error is to create a new folder for your plugins and test with something like this:

[user@host mytests]$ ~/citellus/ /cases/01884438/sosreport-20170724-175510/ycrta02.rd1.rf1 ~/mytests/  [-d debug]

DEBUG:__main__:Additional parameters: ['/cases/sosreport-20170724-175510/hostname', '/home/remote/piranzo/mytests/']
DEBUG:__main__:Found plugins: ['/home/remote/piranzo/mytests/']
_________ .__  __         .__  .__                
\_   ___ \|__|/  |_  ____ |  | |  |  __ __  ______
/    \  \/|  \   __\/ __ \|  | |  | |  |  \/  ___/
\     \___|  ||  | \  ___/|  |_|  |_|  |  /\___ \
 \______  /__||__|  \___  >____/____/____//____  >
        \/              \/                     \/
found #1 tests at /home/remote/piranzo/mytests/
mode: fs snapshot /cases/sosreport-20170724-175510/hostname
DEBUG:__main__:Running plugin: /home/remote/piranzo/mytests/
# /home/remote/piranzo/mytests/ failed
    “ovirt-hosted-engine is not installed “

DEBUG:__main__:Plugin: /home/remote/piranzo/mytests/, output: {'text': u'\x1b[31mfailed\x1b[0m', 'rc': 1, 'err': '\xe2\x80\x9covirt-hosted-engine is not installed \xe2\x80\x9c\n', 'out': ''}

That debug information comes from the python wrapper; if you need more detail inside your test, you can use set -x to have bash show more information about its progress.

Always keep in mind that all functionality is based on return codes and the stderr message, to keep it simple.
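That contract (return code plus stderr) is easy to exercise by hand. Here is a small illustrative harness, not the actual wrapper code (which is python), that captures both from a plugin:

```shell
#!/bin/bash
# Illustrative only: mimics how a wrapper might capture a plugin's
# stderr and return code.
run_plugin() {   # usage: run_plugin <script>; prints stderr, returns plugin rc
    local err rc
    err=$("$1" 2>&1 >/dev/null)   # swap streams: keep stderr, discard stdout
    rc=$?
    printf '%s\n' "$err"
    return "$rc"
}
```

Pointing this at a plugin lets you confirm that the failure message really goes to stderr and that the expected return code comes back.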

Hope it's helpful for you! Pablo

PS: We've proposed this as a talk at the upcoming OSP Summit 2017 in Sydney, so if you want to see us there, don't forget to vote on
