Pablo Iranzo Gómez's blog

Aug 17, 2017

Jenkins for running CI tests

Why?

While working on Citellus and Magui, it soon became evident that unit testing to validate the changes was a requirement.

Initially, using a .travis.yml file contained in the repo and the free service provided by https://travis-ci.org, we soon had the https://github.com repo reporting whether the builds succeeded or not.

When it was decided to move to https://gerrithub.io to work in a way closer to what is done upstream, we improved on code commenting (peer review), but we lost the ability to run the tests in an automated way until the change was merged into github.

After some research, it became more or less evident that another tool, like Jenkins, was required to automate the UT process and report the status back to individual reviews.

Setup

Some initial steps are required for integration:

  • Create an ssh keypair for jenkins to use (see the example right after this list)
  • Create a github account to be used by jenkins and configure the above ssh keypair on it
  • Log into gerrithub with that account
  • Set up Jenkins and the build jobs
  • On the parent project, grant the jenkins github account permission to +1/-1 on Verified
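
For the keypair, something as simple as the following is enough (the key type, file name and comment are just illustrative, nothing mandates them):

ssh-keygen -t rsa -b 4096 -C "jenkins@citellus-ci" -f ~/.ssh/jenkins_gerrithub

The public key is then added to the jenkins github account, and the private key is what the Jenkins credentials and the Gerrit Trigger configuration will reference later on.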

In order to set up the Jenkins environment, a new VM was spawned on one of our RHV servers.

This VM was installed with:

  • 20 GB of HDD
  • 2 GB of RAM
  • 2 vCPUs
  • Red Hat Enterprise Linux 7 'base install'

Tuning the OS

RHEL7 provides a stable environment to run on, but at the same time it lacks some of the latest tools we're using for the builds.

As a dirty hack, the system was altered in a way that is not recommended, but it helped to quickly check, as a proof of concept, whether it would work or not.

Once the OS was installed, the following commands (do not run in production) were used:

pip install -U pip # to upgrade pip
pip install -U tox # to upgrade tox to the 2.x version

# Install python 3.5 on the system
yum -y install openssl-devel gcc
wget https://www.python.org/ftp/python/3.5.0/Python-3.5.0.tgz
tar xvzf Python-3.5.0.tgz
cd Python*
./configure

# 'altinstall' installs into an alternate folder so the system-wide python version is not replaced
make altinstall

# this is required to later allow tox to find the command as the 'jenkins' user
ln -s /usr/local/bin/python3.5 /usr/bin/
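
As a quick sanity check (not part of the original procedure), both tools should now be reachable from the PATH:

python3.5 --version
tox --version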

Installing Jenkins

The jenkins installation is easier: there's a 'stable' repo for RHEL and the procedure is documented:

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
yum install jenkins java
chkconfig jenkins on
service jenkins start
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --reload

This will install and start jenkins and open the firewall so it can be accessed.

If you can reach the URL of your server on port 8080, you'll be presented with an initial procedure for setting up Jenkins.

Jenkins dashboard

During it, you'll be asked for a password stored in a file on disk, and you'll be prompted to create the user we'll be using from now on to configure Jenkins.
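
On an RPM-based installation that password file is typically the following (the exact path may vary with the Jenkins version and packaging):

cat /var/lib/jenkins/secrets/initialAdminPassword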

We'll also be offered the most common set of plugins to deploy; choose that option, and later we'll add the Gerrit Trigger and Python plugins.

Configure Jenkins

Once we can log into Jenkins, we need to enter the administration area and, from the plugin management section, install Gerrit Trigger.

Manage Jenkins

The above link details how to do most of the setup; in this case, for gerrithub, we required:

  • Hostname: our hostname
  • Frontend URL: https://review.gerrithub.io
  • SSH Port: 29418
  • Username: our-github-jenkins-user
  • SSH keyfile: path_to_private_sshkey

Gerrit trigger configuration

Once done, click on Test Connection and validate that it works.
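
If the test fails, the same credentials can be verified by hand from the Jenkins host; the gerrit version ssh command only needs a valid account (the username and key file are the ones configured above):

ssh -p 29418 -i path_to_private_sshkey our-github-jenkins-user@review.gerrithub.io gerrit version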

At the time of this writing, version reported by plugin was 2.13.6-3044-g7e9c06d when connected to gerrithub.io.

Gerrit servers

Creating a Job

Now, we need to create a Job (first option in Jenkins list of jobs).

  • Name: Citellus
  • Discard older executions:
    • Max number of executions to keep: 10
  • Source code Origin: Git
    • URL: ssh://@review.gerrithub.io:29418/zerodayz/citellus
    • Credentials: jenkins (Created based on the ssh keypair defined above)
    • Branches to build: $GERRIT_BRANCH
    • Advanced
      • Refspec: $GERRIT_REFSPEC
    • Add additional behaviours
      • Strategy for choosing what to build:
        • Choosing strategy Gerrit Trigger
  • Triggers for launch:
    • Change Merged
    • Comment added with regexp: .recheck.
    • Patchset created
    • Ref Updated
    • Gerrit Project:
      • Type: plain
      • Pattern: zerodayz/citellus
    • Branches:
      • Type: Path
      • Pattern: **
  • Execute:
    • Python script:
import os
import tox

os.chdir(os.getenv('WORKSPACE'))

# environment is selected by ``TOXENV`` env variable
tox.cmdline()

Jenkins Job configuration

From this point, any new push (review) made against gerrit will trigger a Jenkins build (in this case, running tox). Additionally, a manual trigger of the job can be executed to validate the behavior.

Manual trigger

Checking execution

In our project, tox runs some UTs on Python 2.7 and Python 3.5, as well as Python PEP style compliance checks.
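
The same checks can be run by hand from a checkout of the repo; the environment names below (py27, py35, pep8) are typical tox conventions and might differ in the project's tox.ini:

tox -e py27   # unit tests on python 2.7
tox -e py35   # unit tests on python 3.5
tox -e pep8   # style compliance checks

# or select the environment the same way the Jenkins job does:
TOXENV=py27 tox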

Now, Jenkins will build and post messages on the review, stating that the build has started and reporting its results, also setting the 'Verified' flag.
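
Behind the scenes, this is roughly equivalent to the gerrit review ssh command; a hand-run example (the change number and patchset are placeholders, and the inner double quotes are the quoting convention of the gerrit CLI) would be:

ssh -p 29418 our-github-jenkins-user@review.gerrithub.io gerrit review --verified +1 --message '"Build successful"' <change>,<patchset>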

Gerrithub comments by Jenkins

Enjoy having automated validation of new reviews before accepting them into your code!


Jul 31, 2017

Magui for analysis of issues across several hosts.

Background

Citellus allows checking a sosreport against known problems identified by the provided tests.

This approach is easy to implement and easy to test, but has limitations when a problem spans several hosts and only reveals itself when a general analysis is performed.

Magui tries to solve that by running the analysis functions inside citellus across a set of sosreports, unifying the data obtained per citellus plugin.

At the moment, Magui just does the grouping of the data and its visualization. For example, give it a try with the seqno plugin of citellus to report the sequence number in the galera database:

[user@host folder]$ magui.py * -f seqno # (filtering for ‘seqno’ plugins).
{'/home/remote/piranzo/citellus/citellus/plugins/openstack/mysql/seqno.sh': {'ctrl0.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0},
                                                                             'ctrl1.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0},
                                                                             'ctrl2.localdomain': {'err': '08a94e67-bae0-11e6-8239-9a6188749d23:36117633\n',
                                                                                                   'out': '',
                                                                                                   'rc': 0}}}

Here, we can see that the sequence number on the logs is the same for the hosts.

The goal, once this has been discussed and determined, is to write plugins that take the raw data from citellus, apply logic on top of it by parsing the output of the increasing number of citellus plugins, and are able to detect issues like, for example:

  • galera seqno
  • cluster status
  • ntp synchronization across nodes
  • etc

Hope it's helpful for you! Pablo

PS: We've proposed this as a talk for the upcoming OpenStack Summit 2017 in Sydney, so if you want to see us there, don't forget to vote on https://www.openstack.org/summit/sydney-2017/vote-for-speakers#/19095


Jul 26, 2017

Citellus: framework for detecting known issues in systems.

Background

Since I became Technical Account Manager for Cloud, and later Software Maintenance Engineer for OpenStack, I have officially been part of Red Hat Support.

We usually diagnose issues based on data from the affected systems, sometimes from one system and, most of the time, from several at once.

It might be controller nodes for OpenStack, computes running instances, IdM, etc.

In order to make it easier to grab the required information, we rely on sosreport.

Sosreport has a set of plugins for grabbing the required information from the system, ranging from networking configuration, installed packages, running services and processes, to, for some components, API checks, database queries, etc.

But that's all: it does data gathering and packaging into a tarball, but nothing else.
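
As an illustration (file names and options vary between sosreport versions), gathering the data on an affected host and unpacking it for later analysis looks roughly like this:

sosreport --batch               # gather data without interactive prompts
tar xf sosreport-*.tar.xz       # unpack the resulting tarball for analysis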

In OpenStack we've already identified common issues, so we create kbases for them, ranging from covering some documentation gaps to specific use cases or configuration options.

Many times, a missed (although documented) configuration option causes headaches and can be spotted with a simple check, like the TTL in ceilometer or the stonith configuration in pacemaker.

Here is where Citellus comes into play.

Citellus

The Citellus project https://github.com/zerodayz/citellus/, created by my colleague Robin, aims at creating a set of tests that can be executed against a live system or an uncompressed sosreport tarball (it depends on the test whether it applies to one or the other).

The philosophy behind it is very simple:

  • There's a wrapper, citellus.py, which allows selecting the plugins to use (or a folder containing plugins), the verbosity, etc., plus a sosreport folder to act against.
  • The wrapper checks the plugins available (anything executable on Linux, so bash, python, etc. can be used).
  • It then sets up some environment variables, like the path to find the data, and proceeds to execute the plugins, recording their output.
  • The plugins, on their side, determine:
    • whether they should run or be skipped depending on whether it's a live system or a sosreport
    • whether they should be skipped because a required file or package is missing
  • They provide a return code of:
    • $RC_OKAY for success
    • $RC_FAILED for failure
    • $RC_SKIPPED for skip
    • anything else for an undetermined error
  • They provide 'stderr' with relevant messages:
    • the reason to be skipped
    • the reason for failure
    • etc.
  • The wrapper then sorts the output and prints it based on settings (grouping skipped and okay by default) and detailing failures.

You can check the provided plugins on the github repo (and hopefully also collaborate by sending yours).

Our target is to keep plugins easy to write, so we can extend the plugin set as much as possible, highlighting where focus should be put first; once the typical issues are ruled out, the deeper analysis can start.

Even if we've started with OpenStack plugins (that's what we do for a living), the software is open to checking whatever is there, and we've reached out to colleagues in different speciality areas for more feedback or contributions to make it even more useful.

As Citellus works with sosreports, it is easy to install it locally and try out new tests.
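
A local setup can be as simple as cloning the repository and pointing the wrapper at an extracted sosreport (the sosreport path below is just a placeholder):

git clone https://github.com/zerodayz/citellus.git
cd citellus
./citellus.py /path/to/extracted/sosreport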

Writing a new test

Leading by example is probably easier, so let's illustrate how to create a basic plugin that checks whether a system is a RHV hosted engine:

#!/bin/bash

if [ "$CITELLUS_LIVE" = "0" ];  ## checks if we're running live or not
then
        grep -q ovirt-hosted-engine-ha $CITELLUS_ROOT/installed-rpms ## checks package
        returncode=$?  # stores return code
        if [ "x$returncode" == "x0" ];
        then
            exit $RC_OKAY
        else
            echo "ovirt-hosted-engine is not installed" >&2 # outputs info
            exit $RC_FAILED # returns code to wrapper
        fi
else
        echo "Not running on Live system" >&2
        exit $RC_SKIPPED
fi

The above example is a bit 'hacky', as we count on the wrapper not printing information when the return code is $RC_OKAY, so it should have another conditional to decide whether to write output or not.
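
A cleaner variant of the inner conditional (just a sketch, the success message is made up) would also report something meaningful on success:

if [ "x$returncode" == "x0" ];
then
    echo "ovirt-hosted-engine-ha package is installed" >&2  # informational only
    exit $RC_OKAY
else
    echo "ovirt-hosted-engine is not installed" >&2
    exit $RC_FAILED
fi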

How to debug?

The easiest way to do trial and error is to create a new folder for the plugins you want to test and use something like this:

[user@host mytests]$ ~/citellus/citellus.py /cases/01884438/sosreport-20170724-175510/ycrta02.rd1.rf1 ~/mytests/  [-d debug]


DEBUG:__main__:Additional parameters: ['/cases/sosreport-20170724-175510/hostname', '/home/remote/piranzo/mytests/']
DEBUG:__main__:Found plugins: ['/home/remote/piranzo/mytests/ovirt-engine.sh']
_________ .__  __         .__  .__                
\_   ___ \|__|/  |_  ____ |  | |  |  __ __  ______
/    \  \/|  \   __\/ __ \|  | |  | |  |  \/  ___/
\     \___|  ||  | \  ___/|  |_|  |_|  |  /\___ \
 \______  /__||__|  \___  >____/____/____//____  >
        \/              \/                     \/
found #1 tests at /home/remote/piranzo/mytests/
mode: fs snapshot /cases/sosreport-20170724-175510/hostname
DEBUG:__main__:Running plugin: /home/remote/piranzo/mytests/ovirt-engine.sh
# /home/remote/piranzo/mytests/ovirt-engine.sh: failed
    “ovirt-hosted-engine is not installed “

DEBUG:__main__:Plugin: /home/remote/piranzo/mytests/ovirt-engine.sh, output: {'text': u'\x1b[31mfailed\x1b[0m', 'rc': 1, 'err': '\xe2\x80\x9covirt-hosted-engine is not installed \xe2\x80\x9c\n', 'out': ''}

That debug information comes from the python wrapper; if you need more detail inside your test, you can use set -x to have bash show more information about its progress.
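
For example, temporarily enabling tracing near the top of the plugin under test (and removing it before committing):

#!/bin/bash
set -x   # print each command to stderr as it is executed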

Always keep in mind that, to keep things simple, all functionality is based on return codes and the stderr message.

Hope it's helpful for you! Pablo

PS: We've proposed this as a talk for the upcoming OpenStack Summit 2017 in Sydney, so if you want to see us there, don't forget to vote on https://www.openstack.org/summit/sydney-2017/vote-for-speakers#/19095
