Pablo Iranzo Gómez's blog

feb 20, 2017

Getting started with Ansible

I started to get familiar with Ansible because, apart from it becoming more and more accepted for OSP-related tasks and installations, I wanted to automate some tasks we needed to set up servers for the OpenStack group I work for.

First of all, it's recommended to get the latest version of Ansible (tested on RHEL7 and Fedora), but in order not to mess with the system Python libraries, it's convenient to use Python virtual environments.

A virtual environment allows you to create a 'chroot'-like environment that can contain library versions different from the ones installed with the system (but be careful: if it's not tracked as part of the usual system patching process, it might become a security concern).

virtualenvs

To create a virtualenv, we need the python-virtualenv package installed on our system and then we execute virtualenv with a target folder, for example:

[iranzo@iranzo ~]$ virtualenv .venv
New python executable in /home/iranzo/.venv/bin/python2
Also creating executable in /home/iranzo/.venv/bin/python
Installing setuptools, pip, wheel...done.

From this point, we have a base virtualenv installed, but as we would like to install more packages inside it, we'll first need to 'enter' it:

. .venv/bin/activate

And from there, we can list the available/installed packages:

[iranzo@iranzo ~]$ pip list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
appdirs (1.4.0)
packaging (16.8)
pip (9.0.1)
pyparsing (2.1.10)
setuptools (34.2.0)
six (1.10.0)
wheel (0.30.0a0)

Now, all packages we install using pip will get installed into this folder, leaving the system libraries intact.

Once we're finished, to return to the system's environment, we'll execute deactivate.
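For example, a quick way to double-check which environment is active is to look at the interpreter being picked up (the paths below just follow the example above):

(.venv) [iranzo@iranzo ~]$ which python
/home/iranzo/.venv/bin/python
(.venv) [iranzo@iranzo ~]$ deactivate
[iranzo@iranzo ~]$ which python
/usr/bin/python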

Pipsi

In order to simplify management, we can make use of pipsi, which not only allows us to install Python packages as we'd normally do with pip, but also takes care of creating the proper symlinks so the installed packages are directly available for execution.

If our distribution provides it, we can install pipsi on our system:

dnf -y install pipsi

But if not, we can use this workaround (for example, on RHEL7):

# Use pip to install pipsi on the system (should be a minor change not affecting other installed software)
pip install pipsi

From this point, we can use pipsi to take care of the installation and maintenance (upgrades, removal, etc.) of our Python packages.

For example, we can install ansible by executing:

pipsi install ansible

This might fail, as ansible does some compiling and, for doing so, it might require some development libraries on your system; take care of that to satisfy the build requirements of the packages.
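If that happens, installing the usual build dependencies is normally enough; the exact list depends on your distribution and Ansible version, so take this as an orientative example:

# Typical build dependencies for compiling Ansible's Python requirements
# (package names may vary; use yum instead of dnf on RHEL7)
dnf -y install gcc python-devel openssl-devel libffi-devel

Once the package is installed, pipsi can also take care of the maintenance mentioned above, for example:

# Upgrade, list or remove packages managed by pipsi
pipsi upgrade ansible
pipsi list
pipsi uninstall ansible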

Preparing for Ansible usage

At this point we have the ansible binary available for execution, as pipsi took care of setting up the required symlinks, etc.

Ansible uses an inventory file (which can also be provided on the command line) so it can connect to the hosts listed there and apply playbooks, which define the different actions to perform.

This file can, for example, consist of just a simple list of hosts to connect to, like:

192.168.1.1
192.168.1.2
myhostname.net

To get started, we create a simple playbook, for example a HW asset inventory:

---
- hosts: all
  user: root

  tasks:
    - name: Display inventory of host
      debug:
        msg: "{{ inventory_hostname }} | {{ ansible_default_ipv4.address }} | | | {{ ansible_memtotal_mb }} | | | {{ ansible_bios_date }}"

This will act on all hosts as user root and will run a task which prints a debug message crafted from some of the facts that Ansible gathers on each host at the beginning of the run.

Running it is quite easy:

[iranzo@iranzo labs]$ ansible-playbook -i myhost.net, inventory.yaml

PLAY [all] *********************************************************************

TASK [setup] *******************************************************************
ok: [myhost.net]

TASK [Display inventory of host] ***********************************************
ok: [myhost.net] => {
    "msg": "myhost.net | 192.168.1.1 | | | 14032 | | | 01/01/2011"
}

PLAY RECAP *********************************************************************
myhost.net             : ok=2    changed=0    unreachable=0    failed=0

This has connected to the target host and returned a message with the hostname, IP address, some empty fields, total memory and BIOS date.

This is quite a simple playbook but, for example, we can also use Ansible to deploy the ansible binary on our target host using other available modules; in this case, for simplicity, we'll not be using pipsi for the Ansible installation.

---
- hosts: all
  user: root

  tasks:
    - name: Install git
      yum:
        name:
          - "git"
          - "python-virtualenv"
          - "openssl-devel"
        state: latest

    - name: Install virtualenv
      pip:
        virtualenv: "/root/infrared/.venv"
        name: pipsi

    - name: Upgrade virtualenv pip
      pip:
        virtualenv: "/root/infrared/.venv"
        name: pip
        extra_args: --upgrade

    - name: Upgrade virtualenv setuptools
      pip:
        virtualenv: "/root/infrared/.venv"
        name: setuptools
        extra_args: --upgrade

    - name: Install Ansible
      pip:
        virtualenv: "/root/infrared/.venv"
        name: ansible

At this point, the target system should have ansible available from within the virtualenv we've created, and it should be usable by executing:

# Activate python virtualenv
. .venv/bin/activate
# execute ansible
ansible-playbook -i hosts ansible.yaml

Have fun!


nov 05, 2016

Unit testing for stampy

Since my prior post on Contributing to OpenStack, I've liked the idea of using some automated tests to validate functionality and, specifically, the corner cases that can arise when playing with the code.

Most of the errors fixed so far on stampy were related to some pieces of the code not properly handling UTF or some of the information returned, etc. Even if it has improved, the idea of ensuring that prior errors were not reintroduced into the code when other changes were made started to become a priority.

To implement them, I made use of nose, whose tests can be executed with nosetests and which is available on Fedora as 'python-nose'. To provide further automation, I've also relied on tox, again inspired by what OpenStack does.

Let's start with tox: once it is installed, a new configuration file is created for it, defining the different environments and settings in a way similar to:

[tox]
minversion = 2.0
envlist = py27,pep8
skipsdist = True

[testenv]
passenv = CI TRAVIS TRAVIS_*
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands =
    /usr/bin/find . -type f -name "*.pyc" -delete
    nosetests \
        []

[testenv:pep8]
commands = flake8

[testenv:venv]
commands = {posargs}

[testenv:cover]
commands =
  coverage report

[flake8]
show-source = True
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build

This file defines two environments: one for validating pep8 for the Python formatting, and another one for running the tests under Python 2.7.

The environment definition for the tests also runs some commands, like executing the aforementioned nosetests to run the defined unit tests.

The above tox.ini also mentions requirements.txt and test-requirements.txt, which define the Python packages required to validate the program. These are automatically installed by tox into a virtualenv, so the alternate versions being used don't interfere with the system-wide ones.
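As an illustration (matching the packages that tox reports as installed later in this post; the split between the two files is just an assumption here), they can be as simple as:

$ cat requirements.txt
prettytable

$ cat test-requirements.txt
nose
coverage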

About the tests themselves: as nosetests does automatic discovery of the tests to perform, I've created a new folder named tests/ and placed there some files named so they sort in alphabetical order:

ls -l tests
total 28
-rw-r--r--. 1 iranzo iranzo  709 nov  5 16:58 test_00-setup.py
-rw-r--r--. 1 iranzo iranzo  739 nov  3 09:56 test_10-alias.py
-rw-r--r--. 1 iranzo iranzo  456 nov  3 23:53 test_10-autokarma.py
-rw-r--r--. 1 iranzo iranzo  581 nov  3 09:56 test_10-karma.py
-rw-r--r--. 1 iranzo iranzo 3544 nov  5 18:19 test_10-process.py
-rw-r--r--. 1 iranzo iranzo  477 nov  3 23:15 test_10-quote.py
-rw-r--r--. 1 iranzo iranzo  230 nov  3 09:56 test_10-sendmessage.py

The first one, test_00-setup, runs the commands required to define the environment, as on each validation run of tox a fresh environment should be available so as not to mask errors that could otherwise be overlooked.

#!/usr/bin/env python
# encoding: utf-8

from unittest import TestCase

from stampy.stampy import config, setconfig, createdb, dbsql

# Precreate DB for other operations to work
try:
    createdb()
except:
    pass

# Define configuration for tests
setconfig('token', '279488369:AAFqGVesZ-81n9sFafLQxUUCVO8_8L3JNEU')
setconfig('owner', 'iranzo')
setconfig('url', 'https://api.telegram.org/bot')
setconfig('verbosity', 'DEBUG')

# Empty karma database in case it contained some leftover
dbsql('DELETE from karma')
dbsql('DELETE from quote')
dbsql('UPDATE SQLITE_SEQUENCE SET SEQ=0 WHERE NAME="quote"')


class TestStampy(TestCase):
    def test_owner(self):
        self.assertEqual(config('owner'), 'iranzo')

This file creates the database if none exists and defines some sample values, like the DEBUG verbosity level, the URL for contacting the Telegram API servers, or even a token that can be used to test the message-sending functionality.

Also, if the database already exists, it empties the karma and quote tables (and sets the sequence to 0 to simulate TRUNCATE, which is not available on SQLite).

A unit test is specified under the class inherited from TestCase (imported from unittest); there, for each one of the tests we want to perform, a new 'definition' is created and an assert is used inside it; for example, assertEqual validates that the function call returns the value provided as the second argument, failing otherwise.

From that point, the tests are run, again in alphabetical order, so be careful with the naming of each test, or define a sequence number to get a top-to-bottom approach that will probably be easier to understand.

For example, for karma changes we have:

#!/usr/bin/env python
# encoding: utf-8

from unittest import TestCase

from stampy.stampy import getkarma, updatekarma, putkarma


class TestStampy(TestCase):
    def test_putkarma(self):
        putkarma('patata', 0)
        self.assertEqual(getkarma('patata'), 0)

    def test_getkarma(self):
        self.assertEqual(getkarma('patata'), 0)

    def test_updatekarmaplus(self):
        updatekarma('patata', 2)
        self.assertEqual(getkarma('patata'), 2)

    def test_updatekarmarem(self):
        updatekarma('patata', -1)
        self.assertEqual(getkarma('patata'), 1)

This starts by putting a known karma on a word, validating it, verifying the query, updating the value by a positive number and, later, decreasing it with a negative one.

For the aliases, we use a similar approach, as we also play with the karma changes when an alias is defined:

#!/usr/bin/env python
# encoding: utf-8

from unittest import TestCase

from stampy.stampy import getkarma, putkarma, updatekarma, createalias, getalias, deletealias


class TestStampy(TestCase):

    def test_createalias(self):
        createalias('patata', 'creilla')
        self.assertEqual(getalias('patata'), 'creilla')

    def test_getalias(self):
        self.assertEqual(getalias('patata'), 'creilla')

    def test_increasealiaskarma(self):
        updatekarma('patata', 1)
        self.assertEqual(getkarma('patata'), 1)

        # Alias doesn't get increased as the 'aliases' modifications are in
        # process, not in the individual functions
        self.assertEqual(getkarma('creilla'), 0)

    def test_removealias(self):
        deletealias('patata')
        self.assertEqual(getkarma('creilla'), 0)

    def test_removekarma(self):
        putkarma('patata', 0)
        self.assertEqual(getkarma('patata'), 0)

Here an alias is created and verified, karma is increased on the word that has an alias, and then the aliased value is checked.

As noted in the above example, the individual karma functions don't take the aliases into consideration, so this must be handled by processing a message set via process(messages), which has also been modified, as have other functions, to allow individual tests to be implemented for them.

This will surely end up in some more code rewriting so the functions can be fully tested, individually and as a whole, to ensure that the bot behaves as intended... and many more tests will come to the code.

To finish, an example of the execution of tox and the results obtained:

tox
py27 installed: coverage==4.2,nose==1.3.7,prettytable==0.7.2
py27 runtests: PYTHONHASHSEED='604985980'
py27 runtests: commands[0] | /usr/bin/find . -type f -name *.pyc -delete
py27 runtests: commands[1] | nosetests
..................
----------------------------------------------------------------------
Ran 18 tests in 14.996s

OK
pep8 installed: coverage==4.2,nose==1.3.7,prettytable==0.7.2
pep8 runtests: PYTHONHASHSEED='604985980'
pep8 runtests: commands[0] | flake8
WARNING:test command found but not installed in testenv
  cmd: /usr/bin/flake8
  env: /home/iranzo/DEVEL/private/stampython/.tox/pep8
Maybe you forgot to specify a dependency? See also the whitelist_externals envconfig setting.
__________________________________________________________________________ summary ___________________________________________________________________________
  py27: commands succeeded
  pep8: commands succeeded
  congratulations :)

If you're using a CI system like 'Travis', which is also available for https://github.com repos, a .travis.yml can be added to the repo to ensure those tests are performed automatically on each code push:

language: python
python:
    - 2.7

notifications:
    email: false

before_install:
    - pip install pep8
    - pip install misspellings
    - pip install nose

script:
    # Run pep8 on all .py files in all subfolders
    # (I ignore "E402: module level import not at top of file"
    # because of use case sys.path.append('..'); import <module>)
    - find . -name \*.py -exec pep8 --ignore=E402,E501 {} +
    - find . -name '*.py' | misspellings -f -
    - nosetests

Enjoy!


jul 21, 2016

Contributing to OpenStack

Contributing to an open source project might take some time at the beginning; the good thing with OpenStack is that there are lots of guides on how to start and collaborate.

What I did was look for a bug in the project tagged as low-hanging-fruit; this allows you to browse a large list of bugs classified as easy, so they are the best place for newcomers to get familiar with the workflow.

I found an issue with weight, which is supposed to be an integer: the code was converting the value from float to integer (0.1 -> 0), which was considered invalid, as an error should be returned instead.

When I checked the Neutron-LBaaS code, I found out where the problem was: the value provided was being converted to an integer instead of being validated.

Before contributing you need to set up the required accounts and tooling (the Developer's Guide referenced below covers this).

Submitting a change is quite easy:

# Select the project, 'neutron-lbaas' for me
each='neutron-lbaas'
git clone git@github.com:openstack/$each.git
cd $each
# This sets up git-review, getting the required hooks, etc.
git-review -s
# Create a new branch so we can keep our changes separate
git checkout -b new-branch

# Edit files with changes
git add $files
git commit -m "Descriptive message"
# Send to upstream for review:
git-review

git-review will output a URL you can use to preview your change, and the hooks will automatically add a 'Change-Id' so subsequent changes are linked to it.

NOTE: full reference is available at the Developer's Guide

The biggest issue started here:

  • In order not to require a new function to validate integers, I used the one for non-negative values, which already does this test, but one of the reviewers suggested writing a dedicated function
  • Functions were imported from neutron-lib, so I submitted a second change to the neutron-lib project
  • As the change in neutron-lib couldn't be marked as a dependency (neutron-lbaas builds against the version already published), I had to define an interim version of the function so that neutron-lbaas could use it in the meantime, and raise another bug to later remove this interim function once neutron-lib includes the validate_integer function
  • As part of the comments on the neutron-lib review, it was found that it would be nice to also validate values, so after some discussion I moved to using the internal validate_values.
  • Of course, validate_values just does data in valid_values, so it fails if data or valid_values are not comparable and it doesn't do any conversion depending on the values themselves, so this spun off another review for improving the validate_values function.

At the moment, I'm trying to close the neutron-lib change to use the function already defined and have it merged, and then continue with the other steps, like removing the interim function in neutron-lbaas, working on enhancing validate_values, and closing all the dependent Launchpad bugs I've created for tracking.

My experience so far is that sometimes it might be a bit difficult, as code review is a collaborative environment where different opinions and approaches are shared, some of them 'easier' and some others 'pickier', like complaining about an 'extra space', etc.

Of course, all the code is checked by some automation engines when submitted, which validate that the code still builds, has no formatting errors, etc., but many of these checks can be executed locally by using tox, which allows you to run part of the tests, like:

  • tox -e pep8
  • tox -e py27
  • tox -e coverage

These, respectively, validate pep8 formatting (line length, spaces around operators, docstring formatting, etc.), run the unit tests under Python 2.7, and report test coverage.

After each set of changes made to apply the feedback received, make sure to:

# Add the modified files to a commit
git add $files_modified

# Create the commit with the changes
git commit -m "whatever"

# This will show you the last two commits; leave the first one as 'pick' and,
# at the beginning of the second one, replace 'pick' with 'f' so the changes
# are squashed into the first one without keeping the second commit message
git rebase -i HEAD~2

# Fix the commit message if needed (like fixing formatting,
# setting dependent commits, or the bugs it closes, etc.)
git commit --amend

# Submit the changes again for review
git-review

Also, keep in mind that apart from submitting the code change, it is important to submit automated validation tests, which can be executed with tox -e py27 to check that the functions return the values we expect even when the input data is out of range, or with coverage, to validate that the code is covered (check what is defined in tox.ini).

And last but not least, expect to get lots of comments on more serious changes, like changes to stable libs, as lots of reviewers will come to ensure that everything looks good, and they might even discuss it in the regular meetings to make sure that the change, in the proposed approach, is a good fit for the product.


jun 03, 2016

New blog rendering engine: Pelican

As always, I don't usually find myself keen to write about the things I do until I later realize they might be helpful for others, and that's why in the past I decided to move the place where I was publishing that information to GitHub and also take the chance to practice Markdown for writing the entries.

At that time, I moved my old blog posts to Markdown to be used in conjunction with Jekyll, using Octopress as the engine rendering the contents into a static website. The setup and migration were not difficult, but it still required using some Ruby, while I was more familiar with Python.

For some time I had been checking other platforms following the same approach of rendering Markdown files, and I settled on Pelican, which is included in the Fedora repos (python-pelican). Pelican offers a similar behaviour, also providing a local server that allows you to quickly test new settings (plugins, themes, etc.) before publishing the resulting website to a hosting provider.
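For reference, a typical workflow with the Makefile that pelican-quickstart generates looks more or less like this (target names may vary between Pelican versions):

# Create the initial skeleton (pelicanconf.py, Makefile, content/, ...)
pelican-quickstart
# Render the Markdown sources under content/ into the output/ folder
make html
# Serve the result locally (http://localhost:8000) to review it
make serve
# Build with the publish settings (publishconf.py) before uploading
make publish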

As I did with Jekyll+Octopress, I'm still using github.io for hosting, and I'm in the process of adapting some changes like additional plugins and theme tweaks, and I'm considering developing a theme of my own.


aug 28, 2015

Filtering email with imapfilter

Some time ago, email filter management stopped scaling for me: as I was using server-side filtering, I had to deal with the web-based interface, which was missing some elements like drag&drop reordering of rules, cloning, etc.

As I was already using offlineimap to sync from the remote mail server to a maildir folder on my system, I had almost all the elements I needed.

After searching through several options, imapfilter seemed to be a perfect fit, so I started with a small set of rules and began integrating it into my email workflow.

On my first attempts, I set up a pre-sync hook on offlineimap alongside the postsync hook I already had:

presynchook  = time imapfilter
postsynchook = ~/.mutt/postsync-offlineimap.sh

Initial attempts were not good at all: applying the filters on the remote IMAP server was very time consuming, and my nominal 1 minute delay after finishing one check was becoming a real 10-15 minute interval between checks because of the imapfilter run, and this was not scaling as I kept adding new rules.

After some tries, and as I already had all the email synced offline, I moved the filtering to be done locally instead of server-side; but as imapfilter requires an IMAP server, I tricked dovecot into offering the local folder via IMAP:

protocols = imap
mail_location = maildir:~/.maildir/FOLDER/:INBOX=~/.maildir/FOLDER/.INBOX/
auth_debug_passwords=yes

This also required changing my folder names to use "." in front of them, so I needed to change the mutt configuration too:

set mask=".*"

and my mailfolders script:

set mbox_type=Maildir
set folder="~/.maildir/FOLDER"
set spoolfile="~/.maildir/FOLDER/.INBOX"

#mailboxes `echo -n "+ "; find ~/.cache/notmuch/mutt/results ~/.maildir/FOLDER -type d -not -name 'cur' -not -name 'new' -not -name 'tmp' -not -name '.notmuch' -not -name 'xapian' -not -name 'FOLDER' -printf "+'%f' "`

mailboxes `find ~/.maildir/FOLDER -type d -name cur -printf '%h '|tr " " "\n"|grep -v "^/home/iranzo/.maildir/FOLDER$"|sort|xargs echo`
#Store reply on current folder
folder-hook . 'set record="^"'

After this, I could start using imapfilter and working on my set of rules... but the first problem appeared: apparently I started having some duplicated email, as I was cancelling and rerunning the script while debugging, so a new tool named IMAPdedup was also introduced to 'dedup' my IMAP folders with a small script:

#!/bin/bash
(
for folder in $(python ~/.bin/imapdedup.py -s localhost  -u iranzo    -w '$PASSWORD'  -m -c -v  -l)
do
    python ~/.bin/imapdedup.py -s localhost  -u iranzo    -w '$PASSWORD'  -m -c  "$folder"

done
) 2>&1|grep "will be marked as deleted"

This script takes care of listing all the email folders on 'localhost' with my username and password (which can be scripted or gathered with external tools) and dedups the email after each sync. It runs from my postsync-offlineimap.sh, together with the lbdq script for fetching new addresses, notmuch, and another imapfilter run after syncing (to catch the limited filtering I do server-side).
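As a rough sketch (the script paths and wrapper name below are placeholders, not the exact ones I use), the post-sync hook chains those steps more or less like this:

#!/bin/bash
# Illustrative postsync-offlineimap.sh: names and paths are placeholders

# Remove duplicated messages using the IMAPdedup wrapper shown above
~/.bin/dedup-folders.sh

# Index the newly synced mail with notmuch
notmuch new

# Run imapfilter again locally to catch what the server-side rules left behind
imapfilter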

I still do some server-side filtering (4 rules), mostly to move into a "Pending sort" folder the email that can wait:

  • New support cases remain at INBOX
  • All emails from case updates, bugzilla, etc to _pending
  • All emails containing 'list' or 'bounces' in the From field, to _pending
  • All emails not containing me directly on CC or To, to _pending

This more or less ensures a clean INBOX with the most important things still there, and easier rule handling for email sorting.

So, after some tests, this is at the moment a simplified version of my filtering file:

---------------
--  Options  --
---------------

options.timeout = 30
options.subscribe = true
options.create = false

function offlineimap (key)
    local status
    local value
    status, value = pipe_from('grep -A2 ACCOUNT ~/.offlineimaprc | grep -v ^#|grep ' .. key .. '|cut -d= -f2')
    value = string.gsub(value, ' ', '')
    value = string.gsub(value, '\n', '')
    return value
end

----------------
--  Accounts  --
----------------

-- Connects to "imap1.mail.server", as user "user1" with "secret1" as
-- password.
EXAMPLE = IMAP {
    server = 'localhost',
    username = 'iranzo',
    password = '$PASSWORD',
    port = 143
}
-- My email
myuser = 'ranzo'

function mine(messages)
    email=messages:contain_cc(myuser)+messages:contain_to(myuser)+messages:contain_from(myuser)
    return email
end

function filter(messages,email,destination)
    messages:contain_from(email):move_messages(destination)
    messages:contain_to(email):move_messages(destination)
    messages:contain_cc(email):move_messages(destination)
    messages:contain_field('sender', email):move_messages(destination)
end

function deleteold(messages,days)
    todelete=messages:is_older(days)-mine(messages)
    todelete:move_messages(EXAMPLE['Trash'])
end


-- Define the msgs we're going to work on

-- Move sent messages to INBOX to later sorting
sent = EXAMPLE.Sent:select_all()
sent:move_messages(EXAMPLE['INBOX'])

inbox = EXAMPLE['INBOX']:select_all()
pending = EXAMPLE['INBOX/_pending']:select_all()
todos = pending + inbox

-- Mark as read messages sent from my user
todos:contain_from(myuser):is_recent():mark_seen()

-- Delete google calendar forwards
todos:contain_to('piranzo@gapps.example.com'):delete_messages()

-- Move all spam messages to Junk folder
spam = todos:contain_field('X-Spam-Score','*****')
spam:move_messages(EXAMPLE['Junk'])

-- Move Jive notifications
filter(todos,'jive-notify@example.com',EXAMPLE['INBOX/EXAMPLE/Customers/_jive'])

-- Filter EXAMPLEN
filter(todos,'dev-null@rhn.example.com',EXAMPLE['Trash'])

-- Filter PNT
filter(todos:contain_subject('[PNT] '),'noreply@example.com',EXAMPLE['Trash'])

-- Filter CPG (Customer Private Group)
filter(todos:contain_subject('Red Hat - Group '),'noreply@example.com',EXAMPLE['INBOX/EXAMPLE/Customers/Other/CPG'])

-- Remove month start reminders
todos:contain_subject('mailing list memberships reminder'):delete_messages()

-- Delete messages about New accounts created (RHN)
usercreated=todos:contain_subject('New Red Hat user account created')*todos:contain_from('noreply@example.com')
usercreated:delete_messages()

-- Search messages from CPG's
cpg = EXAMPLE['INBOX/EXAMPLE/Customers/Other/CPG']:select_all()
cpg:contain_subject('Cust1'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust1/CPG'])
cpg:contain_subject('Cust2'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust2/CPG'])
cpg:contain_subject('Cust3'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust3/CPG'])
cpg:contain_subject('Cust4'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust4/CPG'])

-- Move bugzilla messages
filter(todos:contain_subject('] New:'),'bugzilla@example.com',EXAMPLE['INBOX/EXAMPLE/Customers/_bugzilla/new'])
filter(todos,'bugzilla@example.com',EXAMPLE['INBOX/EXAMPLE/Customers/_bugzilla'])

-- Move all support messages to Other for later processing
filter(todos:contain_subject('(NEW) ('),'support@example.com',EXAMPLE['INBOX/EXAMPLE/Customers/_new'])
filter(todos:contain_subject('Case '),'support@example.com',EXAMPLE['INBOX/EXAMPLE/Customers/Other/cases'])

EXAMPLE['INBOX/EXAMPLE/Customers/_new']:is_seen():move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Other/cases'])

support = EXAMPLE['INBOX/EXAMPLE/Customers/Other/cases']:select_all()
-- Restart the search only for messages in Other to also process if we have new rules

support:contain_subject('is about to breach its SLA'):delete_messages()
support:contain_subject('has breached its SLA'):delete_messages()
support:contain_subject(' has had no activity in '):delete_messages()

-- Here the process is customer after customer and mark as read messages from non-prio customers
support:contain_body('Cust1'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust1/cases'])
support:contain_body('Cust2'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust2/cases'])
support:contain_body('Cust3'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust3/cases'])
support:contain_body('Cust4'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust4/cases'])

-- For customers with common matching names, use the header field
support:contain_field('X-SFDC-X-Account-Number', 'XXXX'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust5/cases'])
support:contain_body('Customer         : COMMONNAME'):move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust6/cases'])

-- Non prio customers (mark updates as read)
cust7 = support:contain_body('WATCHINGCUST') + support:contain_body('Cust7')
cust7:mark_seen()
cust7:move_messages(EXAMPLE['INBOX/EXAMPLE/Customers/Cust7/cases'])

-- Filter other messages by domain
filter(todos,'todos.es', EXAMPLE['INBOX/EXAMPLE/Customers/Cust8'])

-- Process all remaining messages in INBOX + all read messages in pending-sort for mailing lists and move to lists folder
filter(todos,'list', EXAMPLE['INBOX/Lists'])
filter(todos,'bounces',EXAMPLE['INBOX/Lists'])

-- Add EXAMPLE lists, inbox and _pending and Fedora default bin for reprocessing in case a new list has been added
lists = todos + EXAMPLE['INBOX/Lists']:select_all() + EXAMPLE['INBOX/Lists/Fedora']:select_all()

-- Mailing lists

-- EXAMPLE
filter(lists,'outages-list',EXAMPLE['INBOX/Lists/EXAMPLE/general/outage'])
filter(lists,'announce-list',EXAMPLE['INBOX/Lists/EXAMPLE/general/announce'])

-- Fedora
filter(lists,'kickstart-list',EXAMPLE['INBOX/Lists/Fedora/kickstart'])
filter(lists,'ambassadors@lists.fedoraproject.org',EXAMPLE['INBOX/Lists/Fedora/Ambassador'])
filter(lists,'infrastructure@lists.fedoraproject.org',EXAMPLE['INBOX/Lists/Fedora/infra'])
filter(lists,'announce@lists.fedoraproject.org',EXAMPLE['INBOX/Lists/Fedora/announce'])
filter(lists,'lists.fedoraproject.org',EXAMPLE['INBOX/Lists/Fedora'])

-- OSP
filter(lists,'openstack@lists.openstack.org',EXAMPLE['INBOX/Lists/OpenStack'])
filter(lists,'openstack-es@lists.openstack.org',EXAMPLE['INBOX/Lists/OpenStack/es'])

-- Filter my messages not filtered back to INBOX
mios=pending:contain_from(myuser)
mios:move_messages(EXAMPLE['INBOX'])

-- move messages we're in BCC to INBOX for manual sorting
hidden = pending - mine(pending)
hidden:move_messages(EXAMPLE['INBOX'])

-- Start processing of messages older than:
maxage=60

-- Delete old messages from mailing lists
deleteold(EXAMPLE['INBOX/Lists/EXAMPLE/general/media'],maxage)
deleteold(EXAMPLE['INBOX/Lists/EXAMPLE/general/outage'],maxage)

-- delete old cases
maxage=180

-- for each in $(cat .imapfilter/config.lua|grep -i cases|tr " ,()" "\n"|grep cases|sort|uniq|grep -v ":" );do echo "deleteold($each,maxage)";done
deleteold(EXAMPLE['INBOX/EXAMPLE/Customers/Cust1/cases'],maxage)
deleteold(EXAMPLE['INBOX/EXAMPLE/Customers/Cust2/cases'],maxage)
deleteold(EXAMPLE['INBOX/EXAMPLE/Customers/Cust3/cases'],maxage)
deleteold(EXAMPLE['INBOX/EXAMPLE/Customers/Other/cases'],maxage)

deleteold(EXAMPLE['INBOX/EXAMPLE/Customers/_bugzilla'],maxage)

-- Empty trash every 7 days
maxage=7
deleteold(EXAMPLE['Trash'],maxage)

As this applies the filtering twice, offlineimap might already be uploading part of your changes, making the next syncs faster, and it may shuffle some of your emails around while it runs.

The point of adding the already-filtered set to be filtered again (CPG, cases, etc.) is that if a new customer is considered worth filtering into a folder of its own, the messages will be picked up and moved there automatically ;-)

Hope it helps, and happy filtering!
