Anitya Infrastructure SOP
Anitya is used by Fedora to track upstream project releases and map them to downstream distribution packages, including (but not limited to) Fedora.
Anitya staging instance: https://stg.release-monitoring.org
Anitya production instance: https://release-monitoring.org
Anitya project page: https://github.com/release-monitoring/anitya
Team: Fedora Infrastructure Team
Contact: pingou, jcline, zlopez
Purpose: Map upstream releases to Fedora packages.
The current deployment is made up of two parts: the anitya-backend01 host and the release-monitoring OpenShift instance.
The release-monitoring OpenShift instance runs:
The apache/mod_wsgi application for release-monitoring.org
A fedmsg-relay instance for anitya’s local fedmsg bus
A cronjob that retrieves all projects from the PostgreSQL database and checks each upstream project for new versions. It runs once an hour.
This host relies on:
A postgres db server running on anitya-backend01
Many external third-party services: the Anitya web app can scrape PyPI, rubygems.org, SourceForge, and many others on demand.
Many external third-party services: the cronjob makes all kinds of requests out to the Internet that can fail in various ways.
Things that rely on this host:
The Fedora Infrastructure bus subscribes to the anitya bus published here by the local fedmsg-relay daemon at tcp://release-monitoring.org:9940
the-new-hotness is a fedmsg-hub plugin running in Fedora Infrastructure on hotness01. It listens for anitya messages from here and performs actions on koji and bugzilla.
anitya-backend01 expects to publish fedmsg messages via anitya-frontend01’s fedmsg-relay daemon. Access should be restricted by firewall.
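To see what the relay is publishing, the fedmsg CLI can tail the bus. A minimal sketch, assuming the fedmsg package is installed and its configuration points at the tcp://release-monitoring.org:9940 endpoint above:

```shell
# Sketch: watch messages on the anitya bus (requires the fedmsg package
# and a fedmsg config whose endpoints include the relay above):
#
#   fedmsg-tail --really-pretty
```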
anitya-backend01 hosts the Anitya PostgreSQL database server.
The services and jobs on this host are:
A PostgreSQL database server to be used by release-monitoring.
A database backup job that runs daily. Database dumps are available at the normal database dump location.
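A daily backup job of this kind is typically a cron entry wrapping pg_dump. A hypothetical sketch, where the database name, schedule, and paths are all assumptions and not the actual Fedora configuration:

```shell
# Hypothetical cron entry for the daily database dump (all names and
# paths are assumptions; note that % must be escaped in crontab):
#
#   0 2 * * * postgres pg_dump anitya | gzip > /backups/anitya-$(date +\%F).sql.gz
```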
Things that rely on this host:
The webapps running on anitya-frontend01 rely on the postgres db server running on this node.
The cronjob running on release-monitoring relies on the postgres db server running on this node.
The release process is described in Anitya documentation.
The staging instance of Anitya runs in OpenShift on os-master01.stg.phx2.fedoraproject.org.
To deploy the staging instance of Anitya, push changes to the staging branch on the Anitya GitHub repository. A GitHub webhook then automatically deploys the new version of Anitya on staging.
The production instance of Anitya runs in OpenShift on os-master01.phx2.fedoraproject.org.
To deploy the production instance of Anitya, push changes to the production branch on the Anitya GitHub repository. A GitHub webhook then automatically deploys the new version of Anitya on production.
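The push-to-deploy flow for both instances can be sketched as follows. A throwaway local bare repository stands in for the real Anitya GitHub repository (an assumption for illustration, so the commands are safe to run); on the real repository, the final push is what triggers the webhook:

```shell
# Sketch of the push-to-deploy flow. The local bare repo below stands in
# for the Anitya GitHub repository; on the real repo, the final push
# fires the GitHub webhook that redeploys the instance.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"         # stand-in for GitHub
git clone -q "$tmp/origin.git" "$tmp/anitya"
cd "$tmp/anitya"
git checkout -q -b staging                   # the deployment branch
echo "some fix" > change.txt
git add change.txt
git -c user.name=you -c user.email=you@example.com commit -qm "fix to deploy"
git push -q origin staging                   # would trigger the webhook
```

The same flow applies to production, with the production branch in place of staging.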
All the following commands should be run from batcave01.
First, ensure there are no configuration changes required for the new update. If there are, update the Ansible anitya role(s) and optionally run the playbook:
$ sudo rbac-playbook openshift-apps/release-monitoring.yml
Configuration changes can be limited to staging only by using:
$ sudo rbac-playbook openshift-apps/release-monitoring.yml -l staging
This is recommended for testing new configuration changes.
To deploy a new version of Anitya, push changes to the staging branch on the Anitya GitHub repository. A GitHub webhook then automatically deploys the new version of Anitya on staging.
The Anitya web application offers some functionality to administer itself.
User admin status is tracked in the Anitya database. Admin users can grant or revoke admin privileges for other users in the users tab <https://release-monitoring.org/users>.
Admin users have additional functionality available in the web interface; in particular, admins can view flagged projects, remove projects, and remove package mappings.
This section contains various issues encountered during deployment or configuration changes and possible solutions.
Fedmsg messages aren’t sent
Issue: Fedmsg messages aren’t sent.
Solution: Set USER environment variable in pod.
Explanation: Fedmsg uses the USER environment variable as the username inside messages. Without USER set, it crashes and sends nothing.
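One way to apply this fix with the OpenShift client, sketched below; the deployment name and the value of the variable are assumptions, not the actual Fedora configuration:

```shell
# Sketch: set USER on the deployment so fedmsg has a username to embed
# in messages (object name and value are assumptions):
#
#   oc set env dc/release-monitoring USER=anitya
```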
Cronjob is crashing
Issue: Cronjob pod is crashing on start, even after configuration change that should fix the behavior.
Solution: Restart the cronjob. This could be done by OPS.
Explanation: Every execution of the cronjob after a crash tries to reuse the pod with the bad configuration instead of creating a new one with the new configuration.
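With a reasonably recent oc client, a fresh run with the new configuration can be forced instead of waiting on the reused pod. A sketch, where the pod, cronjob, and job names are assumptions:

```shell
# Sketch: remove the crashed pod and trigger a fresh run of the cronjob
# (pod and cronjob names are assumptions):
#
#   oc delete pod <crashed-pod>
#   oc create job --from=cronjob/anitya-check manual-run
```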
Database migration is taking too long
Issue: Database migration takes a few hours to complete.
Solution: Stop every pod and cronjob before migration.
Explanation: When creating new index or doing some other complex operation on database, the migration script needs exclusive access to the database.
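Scaling the application to zero before the migration is one way to give it that exclusive access. A sketch with the OpenShift client, where the object name is an assumption:

```shell
# Sketch: stop the app pods, run the migration, then scale back up
# (object name is an assumption):
#
#   oc scale dc/release-monitoring --replicas=0
#   # ...run the database migration...
#   oc scale dc/release-monitoring --replicas=1
```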