===============
 Configuration
===============

Please read
`productmd documentation <http://release-engineering.github.io/productmd/index.html>`_
for
`terminology <http://release-engineering.github.io/productmd/terminology.html>`_
and other release and compose related details.


Minimal Config Example
======================
::

    # RELEASE
    release_name = "Fedora"
    release_short = "Fedora"
    release_version = "23"

    # GENERAL SETTINGS
    comps_file = "comps-f23.xml"
    variants_file = "variants-f23.xml"

    # KOJI
    koji_profile = "koji"
    runroot = False

    # PKGSET
    sigkeys = [None]
    pkgset_source = "koji"
    pkgset_koji_tag = "f23"

    # GATHER
    gather_method = "deps"
    greedy_method = "build"
    check_deps = False

    # BUILDINSTALL
    buildinstall_method = "lorax"


Release
=======
The following **mandatory** options describe a release.


Options
-------

**release_name** [mandatory]
    (*str*) -- release name

**release_short** [mandatory]
    (*str*) -- release short name, without spaces and special characters

**release_version** [mandatory]
    (*str*) -- release version

**release_type** = "ga"
    (*str*) -- release type, for example ``ga``, ``updates`` or
    ``updates-testing``. See `list of all valid values
    <http://productmd.readthedocs.io/en/latest/common.html#productmd.common.RELEASE_TYPES>`_
    in productmd documentation.

**release_internal** = False
    (*bool*) -- whether the compose is meant for public consumption

**treeinfo_version**
    (*str*) Version to display in ``.treeinfo`` files. If not configured, the
    value from ``release_version`` will be used.


Example
-------
::

    release_name = "Fedora"
    release_short = "Fedora"
    release_version = "23"
    # release_type = "ga"


Base Product
============
Base product options are **optional**; we need them only when composing a
layered product built on another (base) product.


Options
-------

**base_product_name**
    (*str*) -- base product name

**base_product_short**
    (*str*) -- base product short name, without spaces and special characters

**base_product_version**
    (*str*) -- base product **major** version

**base_product_type** = "ga"
    (*str*) -- base product type, "ga", "updates" etc., for full list see
    documentation of *productmd*.


Example
-------
::

    release_name = "RPM Fusion"
    release_short = "rf"
    release_version = "23.0"

    base_product_name = "Fedora"
    base_product_short = "Fedora"
    base_product_version = "23"

General Settings
================

Options
-------

**comps_file** [mandatory]
    (:ref:`scm_dict <scm_support>`, *str* or None) -- reference to comps XML
    file with installation groups

**variants_file** [mandatory]
    (:ref:`scm_dict <scm_support>` or *str*) -- reference to variants XML file
    that defines release variants and architectures

**module_defaults_dir** [optional]
    (:ref:`scm_dict <scm_support>` or *str*) -- reference the module defaults
    directory containing modulemd-defaults YAML documents. Files relevant for
    modules included in the compose will be embedded in the generated repodata
    and available for DNF.

    ::

        module_defaults_dir = {
            "scm": "git",
            "repo": "https://pagure.io/releng/fedora-module-defaults.git",
            "dir": ".",
        }

**failable_deliverables** [optional]
    (*list*) -- list of deliverables that are allowed to fail on a given
    variant and architecture without aborting the whole compose. This only
    applies to the ``buildinstall`` and ``iso`` parts. All other artifacts can
    be configured in their respective parts of the configuration.

    Please note that ``*`` as a wildcard matches all architectures but ``src``.

**comps_filter_environments** [optional]
    (*bool*) -- When set to ``False``, the comps files for variants will not
    have their environments filtered to match the variant.

**tree_arches**
    ([*str*]) -- list of architectures which should be included; if undefined,
    all architectures from variants.xml will be included

**tree_variants**
    ([*str*]) -- list of variants which should be included; if undefined, all
    variants from variants.xml will be included

**repoclosure_strictness**
    (*list*) -- variant/arch mapping describing how repoclosure should run.
    Possible values are

     * ``off`` -- do not run repoclosure
     * ``lenient`` -- (default) run repoclosure and write results to logs;
       detected errors are only reported, they do not fail the compose
     * ``fatal`` -- abort compose when any issue is detected

    When multiple blocks in the mapping match a variant/arch combination, the
    last value will win.

**repoclosure_backend**
    (*str*) -- Select which tool should be used to run repoclosure over created
    repositories. By default ``yum`` is used, but you can switch to ``dnf``.
    Please note that when ``dnf`` is used, the build dependencies check is
    skipped. On Python 3, only ``dnf`` backend is available.

    See also: the ``gather_backend`` setting for Pungi's gather phase.

**cts_url**
    (*str*) -- URL to Compose Tracking Service. If defined, Pungi will add
    the compose to Compose Tracking Service and get the compose ID from it.
    For example ``https://cts.localhost.tld/``

**cts_keytab**
    (*str*) -- Path to Kerberos keytab which will be used for Compose
    Tracking Service Kerberos authentication. If not defined, the default
    Kerberos principal is used.

**cts_oidc_token_url**
    (*str*) -- URL to the OIDC token endpoint.
    For example ``https://oidc.example.com/openid-connect/token``.
    This option can be overridden by the environment variable ``CTS_OIDC_TOKEN_URL``.

**cts_oidc_client_id**
    (*str*) -- OIDC client ID.
    This option can be overridden by the environment variable ``CTS_OIDC_CLIENT_ID``.
    Note that environment variable ``CTS_OIDC_CLIENT_SECRET`` must be configured with
    corresponding client secret to authenticate to CTS via OIDC.
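
    A minimal sketch of CTS integration with OIDC authentication; the URLs and
    client ID are placeholders, and the client secret is expected in the
    ``CTS_OIDC_CLIENT_SECRET`` environment variable::

        cts_url = "https://cts.example.com/"
        cts_oidc_token_url = "https://oidc.example.com/openid-connect/token"
        cts_oidc_client_id = "pungi-compose"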

**compose_type**
    (*str*) -- Sets the default compose type. A type set via the command-line
    option overrides this.

**mbs_api_url**
    (*str*) -- URL to Module Build Service (MBS) API.
    For example ``https://mbs.example.com/module-build-service/2``.
    This is required by ``pkgset_scratch_modules``.

Example
-------
::

    comps_file = {
        "scm": "git",
        "repo": "https://git.fedorahosted.org/git/comps.git",
        "branch": None,
        "file": "comps-f23.xml.in",
    }

    variants_file = {
        "scm": "git",
        "repo": "https://pagure.io/pungi-fedora.git ",
        "branch": None,
        "file": "variants-fedora.xml",
    }

    failable_deliverables = [
        ('^.*$', {
            # Buildinstall can fail on any variant and any arch
            '*': ['buildinstall'],
            'src': ['buildinstall'],
            # Nothing on i386 blocks the compose
            'i386': ['buildinstall', 'iso', 'live'],
        })
    ]

    tree_arches = ["x86_64"]
    tree_variants = ["Server"]

    repoclosure_strictness = [
        # Make repoclosure failures fatal for compose on all variants …
        ('^.*$', {'*': 'fatal'}),
        # … except for Everything where it should not run at all.
        ('^Everything$', {'*': 'off'})
    ]


Image Naming
============

Both image name and volume id are generated based on the configuration. Since
the volume id is limited to 32 characters, additional settings are available
for it. To generate the volume id, Pungi takes a list of possible formats and
tries them sequentially until one fits within the length limit. If
substitutions are configured, each attempted volume id is modified by them.

For layered products, the candidate formats are first
``image_volid_layered_product_formats`` followed by ``image_volid_formats``.
Otherwise, only ``image_volid_formats`` are tried.

If no format matches the length limit, an error will be reported and the
compose aborted.

Options
-------

There are a couple of common format specifiers available for both options:
 * ``compose_id``
 * ``release_short``
 * ``version``
 * ``date``
 * ``respin``
 * ``type``
 * ``type_suffix``
 * ``label``
 * ``label_major_version``
 * ``variant``
 * ``arch``
 * ``disc_type``

**image_name_format** [optional]
    (*str|dict*) -- Python format string to serve as a template for image
    names. The value can also be a dict mapping variant UID regexes to
    format strings. The patterns should not overlap, otherwise it is undefined
    which one will be used.

    This format will be used for all phases generating images. Currently that
    means ``createiso``, ``live_images`` and ``buildinstall``.

    Available extra keys are:
     * ``disc_num``
     * ``suffix``

**image_volid_formats** [optional]
    (*list*) -- A list of format strings for generating volume id.

    The extra available keys are:
     * ``base_product_short``
     * ``base_product_version``

**image_volid_layered_product_formats** [optional]
    (*list*) -- A list of format strings for generating volume id for layered
    products. The keys available are the same as for ``image_volid_formats``.

**restricted_volid** = False
    (*bool*) -- New versions of lorax replace all non-alphanumerical characters
    with dashes (underscores are preserved). This option will mimic similar
    behaviour in Pungi.

**volume_id_substitutions** [optional]
    (*dict*) -- A mapping of string replacements to shorten the volume id.

**disc_types** [optional]
    (*dict*) -- A mapping for customizing ``disc_type`` used in image names.

    Available keys are:
     * ``boot`` -- for ``boot.iso`` images created in the *buildinstall* phase
     * ``live`` -- for images created by *live_images* phase
     * ``dvd`` -- for images created by *createiso* phase
     * ``ostree`` -- for ostree installer images

    Default values are the same as the keys.

Example
-------
::

    # Image name respecting Fedora's image naming policy
    image_name_format = "%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s%(suffix)s"
    # Use the same format for volume id
    image_volid_formats = [
        "%(release_short)s-%(variant)s-%(disc_type)s-%(arch)s-%(version)s"
    ]
    # No special handling for layered products, use same format as for regular images
    image_volid_layered_product_formats = []
    # Replace "Cloud" with "C" in volume id etc.
    volume_id_substitutions = {
        'Cloud': 'C',
        'Alpha': 'A',
        'Beta': 'B',
        'TC': 'T',
    }

    disc_types = {
        'boot': 'netinst',
        'live': 'Live',
        'dvd': 'DVD',
    }


Signing
=======

If you want to sign deliverables generated during a Pungi run, such as RPM
wrapped images, you must provide a few configuration options:

**signing_command** [optional]
    (*str*) -- Command that will be run with a koji build as a single
    argument. This command must not require any user interaction.
    If you need to pass a password for a signing key to the command,
    do this via command line option of the command and use string
    formatting syntax ``%(signing_key_password)s``.
    (See **signing_key_password_file**).

**signing_key_id** [optional]
    (*str*) -- ID of the key that will be used for the signing.
    This ID will be used when crafting koji paths to signed files
    (``kojipkgs.fedoraproject.org/packages/NAME/VER/REL/data/signed/KEYID/..``).

**signing_key_password_file** [optional]
    (*str*) -- Path to a file with password that will be formatted
    into **signing_command** string via ``%(signing_key_password)s``
    string format syntax (if used).
    Because the Pungi config is usually stored in git and is part of compose
    logs, we don't want the password to be included directly in the config.
    Note: If the string ``-`` is used instead of a filename, then you will be
    asked for the password interactively right after Pungi starts.

Example
-------
::

        signing_command = '~/git/releng/scripts/sigulsign_unsigned.py -vv --password=%(signing_key_password)s fedora-24'
        signing_key_id = '81b46521'
        signing_key_password_file = '~/password_for_fedora-24_key'


.. _git-urls:

Git URLs
========

In multiple places the config requires the URL of a Git repository to download
some file from. This URL is passed on to *Koji*. It is possible to specify
which commit to use with this syntax: ::

    git://git.example.com/git/repo-name.git?#<rev_spec>

The ``<rev_spec>`` pattern can be replaced with actual commit SHA, a tag name,
``HEAD`` to indicate that tip of default branch should be used or
``origin/<branch_name>`` to use tip of arbitrary branch.

If the URL specifies a branch or ``HEAD``, *Pungi* will replace it with the
actual commit SHA. This will later show up in *Koji* tasks and help with
tracing what particular inputs were used.
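
For example, a URL pinned to the tip of a hypothetical branch; Pungi will
replace ``origin/f23`` with the commit it currently points to::

    git://git.example.com/git/repo-name.git?#origin/f23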

.. note::

    The ``origin`` must be specified because of the way *Koji* works with the
    repository. It will clone the repository and then switch to the requested
    state with ``git reset --hard REF``. Since no local branches are created,
    we need to use the full specification including the name of the remote.



Createrepo Settings
===================


Options
-------

**createrepo_checksum**
    (*str*) -- specify checksum type for createrepo; expected values:
    ``sha512``, ``sha256``, ``sha1``. Defaults to ``sha256``.

**createrepo_c** = True
    (*bool*) -- use createrepo_c (True) or legacy createrepo (False)

**createrepo_deltas** = False
    (*list*) -- generate delta RPMs against an older compose. This needs to be
    used together with ``--old-composes`` command line argument. The value
    should be a mapping of variants and architectures that should enable
    creating delta RPMs. Source and debuginfo repos never have deltas.

**createrepo_use_xz** = False
    (*bool*) -- whether to pass ``--xz`` to the createrepo command. This will
    cause the SQLite databases to be compressed with xz.

**createrepo_num_threads**
    (*int*) -- how many concurrent ``createrepo`` processes to run. The default
    is to use one thread per CPU available on the machine.

**createrepo_num_workers**
    (*int*) -- how many concurrent ``createrepo`` workers to run. Value defaults to 3.

**createrepo_database**
    (*bool*) -- whether to create SQLite database as part of the repodata. This
    is only useful as an optimization for clients using Yum to consume the
    repo. Default value depends on gather backend. For DNF it's turned off, for
    Yum the default is ``True``.

**createrepo_extra_args**
    (*[str]*) -- a list of extra arguments passed on to ``createrepo`` or
    ``createrepo_c`` executable. This could be useful for enabling zchunk
    generation and pointing it to correct dictionaries.
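
    A sketch assuming a ``createrepo_c`` build with zchunk support (the
    dictionary directory is a placeholder)::

        createrepo_extra_args = ["--zck", "--zck-dict-dir=/path/to/dictionaries"]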

**createrepo_extra_modulemd**
    (*dict*) -- a mapping of variant UID to :ref:`an scm dict <scm_support>`.
    If specified, it should point to a directory with extra module metadata
    YAML files that will be added to the repository for this variant. The
    cloned files should be split into subdirectories for each architecture of
    the variant.

**createrepo_enable_cache** = True
    (*bool*) -- whether to use ``--cachedir`` option of ``createrepo``. It will
    cache and reuse checksum values to speed up the createrepo phase.
    The cache dir is located at ``/var/cache/pungi/createrepo_c/$release_short-$uid``,
    e.g. ``/var/cache/pungi/createrepo_c/Fedora-1000``.

**product_id** = None
    (:ref:`scm_dict <scm_support>`) -- If specified, it should point to a
    directory with certificates ``*<variant_uid>-<arch>-*.pem``. Pungi will
    copy each certificate file into the relevant Yum repositories as a
    ``productid`` file in the ``repodata`` directories. The purpose of these
    ``productid`` files is to expose the product data to `subscription-manager
    <https://github.com/candlepin/subscription-manager>`_.
    subscription-manager includes a "product-id" Yum plugin that can read these
    ``productid`` certificate files from each Yum repository.
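
    A hedged sketch pointing at a hypothetical certificate repository::

        product_id = {
            "scm": "git",
            "repo": "https://example.com/product-certificates.git",
            "dir": ".",
        }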

**product_id_allow_missing** = False
    (*bool*) -- When ``product_id`` is used and a certificate for some variant
    and architecture is missing, Pungi will exit with an error by default.
    When you set this option to ``True``, Pungi will ignore the missing
    certificate and simply log a warning message.

**product_id_allow_name_prefix** = True
    (*bool*) -- Allow arbitrary prefix for the certificate file name (see
    leading ``*`` in the pattern above). Setting this option to ``False`` will
    make the pattern more strict by requiring the file name to start directly
    with variant name.


Example
-------
::

    createrepo_checksum = "sha256"
    createrepo_deltas = [
        # All arches for Everything should have deltas.
        ('^Everything$', {'*': True}),
        # Also Server.x86_64 should have them (but not on other arches).
        ('^Server$', {'x86_64': True}),
    ]
    createrepo_extra_modulemd = {
        "Server": {
            "scm": "git",
            "repo": "https://example.com/extra-server-modulemd.git",
            "dir": ".",
            # The directory should have this layout. Each architecture for the
            # variant should be included (even if the directory is empty).
            # .
            # ├── aarch64
            # │   ├── some-file.yaml
            # │   └ ...
            # └── x86_64
        }
    }


Package Set Settings
====================


Options
-------

**sigkeys**
    ([*str* or None]) -- priority list of signing key IDs. These key IDs match
    the key IDs for the builds in Koji. Pungi will choose signed packages
    according to the order of the key IDs that you specify here. Use one
    single key in this list to ensure that all RPMs are signed by one key. If
    the list includes an empty string or *None*, Pungi will allow unsigned
    packages. If the list only includes *None*, Pungi will only use unsigned
    packages.

**pkgset_source** [mandatory]
    (*str*) -- "koji" (any koji instance) or "repos" (arbitrary yum repositories)

**pkgset_koji_tag**
    (*str|[str]*) -- tag(s) to read package set from. This option can be
    omitted for modular composes.

**pkgset_koji_builds**
    (*str|[str]*) -- extra build(s) to include in a package set defined as NVRs.

**pkgset_koji_scratch_tasks**
    (*str|[str]*) -- RPM scratch build task(s) to include in a package set,
    defined as task IDs. This option can be used only when ``compose_type``
    is set to ``test``. The RPM still needs to have higher NVR than any
    other RPM with the same name coming from other sources in order to
    appear in the resulting compose.

**pkgset_koji_module_tag**
    (*str|[str]*) -- tags to read modules from. This option works similarly to
    listing tags in variants XML. If tags are specified and the variants XML
    specifies some modules via NSVC (or part of it), only modules matching that
    list will be used (and taken from the tag). Inheritance is used
    automatically.

**pkgset_koji_module_builds**
    (*dict*) -- A mapping of variants to extra module builds to include in a
    package set: ``{variant: [N:S:V:C]}``.

**pkgset_koji_inherit** = True
    (*bool*) -- inherit builds from parent tags; we can turn it off only if we
    have all builds tagged in a single tag

**pkgset_koji_inherit_modules** = False
    (*bool*) -- the same as above, but this only applies to modular tags. This
    option applies to the content tags that contain the RPMs.

**pkgset_repos**
    (*dict*) -- A mapping of architectures to repositories with RPMs: ``{arch:
    [repo]}``. Only use when ``pkgset_source = "repos"``.

**pkgset_scratch_modules**
    (*dict*) -- A mapping of variants to scratch module builds: ``{variant:
    [N:S:V:C]}``. Requires ``mbs_api_url``.
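
    A hedged example; the MBS URL and module NSVC are placeholders::

        mbs_api_url = "https://mbs.example.com/module-build-service/2"
        pkgset_scratch_modules = {
            "Modular": ["mymodule:master:20210101:abcdef12"],
        }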

**pkgset_exclusive_arch_considers_noarch** = True
    (*bool*) -- If a package includes ``noarch`` in its ``ExclusiveArch`` tag,
    it will be included in all architectures since ``noarch`` is compatible
    with everything. Set this option to ``False`` to ignore ``noarch`` in
    ``ExclusiveArch`` and always consider only binary architectures.

**pkgset_inherit_exclusive_arch_to_noarch** = True
    (*bool*) -- When set to ``True``, the value of ``ExclusiveArch`` or
    ``ExcludeArch`` will be copied from source rpm to all its noarch packages.
    That will than limit which architectures the noarch packages can be
    included in.

    By setting this option to ``False`` this step is skipped, and noarch
    packages will by default land in all architectures. They can still be
    excluded by listing them in a relevant section of ``filter_packages``.

**pkgset_allow_reuse** = True
    (*bool*) -- When set to ``True``, *Pungi* will try to reuse pkgset data
    from the old composes specified by ``--old-composes``. When enabled, this
    option can speed up new composes because it does not need to calculate the
    pkgset data from Koji. However, if you block or unblock a package in Koji
    (for example) between composes, then Pungi may not respect those changes
    in your new compose.

**signed_packages_retries** = 0
    (*int*) -- In automated workflows, you might start a compose before Koji
    has completely written all signed packages to disk. In this case you may
    want Pungi to wait for the package to appear in Koji's storage. This
    option controls how many times Pungi will retry looking for the signed
    copy.

**signed_packages_wait** = 30
    (*int*) -- Interval in seconds for how long to wait between attempts to
    find signed packages. This option only makes sense when
    ``signed_packages_retries`` is set higher than 0.


Example
-------
::

    sigkeys = [None]
    pkgset_source = "koji"
    pkgset_koji_tag = "f23"
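
A hedged sketch of the alternative ``repos`` source, where packages are taken
from plain yum repositories instead of Koji (the path is a placeholder)::

    pkgset_source = "repos"
    pkgset_repos = {
        "x86_64": ["/path/to/repo/x86_64/os/"],
    }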


Buildinstall Settings
=====================
The script or process that creates bootable images with the Anaconda installer
is historically called
`buildinstall <https://git.fedorahosted.org/cgit/anaconda.git/tree/scripts/buildinstall?h=f15-branch>`_.

Options
-------

**buildinstall_method**
    (*str*) -- "lorax" (f16+, rhel7+)
**lorax_options**
    (*list*) -- special options passed on to *lorax*.

    Format: ``[(variant_uid_regex, {arch|*: {option: name}})]``.

    Recognized options are:
      * ``bugurl`` -- *str* (default ``None``)
      * ``nomacboot`` -- *bool* (default ``True``)
      * ``noupgrade`` -- *bool* (default ``True``)
      * ``add_template`` -- *[str]* (default empty)
      * ``add_arch_template`` -- *[str]* (default empty)
      * ``add_template_var`` -- *[str]* (default empty)
      * ``add_arch_template_var`` -- *[str]* (default empty)
      * ``rootfs_size`` -- [*int*] (default empty)
      * ``version`` -- [*str*] (default from ``treeinfo_version`` or
        ``release_version``) -- used as ``--version`` and ``--release``
        argument on the lorax command line
      * ``dracut_args`` -- [*[str]*] (default empty) override arguments for
        dracut. Please note that if this option is used, lorax will not use any
        other arguments, so you have to provide a full list and can not just
        add something.
      * ``skip_branding`` -- *bool* (default ``False``)
      * ``squashfs_only`` -- *bool* (default ``False``) pass the ``--squashfs_only`` option to Lorax.
      * ``configuration_file`` -- (:ref:`scm_dict <scm_support>`) (default empty) pass the
        specified configuration file to Lorax using the -c option.
**lorax_extra_sources**
    (*list*) -- a variant/arch mapping with urls for extra source repositories
    added to Lorax command line. Either one repo or a list can be specified.
**lorax_use_koji_plugin** = False
    (*bool*) -- When set to ``True``, the Koji pungi_buildinstall task will be
    used to execute Lorax instead of runroot. Use only if the Koji instance
    has the pungi_buildinstall plugin installed.
**buildinstall_kickstart**
    (:ref:`scm_dict <scm_support>`) -- If specified, this kickstart file will
    be copied into each file and pointed to in boot configuration.
**buildinstall_topdir**
    (*str*) -- Full path to top directory where the runroot buildinstall
    Koji tasks output should be stored. This is useful in situation when
    the Pungi compose is not generated on the same storage as the Koji task
    is running on. In this case, Pungi can provide input repository for runroot
    task using HTTP and set the output directory for this task to
    ``buildinstall_topdir``. Once the runroot task finishes, Pungi will copy
    the results of runroot tasks to the compose working directory.
**buildinstall_skip**
    (*list*) -- mapping that defines which variants and arches to skip during
    buildinstall; format: ``[(variant_uid_regex, {arch|*: True})]``. This is
    only supported for lorax.
**buildinstall_allow_reuse** = False
    (*bool*) -- When set to ``True``, *Pungi* will try to reuse buildinstall
    results from old compose specified by ``--old-composes``.
**buildinstall_packages**
    (*list*) -- Additional packages to be installed in the runroot environment
    where lorax will run to create the installer. Format: ``[(variant_uid_regex,
    {arch|*: [package_globs]})]``.


Example
-------
::

    buildinstall_method = "lorax"

    # Enables macboot on x86_64 for all variants and builds upgrade images
    # everywhere.
    lorax_options = [
        ("^.*$", {
            "x86_64": {
                "nomacboot": False
            },
            "*": {
                "noupgrade": False
            }
        })
    ]

    # Don't run buildinstall phase for Modular variant
    buildinstall_skip = [
        ('^Modular', {
            '*': True
        })
    ]

    # Add another repository for lorax to install packages from
    lorax_extra_sources = [
        ('^Simple$', {
            '*': 'https://example.com/repo/$basearch/',
        })
    ]

    # Additional packages to be installed in the Koji runroot environment where
    # lorax will run.
    buildinstall_packages = [
        ('^Simple$', {
            '*': ['dummy-package'],
        })
    ]

.. note::

    It is advised to run buildinstall (lorax) in koji,
    i.e. with **runroot enabled** for clean build environments, better logging, etc.


.. warning::

    Lorax installs RPMs into a chroot. This involves running %post scriptlets
    and they frequently run executables in the chroot.
    If we're composing for multiple architectures, we **must** use runroot for this reason.


Gather Settings
===============

Options
-------

**gather_method** [mandatory]
    (*str*|*dict*) -- Options are ``deps``, ``nodeps`` and ``hybrid``.
    Specifies whether and how package dependencies should be pulled in.
    The configuration can be one value for all variants, or it can be set
    per variant as either the simple string ``hybrid`` or a dictionary mapping
    source type to a value of ``deps`` or ``nodeps``. Make sure only one regex
    matches each variant, as there is no guarantee which value will be used if
    there are multiple matching ones. All used sources must have a configured
    method unless hybrid solving is used.

**gather_fulltree** = False
    (*bool*) -- When set to ``True`` all RPMs built from an SRPM will always be
    included. Only use when ``gather_method = "deps"``.

**gather_selfhosting** = False
    (*bool*) -- When set to ``True``, *Pungi* will build a self-hosting tree by
    following build dependencies. Only use when ``gather_method = "deps"``.

**gather_allow_reuse** = False
    (*bool*) -- When set to ``True``, *Pungi* will try to reuse gather results
    from old compose specified by ``--old-composes``.

**greedy_method** = none
    (*str*) -- This option controls how package requirements are satisfied in
    case a particular ``Requires`` has multiple candidates.

    * ``none`` -- the best package is selected to satisfy the dependency and
      only that one is pulled into the compose
    * ``all`` -- packages that provide the symbol are pulled in
    * ``build`` -- the best package is selected, and then all packages from the
      same build that provide the symbol are pulled in

    .. note::
        As an example let's work with this situation: a package in the compose
        has ``Requires: foo``. There are three packages with ``Provides: foo``:
        ``pkg-a``, ``pkg-b-provider-1`` and ``pkg-b-provider-2``. The
        ``pkg-b-*`` packages are built from the same source package. The best
        match logic selects ``pkg-b-provider-1`` as the best matching package.

        * With ``greedy_method = "none"`` only ``pkg-b-provider-1`` will be
          pulled in.
        * With ``greedy_method = "all"`` all three packages will be
          pulled in.
        * With ``greedy_method = "build"`` ``pkg-b-provider-1`` and
          ``pkg-b-provider-2`` will be pulled in.

**gather_backend**
    (*str*) -- This changes the entire codebase doing dependency solving, so it
    can change the result in unpredictable ways.

    On Python 2, the choice is between ``yum`` or ``dnf`` and defaults to
    ``yum``. On Python 3 ``dnf`` is the only option and default.

    In particular, the multilib work is performed differently, using the
    ``python-multilib`` library. Please refer to the ``multilib`` option to see the
    differences.

    See also: the ``repoclosure_backend`` setting for Pungi's repoclosure
    phase.

**multilib**
    (*list*) -- mapping of variant regexes and arches to list of multilib
    methods

    Available methods are:
     * ``none`` -- no package matches this method
     * ``all`` -- all packages match this method
     * ``runtime`` -- packages that install some shared object file
       (``*.so.*``) will match.
     * ``devel`` -- packages whose name ends with ``-devel`` or ``-static``
       suffix will be matched. When ``dnf`` is used, this method automatically
       enables ``runtime`` method as well. With ``yum`` backend this method
       also uses a hardcoded blacklist and whitelist.
     * ``kernel`` -- packages providing ``kernel`` or ``kernel-devel`` match
       this method (only in ``yum`` backend)
     * ``yaboot`` -- only ``yaboot`` package on ``ppc`` arch matches this (only
       in ``yum`` backend)

.. _additional_packages:

**additional_packages**
    (*list*) -- additional packages to be included in a variant and
    architecture; format: ``[(variant_uid_regex, {arch|*: [package_globs]})]``

    In contrast to the ``comps_file`` setting, the ``additional_packages``
    setting merely adds the list of packages to the compose. When a package
    is in a comps group, it is visible to users via ``dnf groupinstall`` and
    Anaconda's Groups selection, but ``additional_packages`` does not affect
    DNF groups.

    The packages specified here are matched against RPM names, not any other
    provides in the package nor the name of source package. Shell globbing is
    used, so wildcards are possible. The package can be specified as name only
    or ``name.arch``.

    With ``dnf`` gathering backend, you can specify a debuginfo package to be
    included. This is meant to include a package if autodetection does not get
    it. If you add a debuginfo package that does not have anything else from
    the same build included in the compose, the sources will not be pulled in.

    If you list a package in ``additional_packages`` but Pungi cannot find
    it (for example, it's not available in the Koji tag), Pungi will log a
    warning in the "work" or "logs" directories and continue without aborting.

    *Example*: This configuration will add all packages in a Koji tag to an
    "Everything" variant::

        additional_packages = [
            ('^Everything$', {
                '*': [
                    '*',
                ],
            })
        ]

**filter_packages**
    (*list*) -- packages to be excluded from a variant and architecture;
    format: ``[(variant_uid_regex, {arch|*: [package_globs]})]``

    See :ref:`additional_packages <additional_packages>` for details about
    package specification.

**filter_modules**
    (*list*) -- modules to be excluded from a variant and architecture;
    format: ``[(variant_uid_regex, {arch|*: [name:stream]})]``

    Both name and stream can use shell-style globs. If stream is omitted, all
    streams are removed.

    This option only applies to modules taken from Koji tags, not modules
    explicitly listed in variants XML without any tags.
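
    For example, to drop all streams of a hypothetical module from every
    variant::

        filter_modules = [
            ("^.*$", {"*": ["mymodule:*"]}),
        ]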

**filter_system_release_packages**
    (*bool*) -- for each variant, figure out the best system release package
    and filter out all others. This will not work if a variant needs more than
    one system release package. In such case, set this option to ``False``.

**gather_prepopulate** = None
    (:ref:`scm_dict <scm_support>`) -- If specified, you can use this to add
    additional packages. The format of the file pointed to by this option is a
    JSON mapping ``{variant_uid: {arch: {build: [package]}}}``. Packages added
    through this option can not be removed by ``filter_packages``.

**multilib_blacklist**
    (*dict*) -- multilib blacklist; format: ``{arch|*: [package_globs]}``.

    See :ref:`additional_packages <additional_packages>` for details about
    package specification.

**multilib_whitelist**
    (*dict*) -- multilib whitelist; format: ``{arch|*: [package_names]}``. The
    whitelist must contain exact package names; there are no wildcards or
    pattern matching.

**gather_lookaside_repos** = []
    (*list*) -- lookaside repositories used for package gathering; format:
    ``[(variant_uid_regex, {arch|*: [repo_urls]})]``

    The repo_urls are passed to the depsolver, which can use packages in the
    repos for satisfying dependencies, but the packages themselves are not
    pulled into the compose. The repo_urls can contain $basearch variable,
    which will be substituted with proper value by the depsolver.

    The repo_urls are used by repoclosure too, but it currently cannot parse
    ``$basearch``, which will cause the repoclosure phase to crash. The
    *repoclosure_strictness* option can be used to stop running repoclosure.

    Please note that ``*`` as a wildcard matches all architectures but ``src``.

**hashed_directories** = False
    (*bool*) -- put packages into "hashed" directories, for example
    ``Packages/k/kernel-4.0.4-301.fc22.x86_64.rpm``

**check_deps** = True
    (*bool*) -- Set to ``False`` if you don't want the compose to abort when
    some package has broken dependencies.

**require_all_comps_packages** = False
    (*bool*) -- Set to ``True`` to abort compose when package mentioned in
    comps file can not be found in the package set. When disabled (the
    default), such cases are still reported as warnings in the log.

    With ``dnf`` gather backend, this option will abort the compose on any
    missing package no matter if it's listed in comps, ``additional_packages``
    or prepopulate file.

**gather_source_mapping**
    (*str*) -- JSON mapping with initial packages for the compose. The value
    should be a path to a JSON file with the following mapping: ``{variant: {arch:
    {rpm_name: [rpm_arch|None]}}}``. Relative paths are interpreted relative to
    the location of the main config file.

**gather_profiler** = False
    (*bool*) -- When set to ``True`` the gather tool will produce additional
    performance profiling information at the end of its logs.  Only takes
    effect when ``gather_backend = "dnf"``.

**variant_as_lookaside**
    (*list*) -- a variant/variant mapping specifying that one or more variants
    in the compose use other variant(s) in the compose as a lookaside. Only top
    level variants are supported (not addons/layered products). Format:
    ``[(variant_uid, variant_uid)]``


Example
-------
::

    gather_method = "deps"
    greedy_method = "build"
    check_deps = False
    hashed_directories = True

    gather_method = {
        "^Everything$": {
            "comps": "deps"     # traditional content defined by comps groups
        },
        "^Modular$": {
            "module": "nodeps"  # Modules do not need dependencies
        },
        "^Mixed$": {            # Mixed content in one variant
            "comps": "deps",
            "module": "nodeps"
        },
        "^OtherMixed$": "hybrid",   # Using hybrid depsolver
    }

    additional_packages = [
        # bz#123456
        ('^(Workstation|Server)$', {
            '*': [
                'grub2',
                'kernel',
            ],
        }),
    ]

    filter_packages = [
        # bz#111222
        ('^.*$', {
            '*': [
                'kernel-doc',
            ],
        }),
    ]

    multilib = [
        ('^Server$', {
            'x86_64': ['devel', 'runtime']
        })
    ]

    multilib_blacklist = {
        "*": [
            "gcc",
        ],
    }

    multilib_whitelist = {
        "*": [
            "alsa-plugins-*",
        ],
    }

    # gather_lookaside_repos = [
    #     ('^.*$', {
    #         '*': [
    #             "https://dl.fedoraproject.org/pub/fedora/linux/releases/22/Everything/$basearch/os/",
    #         ],
    #         'x86_64': [
    #             "https://dl.fedoraproject.org/pub/fedora/linux/releases/22/Everything/source/SRPMS/",
    #         ]
    #     }),
    # ]


.. note::

   It is good practice to attach bug/ticket numbers to ``additional_packages``,
   ``filter_packages``, ``multilib_blacklist`` and ``multilib_whitelist``
   entries to track decisions.


Koji Settings
=============


Options
-------

**koji_profile**
    (*str*) -- koji profile name. This tells Pungi how to communicate with
    your chosen Koji instance. See `Koji's documentation about profiles
    <https://docs.pagure.org/koji/profiles/>`_ for more information about how
    to set up your Koji client profile. In the examples, the profile name is
    "koji", which points to Fedora's koji.fedoraproject.org.

**global_runroot_method**
    (*str*) -- global runroot method to use. If ``runroot_method`` is set
    per Pungi phase using a dictionary, this option defines the default
    runroot method for phases not mentioned in the ``runroot_method``
    dictionary.

**runroot_method**
    (*str*|*dict*) -- Runroot method to use. It further specifies the runroot
    method in case ``runroot`` is set to ``True``.

    Available methods are:
     * ``local`` -- runroot tasks are run locally
     * ``koji`` -- runroot tasks are run in Koji
     * ``openssh`` -- runroot tasks are run on remote machine connected using OpenSSH.
       The ``runroot_ssh_hostnames`` for each architecture must be set and the
       user under which Pungi runs must be configured to login as ``runroot_ssh_username``
       using the SSH key.

    The runroot method can also be set per Pungi phase using the dictionary
    with phase name as key and runroot method as value. The default runroot
    method is in this case defined by the ``global_runroot_method`` option.

Example
-------
::

    global_runroot_method = "koji"
    runroot_method = {
        "createiso": "local"
    }

**runroot_channel**
    (*str*) -- name of koji channel

**runroot_tag**
    (*str*) -- name of koji **build** tag used for runroot

**runroot_weights**
    (*dict*) -- customize task weights for various runroot tasks. The values in
    the mapping should be integers, the keys can be selected from the following
    list. By default no weight is assigned and Koji picks the default one
    according to policy.

     * ``buildinstall``
     * ``createiso``
     * ``ostree``
     * ``ostree_installer``
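
    For example, to give ISO creation tasks a higher weight (the value is
    illustrative)::

        runroot_weights = {
            "createiso": 2,
        }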

Example
-------
::

    koji_profile = "koji"
    runroot_channel = "runroot"
    runroot_tag = "f23-build"

Runroot "openssh" method settings
=================================


Options
-------

**runroot_ssh_username**
    (*str*) -- For ``openssh`` runroot method, configures the username used to
    log in to the remote machine to run the runroot task. Defaults to "root".

**runroot_ssh_hostnames**
    (*dict*) -- For ``openssh`` runroot method, defines the hostname for each
    architecture on which the runroot task should be running. Format:
    ``{"x86_64": "runroot-x86-64.localhost.tld", ...}``

**runroot_ssh_init_template**
    (*str*) [optional] -- For ``openssh`` runroot method, defines the command
    that initializes the runroot task on the remote machine. This command is
    executed as the first command of each runroot task.

    The command can print a string which is then available as ``{runroot_key}``
    for other SSH commands. This string might be used to keep the context
    across different SSH commands executed for a single runroot task.

    The goal of this command is to set up the environment for the real runroot
    commands, for example preparing a unique mock environment or mounting the
    desired filesystems.

    The command string can contain the following variables, which are replaced
    with real values before the init command is executed:

    * ``{runroot_tag}`` - Tag to initialize the runroot environment from.

    When not set, no init command is executed.

**runroot_ssh_install_packages_template**
    (*str*) [optional] -- For ``openssh`` runroot method, defines the template
    for the command that installs the packages required to run the runroot task.

    The template string can contain the following variables, which are replaced
    with real values before the install command is executed:

    * ``{runroot_key}`` - Replaced with the string returned by
      ``runroot_ssh_init_template`` if used. This can be used to keep track
      of the context of SSH commands belonging to a single runroot task.
    * ``{packages}`` - Whitespace-separated list of packages to install.

    Example (The ``{runroot_key}`` is expected to be set to mock config file
    using the ``runroot_ssh_init_template`` command.):
    ``"mock -r {runroot_key} --install {packages}"``

    When not set, no command to install packages on the remote machine is executed.

**runroot_ssh_run_template**
    (*str*) [optional] -- For ``openssh`` runroot method, defines the template
    for the main runroot command.

    The template string can contain the following variables, which are replaced
    with real values before the command is executed:

    * ``{runroot_key}`` - Replaced with the string returned by
      ``runroot_ssh_init_template`` if used. This can be used to keep track
      of the context of SSH commands belonging to a single runroot task.
    * ``{command}`` - Command to run.

    Example (The ``{runroot_key}`` is expected to be set to mock config file
    using the ``runroot_ssh_init_template`` command.):
    ``"mock -r {runroot_key} chroot -- {command}"``

    When not set, the runroot command is run directly.
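

Example
-------
A hedged sketch of an ``openssh`` runroot setup. The hostname is a
placeholder, and the mock templates reuse the examples above, assuming an
init template that prints the mock config name used as ``{runroot_key}``::

    runroot_method = "openssh"
    runroot_ssh_username = "root"
    runroot_ssh_hostnames = {
        "x86_64": "runroot-x86-64.example.com",
    }
    # runroot_ssh_init_template is assumed to print a mock config name, which
    # then becomes {runroot_key} in the templates below.
    runroot_ssh_install_packages_template = "mock -r {runroot_key} --install {packages}"
    runroot_ssh_run_template = "mock -r {runroot_key} chroot -- {command}"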


Extra Files Settings
====================


Options
-------

**extra_files**
    (*list*) -- references to external files to be placed in os/ directory and
    media; format: ``[(variant_uid_regex, {arch|*: [scm_dict]})]``. See
    :ref:`scm_support` for details. If the dict specifies a ``target`` key, an
    additional subdirectory will be used.


Example
-------
::

    extra_files = [
        ('^.*$', {
            '*': [
                # GPG keys
                {
                    "scm": "rpm",
                    "repo": "fedora-repos",
                    "branch": None,
                    "file": [
                        "/etc/pki/rpm-gpg/RPM-GPG-KEY-22-fedora",
                    ],
                    "target": "",
                },
                # GPL
                {
                    "scm": "git",
                    "repo": "https://pagure.io/pungi-fedora",
                    "branch": None,
                    "file": [
                        "GPL",
                    ],
                    "target": "",
                },
            ],
        }),
    ]


Extra Files Metadata
--------------------
If extra files are specified, a metadata file, ``extra_files.json``, is placed
in the ``os/`` directory and on media. The generated checksums are determined
by the ``media_checksums`` option. This metadata file is in the format:

::

    {
      "header": {"version": "1.0},
      "data": [
        {
          "file": "GPL",
          "checksums": {
            "sha256": "8177f97513213526df2cf6184d8ff986c675afb514d4e68a404010521b880643"
          },
          "size": 18092
        },
        {
          "file": "release-notes/notes.html",
          "checksums": {
            "sha256": "82b1ba8db522aadf101dca6404235fba179e559b95ea24ff39ee1e5d9a53bdcb"
          },
          "size": 1120
        }
      ]
    }


CreateISO Settings
==================

Options
-------

**createiso_skip** = False
    (*list*) -- mapping that defines which variants and arches to skip during
    createiso; format: ``[(variant_uid_regex, {arch|*: True})]``

**createiso_max_size**
    (*list*) -- mapping that defines maximum expected size for each variant and
    arch. If the ISO is larger than the limit, a warning will be issued.

    Format: ``[(variant_uid_regex, {arch|*: number})]``
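
    For example, to warn when any Workstation image exceeds a hypothetical
    limit (assuming the value is given in bytes, as with ``iso_size``)::

        createiso_max_size = [
            ("^Workstation$", {"*": 4700000000}),
        ]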

**createiso_max_size_is_strict**
    (*list*) -- Set the value to ``True`` to turn the warning from
    ``createiso_max_size`` into a hard error that will abort the compose.
    If there are multiple matches in the mapping, the check will be strict if
    at least one match says so.

    Format: ``[(variant_uid_regex, {arch|*: bool})]``

**create_jigdo** = False
    (*bool*) -- controls the creation of jigdo from ISO

**create_optional_isos** = False
    (*bool*) -- when set to ``True``, ISOs will be created even for
    ``optional`` variants. By default only variants with type ``variant`` or
    ``layered-product`` will get ISOs.

**createiso_break_hardlinks** = False
    (*bool*) -- when set to ``True``, all files that should go on the ISO and
    have a hardlink will be first copied into a staging directory. This should
    work around a bug in ``genisoimage`` including incorrect link count in the
    image, but it is at the cost of having to copy a potentially significant
    amount of data.

    The staging directory is deleted when the ISO is successfully created. In
    that case the same task to create the ISO will not be re-runnable.

**createiso_use_xorrisofs** = False
    (*bool*) -- when set to True, use ``xorrisofs`` for creating ISOs instead
    of ``genisoimage``.

**iso_size** = 4700000000
    (*int|str*) -- size of ISO image. The value should either be an integer
    meaning size in bytes, or it can be a string with ``k``, ``M``, ``G``
    suffix (using multiples of 1024).

**iso_level**
    (*int|list*) [optional] -- Set the ISO9660 conformance level. This is
    either a global single value (a number from 1 to 4), or a variant/arch
    mapping.

**split_iso_reserve** = 10MiB
    (*int|str*) -- how much free space should be left on each disk. The format
    is the same as for ``iso_size`` option.

**iso_hfs_ppc64le_compatible** = True
    (*bool*) -- when set to False, the Apple/HFS compatibility is turned off
    for ppc64le ISOs. This option only makes sense for bootable products, and
    affects images produced in *createiso* and *extra_isos* phases.

.. note::

    The source architecture needs to be listed explicitly; the ``*`` wildcard
    applies only to binary arches. Jigdo causes a significant increase in ISO
    creation time.


Example
-------
::

    createiso_skip = [
        ('^Workstation$', {
            '*': True,
            'src': True
        }),
    ]


.. _auto-version:

Automatic generation of version and release
===========================================

Version and release values for certain artifacts can be generated automatically
based on release version, compose label, date, type and respin. This can be
used to shorten the config and keep it the same for multiple uses.

+----------------------------+-------------------+--------------+--------------+--------+------------------+
| Compose ID                 | Label             | Version      | Date         | Respin | Release          |
+============================+===================+==============+==============+========+==================+
| ``F-Rawhide-20170406.n.0`` | ``-``             | ``Rawhide``  | ``20170406`` | ``0``  | ``20170406.n.0`` |
+----------------------------+-------------------+--------------+--------------+--------+------------------+
| ``F-26-20170329.1``        | ``Alpha-1.6``     | ``26_Alpha`` | ``20170329`` | ``1``  | ``1.6``          |
+----------------------------+-------------------+--------------+--------------+--------+------------------+
| ``F-Atomic-25-20170407.0`` | ``RC-20170407.0`` | ``25``       | ``20170407`` | ``0``  | ``20170407.0``   |
+----------------------------+-------------------+--------------+--------------+--------+------------------+
| ``F-Atomic-25-20170407.0`` | ``-``             | ``25``       | ``20170407`` | ``0``  | ``20170407.0``   |
+----------------------------+-------------------+--------------+--------------+--------+------------------+

All non-``RC`` milestones from the label get appended to the version. For the
release, either the label is used, or the date, type and respin.


Common options for Live Images, Live Media and Image Build
==========================================================

All images can have ``ksurl``, ``version``, ``release`` and ``target``
specified. Since this can create a lot of duplication, there are global options
that can be used instead.

For each of the phases, if the option is not specified for a particular
deliverable, an option named ``<PHASE_NAME>_<OPTION>`` is checked. If that is
not specified either, the last fallback is ``global_<OPTION>``. If even that is
unset, the value is considered to not be specified.

The kickstart URL is configured by these options.

 * ``global_ksurl`` -- global fallback setting
 * ``live_media_ksurl``
 * ``image_build_ksurl``
 * ``live_images_ksurl``

Target is specified by these settings.

 * ``global_target`` -- global fallback setting
 * ``live_media_target``
 * ``image_build_target``
 * ``live_images_target``
 * ``osbuild_target``

Version is specified by these options. If no version is set, a default value
will be provided according to :ref:`automatic versioning <auto-version>`.

 * ``global_version`` -- global fallback setting
 * ``live_media_version``
 * ``image_build_version``
 * ``live_images_version``
 * ``osbuild_version``

Release is specified by these options. If set to the magic value
``!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN``, a value will be generated according
to :ref:`automatic versioning <auto-version>`.

 * ``global_release`` -- global fallback setting
 * ``live_media_release``
 * ``image_build_release``
 * ``live_images_release``
 * ``osbuild_release``

Each configuration block can also optionally specify a ``failable`` key. For
live images it should have a boolean value. For live media and image build it
should be a list of strings containing architectures that are optional. If any
deliverable fails on an optional architecture, it will not abort the whole
compose. If the list contains only ``"*"``, all arches will be substituted.
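
For example, a config might set the shared fallbacks once and override only a
single phase; all values here are illustrative::

    global_target = "f23"
    global_version = "23"
    global_release = "!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN"
    # Live media builds against a different Koji target.
    live_media_target = "f23-live-media"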


Live Images Settings
====================

**live_images**
    (*list*) -- Configuration for the particular image. The elements of the
    list should be tuples ``(variant_uid_regex, {arch|*: config})``. The config
    should be a dict with these keys:

      * ``kickstart`` (*str*)
      * ``ksurl`` (*str*) [optional] -- where to get the kickstart from
      * ``name`` (*str*)
      * ``version`` (*str*)
      * ``target`` (*str*)
      * ``repo`` (*str|[str]*) -- repos specified by URL or variant UID
      * ``specfile`` (*str*) -- for images wrapped in RPM
      * ``scratch`` (*bool*) -- only RPM-wrapped images can use scratch builds,
        but by default this is turned off
      * ``type`` (*str*) -- what kind of task to start in Koji. Defaults to
        ``live`` meaning ``koji spin-livecd`` will be used. Alternative option
        is ``appliance`` corresponding to ``koji spin-appliance``.
      * ``sign`` (*bool*) -- only RPM-wrapped images can be signed

**live_images_no_rename**
    (*bool*) -- When set to ``True``, filenames generated by Koji will be used.
    When ``False``, filenames will be generated based on ``image_name_format``
    configuration option.
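
A hedged sketch of a single live image configuration; the kickstart, name,
target and repo values are placeholders::

    live_images = [
        ('^Workstation$', {
            'x86_64': {
                'kickstart': 'fedora-live-workstation.ks',
                'name': 'Fedora-Workstation-Live',
                'version': '23',
                'target': 'f23',
                'repo': ['Everything'],
                'type': 'live',
            }
        })
    ]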


Live Media Settings
===================

**live_media**
    (*dict*) -- configuration for ``koji spin-livemedia``; format:
    ``{variant_uid_regex: [{opt:value}]}``

    Required options:

      * ``name`` (*str*)
      * ``version`` (*str*)
      * ``arches`` (*[str]*) -- what architectures to build the media for; by default uses
        all arches for the variant.
      * ``kickstart`` (*str*) -- name of the kickstart file

    Available options:

      * ``ksurl`` (*str*)
      * ``ksversion`` (*str*)
      * ``scratch`` (*bool*)
      * ``target`` (*str*)
      * ``release`` (*str*) -- a string with the release, or
        ``!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN`` to automatically generate a
        suitable value. See :ref:`automatic versioning <auto-version>` for
        details.
      * ``skip_tag`` (*bool*)
      * ``repo`` (*str|[str]*) -- repos specified by URL or variant UID
      * ``title`` (*str*)
      * ``install_tree_from`` (*str*) -- variant to take install tree from
      * ``nomacboot`` (*bool*)
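
A hedged sketch of a live media configuration; all values are placeholders::

    live_media = {
        '^Workstation$': [
            {
                'name': 'Fedora-Workstation-Live',
                'version': '23',
                'kickstart': 'fedora-live-workstation.ks',
                'arches': ['x86_64'],
                'target': 'f23',
                'release': '!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN',
                'repo': ['Everything'],
            }
        ]
    }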


Image Build Settings
====================

**image_build**
    (*dict*) -- config for ``koji image-build``; format:
    ``{variant_uid_regex: [{opt: value}]}``

    By default, images will be built for each binary arch valid for the
    variant. The config can specify a list of arches to narrow this down.

.. note::
    The config can contain anything that is accepted by
    ``koji image-build --config configfile.ini``.

    Repo can be specified either as a string or a list of strings. It will be
    automatically transformed into format suitable for ``koji``. A repo for the
    currently built variant will be added as well.

    If you explicitly set ``release`` to
    ``!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN``, it will be replaced with a value
    generated as described in :ref:`automatic versioning <auto-version>`.

    If you explicitly set ``release`` to
    ``!RELEASE_FROM_DATE_RESPIN``, it will be replaced with a value
    generated as described in :ref:`automatic versioning <auto-version>`.

    If you explicitly set ``version`` to
    ``!VERSION_FROM_VERSION``, it will be replaced with a value
    generated as described in :ref:`automatic versioning <auto-version>`.

    Please don't set ``install_tree``. This gets automatically set by *pungi*
    based on current variant. You can use ``install_tree_from`` key to use
    install tree from another variant.

    Both the install tree and repos can use one of following formats:

     * URL to the location
     * name of variant in the current compose
     * absolute path on local filesystem (which will be translated using
       configured mappings or used unchanged, in which case you have to ensure
       the koji builders can access it)

    You can set either a single format, or a list of formats. For available
    values see help output for ``koji image-build`` command.

    If ``ksurl`` ends with ``#HEAD``, Pungi will figure out the SHA1 hash of
    current HEAD and use that instead.

    Setting ``scratch`` to ``True`` will run the koji tasks as scratch builds.


Example
-------
::

    image_build = {
        '^Server$': [
            {
                'image-build': {
                    'format': ['docker', 'qcow2'],
                    'name': 'fedora-qcow-and-docker-base',
                    'target': 'koji-target-name',
                    'ksversion': 'F23',     # value from pykickstart
                    'version': '23',
                    # correct SHA1 hash will be put into the URL below automatically
                    'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
                    'kickstart': "fedora-docker-base.ks",
                    'repo': ["http://someextrarepos.org/repo", "ftp://rekcod.oi/repo"],
                    'distro': 'Fedora-20',
                    'disk_size': 3,

                    # this is set automatically by pungi to os_dir for given variant
                    # 'install_tree': 'http://somepath',
                },
                'factory-parameters': {
                    'docker_cmd':  "[ '/bin/bash' ]",
                    'docker_env': "[ 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' ]",
                    'docker_labels': "{'Name': 'fedora-docker-base', 'License': u'GPLv2', 'RUN': 'docker run -it --rm ${OPT1} --privileged -v \`pwd\`:/atomicapp -v /run:/run -v /:/host --net=host --name ${NAME} -e NAME=${NAME} -e IMAGE=${IMAGE} ${IMAGE} -v ${OPT2} run ${OPT3} /atomicapp', 'Vendor': 'Fedora Project', 'Version': '23', 'Architecture': 'x86_64' }",
                }
            },
            {
                'image-build': {
                    'format': ['docker', 'qcow2'],
                    'name': 'fedora-qcow-and-docker-base',
                    'target': 'koji-target-name',
                    'ksversion': 'F23',     # value from pykickstart
                    'version': '23',
                    # correct SHA1 hash will be put into the URL below automatically
                    'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
                    'kickstart': "fedora-docker-base.ks",
                    'repo': ["http://someextrarepos.org/repo", "ftp://rekcod.oi/repo"],
                    'distro': 'Fedora-20',
                    'disk_size': 3,

                    # this is set automatically by pungi to os_dir for given variant
                    # 'install_tree': 'http://somepath',
                }
            },
            {
                'image-build': {
                    'format': 'qcow2',
                    'name': 'fedora-qcow-base',
                    'target': 'koji-target-name',
                    'ksversion': 'F23',     # value from pykickstart
                    'version': '23',
                    'ksurl': 'https://git.fedorahosted.org/git/spin-kickstarts.git?somedirectoryifany#HEAD',
                    'kickstart': "fedora-docker-base.ks",
                    'distro': 'Fedora-23',

                    # only build this type of image on x86_64
                    'arches': ['x86_64'],

                    # Use install tree and repo from Everything variant.
                    'install_tree_from': 'Everything',
                    'repo': ['Everything'],

                    # Set release automatically.
                    'release': '!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN',
                }
            }
        ]
    }


KiwiBuild Settings
==================

**kiwibuild**
    (*dict*) -- configuration for building images with kiwi via a Koji plugin.
    Pungi will trigger a Koji task that delegates to kiwi, which builds the
    image and imports it to Koji via content generators.

    Format: ``{variant_uid_regex: [{...}]}``.

    Required keys in the configuration dict:

    * ``description_scm`` -- (*str*) SCM URL of the repository with the kiwi
      description.
    * ``description_path`` -- (*str*) path to the kiwi description inside the
      SCM repo.
    * ``kiwi_profile`` -- (*str*) profile to select from the description file.

    Optional keys:

    * ``repos`` -- additional repos used to install RPMs in the image. The
      compose repository for the enclosing variant is added automatically.
      Either a variant name or a URL is supported.
    * ``target`` -- (*str*) which build target to use for the task. If not
      provided, then either ``kiwibuild_target`` or ``global_target`` is
      needed.
    * ``release`` -- (*str*) release of the output image.
    * ``arches`` -- (*[str]*) List of architectures to build for. If not
      provided, all variant architectures will be built.
    * ``failable`` -- (*[str]*) List of architectures for which this
      deliverable is not release blocking.
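
Example config
--------------
A minimal sketch of a ``kiwibuild`` block using the keys described above. The
variant regex, description repository, profile and target are illustrative
assumptions rather than values required by Pungi. ::

    kiwibuild = {
        "^Cloud$": [{
            # assumed location of the kiwi description repository
            "description_scm": "https://pagure.io/fedora-kiwi-descriptions.git",
            "description_path": "teams/cloud/cloud.xml",
            # profile defined inside the description file (assumed name)
            "kiwi_profile": "Cloud-Base-Generic",
            # assumed Koji build target; kiwibuild_target or global_target
            # could be set globally instead
            "target": "f40-candidate",
            "arches": ["x86_64", "aarch64"],
            # an aarch64 failure will not block the compose
            "failable": ["aarch64"],
        }]
    }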


OSBuild Composer for building images
====================================

**osbuild**
    (*dict*) -- configuration for building images in OSBuild Composer service
    fronted by a Koji plugin. Pungi will trigger a Koji task delegating to the
    OSBuild Composer, which will build the image, import it to Koji via content
    generators.

    Format: ``{variant_uid_regex: [{...}]}``.

    Required keys in the configuration dict:

    * ``name`` -- name of the Koji package
    * ``distro`` -- the distribution for which the image should be built (a
      distribution identifier understood by OSBuild Composer, for example
      ``fedora-36``).
    * ``image_types`` -- a list with a single image type string or just a
      string representing the image type to build (e.g. ``qcow2``). In any
      case, only a single image type can be provided as an argument.

    Optional keys:

    * ``target`` -- which build target to use for the task. Either this option
      or the global ``osbuild_target`` is required.
    * ``version`` -- version for the final build (as a string). This option is
      required if the global ``osbuild_version`` is not specified.
    * ``release`` -- release part of the final NVR. If neither this option nor
      the global ``osbuild_release`` is set, Koji will automatically generate a
      value.
    * ``repo`` -- a list of repositories from which to consume packages for
      building the image. By default only the variant repository is used.
      The list items may use one of the following formats:

      * String with just the repository URL.

      * Dictionary with the following keys:

        * ``baseurl`` -- URL of the repository.
        * ``package_sets`` -- a list of package set names to use for this
          repository. Package sets are an internal concept of Image Builder and
          are used in image definitions. If specified, the repository is used by
          Image Builder only for the pipeline with the same name. For example,
          specifying the ``build`` package set name will make the repository be
          used only for the build environment in which the image will be built.
          (optional)

    * ``arches`` -- list of architectures for which to build the image. By
      default, the variant arches are used. This option can only restrict it,
      not add a new one.
    * ``manifest_type`` -- the image type that is put into the manifest by
      Pungi. If not supplied, it is autodetected from the Koji output.
    * ``ostree_url`` -- URL of the repository that's used to fetch the parent
      commit from.
    * ``ostree_ref`` -- name of the ostree branch
    * ``ostree_parent`` -- commit hash or a branch-like reference to the
      parent commit.
    * ``customizations`` -- a dictionary with customizations to use for the
      image build. For the list of supported customizations, see the **hosted**
      variants in the `Image Builder documentation
      <https://osbuild.org/docs/user-guide/blueprint-reference#installation-device>`__.
    * ``upload_options`` -- a dictionary with upload options specific to the
      target cloud environment. If provided, the image will be uploaded to the
      cloud environment, in addition to the Koji server. One can't combine
      arbitrary image types with arbitrary upload options.
      The dictionary keys differ based on the target cloud environment. The
      following keys are supported:

      * **AWS EC2 upload options** -- upload to Amazon Web Services.

        * ``region`` -- AWS region to upload the image to
        * ``share_with_accounts`` -- list of AWS account IDs to share the image
          with
        * ``snapshot_name`` -- Snapshot name of the uploaded EC2 image
          (optional)

      * **AWS S3 upload options** -- upload to Amazon Web Services S3.

        * ``region`` -- AWS region to upload the image to

      * **Azure upload options** -- upload to Microsoft Azure.

        * ``tenant_id`` -- Azure tenant ID to upload the image to
        * ``subscription_id`` -- Azure subscription ID to upload the image to
        * ``resource_group`` -- Azure resource group to upload the image to
        * ``location`` -- Azure location of the resource group (optional)
        * ``image_name`` -- Image name of the uploaded Azure image (optional)

      * **GCP upload options** -- upload to Google Cloud Platform.

        * ``region`` -- GCP region to upload the image to
        * ``bucket`` -- GCP bucket to upload the image to (optional)
        * ``share_with_accounts`` -- list of GCP accounts to share the image
          with
        * ``image_name`` -- Image name of the uploaded GCP image (optional)

      * **Container upload options** -- upload to a container registry.

        * ``name`` -- name of the container image (optional)
        * ``tag`` -- container tag to upload the image to (optional)

.. note::
   There is initial support for having this task as failable without aborting
   the whole compose. This can be enabled by setting ``"failable": ["*"]`` in
   the config for the image. It is an on/off switch without granularity per
   arch.
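
Example config
--------------
A hedged sketch of an ``osbuild`` block using the keys described above. The
variant regex, package name, distro identifier, target and version are
illustrative assumptions. ::

    osbuild = {
        "^Cloud$": [{
            # name of the Koji package
            "name": "Fedora-Cloud-Base",
            # distribution identifier understood by OSBuild Composer (assumed)
            "distro": "fedora-36",
            # a single image type to build
            "image_types": ["qcow2"],
            # assumed Koji build target; osbuild_target could be set globally instead
            "target": "f36-candidate",
            "version": "36",
            # extra repository used in addition to the variant repository
            "repo": ["http://example.com/repo/x86_64/os"],
            "arches": ["x86_64"],
            # do not abort the whole compose if this image fails
            "failable": ["*"],
        }]
    }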


Image container
===============

This phase supports building containers in OSBS that embed an image created in
the same compose. This can be useful for delivering the image to users running
in containerized environments.

Pungi will start a ``buildContainer`` task in Koji with the configured source
repository. The ``Dockerfile`` can expect that a repo file will be injected
into the container that defines a repo named ``image-to-include``, and its
``baseurl`` will point to the image to include. It is possible to extract the
URL with a command like ``dnf config-manager --dump image-to-include | awk
'/baseurl =/{print $3}'``.

**image_container**
    (*dict*) -- configuration for building containers embedding an image.

    Format: ``{variant_uid_regex: [{...}]}``.

    The inner object will define a single container. These keys are required:

    * ``url``, ``target``, ``git_branch``. See OSBS section for definition of
      these.
    * ``image_spec`` -- (*object*) A mapping of string filters used to select
      the image to embed. All images listed in metadata for the variant will be
      processed. The keys of this filter are used to select metadata fields for
      the image, and the values are regular expressions that need to match the
      metadata value.

      The filter should match exactly one image.


Example config
--------------
::

    image_container = {
        "^Server$": [{
            "url": "git://example.com/dockerfiles.git?#HEAD",
            "target": "f24-container-candidate",
            "git_branch": "f24",
            "image_spec": {
                "format": "qcow2",
                "arch": "x86_64",
                "path": ".*/guest-image-.*$",
            }
        }]
    }


OSTree Settings
===============

The ``ostree`` phase of *Pungi* can create and update ostree repositories. This
is done by running ``rpm-ostree compose`` in a Koji runroot environment. The
ostree repository itself is not part of the compose and should be located in
another directory. Any new packages in the compose will be added to the
repository with a new commit.

**ostree**
    (*dict*) -- a mapping of configuration for each variant. The format should
    be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
    configuration dicts as well.

    The configuration dict for each variant arch pair must have these keys:

    * ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
    * ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.
    * ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
      repo options, ``baseurl`` is required in the dict.
    * ``ostree_repo`` -- (*str*) Where to put the ostree repository.

    These keys are optional:

    * ``keep_original_sources`` -- (*bool*) Keep the existing source repos in
      the tree config file. If not enabled, all the original source repos will
      be removed from the tree config file.
    * ``config_branch`` -- (*str*) Git branch of the repo to use. Defaults to
      ``master``.
    * ``arches`` -- (*[str]*) List of architectures for which to update ostree.
      There will be one task per architecture. By default all architectures in
      the variant are used.
    * ``failable`` -- (*[str]*) List of architectures for which this
      deliverable is not release blocking.
    * ``update_summary`` -- (*bool*) Update summary metadata after tree composing.
      Defaults to ``False``.
    * ``force_new_commit`` -- (*bool*) Do not use rpm-ostree's built-in change
      detection.
      Defaults to ``False``.
    * ``unified_core`` -- (*bool*) Use rpm-ostree in unified core mode for composes.
      Defaults to ``False``.
    * ``version`` -- (*str*) Version string to be added as versioning metadata.
      If this option is set to ``!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN``,
      a value will be generated automatically as ``$VERSION.$RELEASE``.
      If this option is set to ``!VERSION_FROM_VERSION_DATE_RESPIN``,
      a value will be generated automatically as ``$VERSION.$DATE.$RESPIN``.
      :ref:`See how those values are created <auto-version>`.
    * ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git
      reference will not be created.
    * ``ostree_ref`` -- (*str*) To override value ``ref`` from ``treefile``.
    * ``runroot_packages`` -- (*list*) A list of additional package names to be
      installed in the runroot environment in Koji.

Example config
--------------
::

    ostree = {
        "^Atomic$": {
            "treefile": "fedora-atomic-docker-host.json",
            "config_url": "https://git.fedorahosted.org/git/fedora-atomic.git",
            "keep_original_sources": True,
            "repo": [
                "http://example.com/repo/x86_64/os",
                {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
            ],
            "ostree_repo": "/mnt/koji/compose/atomic/Rawhide/",
            "update_summary": True,
            # Automatically generate a reasonable version
            "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
            # Only run this for x86_64 even if Atomic has more arches
            "arches": ["x86_64"],
        }
    }

**ostree_use_koji_plugin** = False
    (*bool*) -- When set to ``True``, the Koji pungi_ostree task will be
    used to execute rpm-ostree instead of runroot. Use only if the Koji instance
    has the pungi_ostree plugin installed.


OSTree Native Container Settings
================================

The ``ostree_container`` phase of *Pungi* can create an ostree native container
image as an OCI archive. This is done by running ``rpm-ostree compose image``
in a Koji runroot environment.

While rpm-ostree can use information from previously built images to improve
the split into container layers, we cannot use that functionality until
https://github.com/containers/skopeo/pull/2114 is resolved. Each invocation
will thus create a new OCI archive image *from scratch*.

**ostree_container**
    (*dict*) -- a mapping of configuration for each variant. The format should
    be ``{variant_uid_regex: config_dict}``. It is possible to use a list of
    configuration dicts as well.

    The configuration dict for each variant arch pair must have these keys:

    * ``treefile`` -- (*str*) Filename of configuration for ``rpm-ostree``.
    * ``config_url`` -- (*str*) URL for Git repository with the ``treefile``.

    These keys are optional:

    * ``repo`` -- (*str|dict|[str|dict]*) repos specified by URL or a dict of
      repo options, ``baseurl`` is required in the dict.
    * ``keep_original_sources`` -- (*bool*) Keep the existing source repos in
      the tree config file. If not enabled, all the original source repos will
      be removed from the tree config file.
    * ``config_branch`` -- (*str*) Git branch of the repo to use. Defaults to
      ``main``.
    * ``arches`` -- (*[str]*) List of architectures for which to generate
      ostree native container images. There will be one task per architecture.
      By default all architectures in the variant are used.
    * ``failable`` -- (*[str]*) List of architectures for which this
      deliverable is not release blocking.
    * ``version`` -- (*str*) Version string to be added to the OCI archive name.
      If this option is set to ``!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN``,
      a value will be generated automatically as ``$VERSION.$RELEASE``.
      If this option is set to ``!VERSION_FROM_VERSION_DATE_RESPIN``,
      a value will be generated automatically as ``$VERSION.$DATE.$RESPIN``.
      :ref:`See how those values are created <auto-version>`.
    * ``tag_ref`` -- (*bool*, default ``True``) If set to ``False``, a git
      reference will not be created.
    * ``runroot_packages`` -- (*list*) A list of additional package names to be
      installed in the runroot environment in Koji.

Example config
--------------
::

    ostree_container = {
        "^Sagano$": {
            "treefile": "fedora-tier-0-38.yaml",
            "config_url": "https://gitlab.com/CentOS/cloud/sagano.git",
            "config_branch": "main",
            "repo": [
                "http://example.com/repo/x86_64/os",
                {"baseurl": "http://example.com/linux/repo", "exclude": "systemd-container"},
            ],
            # Automatically generate a reasonable version
            "version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
            # Only run this for x86_64 even if Sagano has more arches
            "arches": ["x86_64"],
        }
    }

**ostree_container_use_koji_plugin** = False
    (*bool*) -- When set to ``True``, the Koji pungi_ostree task will be
    used to execute rpm-ostree instead of runroot. Use only if the Koji instance
    has the pungi_ostree plugin installed.


Ostree Installer Settings
=========================

The ``ostree_installer`` phase of *Pungi* can produce an installer image
bundling an OSTree repository. This always runs in Koji as a ``runroot`` task.

**ostree_installer**
    (*dict*) -- a variant/arch mapping of configuration. The format should be
    ``[(variant_uid_regex, {arch|*: config_dict})]``.

    All keys in the configuration dict for each variant arch pair are optional:

    * ``repo`` -- (*str|[str]*) repos specified by URL or variant UID
    * ``release`` -- (*str*) Release value to set for the installer image. Set
      to ``!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN`` to generate the value
      :ref:`automatically <auto-version>`.
    * ``failable`` -- (*[str]*) List of architectures for which this
      deliverable is not release blocking.

    These optional keys are passed to ``lorax`` to customize the build.

    * ``installpkgs`` -- (*[str]*)
    * ``add_template`` -- (*[str]*)
    * ``add_arch_template`` -- (*[str]*)
    * ``add_template_var`` -- (*[str]*)
    * ``add_arch_template_var`` -- (*[str]*)
    * ``rootfs_size`` -- (*[str]*)
    * ``template_repo`` -- (*str*) Git repository with extra templates.
    * ``template_branch`` -- (*str*) Branch to use from ``template_repo``.

    The templates can either be absolute paths, in which case they will be used
    as configured; or they can be relative paths, in which case
    ``template_repo`` needs to point to a Git repository from which to take the
    templates.

    If the templates need additional dependencies in the runroot environment,
    these can be configured with the optional key:

    * ``extra_runroot_pkgs`` -- (*[str]*)

    One more optional key controls branding:

    * ``skip_branding`` -- (*bool*) Prevents lorax from installing branding
      packages. Defaults to ``False``.

**ostree_installer_overwrite** = False
    (*bool*) -- by default, if a variant including an OSTree installer also
    creates regular installer images in the buildinstall phase, there will be
    conflicts (as the files are put in the same place) and Pungi will report an
    error and fail the compose.

    With this option it is possible to opt in to the overwriting. The
    traditional ``boot.iso`` will be in the ``iso/`` subdirectory.

**ostree_installer_use_koji_plugin** = False
    (*bool*) -- When set to ``True``, the Koji pungi_buildinstall task will be
    used to execute Lorax instead of runroot. Use only if the Koji instance
    has the pungi_buildinstall plugin installed.


Example config
--------------
::

    ostree_installer = [
        ("^Atomic$", {
            "x86_64": {
                "repo": [
                    "Everything",
                    "https://example.com/extra-repo1.repo",
                    "https://example.com/extra-repo2.repo",
                ],
                "release": "!RELEASE_FROM_LABEL_DATE_TYPE_RESPIN",
                "installpkgs": ["fedora-productimg-atomic"],
                "add_template": ["atomic-installer/lorax-configure-repo.tmpl"],
                "add_template_var": [
                    "ostree_osname=fedora-atomic",
                    "ostree_ref=fedora-atomic/Rawhide/x86_64/docker-host",
                ],
                "add_arch_template": ["atomic-installer/lorax-embed-repo.tmpl"],
                "add_arch_template_var": [
                    "ostree_repo=https://kojipkgs.fedoraproject.org/compose/atomic/Rawhide/",
                    "ostree_osname=fedora-atomic",
                    "ostree_ref=fedora-atomic/Rawhide/x86_64/docker-host",
                ],
                "template_repo": "https://git.fedorahosted.org/git/spin-kickstarts.git",
                "template_branch": "f24",
            }
        })
    ]


OSBS Settings
=============

*Pungi* can build container images in OSBS. The build is initiated through the
Koji ``container-build`` plugin. The base image will use RPMs from the current
compose and a ``Dockerfile`` from the specified Git repository.

Please note that the image is uploaded to a registry and not exported into the
compose directory. There will be a metadata file in
``compose/metadata/osbs.json`` with details about the built images (assuming
they are not scratch builds).

**osbs**
    (*dict*) -- a mapping from variant regexes to configuration blocks. The
    format should be ``{variant_uid_regex: [config_dict]}``.

    The configuration for each image must have at least these keys:

    * ``url`` -- (*str*) URL pointing to a Git repository with ``Dockerfile``.
      Please see :ref:`git-urls` section for more details.
    * ``target`` -- (*str*) A Koji target to build the image for.
    * ``git_branch`` -- (*str*) A branch in SCM for the ``Dockerfile``. This is
      required by OSBS to avoid race conditions when multiple builds from the
      same repo are submitted at the same time. Please note that ``url`` should
      contain the branch or tag name as well, so that it can be resolved to a
      particular commit hash.

    Optionally you can specify ``failable``. If it has a truthy value, failure
    to create the image will not abort the whole compose.

    The configuration will pass other attributes directly to the Koji task.
    This includes ``scratch`` and ``priority``. See ``koji list-api
    buildContainer`` for more details about these options.

    A value for ``yum_repourls`` will be created automatically and will point
    at a repository in the current compose. You can add extra repositories with
    the ``repo`` key, which takes a list of URLs pointing to ``.repo`` files or
    plain variant UIDs; for a variant UID, Pungi will create the ``.repo`` file
    itself. If a specific URL is used in ``repo``, the ``$COMPOSE_ID`` variable
    in the string will be replaced with the real compose ID. ``gpgkey`` can be
    specified to enable gpgcheck in the repo files for variants.

**osbs_registries**
   (*dict*) -- Use this optional setting to emit ``osbs-request-push``
   messages for each non-scratch container build. These messages can tell
   other tools how to push the images to other registries. For example, an
   external tool might trigger on these messages and copy the images from
   OSBS's registry to a staging or production registry.

   For each completed container build, Pungi will try to match the NVR against
   a key in ``osbs_registries`` mapping (using shell-style globbing) and take
   the corresponding value and collect them across all built images. Pungi
   will save this data into ``logs/global/osbs-registries.json``, mapping each
   Koji NVR to the registry data. Pungi will also send this data to the
   message bus on the ``osbs-request-push`` topic once the compose finishes
   successfully.

   Pungi simply logs the mapped data and emits the messages. It does not
   handle the messages or push images. A separate tool must do that.


Example config
--------------
::

    osbs = {
        "^Server$": {
            # required
            "url": "git://example.com/dockerfiles.git?#HEAD",
            "target": "f24-docker-candidate",
            "git_branch": "f24-docker",

            # optional
            "repo": ["Everything", "https://example.com/extra-repo.repo"],
            # This will result in three repo urls being passed to the task.
            # They will be in this order: Server, Everything, example.com/
            "gpgkey": 'file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release',
        }
    }
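
The ``osbs_registries`` option complements the config above. A minimal sketch
follows; the glob pattern and the value layout are illustrative assumptions,
since Pungi forwards the values unmodified and their structure is dictated by
the tool consuming the ``osbs-request-push`` messages. ::

    osbs_registries = {
        # shell-style glob matched against the NVR of each built container
        "*-container-*": {
            # data passed through to the external push tool (assumed layout)
            "registry": "registry.example.com",
            "repository": "fedora/base",
        },
    }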


Extra ISOs
==========

Create an ISO image that contains packages from multiple variants. Such an ISO
always belongs to one variant and will be stored in the ISO directory of that
variant.

The ISO will be bootable if the buildinstall phase runs for the parent variant.
It will reuse the boot configuration from that variant.

**extra_isos**
    (*dict*) -- a mapping from variant UID regex to a list of configuration
    blocks.

    * ``include_variants`` -- (*list*) list of variant UIDs from which content
      should be added to the ISO; the variant of this image is added
      automatically.

    The rest of the configuration keys are optional.

    * ``filename`` -- (*str*) template for naming the image. In addition to the
      regular placeholders, a ``filename`` placeholder is available with the
      name generated using the ``image_name_format`` option.

    * ``volid`` -- (*str*) template for generating the volume ID. Again, a
      ``volid`` placeholder can be used similarly as for the file name. This
      can also be a list of templates that will be tried sequentially until one
      generates a volume ID that fits into the 32 character limit.

    * ``extra_files`` -- (*list*) a list of :ref:`scm_dict <scm_support>`
      objects. These files will be put in the top level directory of the image.

    * ``arches`` -- (*list*) a list of architectures for which to build this
      image. By default all arches from the variant will be used. This option
      can be used to limit them.

    * ``failable_arches`` -- (*list*) a list of architectures for which the
      image can fail to be generated and not fail the entire compose.

    * ``skip_src`` -- (*bool*) allows disabling the creation of an image with
      source packages.

    * ``inherit_extra_files`` -- (*bool*) by default extra files in variants
      are ignored. If you want to include them in the ISO, set this option to
      ``True``.

    * ``max_size`` -- (*int*) expected maximum size in bytes. If the final
      image is larger, a warning will be issued.

Example config
--------------
::

    extra_isos = {
        'Server': [{
            # Will generate foo-DP-1.0-20180510.t.43-Server-x86_64-dvd1.iso
            'filename': 'foo-{filename}',
            'volid': 'foo-{arch}',

            'extra_files': [{
                'scm': 'git',
                'repo': 'https://pagure.io/pungi.git',
                'file': 'setup.py'
            }],

            'include_variants': ['Client']
        }]
    }
    # This should create image with the following layout:
    #  .
    #  ├── Client
    #  │   ├── Packages
    #  │   │   ├── a
    #  │   │   └── b
    #  │   └── repodata
    #  ├── Server
    #  │   ├── Packages
    #  │   │   ├── a
    #  │   │   └── b
    #  │   └── repodata
    #  └── setup.py



Media Checksums Settings
========================

**media_checksums**
    (*list*) -- list of checksum types to compute, allowed values are anything
    supported by Python's ``hashlib`` module (see `documentation for details
    <https://docs.python.org/2/library/hashlib.html>`_).

**media_checksum_one_file**
    (*bool*) -- when ``True``, only one ``CHECKSUM`` file will be created per
    directory; this option requires ``media_checksums`` to only specify one
    type

**media_checksum_base_filename**
    (*str*) -- when not set, all checksums will be saved to a file named either
    ``CHECKSUM`` or based on the digest type; this option allows adding an
    arbitrary prefix to that name

    It is possible to use format strings that will be replaced with actual
    values. The allowed keys are:

      * ``arch``
      * ``compose_id``
      * ``date``
      * ``label``
      * ``label_major_version``
      * ``release_short``
      * ``respin``
      * ``type``
      * ``type_suffix``
      * ``version``
      * ``dirname`` (only if ``media_checksum_one_file`` is enabled)

    For example, for Fedora the prefix should be
    ``%(release_short)s-%(variant)s-%(version)s-%(date)s%(type_suffix)s.%(respin)s``.
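
Example config
--------------
A short sketch combining the checksum options above; the digest types and the
prefix are illustrative choices, not defaults. ::

    media_checksums = ["sha256", "sha512"]
    # keep the default of separate checksum files, so more than one type is allowed
    media_checksum_one_file = False
    # prefix built from the placeholders listed above, as suggested for Fedora
    media_checksum_base_filename = "%(release_short)s-%(variant)s-%(version)s-%(date)s%(type_suffix)s.%(respin)s"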


Translate Paths Settings
========================

**translate_paths**
    (*list*) -- list of paths to translate; format: ``[(path, translated_path)]``

.. note::
    This feature becomes useful when you need to transform a compose location
    into e.g. an HTTP repo URL which can be passed to ``koji image-build``.
    The ``path`` part is normalized via ``os.path.normpath()``.


Example config
--------------
::

    translate_paths = [
        ("/mnt/a", "http://b/dir"),
    ]

Example usage
-------------
::

    >>> from pungi.util import translate_paths
    >>> print(translate_paths(compose_object_with_mapping, "/mnt/a/c/somefile"))
    http://b/dir/c/somefile


Miscellaneous Settings
======================

**paths_module**
    (*str*) -- Name of Python module implementing the same interface as
    ``pungi.paths``. This module can be used to override where things are
    placed.

**link_type** = ``hardlink-or-copy``
    (*str*) -- Method of putting packages into compose directory.

    Available options:

    * ``hardlink-or-copy``
    * ``hardlink``
    * ``copy``
    * ``symlink``
    * ``abspath-symlink``

**skip_phases**
    (*list*) -- List of phase names that should be skipped. The same
    functionality is available via a command line option.

**release_discinfo_description**
    (*str*) -- Override description in ``.discinfo`` files. The value is a
    format string accepting ``%(variant_name)s`` and ``%(arch)s`` placeholders.

**symlink_isos_to**
    (*str*) -- If set, the ISO files from the ``buildinstall``, ``createiso``
    and ``live_images`` phases will be put into this destination, and a symlink
    pointing to this location will be created in the actual compose directory.

**dogpile_cache_backend**
    (*str*) -- If set, Pungi will use the configured Dogpile cache backend to
    cache various data between multiple Pungi calls. This can make Pungi faster
    when similar composes are run regularly within a short period of time.

    For list of available backends, please see the
    https://dogpilecache.readthedocs.io documentation.

    The most typical configuration uses the ``dogpile.cache.dbm`` backend.

**dogpile_cache_arguments**
    (*dict*) -- Arguments to be used when creating the Dogpile cache backend.
    See the particular backend's configuration for the list of possible
    key/value pairs.

    For the ``dogpile.cache.dbm`` backend, the value can look for example like
    this: ::

        {
            "filename": "/tmp/pungi_cache_file.dbm"
        }

**dogpile_cache_expiration_time**
    (*int*) -- Defines the default expiration time in seconds of data stored
    in the Dogpile cache. Defaults to 3600 seconds.
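
As a sketch, several of the options above could be combined like this; all
values are illustrative assumptions rather than recommended defaults. ::

    link_type = "hardlink-or-copy"
    # skip the OSBS phase entirely
    skip_phases = ["osbs"]
    release_discinfo_description = "%(variant_name)s %(arch)s"
    symlink_isos_to = "/mnt/koji/compose/isos"

    # cache data between Pungi runs in a local DBM file
    dogpile_cache_backend = "dogpile.cache.dbm"
    dogpile_cache_arguments = {
        "filename": "/tmp/pungi_cache_file.dbm"
    }
    # keep cached data for two hours instead of the default one hour
    dogpile_cache_expiration_time = 7200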