Debian builds with Jenkins

From PDP/Grid Wiki

Latest revision as of 14:54, 1 June 2015

See the results of the debian builds of the middleware stack on Jenkins at Debian-builds.pdf

Goals

The aim of this work was to ease and automate the way debian packages are created for the supported middleware security software. Just like the Koji Testbed for automated RPM packaging, a similar solution was proposed with the use of Jenkins. A Jenkins job can be configured for every component which can be run on a debian build node. This job can then be used to create packages for multiple distributions and architectures in a clean environment with the use of cowbuilder. Much of this work was built on top of the debian building procedure already outlined before.

Prerequisites

There are a couple of prerequisites assumed to be already in place:

Note: The use of Debian Package Builder has been discarded because it does not use a clean build environment

  • The latest stable debian image (jessie at the time) configured as Jenkins slave (debian build node)
  • Software installed on the debian build node
apt-get install dh-make autotools-dev dh-autoreconf build-essential devscripts cdbs quilt \
                debhelper fakeroot lintian pbuilder cowbuilder svn-buildpackage maven-debian-helper

Package building jobs

The list of available jobs for debian packaging can be found in our local jenkins instance, under the DEBIAN-BUILDS tab. We started out creating Jenkins jobs based on the recommendations of jenkins-debian-glue, but soon started deviating from it as more and more customization was needed to fit specific use cases. For every package that needs to be built for debian we dedicate two separate Jenkins jobs, as suggested in the setup guide of jenkins-debian-glue. The <package-name>.source job will build the source package, and the <package-name>.binaries job will build all binary packages for the different architectures and distributions. These two can be executed independently from each other. The creation of these jobs is outlined below.

When adding new debian build jobs you can either follow the templates described below, or simply create new jobs by copying existing job configurations and altering some fields. The second option is preferable, since it ensures that you get the most up-to-date job configuration.
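
For scripted job creation, the copy can also be done with the Jenkins CLI instead of the web UI. A minimal sketch (dry run; the job names are placeholders, and the jenkins-cli.jar path depends on your installation):

```shell
# Build the Jenkins CLI invocation for copying an existing job as a template.
# The instance URL matches our local jenkins; the job names are examples only.
jenkins_copy_job_cmd() {
    # $1 = existing job to copy, $2 = name of the new job
    echo "java -jar jenkins-cli.jar -s https://jenkins.nikhef.nl:8008 copy-job $1 $2"
}

# Dry run: print the command; pipe the output to sh to actually execute it.
jenkins_copy_job_cmd old-package.source new-package.source
```

Remember to open the copied job's configuration afterwards and adjust the package-specific fields.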

Building source packages

The steps taken by a <package-name>.source job are:

  1. Restrict where this project can be run: must be a debian slave
  2. Delete previous workspace
  3. Source Code Management: svn or git checkout of the debian subdirectory containing the relevant files [1] into a directory called 'source'
  4. Execute source building script

The first 3 steps of the job are straightforward, while the last step does the actual work. The source building script differs for svn checkouts (local projects) and git checkouts (adopted projects). For local projects coming from our local svn repository we execute the following build script:

ORIG_DIR=$(pwd)

cd source
dch --distribution unstable --release ""

svn upgrade

if [ -f debian/orig-tar.sh ]; then
    chmod +x debian/orig-tar.sh
fi
mkdir -p ../tarballs
uscan --download-current-version --destdir ../tarballs
cp ../*.tar.gz ../tarballs || true

svn-buildpackage -S --svn-builder dpkg-buildpackage -d --svn-move-to=${ORIG_DIR} --svn-dont-purge -uc -us --svn-ignore-new -rfakeroot

lintian -IiE --pedantic `find . -type f -name '*.changes'` || true

First, the changelog is modified to reflect a new build via the dch command. After an svn upgrade, the tarballs containing the sources are fetched with uscan and/or the aid of the debian/orig-tar.sh script (thus it is important for it to have the executable flag set). In case the custom debian/orig-tar.sh downloads the tarballs into the parent directory, we make sure they are copied into the tarballs directory, where svn-buildpackage expects them to be. Finally, the source package is built with svn-buildpackage, and lintian checks are executed on the results.
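
For uscan to be able to download the tarballs, the package needs a debian/watch file pointing at the upstream release location. A minimal sketch (the URL and package name are hypothetical):

```
version=3
http://example.org/releases/mypackage-(\d[\d.]*)\.tar\.gz
```

uscan matches the regular expression against the listing at the given URL and downloads the highest matching version into the directory given by --destdir.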

At the time of this writing the only adopted projects we have encountered are the argus c components, for which the debian subdirectory is checked out from GitHub. These packages come with a predefined Makefile, so building them boils down to executing:

dch --distribution unstable --release ""
make deb-src

To avoid executing every build separately, you can chain them into a single job which calls every <package-name>.source job as a downstream project, using the Parameterized Trigger Plugin. On our Jenkins instance this job is called build-all.source

Building binary packages

The steps taken by a <package-name>.binaries job are:

  1. Define ${architecture}, ${distribution} and debian slave from the matrix configuration
  2. Delete previous workspace
  3. Source Code Management: svn or git checkout of the debian subdirectory containing the relevant files [2] into a directory called 'source'
  4. Execute binary building script
    1. add package suffix
    2. build source package
    3. build binary packages
    4. execute lintian checks

Every <package-name>.binaries job is a multi-configuration job with the following axes defined:

  • User-defined Axis: architecture=amd64 i386
  • User-defined Axis: distribution=jessie wheezy squeeze sid
  • Label expression: label_exp=debian8

The two user-defined axes will create 8 sub-jobs, one for each combination of architecture and distribution, while the label expression restricts the jobs' execution to debian8 nodes (see Jenkins Setup on how to set up nodes with labels).
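
The sub-jobs generated by the two axes are simply the cross product of the axis values, which can be illustrated with a small shell loop:

```shell
# Enumerate the distribution-architecture pairs the matrix job expands into,
# using the same axis values as above.
list_matrix_combinations() {
    for architecture in amd64 i386; do
        for distribution in jessie wheezy squeeze sid; do
            echo "${architecture}/${distribution}"
        done
    done
}

list_matrix_combinations | wc -l   # counts the 8 sub-jobs
```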

After clearing the workspace and checking out the debian subdirectory, the following script is executed:

###############################################################
#  Building source with modified changelog                    #
###############################################################
ORIG_DIR=$(pwd)

cd source
if [ "${distribution}" = "sid" ]; then
    dch --distribution unstable --release ""
else
    if [ "${distribution}" = "squeeze" ]; then
       bptag=bpo60+1
    elif [ "${distribution}" = "wheezy" ]; then
       bptag=bpo70+1
    elif [ "${distribution}" = "jessie" ]; then
       bptag=bpo80+1
    fi

    version=`dpkg-parsechangelog | sed -n 's/^Version: //p'`
    dch --force-distribution --distribution ${distribution}-backports -b -v ${version}~${bptag} "Rebuild for ${distribution}"
fi

svn upgrade

if [ -f debian/orig-tar.sh ]; then
    chmod +x debian/orig-tar.sh
fi
mkdir -p ../tarballs
uscan --download-current-version --destdir ../tarballs
cp ../*.tar.gz ../tarballs || true

svn-buildpackage -S --svn-builder dpkg-buildpackage -d --svn-move-to=${ORIG_DIR} --svn-dont-purge -uc -us --svn-ignore-new -rfakeroot

cd ../

###############################################################
#  Building binaries                                          #
###############################################################

USE_LOCAL_REPOSITORY=true

if [ -n "${USE_LOCAL_REPOSITORY}" ]; then
    export release=${distribution}
    export REMOVE_FROM_RELEASE=true
else
     export REPOSITORY_EXTRA="deb http://software.nikhef.nl/dist/debian/ ${distribution} main"
     export REPOSITORY_EXTRA_KEYS='http://software.nikhef.nl/dist/debian/DEB-GPG-KEY-MWSEC.asc'
fi

/usr/bin/build-and-provide-package

###############################################################
#  Lintian reports                                            #
###############################################################


/usr/bin/lintian-junit-report `find . -type f -name '*.dsc'`
cat lintian.txt

The first part of the script builds a source package with the relevant name suffix according to backporting conventions. This part is similar to the script executed in a <package-name>.source job, with a modified dch behaviour. Because a different suffix is appended for each distribution backport, it is necessary to rebuild the source with the appropriate name. This means that we cannot rely on the output of <package-name>.source being used in <package-name>.binaries, as suggested by the jenkins-debian-glue guide.
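
The resulting version strings follow the usual Debian backport convention; the suffix logic of the script above can be sketched as a standalone function:

```shell
# Compute the backported package version, as done by the dch call above:
# e.g. a version 1.2.3-1 rebuilt for wheezy becomes 1.2.3-1~bpo70+1.
backport_version() {
    version=$1
    distribution=$2
    case "${distribution}" in
        squeeze) bptag=bpo60+1 ;;
        wheezy)  bptag=bpo70+1 ;;
        jessie)  bptag=bpo80+1 ;;
        *)       echo "${version}"; return ;;  # sid builds keep the plain version
    esac
    echo "${version}~${bptag}"
}

backport_version 1.2.3-1 wheezy   # → 1.2.3-1~bpo70+1
```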

The second part of the script defines where dependencies should be taken from, and executes the build-and-provide-package script. The build-and-provide-package script, provided by jenkins-debian-glue, will use an existing cowbuilder base (or create a new one) for every distribution-architecture pair to build the package. Once the packages are built, the script uploads them into the local repository.

As an extra step, lintian checks are executed at the end of the script.

Building binary packages out of adopted projects follows the same design outlined above, with the exception that the source package is simply built with the make deb-src command instead of calling svn-buildpackage. Note that build-and-provide-package is called for adopted projects in the same way as for local projects.

Similarly to the source building jobs, the binary building jobs are also chained together in a build-specific order under the job called build-all.binaries

Build dependency resolution

There are two types of dependencies that we come across when building the grid middleware: internal dependencies refer to closely related packages from the same software stack which are maintained locally, while external dependencies refer to packages not maintained locally.

Internal dependencies

When resolving internal dependencies there are two options to choose from: one can either use the official release repository [3], or use the unofficial local repository of the building node. The first option works well for builds whose dependencies have been built and packaged before. In case new releases are out for two interdependent packages, or we are building for a new distribution, the first option will fail, in which case we can resort to the local repository. The choice between including the official repository or the local repository is made at build time by:

USE_LOCAL_REPOSITORY=true

if [ -n "${USE_LOCAL_REPOSITORY}" ]; then
    export release=${distribution}
    export REMOVE_FROM_RELEASE=true
else
     export REPOSITORY_EXTRA="deb http://software.nikhef.nl/dist/debian/ ${distribution} main"
     export REPOSITORY_EXTRA_KEYS='http://software.nikhef.nl/dist/debian/DEB-GPG-KEY-MWSEC.asc'
fi

The build-and-provide-package script recognizes the exported environment variables. The ${release} variable controls the name of the local repository where packages are uploaded; if unspecified, it defaults to "<package-name>-distribution" (in which case every package ends up in a separate repository). By setting release=${distribution} we make sure that every resulting package gets into the same local repository. The build-and-provide-package script will also implicitly include the specified release repository in the /etc/apt/sources.list.d of the cowbuilder environment. The ${REMOVE_FROM_RELEASE} variable ensures that packages are removed from the local repository before the results of the new build are uploaded, in order to avoid hash collisions on the same filename.

By using ${REPOSITORY_EXTRA} and ${REPOSITORY_EXTRA_KEYS}, build-and-provide-package will include these repositories in the /etc/apt/sources.list.d of the cowbuilder environment.

In principle you can include both local and official repositories in your build process, but this could lead to confusion later on. The build process will resolve every dependency from the available repositories included.

External dependencies

When it comes to external dependencies, one can mostly rely on official debian repositories. Every once in a while, a specific package is missing from the official debian repositories, which leaves us with no choice but to include external repositories in the build environment (such as [4], [5]). These external repositories are often unmaintained, or have outdated packages that are not desirable in the clean environment. To ensure that only the most necessary packages are installed into the build environment we can run a pbuilder hook. This way the external repository can be added briefly to install the necessary packages, and removed afterwards, leaving a clean repository list. If the external repository is not removed, the automatic dependency resolver might use it for further dependency resolution, which is undesirable.

The pbuilder hook script executed for external dependency resolution can be found at /usr/share/jenkins-debian-glue/pbuilder-hookdir/D30dependency-downloader.

#!/bin/sh

set -x

if [ -n "${DEB_EXTRA_REPO}" -a -n "${DEB_EXTRA_REPO_KEY}" -a -n "${DEB_INSTALL_PACKAGES}" ]; then

    apt-get install -y wget

    echo "${DEB_EXTRA_REPO}" > /etc/apt/sources.list.d/extra-repo.list
    wget -O- "${DEB_EXTRA_REPO_KEY}" | apt-key add -

    apt-get update
    apt-get install -y ${DEB_INSTALL_PACKAGES}

    rm /etc/apt/sources.list.d/extra-repo.list 
    apt-get update

fi

Once the script is in place on your debian node, all you have to do is inject the required environment variables into your jenkins job. As an example:

 DEB_EXTRA_REPO=deb http://repo-deb.ige-project.eu/debian/ squeeze main
 DEB_EXTRA_REPO_KEY=http://repo-deb.ige-project.eu/DEB-GPG-KEY-IGE.asc
 DEB_INSTALL_PACKAGES=libglobus-gridmap-callout-error-dev libglobus-common0 libglobus-gsi-credential1

These variables will be passed into the build environment, where the hook will pick them up. Make sure to configure your jenkins user on the debian node with sudo rights and include this in the sudoers file:

Defaults:jenkins env_keep+="DEB_* DIST ARCH"

Problems and Solutions

Unfit source generating script from jenkins-debian-glue

The jenkins-debian-glue package provides a wrapper script called generate-svn-snapshot, which is intended for building source packages out of a freshly checked out debian subdirectory. Since we were unable to fine-tune it through parameters alone, we decided to use svn-buildpackage manually. What generate-svn-snapshot can't do:

  • Download source tarballs properly into one location. The script omits the use of uscan and relies on svn-buildpackage to download original sources, which does not always work. Therefore, we use uscan to download the source tarballs.
  • Strict checking of the 'mergeWithUpstream' property set on the debian subdirectory coming from svn. We discovered some inconsistencies in our svn repository with regard to this property that made the script fail (some had an incorrect form of the property, such as 'svn:mergeWithUpstream' or 'MergeWithUpstream'). This was easily corrected by changing the property to its correct form on all debian subdirectories.
  • Package naming cannot be tuned, which leaves us with no option to add the proper backport suffixes to different builds. This script uses its own naming convention with specific timestamps and svn revision numbers suffixed to the resulting source package, which is different from what we want. By executing dch manually, we overcame this issue.

FORCE_BINARY_ONLY breaks some of the builds

The build-and-provide-package script accepts the FORCE_BINARY_ONLY variable, which is used to decide whether to do binary-only builds (-B or -A). This variable can be used to suppress building source packages over and over for every sub-job (distribution-architecture pair). Setting this flag will result in some of the packages not being built properly, because it will fail to build every package with architecture=all (in the -B case) or architecture=any (in the -A case). If both architecture-dependent and architecture-independent packages are provided by a build, the result of setting this flag is not visible right away, because the build succeeds. On the other hand, if a build provides only one or the other, this flag will break the build.

Since we wanted a uniform build script that can be applied to every package (regardless of whether it provides architecture-dependent or architecture-independent packages), we omit setting this flag and build everything.

Concurrent builds

Executing builds in parallel is technically possible, since a separate clean environment is provided for each build, and build-and-provide-package uses directory names marked with PIDs to avoid collisions. In practice, we encountered problems with the script failing when it cannot unmount a previously bind-mounted directory.

The build-and-provide-package script bind-mounts the /var/cache/pbuilder/build directory inside the chroot environment in order to be able to save a snapshot of the build environment in case of a failed build. This is useful if someone wants to inspect problems with a build further, but it also causes problems with concurrent executions of the script. You can stop this by removing the directory from the bind mount list inside the script:

diff /usr/bin/build-and-provide-package /usr/bin/build-and-provide-package.orig 
429,430c429
<   #local BINDMOUNTS="/tmp/adt-$$ /tmp/apt-$$ /var/cache/pbuilder/build ${USER_BINDMOUNTS:-}"
<   local BINDMOUNTS="/tmp/adt-$$ /tmp/apt-$$ ${USER_BINDMOUNTS:-}"
---
>   local BINDMOUNTS="/tmp/adt-$$ /tmp/apt-$$ /var/cache/pbuilder/build ${USER_BINDMOUNTS:-}"

Old debhelper version on Debian Squeeze

Some of the builds on squeeze kept breaking with a syntax error while executing dh_auto_configure. Some investigation led us to discover that the dpkg-dev and debhelper packages had version numbers lower than the ones required for building the package. To install the proper versions of these required packages we included squeeze-backports in the available repositories inside the build environment for squeeze builds. This can be achieved by a pbuilder hook script that installs the right repository and the right packages.

See the hook script at /usr/share/jenkins-debian-glue/pbuilder-hookdir/D30squeeze-backport for more details:

#!/bin/sh

set -x

DEBIAN_VERSION=`cat /etc/debian_version | cut -d '.' -f 1`
DPKG_DEV_VERSION=`dpkg-query --show dpkg-dev | sed 's/^dpkg-dev[ \t]*//' | cut -d '.' -f 2`

if [ "${DEBIAN_VERSION}" = "6" ] && [ "${DPKG_DEV_VERSION}" -lt 16 ]; then

  echo "Adding squeeze-backports to sources"
  echo 'deb http://backports.debian.org/debian-backports squeeze-backports main' > /etc/apt/sources.list.d/squeeze-backports.list

  /usr/bin/apt-get update
  /usr/bin/apt-get install -y -t squeeze-backports dpkg-dev debhelper
fi
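
The version parsing done by the hook can be sanity-checked offline on sample strings (the sample values below are made up; the real dpkg-query output is tab-separated, which the bracket expression in the hook also matches):

```shell
# Reproduce the hook's parsing on fixed sample input instead of the live system.
major_debian_version() {
    echo "$1" | cut -d '.' -f 1
}
dpkg_dev_minor() {
    # same sed/cut pipeline as in the hook above
    echo "$1" | sed 's/^dpkg-dev[ \t]*//' | cut -d '.' -f 2
}

major_debian_version "6.0.10"            # → 6
dpkg_dev_minor "dpkg-dev 1.15.8.13"      # → 15, i.e. older than the required 16
```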

Debian Squeeze only accepts signed packages

By default the resulting packages in the local repository are unsigned, but for squeeze these packages need to be signed in order to use the local repository for dependency resolution.

The problem is also described by the jenkins-debian-glue config file: "By default reprepro repositories are not verified but assumed to be trustworthy. Please note that if you build packages for Squeeze, the reprepro repositories *MUST* be signed and verifiable. I.e. you need to set KEY_ID and the corresponding keyring in REPOSITORY_KEYRING that holds the public key portion for that KEY_ID." The solution is to create a GPG key and set the required variables in /etc/jenkins/debian_glue:

gpg --gen-key
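
After generating the key, the variables named in the config comment can be set in /etc/jenkins/debian_glue. A sketch (the key ID and keyring path are placeholders; use the ID reported by gpg):

```
KEY_ID=DEADBEEF
REPOSITORY_KEYRING=/etc/jenkins/debian-glue-keyring.gpg
```

The keyring must contain the public half of the key, which can be exported with gpg --export.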

Filtering out specific configurations

We encountered packages that cannot be built for every distribution-architecture pair, because of missing dependencies on a specific platform. Having a sub-job inside the <package-name>.binaries multi-configuration job that always fails is undesirable, because it is redundant and also marks the whole build as failed. To filter out those configurations that don't make sense for a specific package you can add a combination filter to the matrix project configuration.

!(architecture=="i386" && distribution=="squeeze")

The above example tells Jenkins not to build the package for i386 on squeeze.

Multiple source tarballs

Packages that require multiple source tarballs at build time can be tricky to build. We dedicate the directory called tarballs in the job workspace to be the container of all required source tarballs. Moreover, svn-buildpackage will expect to find these together in this directory, so we make sure to copy every top-level tarball manually over to the tarballs directory after uscan is executed.

The problem manifests itself on executing svn-buildpackage, which is unable to cope with more than two source tarballs because of an already discovered bug. To fix this on the debian node, you can simply apply the following diff, which fixes the range check bug.

diff /usr/bin/svn-buildpackage /usr/bin/svn-buildpackage.orig 
635c635
< 	  if (@entries == 0) {
---
> 	  if (@entries == 1) {
639c639
< 	     withecho "mv", (<$ba/tmp-$mod/*>), "$bdir/$component";
---
> 	     withecho "mv", "$ba/tmp-$mod", "$bdir/$component";

Missing source packages from the local repository

The build-and-provide-package script only builds and uploads source packages for builds executed on the same architecture as the building platform (which in our case is amd64). This means that the i386 builds do not upload any source packages. Jenkins executes the architecture-distribution sub-jobs in an arbitrary order. This can lead to amd64 builds being executed before i386 builds, which results in a situation where the source packages created and uploaded by the amd64 build are erased by the i386 build (of the same distribution), and never uploaded again. To solve this issue you can modify the matrix job configuration of the <package-name>.binaries jobs to execute i386 builds before amd64 builds. This ordering ensures that source packages uploaded by amd64 builds stay in the local repository, and do not get deleted by subsequent sub-jobs. The way to do this in Jenkins is with the 'Execute touchstone builds first' option set to:

architecture=="i386"