It may be obvious to some of you, but the very first thing to do when building an apt package mirror is to list the mirrors you will need in your offline environment. If you do not know where you want to go, how in the world could you get there?
To find out which mirrors you might want to replicate, you can look at the mirrors listed as package sources for your package manager. `apt` keeps a list of all package sources in `/etc/apt`, distributed across multiple files. The main file is `sources.list`, which contains all default package sources. If you add custom ones, they will either be appended to this file, or be added in an extra file under `/etc/apt/sources.list.d`.
Let's look at the structure of an entry. It is made up of four parts, illustrated in the example below:
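For reference, here is what a full entry looks like when its parts (all taken from the breakdown that follows) are assembled:

```
deb http://fr.archive.ubuntu.com/ubuntu/ bionic main restricted
```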
- `deb` - the type of mirror this entry refers to (here, it's a Debian package mirror)
- `http://fr.archive.ubuntu.com/ubuntu/` - the URL of the mirror
- `bionic` or `bionic-updates` - the distribution to fetch in this mirror
- `main restricted` - the repositories to fetch from the distribution (here we get packages from the `main` and `restricted` repositories)

To list all package sources you wish to mirror, you will therefore need to write down these specs explicitly. Once you have that, let's move on to more technical steps. To build our mirror, we will use the Aptly tool.
Installing it is as easy as downloading it and moving it to a directory on your PATH.
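A minimal sketch of that installation, assuming the Linux amd64 build; the version number is an assumption, so check the Aptly releases page for the latest one:

```bash
# Version and archive layout are assumptions: see https://github.com/aptly-dev/aptly/releases
wget https://github.com/aptly-dev/aptly/releases/download/v1.4.0/aptly_1.4.0_linux_amd64.tar.gz
tar xzf aptly_1.4.0_linux_amd64.tar.gz

# Move the binary somewhere on your PATH
sudo mv aptly_1.4.0_linux_amd64/aptly /usr/local/bin/
```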
Now you can either use aptly's default config, or create a custom one. If you use a custom config file, you will be able to specify the location where the mirrored packages will be stored, the download speed limit, concurrency, and many other parameters. For the sake of simplicity, we will go with the default config in this article, which stores config and packages in your home directory, under `.aptly/`.
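If you do want a custom configuration, aptly reads it from a JSON file, by default `~/.aptly.conf`. A minimal sketch, using real aptly option names but illustrative values:

```bash
# Write a custom config; the keys are aptly options, the values here are examples
cat > ~/.aptly.conf <<'EOF'
{
  "rootDir": "/data/aptly",
  "downloadConcurrency": 4,
  "downloadSpeedLimit": 0
}
EOF
```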
To create a mirror, you can use the `aptly mirror create` command. This will create a local mirror that is linked to the public mirror directly, without downloading anything. In order to obtain a local copy of the packages, you will need to update your local mirror first, using `aptly mirror update`.
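A sketch of these two commands, reusing the Ubuntu bionic source from our example entry above; the mirror name `bionic-mirror` is just an illustration:

```bash
# Create a local mirror object pointing at the public archive (no download yet)
aptly mirror create bionic-mirror http://fr.archive.ubuntu.com/ubuntu/ bionic main restricted

# Actually download the packages into ~/.aptly
aptly mirror update bionic-mirror
```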
This will download all packages present in the public mirror to your local machine. Depending on the size of the repository you are replicating, this can end up taking hours. (In our example, using the Ubuntu bionic mirror, it will take several hours, as there are over 70 GB of packages to download.)
I recommend using a tool like `nohup` or `screen` to run this task in the background. Do not worry: in case of failure, running the update command again will figure out the deltas and finish downloading the missing packages.
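For instance, with `nohup`, reusing the example mirror name from above:

```bash
# Run the long download in the background, immune to hangups; output goes to nohup.out
nohup aptly mirror update bionic-mirror &
```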
The mirror we built is a useful tool to maintain and update your local mirror, but it is not enough to be used as an effective mirror from which you could install packages. Since a mirror in aptly is an object that you can update and change over time, it is not fit to be exposed reliably to users. Here comes the concept of snapshots.
Aptly allows you to take a snapshot of a mirror at a given moment in time. This is an immutable object, whose versioning you can manage more precisely, and it is what we will install packages from. We will need to publish it and expose the packages it contains using an HTTP server.
The first step here is to make a snapshot of our local mirror. This can easily be done using `aptly snapshot create`.
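A sketch of that step; both names are the illustrative ones used throughout this article:

```bash
# Freeze the current contents of the mirror into an immutable snapshot
aptly snapshot create bionic-snapshot from mirror bionic-mirror
```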
Publishing a snapshot can be a bit trickier. In fact, the packages you publish need to be signed with a GPG key, which is the standard signing method for packages as of today. This is important, because it enables the recipient of the data to verify that no modifications occurred after the data was signed. If you have ever added a new package repository to your computer, you probably went through some steps that involve configuring your local package manager to trust a given GPG key. Here is an example, taken from the Docker docs:
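Docker's installation instructions for Ubuntu included, at the time of writing, a step along these lines (`apt-key` has since been deprecated, but it is the mechanism this article relies on):

```bash
# Download Docker's public GPG key and tell apt to trust it
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```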
As you can see, to be able to download the Docker client from the Docker mirror, the first step is to download the public key and tell apt to trust it (using `apt-key add`). Then and only then can you start using Docker to build your app into containers.
So let's start by creating our very own GPG key, which we will use to sign the packages in our local mirror. You will be prompted to fill in a couple of fields while generating the GPG key; simply go through the process, and you should have an output looking like the one shown below. If you want to find out more about generating your own keys, check out the official docs!
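A sketch of the generation step, with illustrative output; the name, dates, and fingerprint below are made up:

```bash
# Generate a new key pair interactively (on GnuPG >= 2.1, --full-generate-key also works)
gpg --gen-key

# List your keys once generation is done
gpg --list-keys
# pub   rsa3072 2020-06-01 [SC] [expires: 2022-06-01]
#       3AB12C34DE56F789AB12C34DE56F789AB12C34DE
# uid           [ultimate] Your Name <you@example.com>
# sub   rsa3072 2020-06-01 [E] [expires: 2022-06-01]
```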
The field located above your `uid` is the ID of your key. We will need it in the following steps!
We can now publish our snapshot, signing packages with our very own GPG key!
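A sketch of the publish command; replace the key ID with the fingerprint from your own `gpg --list-keys` output, and the snapshot name with the one created earlier:

```bash
# Sign and publish the snapshot under ~/.aptly/public
aptly publish snapshot -gpg-key="3AB12C34DE56F789AB12C34DE56F789AB12C34DE" bionic-snapshot
```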
This will create a `public` directory in `~/.aptly`, whose contents are organized like so:
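Roughly like this, following our bionic example; the exact files may vary with your aptly version and published components:

```
~/.aptly/public
├── dists
│   └── bionic
│       ├── InRelease
│       ├── Release
│       ├── Release.gpg
│       ├── main/...
│       └── restricted/...
└── pool
    ├── main/...
    └── restricted/...
```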
All subdirectories of the `dists` folder contain the files that describe your distributions. The `pool` directory contains all the `.deb` packages from your snapshot.
There we go, our snapshot is officially published! To make it public, all that's left is to expose it.
To expose your local packages as a Debian mirror, all you need is an HTTP server capable of serving static files. Simply running `python -m http.server 8000` (Python 3.x) in the `public` directory is actually enough to have a fully exposed Debian mirror!
You could also start an Nginx server, using a config file such as the one below to expose it to a specific domain.
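A minimal sketch of such a config, written here as a shell snippet; the domain matches the one used in the next step, and the file paths and home directory are assumptions to adapt to your setup:

```bash
# Minimal Nginx vhost serving the published tree as static files
sudo tee /etc/nginx/sites-available/apt-mirror >/dev/null <<'EOF'
server {
    listen 80;
    server_name my.debian.mirror.domain.name;

    root /home/youruser/.aptly/public;
    autoindex on;
}
EOF
sudo ln -s /etc/nginx/sites-available/apt-mirror /etc/nginx/sites-enabled/
sudo systemctl reload nginx
```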
To make it easier for anyone to use your mirror, you should place your public GPG key at the root of your mirror, to be accessible at `my.debian.mirror.domain.name/gpg`.
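One way to do that, assuming the default aptly root and the key generated earlier:

```bash
# Export the public key in ASCII-armored form to the root of the published tree
gpg --export --armor "3AB12C34DE56F789AB12C34DE56F789AB12C34DE" > ~/.aptly/public/gpg
```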
If you do not own the domain and simply wish to test your mirror out locally, you can either use the `aptly serve` command, or mock the DNS by declaring this domain in your `/etc/hosts` file and having it point to your local machine.
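For example, with the illustrative domain from above:

```bash
# Option 1: let aptly serve the published repositories itself over HTTP
aptly serve

# Option 2: point the domain at your own machine
echo "127.0.0.1 my.debian.mirror.domain.name" | sudo tee -a /etc/hosts
```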
To use this mirror with your apt package manager, you have to trust its public key and add it as a package source. Once that's done, run `apt update`, and you should see your mirror among the listed sources!
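Putting it all together, under the assumptions made throughout this article (domain, distribution, and components):

```bash
# Trust the mirror's public key (same mechanism as the Docker example above)
curl -fsSL http://my.debian.mirror.domain.name/gpg | sudo apt-key add -

# Register the mirror as a package source
echo "deb http://my.debian.mirror.domain.name bionic main restricted" | sudo tee /etc/apt/sources.list.d/local-mirror.list

sudo apt update
```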
You're 100% ready to work in offline mode now, installing packages directly from your custom package source! This process could be the first step in deploying your solution on-premise in a secure offline environment.
If you liked this article, check out our blog, filled with DevOps articles and resources!