Removing old kernels in Debian or Ubuntu

When you install kernels, they need to be upgraded every once in a while, and while normal packages are replaced once they are upgraded, kernels are not: you can install a newer kernel and the old one will still be present. The reason for this is simple: if the updated kernel is not working correctly, you can use the previous kernel to boot your machine and continue working. However, when you have done this several times there will be a huge list of kernels, and these need to be removed every once in a while. I once made a script that removes obsolete kernels, but it was specific to Ubuntu and I needed something similar for Debian. So I completely rewrote the script, and this is the end result.

Include this in your .bashrc (or create a script for it) and use the following functions to remove kernels which are no longer needed (other than your running kernel). It will also remove any kernel header packages. It will not delete meta-packages for kernels, eg linux-image-generic or linux-image-generic-pae, unless the kernel you are running is not the most recent one. The script looks at the dependencies (including reverse dependencies) to avoid such deletions. I've tested it on Debian and Ubuntu. In case of bugs, let me know!

This script is now also featured in the Tutorials section of the Ubuntu Forums.

_egrep-pattern () {
        [ -z "$1" ] && return
        local regexp=$1
        shift
        # Join the remaining arguments into an alternation pattern: a|b|c
        local i
        for i in $*; do
                regexp="${regexp}|${i}"
        done
        echo ${regexp}
}

rmkernel () {
        local cur_kernel="$(uname -r)"
        local all_kernels="$(dpkg -l | grep -- linux- | egrep -- "-(image|headers|backports|modules)" \
        | grep "^ii" | awk '{print $2}')"
        local cur_kernel_pkgs="$(echo -e "$all_kernels" | grep "$cur_kernel$")"
        local cur_kernel_img=$(echo -e "$cur_kernel_pkgs" | grep image)
        local cur_kernel_hdr=$(echo -e "$cur_kernel_pkgs" | grep headers)
        if [ -z "$cur_kernel_img" ] ; then
                echo "Unable to get the current kernel package. What are you doing?" >&2
                echo "Possibly running in a chroot, either way aborting!" >&2
                return 1
        fi
        local cur_kernel_img_rdepends="$(apt-cache rdepends $cur_kernel_img | tail -n +3)"
        local cur_kernel_hdr_rdepends
        local cur_kernel_hdr_depends
        if [ -n "$cur_kernel_hdr" ] ; then
                cur_kernel_hdr_rdepends="$(apt-cache rdepends $cur_kernel_hdr | tail -n +3)"
                cur_kernel_hdr_depends="$(apt-cache depends $cur_kernel_hdr | egrep 'linux-(image|headers)' | grep Depends | awk '{print $2}')"
        fi
        local keep="$(_egrep-pattern $cur_kernel_pkgs $cur_kernel_img_rdepends $cur_kernel_hdr_rdepends $cur_kernel_hdr_depends)"
        if [ -z "$keep" ] ; then
                echo "Unable to determine which kernels to keep!" >&2
                echo "This should not happen in normal circumstances" >&2
                return 1
        fi
        local remove="$(echo -e "$all_kernels" | egrep -v "^($keep)$")"
        local rm_modules="$(find /lib/modules -mindepth 1 -maxdepth 1 -type d ! -name ${cur_kernel})"
        if [ -n "$remove" ] || [ -n "$rm_modules" ] ; then
                echo "Going to remove the following kernel packages:" $remove
                echo "Going to remove the following kernel modules directories afterwards:" $rm_modules
                echo "Waiting 5 seconds for ctrl-c to abort mission"
                sleep 3
                echo -n ..2
                sleep 1
                echo -n ..1
                sleep 1
                echo "..Start removal"
                if [ -n "$remove" ] ; then
                        sudo aptitude -y purge $remove
                fi
                [ -n "$rm_modules" ] && sudo rm -rf $rm_modules
        fi
        return 0
}
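To make the keep/remove logic concrete, here is the core filtering step in isolation. The package names below are hypothetical, and the keep pattern is what _egrep-pattern would produce from the packages belonging to the running kernel:

```shell
# Hypothetical installed kernel packages; rmkernel gets this list from dpkg -l
all_kernels="linux-image-2.6.32-20-generic
linux-image-2.6.32-21-generic
linux-headers-2.6.32-21-generic"

# Alternation pattern as built by _egrep-pattern for the running kernel
keep="linux-image-2.6.32-21-generic|linux-headers-2.6.32-21-generic"

# Anything not matching the keep pattern is a removal candidate
remove=$(echo "$all_kernels" | egrep -v "^($keep)$")
echo "$remove"
```

Only the old 2.6.32-20 image is left over, which is exactly the list rmkernel hands to aptitude purge.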

What can we learn from fashion?

Pirate's dilemma logo

In the following 15 minute TED talk Johanna Blakely discusses how other industries can learn from the fashion industry. She has an interesting opinion on how industries without copyright protection flourish with creativity. She debunks the often heard statement “Without ownership there is no incentive to innovate” with both economic and creative examples.

In the fashion industry, where there is little copyright protection (but loads of trademark protection) because clothing is considered utilitarian, there is an open and creative ecology. Designers can sample from each other and recreate and sell new clothes based on older works, from any era. To quote Johanna, it is a "culture of copying". Every designer in the industry copies from other designers. And most, if not all, designers get their main inspiration from the street: they copy everyday people like you and me and sample, remix and match the things they like best to create something new and fresh.
Copies are not only made by fellow designers, they are also made by big corporations like H&M, Zara, etc. These corporations create copies of high-end designs which they sell at low-end prices. One would expect this to hurt the industry, but it doesn't. As Gucci's lead designer was quoted: "The customer buying the knock-off is not our customer". The knock-off product and the original product attract different demographics.

The virtues of copying within the fashion industry are, according to Johanna, that fashion is democratic. Both designers and consumers have choice and can create and wear what they like. Fashion helps you project who you are to the world. Because of this, trends move quickly: you have people who want to use the trend to move products (like H&M, for example) and you have people who want to innovate and be trend-setters. Designers are pushed creatively by the trend-setters to come up with new and exciting designs and to prove to the critics that they are the top designers for a reason.

Designers copy themselves too: they work together with big shops like H&M, create cheaper versions of their high-end products and sell them to another demographic. They don't fight copiers, they cooperate with them to get into a new market.

The talk becomes even more interesting when she shows sales figures of the industries which don't have copyright protection next to the industries which do. The slide speaks for itself: the protected industries don't sell much, whereas the industries where copyright protection is non-existent have huge sales figures.

Growth figures of various industries

So why is this? According to Johanna, the protected industries now find themselves operating in an atmosphere where they effectively have no protection, and they don't know what to do: their material gets copied, remixed and sampled, and they feel there is no incentive left to create new material.

Now let’s put this into perspective. Why are music, movies/films, art copyright protected? Because they are not ideas, they are goods (or at least the CD/DVD which contains the music or movie). Due to the electronic age that we now live in, art, music and films are electronic files and are not goods like they were 15 years ago. They now act like ideas, which circulate the globe and can be used as utilities to express oneself just like fashion does.

Graph of why something could have copyright

So what can we learn from the fashion industry? According to the talk, the movie, music and book industries can learn that innovation and creativity are triggered by a lack of copyright laws (the same goes for patent laws). Businesses can be booming without having to worry about lawsuits over copyright and/or patent infringements. Artists and companies alike can find new market segments instead of frustrating the platforms which innovate and distribute and/or copy their materials. The fashion industry has shown this, as seen by the sales figures and the creative process we see in that market. It is time for others to do the same and have a look at their current business models: learn from other markets which are more creative and, in the end, more profitable.

Maintaining Ubuntu: upgrade vs reinstall

Upgrading vs reinstalling Ubuntu when a new version arrives. Many users on the Ubuntu Forums believe that reinstalling is a better option than upgrading. The primary reasons are:

  • Look at the amount of topics on UF on problems with upgrading
  • An upgrade is more error prone
  • An upgrade bloats your machine
  • You don’t lose any settings (if you have a separate slice for your /home mount).
  • An upgrade takes longer than reinstalling
  • You don’t get the new features of the new release with an upgrade

The advice is often given with the best of intentions, but it is (in my opinion) ill-founded. In certain cases a reinstall might be a better option than upgrading, eg if you have to upgrade more than 5 times to get to the most current release. Mind you: most current release. Ubuntu has a minimum of four stable releases at any given point; at the time of writing it is five: 6.06 LTS, 8.04 LTS, 8.10, 9.04 and 9.10. A popular Dutch web community recently upgraded from Ubuntu 6.10 to 9.10 in less than a day.

Look at the amount of topics on UF on problems with upgrading

Kind of a non-argument if you ask me. Look at the number of topics about failed installations and you have an argument not to install Ubuntu at all. Besides, looking on a technical support forum for threads from happy users who have performed a successful upgrade is pointless. People with problems come to these forums, as do people who want to help, but the people who encountered no issues at all during an upgrade will not post threads saying the upgrade worked perfectly.

An upgrade is more error prone

I really want to know why people say this. Only once have I encountered an upgrade failure, when I upgraded from 8.04 to 8.10 on the release date itself. It was an xorg/intel bug which would have made it to my system regardless of upgrading or reinstalling Ubuntu. If a bug is in a package, it will be present no matter how you installed that particular package. The snapshot which is used for the CD/DVD images uses the same packages as the online archives.

Incorrectly maintaining Ubuntu will lead to problems sooner or later, so yes, if you messed it up beyond repair, then an upgrade could make things worse (although an upgrade fixed my sound issues going from 6.10 to 7.04).

An upgrade bloats your machine

Partly true: some packages generate files after installation (presumably via their post-install scripts), and those files are not known to apt, so when you purge such packages the generated files are not removed. Apt warns you that a directory is not empty and will not purge it; you have to remove it yourself.

Ever noticed that clean installations will create so called “bloat” or “cruft”? Install the cruft package (sudo aptitude install cruft) and run it as root. You can exclude some directories if you want, eg /opt if you compiled software yourself and installed it in /opt. Or your /home partition/slice:

sudo cruft  --ignore /opt --ignore /home

Perform this on a clean install. I’ll give you a free beer if you don’t find any cruft.

You can also install deborphan (sudo aptitude install deborphan) to see which (old) libraries are possible candidates for removal. I never ran this on a clean system, so perhaps some of those libraries could be considered cruft as well, although I never investigated this.

Then there is aptitude, which can show you obsolete packages in its interactive mode, along with the number of obsolete or locally created packages. Removing them is easy as pie: select them with aptitude, mark them for removal (underscore), and hit g.

Most of my obsolete packages are kernels which go back to Jaunty.

All the cruft problems presented here are not due to a distribution upgrade (apt-get dist-upgrade/aptitude full-upgrade); they can happen on any installation where packages are added and removed. Checking for this on a regular basis will reduce the amount of decision making when upgrading.

An upgrade takes longer and you don't lose any of your settings (provided you have a separate mount for /home)

This all depends on how your system is configured. I will lose settings even if I have /home on a separate mount. Some applications keep configuration files in /etc (postfix, bind, apache, php and others), which you cannot move off the root disk, since /etc is required at boot (/etc/init.d, /etc/fstab, etc). Reinstalling will overwrite those files, and if you format the root disk prior to reinstalling (which you want to do for the cruft removal process ;) ) all your changes are gone. So you do lose settings!

Takes longer? Perhaps, if you have to do several incremental upgrades (eg 8.10 to 10.04, which requires an upgrade to 9.04, 9.10 and finally to 10.04). BTW, 10.04 is the current development release, so please don't upgrade to that release just yet, unless you want to report/triage/fix bugs. The upgrade itself may take a bit longer than the reinstall, BUT you don't lose data. You have your configuration files in /etc, and the upgrade process will not touch your homedir: NO loss of data. And the upgrade process does not involve having to set up your machine again (installing additional packages, configuring those packages); the reinstall option does require that additional time afterwards. I didn't use a stopwatch, but I'm more than convinced that for a single upgrade or reinstall the time it takes is about equal. And perhaps in favor of the upgrade.

You don’t get the new features of the new release with an upgrade

Come again? I have grub2 and I have ext4 support on my Karmic box without a reinstall (coming from Hardy). apt-cache policy will help me make my point. I have lucid, karmic and jaunty enabled in my sources.list, which shows how the upgrade works:

$ apt-cache policy firefox
  Installed: 3.5.6+nobinonly-0ubuntu1
  Candidate: 3.5.6+nobinonly-0ubuntu1
  Version table:
 *** 3.5.6+nobinonly-0ubuntu1 0
        990 lucid/main Packages
        100 /var/lib/dpkg/status
     3.5.6+nobinonly-0ubuntu0.9.10.1 0
        990 karmic-security/main Packages
        990 karmic-updates/main Packages
     3.5.3+build1+nobinonly-0ubuntu6 0
        990 karmic/main Packages
     3.0.16+nobinonly-0ubuntu0.9.04.1 0
        500 jaunty-security/main Packages
        500 jaunty-updates/main Packages
     3.0.8+nobinonly-0ubuntu3 0
        500 jaunty/main Packages

The Jaunty version is 3.0.8, and when it received updates it became 3.0.16, as you can see from the updates and security repositories. For this upgrade no one would reinstall Ubuntu. However, for the upgrade to Firefox 3.5.x in Karmic, people somehow think reinstalling Ubuntu is the correct way. You trust Ubuntu to upgrade within a release, but not to a new release, while it uses the same process and principles? Makes no sense at all…

I feel a question popping up: but the new versions require packages X, Y and Z, and those were not available in the previous release, now what? Most of the time these packages are part of a bigger package (aka a meta-package) like the ubuntu-desktop package. If you have that particular package installed on your old release, it will be upgraded to the newer version, which has dependencies on the new X, Y and Z packages. With aptitude safe-upgrade these packages will be installed automatically; apt-get is a bit different (it requires dist-upgrade for this to work). However, the Ubuntu upgrader (update-manager/do-release-upgrade) takes care of this, so you don't have to worry about it.

The Depends and Conflicts sections of the Debian package control files (visible by running apt-cache show packagename) make sure conflicting packages are removed and dependencies are respected. Upgrading such a package will remove the conflicting packages and make sure the dependencies are installed. An example of this can be found in this bug report. As a fix I've added a dependency on python2.5 to the gdesklets package, and when I upgrade that package it will now install python2.5 if not installed already, whereas the previous version did not install the dependency:

$ apt-cache policy gdesklets
  Installed: 0.36-5ubuntu1
  Candidate: 0.36-5ubuntu1
  Version table:
     0.36.1-2ubuntu1 0
        500 karmic/universe Packages
     0.36-5ubuntu2 0
        500 jaunty/main Packages
 *** 0.36-5ubuntu1 0
        995 jaunty/universe Packages
        100 /var/lib/dpkg/status

$ sudo aptitude install gdesklets=0.36-5ubuntu2
Reading package lists... Done
Building dependency tree
Reading state information... Done
Reading extended state information
Initializing package states... Done
The following NEW packages will be installed:
  python2.5{a} python2.5-minimal{a}
The following packages will be upgraded:
  gdesklets
1 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.

The same will happen during a release upgrade, and some packages may be removed because they are replaced by others.

Sometimes new features are introduced but you don't see them. You could remove your .gnome directory (back it up first in case you want to go back), or .kde if you use KDE like me, and then log in again to see what changes the default Ubuntu (or Kubuntu) installation has introduced. These are changes you might also miss when you do a fresh install while keeping your old homedir, so in this respect there is no difference between upgrading and reinstalling.

My verdict..

Upgrading is a viable option to get to a newer release of Ubuntu. Reinstalling has its advantages too, and I have done my share of reinstalls, don't worry :) The upgrade process is supported by Canonical by means of the update-manager and update-manager-core packages (the latter provides do-release-upgrade), and you can also upgrade Ubuntu just like you upgrade Debian (a bit more manual work, but the exact same end result).

There are some cases where one might prefer a reinstall: when you change your partition layout, or when something is so broken that the quickest way out is reinstalling, or whatever your reason may be. But for me the obvious choice is simply upgrading with the tools which are meant to do it: update-manager/do-release-upgrade/aptitude/apt-get/synaptic and perhaps even Software Center (although I never tried that). And if you still don't believe me, please have a look at these two guides I wrote on the community wiki: Upgrading supported versions of Ubuntu and Upgrading unsupported versions of Ubuntu. You could also have a look at this blogpost, and read this blog for a view on reinstalling over upgrading.

And, BTW, whatever your choice will be, remember to back up any important files or your complete disk before doing an upgrade or a reinstall. Better safe than sorry. I'd recommend clonezilla for backing up your data.

Chrooting with and without upstart in Bash


I’ve posted a chroot script in the past which dealt with upstart. However, if you also have an OS which does not use upstart, you need to change the source of the chroot script. So I decided to make a script which does the general things and reads a config file per chroot environment. I need two chrooted environments for my dual-boot laptop: one for Debian and one for Ubuntu.

Let’s start with creating some directories for our chroot script and the configuration files. Btw, I will use vi in my examples for editing files; use any editor you like, eg nano, gedit, kate, emacs, etc.

# Create /home/youruser/chroot/etc
mkdir -p $HOME/chroot/etc
cd $HOME/chroot

Create the Debian chroot configuration.

vi etc/debian.conf
# This file is read as a shell script
## Debian
# What is the mount point of the OS?
mnt="/mnt/debian"
# Which device holds the root filesystem? (adjust to your disk layout)
root_disk="/dev/sdaX"
# Mount various things
# I have separate mount points for /var/log, /home and /opt,
# so include them in the chroot.
mnt_devices_start="proc dev dev/pts sys var/log home opt"
# 1 if your OS uses upstart or 0 if it doesn't.
upstart=0
# Command to start chroot with
# I use my own account to enter the chroot
chroot="su - youruser"

And the Ubuntu config.

vi etc/ubuntu.conf
# This file is read as a shell script
## Ubuntu
mnt="/mnt/ubuntu"
root_disk="/dev/sdaY"
mnt_devices_start="proc dev dev/pts sys var/log home opt"
upstart=1
chroot="su - youruser"

Then we will create our chroot script. In it we define what we want to mount at start/stop; the stop list is simply the start list reversed (tac -s ' ' does the trick, or you could use an array :) ).


#!/bin/bash
# Fallback, normally overridden by the sourced configuration file
mnt_devices_start="proc dev dev/pts sys var/log home opt"

SELF=$(basename $0)
SELF_DIR=$(dirname $0)

usage() {
   echo "$SELF [chrootname] <start|stop>"
   exit 0
}

# The configuration is picked by the first argument, or by the name this
# script was invoked as (eg a symlink named "ubuntu" or "debian").
conf="$SELF"
if [ -n "$1" ] && [ -f "$SELF_DIR/etc/$1.conf" ] ; then
    conf="$1"
    shift
fi
if [ ! -f "$SELF_DIR/etc/$conf.conf" ] ; then
    echo "chroot '$conf' does not exist!" >&2
    exit 1
fi
source "$SELF_DIR/etc/$conf.conf"

if [ -z "$mnt_devices_stop" ] ; then
    # Unmount in the reverse order of mounting
    mnt_devices_stop=$(echo $mnt_devices_start | tac -s ' ')
fi

# Current mounts, plus the resolv.conf paths inside the chroot
mounted="$(mount)"
resolv="$mnt/etc/resolv.conf"
resolv_o="$mnt/etc/resolv.conf.orig"

start() {

    [ ! -d "$mnt" ] && sudo mkdir -p "$mnt"

    echo -e "$mounted" | grep -q "$mnt"

    if [ $? -ne 0 ] ; then
        sudo mount $root_disk "$mnt"
        if [ $? -ne 0 ] ; then
            echo "Unable to mount '$root_disk' on '$mnt'" >&2
            exit 1
        fi
    fi

    local i
    for i in $mnt_devices_start ; do
        # Skip anything that is already mounted
        echo -e "$mounted" | grep -q "$mnt/$i"
        [ $? -eq 0 ] && continue
        sudo mount -o bind /$i "$mnt"/$i
    done

    # Back up the chroot's resolv.conf once, then copy in the host's
    if [ ! -e "$resolv_o" ] ; then
        sudo cat "$resolv" | sudo tee "$resolv_o" >/dev/null
    fi
    cat /etc/resolv.conf | sudo tee "$resolv" >/dev/null

    if [ $upstart -ne 0 ] && [ ! -e "$mnt/sbin/initctl.distrib" ] ; then
        sudo chroot "$mnt" dpkg-divert --local --rename --add /sbin/initctl
        sudo chroot "$mnt" ln -s /bin/true /sbin/initctl
    fi
    eval sudo chroot "$mnt" "$chroot"
}

stop() {

    [ ! -e "$mnt" ] && return

    if [ "$upstart" -ne 0 ] && [ -e "$mnt/sbin/initctl.distrib" ] ; then
        sudo chroot "$mnt" rm /sbin/initctl
        sudo chroot "$mnt" dpkg-divert --local --rename --remove /sbin/initctl
    fi

    if [ -e "$resolv_o" ] ; then
        # Restore the chroot's original resolv.conf
        cat "$resolv_o" | sudo tee "$resolv" >/dev/null
        sudo rm "$resolv_o"
    fi

    local i
    for i in $mnt_devices_stop; do
        echo -e "$mounted" | grep -q "$mnt/$i"
        [ $? -ne 0 ] && continue
        sudo umount "$mnt/$i"
    done

    echo -e "$mounted" | grep -q "$mnt"

    if [ $? -eq 0 ] ; then
        sudo umount "$mnt"
        [ $? -ne 0 ] && return
        sudo rmdir "$mnt"
    fi
}

case $1 in
    start|stop) $1;;
    *) usage;;
esac
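The reverse-the-list trick uses tac with a space as the record separator. A quick standalone check (the list contents don't matter):

```shell
list="proc dev dev/pts sys var/log home opt"
# tac -s ' ' reverses space-separated records; the unquoted expansion
# below squashes the stray newline tac leaves in the result
reversed=$(echo $list | tac -s ' ')
echo $reversed
```

This prints the mount points in reverse order, ready for unmounting.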

To be able to call the script from anywhere, make it executable (chmod +x) and add its directory to your path. I also symlink the script as ubuntu and debian (one symlink per config file) so each chroot can be called by name.

vi $HOME/.bashrc # or .zshrc, or whatever your shell requires
export PATH=$PATH:$HOME/chroot
# Load the new .bashrc
source $HOME/.bashrc
# or just set it for the current shell:
export PATH=$PATH:$HOME/chroot

Now you can start/stop a chroot with:

ubuntu start
ubuntu stop
# or Debian
debian start
debian stop

I hope you find this useful :) .

Using the preferences file on Debian and Debian based distributions

In Debian and Ubuntu (and other Debian-based distributions) you can use the /etc/apt/preferences file to manage the various repositories. In the preferences file you can define whether packages should be installed from a certain repository defined in /etc/apt/sources.list and/or /etc/apt/sources.list.d/*.list. I will use Debian in my examples and will mention Ubuntu when it requires some extra attention.

Why would one use the preferences file?

Good question. I started using it when I wanted to install a teamspeak server on Debian Etch (the stable release back then) and the package was only available in Debian Lenny (testing). I do not download packages manually, I prefer apt to take care of it. But if I simply added the Lenny repositories, my whole machine would be upgraded to testing, which is something I did not want. The preferences file allows you to mix your system, and to tell apt in which cases packages may be upgraded (or not).

How does it work?

Normally, without a preferences file, all packages/repositories have a priority of 500 and installed packages have a priority of 100.
In Debian you can define a default/target release, eg lenny, with APT::Default-Release "lenny"; in /etc/apt/apt.conf. This will make sure packages from that release get a priority of 990. Doing this in Ubuntu is rather useless: the Ubuntu repositories are CODENAME-(updates|security|proposed|backports), and the default release doesn't include these repositories. Because of this I don't use this directive; I set the 990 priority in my preferences file instead (more on these numbers later).

You can look at the priorities of your repositories by running apt-cache policy or apt-cache policy packagename.

$ apt-cache policy hello
  Installed: (none)
  Candidate: 2.2-2
  Version table:
     2.4-3 0
        500 testing/main Packages
        500 unstable/main Packages
     2.2-2 0
        990 lenny/main Packages

How does this influence package installations?

Apt normally determines, based on the version of a package, whether it should be installed. So if a repository has a higher version of an installed package, apt will install that version. But once we have told apt what our default release is, apt will only update/install packages from that repository, with the exception of packages which are not present in your default release repositories. If you track multiple repositories, the default release alone will not be enough, and that is where the preferences file comes in. You can find the preferences file at /etc/apt/preferences, or not, since it doesn't exist by default. In newer versions of apt you can also find the /etc/apt/preferences.d/ directory; the concept is similar to /etc/apt/sources.list and /etc/apt/sources.list.d. If you are using aptitude, please be aware that older versions ignore /etc/apt/preferences.d/*. That bug is resolved in aptitude version 0.6.3-3.2 (apt-cache policy aptitude will show you the installed version), which is available in Debian Squeeze and Ubuntu Natty (11.04).

In the preferences file you can tell apt how to deal with various repositories, eg mixing releases or using PPA’s with Ubuntu.

This would be my default preferences file if I were running Debian stable. Please note that apt now supports # for comments; previously one would use Explanation: lines.

# Give preference to stable, then testing and finally unstable
# a=stable,n=lenny could also be a=stable, but you don't want to upgrade
# stable once testing becomes stable without you knowing about it.
# If you use testing/unstable feel free to pick any :) 
# Ubuntu users can use a=CODENAME, where codename is
# dapper, hardy, intrepid, jaunty, karmic and all CODENAME-repos,
# eg hardy-updates, hardy-security, hardy-backports and hardy-proposed
Package: *
Pin: release a=stable,n=lenny
Pin-Priority: 990

Package: *
Pin: release a=testing
Pin-Priority: 600

Package: *
Pin: release a=unstable
Pin-Priority: 300

When running testing, I would set lenny to 300, testing to 990 and unstable to 600. If you wonder what the numbers mean, I've copied this from the apt_preferences(5) man page:

P > 1000        causes a version to be installed even if this constitutes a downgrade of the package
990 < P <= 1000 causes a version to be installed even if it does not come from the target release, unless the installed version is more recent
500 < P <= 990  causes a version to be installed unless there is a version available belonging to the target release or the installed version is more recent
100 < P <= 500  causes a version to be installed unless there is a version available belonging to some other distribution or the installed version is more recent
0 < P <= 100    causes a version to be installed only if there is no installed version of the package
P < 0           prevents the version from being installed

Pinning packages

You can also pin packages from specific releases or versions.

Package: hello
Pin: release n=lenny
Pin-Priority: 995

# Or to a specific version
Package: hello
Pin: version 2.2-2
Pin-Priority: 990

# Or to anything in version 2.2, eg 2.2-4
Package: hello
Pin: version 2.2*
Pin-Priority: 990

You can see the result here:

$ apt-cache policy hello

  Installed: (none)
  Candidate: 2.2-2
  Package pin: 2.2-2
  Version table:
     2.4-3 990
        600 testing/main Packages
        990 unstable/main Packages
     2.2-2 990
        990 lenny/main Packages

Setting a package to a Pin-Priority above 1000 will force a downgrade when the version you want installed is lower than the currently installed version. If you do this, please execute aptitude -s install PACKAGENAME first to see the consequences of that action (-s means simulate). Please note that the package version pin is applied to all matching versions; see this Debian bug comment for more information.

Third-party repositories

Third-party repositories could interfere with your regular preferences file: they may update packages which you don't want updated. To remedy this, you can use apt-cache policy to gather some information about the repository.

$ apt-cache policy
# truncated for readability
 500 jaunty/main Packages
     release v=9.04,o=LP-PPA-ultrafredde,a=jaunty,n=jaunty,l=Ubuntu,c=main
# Everything from launchpad
Package: *
Pin: origin
Pin-Priority: 600

# One particular PPA
Package: *
Pin: release o=LP-PPA-ultrafredde
Pin-Priority: 600

You can now install packages and be sure they are not upgraded when you don’t want them to. Just fiddle with your preferences file and you can mix and match your system to your liking.

If you want to install packages which would normally not be installed, you can force aptitude to install other versions:

# Based on version, will install regardless of preferences file (assume prio 999)
aptitude install hello=2.4-3
# Based on release, will install with prio 990
aptitude install -t stable hello
# Based on release, will install regardless of preferences file (assume prio 999)
aptitude install hello/testing

We are now going to deny package installations from all launchpad origins but allow task to be installed. Task comes from a launchpad PPA.

# Deny everything from launchpad
Package: *
Pin: origin
Pin-Priority: -10

# Allow task
Package: task
Pin: origin
Pin-Priority: 990

# Or based on the PPA itself:
Package: task
Pin: release o=LP-PPA-ultrafredde
Pin-Priority: 990

With a few small lines you can maintain a mixed system: Ubuntu 8.10 mixed with some 9.04 packages, or Debian testing with unstable, or the other way around. And you can add PPAs without having to worry about whether they will upgrade packages that you don't want upgraded.

Fixing library issues on Ubuntu Lucid (10.04 development release)

Two weeks ago I encountered rather strange problems with my Lucid development installation. I had some issues with KDE and was hoping an upgrade would fix the bugs I had. I don't know what happened, but something borked, and badly.
I saw some weird things in my logs: a bluetooth library file was missing. In itself not a real issue, since I hardly use bluetooth, but I shouldn't see those errors at boot. So I wanted to figure out which package was responsible, and that was when I saw shit had hit the fan: apt itself was also broken.

$ sudo aptitude update
aptitude: error while loading shared libraries:
cannot open shared object file: No such file or directory
$ ldd $(which aptitude)
         =>  (0xb76e9000)
         => not found
         => /lib/ (0xb768a000)
         => /usr/lib/ (0xb7682000)
         => /usr/lib/ (0xb75c4000)
         => /usr/lib/ (0xb7553000)
         => /usr/lib/ (0xb740a000)
         => /lib/ (0xb73f5000)
         => /lib/tls/i686/cmov/ (0xb73dc000)
         => /usr/lib/ (0xb72e8000)
         => /lib/tls/i686/cmov/ (0xb72c2000)
         => /lib/ (0xb72a4000)
         => /lib/tls/i686/cmov/ (0xb7160000)
         => /lib/tls/i686/cmov/ (0xb715c000)
        /lib/ (0xb76ea000)
         => not found

$ apt-file search
Can't load '/usr/lib/perl5/auto/AptPkg/' for module AptPkg: cannot open shared object file: No such file
or directory at /usr/lib/perl/5.10/ line 196. at
/usr/lib/perl5/AptPkg/ line 8
Compilation failed in require at /usr/lib/perl5/AptPkg/ line 8.
BEGIN failed--compilation aborted at /usr/lib/perl5/AptPkg/ line 8.
Compilation failed in require at /usr/bin/apt-file line 28.
BEGIN failed--compilation aborted at /usr/bin/apt-file line 28.

The file was a broken symlink to in /usr/lib

I asked for some help in #ubuntu+1 (the Ubuntu development irc channel on Freenode) and yofel asked me to install the debsums package to check my apt package. Dpkg was still working, so I grabbed the Debian testing/unstable version of debsums, since was down at the time and installed it (sudo dpkg -i debsums_2.0.47_all.deb). debsums -c apt showed that the file was indeed missing. I decided to check if more packages had the same issue.

$ mkdir tmp

# This might take a while..
$ sudo debsums -c &>$HOME/tmp/debsums-check.log
$ grep "missing" $HOME/tmp/debsums-check.log > $HOME/tmp/missing-files

# Count the missing files
$ wc -l $HOME/tmp/missing-files

$ awk '{NF=NF-1} {print $NF}'  $HOME/tmp/missing-files | \
sort -u > $HOME/tmp/missing-files-pkgs
# Count the packages
$ wc -l $HOME/tmp/missing-files-pkgs

# Reinstall the packages with missing files
$ sudo aptitude reinstall $(cat $HOME/tmp/missing-files-pkgs)

The numbers of this operation: 602 files missing from 224 packages.
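The awk snippet above works because setting NF=NF-1 drops the last field, so $NF then points at the second-to-last field of the original line, which for debsums' missing-file lines is the package name. A quick check with a made-up sample line (the exact debsums wording and file name are hypothetical):

```shell
# Hypothetical debsums output line
sample='debsums: missing file /usr/lib/ (from somepackage package)'
# Drop the trailing "package)" field, then print the new last field
echo "$sample" | awk '{NF=NF-1} {print $NF}'
```

Sorting the result with sort -u, as in the pipeline above, gives one line per affected package.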

I also saw that some packages were missing checksums. The following function will make sure that missing checksums will be generated:

gen_sums() {
    # Packages whose checksum files are missing
    pkg=$(debsums -l)
    [ -n "$pkg" ] && sudo aptitude --download-only reinstall $pkg
    sudo debsums --generate=keep,nocheck /var/cache/apt/archives/*.deb
}
I had my machine back, but KDE was still an issue. That was fixed by the first Alpha release of Lucid.