=== mruiz_away is now known as mruiz === bmk789_ is now known as bmk789 [05:10] hi === asac_ is now known as asac [15:55] * pitti waves [15:56] * InsClusoe waves back.. [15:56] * db-keen stands aghast to be in the presence of InsClusoe [15:57] db-keen: Pls don't say such things and embarass me... [15:59] * pitti rings the schoolbell [15:59] welcome! [15:59] so, who is here to learn about patching packages? [16:00] and how has the week been so far for you? [16:00] +1 [16:00] o/ [16:00] I'm no programmer, just curious [16:00] +1 === warp10 is now known as warp10_ [16:00] Haven't had much time to follow other lessons, but plan to do so today. Great sessions ahead. :) [16:00] hey hey [16:01] * warp10_ raises an hand [16:01] cool, relatively quiet === warp10_ is now known as warp10 [16:01] Welcome to the hands-on training session about how to patch Ubuntu source packages! [16:01] This assumes that you already know what a patch is and how to handle .patch files in general (i. e. for upstream authors). Also, you should have a rough idea what an Ubuntu source package is. [16:02] The purpose of this session is to teach you how to put a patch into an Ubuntu source package, since unlike rpm this is not very consistent still. [16:02] While I do some introduction, please install some packages and sources on your box which we will need for the training: [16:02] sudo apt-get install dpatch cdbs quilt patchutils devscripts debhelper fakeroot [16:02] mkdir training; cd training [16:02] apt-get source cron udev pmount ed xterm [16:02] wget http://people.ubuntu.com/~pitti/scripts/dsrc-new-patch [16:02] chmod 755 dsrc-new-patch [16:02] I deliberately picked the smallest packages I could find [16:03] this can run in the background while I continue [16:03] if anyone has any question, or I'm totally uncomprehensible (sorry for my English, I'm German), please do not hesitate to interrupt and ask *immediately* [16:03] Also, don't bother trying to take notes, we'll sort that out at the end. You can fully concentrate on the discussion and examples. [16:03] ok everyone? [16:04] +1 [16:04] Let's begin with a little bit of history: [16:04] ok! [16:04] Almost set. [16:04] done [16:04] == Why use separate patches == [16:04] In earlier times, people just applied patches inline (i. e. directly in the source code tree). However, this makes it very hard to extract patches later to modify them, send them upstream, etc. Also this means that new upstream versions are a pain, since they generate a lot of rejections when applying the package diff.gz to them. [16:04] With split-out patches it is much easier to send them upstream, keep track of them, develop them, etc., since you always see which changes belong together. [16:05] The ideal state is an unmodified tarball from upstream, plus clean and separate patches, plus the packaging bits in debian/. That means that lsdiff -z .diff.gz only contains debian/. [16:05] The first attempts to split-out patches were pretty trivial: storing patches in debian/patches/, and adding some patch/patch -R snippets to debian/rules. This worked for small patches, but provided no tools for editing these patches, updating them for new upstream versions, etc. [16:05] (debian/rules is the central script for building a package, in case you don't know) [16:06] Thus several standard patch systems were created which are easy to deploy and provide tools for patch juggling and editing. 
[16:06] -- [16:06] What I would like to do now is to introduce the most common patch systems and show a hands-on demo of how to add a new patch and how to edit one. [16:06] For this, I will point at a source package from the current gutsy or hardy archive, quickly explain the patch system, and show how to apply some (braindead) modifications to it. [16:06] I recommend you do the same steps in a terminal, so that you get a feeling for the process and can immediately ask questions. [16:06] is everyone fine with this approach? [16:07] k [16:07] yep! [16:07] Sure. [16:07] +1 [16:07] == cron: inline patches == [16:07] No patch system at all, nothing much to say about this. So this is an example of the "old" style of packaging, a counter-example [16:07] You directly edit the files in the source tree. [16:07] This is convenient for a simple and quick change, but will bite back for new upstream versions (see above) and is inconvenient for submitting patches upstream, or reviewing for merges. [16:08] if you do 'lsdiff -z .diff.gz' and you see changes which are not in debian/, then you probably have such a package [16:08] i. e. in this case lsdiff -z cron*.diff.gz [16:08] (you can also take a look at it to see how all patches are lumped together) [16:08] so, I think I do not need to say anything else about cron, unless someone has a question [16:09] == udev: separate patches, but no standard patch system == [16:09] This case is the most complicated one since you have to do all the hard work manually. [16:09] In order to make you understand what a patch system does, and to give you a fallback method that will *always* work with any patch system, I handle this first. [16:09] The good news is that you will seldom be required to actually do this procedure, since for many packages there are nice tools which make things a charm. [16:09] The bad news is that it may seem utterly complicated for people who never did it before, but I would like you to understand what's actually going on behind the curtains of the tools. [16:10] BTW, when I'm too fast, don't hesitate to ping [16:10] So please do not despair if you do not fully understand it at first; there's written documentation and you can always take your time to grok it. [16:10] The general approach, which you can print out and hang over your desk :) is: [16:10] 1. copy the clean source tree to a temporary directory /tmp/old [16:11] 2. apply all patches up to the one you want to edit; if you want to create a new patch, apply all existing ones (this is necessary since in general patches depend on previous patches) [16:11] 3. copy the whole source tree again: cp -a /tmp/old /tmp/new [16:11] 4. go into /tmp/new, do your modifications [16:11] 5. go back into /tmp and generate the patch with [16:11] diff -Nurp old new > mypatchname.patch [16:11] 6. move the newly generated patch to debian/patches/mypatchname.patch [16:12] in general we want the following diff options: [16:12] -N -> include new files [16:12] -u -> unified patches (context diffs are ugly) [16:12] -r -> recursive [16:12] -p -> bonus, you can see the name of the affected function in the patch [16:12] does anyone have a question about the general method? [16:12] (I'll do a hands-on example now) [16:13] All good here.
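The six numbered steps above, condensed into one copy-and-pasteable shell sketch (the patch name is a placeholder, and the name of the patch target in debian/rules varies from package to package):

    # run from inside the unpacked source tree
    cp -a . /tmp/old                            # 1. clean reference copy
    pushd /tmp/old
    debian/rules patch                          # 2. apply existing patches (target may also be setup, apply-patches, ...)
    cp -a . /tmp/new && cd ../new               # 3. second copy to hack on
    # 4. ... edit files in /tmp/new ...
    cd ..
    diff -Nurp old new > mypatchname.patch      # 5. generate the patch
    popd                                        # back to the original source tree
    mv /tmp/mypatchname.patch debian/patches/   # 6. put it where it belongs
    rm -rf /tmp/old /tmp/new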
[16:13] here too [16:13] same here [16:13] so, open a shell, ready your fingers :) [16:13] udev example 1, let's create a new patch 92_penguins.patch: [16:13] cd udev-* # -113 on gutsy, -117 on hardy [16:13] -> now we are in our original source tree where we want to add a new patch [16:14] cp -a . /tmp/old [16:14] -> create a copy of the clean sources as reference tree [16:14] pushd /tmp/old [16:14] -> go to /tmp/old; 'pushd' to remember the previous directory, so that we can go back conveniently [16:14] debian/rules patch [16:14] -> apply all already existing patches; of course we could use the 'patch' program to do it manually, but since debian/rules already knows how to do it, let's use it. The actual name for the patch target varies, I have seen the following ones so far: patch, setup, apply-patches, unpack, patch-stamp. You have to look in debian/rules how it is called. [16:15] cp -a . /tmp/new; cd ../new [16:15] -> copies our patched reference tree to our new work directory /tmp/new where we can hack in [16:15] that's the preparatory part [16:15] let's do a braindead modification now [16:15] sed -i 's/Linux/Penguin/g' README [16:15] -> changes the README file; of course you can use your favourite editor, but I wanted to keep my examples copy&pasteable [16:15] and now we create a patch between the reference and our new tree: [16:16] cd .. [16:16] -> go back to /tmp, i. e. where our reference tree (old) and hacked tree (new) is located [16:16] diff -Nurp old new > 95_penguins.patch [16:16] -> generate the patch (Ignore the 'recursive directory loop' warnings) [16:16] popd [16:16] -> now you should be back in your original source tree (when you did the pushd) [16:16] rm -rf /tmp/old /tmp/new [16:16] -> clean up the temporary trees [16:16] mv /tmp/95_penguins.patch debian/patches [16:16] -> move the patch from /tmp to the source tree's patch directory, where it belongs. [16:16] *uff* :) [16:16] Now take a look at your shiny new debian/patches/95_penguins.patch. [16:17] and give a ping when you are ready [16:17] * warp10 is ready [16:17] ping [16:17] +1 [16:18] great, that goes very smoothly today :) [16:18] if you do 'debian/rules patch', you'll see that the patch applies cleanly [16:18] warp10, AstralJava, dargol, db-keen: caught up? [16:19] pitti: sure! [16:19] yep [16:19] yep [16:19] Yeah, sorry was AFK for a bit. [16:20] awesome [16:20] so, obviously that's not the end of the wisdom, but if you do these steps a couple of times, you should get a feeling for how to create the most complicated patch conceivable [16:20] +1 [16:20] so this procedure is the life safer if anything else fails [16:20] questions so far? [16:20] Nope. :) [16:20] (just drop them into #-chat) [16:20] Pretty much work, isn't it? Since this happens pretty often, I created a very dumb helper script 'dsrc-new-patch' for this purpose. [16:21] First, let's bring back udev to the previous state by removing the source tree and re-unpacking the source package: [16:21] cd .. [16:21] rm -r udev-* [16:21] dpkg-source -x udev_*.dsc [16:21] Then, using my script, above steps would reduce to: [16:21] cd udev-* [16:21] ../dsrc-new-patch 95_penguins.patch [16:22] ^ this now does all the preparation for you and drops you into a shell where you can edit the code [16:22] sed -i 's/Linux/Penguin/g' README [16:22] [16:22] that looks slightly better, doesn't it? 
[16:23] look at debian/patches/95_penguins.patch, it should look exactly like the one created manually [16:23] If you like the script, please put it into your ~/bin, so that it is in your $PATH [16:23] but I had to torture you with the close-to-the-metal method for the sake of understanding. [16:23] You might have noticed that we applied all previous patches before creating our's. Does someone have an idea why this is done? [16:24] some patches could depend on other patches [16:24] which means? [16:24] They need to be applied in order. [16:24] correct order, that is. [16:24] they coud only aply to the patched source [16:25] i. e. patch 22 can change the same file that patch 7 did, and patch 22 might not even apply to the pristine upstream source. [16:25] dargol: exactly [16:25] That's why you will commonly see numeric prefixes to the patch files, since they are applied asciibetically in many patch systems (including the application of patches in udev). [16:25] with that script I won't need dpatch-edit-patch anymore ;) [16:25] pochu: oh, you do [16:25] we'll get to that later [16:25] oh [16:26] dsrc-new-patch is a hack which mostly works for packages without a real patch system, but split-out patches [16:26] like udev [16:26] (which is really a bug we should fix) [16:26] but makes a nice example :-P [16:26] Since above procedure is so hideously complicated, patch systems were invented to aid you with that. [16:26] Let's look at the most popular ones now (they are sufficient to allow you to patch about 90% of the archive's source packages; for the rest you have to resort to the manual approach above). [16:26] exit [16:26] ah, found it [16:26] (sorry) [16:26] dargol: ? [16:26] ok [16:27] == pmount: cdbs with simple-patchsys == [16:27] (wrong place) [16:27] cdbs' simple-patchsys.mk module matches its name, it has no bells and whistles whatsoever. [16:27] However, it is pretty popular since it is sufficient for most tasks [16:27] and long ago I wrote a script 'cdbs-edit-patch' which most people can live with pretty well. [16:27] This script is contained in the normal cdbs package. [16:27] You just supply the name of a patch to the script, and depending on whether it already exists or not, it will create a new patch or edit an existing one. [16:28] (dsrc-new-patch can currently *not* edit existing patches, mind you; patches welcome :-) [16:28] but real patch systems can do that of course [16:28] cd pmount-* [16:28] everyone please look in debian/patches, debian/rules to get a feeling how it looks like [16:28] i. e. in debian/rules you simply include simple-patchsys.mk, and that'll do all the magic [16:28] and debian/patches looks pretty much like udev [16:29] so, let's mess up pmount a bit [16:29] and add a new patch [16:29] cdbs-edit-patch 07-simple-readme.patch [16:29] echo 'This should document pmount' > README [16:29] [16:29] easy, isn't it? [16:29] this will take care of applying all patches that need to be applied, can change patches in the middle of the stack, and also create new ones [16:31] Editing an already existing patch works exactly the same way. [16:31] so I won't give a demo [16:31] BTW, "cdbs-edit-patch" is slightly misleading, since it actually only applies to simple-patchsys.mk. You can also use other cdbs patch system plugins, such as dpatch or quilt. [16:31] questions? [16:32] Nothing at the moment, no. :) [16:32] (sorry, some #-chat action going on) [16:32] All good, all good. 
:) [16:32] == ed: dpatch == [16:32] dpatch is a pretty robust and proven patch system which also ships a script 'dpatch-edit-patch' [16:32] packages which use this build-depend on 'dpatch', and debian/rules includes 'dpatch.mk' [16:33] The two most important things you should be aware of: [16:33] * dpatch does not apply debian/patches/*, but instead applies all patches mentioned in debian/patches/00list, in the mentioned order. That means that you do not have to rely on asciibetical ordering of the patches and can easily disable patches, but you have to make sure to not forget to update 00list if you add a new patch. [16:34] (forgetting to update 00list is a common cause of followup uploads :-) ) [16:34] * dpatch patches are actually scripts that are executed, not just patches fed to 'patch'. That means you can also do fancy things like calling autoconf or using sed in a dpatch if you want. [16:34] using dpatch for non-native patches is rare, and normally you do not need to worry about how a .dpatch file looks like [16:34] but I think it's important to mention it [16:35] so if you ever want to replace *all* instances of Debian with Ubuntu in all files, write a dpatch with a small shell script that uses sed [16:35] instead of doing a 300 KB static patch which won't apply to the next version anyway [16:35] The manpage is very good and has examples, too, so I will only give one example here: [16:35] This will edit an already existing patch and take care that all previous patches are applied in order: [16:35] cd /whereever/you/unpacked/the/source/ed-0.7 [16:36] (if you are still in the pmount directory: cd ../ed-0.7) [16:36] dpatch-edit-patch 05_ed.1-warning-fix [16:36] [16:36] so that's exactly like cdbs-edit-patch [16:36] ok, now we edited a patch, that's pretty easy, right? [16:37] now let's create a new one; this is different from cdbs-e-p [16:37] dpatch-edit-patch foo 07_ed.1-spelling-fixes [16:37] [16:37] echo foo.dpatch >> debian/patches/00list [16:37] ^ this is the new bit; you have to explicitly add a new patch to that 00list index file [16:38] This will create a new patch foo.dpatch relative to the already existing 07_ed.1-spelling-fixes.dpatch. [16:38] If your patch is very confined and does not depend on other patches, you can leave out the second argument. [16:38] BTW, you even have bash commandline completion for dpatch-edit-patch! [16:38] alright? [16:38] yeah [16:39] y [16:39] k [16:39] Fine here. :) [16:39] db-keen, phoenix24: ok? [16:39] Yep! [16:40] == xterm: quilt == [16:40] quilt is the other non-dumb standard patch system. Like dpatch, it has a list of patches to apply in patches/series (to use debian/patches, packages need to add a sylink). [16:41] or set $QUILT_PATCHES to debian/patches [16:41] It is non-trivial to set up and has a lot of advanced commands which make it very flexible, but not very easy to use. [16:41] nontrivial to set up for Debian source packages, that is [16:41] (it's not hard either, but more work than simple-patchsys, and even dpatch) [16:41] it's not that widespread, but common enough to handle it here, and it apparently gains more popularity [16:41] I will only show a small example here [16:42] cd /whereever/you/unpacked/the/source/xterm-229 [16:42] if you followed me exactly, that should be cd ../xterm-229 [16:42] export QUILT_PATCHES=debian/patches [16:42] This is necessary because the default patch directory for quilt is ./patches. But for Debian-style source packages we want to keep them in debian/patches, because that's the convention. 
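An aside that is not from the session: instead of exporting QUILT_PATCHES in every shell, quilt also reads ~/.quiltrc, so a one-time setup along these lines is a common convenience (adjust to taste):

    echo 'QUILT_PATCHES=debian/patches' >> ~/.quiltrc
    # from then on, quilt run inside a Debian-style source tree finds debian/patches on its own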
[16:42] Now let's edit the already existing patch 901_xterm_manpage.diff: [16:42] quilt push 901_xterm_manpage.diff [16:43] this will apply all patches in the stack up to the given one [16:43] apply inline right in the source tree, that is [16:43] unlike cdbs-edit-patch and dpatch-edit-patch, quilt doesn't create temporary directories with a copy, but remembers old versions of the files and uses the normal working tree [16:43] a bit like version control (svn, bzr, etc.) [16:43] now let's edit a file that is already touched by the original patch [16:44] sed -i 's/Copyright/Copyleft/' xterm.man [16:44] let's commit the change: [16:44] quilt refresh 901_xterm_manpage.diff [16:44] ^ updates the patch file with your recent changes [16:44] quilt pop -a [16:44] ^ unapplies all patches to go back to the pristine source tree [16:45] * pitti waits a bit for people to catch up and finish the example on their keyboards [16:45] ok everyone? [16:45] done [16:45] yep, done [16:45] Done. [16:45] y [16:45] look at debian/patches/901_xterm_manpage.diff to see the effect [16:45] #join ubuntu-classroom-chat [16:45] * pitti hands cprov-out a slash [16:45] sorry ... [16:46] Finally, let's add a new patch to the top of the stack: [16:46] Done1 [16:46] pitti: thanks ;) [16:46] quilt push -a [16:46] '-a' means 'all patches', thus it applies all further patches after 901_xterm_manpage.diff up to the top [16:46] quilt new muhaha.diff [16:46] register a new patch name (which we want to put on top of the patch stack) [16:46] quilt add README [16:46] you have to do that for all files you modify, so that quilt can keep track of the original version [16:46] this tells quilt to keep track of the original version of README [16:47] sed -i '1 s/^/MUHAHA/' README [16:47] modify the source [16:47] quilt refresh [16:47] update the currently edited patch [16:47] quilt pop -a [16:47] this will finally create debian/patches/muhaha.diff with the changes to README [16:47] as I already said above, quilt has a patch list, too [16:47] in debian/patches/series [16:48] which is much like debian/patches/00list for dpatch [16:48] except that you usually don't edit it manually [16:48] if you push -a, then the patch will land on top of the patch stack, and will automatically be put at the end of series [16:48] of course you can create the patch at other levels of the patch stack [16:48] sometimes, when you pull changes from upstream CVS, it's better to put them at the bottom of the stack [16:48] i. e. upstream changes should generally come *before* distro-specific changes [16:48] does someone have an idea why this is done? [16:49] or, rather, should be done [16:50] same as before, upstream changes are only guaranteed to work on the original source [16:50] first that [16:50] and second, it forces you to port your local patches to the updated source [16:51] so that, when you update to a new upstream version which incorporates the patch, it's much easier [16:51] i. e. the closer to upstream a patch is, the more stable are your distro patches [16:52] ok, that was the hard bit :) [16:52] == A glimpse into the future == [16:52] As you saw, Debian source packages do not have any requirements wrt. structure, patch systems, etc. [16:52] other source package systems like SRPM are much stricter wrt that. [16:52] This of course means more flexibility, but also much more learning overhead. [16:52] As a member of the security team I can tell tales of the pain of a gazillion different source package layouts...
:) [16:53] there has been an attempt to teach an official patch system to dpkg itself ("dpkg 2.0", aka. "Wig&Pen format") [16:53] but unfortunately development on it has ceased [16:53] Therefore some clever people sat together the other day to propose a new design which would both give us a new and unified source package and patch system that uses bzr (with a quilt-like workflow). [16:53] This would also integrate packages and patches much better into Launchpad and revision control in general. [16:54] Please take a look at https://wiki.ubuntu.com/NoMoreSourcePackages if you are interested in this. [16:54] -- [16:54] so, thanks a lot for your attention! [16:54] I hope it was a bit useful for you [16:54] we have five more minutes for Q+A [16:55] https://wiki.ubuntu.com/PackagingGuide/PatchSystems is some written documentation about patch systems [16:55] a nice reference about what I explained here [16:56] Can't come up with any questions really at this point, but thanks very much Martin for this session! :) Highly beneficial to over the steps in correct order and manner. :) [16:56] warp10| QUESTION: If I need to apply a patch to a brand new package, or to fix a bug, or whatever, and a patchsystem has not been deployed yet, which (or how) should I choose? [16:56] that's mostly a matter of taste [16:56] if adding a patch system is actually justified in terms of keeping the delta to Debian low, then I'd give the following guidelines: [16:57] * if the package already uses cdbs, simply include simple-patchsys.mk and add the patch [16:57] no question with this, that's unintrusive [16:57] * if the package doesn't use cdbs, and you get along with dpatch, use that [16:58] (please ask questions here now) [16:58] ETA for NoMoreSourcePackages is currently undefined [16:58] right now we try to push forward the complete bzr import of ubuntu packages [16:58] which is a first step [16:58] QUESTION: do you know where to find some good documentation regarding working with .rej files? (as we don't have time now to explain it) [16:58] I don't know docs, sorry [16:59] because at that point it's really common sense and experience [16:59] if there was a programmatic way how to resolve them, we wouldn't need them in the first place :) [16:59] cdbs-edit-patch and dpatch-edit-patch will deal with it [16:59] i. e. if a patch doesn't apply, they give you the .rej, you resolve them manually and Ctrl+D [16:59] that's what I mean, how to resolve them [17:00] make sure to delete the .rej after resolving [17:00] they deliberately don't ignore .rej files [17:00] just to make sure you don't accidentally overlooked them [17:00] ok, my time is up [17:00] thank you for the class! [17:00] thanks pitti [17:00] thanks pitti [17:00] please continue questions to me personally or in -chat [17:00] Thanks again, that was super. :) [17:00] thanks everyone! [17:01] thank you pitti, that was very interesting :) [17:01] pitti: great, thank you. [17:01] thanks pitti!! [17:01] right, about time to start PPA session ... [17:02] Who is here for the PPA session (take 2) ? (say +1) [17:02] +1 [17:02] +1 [17:02] o/ [17:02] +1 [17:02] +1 [17:02] +1 [17:03] +1 [17:03] I will sit this one out, sorry [17:03] +1 [17:03] well, I guess, we have to continue with a smaller audience that the last session ... np [17:04] so questions can be asked in #ubuntu-classroom-chat ... [17:04] In the last session, we have started with the 3W approach (WHAT - WHERE - WAIT) to teach people how to use PPAs. 
See the session transcription in https://wiki.ubuntu.com/MeetingLogs/devweek0802/PPAs1. [17:04] As a brief summary of the PPA cycle: [17:04] * WHAT: signed source uploads (reusing the orig.tar.gz from the ubuntu primary archive); [17:04] * WHERE: tell dput to upload it to ppa.launchpad.net (override the changesfile target if you want); [17:04] * WAIT: wait for the source to be built (restart the cycle if you received a build-failure notification). [17:05] I'm very happy to say that the last LP release (done on Wednesday) makes the 'WAIT' stage much shorter [17:06] now a build request is queued immediately after we recognize the new source, so the only waiting involved in the PPA cycle is related to the buildfarm load [17:06] we currently have 3 i386, 3 amd64 and 3 lpia builders, which is quite enough to make everyone happy :) [17:07] sorry, 'LP release' was a confusing term [17:07] I was referring to the last Launchpad codeline release, which happened 2 days ago [17:08] a new set of features in Launchpad is released every month ... [17:09] PPAs ... Let's sort some questions, I don't believe you don't have any? [17:09] phoenix24: QUESTION: Could you tell a little about PPA? [17:10] Nope, None. [17:11] you can check the transcription of the last session, but briefly it's a 'parallel instance of the services used to maintain the ubuntu primary archive' [17:11] it's a *public* service and anyone using Launchpad is welcome to try. [17:12] it allows any user to upload, build, publish and distribute their own packages [17:13] you just need to be familiar with debian/ubuntu development tools to build the source package you want to change, all the rest is done by Launchpad. [17:13] phoenix24: does that answer your question? [17:13] warp10: QUESTION: Why doesn't LP provide a sparc builder too? [17:13] Yes, thanks! [17:14] warp10: the PPA builders are based on XEN VMs for proper isolation of the sources being built, and XEN doesn't support sparc officially [17:14] warp10: yet ... [17:15] warp10: QUESTION: Do PPAs build packages just like Soyuz does? I mean: If a package builds fine on a PPA, may I be 100% sure that it will build fine with buildd? [17:15] * warp10 likes the "yet ..." :-) [17:16] warp10: yes, the 'ubuntu infrastructure' mentioned above is Soyuz. I'm glad you actually noticed it :) [17:17] warp10: yes, I've read some time ago that there was some effort on the sparc XEN port. I'm sure our IS team will be more than happy to adopt it when it gets official and stable enough. [17:17] warp10: QUESTION: can you anticipate plans for future improvements of PPA? What kind of new features are to be implemented? [17:18] warp10: yes, I can ... the current focus of our sub-team is to allow a quicker and simpler workflow to get work done in PPAs merged into ubuntu [17:19] warp10: for instance the REVU application (MOTU review system) which is the path to get new sources officially uploaded to ubuntu [17:20] that's pretty much it, 'integration' is the word to define our goals in the next 2 or 3 months [17:21] secretlondo: QUESTION: how much space do we get for our PPA - and are there ways of increasing it?
[17:22] secretlondo: by default you get 1GiB, but it can be increased by an launchpad administrator by request [17:22] secretlondo: you simply need to justify why you need more space, for instance, 'I'm playing with firefox3 and openoffice packages' :) [17:22] :) [17:23] warp10: sorry, I didn't answered you question about what are the main differences of PPA backend and Ubuntu backend [17:24] warp10: first, we automatically override all package uploaded to PPA to the main component [17:25] warp10: in the case it is being copied/synced to ubuntu primary archive it won't necessarily like in main and will be submitted to other ogre-model restricitions. [17:26] PPA builders also don't extract translations neither mangles the package information as they would do in ubuntu primary archive. [17:27] Those are the only simplifications we have in PPA that makes it slightly different than the ubuntu primary archive. [17:27] AFAICS, none of this should be a problem from the development print of view and they help users to get their job done in a quicker way. [17:28] warp10: did I addressed all the points you wanted ? [17:28] cprov-out: you absolutely did, thank you :) [17:29] tamrat: QUESTION: I get the error "Signer has no upload rights at all to this distribution." when I try to upload sth to my personal archive. What could be wrong? [17:29] tamrat: you are possibly not uploading to your PPA, but instead to ubuntu, check your ~/.dput.cf config [17:30] tamrat: ensure you are using the target with 'incoming = /~/ubuntu/' [17:32] okay, I have a question myself: QUESTION: how do I delete a package from my PPA ? How long does it takes to be removed from my archive ? [17:32] anyone interested ? [17:32] +1 [17:32] Use +me/+archive/+delete-packages UI (allowed for PPA owners or team-admins), select one or more of the published sources and type a comment. The will be immediately marked as DELETED in the UI, within 20 minutes they won't be listed in the archive indexes and they will be removed 24 hours after the deletion (the remover runs every night). [17:33] phoenix24: ehe, thanks. I was feeling alone. [17:33] QUESTION: It's a very generic question, what's the utility of providing PPAs ? [17:34] phoenix24: in very basic terms, it allow more contributions to the ubuntu itself [17:34] phoenix24: we are looking for very talented people not yet added in ubuntu-keyring ;) [17:35] :) [17:35] kidding, but that's the real goal of PPA. It's meant to allow more an more people to be able to contribute with FOSS [17:36] To me PPA's seem to be, like personalized.. Package-Builders with StorageSpace & Computational power. [17:36] secretlondo: QUESTION: does the changes in translations between PPA and soyuz mean that localisation doesn't work on PPAs? Or is that just connected to rosseta? [17:36] secretlondo: yes, PPA doesn't support changes in translation, neither bugs, atm [17:37] phoenix24: right, I prefer to see the effects in people, behind the machines/systems. [17:37] I think it's great that i'll have my own little repo to play with :) [17:38] Yesh! that's likely. [17:38] QUESTION: How can I sign my PPA repo ? It's so annoying atm ! [17:39] who wants to know about it ? :) [17:39] yes [17:39] +1 [17:39] I knew it ! ;) [17:39] +1 [17:39] tellme tellme!! 
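Referring back to the upload-rights question above: a minimal ~/.dput.cf stanza for PPA uploads of that era would look roughly like this (the section name and 'yourlpid' are placeholders, not taken from the session):

    [my-ppa]
    fqdn = ppa.launchpad.net
    method = ftp
    incoming = ~yourlpid/ubuntu/
    login = anonymous
    allow_unsigned_uploads = 0

You would then upload with something like 'dput my-ppa yourpackage_source.changes'.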
[17:40] right, we are working on the infrastructure bits to allow signed PPAs [17:40] first, let me explain why we will not sign them with a single 'Launchpad PPA' key as we do in the ubuntu archive [17:42] if we did that, we would be hiding the problem under the carpet ... the guarantees we expect to get from signed/trusted information wouldn't really be reached [17:43] we would create a system that would sign *anything* that it was requested to ... [17:44] a user would trust it once and install *everything* coming from LP PPAs w/o any warning. [17:45] what we will provide is a mechanism that you can trust effectively. Launchpad will handle unique GPG keys for each PPA and do the signature/revocation on demand. [17:46] I promise to give you more details in the next Ubuntu Developer Week :) [17:46] when it will probably be implemented. [17:47] any other questions ? [17:48] * phoenix24 is reading the previous PPA talk. [17:52] we have 10 minutes left, maybe you want to discuss rejection messages you got when you uploaded stuff to your ppa. Let me find one. [17:54] QUESTION: what does the "MD5 sum of uploaded file does not match existing file in archive" error mean? [17:55] it means that there is already a file with the same name published in ubuntu or in your PPA, it usually refers to a different orig.tar.gz [17:56] ups: QUESTION: After deleting a package from the PPA, can I re-upload the same version again? [17:56] ups: good question, thanks [17:56] i've tried it, unsuccessfully :) [17:57] ups: yes, you can, but you will have to wait for it to be removed from the archive, normally 24 hours [17:58] ups: and there is also an issue with the orig files being held by previous publications, the deleted candidate won't be removed [17:58] cprov-out: not sure i understood the last statement [17:59] ups: the best approach is to always use a higher source version [17:59] ups: it's complicated, but let's say you have foo_1.0 published in gutsy and you uploaded foo_1.1 (using the same orig.tar.gz) to hardy [18:00] ups: if you request foo_1.1 deletion, it won't be removed from disk because foo_1.0 is requiring the orig to remain published [18:01] ups: do you see how removals in a pool-based repository work? [18:01] ok, that's what happened when i tried it [18:02] * dholbach hugs cprov-out [18:02] ups: so, bump the version and forget about this, diffs and dsc are very small [18:02] that explains it, thanks [18:02] dholbach: thanks, it's all yours. [18:03] ups: great, thank you for asking. [18:03] thanks a lot cprov-out for a great session - you rock! [18:03] you all enjoyed the cprov PPA show? :) [18:03] thanks! [18:03] thank you all for attending the session [18:03] * ups nods [18:03] Thanks a lot Celso! Great answers. [18:03] * polopolo missed it :( [18:03] Bring it On! [18:03] next up is the MOTU Q&A Session [18:04] it's a session we have each Friday, usually at 13:00 UTC in #ubuntu-classroom [18:04] so if you have any questions, any problem you want a few people to look at, there's an hour dedicated just for that [18:05] of course there's always #ubuntu-motu and the mailing lists, but it's usually a smaller audience and a nice get-together :) [18:05] :) [18:05] How do I make one specific dependency ignored by dh_shlibdeps? [18:05] we usually start with introductions so we know who's new and who just joined the contributors world [18:05] * dholbach is Daniel Holbach, MOTU for quite a while and hopes you all enjoyed UDW as much as I did [18:06] who else is here?
[18:06] me [18:06] just secretlondo and LucidFox? :) [18:06] * AstralJava is [18:06] me! [18:06] and me. [18:06] * warp10 is Andrea Colangelo MOTU contributor who would like to see a Developer Week... well... every week! *grin* [18:06] +1 [18:06] me [18:06] dholbach: hi :) [18:06] +1 [18:06] I am Akshay Dua and this is my first time in a ubuntu chat room. I am currently working through the packaging guide. [18:07] I'm Caroline Ford, I'm a bug traiger who is involved in tuxpaint upstream. I intend to learn packaging and become a MOTU (eventually) [18:07] oh, and hi everyone [18:07] * recon is the n00b trying to find something to package. And failing. [18:07] BRING IT ON! That's the spirit and what I'd like to hear! :-) [18:07] * LucidFox is Matvey Kozhev, a recently approved MOTU and beginning Debian contributor [18:07] * Iulian is Iulian Udrea and would like to join the MOTU world [18:07] perfect :) [18:07] * polopolo already asked a question btw [18:07] Daniel Brumbaugh Keeney is the slow builder of a great Ruby Debian packaging machine called Tanzanite [18:07] ok... we have a few questions in the queue already [18:08] me [18:08] How do I make one specific dependency ignored by dh_shlibdeps? [18:08] LucidFox: can you explain to those who just came here what dh_shlibdeps is for... in a nutshell [18:09] dh_shlibdeps is a script for debian/rules that automatically fills dependencies for a package based on shared libraries its binaries link with [18:09] that sounds useful. [18:09] it's the best thing since sliced bread :) [18:10] so if you add ${shlibs:Depends} as a variable to the Depends: line of your package in debian/control in the end it will have all the library dependencies automagically filled in [18:10] LucidFox: I just heard of cases where that was necessary 2 or 3 times - the only solution I seem to remember right now is messing with debian/substvars [18:11] dholbach: well, if you wanted to ignore a dependency, couldn't you just edit it out of the control file? [18:11] LucidFox: I think one package was ffmpeg or some media library [18:11] LucidFox: and the other one libgoffice (because it built a -gtk and a -gnome variant) or something [18:11] * eddyMul is Eddy Mulyono, packaged a bunch of stuff for Gentoo, and now looking at helping out Ubuntu [18:11] "dh_shlibdeps -- -xpackage-name" is worth a try [18:12] recon: no, because you have just ${shlibs:Depends} there which gets all the library depends automatically filled in [18:12] recon> No, the dependencies are generated during build and inserted into the deb's control file, and debian/control is untouched [18:12] <-- James Westby, MOTU hopeful [18:12] oh. [18:12] I can see a situation if you are backporting something - eg dapper doesn't have SDL_pango, and programX can be compiled without it [18:12] LucidFox: better to try what james_w said first :) [18:12] QUESTION: what ask the ubuntu team of its MOTU? or: what do you first to know before you are motu? [18:13] secretlondo: in that case you would normally edit the "Build-Depends" to remove the package so that it wasn't linked with. 
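To make the ${shlibs:Depends} remark above concrete, a binary package stanza in debian/control typically carries the substitution variable like this (a hypothetical package, not one discussed in the session):

    Package: frobnicator
    Architecture: any
    Depends: ${shlibs:Depends}, ${misc:Depends}
    Description: example stanza showing where dh_shlibdeps fills in library dependencies

At build time dh_shlibdeps computes the library dependencies and dh_gencontrol substitutes them in place of ${shlibs:Depends} in the generated deb.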
[18:13] secretlondo> in this case, it's usually sufficient to remove the -dev package from debian/control, in some cases also pass a --disable-X argument to configure [18:13] * RainCT is Siegfried Gevatter, MOTU for some weeks and hoping to become a DD someday [18:13] ok thanks [18:14] * jpatrick is Jonathan Davies, MOTU (~2 years) and Debian contributor [18:14] polopolo: I answered that question on http://wiki.ubuntu.com/MOTU/FAQ - it seems to be a regular question coming up "Do I need to know a lot of programming languages to become a MOTU?" [18:14] let me quote from the wiki page [18:14] Much more important than having a lot of programming experience is [18:14] * being a good team player [18:14] * learning by reading documentation, trying things out and not being afraid to ask questions [18:14] * being highly motivated [18:14] * having a knack for trying to make things work [18:14] * having some detective skills [18:14] polopolo: I know it's kind of hand-wavy but I hope it helps to answer your question [18:14] * like being hugged by dholbach [18:15] * dholbach hugs james_w :-) [18:15] QUESTION: I am confused about the differences between the use of the underscore and the hyphen while packaging. Why are they needed? Which one to use when? (- vs _) [18:15] hehe [18:15] dholbach: yes, you have, thank you [18:15] akshay: use the hyphen in the version number in debian/changelog and the underscore when you name the files [18:15] great, thanks! [18:16] so it's gedit_2.23.4.orig.tar.gz and gedit_2.23.4-0ubuntu1.diff.gz [18:16] but it's 2.23.4-0ubuntu1 in the debian/changelog [18:16] ok great [18:16] QUESTION: Why would you need a separate chroot to make packages, a la pbuilder, as opposed to the rest of your system? [18:16] recon: pbuilder is useful because it helps you to pinpoint which build-depends you need exactly [18:17] dholbach: ...should'a thought of that one. [18:17] recon: I usually tend to have lots and lots of libraries installed on my regular machine, but that's not what happens on the build daemon: the buildd takes a minimal chroot, then adds only the Build-Depends [18:17] recon: and you don't need to install all build dependencies on your machine [18:17] also it's a "clean system" [18:17] right, good point RainCT [18:17] QUESTION: .desktop files belong in different places depending on kde3/kde4/gnome. How is this usually handled in a debian package? [18:18] recon: it also allows you to build for a release that you are not running, e.g. to backport to dapper. [18:19] db-keen: they are just installed into one place [18:20] as I understand it gnome-panel (for example) understands where to look them up [18:20] please correct me if I'm wrong - I never dived into this [18:20] Until now I installed everything in /usr/share/applications/ [18:21] RainCT: that's where I install .desktop files to too [18:21] there is some directory for KDE but I'm not sure how this works.. perhaps jpatrick or someone else can elaborate on this [18:22] what I've experienced is that menu entries for KDE files pop up in my menu even if I use GNOME, so if KDE3/4 uses a different directory, they still are shown :) [18:22] QUESTION: (sorry if this is the wrong session). What is a "native package" - I've read that the underscore vs hyphen thing is connected to "native packages" [18:22] and then .desktop files can also be used for stuff other than the menu, placing the file in some special directory, but I don't know how this works either...
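For reference, a minimal .desktop entry of the kind that ends up in /usr/share/applications/ looks like this (a hypothetical application, not one from the session):

    [Desktop Entry]
    Type=Application
    Name=Frobnicator
    Exec=frobnicator
    Icon=frobnicator
    Categories=Utility;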
[18:22] secretlondo: that's a good question - it gets asked a lot :) [18:23] secretlondo: a non-native package is the "usual" case where you take an upstream tarball directly from their website, rename it to <package>_<version>.orig.tar.gz [18:23] then extract it, add the debian/ubuntu packaging, then build the source package [18:24] and get a <package>_<version>-<revision>.diff.gz [18:24] so it's a clear separation between 'upstream code' and 'distro code' [18:24] * secretlondo nods [18:24] now i understand more... thanks for the question secretlondo [18:24] in the case of a native package, it's all just one .tar.gz [18:24] so for example ubuntu-artwork which has no released upstream tarball [18:25] it's ubuntu-artwork_45.tar.gz [18:25] no .diff.gz [18:25] no "ubuntu revision" number [18:25] QUESTION: Where can I find the current distro revision? (e.g. -2.23.4-0ubuntu1) [18:25] akshay: can you try to rephrase your question? I'm not sure I understand [18:26] akshay: what version number a package has in hardy, you mean? [18:26] RainCT: precisely [18:26] akshay: you can find the version number by checking on http://packages.ubuntu.com/ [18:26] if you run hardy, you can just run: apt-cache showsrc [18:26] if you don't run hardy yet, you can either check out RainCT's page or http://launchpad.net/ubuntu/+source/ [18:27] or use the rmadison tool from devscripts [18:27] or if you have a hardy entry in /etc/apt/sources.list (a deb-src is enough, which you should have to be able to use "apt-get source") then you also see it with apt-cache [18:27] what I really like is aptitude changelog [18:27] apt-cache show package | grep Version will give you both, the version in Gutsy and the version in Hardy in that case [18:27] understood. Thanks. [18:28] any more questions? any problems you ran into? things that are not quite clear from other sessions? things you've always wondered? like what james_w's favourite kind of music is? [18:29] :) [18:29] QUESTION: When I upload a package to REVU, but it's not possible to add it to hardy, should it be added to the next version or should I try it again when the time comes again? [18:29] polopolo: always upload 'to the current development release' [18:29] it should be quick enough to update to 'intrepid' once it's open :) [18:30] dholbach: ok I understood [18:30] excellent [18:30] QUESTION: What is the "XSBC" in XSBC-Original-Maintainer? [18:30] akshay: that's something I wondered myself [18:30] haha [18:30] and I think somebody asked it before, somebody else answered and I forgot it again [18:30] sorry [18:30] X: User defined (not defined in the debian policy), SBC = the field should be visible in the Source, in the Binary and in the Changes file [18:31] iirc [18:31] dholbach: np, I know what goes in the field so it's ok [18:31] X just means it is an extra user-defined flag [18:31] SBC are for source, binary and control respectively [18:31] akshay: there you go - it's always good to ask :) [18:31] means the field will be added to those packages [18:31] thanks guys [18:31] norsetto: what do you mean with 'control'? [18:32] ops, changes :-) [18:32] cool, I didn't say it wrong :) [18:32] dholbach: well, if there are no questions, I wanna know why you are a part of MOTU? what's the history in it?
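The version-lookup tips above, collected as runnable one-liners ('gedit' stands in for any package name; rmadison comes from devscripts):

    apt-cache showsrc gedit | grep ^Version   # source versions known to apt (needs a matching deb-src line)
    apt-cache show gedit | grep Version       # binary versions from every release in your sources.list
    rmadison gedit                            # queries the archive remotely; see its manpage for selecting a distribution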
[18:32] polopolo: nice question :) [18:33] back when I joined the MOTU team, I really should have spent the time on my thesis instead [18:33] but I had to update a library to a newer version and I had done it in a personal repository (PPA did not exist back then), so mvo (and others) encouraged me to do it right [18:34] and join the team [18:34] the processes were all very different but what I liked so much about working in the Ubuntu team was the pioneer atmosphere [18:34] dholbach: I am doing my PhD too, do you advice against joining the MOTU now :) [18:34] may i ask a question ? [18:34] there's always something to do, always something to take care of, new teams to found, people to plan new things with etc [18:34] about yestaday package xnetcardconfig ? [18:34] akshay: not at all - I finished my thesis on time :-) [18:35] rZr: fire away [18:35] great, then I have nothing to worry about [18:35] I submited the debdiff [18:35] * dholbach hugs akshay [18:35] and removed the original maintainer from debian/control [18:35] i feel loved [18:35] rZr: I replied on the bug [18:35] since the package never entered debian [18:36] rZr: we usually don't do that unless we really intend to maintain the package [18:36] got to go, this was a lot of fun. Will be back next week. bye [18:36] so i dont see why we should keep a reference to debian [18:36] bye akshay [18:36] akshay: this session is normally earlier in the day, so don't get caught out. [18:36] dholbach: I plan to "adopt" this one and pushing into debian then [18:36] bye akshay [18:37] rZr: in this case the upstream author is listed as the maintainer [18:37] rZr: you probably should get in touch with him before changing the maintainer field [18:37] ok will do then [18:37] rZr: I was just surprised you changed it [18:37] ok great [18:37] QUESTION: Is the priority= field in debian/changelog used for anything in Ubuntu? [18:37] RainCT: that's a good question for Kamion :) [18:38] the only use of it I know is that dpkg will complain if you tell it to remove an essential package [18:38] RainCT> I assume you meant urgency? [18:38] ups, yes [18:38] I've been wondering it as well [18:38] oh, you mean urgency? [18:38] urgency is not used at all [18:38] yes [18:38] it's pointless to change it [18:39] while we're at it, what is priority in debian/control used for? Is there any difference between optional and extra for Ubuntu? [18:39] (19:38:16) dholbach: the only use of it I know is that dpkg will complain if you tell it to remove an essential package [18:39] :P [18:39] LucidFox: I think that synaptic (and other package managers if they support it) will display it in different categories [18:39] dholbach: I feared that XSBC-Original-Maintainer field is tracked and generate unwanted trafic to debian .. [18:39] it has its relevance in the policy, but that's all I know [18:39] essential is not actually part of the priority. [18:40] dholbach: since no process ever started regarding debian and this package [18:40] no RFS or ITP [18:40] rZr> If a package is actively maintained in Debian, Ubuntu generally commits itself to small, nonintrusive changes, and tries to push everything not Ubuntu-specific back to Debian [18:40] rZr: no, it doesn't, we still set it for NEW packages (that never were in Debian) [18:40] hence the need for XSBC-Original-Maintainer being the Debian maintainer [18:41] all current questions answered? 
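Picking up the XSBC-Original-Maintainer discussion above: after an Ubuntu change to a package that is maintained in Debian, the source stanza of debian/control conventionally ends up with a pair of fields like this (the Maintainer value follows the convention of the time for universe packages, and the original maintainer's name and address are placeholders):

    Maintainer: Ubuntu MOTU Developers <ubuntu-motu@lists.ubuntu.com>
    XSBC-Original-Maintainer: Jane Doe <jane.doe@example.org>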
[18:41] LucidFox: the package i am talking about is a custom build outside of debian [18:41] james_w: argh I'm stupid today :P [18:41] no - there are more in -chat [18:41] Ooh, Japanese. [18:41] QUESTION: Is there an easy way of reusing PPA work. I'm planning on using my PPA to experiment with packaging [18:42] secretlondo: can you explain what you mean by 'reusing'? [18:42] dholbach: I'll update the debdiff, contact the author and merge it to debian then [18:42] rZr: great, thanks [18:43] as in getting the same package in, say, intrepid [18:43] getting it into, say, intrepid. In the last session celso said something about making a connection between PPAs and REVU [18:43] dholbach: debian would never have accepted such a package as it was anyway ;) [18:43] secretlondo: that would be nice indeed [18:44] secretlondo: and a branch manager then .. [18:44] secretlondo: not right now unfortunately - REVU has features that PPA does not have (diff between uploaded versions, etc), that's why we currently stick with REVU for NEW packages [18:45] it's unfortunate and yet another site to register for, but that's all I can say right now [18:45] dholbach: eh.. there's no signup :) [18:45] ok [18:45] right, you have to join the team and ask for the keyring to be synced [18:45] QUESTION: well then ignoring .desktops, how should a Debian package handle files that may or may not be installed based on some system aspect? [18:46] db-keen: can you explain a usecase? [18:47] Sometimes a program might use a configuration system like GConf if it is available, but can easily work without it, just won't persist settings between sessions [18:47] packages usually install files in one place, if they might be needed in another place you can use symlinks for that [18:48] if it's files that probably are not needed at all, you could stick them in a separate package that is only recommended or something [18:49] I just wouldn't be sure where to put a gconf schema without gconf installed [18:49] db-keen: add gconf to recommends or suggests depending on how necessary it is (recommends are usually installed automatically, suggests not) [18:49] db-keen: just install the gconf schema, it's just a few kb extra, if it's not needed and it's ok to not be used, it will live in /usr/share/gconf/schemas [18:50] QUESTION: If I wanna upload a package to ubuntu/debian, is it needed to contact the developer of the upstream package first? or not? and if yes, what if I cannot find the upstream developer? [18:50] polopolo: if you upload a NEW package, you act as its maintainer - that's a role of responsibility [18:50] polopolo: because you liaise between the upstream developers, the package's users, other developers and so on [18:51] you're one of the important bonds that make open source happen and that ubuntu is built upon [18:51] so yeah: it's great if you let upstream know and have a good relationship with them [18:51] it's also one of the things that make work in the open source landscape and particularly in Ubuntu so exciting [18:51] you're in touch with a lot of people [18:51] and if you fix things, you make a lot of people very happy :-) [18:51] dholbach: ok, I understand it. Well, there are no questions, can I ask a personal question? [18:52] polopolo: go for it [18:52] dholbach: why do you use linux and now windows? how long have you been using linux? how long have you been using ubuntu? and why ubuntu and not mandriva pclox opensuse etc? [18:53] I guess you mean "not windows", right? :) [18:53] not what?
[18:53] hehe [18:53] yes, not windows sorry [18:53] I just use windows for my taxes stuff, there's just no work-able program for that in the Linux world (if somebody knows something that works for German taxes, please let me know) [18:54] there is something as we had a sync request for it as it was on the 2007 version [18:54] I've used Linux for 8 or 9 years now and I was always excited by the people and what they make happen [18:54] * secretlondo remembers the bug ;) [18:54] polopolo: many of us probably have input on that question [18:54] I find it so much more usable and it's great fun to be part of the huge community [18:55] I used Debian before I used Ubuntu, all the people who invited me to join the community then finally made it [18:55] everybody was friendly (and forgiving when I messed things up) [18:55] especially seb128, he was really patient with me, when I did not understand shlibdeps in the first place :) [18:56] I'm still unsure of how I should be doing this: If gconf isn't installed, should I still be putting files in that directory? [18:56] db-keen: yeah [18:56] * seb128 hugs dholbach [18:56] * dholbach hugs seb128 back [18:56] it's great to work with seb128 :-) [18:56] do we have any other questions? [18:57] * polopolo gonna ask to dholbach: gonna include this personal talk also on the ubuntu wiki or not? [18:57] polopolo: sure :) [18:57] OK everybody, if that's it, let me give you a few final pointers: [18:58] http://wiki.ubuntu.com/MOTU/GettingStarted <- bookmark it and go from there :) [18:58] next MOTU Q&A Session every Friday 13:00 UTC [18:58] there's also always #ubuntu-motu and ubuntu-motu@lists.ubuntu.com [18:58] thanks everybody for this great session - you ROCK [18:58] Got distracted for a bit, but thanks Daniel again for your time! :) [18:59] * polopolo wanna thank dholbach for this session [18:59] dholbach, thank you [18:59] thank YOU! [18:59] heya people [18:59] Thanks dholbach! [18:59] next up we have Stefan sistpoty Potyra - a great guy, MOTU for quite some time and somebody who always manages to make time for you [18:59] * sistpoty bows [19:00] thanks for the introduction dholbach :) [19:00] RainCT: /usr/share/applications/kde/ or kde4? [19:00] hello [19:00] He's going to give you a two hours talk about Library Packaging [19:00] jpatrick: both [19:00] so keep your favourite drink handy and enjoy the show! [19:00] * dholbach hugs sistpoty [19:00] Hello sistpoty !! [19:00] * sistpoty hugs dholbach [19:00] at least I hope, we can do the session in two hours [19:00] (last time, it was much longer, but I try to be short ;) [19:01] ok, so who's around for library packaging? [19:01] raise your hands ;) [19:01] + [19:01] raises! [19:01] * secretlondo is lurking as this will be too hard [19:01] o/ [19:01] same here, lurking [19:01] <\sh> sistpoty: renewing my knowledge so let's do it [19:01] +1 :) [19:01] * daishujin hopes he understands some of this session [19:01] * polopolo is here to learn! [19:01] ok, first off... this session will be a two part session [19:02] +0.5 [19:02] in the first part, we'll learn the grey theory with some practical examples [19:02] -> everything you don't know about symbols and always wanted to ask ;) [19:02] in the second part, we'll take a look at a practical example, together with a few more hints [19:03] for the first part, I guess all you need is readelf, gdb, nm, gcc, objdump (I guess build-essential is enough to have installed for these) [19:03] in the first part, we'll take a close look at shared objects... 
[19:03] shared objects can be used in two ways: [19:03] 1) as a plugin mechanism (to be loaded with dlopen) [19:04] and 2) as shared libraries [19:04] we'll focus solely on 2) here [19:04] an ELF shared object (.so) is a collection of symbols together with some meta-information [19:05] a symbol denotes any named entity in c (or c++), e.g. a function call or a (global) variable [19:05] one of the meta-information about a symbol in a shared object is the section it resides in [19:05] e.g. if it has a default value, can be executed... we'll soon look at this [19:06] actually now... so everyone get http://www.potyra.de/library_packaging/example.c [19:06] oh, if there are any questions, don't hesitate to ask :) [19:06] everyone got that file? [19:06] yep [19:07] then let's compile it: [19:07] yes [19:07] gcc -c example.c -o example.o [19:07] now let's take a look at the (not yet shared) object file... [19:07] nm example.o [19:08] what you see there, are the symbols in the object file [19:08] the rightmost thing is the name [19:08] the letter before is the type of the symbol [19:09] upper case letters denote, that the symbol is visible outside of the object file as well [19:09] while a lower case letter means that it is local to the symbol [19:09] hence, e.g. extern_global could be used by a different c-file while static_global could be only used from the c-file we just compiled [19:10] short overview of the types, and what they mean: [19:10] t -> text section (may be mapped readonly), executable [19:10] d -> initialized data (statics, initialized globals) [19:10] c -> uninitialized data (uninitialized globals) [19:10] r -> read only data (const variables), not necessarily read only though. [19:11] u -> undefined. We use a symbol from elsewhere, which will get resolved later by the loader [19:11] you can find these in the manpage of nm for reference [19:11] now, let's compare this with the c-code [19:12] for example the symbol "extern_global" is defined as "int extern_global;" [19:12] it's not initialized with a value, hence the type nm spits out is "c" [19:13] and since it can be used from other c-files, it's an upper case letter [19:13] any questions so far? [19:13] surprisingly clear so far [19:13] ok, let's build a shared object from example.c [19:14] Just a clarification that you probably meant "local to the module", instead of *symbol. [19:14] Err... that was a question, sorry. :) [19:14] *looking* (I pasted some stuff *g*) [19:14] Gotcha. :) [19:15] AstralJava: can you give me the line? ;) [19:15] sistpoty: while a lower case letter means that it is local to the symbol [19:15] Yes, that. [19:16] ah... right... of course local to the object file (or c-file that gcc translate to an object file) [19:16] sistpoty: QUESTION : Why is there no symbol information for "int local_var" (local_function)? [19:16] good question, phoenix24: anyone got the answer for him? [19:16] Coz its a local variable ? [19:17] those would be taken from the local system? [19:17] phoenix24: exactly [19:17] but, static_local_var is static. [19:17] local variables inside functions reside on the stack [19:17] these will get put on the stack, once the function is called, and will get removed when the function returns [19:17] thanks! [19:18] hence these are not part of the object file [19:18] however static local variables will be in the data section... because they keep their value in the next funciton call [19:18] -> part of the object file [19:18] ok, now let's build a shared lib, shall we? [19:18] yep! 
[19:19] gcc -Wl,-z,defs -Wl,-soname,libexample.so.1 -fPIC -shared example.c -o libexample.so [19:19] the -Wl,... are commands that gcc passes to the linker [19:20] please explain a bit on them. [19:20] ok, the first one basically tells the linker, that it needs an entry for every symbol (i.e. no unresolved symbols) [19:20] note, that an "U" entry is ok here as well [19:21] because that would mean that it is linked against an shared object, which contains the symbol (the implicit libc, which gcc will always add here) [19:21] I'll come to the soname part later in the session... this option will specify that libexample.so should have the soname libexample.so.1 [19:22] sistpoty: might I interject a clarification regarding -Wl,-z,defs? [19:22] slangasek: sure [19:22] what that option really means is that, at build time, ld must be able to find a match for each undefined symbols in one of the shared objects that you're linking against [19:23] i.e, it controls *how* " U " symbols are handled [19:23] thanks for the clarification slangasek! [19:23] specifically, does -Wl,z,defs mean that ld will be passed the "-z defs" flag? [19:23] dooglus: exactly [19:23] thanks [19:24] -Wl,something means to pass something as an option to ld [19:24] the -fPIC will tell gcc to produce position independent code... I'll just say that it is needed to shared objects and not go into details here ;) [19:25] finally, -shared tells gcc to produce a shared object [19:25] now, to match the shared objects, you have on the system, let's strip it [19:25] strip libexample.so [19:25] +.1 [19:26] +1 [19:26] erm... what I wrote... (just messed things up locally *g*) [19:26] let's take a look again with nm [19:27] nm libexample.so [19:27] oops [19:27] No symbols : nm: libexample.so: no symbols [19:27] let's try to get it right... as it's a shared object, we're interested in the dynamic symbols [19:28] nm -D libexample.so [19:29] sistpoty: can I ask a question? [19:29] dooglus: sure [19:29] +1 [19:29] * siretart notices that the output of eu-nm (from elfutils) is much clearer that binutil's nm [19:30] sistpoty: this "so" is a shared object - has any linking been done yet? when we were talking about options being passed to ld, has that happened yet? the term 'shared object' makes me think it hasn't been linked yet (like a .o hasn't) [19:30] dooglus: good one [19:30] for a shared object, linking has been done [19:31] yes, it's "partially" linked, with final linking happening at runtime via ld.so [19:31] I thought so - I 'damaged' one of the 'fprintf's in the file, and it refused to compile any more - I was surprised. [19:32] there isn't much difference between an ELF shared object and an executable. the executable happens to have a function called main(), though :) [19:32] ok, thanks [19:32] so does the ELF [19:33] siretart: have a main I mean [19:33] snewland: yes, because the main is still in there (s.th. 
libraries usually don't contain) [19:33] k thanks [19:33] dooglus: you might like to compare the results with and without the -Wl,-z,defs [19:33] (in the case of damaging the fprintf) [19:34] there are also other tools to look at the shared object's information [19:34] let's try readelf -s libexample.so [19:35] with the -Wl,z,defs I see "example.c:(.text+0x56): undefined reference to `myfprintf'" - without it I see no error at all [19:35] exactly, because in the first call, the linker definitely wants to resolve myfprintf, in the second one it won't [19:36] dooglus: I thought the command was -Wl,-z,defs [19:36] snewland: I typed it out badly by hand there. [19:36] ok, let's take a look at the output of the readelf command [19:36] anything interesting that you note? [19:37] size 0 [19:37] @GLIBC_2.0 [19:38] I'm not too sure what size 0 means exactly (probably that it doesn't take any storage space in the shared object itself) [19:38] but let's look at what eddyMul has found [19:38] these are versioned symbols [19:38] i.e. the name of the symbol contains a version as well [19:39] oh, I should explain s.th. first [19:39] in a shared object (and a normal object as well), there can be only one symbol with a specific name [19:40] (apart from dirty linker commands getting used) [19:40] that means you couldn't have two symbols with the name stderr [19:40] however, the c library uses versioned symbols, hence the version is part of the symbol [19:41] GLIBC is written all in capitals; I've never seen it in capitals before [19:41] so stderr could be defined for example by an older version as well [19:42] however this symbol from our shared object would then always get resolved to the one of 2.0 [19:42] (or mine against 2.2.5) [19:42] printing out symbol versions is s.th. which afaik nm doesn't do [19:43] hence, when looking at libraries, nm should be avoided [19:43] you can also use objdump to look at a shared object [19:43] it can tell other important information as well [19:43] let's try this [19:44] objdump -x libexample.so [19:44] this will produce lots of output [19:44] objdump -p libexample.so [19:44] will give much less output with the most interesting information [19:44] so let's see what objdump -p libexample.so will tell us [19:45] the most interesting bits are [19:45] the SONAME [19:45] and the NEEDED entry (there can be, and usually is, more than one NEEDED entry) [19:46] the SONAME entry denotes some kind of "version" of the shared object, denoting a stable ABI [19:46] that means basically that you can always use a newer version of a library with the same SONAME as an older version [19:46] and your program using it will still work [19:47] anyone recall where the SONAME entry came from? [19:47] gcc -Wl,-soname... [19:47] stdin: perfect [19:47] Damn, beat me to it! [19:48] however that means that someone set it manually [19:48] or in other words, if the person who set it did s.th. wrong, you might end up with a changed ABI [19:49] let's take a look at what problems can arise, by way of an example [19:49] first, let's get http://www.potyra.de/library_packaging/libmyhello-1.0.1.tar.gz [19:49] extract it and compile it [19:49] tar -xvzf libmyhello-1.0.1.tar.gz [19:49] make [19:50] if you've got it, please install it (with root privs): [19:50] sudo make install [19:50] no worries, it will only place stuff under /usr/local, and comes with an uninstall rule as well [19:50] We're brave people. :)
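A short sketch pulling the inspection commands together, run against the libexample.so built earlier (any shared library on the system would do as well):

    objdump -p libexample.so | grep -E 'SONAME|NEEDED'   # SONAME comes from -Wl,-soname; NEEDED from the libraries linked against
    readelf -d libexample.so | grep -E 'SONAME|NEEDED'   # the same entries, read from the dynamic section
    readelf -s libexample.so | grep GLIBC                # dynamic symbols together with their glibc version (which nm -D hides)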
[19:51] of course you can look at the makefile, if you don't trust me :P [19:51] now run ldconfig (with root privs as well), so that ld.so will be notified to look out for s.th. new [19:52] of course a library alone is no fun yet, so you want a program using it as well: http://www.potyra.de/library_packaging/hello_prog-1.0.tar.gz [19:52] you'll only need to compile this one [19:53] let's try it: ./hello_prog 10 [19:53] amazing software, right? *g* [19:53] everyone got it so far? [19:53] Yup. [19:54] +1 [19:54] ok, so now let's check out the new library version [19:54] http://www.potyra.de/library_packaging/libmyhello-1.0.2.tar.gz [19:55] you should know the procedure... extract, make, sudo make install [19:56] if you've got it, please try to run the application again (not to rebuild it, just run the one you've got) [19:57] Eeewwww.... symbol lookup error! [19:57] ok, let's try to find out what happened... [19:58] look at the symbols that are undefined in hello_prog [19:58] readelf -s hello_prog [19:58] in readelf, these are listed as "UND" btw. [19:59] you'll see that hello_prog will want a symbol called print_hello [19:59] if you look at the old shared object, there indeed is such a symbol [19:59] 7: 00000000000005a0 32 FUNC GLOBAL DEFAULT 11 print_hello [20:00] the new library however doesn't come with one [20:01] hence, the ABI is obviously not stable [20:01] the term ABI refers to the application binary interface [20:01] it means the symbols (by name) defined in the shared object and their type [20:02] for functions, it also means how the symbols can be used (i.e. how many parameters must the function have, and of which type must these be) [20:03] as you've seen, once you remove a symbol, the ABI is not stable (because any program might have used it) [20:03] the type of a symbol shouldn't change either [20:04] this information can be easily found out with the tools you know so far [20:04] however to find out the arguments of a function call, symbol names and type alone are not sufficient [20:04] Was just about to ask. :) [20:05] one possibility is to use the debug information (which however is no longer present in binary packages) [20:05] for debug information, gdb is our tool of choice [20:05] let's try (pick one of the two library versions you like) [20:05] gdb libmyhello.so [20:06] inside gdb, you can type [20:06] info functions [20:07] you can also look at the type (as in the programming language's type, not to be confused with the symbol type) with [20:07] info variables [20:07] however this will only work if debugging symbols are still present [20:07] you can compare this after running strip on the shared object [20:09] ok, how about taking a 5 minute break now, and then sum up part 1 and start with part 2? [20:10] Works for me. [20:11] ok, then let's continue at 20.15 UTC (in the hope my local clock is correct) [20:16] everyone back? [20:17] +1 [20:17] ok, let's sum up part 1 [20:17] Yep. [20:17] so far, we've learned that we can look up the symbols via nm, readelf and objdump [20:18] a stable ABI means that no symbols are removed and the symbols' types are not changed... also that the corresponding c-convention (e.g. number of arguments of a function, type of arguments of a function, type of a variable) is not changed [20:18] in contrast (what I didn't mention yet) is the API [20:19] a stable API denotes that a program will still be able to compile with a new library [20:19] you can have a stable API but have breakages in the ABI of course
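One quick way to spot this kind of breakage before shipping anything is to diff the exported symbol lists of the old and new builds; a rough sketch (the directory and file names are assumptions about how the two libmyhello tarballs unpack, so adjust them to whatever make actually produced):

    # names only and sorted; remember that nm shows neither versions nor signatures
    nm -D --defined-only libmyhello-1.0.1/libmyhello.so | awk '{ print $3 }' | sort > old.syms
    nm -D --defined-only libmyhello-1.0.2/libmyhello.so | awk '{ print $3 }' | sort > new.syms
    comm -23 old.syms new.syms   # anything printed here was removed -> ABI break under the same SONAME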
[20:19] e.g. if things get moved from a c-file to an inlined version in the header file [20:20] (I just managed to catch up, very instructive so far :)) [20:20] if the SONAME stays the same, it means that the library author *believes* that the ABI is stable === phoenix24_ is now known as phoenix24 [20:20] but it's no guarantee, since it's set by a human ;) [20:21] oh, one thing I missed: the NEEDED entry [20:21] or entries [20:22] the NEEDED entry in a shared object means that a shared object with such a SONAME is needed [20:22] (likewise in an ELF binary, a.k.a. a program) [20:22] it's needed because unresolved symbols, which the loader will resolve, are defined there [20:22] let's take a short look at a real world example [20:23] objdump -p /usr/lib/libgnome-2.so.0 [20:23] (in the hope that everyone has this shared object) [20:24] these NEEDED entries point to shared objects (with the same SONAME) that contain symbols that libgnome-2.so.0 uses, but doesn't define [20:24] I guess you all know ldd [20:24] let's compare [20:24] ldd /usr/lib/libgnome-2.so.0 [20:25] can anyone spot the difference? [20:26] sistpoty: please brief about ldd. [20:27] phoenix24: ok [20:27] ldd seems to return the whole dependency tree with path to the SO file [20:28] ldd will basically do the same thing that the loader will do [20:28] i.e. it will resolve the NEEDED entries to shared objects [20:29] and as elisee wrote, will do the same for the shared objects found as dependencies as well [20:29] in fact, ldd uses the loader to do what it does ;) [20:29] it will finally spit out the *paths* to the shared objects it found [20:29] heh [20:29] linker / loader = the same program? [20:30] with different options [20:30] elisee: nope... the linker is used at *compile* time... to find out the NEEDED entries [20:30] by "loader" here we're talking about ld.so, which is the runtime linker, yes [20:31] elisee: the loader will take the NEEDED entries as input and find the object file at *run time* [20:31] sistpoty, I get that [20:31] and ld.so is automatically loaded by the OS? because it's a shared object too, right? [20:31] got it. [20:32] elisee: that's what I assume, but I guess slangasek could tell in more detail ;) [20:33] elisee: the way this works under Linux is that when you exec() a program, the kernel looks at the head of the file, sees that it's an ELF file, and passes control to ld.so to work out what to do with it [20:33] at least that's what it looks like, according to the ld.so man page [20:33] ok thanks a lot, all of this is so instructive [20:35] finally, a very important note, which we'll dig into in part 2: shared objects with a different SONAME can be installed side-by-side [20:35] any further questions about part 1 so far? [20:35] nope [20:36] ok, then let's go for part 2 [20:36] yes! [20:36] while my plan was to actually have everyone package libmyhello, I guess we'll never do that in time [20:36] luckily, I've got s.th. prepared, let me look at where to get it :) [20:37] bzr branch http://bazaar.launchpad.net/~sistpoty/+junk/library-packaging-session [20:37] (in the hope that everyone has bzr installed) [20:38] I guess anyway it can be quickly installed by anyone through Ubuntu repositories [20:38] :) [20:38] everyone got it?
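A sketch of the comparison above, plus one way to watch ld.so do the resolution at run time (LD_DEBUG is a glibc feature and can be verbose; the libgnome path and hello_prog are the ones used in the session, so substitute whatever you have at hand):

    objdump -p /usr/lib/libgnome-2.so.0 | grep NEEDED   # only the direct SONAME dependencies recorded by the linker
    ldd /usr/lib/libgnome-2.so.0                        # the full tree, resolved to paths, the way ld.so would do it
    LD_DEBUG=libs ./hello_prog 10 2>&1 | head -40       # watch ld.so search for each NEEDED entry at run time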
[20:39] +1 [20:39] ok for me [20:39] ok, since this contains the upstream code as well, we'll first need to make a tarball of it [20:39] +1 [20:39] cd libmyhello/0.1 && make dist [20:40] * sistpoty tries to remember the funny build system used for the package [20:41] I don't know much about packaging, and I wonder: does that kind of Makefile get written by a human being? [20:41] or do tools do it for us? [20:41] elisee: is this a question whether I am human? :P [20:41] (I wrote everything there by hand) [20:41] elisee: he's not [20:41] sistpoty, aren't you? ;) [20:42] heh, no comment *g* [20:42] he's an advanced cyborg cron job [20:42] So AI devel is much further than They(tm) let us believe, huh? [20:43] ;) [20:43] :p [20:44] ok, if you've got the make dist, you'll need another dir [20:44] in libmyhello, make the directory work [20:45] if you've got that, go to the subdir packaging, and just call make (this will place some symlinks into ../work) [20:45] oh, hard links, it seems [20:46] it will also extract the packaging I prepared [20:46] everyone got it so far? [20:47] got it [20:48] ok, first off, before looking at anything, a *very* good resource is the debian library packaging guide... keep that under your pillow when working on libs ;) [20:48] http://www.netfort.gr.jp/~dancer/column/libpkg-guide/libpkg-guide.html [20:48] subdir named "packaging"? [20:49] phoenix24, library-packaging-session/libmyhello/packaging [20:49] phoenix24: libmyhello/packaging [20:51] if everyone got it, let's start with the control file library-packaging-session/libmyhello/work/libmyhello-1.0.1/debian/control [20:52] the work directory should contain the extracted package, hence libmyhello-1.0.1 in there is the directory which I'll refer to from now on [20:53] in debian/control, there are two (binary) packages defined for this library [20:53] that's pretty much standard (except some libraries ship an application as well, or have big documentation) [20:53] the first one will contain the shared object [20:54] the libmyhello1 package name is special: [20:54] it is related to the SONAME that is defined in the shared object it contains [20:54] since shared objects with a different SONAME can be installed side-by-side, the packaging must respect this [20:55] hence the relation of the package name to the SONAME, but it also has other consequences [20:56] in libmyhello1, there mustn't be a file which would have the same name (in the same directory) as a file from a shared object with a different SONAME [20:56] hence putting e.g. a manpage or an application in there won't work [20:57] this package is the thing which will usually get installed on the user's system, because another package (usually an application using this shared object) draws it in [20:57] other than that, there is not much use in installing it directly [20:58] in contrast, the -dev package contains everything that is needed to compile programs against the shared object [20:59] this means the header files which programs must include, but also in our case the libmyhello1.so symlink [20:59] erm... sorry libmyhello.so symlink in /usr/lib [20:59] now, what's that symlink good for? [21:00] there's no version in it because we link against a shared object and not an ABI version, right? [21:00] elisee: right
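To see this naming convention and the runtime/-dev split on a real system, comparing the file lists of an installed library package and its -dev counterpart works well; a sketch using zlib as an arbitrary example (assuming both zlib1g and zlib1g-dev happen to be installed):

    dpkg -L zlib1g | grep '\.so'                 # runtime package: only the SONAME-versioned files (libz.so.1, libz.so.1.2.x)
    dpkg -L zlib1g-dev | grep -E '\.so$|\.h$'    # -dev package: the unversioned libz.so symlink plus the headers
    apt-cache show zlib1g | grep -E '^(Package|Depends)'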
[21:00] usually, we'll always want to call s.th. like [21:00] gcc -lsomelib [21:01] this will make gcc search for a shared object called libsomelib.so [21:01] in the library path [21:01] (it prepends lib and appends .so, iirc .a is also possible if no .so is found, but I'm not 100% sure on this) [21:02] hence it makes sense to include the .so in the -dev package, so that programs will find the (right) shared object when linking [21:02] oh, for the .a: these are static libraries [21:03] since these will be part of the resulting program, and won't get loaded at run time, there is no such thing as a SONAME for these [21:03] It's simply not needed [21:03] so one can't name a .so file without lib prepended to it? or will it first try without the lib? [21:03] hence these are also part of the -dev package (if present) [21:03] without the 'lib' prefix I mean [21:04] elisee: I'm not 100% sure, but what I know: yes [21:04] ok [21:04] (and looking at my /usr/lib/ directory seems to underline that) [21:05] elisee: of course plugins (which are also shared objects, opened via dlopen) can use whatever names they wish [21:05] but we didn't want to look at these here ;) [21:05] ok [21:06] back to the control file [21:06] the -dev package must always add a (hard) dependency on the shared object [21:06] on the package containing the shared object [21:06] because you can't link anything with just a symlink ;) [21:07] that's the depends line of libmyhello-dev [21:07] other than that (and here the example is not too exact), the -dev package must also make sure that every library that's needed for compiling is drawn in [21:08] e.g. if I'd use stuff from libasound2 (because it's e.g. included from libmyhello headers) then there must be a dependency on libasound2-dev [21:09] oh, sometimes it makes sense to also name the -dev package after the SONAME, in case you want to have more than one version of the library in the archive [21:09] e.g. if many apps won't compile with the new version [21:12] ok, so why do we need to make sure that the library package (libmyhello1) is installable together with a package of a different SONAME (e.g. libmyhello2), even if we plan to have only one version in the archive? [21:12] anyone? [21:13] because the user might need two versions of the library [21:13] upgrades maybe? (either way aptitude wouldn't have that issue) [21:13] because he has two programs needing these two different versions [21:14] exactly elisee and h3sp4wn_ [21:14] just consider that the user has a package called hello_prog installed. It was built and got the dependency on libmyhello1 [21:14] now libmyhello2 is available [21:15] the library however won't get upgraded (unless a different package needs it) [21:15] and libmyhello1 can only be uninstalled if hello_prog is removed [21:16] so in the archive, we'd rebuild hello_prog. That way it will pick up libmyhello2 as a dependency [21:17] then, and only then, hello_prog of the user can get upgraded (and this would draw in libmyhello2 then). However libmyhello1 wouldn't automatically go away [21:17] got it? [21:17] (yep) [21:17] Figured as much. So how do we get rid of deprecated libs when no app needs them anymore? [21:18] AstralJava: since these (should) always get drawn in by s.th. and not installed by hand, "apt-get autoremove" will do that trick [21:18] the package managers usually provide a way to remove no-longer-needed automatically installed dependencies [21:18] yeah, apt-get autoremove, that's what I mean [21:19] +t [21:19] Okay thanks!
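A minimal sketch of why that unversioned .so symlink from the -dev package matters at link time, reusing the libexample.so built earlier (prog.c is an illustrative stand-in and doesn't even use the library):

    cat > prog.c << 'EOF'
    int main(void) { return 0; }
    EOF
    mv libexample.so libexample.so.1      # name the file after its SONAME, the way a package would ship it
    gcc prog.c -L. -lexample -o prog      # fails: ld looks for libexample.so (or libexample.a) and can't find it
    ln -s libexample.so.1 libexample.so   # this symlink is exactly what the -dev package provides
    gcc prog.c -L. -lexample -o prog      # now the link succeeds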
[21:19] :) [21:19] ok, I guess debian/rules is not too exciting. maybe apart from one call [21:19] dh_makeshlibs [21:19] this one will create a shlibs file [21:20] a shlibs file contains the info about which library package contains which shared object (together with the version of the library package that's needed for it) [21:21] it's built with the info we've just put in control? [21:21] this is a special part of a debian package... you can find all shlibs files of installed packages in /var/lib/dpkg/info [21:22] elisee: yes. It also can contain the info that a specific version is required (I'll come to that in a minute) [21:22] specific version of the package even [21:23] sorry, yet another question: can't figure out what "dh" means in dh_makeshlibs [21:23] elisee: that means it's a debhelper command [21:23] okay [21:23] debhelper contains commands that make commonly needed packaging tasks easier [21:24] e.g. dh_shlibdeps is basically a wrapper around dpkg-shlibdeps... [21:25] which will take care that anything with ${shlibs:Depends} in debian/control will get replaced by a dependency on the library package [21:25] now how does it do that? [21:26] it looks up the NEEDED entries (with the tools we've learned in part 1) of every ELF file (both binaries and shared objects) and checks via the shlibs files on the system which package it needs [21:27] now let's reconsider part 1... a stable ABI (and hence a stable SONAME) means no symbols get removed and their type and meaning don't change [21:27] however upstream could add new features (e.g. new functions) which would result in new symbols [21:28] and the ABI would still stay the same [21:28] i.e. programs that use the old shared object can use the new shared object [21:28] but what if a program uses exactly one of the new symbols? [21:29] it could then not run with an older shared object that doesn't contain the symbol yet, even though that older object has the same SONAME [21:29] hence on the mere basis of a SONAME there doesn't seem to be a solution [21:30] luckily the debian packaging system can help here [21:30] as I stated earlier, the shlibs file may also contain a version [21:30] you can specify the version that goes in there with the -V parameter of dh_makeshlibs [21:31] so, in case there are new symbols in a package, you'll always want to use that: [21:32] any program that uses the newer functionality cannot compile against the old version (we've seen what happens with -Wl,-z,defs) [21:32] so it will need to be compiled against the newer version of the -dev package... and the shlibs mechanism then provides a shlibs file which will state: [21:32] you'll need this version or a later one of lib... [21:33] was that too fast? [21:33] so there's a numbering scheme inside the shlibs different from the SONAME version? [21:34] correct [21:34] SONAME documents each backwards-incompatible ABI change [21:34] shlibs gives you the other piece, to document backwards-compatible ABI changes [21:34] I don't really know what these shlibs are in fact, but maybe it's not the purpose to explain this now [21:35] oh, then I *was* too fast [21:35] let's try again... [21:35] if you've got a shared library with a given soname, the shlibs file will basically say in what debian package this can be found [21:36] but it will also say what version of the debian package you need (at least)
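To get a feeling for what such a file looks like, any installed library package's shlibs file will do, and the version lower bound is set in debian/rules with -V; a sketch (the zlib file name may vary, and the dh_makeshlibs line is only illustrative for libmyhello):

    ls /var/lib/dpkg/info/*.shlibs | head -3   # every installed library package ships one
    cat /var/lib/dpkg/info/zlib1g*.shlibs      # format: <library name> <SONAME version> <package dependency>
    # in debian/rules, once 1.0.2 has added new symbols:
    dh_makeshlibs -V 'libmyhello1 (>= 1.0.2)'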
[21:37] in case a program gets built which contains NEEDED entries (i.e. uses symbols from some shared object) [21:37] the shlibs file will be used to look up in what package the shared object is [21:37] ok, that's much clearer this way for me [21:38] this is then added as a dependency to the program's package (actually ${shlibs:Depends} in debian/control will get replaced by the found entries) [21:38] ok, any other questions? [21:39] Probably loads, but they might pop up in practice. Can't think of any right now. [21:39] ok, then I guess one final hint (as I've once again used more time than available): [21:40] as we learned today, the most difficult bit is *upgrading* a library package [21:40] there, you must ensure that the ABI doesn't break [21:41] of course you should also keep an eye on what packages build against this package (so that you don't upgrade it and suddenly every package using it no longer builds) [21:41] but the tricky part is to ensure ABI stability [21:41] I hope you learned some tools today with which you can check that [21:42] Definitely! What a fantastic session it was. [21:42] thanks a lot :) [21:42] other than that, there exists a package which can be a help, but I forgot its name... slangasek: what creates the manifest again? === ianc is now known as IanC [21:45] ok, sorry I really have completely forgotten the name of the package... [21:46] however thanks for coming, and I hope that once you maintain a library package, you'll be prepared ;) [21:46] Can you post it on ubuntu-motu@ or something later on? [21:46] AstralJava: you mean which package I meant? [21:46] Again, thank you very much for this, hugely appreciated! :) [21:47] sistpoty: That's right, if/when you recall or somebody else verifies it. [21:47] it must be in the old logs... but I'll post it ;) [21:47] sistpoty: manifests are created by dpkg-dev itself now, no? dpkg-gensymbols? [21:48] or icheck, for API manifests [21:48] slangasek: no, that package which parses the c-code and produces a .manifest file [21:48] icheck [21:48] thanks! [21:48] Gotcha. [21:49] Thanks, Stefan and Steve. [21:49] (strangely enough I was searching for iwcheck the whole time... close but no hit) [21:49] thanks for listening ;) [21:49] now, I'll have a cool drink :) [21:50] Well deserved, too. :) [21:53] leaving, have a good night (or maybe day) everyone [21:53] * slangasek waves [21:54] Me too, was a fine day today. Hope to see similar soon again. :) Bye all. === emgent is now known as carlo__ === carlo__ is now known as emgent