[11:05] hi to everyone
[11:07] hello thanos713_
[11:08] When is the next classroom going to be held?
[11:20] Can anyone help me with apt-cacher-ng setup?
[12:55] karthick87: Support in #ubuntu, as per /topic
[15:50] HELLO EVERYBODY! Welcome to Day 3 of Ubuntu Developer Week!
[15:50] We still have 10 minutes left until we start, but I just wanted to bring up a few organisational bits and pieces first:
[15:51] - If you can't make it to a session or missed one, logs of the sessions that already happened are linked from https://wiki.ubuntu.com/UbuntuDeveloperWeek
[15:51] - If you want to chat or ask questions, please make sure you also join #ubuntu-classroom-chat
[15:51] - If you ask questions, please prefix them with QUESTION:
[15:52] ie: QUESTION: dpm, How hard was it to learn German?
[15:52] - if you are on twitter/identi.ca or facebook, follow @ubuntudev to get the latest development updates :)
[15:52] (answer: difficult if you start learning in the Swabian dialect)
[15:52] ha! :)
[15:53] that's it from me
[15:53] you still have 7 minutes until David "dpm" Planella will kick off today and talk about Launchpad Translations Sharing!
[15:53] Have a great day!
=== ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Getting Translations Quicker into Launchpad: Upstream Imports Sharing - Instructors: dpm
[16:01] Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[16:02] hey everyone!
[16:03] welcome all to the 3rd day of Ubuntu Developer Week
[16:03] and welcome to this UDW talk about one of the coolest features of Launchpad Translations: upstream imports sharing
[16:03] My name is David Planella, and I work on the Community team at Canonical as the Ubuntu Translations Coordinator
[16:04] As part of my job, I get the pleasure of working with the awesome Ubuntu translation teams and with the equally awesome Launchpad developers
[16:04] I also use Launchpad for translating Ubuntu on a regular basis, as part of my volunteer contribution to the project,
[16:04] which is why I'm excited about this feature and I'd like to share it with you guys
[16:05] So, without further ado, let's roll!
[16:05] oh, btw, I've set aside some time for questions at the end, but feel free to ask anything during the talk.
[16:05] just remember to prepend your questions with QUESTION: and ask them on the #ubuntu-classroom-chat channel
[16:06] So...
[16:06]
[16:06] What is message sharing
[16:06] -----------------------
[16:06]
[16:06] Before we delve into details, let's start with some basics: what's all this about?
[16:07] In short, message sharing is a unique feature in Launchpad with which translations for identical messages are essentially linked into one single message
[16:07] This means that as a translator, you just need to translate that message once and your translations will appear automatically in all other places where that message appears.
[16:08] The only requirements are that those messages are contained within a template with the same name and are part of different series of a distribution or project in Launchpad.
[16:08] This may sound a bit abstract, so let's have a look at an example:
[16:08] Let's say you're translating Unity in Ubuntu Oneiric:
[16:08] https://translations.launchpad.net/ubuntu/oneiric/+source/unity/+pots/unity/
[16:09] (you translate it from that URL in Launchpad)
[16:09] And you translate the message "Quit" into your language
[16:09] Instantly your translation will propagate to the Unity translation in _Natty_ (and all other releases):
[16:10] So it will appear translated here:
[16:10] https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/
[16:10] without you having had to actually translate it there
[16:10] So basically Launchpad is doing the work for you! :-)
[16:11] It also works when you want to make corrections:
[16:12] Let's say you are not happy with the translation of "Quit" in Unity, and you change it in the Natty series in Launchpad
[16:12] So you go to: https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/
[16:12] and change it
[16:13] actually, I could point you to the actual message:
[16:13] https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/ca/2/+translate
[16:13] anyway, so you fix the "Quit" translation
[16:13] perhaps there was a typo, perhaps you were not happy with the current translation, etc.
[16:15] That change will also be propagated to all other Ubuntu series, so that you don't have to manually fix each one of them
[16:15] Isn't that cool?
[16:15] And as you see, it works both ways: from older series to new ones, and vice versa. The order does not really matter at all.
[16:15] we've got a question:
[16:15] shnatsel|busy asked: If there was a change in terminology between the series, e.g. I translated "torrent" with one word, then the app changed (some strings changed, some strings added) and now I use a different word, is there a way to prevent the new terminology partly leaking to the older series and making a terrible mess there?
[16:16] that's a good point
[16:16] however, with the current implementation this will not happen
[16:16] message sharing works only with identical messages
[16:17] this means that if in the first series the original string was "torrent",
[16:17] and in the next series the original string was "torrent new",
[16:18] there won't be any message sharing at all, avoiding inadvertently translating strings you don't want to be translated automatically
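To make "identical messages" concrete, here is an illustrative gettext fragment (the file names and the Catalan translation are made up for this example): because the msgid below is byte-for-byte identical in the unity template of both series, Launchpad links the two messages and a single translation covers both.

# unity.pot, as imported for both natty and oneiric
msgid "Quit"
msgstr ""

# ca.po - translated once, in either series, it shows up in both
msgid "Quit"
msgstr "Surt"

# but a near-miss such as "Quit." (note the trailing dot) is a
# different message and would not share the translation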
[16:19] gpc asked: So, "torrent" and "torrent." are not the same string?
[16:20] that's correct, even with such a small change as this, they're not considered the same string
[16:20] they need to be identical
[16:20] so rather than a fuzzy match, message sharing works only on identical matches
[16:21] fuzzy matching would need further development in Launchpad
[16:21] If you're interested in how difficult it would be to implement, or even in implementing it yourself,
[16:21] I'd recommend asking on the #launchpad channel
[16:21] that's where the Launchpad developers hang out
[16:22] and they're always happy to help
[16:22] Anyway, let's move on...
[16:22] So far we've only talked about distributions, and in particular Ubuntu. But this works equally within series of projects in Launchpad
[16:22] But more on that later on...
[16:22] You'll find out more about sharing here as well:
[16:22] http://blog.launchpad.net/translations/sharing-translations
[16:22] http://danilo.segan.org/blog/launchpad/automatic-translations-sharing
[16:23]
[16:23] Benefits of message sharing
[16:23] ---------------------------
[16:23] Continuing with the basics: what good is this for?
[16:24] In terms of Launchpad itself, it makes storing the data much more efficient: rather than storing many translations for the same message, only one needs to be stored.
[16:24] But most importantly:
[16:24] For project maintainers: when they upload a template to a new release series and it gets imported, they will instantly get all existing translations from previous releases shared with the new series, and translators won't have to re-do their work. They won't have to worry about uploading correct versions of translated PO files, and can just care about POT files instead.
[16:25] For translators: they no longer have to worry about back-porting translation fixes to older releases, and they can simply translate the latest release: translations will automatically propagate to older releases. Also, this works both ways, so if you are translating a current stable release, newer development releases will get those updates too!
[16:26] Any other questions so far?
[16:27] Ok, let's continue then
[16:27]
[16:27] What's new in message sharing
[16:27] -----------------------------
[16:27] Until recently, message sharing had only worked within the limits of a distribution or of a project.
[16:28] That is, messages could be shared amongst all the series in a distribution or amongst all series of a project.
[16:28] As cool as it already was, that was it: data could not flow across projects and distributions, and each one of these Launchpad entities behaved like an isolated island with regards to message sharing.
[16:29] But before going forward, let me recap quickly on some other basic concepts in Launchpad. When we're talking about message sharing, we're interested mostly in two types of Launchpad entities
[16:30] * Projects: these are standalone projects whose translations are exposed in Launchpad. If these projects are packaged in a distribution, we often refer to the actual project at a location such as https://launchpad.net/some-project as the upstream project. An example is the Chromium project at https://translations.launchpad.net/chromium-browser/. Upstream projects can host their translations in Launchpad or externally. In the latter case, translations can still be imported into Launchpad, but more on that later on
[16:31] * Distributions: these are collections of source packages, each one of which is also exposed for translation. The most obvious example is Ubuntu. Here's an example of the Natty series of the Ubuntu distribution: https://translations.launchpad.net/ubuntu/natty
[16:32] So the news, and the main subject of this talk, is that from now on translations can be shared, given the right permissions, across projects and distributions.
[16:32] Again, let's take an example:
[16:32] • The Synaptic project is translatable in Launchpad: https://translations.launchpad.net/synaptic
[16:33] • At the same time, the Ubuntu distribution has got a Synaptic package which is translatable in Launchpad: https://translations.staging.launchpad.net/ubuntu/natty/+source/synaptic/
[16:33] • Now, given that the upstream maintainer has enabled sharing and has set the right permissions, translators can translate Synaptic in Ubuntu and their translations will seamlessly flow into the upstream project!
[16:33] • This works again both ways: if one translates Synaptic in the upstream project, translations will appear in the Ubuntu distribution
[16:34] So no more backporting translations or exporting them and committing them back and forth.
[16:35] ashams asked: So why is there a separate set of templates for each release of Ubuntu, i.e. (in Natty: https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/; in Oneiric: https://translations.launchpad.net/ubuntu/oneiric/+source/unity/+pots/unity/)
[16:36] This is due to the fact that for each series of Ubuntu you not only get new applications with new templates (or some go away), but you also get different messages in the templates
[16:36] This is how releases of projects work, it hasn't got anything to do with message sharing
[16:37] i.e. we need different sets of templates for each series; otherwise, if we had one single set, it would overwrite the old ones on each release
[16:38] Anyway, combine the previous example with automatic translation imports and exports, and project maintainers get a fully automated translation workflow, which is really really awesome :-)
[16:38] More on automatic imports/exports:
[16:38] https://help.launchpad.net/Translations/YourProject/ImportingTranslations
[16:38] http://blog.launchpad.net/general/exporting-translations-to-a-bazaar-branch
[16:39] So far we've covered projects hosted in Launchpad - what happens with projects hosted externally?
[16:39] If translations of a given project are hosted externally, you won't get the benefit of full integration into Launchpad, but you'll still get some important advantages:
[16:40] • Translations will need to be imported from a mirror branch into a Launchpad upstream project
[16:40] • They will then be regularly imported to Launchpad Translations
[16:40] • From there, they will flow quickly, and on a regular basis, into the Ubuntu distribution
[16:40] • Up until here, the end result is the same as for upstream projects hosting translations in Launchpad
[16:40] • However, translations will only flow in the direction upstream -> Ubuntu, as we don't have a way to automatically send translations to the external project yet
[16:41] • The big benefit here is that translations will be imported reliably and quickly on a regular basis
[16:42] For an overview on message sharing across projects and distributions, check out this UDS presentation by Henning Eggers, one of the Launchpad developers who implemented this feature, and myself:
[16:42] http://ubuntuone.com/p/skw/
[16:43]
[16:43] How to enable message sharing
[16:43] -----------------------------
[16:43] The cool thing to know is that within a project or a distribution message sharing is already enabled
[16:44] There are no steps that project maintainers need to follow: every new series will automatically share messages with all the others
[16:45] However, if you want to share messages between an upstream project and a distribution (e.g. Ubuntu), there is a set of steps that need to be performed first:
[16:45] * Enable translations in the upstream project, setting the right permissions and translation group
[16:46] * If the project you want to enable sharing for is hosting translations externally, you'll need to request a bzr import, so that translations can get imported from an external branch
[16:47] * Finally, you'll need to link the upstream project to the corresponding source package in Launchpad
[16:47] Right now just a few projects and source packages are linked this way, but this cycle we're planning a community effort to enable sharing for all supported packages.
[16:48] I've prepared a table with an overview of the supported packages here:
[16:48] https://wiki.ubuntu.com/Translations/Projects/LaunchpadUpstreamImports
[16:48] And I will soon announce how the community can help in enabling these for sharing.
[16:48] Stay tuned to the Ubuntu translators list for more info:
[16:49] https://wiki.ubuntu.com/Translations/Contact/#Mailing_lists
[16:49] Ok, I think that's all I had on content, so let's go for questions!
[16:49]
[16:49] Q & A
[16:49] -----
[16:50] yurchor asked: Why does translation sharing behave strangely (diffs are really weird)? Ex.: https://translations.launchpad.net/ubuntu/natty/+source/avogadro/+pots/libavogadro/uk/+translate?show=new_suggestions
[16:51] There are 10 minutes remaining in the current session.
[16:51] I think that particular case is a project in which there was some data that needed migration (i.e. Ubuntu translations exported and uploaded in the upstream package) and that did not happen.
[16:51] I'd suggest pointing this out in the #launchpad channel, where the Launchpad devs can have a look at it in more detail
[16:52] yurchor asked: What if the project does not have a repository with translations (like Fedora's libvirt, etc. on Transifex, where translations are generated at package creation)? What will be imported from upstream?
[16:53] I'm not familiar with how translations are stored in Fedora's libvirt. The only layout that's supported for external repositories is PO files stored in an external version control system that can be imported as a bzr branch (e.g. git, mercurial, svn, etc.)
[16:54] QUESTION: For example, I'm translating a BitTorrent client. I had a translation of the term "torrent" that I used in all strings that contain it. Then a new major release arrived that has some strings added, some strings removed and some strings (like "New torrent") intact. For the new version, I find a better translation of the term "torrent", and change it in all those strings in the newer series. But then the new translation of "New torrent", with the new term to describe "torrent", will leak to the older series, while some other strings in there still use the old term. How can I prevent it?
[16:55] oh, I understand what you mean now
[16:55] There are 5 minutes remaining in the current session.
[16:56] Unfortunately, there is no way to detect this in Launchpad, as there is no way to link the "New torrent" message to the "torrent" messages
[16:56] In this particular case, one thing you can do is to export the translation as a PO file, and replace all the "New torrent" translations with the new term
[16:57] and then re-upload it to Launchpad
[16:57] One thing I forgot to mention, and it might be useful if you want to keep the old terminology in older series,
[16:57] is that you can explicitly choose individual messages to diverge
[16:58] so you can translate "torrent" as "a" in one series and "b" in another series, bypassing message sharing
[16:59] Ok, I think there is not much time for more questions, so we can probably wrap it up here
[16:59] If you've got other questions, feel free to ask me on the #ubuntu-translators channel on Freenode
[16:59] So thank you everyone for listening and for participating with your questions
[17:00] I hope you enjoyed the talk and see you soon!
=== ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Debugging the Kernel - Instructors: jjohansen
[17:01] Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[17:01] Hi, before we start I figured I would introduce myself: I am John Johansen, an Ubuntu contributor and member of the Kernel team
[17:01] Feel free to ask questions throughout in the #ubuntu-classroom-chat channel, and remember to prepend QUESTION: to your question
[17:01] So let's get started
[17:01] ------------------------------------------
[17:01] Debugging kernels is a vast topic, and there is lots of documentation out there. Since we only have an hour I thought I would cover a few things specific to the Ubuntu kernel. So perhaps the topic really should have been changed to debugging the Ubuntu kernel. I am not going to walk through a live example as we don't have time for multiple kernel builds, installing and testing; working with the kernel takes time.
[17:02] First up is the all-important documentation link https://wiki.ubuntu.com/Kernel
[17:02] It has a lot of useful information buried in its links.
[17:02] it takes a while to read through but it's really worth doing if you are interested in the kernel
[17:02] The Ubuntu kernels are available from git://kernel.ubuntu.com/ubuntu/ubuntu-<release>.git
[17:02] ie. for natty you would do
[17:02] git clone git://kernel.ubuntu.com/ubuntu/ubuntu-natty.git
[17:03] https://wiki.ubuntu.com/Kernel/Dev/KernelGitGuide
[17:03] gives the full details if you need more
[17:03] The Ubuntu kernel uses debian packaging and fakeroot to build, so you will need to pull in some dependencies
[17:03] sudo apt-get build-dep linux-image-$(uname -r)
[17:04] once you have the kernel you can change directory into the tree and build it
[17:04] fakeroot debian/rules clean
[17:04] fakeroot debian/rules binary-headers binary-generic
[17:04] Note a couple of things: 1. the clean must be done before your first attempt at building, it sets up some of the environment. 2. we are building just one kernel flavour in the above example, and the end result should be some debs.
[17:04] also if you ever see me use fdr, it's an alias I set up for fakeroot debian/rules, so I can get away with doing
[17:04] fdr clean
[17:04] it saves on typing, I'll try not to slip up but just in case ...
[17:04] Everyone good so far?
[17:05] Alright, onto bisecting
[17:05] We need to cover a little more information about how the Ubuntu kernel is put together. Each release of the Ubuntu linux kernel is based on some version of the upstream kernel. The Ubuntu kernel carries a set of patches on top of the upstream linux kernel. The patches really can be broken down into 3 categories: packaging/build and configs, features/drivers that are not upstream (see the ubuntu directory for most of t
[17:05] During the development cycle, the Ubuntu kernel is rebased on top of the current linux kernel; as the linux kernel is updated so is the development Ubuntu kernel; it occasionally gets rebased on top of newly imported linux kernels. This means a couple of things: patches that have been merged upstream get dropped, and the Ubuntu patches stay at the top of the stack.
[17:06] During the development cycle we hit a point called kernel freeze, where we stop rebasing on upstream, and from this point forward only bug fixes are taken, with commits going on top.
[17:06] So why mention all of this? Because it greatly affects how we can do kernel bisecting. If a regression occurs after a kernel is released (say the natty kernel), and there is a known good natty kernel, then bisecting is relatively easy. We can just check out the natty kernel and start a git bisect between the released kernel tags.
[17:06] However bisecting between development releases or different released versions (say maverick and natty) of the kernel becomes much more difficult. This is because there is no continuity because of the rebasing, so bisect doesn't work correctly, and if you are lucky and have continuity the bisecting may remove the packaging patches.
[17:07] So how do you bisect bugs in the Ubuntu kernel then? We use the upstream kernel of course :)
[17:07] There are two ways to do this, the standard upstream build route and using a trick to get debian packages.
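Before getting into the trick, here are the build steps described so far collected into one sketch (the release name and flavour are illustrative):

sudo apt-get build-dep linux-image-$(uname -r)        # pull in the build dependencies
git clone git://kernel.ubuntu.com/ubuntu/ubuntu-natty.git
cd ubuntu-natty
fakeroot debian/rules clean                           # required before the first build
fakeroot debian/rules binary-headers binary-generic   # produces the .deb packages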
[17:07] The upstream route is good if you are just doing local builds and not distributing the kernels
[17:08] but if you want other people to install your kernels you are probably best off using the debian packaging route
[17:08] So the trick to get debian packaging is pretty simple
[17:09] you checkout the upstream kernel
[17:09] checkout an Ubuntu kernel
[17:09] identify a good and bad upstream kernel
[17:10] you can do this by using the ubuntu mainline kernel builds available from the kernel team ppa
[17:11] http://kernel.ubuntu.com/~kernel-ppa/mainline/
[17:11] that saves you from having to build kernels
[17:12] now copy the debian/ and debian.master/ directories from the ubuntu kernel into the upstream kernel
[17:12] you do not want to commit these
[17:12] as that will just make them disappear with the bisect
[17:12] you can then change directory into the upstream kernel
[17:12] edit debian.master/changelog, the top of the file should be something like
[17:12] linux (2.6.38-10.44) natty-proposed; urgency=low
[17:14] you want to change the version to something that will have meaning to you
[17:14] and that can be easily replaced by newer kernels
[17:14] you use a debian packaging trick to do this
[17:15] change 2.6.38-10.44 to something like 2.6.38-10.44~jjLP645123.1
[17:16] the jj indicates me, then the launchpad bug number, and I like to use a .X to indicate how far into the bisect
[17:16] you can use whatever makes sense to you
[17:17] the important part is the ~ which allows kernels with higher version numbers to install over the bisect kernel without breaking things
[17:18] if you are going to upload the kernel to a ppa you will also want to update the release info
[17:18] ie. natty-proposed in this example
[17:19] if you are using the current dev kernel it will say UNRELEASED and you need to specify a release pocket
[17:19] ie. natty, maverick, ...
[17:19] however ppa builds are slow and I just about never use them, at least not for regular bug testing
[17:20] now you can build the kernel
[17:20] fakeroot debian/rules clean
[17:20] fakeroot debian/rules binary-headers binary-generic
[17:21] this will churn through and should build some .debs that can be installed using
[17:21] dpkg -i
[17:21] now on to doing the actual bisect
[17:23] so bisecting is basically a binary search: start with a known good point and a bad one, cut the commits in half, build a kernel, test if it's good, lather, rinse, and repeat
[17:23] git bisect is just a tool to help you do this
[17:24] it is smarter than just doing the cut in half, it actually takes merges and other things into account
[17:24] the one important thing to note for these bisects is if you look at the git log etc., you may find yourself in kernel versions outside of your bisect range
[17:25] this is because of how merges are handled, don't worry about it, git bisect will handle it for you
[17:25] so the basics of git bisect are
[17:26] git bisect start <bad> <good>
[17:26] where <bad> is the bad kernel and <good> is the good kernel
[17:26] the problem is how do you know which kernel versions to use
[17:27] if you are using the upstream kernels for a bisect then the ubuntu version tags are not available to you
[17:27] you need to use either a commit sha, or tag in the upstream tree
[17:29] sorry lost my internet there for a minute
[17:31] if you used the mainline kernel builds to find a good and bad point then you can just use the kernel tags for those, ie. v2.6.36
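For instance, if the v2.6.37 mainline build was good and v2.6.38 was bad, starting the bisect might look like this (the tags are illustrative):

git bisect start v2.6.38 v2.6.37   # bad revision first, then good
# git now checks out a commit roughly half-way for you to build and test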
[17:31] if you used an Ubuntu kernel then you can find a mapping of kernel versions here
[17:32] http://kernel.ubuntu.com/~kernel-ppa/info/kernel-version-map.html
[17:32] alright, so I was asked what the debian.master directory is about
[17:32] in the Ubuntu kernel we have two directories to handle the debian packaging
[17:32] debian/ and debian.master/
[17:33] this allows abstracting out parts of the packaging
[17:34] when you set up a build, the parts of debian.master/ are copied into debian/ and that is used
[17:35] the difference between the two isn't terribly important for most people; think of debian/ as the working directory for the packaging, and master as the reference
[17:36] when I had you edit debian.master/changelog above I could have changed things around and had you edit debian/changelog
[17:36] however
[17:36] fakeroot debian/rules clean
[17:36] will end up copying debian.master/changelog into debian/
[17:37] thus if you change debian/changelog you have to redo the full edit every time you do a clean
[17:37] so if you are editing debian/ you do
[17:37] fdr clean
[17:37] edit debian/changelog
[17:38] which is the reverse of doing it to debian.master/changelog
[17:38] edit debian.master/changelog
[17:38] fdr clean
[17:38] for me, editing debian.master/changelog keeps me from making a mistake and building a kernel without my edits to the kernel version
[17:40] hopefully that is enough info on debian/ and debian.master/ for now
[17:40] and we will jump back to bisecting for a little longer
[17:41] so assuming you have your kernel versions for good and bad, you start your bisection
[17:41] git will put you on a commit roughly in the middle
[17:41] then you can do
[17:41] fdr clean
[17:42] fakeroot debian/rules binary-headers binary-generic
[17:42] sorry, caught myself using fdr
[17:42] this will build your kernel, which you can install and test
[17:43] and then you can input your info into git bisect
[17:43] ie.
[17:43] git bisect good
[17:43] or
[17:43] git bisect bad
[17:43] the important thing to remember is to not commit any of your changes to git
[17:44] I tend to edit the debian.master/changelog and update the .# at the end of my version string every iteration of the bisection
[17:44] you don't have to do this
[17:45] you can get away with just rebuilding straight, or if you want, doing a partial build
[17:46] the partial build is a nice trick if you don't have a monster build machine, but it doesn't save you anything early on in the bisect, when git is jumping lots of commits and lots of files are getting updated
[17:46] the trick to doing a partial build in the Ubuntu build system is removing the stamp file
[17:47] when the kernel is built there are some stamp files generated and placed in
[17:47] debian/stamps/
[17:47] there is one for prepare and one for the actual build
[17:48] if you build a kernel, and the build stamp file is around, starting a new build will just use the build that already exists and package it into a .deb
[17:48] you don't want to do this
[17:49] so after you have stepped your git bisect (git bisect good/bad)
[17:49] you
[17:49] rm debian/stamps/stamp-build-generic
[17:50] this will cause the build system to try building the kernel again, and make will use its timestamp dependencies to determine what needs to get rebuilt
[17:51] There are 10 minutes remaining in the current session.
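Putting the pieces above together, the bisect loop might look like this sketch (the .deb path and flavour are illustrative; the stamp removal replaces a full clean on later iterations):

fakeroot debian/rules clean                           # first iteration only
fakeroot debian/rules binary-headers binary-generic
sudo dpkg -i ../linux-image-*.deb                     # install, reboot, test
git bisect good                                       # or: git bisect bad
rm debian/stamps/stamp-build-generic                  # allow an incremental rebuild
fakeroot debian/rules binary-headers binary-generic   # next iteration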
[17:51] if the bisect is only stepping within a driver or subsystem this can save you a lot of time on your builds; however, if the bisect updates lots of files (moves lots of commits) or updates some common includes, you are going to end up doing a full kernel build
[17:52] so now for the other tack: what do you do if you don't want to mess with the .deb build and just want to build the kernel the old-fashioned way?
[17:52] well, you build as you are familiar with.
[17:52] make
[17:52] make install
[17:52] make modules_install
[17:53] then you need to create a ramdisk, and update grub
[17:54] sudo update-initramfs -c -k <version>
[17:54] will create the ramdisk you need; if you don't want to mess with the kernel version
[17:54] sudo update-initramfs -c -k all
[17:54] then you can do
[17:55] sudo update-grub
[17:55] and you are done
[17:55] so QUESTION: After rm debian/stamps/stamp-build-generic do you still do a fakeroot debian/rules clean when doing the incremental build?
[17:55] the answer is no
[17:55] There are 5 minutes remaining in the current session.
[17:56] doing a clean will remove all the stamp files, and remove your .o files, which will cause a full build to happen
[17:57] so with only a couple minutes left I am not going to jump into a new topic but will mention something I neglected to mention about the Ubuntu debian builds
[17:58] our build system has some extra checks for expected abi, configs, and modules
[17:58] when building against an upstream kernel you will want to turn these off
[17:58] you can do this by setting some variables on the command line
[17:58] fakeroot debian/rules binary-headers binary-generic
[17:58] becomes
[17:59] skipabi=true skipconfig=true skipmodule=true fakeroot debian/rules binary-headers binary-generic
[17:59] this can also be used when tuning your own configs etc.
[18:00] I think I will stop there
[18:00] thanks for attending, drop by #ubuntu-kernel if you have any questions
=== ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: dotdee - break a flat file into dynamically assembled snippets - Instructors: kirkland
[18:01] Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[18:01] howdy all
[18:02] this session is on dotdee
[18:02] there will be a live streamed demo at: http://bit.ly/uclass
[18:02] i invite you to join me there
[18:02] the username and password is guest/guest
[18:03] for those interested, this is a tool called ajaxterm
[18:03] which embeds a terminal in a web browser
[18:03] i've set up a byobu/screen session for the guest user (which is readonly for all of you)
[18:03] I'll drive the demo
[18:04] and annotate it here
[18:04] alternatively, you can ssh guest@ec2-50-19-128-105.compute-1.amazonaws.com
[18:04] with password guest
[18:05] okay, on to dotdee :-)
[18:05] if you've ever configured a Linux/UNIX system, you're probably familiar with the /etc directory
[18:05] and inside of /etc, there are many directories that end in a ".d"
[18:05] watch the terminal while I find a few in /etc
[18:06] there's a bunch!
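The command he ran isn't captured in the log; something like this (illustrative) turns them up:

ls -d /etc/*.d   # e.g. cron.d, init.d, sysctl.d, ...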
[18:06] this is a very user-friendly way of offering configuration to users
[18:06] usually, files in a .d directory are concatenated, or executed sequentially
[18:07] it gives users quite a bit of flexibility for editing or adding configurations
[18:07] to some software or service
[18:07] it also helps developers and packagers of software
[18:07] as it often allows them to drop snippets of configuration into place
[18:07] but not every configuration file is set up in this way
[18:08] in fact, most of them are really quite "flat"
[18:08] a few months ago, I found myself repeatedly needing to convert some flat configuration files
[18:08] to .d style ones
[18:09] this was for software I was working with, as a developer/packager
[18:09] but not software that I had written myself
[18:09] ideally, I would just ask the upstream developers to change their flat .conf file
[18:09] to a .d directory
[18:09] and they would magically do it
[18:09] and test it
[18:09] and release it
[18:09] immediately :-)
[18:10] that rarely happens though :-P
[18:10] so I wrote a little tool that would generically do that for me!
[18:10] and that tool is called "dotdee"
[18:10] so let's take a look!
[18:10] over in the terminal, i'm going to install dotdee, which is already in Ubuntu oneiric
[18:10] sudo apt-get install dotdee
[18:11] for older ubuntu distros, you can 'sudo apt-add-repository ppa:dotdee/ppa'
[18:11] and update, and install it there too
[18:11] cool, so now i have a /usr/sbin/dotdee executable
[18:11] let's take a flat file, and turn it into a dotdee directory!
[18:12] I'm going to use /etc/hosts as my example
[18:12] first, I need to "setup" the file
[18:12] i can first verify that /etc/hosts is in fact a flat file,
[18:12] -rw-r--r-- 1 root root 296 2011-07-13 12:14 /etc/hosts
[18:13] I'm going to set it up like this:
[18:13] sudo dotdee --setup /etc/hosts
[18:13] INFO: [/etc/hosts] updated by dotdee
[18:13] cool!
[18:13] now let's look at what dotdee did....
[18:13] $ ll /etc/hosts
[18:13] lrwxrwxrwx 1 root root 27 2011-07-13 18:13 /etc/hosts -> /etc/alternatives/etc:hosts
[18:13] so /etc/hosts is now a symlink
[18:13] pointing to an alternatives link
[18:14] $ ll /etc/alternatives/etc:hosts
[18:14] lrwxrwxrwx 1 root root 22 2011-07-13 18:13 /etc/alternatives/etc:hosts -> /etc/dotdee//etc/hosts
[18:14] which is pointing to a flat file in /etc/dotdee
[18:14] $ ll /etc/dotdee//etc/hosts
[18:14] -r--r--r-- 1 root root 296 2011-07-13 18:13 /etc/dotdee//etc/hosts
[18:14] if I look at the contents of that file, I see exactly what I had before
[18:15] but let's go to that directory, /etc/dotdee
[18:15] inside of /etc/dotdee, there is a file structure that mirrors the same file structure on the system
[18:15] importantly, we now have a .d directory
[18:15] that refers exclusively to our newly managed file
[18:16] namely, /etc/dotdee/etc/hosts.d
[18:16] in that directory, all we have is a file called 50-original
[18:16] which is the original contents of our /etc/hosts
[18:16] but let's say we want to "append" a host to that file
[18:16] let's create a new file in this directory
[18:16] and call it 60-googledns
[18:17] so I edit the new file (as root)
[18:17] add the entry, 8.8.8.8 googledns
[18:17] and I'm going to write the file
[18:17] but before i write the file, let me split the screen
[18:18] so that we can watch it get updated, automatically, in real time!
[18:18] so i ran 'watch -n1 cat /etc/hosts'
[18:18] which is just printing that file every 1 second
[18:18] now i'm going to save our 60-googledns file
[18:18] and voila!
[18:19] we have a new entry appended to our /etc/hosts
[18:19] through the magic of inotify :-)
[18:19] the kernel facility that reports filesystem events (dotdee watches them via the iwatch daemon)
[18:19] dotdee comes with a configuration (which is dotdee managed, of course!) that adds and removes patterns
[18:19] as you setup/remove dotdee management
[18:20] let's prepend a host
[18:20] we'll call this one 40-foo
[18:20] and see it land at the beginning of the file
[18:20] bingo
[18:20] now it's at the beginning of our /etc/hosts
[18:22] okay
[18:22] so adding/removing a flat text file is one way of affecting our managed file
[18:23] flat text files are just appended or prepended, based on their alphanumeric positioning
[18:23] but there are 2 other ways as well!
[18:23] you can also put executables in this .d directory
[18:23] which operate on standard in and out
[18:23] if you want to modify a flat file by "processing" it
[18:24] for instance
[18:24] let's make this file all uppercase
[18:25] whoops
[18:25] okay, there we go
[18:25] which brings me to the --update command :-)
[18:26] dotdee --update can be called against any managed file
[18:26] to update it immediately
[18:26] in case the inotify bit didn't pick up the change
[18:26] in any case, i just did that, and now our /etc/hosts is all uppercase
[18:26] because of our 70-uppercase executable
[18:27] what happens if we move it from 70-uppercase to 10-uppercase?
[18:27] or, rather, how about 51-uppercase?
[18:27] see the output now
[18:28] note that 51-uppercase was applied against the "current state" of the output, as of position 51
[18:28] but 60- was applied afterward
[18:28] so it wasn't affected
[18:28] so that's two ways we can affect the contents of the file
[18:28] a) flat text files, b) scripts that process stdin and write to stdout
[18:29] the third way is patches or diff files
[18:29] given that this is a developer audience, we're probably familiar with quilt
[18:29] and directories of patches
[18:29] this is particularly useful if you need to do some 'surgery' on a file
[18:29] let's say I want to "insert" a line into the middle of this file
[18:30] into the middle of 50-original, for instance
[18:31] okay, so i've added "10.9.8.7 hello-there" to the middle of a copy of this file
[18:31] and i'm going to use diff -up to generate a patch
[18:31] there we go
[18:31] okay, let's put that in this .d dir
[18:32] note that I have to add .patch or .diff as the file extension
[18:32] and now i can cat /etc/hosts and see that the patch has been applied!
[18:33] i could stack a great number of these patches here
[18:33] much like a quilt directory
[18:33] okay, so now let's undo this configuration
[18:34] oh, first
[18:34] sudo dotdee --list /etc/hosts
[18:34] /etc/hosts
[18:34] $ echo $?
[18:34] 0
[18:34] this verifies that /etc/hosts is in fact dotdee managed
[18:34] if i try this against some other file
[18:35] $ sudo dotdee --list /boot/vmlinuz-3.0-3-virtual
[18:35] ERROR: [/boot/vmlinuz-3.0-3-virtual] is not managed by dotdee
[18:35] (I don't recommend dotdee'ing your kernel :-)
[18:35] but we can undo our /etc/hosts
[18:35] $ sudo dotdee --undo /etc/hosts
[18:35] update-alternatives: using /etc/dotdee//etc/hosts.d/50-original to provide /etc/hosts (etc:hosts) in auto mode.
[18:35] INFO: [/etc/hosts] has been restored
[18:35] INFO: You may want to manually remove [/etc/dotdee//etc/hosts /etc/dotdee//etc/hosts.d]
[18:36] and now our /etc/hosts is back to being whatever we saved in 50-original
[18:36] so ...
[18:36] that's how it works
[18:36] and the /etc/hosts example is only marginally useful
[18:37] what I would *really* like to use it for is configuration file management in Debian/Ubuntu packaging
[18:37] in the case where the upstream daemon or utility has a single, flat .conf file
[18:37] but I really would prefer it to be a .d directory
[18:37] so i just did a find on /etc
[18:38] sudo find /etc/ -type f -name "*.conf"
[18:38] and chose one at random
[18:38] /etc/fonts/fonts.conf
[18:38] which happens to be XML
[18:38] and I just thought about this
[18:38] i should have mentioned it in our previous section
[18:39] XML is tougher than a linear file, like a shell script
[18:39] in that you can't just append, or prepend text
[18:39] you have to surgically insert the bits you want
[18:39] in which case the latter two methods I mentioned, the executable and the diff/patch, will be your friend!
[18:40] okay
[18:40] now I would *really* like to see dpkg learn just a little bit about dotdee
[18:41] I'd like for it to be able to determine *if* a file is managed by dotdee
[18:41] (easy to check using dotdee --list, or just checking if the file is a symlink itself)
[18:41] and if so, then it would use $(dotdee --original ...) to find the 50-original file path
[18:42] and dpkg would write its changes to that location (the 50-original file)
[18:42] such that the local admin, or even other packages, could dabble in the .d directory, without causing conffile conflicts or .dpkg-original files
[18:43] okay, anyway, let's take a break for questions
[18:43] I think I've demo'd most of what I'd like to show you
[18:43] any questions?
[18:44] Question: Are there any issues when you upgrade or update due to dotdee?
[18:44] coalitians: great question !
[18:44] coalitians: right, so that's what I was saying about dpkg needing to "learn" about dotdee
[18:44] coalitians: let's look at an example over in our test system
[18:45] coalitians: I'm going to dotdee --setup that font xml
[18:47] coalitians: okay, as you can see, i've made a change
[18:47] coalitians: let's upgrade (or reinstall) the package that owns this file
[18:47] $ sudo apt-get install --reinstall fontconfig-config
[18:47] lrwxrwxrwx 1 root root 38 2011-07-13 18:45 fonts.conf -> /etc/alternatives/etc:fonts:fonts.conf
[18:47] -rw-r--r-- 1 root root 5287 2011-07-01 12:12 fonts.conf.dpkg-new
[18:48] unfortunately, dpkg dumped fonts.conf.dpkg-new here :-(
[18:48] i have toyed with another inotify/iwatch regex that would look for these :-)
[18:48] slurp them up, and move them over to 50-original
[18:48] which works reasonably well
[18:49] except that I don't yet have the interface for the merge/conffile questions, like dpkg does
[18:49] coalitians: so to answer your question, upgrades are not yet handled terribly gracefully
[18:49] coalitians: and would take a little work within dpkg itself
[18:49] coalitians: sorry; but great question!
[18:49] any others?
[18:50] I see roughly 19# in the byobu session :-)
[18:50] that's what the red-on-white 19# means
[18:50] 19 ssh sessions
[18:50] did the web interface work for anyone?
[18:50] this is the first time I've tried it
[18:51] There are 10 minutes remaining in the current session.
[18:51] oh, already? :-)
[18:51] kirkland, is your session over now?
[18:51] okay, I reckon my session is over
[18:51] no more questions
[18:51] :-)
[18:51] zyga: sure, you can have it ;-)
[18:51] okay
[18:51] thanks all
[18:51] awesome, thanks
[18:55] There are 5 minutes remaining in the current session.
=== ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Introduction to LAVA - Instructors: zyga
[19:01] Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[19:01] welcome everyone :)
[19:01] I'm glad to be able to tell you something about LAVA today
[19:02] my name is Zygmunt Krynicki, I'm one of the developers working on the project
[19:02] feel free to ask questions at any time, check the topic for instructions on how to do so
[19:02] okay
[19:02] let's get started
[19:02] So first off, what is LAVA?
[19:03] LAVA is an umbrella project, created by Linaro, that focuses on overall quality automation
[19:04] you can think of it as a growing collection of small, focused projects that work well together
[19:05] the overall goal of LAVA is to improve the quality that developers perceive while working on ARM-based platforms
[19:05] we're trying to do that by building tools that can be adopted by third party developers and Linaro members alike
[19:06] okay
[19:06] As I mentioned earlier LAVA is a collection of projects; I'd like to enumerate the most important ones that exist now
[19:06] first of all, all of our projects can be roughly grouped into two bins: "server side" and "client side"
[19:07] where the client is either a client of the "server" or an unrelated non-backend computer (like your workstation or a device being tested)
[19:08] the key project on the server is called lava-server, it acts as an entry point for all other server side projects
[19:08] essentially it's an extensible application container that simplifies other projects
[19:09] next up we have lava-dashboard - a test result repository with data mining and reporting
[19:09] lava-scheduler - a scheduler of "jobs" for lava-dispatcher
[19:09] lava-dispatcher - an automated deployment and environment control tool that can remotely run tests on ubuntu and android images
[19:10] the last one is really important and is getting a lot of focus recently
[19:10] essentially it's something that knows how to control physical devices so that you can do automated image deployment, monitoring and recovery on real hardware
[19:11] on the client side we have a similar list:
[19:11] lava-tool is a generic wrapper for other client side tools, it allows you to interact with server side components using the command line instead of the raw API exposed by our services
[19:11] lava-dashboard-tool talks to the dashboard API
[19:12] lava-scheduler-tool talks to the scheduler API
[19:12] and most important of all: lava-test, it's a little bit different as it is primarily an "offline" component
[19:12] it's a wrapper framework for running tests of any kind and processing the results in a way that lava-dashboard can consume
[19:13] all of those projects are full of developer friendly APIs that allow for easy customization and extensions
[19:13] we use the very same tools to build additional features
[19:13] lava-test is also important because it is a growing collection of wrappers for existing tests
[19:14] using our APIs you can easily wrap your test code so that the test can be automated and processed in our stack
[19:14] some test definitions are shipped in the code of lava-test but more and more are using the out-of-tree API to register tests from 3rd party packages
[19:15] if you are an application author you could easily expose your test suite to lava this way
[19:15] okay
[19:15] that's the general overview
[19:15] now for two more things:
[19:15] 1) what can LAVA give you today
[19:15] 2) how can you help us if you are interested
[19:16] While most of our focus is not what typical application developers would find interesting (arm? automation? testing? ;-)
[19:16] some things are quite useful for a general audience
[19:16] you can use lava-server + lava-dashboard to trivially deploy a private test result repository
[19:17] the dashboard has a very powerful data system, you could store crash reports, user-submitted benchmark measurements, results from CI systems that track your development trees
[19:18] in general anything that you'd like to retain for data mining and reporting that you (perhaps) currently store in a custom solution that you need to maintain yourself
[19:18] all of lava's releases are packaged in our ppa (ppa:linaro-validation/ppa) and can be installed on ubuntu lucid+ with a single command
[19:19] the next thing you could use is our various API layers: you could integrate some test/benchmark code in your application and allow users to submit this data to your central repository for analysis
[19:20] if you are really into testing you could wrap your tests in lava-test and benefit from the huge automation effort that we bring with the lava-dispatcher
[19:21] in general, as soon as testing matters to you and you are looking for a toolkit, please consider what we offer and how that might solve your needs
[19:22] during this six month cycle a few interesting things are planned to land
[19:22] first of all: end user and developer documentation
[19:23] overview of the lava project, various stack layers, APIs and examples
[19:23] jykae asked: any projects that use lava tools successfully?
[19:23] jykae, linaro is our primary consumer at this time but ubuntu QA is looking at what we produce in hope for alignment
[19:24] jykae, the next big users are ARM vendors (all the founding members of linaro) that use lava daily and contribute to various pieces
[19:25] jykae, finally I know of one big user, also from the ARM space, expect some nice announcement from them soon - they are really rocking (with what they do with LAVA and in general)
[19:25] jykae, but I hope to build LAVA in a way that _any_ developer can just deploy and start using, like a bug tracker that virtually all pet projects have nowadays
[19:25] jykae, we need more users and we will gladly help them get started
[19:26] ok, back to the "stuff coming this cycle"
[19:26] so documentation is the number one thing
[19:26] another thing in the pipe is email notification for test failures and benchmark regressions
[19:26] this will probably land next month
[19:27] we are also looking at better data mining / reporting features; currently it's quite hard to use this feature, this will be somewhat improved with good documentation but we still think it can be more accessible
[19:27] the goal is to deliver a small IDE that allows users to do data mining and reporting straight from their browsers
[19:27] this is a big topic but small parts of it will land before 11.10
[19:28] finally we are looking at some build automation features so that LAVA can help you out as a CI system
[19:29] and of course: more test wrappers in lava-test, more automation (arm boards, perhaps x86)
[19:29] jykae asked: do you have an irc channel for lava?
[19:29] jykae, yes, we use #linaro for all lava talks
[19:30] jykae, a lot of people there know about it or use it and can help people out
[19:30] jykae, also all the core developers are lurking there so it's the best place to seek assistance and chat with us
[19:30] ok
[19:30] so
[19:30] a few more things:
[19:31] 1) I already mentioned our PPA, we have a policy of targeting Ubuntu 10.04 LTS for our server side code
[19:31] you should have no problems installing our packages there
[19:31] if you want a more modern system, we also support all the subsequent ubuntu releases (except for 11.10, which will be supported soon enough)
[19:32] if you want, most of the code is also published on pypi and can be installed on any system with pip or easy_install
[19:32] 2) We have a website at http://validation.linaro.org where you can find some of the stuff we are making in active production
[19:33] The most prominent feature there is lava-server with the dashboard and scheduler
[19:33] (the dispatcher is also there but has no web presence at this time)
[19:34] There is one interesting thing I wanted to show to encourage you to browse that site more: http://validation.linaro.org/lava-server/dashboard/reports/benchmark/
[19:34] this is a simple report (check the source code button to see how it works) that shows a few simple benchmarks we run daily on various arm boards
[19:35] there are other reports but they are not as interesting (pictures :-) unless you know what they show really
[19:36] another URL I wanted to share (it's not special, just one I selected now): http://validation.linaro.org/lava-server/dashboard/streams/anonymous/lava-daily/bundles/7c0da1d8765e806102c6f8a707ff22b99a43c485/
[19:36] this shows a "bundle" which is the primary thing that the dashboard stores
[19:37] bundles are containers for test results
[19:37] from that page click on the bundle viewer tab to see what a bundle really looks like
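For a rough mental model only: a bundle is a JSON document along these lines. The format string and field names below are guesses from memory rather than the real schema, so use the bundle viewer linked above for the authoritative layout.

{
  "format": "Dashboard Bundle Format 1.2",
  "test_runs": [
    {
      "test_id": "stream",
      "test_results": [
        {"test_case_id": "copy", "result": "pass", "measurement": "4242.0"}
      ]
    }
  ]
}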
[19:38] in the past whenever we were talking about "dashboard bundles" people had a hard time understanding what those bundles are, and this is a nice visual way to learn that
[19:38] okay
[19:38] one more thing before I'm done
[19:38] what we'd like from You
[19:39] 1) Solve your problem: tell us about what you need and how LAVA might help you reach your goal (or what is preventing you from using it effectively), and work with us to make that happen
[19:40] 2) Testing toolkit authors: consider allowing your users to save test results in our format
[19:41] 3) Application authors: if you care about quality please tell us what features you'd like to see the most
[19:41] 4) Coders: help us implement new features, we are a friendly and responsible upstream
[19:41] okay
[19:42] that's all I wanted to broadcast, I'm happy to answer any questions now
[19:45] nobody into quality it seems :-)
[19:49] okay, guess that's it -- thanks everyone :-)
[19:51] There are 10 minutes remaining in the current session.
[19:55] There are 5 minutes remaining in the current session.
=== ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Developer Week - Current Session: Introduction to Upstart - Instructors: marrusl
[20:01] Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[20:01] Hi folks!
[20:01] I have a secret to admit.
[20:01] I'm not actually an Ubuntu Developer.
[20:01] Quick introduction:
[20:02] I work for Canonical as a system support engineer helping customers with implementing and supporting Ubuntu, UEC, Landscape, etc.
[20:02] On a scale from dev to ops, I'm pretty firmly ops.
[20:02] However, for that very reason, I have a keen interest in what is managing the processes on my systems and how those systems boot and shut down.
[20:03] Last thing before I really start...
[20:03] if you're not very familiar with Upstart, this might be a bit dense with new concepts.
[20:03] But to paraphrase Upstart's author, Scott James Remnant: thankfully this is being recorded, so if it doesn't make complete sense now, you can read it again later!
[20:04] The best way to start is probably to define what Upstart is. If you visit http://upstart.ubuntu.com, you'll find this description:
[20:04] vices during boot, stopping them during shutdown and supervising them while the system is running.”
[20:04] let me try that again
[20:04] “Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.”
[20:05] Most of that definition applies to any init system, be it classic System V init scripts, SMF on Solaris, launchd on Mac OS X, or systemd.
[20:05] What sets Upstart apart from the others is that it is "event-based" and not "dependency-based".
[20:06] (note: launchd is not dependency-based, but it's also not event-based like Upstart. I could explain why, but we're all here to talk about Linux, right? :)
[20:06] So let's unpack those terms:
[20:07] A dependency-based system works a lot like a package manager.
[20:07] If you want to install a package, you tell the package manager to install your "goal package".
[20:07] From there, your package manager determines the required dependencies (and the dependencies of those dependencies and so on) and then installs everything required for your package.
[20:08] Likewise, in a dependency-based init system, you define a target service, and when the system wishes to start that service, it first determines and starts all the dependent services and completes dependent tasks.
[20:09] For example, depending on configuration, a mysql installation might depend on the existence of a remote file system.
[20:09] The remote filesystem in turn would require networking to be up.
[20:09] Networking requires the local filesystems to be mounted, which is carried out by the mountall task.
[20:10] This works fairly well with a static set of services and tasks, but it has trouble with dynamic events, such as hot-plugging hardware.
[20:11] To steal an example from the Upstart Cookbook (http://upstart.ubuntu.com/cookbook), let's say you want to start a configuration dialog box whenever an external monitor is plugged in.
[20:11] In a dependency-based system you would need to have an additional daemon that polls for hardware being plugged in.
[20:12] Whereas Upstart is already listening to udev events and you can create a job for your configuration app to start when that event occurs.
[20:12] Certainly this requires udev to be running, but there's no need to define that dependency.
[20:13] Sometimes we refer to this as "booting forward". A dependency-based system defines the end goals and works backwards.
[20:13] It meets all of the goal service's dependencies before running the goal service.
[20:14] Upstart starts a service when its required conditions are met.
[20:14] It's a subtle distinction; hopefully it will become clearer as we go.
[20:15] A nice result of this type of thinking is that when you want to know why "awesome" is running (or not running) you can look at /etc/init/awesome.conf and inspect its start and stop criteria (or on Natty+ run `initctl show-config -e awesome`).
[20:15] There's no need to grep around and figure out what other service called for it to start.
[20:16] But enough about init models... let's get to the real reason I suspect you're here: how to understand, modify, and write Upstart jobs.
[20:16] Upstart jobs come in two main forms: tasks and services.
[20:16] A task is a job that runs a finite process, completes it, and ends.
[20:17] Cron jobs are like tasks, whereas crond (the cron daemon itself) is a service.
[20:17] So like other service jobs, it's a long running process that typically is not expected to stop itself.
[20:17] ssh, apache, avahi, and network-manager are all good examples.
[20:18] Now events...
[20:18] An event is a notification sent by Upstart to any job that is interested in that event.
[20:19] Before Natty, there were four main types of events: init events, mountall events, udev events and what I'll call "service events".
[20:19] In Natty that was expanded to socket events (UNIX or TCP/IP) and D-Bus events.
[20:20] Eventually this will include time-based events (for cron/atd functionality) and filesystem events (e.g. when this file appears, do stuff!).
[20:20] You can type `man upstart-events` on natty or oneiric to see a tabular summary of all "well-known events" along with information about each.
[20:21] We're going to mostly focus on the service events, of which there are four. These are the events that start and stop jobs.
[20:21] 1. Starting. This event is emitted by Upstart when a job is *about* to start.
[20:22] It's the equivalent of Upstart saying "Hey! In case anyone cares, I'm going to start cron now, if you need to do something before cron starts, you'd better do it now!" [20:22] 2. Started. This event is emitted by Upstart when a job is now running. [20:22] "Hey! If anyone was waiting for ssh to be up, it is!" [20:23] 3. Stopping. Like the starting event, this event is emitted when Upstart is *about* to stop a job. [20:23] 4. Stopped. "DONE!" [20:24] Note that "stopping" and "stopped" are also emitted when a job fails. It is possible to establish the manner in which they fail, too. See the man pages for more details. [20:24] (and yes, Upstart shouts everything) [20:24] These events allow Upstart jobs to coordinate with the life cycle of other jobs. [20:25] It's probably time to look at an Upstart job to see how this works. [20:25] Since I couldn't find a real job that takes advantage of each phase of the cycle, I've created a fake one to walk through. [20:25] Please `bzr branch lp:~marrusl/+junk/UDW` and open the file "awesome.conf" [20:26] If you don't have access to bzr at the moment, you can find the files here: [20:26] http://ubuntuone.com/p/14JL/ [20:26] While we look at awesome.conf, it might also help to open the file "UpstartUDW.pdf" and take a look at the second page. [20:26] Hopefully this will make the life cycle clearer. [20:27] Awesome is a made-up system daemon named in honor of our awesome and rocking and jamming Community Manager (please see: http://mdzlog.alcor.net/2010/03/19/introducing-the-jonometer/) [20:28] I mentioned start and stop criteria earlier... well those are the first important lines of the job. [20:28] What we are saying here is "if either the jamming or rocking daemons signal that they are ready to start, awesome should start first". [20:29] If I wanted to make sure that awesome runs *after* those services, I would have used "start on started" instead of "starting". [20:29] So let's say Upstart emits "starting jamming"; this will trigger awesome to start. [20:29] Upstart will emit "starting awesome" and now the pre-start stanza will run. [20:30] Some common tasks you might consider putting into "pre-start" are things like loading a settings file into the environment or cleaning up any files or directories that might have been left behind if the service died abnormally. [20:31] One more key use of the pre-start is running sanity checks to see whether the job should even run (are the required files in place?) [20:31] After pre-start, we are ready to either exec a binary or run a script. Here we are executing the daemon. [20:31] In most cases, this is when Upstart would emit the "started" event. In this example, we have one more thing to do: the post-start stanza. [20:32] You might want to use the post-start stanza when waiting for the PID to exist isn't enough to say that the service is truly ready to respond. [20:32] For example, you start up mysql, the process is running, but it might be another moment or two before mysql has finished loading your databases and is ready to respond to queries. [20:33] In my example, I essentially ripped something out of the CUPS upstart job because it illustrates the point well enough. [20:33] This post-start stanza waits for the /tmp/awesome/ directory to exist. But it doesn't wait forever; it checks every half second for 5 seconds. [20:34] If awesome isn't ready to go by then, something is very wrong and I want it to exit. [20:34] Since that script exits with a non-zero status, Upstart will stop the service.
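For anyone following along without the branch, here is a rough sketch of what a job shaped like awesome.conf could look like, pieced together from the walkthrough above; every name, path, and timing in it is illustrative rather than the actual file:

  # /etc/init/awesome.conf -- illustrative reconstruction, not the file from the branch
  description "awesome daemon"

  # run before either daemon gets going; "starting" fires while they are *about* to start
  start on (starting rocking or starting jamming)
  stop on runlevel [!2345]

  pre-start script
      # sanity check: bail out cleanly if our config is missing
      [ -f /etc/awesome/awesome.conf ] || { stop; exit 0; }
      # clean up anything left behind by an abnormal exit
      rm -rf /tmp/awesome
  end script

  exec /usr/sbin/awesomed

  post-start script
      # the PID existing isn't enough; wait up to 5 seconds for the work directory
      for i in 1 2 3 4 5 6 7 8 9 10; do
          [ -d /tmp/awesome ] && exit 0
          sleep 0.5
      done
      # a non-zero exit here makes Upstart stop the service
      exit 1
  end script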
[20:34] This might be a good place to mention that all shell fragments run with `sh -e` which means two things... [20:35] Your scripts will run with the default system shell, and unless you've changed it, this is by default linked to /bin/dash. [20:36] So do remember to avoid "bashisms" (though you can use "here files" to use any interpreter, please ask later if you'd like to know how, but it's really better form to use only POSIX-compliant sh, imo). [20:36] The other thing it means is that if any command in the script fails, the script will exit. You really can't be too careful running scripts as root. [20:37] Stopping a service is essentially the reverse... Upstart emits "stopping awesome", executes the pre-stop stanza (notice I used an exec in place of a script; you can do this in any of the other stanzas as well). [20:37] Now it tries to SIGTERM the process; if that takes longer than the "kill timeout", it will then send a SIGKILL. [20:38] I should point out that a well-written daemon probably doesn't need pre-stop. It should handle SIGTERM gracefully and if it needs to flush something to disk it does so itself. [20:38] If 5 seconds (the default) isn't enough, specify a longer setting in the job as I did here. In a real job you wouldn't likely be upping the kill timeout _and_ using a “pre-stop” action; I just wanted to illustrate both methods. [20:39] Once post-stop has run (if present), Upstart emits "stopped awesome". [20:39] And the cycle is complete! [20:39] Now, I've covered the major sections of a job, but there are some important additional keywords I'd like to introduce (this is not an exhaustive list): task, respawn, expect [fork or daemon], and manual. [20:39] “task”. This keyword, as you might suspect, should be present in task jobs. There's no argument to it, just put it on a line by itself. [20:40] This keyword lets Upstart know that this process will run its main script/exec and then should be stopped. Some good examples of task jobs on a standard Ubuntu system are: procps, hwclock, and control-alt-delete. [20:40] “respawn”. There are a number of system services that you want to make sure are running constantly, even if they crash or otherwise exit. The classic examples are ssh, rsyslog, cron, and atd. [20:40] “expect [fork|daemon]”. Classic UNIX daemons, well, daemonize... that is, they fork off new processes and detach from the terminal they started from. “expect fork” is for daemons that fork *once*; “expect daemon” will expect the process to fork exactly *twice*. [20:41] In many cases, if your service has a “don't daemonize” or “run in foreground” mode, it's simpler to create an Upstart job without “expect” entirely. You may just have to try both approaches to find out which works best for your service. [20:41] Well, unless you are the author, in that case, you probably already know. :) [20:41] “manual”. The essence of manual is that it disables the job from starting or stopping automatically. A more precise way of putting that is that if the word “manual” appears by itself on a line, anywhere in a job, Upstart will *ignore* any previously specified “start on” condition. So, assuming “manual” appears after the “start on” condition, the service will only run if the administrator manually starts it. [20:41] Note that were an administrator to start the job by running “start myjob”, Upstart will still emit the same set of 4 events automatically. So, starting a job manually may cause other jobs to start.
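To make those keywords concrete, here is a minimal hypothetical job that uses several of them together; the daemon name and its path are invented for the example:

  # /etc/init/classicd.conf -- hypothetical example of the keywords above
  description "a classic forking daemon, supervised by Upstart"

  start on runlevel [2345]
  # keeping a "stop on" lets Upstart shut the job down cleanly at shutdown
  stop on runlevel [!2345]

  # restart the daemon automatically if it crashes or exits
  respawn

  # the daemon forks exactly once when it detaches; use "expect daemon" for two forks
  expect fork

  # ignore the "start on" above; only an explicit `start classicd` will start this job
  manual

  exec /usr/sbin/classicd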
[20:42] Note too that it is good practice to specify a “stop on” condition since if you do not, the only reasonable way to stop the job is to kill it at some unspecified time/ordering when the system is shut down. [20:42] By specifying a “stop on”, you provide information to Upstart to enable it to stop the job in an appropriate fashion and at an appropriate time. [20:42] Adding “manual” seems like a clunky way to disable jobs, doesn't it? I'd rather not have to hack conf files to disable a job. [20:43] And what happens to my modified job if there is a new version of the package released and I update? [20:43] I'll tell you, your changes will be clobbered. [20:43] (ok, actually you'll be prompted by dpkg to confirm or deny the changes, but that is still pretty annoying and can be confusing for new administrators). [20:43] Which is a nice segue into “override” files, which first appeared in Natty. Override files allow you to change an Upstart job without needing to modify the original job. [20:44] What override files really accomplish is... if you put the word “manual” all by itself into a file called /etc/init/awesome.override, it will have the same effect as adding “manual” to awesome.conf. [20:44] So now you can disable a job from starting with a single command: [20:44] echo manual >> /etc/init/awesome.override [20:45] note: do this as root only. Shell redirection doesn't really play nice with sudo. [20:45] To disable a job as an admin user: [20:45] echo manual | sudo tee -a /etc/init/awesome.override [20:45] Since the override file won't be owned by the awesome package, dpkg won't object and you can cleanly update without having to worry about your customizations. Yay! [20:45] I don't really know, but I suspect the original purpose of override files was just to make disabling jobs cleaner. But then a lightbulb went off somewhere... why not let administrators override any stanza in the original job? [20:45] Let's change awesome's start criteria to make it start *after* rocking or jamming. [20:46] Simply create /etc/init/awesome.override and have it contain only this: [20:46] “start on (started rocking or started jamming)” [20:46] Now Upstart will use all of the original job file with only this one stanza changed. This works for any other stanza or keyword. Want to tweak the kill timeout? Customize the pre-start? Add a post-stop? [20:46] Override files can do that. [20:46] On to the last topic of this presentation: an example of converting a Sys V script to Upstart. [20:46] (looks like it will have to be fast!) [20:46] In the files you branched or downloaded, I've included the Sys V script for landscape-client and my first attempt at an Upstart job to do the same thing (landscape-client.conf). [20:47] First, some disclaimers... this is *not* any sort of official script, I'm not suggesting anyone use it. I haven't gotten feedback from the landscape team yet, or properly tested it myself. [20:47] But so far, it seems to be working fine for me. :) [20:47] And yet, I'm pretty sure I've overlooked something. I mentioned I wasn't a developer, right? [20:47] Not knowing the internals of how landscape-client behaves, I started by trying “expect fork” and “expect daemon”. [20:47] Both allowed me to start the client fine, but failed to stop it cleanly (actually the stop command never returned!). [20:48] Clearly I picked the wrong approach. In the end, running it in the foreground (no expect) allowed me to start and stop cleanly.
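As a rough idea of the shape such a conversion takes, here is a generic skeleton of a foreground-mode service job. It is not the actual landscape-client.conf from the branch, and the daemon name and its --foreground flag are placeholders for whatever your service's run-in-foreground option happens to be:

  # /etc/init/mydaemon.conf -- generic skeleton of a Sys V-to-Upstart conversion
  description "a Sys V service converted to a minimal Upstart job"

  start on runlevel [2345]
  stop on runlevel [!2345]

  # Upstart supervises the process, so no PID file handling is needed
  respawn

  # run in the foreground so no "expect" stanza is required;
  # the flag is hypothetical -- check your daemon's own options
  exec /usr/sbin/mydaemon --foreground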
[20:48] Now, if you compare the two scripts side-by-side, the most obvious difference is the length. The Upstart job has about 65% fewer lines. [20:48] This is because Upstart does a lot of things for you that had to be manually coded in Sys V scripts. [20:48] In particular it eliminates the need for PID file management and writing case statements for stop, start, and restart. [20:48] Well, depending on your previous experience with Upstart, that was probably quite a bit of information and new concepts. I know it took me ages to grok Upstart, and Ubuntu is my full-time job! [20:49] So let me wrap up the formal part of this session with suggestions on the best ways to learn more about Upstart. They are: [20:49] “man 5 init” [20:49] “man upstart-events” [20:49] The Upstart Cookbook (http://upstart.ubuntu.com/cookbook) [20:49] The Upstart Development blog (http://upstart.at) [20:49] Your /etc/init directory. [20:49] (Looking through the existing jobs on Ubuntu is incredibly helpful.) [20:50] And of course.... #upstart on freenode. [20:50] wait... jcastro will kill me if I don't mention http://askubuntu.com/questions/tagged/upstart [20:50] With that... questions? [20:50] There are 10 minutes remaining in the current session. [20:52] I'd also like to encourage people to open questions on askubuntu... for the sheer knowledgebase win. [20:52] this link will open a new question and tag it "upstart" for you: [20:52] http://askubuntu.com/questions/ask?tags=upstart [20:53] Thanks for your time and attention, folks. HTH. :) I'll be around on freenode for a while if something pops up. [20:55] There are 5 minutes remaining in the current session. [20:58] lborda asks... first of all, thank you for the presentation! second, what about debugging Upstart services? [20:59] There are a couple of levels... debugging Upstart itself with job events, and debugging individual jobs. [20:59] The best techniques are in the Cookbook. Please see: http://upstart.ubuntu.com/cookbook/#debugging [21:00] I guess that's a full wrap. Take care. [21:00] Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html === ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||