[11:05] <thanos713_> hi to everyone
[11:07] <coalwater> hello thanos713_
[11:08] <thanos713_> When is the next class going to be held?
[11:20] <karthick87> Can anyone help me with apt-cacher-ng setup?
[12:55] <astraljava> karthick87: Support in #ubuntu, as per /topic
[15:50] <dholbach> HELLO EVERYBODY! Welcome to Day 3 of Ubuntu Developer Week!
[15:50] <dholbach> We still have 10 minutes left until we start, but I just wanted to bring up a few organisational bits and pieces first:
[15:51] <dholbach>  - If you can't make it to a session or missed one, logs to the sessions that already happened are linked from https://wiki.ubuntu.com/UbuntuDeveloperWeek
[15:51] <dholbach>  - If you want to chat or ask questions, please make sure you also join #ubuntu-classroom-chat
[15:51] <dholbach>  - If you ask questions, please prefix them with QUESTION:
[15:52] <dholbach>    ie: QUESTION: dpm, How hard was it to learn German?
[15:52] <dholbach>  - if you are on twitter/identi.ca or facebook, follow @ubuntudev to get the latest development updates :)
[15:52] <dpm> (answer: difficult if you start learning in the Swabian dialect)
[15:52] <dholbach> ha! :)
[15:53] <dholbach> that's it from me
[15:53] <dholbach> you still have 7 minutes until David "dpm" Planella will kick off today and talk about Launchpad Translations Sharing!
[15:53] <dholbach> Have a great day!
[16:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[16:02] <dpm> hey everyone!
[16:03] <dpm> welcome all to the 3rd day of Ubuntu Developer Week
[16:03] <dpm> and welcome to this UDW talk about one of the coolest features of Launchpad Translations: upstream imports sharing
[16:03] <dpm> My name is David Planella, and I work on the Community team at Canonical as the Ubuntu Translations Coordinator
[16:04] <dpm> As part of my job, I get the pleasure of working with the awesome Ubuntu translation teams and with the equally awesome Launchpad developers
[16:04] <dpm> I also use Launchpad for translating Ubuntu on a regular basis, as part of my volunteer contribution to the project,
[16:04] <dpm> which is why I'm excited about this feature and I'd like to share it with you guys
[16:05] <dpm> So, without further ado, let's roll!
[16:05] <dpm> oh, btw, I've set aside some time for questions at the end, but feel free to ask anything during the talk.
[16:05] <dpm> just remember to prepend your questions with QUESTION: and ask them on the #ubuntu-classroom-chat channel
[16:06] <dpm> So...
[16:06] <dpm>  
[16:06] <dpm> What is message sharing
[16:06] <dpm> -----------------------
[16:06] <dpm>  
[16:06] <dpm> Before we delve into details, let's start with some basics: what's all this about?
[16:07] <dpm> In short, message sharing is a unique feature in Launchpad with which translations for identical messages are essentially linked into one single message
[16:07] <dpm> This means that as a translator, you just need to translate that message once and your translations will appear automatically in all other places where that message appears.
[16:08] <dpm> The only requirements are that those messages are contained within a template with the same name and are part of different series of a distribution or project in Launchpad.
[16:08] <dpm> This may sound a bit abstract, so let's have a look at an example:
[16:08] <dpm> Let's say you're translating Unity in Ubuntu Oneiric:
[16:08] <dpm> https://translations.launchpad.net/ubuntu/oneiric/+source/unity/+pots/unity/
[16:09] <dpm> (you translate it from that URL in Launchpad)
[16:09] <dpm> And you translate the message "Quit" into your language
[16:09] <dpm> Instantly your translation will propagate to the Unity translation in _Natty_ (and all other releases):
[16:10] <dpm> So it will appear translated here:
[16:10] <dpm> https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/
[16:10] <dpm> without you having had to actually translate it there
[16:10] <dpm> So basically Launchpad is doing the work for you! :-)
[16:11] <dpm> It also works when you want to do corrections:
[16:12] <dpm> Let's say you are not happy with the translation of "Quit" in Unity, and you change it in the Natty series in Launchpad
[16:12] <dpm> So you go to: https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/
[16:12] <dpm> and change it
[16:13] <dpm> actually, I could point you to the actual message:
[16:13] <dpm> https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/ca/2/+translate
[16:13] <dpm> anyway, so you fix the "Quit" translation
[16:13] <dpm> perhaps there was a typo, perhaps you were not happy with the current translation, etc.
[16:15] <dpm> That change will also be propagated to all other Ubuntu series, so that you don't have to manually fix each one of them
[16:15] <dpm> Isn't that cool?
[16:15] <dpm> And as you see, it works both ways: from older series to new ones, and vice versa. The order does not really matter at all.
[16:15] <dpm> we've got a question:
[16:15] <ClassBot> shnatsel|busy asked: If there was a change in terminology between the series, e.g. I translated "torrent" with one word, then the app changed (some strings changed, some strings added) and now I use a different word, is there a way to prevent the new terminology partly leaking to the older series and making a terrible mess there?
[16:16] <dpm> that's a good point
[16:16] <dpm> however, with the current implementation this will not happen
[16:16] <dpm> message sharing works only with identical messages
[16:17] <dpm> this means that if in the first series the original string was "torrent",
[16:17] <dpm> and in the next series the original string was "torrent new",
[16:18] <dpm> there won't be any message sharing at all, which avoids strings inadvertently getting translated automatically when you don't want them to be
[16:19] <ClassBot> gpc asked: So, "torrent" and "torrent." are not the same string?
[16:20] <dpm> that's correct, even with such a small change as this, they're not considered the same string
[16:20] <dpm> they need to be identical
[16:20] <dpm> so rather than a fuzzy match, message sharing works only on identical matches
[16:21] <dpm> fuzzy matching would need further development in Launchpad
[16:21] <dpm> If you're interested in how difficult it would be to implement, or even in implementing it yourself,
[16:21] <dpm> I'd recommend asking on the #launchpad channel
[16:21] <dpm> that's where the Launchpad developers hang out
[16:22] <dpm> and they're always happy to help
[16:22] <dpm> Anyway, let's move on...
[16:22] <dpm> So far we've only talked about distributions, and in particular Ubuntu. But this equally works within series of projects in Launchpad
[16:22] <dpm> But more on that later on...
[16:22] <dpm> You'll find out more about sharing here as well:
[16:22] <dpm> http://blog.launchpad.net/translations/sharing-translations
[16:22] <dpm> http://danilo.segan.org/blog/launchpad/automatic-translations-sharing
[16:23] <dpm>  
[16:23] <dpm> Benefits of message sharing
[16:23] <dpm> ---------------------------
[16:23] <dpm> Continuing with the basics: what good is this for?
[16:24] <dpm> In terms of Launchpad itself, it makes storing the data much more efficient: rather than storing many translations for the same message, only one needs to be stored.
[16:24] <dpm> But most importantly:
[16:24] <dpm> For project maintainers: when they upload a template to a new release series and it gets imported, they will instantly get all existing translations from previous releases shared with the new series, and translators won't have to re-do their work. They won't have to worry about uploading correct versions of translated PO files, and can just care about POT files instead.
[16:25] <dpm> For translators: they no longer have to worry about back-porting translation fixes to older releases, and they can simply translate the latest release: translations will automatically propagate to older releases. Also, this works both ways, so if you are translating a current stable release, the newer development release will get those updates too!
[16:26] <dpm> Any other questions so far?
[16:27] <dpm> Ok, let's continue then
[16:27] <dpm>  
[16:27] <dpm> What's new in message sharing
[16:27] <dpm> -----------------------------
[16:27] <dpm> Until recently, message sharing had only worked within the limits of a distribution or of a project.
[16:28] <dpm> That is, messages could be shared amongst all the series in a distribution or amongst all series of a project.
[16:28] <dpm> As cool as it already was, that was it: data could not flow across projects and distributions, and each one of these Launchpad entities behaved like isolated islands with regards to message sharing.
[16:29] <dpm> But before going forward, let me recap quickly on some other basic concepts in Launchpad. When we're talking about message sharing, we're interested mostly in two types of Launchpad entities
[16:30] <dpm> * Projects: these are standalone projects whose translations are exposed in Launchpad. If these projects are packaged in a distribution, we often refer to the actual project at a location such as https://launchpad.net/some-project as the upstream project. An example is the Chromium project at https://translations.launchpad.net/chromium-browser/. Upstream projects can host their translations in Launchpad or externally. In the latter case, translations can still be imported into Launchpad, but more on that later on
[16:31] <dpm> * Distributions: these are collections of source packages, each one of which is also exposed for translation. The most obvious example is Ubuntu. Here's an example of the Natty series of the Ubuntu distribution: https://translations.launchpad.net/ubuntu/natty
[16:32] <dpm> So the news, and the main subject of this talk, is that from now on translations can be shared, given the right permissions, across projects and distributions.
[16:32] <dpm> Again, let's take an example:
[16:32] <dpm> • The Synaptic project is translatable in Launchpad: https://translations.launchpad.net/synaptic
[16:33] <dpm> • At the same time, the Ubuntu distribution has got a Synaptic package which is translatable in Launchpad: https://translations.staging.launchpad.net/ubuntu/natty/+source/synaptic/
[16:33] <dpm> • Now, given that the upstream maintainer has enabled sharing and has set the right permissions, translators can translate Synaptic in Ubuntu and their translations will seamlessly flow into the upstream project!
[16:33] <dpm> • This works again both ways: if one translates Synaptic in the upstream project, translations will appear in the Ubuntu distribution
[16:34] <dpm> So no more backporting translations or exporting them and committing them back and forth.
[16:35] <ClassBot> ashams asked: So why is there a separate set of templates for each release of Ubuntu, i.e. in Natty: https://translations.launchpad.net/ubuntu/natty/+source/unity/+pots/unity/ and in Oneiric: https://translations.launchpad.net/ubuntu/oneiric/+source/unity/+pots/unity/ ?
[16:36] <dpm> This is due to the fact that for each series of Ubuntu you not only get new applications with new templates (or some go away), but also you get different messages in the templates
[16:36] <dpm> This is how releases of projects work, it hasn't got anything to do with message sharing
[16:37] <dpm> i.e. we need different sets of templates for each series; otherwise, if we had one single set, it would overwrite the old ones on each release
[16:38] <dpm> Anyway, combine the previous example with automatic translation imports and exports, and project maintainers get a fully automated translation workflow, which is really really awesome :-)
[16:38] <dpm> More on automatic imports/exports:
[16:38] <dpm> https://help.launchpad.net/Translations/YourProject/ImportingTranslations
[16:38] <dpm> http://blog.launchpad.net/general/exporting-translations-to-a-bazaar-branch
[16:39] <dpm> So far we've covered projects hosted in Launchpad - what happens with projects hosted externally?
[16:39] <dpm> If translations of a given project are hosted externally, you won't get the benefit of full integration into Launchpad, but you'll still get some important advantages:
[16:40] <dpm> • Translations will need to be imported from a mirror branch into a Launchpad upstream project
[16:40] <dpm> • They will then be regularly imported to Launchpad Translations
[16:40] <dpm> • From there, they will flow quickly, and on a regular basis, into the Ubuntu distribution
[16:40] <dpm> • Up until here, the end result is the same as for upstream projects hosting translations in Launchpad
[16:40] <dpm> • However, translations will only flow in the direction upstream -> Ubuntu, as we don't have a way to automatically send translations to the external project yet
[16:41] <dpm> • The big benefit here is that translations will be imported reliably and quickly on a regular basis
[16:42] <dpm> For an overview on message sharing across projects and distributions, check out this UDS presentation by Henning Eggers, one of the Launchpad developers who implemented this feature, and myself:
[16:42] <dpm> http://ubuntuone.com/p/skw/
[16:43] <dpm>  
[16:43] <dpm> How to enable message sharing
[16:43] <dpm> -----------------------------
[16:43] <dpm> The cool thing to know is that within a project or a distribution message sharing is already enabled
[16:44] <dpm> There are no steps that the project maintainers need to follow: every new series will automatically share messages with all the others
[16:45] <dpm> However, if you want to share messages between an upstream project and a distribution (e.g. Ubuntu), there are a few steps that need to be performed first:
[16:45] <dpm> * Enable translations in the upstream project, setting the right permissions and translations group
[16:46] <dpm> * If the project you want to enable sharing for is hosting translations externally, you'll need to request a bzr import, so that translations can get imported from an external branch
[16:47] <dpm> * Finally, you'll need to link the upstream project to the corresponding source package in Launchpad
[16:47] <dpm> Right now just a few projects and source packages are linked this way, but this cycle we're planning a community effort to enable sharing for all supported packages.
[16:48] <dpm> I've prepared a table with an overview of the supported packages here:
[16:48] <dpm> https://wiki.ubuntu.com/Translations/Projects/LaunchpadUpstreamImports
[16:48] <dpm> And will soon announce how the community can help in enabling these for sharing.
[16:48] <dpm> Stay tuned to the Ubuntu translators list for more info:
[16:49] <dpm> https://wiki.ubuntu.com/Translations/Contact/#Mailing_lists
[16:49] <dpm> Ok, I think that's all I had on content, so let's go for questions!
[16:49] <dpm>  
[16:49] <dpm> Q & A
[16:49] <dpm> -----
[16:50] <ClassBot> yurchor asked: Why does translation sharing behave strangely (diffs are really weird)? Ex.: https://translations.launchpad.net/ubuntu/natty/+source/avogadro/+pots/libavogadro/uk/+translate?show=new_suggestions
[16:51] <ClassBot> There are 10 minutes remaining in the current session.
[16:51] <dpm> I think that particular case is a project in which there was some data that needed migration (i.e. Ubuntu translations exported and uploaded in the upstream package) and that did not happen.
[16:51] <dpm> I'd suggest pointing this out in the #launchpad channel, where the Launchpad devs can have a look at it in more detail
[16:52] <ClassBot> yurchor asked: What if the project does not have repository with translations (like Fedora's libvirt, etc. on Transifex, translations are generated at package creation)? What will be imported from upstream?
[16:53] <dpm> I'm not familiar with how translations are stored in Fedora's libvirt. The only layout that's supported for external repositories is PO files stored in an external version control system that can be imported as a bzr branch (e.g. git, mercurial, svn, etc)
[16:54] <dpm> QUESTION: For example, I'm translating a BitTorrent client. I had a translation of the term "torrent" that I used in all strings that contain it. Then a new major release arrived that has some strings added, some strings removed and some strings (like "New torrent") intact. For the new version, I find a better translation of the term "torrent", and change it in all those strings in the newer series. But then the new translation of "New torrent", with the new term to describe "torrent", will leak to the older series, while some other strings in there still use the old term. How can I prevent it?
[16:55] <dpm> oh, I understand what you mean now
[16:55] <ClassBot> There are 5 minutes remaining in the current session.
[16:56] <dpm> Unfortunately, there is no way to detect this in Launchpad, as there is no way to link the "New torrent" to the "torrent" messages
[16:56] <dpm> In this particular case, one thing you can do is to export the translation as a PO file, and replace all the "New torrent" translations with the new term
[16:57] <dpm> and then reupload in Launchpad
[16:57] <dpm> One thing I forgot to mention, and it might be useful if you want to keep the old terminology in older series,
[16:57] <dpm> is that you can explicitly choose individual messages to diverge
[16:58] <dpm> so you can translate "torrent" as "a" in one series and as "b" in another, bypassing message sharing
[16:59] <dpm> Ok, I think there is not much time for more questions, so we can probably wrap it up here
[16:59] <dpm> If you've got other questions, feel free to ask me on the #ubuntu-translators channel on Freenode
[16:59] <dpm> So thank you everyone for listening and for participating with your questions
[17:00] <dpm> I hope you enjoyed the talk and see you soon!
[17:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[17:01] <jjohansen> Hi, before we start I figured I would introduce myself,  I am John Johansen an Ubuntu contributor and member of the Kernel team
[17:01] <jjohansen> Feel free to ask questions throughout in the #ubuntu-classroom-chat channel, and remember to prepend QUESTION: to your question
[17:01] <jjohansen> So let's get started
[17:01] <jjohansen> ------------------------------------------
[17:01] <jjohansen> Debugging kernels is a vast topic, and there is lots of documentation out there.  Since we only have an hour I thought I would cover a few things specific to the Ubuntu kernel.  So perhaps the topic really should have been changed to debugging the Ubuntu kernel.  I am not going to walk through a live example as we don't have time for multiple kernel builds, installing and testing; working with the kernel takes time.
[17:02] <jjohansen> First up is the all important documentation link https://wiki.ubuntu.com/Kernel
[17:02] <jjohansen> It has a lot of useful information buried in its links.
[17:02] <jjohansen> it takes a while to read through but it's really worth doing if you are interested in the kernel
[17:02] <jjohansen> The Ubuntu kernels are available from git://kernel.ubuntu.com/ubuntu/ubuntu-<release>.git
[17:02] <jjohansen> ie. for natty you would do
[17:02] <jjohansen>   git clone git://kernel.ubuntu.com/ubuntu/ubuntu-natty.git
[17:03] <jjohansen> https://wiki.ubuntu.com/Kernel/Dev/KernelGitGuide
[17:03] <jjohansen> gives the full details if you need more
[17:03] <jjohansen> The Ubuntu kernel uses debian packaging and fakeroot to build, so you will need to pull in some dependencies
[17:03] <jjohansen>   sudo apt-get build-dep linux-image-$(uname -r)
[17:04] <jjohansen> once you have the kernel you can change directory into the tree and build it
[17:04] <jjohansen>   fakeroot debian/rules clean
[17:04] <jjohansen>   fakeroot debian/rules binary-headers binary-generic
[17:04] <jjohansen> Note a couple of things: 1. the clean must be done before your first attempt at building, it sets up some of the environment.  2. we are building only the one kernel flavour in the above example, and the end result should be some debs.
[17:04] <jjohansen> also if you ever see me use fdr, it's an alias I set up for fakeroot debian/rules, so I can get away with doing
[17:04] <jjohansen>   fdr clean
[17:04] <jjohansen> it saves on typing, I'll try not to slip up but just in case ...
[17:04] <jjohansen> Everyone good so far?
[17:05] <jjohansen> Alright onto bisecting
[17:05] <jjohansen> We need to cover a little more information about how the Ubuntu kernel is put together.  Each release of the Ubuntu linux kernel is based on some version of the upstream kernel.  The Ubuntu kernel carries a set of patches on top of the upstream linux kernel.  The patches really can be broken down into 3 categories: packaging/build and configs, features/drivers that are not upstream (see the ubuntu directory for most of these)
[17:05] <jjohansen> During the development cycle, the Ubuntu kernel is rebased on top of the current linux kernel; as the linux kernel is updated, so is the development Ubuntu kernel, which occasionally gets rebased on top of newly imported linux kernels.  This means a couple of things: patches that have been merged upstream get dropped, and the Ubuntu patches stay at the top of the stack.
[17:06] <jjohansen> During the development cycle we hit a point called kernel freeze, where we stop rebasing on upstream, and from this point forward only bug fixes are taken with commits going on top.
[17:06] <jjohansen> So why mention all of this?  Because it greatly affects how we can do kernel bisecting.  If a regression occurs after a kernel is released (say the natty kernel), and there is a known good natty kernel, then bisecting is relatively easy.  We can just checkout the natty kernel and start a git bisect between the released kernel tags.
[17:06] <jjohansen> However bisecting between development releases or different released versions (say maverick and natty) of the kernel becomes much more difficult.  The rebasing breaks continuity, so bisect doesn't work correctly; and even if you are lucky and have continuity, the bisecting may remove the packaging patches.
[17:07] <jjohansen> So how do you bisect bugs in the Ubuntu kernel then?  We use the upstream kernel of course :)
[17:07] <jjohansen> There are two ways to do this, the standard upstream build route and using a trick to get debian packages.
[17:07] <jjohansen> The upstream route is good if you are just doing local builds and not distributing the kernels
[17:08] <jjohansen> but if you want other people to install your kernels you are probably best off using the debian packaging route
[17:08] <jjohansen> So the trick to get debian packaging is pretty simple
[17:09] <jjohansen> you checkout the upstream kernel
[17:09] <jjohansen> checkout an Ubuntu kernel
[17:09] <jjohansen> identify a good and bad upstream kernel
[17:10] <jjohansen> you can do this by using the ubuntu mainline kernel builds available from the kernel team ppa
[17:11] <jjohansen> http://kernel.ubuntu.com/~kernel-ppa/mainline/
[17:11] <jjohansen> that saves you from having to build each kernel
[17:12] <jjohansen> now copy the debian/ and debian.master/ directories from the ubuntu kernel into the upstream kernel
[17:12] <jjohansen> you do not want to commit these
[17:12] <jjohansen> as that will just make them disappear with the bisect
[17:12] <jjohansen> you can then change directory into the upstream kernel
[17:12] <jjohansen> edit debian.master/changelog, the top of the file should be something like
[17:12] <jjohansen>   linux (2.6.38-10.44) natty-proposed; urgency=low
[17:14] <jjohansen> you want to change the version to something that will have meaning to you
[17:14] <jjohansen> and that can be easily replaced by newer kernels
[17:14] <jjohansen> you use a debian packaging trick to do this
[17:15] <jjohansen> change 2.6.38-10.44 to something like 2.6.38-10.44~jjLP645123.1
[17:16] <jjohansen> the jj indicates me, then the launchpad bug number and I like to use a .X to indicate how far into the bisect
[17:16] <jjohansen> you can use whatever makes sense to you
[17:17] <jjohansen> the important part is the ~ which allows kernels with higher version numbers to install over the bisect kernel without breaking things
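[Editor's note: the version-ordering effect of the `~` described above can be checked locally. GNU `sort -V` uses the same comparison convention as dpkg, where `~` sorts before anything, even the end of the string; a quick sketch:]

```shell
# '~' makes a version compare *lower* than the same string without it,
# so 2.6.38-10.44~jjLP645123.1 is "older" than 2.6.38-10.44 and any
# real archive kernel will install right over the bisect build.
printf '%s\n' '2.6.38-10.44' '2.6.38-10.44~jjLP645123.1' | sort -V
# the '~' version is listed first, i.e. it is the lower one
```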
[17:18] <jjohansen> if you are going to upload the kernel to a ppa you will also want to update the release info
[17:18] <jjohansen> ie. natty-proposed in this example
[17:19] <jjohansen> if you are using the current dev kernel it will say UNRELEASED and you need to specify a release pocket
[17:19] <jjohansen> ie. natty, maverick, ...
[17:19] <jjohansen> however ppa builds are slow and I just about never use them, at least not for regular bug testing
[17:20] <jjohansen> now you can build the kernel
[17:20] <jjohansen>   fakeroot debian/rules clean
[17:20] <jjohansen>   fakeroot debian/rules binary-headers binary-generic
[17:21] <jjohansen> this will churn through and should build some .debs that can be installed using
[17:21] <jjohansen>   dpkg -i
[17:21] <jjohansen> now on to doing the actual bisect
[17:23] <jjohansen> so bisecting is basically a binary search: start with a known good point and a known bad point, cut the commits in half, build a kernel, test if it's good, lather, rinse, and repeat
[17:23] <jjohansen> git bisect is just a tool to help you do this
[17:24] <jjohansen> it is smarter than just doing the cut in half, it actually takes merges and other things into account
[17:24] <jjohansen> the one important thing to note for these bisects is that if you look at the git log etc., you may find yourself in kernel versions outside of your bisect range
[17:25] <jjohansen> this is because of how merges are handled, don't worry about it, git bisect will handle it for you
[17:25] <jjohansen> so the basics of git bisect are
[17:26] <jjohansen> git bisect start <bad> <good>
[17:26] <jjohansen> where bad is the bad kernel and good is the good kernel
[17:26] <jjohansen> the problem is how do you know which kernel versions to use
[17:27] <jjohansen> if you are using the upstream kernels for a bisect then the ubuntu version tags are not available to you
[17:27] <jjohansen> you need to use either a commit sha, or tag in the upstream tree
[17:29] <jjohansen> sorry lost my internet there for a minute
[17:31] <jjohansen> if you used the mainline kernel builds to find a good and bad point then you can just use the kernel tags for those, ie. v2.6.36
[17:31] <jjohansen> if you used an ubuntu kernel then you can find a mapping of kernel versions here
[17:32] <jjohansen> http://kernel.ubuntu.com/~kernel-ppa/info/kernel-version-map.html
[17:32] <jjohansen> alright so I was asked what the debian.master directory is about
[17:32] <jjohansen> in the Ubuntu kernel we have two directories to handle the debian packaging
[17:32] <jjohansen> debian and debian.master/
[17:33] <jjohansen> this allows abstracting out parts of the packaging
[17:34] <jjohansen> when you setup a build, the parts of debian.master/ are copied into debian/ and that is used
[17:35] <jjohansen> the difference between the two isn't terribly important for most people, think of debian as the working directory for the packaging, and master as the reference
[17:36] <jjohansen> when I had you edit debian.master/changelog above I could have changed things around and had you edit debian/changelog
[17:36] <jjohansen> however
[17:36] <jjohansen> fakeroot debian/rules clean
[17:36] <jjohansen> will end up copying debian.master/changelog into debian/
[17:37] <jjohansen> thus if you change debian/changelog you have to redo the edit every time you do a clean
[17:37] <jjohansen> so if you are editing debian/ you do
[17:37] <jjohansen>   fdr clean
[17:37] <jjohansen>   edit debian/changelog
[17:38] <jjohansen> which is the reverse of the order for editing debian.master/changelog:
[17:38] <jjohansen>   edit debian.master/changelog
[17:38] <jjohansen>   fdr clean
[17:38] <jjohansen> for me, editing debian.master/changelog keeps me from making a mistake and building a kernel without my edits to the kernel version
[17:40] <jjohansen> hopefully that is enough info on the debian/ and debian.master/ for now
[17:40] <jjohansen> and we will jump back to bisecting for a little longer
[17:41] <jjohansen> so assuming you have your kernel versions for good and bad, you start your bisection
[17:41] <jjohansen> git will put you on a commit roughly in the middle
[17:41] <jjohansen> then you can do
[17:41] <jjohansen>   fdr clean
[17:42] <jjohansen>   fakeroot debian/rules binary-headers binary-generic
[17:42] <jjohansen> sorry caught myself using fdr
[17:42] <jjohansen> this will build your kernel you can install and test
[17:43] <jjohansen> and then you can input your info into git bisect
[17:43] <jjohansen> ie.
[17:43] <jjohansen>   git bisect good
[17:43] <jjohansen> or
[17:43] <jjohansen>   git bisect bad
[17:43] <jjohansen> the important thing to remember is to not commit any of your changes to git
[17:44] <jjohansen> I tend to edit the debian.master/changelog and update the .# at the end of my version string every iteration of the bisection
[17:44] <jjohansen> you don't have to do this
[17:45] <jjohansen> you can get away with just rebuilding straight, or, if you want, doing a partial build
[17:46] <jjohansen> the partial build is a nice trick if you don't have a monster build machine but it doesn't save you anything early on in the bisect, when git is jumping lots of commits and lots of files are getting updated
[17:46] <jjohansen> the trick to doing a partial build in the Ubuntu build system is removing the stamp file
[17:47] <jjohansen> when the kernel is built there are some stamp files generated and placed in
[17:47] <jjohansen>   debian/stamps/
[17:47] <jjohansen> there is one for prepare and one for the actual build
[17:48] <jjohansen> if you build a kernel, and the build stamp file is around, starting a new build will just use the build that already exists and package it into a .deb
[17:48] <jjohansen> you don't want to do this
[17:49] <jjohansen> so after you have stepped your git bisect (git bisect good/bad)
[17:49] <jjohansen> you
[17:49] <jjohansen>   rm debian/stamps/stamp-build-generic
[17:50] <jjohansen> this will cause the build system to try building the kernel again, and make will use its timestamp dependencies to determine what needs to get rebuilt
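[Editor's note: the stamp mechanism is ordinary make dependency tracking. Here is a stand-in sketch with a toy Makefile; the file names are hypothetical, and in the real tree the stamp lives under debian/stamps/ with debian/rules as the makefile.]

```shell
# Toy version of the stamp-file pattern (not the real kernel packaging).
dir=$(mktemp -d)
cd "$dir"
printf 'build: stamp-build\n\nstamp-build: src.txt\n\tcp src.txt out.txt\n\ttouch stamp-build\n' > Makefile
echo v1 > src.txt
make -s build        # first build: recipe runs, stamp is created
rm out.txt
make -s build        # stamp still newer than src.txt: recipe is skipped
rm stamp-build       # like 'rm debian/stamps/stamp-build-generic'
make -s build        # stamp gone: make re-runs the recipe
```

The second `make` does nothing even though the output is missing, because make only checks the stamp; removing the stamp is what forces the rebuild, exactly as with stamp-build-generic above.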
[17:51] <ClassBot> There are 10 minutes remaining in the current session.
[17:51] <jjohansen> if the bisect is only stepping within a driver or subsystem this can save you a lot of time on your builds, however if the bisect updates lots of files (moves lots of commits) or updates some common includes, you are going to end up doing a full kernel build
[17:52] <jjohansen> so now for the other tack: what do you do if you don't want to mess with the .deb build and just want to build the kernel the old-fashioned way?
[17:52] <jjohansen> well you build as you are familiar with.
[17:52] <jjohansen>   make
[17:52] <jjohansen>   make install
[17:52] <jjohansen>   make modules_install
[17:53] <jjohansen> then you need to create a ramdisk, and update grub
[17:54] <jjohansen>   sudo update-initramfs -c -k <kernelversion>
[17:54] <jjohansen> will create the ramdisk you need; if you don't want to mess with the kernel version:
[17:54] <jjohansen>   sudo update-initramfs -c -k all
[17:54] <jjohansen> then you can do
[17:55] <jjohansen>   sudo update-grub
[17:55] <jjohansen> and you are done
[17:55] <jjohansen> so QUESTION: After rm debian/stamps/stamp-build-generic do you still do a fakeroot debian/rules clean when doing the incremental build?
[17:55] <jjohansen> the answer is no
[17:55] <ClassBot> There are 5 minutes remaining in the current session.
[17:56] <jjohansen> doing a clean will remove all the stamp files, and remove your .o files which will cause a full build to happen
[17:57] <jjohansen> so with only a couple minutes left I am not going to jump into a new topic, but will mention something I neglected to cover about the Ubuntu debian builds
[17:58] <jjohansen> our build system has some extra checks for expected abi, configs, and modules
[17:58] <jjohansen> when building against an upstream kernel you will want to turn these off
[17:58] <jjohansen> you can do this by setting some variables on the command line
[17:58] <jjohansen>   fakeroot debian/rules binary-headers binary-generic
[17:58] <jjohansen> becomes
[17:59] <jjohansen>   skipabi=true skipconfig=true skipmodule=true fakeroot debian/rules binary-headers binary-generic
[17:59] <jjohansen> this can also be used when tuning your own configs, etc.
[18:00] <jjohansen> I think I will stop there
[18:00] <jjohansen> thanks for attending, drop by #ubuntu-kernel if you have any questions
[18:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[18:01] <kirkland> howdy all
[18:02] <kirkland> this session is on dotdee
[18:02] <kirkland> there will be a live streamed demo at: http://bit.ly/uclass
[18:02] <kirkland> i invite you to join me there
[18:02] <kirkland> the username and password is guest/guest
[18:03] <kirkland> for those interested, this is a tool called ajaxterm
[18:03] <kirkland> which embeds a terminal in a web browser
[18:03] <kirkland> i've set up a byobu/screen session for the guest user (which is readonly for all of you)
[18:03] <kirkland> I'll drive the demo
[18:04] <kirkland> and annotate it here
[18:04] <kirkland> alternatively, you can ssh guest@ec2-50-19-128-105.compute-1.amazonaws.com
[18:04] <kirkland> with password guest
[18:05] <kirkland> okay, on to dotdee :-)
[18:05] <kirkland> if you've ever configured a Linux/UNIX system, you're probably familiar with the /etc directory
[18:05] <kirkland> and inside of /etc, there are many directories that end in a ".d"
[18:05] <kirkland> watch the terminal while I find a few in /etc
[18:06] <kirkland> there's a bunch!
[18:06] <kirkland> this is a very user friendly way of offering configuration to users
[18:06] <kirkland> usually, files in a .d directory are concatenated, or executed sequentially
[18:07] <kirkland> it gives users quite a bit of flexibility for editing or adding configurations
[18:07] <kirkland> to some software or service
[18:07] <kirkland> it also helps developers and packagers of software
[18:07] <kirkland> as it often allows them to drop snippets of configuration into place
[18:07] <kirkland> but not every configuration file is setup in this way
[18:08] <kirkland> in fact, most of them are really quite "flat"
[18:08] <kirkland> a few months ago, I found myself repeatedly needing to convert some flat configuration files
[18:08] <kirkland> to .d style ones
[18:09] <kirkland> this was for software I was working with, as a developer/packager
[18:09] <kirkland> but not software that I had written myself
[18:09] <kirkland> ideally, I would just ask the upstream developers to change their flat .conf file
[18:09] <kirkland> to a .d directory
[18:09] <kirkland> and they would magically do it
[18:09] <kirkland> and test it
[18:09] <kirkland> and release it
[18:09] <kirkland> immediately :-)
[18:10] <kirkland> that rarely happens though :-P
[18:10] <kirkland> so I wrote a little tool that would generically do that for me!
[18:10] <kirkland> and that tool is called "dotdee"
[18:10] <kirkland> so let's take a look!
[18:10] <kirkland> over in the terminal, i'm going to install dotdee, which is already in Ubuntu oneiric
[18:10] <kirkland> sudo apt-get install dotdee
[18:11] <kirkland> for older ubuntu distros, you can 'sudo apt-add-repository ppa:dotdee/ppa'
[18:11] <kirkland> and update, and install it there too
[18:11] <kirkland> cool, so now i have a /usr/sbin/dotdee executable
[18:11] <kirkland> let's take a flat file, and turn it into a dotdee directory!
[18:12] <kirkland> I'm going to use /etc/hosts as my example
[18:12] <kirkland> first, I need to "setup" the file
[18:12] <kirkland> i can first verify that /etc/hosts is in fact a flat file,
[18:12] <kirkland> -rw-r--r-- 1 root root 296 2011-07-13 12:14 /etc/hosts
[18:13] <kirkland> I'm going to set it up like this:
[18:13] <kirkland> sudo dotdee --setup /etc/hosts
[18:13] <kirkland> INFO: [/etc/hosts] updated by dotdee
[18:13] <kirkland> cool!
[18:13] <kirkland> now let's look at what dotdee did....
[18:13] <kirkland> $ ll /etc/hosts
[18:13] <kirkland> lrwxrwxrwx 1 root root 27 2011-07-13 18:13 /etc/hosts -> /etc/alternatives/etc:hosts
[18:13] <kirkland> so /etc/hosts is now a symlink
[18:13] <kirkland> pointing to an alternatives link
[18:14] <kirkland> $ ll /etc/alternatives/etc:hosts
[18:14] <kirkland> lrwxrwxrwx 1 root root 22 2011-07-13 18:13 /etc/alternatives/etc:hosts -> /etc/dotdee//etc/hosts
[18:14] <kirkland> which is pointing to a flat file in /etc/dotdee
[18:14] <kirkland> $ ll /etc/dotdee//etc/hosts
[18:14] <kirkland> -r--r--r-- 1 root root 296 2011-07-13 18:13 /etc/dotdee//etc/hosts
[18:14] <kirkland> if I look at the contents of that file, I see exactly what I had before
[18:15] <kirkland> but let's go to that directory, /etc/dotdee
[18:15] <kirkland> inside of /etc/dotdee, there is a file structure that mirrors the same file structure on the system
[18:15] <kirkland> importantly, we now have a .d directory
[18:15] <kirkland> that refers exclusively to our newly managed file
[18:16] <kirkland> namely, /etc/dotdee/etc/hosts.d
[18:16] <kirkland> in that directory, all we have is a file called 50-original
[18:16] <kirkland> which is the original contents of our /etc/hosts
[18:16] <kirkland> but let's say we want to "append" a host to that file
[18:16] <kirkland> let's create a new file in this directory
[18:16] <kirkland> and call it 60-googledns
[18:17] <kirkland> so I edit the new file (as root)
[18:17] <kirkland> add the entry, 8.8.8.8 googledns
[18:17] <kirkland> and I'm going to write the file
[18:17] <kirkland> but before i write the file, let me split the screen
[18:18] <kirkland> so that we can watch it get updated, automatically, in real time!
[18:18] <kirkland> so i ran 'watch -n1 cat /etc/hosts'
[18:18] <kirkland> which is just printing that file every 1 second
[18:18] <kirkland> now i'm going to save our 60-googledns file
[18:18] <kirkland> and voila!
[18:19] <kirkland> we have a new entry appended to our /etc/hosts
[18:19] <kirkland> through the magic of inotify :-)
[18:19] <kirkland> which is a kernel facility that reports filesystem events (dotdee runs a watcher daemon on top of it)
[18:19] <kirkland> dotdee comes with a configuration (which is dotdee managed, of course!) that adds and removes patterns
[18:19] <kirkland> as you setup/remove dotdee management
[18:20] <kirkland> let's prepend a host
[18:20] <kirkland> we'll call this one 40-foo
[18:20] <kirkland> and see it land at the beginning of the file
[18:20] <kirkland> bingo
[18:20] <kirkland> now it's at the beginning of our /etc/hosts
[18:22] <kirkland> okay
[18:22] <kirkland> so adding/removing a flat text file is one way of affecting our managed file
[18:23] <kirkland> flat text files are just appended or prepended, based on their alphanumeric positioning
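The assembly rule for flat text files can be emulated outside dotdee (this is an illustration of the ordering, not dotdee itself): files in the .d directory are concatenated in alphanumeric filename order, so 40- lands before 50-original and 60- after it.

```shell
# Emulate dotdee's concatenation of a .d directory.
set -e
d=$(mktemp -d)
printf '127.0.0.1 localhost\n' > "$d/50-original"
printf '8.8.8.8 googledns\n'   > "$d/60-googledns"
printf '1.2.3.4 foo\n'         > "$d/40-foo"
# shell glob expansion is sorted, which is exactly the order dotdee uses
assembled=$(cat "$d"/*)
echo "$assembled"
rm -r "$d"
```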
[18:23] <kirkland> but there are 2 other ways as well!
[18:23] <kirkland> you can also put executables in this .d directory
[18:23] <kirkland> which operate on standard input and output
[18:23] <kirkland> if you want to modify a flat file by "processing" it
[18:24] <kirkland> for instance
[18:24] <kirkland> let's make this file all uppercase
[18:25] <kirkland> whoops
[18:25] <kirkland> okay, there we go
[18:25] <kirkland> which brings me to the --update command :-)
[18:26] <kirkland> dotdee --update can be called against any managed file
[18:26] <kirkland> to update it immediately
[18:26] <kirkland> in case the inotify bit didn't pick up the change
[18:26] <kirkland> in any case, i just did that, and now our /etc/hosts is all uppercase
[18:26] <kirkland> because of our 70-uppercase executable
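A stand-in for the demo's 70-uppercase executable, to show the mechanics: dotdee feeds the assembled-so-far content to each executable in the .d directory on stdin and uses its stdout as the new content.

```shell
# Create a filter like the demo's 70-uppercase and run a line through it.
set -e
d=$(mktemp -d)
cat > "$d/70-uppercase" <<'EOF'
#!/bin/sh
tr '[:lower:]' '[:upper:]'
EOF
chmod +x "$d/70-uppercase"
out=$(printf '127.0.0.1 localhost\n' | "$d/70-uppercase")
echo "$out"
rm -r "$d"
```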
[18:27] <kirkland> what happens if we move it from 70-uppercase to 10-uppercase?
[18:27] <kirkland> or, rather, how about 51-uppercase?
[18:27] <kirkland> see the output now
[18:28] <kirkland> note that 51-uppercase was applied against the "current state" of the output, as of position 51
[18:28] <kirkland> but 60- was applied afterward
[18:28] <kirkland> so it wasn't affected
[18:28] <kirkland> so that's two ways we can affect the contents of the file
[18:28] <kirkland> a) flat text files, b) scripts that process stdin and write to stdout
[18:29] <kirkland> the third way is patches or diff files
[18:29] <kirkland> given that this is a developer audience, we're probably familiar with quilt
[18:29] <kirkland> and directories of patches
[18:29] <kirkland> this is particularly useful if you need to do some 'surgery' on a file
[18:29] <kirkland> let's say I want to "insert" a line into the middle of this file
[18:30] <kirkland> into the middle of 50-original, for instance
[18:31] <kirkland> okay, so i've added "10.9.8.7        hello-there" to the middle of a copy of this file
[18:31] <kirkland> and i'm going to use diff -up to generate a patch
[18:31] <kirkland> there we go
[18:31] <kirkland> okay, let's put that in this .d dir
[18:32] <kirkland> note that I have to add .patch or .diff as the file extension
[18:32] <kirkland> and now i can cat /etc/hosts and see that the patch has been applied!
[18:33] <kirkland> i could stack a great number of these patches here
[18:33] <kirkland> much like a quilt directory
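The patch method can be sketched by hand: generate a unified diff the way the demo does with `diff -u`, then apply it, which is roughly what dotdee does with a .patch file during assembly (file names here are illustrative).

```shell
# Generate a patch that inserts a line mid-file, then apply it.
set -e
d=$(mktemp -d)
printf '1.1.1.1 one\n2.2.2.2 two\n3.3.3.3 three\n' > "$d/orig"
printf '1.1.1.1 one\n2.2.2.2 two\n10.9.8.7 hello-there\n3.3.3.3 three\n' > "$d/new"
# diff exits 1 when the files differ, so tolerate that
diff -u "$d/orig" "$d/new" > "$d/55-insert.patch" || true
cp "$d/orig" "$d/assembled"
patch -s "$d/assembled" < "$d/55-insert.patch"
line3=$(sed -n 3p "$d/assembled")
echo "$line3"
rm -r "$d"
```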
[18:33] <kirkland> okay, so now let's undo this configuration
[18:34] <kirkland> oh, first
[18:34] <kirkland> sudo dotdee --list /etc/hosts
[18:34] <kirkland> /etc/hosts
[18:34] <kirkland> $ echo $?
[18:34] <kirkland> 0
[18:34] <kirkland> this verifies that /etc/hosts is in fact dotdee managed
[18:34] <kirkland> if i try this against some other file
[18:35] <kirkland> $ sudo dotdee --list /boot/vmlinuz-3.0-3-virtual
[18:35] <kirkland> ERROR: [/boot/vmlinuz-3.0-3-virtual] is not managed by dotdee
[18:35] <kirkland> (I don't recommend dotdee'ing your kernel :-)
[18:35] <kirkland> but we can undo our /etc/hosts
[18:35] <kirkland> $ sudo dotdee --undo /etc/hosts
[18:35] <kirkland> update-alternatives: using /etc/dotdee//etc/hosts.d/50-original to provide /etc/hosts (etc:hosts) in auto mode.
[18:35] <kirkland> INFO: [/etc/hosts] has been restored
[18:35] <kirkland> INFO: You may want to manually remove [/etc/dotdee//etc/hosts /etc/dotdee//etc/hosts.d]
[18:36] <kirkland> and now our /etc/hosts is back to being whatever we saved in 50-original
[18:36] <kirkland> so ...
[18:36] <kirkland> that's how it works
[18:36] <kirkland> and the /etc/hosts example is only marginally useful
[18:37] <kirkland> what I would *really* like to use it for is in configuration file management in Debian/Ubuntu packaging
[18:37] <kirkland> in the case where the upstream daemon or utility has a single, flat .conf file
[18:37] <kirkland> but I really would prefer it to be a .d directory
[18:37] <kirkland> so i just did a find on /etc
[18:38] <kirkland> sudo find /etc/ -type f -name "*.conf"
[18:38] <kirkland> and chose one at random
[18:38] <kirkland>  /etc/fonts/fonts.conf
[18:38] <kirkland> which happens to be XML
[18:38] <kirkland> and I just thought about this
[18:38] <kirkland> i should have mentioned it in our previous section
[18:39] <kirkland> XML is tougher than a linear file, like a shell script
[18:39] <kirkland> in that you can't just append, or prepend text
[18:39] <kirkland> you have to surgically insert the bits you want
[18:39] <kirkland> in which case the latter two methods I mentioned, the executable and the diff/patch will be your friend!
[18:40] <kirkland> okay
[18:40] <kirkland> now I would *really* like to see dpkg learn just a little bit about dotdee
[18:41] <kirkland> I'd like for it to be able to determine *if* a file is managed by dotdee
[18:41] <kirkland> (easy to check using dotdee --list, or just checking if the file is a symlink itself)
[18:41] <kirkland> and if so, then it would use $(dotdee --original ...) to find the 50-original file path
[18:42] <kirkland> and dpkg would write its changes to that location (the 50-original file)
[18:42] <kirkland> such that local admin, or even other packages, could dabble in the .d directory, without causing conffile conflicts or .dpkg-original files
[18:43] <kirkland> okay, anyway, let's take a break for questions
[18:43] <kirkland> I think I've demo'd most of what I'd like to show you
[18:43] <kirkland> any questions?
[18:43] <ClassBot> coalitians asked: Are there any issues when you upgrade or update due to dotdee?
[18:44] <kirkland> coalitians: great question !
[18:44] <kirkland> coalitians: right, so that's what I was saying about dpkg needing to "learn" about dotdee
[18:44] <kirkland> coalitians: let's look at an example over in our test system
[18:45] <kirkland> coalitians: I'm going to dotdee --setup that font xml
[18:47] <kirkland> coalitians: okay, as you can see, i've made a change
[18:47] <kirkland> coalitians: let's upgrade (or reinstall) the package that owns this file
[18:47] <kirkland> $ sudo apt-get install --reinstall fontconfig-config
[18:47] <kirkland> lrwxrwxrwx  1 root root   38 2011-07-13 18:45 fonts.conf -> /etc/alternatives/etc:fonts:fonts.conf
[18:47] <kirkland> -rw-r--r--  1 root root 5287 2011-07-01 12:12 fonts.conf.dpkg-new
[18:48] <kirkland> unfortunately, dpkg dumped fonts.conf.dpkg-new here :-(
[18:48] <kirkland> i have toyed with another inotify/iwatch regex that would look for these :-)
[18:48] <kirkland> slurp them up, and move them over to 50-original
[18:48] <kirkland> which works reasonably well
[18:49] <kirkland> except that I don't yet have the interface for the merge/conffile questions, like dpkg does
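The slurp-and-move workaround he describes might look roughly like this (entirely hypothetical naming, shown as a function definition; a real version would also need the merge/conffile prompting he mentions is missing):

```shell
# If dpkg left a .dpkg-new beside a dotdee-managed conffile,
# adopt it as the new 50-original and regenerate the managed file.
recover_dpkg_new() {
    f=$1                                   # e.g. /etc/fonts/fonts.conf
    if [ -e "$f.dpkg-new" ]; then
        sudo mv "$f.dpkg-new" "/etc/dotdee$f.d/50-original"
        sudo dotdee --update "$f"
    fi
}
```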
[18:49] <kirkland> coalitians: so to answer your question, upgrades are not yet handled terribly gracefully
[18:49] <kirkland> coalitians: and would take a little work within dpkg itself
[18:49] <kirkland> coalitians: sorry;  but great question!
[18:49] <kirkland> any others?
[18:50] <kirkland> I see roughly 19# in the byobu session :-)
[18:50] <kirkland> that's what the red-on-white 19# means
[18:50] <kirkland> 19 ssh sessions
[18:50] <kirkland> did the web interface work for anyone?
[18:50] <kirkland> this is the first time I've tried it
[18:51] <ClassBot> There are 10 minutes remaining in the current session.
[18:51] <zyga> oh, already? :-)
[18:51] <zyga> kirkland, is your session over now?
[18:51] <kirkland> okay, I reckon my session is over
[18:51] <kirkland> no more questions
[18:51] <zyga> :-)
[18:51] <kirkland> zyga: sure, you can have it ;-)
[18:51] <zyga> okay
[18:51] <kirkland> thanks all
[18:51] <zyga> awesome, thanks
[18:55] <ClassBot> There are 5 minutes remaining in the current session.
[19:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[19:01] <zyga> welcome everyone :)
[19:01] <zyga> I'm glad to be able to tell you something about LAVA today
[19:02] <zyga> my name is Zygmunt Krynicki, I'm one of the developers working on the project
[19:02] <zyga> feel free to ask questions at any time, check the topic for instructions on how to do so
[19:02] <zyga> okay
[19:02] <zyga> let's get started
[19:02] <zyga> So first off, what is LAVA?
[19:03] <zyga> LAVA is an umbrella project, created by Linaro, that focuses on overall quality automation
[19:04] <zyga> you can think of it as a growing collection of small, focused projects that work well together
[19:05] <zyga> the overall goal of LAVA is to improve quality that developers perceive while working on ARM-based platforms
[19:05] <zyga> we're trying to do that by building tools that can be adopted by third party developers and Linaro members alike
[19:06] <zyga> okay
[19:06] <zyga> As I mentioned earlier LAVA is a collection of projects, I'd like to enumerate the most important ones that exist now
[19:06] <zyga> first of all all of our projects can be roughly grouped into two bins "server side" and "client side"
[19:07] <zyga> where the client is either a client of the "server" or an unrelated non-backend computer (like your workstation or a device being tested)
[19:08] <zyga> the key project on the server is called lava-server; it acts as an entry point for all other server side projects
[19:08] <zyga> essentially it's an extensible application container that simplifies other projects
[19:09] <zyga> next up we have lava-dashboard - a test result repository with data mining and reporting
[19:09] <zyga> lava-scheduler - scheduler for "jobs" for lava-dispatcher
[19:09] <zyga> lava-dispatcher - automated deployment, environment control tool that can remotely run tests on ubuntu and android images
[19:10] <zyga> the last one is really important and is getting a lot of focus recently
[19:10] <zyga> essentially it's something that knows how to control physical devices so that you can do automated image deployment, monitoring and recovery on real hardware
[19:11] <zyga> on the client side we have a similar list:
[19:11] <zyga> lava-tool is a generic wrapper for other client side tools, it allows you to interact with server side components using command line instead of the raw API exposed by our services
[19:11] <zyga> lava-dashboard-tool talks to the dashboard API
[19:12] <zyga> lava-scheduler-tool talks to the scheduler API
[19:12] <zyga> and most important of all: lava-test, it's a little bit different as it is primarily an "offline" component
[19:12] <zyga> it's a wrapper framework for running tests of any kind and processing the results in a way that lava-dashboard can consume
[19:13] <zyga> all of those projects are full of developer friendly APIs that allow for easy customization and extensions
[19:13] <zyga> we use the very same tools to build additional features
[19:13] <zyga> lava-test is also important because it is a growing collection of wrappers for existing tests
[19:14] <zyga> using our APIs you can easily wrap your test code so that the test can be automated and processed in our stack
[19:14] <zyga> some test definitions are shipped in the code of lava-test but more and more are using the out-of-tree API to register tests from 3rd party packages
[19:15] <zyga> if you are an application author you could easily expose your test suite to lava this way
[19:15] <zyga> okay
[19:15] <zyga> that's the general overview
[19:15] <zyga> now for two more things:
[19:15] <zyga> 1) what can LAVA give you today
[19:15] <zyga> 2) how can you help us if you are interested
[19:16] <zyga> While most of our focus is not what typical application developers would find interesting (arm? automation? testing? ;-)
[19:16] <zyga> some things are quite useful for a general audience
[19:16] <zyga> you can use lava-server + lava-dashboard to trivially deploy a private test result repository
[19:17] <zyga> the dashboard has a very powerful data system, you could store crash reports, user-submitted benchmark measurements, results from CI systems that track your development trees
[19:18] <zyga> in general anything that you'd like to retain for data mining and reporting that you (perhaps) currently store in a custom solution that you need to maintain yourself
[19:18] <zyga> all of lava releases are packaged in our ppa (ppa:linaro-validation/ppa) and can be installed on ubuntu lucid+ with a single command
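The PPA install he mentions, spelled out as a function sketch (the package names "lava-server" and "lava-dashboard" are assumptions based on the project names listed above; check the PPA for the exact names):

```shell
# Install LAVA server-side components from the Linaro validation PPA.
install_lava() {
    sudo apt-add-repository ppa:linaro-validation/ppa
    sudo apt-get update
    sudo apt-get install lava-server lava-dashboard
}
```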
[19:19] <zyga> the next thing you could use is our various API layers: you could integrate some test/benchmark code in your application and allow users to submit this data to your central repository for analysis
[19:20] <zyga> if you are really into testing you could wrap your tests in lava-test and benefit from the huge automation effort that we bring with the lava-dispatcher
[19:21] <zyga> in general, as soon as testing matters to you and you are looking for a toolkit please consider what we offer and how that might solve your needs
[19:22] <zyga> during this six month cycle a few interesting things are planned to land
[19:22] <zyga> first of all: end user and developer documentation
[19:23] <zyga> overview of lava project, various stack layers, APIs and examples
[19:23] <ClassBot> jykae asked: any projects that use successfully lava tools?
[19:23] <zyga> jykae, linaro is our primary consumer at this time but ubuntu QA is looking at what we produce in hope for alignment
[19:24] <zyga> jykae, next big users are ARM vendors (all the founding members of linaro) that use lava daily and contribute to various pieces
[19:25] <zyga> jykae, finally I know of one big user, also from the ARM space, expect some nice announcement from them soon - they are really rocking (with what they do with LAVA and in general)
[19:25] <zyga> jykae, but I hope to build LAVA in a way that _any_ developer can just deploy and start using, like a bug tracker that virtually all pet projects have nowadays
[19:25] <zyga> jykae, we need more users and we will gladly help them get started
[19:26] <zyga> ok, back to the "stuff coming this cycle"
[19:26] <zyga> so documentation is the number one thing
[19:26] <zyga> another thing in the pipe is email notification for test failures and benchmark regressions
[19:26] <zyga> this will probably land next month
[19:27] <zyga> we are also looking at better data mining / reporting features, currently it's quite hard to use this feature, this will be somewhat improved with good documentation but we still think it can be more accessible
[19:27] <zyga> the goal is to deliver a small IDE that allows users to do data mining and reporting straight from their browsers
[19:27] <zyga> this is a big topic but small parts of it will land before 11.10
[19:28] <zyga> finally we are looking at some build automation features so that LAVA can help you out as a CI system
[19:29] <zyga> and of course: more tests wrapped in lava-test, more automation (ARM boards, perhaps x86)
[19:29] <ClassBot> jykae asked: do you have irc channel for lava?
[19:29] <zyga> jykae, yes, we use #linaro for all lava talks
[19:30] <zyga> jykae, a lot of people there know about it or use it and can help people out
[19:30] <zyga> jykae, also all the core developers are lurking there so it's the best place to seek assistance and chat to us
[19:30] <zyga> ok
[19:30] <zyga> so
[19:30] <zyga> a few more things:
[19:31] <zyga> 1) I already mentioned our PPA, we have a policy of targeting Ubuntu 10.04 LTS for our server side code
[19:31] <zyga> you should have no problems in installing our packages there
[19:32] <zyga> if you want a more modern system, we also support all the subsequent ubuntu releases (except for 11.10, which will be supported soon enough)
[19:32] <zyga> most of the code is also published on PyPI and can be installed on any system with pip or easy_install
[19:32] <zyga> 2) We have a website at http://validation.linaro.org where you can find some of the stuff we are making in active production
[19:33] <zyga> The most prominent feature there is lava-server with dashboard and scheduler
[19:33] <zyga> (the dispatcher is also there but has no web presence at this time)
[19:34] <zyga> There is one interesting thing I wanted to show to encourage you to browse that site more: http://validation.linaro.org/lava-server/dashboard/reports/benchmark/
[19:34] <zyga> this is a simple report (check the source code button to see how it works) that shows a few simple benchmarks we run daily on various arm boards
[19:35] <zyga> there are other reports but they are not as interesting (pictures :-) unless you know what they show really
[19:36] <zyga> another URL I wanted to share (it's not special, just one I selected now): http://validation.linaro.org/lava-server/dashboard/streams/anonymous/lava-daily/bundles/7c0da1d8765e806102c6f8a707ff22b99a43c485/
[19:36] <zyga> this shows a "bundle" which is the primary thing that dashboard stores
[19:37] <zyga> bundles are containers for test results
[19:37] <zyga> from that page click on the bundle viewer tab to see what a bundle really looks like
[19:38] <zyga> in the past whenever we were talking about "dashboard bundles" people had a hard time understanding what those bundles are and this is a nice visual way to learn that
[19:38] <zyga> okay
[19:38] <zyga> one more thing before I'm done
[19:38] <zyga> what we'd like from You
[19:39] <zyga> 1) Solve your problem, tell us about what you need and how LAVA might help you reach your goal (or what is preventing you from using it effectively), work with us to make that happen
[19:40] <zyga> 2) Testing toolkit authors: consider allowing your users to save test results in our format
[19:41] <zyga> 3) Application authors: if you care about quality please tell us what features you'd like to see the most
[19:41] <zyga> 4) Coders: help us implement new features, we are a friendly and responsible upstream
[19:41] <zyga> okay
[19:42] <zyga> that's all I wanted to broadcast, I'm happy to answer any questions now
[19:45] <zyga> nobody into quality it seems :-)
[19:49] <zyga> okay, guess that's it -- thanks everyone :-)
[19:51] <ClassBot> There are 10 minutes remaining in the current session.
[19:55] <ClassBot> There are 5 minutes remaining in the current session.
[20:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html following the conclusion of the session.
[20:01] <marrusl> Hi folks!
[20:01] <marrusl> I have a secret to admit.
[20:01] <marrusl> I'm not actually an Ubuntu Developer.
[20:01] <marrusl> Quick introduction:
[20:02] <marrusl> I work for Canonical as a system support engineer helping customers with implementing and supporting Ubuntu, UEC, Landscape, etc.
[20:02] <marrusl> On a scale from dev to ops, I'm pretty firmly ops.
[20:02] <marrusl> However, for that very reason, I have a keen interest in what is managing the processes on my systems and how those systems boot and shutdown.
[20:03] <marrusl> Last thing before I really start...
[20:03] <marrusl> if you're not very familiar with Upstart, this might be a bit dense with new concepts.
[20:03] <marrusl> But to paraphrase Upstart's author, Scott James Remnant:  thankfully this is being recorded, so if it doesn't make complete sense now, you can read it again later!
[20:04] <marrusl> The best way to start is probably to define what Upstart is.  If you visit http://upstart.ubuntu.com, you'll find this description:
[20:04] <marrusl> “Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.”
[20:05] <marrusl> Most of that definition applies to any init system, be it classic System V init scripts, SMF on Solaris, launchd on Mac OS X, or systemd.
[20:05] <marrusl> What sets Upstart apart from the others is that it is "event-based" and not "dependency-based".
[20:06] <marrusl> (note: launchd is not dependency-based, but it's also not event-based like Upstart.  I could explain why, but we're all here to talk about Linux, right? :)
[20:06] <marrusl> So let's unpack those terms:
[20:07] <marrusl> A dependency-based system works a lot like a package manager.
[20:07] <marrusl> If you want to install a package, you tell the package manager to install your "goal package".
[20:07] <marrusl> From there, your package manager determines the required dependencies (and the dependencies of those dependencies and so on) and then installs everything required for your package.
[20:08] <marrusl> Likewise, in a dependency-based init system, you define a target service, and when the system wishes to start that service, it first determines and starts all the dependent services and completes dependent tasks.
[20:09] <marrusl> For example, depending on configuration, a mysql installation might depend on the existence of a remote file system.
[20:09] <marrusl> The remote filesystem in turn would require networking to be up.
[20:09] <marrusl> Networking requires the local filesystems to be mounted, which is carried out by the mountall task.
[20:10] <marrusl> This works fairly well with a static set of services and tasks, but it has trouble with dynamic events, such as hot-plugging hardware.
[20:11] <marrusl> To steal an example from the Upstart Cookbook (http://upstart.ubuntu.com/cookbook) let's say you want to start a configuration dialog box whenever an external monitor is plugged in.
[20:11] <marrusl> In a dependency-based system you would need to have an additional daemon that polls for hardware being plugged.
[20:12] <marrusl> Whereas Upstart is already listening to udev events and you can create a job for your configuration app to start when that event occurs.
[20:12] <marrusl> Certainly this requires udev to be running, but there's no need to define that dependency.
[20:13] <marrusl> Sometimes we refer to this as "booting forward".  A dependency-based system defines the end goals and works backwards.
[20:13] <marrusl> It meets all of the goal service's dependencies before running the goal service.
[20:14] <marrusl> Upstart starts a service when its required conditions are met.
[20:14] <marrusl> It's a subtle distinction, hopefully it will become clearer as we go.
[20:15] <marrusl> A nice result of this type of thinking is that when you want to know why "awesome" is running (or not running) you can look at /etc/init/awesome.conf and inspect its start and stop criteria (or on Natty+ run `initctl show-config -e awesome`).
[20:15] <marrusl> There's no need to grep around and figure out what other other service called for it to start.
[20:16] <marrusl> But enough about init models...  let's get to the real reason I suspect you're here:  how to understand, modify, and write Upstart jobs.
[20:16] <marrusl> Upstart jobs come in two main forms: tasks and services.
[20:16] <marrusl> A task is a job that runs a finite process, completes it, and ends.
[20:17] <marrusl> Cron jobs are like tasks, whereas crond (the cron daemon itself) is a service.
[20:17] <marrusl> So like other service jobs, it's a long-running process that typically is not expected to stop itself.
[20:17] <marrusl> ssh, apache, avahi, and network-manager are all good examples.
[20:18] <marrusl> Now events...
[20:18] <marrusl> An event is a notification sent by Upstart to any job that is interested in that event.
[20:19] <marrusl> Before Natty, there were four main types of events: init events, mountall events, udev events, and what I'll call "service events".
[20:19] <marrusl> In Natty that was expanded to include socket events (UNIX or TCP/IP) and D-Bus events.
[20:20] <marrusl> Eventually this will include time-based events (for cron/atd functionality) and filesystem events (e.g. when this file appears, do stuff!).
[20:20] <marrusl> You can type `man upstart-events` on natty or oneiric to see a tabular summary of all "well-known events" along with information about each.
[20:21] <marrusl> We're going to mostly focus on the service events, of which there are four.  These are the events that start and stop jobs.
[20:21] <marrusl> 1. Starting.  This event is emitted by Upstart when a job is *about* to start.
[20:22] <marrusl> It's the equivalent of Upstart saying "Hey! In case anyone cares, I'm going to start cron now; if you need to do something before cron starts, you'd better do it now!"
[20:22] <marrusl> 2. Started. This event is emitted by Upstart when a job is now running.
[20:22] <marrusl> "Hey!  If anyone was waiting for ssh to be up, it is!"
[20:23] <marrusl> 3. Stopping.  Like the starting event, this event is emitted when Upstart is *about* to stop a job.
[20:23] <marrusl> 4. Stopped.  "DONE!"
[20:24] <marrusl> Note that "stopping" and "stopped" are also emitted when a job fails.  It is possible to establish the manner in which they failed, too.  See the man pages for more details.
[20:24] <marrusl> (and yes, Upstart shouts everything)
[20:24] <marrusl> These events allow other Upstart jobs to coordinate with the life cycle of another job.
[20:25] <marrusl> It's probably time to look at an Upstart job to see how this works.
[20:25] <marrusl> Since I couldn't find a real job that takes advantage of each phase of the cycle, I've created a fake one to walk through.
[20:25] <marrusl> Please `bzr branch lp:~marrusl/+junk/UDW` and open the file "awesome.conf"
[20:26] <marrusl> If you don't have access to bzr at the moment, you can find the files here:
[20:26] <marrusl> http://ubuntuone.com/p/14JL/
[20:26] <marrusl> While we look at awesome.conf, it might also help to open the file "UpstartUDW.pdf" and take a look at the second page.
[20:26] <marrusl> Hopefully this will make the life cycle more clear.
[20:27] <marrusl> Awesome is a made-up system daemon named in honor of our awesome and rocking and jamming Community Manager (please see:  http://mdzlog.alcor.net/2010/03/19/introducing-the-jonometer/)
[20:28] <marrusl> I mentioned start and stop criteria earlier... well those are the first important lines of the job.
[20:28] <marrusl> What we are saying here is "if either the jamming or rocking daemons signal that they are ready to start, awesome should start first".
[20:29] <marrusl> If I wanted to make sure that awesome runs *after* those services, I would have used "start on started" instead of "starting".
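The two start conditions being contrasted would look like this in a job file (awesome, rocking, and jamming are the made-up services from the walkthrough):

```
# Run awesome *before* either service starts; Upstart holds
# rocking/jamming until awesome is up:
start on (starting rocking or starting jamming)

# Alternatively, run awesome *after* either service has started:
# start on (started rocking or started jamming)
```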
[20:29] <marrusl> So let's say Upstart emits "starting jamming", this will trigger awesome to start.
[20:29] <marrusl> Upstart will emit "starting awesome" and now the pre-start stanza will run.
[20:30] <marrusl> Some common tasks you might consider putting into "pre-start" are things like loading a settings file into the environment or cleaning up any files or directories that might have been left if the service dies abnormally.
[20:31] <marrusl> One more key use of the pre-start is if you want some sanity checks to see if you should even run (are the required files in place?)
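A minimal sketch of those pre-start uses, with hypothetical paths (/etc/default/awesome, /etc/awesome.conf); calling `stop` from inside pre-start is the documented way to cancel the start:

```
pre-start script
    # Load settings into the environment for the main process.
    if [ -f /etc/default/awesome ]; then
        . /etc/default/awesome
    fi
    # Clean up anything left behind by an abnormal exit.
    rm -f /var/run/awesome/*.lock
    # Sanity check: cancel this start if the config is missing.
    if [ ! -f /etc/awesome.conf ]; then
        stop; exit 0
    fi
end script
```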
[20:31] <marrusl> After pre-start, now we are ready to either exec a binary or run a script.  Here we are executing the daemon.
[20:31] <marrusl> In most cases, this is when Upstart would emit the "started" event.  In this example, we have one more thing to do: the post-start stanza.
[20:32] <marrusl> You might want to use the post-start stanza when waiting for the PID to exist isn't enough to say that the service is truly ready to respond.
[20:32] <marrusl> For example, you start up mysql, the process is running, but it might be another moment or two before mysql has finished loading your databases and is ready to respond to queries.
[20:33] <marrusl> In my example, I essentially ripped something out of the CUPS upstart job because it illustrates the point well enough.
[20:33] <marrusl> This post-start stanza waits for the /tmp/awesome/ directory to exist.  But it doesn't wait forever, it checks every half second for 5 seconds.
[20:34] <marrusl> If awesome isn't ready to go by then, something is very wrong and I want it to exit.
[20:34] <marrusl> Since that script exits with a non-zero status, Upstart will stop the service.
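The polling pattern just described (borrowed from the CUPS job) can be sketched as plain POSIX shell; /tmp/awesome stands in for whatever readiness marker your service creates:

```shell
#!/bin/sh
# Poll for a readiness marker: check every half second, up to
# 5 seconds total (10 tries). /tmp/awesome is the hypothetical
# directory the made-up awesome daemon creates once it is ready.
wait_for_dir() {
    i=0
    while [ "$i" -lt 10 ]; do
        if [ -d "$1" ]; then
            return 0        # ready; Upstart would now emit "started"
        fi
        sleep 0.5
        i=$((i + 1))
    done
    return 1                # give up; non-zero status stops the job
}

mkdir -p /tmp/awesome       # simulate the daemon becoming ready
if wait_for_dir /tmp/awesome; then
    echo "awesome is ready"
fi
```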
[20:34] <marrusl> This might be a good place to mention that all shell fragments run with `sh -e` which means two things...
[20:35] <marrusl> Your scripts will run with the default system shell, and unless you've changed it, this is by default linked to /bin/dash.
[20:36] <marrusl> So do remember to avoid "bashisms" (though you can use "here documents" to use any interpreter; please ask later if you'd like to know how, but it's really better form to use only POSIX-compliant sh, imo).
[20:36] <marrusl> The other thing it means is that if any command in the script fails, the script will exit.  You really can't be too careful running scripts as root.
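A small demonstration (not from the session) of both points — a POSIX-safe test that also works under dash, and errexit cutting a script short:

```shell
#!/bin/sh
# POSIX-compliant string comparison (works in dash); the bash-only
# form `[[ $val == a* ]]` would be a syntax error under /bin/dash.
val="abc"
[ "$val" = "abc" ] && echo "posix test ok"

# Under `sh -e`, the script aborts at the first failing command:
out=$(sh -e -c 'echo before; false; echo after' 2>/dev/null || true)
echo "$out"    # only "before" is printed; "after" is never reached
```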
[20:37] <marrusl> Stopping a service is essentially the reverse... Upstart emits "stopping awesome", then executes the pre-stop stanza (notice I used an exec in place of a script; you can do this in any of the other stanzas as well).
[20:37] <marrusl> Now it tries to SIGTERM the process; if that takes longer than the "kill timeout", it will then send a SIGKILL.
[20:38] <marrusl> I should point out that a well-written daemon probably doesn't need pre-stop.  It should handle SIGTERM gracefully and if it needs to flush something to disk it does so itself.
[20:38] <marrusl> If 5 seconds (the default) isn't enough, specify a longer setting in the job as I did here.  In a real job you wouldn't likely be upping the kill timeout _and_ using a “pre-stop” action, I just wanted to illustrate both methods.
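In job-file terms, those two knobs look like this (awesome-flush is a hypothetical helper binary):

```
# Give the daemon 20 seconds after SIGTERM before SIGKILL
# (the default is 5 seconds):
kill timeout 20

# Last-chance work before the SIGTERM is sent:
pre-stop exec /usr/bin/awesome-flush
```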
[20:39] <marrusl> Once post-stop has run (if present), Upstart emits "stopped awesome".
[20:39] <marrusl> And the cycle is complete!
[20:39] <marrusl> Now, I've covered the major sections of a job, but there are some important additional keywords I'd like to introduce (this is not an exhaustive list):  task, respawn, expect [fork or daemon], and manual.
[20:39] <marrusl> “task”.  This keyword, as you might suspect, should be present in task jobs.  There's no argument to it, just put it on a line by itself.
[20:40] <marrusl> This keyword lets Upstart know that this process will run its main script/exec and then should be stopped.  Some good examples of task jobs on a standard Ubuntu system are:  procps, hwclock, and control-alt-delete.
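A task job is just a regular job plus the `task` keyword. This sketch is loosely modeled on the kind of one-shot work procps does (the stanzas are illustrative, not copied from the real job):

```
description "apply kernel tunables"
task
start on virtual-filesystems
exec sysctl -q -p /etc/sysctl.conf
```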
[20:40] <marrusl> “respawn”.  There are a number of system services that you want to make sure are running constantly, even if they crash or otherwise exit.  The classic examples are ssh, rsyslog, cron, and atd.
[20:40] <marrusl> “expect [fork|daemon]”.  Classic UNIX daemons, well, daemonize... that is they fork off new processes and detach from the terminal they started from.  “expect fork” is for daemons that fork *once*, “expect daemon” will expect the process to fork exactly *twice*.
[20:41] <marrusl> In many cases, if your service has a “don't daemonize” or “run in foreground” mode, it's simpler to create an Upstart job without “expect” entirely.  You may just have to try both approaches to find out which works best for your service.
[20:41] <marrusl> Well, unless you are the author, in that case, you probably already know. :)
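The three daemonization choices side by side (awesomed and its --foreground flag are hypothetical):

```
# Daemon forks exactly once into the background:
expect fork
exec /usr/sbin/awesomed

# Daemon double-forks (classic daemonization): use "expect daemon"
# instead of "expect fork".

# Simplest option, if the daemon has a foreground mode: drop
# "expect" entirely and keep it in the foreground, e.g.
#   exec /usr/sbin/awesomed --foreground
```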
[20:41] <marrusl> “manual”.  The essence of manual is that it disables the job from starting or stopping automatically.  Another way of putting that (and more precise) is that if the word “manual” appears by itself on a line, anywhere in a job, Upstart will *ignore* any previously specified “start on” condition.  So, assuming “manual” appears after the “start on” condition, the service will only run if the
[20:41] <marrusl> administrator manually starts it.
[20:41] <marrusl> Note that were an administrator to start the job by running, “start myjob”, Upstart will still emit the same set of 4 events automatically. So, starting a job manually may cause other jobs to start.
[20:42] <marrusl> Note too that it is good practice to specify a “stop on” condition, since if you do not, the only reasonable way to stop the job is to kill it at some unspecified time/ordering when the system is shut down.
[20:42] <marrusl> By specifying a “stop on”, you provide information to Upstart to enable it to stop the job in an appropriate fashion and at an appropriate time.
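Putting `manual` and a `stop on` condition together, a job that only ever runs when the administrator asks for it might look like:

```
start on (starting rocking or starting jamming)
stop on runlevel [!2345]

# Anywhere after "start on", this makes Upstart ignore the start
# condition above; the job now runs only via "start awesome":
manual
```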
[20:42] <marrusl> adding “manual” seems like a clunky way to disable jobs, doesn't it?  I'd rather not have to hack conf files to disable a job.
[20:43] <marrusl> And what happens to my modified job if there is a new version of the package released and I update?
[20:43] <marrusl> I'll tell you, your changes will be clobbered.
[20:43] <marrusl> (ok, actually you'll be prompted by dpkg to confirm or deny the changes, but that is still pretty annoying and can be confusing for new administrators).
[20:43] <marrusl> Which is a nice segue into “override” files, which first appear in Natty.  Override files allow you to change an Upstart job without needing to modify the original job.
[20:44] <marrusl> What override files really accomplish is...  if you put the word “manual” all by itself into a file called /etc/init/awesome.override, it will have the same effect as adding “manual” to awesome.conf.
[20:44] <marrusl> So now you can disable a job from starting with a single command:
[20:44] <marrusl> echo manual >> /etc/init/awesome.override
[20:45] <marrusl> note: this is as root only.  Shell redirection doesn't really play nice with sudo.
[20:45] <marrusl> To disable a job as an admin user:
[20:45] <marrusl> echo manual | sudo tee -a /etc/init/awesome.override
[20:45] <marrusl> Since the override file won't be owned by the awesome package, dpkg won't object and you can cleanly update the package without having to worry about your customizations.  Yay!
[20:45] <marrusl> I don't really know, but I suspect the original purpose of override files was just to make disabling jobs cleaner.  But then a lightbulb went off somewhere...  why not let administrators override any stanza in the original job?
[20:45] <marrusl> Let's change awesome's start criteria to make it start *after* rocking or jamming.
[20:46] <marrusl> Simply create /etc/init/awesome.override and have it contain only this:
[20:46] <marrusl> “start on (started rocking or started jamming)”
[20:46] <marrusl> Now Upstart will use all of the original job file with only this one stanza changed.  This works for any other stanza or keyword.  Want to tweak the kill timeout?  Customize the pre-start?  Add a post-stop?
[20:46] <marrusl> Override files can do that.
[20:46] <marrusl> On to the last topic of this presentation:  an example of converting a Sys V script to Upstart.
[20:46] <marrusl> (looks like it will have to be fast!)
[20:46] <marrusl> In the files you branched or downloaded, I've included the Sys V script for landscape-client and my first attempt at an Upstart job to do the same thing (landscape-client.conf).
[20:47] <marrusl> First, some disclaimers... this is *not* any sort of official script, I'm not suggesting anyone use it.  I haven't gotten feedback from the landscape team yet, or properly tested it myself.
[20:47] <marrusl> But so far, it seems to be working for me fine. :)
[20:47] <marrusl> And yet, I'm pretty sure I've overlooked something.  I mentioned I wasn't a developer, right?
[20:47] <marrusl> Not knowing the internals of how landscape-client behaves, I started by trying “expect fork” and “expect daemon”.
[20:47] <marrusl> Both allowed me to start the client fine, but failed to stop it cleanly (actually the stop command never returned!).
[20:48] <marrusl> Clearly I picked the wrong approach.  In the end, running it in the foreground (no expect) allowed me to start and stop cleanly.
[20:48] <marrusl> Now, if you compare the two scripts side-by-side, the most obvious difference is the length. The Upstart job is about 65% fewer lines.
[20:48] <marrusl> This is because Upstart does a lot of things for you that had to be manually coded in Sys V scripts.
[20:48] <marrusl> In particular it eliminates the need for PID file management and writing case statements for stop, start, and restart.
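As a rough illustration (this is not the landscape-client job from the branch), the start/stop/restart case statement, start-stop-daemon calls, and PID-file bookkeeping of a typical SysV script collapse to a handful of stanzas:

```
description "example daemon (hypothetical)"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
# Run in the foreground, so no "expect" stanza is needed:
exec /usr/sbin/exampled --foreground
```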
[20:48] <marrusl> Well, depending on your previous experience with Upstart, that was probably quite a bit of information and new concepts.  I know it took me ages to grok Upstart, and Ubuntu is my full-time job!
[20:49] <marrusl> So let me wrap up the formal part of this session with suggestions on the best ways to learn more about Upstart.  They are:
[20:49] <marrusl> “man 5 init”
[20:49] <marrusl> “man upstart-events”
[20:49] <marrusl> The Upstart Cookbook (http://upstart.ubuntu.com/cookbook)
[20:49] <marrusl> The Upstart Development blog (http://upstart.at)
[20:49] <marrusl> Your /etc/init directory.
[20:49] <marrusl> (Looking through the existing jobs on Ubuntu is incredibly helpful.)
[20:50] <marrusl> And of course.... #upstart on freenode.
[20:50] <marrusl> wait... jcastro will kill me if I don't mention http://askubuntu.com/questions/tagged/upstart
[20:50] <marrusl> With that...  questions?
[20:50] <ClassBot> There are 10 minutes remaining in the current session.
[20:52] <marrusl> I'd also like to encourage people to open questions on askubuntu... for the sheer knowledgebase win.
[20:52] <marrusl> this link will open a new question and tag it "upstart" for you:
[20:52] <marrusl> http://askubuntu.com/questions/ask?tags=upstart
[20:53] <marrusl> Thanks for your time and attention, folks.  HTH.  :)  I'll be around on freenode for a while if something pops up.
[20:55] <ClassBot> There are 5 minutes remaining in the current session.
[20:58] <marrusl> lborda asks...  first of all, Thank you for the presentation! second, what about debugging upstart services?
[20:59] <marrusl> There are a couple levels... debugging upstart itself with  job events, and debugging individual jobs.
[20:59] <marrusl> The best techniques are in the Cookbook.  Please see: http://upstart.ubuntu.com/cookbook/#debugging
[21:00] <marrusl> I guess that's a full wrap.  Take care.
[21:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/13/%23ubuntu-classroom.html