[00:08] <TheYsNoi_> hi there
[00:15] <ysnoi> checking
[00:36] <dnewkirk> exit
[01:25] <metasansana> red green
[01:25] <metasansana> green red
[01:36] <raju> #ubuntu-irc
[01:37] <raju> aah I am sorry, command problem, really sorry
[02:11] <BlueProtoman> Anyone here mind helping me out with my hardware acceleration?  I still can't get it working.  Ubuntu 11.04, Intel HD 3000 and nVidia GeForce 520M, both controlled via Optimus.  I've tried this https://github.com/Bumblebee-Project/Bumblebee/wiki/Troubleshooting and https://github.com/Bumblebee-Project/Bumblebee/wiki/Upgrading-on-Ubuntu , and after resetting numerous times I still have no luck.
[03:55] <TheYsNoi> lunch break..
[04:59] <TheYsNoi> installing hoN
[05:00] <ashickur-noor> What is BTW?
[05:01] <ashickur-noor> *this
[05:39] <TheYsNoi> by the way?
[09:42] <lexiadmin> ?
[12:33] <Pritam_chaingang> h
[12:34] <Pritam_chaingang> me wiki
[12:34] <Pritam_chaingang> whois
[12:34] <Pritam_chaingang> WHOIS
[12:34] <Pritam_chaingang> nick Pritam_chaingang
[12:35] <raju> Pritam_chaingang:  may i help you ? here this channel is for classroom session
[12:40] <Pritam_chaingang> quit
[12:40] <Pritam_chaingang> Quit
[12:46] <Pritam_chaingang> yes yes
[12:46] <Pritam_chaingang> i am not able to enable my wireless in ubuntu
[12:46] <Pritam_chaingang> how do i do that
[12:47] <Pritam_chaingang> it was working before
[12:48] <sbte> Pritam_chaingang, please ask your questions about Ubuntu in #ubuntu
[12:56] <Pritam_chaingang> quit
[12:57] <Partyschaum> lol
[13:03] <Pritam_chaingang> CHAT
[13:05] <Pritam_chaingang> when is the class
[13:15] <Partyschaum> Pritam_chaingang: see topic
[13:16] <Pritam_chaingang> thanks
[14:12] <Abvayad> Hello
[14:15] <Abvayad> .
[14:20] <tachyons> .
[14:25] <metasansana> :P
[14:51] <seb128> hey
[14:52] <nigelb> Heya! :)
[14:54] <abvayad> l
[14:55] <dholbach> Welcome everybody to day 3 of Ubuntu Developer Week!
[14:56] <dholbach> Before I hand over to seb128, here are a few organisational bits
[14:56] <dholbach> If you haven't joined #ubuntu-classroom-chat yet, please do it
[14:56] <dholbach> it's where we chat and ask questions
[14:56] <dholbach> and if you do ask questions, please prefix them with QUESTION:, e.g.:
[14:56] <dholbach> QUESTION: What should be the default language in Ubuntu?
[14:57] <dholbach> also, check out https://wiki.ubuntu.com/UbuntuDeveloperWeek because we added logs for all the sessions which happened yesterday and the day before
[14:57] <dholbach> and tomorrow morning I'll go and add session logs for today
[14:57] <dholbach> How has UDW been for you up until now? :)
[14:59] <dholbach> I'm glad at least one person is answering in #ubuntu-classroom-chat ;-)
[14:59] <dholbach> you all have 2 minutes left to wake up or get something to drink before my good friend Sébastien seb128 Bacher takes over
[14:59] <dholbach> enjoy! :)
[14:59] <seb128> oh, drink!
[14:59]  * seb128 grabs a glass for water
[15:00] <seb128> dholbach, hey btw, how was *your* week? ;-)
[15:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[15:00] <seb128> ok, it's time to start
[15:00] <seb128> thanks everybody for joining
[15:01] <seb128> I'm Sebastien Bacher and I work in the Ubuntu Desktop Team
[15:01] <seb128> today I will show you a bit of what we do in the desktop team
[15:01] <seb128> first some info: you can find us in #ubuntu-desktop on this IRC network
[15:02] <seb128> or on this mailing list: https://lists.ubuntu.com/mailman/listinfo/ubuntu-desktop
[15:02] <seb128> feel free to join to ask questions, participate in discussions or help with the work ;-)
[15:02] <seb128> so let's get started, what we do in the desktop team is basically keeping the desktop in shape
[15:02] <seb128> which means
[15:03] <seb128> - keeping the components updated
[15:03] <seb128> - fixing bugs
[15:03] <seb128> - looking to your bug reports
[15:03] <seb128> - talking to our upstreams (GNOME, Debian, dx, etc)
[15:03] <seb128> you can find some pointers and documentation on https://wiki.ubuntu.com/DesktopTeam/
[15:04] <seb128> I will first walk you through a backport of an upstream GNOME git commit to one of our packages
[15:04] <seb128> note that the desktop team packages for the most part use a different workflow than the new standard "UDD" workflow
[15:05] <seb128> the reason is basically that it still works better for us, and we're waiting for a few extra issues to get sorted out with UDD (mostly quilt handling and download times)
[15:05] <seb128> you can read https://wiki.ubuntu.com/DesktopTeam/Bzr to get some details on our workflow
[15:05] <seb128> so
[15:05] <seb128> let's start by fetching a source
[15:05] <seb128> we will use gnome-control-center in that example
[15:05] <seb128> for that you usually want to use "debcheckout <source>"
[15:06] <seb128> $ debcheckout gnome-control-center
[15:06] <seb128> in this case
[15:06] <seb128> which should give you something along these lines
[15:06] <seb128> bzr branch http://code.launchpad.net/~ubuntu-desktop/gnome-control-center/ubuntu gnome-control-center ...
[15:06] <seb128> then it's downloading the source for you
[15:07] <seb128> wait a bit for that to be done and you should get that:
[15:07] <seb128> $ ls
[15:07] <seb128> gnome-control-center
[15:07] <seb128> gnome-control-center_3.2.2-2ubuntu5.debian.tar.gz
[15:07] <seb128> gnome-control-center_3.2.2-2ubuntu5.dsc
[15:07] <seb128> gnome-control-center_3.2.2.orig.tar.bz2
[15:07] <seb128> the first one is a checkout of the vcs directory
[15:07] <seb128> the other files are the source package from the archive
[15:07] <seb128>  
[15:08] <seb128> let's have a look at the dir
[15:09] <seb128> hum, demo effect, that didn't do what I wanted today on precise
[15:09] <seb128> will have to look into that
[15:09] <seb128> so what it should have done, and what I'm doing now, is what it wrote
[15:09] <seb128> which is basically
[15:10] <seb128> bzr branch lp:~ubuntu-desktop/gnome-control-center/ubuntu g
[15:10] <seb128> bzr branch lp:~ubuntu-desktop/gnome-control-center/ubuntu
[15:10] <seb128> ok, now I got what I wanted ;-)
[15:10] <seb128> let's visit that directory
[15:10] <seb128> cd ubuntu
[15:10] <seb128> $ ls
[15:11] <seb128> debian
[15:11] <seb128> so you get a debian dir there, it's where we will do our work
[15:11] <seb128> to continue you will need the bzr-builddeb package and the build-depends of gnome-control-center
[15:11] <seb128> so assuming that you have the deb-src for oneiric or precise enabled, you can type these commands
[15:12] <seb128> sudo apt-get install bzr-builddeb
[15:12] <seb128> sudo apt-get build-dep gnome-control-center
[15:12] <seb128> the second one will install all the Build-Depends for gnome-control-center, i.e. what you need to build it
[15:13] <seb128> now you should be set
[15:13] <seb128> so we are in the ubuntu dir
[15:13] <seb128> bzr bd-do will give you a shell environment where you can make changes
[15:14] <seb128> whatever changes you make in the debian directory will be applied to the vcs checkout you did, when you exit 0
[15:14] <seb128> if you exit with anything other than 0, your changes will not be copied over
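The keep-or-discard decision bzr bd-do makes is the plain shell exit-status convention. A minimal sketch of that convention (no bzr involved; the file name is illustrative):

```shell
# Sketch of the exit-status convention bzr bd-do relies on: a wrapper
# inspects the subshell's exit status -- 0 means "keep my changes",
# anything else means "discard them".
workdir=$(mktemp -d)

( cd "$workdir" && touch kept.patch && exit 0 )   # clean exit
[ $? -eq 0 ] && echo "status 0: changes would be copied back"

( cd "$workdir" && exit 1 )                       # aborted session
[ $? -ne 0 ] && echo "non-zero: changes would be thrown away"

rm -r "$workdir"
```

bzr bd-do performs the same check on the shell you leave.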
[15:16] <seb128> let's assume for the exercise that we want to backport the fix from http://git.gnome.org/browse/gnome-control-center/commit/?h=gnome-3-2&id=81fb28de5cf8708932224f482a8923322191b00f
[15:16] <seb128> ie http://git.gnome.org/browse/gnome-control-center/patch/?id=81fb28de5cf8708932224f482a8923322191b00f
[15:16] <seb128> so you did bzr bd-do and are in an unpacked source of gnome-control-center
[15:18] <seb128> let's save the content of that commit in a file, name it git_backport_printer_fix.patch
[15:18] <seb128> now you do
[15:18] <seb128> quilt import git_backport_printer_fix.patch
[15:19] <seb128> Importing patch git_backport_printer_fix.patch (stored as git_backport_printer_fix.patch)
[15:19] <seb128>  
[15:19] <seb128> the patch is imported
[15:19] <seb128> let's check that it applies well
[15:19] <seb128> $ quilt push
[15:19] <seb128> Applying patch git_backport_printer_fix.patch
[15:19] <seb128> patching file panels/printers/cc-printers-panel.c
[15:19] <seb128> Now at patch git_backport_printer_fix.patch
[15:19] <seb128>  
[15:19] <seb128> it does
[15:19] <seb128> let's refresh it
[15:19] <seb128> $ quilt refresh
[15:19] <seb128> Refreshed patch git_backport_printer_fix.patch
[15:20] <seb128> so all good
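For the curious, the visible effect of `quilt import` is roughly: copy the patch into the patches directory and append its name to the series file. A sketch of that effect with plain shell in a throwaway tree (the layout mimics debian/, the patch content is a dummy):

```shell
# Rough sketch of what "quilt import" does in a debian/ layout:
# store the patch next to the others and register it last in "series".
# The tree and patch content here are dummies for illustration only.
pkg=$(mktemp -d)
mkdir -p "$pkg/debian/patches"
echo "existing.patch" > "$pkg/debian/patches/series"

patch=git_backport_printer_fix.patch
printf -- '--- a/panels/foo.c\n+++ b/panels/foo.c\n' > "$pkg/$patch"

cp "$pkg/$patch" "$pkg/debian/patches/$patch"   # store the patch
echo "$patch" >> "$pkg/debian/patches/series"   # append to series

cat "$pkg/debian/patches/series"
rm -r "$pkg"
```

From there `quilt push` and `quilt refresh` work against debian/patches exactly as in the session (with QUILT_PATCHES=debian/patches exported).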
[15:20] <ClassBot> There are 10 minutes remaining in the current session.
[15:20] <seb128> we have our patch, it applies to our package and we refresh it
[15:20] <seb128> we are happy about it
[15:20] <seb128> we can exit
[15:20] <seb128> $ exit 0
[15:20] <seb128> it brings you back to your vcs dir
[15:21] <seb128> let's see where we stand
[15:21] <seb128> $ bzr status
[15:21] <seb128> modified:
[15:21] <seb128>   debian/patches/series
[15:21] <seb128> unknown:
[15:21] <seb128>   debian/patches/git_backport_printer_fix.patch
[15:21] <seb128> ok
[15:21] <seb128> so the series got updated, and we have a new file
[15:21] <seb128> $ bzr diff
[15:21] <seb128> --- debian/patches/series	2012-01-25 16:38:52 +0000
[15:21] <seb128> +++ debian/patches/series	2012-02-02 15:18:21 +0000
[15:21] <seb128> @@ -25,3 +25,4 @@
[15:21] <seb128>  93_change_window_role_on_panel_change.patch
[15:21] <seb128>  94_git_adding_shortcuts.patch
[15:21] <seb128>  95_git_ctrlw_shortcut.patch
[15:21] <seb128> +git_backport_printer_fix.patch
[15:21] <seb128>  
[15:21] <seb128> the patch got added to the series
[15:21] <seb128> we need to add the file to the vcs, that was not done for us
[15:21] <seb128> so we use
[15:22] <seb128> $ bzr add debian/patches/git_backport_printer_fix.patch
[15:22] <seb128> adding debian/patches/git_backport_printer_fix.patch
[15:22] <seb128> at this point we are mostly good
[15:22] <seb128> we still need to update the changelog
[15:22] <seb128> type
[15:22] <seb128> dch -i
[15:22] <seb128> and write a changelog entry
[15:22] <seb128>  * backport fix from git (lp: #...) for example
[15:22] <seb128> then you can bzr diff again
[15:22] <seb128> you get a new revision with a backported patch ready to test!
[15:22] <seb128> now to test it's easy, run bzr bd
[15:23] <seb128> it will build the new version for you
[15:23] <seb128> if you don't have a gpg key matching your changelog email, don't worry: it will warn you about that, and that's ok
[15:23] <seb128> your debs are in ../build-area after the build
[15:23] <seb128> you can go there, sudo dpkg -i *.deb and try the new version
[15:24] <seb128> if you are happy with the change go back to the ubuntu dir
[15:24] <seb128> $ bzr commit
[15:24] <seb128> to commit your work
[15:24] <seb128> then
[15:24] <seb128> $ bzr push lp:~yourlaunchpadid/gnome-control-center/my-fix
[15:24] <seb128> $ bzr lp-submit to do a merge proposal
[15:25] <seb128> and wait for us to review your work ;-)
[15:25] <seb128>  
[15:25] <ClassBot> There are 5 minutes remaining in the current session.
[15:25] <seb128> ok, that was basically it for a patch backport, 5 minutes left
[15:25] <seb128> questions?
[15:25] <ClassBot> dholbach asked: Will Wanda the Fish ever come back?
[15:26] <seb128> dholbach, it's still in gnome-panel upstream; you can "free the fish" in the dash, but it's not as cool as vuntz's version, I've been told, and it will not go around your screen ;-)
[15:26] <ClassBot> jincreator asked: I tried debcheckout but only got the debian folder, not the .dsc, .debian.tar.gz, .orig.tar.bz2.
[15:27] <seb128> jincreator, that's probably better; it seems the tools changed behaviour in precise, which confused me
[15:27] <seb128> you probably got the equivalent of my second command
[15:27] <seb128>  
[15:27] <ClassBot> ali1234 asked: don't you need to export QUILT_PATCH_DIR?
[15:27] <seb128> good point, I'm sorry about that
[15:27] <seb128> it's one of those suboptimal things you have to do; I did it years ago and forgot :p
[15:28] <seb128> you need to "export QUILT_PATCHES=debian/patches"
[15:28] <seb128> put that in your environment, then you can forget about it like me :p
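A sketch of making that setting stick, using a temp file as a stand-in for your shell startup file (that ~/.bashrc or an equivalent is where it belongs is an assumption; adapt to your shell):

```shell
# Persist QUILT_PATCHES via your shell startup file.  A temp file
# stands in for ~/.bashrc here so nothing real is touched.
rc=$(mktemp)
grep -q '^export QUILT_PATCHES=' "$rc" || \
    echo 'export QUILT_PATCHES=debian/patches' >> "$rc"
. "$rc"
echo "QUILT_PATCHES=$QUILT_PATCHES"
rm -f "$rc"
```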
[15:28] <seb128>  
[15:28] <ClassBot> ali1234 asked: what if the package doesn't use quilt but something else?
[15:28] <seb128> I couldn't cover all the patch systems here
[15:29] <seb128> you can use edit-patch (a great tool that mvo wrote) which wraps around the different patch systems
[15:29] <seb128> it will do the quilt magic for you, or call cdbs-edit-patch or similar
[15:29] <seb128>  
[15:29] <seb128> one minute left and no questions
[15:29] <seb128> seems we are good ;-)
[15:30] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[15:31] <om26er> Hey! should we start
[15:31] <om26er> Hello! my name is Omer Akram. I have been in Ubuntu since mid 2009 and have been active in the community since then. I am an Ubuntu BugSquad and Bug Control member. I am also a bug triager for Unity and related projects. I had my first ever date yesterday <3
[15:32] <om26er> So in this session I am going to discuss how to triage desktop bugs efficiently. We receive hundreds of bug reports in Ubuntu daily. There are two basic types of bugs: 1. functionality problems 2. crashes :-)
[15:32] <om26er> the crashes are the dangerous ones and are mostly hard to fix
[15:33] <om26er> A bug in a software is a fault that creates problems for people using it, it could be visual, functional etc
[15:33] <om26er> The role of the Ubuntu bugsquad is to complete bug reports, i.e. to add any missing information, clear up the description and make the report clean and simple, so that the developers spend more time fixing the issue than looking through bugs.
[15:34] <om26er> I have been in the bugsquad long enough, I believe; we always get new contributors, some of whom really triage like crazy for many hours
[15:34] <om26er> Bugs with proper steps to reproduce are the ones that are really USEFUL, or ones with a proper crash stacktrace.
[15:35] <om26er> the steps to reproduce are really very important
[15:35] <om26er> there are something like 90000 bugs logged in launchpad and I believe many of those are useless
[15:35] <om26er> i.e. the reporters didn't care to report enough details etc
[15:36] <om26er> lets say for example this bug https://bugs.launchpad.net/ubuntu/+source/unity/+bug/904348
[15:36] <om26er> the reporter reported the bug but did not really describe the problem
[15:37] <om26er> it's totally unrelated to Unity and needs to be relocated to some other package; he also did not mention in detail when the problem started
[15:37] <om26er> so our (BugSquad) responsibility here is to ask the reporter for more information about the issue, so that the bug can be triaged easily
[15:38] <om26er> We do not handle questions in bug reports, so please don't ask questions in bug reports; we have an Answers section in Launchpad for that.
[15:39] <om26er> the above was one example of useless bugs ;)
[15:39] <om26er> Sometimes people throw a bunch of issues into one single bug report; that is really not a good thing to do when reporting a bug.
[15:39] <om26er> thats another type
[15:39] <om26er> our most beloved type is of course the RANT type :p
[15:40] <om26er> so, to talk business: if you want to start helping Ubuntu, bug triage could be a good entry point
[15:41] <om26er> since you learn a lot while triaging bugs, a bit about branches and stuff
[15:41] <om26er> If you are starting (or thinking of starting) to help with Ubuntu bug triage, you should join the Ubuntu BugSquad; that is the team that handles bug reports in Ubuntu.
[15:41] <om26er> Most of the bugs in Ubuntu are Upstream
[15:41] <om26er> whats upstream?
[15:42] <om26er> actually every piece of software has an upstream :P
[15:42] <om26er> upstream means the developers, the people writing the software
[15:42] <om26er> So our bugs need to be sent to the package's upstream
[15:43] <om26er> Empathy, for example, our chat client in Ubuntu, has a nice bunch of developers who are really helpful and quick to respond.
[15:44] <om26er> bugs reported by Ubuntu users in Launchpad for empathy need to be sent to bugzilla.gnome.org where the empathy upstream project is hosted
[15:44] <om26er> (oh boy I feel I am typing quite fast, never really typed that much)
[15:45] <om26er> the upstream developers will fix the bugs they understand, so if people report useless bug reports you should not send them straight upstream; first try to get a good set of information from the reporter in launchpad, then send the bug to Bugzilla so that the developers don't get headaches ;-)
[15:46] <om26er> we really need to keep our upstreams happy
[15:47] <om26er> I actually think that if an Ubuntu user really wants to help Ubuntu, he/she should just go to software-properties, enable the -proposed repository and help test the proposed bug fixes
[15:47] <om26er> this is a very useful guide about how to triage: https://wiki.ubuntu.com/Bugs/HowToTriage/
[15:48] <om26er> I read it quite a lot when I was applying for Ubuntu Bug Control
[15:48] <om26er> so who are Bug Control?
[15:49] <om26er> well, they are the next step after the bugsquad; bug control people are those who set the importance of bug reports, assign bugs to developers and nominate bugs for a specific release
[15:50] <ClassBot> There are 10 minutes remaining in the current session.
[15:51] <om26er> We use stock responses for bug reports; for example, if a bug report is missing its steps to reproduce, I would not just go and say "you are missing the steps to reproduce the problem"
[15:51] <om26er> but we choose a more respectful approach and use the stock responses for Bugs  https://wiki.ubuntu.com/Bugs/Responses
[15:52] <om26er> so for this specific case of missing steps we use:
[15:52] <om26er> Thank you for taking the time to report this bug and helping to make Ubuntu better. Please answer these questions:
[15:52] <om26er> * Is this reproducible?
[15:52] <om26er> * If so, what specific steps should we take to recreate this bug?
[15:52] <om26er> This will help us to find and resolve the problem.
[15:53] <om26er> I remember a few of the initial bugs I reported in Ubuntu; the response from seb128 came very humbly, saying that the issue was an upstream one and I should report it there...
[15:54] <om26er> I really was like: these are good people, they respect you.. this is the first reply I got to my very initial bug report
[15:54] <om26er> .
[15:54] <om26er> Thank you for taking the time to report this bug and helping to make Ubuntu better. The issue you are reporting is an upstream one and it would be nice if somebody having it could send the bug to the developers of the software by following the instructions at https://wiki.ubuntu.com/Bugs/Upstream/GNOME. If you have done so, please tell us the number of the upstream bug (or the link), so we can add a bugwatch that will inform us about its status. Thanks in advance.
[15:55] <ClassBot> There are 5 minutes remaining in the current session.
[15:55] <om26er> those two thanks really inspired me, these Ubuntu people are so kind. so these responses really matter
[15:57] <om26er> in summary, a useful bug report is one that is reproducible with steps
[15:57] <om26er> a useful bug report has a visual attached, i.e. a screenshot or video showing the problem
[15:57] <om26er> a good bug report is easy to understand
[15:57] <om26er> Now, a good bug triager is one who is patient
[15:58] <om26er> since the reporters can sometimes really be a pain in... :D
[15:58] <om26er> Desktop bugs needs YOU
[15:58] <om26er> Ubuntu needs You :)
[15:58] <om26er> new contributors are always welcome
[15:59] <om26er> look at me: I only triaged bugs and now I am an Ubuntu Member, a GNOME Foundation member and work for Canonical
[16:00] <om26er> (well, contract for Canonical :p)
[16:00] <om26er> I guess the time is ending
[16:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[16:00] <mhall119> thanks om26er
[16:01] <mhall119> bug triaging really helps developers keep up, and anybody can do it
[16:01] <mhall119> alright, welcome everybody
[16:02] <mhall119> so what is Singlet?
[16:02] <mhall119> well, it's a python library for writing Unity lenses
[16:03] <mhall119> but Lenses are just DBus and GObject, they don't *need* a dedicated python library
[16:03] <mhall119> in fact, many community lenses are already being written in python without Singlet
[16:03] <mhall119> so then, why Singlet?
[16:04] <mhall119> because as a Python developer, I hate GObject and DBus
[16:04] <mhall119> it's not that they're bad technologies, it's just that they're not "pythonic"
[16:04] <mhall119> so using them in python feels unnecessarily complicated
[16:05] <mhall119> I wrote my first 2 Unity lenses using them, and Singlet developed out of that
[16:06] <mhall119> so Singlet itself works to hide all of those non-pythonic bits from you, so as a developer you get a nice, clean, simple interface
[16:06] <mhall119> Singlet does this with a combination of traditional object-oriented abstraction, and a bit of Python meta-class programming
[16:06] <mhall119> http://mhall119.com/2011/12/writing-unity-lenses/ gives an introduction and has some example source code
[16:07] <mhall119> whoops, wrong link
[16:07] <mhall119> http://mhall119.com/2012/01/simplified-unity-lens-development-with-singlet/
[16:07] <mhall119> that one
[16:07] <mhall119> if anybody here has done any Django programming, it should look familiar to you
[16:08] <mhall119> that's because I borrowed heavily from the way Django uses simple Python classes as models, and from there can automatically do a lot of the boiler-plate work for you
[16:08] <mhall119> Singlet does the same, you give it a little meta-data to describe your lens
[16:08] <mhall119> define some categories
[16:08] <mhall119> and then implement the search() method
[16:09] <mhall119> and behind the scenes it's instantiating the GObject classes you'll need and connecting everything to DBus for you
[16:10] <mhall119> Singlet also includes helper commands for running your Lens as a daemon process, and generating the .lens and .service files needed to install it, all from the meta-data you provide
[16:10] <mhall119> any questions so far on what Singlet is?
[16:11] <mhall119> alright, moving along
[16:11] <mhall119> the example given in http://mhall119.com/2012/01/simplified-unity-lens-development-with-singlet/ shows how to make a simple Lens with one built-in Scope
[16:12] <mhall119> if you attended davidcalle and mhr3's Lens session on Tuesday, you'll know that lenses and scopes can either live in the same code or in separate code
[16:12] <mhall119> In Singlet 0.1, which only works on Oneiric, this was all it could do
[16:13] <mhall119> but Singlet 0.2, which was upgraded for Precise, introduces the ability to separate them
[16:13] <mhall119> http://mhall119.com/2012/01/singlet-part-0-2/
[16:13] <mhall119> Singlet 0.2 is being packaged for Precise, and will be available in the default repositories
[16:14] <mhall119> this means you can write lenses and scopes that use Singlet, and make them easily installable
[16:15] <mhall119> Now you don't *have* to use Singlet to write a Python lens, but it cuts the amount of code you need to write to get one started in half
[16:15] <mhall119> and it also means that you have a compatibility layer between your code and any possible API changes
[16:15] <mhall119> for example, in the move from Unity 4(Oneiric) to Unity5(Precise), the Lens API changed
[16:15] <mhall119> but Singlet's API didn't
[16:16] <mhall119> so a Lens written for Singlet 0.1 on Oneiric would be able to run without modification on Singlet 0.2 on Precise
[16:18] <mhall119> As for packaging, sometime soon we will have a Quickly template for writing Singlet lenses and scopes
[16:18] <mhall119> for anybody not familiar with Quickly, it's a tool for rapid application development
[16:19] <mhall119> it'll create any files and directories you need to get started on a specific kind of project (like command-line program, desktop app, or Unity lens)
[16:19] <mhall119> it also provides packaging files for you, so that your app can be uploaded to the Ubuntu Software Center
[16:19] <mhall119> there are already  a handful of Unity lenses and scopes in the Software Center
[16:20] <ClassBot> There are 10 minutes remaining in the current session.
[16:20] <mhall119> and with Singlet + Quickly, we'll open the door for opportunistic developers to put together lenses that will feed their desired content directly into the Unity dash
[16:21] <mhall119> Lenses are one of the most exciting and unique features for extending Unity, and I expect to see a large number of them being developed for Ubuntu 12.04
[16:21] <mhall119> and, hopefully, a large number of them will be doing it with Singlet
[16:21] <mhall119> alright, any questions before I'm out of time?
[16:23] <mhall119> alright, thanks to those of you who attended
[16:24] <mhall119> anybody interested can get the project and code from https://launchpad.net/singlet
[16:24] <mhall119> and the above blog entries will serve as a tutorial for writing Singlet lenses and scopes, at least until I can put a more formal tutorial together
[16:25] <ClassBot> There are 5 minutes remaining in the current session.
[16:27] <mhall119> alright, I guess I'll turn things over to tumbleweed then
[16:27]  * tumbleweed waves
[16:28] <tumbleweed> I was still writing some notes, but no reason not to start a couple of minutes early :)
[16:28] <tumbleweed> Hi everyone,
[16:28] <tumbleweed> Some of you may have been at the session I took yesterday afternoon, on working with Debian
[16:28] <tumbleweed> For everyone else, hi, I'm Stefano Rivera, an Ubuntu MOTU and Debian Developer
[16:28] <tumbleweed> I'm here to talk to you about building packages locally with pbuilder
[16:28] <tumbleweed> If you followed Daniel Holbach's introduction on Tuesday, you'll have already set up pbuilder, using pbuilder-dist. https://wiki.ubuntu.com/MeetingLogs/devweek1201/DevEnvironmentSetup
[16:28] <tumbleweed> If you haven't, install ubuntu-dev-tools and pbuilder right now, and run "pbuilder-dist precise create" (it'll take quite a while)
[16:29] <tumbleweed> Maybe you'll be done by the end of the session... :)
[16:29] <tumbleweed> As you can imagine from its name, pbuilder is a tool for building Debian packages
[16:30] <tumbleweed> (Ubuntu packages aren't any different)
[16:30] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[16:30] <tumbleweed> aha, we've officially started now
[16:30] <tumbleweed> If you haven't used pbuilder before, you may wonder what the point is: anyone can just build a source package by extracting it and running debuild, right?
[16:31] <tumbleweed> Well, you need all the build dependencies installed, so your machine will quickly end up with lots of packages that you don't really need.
[16:31] <tumbleweed> Your machine will have packages installed in it that the official build machines don't
[16:31] <tumbleweed> All packages in the main archive are built in restricted build environments on buildds (build machines)
[16:32] <tumbleweed> so, just building locally with debuild may give different results
[16:32] <tumbleweed> (with well behaved packages it doesn't, but we probably spend more time on the badly behaving ones, right? :)
[16:32] <tumbleweed> The buildds run a program called sbuild to do the builds. It installs the build dependencies in a minimal chroot, builds the package, and then throws away the chroot
[16:32] <tumbleweed> We'd like to have a similar environment for testing builds
[16:33] <tumbleweed> Something we can use to find likely build failures before we upload
[16:33] <tumbleweed> (you do test before uploading, right?)
[16:33] <tumbleweed> and to help us debug failures
[16:33] <tumbleweed> Launchpad has PPAs, using the same build systems as the primary Ubuntu archive, so why do you need to build things locally?
[16:33] <tumbleweed> Well, there are a few reasons:
[16:33] <tumbleweed> Launchpad's PPA builders can get rather backed up, sometimes the queue gets as long as a few days (currently they are quite short, though: https://launchpad.net/builders )
[16:34] <tumbleweed> Your machine is probably faster than some of the buildds (at least mine is :P )
[16:34] <tumbleweed> You don't have to bump the version every time you want to try a build
[16:34] <tumbleweed> You can log into the build environment and debug things, when they go wrong
[16:34] <tumbleweed> ^ That one is the killer feature
[16:34] <tumbleweed> So, we want to set up a reproducible minimal build environment locally that we can use for testing package builds and debugging them
[16:34] <tumbleweed> You could use sbuild, like the buildds, or you could use pbuilder, which is simpler and easier to customise.
[16:35] <tumbleweed> There's also cowbuilder, which is a "faster pbuilder", but I won't bother going into that; if you're reaching the limits of pbuilder, I suggest also looking at sbuild
[16:36] <tumbleweed> Personally, I build everything with sbuild, but I'm here to talk about pbuilder, so that's what I'm doing :P I used to use pbuilder and there's nothing wrong with it
[16:36] <tumbleweed> now seems to be a good time to see if I've actually got an audience
[16:36] <tumbleweed> anyone currently building a pbuilder chroot?
[16:36] <tumbleweed> anyone run into trouble already?
[16:37] <tumbleweed> So, details:
[16:37] <tumbleweed> What pbuilder does:
[16:37] <tumbleweed> When you create a chroot, it debootstraps a clean install into a directory (debootstrap is a very low level tool for minimal Debian/Ubuntu installs)
[16:37] <tumbleweed> It then packs up that directory into a tarball.
[16:37] <tumbleweed> Whenever you do a build, it unpacks that tarball, chroots into the unpacked directory, and does the build.
[16:37] <tumbleweed> The directory can be deleted afterwards
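The lifecycle tumbleweed describes can be mimicked with nothing but tar: pack a "base system" once, unpack a fresh copy per build, and throw the copy away. Real pbuilder uses debootstrap and chroot; this toy only shows the cycle, with all paths illustrative:

```shell
# Toy model of the pbuilder lifecycle using only tar (no debootstrap,
# no chroot): create a base tree once, pack it, unpack a fresh copy
# for each build, discard the copy afterwards.
base=$(mktemp -d)
mkdir "$base/chroot"
echo "minimal system" > "$base/chroot/etc-issue"
tar -C "$base" -czf "$base/base.tgz" chroot     # "pbuilder create"

build=$(mktemp -d)
tar -C "$build" -xzf "$base/base.tgz"           # unpack for one build
echo "would chroot into $build/chroot and build here"
rm -r "$build"                                  # build dir discarded

rm -r "$base"   # the tarball is kept between builds in real life;
                # removed here only for cleanup
```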
[16:38] <tumbleweed> (It's worth noting that sbuild used to be far more complicated, using LVM, but these days it supports overlay filesystems, so it's totally reasonable to set it up locally)
[16:38] <tumbleweed> So, how come I'm talking about pbuilder when I told you to run "pbuilder-dist" ?
[16:38] <tumbleweed> Well, it turns out that pbuilder is very flexible, but doesn't come with all the features you'd want, out the box
[16:38] <tumbleweed> It assumes that you are building everything for a single distribution and release (which makes some sense in Debian, but less so in Ubuntu)
[16:38] <tumbleweed> So, we have a wrapper around pbuilder in ubuntu called pbuilder-dist
[16:39] <tumbleweed> All it does is call pbuilder with a bunch of command line options (you can see that with a ps x)
[16:39] <tumbleweed> It also puts the chroot tarballs and built files into a friendly ~/pbuilder/ rather than /var/cache/pbuilder/
[16:39] <tumbleweed> before pbuilder-dist, most people used complicated pbuilder configuration files that did all of that for them
[16:39] <tumbleweed> figuring out what release to build for, from environment variables
[16:40] <tumbleweed> e.g. https://wiki.ubuntu.com/PbuilderHowto#Multiple_pbuilders
[16:40] <tumbleweed> pbuilder-dist makes it a lot quicker for you to get started
[16:40] <tumbleweed> Right, so, if you have a working pbuilder, you can test building something in it
[16:41] <tumbleweed> grab a package that'll build quickly, here's one I can think of, beautifulsoup:
[16:41] <tumbleweed> $ pull-lp-source -d beautifulsoup
[16:41] <tumbleweed> $ pbuilder-dist precise beautifulsoup_3.2.0-2build1.dsc
[16:42] <tumbleweed> and it should build...
[16:42] <ClassBot> coolbhavi asked: How to get pbuilder log files?
[16:42] <tumbleweed> pbuilder will store a log file next to the build output
[16:43] <tumbleweed> so, next to the debs, you should see a file called beautifulsoup_3.2.0-2build1_amd64.build
[16:43] <tumbleweed> (or i386, if you use i386)
[16:44] <tumbleweed> Next up: pbuilder maintenance
[16:44] <tumbleweed> it's not very useful to build in last week's development environment, Ubuntu development releases move fast
[16:44] <tumbleweed> you always want to build with current packages
[16:45] <tumbleweed> so, every now and then (say, when you start working on something for the first time in the day), run pbuilder-dist update precise
[16:45] <tumbleweed> (you may set up a cronjob for that if you do this a lot)
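As a sketch, such a cron job might look like this (the schedule is arbitrary and mine; add it with crontab -e — note the distribution comes before the update command):

```
# refresh the precise chroot every morning at 07:00 (hypothetical schedule)
0 7 * * *  pbuilder-dist precise update
```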
[16:45] <ClassBot> coolbhavi asked: I heard about pbuilder and pdebuild whats the difference between them?
[16:46] <tumbleweed> right, pdebuild is like "debuild" for pbuilder
[16:46] <tumbleweed> you can run it inside an extracted source package, and it'll build the source package, and pass it to pbuilder
[16:46] <tumbleweed> unfortunately, we don't have a pdebuild for pbuilder-dist (there's a bug requesting one)
[16:46] <tumbleweed> so it's not very useful for pbuilder-dist users
[16:47] <tumbleweed> OK, onwards!
[16:48] <tumbleweed> right, so you updated your pbuilder 3 hours ago, and it's already clearly out of date, because you are seeing 404 errors during the dependency installation phase
[16:48] <tumbleweed> it'd make a lot more sense to make it update itself at the start of the build
[16:48] <tumbleweed> fortunately, pbuilder comes with a "hook script" feature
[16:48] <tumbleweed> you can make it run programs at certain times
[16:48] <tumbleweed> (you can read all the gory details in the pbuilder manpage)
[16:48] <tumbleweed> there are a few nice example scripts in /usr/share/doc/pbuilder/examples/ that I want to bring to your attention
[16:50] <tumbleweed> hrm, I swear there used to be a D90update script there, but I don't see one
[16:50] <ClassBot> There are 10 minutes remaining in the current session.
[16:50] <tumbleweed> anyway, I have my own, which just runs "/usr/bin/apt-get update"
[16:50] <tumbleweed> handy for making sure the package lists are up to date before building
[16:51] <tumbleweed> there are also examples (debc and lintian) for outputting useful information at the end of the build
[16:51] <tumbleweed> debc shows you what's in the built packages
[16:51] <tumbleweed> lintian does some cursory checks on the packages
[16:51] <tumbleweed> (all of this ends up in your build log, and is very handy for debugging)
[16:52] <tumbleweed> the dpkg-i script tries to install and remove the packages that were built
[16:52] <tumbleweed> this can pick up problems in the maintainer scripts
[16:52] <tumbleweed> (you can also do this separately with piuparts)
[16:52] <tumbleweed> and by far the most useful one, C10shell
[16:52] <tumbleweed> this is awesome
[16:53] <tumbleweed> when a build fails, it dumps you in a shell, inside the build chroot
[16:53] <tumbleweed> so you can play around, and figure out what went wrong and how to fix it
[16:53] <tumbleweed> I highly recommend enabling it
[16:53] <tumbleweed> how do you use these? Well you create a directory in your home directory, traditionally ~/.pbuilder-hooks
[16:54] <tumbleweed> then you put HOOKDIR=$HOME/.pbuilder-hooks in your .pbuilderrc
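Putting those two steps together, a minimal sketch of the setup might look like this (the hook name D10update is my own choice, following pbuilder's convention that D* hooks run inside the chroot before the build starts; the session mentions a similar D90update):

```shell
# create the hook directory pbuilder will scan
mkdir -p ~/.pbuilder-hooks

# a D* hook runs inside the chroot before the build starts;
# this one refreshes the package lists
cat > ~/.pbuilder-hooks/D10update <<'EOF'
#!/bin/sh
/usr/bin/apt-get update
EOF
chmod +x ~/.pbuilder-hooks/D10update

# point pbuilder at the hook directory
echo 'HOOKDIR=$HOME/.pbuilder-hooks' >> ~/.pbuilderrc
```

Hooks must be executable, or pbuilder silently skips them.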
[16:54] <ClassBot> jincreator asked: You mentioned "pbuilder-dist update precise" but I think it is "pbuilder-dist precise update"...
[16:54] <tumbleweed> jincreator: that may be true, I'm rusty
[16:54] <tumbleweed> I know it does offer some leeway with getting arguments in the wrong order
[16:55] <ClassBot> There are 5 minutes remaining in the current session.
[16:55] <tumbleweed> another pretty cool thing you can do with pbuilder is to build on a different architecture
[16:55] <tumbleweed> if instead of saying "precise" you say "precise-i386", it'll create an i386 chroot on your amd64 system. Handy
[16:56] <tumbleweed> you can even build armel packages on i386/amd64, through emulation with qemu-user-static
[16:56] <tumbleweed> It'll sometimes not work, but it mostly does
[16:56] <tumbleweed> (I'm talking about qemu there, sometimes it segfaults unexpectedly)
[16:57] <tumbleweed> final questions?
[16:57] <tumbleweed> Sorry that was crazy fast, 30 min slots are short...
[16:58] <tumbleweed> There are lots of people who know pbuilder, cowbuilder and sbuild backwards, who hang out in #ubuntu-motu, feel free to ask us questions any time
[16:59] <ClassBot> alucardni asked: how can I set pbuilder to build the package twice?
[16:59] <tumbleweed> You just say --twice
[16:59] <tumbleweed> That's really handy for finding packages that don't clean correctly
[16:59] <tumbleweed> that is, their clean rule doesn't cleanup everything that the build created / modified
[17:00] <tumbleweed> OK, I'm done, thanks for listening
[17:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[17:00] <coolbhavi> thanks tumbleweed for the great session
[17:00] <coolbhavi> Hi all I am Bhavani Shankar a 24 year old ubuntu developer from India
[17:00] <coolbhavi> I am going to take you through what is meant by a changelog in general and how to write effective changelogs in the ubuntu sphere
[17:01] <coolbhavi> Before we start off, please download a package (totem, for instance); when we navigate the source tree we find files named ChangeLog, NEWS, et al.
[17:01] <coolbhavi> So lets start :)
[17:01] <coolbhavi> == Meaning of a changelog ==
[17:01] <coolbhavi> A changelog is nothing but a log or record of changes made to a project or a software
[17:02] <coolbhavi> == The basic idea behind a changelog ==
[17:02] <coolbhavi> By naming convention, a changelog is generally a plain text file named ChangeLog
[17:02] <coolbhavi> Sometimes it is also called CHANGES or HISTORY
[17:02] <coolbhavi> (Note that some packages/software also have a file called NEWS, which is usually a different file reflecting changes between releases, not between commits)
[17:03] <coolbhavi> Some version control systems can generate a changelog from their commit history.
[17:03] <coolbhavi> == General Format of a changelog file ==
[17:03] <coolbhavi> Most changelog files follow this format:
[17:03] <coolbhavi> YYYY-MM-DD  Joe Hacker  <joe@hacker.com>
[17:03] <coolbhavi>     * myfile.ext (myfunction): my changes made
[17:03] <coolbhavi>     additional changes
[17:03] <coolbhavi>     * myfile.ext (unrelated_change): my changes made
[17:03] <coolbhavi>     to myfile.ext but completely unrelated to the above
[17:03] <coolbhavi>     * anotherfile.ext (somefunction): more changes
[17:04] <coolbhavi> (These files are generally organised into paragraphs, each defining a unique change with respect to a function or a file)
[17:04] <coolbhavi> More on changelogs and changelog formats here: http://www.gnu.org/prep/standards/html_node/Change-Logs.html
[17:05] <coolbhavi> ok now moving on,
[17:05] <coolbhavi> == Ubuntu and Changelogs ==
[17:05] <coolbhavi> The changelog file for an Ubuntu package is stored in the debian directory under the name changelog, according to Debian policy
[17:05] <coolbhavi> Debian policy defines its changelog to be in this format:
[17:05] <coolbhavi> package (version) distribution(s); urgency=urgency
[17:05] <coolbhavi>      	    [optional blank line(s), stripped]
[17:05] <coolbhavi>        * change details
[17:05] <coolbhavi>          more change details
[17:05] <coolbhavi>      	    [blank line(s), included in output of dpkg-parsechangelog]
[17:06] <coolbhavi>        * even more change details
[17:06] <coolbhavi>      	    [optional blank line(s), stripped]
[17:06] <coolbhavi>       -- maintainer name <email address>[two spaces]  date
[17:06] <coolbhavi> In particular, the date has the following format:
[17:06] <coolbhavi>      day-of-week, dd month yyyy hh:mm:ss +zzzz
[17:07] <coolbhavi> which you can get by running the date -R command in a terminal
[17:07] <coolbhavi> where:
[17:07] <coolbhavi>     day-of week is one of: Mon, Tue, Wed, Thu, Fri, Sat, Sun
[17:07] <coolbhavi>     dd is a one- or two-digit day of the month (01-31)
[17:07] <coolbhavi>     month is one of: Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec
[17:07] <coolbhavi>     yyyy is the four-digit year (e.g. 2012)
[17:07] <coolbhavi>     hh is the two-digit hour (00-23)
[17:07] <coolbhavi>     mm is the two-digit minutes (00-59)
[17:07] <coolbhavi>     ss is the two-digit seconds (00-60)
[17:07] <coolbhavi>     +zzzz or -zzzz is the the time zone offset from Coordinated Universal Time (UTC). "+" indicates that the time is ahead of (i.e., east of) UTC and "-" indicates that the time is behind (i.e., west of) UTC. The first two digits indicate the hour difference from UTC and the last two digits indicate the number of additional minutes difference from UTC. The last two digits must be in the range 00-59.
[17:08] <coolbhavi> It is worth mentioning here that the entire changelog must be encoded in UTF-8 (normally taken care of by the dch command)
[17:09] <coolbhavi> for more info on the dch command please see man dch
[17:10] <coolbhavi> More explanation of the fields of the changelog file is here: http://www.debian.org/doc/debian-policy/ch-source.html#s-dpkgchangelog
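Putting the template above together, a hypothetical filled-in entry (package name, change, and author are all made up for illustration) would look like:

```
hello (2.8-1ubuntu1) precise; urgency=low

  * debian/rules: Disable the test suite on armel (hypothetical change).

 -- Joe Hacker <joe@hacker.com>  Thu, 02 Feb 2012 17:06:00 +0000
```

Note the two spaces between the email address and the date in the trailer line; dch -i creates an entry in this shape and fills in the name, email, and date for you.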
[17:10] <coolbhavi> Now moving on further
[17:10] <coolbhavi> == Do's and Don'ts ==
 QUESTION: When are changelogs used, and who is responsible for putting them in? Also, where are they placed?
[17:14] <coolbhavi> kanliot, changelogs are used as a log for tracking changes in projects or software. Usually the project maintainers write them. They are usually placed in the source tree under the name ChangeLog (normally with a txt extension), and in Debian/Ubuntu the package changelogs are placed in debian/changelog
[17:14] <coolbhavi> so moving on with do's and don'ts
[17:15] <coolbhavi> The purpose of writing a changelog is to make sure you document all the changes and history in the most readable and understandable manner :)
[17:15] <coolbhavi> Below are a few points for effective changelog writing
[17:15] <coolbhavi> Fewer assumptions: Try to make fewer assumptions about what a particular user knows about a particular package; it helps bring out better detail and understandability
[17:16] <coolbhavi> Ordering: Start with the easiest part first and push the more complex/more technical parts to the end to make it an interesting read
[17:16] <coolbhavi> Grouping: Group the common stuff and related changes under a heading called summary to make it easier to understand
[17:16] <coolbhavi> Whitespaces: Give appropriate whitespaces to make a clean looking formatted changelog
[17:17] <coolbhavi> Summarising: Adding a one-line summary of the change at the top lets the reader get an overview of what the change is meant to do
[17:17] <coolbhavi> Bug closures: Close the relevant bugs via the changelog when submitting a patch to a bug for sponsor review
[17:17] <coolbhavi> and last but not least, spellcheck!
[17:18] <coolbhavi> The following link gives few examples of debian/changelog writing http://pastebin.com/kQHazZ8y
[17:19] <coolbhavi> Please remember that changelog writing becomes easy with practice!
[17:19] <coolbhavi> Sorry for hurrying the session up due to the 30 min time slot
[17:20] <ClassBot> There are 10 minutes remaining in the current session.
[17:20] <coolbhavi> Please feel free to ask any questions on the same
 QUESTION: Are you going to go into the actual command to generate the changelogs?
[17:25] <ClassBot> There are 5 minutes remaining in the current session.
[17:26] <coolbhavi> after you write the changelog, debuild will use dpkg-parsechangelog and dpkg-genchanges to generate a changes file
[17:26] <coolbhavi> QUESTION: are the * - and + in that example file just arbitrary bullet points or do they have a special meaning in changelogs?
[17:27] <coolbhavi> the bullet points are used for distinguishing between the main summary of changes and a detailed description for formatting
 QUESTION: Sometimes my changelog sentence becomes too long for one line. What is the best way to solve it?
[17:28] <coolbhavi> wrap the line at around 80 characters
 QUESTION: Should you create changelogs before or after you merge changes into a repo?
[17:29] <coolbhavi> it's always preferable to document the changes after making them, before merging
[17:30] <coolbhavi> Thanks all of you for turning up :) Please feel free to hang out at #ubuntu-motu or #ubuntu-devel for any questions.
[17:30] <coolbhavi> You can also reach me at bhavi@ubuntu.com or facebook.com/bshankar
[17:30] <coolbhavi> Thanks all again!
[17:30] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[17:32] <jbicha> Hi, I'm Jeremy Bicha and I'm a volunteer member of Ubuntu's Documentation and Desktop teams.
[17:32] <jbicha> I joined the Documentation Team a year ago as Ubuntu badly needed help with integrating the new GNOME User Guide
[17:32] <jbicha> with Ubuntu documentation where we obviously use Unity by default instead of GNOME Shell.
[17:33] <jbicha> Today, I'd like to show you how you can get started with contributing to Ubuntu's documentation.
[17:34] <jbicha> I really like that you don't need to be a coder to write good documentation; you just need good English writing skills and attention to detail.
[17:34] <jbicha> If you visit our team's wiki page, you can see that there are several different areas we work on.
[17:34] <jbicha> https://wiki.ubuntu.com/DocumentationTeam
[17:34] <jbicha> There is the help wiki which anyone can edit; all you need is an Ubuntu One/Launchpad account.
[17:34] <jbicha> https://help.ubuntu.com/community
[17:35] <jbicha> One thing that's confused me in the past about Ubuntu's wiki is that there are actually two of them!
[17:35] <jbicha> help.ubuntu.com is a reference for Ubuntu-related "how-to's, tips, tricks, and hacks" whereas wiki.ubuntu.com is a resource for contributing to Ubuntu so it has pages for all the different Ubuntu teams you can join.
[17:35] <jbicha> So if you want to contribute help guides, you can do that at help.ubuntu.com/community.
[17:35] <jbicha> Secondly, the Ubuntu Docs Team is responsible for the help that's shipped inside Ubuntu and the other Ubuntu flavors (Kubuntu, Xubuntu, etc.)
[17:36] <jbicha> You can see this system help if you type help into the dash in Unity.
[17:36] <jbicha> This help is also mirrored to https://help.ubuntu.com/11.10/ubuntu-help/
[17:36] <jbicha> There are several different ways you can help out with the system documentation.
[17:36] <jbicha> You can read the help and file bugs if you see typos.
[17:37] <jbicha> https://bugs.launchpad.net/ubuntu/+source/ubuntu-docs/
[17:37] <jbicha> Since this is Developer Week, I'd also like to show you how you can submit fixes in the same format we use.
[17:37] <jbicha> Run this command in Terminal:
[17:37] <jbicha> bzr branch lp:ubuntu-docs
[17:37] <jbicha> It may take a while depending on how fast your internet connection is, but that copies the source code from the Launchpad servers.
[17:38] <jbicha> You can then use gedit or your favorite text editor to open up one of the pages.
[17:38] <jbicha> Open up ubuntu-docs/ubuntu-help/help/C/unity-introduction.page for instance.
[17:39] <jbicha> The first 20 lines or so are special information that sets the page title, keeps track of the page authors and the last time it was formally reviewed and other similar information.
[17:39] <jbicha> Then you have the actual content. Each paragraph must start with <p> and end with </p>.
[17:40] <jbicha> The Ubuntu Desktop Guide is written in Mallard (older docs or non-GNOME stuff use a similar but slightly more complex format named DocBook). More information about the format and a 10-minute introduction can be found at http://projectmallard.org/
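A minimal Mallard page, sketched from the description above (the id, title, and text are made up for illustration), looks roughly like this:

```xml
<page xmlns="http://projectmallard.org/1.0/"
      type="topic" id="example-topic">
  <info>
    <!-- metadata block: guide links, credits, and review info live here -->
    <link type="guide" xref="index"/>
    <desc>A one-line description shown in topic lists.</desc>
  </info>
  <title>Example topic</title>
  <p>Each paragraph starts with a p tag and ends with a closing p tag.</p>
</page>
```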
[17:41] <jbicha> The Mallard format is pretty cool as it is designed to be topic-based, as opposed to older computer help which read more like a manual or a book with chapters
[17:43] <jbicha> We're trying to convert more and more GNOME apps to use it because it is more useful as I believe most people want to read an answer to their problem or a how-to for a specific task instead of trying to page through a longer document with chapters
[17:44] <jbicha> We'd love it if other projects like KDE switched to the format too but that hasn't really happened yet
[17:44] <jbicha> Anyway, you probably also want to bookmark http://blogs.gnome.org/shaunm/files/2012/01/mallardcheatsheet.png or https://gitorious.org/projectmallard/mallard-cheat-sheets/trees/master
[17:45] <jbicha> that lets you see all the additional features you can add to your Mallard help files in one easy location
[17:46] <jbicha> After you've made your changes (fixed a typo or added some extra information for instance), navigate in your terminal to the ubuntu-docs folder and run
[17:46] <jbicha> bzr commit
[17:47] <jbicha> This will open up nano (by default) where you should type in a description of what changes you've made. Then hit Ctrl+O to save and Ctrl+X to exit the editor.
[17:47] <jbicha> Then you can run bzr push lp:~/ubuntu-docs/my-changes. You'll definitely want to use something more descriptive than "my-changes" to describe what you've done.
[17:48] <jbicha> Then open https://code.launchpad.net/~ in your web browser
[17:48] <jbicha> Click the name of the branch you just pushed to
[17:48] <jbicha> And then click the Propose for Merging button
[17:49] <jbicha> the target branch should say lp:ubuntu-docs
[17:49] <jbicha> you can add a description if you like, but you should probably leave the reviewer field blank, then click Propose Merge
[17:50] <ClassBot> There are 10 minutes remaining in the current session.
[17:50] <jbicha> If you have any questions, you can ask them now
[17:51] <jbicha> Also, you can visit #ubuntu-docs or ask on our mailing list https://lists.ubuntu.com/mailman/listinfo/ubuntu-doc
[17:55] <jbicha> the Ubuntu Documentation team and GNOME Docs team definitely could use your help
[17:55] <ClassBot> There are 5 minutes remaining in the current session.
[17:56] <jbicha> the Ubuntu Docs team is all-volunteer and we're always looking for more contributors
[17:57] <ClassBot> jincreator asked: Can Mallard import ubuntu docs to pdf?
[17:58] <jbicha> yes, you can, but I'm going to have to get back to you about how that works
[17:58] <jbicha> (since we're nearly out of time)
[17:58] <jbicha> but Mallard already has exporters for html, xhtml, epub, and I believe pdf
[17:59] <jbicha> thanks for your time!
[18:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[18:03] <aquarius> Hi, everyone!
[18:03] <aquarius> I'm Stuart Langridge, and I work on Ubuntu One
[18:04] <aquarius> Now I'm here to talk about adding Ubuntu One to your applications.
[18:04] <aquarius> Do please ask questions throughout the talk: in the #ubuntu-classroom-chat channel, write QUESTION: here is my question
[18:04] <aquarius> We want to make it possible, and easy, for you to add the cloud to your apps and to make new apps for the cloud
[18:04] <aquarius> So we do all the heavy lifting, and your users (and you!) get the benefits.
[18:04] <aquarius> So, you've built an app which does things for you: let's say it's a todo list app, since lots of people are doing that, with Getting Things Done or something.
[18:04] <aquarius> It'd be great to have your todo list on all your Ubuntu machines -- your desktop machine, your netbook, and so on.
[18:04] <aquarius> To do that, just sync the folder that you store the todo lists in with Ubuntu One.
[18:05] <aquarius> I'm going to talk about U1DB shortly, our in-progress effort to enable structured data sync, but you can also work with files quite happily.
[18:05] <aquarius> Working with Ubuntu One file sync programmatically is done through the Python library ubuntuone.platform.tools.SyncDaemonTool.
[18:05] <aquarius> (If you're not using Python, then don't worry; SyncDaemonTool is only a wrapper around the Ubuntu One D-Bus API, so all this works from other languages too. I'll explain the Python version here, though, for simplicity.)
[18:06] <aquarius> So: "pydoc ubuntuone.platform.tools.SyncDaemonTool" to see the documentation.
[18:06] <aquarius> (Note: you'll need to be running precise for this, either right now or after it's released.)
[18:06] <aquarius> Imagine, in your todo list app, you have a checkbox for "put my todo lists on all my machines", and you store your todo lists in ~/.local/share/mytodolist/lists.
[18:06] <aquarius> So ticking the checkbox should mark that folder as synced with Ubuntu One, and then it'll be synced everywhere.
[18:07] <aquarius> A simple example of how to do that is at http://paste.ubuntu.com/825171/
[18:07] <aquarius> As you can see, we check whether the folder is *already* on your list of synced folders, and if not, we use create_folder to create it.
[18:08] <aquarius> Everything that file sync can do, you can do with SyncDaemonTool or the underlying D-Bus API.
[18:08] <aquarius> The Ubuntu One control panel, and the technical u1sdtool, both just talk to that D-Bus API.
[18:08] <aquarius> So if you port your todo list app to Windows, or the web, or iOS or Android or Blackberry or all of them, they can all still get at the data.
[18:09] <aquarius> File sync is about more than just the-same-files-on-all-my-machines, though.
[18:09] <aquarius> You can use it for communication and distribution as well.
[18:09] <aquarius> To pick an example, jonobacon and dholbach and I and others have recently been working on an Ubuntu Accomplishments system (http://www.jonobacon.org/2012/01/29/more-ubuntu-accomplishments-hacking/).
[18:10] <aquarius> Most of this is local stuff, which can be used to help people learn the Ubuntu desktop -- you send an email, you set your desktop background, and that's an accomplishment
[18:10] <aquarius> or as a reward -- you complete a level in a game, for example
[18:10] <aquarius> but there are also accomplishments which are to do with your activities on the wider internet
[18:10] <aquarius> So, you filed your first bug about Ubuntu, for example.
[18:11] <aquarius> Now, these need to be "verified" by another machine, something that isn't your computer, to prove you did it; that other machine checks with Launchpad that you've actually filed a bug and then says "yep, they did it".
[18:11] <aquarius> One way to implement this would be to run a web service -- you can make requests to it saying "I filed a bug: verify that, please"
[18:11] <aquarius> and the web service goes away and checks that you did and then says "yes, you did, and here's a token to prove it"
[18:12] <aquarius> But that's got all sorts of problems -- you have to be online to talk to the web; if the web service gets popular it might crash a lot (think of Twitter, here), and so on
[18:12] <aquarius> What you want is to work *asynchronously*.
[18:12] <aquarius> So, instead of directly contacting a web service, put the stuff you want to check in a folder, sync that folder with Ubuntu One, and then *share* that folder with a specific U1 user account
[18:12] <aquarius> That U1 user account is owned by a machine, and when your network comes back, your files will sync to U1, then they'll sync across to the machine's account
[18:12] <aquarius> The machine can then look at those files, verify them at its leisure, and then add the "yes they did it" token, and then that token will sync back to your machine
[18:13] <aquarius> So, you're working just like with a web service, but it *doesn't have to be real time*
[18:13] <aquarius> Which means that you don't need to worry about having super hardware running the web service, or really complicated load-balancing; you don't have to worry about the user being offline and queueing up requests; none of that
[18:13] <aquarius> And this is all built in to Ubuntu. Anyone can have an Ubuntu One account.
[18:14] <aquarius> So you can happily use this in your Ubuntu apps.
[18:14] <aquarius> So, for example, imagine being able to instantly, one-click-ly, publish a file from your application to a public URL and then tweet that URL.
[18:14] <aquarius> Instant get-this-out-there-ness from your apps.
[18:15] <aquarius> The screenshot tool Shutter, for example, can do this already; PrtSc to take a screenshot, then "Export > Ubuntu One".
[18:15] <aquarius> Your app could have a button to "store all my files in the cloud", or "publish all my files when I save them", or "automatically share files that are part of Project X with my boss".
[18:15] <aquarius> So a backup program, for example, could back up your files straight into Ubuntu One and not sync them to your local machines, and that's exactly what the excellent Deja Dup does
[18:17] <aquarius> Ubuntu's default backup system will back up your stuff straight to Ubuntu One, so you've got a safe offsite backup when you need it
[18:17] <aquarius> That was entirely built on the Ubuntu One APIs.
[18:17] <aquarius> The interesting thing here, of course, is that if you build your own web service to go alongside the app you're building, then you have to run that web service, keep it going, pay the bills for it, be a sysadmin.
[18:17] <aquarius> But if your app uses Ubuntu One, then it's using the user's *own storage*, not yours.
[18:17] <aquarius> So you get all the benefits of your app having a web service, and none of the downsides!
[18:17] <aquarius> And of course being able to save things in and out of the cloud means that you can get an Ubuntu One sync solution on other platforms.
[18:18] <aquarius> So you could work with your files from your mobile phone (we've already got Android and iOS clients for phones, but there are plenty of people with N9s or Windows Phone or Blackberry)
[18:18] <aquarius> Build a fuse or gvfs backend for Ubuntu or Fedora or SuSE or Arch Linux. Build a WebDAV server which works with Ubuntu One and mount your Ubuntu One storage as a remote folder on your Mac.
[18:18] <aquarius> And web apps can work with your cloud too, for files as well as data.
[18:18] <aquarius> Imagine, say, a torrent client, running on the web, which can download something like a movie or music from legittorrents.info and save it directly into your cloud storage.
[18:18] <aquarius> So you see an album you want on that torrent site (say, Ghosts I by Nine Inch Nails) and go tell this web torrent client about it (and you've signed in to that web torrent client with Ubuntu One)
[18:18] <aquarius> And the website then downloads that NIN album directly into your personal cloud -- which of course makes it available for streaming direct to your phone.
[18:19] <aquarius> But it's not just about your content for yourself; think about sharing.
[18:19] <aquarius> Ubuntu One lets you share a folder with people. This would be great for distribution.
[18:19] <aquarius> Imagine that you publish an online magazine.
[18:20] <ClassBot> There are 10 minutes remaining in the current session.
[18:20] <aquarius> So, you create a folder on your desktop, and put issues of the magazine in it.
[18:20] <aquarius> Then, you put a button on your website saying "Sign in with Ubuntu One to get our magazine".
[18:20] <aquarius> When someone signs in, your website connects to the Ubuntu One files API, with your private OAuth token, and adds that signed-in user to the list of people that your magazine folder is shared with.
[18:21] <aquarius> Then, whenever your magazine has a new issue, you just drop it into that folder on your desktop.
[18:21] <aquarius> (Or even upload it to Ubuntu One directly through the website.)
[18:21] <aquarius> All the subscribed people will get the new issue instantly, on all the machines they want it on, and in the cloud.
[18:21] <aquarius> You could distribute anything like this. Imagine a podcast, or chapters of a novel.
[18:22] <aquarius> (I keep imagining an online game which grows over time, but that's just me)
[18:22] <aquarius> It would also work for paid-for content; when someone pays, have your code share a folder with them, and put their paid-for stuff in that folder. That's all doable through the files API.
[18:22] <aquarius> We have the start on documentation for all the APIs at https://one.ubuntu.com/developer and we're working hard on updating that at the moment
[18:22] <aquarius> (we've been moving the documentation to use the Sphinx docs system under the covers, so some of the more recent things aren't yet documented, and I'm working on that)
[18:23] <aquarius> That about covers things for this talk; it's a quick tour through some of the things that Ubuntu One can do for your apps
[18:23] <aquarius> There's, of course, another thing that you can do; bring U1 to new platforms
[18:24] <aquarius> We've recently had people start building wrapper libraries for Ubuntu One files in loads of different environments... the most recent was .net
[18:24] <aquarius> so that the chap can also build apps on Windows which work with his files in Ubuntu One
[18:24] <aquarius> OK, that's the brief summary.
[18:25] <aquarius> If anyone has questions about this or about anything to do with Ubuntu One and app development, I'm happy to answer them
[18:25] <ClassBot> There are 5 minutes remaining in the current session.
[18:25] <aquarius> and I'm also talking about U1DB, our in-progress data sync thing, in an hour, so you may find that interesting too :)
[18:26] <aquarius> melvster says: QUESTION: this sounds absolutely awesome, you've alluded to the fact there will be a standards compliant REST API with global URLs exposed, 1. will it expose correct and flexible MIME types, 2. cross origin headers for interop with apps,  3. will there be fine grained access control
[18:26] <aquarius> 1. yes, there already is a REST API for your files in the cloud -- see https://one.ubuntu.com/developer/files/store_files/cloud/
[18:27] <aquarius> 1b. mime types for your files are what you set them to be, if you upload them through the APIs
[18:27] <aquarius> 2. CORS headers are an interesting one. I'd like to hear reasoning both for and against that. I've been thinking about it, and I'm leaning towards it being a good idea, but it's not done yet.
[18:28] <aquarius> 3. fine grained access control -- granting an access token which can access only one folder, or one file, or have read but not write access to your files -- is something that we plan to build, but we haven't yet.
[18:29] <ClassBot> burli asked: whats the status of u1db?
[18:29] <aquarius> You want to be here in an hour to see the u1db talk for that :)
[18:29] <aquarius> briefly, though, it's in progress, and you can start using it now in apps, and we'd love to hear from you about it :)
[18:30] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[18:31] <kirkland> welcome all!
[18:31] <kirkland> okay, this will be a totally live/interactive session
[18:31] <kirkland> please ssh guest@classroom.gazzang.net
[18:31] <kirkland> password is 'guest'
[18:31] <kirkland> you'll have read only access there
[18:31] <kirkland> where I'm going to demo some neat practices for pair programming and code review in the cloud!
[18:32] <kirkland> for the best experience, you may want to maximize your window
[18:32] <kirkland> or expand it to at least 180x45 characters
[18:32] <kirkland> you may want to use ctrl - to decrease font size (in gnome-terminal anyway)
[18:32] <kirkland> there's a clicker at the bottom, counting the number of people in the session
[18:32] <kirkland> i see "14#" at the bottom
[18:33] <kirkland> I've used byobu many, many times for these sessions
[18:33] <kirkland> but this is the *first* time I've used byobu + tmux ;-)
[18:33] <kirkland> we'll see how it goes!
[18:33] <kirkland> okay, now that we've got a good count of people in here, let's get started
[18:34] <kirkland> in that shared screen session, I'm running IRSSI, a command line irc client
[18:34] <kirkland> you can stay connected there (here?) and see what I'm typing
[18:34] <kirkland> no need to go back and forth between the terminal session and your chat client
[18:34] <kirkland> these are a handful of best practices my colleagues and I around Canonical, Ubuntu, and Gazzang have used over the last few years
[18:34] <kirkland> that have really made working with people around the world a lot easier
[18:34] <kirkland> chief among these (for me) has been shared GNU screen and now Tmux sessions, in EC2
[18:35] <kirkland> we're able to avoid firewalls and networking/routing issues by "meeting in the middle"
[18:35] <kirkland> one or more of us can easily ssh into an EC2 machine
[18:35] <kirkland> and these instances can cost as little as $0.02/hour
[18:35] <kirkland> now, this one, is a big one ;-)
[18:35] <kirkland> 32 cpus and 60GB of memory
[18:35] <kirkland> for a "whopping" $2.40/hour :-)
[18:36] <kirkland> anyway, this system is running Ubuntu 11.10, plus an updated version of byobu and tmux from ppa:byobu/ppa
[18:36] <kirkland> you all who are logged in as the 'guest' user
[18:36] <kirkland> are attached to a readonly GNU screen session
[18:36] <kirkland> which is wrapped around a tmux session
[18:36] <kirkland> which is being driven by my 'ubuntu' user
[18:37] <kirkland> for which I'm the only person with access
[18:37] <kirkland> that's just for display here
[18:37] <kirkland> but if you want to see how to set that up, see the Juju charm 'byobu-classroom'
[18:37] <kirkland> in most cases, when I'm doing this with my trusted colleagues
[18:37] <kirkland> we just share the same ubuntu account
[18:37] <kirkland> and both of us have read/write access
[18:38] <kirkland> I'm sure you can appreciate all the things that might go wrong if I gave 27 of you simultaneous read/write access ;-)
[18:38] <kirkland> mayhem :-D
[18:38] <kirkland> okay, so let's first feel our way around this session
[18:38] <kirkland> first, I'll press shift-F1 to display the help/keybindings
[18:38] <kirkland> here, i can see that byobu binds a bunch of actions to the F keys
[18:39] <kirkland> I'll quickly work my way through a couple of them
[18:39] <kirkland> F2 creates new windows, and F3/F4 will move right and left between them
[18:39] <kirkland> you won't be able to do that here in our shared session
[18:39] <kirkland> but you can do that on your local ubuntu computer by running 'byobu-tmux' on your command line
[18:39] <kirkland> so now i hit F2 3 times
[18:39] <kirkland> and have windows 0, 1, 2, 3
[18:40] <kirkland> and i can press F3 and F4 to move right and left among them
[18:40] <kirkland> i'm going to run the 'top' command in window 3
[18:40] <kirkland> and now i'm going to rename window 3 to say "top" by pressing F8
[18:40] <kirkland> and i'll rename window 0 to "irc" with F8 as well
[18:41] <kirkland> okay, so windows are one way of multitasking
[18:41] <kirkland> but splitting the screen is far more interesting!
[18:41] <kirkland> ctrl-F2 will split the screen vertically
[18:41] <kirkland> and shift-F2 will split the screen horizontally
[18:41] <kirkland> and I can keep doing that pretty much ad nauseam!
[18:41] <kirkland> note that splits are per-window
[18:41] <kirkland> so if I go over to window 3
[18:42] <kirkland> i won't see any splits there, until I create them
[18:42] <kirkland> so much space over on the left of window 3
[18:42] <kirkland> i'll create another split there :-)
[18:42] <kirkland> moving between windows is F3/F4
[18:42] <kirkland> moving between splits is Shift-Up/Down/Left/Right
[18:43] <kirkland> and you should see the split highlighted in purple
[18:43] <kirkland> that's your active split
[18:43] <kirkland> as well as some numbers that pop up and identify them
[18:43] <kirkland> you can jump directly to a number split if you want
[18:43] <kirkland> so let's start doing some work in one of these splits
[18:43] <kirkland> i have downloaded the source code of libreoffice
[18:43] <kirkland> as well as its build dependencies
[18:43] <kirkland> let's see it build on a 32 processor system :-)
[18:44] <kirkland> and we'll pretend that we're pair programmers working on some big build
[18:44] <kirkland> i really like colorized logs
[18:44] <kirkland> so I'm going to pipe the output of debuild through ccze
[18:44] <kirkland> and that'll take a while :-)
[18:45] <kirkland> lets say that I want to break that split out to a window of its own
[18:45] <kirkland> as I really don't need to see that build right now
[18:45] <kirkland> pressing Shift-F1, I'll get the menu again
[18:45] <kirkland> and I can see that Alt-F11 will expand a split to a full window
[18:46] <kirkland> so now we have the libreoffice build happening in window 4
[18:46] <kirkland> now, let's look at some source code
[18:46] <kirkland> i'm going to install a web application
[18:46] <kirkland> we're going to debug a problem
[18:46] <kirkland> tail some logs
[18:46] <kirkland> and debug it in real time
[18:47] <kirkland> I'm going to install pictor
[18:47] <kirkland> a web application for browsing pictures over a web interface
[18:47] <kirkland> something like picasa, I suppose
[18:47] <kirkland> you can point a browser at http://classroom.gazzang.net/pictor
[18:47] <kirkland> and you'll see that we need to seed it with some images
[18:48] <kirkland> i'm going to grab some packages
[18:48] <kirkland> from ubuntu that serve wallpapers
[18:48] <kirkland> now let's install some symlinks
[18:48] <kirkland> to our pictures
[18:49] <kirkland> and pretend that's what we want pictor to serve
[18:49] <kirkland> now let's tail our apache logs
[18:49] <kirkland> in another split
[18:50] <kirkland> I can see a few of you browsing that site now :-)
[18:50] <kirkland> let's tail the error log now
[18:50] <kirkland> I'm going to resize my splits now
[18:50] <kirkland> using Ctrl-Up/Down/Left/Right
[18:51] <kirkland> busy busy busy :-)
[18:51] <kirkland> now
[18:51] <kirkland> perhaps you've noticed our bug
[18:51] <kirkland> http://classroom.gazzang.net/pictor/?album=%2Fbackdrops&thumbs=1
[18:51] <kirkland> the "backdrops" album is empty
[18:51] <kirkland> I wonder why ....
[18:52] <kirkland> hmm, so the files in the backdrops directory are png's
[18:52] <kirkland> meanwhile, the "working" ones are .jpgs
[18:53] <kirkland> let's hack it into the code, then
[18:53] <kirkland> this is the point at which
[18:53] <kirkland> you and your team
[18:53] <kirkland> perhaps pair programming
[18:53] <kirkland> might dig through the code and find the bug
[18:54] <kirkland> i'm going to search this code for hardcoded "jpg" references
[18:54] <kirkland> aha!
[18:54] <kirkland> there's an is_image() function
[18:54] <kirkland> that looks for files ending in "jpg" and "jpeg"
[18:54] <kirkland> let's try adding ".png"
[18:54] <kirkland> now refresh
[18:54] <kirkland> lovely!
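The one-line fix just demonstrated can be sketched in a few lines of Python. This is purely illustrative: the real pictor source is not necessarily Python and its is_image() may be structured differently; the point is simply that ".png" gets added to the accepted extensions.

```python
# Hypothetical sketch of the is_image() fix described above.
# Assumption: the real pictor code may differ; this just shows
# ".png" being added to the list of accepted extensions.
def is_image(filename):
    # ".png" is the new addition; previously only jpg/jpeg matched,
    # which is why the backdrops album (all PNGs) showed up empty
    allowed = (".jpg", ".jpeg", ".png")
    return filename.lower().endswith(allowed)

print(is_image("wallpaper.png"))  # → True
print(is_image("notes.txt"))      # → False
```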
[18:54] <kirkland> of course, I just made that change directly on the /usr/share source code
[18:54] <kirkland> if we wanted to do this right, we should branch bzr and submit the change
[18:54] <kirkland> let's do that very quickly
[18:55] <kirkland> there's our 1-line change
[18:55] <kirkland> in our bzr cdiff
[18:56] <kirkland> and there we have it!
[18:56] <kirkland> I'll make sure to credit all of #ubuntu-classroom when I commit this fix later today ;-)
[18:56] <kirkland> let's go back and check our office build, out of curiosity
[18:57] <kirkland> still chugging along
[18:57] <kirkland> are you getting a feel for how this works?
[18:57] <kirkland> we use it often for education (like this!)
[18:57] <kirkland> where one person is trying to teach one or more how to do something
[18:58] <kirkland> but it's also phenomenally useful for debugging complex problems
[18:58] <kirkland> johnny-e> QUESTION: is your byobu using tmux instead of screen?
[18:58] <kirkland> johnny-e: well, it's a bit of both
[18:58] <kirkland> what you're SEEING is byobu + tmux
[18:58] <kirkland> the one line across the bottom
[18:58] <kirkland> and the elegant splits
[18:58] <kirkland> yes, that's tmux
[18:59] <kirkland> but the thing that's keeping all of you in readonly access is a GNU screen wrapper
[18:59] <kirkland> though that's entirely transparent to you
 QUESTION: so the double layer of tmux + screen is only because of the need for read-only in this session?
[18:59] <kirkland> ssweeny-lernid: yes, exactly
[18:59] <kirkland> ssweeny-lernid: when you do this with your friends/colleagues, it'll be trivial to setup
[18:59] <kirkland> (this configuration took me a few minutes)
[18:59] <kirkland> here's what you'll do ...
[18:59] <kirkland> get an Ubuntu system running somewhere
[19:00] <kirkland> the "cloud" is nice, because it removes the need for VPNs or fancy routing
[19:00] <kirkland> but you could certainly use a physical local system or a VM
[19:00] <kirkland> next, you could very conveniently use the ssh-import-id command
[19:00] <kirkland> this is also a very useful tool to have in your toolbelt when doing pair programming and code review in the cloud
[19:01] <kirkland> you can 'ssh-import-id SOME_LAUNCHPAD_USERNAME'
[19:01] <kirkland> and it will securely (over https) connect to Launchpad
[19:01] <kirkland> and retrieve that user's public SSH keys and insert them into the local authorized_keys file
[19:01] <kirkland> in this way, you can avoid needing to share passwords
[19:01] <kirkland>  i would have done that today for this session
[19:02] <kirkland> but I'm not sure how long it would have taken to import all 29 of you ;-)
[19:02] <kirkland> actually, just before we sign off, I'll import your keys
[19:02] <kirkland> and let you trash this system for a few seconds before I terminate it :-P
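The key-import flow just described can be sketched as follows. This is a hypothetical, simplified Python illustration, not the real ssh-import-id code: the Launchpad HTTPS fetch is stubbed out with a fake key so the example runs offline, and the real tool does considerably more validation.

```python
# Hypothetical sketch of the core of ssh-import-id: fetch a user's
# public SSH keys and append any that are new to authorized_keys.
# Assumption: the HTTPS fetch is stubbed so this runs offline;
# the real tool retrieves the keys securely from Launchpad.
import os
import tempfile

def fetch_launchpad_keys(username):
    # Stub standing in for an HTTPS request to Launchpad.
    return ["ssh-rsa AAAAB3Nza...EXAMPLEKEY %s@launchpad" % username]

def import_keys(username, authorized_keys):
    existing = set()
    if os.path.exists(authorized_keys):
        with open(authorized_keys) as f:
            existing = {line.strip() for line in f if line.strip()}
    added = 0
    with open(authorized_keys, "a") as f:
        for key in fetch_launchpad_keys(username):
            if key not in existing:   # never duplicate an entry
                f.write(key + "\n")
                added += 1
    return added

path = os.path.join(tempfile.mkdtemp(), "authorized_keys")
print(import_keys("kirkland", path))  # → 1 (key appended)
print(import_keys("kirkland", path))  # → 0 (already present)
```

Because the import is idempotent, running it repeatedly for the same user is harmless, which is what makes it convenient for granting colleagues access without sharing passwords.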
 QUESTION: do you have any problems nesting byobu? like running byobu on my desktop and on my server?
[19:03] <kirkland> well, yeah, kinda sorta
[19:03] <kirkland> nesting byobu can get complex
[19:03] <kirkland> when doing so, you'll need to know the tmux and/or screen key bindings
[19:03] <kirkland> and I'd recommend using two different escape sequences
[19:03] <kirkland> i use ctrl-a for my outer session
[19:03] <kirkland> and ctrl-b for my inner session
[19:04] <kirkland> otherwise, byobu doesn't know which session you're talking to :-)
 QUESTION: I was reading the gnu screen man pages, do you need to set up screen in a specific way to do this? I read something about giving the program root permission or something...
[19:04] <kirkland> sinisterstuf: yeah, here's the magic ...
[19:04] <kirkland> sinisterstuf: follow me over in the shared session
[19:04] <kirkland> you can grab the bzr branch from http://bazaar.launchpad.net/~charmers/charms/oneiric/byobu-classroom/trunk/
[19:05] <kirkland> see hooks/install
[19:05] <kirkland> [ -f /usr/share/byobu/profiles/classroom ] || echo "
[19:05] <kirkland> aclumask guest+r guest-w guest-x
[19:05] <kirkland> aclchg guest +r-w-x '#?'
[19:05] <kirkland> aclchg guest +x 'detach'
[19:05] <kirkland> multiuser on
[19:05] <kirkland> escape "^Bb"
[19:05] <kirkland> " > /usr/share/byobu/profiles/classroom
[19:05] <kirkland> chmod 644 /usr/share/byobu/profiles/classroom
[19:05] <kirkland> that's most of it
[19:05] <kirkland> that changes the acls of the guest user to read only
[19:05] <kirkland> also, I tweaked the ssh config
[19:06] <kirkland> so that the *only* command that the guest user can run on ssh in is to attach to the byobu session
[19:06] <kirkland> echo "
[19:06] <kirkland> PasswordAuthentication yes
[19:06] <kirkland> AllowTcpForwarding no
[19:06] <kirkland> Match User guest
[19:06] <kirkland>   ForceCommand exec screen -x ubuntu/byobu-classroom
[19:06] <kirkland> " >> /etc/ssh/sshd_config
 QUESTION: so do you typically nest byobu or just run it in the server?
[19:06] <kirkland> jsnapp: I almost never nest byobu
[19:06] <kirkland> but that's a good question
[19:06] <kirkland> around what I find as best practices for this...
[19:06] <kirkland> so I run gnome-terminal (or terminator)
[19:06] <kirkland> at full screen
[19:06] <kirkland> I use multiple tabs
[19:07] <kirkland> but each tab is an ssh session to another server
[19:07] <kirkland> in each tab, I run byobu
[19:07] <kirkland> which is nice because I can always attach and detach
[19:07] <kirkland> actually ...  that's a good exercise
[19:07] <kirkland> I invite you to detach from the shared session
[19:07] <kirkland> using F6
[19:07] <kirkland> and then reattach
[19:07] <kirkland> by ssh'ing back in
[19:07] <kirkland> ssh guest@classroom.gazzang.net
[19:08] <kirkland> you'll see that everything is *exactly* as you left it
[19:08] <kirkland> I've just done the same thing
[19:08] <kirkland> detached
[19:08] <kirkland> and then reattached
[19:08] <kirkland> this is a lifesaver when you have a crappy uplink connection :-)
[19:08] <kirkland> it would stink to have this office build die
[19:08] <kirkland> just because your ssh session had a hiccup
[19:09] <kirkland> now
[19:09] <kirkland> each time I need a new shell
[19:09] <kirkland> I can either create a new window (F2)
[19:09] <kirkland> or a new split (Ctrl-F2/Shift-F2)
[19:09] <kirkland> I tend to group similar shells with splits
[19:09] <kirkland> so if I'm working on libreoffice in one window
[19:09] <kirkland> i might have a handful of related splits
[19:09] <kirkland> maybe one building
[19:09] <kirkland> maybe one with code
[19:09] <kirkland> maybe two with code, side by side
[19:10] <kirkland> one for testing, etc.
[19:10] <kirkland> one tailing log files
[19:10] <kirkland> much like what we're looking at in window 0
[19:10] <kirkland> with 2 logs
[19:10] <kirkland> and one code
[19:10] <kirkland> oh, let's rearrange the windows
[19:10] <kirkland> let's say I want to move the access log left or right one split
[19:10] <kirkland> I use Ctrl-F3/F4
[19:11] <kirkland> I can do that with any of the splits
[19:11] <kirkland> also, there's a handful of predefined, useful arrangements
[19:11] <kirkland> pressing Shift-F8 will toggle through them
[19:11] <kirkland> here's equal vertical windows
[19:12] <kirkland> or equal horizontal windows
[19:12] <kirkland> here's one big one across the top
[19:12] <kirkland> and a bunch of verticals across the bottom
[19:12] <kirkland> or (probably my favorite)
[19:12] <kirkland> one vertical (where I look at my source code)
[19:12] <kirkland> and 3 more, where I tail my logs
[19:12] <kirkland> I'm going to move my source code over to the left split with Ctrl-F3
[19:13] <kirkland> ah, that's nice
[19:13] <kirkland> I can save a particular split arrangement
[19:13] <kirkland>     Ctrl-Shift-F8               Save the current split-pane layout
[19:13] <kirkland> and then restore it later
[19:13] <kirkland>     Ctrl-F8                     Restore a split-pane layout
[19:14] <kirkland> there's a menu of the predefined ones, and my saved one, 'favorite'
[19:14] <kirkland> oh, let's see tiled as well
[19:14] <kirkland> also nice
[19:14] <kirkland> it's lovely that *all* of this is happening over ONE single ssh connection
[19:14] <kirkland> we have over a dozen shells open
[19:14] <kirkland> doing some very busy things
[19:14] <kirkland> but just 1 ssh session to the server
[19:15] <kirkland> and it's all preserved across attaches and detaches
 QUESTION: Is it possible to do something similar using "ssh -X" to share a desktop in readonly mode when you're working in an IDE?
[19:15] <kirkland> kermit666: not that I know of, sorry :-(
 QUESTION: so you're running byobu on each server you connect to
[19:15] <kirkland> yes, always
[19:15] <kirkland> johnny-e: you can set that easily on a per-user basis with 'byobu-enable'
[19:16] <kirkland> johnny-e: or, you can set that on a system wide basis with 'dpkg-reconfigure byobu'
[19:16] <kirkland> HOWEVER
[19:16] <kirkland> the best way to do it, I believe
[19:16] <kirkland> is by adding a one-liner to your LOCAL ~/.bashrc file
[19:16] <kirkland> export LC_TERMTYPE=byobu
[19:16] <kirkland> put that in your local ~/.bashrc
[19:17] <kirkland> and any time you ssh to a system with a (recent) version of byobu
[19:17] <kirkland> this script, /etc/profile.d/Z97-byobu.sh
[19:17] <kirkland> will see that you've set your TERMTYPE to byobu and it will try to launch byobu by default at startup!
 QUESTION: what if i had an extra view which i would like to toggle on or off ?
[19:17] <kirkland> caotic: I don't understand your question, can you clarify please?
 QUESTION: how are you keeping other users screen size from affecting your byobu session? is it the read only access?
[19:18] <kirkland> jsnapp: oh, that's a great question
[19:18] <kirkland> a *very* hard one as well
[19:18] <kirkland> jsnapp: I have overridden the default byobu setting
[19:19] <kirkland> jsnapp: normally, byobu sets the tmux option aggressive-resize 'on'
[19:19] <kirkland> jsnapp: however, what that means is that everyone is limited by the smallest attached screen
[19:19] <kirkland> jsnapp: with 29 of you, one of you could have set your screen to 80x25 and limited all of us
[19:19] <kirkland> jsnapp: in some cases (most?) that might be the right answer
[19:19] <kirkland> jsnapp: but for this demo, it wasn't, sorry
[19:19] <kirkland> jsnapp: so I set that to 'off' for this session
 QUESTION: can you assign names to your custom split-pane layouts?
[19:20] <kirkland> johnny-e: yes, absolutely, just press ctrl-shift-F8
[19:20] <kirkland> johnny-e: and it'll prompt you to name it
[19:20] <ClassBot> There are 10 minutes remaining in the current session.
[19:20] <kirkland> johnny-e: in your ~/.byobu directory, you'll see a layouts dir
[19:20] <kirkland> johnny-e: I named my "favorite"
[19:20] <kirkland> johnny-e: and the contents of that file is a tmux split window layout:
[19:20] <kirkland> b57b,180x44,0,0{80x44,0,0,99x44,81,0[99x14,81,0,99x14,81,15,99x14,81,30]}
[19:21] <kirkland> (have fun interpreting that ;-)
[19:21] <kirkland> johnny-e: sorry, I did it earlier;  I encourage you to try it on your own byobu system later
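Taking up that invitation to interpret the layout string: the leading "b57b" is a checksum, and the rest is nested pane geometry in WxH,X,Y form, where {…} groups panes side by side and […] groups panes stacked vertically (you can verify this from the coordinates themselves). A quick sketch pulling out the geometries:

```python
# Quick sketch: pull the WxH geometries out of the tmux layout
# string shown above.  "b57b" is a checksum; 180x44 is the whole
# window, and the remaining entries are the individual panes.
import re

layout = "b57b,180x44,0,0{80x44,0,0,99x44,81,0[99x14,81,0,99x14,81,15,99x14,81,30]}"
sizes = re.findall(r"\d+x\d+", layout)
print(sizes)
# one 80x44 pane on the left, and three 99x14 panes stacked
# inside a 99x44 column on the right
```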
[19:21] <kirkland> oh, one more important topic!
[19:22] <kirkland> now, tmux and screen differ here significantly on a few major points
[19:22] <kirkland> i want to cover one of them here
[19:22] <kirkland> in tmux, all users will be connected to the same exact session by default
[19:22] <kirkland> and all users will see the exact same thing
[19:22] <kirkland> which is very much what "pair programming" is all about
[19:22] <kirkland> but sometimes, you might want to split off from your colleague
[19:22] <kirkland> and work on something else
[19:22] <kirkland> on a different view
[19:23] <kirkland> here, you would create a new "session"
[19:23] <kirkland> which you can do using ctrl-shift-F2
[19:23] <kirkland> so to review:
[19:23] <kirkland>   F2                            Create a new window
[19:23] <kirkland>     Shift-F2                    Create a horizontal split
[19:23] <kirkland>     Ctrl-F2                     Create a vertical split
[19:23] <kirkland>     Ctrl-Shift-F2               Create a new session
[19:23] <kirkland> and then to move between sessions
[19:23] <kirkland>     Alt-Up/Down                 Move focus among sessions
[19:24] <kirkland> unfortunately, I won't be able to demo that here
[19:24] <kirkland> as your screen is attached to my tmux session
[19:24] <kirkland> and you won't follow me to the other one
[19:24] <kirkland> but try it yourself!  it works
[19:24] <kirkland> I think we covered all of these, but to review:
[19:24] <kirkland>     Shift-F1                    Display this help
[19:24] <kirkland> and creating new things ...
[19:25] <kirkland>   F2                            Create a new window
[19:25] <kirkland>     Shift-F2                    Create a horizontal split
[19:25] <kirkland>     Ctrl-F2                     Create a vertical split
[19:25] <kirkland>     Ctrl-Shift-F2               Create a new session
[19:25] <kirkland> all movement is under F3 and F4
[19:25] <ClassBot> There are 5 minutes remaining in the current session.
[19:25] <kirkland>   F3/F4                         Move focus among windows
[19:25] <kirkland>     Shift-F3/F4                 Move focus among splits
[19:25] <kirkland>     Ctrl-F3/F4                  Move a split
[19:25] <kirkland>     Ctrl-Shift-F3/F4            Move a window
[19:25] <kirkland>     Alt-Up/Down                 Move focus among sessions
[19:25] <kirkland>     Shift-Left/Right/Up/Down    Move focus among splits
[19:25] <kirkland>     Ctrl-Shift-Left/Right       Move focus among windows
[19:25] <kirkland>     Ctrl-Left/Right/Up/Down     Resize a split
[19:25] <kirkland> so F2/F3/F4 sort of form your "home keys" for byobu
[19:25] <kirkland> we can briefly look at the refresh commands
[19:25] <kirkland>   F5                            Reload profile, refresh status
[19:25] <kirkland>     Shift-F5                    Toggle through status lines
[19:25] <kirkland>     Ctrl-F5                     Reconnect ssh/gpg/dbus sockets
[19:25] <kirkland>     Ctrl-Shift-F5               Change status bar's color randomly
[19:26] <kirkland> you should see a few status items in the bottom right, periodically refreshing
[19:26] <kirkland> by default, those refresh every 1 second
[19:26] <kirkland> i backed that off to 5 seconds, since there are so many of us here
[19:26] <kirkland> I just pressed Shift-F5
[19:26] <kirkland> and now I can see a different set of status items
[19:27] <kirkland> including my cost, hostname, ip address
[19:27] <kirkland> ctrl-shift-f5 can give your system some personality
[19:27] <kirkland> if you have a lot of byobu sessions, seeing them under a different color sometimes helps
[19:27] <kirkland>   F6                            Detach session and then logout
[19:27] <kirkland>     Shift-F6                    Detach session and do not logout
[19:27] <kirkland>     Ctrl-F6                     Kill split in focus
[19:28] <kirkland> perhaps you tried the F6 detach
[19:28] <kirkland> shift-F6 will detach from the session, but NOT log you out of the server
[19:28] <kirkland> and ctrl-f6 will kill a split
[19:28] <kirkland> (perhaps a runaway process)
[19:28] <kirkland> OH
[19:28] <kirkland> scrollback!
[19:28] <kirkland> let's play with scrollback
[19:28] <kirkland> which is under F7
[19:28] <kirkland> and alt-pageup/pagedown
[19:29] <kirkland> so now I'm scrolling back through one particular split
[19:29] <kirkland> anyway, my time is up
[19:29] <kirkland> I'll gladly answer questions over in #ubuntu-classroom-chat
[19:30] <kirkland> hopefully you see how you can use shared screen and tmux sessions and byobu in the cloud to do code review and pair programming ;-)
[19:30] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[19:44] <aquarius> Hi all, and welcome back. I'm still Stuart Langridge from Ubuntu One :)
[19:44] <aquarius> (thanks, akgraner :))
[19:44] <aquarius> I'm now going to talk about U1DB.
[19:45] <aquarius> In addition to syncing files and music and photos around, it's good to be able to sync data, too
[19:45] <aquarius> U1DB is our solution to that.
[19:45] <aquarius> It's still being worked on, and we made a preview release in late December.
[19:45] <aquarius> Basically, U1DB is for syncing data -- that is, something structured -- to every device you want.
[19:45] <aquarius> So, preferences or lists or data of any kind where you don't want to have to represent the data as separate files
[19:45] <aquarius> U1DB is an API.
[19:45] <aquarius> The reason for this is so it can be implemented in any language.
[19:45] <aquarius> So someone could build the U1DB API in Python, and then you could use that U1DB library in a Python Quickly app on Ubuntu.
[19:46] <aquarius> Someone else could build the U1DB API in JavaScript, and then you could use that U1DB library in a web app.
[19:46] <aquarius> At that point, you can build the Ubuntu app and the web app, and they can share data.
[19:46] <aquarius> Then you could build an Android app which uses U1DB -- again, this would be a standard Android app, in Java -- and it can also share and sync that data with the other apps in Python and JavaScript and whatever.
[19:46] <aquarius> So you've got the ability to build apps everywhere, for every platform, Ubuntu and the web and smartphones and desktops and netbooks, and have them all work with the same data: your data, and your users' data.
[19:46] <aquarius> At the moment, what we've built is called the "reference implementation": it's using Python and SQLite, and includes both a Python client (for use in Python apps), and a server.
[19:47] <aquarius> (by the way, you can ask questions in #ubuntu-classroom-chat; just type QUESTION: <your question> )
[19:47] <aquarius> Let's try a little bit of code.
[19:47] <aquarius> "bzr branch lp:u1db" to get the latest trunk code of u1db.
[19:47] <aquarius> We have early documentation online at http://people.canonical.com/~aquarius/u1db-docs
[19:47] <aquarius> If you take a look at http://people.canonical.com/~aquarius/u1db-docs/quickstart.html#starting-u1db that gives you an example of how to use u1db itself
[19:48] <aquarius> You'll see that U1DB stores "documents" -- JSON documents, that is
[19:48] <aquarius> A document can contain anything you want; there's no particular structure imposed on you
[19:48] <aquarius> So if you choose to store all your app's data in many separate documents, or group those documents together somehow, or require certain keys in them, that's fine
[19:48] <aquarius> So, creating a document is done with
[19:48] <aquarius> >>> content = json.dumps({"name": "Alan Hansen"}) # create a document
[19:48] <aquarius> >>> doc = db.create_doc(content)
[19:49] <aquarius> and that's saved that document into the database
[19:49] <aquarius> You use put_doc to overwrite an existing document:
[19:49] <aquarius> >>> doc.content = json.dumps({"name": "Alan Hansen", "position": "defence"}) # update the document's content
[19:49] <aquarius> >>> rev = db.put_doc(doc)
[19:49] <aquarius> (and that returns a revision number)
[19:49] <aquarius> U1DB is a revisioned database, meaning that it keeps track of the revision number of a document
[19:49] <aquarius> So it knows when things have changed
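The revisioning idea can be sketched in a few lines of plain Python. To be clear, this toy class is illustrative only and is not the real U1DB API: real U1DB revisions are "machineid:counter" strings (as the u1db-client output later in the session shows), not plain integers.

```python
# Toy sketch of a revisioned document store, to illustrate why
# U1DB can tell when a document has changed.  Assumption: this is
# NOT the real U1DB API; real revisions are "machineid:counter"
# strings rather than integers.
import json

class MiniRevDB:
    def __init__(self):
        self._docs = {}   # doc_id -> (rev, json_content)
        self._next = 0

    def create_doc(self, content):
        self._next += 1
        doc_id = "D-%d" % self._next
        self._docs[doc_id] = (1, content)   # new docs start at rev 1
        return doc_id

    def put_doc(self, doc_id, content):
        rev, _ = self._docs[doc_id]
        self._docs[doc_id] = (rev + 1, content)  # any change bumps the rev
        return rev + 1

db = MiniRevDB()
doc_id = db.create_doc(json.dumps({"name": "Alan Hansen"}))
new_rev = db.put_doc(doc_id, json.dumps({"name": "Alan Hansen",
                                         "position": "defence"}))
print(new_rev)  # → 2: the store knows the document changed
```

Because every write bumps the revision, two databases can compare revisions during a sync and transfer only the documents that actually changed.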
[19:50] <aquarius> Syncing two U1DBs together is manually commanded by your app, whenever it wants
[19:50] <aquarius> Because U1DB comes with a server, you can test this out for yourself
[19:50] <aquarius> U1DB can also be controlled from the command line, which makes testing this stuff easy
[19:50] <ClassBot> There are 10 minutes remaining in the current session.
[19:50] <aquarius> In one terminal, do:
[19:50] <aquarius> $ u1db-client init-db first.u1db # this creates a database
[19:50] <aquarius> $ echo '{"name": "Stuart Langridge"}' | u1db-client create first.u1db # create a document in it
[19:50] <aquarius> (This will print the ID of the new document, and its revision: something like this)
[19:50] <aquarius> id: D-cf8a96bea58b4b5ab2ce1ab9c1bfa053
[19:50] <aquarius> rev: f6657904254d474d9a333585928726df:1
[19:50] <aquarius> You can retrieve that document back again:
[19:50] <aquarius> $ u1db-client get first.u1db D-cf8a96bea58b4b5ab2ce1ab9c1bfa053 # fetch it
[19:50] <aquarius> {"name": "Stuart Langridge"}
[19:51] <aquarius> rev: f6657904254d474d9a333585928726df:1
[19:51] <aquarius> Now, let's run the server in this folder:
[19:51] <aquarius> $ u1db-serve --verbose
[19:51] <aquarius> listening on: 127.0.0.1:43632
[19:51] <aquarius> Now, you have a U1DB server running on port 43632
[19:51] <aquarius> So, in another terminal:
[19:51] <aquarius> $ u1db-client init-db second.u1db # create a second database
[19:51] <aquarius> $ u1db-client get second.u1db D-cf8a96bea58b4b5ab2ce1ab9c1bfa053 # try and fetch a doc
[19:51] <aquarius> And you'll see that that says: Document not found (id: D-cf8a96bea58b4b5ab2ce1ab9c1bfa053)
[19:51] <aquarius> because that document doesn't exist in second.u1db
[19:51] <aquarius> Now, let's sync second with first:
[19:52] <aquarius> $ u1db-client sync second.u1db http://127.0.0.1:43632/first.u1db
[19:52] <aquarius> And now, the document exists in second:
[19:52] <aquarius> $ u1db-client get second.u1db D-cf8a96bea58b4b5ab2ce1ab9c1bfa053
[19:52] <aquarius> {"name": "Stuart Langridge"}
[19:52] <aquarius> rev: f6657904254d474d9a333585928726df:1
[19:52] <aquarius> So syncing has worked!
[19:52] <aquarius> Syncing is over http -- the server is http and provides a nice RESTful API
[19:52] <aquarius> We already have implementations of U1DB under way on other platforms and languages
[19:53] <aquarius> The U1DB team are building a C + SQLite implementation
[19:53] <aquarius> dobey is working on a Vala implementation for Ubuntu (lp:shardbridge)
[19:53] <aquarius> and I'm working on a JavaScript implementation so that I can write web apps and mobile web apps which sync data with U1DB
[19:53] <aquarius> The documentation at http://people.canonical.com/~aquarius/u1db-docs should tell you all you need to know to get started
[19:53] <aquarius> We hang out in #u1db on freenode and on the mailing list at https://launchpad.net/~u1db-discuss
[19:53] <aquarius> So we'd be very interested in helping you use u1db in your apps.
[19:54] <aquarius> That's a very quick tour around U1DB, what it's going to be like, and how you can get started
[19:54] <aquarius> sorry for the rush -- I had to fit this talk into 15 minutes :)
[19:55] <aquarius> so, if anyone has any questions about U1DB, now's the time to ask them; I've got five minutes before I hand over to kelemengabor
[19:55] <ClassBot> There are 5 minutes remaining in the current session.
[19:56] <aquarius> (er, james_w :))
 QUESTION: what's the u1db equivalent of couchdb views?
[19:57] <aquarius> U1DB indexes. :)
[19:57] <aquarius> Create an index with create_index
[19:57] <aquarius> and then you can query that index
[19:58] <aquarius> http://people.canonical.com/~aquarius/u1db-docs/quickstart.html#starting-u1db has an example
[19:58] <aquarius> http://people.canonical.com/~aquarius/u1db-docs/high-level-api.html#document-storage-and-retrieval has more examples :)
[19:59] <aquarius> I've got one minute, so thank you all for listening!
[20:00] <aquarius> Chase us down on #u1db if you have further questions
[20:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[20:01] <james_w> hello everyone, my name is James Westby, and I'm a developer of pkgme
[20:01] <james_w> later you'll be able to play along, if you want to prepare for that then you can run "sudo apt-get install devscripts debhelper python-virtualenv bzr" and then "bzr branch lp:pkgme"
[20:01] <james_w> that should get you set up with the packages you need and get you a copy of the code
[20:02] <james_w> with that out of the way, let's go back to the beginning
[20:02] <james_w> pkgme is a tool to help you package applications for Ubuntu
[20:03] <james_w> after you've written an application you need to package it up so that you can get it into a PPA and then into the Software Center
[20:04] <james_w> it may be that you know how to do that already, and if so great, but not everyone knows how to do it already
[20:04] <james_w> if you don't know how to do it then you can learn yourself, find someone that does know, or try and use a tool to do it for you
[20:05] <james_w> in Debian/Ubuntu there are three classes of tool that help you to do this
[20:05] <james_w> the first class is tools like checkinstall, which are nicer than installing software without a package, but aren't suited to producing packages to distribute to other people
[20:05] <james_w> the second class is things like dh-make
[20:06] <james_w> this gives you a skeleton to work with, but usually you need to know a lot about packaging to get something useful
[20:06] <james_w> so it's mainly used by Debian/Ubuntu developers who want a skeleton to start from, or by those who are reading packaging guides
[20:06] <james_w> but it doesn't really help that last group
[20:07] <james_w> the third class is specialised tools like dh-make-perl
[20:07] <james_w> these deal with one type of package (in that case perl libraries)
[20:07] <james_w> they do it well, but you have to know that they exist
[20:08] <james_w> and also they all work differently, and every time someone wants to write a new one then they have to start almost from scratch, and implement the same things such as writing out the package files
[20:08] <james_w> so where does pkgme fit into this?
[20:09] <james_w> pkgme sits somewhere between the second and third class
[20:09] <james_w> it gets the benefits of the third class, in that it produces good packages with little knowledge required on the user's part
[20:10] <james_w> but it has the advantage of a single user interface, it re-uses code across the different types of packages, and there's only one place to look to know whether your package type is supported
[20:10] <james_w> obviously though, it needs to know how to handle each type of package
[20:11] <james_w> here a "type of package" refers to a set of packages that share a convention about how they should be packaged
[20:11] <james_w> this means you have things like "python with a setup.py"
[20:11] <james_w> perl with a Makefile.PL
[20:11] <james_w> java with an ant build.xml
[20:12] <james_w> and it extends further as well; we recently added a backend for PDFs, to support the books and magazines that are now being sold in the Ubuntu Software Center
[20:12] <james_w> each of these types of package is supported through a pkgme "backend"
[20:13] <james_w> the core of pkgme takes care of all the intricacies of debian package files etc., and the backend supplies the knowledge of that type of package
[20:13] <james_w> so when you run pkgme it first asks each backend to decide whether it can deal with the type of package in question
[20:13] <james_w> each backend returns a score
[20:13] <james_w> they can return 0 if they don't know what it is
[20:14] <james_w> or they can return 10 if they know how to provide some information
[20:14] <james_w> the reason it is a score is so that you can have more specialised backends
[20:14] <james_w> for instance, a ruby backend may see a ruby package and report "10"
[20:15] <james_w> but then a ruby-on-rails backend could see that it was really a RoR project, not a plain ruby one, and take over by reporting 20 for the score
[20:15] <james_w> this isn't limited to two levels either
[20:15] <james_w> there may be a particular subclass of RoRs projects that could have its own backend
[20:16] <james_w> once the backends have all reported their scores then the one with the highest score is used for the next part
[20:16] <james_w> at this point pkgme starts asking the backend for some information about the project
[20:16] <james_w> "what is the name of the project?"
[20:16] <james_w> "what is the version?"
[20:16] <james_w> "what dependencies does it have?"
[20:16] <james_w> "what's the description?"
[20:16] <james_w> etc.
[20:17] <james_w> once pkgme has all this information it puts it into its templates and writes out the packaging
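The score-then-ask flow described above can be sketched roughly as follows. This is an illustrative sketch only: the class and method names here are hypothetical, not the real pkgme backend interface (real backends are separate programs, not Python classes).

```python
import os

class SetupPyBackend:
    """Hypothetical backend for Python projects built with setup.py."""
    name = "python"

    def want(self, path):
        # Score 10 if the project looks like a setuptools project,
        # 0 if this backend doesn't recognise it.
        return 10 if os.path.exists(os.path.join(path, "setup.py")) else 0

    def get_info(self, path):
        # A real backend would parse setup.py; these are placeholders.
        return {"name": "myproject", "version": "0.1",
                "description": "An example project."}

def choose_backend(backends, path):
    # Ask every backend for a score and use the highest non-zero one;
    # a more specialised backend wins simply by returning a higher score.
    best = max(backends, key=lambda b: b.want(path))
    if best.want(path) == 0:
        raise RuntimeError("no backend knows this project type")
    return best
```

Once a backend is chosen, pkgme interrogates it (name, version, dependencies, description) and fills its packaging templates with the answers.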
[20:17] <james_w> so, let's see how all this fits together
[20:17] <james_w> if you have downloaded the branch of pkgme you will see that it has no ./debian/ directory
[20:18] <james_w> this means that it itself is not packaged
[20:18] <james_w> so let's try using pkgme on itself
[20:18] <james_w> cd in to the pkgme directory that bzr gave you
[20:18] <james_w> and run:
[20:18] <james_w> virtualenv --no-site-packages virtualenv
[20:18] <james_w> source ./virtualenv/bin/activate
[20:18] <james_w> python setup.py develop
[20:18] <james_w> pkgme
[20:19] <james_w> it will think for a few seconds, and then will have written out the debian directory
[20:19] <james_w> it will then try and build the source package
[20:19] <james_w> that may or may not work for you (given that it's supposed to be used by the person that built the app)
[20:19] <james_w> that's it, we didn't have to tell pkgme anything about what we were doing, it just figured everything out, and made some sensible decisions for us
[20:20] <ClassBot> There are 10 minutes remaining in the current session.
[20:20] <james_w> it's likely always going to be possible that an experienced packager will find some better way of packaging the app, but that's ok
[20:20] <james_w> pkgme will produce something workable, which is the immediate goal
[20:21] <james_w> there's one thing I've glossed over so far
[20:21] <james_w> what if pkgme doesn't know how to deal with your particular application?
[20:21] <james_w> there are two possibilities here
[20:21] <james_w> the first is that your app looks like one of the types that pkgme knows about, but differs somehow
[20:21] <james_w> in these cases pkgme will give you an error, or the package won't work
[20:22] <james_w> if it doesn't turn out to be a pkgme bug, then unfortunately it's a sign that you will have to learn something about packaging, or find someone who knows it
[20:22] <james_w> we will try and accommodate different ways of doing things, but we can't have an automated tool know how to deal with everything
[20:23] <james_w> so when you are writing an app, stick to the conventions of whatever type of project you are writing
[20:23] <james_w> the second case is that pkgme doesn't know how to handle your type of project
[20:23] <james_w> in those cases you would need to write a pkgme backend
[20:23] <james_w> or at least work together with us to write it
[20:24] <james_w> this doesn't need any packaging knowledge (though it doesn't hurt)
[20:24] <james_w> as I said before there are two things the backend needs to be able to do
[20:24] <james_w> the first is decide whether a particular project is something it can handle
[20:24] <james_w> this is usually looking for a particular file (e.g. setup.py) and maybe some other checks
[20:24] <james_w> the second thing is answer some questions about the project
[20:24] <james_w> what the name of it is
[20:25] <james_w> what the dependencies are
[20:25] <ClassBot> There are 5 minutes remaining in the current session.
[20:25] <james_w> all things that don't really need any packaging knowledge
[20:26] <james_w> so if you are in this situation find us on launchpad (https://launchpad.net/~pkgme-developers) or on IRC (#pkgme)
[20:26] <james_w> and we'll help you write a backend (it takes about an hour to write something useful in my experience)
[20:27] <james_w> then you and everyone who writes the same types of apps can benefit from automatic packaging
[20:27] <james_w> any questions?
[20:27] <james_w> in the meantime I'll write a little about what we at Canonical are building based on pkgme
[20:28] <james_w> when you submit a commercial application to https://developer.ubuntu.com/dev then Canonical will help you package it
[20:28] <james_w> in order to speed that process up and allow more applications to be available on Ubuntu we are putting pkgme behind that form so that an attempt will be made to package your app automatically
[20:29] <james_w> for certain types of application at least
[20:29] <james_w> in addition, we're going to be trying to help libre applications too, by working with the ARB to have pkgme help developers and them to package applications
[20:29] <ClassBot> tomalan asked: can pkgme also guess dependencies (e.g. by examining PKG_CHECK_MODULE in configure.ac in autotools)?
[20:30] <james_w> that would be how it did it for autotools, yes
[20:30] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[20:30] <kelemengabor> Hi everyone!
[20:31] <kelemengabor> Welcome to this UDW talk about internationalization (i18n) bugs. I'm Gabor Kelemen, long time Hungarian translator and member of the Ubuntu Translation Coordinators team, with the task of managing i18n bugs.
[20:31] <kelemengabor> During this talk, I'll show you the most common reasons for untranslated strings appearing on the Ubuntu UI, and how to make them translatable. But first things first, let's start with the basics.
[20:31] <kelemengabor> * i18n is a fairly complicated process, where many things have to be in place for the whole process to work.
[20:31] <kelemengabor> * These things are documented in the gettext manual: http://www.gnu.org/software/gettext/manual/gettext.html
[20:32] <kelemengabor> * Most of the infrastructure documented here works for the software Ubuntu packages, but there are always unpolished edges.
[20:32] <kelemengabor> * Most of the problems I'll talk about are *not* Ubuntu-specific, they affect every user of that software, independently of the distribution.
[20:32] <kelemengabor> In theory, you should not see a single English string while running Ubuntu using your native language.
[20:32] <kelemengabor> However, this is not always the case: even if the translators of your language did their best, you can still run into untranslated text.
[20:32] <kelemengabor> This is what we call an i18n bug. But what can you do with it?
[20:32] <kelemengabor> Let's suppose you run Ubuntu Precise, and you see an English string. First thing to check: is it just (yet) untranslated, or not even translatable?
[20:32] <kelemengabor> To do this, you need to click Help -> Translate this application, or if this does not help, look up the template of the application manually on https://translations.launchpad.net/ubuntu/precise/+lang/LL?batch=300
[20:32] <kelemengabor> where LL is your language code, like de for German or hu for Hungarian
[20:33] <kelemengabor> Once at the template, search for the given string. If you find it untranslated, then translate it!
[20:33] <kelemengabor> If it was translated recently, like a week ago or so, then maybe it is not exported yet into the language pack - there is always a few days delay.
[20:33] <kelemengabor> If it has been there for a longer time, or if it is not there at all, then you have just found an i18n bug, congratulations :).
[20:33] <kelemengabor> Run ubuntu-bug packagename if you know the name of the application (Recommended!), or go directly to https://bugs.launchpad.net/ubuntu-translations/ and report it.
[20:33] <kelemengabor> In either case, please include a screenshot!
[20:33] <kelemengabor> So, we have now a bug to solve. Or do we?
[20:34] <kelemengabor> If you don't see anything outstanding, but you would like to help solve problems - great! Go to https://bugs.launchpad.net/ubuntu-translations/ and pick a bug.
[20:34] <kelemengabor> You can also go to https://bugs.launchpad.net/ubuntu/ and search for i18n bugs there - keywords like "translat" or the English name of your language give plenty of results.
[20:34] <kelemengabor> like 3-5 times more than we have on the ubuntu-translations project
[20:34] <kelemengabor> In an ideal world, all of these would be marked as affecting the ubuntu-translations project, but... you might want to
[20:35] <kelemengabor> mark them as affecting it too, so they can get a little more attention
[20:35] <kelemengabor> Once you've picked a bug, you can branch the code of the corresponding package and start looking for the cause of the problem.
[20:35] <kelemengabor> Let's suppose that you already know how to do the branching :).
[20:35] <kelemengabor> Now that you have the code, what's the first thing to check?
[20:36] <kelemengabor> It is the presence of the string, and grep is your friend here. Packages build upon each other, so maybe what you see untranslated comes from another package.
[20:36] <kelemengabor> If you cannot find the offending string, then you should search in the dependencies of the package. apt-cache can help with this.
[20:36] <kelemengabor> I mean in the sources of the dependencies :)
[20:36] <kelemengabor> Okay, so you have confirmed that the string is present in the source. It may or may not be present in the template (.pot file), let's see first what went wrong if it is not present in there.
[20:37] <kelemengabor> The most common problem is that it is simply not marked for translation.
[20:37] <kelemengabor> Example bug:
[20:37] <kelemengabor> https://bugzilla.gnome.org/show_bug.cgi?id=666773 and its patch: https://bugzilla.gnome.org/attachment.cgi?id=204150
[20:37] <kelemengabor> Overview:
[20:37] <kelemengabor> For strings to be extracted into pot files, they need to be marked for translation with the gettext() function, or its shortcut macro, _().
[20:37] <kelemengabor> In the attached patch, we see that this call was forgotten, the solution is pretty simple:
[20:38] <kelemengabor> -		similar_artists_item = gtk_menu_item_new_with_mnemonic (("Listen to _Similar Artists Radio"));
[20:38] <kelemengabor> +		similar_artists_item = gtk_menu_item_new_with_mnemonic (_("Listen to _Similar Artists Radio"));
[20:38] <kelemengabor> This applies for C, C++, Vala, and Python sources, other languages / source file types use other calls or methods to mark strings for translation.
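For Python sources, the equivalent of the C fix above looks like this (a minimal sketch using the standard-library gettext module; the string is reused from the example bug):

```python
from gettext import gettext as _

# Without _(), the literal is invisible to xgettext and never
# reaches the .pot template:
untranslatable = "Listen to Similar Artists Radio"

# Wrapped in _(), the string is both extracted into the template
# by xgettext and looked up in the translation catalog at runtime:
translatable = _("Listen to Similar Artists Radio")
```

With no catalog installed, _() simply returns the original string, so the fix is always safe to apply.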
[20:38] <kelemengabor> (This patch also contains a solution for another type of problem, so don't close it yet.)
[20:39] <kelemengabor> Another common source of untranslated strings is the po/POTFILES.in file.
[20:39] <kelemengabor> Example bug:
[20:39] <kelemengabor> https://bugs.launchpad.net/bugs/923762
[20:39] <kelemengabor> Overview:
[20:39] <kelemengabor> This contains a list of the file names that contain strings marked for translation. The list is maintained manually by the maintainers, who often forget to update it when they add new source files.
[20:39] <kelemengabor> Luckily, we have a way to detect such files, and this is the intltool-update -m command.
[20:39] <kelemengabor> This generates the list of missing files, which you most probably want to include in the POTFILES.in file.
[20:39] <kelemengabor> Sometimes, there are files which really should not be exposed to translators, like sources of automated tests, or .c files generated from .vala sources.
[20:39] <kelemengabor> Such files should go to the POTFILES.skip file. The attached branch illustrates this too.
[20:40] <kelemengabor> intltool-update -m has its limitations too - for example, it currently cannot detect translatable strings in .vala files, so you are on your own with those.
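The check that intltool-update -m performs can be approximated in a few lines of Python; this is a rough stand-alone sketch, not what intltool actually runs, and it only looks for the simplest `_("...")` marker:

```python
import os
import re

def find_missing(srcdir):
    """List source files with _() calls that appear in neither
    po/POTFILES.in nor po/POTFILES.skip."""
    def read_list(name):
        path = os.path.join(srcdir, "po", name)
        if not os.path.exists(path):
            return set()
        with open(path) as f:
            # Strip intltool type hints like "[type: gettext/glade]".
            return {re.sub(r"^\[.*?\]", "", line).strip()
                    for line in f
                    if line.strip() and not line.startswith("#")}

    listed = read_list("POTFILES.in") | read_list("POTFILES.skip")
    missing = []
    for root, _dirs, files in os.walk(srcdir):
        for fn in files:
            if not fn.endswith((".c", ".py", ".vala")):
                continue
            rel = os.path.relpath(os.path.join(root, fn), srcdir)
            with open(os.path.join(root, fn)) as f:
                if re.search(r'\b_\("', f.read()) and rel not in listed:
                    missing.append(rel)
    return sorted(missing)
```

Anything this turns up belongs either in POTFILES.in (if translators should see it) or POTFILES.skip (if not).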
[20:40] <kelemengabor> While we are at the POTFILES.in file and intltool-update, I'd like to point out another limitation of the latter. This is file type detection, a prominent source of errors with Glade UI files.
[20:40] <kelemengabor> But we need to take a step back to understand this.
[20:40] <kelemengabor> Example bug:
[20:40] <kelemengabor> https://bugs.launchpad.net/oneconf/+bug/828897
[20:40] <kelemengabor> Overview:
[20:40] <kelemengabor> If you have read the gettext manual (okay-okay... you are here because no one does that, including me :))
[20:40] <kelemengabor> that I linked at the beginning, you might have noticed that it speaks about using xgettext
[20:41] <kelemengabor> for extracting the translatable strings from source code into the .pot file.
[20:41] <kelemengabor> This happens in Ubuntu too, so what is intltool anyway?
[20:41] <kelemengabor> intltool is a set of scripts, written to make the localization of formats not supported by xgettext possible.
[20:41] <kelemengabor> Such are .desktop files, .xml, Glade UI files, and GConf schemas, among others.
[20:41] <kelemengabor> intltool can detect such files based on their extension, but sometimes files have extensions different from the default.
[20:41] <kelemengabor> Glade files used to have the .glade extension, but since the latest format change they have .ui (sometimes .xml) extensions.
[20:41] <kelemengabor> So we need to explicitly tell intltool the type of such files. Maintainers forget/don't know this frequently:
[20:42] <kelemengabor> -./data/ui/oneconfinventorydialog.ui
[20:42] <kelemengabor> +[type: gettext/glade]./data/ui/oneconfinventorydialog.ui
[20:42] <kelemengabor> Simple enough, huh?
[20:42] <kelemengabor> Let's dig deeper into the gettext system then.
[20:42] <kelemengabor> You might remember that I said earlier:
[20:42] <kelemengabor> For strings to be extracted into pot files, they need to be marked for translation with the gettext() function, or its shortcut macro, _().
[20:42] <kelemengabor> The world is not this simple, unfortunately.
[20:42] <kelemengabor> There are situations in C/Python/others where you cannot call a function, and gettext() is a function.
[20:42] <kelemengabor> Constant arrays are one such case; their strings should be marked for translation with the N_() macro.
[20:42] <kelemengabor> This is really a no-op, it serves only xgettext, so that it can extract the string into the .pot file.
[20:43] <kelemengabor> But for the program to show the actual translation, you need to call the gettext() function with the array items.
[20:43] <kelemengabor> This is what maintainers often forget and this can be seen in the last part of https://bugzilla.gnome.org/attachment.cgi?id=204150
[20:43] <kelemengabor> All in all, the _() macro marks the string for translation and does the translation at runtime, while the N_() macro does only the marking.
[20:43] <kelemengabor> There are other gettext functions and macros, but there is no time to cover those
[20:44] <kelemengabor> If you made it to this point, you can be fairly sure that the string will make it into the pot file: check it by running intltool-update -p
[20:44] <kelemengabor> But this does not mean that the string will show up translated on the UI. We are just in the middle of the class :).
[20:44] <kelemengabor> When you grepped the source for the untranslated string, you might have found it in all the po files, translated into 20 languages, yet not showing up in any of those languages.
[20:44] <kelemengabor> What can be wrong at this point?
[20:44] <kelemengabor> Example bug:
[20:45] <kelemengabor> https://bugs.launchpad.net/ubuntu-translations/+bug/845473
[20:45] <kelemengabor> Overview
[20:45] <kelemengabor> Glade files need a little special attention to set up their i18n in the source code.
[20:45] <kelemengabor> Usually, people do something like this - this applies not only to C, but to other programming languages too:
[20:45] <kelemengabor> GtkBuilder * builder = gtk_builder_new ();
[20:45] <kelemengabor> gtk_builder_add_from_file (builder, "something.ui", &error);
[20:45] <kelemengabor> This is not enough if you want to show your items localized.
[20:45] <kelemengabor> As you can see it in the branch attached to the bug, a gtk_builder_set_translation_domain() call is necessary *after* you create the GtkBuilder object, and *before* you add the items of the .ui file.
[20:46] <kelemengabor> See also: http://developer.gnome.org/gtk3/stable/GtkBuilder.html#gtk-builder-set-translation-domain
[20:46] <kelemengabor> Side note: http://developer.gnome.org/gtk3/stable/GtkActionGroup.html#gtk-action-group-set-translation-domain documents a similar need for GtkActionGroups.
[20:46] <kelemengabor> Maintainers sometimes forget to do this. No problem, we are here to correct such mistakes :).
[20:46] <kelemengabor> Other sources of errors are libraries.
[20:46] <kelemengabor> Example bug:
[20:46] <kelemengabor> https://bugs.launchpad.net/libubuntuone/+bug/902655
[20:46] <kelemengabor> Overview:
[20:46] <kelemengabor> Libraries can have translatable strings, and the translation of these should be looked up from the translation file of the library.
[20:46] <kelemengabor> Pretty straightforward, isn't it?
[20:47] <kelemengabor> When i18n support is initialized in the software, the translation file (also called "domain") to look up strings from is defined.
[20:47] <kelemengabor> But this is never the same as the library's domain!
[20:47] <kelemengabor> So how can we still see strings from both the program and the library?
[20:47] <kelemengabor> Libraries (should) use dgettext() instead of gettext(), which explicitly specifies the translation domain, unlike gettext(), which just uses the default.
[20:47] <kelemengabor> glib, on which most Ubuntu GUI software builds, has two convenience headers, which define the _() and some other macros not mentioned here.
[20:48] <kelemengabor> One is gi18n.h, which defines _() as gettext(), and the other is gi18n-lib.h, which defines _() as dgettext()
[20:48] <kelemengabor> Sometimes, the authors of libraries confuse these two, as you can see in the example bug.
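In Python terms, the difference between the two glib headers boils down to this (a sketch; "libubuntuone" is only an illustrative domain name, echoing the example bug):

```python
import gettext

MESSAGE = "Sign in to Ubuntu One"

# gi18n.h-style: _() is gettext(), which searches whatever *default*
# domain the application selected with textdomain(). In a library this
# is wrong, because the application's domain doesn't contain the
# library's strings.
app_lookup = gettext.gettext(MESSAGE)

# gi18n-lib.h-style: _() is dgettext(), which names the library's own
# translation domain explicitly, so the lookup hits the right catalog.
lib_lookup = gettext.dgettext("libubuntuone", MESSAGE)
```

When no catalog for the named domain is installed, both calls fall back to returning the original string, which is why the bug is invisible in English.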
[20:48] <kelemengabor> Another problem might be that initialization of the i18n support is sometimes incomplete.
[20:48] <kelemengabor> Example bugs:
[20:48] <kelemengabor> for C: https://bugzilla.gnome.org/show_bug.cgi?id=666516
[20:49] <kelemengabor> for Python: https://bugs.launchpad.net/ubuntu/+source/system-config-printer/+bug/783967
[20:49] <kelemengabor> for Vala: https://bugs.launchpad.net/ubuntu/+source/gwibber/+bug/837530
[20:49] <kelemengabor> Overview:
[20:49] <kelemengabor> In the main source file of each standalone executable, you need a few lines of code to make the i18n work.
[20:49] <kelemengabor> These are documented here: http://www.gnu.org/software/gettext/manual/gettext.html#Triggering
[20:49] <kelemengabor> If the maintainer forgets some of these, xgettext will still extract the translatable strings - yet the gettext() calls won't know where to look for translations,
[20:49] <kelemengabor> or which language they should show the strings in.
[20:49] <kelemengabor> When this happens, usually whole windows and command line outputs show up untranslated, so this kind of problem is easy to spot.
[20:50] <kelemengabor> GTK+ only adds to the confusion, because it always calls setlocale() in the gtk_init*() functions, so you can get used to not calling it, even when you write a program that does not use GTK+
[20:50] <kelemengabor> This is what happened in the first bug.
[20:50] <ClassBot> There are 10 minutes remaining in the current session.
[20:50] <kelemengabor> The other two bugs are not complicated, they just show the complete lack of the crucial few lines in Python and Vala - I leave them here for future reference.
[20:50] <kelemengabor> Okay, I think that's enough possible upstream bugs for today. These were the common ones, but there are many others not mentioned.
[20:50] <kelemengabor> These are upstream ones, because they happen in the code anyone can package for her favourite distribution.
[20:50] <kelemengabor> Now I'd like to talk a little about Ubuntu-specific problems that can happen during the packaging process.
[20:51] <kelemengabor> Fortunately, there are not many of these.
[20:51] <kelemengabor> Example bug:
[20:51] <kelemengabor> https://bugs.launchpad.net/ubuntu-translations/+bug/910268
[20:51] <kelemengabor> https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/876866
[20:51] <kelemengabor> Overview:
[20:51] <kelemengabor> If you want a translation template to appear in LP Translations, you need to generate it during the build process.
[20:51] <kelemengabor> For this, usually dh_translations is used, a little helper script on top of intltool that generates the translation template and prepares the package for use with language packs.
[20:51] <kelemengabor> However, sometimes it is not in use, which is a bug.
[20:51] <kelemengabor> So you need to make sure it is called, either as an argument of dh, like in the first bug
[20:51] <kelemengabor> or as a standalone call in the rules file at the end of the build, like in the second.
[20:51] <kelemengabor> Including the gnome.mk cdbs rule is also okay, because that runs dh_translations too
[20:52] <kelemengabor> This part is no rocket science :)
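For the dh-based case mentioned first, the wiring can look like this minimal debian/rules sketch (dh 7+ style; the exact addon wiring may vary by Ubuntu release):

```make
#!/usr/bin/make -f

# Run the whole debhelper sequence with the translations addon,
# which calls dh_translations at the right point in the build:
%:
	dh $@ --with translations
```

With a hand-rolled rules file, a standalone dh_translations call at the end of the build achieves the same thing.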
[20:52] <kelemengabor> Another possible Ubuntu-specific problem can be untranslated strings in patches.
[20:52] <kelemengabor> Example bug:
[20:52] <kelemengabor> https://bugs.launchpad.net/ubuntu-translations/+bug/883495
[20:52] <kelemengabor> Overview:
[20:52] <kelemengabor> Some Ubuntu patches add new strings, but their authors sometimes make the same mistakes as upstream authors.
[20:52] <kelemengabor> Getting those bugs fixed is a little different, because you need to patch the patch.
[20:53] <kelemengabor> But that's all, the possible mistakes are the same as above.
[20:53] <kelemengabor> Recommended reading for this is https://wiki.ubuntu.com/PackagingGuide/Complete#Patch_Systems
[20:53] <kelemengabor> Most of the time, you only need to use edit-patch, but this is covered better in the guide, so now I just recommend reading it.
[20:53] <kelemengabor> My experience is that it may sound scary at first, but it isn't really!
[20:54] <kelemengabor> Okay, we are almost there!
[20:54] <kelemengabor> Last thing to talk about is submitting the patch. Where should you go with it?
[20:54] <kelemengabor> Now, you have a branch of the Ubuntu package tree, with a fixed bug.
[20:54] <kelemengabor> I recommended branching the Ubuntu tree because you can instantly rebuild your patched package and test it, which is a Good Thing.
[20:54] <kelemengabor> Unless it is an Ubuntu-specific bug, where you can go straight ahead with the "Commit - bzr push - Link a related branch - Propose for merging" dance –
[20:54] <kelemengabor> which was hopefully covered in other classes this week :) – you also need to do the following:
[20:55] <kelemengabor> - Get the upstream source, whether it comes from Gnome git, an LP project or anywhere else.
[20:55] <kelemengabor> - Create a patch against your Ubuntu-tree
[20:55] <ClassBot> There are 5 minutes remaining in the current session.
[20:55] <kelemengabor> - Apply it on the upstream tree (let's assume  it applies cleanly :))
[20:55] <kelemengabor> - Submit it into the respective bug tracker of the project.
[20:55] <kelemengabor> If you are lucky, and you chose a project whose upstream is on LP, you can just mark that as also affected, link the branch, submit a merge proposal, and you are done!
[20:55] <kelemengabor> Thanks for your attention! If there are further questions or you would like to help, you can find me on #ubuntu-translators !
[20:55] <kelemengabor> unbelievable, I did it :)
[20:55] <kelemengabor> Questions?
[21:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[21:00] <warp10> Hi all!
[21:00] <warp10> Who's here to learn how to hunt down bugs? Raise your hands in #ubuntu-classroom-chat, guys!
[21:01] <warp10> Ok, great!
[21:01] <warp10> Introductions first: I am Andrea Colangelo, Ubuntu Developer since 2008, with a particular focus on QA, and proudly member of a few teams of the rocking Italian Loco Team too.
[21:02] <warp10> In the next ~30 minutes we will have a glance at how to fix small bugs in Ubuntu.
[21:02] <warp10> If you followed the previous sessions, you already know how to set up your development box and the tools we use every day, right?
[21:03] <warp10> In this session we will pick a few interesting bugs, already fixed and closed, and we will see how they have been solved.
[21:03] <warp10> This will cover both technical and "social" aspects. The latter are extremely important, since they impact the way you relate to your fellow Ubuntu developers and upstreams.
[21:03] <warp10> One of our very special upstreams is Debian, and very often bugfixing is worth being done there rather than here in Ubuntu.
[21:04] <warp10> If you followed Laney and tumbleweed yesterday, they held great sessions about working {in,with} debian. I recommend reading them if you didn't already
[21:04] <warp10> So, let's get started!
[21:04] <warp10> Our first bug will be: https://bugs.launchpad.net/ubuntu/hardy/+source/abiword/+bug/194443
[21:05] <warp10> Although it's quite old right now, it's a nice example of the good old debdiff-way of doing things.
[21:05] <warp10> Looking at archive rebuild test results, I noticed abiword did FTBFS. Do you know what this acronym means?
[21:06] <warp10> It stands for "Fails To Build From Source". It's pretty common in Debian/Ubuntu development: it just means that a source package doesn't compile correctly and doesn't build a binary package. A pretty bad situation, as you can imagine.
[21:07] <warp10> To investigate the bug I carefully checked the build log, which is still available here: https://lists.ubuntu.com/archives/ubuntu-autotest/2008-February/018645.html
[21:07] <warp10> As you see, it fails with a sad error: "E: Package libgoffice-0-5-dev has no installation candidate". Nevertheless, the builder itself gives us a hint, saying that libgoffice-0-6-dev replaces it.
[21:07] <warp10> What happened is that a new release of libgoffice superseded the older package. A pretty common situation, which typically leads to a bad, but easily fixable, FTBFS.
[21:08] <warp10> The patch is really simple. I downloaded the source package with `apt-get source abiword`. This command grabs the three files of the source package from the archive, unpacks the original tarball and applies the diff file containing the debian/ directory. I guess you already had a chance to learn about this in a previous session.
[21:09] <warp10> Although we are moving towards using bzr in the Ubuntu Distributed Development way-of-life, the good old methods (which are still used in Debian!) are still worth knowing, so questions are welcome if something is unclear.
[21:09] <warp10> The file containing the dependencies needed to build the package is debian/control, so I modified the build-dep name there.
[21:09] <warp10> I also changed the changelog to bump the package version number and drop a line about what I did.
[21:10] <warp10> !Q
[21:11] <ClassBot> kanliot asked: I don't know the location of the debian directory, or where the apt-get source files get put
[21:11] <warp10> kanliot: the debian/ directory is in the root of the source tree. And the source package is taken from the archive itself
[21:11] <warp10> Then I simply rebuilt the source package with debuild -S and compared the two source packages (the original one and the fixed one) with debdiff to extract the patch I attached to the bug report: https://launchpadlibrarian.net/12158067/abiword_2.4.6-3ubuntu3.debdiff
[21:12] <warp10> Do you know debdiff? It's a sort of wrapper around the diff utility specifically targeted to debian packages. It produces patches that you can apply to a source tree with patch. Please check `man debdiff` for more info.
[21:12] <warp10> At the time I fixed this bug I wasn't a MOTU yet, so I created the debdiff, attached it to the bug report, and a couple of days later a sponsor uploaded it into the archive. Of course, I build-tested my fixed package with pbuilder before submitting the debdiff: you absolutely don't want to propose a patch without thorough testing, do you?
[21:12] <warp10> Debdiff sponsored, abiword built, bug fixed! Easy, isn't it?
[21:13] <warp10> And please, feel free to ask question in #ubuntu-classroom-chat , just prefix them with "question:"
[21:13] <warp10> Anyway, today we tend to prefer the new Ubuntu Distributed Development paradigm. As you know, it uses bzr branches rather than downloading source packages from archive, giving us a bunch of advantages compared to the good old Debian way.
[21:14] <warp10> barry had a great session yesterday about this topic, showing you all the (few) commands and the workflow you need to know to propose a bugfix.
[21:14] <warp10> Let's see another bitesize example, using UDD this time. Please, head your browser at maximum warp speed toward https://bugs.launchpad.net/ubuntu/+source/update-manager/+bug/918302
[21:14] <warp10> "canceled" or "cancelled", that is the question! (cit.) We had a sort of philological discussion here! Looks like both are fine, but the discussion lead us to say that we want to use "canceled" consistently across the whole program.
[21:15] <warp10> You could fix this bug with `apt-get source update-manager` and then follow the workflow we saw in the previous bug, but Daniel Polehn decided to do Ubuntu development on steroids, using bzr.
[21:15] <warp10> Downloading the source branch is very simple: `bzr branch ubuntu:update-manager`. This command will download the branch ready for development, with the debian/ directory included and all patches already applied.
[21:16] <warp10> The inconsistency is in DistUpgrade/DistUpgradeView.py, so we patch that file and then add the usual entry in debian/changelog. You can see Daniel's work in the branch he proposed for merging here: https://code.launchpad.net/~dpolehn-gmail/update-manager/fix-918302
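[editor's note] For reference, a debian/changelog entry for a fix like this follows a fixed format: a version/distribution header, an indented bullet (ideally with the LP bug number so Launchpad closes the bug on upload), and a trailer line. The version, author name, and date below are illustrative, not Daniel's actual entry:

```
update-manager (1:0.156.7) precise; urgency=low

  * DistUpgrade/DistUpgradeView.py: use "canceled" consistently
    (LP: #918302)

 -- Jane Developer <jane@example.com>  Thu, 02 Feb 2012 21:16:00 +0000
```

`dch -i` from the devscripts package generates this skeleton for you.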
[21:16] <warp10> We are almost done! We just need to commit the patch with the usual `bzr commit`, then upload our new branch and open the merge proposal.
[21:17] <warp10> This can be easily done with bzr lp-propose, which will guide you through the process within your browser. More information about uploading and sponsoring of merge proposals here: http://developer.ubuntu.com/packaging/html/udd-working.html
[21:17] <warp10> Of course, don't forget to test your work! `bzr builddeb` will help you generate both the source package and the binary packages.
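[editor's note] To get a feel for the cycle above without bzr or Launchpad access, here is the same branch/patch/commit rhythm sketched with git (a stand-in, plainly not the real UDD tooling; the file contents and author identity are invented, and the real commands are `bzr branch ubuntu:update-manager` and `bzr commit` as described above):

```shell
# Stand-in for the UDD cycle using git, since bzr and Launchpad access
# are not assumed here. File contents and author identity are made up.
git init -q udd-demo
printf 'msg = "Download cancelled"\n' > udd-demo/DistUpgradeView.py  # simplified stand-in file
git -C udd-demo add .
git -C udd-demo -c user.name=demo -c user.email=demo@example.com \
    commit -qm "import source"
sed -i 's/cancelled/canceled/' udd-demo/DistUpgradeView.py           # the actual spelling fix
git -C udd-demo -c user.name=demo -c user.email=demo@example.com \
    commit -qam "Use 'canceled' consistently (LP: #918302)"
git -C udd-demo log --oneline                                        # two commits: import + fix
```

The commit message carries the "(LP: #918302)" marker for the same reason the changelog entry does: the tooling links the change to the bug it fixes.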
[21:17] <warp10> Any question so far?
[21:18] <warp10> Next bug will allow us to talk about a very important topic: working with upstream.
[21:18] <warp10> As you all know, Ubuntu is built upon Debian, and very often we import both the packages and bugs from there :)
[21:18] <warp10> Therefore, it's often a good thing to send our patches back to Debian. There are a lot of good reasons to do this, some of which were discussed by Daniel in the first session of UDW.
[21:19] <warp10> Let's see an example of good cooperation with Debian: https://bugs.launchpad.net/ubuntu/+source/xemacs21/+bug/4883
[21:19] <ClassBot> PaoloRotolo asked: How can I find FTBFS bugs on Launchpad? Is there a list?
[21:20] <warp10> PaoloRotolo: very good question
[21:20] <warp10> PaoloRotolo: UbuntuWire has plenty of resources about QA and FTBFS
[21:20] <ClassBot> There are 10 minutes remaining in the current session.
[21:20] <warp10> PaoloRotolo: point your browser to http://qa.ubuntuwire.com/ and especially to http://people.ubuntuwire.org/~wgrant/rebuild-ftbfs-test/
[21:20] <warp10> so, back to the bug above
[21:20] <warp10> We have a problem with xemacs21 info pages here, and Debian has the very same issue. This is a very ancient bug, but never mind: we are more interested in the "spirit" of this bugfix than in its technical aspects.
[21:21] <warp10> Scroll through the discussion in this bug report and go to comment #9. Ralph Janke proposed a debdiff there (generated in a very similar way to what we saw in the first bug). After attaching the patch, Ralph replied to the Debian bug too (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=330253), informing the Debian maintainer that a patch was available in Ubuntu.
[21:21] <warp10> Actually, the debdiff got improved over the following days and eventually reached an optimal version. That debdiff was used by the Debian maintainer to upload a fixed package to Debian. Later the package was synced from Debian, and the fix arrived in Ubuntu as well.
[21:21] <warp10> Don't you think this is a great way of working with bugs? This way we avoid adding a useless delta from Debian, give something back to Debian, improve our relationship with our upstream, let other Debian derivatives take advantage, and even more.
[21:22] <warp10> Actually, we can go even further and suggest patches to the upstreams of our immediate upstream, Debian, like we did in this bug: https://bugs.launchpad.net/ubuntu/+source/gnome-power-manager/+bug/397248
[21:22] <warp10> The problem here involves GNOME, so it's worth sending a patch to GNOME itself, rather than sending it to Debian only.
[21:23] <ClassBot> kelemengabor asked: what should I do if I want to hack on a particular package, but its bzr branch is outdated? like, Transmission in Precise.
[21:23] <warp10> kelemengabor: Just stick to the good old apt-get source and grab it straight from the archive
[21:23] <warp10> So, back to the bug. It is a little bit nasty and involves some advanced knowledge of gdb, but don't worry too much about that: the Internet is full of resources about debugging with gdb.
[21:24] <warp10> As you can see from the debdiff at comment #14, the patch is quite small, but it took a bit of work to get there and to understand what was going wrong. Don't worry, you absolutely don't need to be a gdb master to be a great MOTU.
[21:24] <warp10> What you need instead is good communication skills: Scott Howard reported the bug to the GNOME bug tracker (https://bugzilla.gnome.org/show_bug.cgi?id=588259) and added his findings too, greatly helping the upstream developer write a patch.
[21:24] <warp10> This way, the patch got integrated by GNOME and in a few days flowed downstream to Debian and Ubuntu. That's great!
[21:25] <warp10> So, all in all, what do we learn from all of this? A lot of things:
[21:25] <ClassBot> There are 5 minutes remaining in the current session.
[21:25] <warp10> 1- fixing bugs in Ubuntu is easy and makes a lot of people happy! :)
[21:25] <warp10> 2- it's not a matter of being a great programmer; rather, you need good communication and detective skills
[21:26] <warp10> 3- you are not alone! Your fellow Ubuntu developers and sponsors are there to help you, teach you, and give great hints to improve your work. Don't hesitate to ask in #ubuntu-motu or ubuntu-motu@ if you are in trouble!
[21:26] <warp10> 4- upstreams matter. Don't add useless deltas from Debian; rather, cooperate with Debian developers to get the best of both worlds.
[21:27] <warp10> For more information about Bug triaging, please refer to: https://wiki.ubuntu.com/Bugs
[21:27] <warp10> Another interesting wiki page about bugfixing is: https://wiki.ubuntu.com/Bugs/HowToFix
[21:28] <warp10> And of course, the main entry point for everything about Ubuntu Development: https://wiki.ubuntu.com/UbuntuDevelopment
[21:28] <warp10> Ok, guys, this ends our journey through the bugfixing world
[21:28] <ClassBot> pawel_st_ asked: kelemengabor realized that a package was outdated in its bzr branch. Why is that? Do you always have to check whether apt-get source provides a newer version?
[21:29] <warp10> pawel_st_: bzr will warn you if the source branch is outdated
[21:29] <warp10> pawel_st_: so, no need to apt-get source just to check it
[21:29] <warp10> Ok, I hope you're now eager to fix bugs, guys!
[21:30] <warp10> And now, everybody stay here to listen to cprofitt, who will tell us great things about the life cycle of bug reports in Ubuntu.
[21:30]  * warp10 says "Ciao a tutti!" ("Bye, everyone!")
[21:30] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html following the conclusion of the session.
[21:30]  * cprofitt says hello
[21:31] <cprofitt> Welcome to Problem Lifecycle in Ubuntu.
[21:31] <cprofitt> Let me introduce myself. I am Charles Profitt; I have been an Ubuntu Member since January of 2009 and am involved in the LoCo community.
[21:31] <cprofitt> This session is a result of a UDS-O session I led on improving community involvement with bugs. That session focused on end users, not the developer or QA groups.
[21:31] <cprofitt> In this session I will try to focus on the developer and QA parts of the Ubuntu community. I encourage questions to be asked at any time.
[21:32] <cprofitt> To ask a question, you need to be in #ubuntu-classroom-chat and ask your question in the following format:
[21:32] <cprofitt> QUESTION: <your question here>
[21:32] <cprofitt> note: If you do not begin the line with QUESTION:, ClassBot will not recognize it, and your question will most likely not get answered.
[21:32] <cprofitt> the previous session was on fighting bugs, but this session will focus on 'problems'
[21:32] <cprofitt> The first thing to remember is that a problem and a bug are not the same. While bugs do cause problems, they are not the sole cause.
[21:32] <cprofitt> The next important part is to remember that we have three distinct groups in our community.
[21:33] <cprofitt> 1 - Users
[21:33] <cprofitt> 2 - Technical Users - Systems Administrators or More Advanced Users
[21:33] <cprofitt> 3 - Developers - app developers, packagers, patchers, etc
[21:33] <cprofitt> the users group is and will continue to be the largest group
[21:34] <cprofitt> when Ubuntu started that group was very technical, but as the ease of use has increased, the group has increasingly become composed of less technical users
[21:34] <cprofitt> this is a good thing!!
[21:34] <cprofitt> The final general piece is that bugs are best resolved in development releases and not in stable releases. Once a release is stable there is a much more detailed and stringent process that has to be followed for an update to be released (SRU - Stable Release Update). This is due to the requirement to not cause a cascade of 'bugs'.
[21:34] <cprofitt> This creates a bit of a problem because most, if not all, of the people that fall in the 'users' group will not be running the development release. In fact, many of the technical users will only run the development release on spare equipment.
[21:34] <cprofitt> As our community grows the percentage of Ubuntu users will largely be in the users group and with the developers ending up as the smallest group. To sustain growth it is essential for the entire community to understand the Problem Life Cycle in brief.
[21:35] <cprofitt> Why do I call it the Problem Life Cycle?
[21:35] <cprofitt> Because all bugs start with a problem. A problem might be resolved by a configuration being corrected or it could truly be a bug.
[21:35] <cprofitt> The resolution of a problem starts prior to a bug report; bug reporting is for when a problem can be identified as being caused by a bug.
[21:35] <cprofitt> Here is a link to the diagram about the Ubuntu Problem Cycle.
[21:35] <cprofitt> http://ftbeowulf.files.wordpress.com/2011/11/ubuntu-problem2.png
[21:36] <cprofitt> the diagram was just translated this week as well...
[21:36] <cprofitt> and I would appreciate anyone willing to translate it into their language
[21:36] <cprofitt> The Ubuntu Community has several resources for users seeking help with a problem; they are:
[21:36] <cprofitt> Ubuntu Forums - ubuntuforums.org
[21:36] <cprofitt> Ask Ubuntu - askubuntu.com
[21:36] <cprofitt> IRC - multiple channels on freenode
[21:36] <cprofitt> Ubuntu Wiki - help.ubuntu.com/community (community) or help.ubuntu.com (official)
[21:37] <cprofitt> Local Community Teams - as my friend Randal says -- boots on the ground
[21:37] <cprofitt> You see this in the first set of boxes in the diagram. If a problem is solved here, it is due to a misconfiguration or a misunderstanding of what a piece of software does. I do not consider a workaround a solution.
[21:37] <cprofitt> if you are a developer, it helps to be familiar with these areas
[21:37] <cprofitt> it might help expose items that are not bugs, but that are causing users difficulty
[21:37] <cprofitt> for application developers looking to have their applications grow in popularity this can be crucial
[21:38] <cprofitt> If a problem in a stable release is not solved, a user can either wait for the next release or embark on a process that, in the past, has usually ended in frustration for them. That frustration stems from a misperception of the lifecycle.
[21:38] <cprofitt> this misunderstanding is important...
[21:39] <cprofitt> and we cannot, as a community, respond with phrases like RTFM
[21:39] <cprofitt> regardless of stable or development release, the most important part of bug resolution is the triage process. Reports must be complete and accurate.
[21:39] <cprofitt> A great resource for reporting bugs can be found here:
[21:39] <cprofitt> https://help.ubuntu.com/community/ReportingBugs
[21:40] <cprofitt> Realize that some bugs are not crashes, but unexpected behavior. As a very simple example imagine a calculator that gave a result of 5 for 2+2. It would not crash, but it would still have a bug that results in unexpected, and inaccurate, results.
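[editor's note] That calculator example is easy to make concrete: a "wrong answer" bug exits successfully and crashes nothing, so only checking the result reveals it. The function and its bug below are invented for the demo.

```shell
# A wrong-answer bug, not a crash: add() exits successfully but the
# result is wrong. The function and its bug are invented for the demo.
add() { echo $(( $1 + $1 )); }          # bug: uses $1 twice instead of $1 + $2
result=$(add 2 3)
echo "add 2 3 -> $result (expected 5)"  # prints 4: no error, no crash, still a bug
```

The script's exit status is 0 throughout, which is exactly why this class of bug slips past users who equate "bug" with "crash".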
[21:40] <cprofitt> For crashes, application developers should strive to write error handling code that can provide the end user with meaningful data that can be reported back to the developer.
[21:41] <cprofitt> As a person who supports proprietary software in my day job, I am very used to errors that tell me nothing more than what I already know --
[21:41] <cprofitt> like -- your app crashed
[21:41] <cprofitt> try to add information that you, as a developer, will need to find the bug
[21:41] <cprofitt> not always easy; I know
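[editor's note] One hedged sketch of that advice, for shell-based tools (bash-specific; the script name and the failing command are invented): an ERR trap can report which command failed and where, instead of leaving the user with a bare "your app crashed".

```shell
# Hypothetical bash sketch: an ERR trap reports the failing command and
# line instead of dying silently. Script and failing path are made up.
cat > report-demo.sh <<'EOF'
#!/bin/bash
trap 'echo "ERROR: \"$BASH_COMMAND\" failed (exit $?) at line $LINENO" >&2' ERR
ls /nonexistent-path-for-demo     # fails and triggers the trap
echo "continuing after reporting the failure"
EOF
bash report-demo.sh 2> errors.log
cat errors.log                    # the trap's message, plus ls's own error
```

The message names the exact command, its exit code, and a line number, which is the kind of detail a triager or developer can actually act on.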
[21:42] <cprofitt> On the chart app developers would fall in the 'upstream' category. You should ensure that you have a process for bugs to be filed against your application.
[21:42] <cprofitt> App Developers also have to understand that Ubuntu will not usually push your 'patches' in to a stable release.
[21:42] <cprofitt> If you are involved in bug triage you really need to strive to be friendly and helpful to the user reporting the bug if you wish to keep them engaged. I understand that some reports are a 'waste of time', but take those as an opportunity to help the person reporting the bug learn how to complete better reports.
[21:42] <cprofitt> We also have to understand that waiting for the next release is not a 'bad' thing. Many users are coming from the Microsoft world and are used to new releases being three to five years apart. Remind yourself, and the users, that Ubuntu will put out six releases in a three year period of time. A potential six month wait is a small price to pay for the stability of the release.
[21:43] <cprofitt> so to summarize so far
[21:43] <cprofitt> Users will experience a problem (may or may not be a bug)
[21:43] <cprofitt> they will use any of the several community support resources to seek a solution
[21:43] <cprofitt> the solution could be a configuration change or a workaround
[21:44] <cprofitt> after that if their problem is not resolved they may choose to submit a bug report
[21:45] <cprofitt> if that bug report is against a stable release it will have to go through a stringent process before having an SRU update
[21:45] <cprofitt> if it is in a development release the likelihood of getting it fixed is much higher
[21:45] <cprofitt> and the payback for the community that much greater
[21:46] <cprofitt> this process is captured at a high level here:
[21:46] <cprofitt> http://ftbeowulf.files.wordpress.com/2011/11/ubuntu-problem2.png
[21:46] <cprofitt> any questions?
[21:48] <cprofitt> I understand this process has been very abstract...
[21:49] <cprofitt> more of a high level over-view
[21:49] <cprofitt> one thing to keep in mind is some of the potential end points for bugs
[21:50] <cprofitt> when a developer marks a bug 'will not fix' it might be a case of 'can not fix'
[21:50] <ClassBot> There are 10 minutes remaining in the current session.
[21:50] <cprofitt> consider the case of Xubuntu, which depends on its upstream
[21:51] <cprofitt> there may be times when the upstream has decided a specific version is no longer supported
[21:51] <cprofitt> but Ubuntu promises support for an LTS beyond that point
[21:51] <ClassBot> pawel_st_ asked: One of the common complaints about Ubuntu bug tracking and fixing is that there are bugs that span multiple releases and don't seem to be taken care of properly (sorry, can't give you an example off the top of my head). What would you say to that critique?
[21:52] <cprofitt> "properly" is potentially a result of not understanding the lifecycle
[21:52] <cprofitt> once the bug is in a stable release it may not get fixed if there is the potential of it causing stability issues
[21:52] <cprofitt> in other cases, like the LTS
[21:53] <cprofitt> upstreams may not be supporting the version of the product included
[21:53] <cprofitt> the other issue is that even with +1 or +2 bugs, the fix might depend on an upstream developer
[21:54] <cprofitt> this is one area triagers can help
[21:54] <cprofitt> by learning how to report bugs upstream properly
[21:54] <cprofitt> or guiding the reporting end user how to report it upstream
[21:55] <cprofitt> I agree completely that the process can be frustrating when you report a bug and it does not get fixed
[21:55] <ClassBot> There are 5 minutes remaining in the current session.
[21:55] <cprofitt> I have reported bugs for four years and only twice had updates released for my reports
[21:56] <cprofitt> I think the key is really helping bug reporters have proper expectations as to how things get fixed
[21:56] <cprofitt> bugs that need an SRU will rarely get fixed
[21:56] <cprofitt> bugs in +1 or +2 that are upstream will only get attention if they go upstream properly
[21:57] <cprofitt> My hope is that we can build a process of ensuring that our entire community has an overview of the process so that they have a reasonable set of expectations
[21:58] <cprofitt> to avoid the 'rage' I see in bug reports on occasion
[21:58] <cprofitt> and to avoid triager and developer burnout
[21:58] <ClassBot> kanliot asked: what is an SRU
[21:58] <cprofitt> an SRU is a Stable Release Update
[21:59] <cprofitt> when a part of the stable release gets a patch, it is called an SRU
[21:59] <cprofitt> thank you to everyone who attended
[21:59] <cprofitt> I hope that this session can spark community discussion about this so we can move forward and manage our community's growth
[22:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/02/02/%23ubuntu-classroom.html